How-To Tutorials - Game Development


Unity 3D Game Development: Don't Be a Clock Blocker

Packt
29 Sep 2010
9 min read
Unity 3D Game Development by Example Beginner's Guide
A seat-of-your-pants manual for building fun, groovy little games quickly:

- Build fun games using the free Unity 3D game engine even if you've never coded before
- Learn how to "skin" projects to make totally different games from the same file – more games, less effort!
- Deploy your games to the Internet so that your friends and family can play them
- Packed with ideas, inspiration, and advice for your own game design and development
- Stay engaged with fresh, fun writing that keeps you awake as you learn

(For more resources on Unity 3D, see here.)

We've taken a baby game like Memory and made it slightly cooler by changing the straight-up match mechanism and adding a twist: matching disembodied robot parts to their bodies. Robot Repair is a tiny bit more interesting and more challenging thanks to this simple modification. There are lots of ways we could make the game even more difficult: we could quadruple the number of robots, crank the game up to a 20x20 card grid, or rig Unity up to some peripheral device that issues a low-grade electrical shock to the player every time he doesn't find a match. NOW who's making a baby game?

These ideas could take a lot of time, though, and the Return-On-Investment (ROI) we see from these features may not be worth the effort. One cheap, effective way of amping up the game experience is to add a clock.

Apply pressure

What if the player only has x seconds to find all the matches in the Robot Repair game? Or, what if in our keep-up game, the player has to bounce the ball without dropping it until the timer runs out in order to advance to the next level?
In this article let's:

- Program a text-based countdown clock to add a little pressure to our games
- Modify the clock to make it graphical, with an ever-shrinking horizontal bar
- Layer in some new code and graphics to create a pie chart-style clock

That's three different countdown clocks, all running from the same initial code, all ready to be put to work in whatever Unity games you dream up. Roll up your sleeves—it's time to start coding!

Time for action – prepare the clock script

Open your Robot Repair game project and make sure you're in the game Scene. We'll create an empty GameObject and glue some code to it.

1. Go to GameObject | Create Empty.
2. Rename the empty Game Object Clock.
3. Create a new JavaScript and name it clockScript.
4. Drag-and-drop the clockScript onto the Clock Game Object.

No problem! We know the drill by now—we've got a Game Object ready to go with an empty script where we'll put all of our clock code.

Time for more action – prepare the clock text

In order to display the numbers, we need to add a GUIText component to the Clock GameObject, but there's one problem: GUIText defaults to white, which isn't so hot for a game with a white background. Let's make a quick adjustment to the game background color so that we can see what's going on. We can change it back later.

1. Select the Main Camera in the Hierarchy panel.
2. Find the Camera component in the Inspector panel.
3. Click on the color swatch labeled Back Ground Color, and change it to something darker so that our piece of white GUIText will show up against it. I chose a "delightful" puce (R157 G99 B120).
4. Select the Clock Game Object from the Hierarchy panel. It's not a bad idea to look in the Inspector panel and confirm that the clockScript script was added as a component in the preceding instruction.
5. With the Clock Game Object selected, go to Component | Rendering | GUIText. This is the GUIText component that we'll use to display the clock numbers on the screen.
6. In the Inspector panel, find the GUIText component and type whatever in the blank Text property.
7. In the Inspector panel, change the clock's X position to 0.8 and its Y position to 0.9 to bring it into view. You should see the word whatever in white, floating near the top-right corner of the screen in the Game view.

Right, then! We have a Game Object with an empty script attached. That Game Object has a GUIText component to display the clock numbers. Our game background is certifiably hideous. Let's code us some clock.

Still time for action – change the clock text color

Double-click the clockScript. Your empty script, with one lone Update() function, should appear in the code editor. The very first thing we should consider is doing away with our puce background by changing the GUIText color to black instead of white. Let's get at it.

Write the built-in Start function and change the GUIText color:

```
function Start(){
    guiText.material.color = Color.black;
}

function Update() {
}
```

Save the script and test your game to see your new black text. If you feel comfy, you can change the game background color back to white by clicking on the Main Camera Game Object and finding the color swatch in the Inspector panel. The white whatever GUIText will disappear against the white background in the Game view because the color-changing code that we just wrote runs only when we test the game (try testing the game to confirm this). If you ever lose track of your text, or it's not displaying properly, or you just really wanna see it on the screen, you can change the camera's background color to confirm that it's still there.

If you're happy with this low-maintenance, disappearing-text arrangement, you can move on to the Prepare the clock code section. But, if you want to put in a little extra elbow grease to actually see the text, in a font of your choosing, follow these next steps.
Time for action rides again – create a font texture and material

In order to change the font of this GUIText, and to see it in a different color without waiting for the code to run, we need to import a font, hook it up to a Material, and apply that Material to the GUIText.

1. Find a font that you want to use for your game clock. I like the LOLCats standby Impact. If you're running Windows, your fonts are likely to be in the C:\Windows\Fonts directory. If you're a Mac user, you should look in the /Library/Fonts folder.
2. Drag the font into the Project panel in Unity. The font will be added to your list of Assets.
3. Right-click (or secondary-click) an empty area of the Project panel and choose Create | Material. You can also click on the Create button at the top of the panel.
4. Rename the new Material to something useful. Because I'm using the Impact font, and it's going to be black, I named mine "BlackImpact" (incidentally, "Black Impact" is also the name of my favorite exploitation film from the 70s).
5. Click on the Material you just created in the Project panel. In the Inspector panel, click on the color swatch labeled Main Color and choose black (R0 G0 B0), then click on the little red X to close the color picker.
6. In the empty square area labeled None (Texture 2D), click on the Select button, and choose your font from the list of textures (mine was labeled impact - font texture).
7. At the top of the Inspector panel, there's a drop-down labeled Shader. Select Transparent/Diffuse from the list. You'll know it worked when the preview sphere underneath the Inspector panel shows your chosen font outline wrapped around a transparent sphere. Pretty cool!
8. Click on the Clock Game Object in the Hierarchy panel.
9. Find the GUIText component in the Inspector panel.
10. Click and drag your font—the one with the letter A icon—from the Project panel into the parameter labeled Font in the GUIText component.
You can also click the drop-down arrow (the parameter should say None (Font) initially) and choose your font from the list. Similarly, click-and-drag your Material—the one with the gray sphere icon—from the Project panel into the parameter labeled Material in the GUIText component. You can also click on the drop-down arrow (the parameter should say None (Material) initially) and choose your Material from the list.

Just as you always dreamed about since childhood, the GUIText changes to a solid black version of the fancy font you chose! Now, you can definitely get rid of that horrid puce background and switch back to white. If you made it this far and you're using a Material instead of the naked font option, it's also safe to delete the guiText.material.color = Color.black; line from the clockScript.

Time for action – what's with the tiny font?

The Impact font, or any other font you choose, won't be very… impactful at its default size. Let's change the import settings to biggify it.

1. Click on your imported font—the one with the letter A icon—in the Project panel.
2. In the Inspector panel, you'll see the True Type Font Importer. Change the Font Size to something respectable, like 32, and press the Enter key on your keyboard.
3. Click on the Apply button. Magically, your GUIText cranks up to 32 points (you'll only see this happen if you still have a piece of text like "whatever" entered into the Text parameter of the GUIText component of the Clock Game Object).

What just happened – was that seriously magic?

Of course, there's nothing magical about it. Here's what happened when you clicked on that Apply button: when you import a font into Unity, an entire set of raster images is created for you by the True Type Font Importer. Raster images are the ones that look all pixelly and square when you zoom in on them. Fonts are inherently vector instead of raster, which means that they use math to describe their curves and angles.
Vector images can be scaled up to any size without going all Rubik's Cube on you. But Unity doesn't support vector fonts. For every font size that you want to support, you need to import a new version of the font and change its import settings to a different size. This means that you may have four copies of, say, the Impact font, at the four different sizes you require. When you click on the Apply button, Unity creates its set of raster images based on the font that you're importing.
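The text-based countdown clock this article builds toward rests on one bit of arithmetic: turning a float of remaining seconds into a clamped "M:SS" display string. Here is a minimal sketch of that logic in plain C++ (the article's own scripts are Unity JavaScript; the function name is ours, for illustration only):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdio>
#include <string>

// Turn a remaining-time value in seconds into an "M:SS" display string,
// clamping at zero so an expired clock reads "0:00" rather than negative.
std::string FormatCountdown(double secondsLeft) {
    int total = std::max(0, static_cast<int>(secondsLeft));
    int minutes = total / 60;
    int seconds = total % 60;
    char buf[16];
    std::snprintf(buf, sizeof(buf), "%d:%02d", minutes, seconds);
    return buf;
}
// Example: FormatCountdown(90.7) yields "1:30"; FormatCountdown(-3.0) yields "0:00".
```

In a Unity-style script, you would call something like this every frame with the remaining time and assign the result to the GUIText's text property.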

Flash Multiplayer Virtual World with SmartFoxServer Using Embedded Web Server and Database

Packt
16 Aug 2010
4 min read
(For more resources on Flash, see here.)

Unlike a deployment environment, it is common to have just one machine acting as both server and client in a development environment. The machine will have SmartFoxServer, a web server, and a database installed. In this case, there are no noticeable differences between using the embedded or third-party web server and database. It is a good habit to simulate the deployment environment as much as possible during development. As we are going to use a third-party web server and database, we will set up a development environment that also uses the third-party server instead of the embedded web server and database in the third part of this article series.

Installing the Java Development Kit

The Java Development Kit (JDK) includes the essential development tools and the Java Runtime Environment (JRE). The development tools compile Java source code into byte code, and the JRE is responsible for executing that byte code. We will need to compile Java code several times in later chapters. SmartFoxServer is built on the Java environment, and we need the JRE to start up the server. The JDK and JRE may be pre-installed in some OSs.

Installing the JDK on Windows

The steps for installing the JDK on Windows are as follows:

1. Go to http://java.sun.com/javase/downloads/.
2. Click on the Download button of Java. It will lead to the Java SE Downloads page.
3. Select Windows (or Windows x64 for 64-bit Windows) in Platform.
4. Click on Download. If it prompts an optional login request, we can click Skip this Step to bypass it.
5. Launch the installer after the download. Install the Java Development Kit with all default settings. The Java environment is ready after installation completes.

Installing the JDK on Mac OS X

Mac OS X comes with its own Java environment. We can check the JDK and JRE version with the following steps:

1. Launch Terminal from Applications | Utilities | Terminal.
2. Type the following and press the Enter key:

```
javac -version
```

The command will output the currently installed version of Java on Mac OS X. In my case, it outputs: javac 1.6.0_17. The current version of SmartFoxServer at the time of writing recommends version 1.6. If Java is not up to date, we can update it via Apple Menu | Software Update. The software update will check for any updates to your existing Mac software, including the Java environment.

Installing the JDK on Linux

We can use the general method to download and install the JDK, or use a system-specific method to install the package. We will show the general method and the Ubuntu method.

Installing for general Linux

1. Go to http://java.sun.com/javase/downloads/index.jsp in a browser.
2. Click on the Download button. The platform Linux should be selected automatically. Otherwise, select Linux (or Linux x64 for 64-bit Linux).
3. Click on Continue. If it prompts for login, click on Skip this Step to bypass it.
4. For Red Hat or Fedora Linux, choose the rpm-bin file to download. For other Linux distributions, choose the .bin file to download.
5. Launch a terminal via Applications | Accessories | Terminal after the download completes.
6. Change the directory to the folder that contains the downloaded package. The download destination varies with different profile settings. In my case, it is the Downloads folder:

```
cd ~/Downloads/
```

The version is Java 6 Update 20 at the time of writing, and the filename is jdk-6u20-linux-i586.bin or jdk-6u20-linux-i586-rpm.bin.

7. Then we make the file executable and launch the installer with the following commands:

```
chmod a+x jdk-6u20-linux-i586.bin
./jdk-6u20-linux-i586.bin
```

8. The installer displays the license agreement. Type Yes at the end to agree and continue installation. Press the Enter key after the files' extraction to end the installation.

Installing for Ubuntu Linux

Ubuntu users can install the JDK via the apt-get command.
1. We will search for the latest package name of the JDK with the following command:

```
apt-cache search --names-only sun-java.*-jdk
```

The result shows the available JDK package names. At the time of writing, it is JDK 6:

```
sun-java6-jdk - Sun Java(TM) Development Kit (JDK) 6
```

2. We use the apt-get command to install the JDK:

```
sudo apt-get install sun-java6-jdk
```

Type in your user password when prompted, because apt-get requires the user's password and elevated privileges.
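The javac -version check from the Mac section generalizes to any platform: read the major.minor pair out of output like "javac 1.6.0_17" and compare it to the recommended 1.6. A quick sketch of that comparison (in plain C++ rather than a shell one-liner; the function name is ours):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Extract the major.minor pair from "javac 1.6.0_17"-style output and
// compare it against a required minimum (SmartFoxServer recommends 1.6).
bool MeetsMinimumVersion(const std::string& javacOutput,
                         int wantMajor, int wantMinor) {
    std::istringstream in(javacOutput);
    std::string word, version;
    in >> word >> version;              // "javac", then "1.6.0_17"
    int major = 0, minor = 0;
    char dot = 0;
    std::istringstream v(version);
    v >> major >> dot >> minor;         // parses "1", '.', "6"
    if (major != wantMajor) return major > wantMajor;
    return minor >= wantMinor;
}
```

For example, MeetsMinimumVersion("javac 1.5.0_22", 1, 6) is false, which tells you a Software Update (or apt-get upgrade) is in order.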


Creating and Utilizing Custom Entities

Packt
20 Nov 2013
16 min read
(For more resources related to this topic, see here.)

Introducing the entity system

The entity system exists to spawn and manage entities in the game world. Entities are logical containers, allowing drastic changes in behavior at runtime. For example, an entity can change its model, position, and orientation at any point in the game. Consider this: every item, weapon, vehicle, and even player that you have interacted with in the engine is an entity. The entity system is one of the most important modules present in the engine, and is dealt with regularly by programmers.

The entity system, accessible via the IEntitySystem interface, manages all entities in the game. Entities are referenced using the entityId type definition, which allows 65536 unique entities at any given time. If an entity is marked for deletion, for example via IEntity::Remove(bool bNow = false), the entity system will delete it at the start of the next frame, prior to updating. If the bNow parameter is set to true, the entity will be removed right away.

Entity classes

Entities are simply instances of an entity class, represented by the IEntityClass interface. Each entity class is assigned a name that identifies it, for example, SpawnPoint. Classes can be registered via IEntityClassRegistry::RegisterClass, or via IEntityClassRegistry::RegisterStdClass to use the default IEntityClass implementation.

Entities

The IEntity interface is used to access the entity implementation itself. The core implementation of IEntity is contained within CryEntitySystem.dll, and cannot be modified. Instead, we are able to extend entities using game object extensions (have a look at the Game object extensions section in this article) and custom entity classes.

entityId

Each entity instance is assigned a unique identifier, which persists for the duration of the game session.
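The 65536-entity limit quoted above is exactly 2^16, i.e. a 16-bit index. The engine's actual entityId bit layout is not spelled out here, so the following is only a simplified model of how such an id might pack a 16-bit index alongside a salt/version half (all names and the packing scheme are our assumptions, not CryENGINE's):

```cpp
#include <cassert>
#include <cstdint>

// Simplified model: with a 16-bit index portion, at most 2^16 = 65536
// entities can be addressed at once, matching the limit quoted above.
constexpr unsigned kIndexBits = 16;
constexpr std::uint32_t kMaxEntities = 1u << kIndexBits;  // 65536

// Hypothetical packing: high half holds a salt/version, low half the index.
constexpr std::uint32_t MakeId(std::uint16_t index, std::uint16_t salt) {
    return (static_cast<std::uint32_t>(salt) << kIndexBits) | index;
}

constexpr std::uint16_t IndexOf(std::uint32_t id) {
    return static_cast<std::uint16_t>(id & (kMaxEntities - 1));
}
```

The salt half is what lets an engine detect stale handles after an index slot is reused, which is one common reason an id type is wider than its index range.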
EntityGUID

Besides the entityId parameter, entities are also given globally unique identifiers, which unlike entityId can persist between game sessions, for example when saving games.

Game objects

When entities need extended functionality, they can utilize game objects and game object extensions. This allows for a larger set of functionality that can be shared by any entity. Game objects allow the handling of binding entities to the network, serialization, per-frame updates, and the ability to utilize existing (or create new) game object extensions such as Inventory and AnimatedCharacter. Typically in CryENGINE development, game objects are only necessary for the more important entity implementations, such as actors.

The entity pool system

The entity pool system allows "pooling" of entities, enabling efficient control of entities that are currently being processed. This system is commonly accessed via flowgraph, and allows the disabling/enabling of groups of entities at runtime based on events. Pools are also used for entities that need to be created and released frequently, for example, bullets.

Once an entity has been marked as handled by the pool system, it will be hidden in the game by default. Until the entity has been prepared, it will not exist in the game world. It is also ideal to free the entity once it is no longer needed. For example, if you have a group of AI that only needs to be activated when the player reaches a predefined checkpoint trigger, this can be set up using AreaTrigger (and its included flownode) and the Entity:EntityPool flownode.

Creating a custom entity

Now that we've learned the basics of the entity system, it's time to create our first entity. For this exercise, we'll be demonstrating the ability to create an entity in Lua, C#, and finally C++.

Creating an entity using Lua

Lua entities are fairly simple to set up, and revolve around two files: the entity definition, and the script itself.
To create a new Lua entity, we'll first have to create the entity definition in order to tell the engine where the script is located:

```
<Entity
  Name="MyLuaEntity"
  Script="Scripts/Entities/Others/MyLuaEntity.lua"
/>
```

Simply save this file as MyLuaEntity.ent in the Game/Entities/ directory, and the engine will search for the script at Scripts/Entities/Others/MyLuaEntity.lua.

Now we can move on to creating the Lua script itself! To start, create the script at the path set previously and add an empty table with the same name as your entity:

```
MyLuaEntity = { }
```

When parsing the script, the first thing the engine does is search for a table with the same name as the entity, as you defined it in the .ent definition file. This main table is where we can store variables, Editor properties, and other engine information. For example, we can add our own property by adding a string variable:

```
MyLuaEntity = {
    Properties = {
        myProperty = "",
    },
}
```

It is possible to create property categories by adding subtables within the Properties table. This is useful for organizational purposes. With the changes done, you should see the following screenshot when spawning an instance of your class in the Editor, via the RollupBar present at the far right of the Editor by default:

Common Lua entity callbacks

The script system provides a set of callbacks that can be utilized to trigger specific logic on entity events. For example, the OnInit function is called on the entity when it is initialized:

```
function MyLuaEntity:OnInit()
end
```

Creating an entity in C#

The third-party extension CryMono allows the creation of entities in .NET, which leads us to demonstrate the capability of creating our very own entity in C#. To start, open the Game/Scripts/Entities directory, and create a new file called MyCSharpEntity.cs. This file will contain our entity code, and will be compiled at runtime when the engine is launched. Now, open the script (MyCSharpEntity.cs) in the IDE of your choice.
We'll be using Visual Studio in order to provide IntelliSense and code highlighting. Once opened, let's create a basic skeleton entity. We'll need to add a reference to the CryEngine namespace, in which the most common CryENGINE types are stored:

```
using CryEngine;

namespace CryGameCode
{
    [Entity]
    public class MyCSharpEntity : Entity
    {
    }
}
```

Now, save the file and start the Editor. Your entity should now appear in the RollupBar, inside the Default category. Drag MyCSharpEntity into the viewport in order to spawn it.

We use the entity attribute ([Entity]) as a way of providing additional information for the entity registration process. For example, using the Category property will result in a custom Editor category being used instead of Default:

```
[Entity(Category = "Others")]
```

Adding Editor properties

Editor properties allow the level designer to supply parameters to the entity, perhaps to indicate the size of a trigger area, or to specify an entity's default health value. In CryMono, this can be done by decorating supported types (have a look at the following code snippet) with the EditorProperty attribute. For example, if we want to add a new string property:

```
[EditorProperty]
public string MyProperty { get; set; }
```

Now when you start the Editor and drag MyCSharpEntity into the viewport, you should see MyProperty appear in the lower part of the RollupBar. The MyProperty string variable in C# will be automatically updated when the user edits it via the Editor. Remember that Editor properties will be saved with the level, allowing the entity to use Editor properties defined by the level designer even in pure game mode.

Property folders

As with Lua scripts, it is possible for CryMono entities to place Editor properties in folders for organizational purposes. In order to create folders, you can utilize the Folder property of the EditorProperty attribute as shown:

```
[EditorProperty(Folder = "MyCategory")]
```

You now know how to create entities with custom Editor properties using CryMono!
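Conceptually, an Editor-property system like the one above reduces to keyed values with per-class defaults and per-entity overrides. A toy model in plain C++ (this is our own illustration of the idea, not CryMono's or the engine's actual storage):

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy property store: class-level defaults, optionally overridden per entity.
class SimplePropertyHandler {
public:
    void SetDefault(const std::string& name, const std::string& value) {
        defaults_[name] = value;
    }
    void SetProperty(int entityId, const std::string& name,
                     const std::string& value) {
        values_[entityId][name] = value;
    }
    // Returns the entity's override if present, else the class default.
    std::string GetProperty(int entityId, const std::string& name) const {
        auto e = values_.find(entityId);
        if (e != values_.end()) {
            auto p = e->second.find(name);
            if (p != e->second.end()) return p->second;
        }
        auto d = defaults_.find(name);
        return d != defaults_.end() ? d->second : std::string();
    }
private:
    std::map<std::string, std::string> defaults_;
    std::map<int, std::map<std::string, std::string>> values_;
};
```

A real handler would also serialize overrides with the level, which is what lets designer-set values survive into pure game mode.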
This is very useful when creating simple gameplay elements for level designers to place and modify at runtime, without having to reach for the nearest programmer.

Creating an entity in C++

Creating an entity in C++ is slightly more complex than making one using Lua or C#, and can be done differently based on what the entity is required for. For this example, we'll be detailing the creation of a custom entity class by implementing IEntityClass.

Creating a custom entity class

Entity classes are represented by the IEntityClass interface, which we will derive from and register via IEntityClassRegistry::RegisterClass(IEntityClass *pClass).

To start off, let's create the header file for our entity class. Right-click on your project in Visual Studio, or any of its filters, and go to Add | New Item in the context menu. When prompted, create your header file (.h). We'll be calling our class CMyEntityClass. Now, open the generated MyEntityClass.h header file, and create a new class which derives from IEntityClass:

```
#include <IEntityClass.h>

class CMyEntityClass : public IEntityClass
{
};
```

Now that we have the class set up, we'll need to implement the pure virtual methods we inherit from IEntityClass in order for our class to compile successfully. For most of the methods, we can simply return a null pointer, zero, or an empty string. However, there are a couple of methods which we have to handle for the class to function:

- Release(): This is called when the class should be released, and should simply perform "delete this;" to destroy the class
- GetName(): This should return the name of the class
- GetEditorClassInfo(): This should return the ClassInfo struct, containing Editor category, helper, and icon strings, to the Editor
- SetEditorClassInfo(): This is called when something needs to update the Editor ClassInfo explained just now

IEntityClass is the bare minimum for an entity class, and does not support Editor properties yet (we will cover this a bit further later).
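The Release() convention in the list above, where the object deletes itself, is a common C++ interface idiom. A stripped-down illustration with hypothetical names (this is not the real IEntityClass):

```cpp
#include <cassert>
#include <cstring>

// Abstract interface in the style described above: implementers report
// their name and self-destruct in Release().
struct IExampleClass {
    virtual ~IExampleClass() = default;
    virtual const char* GetName() const = 0;
    virtual void Release() = 0;
};

class CExampleClass : public IExampleClass {
public:
    const char* GetName() const override { return "MyEntityClass"; }
    void Release() override { delete this; }  // "delete this;" as described
};
```

The self-delete pattern keeps allocation and deallocation on the same side of a DLL boundary, which matters when the interface is handed across module boundaries as entity classes are.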
To register an entity class, we need to call IEntityClassRegistry::RegisterClass. This has to be done prior to the IGameFramework::CompleteInit call in CGameStartup. We'll be doing it inside GameFactory.cpp, in the InitGameFactory function:

```
IEntityClassRegistry::SEntityClassDesc classDesc;
classDesc.sName = "MyEntityClass";
classDesc.editorClassInfo.sCategory = "MyCategory";

IEntitySystem *pEntitySystem = gEnv->pEntitySystem;
IEntityClassRegistry *pClassRegistry = pEntitySystem->GetClassRegistry();

bool result = pClassRegistry->RegisterClass(new CMyEntityClass(classDesc));
```

Implementing a property handler

In order to handle Editor properties, we'll have to extend our IEntityClass implementation with a new implementation of IEntityPropertyHandler. The property handler is responsible for handling the setting, getting, and serialization of properties.

Start by creating a new header file named MyEntityPropertyHandler.h. The following is the bare minimum implementation of IEntityPropertyHandler. In order to properly support properties, you'll need to implement SetProperty and GetProperty, as well as LoadEntityXMLProperties (the latter being required to read property values from the level XML). Then create a new class which derives from IEntityPropertyHandler:

```
class CMyEntityPropertyHandler : public IEntityPropertyHandler
{
};
```

In order for the new class to compile, you'll need to implement the pure virtual methods defined in IEntityPropertyHandler.
The methods crucial for the property handler to work properly are as follows:

- LoadEntityXMLProperties: This is called by the Launcher when a level is being loaded, in order to read property values of entities saved by the Editor
- GetPropertyCount: This should return the number of properties registered with the class
- GetPropertyInfo: This is called to get the property information at the specified index, most importantly when the Editor gets the available properties
- SetProperty: This is called to set the property value for an entity
- GetProperty: This is called to get the property value of an entity
- GetDefaultProperty: This is called to retrieve the default property value at the specified index

To make use of the new property handler, create an instance of it (passing the requested properties to its constructor) and return the newly created handler inside IEntityClass::GetPropertyHandler(). We now have a basic entity class implementation, which can be easily extended to support Editor properties. This implementation is very extensible, and can be used for a vast number of purposes; for example, the C# script seen earlier has simply automated this process, lifting the responsibility for so much code from the programmer.

Entity flownodes

You may have noticed that when right-clicking inside a graph, one of the context options is Add Selected Entity. This functionality allows you to select an entity inside a level, and then add its entity flownode to the flowgraph. By default, the entity flownode doesn't contain any ports, and will therefore be mostly useless. However, we can easily create our own entity flownode that targets the entity we selected, in all three languages.
Creating an entity flownode in Lua

By extending the entity we created in the Creating an entity using Lua section, we can add its very own entity flownode:

```
function MyLuaEntity:Event_OnBooleanPort()
    BroadcastEvent(self, "MyBooleanOutput");
end

MyLuaEntity.FlowEvents =
{
    Inputs =
    {
        MyBooleanPort = { MyLuaEntity.Event_OnBooleanPort, "bool" },
    },
    Outputs =
    {
        MyBooleanOutput = "bool",
    },
}
```

We just created an entity flownode for our MyLuaEntity class. If you start the Editor, spawn your entity, select it, and then click on Add Selected Entity in your flowgraph, you should see the node appear.

Creating an entity flownode using C#

Creating an entity flownode in C# is very simple, as the implementation is almost exactly identical to that of regular flownodes. To create a new flownode for your entity, simply derive from EntityFlowNode<T>, where T is your entity class name:

```
using CryEngine.Flowgraph;

public class MyEntity : Entity
{
}

public class MyEntityNode : EntityFlowNode<MyEntity>
{
    [Port]
    public void Vec3Test(Vec3 input) { }

    [Port]
    public void FloatTest(float input) { }

    [Port]
    public void VoidTest() { }

    [Port]
    OutputPort<bool> BoolOutput { get; set; }
}
```

We just created an entity flownode in C#. This allows us to utilize TargetEntity in our new node's logic.

Creating an entity flownode in C++

In short, entity flownodes are identical in implementation to regular nodes. The difference lies in the way the node is registered, as well as the prerequisite for the entity to support TargetEntity.
Registering the entity node

We utilize the same methods for registering entity nodes as before, the only difference being that the category has to be entity, and the node name has to be the same as the entity it belongs to:

```
REGISTER_FLOW_NODE("entity:MyCppEntity", CMyEntityFlowNode);
```

The final code

Finally, from what we've learned now, we can easily create our first entity flownode in C++:

```
#include "stdafx.h"
#include "Nodes/G2FlowBaseNode.h"

class CMyEntityFlowNode : public CFlowBaseNode
{
    enum EInput
    {
        EIP_InputPort,
    };

    enum EOutput
    {
        EOP_OutputPort
    };

public:
    CMyEntityFlowNode(SActivationInfo *pActInfo)
    {
    }

    virtual IFlowNodePtr Clone(SActivationInfo *pActInfo)
    {
        return new CMyEntityFlowNode(pActInfo);
    }

    virtual void ProcessEvent(EFlowEvent evt, SActivationInfo *pActInfo)
    {
    }

    virtual void GetConfiguration(SFlowNodeConfig &config)
    {
        static const SInputPortConfig inputs[] =
        {
            InputPortConfig_Void("Input", "Our first input port"),
            {0}
        };
        static const SOutputPortConfig outputs[] =
        {
            OutputPortConfig_Void("Output", "Our first output port"),
            {0}
        };

        config.pInputPorts = inputs;
        config.pOutputPorts = outputs;
        config.sDescription = _HELP("Entity flow node sample");
        config.nFlags |= EFLN_TARGET_ENTITY;
    }

    virtual void GetMemoryUsage(ICrySizer *s) const
    {
        s->Add(*this);
    }
};

REGISTER_FLOW_NODE("entity:MyCppEntity", CMyEntityFlowNode);
```

Game objects

As mentioned at the start of the article, game objects are used when more advanced functionality is required of an entity, for example, if an entity needs to be bound to the network. There are two ways of implementing game objects: one is to register the entity directly via IGameObjectSystem::RegisterExtension (thereby having the game object automatically created on entity spawn), and the other is to utilize the IGameObjectSystem::CreateGameObjectForEntity method to create a game object for an entity at runtime.
Game object extensions

It is possible to extend game objects by creating extensions, allowing the developer to hook into a number of entity and game object callbacks. This is, for example, how actors are implemented by default. We will be creating our game object extension in C++. The CryMono entity we created earlier in the article was made possible by a custom game object extension contained in CryMono.dll; it is currently not possible to create further extensions via C# or Lua.

Creating a game object extension in C++

CryENGINE provides a helper class template for creating a game object extension, called CGameObjectExtensionHelper. This helper class is used to avoid duplicating common code that is necessary for most game object extensions, for example, basic RMI functionality.

To properly implement IGameObjectExtension, simply derive from the CGameObjectExtensionHelper template, specifying the first template argument as the class you're writing (in our case, CMyGameObjectExtension) and the second as the IGameObjectExtension you're looking to derive from. Normally, the second argument is IGameObjectExtension, but it can be different for specific implementations such as IActor (which in turn derives from IGameObjectExtension).

```
class CMyGameObjectExtension
    : public CGameObjectExtensionHelper<CMyGameObjectExtension, IGameObjectExtension>
{
};
```

Now that you've derived from IGameObjectExtension, you'll need to implement all its pure virtual methods to spare yourself from a bunch of unresolved externals. Most can be overridden with empty methods that return nothing or false, while the more important ones are listed as follows:

- Init: This is called to initialize the extension. Simply perform SetGameObject(pGameObject); and then return true.
- NetSerialize: This is called to serialize things over the network.

You'll also need to implement IGameObjectExtensionCreatorBase in a new class that will serve as an extension factory for your entity.
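CGameObjectExtensionHelper's two template arguments follow the CRTP (curiously recurring template pattern): the helper base is told the concrete derived class at compile time, so shared plumbing can refer to it without virtual dispatch. A minimal standalone illustration of the shape (all names here are ours, not the SDK's):

```cpp
#include <cassert>
#include <cstring>

struct IExtension {
    virtual ~IExtension() = default;
};

// CRTP helper: first argument is the concrete class, second the interface
// to inherit from, mirroring CGameObjectExtensionHelper's parameter order.
template <typename Derived, typename Interface>
class ExtensionHelper : public Interface {
public:
    // Shared code can name the concrete type at compile time, e.g. for
    // static registration data, without any virtual call.
    static const char* ClassName() { return Derived::Name(); }
};

class CMyExtension
    : public ExtensionHelper<CMyExtension, IExtension> {
public:
    static const char* Name() { return "MyGameObjectExtension"; }
};
```

Passing the interface as the second argument is what lets the same helper serve both plain IGameObjectExtension extensions and deeper hierarchies like IActor.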
When the extension is about to be activated, our factory's Create() method will be called in order to obtain the new extension instance:

struct SMyGameObjectExtensionCreator
    : public IGameObjectExtensionCreatorBase
{
    virtual IGameObjectExtension *Create()
    {
        return new CMyGameObjectExtension();
    }

    virtual void GetGameObjectExtensionRMIData(void **ppRMI, size_t *nCount)
    {
        return CMyGameObjectExtension::GetGameObjectExtensionRMIData(ppRMI, nCount);
    }
};

Now that you've created both your game object extension and its creator, simply register the extension:

static SMyGameObjectExtensionCreator creator;
gEnv->pGameFramework->GetIGameObjectSystem()->RegisterExtension("MyGameObjectExtension", &creator, myEntityClassDesc);

By passing the entity class description to IGameObjectSystem::RegisterExtension, you're telling it to create a dummy entity class for you. If you have already registered an entity class, simply pass the last parameter, pEntityCls, as NULL to make it use the class you registered before.

Activating our extension

In order to activate your game object extension, you'll need to call IGameObject::ActivateExtension after the entity is spawned. One way to do this is by using the entity system sink, IEntitySystemSink, and listening for its OnSpawn events. Once our sink is registered, its OnSpawn method will be called whenever an entity is spawned, allowing us to create an instance of our game object extension.

Summary

In this article, we have learned how the core entity system is implemented and exposed, and we created our own custom entity. You should now be familiar with the process of creating accompanying flownodes for your entities, and have a working knowledge of game objects and their extensions. If you want to get more familiar with the entity system, try creating a slightly more complex entity on your own.
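To make the spawn-time activation flow concrete, here is a minimal, self-contained C++ sketch of the sink pattern described above. Note that IEntity, IEntitySystemSink, and the mock entity system below are simplified stand-ins written for this example only; the real CryENGINE interfaces have more callbacks and different signatures.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified stand-in for an entity; the real IEntity is far richer.
struct IEntity {
    std::string className;
};

// Mock of the sink interface: the real IEntitySystemSink also exposes
// OnBeforeSpawn, OnRemove, OnEvent, and so on.
struct IEntitySystemSink {
    virtual ~IEntitySystemSink() = default;
    virtual void OnSpawn(IEntity &entity) = 0;
};

// A toy entity system that notifies every registered sink on spawn.
class CEntitySystemMock {
public:
    void AddSink(IEntitySystemSink *pSink) { m_sinks.push_back(pSink); }

    void Spawn(const std::string &className) {
        IEntity entity{className};
        for (IEntitySystemSink *pSink : m_sinks)
            pSink->OnSpawn(entity); // every sink hears about the spawn
    }

private:
    std::vector<IEntitySystemSink *> m_sinks;
};

// Our sink reacts to spawns of the entity class we care about; in real
// code, this is where IGameObject::ActivateExtension would be called.
class CActivationSink : public IEntitySystemSink {
public:
    int activations = 0;

    void OnSpawn(IEntity &entity) override {
        if (entity.className == "MyGameObjectExtension")
            ++activations; // stand-in for pGameObject->ActivateExtension(...)
    }
};
```

Registering the sink and then spawning an entity of the matching class drives exactly one activation, which is the behavior the OnSpawn approach relies on.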
Resources for Article: Further resources on this subject: CryENGINE 3: Breaking Ground with Sandbox [Article] CryENGINE 3: Fun Physics [Article] How to Create a New Vehicle in CryENGINE 3 [Article]
Using 3D Objects

Packt
15 Sep 2015
11 min read
In this article by Liz Staley, author of the book Manga Studio EX 5 Cookbook, you will learn the following topics:

- Adding existing 3D objects to a page
- Importing a 3D object from another program
- Manipulating 3D objects
- Adjusting the 3D camera

(For more resources related to this topic, see here.)

One of the features of Manga Studio 5 that people ask me about all the time is 3D objects. Manga Studio 5 comes with a set of 3D assets: characters, poses, and a few backgrounds and small objects. These can be added directly to your page, posed and positioned, and used in your artwork. While I usually use these 3D poses as a reference (much like the wooden drawing dolls that you can find in your local craft store), you can conceivably use 3D characters and imported 3D assets from programs such as Poser to create entire comics. Let's get into the third dimension now, and you will learn how to use these assets in Manga Studio 5.

Adding existing 3D objects to a page

Manga Studio 5 comes with many 3D objects present in the materials library. This is the fastest way to get started with using the 3D features.

Getting ready

You must have a page open in order to add a 3D object. Open a page of any size to start the recipes covered here.

How to do it…

The following steps will show us how to add an existing 3D material to a page:

1. Open the materials library. This can be done by going to Window | Material | Material [3D].
2. Select a category of 3D material from the list on the left-hand side of the library, or scroll down the Material library preview window to browse all the available materials.
3. Select a material to add to the page by clicking on it to highlight it. In this recipe, we are choosing the School girl B 02 character material. It is highlighted in the following screenshot:
4. Hold the left mouse button down on the selected material and drag it onto the page, releasing the mouse button once the cursor is over the page, to display the material.
Alternatively, you can click on the Paste selected material to canvas icon at the bottom of the Material library menu. The selected 3D material will be added to the page. The School girl B 02 material is shown in this default character pose:

Importing a 3D object from another program

You don't have to use only the default 3D models included in Manga Studio 5. The process of importing a model is very easy. The types of files that can be imported into Manga Studio 5 are c2fc, c2fr, fbx, lwo, lws, obj, 6kt, and 6kh.

Getting ready

You must have a page open in order to add a 3D object. Open a page of any size to start this recipe. For this recipe, you will also need a model to import into the program. These can be found on numerous websites, including my.smithmicro.com, under the Poser tab.

How to do it…

The following steps will walk us through the simple process of importing a 3D model into Manga Studio 5:

1. Open the location where the 3D model you wish to import has been saved. If you have downloaded the 3D model from the Internet, it may be in the Downloads folder on your PC.
2. Arrange the windows on your computer screen so that the location of the 3D model and Manga Studio 5 are both visible, as shown in the following screenshot:
3. Click on the 3D model file and hold down the mouse button.
4. While still holding down the mouse button, drag the 3D model file into the Manga Studio 5 window.
5. Release the mouse button. The 3D model will be imported into the open page, as shown in this screenshot:

Manipulating 3D objects

You've learned how to add a 3D object to your project. But how can you pose it the way you want it to look for your scene? With a little time and patience, you'll be posing characters like a pro in no time!

Getting ready

Follow the directions in the Adding existing 3D objects to a page recipe before following the steps in this recipe.
How to do it…

This recipe will walk us through moving a character into a custom pose:

1. Be sure that the Object tool under Operation is selected.
2. Click on the 3D object to manipulate, if it is not already selected.
3. To move the entire object up, down, left, or right, hover the mouse cursor over the fourth icon in the top-left corner of the box around the selected object. Click and hold the left mouse button; then, drag to move the object in the desired direction. The following screenshot shows the location of the icon used to move the object up, down, left, or right. It is highlighted in pink and also shown over the 3D character. If your models are moving very slowly, you may need to allocate more memory to Manga Studio EX 5. This can be done by going to File | Preferences | Performance.
4. To rotate the object along the y axis (or the horizon line), hover the mouse cursor over the fifth icon in the top-left corner of the box around the selected object. Click on it, hold the left mouse button, and drag. The object will rotate along the y axis, as shown in this screenshot:
5. To rotate the object along the x axis (straight up and down vertically), hover the mouse cursor over the sixth icon in the top-left corner of the box around the selected object. Click and drag. The object will rotate vertically around its center, as shown in the following screenshot:
6. To move the object back and forth in 3D space, hover the mouse cursor over the seventh icon in the top-left corner of the box around the selected object. Click and hold the left mouse button; then drag it. The icon is shown as follows, highlighted in pink, and the character has been moved back—away from the camera:
7. To move one part of a character, click on the part to be moved. For this recipe, we'll move the character's arm down. To do this, we'll click on the upper arm portion of the character to select it. When a portion of the character is selected, a sphere with three lines circling it will appear.
Each of these three lines represents one axis (x, y, and z) and controls the rotation of that portion of the character. This set of lines is shown here: Use the lines of the sphere to rotate the part of the character to the desired position. For a more precise movement, the scroll wheel on the mouse can be used as well. In the following screenshot, the arm has been rotated so that it is down at the character's side: Do you keep accidentally moving a part of the model that you don't want to move? Put the cursor over the part of the model that you'd like to keep in place, and then right-click. A blue box will appear on that part of the model, and the piece will be locked in to place. Right-click again to unlock the part. How it works… In this recipe, we covered how to move and rotate a 3D object and portions of 3D characters. This is the start of being able to create your own custom poses and saving them for reuse. It's also the way to pose the drawing doll models in Manga Studio to make pose references for your comic artwork. In the 3D-Body Type folder of the materials library, you will find Female and Male drawing dolls that can be posed just as the premade characters can. These generic dolls are great for getting that difficult pose down. Then use the next recipe, Adjusting the 3D camera, to get the angle you need, and draw away! The following screenshot shows a drawing doll 3D object that has been posed in a custom stance. The preceding pose was relatively easy to achieve. The figure was rotated along the x axis, and then the head and neck joints were both rotated individually so that the doll looked toward the camera. Both its arms were rotated down and then inward. The hands were posed. The ankle joints were selected and the feet were rotated so that the toes were pointed. Then the knee of the near leg was rotated to bend it. The hip of the near leg was also rotated so that the leg was lifted slightly, giving a "cutesy" look to the pose. 
Having trouble posing a character's hands exactly the way you want them? Then open the Sub Tool Detail palette and click on Pose in the left-hand-side menu. In this area, you will find a menu with a picture of a hand. This is a quick controller for the fingers. Select the hand that you wish to pose. Along the bottom of the menu are some preset hand poses for things such as closed fists. At the top of each finger on this menu is an icon that looks like chain links. Click on one of them to lock the finger that it is over and prevent it from moving. The triangle area over the large blue hand symbol controls how open and closed the fingers are. You will find this menu much easier than rotating each joint individually—I'm sure! Adjusting the 3D camera In addition to manipulating 3D objects or characters, you can also change the position of the 3D camera to get the composition that you desire for your work. Think of the 3D camera just like a camera on a movie set. It can be rotated or moved around to frame the actors (3D characters) and scenery just the way the director wants! Not sure whether you moved the character or the camera? Take a look at the ground plane, which is the "checkerboard" floor area underneath the characters and objects. If the character is standing straight up and down on the ground plane, it means that the camera was moved. If the character is floating above or below the ground plane, or part of the way through it, it means that the character or object was moved. Getting ready Follow the directions given in the Adding existing 3D objects to a page recipe before following the steps in this recipe. How to do it… To rotate the camera around an object (the object will remain stationary), hover the mouse cursor over the first icon in the top-left corner of the box around the selected object. Click and hold the left mouse button, and then drag. 
The icon and the camera rotation are shown in the following screenshot: To move the camera up, down, left, or right, hover the mouse cursor over the second icon in the top-left corner of the box around the selected object. Click and hold the left mouse button, and then drag. The icon and camera movement are shown in this screenshot: To move the camera back and forth in the 3D space, hover the mouse cursor over the third icon in the top-left corner of the box around the selected object. Again, click and hold the left mouse button, and then drag. The next screenshot shows the zoom icon in pink at the top and the overlay on top of the character. Note how the hand of the character and the top of the head are now out of the page, since the camera is closer to her and she appears larger on the canvas. Summary In this article, we have studied to add existing 3D objects to a page using Manga Studio 5 in detail. After adding the existing object, we saw steps to add the 3D object from another program. Then, there are steps to manipulate these 3D objects along the co-ordinate system by using tools available in Manga Studio 5. Finally, we learnt to position the 3D camera, by rotating it around an object. Resources for Article: Further resources on this subject: Ink Slingers [article] Getting Familiar with the Story Features [article] Animating capabilities of Cinema 4D [article]
MoGraph

Packt
18 Dec 2012
8 min read
(For more resources related to this topic, see here.) Before we begin Most of the tools we'll be featuring here are only available in the Broadcast and Studio installations of Cinema 4D. As discussed, this article will cover the basics of MoGraph objects and introduce a couple of sample animation ideas, but as you continue to learn and grow as an animator, you'll most likely be taken aback at how many possibilities there are! MoGraph allows you to create objects with a basic set of parameters and combine them in endless ways to create unique animations. Let's dive in and start imagining! Cloner objects The backbone of MoGraph objects is the cloner object. At its most basic level, it allows you to create multiple clones of an object in your scene. These clones can then be influenced by effectors, which we will discuss shortly. All MoGraph objects can be accessed through the MoGraph menu at the top of your screen. Your menu should look like the following screenshot: Let's open a new scene to explore cloners. Create a sphere, make sure it is selected, then navigate to MoGraph | Cloner. You can parent the sphere to the cloner manually, or hold down the Alt key while creating the cloner to parent it automatically: We've cloned our object, but it doesn't look much like clones so far—just a bumpy, vertical pill shape! This is due to the default sizes of our objects not playing well together. Our sphere has a 100 cm radius, and our clones are set 50 cm apart. Let's change the size of our sphere to 25 cm to start. You should now see three distinct spheres stacked on top of each other. As we create more and more spheres to experiment with cloner settings, you may find that your computer is getting bogged down. We're using a sphere here, but a cube would work just as well and creates far less geometry. You can also reduce your segments on the sphere if desired, but using a simpler form will probably be the most effective method. 
Let's take a look at the cloner settings in the Attributes Manager:

The Basic and Coordinates tabs follow the same standard as the other object types we've encountered so far, but the Object tab is where most of our work will happen. The first step in using a cloner is to choose a Mode. Object mode arranges clones around any specified additional object in your scene. If you switch your cloner to Object mode, you'll see that you still have an object selected, but the clones have disappeared. This is because the cloner is relying on an object to arrange clones, but we haven't specified one. Try creating any primitive—we'll use a Capsule for the following example, then drag it from the Objects Manager into the Object field in the Attributes Manager. Since our sphere is relatively large compared to the Capsule, for the moment, let's change its radius to 4 cm. Your objects should be arranged as shown in the following screenshot:

By default, the clones are distributed at each vertex of the object (specified in the Distribution field). If you want more or fewer clones while in Vertex mode, select the capsule and change its height and cap segments accordingly. Also, the visibility of the clones is linked only to the cloner, and not to the original object. If we turn off visibility on the capsule, the clones stay where they are. The available distributions are:

- Vertex: This aligns clones to all vertices (objects can be parametric or polygonal).
- Edge: This aligns clones along edges. Edge will look relatively similar to Vertex but will most likely have significantly more clones. It can also be used with selection sets to specify which edges should be used.
- Polygon Center: This will look similar to Vertex, but with clones aligned to each polygon. This can be used with selection sets to specify which polygons should be used.
- Surface: This aligns clones randomly to the surface; the number of clones is determined by the count value.
- Volume: This fills the object with clones and requires either a transparent material on the original object or turning off visibility:

Now that we've explored distribution, let's take a look at the different cloner modes. Linear mode arranges clones in a straight line, while Radial mode arranges clones in a circle—you can think of it as a more advanced version of the Array objects we used when creating our desk chair. Grid Array mode arranges clones in a 3D grid, filling a cube, sphere, or cylinder, as shown in the following screenshot. Sounds simple, right? Grid Array, when partnered with effectors, is one of the most powerful tools in your MoGraph toolbox. Let's take a look at the settings.

The Count field allows you to specify how many clones there are in all three directions. The Size field will specify the dimensions of the container that the clones fill. This is the key difference from the Duplicate function we learned previously; Duplicate will arrange instances that are spaced x distance apart, while the Size field on cloners specifies the total distance between the outermost and innermost objects. Note that if you change the count of any objects, it adds additional clones inside our cube rather than adding additional rows at the top or bottom, as shown in the following screenshot:

Cloners are incredibly versatile, and you may find yourself using them as a modeling tool as you become more comfortable with the software. Now that we've gotten the basics of cloners down, let's add an Effector and see why this tool is so powerful!

Effectors

Effectors are, very simply, invisible objects in Cinema 4D that influence the behavior of other objects. The easiest way to learn how they work is to dive right in, so let's get started! With your cloner object selected (and set back to Grid Array, if you've been experimenting with the different modes), navigate to MoGraph | Effector | Random as shown in the previous screenshot.
You should see all of the clones move in random directions! If you did not select the cloner before creating an effector, they will not be automatically linked. If the clones were unaffected, select the cloner, switch to the Effectors tab, and drag the Random effector from the Objects Manager into the open window as shown in the following screenshot: The Random effector is set, by default, to move all objects a maximum of 50 cm in any direction. This takes our clones that exist within the 200 cm cube and allows them to shift an additional 50 cm at random. We're even given an amount of control over that randomness, allowing for endless organic animations. Let's take a look at the settings for the Random effector: Click-and-drag on the Strength slider. As you approach 0 percent, the spheres move closer together. The Strength field works directly with the Transform parameters, so if you change the strength to 50 percent but leave the Transform values the same, your positions will be identical to a Random effector with 100 percent strength and 25 cm in all directions, as demonstrated in the following screenshot. The cloner on the left is having 50 percent strength, 50 cm x 50 cm x 50 cm, while the cloner on the right is having 100 percent strength, 25 cm x 25 cm x 25 cm: The reason these appear identical is due to their Seed value. True randomness is near impossible to create, so random algorithms often rely on a unique number to determine the position of objects. If you change the seed value, it will change the random positions. If you create a Random effector and dislike the result, clicking through seed values until you have a more desirable configuration is a quick and easy way to completely change the scene. This value can be keyframed as well, which can be combined with keyed transformation values to create complicated organic animations very quickly. In addition to position, you can also randomize scale and rotation. 
Scale values represent multipliers, rather than a percentage—so a scale of 2 equates to a potential 200 percent increase. 1 is equivalent to 100 percent, meaning a 25 cm sphere may be up to 50 cm—a 100 percent increase. Clicking on the Uniform Scale option prevents distorting the sphere. If you want to test the rotation option and are still using spheres, you may want to create a basic patterned material and add it to your object as shown in the following screenshot - otherwise it'll be impossible to tell that they're rotated! Cloners can have multiple effectors as well. With a cloner selected, navigate to MoGraph | Effector | Time. In the Attributes Manager, choose the attributes you'd like to manage over time—perhaps leave the position attributes to the Random effector and add Scale and Rotation to Time—then scroll through the timeline to see how the objects are affected:
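The strength-to-transform equivalence shown earlier (50 percent strength with 50 cm transforms matching 100 percent strength with 25 cm) is simple linear scaling: the maximum random offset per axis is the Strength percentage applied to the Transform value. Here is a quick sketch of that relationship; the function name is ours for illustration, not part of Cinema 4D.

```cpp
#include <cassert>

// Maximum random offset per axis produced by a Random effector:
// the Strength percentage linearly scales the Transform range.
double maxOffsetCm(double strengthPercent, double transformCm) {
    return (strengthPercent / 100.0) * transformCm;
}
```

So 50 percent of 50 cm and 100 percent of 25 cm both allow clones to drift up to 25 cm per axis, which is why the two cloners look identical when they share a seed.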
Getting Started – An Introduction to GML

Packt
21 Apr 2014
7 min read
(For more resources related to this topic, see here.)

Creating GML scripts

Before diving into any actual code, we should address the various places in which scripts can appear in GameMaker, as well as the reasoning behind placing scripts in one area versus another.

Creating GML scripts within an event

Within an object, each event added can either contain a script or call one. This will be the only instance when dragging-and-dropping is required, as the goal of scripting is to eliminate the need for it. To add a script to an event within an object, go to the control tab of the Object Properties menu of the object being edited. Under the Code label, the first two icons deal with scripts. Displayed in the following screenshot, the leftmost icon, which looks like a piece of paper, will create a script that is unique to that object type; the middle icon, which looks like a piece of paper with a green arrow, will allow for a script resource to be selected and then called during the respective event. Creating scripts within events is most useful when the scripts within those events perform actions that are very specific to the object instance triggering the event. The following screenshot shows these object instances:

Creating scripts as resources

Navigating to Resources | Create Script or using the keyboard shortcut Shift + Ctrl + C will create a script resource. Once created, a new script should appear under the Scripts folder on the left side of the project where resources are located. Creating a script as a resource is most useful in the following conditions:

- When many different objects utilize this functionality
- When a function requires multiple input values or arguments
- When global actions such as saving and loading are utilized
- When implementing complex logic and algorithms

Scripting a room's creation code

Room resources are specific resources where objects are placed and gameplay occurs.
Room resources can be created by navigating to Resources | Create room or using Shift + Ctrl + R. Rooms can also contain scripts. When editing a room, navigate to the settings tab within the Room Properties panel and you should see a button labeled Creation code, as seen in the following screenshot. When clicked on, this will open a blank GML script. This script will be executed as soon as the player loads the specified room, before any objects trigger their own events. Using Creation code is essentially the same as having a script in the Create event of an object.

Understanding parts of GML scripts

GML scripts are made up of many different parts. The following section will go over these different parts and their syntax, formatting, and usage.

Programs

A program is a set of instructions that are followed in a specific order. One way to think of it is that every script written in GML is essentially a program. Programs in GML are usually enclosed within braces, { }, as shown in the following example:

{
    // Defines an instanced string variable.
    str_text = "Hello World";

    // Every frame, 10 units are added to x, a built-in variable.
    x += 10;

    // If x is greater than 200 units, the string changes.
    if (x > 200)
    {
        str_text = "Hello Mars";
    }
}

The previous code example contains two assignment expressions followed by a conditional statement, followed by another program with an assignment expression. If the preceding script were an actual GML script, the initial set of braces enclosing the program would not be required. Each instruction or line of code ends with a semicolon (;). This is not strictly required, as a line break or return is sufficient, but the semicolon is a common symbol used in many other programming languages to indicate the end of an instruction. Using it is a good habit that improves the overall readability of your code.

snake_case

Before continuing with this overview of GML, it's very important to observe that the formatting used in GML programs is snake case.
Though it is not necessary to use this formatting, the built-in methods and constants of GML use it; so, for the sake of readability and consistency, it is recommended that you use snake casing, which has the following requirements:

- No capital letters are used
- All words are separated by underscores

Variables

Variables are the main working units within GML scripts, which are used to represent values. Variables are unique in GML in that, unlike some programming languages, they are not strictly typed, which means that a variable does not have to represent a specific data structure. Instead, variables can represent either of the following types:

- A number, also known as a real, such as 100 or 2.0312. Integers can also correspond to a particular instance of an object, room, script, or another type of resource.
- A string, which represents a collection of alphanumeric characters commonly used to display text, encased in either single or double quotation marks, for example, "Hello World".

Variable prefixes

As previously mentioned, the same variable can be assigned any of the mentioned variable types, which can cause a variety of problems. To combat this, the prefix of a variable name usually identifies the type of data stored within the variable, such as str_player_name (which represents a string). The following are the common prefixes that will be used:

- str: String
- spr: Sprites
- snd: Sounds
- bg: Backgrounds
- pth: Paths
- scr: Scripts
- fnt: Fonts
- tml: Timeline
- obj: Object
- rm: Room
- ps: Particle System
- pe: Particle Emitter
- pt: Particle Type
- ev: Event

Variable names cannot start with numbers or most other non-alphanumeric characters, so it is best to stick with using basic letters.

Variable scope

Within GML scripts, variables have different scopes. This means that the way in which the values of variables are accessed and set varies. The following are the different scopes:

- Instance: These variables are unique to the instances or copies of each object.
They can be accessed and set by themselves or by other game objects and are the most common variables in GML.

- Local: Local variables are those that exist only within a function or script. They are declared using the var keyword and can be accessed only within the scripts in which they've been created.
- Global: A variable that is global can be accessed by any object through scripting. It belongs to the game and not an individual object instance. There cannot be multiple global variables of the same name.
- Constants: Constants are variables whose values can only be read and not altered. They can be instanced or global variables. Instanced constants are, for example, object_index or sprite_width. The true and false variables are examples of global constants. Additionally, any created resource can be thought of as a global constant representing its ID and unable to be assigned a new value.

The following example demonstrates the assignment of different variable types:

// Local variable assignment.
var a = 1;

// Global variable declaration and assignment.
globalvar b;
b = 2;

// Alternate global variable declaration and assignment.
global.c = 10;

// Instanced variable assignment through the use of "self".
self.x = 10;

/* Instanced variable assignment without the use of "self".
   Works identically to using "self". */
y = 10;

Built-in variables

Some global variables and instanced variables are already provided by GameMaker: Studio for each game and object. Variables such as x, sprite_index, and image_speed are examples of built-in instanced variables. Meanwhile, some built-in variables are also global, such as health, score, and lives. The use of these in a game is really up to personal preference, but their appropriate names do make them easier to remember. When any type of built-in variable is used in scripting, it will appear in a different color, the default being a light, pinkish red.
All built-in variables are documented in GameMaker: Studio's help contents, which can be accessed by navigating to Help | Contents... | Reference or by pressing F1. Creating custom constants Custom constants can be defined by going to Resources | Define Constants... or by pressing Shift + Ctrl + N. In this dialog, first a variable name and then a correlating value are set. By default, constants will appear in the same color as built-in variables when written in the GML code. The following screenshot shows this interface with some custom constants created:
Virtually Everything for Everyone

Packt
19 Aug 2015
21 min read
This virtual reality thing calls into question, what does it mean to "be somewhere"? Before cell phones, you would call someone and it would make no sense to say, "Hey, where are you?" You know where they are, you called their house, that's where they are. So then cell phones come around and you start to hear people say, "Hello. Oh, I'm at Starbucks," because the person on the other end wouldn't necessarily know where you are, because you became un-tethered from your house for voice communications. So when I saw a VR demo, I had this vision of coming home and my wife has got the kids settled down, she has a couple minutes to herself, and she's on the couch wearing goggles on her face. I come over and tap her on the shoulder, and I'm like, "Hey, where are you?" It's super weird. The person's sitting right in front of you, but you don't know where they are.

- Jonathan Stark, mobile expert and podcaster

In this article, by Jonathan Linowes, author of the book Unity Virtual Reality Projects, we will define virtual reality and illustrate how it can be applied not only to games but also to many other areas of interest and productivity.

Welcome to virtual reality! In this book, we will explore what it takes to create virtual reality experiences on our own. We will take a walk through a series of hands-on projects, step-by-step tutorials, and in-depth discussions using the Unity 5 3D game engine and other free or open source software. Though virtual reality technology is rapidly advancing, we'll try to capture the basic principles and techniques that you can use to make your VR games and applications feel immersive and comfortable.

This article discusses the following topics:

- What is virtual reality?
- Differences between virtual reality (VR) and augmented reality (AR)
- How VR applications may differ from VR games
- Types of VR experiences
- Technical skills that are necessary for the development of VR

What is virtual reality to you?

Today, we are witnesses to burgeoning consumer virtual reality, an exciting technology that promises to transform in a fundamental way how we interact with information, our friends, and the world at large.

What is virtual reality? In general, VR is the computer-generated simulation of a 3D environment, which seems very real to the person experiencing it, using special electronic equipment. The objective is to achieve a strong sense of being present in the virtual environment.

Today's consumer tech VR involves wearing a head-mounted display (such as goggles) to view stereoscopic 3D scenes. You can look around by moving your head, and walk around by using hand controls or motion sensors. You are engaged in a fully immersive experience. It's as if you're really there in some other virtual world. The following image shows a guy experiencing an Oculus Rift Development Kit 2 (DK2):

Virtual reality is not new. It's been here for decades, albeit hidden away in academic research labs and high-end industrial and military facilities. It was big, clunky, and expensive. Ivan Sutherland invented the first head-mounted display in 1966, which is shown in the following image. It was tethered to the ceiling!

In the past, several failed attempts have been made to bring consumer-level virtual reality products to the market. In 2012, Palmer Luckey, the founder of Oculus VR LLC, gave a demonstration of a makeshift head-mounted VR display to John Carmack, the famed developer of the Doom, Wolfenstein 3D, and Quake classic video games. Together, they ran a successful Kickstarter campaign and released a developer kit called the Oculus Rift Development Kit 1 (DK1) to an enthusiastic community.
This caught the attention of investors, as well as Mark Zuckerberg, and in March 2014, Facebook bought the company for $2 billion. With no product, no customers, and an infinite promise, the money and attention that it attracted has helped fuel a new category of consumer products. Others have followed suit, including Google, Sony, Samsung, and Steam. New innovations and devices that enhance the VR experience continue to be introduced.

Most of the basic research has already been done and the technology is now affordable, thanks in large part to the mass adoption of devices that work on mobile technology. There is a huge community of developers with experience in building 3D games and mobile apps. Creative content producers are joining in and the media is talking it up. At last, virtual reality is real!

Say what? Virtual reality is real? Ha! If it's virtual, how can it be... Oh, never mind.

Eventually, we will get past the focus on the emerging hardware devices and recognize that content is king. The current generation of 3D development software (commercial, free, and open source) that has spawned a plethora of indie, or independent, game developers can also be used to build non-game VR applications. Though VR finds most of its enthusiasts in the gaming community, the potential applications reach well beyond that. Any business that presently uses 3D modeling and computer graphics will be more effective if it uses the VR technology. The sense of immersive presence that is afforded by VR can enhance all common online experiences today, which includes engineering, social networking, shopping, marketing, entertainment, and business development. In the near future, viewing 3D websites with a VR headset may be as common as visiting ordinary flat websites today.

Types of head-mounted displays

Presently, there are two basic categories of head-mounted displays for virtual reality: desktop VR and mobile VR.
Desktop VR

With desktop VR (and console VR), your headset is a peripheral to a more powerful computer that processes the heavy graphics. The computer may be a Windows PC, Mac, Linux machine, or a game console. Most likely, the headset is connected to the computer with wires. The game runs on the remote machine and the head-mounted display (HMD) is a peripheral display device with a motion sensing input. The term desktop is an unfortunate misnomer, since it's just as likely to be stationed in a living room or a den.

The Oculus Rift (https://www.oculus.com/) is an example of a device where the goggles have an integrated display and sensors. The games run on a separate PC. Other desktop headsets include the HTC/Valve Vive and Sony's Project Morpheus for PlayStation.

The Oculus Rift is tethered to a desktop computer via video and USB cables, and generally, the more graphics processing unit (GPU) power, the better. However, for the purpose of this book, we won't have any heavy rendering in our projects, and you can get by even with a laptop (provided it has two USB ports and one HDMI port available).

Mobile VR

Mobile VR, exemplified by Google Cardboard (http://www.google.com/get/cardboard/), is a simple housing (device) for two lenses and a slot for your mobile phone. The phone's display is used to show the twin stereographic views. It has rotational head tracking, but no positional tracking. Cardboard also provides the user with the ability to click or tap its side to make selections in a game. The complexity of the imagery is limited because it uses your phone's processor for rendering the views on the phone's display screen.

Other mobile VR headsets include the Samsung Gear VR and Zeiss VR One, among others. Google provides the open source specifications, and other manufacturers have developed ready-made models for purchase, with prices as low as $15. If you want to find one, just Google it!
There are versions of Cardboard-compatible headsets available for all sizes of phones, both Android and iOS. Although the quality of the VR experience with a Cardboard device is limited (some even say that it is inadequate) and it's probably a "starter" device that will just be quaint in a couple of years, Cardboard is fine for the small projects in this book, and we'll revisit its limitations from time to time.

The difference between virtual reality and augmented reality

It's probably worthwhile clarifying what virtual reality is not. A sister technology to VR is augmented reality (AR), which superimposes computer-generated imagery (CGI) over views of the real world. Limited uses of AR can be found on smartphones, tablets, handheld gaming systems such as the Nintendo 3DS, and even in some science museum exhibits, which overlay the CGI on top of live video from a camera.

The latest innovations in AR are the AR headsets, such as Microsoft HoloLens and Magic Leap, which show the computer graphics directly in your field of view; the graphics are not mixed into a video image. If the VR headsets are like closed goggles, the AR headsets are like translucent sunglasses that employ a technology called light fields to combine the real-world light rays with CGI. A challenge for AR is ensuring that the CGI stays consistently aligned with and mapped onto the objects in the real-world space, and eliminating latency while moving about so that the CGI and the real-world objects stay aligned.

AR holds as much promise as VR for future applications, but it's different. Though AR intends to engage the user within their current surroundings, virtual reality is fully immersive. In AR, you may open your hand and see a log cabin resting in your palm, but in VR, you're transported directly inside the log cabin and you can walk around inside it.

We can also expect to see hybrid devices that somehow either combine VR and AR, or let you switch between modes.
Applications versus games

Consumer-level virtual reality starts with gaming. Video gamers are already accustomed to being engaged in highly interactive, hyper-realistic 3D environments. VR just ups the ante.

Gamers are early adopters of high-end graphics technology. Mass production of gaming consoles and PC-based components in the tens of millions, and competition between vendors, leads to lower prices and higher performance. Game developers follow suit, often pushing the state of the art, squeezing every ounce of performance out of hardware and software. Gamers are a very demanding bunch, and the market has consistently stepped up to keep them satisfied. It's no surprise that many, if not most, of the current wave of VR hardware and software companies are first targeting the video gaming industry. A majority of the demos and downloads that are available on Oculus Share (https://share.oculus.com/) and Google Play for the Cardboard app (https://play.google.com/store/search?q=cardboard&c=apps) are games. Gamers are the most enthusiastic VR advocates and seriously appreciate its potential.

Game developers know that the core of a game is the game mechanics, or the rules, which are largely independent of the skin, or the thematic topic of the game. Gameplay mechanics can include puzzles, chance, strategy, timing, or muscle memory (twitch). VR games can have the same mechanic elements, but they might need to be adjusted for the virtual environment. For example, a first-person character walking in a console video game is probably going about 1.5 times faster than their actual pace in real life. If this wasn't the case, the player would feel that the game is too slow and boring. Put the same character in a VR scene and they will feel that it is too fast; it could likely make the player feel nauseous. In VR, you will want your characters to walk at a normal, earthly pace.
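The pacing adjustment above boils down to a single scale factor. Here is a back-of-the-envelope illustration in Python: the 1.5x figure comes from the paragraph above, while the 1.4 m/s walking pace and the function itself are illustrative assumptions, not code from the book's projects.

```python
REAL_WALK_SPEED = 1.4    # a comfortable human walking pace, in metres/second
FLAT_SCREEN_BOOST = 1.5  # typical speed exaggeration in flat-screen games

def character_walk_speed(vr):
    """Walking speed (m/s) to give a first-person character."""
    if vr:
        # In VR, an exaggerated pace feels too fast and can cause nausea,
        # so keep the character at a normal, earthly speed.
        return REAL_WALK_SPEED
    # On a flat screen, a true-to-life pace feels slow and boring,
    # so speed the character up.
    return REAL_WALK_SPEED * FLAT_SCREEN_BOOST
```

So a character tuned to feel right at roughly 2.1 m/s on a console would drop back to about 1.4 m/s when the same scene is experienced in a headset.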
Not all video games will map well to VR; it may not be fun to be in the middle of a war zone when you're actually there.

That said, virtual reality is also being applied in areas other than gaming. Though games will remain important, non-gaming apps will eventually overshadow them. These applications may differ from games in a number of ways, with the most significant being much less emphasis on game mechanics and more emphasis on either the experience itself or application-specific goals. Of course, this doesn't preclude some game mechanics. For example, an application may be specifically designed to train the user in a specific skill. Sometimes, the gamification of a business or personal application makes it more fun and effective in driving the desired behavior through competition. In general, non-gaming VR applications are less about winning and more about the experience itself.

Here are a few examples of the kinds of non-gaming applications that people are working on:

- Travel and tourism: Visit faraway places without leaving your home. Visit art museums in Paris, New York, and Tokyo in one afternoon. Take a walk on Mars. You can even enjoy Holi, the spring festival of colors, in India while sitting in your wintery cabin in Vermont.
- Mechanical engineering and industrial design: Computer-aided design software such as AutoCAD and SOLIDWORKS pioneered three-dimensional modeling, simulation, and visualization. With VR, engineers and designers can directly experience the hands-on end product before it's actually built and play with what-if scenarios at a very low cost. Consider iterating a new automobile design. How does it look? How does it perform? How does it appear sitting in the driver's seat?
- Architecture and civil engineering: Architects and engineers have always constructed scale models of their designs, if only to pitch the ideas to clients and investors, or more importantly, to validate the many assumptions about the design. Presently, modeling and rendering software is commonly used to build virtual models from architectural plans. With VR, the conversation with stakeholders can be so much more confident. Other personnel, such as the interior designers, HVAC, and electrical engineers, can be brought into the process sooner.
- Real estate: Real estate agents have been quick adopters of the Internet and visualization technology to attract buyers and close sales. Real estate search websites were some of the first successful uses of the Web. Online panoramic video walk-throughs of for-sale properties are commonplace today. With VR, I can be in New York and find a place to live in Los Angeles. This will become even easier with mobile 3D-sensing technologies such as Google Project Tango (https://www.google.com/atap/projecttango), which performs a 3D scan of a room using a smartphone and automatically builds a model of the space.
- Medicine: The potential of VR for health and medicine may literally be a matter of life and death. Every day, hospitals use MRI and other scanning devices to produce models of our bones and organs that are used for medical diagnosis and possibly pre-operative planning. Using VR to enhance visualization and measurement will provide a more intuitive analysis. Virtual reality is also being used for the simulation of surgery to train medical students.
- Mental health: Virtual reality experiences have been shown to be effective in a therapeutic context for the treatment of post-traumatic stress disorder (PTSD) in what's called exposure therapy, where the patient, guided by a trained therapist, confronts their traumatic memories through the retelling of the experience. Similarly, VR is being used to treat arachnophobia (the fear of spiders) and the fear of flying.
- Education: The educational opportunities for VR are almost too obvious to mention. One of the first successful VR experiences is Titans of Space, which lets you explore the solar system first hand. Science, history, arts, and mathematics: VR will help students of all ages because, as they say, field trips are much more effective than textbooks.
- Training: Toyota has demonstrated a VR simulation of drivers' education to teach teenagers about the risks of distracted driving. In another project, vocational students got to experience the operating of cranes and other heavy construction equipment. Training for first responders, police, and fire and rescue workers can be enhanced with VR by presenting highly risky situations and alternative virtual scenarios. The NFL is looking to VR for athletic training.
- Entertainment and journalism: Virtually attend rock concerts and sporting events. Watch music videos. Erotica. Re-experience news events as if you were personally present. Enjoy 360-degree cinematic experiences. The art of storytelling will be transformed by virtual reality.

Wow, that's quite a list! This is just the low-hanging fruit. The purpose of this book is not to dive too deeply into any of these applications. Rather, I hope that this survey helps stimulate your thinking and provides a perspective on how virtual reality has the potential to be virtually anything for everyone.

What this book covers

This book takes a practical, project-based approach to teach the specifics of virtual reality development using the Unity 3D game development engine. You'll learn how to use Unity 5 to develop VR applications, which can be experienced with devices such as the Oculus Rift or Google Cardboard.

However, we have a slight problem here: the technology is advancing very rapidly. Of course, this is a good problem to have. Actually, it's an awesome problem to have, unless you're a developer in the middle of a project or an author of a book on this technology! How does one write a book that does not have obsolete content the day it's published?
Throughout the book, I have tried to distill some universal principles that should outlive any near-term advances in virtual reality technology, including the following:

- Categorization of different types of VR experiences, with example projects
- Important technical ideas and skills, especially the ones relevant to the building of VR applications
- General explanations of how VR devices and software work
- Strategies to ensure user comfort and avoid VR motion sickness
- Instructions on using the Unity game engine to build VR experiences

Once VR becomes mainstream, many of these lessons will perhaps be obvious rather than obsolete, just like explanations from the 1980s on how to use a mouse would just be silly today.

Who are you?

If you are interested in virtual reality, want to learn how it works, or want to create VR experiences yourself, this book is for you. We will walk you through a series of hands-on projects, step-by-step tutorials, and in-depth discussions using the Unity 3D game engine.

Whether you're a non-programmer who is unfamiliar with 3D computer graphics, or a person with experience in both but new to virtual reality, you will benefit from this book. It is not a cold start with Unity, but you do not need to be an expert either. Still, if you're new to Unity, you can pick up this book as long as you realize that you'll need to adapt to the pace of the book.

Game developers may already be familiar with the concepts in the book, which are reapplied to the VR projects, while learning many other ideas that are specific to VR. Engineers and 3D designers may understand many of the 3D concepts, but wish to learn to use the game engine for VR. Application developers may appreciate the potential non-gaming uses of VR and want to learn the tools that can make this happen.

Whoever you are, we're going to turn you into a 3D Software VR Ninja. Well, OK, this may be a stretch goal for this little book, but we'll try to set you on the way.
Types of VR experiences

There is not just one kind of virtual reality experience. In fact, there are many. Consider the following types of virtual reality experiences:

- Diorama: In the simplest case, we build a 3D scene. You're observing from a third-person perspective. Your eye is the camera. Actually, each eye is a separate camera that gives you a stereographic view. You can look around.
- First-person experience: This time, you're immersed in the scene as a freely moving avatar. Using an input controller (keyboard, game controller, or some other technique), you can walk around and explore the virtual scene.
- Interactive virtual environment: This is like the first-person experience, but it has an additional feature: while you are in the scene, you can interact with the objects in it. Physics is at play. Objects may respond to you. You may be given specific goals to achieve and challenges with the game mechanics. You might even earn points and keep score.
- Riding on rails: In this kind of experience, you're seated and being transported through the environment (or the environment changes around you). For example, you can ride a roller coaster via this virtual reality experience. However, it may not necessarily be an extreme thrill ride. It can be a simple real estate walk-through or even a slow, easy, and meditative experience.
- 360-degree media: Think panoramic images taken with GoPro® on steroids that are projected on the inside of a sphere. You're positioned at the center of the sphere and can look all around. Some purists don't consider this "real" virtual reality, because you're seeing a projection and not a model rendering. However, it can provide an effective sense of presence.
- Social VR: When multiple players enter the same VR space and can see and speak with each other's avatars, it becomes a remarkable social experience.

In this book, we will implement a number of projects that demonstrate how to build each of these types of VR experience.
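The "each eye is a separate camera" point in the diorama description reduces to a small piece of vector math: offset two cameras from the head position by half the interpupillary distance (IPD), one to each side, and render the scene from both. Here is a minimal sketch in Python; it is illustrative only (the book's projects do this inside Unity), and the function name and the 64 mm default IPD are assumptions.

```python
def eye_positions(head, right, ipd=0.064):
    """Positions of the left and right eye cameras for a stereo rig.

    head:  [x, y, z] centre of the head, in metres.
    right: [x, y, z] vector pointing to the viewer's right.
    ipd:   interpupillary distance in metres (about 64 mm on average).
    """
    # Normalise the right vector so the offset is exactly ipd/2 long.
    length = sum(c * c for c in right) ** 0.5
    unit = [c / length for c in right]
    half = ipd / 2.0
    left_eye = [h - u * half for h, u in zip(head, unit)]
    right_eye = [h + u * half for h, u in zip(head, unit)]
    return left_eye, right_eye

# A head 1.7 m up, facing down -z, so "right" is +x:
left_eye, right_eye = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
```

Render the scene once from each position (with a matching per-eye projection) and you have the twin stereographic views that a headset or Cardboard viewer presents to each eye.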
For brevity, we'll need to keep it pure and simple, with suggestions for areas of further investigation.

Technical skills that are important to VR

You will learn about the following in this book:

- World scale: When building for a VR experience, attention to the 3D space and scale is important. One unit in Unity is usually equal to one meter in the virtual world.
- First-person controls: There are various techniques that can be used to control the movement of your avatar (first-person camera), such as the keyboard keys, game controllers, and head movements.
- User interface controls: Unlike conventional video (and mobile) games, all user interface components are in world coordinates in VR, not screen coordinates. We'll explore ways to present notices, buttons, selectors, and other user interface (UI) controls to the users so that they can interact and make selections.
- Physics and gravity: Critical to the sense of presence and immersion in VR are the physics and gravity of the world. We'll use the Unity physics engine to our advantage.
- Animations: Moving objects within the scene is called "animation" (duh!). It can either be along predefined paths, or it may use AI (artificial intelligence) scripting that follows a logical algorithm in response to events in the environment.
- Multiuser services: Real-time networking and multiuser games are not easy to implement, but online services make it easy without you having to be a computer engineer.
- Build and run: Different HMDs use different developer kits (SDKs) and assets to build applications that target a specific device. We'll consider techniques that let you use a single interface for multiple devices.

We will write scripts in the C# language and use features of Unity as and when they are needed to get things done.

However, there are technical areas that we will not cover, such as realistic rendering, shaders, materials, and lighting. We will not go into modeling techniques, terrains, or humanoid animations. Effective use of advanced input devices and hand and body tracking is proving to be critical to VR, but we won't have a chance to get into it here either. We also won't discuss game mechanics, dynamics, and strategies. We will talk about rendering performance optimization, but not in depth. All of these are very important topics that may be necessary for you to learn (or for someone in your team), in addition to this book, to build complete, successful, and immersive VR applications.

Summary

In this article, we looked at virtual reality and realized that it can mean a lot of things to different people and can have different applications. There's no single definition, and it's a moving target. We are not alone, as everyone's still trying to figure it out. The fact is that virtual reality is a new medium that will take years, if not decades, to reach its potential.

VR is not just for games; it can be a game changer for many different applications. We identified over a dozen. There are different kinds of VR experiences, which we'll explore in the projects in this book. VR headsets can be divided into those that require a separate processing unit (such as a desktop PC or a console) that runs with a powerful GPU, and the ones that use your mobile phone for processing. In this book, we will use an Oculus Rift DK2 as the example of desktop VR and Google Cardboard as the example of mobile VR, although there are many alternative and new devices available.

We're all pioneers living at an exciting time. Because you're reading this book, you're one, too. Whatever happens next is literally up to you. As the personal computing pioneer Alan Kay said, "The best way to predict the future is to invent it." So, let's get to it!

Resources for Article:

Further resources on this subject:

- Looking Back, Looking Forward [article]
- Unity Networking – The Pong Game [article]
- Getting Started with Mudbox 2013 [article]

Packt
21 Dec 2011
10 min read

Adobe Flash 11 Stage3D: Setting Up Our Tools

(For more resources on Adobe Flash, see here.)

Before we begin programming, there are two simple steps required to get everything ready for some amazing 3D graphics demos. Step 1 is to obtain the Flash 11 plugin and the Stage3D API. Step 2 is to create a template AS3 project and test that it compiles to create a working Flash SWF file.

Once you have followed these two steps, you will have properly "equipped" yourself. You will truly be ready for the battle. You will have ensured that your tool-chain is set up properly. You will be ready to start programming a 3D game.

Step 1: Downloading Flash 11 (Molehill) from Adobe

Depending on the work environment you are using, setting things up will be slightly different. The basic steps are the same regardless of which tool you are using, but in some cases, you will need to copy files to a particular folder and set a few program options to get everything running smoothly.

If you are using tools that came out before Flash 11 went live, you will need to download some files from Adobe which instruct your tools how to handle the new Stage3D functions. The directions to do so are outlined below in Step 1. In the near future, of course, Flash will be upgraded to include Stage3D. If you are using CS5.5 or another new tool that is compatible with the Flash 11 plugin, you may not need to perform the steps below. If this is the case, then simply skip to Step 2.

Assuming that your development tool-chain does not yet come with Stage3D built in, you will need to gather a few things before we can start programming. Let's assemble all the equipment we need in order to embark on this grand adventure, shall we?

Time for action – getting the plugin

It is very useful to be running the debug version of Flash 11, so that your trace statements and error messages are displayed during development. Download Flash 11 (content debugger) for your web browser of choice.
At the time of writing, Flash 11 is in beta and you can get it from the following URL:

http://labs.adobe.com/downloads/flashplayer11.html

Naturally, you will eventually be able to obtain it from the regular Flash download page:

http://www.adobe.com/support/flashplayer/downloads.html

On this page, you will be able to install either the Active-X (IE) version or the Plugin (Firefox, and so on) version of the Flash player. This page also has links to an uninstaller if you wish to go back to the old version of Flash, so feel free to have some fun and don't worry about the consequences for now.

Finally, if you want to use Chrome for debugging, you need to install the plugin version and then turn off the built-in version of Flash by typing about:plugins in your Chrome address bar and clicking on Disable on the old Flash plugin, so that the new one you just downloaded will run.

We will make sure that you installed the proper version of Flash before we continue. To test that your browser of choice has the Stage3D-capable incubator build of the Flash plugin installed, simply right-click on any Flash content and ensure that the bottom of the pop-up menu lists Version 11,0,1,60 or greater, as shown in the following screenshot. If you don't see a version number in the menu, you are running the old Flash 10 plugin.

Additionally, in some browsers, the 3D acceleration is not turned on by default. In most cases, this option will already be checked. However, just to make sure that you get the best frame rate, right-click on the Flash file, go to options, and then enable hardware acceleration, as shown in the following screenshot:

You can read more about how to set up Flash 11 at the following URL:

http://labs.adobe.com/technologies/flashplatformruntimes/flashplayer11/

Time for action – getting the Flash 11 profile for CS5

Now that you have the Stage3D-capable Flash plugin installed, you need to get Stage3D working in your development tools.
If you are using a tool that came out after this book was written and that includes built-in support for Flash 11, you don't need to do anything; simply skip to Step 2.

If you are going to use the Flash IDE to compile your source code and you are using Flash CS5, then you need to download a special .XML file that instructs it how to handle the newer Stage3D functionality. The file can be downloaded from the following URL:

http://download.macromedia.com/pub/labs/flashplatformruntimes/incubator/flashplayer_inc_flashprofile_022711.zip

If the preceding link no longer works, do not worry. The files you need are included in the source code that accompanies this book.

Once you have obtained and unzipped this file, you need to copy some files into your CS5 installation.

FlashPlayer11.xml goes in:

Adobe Flash CS5\Common\Configuration\Players

playerglobal.swc goes in:

Adobe Flash CS5\Common\Configuration\ActionScript 3.0\FP11

Restart Flash Professional after that and then select 'Flash Player 11' in the publish settings. It will publish to a SWF13 file. As you are not using Flex to compile, you can skip all of the following sections regarding Flex. Simple as it can be!

Time for action – upgrading Flex

If you are going to use pure AS3 (by using FlashDevelop or Flash Builder), or even basic Flex without any IDE, then you need to compile your source code with a newer version of Flex that can handle Stage3D. At the time of writing, the best version to use is build 19786. You can download it from the following URL:

http://opensource.adobe.com/wiki/display/flexsdk/Download+Flex+Hero

Remember to change your IDE's compilation settings to use the new version of Flex you just downloaded. For example, if you are using Flash Builder as part of the Adobe Flex SDK, create a new ActionScript project: File | New | ActionScript project. Open the project Properties panel (right-click and select Properties). Select ActionScript Compiler from the list on the left.
Use the Configure Flex SDKs option in the upper-right hand corner to point the project to Flex build 19786 and then click on OK. Alternately, if you are using FlashDevelop, you need to instruct it to use this new version of Flex by going into Tools | Program Settings | AS3 Context | Flex SDK Location and browsing to your new Flex installation folder.

Time for action – upgrading the Flex playerglobal.swc

If you use FlashDevelop, Flash Builder, or another tool such as FDT, all ActionScript compiling is done by Flex. In order to instruct Flex about the Stage3D-specific code, you need a small file that contains definitions of all the new AS3 that is available to you. It will eventually come with the latest version of these tools, and you won't need to manually install it as described in the following section.

During the Flash 11 beta period, you can download the Stage3D-enabled playerglobal.swc file from the following URL:

http://download.macromedia.com/pub/labs/flashplatformruntimes/flashplayer11/flashplayer11_b1_playerglobal_071311.swc

Rename this file to playerglobal.swc and place it into an appropriate folder. Instruct your compiler to include it in your project. For example, you may wish to copy it to your Flex installation folder, in the flex/frameworks/libs/player/11 folder.

In some code editors, there is no option to target Flash 11 (yet). By the time you read this book, upgrades may have enabled it. However, at the time of writing, the only way to get FlashDevelop to use the SWC is to copy it over the top of the one in the flex/frameworks/libs/player/10.1 folder and target this new "fake" Flash 10.1 version.

Once you have unzipped Flex to your preferred location and copied playerglobal.swc to the preceding folder, fire up your code editor. Target Flash 11 in your IDE, or whatever version number is associated with the folder that you used as the location for playerglobal.swc.
Be sure that your IDE will compile this particular SWC along with your source code. In order to do so in Flash Builder, for example, simply select "Flash Player 11" in the Publish Settings. If you use FlashDevelop, then open a new project and go into the Project | Properties | Output Platform Target drop-down list.

Time for action – using SWF Version 13 when compiling in Flex

Finally, Stage3D is considered part of the future "Version 13" of Flash, and therefore you need to set your compiler options to compile for this version. You will need to target SWF Version 13 by passing an extra compiler argument to the Flex compiler: -swf-version=13.

If you are using Adobe Flash CS5, then you already copied an XML file which has all of these changes, and this is done automatically for you.

If you are using Flex on the command line, then simply add the preceding setting to your compilation build script command-line parameters.

If you are using Flash Builder to compile Flex, open the project Properties panel (right-click and choose Properties). Select ActionScript Compiler from the list on the left. Add to the Additional compiler arguments input: -swf-version=13. This ensures the outputted SWF targets SWF Version 13. If you compile on the command line and not in Flash Builder, then you need to add the same compiler argument.

If you are using FlashDevelop, then click on Project | Properties | Compiler Options | Additional Compiler Options, and add -swf-version=13 in this field.

Time for action – updating your template HTML file

You probably already have a basic HTML template for including Flash SWF files in your web pages. You need to make one tiny change to enable hardware 3D. Flash will not use hardware 3D acceleration if you don't update your HTML file to instruct it to do so. All you need to do is to always remember to set wmode=direct in your HTML parameters.
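One place to set it is in a JavaScript injection script. As a sketch, assuming you load the SWFObject 2.x library on the page (the SWF file name and element id here are placeholders, not files from this project):

// Replace the <div id="flashContent"> placeholder with Molehill.swf,
// requesting Flash 11+ and, crucially, wmode=direct for Stage3D.
swfobject.embedSWF(
  "Molehill.swf",        // swf url
  "flashContent",        // id of the element to replace
  "640", "480",          // width, height
  "11.0.0",              // required Flash player version
  false,                 // no express-install swf
  {},                    // flashvars
  { wmode: "direct" },   // params: the one that matters here
  {}                     // attributes
);

The params object is where wmode lives; everything else is the library's ordinary embedding boilerplate.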
For example, if you use JavaScript to inject Flash into your HTML (such as SWFObject.js), just remember to add this parameter in your source. Alternatively, if you include SWFs using basic HTML object and embed tags, your HTML will look similar to the following:

<object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" width="640" height="480">
  <param name="src" value="Molehill.swf" />
  <param name="wmode" value="direct" />
  <embed type="application/x-shockwave-flash" width="640" height="480" src="Molehill.swf" wmode="direct"></embed>
</object>

The only really important parameter is wmode=direct (along with the name of your SWF file); everything else about how you put Flash into your HTML pages remains the same.

In the future, if you are running 3D demos and the frame rate seems really choppy, you might not be using hardware 3D acceleration. Be sure to view the source of the HTML file that contains your SWF and check that every mention of the wmode parameter is set to direct.

Stage3D is now set up!

That is it! You have officially gone to the weapons store and equipped yourself with everything you require to explore the depths of Flash 11 3D graphics. That was the hard part. Now we can dive in and get to some coding, which turns out to be the easy part.
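If you would rather automate that view-source check, here is a throwaway sketch in plain Java (the class name and the regular expression are my own for illustration; this is not part of any Flash tooling). It scans an HTML string and reports whether every wmode mention, in both param tags and embed attributes, is set to direct:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class WmodeCheck {
    // Matches <param name="wmode" value="..."> as well as wmode="..." attributes,
    // capturing the value in group 1 or group 2 respectively.
    private static final Pattern WMODE = Pattern.compile(
            "name=\"wmode\"\\s+value=\"([^\"]*)\"|wmode=\"([^\"]*)\"");

    // Returns true only if at least one wmode is present and all are "direct".
    static boolean allWmodeDirect(String html) {
        Matcher m = WMODE.matcher(html);
        boolean found = false;
        while (m.find()) {
            found = true;
            String value = m.group(1) != null ? m.group(1) : m.group(2);
            if (!"direct".equals(value)) {
                return false;
            }
        }
        // No wmode at all also means hardware 3D will not be used.
        return found;
    }

    public static void main(String[] args) {
        String good = "<param name=\"wmode\" value=\"direct\" />"
                + "<embed wmode=\"direct\"></embed>";
        String bad = "<embed wmode=\"window\"></embed>";
        System.out.println(allWmodeDirect(good)); // true
        System.out.println(allWmodeDirect(bad));  // false
    }
}
```

This is only a convenience for eyeballing large templates; checking the page source by hand works just as well.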
Video Editing in Blender using Video Sequence Editor: Part 1

Packt
19 Feb 2010
6 min read
Blender, the open-source 3D creation suite, as we all know, has reached great heights lately, and with the astounding amount of work and dedication being put into its current development, there's no doubt it is already state-of-the-art software. From the time I was introduced to Blender, I was pretty amazed at the number of features it has and the myriad of possibilities you can achieve with it. Features like modeling, shading, texturing, and rendering are a given, but what's much more impressive about Blender is the "side features" that come along with it. One great example is the Video Sequence Editor, popularly called the VSE in the Blender universe. From the name alone, you can already figure out what it is used for, yup – video editing! Pretty cool, eh? And with the right amount of knowledge, strategy, and workarounds, there's much more leeway than what it is typically used for. I'll share with you some tips and guidelines on how you can start using Blender itself for editing your videos, rather than jumping from application to application, which can become really troublesome at times. Without further ado, let's get on with it!

Requirements

Before we begin, there are a couple of things we need to have:

- Blender 2.49b
- Clips/Videos/Movies
- Skill level: Beginner to Intermediate
- a little bit of patience
- lots of coffee to keep you awake (and a comfortable couch?)!

Post-processing

This might sound odd, as this section comes first before anything else in the article. The reason is that we don't want to mess things up and create more trouble later (you'll see why shortly during the process). If you are already satisfied with the way your videos look and feel, you can skip this step and move on to the main one. In this part, we will deal with how to enhance your videos, making them look better than they were originally shot.
This part can also dictate the mood of your videos (depending on the way you shot them). Just as we post-process and add more feel to our still images, the same goes for our videos, using the Composite Nodes to achieve this. Later, we will bring these processed videos into the sequence editor for final editing.

To begin with, open up Blender and, depending on your default layout, you will see something like this (see screenshot below):

Blender Window

Let's change our current 3D viewport into the Node Editor window. Then, under the Scene button (F10), set the render size to 100%, enable Do Composite under the Anim panel, set the size/resolution and the video format under the Format panel, and lastly set your output path under the Output panel in the same window. That was quite a lengthy instruction to pack into one paragraph, so check out the screenshot below for a clearer picture.

Settings

Now that we're in the Node Editor window, by default we see nothing but the grid and a new header (which apparently gives us a clue what to do next). Since we're not dealing with Material Nodes or Texture Nodes, we're safe to ignore some of the other parts of the node editor for now; instead, we'll use the Composite Nodes, represented by a face icon on the header. Click and enable that button. You'll notice that nothing has happened yet; that's because we still have to indicate to Blender that it's actually going to be using the nodes. So, go ahead and click Use Nodes, and two things will come up in your node editor window: the Render Layer node and the Composite node, respectively.

The Render Layer node is an input node which takes data from our current Blender scene, as specified through the render layer options under the scene window. It is often useful for general-purpose node compositing directly from our 3D scene, or if you want to layer your renders into passes.
But since we are not doing that now, we won't be needing this node. Go ahead and select the Render Layer node by right-clicking on it and press X or Delete on your keyboard; automatically, without any popup shown, the Render Layer node is gone from our composite window.

The next step is to load our videos into the node compositor and actually begin the process. To load the videos into our compositor, we use the Image input node to call our videos from wherever they are stored. To do this, press Spacebar on your keyboard while your mouse cursor is over the node editor window and choose Add > Input > Image. With our Image node loaded in the compositor, click the Load New button and browse to the directory where the file is stored.

Loading the Video via the Image Input Node

After successfully loading the video, you'll notice the Image input node change its appearance, now showing a thumbnail preview and some buttons and inputs we can experiment with. The most important setting we have to specify here is the number of frames our video has, or else Blender won't know which range to composite.

Specifying the Number of Frames in the Image Node

Often, this can be a difficult task to deal with, and I've had a lot of trouble with it before, since I wanted an exact frame count that precisely matched my original video without missing a single glitch. There are a couple of ways to do this: if you can calculate the number of frames based on the running time of your video, that's fair enough; you could also open up a separate application to see how many frames it has; but if you're like me and want to keep it simple and still within Blender's grasp, then there's still hope.

Right now, we're off to a tiny part of the main course, Blender's VSE, but this time we'll only use it to find out how many frames our video has. Don't worry, it is the main dish and we'll get to that shortly.
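If you do want to calculate the count yourself, the arithmetic is simply the running time in seconds multiplied by the frame rate, rounded to a whole frame. A quick sketch in plain Java (the class name and the sample durations are made up for illustration):

```java
class FrameCount {
    // Total frames = duration in seconds * frames per second,
    // rounded to the nearest whole frame.
    static long frames(double durationSeconds, double fps) {
        return Math.round(durationSeconds * fps);
    }

    public static void main(String[] args) {
        // e.g. a 2 minute 30 second clip shot at 25 fps (PAL)
        System.out.println(frames(150.0, 25.0)); // 3750
        // a 10 second clip at 29.97 fps (NTSC)
        System.out.println(frames(10.0, 29.97)); // 300
    }
}
```

The rounding matters for NTSC-style fractional frame rates, where the seconds-times-fps product is rarely a whole number.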
Getting Started with Marmalade

Packt
02 Jan 2013
4 min read
(For more resources related to this topic, see here.)

Installing the Marmalade SDK

The following sections will show you how to get your PC set up for development using Marmalade, from installing a suitable development environment through to licensing, downloading, and installing your copy of Marmalade.

Installing a development environment

Before we can start coding, we first need to install a version of Microsoft's Visual C++, which is the Windows development environment that Marmalade uses. If you don't already have a version installed, you can download a copy for free. At the time of writing, the Express 2012 version had just been released, but the most recent free version directly supported by Marmalade was still Visual C++ 2010 Express, which can be downloaded from the following URL:

http://www.microsoft.com/visualstudio/en-us/products/2010-editions/visual-cpp-express

Follow the instructions on this web page to download and install the product. For the Apple Mac version of Marmalade, the supported development environment is Xcode, which is available as a free download from the Mac App Store. In this article, we will assume that the Windows version of Marmalade is being used, unless specifically stated otherwise.

Choosing your Marmalade license type

With a suitable development environment in place, we can now get on to downloading Marmalade itself. First, head over to the Marmalade website using the following URL:

http://www.madewithmarmalade.com

At the top of the website are two buttons labeled Buy and Free Trial. Click on either of them (it doesn't matter which, as they both go to the same place!) and you'll see a page explaining the licensing options, which are also described in the following list:

- Evaluation: This is free to use but is time limited (currently 45 days), and while you can deploy to all supported platforms, you are not allowed to distribute the applications built with this version.
- Community: This is the cheapest way of getting started with Marmalade, but you are limited to releasing only on iOS and Android, and your application will also feature a Marmalade splash screen on startup.
- Indie: This version removes the limitations of the basic license, with no splash screen and the ability to target any supported platform.
- Professional: This version adds dedicated support from Marmalade should you face any issues during development, and provides early access to new versions of Marmalade.

When you have chosen the license level, you will first need to register with the Marmalade website by providing an e-mail address and password. The e-mail address you register will be linked to your license and will be used to activate it later, so make sure you use a valid e-mail address when registering. Once you are registered, you will be taken to a web page where you can choose the level of license you require. After confirming payment, you will be sent an e-mail that allows you to activate your license and download the Marmalade installer.

Downloading and installing Marmalade

Now that you have a valid license, head back to the Marmalade website using the same URL we used earlier. If you are not already logged on to the website, do so using the Login link at the top-right corner of the web page. Click on the Download button, and you will be taken to a page where you can download both the most recent and previous releases of the Marmalade installer. Click on the button for the version you require to start downloading it.

Once the download is complete, run the installer and follow the instructions. The installer will first ask you to accept the End User License Agreement by selecting a radio button, and will then ask for an installation location. Note that the default installation directory drops the minor revision number (so version 6.1.1 will be installed into a subdirectory called 6.1). You may want to add the minor revision number back in, to make it easier to have multiple versions of Marmalade installed at the same time.

Once the installer has finished copying the files to your hard drive, it will display the Marmalade Configuration Utility, which is described in greater detail in the next section. Once the Configuration Utility has been closed, the installer will offer you the option of launching some useful resources, such as the SDK documentation, before it exits.

It is possible to have more than one version of the Marmalade SDK installed at a time and to switch between versions as you need, hence the advice regarding the installation directory. This becomes very useful when device-specific bugs are fixed in a new version of Marmalade, but you still need to support an older project that requires a different version of Marmalade.
Integrating Google Play Services

Packt
08 Jul 2015
41 min read
In this article by Raul Portales, author of the book Mastering Android Game Development, we will cover the tools that Google Play Services offers for game developers. We'll see the integration of achievements and leaderboards in detail, take an overview of events, quests, and saved games, and look at turn-based and real-time multiplayer.

Google provides Google Play Services as a way to use special features in apps, and it is the game services subset that interests us the most. Note that Google Play Services is updated as an app that is independent from the operating system. This allows us to assume that most players will have the latest version of Google Play Services installed.

(For more resources related to this topic, see here.)

More and more features are being moved from the Android SDK to Play Services because of this. Play Services offers much more than just services for games, but there is a whole section dedicated exclusively to games: Google Play Game Services (GPGS). These features include achievements, leaderboards, quests, saved games, gifts, and even multiplayer support. GPGS also comes with a standalone app called Play Games that shows users the games they have been playing, their latest achievements, and the games their friends play. It is a very interesting way to get exposure for your game.

Even as standalone features, achievements and leaderboards are two concepts that most games use nowadays, so why make your own custom ones when you can rely on the ones made by Google? GPGS can be used on many platforms: Android, iOS, and the web, among others. It is most used on Android, since it is included as a part of Google apps. There is extensive step-by-step documentation online, but the details are scattered over different places. We will put them together here and link you to the official documentation for more detailed information.
For this article, you are expected to have a developer account and access to the Google Play Developer Console. It is also advisable to know the process of signing and releasing an app. If you are not familiar with it, there is very detailed official documentation at http://developer.android.com/distribute/googleplay/start.html.

There are two sides to GPGS: the developer console and the code. We will alternate between the two while talking about the different features.

Setting up the developer console

Now that we are approaching the release state, we have to start working with the developer console. The first thing we need to do is to get into the Game services section of the console to create and configure a new game. In the left menu, there is an option labeled Game services; this is where you have to click. Once in the Game services section, click on Add new game.

This brings us to the setup dialog. If you are using other Google services, such as Google Maps or Google Cloud Messaging (GCM), in your game, you should select the second option and move forward. Otherwise, you can just fill in the fields under I don't use any Google APIs in my game yet and continue. If you don't know whether you are already using them, you probably aren't.

Now, it is time to link a game to it. I recommend you publish your game beforehand as an alpha release; this will let you select it from the list when you start typing the package name. Publishing the game to the alpha channel before adding it to Game services makes it much easier to configure. If you are not familiar with signing and releasing your app, check out the official documentation at http://developer.android.com/tools/publishing/app-signing.html.

Finally, there are only two steps we have to take when we link the first app: we need to authorize it and provide branding information.
The authorization will generate an OAuth key, which we don't need to use since it is required for other platforms, and also a game ID. This ID is unique across all the linked apps, and we will need it to log in. There is no need to write it down now; it can be found easily in the console at any time.

Note that the app we have added is configured with the release key. If you try the login integration now, you will get an error telling you that the app was signed with the wrong certificate. You have two ways to work around this limitation:

- Always make a release build to test GPGS integration
- Add your debug-signed game as a linked app

I recommend that you add the debug-signed app as a linked app. To do this, we just need to link another app and configure it with the SHA1 fingerprint of the debug key. To obtain it, we have to open a terminal and run the keytool utility:

keytool -exportcert -alias androiddebugkey -keystore <path-to-debug-keystore> -list -v

Note that on Windows, the debug keystore can be found at C:\Users\<USERNAME>\.android\debug.keystore. On Mac and Linux, the debug keystore is typically located at ~/.android/debug.keystore.

Dialog to link the debug application on the Game Services console

Now we have the game configured. We could continue creating achievements and leaderboards in the console, but we will put that aside and first make sure that we can sign in and connect with GPGS. The only users who can sign in to GPGS while a game is not published are the testers. You can make the alpha and/or beta testers of a linked app become testers of the game services, and you can also add e-mail addresses by hand. You can modify this in the Testing tab. Only test accounts can access a game that is not published. The e-mail of the owner of the developer console is prefilled as a tester; just in case you have problems logging in, double-check the list of testers.
A game service that is not published will not appear in the feed of the Play Games app, but it is still possible to test and modify it. This is why it is a good idea to keep it in draft mode until the game itself is ready, and then publish both the game and the game services at the same time.

Setting up the code

The first thing we need to do is to add the Google Play Services library to our project. This should already have been done by the wizard when we created the project, but I recommend you double-check it now. The library needs to be added to the build.gradle file of the main module. Note that Android Studio projects contain a top-level build.gradle and a module-level build.gradle for each module; we will modify the one that is under the mobile module. Make sure that the Play Services library is listed under dependencies:

apply plugin: 'com.android.application'

dependencies {
  compile 'com.android.support:appcompat-v7:22.1.1'
  compile 'com.google.android.gms:play-services:7.3.0'
}

At the time of writing, the latest version is 7.3.0. The basic features have not changed much and they are unlikely to change. You could force Gradle to use a specific version of the library, but in general I recommend you use the latest version. Once you have it, save the changes and click on Sync Project with Gradle Files.

To be able to connect to GPGS, we need to let the game know what the game ID is. This is done through the <meta-data> tag in AndroidManifest.xml. You could hardcode the value here, but it is highly recommended that you set it as a resource in your Android project. We are going to create a new file for this under res/values, which we will name play_services.xml. In this file we will put the game ID, but later we will also add the achievement and leaderboard IDs to it.
Using a separate file for these values is recommended because they are constants that do not need to be translated:

<application>
  <meta-data android:name="com.google.android.gms.games.APP_ID"
    android:value="@string/app_id" />
  <meta-data android:name="com.google.android.gms.version"
    android:value="@integer/google_play_services_version"/>
  [...]
</application>

Adding this metadata is extremely important. If you forget to update AndroidManifest.xml, the app will crash when you try to sign in to Google Play Services. Note that the integer for the GMS version is defined in the library, and we do not need to add it to our file. If you forget to add the game ID to the strings, the app will crash.

Now, it is time to proceed to the sign-in. The process is quite tedious and requires many checks, so Google has released an open source project named BaseGameUtils, which makes it easier. Unfortunately, this project is not part of the Play Services library and it is not even available as a library, so we have to get it from GitHub (either check it out or download the source as a ZIP file).

BaseGameUtils abstracts us from the complexity of handling the connection with Play Services. Even more cumbersome, BaseGameUtils is not available as a standalone download and has to be downloaded together with another project. The fact that this significant piece of code is not part of the official library makes it quite tedious to set up. Why it has been done like this is something that I do not comprehend myself.

The project that contains BaseGameUtils is called android-basic-samples and it can be downloaded from https://github.com/playgameservices/android-basic-samples. Adding BaseGameUtils is not as straightforward as we would like it to be. Once android-basic-samples is downloaded, open your game project in Android Studio. Click on File > Import Module and navigate to the directory where you downloaded android-basic-samples.
Select the BaseGameUtils module in the BasicSamples/libraries directory and click on OK. Finally, update the dependencies in the build.gradle file for the mobile module and sync Gradle again:

dependencies {
  compile project(':BaseGameUtils')
  [...]
}

After all these steps to set up the project, we are finally ready to begin the sign-in. We will make our main Activity extend from BaseGameActivity, which takes care of all the handling of the connections and signing in with Google Play Services.

One more detail: until now, we were using Activity and not FragmentActivity as the base class for YassActivity (BaseGameActivity extends from FragmentActivity), and this change will mess with the behavior of our dialogs while calling navigateBack. We can either change the base class of BaseGameActivity or modify navigateBack to perform a pop on the fragment navigation hierarchy. I recommend the second approach:

public void navigateBack() {
  // Do a pop on the navigation history
  getFragmentManager().popBackStack();
}

This util class has been designed to work with single-activity games. It can be used with multiple activities, but it is not straightforward. This is another good reason to keep the game in a single activity: BaseGameUtils is designed to be used in single-activity games.

The default behavior of BaseGameActivity is to try to log in each time the Activity is started. If the user agrees to sign in, the sign-in will happen automatically. But if the user declines, he or she will be asked again several times. I personally find this intrusive and annoying, and I recommend you only prompt the user to log in to Google Play Services once (and again if the user logs out). We can always provide a login entry point in the app. This is very easy to change. The default number of attempts is set to 3 and is part of the code of GameHelper:

// Should we start the flow to sign the user in automatically on startup?
// If so, up to how many times in the life of the application?
static final int DEFAULT_MAX_SIGN_IN_ATTEMPTS = 3;
int mMaxAutoSignInAttempts = DEFAULT_MAX_SIGN_IN_ATTEMPTS;

So, we just have to configure it for our Activity, adding one line of code during onCreate to change the default behavior to the one we want, which is to try just once:

getGameHelper().setMaxAutoSignInAttempts(1);

Finally, there are two methods that we can override to act when the user successfully logs in and when there is a problem: onSignInSucceeded and onSignInFailed. We will use them when we update the main menu at the end of the article. Further use of GPGS is made via the GameHelper and/or the GoogleApiClient, which is part of the GameHelper. We can obtain a reference to the GameHelper using the getGameHelper method of BaseGameActivity.

Now that the user can sign in to Google Play Services, we can continue with achievements and leaderboards. Let's go back to the developer console.

Achievements

We will first define a few achievements in the developer console and then see how to unlock them in the game. Note that to publish any game with GPGS, you need to define at least five achievements. No other feature is mandatory, but achievements are. If you want to use GPGS with a game that has no achievements, I recommend you add five dummy secret achievements and let them be.

To add an achievement, we just need to navigate to the Achievements tab on the left and click on Add achievement. The menu to add a new achievement has a few fields that are mostly self-explanatory. They are as follows:

- Name: the name that will be shown (can be localized to different languages).
- Description: the description of the achievement to be shown (can also be localized to different languages).
- Icon: the icon of the achievement as a 512x512 px PNG image.
This will be used to show the achievement in the list, and also to generate the locked image and the in-game popup when it is unlocked.

- Incremental achievements: if the achievement requires a set of steps to be completed, it is called an incremental achievement and can be shown with a progress bar. We will have an incremental achievement to illustrate this.
- Initial state: Revealed or Hidden, depending on whether we want the achievement to be shown or not. When an achievement is shown, the name and description are visible, so players know what they have to do to unlock it. A hidden achievement, on the other hand, is a secret and can be a funny surprise when unlocked. We will have two secret achievements.
- Points: GPGS allows each game to give 1,000 points for unlocking achievements. This gets converted to XP in the player profile on Google Play Games. It can be used to highlight that some achievements are harder than others, and therefore grant a bigger reward. You cannot change these once they are published, so if you plan to have more achievements in the future, plan ahead with the points.
- List order: the order in which the achievements are shown. It is not followed all the time, since in the Play Games app the unlocked achievements are shown before the locked ones, but it is still handy to rearrange them.

Dialog to add an achievement on the developer console

As we already decided, we will have five achievements in our game, and they will be as follows:

- Big Score: score over 100,000 points in one game. This is to be granted while playing.
- Asteroid killer: destroy 100 asteroids. This counts across different games and is an incremental achievement.
- Survivor: survive for 60 seconds.
- Target acquired: a hidden achievement. Hit 20 asteroids in a row without missing a shot. This is meant to reward players who only shoot when they should.
- Target lost: this is supposed to be a funny achievement, granted when you miss with 10 bullets in a row.
It is also hidden, because otherwise it would be too easy to unlock. So, we created some images for the achievements and added them to the console.

The developer console with all the configured achievements

Each achievement has a string ID. We will need these IDs to unlock the achievements in our game, but Google has made it easy for us: there is a link at the bottom named Get resources that pops up a dialog with the string resources we need. We can just copy them from there and paste them into the play_services.xml file we have already created in our project.

Architecture

For our game, given that we only have five achievements, we are going to add the code for achievements directly into ScoreGameObject. This means less code for you to read, so we can focus on how it is done. However, for real production code, I recommend you define a dedicated architecture for achievements.

The recommended architecture is to have an AchievementsManager class that loads all the achievements when the game starts and stores them in three lists:

- All achievements
- Locked achievements
- Unlocked achievements

Then, we have an Achievement base class with an abstract check method that we implement for each achievement:

public boolean check (GameEngine gameEngine, GameEvent gameEvent) {
}

This base class takes care of loading the achievement state from local storage (I recommend using SharedPreferences for this) and modifying it based on the result of check. The achievements check is done at the AchievementsManager level using a checkLockedAchievements method that iterates over the list of achievements that can still be unlocked. This method should be called as part of onEventReceived of GameEngine. This architecture lets you check only the achievements that are yet to be unlocked, and it keeps all the achievements included in the game in one dedicated place.

In our case, since we are keeping the score inside ScoreGameObject, we are going to add all the achievements code there.
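The manager described above can be sketched in plain Java. This is a minimal, self-contained sketch of the idea, not the book's actual implementation: GPGS calls and persistence are stubbed out, GameEngine and GameEvent are reduced to stand-ins, and names beyond those mentioned in the text (such as SurvivorAchievement) are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for the game's own types (hypothetical).
enum GameEvent { AsteroidHit, BulletMissed, SpaceshipHit }

class GameEngine {
    long runningTimeMillis;
}

// Base class: each achievement knows how to check itself.
abstract class Achievement {
    private boolean unlocked;
    boolean isUnlocked() { return unlocked; }
    // Real code would also call GPGS here and persist via SharedPreferences.
    void unlock() { unlocked = true; }

    // Returns true when the achievement should be unlocked.
    abstract boolean check(GameEngine gameEngine, GameEvent gameEvent);
}

// Example implementation: "survive for 60 seconds".
class SurvivorAchievement extends Achievement {
    @Override
    boolean check(GameEngine gameEngine, GameEvent gameEvent) {
        return gameEngine.runningTimeMillis > 60000;
    }
}

class AchievementsManager {
    private final List<Achievement> all = new ArrayList<>();
    private final List<Achievement> locked = new ArrayList<>();
    private final List<Achievement> unlocked = new ArrayList<>();

    void add(Achievement a) {
        all.add(a);
        (a.isUnlocked() ? unlocked : locked).add(a);
    }

    // Called from GameEngine.onEventReceived: only locked ones are checked.
    void checkLockedAchievements(GameEngine engine, GameEvent event) {
        for (int i = locked.size() - 1; i >= 0; i--) {
            Achievement a = locked.get(i);
            if (a.check(engine, event)) {
                a.unlock();
                locked.remove(i);
                unlocked.add(a);
            }
        }
    }
}

class AchievementsDemo {
    public static void main(String[] args) {
        AchievementsManager manager = new AchievementsManager();
        manager.add(new SurvivorAchievement());
        GameEngine engine = new GameEngine();
        engine.runningTimeMillis = 61000;
        manager.checkLockedAchievements(engine, GameEvent.AsteroidHit);
    }
}
```

With something like this in place, ScoreGameObject would not need any achievement code at all; the GameEngine would simply forward every GameEvent to checkLockedAchievements.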
Note that making the GameEngine take care of the score, and having it as a variable that other objects can read, is also a recommended design pattern, but it was simpler to do this as part of ScoreGameObject.

Unlocking achievements

To handle achievements, we need access to an object of the class GoogleApiClient. We can get a reference to it in the constructor of ScoreGameObject:

private final GoogleApiClient mApiClient;

public ScoreGameObject(YassBaseFragment parent, View view, int viewResId) {
  [...]
  mApiClient = parent.getYassActivity().getGameHelper().getApiClient();
}

The parent Fragment has a reference to the Activity, which has a reference to the GameHelper, which has a reference to the GoogleApiClient.

Unlocking an achievement requires just a single line of code, but we also need to check whether the user is connected to Google Play Services before trying to unlock it. This is necessary because if the user has not signed in, an exception is thrown and the game crashes.

But this check is not enough. In the edge case when the user logs out manually from Google Play Services (which can be done from the achievements screen), the connection will not be closed and there is no way to know that he or she has logged out. So, we are going to create a utility method to unlock achievements that does all the checks, wraps the unlock call in a try/catch block, and makes the API client disconnect if an exception is raised:

private void unlockSafe(int resId) {
  if (mApiClient.isConnecting() || mApiClient.isConnected()) {
    try {
      Games.Achievements.unlock(mApiClient, getString(resId));
    } catch (Exception e) {
      mApiClient.disconnect();
    }
  }
}

Even with all the checks, the code is still very simple. Let's work on the particular achievements we have defined for the game.
Even though they are very specific, the methodology of tracking game events and variables and then checking for achievements to unlock is itself generic, and it serves as a real-life example of how to deal with achievements. The achievements we have designed require us to count some game events and also the running time. For the last two achievements, we need a new GameEvent for the case when a bullet misses, which we have not created until now. The code in the Bullet object to trigger this new GameEvent is as follows:

@Override
public void onUpdate(long elapsedMillis, GameEngine gameEngine) {
  mY += mSpeedFactor * elapsedMillis;
  if (mY < -mHeight) {
    removeFromGameEngine(gameEngine);
    gameEngine.onGameEvent(GameEvent.BulletMissed);
  }
}

Now, let's work inside ScoreGameObject. We are going to have a method that checks achievements each time an asteroid is hit. There are three achievements that can be unlocked when that event happens:

- Big Score, because hitting an asteroid gives us points
- Target acquired, because it requires consecutive asteroid hits
- Asteroid killer, because it counts the total number of asteroids that have been destroyed

The code is like this:

private void checkAsteroidHitRelatedAchievements() {
  if (mPoints > 100000) {
    // Unlock achievement
    unlockSafe(R.string.achievement_big_score);
  }
  if (mConsecutiveHits >= 20) {
    unlockSafe(R.string.achievement_target_acquired);
  }
  // Increment achievement of asteroids hit
  if (mApiClient.isConnecting() || mApiClient.isConnected()) {
    try {
      Games.Achievements.increment(mApiClient, getString(R.string.achievement_asteroid_killer), 1);
    } catch (Exception e) {
      mApiClient.disconnect();
    }
  }
}

We check the total points and the number of consecutive hits to unlock the corresponding achievements. The "Asteroid killer" achievement is a bit of a different case, because it is an incremental achievement. This type of achievement does not have an unlock method, but rather an increment method.
Each time we increment the value, progress on the achievement is updated. Once the progress reaches 100 percent, the achievement is unlocked automatically.

Incremental achievements are automatically unlocked; we just have to increment their value.

This makes incremental achievements much easier to use than tracking the progress locally. But we still need to do all the checks we did for unlockSafe.

We are using a variable named mConsecutiveHits, which we have not initialized yet. This is done inside onGameEvent, which is also the place where the other hidden achievement, "Target lost", is checked. Some initialization for the "Survivor" achievement is done here as well:

  public void onGameEvent(GameEvent gameEvent) {
    if (gameEvent == GameEvent.AsteroidHit) {
      mPoints += POINTS_GAINED_PER_ASTEROID_HIT;
      mPointsHaveChanged = true;
      mConsecutiveMisses = 0;
      mConsecutiveHits++;
      checkAsteroidHitRelatedAchievements();
    }
    else if (gameEvent == GameEvent.BulletMissed) {
      mConsecutiveMisses++;
      mConsecutiveHits = 0;
      if (mConsecutiveMisses >= 20) {
        unlockSafe(R.string.achievement_target_lost);
      }
    }
    else if (gameEvent == GameEvent.SpaceshipHit) {
      mTimeWithoutDie = 0;
    }
    [...]
  }

Each time we hit an asteroid, we increment the number of consecutive hits and reset the number of consecutive misses. Similarly, each time a bullet misses, we increment the number of consecutive misses and reset the number of consecutive hits. As a side note, each time the spaceship is destroyed we reset the time without dying, which is used for "Survivor", but this is not the only place where that timer must be updated.
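The consecutive-hit and consecutive-miss bookkeeping in onGameEvent is plain state tracking, so it can be sketched and exercised independently of Android or Play Games. Class and method names here are illustrative; only the reset-the-other-streak rule and the 20-in-a-row threshold come from the game code above:

```java
// Tracks streaks of hits and misses. A hit breaks the miss streak and
// vice versa, which is exactly what the onGameEvent handler does with
// mConsecutiveHits and mConsecutiveMisses.
public class StreakTracker {
    public static final int STREAK_TARGET = 20;

    private int consecutiveHits = 0;
    private int consecutiveMisses = 0;

    public void onHit() {
        consecutiveMisses = 0;   // a hit resets the miss streak
        consecutiveHits++;
    }

    public void onMiss() {
        consecutiveHits = 0;     // a miss resets the hit streak
        consecutiveMisses++;
    }

    // True when a "Target acquired"-style achievement should unlock
    public boolean hitStreakReached() {
        return consecutiveHits >= STREAK_TARGET;
    }

    // True when a "Target lost"-style achievement should unlock
    public boolean missStreakReached() {
        return consecutiveMisses >= STREAK_TARGET;
    }
}
```

Keeping this logic in one small object also makes it trivial to unit test, which is hard to do when it is interleaved with GoogleApiClient calls.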
We have to reset it when the game starts, and modify it inside onUpdate by just adding the elapsed milliseconds:

  @Override
  public void startGame(GameEngine gameEngine) {
    mTimeWithoutDie = 0;
    [...]
  }

  @Override
  public void onUpdate(long elapsedMillis, GameEngine gameEngine) {
    mTimeWithoutDie += elapsedMillis;
    if (mTimeWithoutDie > 60000) {
      unlockSafe(R.string.achievement_survivor);
    }
  }

So, once the game has been running for 60,000 milliseconds since it started or since the spaceship was last destroyed, we unlock the "Survivor" achievement.

With this, we have all the code we need to unlock the achievements we have created for the game. Let's finish this section with some comments on the system and the developer console:

- As a rule of thumb, you can edit most of the details of an achievement until you publish it to production.
- Once your achievement has been published, it cannot be deleted. You can only delete an achievement in its prepublished state. There is a button labeled Delete at the bottom of the achievement screen for this.
- You can also reset the progress of achievements while they are in draft. This reset happens for all players at once. There is a button labeled Reset achievement progress at the bottom of the achievement screen for this.
- Also note that GameBaseActivity does a lot of logging, so if your device is connected to your computer and you run a debug build, you may see that it lags sometimes. This does not happen in a release build, for which the log is removed.

Leaderboards

Since YASS has only one game mode and one score, it makes sense to have only one leaderboard on Google Play Game Services. Leaderboards are managed from their own tab inside the Game services area of the developer console. Unlike achievements, it is not mandatory to have any leaderboards to be able to publish your game.

If your game has different levels of difficulty, you can have a leaderboard for each of them.
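The "Survivor" logic is just an elapsed-time accumulator that is reset on death and on game start. A minimal, Android-free sketch of that accumulator follows; the class name is illustrative, and the real code calls unlockSafe instead of setting a flag:

```java
// Accumulates play time between deaths, mirroring mTimeWithoutDie.
// The 60,000 ms threshold matches the "Survivor" achievement above.
public class SurvivalTimer {
    public static final long SURVIVOR_THRESHOLD_MS = 60000;

    private long timeWithoutDieMillis = 0;
    private boolean unlocked = false;

    // Called on game start and whenever the spaceship is destroyed
    public void reset() {
        timeWithoutDieMillis = 0;
    }

    // Called from the game loop with the elapsed time of each frame
    public void onUpdate(long elapsedMillis) {
        timeWithoutDieMillis += elapsedMillis;
        if (timeWithoutDieMillis > SURVIVOR_THRESHOLD_MS) {
            unlocked = true;  // real code calls unlockSafe(...) here
        }
    }

    public boolean isUnlocked() {
        return unlocked;
    }
}
```

Note that unlockSafe is cheap to call repeatedly, so the game code does not bother guarding against unlocking the same achievement twice; Play Games simply ignores unlocks of an already-unlocked achievement.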
This also applies if the game has several values that measure player progress: you can have a leaderboard for each of them.

Managing leaderboards on the Play Games console

Leaderboards can be created and managed in the Leaderboards tab. When we click on Add leaderboard, we are presented with a form with several fields to fill in:

- Name: the display name of the leaderboard, which can be localized. We will simply call it High Scores.
- Score formatting: this can be Numeric, Currency, or Time. We will use Numeric for YASS.
- Icon: a 512x512 px icon to identify the leaderboard.
- Ordering: Larger is better / Smaller is better. We are going to use Larger is better, but other score types may be Smaller is better, as in a racing game.
- Enable tamper protection: this automatically filters out suspicious scores. You should keep it on.
- Limits: if you want to limit the score range that is shown on the leaderboard, you can do it here. We are not going to use this.
- List order: the order of the leaderboards. Since we only have one, it is not really important for us.

Setting up a leaderboard on the Play Games console

Now that we have defined the leaderboard, it is time to use it in the game. As with achievements, there is a link where we can get all the resources for the game in XML, so we proceed to get the ID of the leaderboard and add it to the strings defined in the play_services.xml file.

We have to submit the score at the end of the game (that is, on a GameOver event), but also when the user exits a game via the pause button. To unify this, we will create a new GameEvent called GameFinished that is triggered after a GameOver event and after the user exits the game.
We will update the stopGame method of GameEngine, which is called in both cases, to trigger the event:

  public void stopGame() {
    if (mUpdateThread != null) {
      synchronized (mLayers) {
        onGameEvent(GameEvent.GameFinished);
      }
      mUpdateThread.stopGame();
      mUpdateThread = null;
    }
    [...]
  }

We have to set mUpdateThread to null after sending the event to prevent this code from running twice. Otherwise, we could submit each score more than once.

As with achievements, submitting a score is very simple: just a single line of code. But we also need to check that the GoogleApiClient is connected, and we still have the same edge case in which an Exception can be thrown, so we need to wrap the call in a try/catch block. To keep everything in the same place, we will put this code inside ScoreGameObject:

  @Override
  public void onGameEvent(GameEvent gameEvent) {
    [...]
    else if (gameEvent == GameEvent.GameFinished) {
      // Submit the score
      if (mApiClient.isConnecting() || mApiClient.isConnected()) {
        try {
          Games.Leaderboards.submitScore(mApiClient,
            getLeaderboardId(), mPoints);
        } catch (Exception e) {
          mApiClient.disconnect();
        }
      }
    }
  }

  private String getLeaderboardId() {
    return mParent.getString(R.string.leaderboard_high_scores);
  }

This is really straightforward. GPGS now receives our scores and uses the timestamp of each score to build daily, weekly, and all-time leaderboards. It also uses your Google+ circles to show the social scores of your friends. All this is done automatically for you.

The final missing piece is to let the player open the leaderboards and achievements UI from the main menu, as well as trigger a sign-in if they are signed out.

Opening the Play Games UI

To complete the integration of achievements and leaderboards, we are going to add buttons that open the native UI provided by GPGS to our main menu.
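The null-check-then-null pattern in stopGame is a small idempotency guard: however many times the method is called, the finish event fires exactly once. A standalone sketch of that guard (names illustrative, with a plain Object standing in for the update thread):

```java
// Models the stopGame guard: the finish event must be raised exactly
// once even if stop() is called repeatedly (game over followed by the
// user also tapping exit, for example).
public class StopOnceGuard {
    private Object updateThread = new Object(); // stand-in for mUpdateThread
    private int finishEventsSent = 0;

    public void stop() {
        if (updateThread != null) {
            finishEventsSent++;   // real code raises GameEvent.GameFinished
            updateThread = null;  // makes every later stop() a no-op
        }
    }

    public int getFinishEventsSent() {
        return finishEventsSent;
    }
}
```

Without the guard, a GameOver followed by the pause-dialog exit would submit the same score twice.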
For this, we are going to place two buttons in the bottom-left corner of the screen, opposite the music and sound buttons. We will also check whether we are connected or not; if not, we will show a single sign-in button instead.

For these buttons we will use the official GPGS images, which are available for developers to use. Note that you must follow the brand guidelines when using the icons: they must be displayed as they are and not modified. This also provides a consistent look and feel across all the games that support Play Games.

Since we have seen a lot of layouts already, we are not going to include another one that is almost the same as one we already have.

The main menu with the buttons to view achievements and leaderboards

To handle these new buttons we will, as usual, set MainMenuFragment as the OnClickListener for the views. We do this in the same place as for the other buttons, inside onViewCreated:

  @Override
  public void onViewCreated(View view, Bundle savedInstanceState) {
    super.onViewCreated(view, savedInstanceState);
    [...]
    view.findViewById(R.id.btn_achievements).setOnClickListener(this);
    view.findViewById(R.id.btn_leaderboards).setOnClickListener(this);
    view.findViewById(R.id.btn_sign_in).setOnClickListener(this);
  }

As with achievements and leaderboards, the work is done using static methods that receive a GoogleApiClient object. We can get this object from the GameHelper that is part of BaseGameActivity, like this:

  GoogleApiClient apiClient =
    getYassActivity().getGameHelper().getApiClient();

To open the native UI, we have to obtain an Intent and then start an Activity with it. It is important that you use startActivityForResult, since some data is passed back and forth. To open the achievements UI, the code is like this:

  Intent achievementsIntent =
    Games.Achievements.getAchievementsIntent(apiClient);
  startActivityForResult(achievementsIntent, REQUEST_ACHIEVEMENTS);

This works out of the box.
It automatically grays out the icons of the locked achievements, adds a counter and progress bar to the ones in progress, and a padlock to the hidden ones. Similarly, to open the leaderboards UI, we obtain an intent from the Games.Leaderboards class instead:

  Intent leaderboardsIntent = Games.Leaderboards.getLeaderboardIntent(
    apiClient, getString(R.string.leaderboard_high_scores));
  startActivityForResult(leaderboardsIntent, REQUEST_LEADERBOARDS);

In this case, we are asking for a specific leaderboard, since we only have one. We could use getLeaderboardsIntent instead, which opens the Play Games UI with the list of all the leaderboards.

We can have an intent to open the list of leaderboards or a specific one.

What remains to be done is to replace the two buttons with the login one when the user is not connected. For this, we will create a method that reads the state and shows and hides the views accordingly:

  private void updatePlayButtons() {
    GameHelper gameHelper = getYassActivity().getGameHelper();
    if (gameHelper.isConnecting() || gameHelper.isSignedIn()) {
      getView().findViewById(
        R.id.btn_achievements).setVisibility(View.VISIBLE);
      getView().findViewById(
        R.id.btn_leaderboards).setVisibility(View.VISIBLE);
      getView().findViewById(
        R.id.btn_sign_in).setVisibility(View.GONE);
    } else {
      getView().findViewById(
        R.id.btn_achievements).setVisibility(View.GONE);
      getView().findViewById(
        R.id.btn_leaderboards).setVisibility(View.GONE);
      getView().findViewById(
        R.id.btn_sign_in).setVisibility(View.VISIBLE);
    }
  }

This method decides whether to show or hide the views based on the state. We call it inside the important state-changing methods:

- onLayoutCompleted: the first time we open the game, to initialize the UI.
- onSignInSucceeded: when the user successfully signs in to GPGS.
- onSignInFailed: this can be triggered when we auto sign in and there is no connection. It is important to handle it.
- onActivityResult: when we come back from the Play Games UI, in case the user has logged out.

But nothing is as easy as it looks. In fact, when the user signs out but does not exit the game, GoogleApiClient keeps the connection open, and therefore isSignedIn from GameHelper still returns true. This is the edge case we have been talking about throughout the article. As a result of it, there is an inconsistency in the UI, which shows the achievements and leaderboards buttons when it should show the login one.

When the user logs out from Play Games, GoogleApiClient keeps the connection open. This can lead to confusion.

Unfortunately, this has been marked as working as expected by Google. The reason is that the connection is still active, and it is our responsibility to parse the result in the onActivityResult method to determine the new state. But this is not very convenient. Since it is a rare case, we will go for the easiest solution: wrap the code in a try/catch block and make the user sign in if he or she taps on leaderboards or achievements while not logged in.

This is the code to handle a click on the achievements button; the one for leaderboards is equivalent:

  else if (v.getId() == R.id.btn_achievements) {
    try {
      GoogleApiClient apiClient =
        getYassActivity().getGameHelper().getApiClient();
      Intent achievementsIntent =
        Games.Achievements.getAchievementsIntent(apiClient);
      startActivityForResult(achievementsIntent,
        REQUEST_ACHIEVEMENTS);
    } catch (Exception e) {
      GameHelper gameHelper = getYassActivity().getGameHelper();
      gameHelper.disconnect();
      gameHelper.beginUserInitiatedSignIn();
    }
  }

Basically, we have the old code to open the achievements activity, wrapped in a try/catch block. If an exception is raised, we disconnect the GameHelper and begin a new login using the beginUserInitiatedSignIn method. It is very important to disconnect the GameHelper before trying to log in again.
Otherwise, the login will not work.

We must disconnect from GPGS before we can log in again using the method from the GameHelper.

Finally, there is the case where the user clicks on the login button, which just triggers the login via the beginUserInitiatedSignIn method of the GameHelper:

  if (v.getId() == R.id.btn_sign_in) {
    getYassActivity().getGameHelper().beginUserInitiatedSignIn();
  }

Once you have published your game and its game services, achievements and leaderboards will not appear in the game description on Google Play straight away. It is required that "a fair amount of users" have used them. You have done nothing wrong; you just have to wait.

Other features of Google Play services

Google Play Game Services provides more features for game developers than achievements and leaderboards. None of them really fits the game we are building, but it is useful to know they exist, just in case your game needs them. You can save yourself a lot of time and effort by using them instead of reinventing the wheel. The other features of Google Play Game Services are:

- Events and quests: these allow you to monitor game usage and progression. They also add the possibility of creating time-limited events with rewards for the players.
- Gifts: as simple as it sounds, you can send a gift to other players or request one to be sent to you. Yes, this is seen in the very mechanical Facebook games popularized a while ago.
- Saved games: the standard concept of a saved game. If your game has progression or can unlock content based on user actions, you may want to use this feature. Since they are saved in the cloud, saved games can be accessed across multiple devices.
- Turn-based and real-time multiplayer: Google Play Game Services provides an API to implement turn-based and real-time multiplayer features without you needing to write any server code.
If your game is multiplayer and has an online economy, it may be worth building your own server and granting virtual currency only on the server to prevent cheating. Otherwise, it is fairly easy to crack the gifts/reward system, and a single person can ruin the complete game economy. However, if there is no online game economy, the benefits of gifts and quests may outweigh the fact that someone can hack them.

Let's take a look at each of these features.

Events

The events API provides us with a way to define and collect gameplay metrics and upload them to Google Play Game Services. This is very similar to the GameEvents we are already using in our game.

Events should be a subset of the game events of our game. Many of the game events we have are used internally as signals between objects or as a synchronization mechanism. These events are not really relevant outside the engine, but others could be. Those are the events we should send to GPGS.

To be able to send an event from the game to GPGS, we have to create it in the developer console first. To create an event, we go to the Events tab in the developer console, click on Add new event, and fill in the following fields:

- Name: a short name for the event, up to 100 characters. This value can be localized.
- Description: a longer description of the event, up to 500 characters. This value can also be localized.
- Icon: the icon for the event, in the standard 512x512 px size.
- Visibility: as for achievements, this can be revealed or hidden.
- Format: as for leaderboards, this can be Numeric, Currency, or Time.
- Event type: this is used to mark events that create or spend premium currency. It can be Premium currency sink, Premium currency source, or None.

While in the game, events work pretty much like incremental achievements.
You can increment the event counter using the following line of code:

  Games.Events.increment(mGoogleApiClient, myEventId, 1);

You can delete events that are in the draft state or that have been published, as long as the event is not in use by a quest. You can also reset the player progress data of your events for testers, as you can for achievements.

While events can be used as an analytics system, their real usefulness appears when they are combined with quests.

Quests

A quest is a challenge that asks players to complete an event a number of times during a specific time frame to receive a reward. Because a quest is linked to an event, to use quests you need to have created at least one event.

You can create a quest from the Quests tab in the developer console. A quest has the following fields to be filled in:

- Name: the short name of the quest, up to 100 characters. It can be localized.
- Description: a longer description of the quest, up to 500 characters. Your quest description should let players know what they need to do to complete the quest. The first 150 characters will be visible to players on cards such as those shown in the Google Play Games app.
- Icon: a square icon that will be associated with the quest.
- Banner: a rectangular image that will be used to promote the quest.
- Completion Criteria: the configuration of the quest itself. It consists of an event and the number of times the event must occur.
- Schedule: the start and end date and time of the quest. GPGS uses your local time zone but stores the values as UTC; players will see them in their local time zone. You can mark a checkbox to notify users when the quest is about to end.
- Reward Data: this is specific to each game. It can be a JSON object specifying the reward, and it is sent to the client when the quest is completed.
Once configured in the developer console, you can do two things with quests:

- Display the list of quests
- Process a quest completion

To get the list of quests, we start an activity with an intent that is provided to us via a static method, as usual:

  Intent questsIntent = Games.Quests.getQuestsIntent(mGoogleApiClient,
    Quests.SELECT_ALL_QUESTS);
  startActivityForResult(questsIntent, QUESTS_INTENT);

To be notified when a quest is completed, all we have to do is register a listener:

  Games.Quests.registerQuestUpdateListener(mGoogleApiClient, this);

Once we have set the listener, the onQuestCompleted method will be called when a quest is completed. After processing the reward, the game should call claim to inform Play Game services that the player has claimed it. The following code snippet shows how you might override the onQuestCompleted callback:

  @Override
  public void onQuestCompleted(Quest quest) {
    // Claim the quest reward.
    Games.Quests.claim(mGoogleApiClient, quest.getQuestId(),
      quest.getCurrentMilestone().getMilestoneId());
    // Process the RewardData to provision a specific reward.
    String reward = new String(
      quest.getCurrentMilestone().getCompletionRewardData(),
      Charset.forName("UTF-8"));
  }

The rewards themselves are defined by the client. As mentioned before, this makes the game quite easy to crack to obtain rewards, but usually avoiding the hassle of writing your own server is worth it.

Gifts

The gifts feature of GPGS allows us to send gifts to other players and to request them to send us one as well. It is intended to make the gameplay more collaborative and to improve the social aspect of the game. As with other GPGS features, there is a built-in UI provided by the library that can be used, in this case to send and request gifts of in-game items and resources to and from friends in their Google+ circles. The request system can make use of notifications.
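The reward payload in the callback above arrives as an opaque byte array that the game decodes as UTF-8. That decoding step can be isolated and tested on its own; the class name and the JSON payload below are made up for illustration:

```java
import java.nio.charset.StandardCharsets;

// Decodes a quest reward payload. Play Games delivers the reward data
// as an opaque byte[]; interpreting it (here, as a UTF-8 JSON string)
// is entirely up to the game client.
public class RewardDecoder {
    public static String decode(byte[] completionRewardData) {
        // A quest configured without reward data may deliver nothing,
        // so guard against null before decoding.
        if (completionRewardData == null) {
            return "";
        }
        return new String(completionRewardData, StandardCharsets.UTF_8);
    }
}
```

Using StandardCharsets.UTF_8 instead of Charset.forName("UTF-8") avoids the checked lookup by name and is available from Java 7 onward.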
There are two types of requests that players can send using the game gifts feature of Google Play Game Services:

- A wish request, to ask for in-game items or some other form of assistance from their friends
- A gift request, to send in-game items or some other form of assistance to their friends

A player can specify one or more recipients from the default request-sending UI. A gift or wish can be consumed (accepted) or dismissed by a recipient. To see the gifts API in detail, you can visit https://developers.google.com/games/services/android/giftRequests.

Again, as with quest rewards, this is handled entirely by the client, which makes the game susceptible to piracy.

Saved games

The saved games service offers cloud game-saving slots. Your game can retrieve the saved game data to allow returning players to continue a game at their last save point from any device.

This service makes it possible to synchronize a player's game data across multiple devices. For example, if you have a game that runs on Android, you can use the saved games service to allow a player to start a game on their Android phone and then continue playing on a tablet without losing any progress. It can also be used to ensure that play continues from where it was left off even if the device is lost, destroyed, or traded in for a newer model, or if the game is reinstalled.

The saved games service does not know about the game internals, so it provides a field that is an unstructured binary blob where you can read and write the game data. A game can write an arbitrary number of saved games for a single player, subject to user quota, so there is no hard requirement to restrict players to a single save file.

Saved games are stored as an unstructured binary blob.
The saved games API also receives some metadata that is used by Google Play Games to populate the UI and to present useful information in the Google Play Games app (for example, a last-updated timestamp).

Saved games have several entry points and actions, including how to deal with conflicts between saved games. To know more about them, check out the official documentation at https://developers.google.com/games/services/android/savedgames.

Multiplayer games

If you are going to implement multiplayer, GPGS can save you a lot of work. You may or may not use it for the final product, but it will remove the need to think about the server side until the game concept is validated. You can use GPGS for turn-based and real-time multiplayer games. Although each one is completely different and uses a different API, there is always an initial step where the game is set up and the opponents are selected or invited.

In a turn-based multiplayer game, a single shared state is passed among the players, and only the player that owns the turn has permission to modify it. Players take turns asynchronously according to an order of play determined by the game. A turn is finished explicitly by the player using an API call; then the game state is passed to the other players, together with the turn. There are many cases to handle: selecting opponents, creating a match, leaving a match, canceling one, and so on. The official documentation at https://developers.google.com/games/services/android/turnbasedMultiplayer is quite exhaustive, and you should read through it if you plan to use this feature.

In a real-time multiplayer game, there is no concept of turn. Instead, the server uses the concept of a room: a virtual construct that enables network communication between multiple players in the same game session and lets players send data directly to one another, a common concept in game servers.

The real-time multiplayer service is based on the concept of a room.
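Because the saved games service stores an unstructured blob, the game must define its own byte layout. The following sketch packs a tiny, made-up game state (score and level; YASS itself has no save feature) into the kind of byte[] the service would store, and unpacks it again:

```java
import java.nio.ByteBuffer;

// Packs a minimal game state into an opaque byte[] of the kind the
// saved games service stores, and restores it. The field layout
// (an 8-byte score followed by a 4-byte level) is invented here;
// a real game defines and versions its own format.
public class SaveBlob {
    public final long score;
    public final int level;

    public SaveBlob(long score, int level) {
        this.score = score;
        this.level = level;
    }

    public byte[] toBytes() {
        ByteBuffer buffer = ByteBuffer.allocate(Long.BYTES + Integer.BYTES);
        buffer.putLong(score);
        buffer.putInt(level);
        return buffer.array();
    }

    public static SaveBlob fromBytes(byte[] data) {
        ByteBuffer buffer = ByteBuffer.wrap(data);
        // Fields must be read in the same order they were written.
        return new SaveBlob(buffer.getLong(), buffer.getInt());
    }
}
```

A fixed binary layout like this keeps the blob small; a JSON string encoded as UTF-8 bytes is an equally valid choice and easier to evolve.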
The real-time multiplayer API allows us to easily:

- Manage network connections to create and maintain a real-time multiplayer room
- Provide a player-selection user interface to invite players to join a room, look for random players for auto-matching, or a combination of both
- Store participant and room-state information on the Play Games services' servers while the game is running
- Send room invitations and updates to players

To check the complete documentation for real-time games, please visit the official web page at https://developers.google.com/games/services/android/realtimeMultiplayer.

Summary

We have added Google Play Game services to YASS, including setting up the game in the developer console and adding the required libraries to the project. Then, we defined a set of achievements and added the code to unlock them, using normal, incremental, and hidden achievement types to showcase the different options available. We also configured a leaderboard and submitted scores to it, both when the game is finished and when it is exited via the pause dialog. Finally, we added links to the native UI for leaderboards and achievements to the main menu.

We have also introduced the concepts of events, quests, and gifts, as well as the saved games and multiplayer features that Google Play Game services offers.

The game is ready to publish now.

Resources for Article:

Further resources on this subject:
- SceneKit [article]
- Creating Games with Cocos2d-x is Easy and 100 percent Free [article]
- SpriteKit Framework and Physics Simulation [article]

Packt
29 Apr 2010
6 min read

Blender 2.49 Scripting: Shape Keys, IPOs, and Poses

A touchy subject—defining an IPO from scratch

Many paths of motion of objects are hard to model by hand, for example, when we want the object to follow a precise mathematical curve or if we want to coordinate the movement of multiple objects in a way that is not easily accomplished by copying IPOs or defining IPO drivers.

Imagine the following scenario: we want to interchange the position of some objects over the duration of some time in a fluid way, without those objects passing through each other in the middle and without even touching each other. This would be doable by manually setting keys, perhaps, but it would also be fairly cumbersome, especially if we wanted to repeat this for several sets of objects. The script that we will devise takes care of all of those details and can be applied to any two objects.

Code outline: orbit.py

The orbit.py script that we will design will take the following steps:

- Determine the halfway point between the selected objects.
- Determine the extent of the selected objects.
- Define IPO for object one.
- Define IPO for object two.

Determining the halfway point between the selected objects is easy enough: we will just take the average location of both objects. Determining the extent of the selected objects is a little bit more challenging, though. An object may have an irregular shape, and determining the shortest distance for any rotation of the objects along the path that the object will be taking is difficult to calculate.

Fortunately, we can make a reasonable approximation, as each object has an associated bounding box. This bounding box is a rectangular box that just encapsulates all of the points of an object. If we take half the body diagonal as the extent of an object, it is easy to see that this distance may exaggerate how close we can get to another object without touching it, depending on the exact form of the object. But it will ensure that we never get too close.
This bounding box is readily available from an object's getBoundBox() method as a list of eight vectors, each representing one of the corners of the bounding box. The concept is illustrated in the following figure, where the bounding boxes of two spheres are shown:

The length of the body diagonal of a bounding box can be calculated by determining both the maximum and minimum values for each x, y, and z coordinate. The components of the vector representing this body diagonal are the differences between these maximums and minimums. The length of the diagonal is subsequently obtained by taking the square root of the sum of squares of the x, y, and z components. The function diagonal() is a rather terse implementation, as it uses many built-in functions of Python. It takes a list of vectors as an argument and then iterates over each component (highlighted; the x, y, and z components of a Blender Vector may be accessed as 0, 1, and 2 respectively):

  def diagonal(bb):
      maxco=[]
      minco=[]
      for i in range(3):
          maxco.append(max(b[i] for b in bb))
          minco.append(min(b[i] for b in bb))
      return sqrt(sum((a-b)**2 for a,b in zip(maxco,minco)))

It determines the extremes of each component by using the built-in max() and min() functions. Finally, it returns the length by pairing each minimum and maximum using the zip() function.

The next step is to verify that we have exactly two objects selected and inform the user with a pop up if this isn't the case (highlighted in the next code snippet). If we do have two objects selected, we retrieve their locations and bounding boxes.
Then we calculate the maximum distance w each object has to veer from its path, which is half the minimum distance between them, that is, a quarter of the sum of the lengths of the body diagonals of those objects:

  obs=Blender.Scene.GetCurrent().objects.selected
  if len(obs)!=2:
      Draw.PupMenu('Please select 2 objects%t|Ok')
  else:
      loc0 = obs[0].getLocation()
      loc1 = obs[1].getLocation()
      bb0 = obs[0].getBoundBox()
      bb1 = obs[1].getBoundBox()
      w = (diagonal(bb0)+diagonal(bb1))/4.0

Before we can calculate the trajectories of both objects, we first create two new and empty Object IPOs:

  ipo0 = Ipo.New('Object','ObjectIpo0')
  ipo1 = Ipo.New('Object','ObjectIpo1')

We arbitrarily choose the start and end frames of our swapping operation to be 1 and 30 respectively, but the script could easily be adapted to prompt the user for these values. We iterate over each separate IPO curve of the Location IPO and create the first point (or keyframe), and thereby the actual curve, by assigning a tuple (framenumber, value) to the curve (highlighted lines in the next code). Subsequent points may be added to these curves by indexing them by frame number when assigning a value, as is done for frame 30 in the following code:

  for i,icu in enumerate((Ipo.OB_LOCX,Ipo.OB_LOCY,Ipo.OB_LOCZ)):
      ipo0[icu]=(1,loc0[i])
      ipo0[icu][30]=loc1[i]
      ipo1[icu]=(1,loc1[i])
      ipo1[icu][30]=loc0[i]
      ipo0[icu].interpolation = IpoCurve.InterpTypes.BEZIER
      ipo1[icu].interpolation = IpoCurve.InterpTypes.BEZIER

Note that the location of the first object keyframed at frame 1 is its current location, and the location keyframed at frame 30 is the location of the second object. For the other object, it is just the other way around. We set the interpolation modes of these curves to "Bezier" to get a smooth motion. We now have two IPO curves that do interchange the locations of the two objects, but as calculated, the objects will move right through each other. Our next step, therefore, is to add a key at frame 15 with an adjusted z-component.
Earlier, we calculated w to hold half the distance needed to keep out of each other's way. Here we add this distance to the z-component of the halfway point for the first object and subtract it for the other:

  mid_z = (loc0[2]+loc1[2])/2.0
  ipo0[Ipo.OB_LOCZ][15] = mid_z + w
  ipo1[Ipo.OB_LOCZ][15] = mid_z - w

Finally, we add the new IPOs to our objects:

  obs[0].setIpo(ipo0)
  obs[1].setIpo(ipo1)

The full code is available as swap2.py in the file orbit.blend (download the full code from here). The resulting paths of the two objects are sketched in the next screenshot:
Blender 3D 2.49: Working with Textures

Packt
03 Sep 2010
9 min read
(For more resources on Blender, see here.) Before we start to dig into textures, let me say that the biggest problem of working with them is actually finding or creating a good texture. That's why it's highly recommended that you start to build your own texture library as soon as possible. Textures are mostly image files that represent some kind of surface, such as wood or stone, based on photographs. They work like wallpaper that we can place on a surface or object. For instance, if we place an image of wood on a plane, it will give the impression that the plane is made of wood. That's the main principle of using textures: to make an object look like something in the real world. For some projects, we may need a special kind of texture that won't be found in a common library, so we will have to take a picture ourselves or buy an image from someone. But don't worry, because we often deal with common surfaces that have common textures as well.

Procedural textures vs. non-procedural textures

Blender basically has two types of textures: procedural textures and bitmap textures. Each one has positive and negative points; which one is best will depend on your project's needs.

Procedural: This kind of texture is generated by the software at rendering time, just like vector lines in software such as Inkscape or Illustrator. This means that it won't depend on any type of image file. Because procedural textures are based on algorithms rather than a fixed number of pixels, they are resolution-independent: we can render them at high resolutions with minimal loss of quality. The downside of this kind of texture is that it's harder to achieve realistic results with it.

Non-procedural: To use this kind of texture, we will need an image file, such as a JPEG, PNG, or TGA file.
The good thing about these textures is that we can quickly achieve a very realistic look and feel with them. On the other hand, we must find the texture file before using it. What's more, if you are creating a high-resolution render, the texture must be high resolution as well.

Texture library

Do you remember the way we organized materials? We can do the exact same thing with textures. Besides setting names and storing the Blender files to import and use again later, collecting bitmap textures is another important point. Even if you don't start right away, it's important to know where to look for textures. So, here is a small list of websites that provide free texture downloads:

http://www.cgtextures.com
http://blender-archi.tuxfamily.org/textures

Applying textures

To use a texture, we must apply a material to an object, and then use the texture with this material; we always use a texture inside a material. For instance, to make a plane that simulates a marble floor, we have to use a texture and set up how the surface will react to light, to give the surface a proper marble look. To do that, we use the Texture panel, which is located right next to the Materials button. We can use a keyboard shortcut to open this panel; just hit F6.

There is also a way to add a texture in the Material panel, with a menu named Texture, but to get all the options, the best way to add a texture is through the Texture panel. In this panel, we will see buttons representing the texture channels. Each one of these channels can hold a texture, and the final texture will be a mix of all the channels. If we have a texture in channel 1 and another texture in channel 2, these textures will be blended and represented on the material. Before adding a new texture, we must select a channel by clicking on it. Usually the first channel will be selected, but if you want to use another one, just click on that channel.
When the channel is selected, just click on the Add New button to add a new texture. The texture controls are very similar to the material controls. We can give the texture a name at the top, and erase it if we don't want it anymore. With the selector, we can also choose a previously created texture: just click and select it.

Now, here comes the fun part. With a texture added, we have to choose a texture type by clicking on the Texture Type combo box. There are a lot of types, but most of them are procedural textures, which we won't use frequently. The only texture type that isn't procedural is the Image type. We see an example of a procedural Wood texture in the following screenshot. We can use textures such as Clouds and Wood to create effects and give surfaces a more complex look, or even create a grass texture with dirt on it. But most of the time, the texture type we will be using is the Image type.

Each texture type has its own set of parameters that determine how it will look on the object. If we add a Wood texture, its configuration parameters are shown on the right. If we choose Clouds as the texture type, the parameters shown will be completely different. The Image texture type is no exception; it has its own setup and control panel.

To show how to set up a texture, let's use an image file that represents a wood floor, and a plane. We can apply the texture to this plane and set up how it's going to look, testing all the parameters. The first thing to do is to assign a material to the plane, and then add a texture to this material. We choose the Image option as the texture type, and Blender shows the configuration options for this kind of texture. To apply the image as a texture to the plane, just click on the Load button, located in the Image menu.
When we hit this button, we will be able to select the image file. Locate the image file, and the texture will be applied. If we want more control over how this texture is organized and placed on the plane, we need to learn how the controls work. Every time you make a change to the setup of a texture, the change is shown in the preview window; use it to evaluate your adjustments. Here is a list of what some of the buttons do for the texture:

UseAlpha: If the texture has an alpha channel, we have to press this button for Blender to calculate the channel. An image has an alpha channel when some kind of transparency is stored in the image. For instance, a PNG file with a transparent background has an alpha channel. We can use this to create a texture with a logo for a bottle, or to add an image of a tree or a person to a plane.

Rot90: With this option, we can rotate the texture by 90 degrees.

Repeat: Every texture must be distributed across the object's surface; repeating the texture in lines and columns is the default way to do that.

Extend: If this button is pressed, the texture will be stretched to fit the entire object surface area.

Clip: With this option, the texture will be cropped, and we will be able to show only a part of it. To adjust which parts of the texture will be displayed, use the Min/Max X/Y options.

Xrepeat / Yrepeat: This option determines how many times a texture is repeated, with the Repeat option turned on.

Normal Map: If the texture will be used to create normal maps, press this button. These are textures used to change the face normals of an object.

Still: With this button selected, we specify that the image used as the texture is a still image. This option is marked by default.

Movie: If you want to use a movie file as a texture, press this button. This is very useful if we need to make something similar to a theater projection screen or a TV screen.

Sequence: We can use a sequence of images as a texture too; just press this button. It works the same way as with a movie file.

There are a few more parameters, such as the Reload button. If your texture file is updated outside of Blender, you must press this button to make Blender update the texture in your project. The X button erases the texture; use it if you need to select another image file.

When we add a texture to any material, an external link to this file is created. This link can be absolute or relative. Suppose we add a texture named wood.png, which is located in the root of your primary hard disk, such as C:. A link to this texture will be created as c:\wood.png, so every time you open the file, the software will look for the texture at that exact place. This is an absolute link. We can use a relative link as well: when we add a texture located in the same folder as our scene, a relative link is created. Whenever we use absolute links and have to move the .blend file to another computer, the texture files must go with it. To embed the image file in the .blend file, just press the gift package icon. To save all the textures used in a scene, access the File menu and use the Pack Data option. It will cause all the texture files to be embedded in the source .blend file.
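The difference between absolute and relative links can be illustrated with Python's path utilities (the file names here are made up for the example):

```python
import ntpath  # Windows-style path rules, matching the c:\wood.png example

# An absolute link records the full location of the texture...
absolute = r"c:\textures\wood.png"
assert ntpath.isabs(absolute)

# ...while a relative link records it with respect to the .blend
# file's folder, so scene and textures can move together.
blend_dir = r"c:\textures"
relative = ntpath.relpath(absolute, start=blend_dir)
print(relative)  # wood.png
```

A relative link like this survives copying the whole project folder to another machine, which is exactly why packing or relative paths are preferred when sharing .blend files.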
Blender 3D: Interview with Allan Brito

Packt
23 Oct 2009
8 min read
Meeba Abraham: Hi Allan, thank you for talking to us today. Why don't you tell us a bit about yourself and your background? How did you start working with Blender?

Allan Brito: Hi, and thanks for this opportunity to talk a bit about myself. Well, I'm a 29 year-old architect from Brazil. After my graduation, I started working on visualization projects, mostly in 3ds Max, for a small studio here in Brazil. After two years I started teaching 3D modeling and animation, and I fell in love with teaching. I still teach 3D animation and modeling at a college here. With the help of my teaching experience, I began writing manuals and tutorials about 3D animation. Eventually, I decided to write a book about Blender in Portuguese, and the book was a huge success in Brazil. Currently I'm working on the third edition of this book. With the book, I also needed a way to keep in touch with the readers and discuss Blender and 3D-related topics, so I started a website (www.allanbrito.com), where I regularly write short articles and tutorials about Blender and its comparison with other 3D packages. Today the website has grown considerably, and I continue to update it with content on Blender and other 3D software tools.

Meeba Abraham: How long have you been working with it?

Allan Brito: My first contact with Blender 3D was in 2003. I was invited by a friend to check out a great open source tool for 3D visualization. I was really impressed by Blender, its potential, and how lightweight the software is. Coming from a 3ds Max background, it was a bit hard to get used to the interface and the keyboard shortcuts, but after a few weeks I started getting used to it. I can't say that it was easy to use at first, but with time Blender simply grew on me and became the main tool for my projects.
Meeba Abraham: Can you tell us about some of the key features of Blender that make it a viable alternative to other professional 3D software?

Allan Brito: There are many features in Blender that other professional 3D suites do not have. For instance, the integrated game engine, which allows you to produce interactive animations, is just awesome! For 3D modeling, Blender has a sculpt module where artists can create 3D models simply by sculpting geometry, in a way similar to what sculpting tools such as ZBrush and Mudbox provide. The node editor in Blender is also an incredible tool for creating materials and for post-production. Post-production is a powerful capability in Blender: there is a sequence editor that works like a video editor, in which you can cut, join, and post-process videos. An animator can create a full animation without the need for any other software. Recently, the Big Buck Bunny project introduced some great tools for character animation in Blender, like better fur, a new and improved particle system, new and improved UV mapping, and much more. I strongly recommend a visit to www.blender.org to check out the full list of features, which is huge.

Meeba Abraham: Why is Blender an important 3D application that an aspiring graphics artist should consider using?

Allan Brito: I believe that Blender has a great set of features that can help a graphic artist create some impressive artwork. Why Blender? I guess the best answer is: why not? All the features offered by other 3D animation software are also available in Blender, such as character animation, physics simulation, particle animation, and much more. And with Blender being free software, you won't have to buy a license or be bound to only one workstation. Beyond the features, I believe in the community nature of Blender. If you feel a tool or feature is missing, just make a suggestion to the community, or build the feature yourself!
Meeba Abraham: Over the years, Blender has grown in popularity. What, in your opinion, are some of the main reasons for this?

Allan Brito: In the last few years Blender gained many features that only the so-called high-end and expensive 3D software had. This put the spotlight on Blender, and some experienced professionals are using Blender today to take a look at these advanced features, and they like it. Besides the features, the Blender Foundation is doing a great job of supporting Blender and promoting it outside the community. They organize conferences and projects to show the potential of Blender as a 3D animation package. The latest open movie, Big Buck Bunny, supported by the community, is a great example of that.

Meeba Abraham: Since Blender is an open source 3D application, the Blender community plays an important role in its growth. Can you shed some light on the Blender community? How have they helped to popularize Blender?

Allan Brito: What can I say? The Blender community is great and has been supporting the development of Blender for a long time. The latest open movie is a great example of what this community can do. Big Buck Bunny is a project created mainly by the Blender community. Artists could buy the DVD of the animation even before the project started, and when the animation was finished, all Blender users could buy a shiny DVD that contains tutorials and all the source files of the animation. Imagine if Pixar gave away all of the production files for their animations! And even if you don't want to buy the DVD, you can still download all of the content for free from the project website, www.bigbuckbunny.org. This is a great example of the Blender community spirit and how much support Blender gets from around the world.

Meeba Abraham: You have just authored a book on Blender; how did you find the experience? Is this the first book you've written?

Allan Brito: Writing a book on Blender was quite a challenge for me.
Even with the experience of writing tutorials and short articles about Blender, writing a book was not easy! But after a few weeks, I was able to write the chapters naturally and almost on schedule. The biggest challenge for me was to write about a subject that no one else had written about yet. In my first book, "Blender 3D – Guia do Usuário", written in Brazilian Portuguese, the challenge was even bigger: when I started writing that book, there wasn't any updated documentation on Blender's features, so I had to do a lot of research myself. With this book, the challenge again was to write about something that no one else had ever written about. Even with a few short tutorials around, there wasn't any full set of procedures or tips for working with architectural visualization in Blender. The experience was great, and I hope this is just the first book in a long series! I have a few ideas for more books about Blender and I'm already working on some of them.

Meeba Abraham: How do you anticipate it will help the Blender community? Is it different from other Blender books?

Allan Brito: I believe that a lot of users want to use Blender for architectural visualization but have only found tutorials and books on character modeling and animation. This book was written with architectural visualization in mind, so every example and Blender tool is described specifically with architectural examples.

Meeba Abraham: You make regular contributions to www.BlenderNation.com. How did you get involved with the site, and what does it offer to the community?

Allan Brito: BlenderNation is the most comprehensive website for Blender-related news. If anyone is curious about what's going on in the Blender community, the first place to look, after the Foundation website, is BlenderNation. My involvement with BlenderNation began with my writing articles about Blender in Brazilian Portuguese for my own website (www.allanbrito.com).
A few months later, I was invited by Bart Veldhuizen to write a few tutorials, and I guess they liked my work! After that I was writing articles for BlenderNation as a contributing editor, and I have to say that it's really great to be a part of it and to keep the Blender community updated. The experience with BlenderNation and the books inspired me to start a new project called Blender 3D Architect (www.blender3darchitect.com), where I write articles on how to use Blender for architectural visualization, along with tips and tutorials.

Meeba Abraham: Thanks for your time and contributions!
Developing Flood Control using XNA Game Development

Packt
22 Dec 2011
15 min read
(For more resources on XNA, see here.)

Animated pieces

We will define three different types of animated pieces: rotating, falling, and fading. The animation for each of these types will be accomplished by altering the parameters of the SpriteBatch.Draw() call.

Classes for animated pieces

In order to represent the three types of animated pieces, we will create three new classes. Each of these classes will inherit from the GamePiece class, meaning they will contain all of the methods and members of the GamePiece class, but will add additional information to support the animation.

Child classes

Child classes inherit all of their parent's members and methods. The RotatingPiece class can refer to the _pieceType and _pieceSuffix of the piece without recreating them within RotatingPiece itself. Additionally, child classes can extend the functionality of their base class, adding new methods and properties, or overriding old ones. In fact, Game1 itself is a child of the Microsoft.Xna.Framework.Game class, which is why all of the methods we use (Update(), Draw(), LoadContent(), and so on) are declared with the Overrides modifier.

Let's begin by creating the class we will use for rotating pieces.

Time for action – rotating pieces

Open your existing Flood Control project in Visual Studio, if it is not already active. Add a new class to the project called RotatingPiece.
Under the class declaration (Public Class RotatingPiece), add the following line:

Inherits GamePiece

Add the following declarations to the RotatingPiece class:

Public Clockwise As Boolean
Public Shared RotationRate As Single = (MathHelper.PiOver2 / 10)
Private _rotationAmount As Single
Public rotationTicksRemaining As Single = 10

Add a property to retrieve the current RotationAmount:

Public ReadOnly Property RotationAmount As Single
    Get
        If Clockwise Then
            Return _rotationAmount
        Else
            Return (MathHelper.Pi * 2) - _rotationAmount
        End If
    End Get
End Property

Add a constructor for the RotatingPiece class as follows:

Public Sub New(type As String, clockwise As Boolean)
    MyBase.New(type)
    Me.Clockwise = clockwise
End Sub

Add a method to update the piece as follows:

Public Sub UpdatePiece()
    _rotationAmount += RotationRate
    rotationTicksRemaining = CInt(MathHelper.Max(0, rotationTicksRemaining - 1))
End Sub

What just happened?

In step 3, we modified the RotatingPiece class by adding Inherits GamePiece on the line after the class declaration. This indicates to Visual Basic that the RotatingPiece class is a child of the GamePiece class. The Clockwise variable stores true if the piece will be rotating clockwise, and false if the rotation is counter-clockwise. When a game piece is rotated, it will turn a total of 90 degrees (or pi/2 radians) over 10 animation frames. The MathHelper class provides a number of constants to represent commonly used numbers, with MathHelper.PiOver2 being equal to the number of radians in a 90 degree angle. We divide this constant by 10 and store the result as the RotationRate for use later. This number will be added to the _rotationAmount Single, which will be referenced when the animated piece is drawn.

Working with radians

All angular math is handled in radians in XNA. A complete (360 degree) circle contains 2*pi radians. In other words, one radian is equal to about 57.29 degrees.
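As a quick numeric check (written in plain Python rather than the project's Visual Basic, purely to verify the arithmetic), both the degree equivalence and the per-frame rotation rate work out as described:

```python
import math

# One radian is about 57.29 degrees; a full circle is 2 * pi radians.
degrees_per_radian = math.degrees(1)      # approximately 57.2958

# The equivalent of MathHelper.PiOver2 / 10: a 90-degree turn
# spread evenly over 10 animation frames.
rotation_rate = (math.pi / 2) / 10
total_rotation = rotation_rate * 10       # accumulated after 10 ticks
assert math.isclose(total_rotation, math.pi / 2)
assert math.isclose(math.degrees(total_rotation), 90.0)
```

After 10 updates the accumulated rotation lands exactly on a quarter turn, which is why rotationTicksRemaining starts at 10.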
We tend to relate to circles more often in terms of degrees (a right angle being 90 degrees, for example), so if you prefer to work with degrees, you can use the MathHelper.ToRadians() method to convert your values when supplying them to XNA classes and methods.

The final declaration, rotationTicksRemaining, is reduced by one each time the piece is updated. When this counter reaches zero, the piece has finished animating. When the piece is drawn, the RotationAmount property is referenced by a spriteBatch.Draw() call, and returns either the _rotationAmount variable (in the case of a clockwise rotation) or 2*pi (a full circle) minus the _rotationAmount, if the rotation is counter-clockwise.

The constructor in step 6 illustrates how the parameters passed to a constructor can be forwarded to the class' parent constructor via the MyBase call. Since the GamePiece class has a constructor that accepts a piece type, we can pass that information along to its constructor, while using the second parameter (clockwise) to update the Clockwise member that does not exist in the GamePiece class. In this case, since both the Clockwise member variable and the clockwise parameter have identical names, we specify Me.Clockwise to refer to the member of the RotatingPiece class. Simply writing clockwise in this scope refers only to the parameter passed to the constructor.

Me notation

You can see that it is perfectly valid for Visual Basic code to have method parameter names that match the names of class variables, thus potentially hiding the class variables from being used in the method (since referring to the name inside the method will be assumed to refer to the parameter). To ensure that you can always access your class variables even when a parameter name conflicts, you can preface the variable name with Me. when referring to the class variable. Me. indicates to Visual Basic that the variable you want to use is part of the class, and not a local method parameter.
In C#, a similar type of notation is used, prefacing class-level members with this. to access a hidden variable.

Lastly, the UpdatePiece() method simply increases the _rotationAmount member, while decreasing the rotationTicksRemaining counter (using MathHelper.Max() to ensure that the value does not fall below zero).

Time for action – falling pieces

Add a new class to the Flood Control project called FallingPiece. Add the Inherits line after the class declaration as follows:

Inherits GamePiece

Add the following declarations to the FallingPiece class:

Public VerticalOffset As Integer
Public Shared FallRate As Integer = 5

Add a constructor for the FallingPiece class:

Public Sub New(type As String, verticalOffset As Integer)
    MyBase.New(type)
    Me.VerticalOffset = verticalOffset
End Sub

Add a method to update the piece:

Public Sub UpdatePiece()
    VerticalOffset = CInt(MathHelper.Max(0, VerticalOffset - FallRate))
End Sub

What just happened?

Simpler than a RotatingPiece, a FallingPiece is also a child of the GamePiece class. A FallingPiece has an offset (how high above its final destination it is currently located) and a falling speed (the number of pixels it will move per update). As with a RotatingPiece, the constructor passes the type parameter to its base class constructor and uses the verticalOffset parameter to set the VerticalOffset member. Again, we use the Me. notation to differentiate the two identifiers of the same name. Lastly, the UpdatePiece() method subtracts FallRate from VerticalOffset, again using the MathHelper.Max() method to ensure that the offset does not fall below zero.

Time for action – fading pieces

Add a new class to the Flood Control project called FadingPiece.
Add the following line to indicate that FadingPiece also inherits from GamePiece:

Inherits GamePiece

Add the following declarations to the FadingPiece class:

Public AlphaLevel As Single = 1.0
Public Shared AlphaChangeRate As Single = 0.02

Add a constructor for the FadingPiece class as follows:

Public Sub New(type As String, suffix As String)
    MyBase.New(type, suffix)
End Sub

Add a method to update the piece:

Public Sub UpdatePiece()
    AlphaLevel = MathHelper.Max(0, AlphaLevel - AlphaChangeRate)
End Sub

What just happened?

The simplest of our animated pieces, the FadingPiece only requires an alpha value (which always starts at 1.0, or fully opaque) and a rate of change. The FadingPiece constructor simply passes the parameters along to the base constructor. When a FadingPiece is updated, AlphaLevel is reduced by AlphaChangeRate, making the piece more transparent.

Managing animated pieces

Now that we can create animated pieces, it will be the responsibility of the GameBoard class to keep track of them. In order to do that, we will define a Dictionary object for each type of piece. A Dictionary is a collection object similar to a List, except that instead of being organized by an index number, a Dictionary consists of a set of key and value pairs. In an array or a List, you might access an entity by referencing its index, as in dataValues(2) = 12. With a Dictionary, the index is replaced with your desired key type. Most commonly, this will be a string value. This way, you can do something like fruitColors("Apple") = "red".
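Before wiring this into the GameBoard, the bookkeeping idea can be sketched in Python, whose built-in dict plays the role of the .NET Dictionary. The "column_row" string keys and the clamped fade step below mirror the description in this section; the names themselves are illustrative, not the book's code:

```python
# Sketch: tracking fading pieces in a dict keyed by "column_row"
# strings such as "5_4", standing in for Dictionary(Of String, ...).

ALPHA_CHANGE_RATE = 0.02  # fade step per update, like AlphaChangeRate

def make_key(x, y):
    # "5_4" identifies the piece in column 5, row 4
    return f"{x}_{y}"

def update_alpha(alpha):
    # Mirrors AlphaLevel = MathHelper.Max(0, AlphaLevel - AlphaChangeRate)
    return max(0.0, alpha - ALPHA_CHANGE_RATE)

fading = {make_key(5, 4): 1.0}                 # starts fully opaque
fading["5_4"] = update_alpha(fading["5_4"])    # now about 0.98
```

Looking a key up, as in fading["5_4"], is the dict equivalent of FadingPieces("5_4") in Visual Basic.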
Time for action – updating GameBoard to support animated pieces

In the declarations section of the GameBoard class, add three Dictionaries, shown as follows:

Public FallingPieces As Dictionary(Of String, FallingPiece) = New Dictionary(Of String, FallingPiece)
Public RotatingPieces As Dictionary(Of String, RotatingPiece) = New Dictionary(Of String, RotatingPiece)
Public FadingPieces As Dictionary(Of String, FadingPiece) = New Dictionary(Of String, FadingPiece)

Add methods to the GameBoard class to create new animated piece entries in the Dictionaries:

Public Sub AddFallingPiece(x As Integer, y As Integer, type As String, verticalOffset As Integer)
    FallingPieces.Add(x.ToString() + "_" + y.ToString(), New FallingPiece(type, verticalOffset))
End Sub

Public Sub AddRotatingPiece(x As Integer, y As Integer, type As String, clockwise As Boolean)
    RotatingPieces.Add(x.ToString() + "_" + y.ToString(), New RotatingPiece(type, clockwise))
End Sub

Public Sub AddFadingPiece(x As Integer, y As Integer, type As String)
    FadingPieces.Add(x.ToString() + "_" + y.ToString(), New FadingPiece(type, "W"))
End Sub

Add the ArePiecesAnimating() method to the GameBoard class:

Public Function ArePiecesAnimating() As Boolean
    If (FallingPieces.Count + FadingPieces.Count + RotatingPieces.Count) = 0 Then
        Return False
    Else
        Return True
    End If
End Function

Add the UpdateFadingPieces() method to the GameBoard class:

Public Sub UpdateFadingPieces()
    Dim RemoveKeys As Queue(Of String) = New Queue(Of String)
    For Each thisKey As String In FadingPieces.Keys
        FadingPieces(thisKey).UpdatePiece()
        If FadingPieces(thisKey).AlphaLevel = 0 Then
            RemoveKeys.Enqueue(thisKey)
        End If
    Next
    While RemoveKeys.Count > 0
        FadingPieces.Remove(RemoveKeys.Dequeue())
    End While
End Sub

Add the UpdateFallingPieces() method to the GameBoard class:

Public Sub UpdateFallingPieces()
    Dim RemoveKeys As Queue(Of String) = New Queue(Of String)
    For Each thisKey As String In FallingPieces.Keys
        FallingPieces(thisKey).UpdatePiece()
        If FallingPieces(thisKey).VerticalOffset = 0 Then
            RemoveKeys.Enqueue(thisKey)
        End If
    Next
    While RemoveKeys.Count > 0
        FallingPieces.Remove(RemoveKeys.Dequeue())
    End While
End Sub

Add the UpdateRotatingPieces() method to the GameBoard class as follows:

Public Sub UpdateRotatingPieces()
    Dim RemoveKeys As Queue(Of String) = New Queue(Of String)
    For Each thisKey As String In RotatingPieces.Keys
        RotatingPieces(thisKey).UpdatePiece()
        If RotatingPieces(thisKey).rotationTicksRemaining = 0 Then
            RemoveKeys.Enqueue(thisKey)
        End If
    Next
    While RemoveKeys.Count > 0
        RotatingPieces.Remove(RemoveKeys.Dequeue())
    End While
End Sub

Add the UpdateAnimatedPieces() method to the GameBoard class as follows:

Public Sub UpdateAnimatedPieces()
    If (FadingPieces.Count = 0) Then
        UpdateFallingPieces()
        UpdateRotatingPieces()
    Else
        UpdateFadingPieces()
    End If
End Sub

What just happened?

After declaring the three Dictionary objects, we have three methods used by the GameBoard class to populate them when necessary. In each case, the key is built in the form X_Y, so an animated piece in column 5 on row 4 will have a key of 5_4. Each of the three Add... methods simply passes the parameters along to the constructor for the appropriate piece type, after determining the key to use.

When we begin drawing the animated pieces, we want to be sure that animations finish playing before responding to other input or taking other game actions (like creating new pieces). The ArePiecesAnimating() method returns true if any of the Dictionary objects contain entries. If they do, we will not process any more input or fill empty holes on the game board until they have completed.

The UpdateAnimatedPieces() method will be called from the game's Update() method, and is responsible for calling the three different update methods above (UpdateFadingPieces(), UpdateFallingPieces(), and UpdateRotatingPieces()) for any animated pieces currently on the board.
The first line in each of these methods declares a Queue object called RemoveKeys. We will need this because Visual Basic does not allow you to modify a Dictionary (or a List, or any of the similar generic collection objects) while a For Each loop is processing it.

A Queue is yet another generic collection object that works like a line at a bank. People stand in line and await their turn to be served. When a bank teller is available, the first person in the line transacts his or her business and leaves, and the next person steps forward. This type of processing is known as FIFO (First In, First Out). Using the Enqueue() and Dequeue() methods of the Queue class, objects can be added to the Queue (Enqueue()), where they await processing. When we want to deal with an object, we Dequeue() the oldest object in the Queue and handle it. Dequeue() returns the first object waiting to be processed, which is the oldest object added to the Queue.

Collection classes

The .NET Framework provides a number of different collection classes, such as the Dictionary, Queue, List, and Stack objects. Each of these classes provides different ways to organize and reference the data in them. For information on the various collection classes and when to use each type, see the MSDN documentation on collections.

Each of the update methods loops through all of the keys in its own Dictionary and, in turn, calls the UpdatePiece() method for each key. Each piece is then checked to see if its animation has completed. If it has, its key is added to the RemoveKeys queue. After all of the pieces in the Dictionary have been processed, any keys that were added to RemoveKeys are removed from the Dictionary, eliminating those animated pieces.

If there are any FadingPieces currently active, those are the only animated pieces that UpdateAnimatedPieces() will update. When a row is completed, the scoring tiles fade out, the tiles above them fall into place, and new tiles fall in from above.
We want all of the fading to finish before the other tiles start falling (otherwise it would look strange as the new tiles passed through the fading old tiles).

Fading pieces
In the discussion of UpdateAnimatedPieces(), we stated that fading pieces are added to the board whenever the player completes a scoring chain. Each piece in the chain is replaced with a fading piece.

Time for action – generating fading pieces
In the Game1 class, modify the CheckScoringChain() method by adding the following call inside the For Each loop, before the square is set to Empty:

    _gameBoard.AddFadingPiece(
        CInt(thisPipe.X),
        CInt(thisPipe.Y),
        _gameBoard.GetSquare(CInt(thisPipe.X), CInt(thisPipe.Y)))

What just happened?
Adding fading pieces is simply a matter of getting the type of piece currently occupying the square that we wish to remove (before it is replaced with an empty square) and adding it to the FadingPieces dictionary. We need the CInt typecasts because the thisPipe variable is a Vector2, which stores its X and Y components as Singles.

Falling pieces
Falling pieces are added to the game board in two possible locations: from the FillFromAbove() method, when a piece is being moved from one location on the board to another, and in the GenerateNewPieces() method, when a new piece falls in from the top of the game board.

Time for action – generating falling pieces
Modify the FillFromAbove() method of the GameBoard class by adding a call to generate falling pieces right before the rowLookup = -1 line (inside the If block):

    AddFallingPiece(x, y, GetSquare(x, y),
        GamePiece.PieceHeight * (y - rowLookup))

Update the GenerateNewPieces() method by adding the following call, right after the RandomPiece(x, y) line:

    AddFallingPiece(x, y, GetSquare(x, y),
        GamePiece.PieceHeight * (GameBoardHeight + 1))

What just happened?
When FillFromAbove() moves a piece downward, we now create an entry in the FallingPieces dictionary that is equivalent to the newly moved piece.
The vertical offset is set to the height of a piece (40 pixels) times the number of board squares the piece was moved. For example, if the empty space is at location 5, 5 on the board, and the piece above it (5, 4) is being moved down one square, the animated piece is created at 5, 5 with an offset of 40 pixels (5 - 4 = 1, times 40).

When new pieces are generated for the board, they are added with an offset equal to the height (in pixels) of the game board (recall that we specified the height as one less than the real height, to account for the extra element allocated in the boardSquares array), determined by multiplying the GamePiece.PieceHeight value by GameBoardHeight + 1. This means they will always start above the playing area and fall into it.

Rotating pieces
The last type of animated piece we need to add during play is the rotating piece. This piece type is added whenever the user clicks on a game piece.

Time for action – modify Game1 to generate rotating pieces
Update the HandleMouseInput() method in the Game1 class to add rotating pieces to the board by adding the following inside the If mouseInfo.LeftButton = ButtonState.Pressed block, before _gameBoard.RotatePiece() is called:

    _gameBoard.AddRotatingPiece(x, y,
        _gameBoard.GetSquare(x, y), False)

Still in HandleMouseInput(), add the following in the same location inside the If block for the right mouse button:

    _gameBoard.AddRotatingPiece(x, y,
        _gameBoard.GetSquare(x, y), True)

What just happened?
Recall that the only difference between a clockwise rotation and a counter-clockwise rotation (from the standpoint of the AddRotatingPiece() method) is a True or False in the final parameter. Depending on which button is clicked, we simply add the current square (before it gets rotated; otherwise, the starting point for the animation would be the final position), with True for right-mouse clicks or False for left-mouse clicks.
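The offset arithmetic from the falling-piece discussion above can be captured in a tiny sketch (Python for brevity; PIECE_HEIGHT matches GamePiece.PieceHeight, while the BOARD_HEIGHT value of 10 is an assumed stand-in for GameBoardHeight):

```python
PIECE_HEIGHT = 40   # pixel height of one piece (GamePiece.PieceHeight)
BOARD_HEIGHT = 10   # squares per column; assumed stand-in for GameBoardHeight

def falling_offset(dest_row, source_row):
    """Offset for a piece moved down the board:
    piece height times the number of rows it moved."""
    return PIECE_HEIGHT * (dest_row - source_row)

def new_piece_offset():
    """Offset for a newly generated piece: it starts one full
    board height (plus one row) above its destination."""
    return PIECE_HEIGHT * (BOARD_HEIGHT + 1)
```

For the worked example above, falling_offset(5, 4) gives 40 pixels (one row moved); with these assumed values, every new piece starts 440 pixels above its destination, so it always falls in from outside the playing area.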