Unity iOS Essentials

By Robert Wiebe

About this book

Unity is the perfect tool for creating games for a number of platforms. Whether it's for the iPhone, iPod Touch, iPad, or some combination of the three, this book takes a practical approach that maximizes performance while maintaining compatibility across the entire mobile device lineup.

Unity iOS Essentials takes you through all aspects of Unity 3D game development for iOS. Mobile game development means thinking about mobile performance, multiple device screen resolutions, deploying a single application to multiple devices, understanding .NET integration, taking advantage of advanced lighting and culling systems, and integrating a mobile game with a server back end. This comprehensive book on developing Unity 3D games for iOS covers it all and demonstrates the concepts using practical tips, scripts, shaders, and projects.

Learn how to plan for supporting multiple iOS devices before you write your first script. Consider performance and advanced game topics that will speed up your game development while solving real-world problems. Add GUIs, use sophisticated .NET programming concepts, examine scripts and shaders, interact with servers, and work through projects that deal with real issues. Unity iOS Essentials provides you with a fully illustrated and fully commented guide for realising your game idea using advanced concepts that work on the iOS family of mobile devices.

Publication date:
December 2011
Publisher
Packt
Pages
358
ISBN
9781849691826

 

Chapter 1. Planning Ahead for a Unity3D iOS Game

Before we start to develop our game, there is some planning that needs to be done. We want to do this planning, because it's a lot easier to plan first and develop second than it is to rework and rework again as we develop our game.

Note

This book has been tested with Unity3D version 3.4.

A good example of this is the relative sizes of our 3D objects. It is really important to scale objects to the correct size. If we simply started developing a game and making 3D game assets without scaling them correctly, we may later find out that our physics are not working as expected. This is because the physics engine expects objects to be of the correct size. To get the physics to work correctly, we would need to rescale every object in our game. It brings a whole new meaning to the phrase "measure twice, cut (model in this case) once". Using Unity3D's default unit scale, the PhysX engine used by Unity3D expects one unit in your model to represent one meter.
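As a quick sanity check on scale, the unit conversion can be worked through in code. The following is an illustrative sketch in plain JavaScript; the unit table and function name are our own, not a Unity3D API:

```javascript
// Unity3D's default physics scale assumes 1 unit = 1 meter.
// If a model was authored in another unit, compute the import
// scale factor needed so that physics behaves as expected.
// (Hypothetical helper for illustration; unit names are assumptions.)
var METERS_PER_UNIT = { m: 1.0, cm: 0.01, mm: 0.001, inch: 0.0254 };

function importScaleFactor(modelUnit) {
  var factor = METERS_PER_UNIT[modelUnit];
  if (factor === undefined) {
    throw new Error("Unknown unit: " + modelUnit);
  }
  return factor;
}

// A character modeled 180 cm tall should end up 1.8 Unity units tall:
var heightInModelUnits = 180;
var unityHeight = heightInModelUnits * importScaleFactor("cm");
```

Getting this factor right at import time is exactly the "measure twice, cut once" step: it avoids rescaling every object later.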

This chapter will walk through the important things we need to consider when planning our game, so that we get them right the first time and save ourselves a lot of time and trouble when we release our game.

In this chapter, we will learn the following:

  • The differences between the capabilities of the various iOS devices (iPhone, iPhone with Retina Display, iPod Touch, iPad)

  • Desktop functionality that is not supported on iOS devices and how to work around the limitations it imposes

  • Which iOS Software Development Kit (SDK) to use, regardless of the iOS version being targeted

  • How to set up Unity3D to target multiple iOS devices with a single application (universal binary for iOS)

  • How to plan game levels that are efficient on iOS devices

iOS device differences

Before developing our game for different iOS devices, it is important that we understand their different capabilities. For the purposes of this book, we recommend using the latest iOS device available. We may choose to target more than one iOS platform; however, it should be noted that our game may need to configure itself at runtime to accommodate the capabilities of the individual devices.

The following is a brief overview of the capabilities of each device. The columns indicate capabilities, while the rows indicate the specific device:

Graphics* Normal

Device     | Vertex-Lit | Diffuse | Decal | Detail | NormalMap | Specular
iPhone     | 1          | 1.5     | 1.5   | 2.5    | 2         | 2
iPod Touch | 1          | 1.5     | 1.5   | 2.5    | 2         | 2
iPad       | 1          | 1       | 1.5   | 2      | 2         | 2

Graphics* Complex

Device     | Parallax | NormalMap Parallax | Reflective | Other
iPhone     | 2.5      | 3                  | 2.5        | 2.5
iPod Touch | 2.5      | 3                  | 2.5        | 2.5
iPad       | 2.5      | 3                  | 2.5        | 2

Note

Graphics* capability varies with the device's iOS generation; we recommend deploying for the latest iOS devices only if we believe our game will be graphics intensive. The shader ratings given previously assume that you are running the latest software.

If we consider shader support (note that Unity3D does include a number of shaders optimized for performance on mobile devices), the numbers in each box indicate the degree of support on each device, where the following numbers have the associated meanings:

  • 0: Not supported

  • 1: Supported

  • 2: Use sparingly

  • 3: Supported but not recommended. This means that the shader will work, but the performance cost for using it is high and so it should be used only if there is no better option available.

If we plan to use transparency, then we need to add 0.5 to the rating given in the previous table.
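The rating scheme, including the 0.5 transparency surcharge, can be captured in a small lookup. The following plain JavaScript sketch uses the iPhone/iPod Touch row of the tables above; the function name and the choice to treat unknown shaders as unsupported are our own assumptions:

```javascript
// Shader support ratings from the tables above (iPhone/iPod Touch row).
// 0 = not supported, 1 = supported, 2 = use sparingly,
// 3 = supported but not recommended.
var ratings = {
  "Vertex-Lit": 1, "Diffuse": 1.5, "Decal": 1.5,
  "Detail": 2.5, "NormalMap": 2, "Specular": 2,
  "Parallax": 2.5, "NormalMap Parallax": 3, "Reflective": 2.5
};

// Transparency adds 0.5 to the base rating.
function effectiveRating(shader, transparent) {
  var base = ratings[shader];
  if (base === undefined) return 0; // treat unknown shaders as unsupported
  return transparent ? base + 0.5 : base;
}
```

For example, a transparent Diffuse material lands at 2 ("use sparingly") on these devices.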

Graphics are by far one of the most important considerations, because they will, most likely, be the primary limitation that we will encounter during development.

It is understood that the shaders discussed here use OpenGL ES 2.x. Devices or deployment settings that use OpenGL ES 1.x will not support certain shaders, such as those that use cube maps or normal maps. To test shader support in the editor, we can switch the graphics emulation under the Edit | Graphics Emulation menu, as shown in the following screenshot:

Note

Graphics Emulation menu

The contents of the Graphics Emulation menu will change based on the platform build settings that you have selected. Because we have selected iOS as our build platform, we see only the iOS-specific Graphics Emulation options.

We also need to consider the technical specifications shown in the following table when we are:

  • Designing the graphics and levels for our game

  • Deciding which devices we want to target for our game deployment

Technical Specifications Base

Device     | CPU MHz | GPU     | Memory | Screen  | GPS      | Mic | OpenGL ES 2.0
iPhone 4   | 800     | See CPU | 512 MB | 326 ppi | YES      | YES | YES
iPhone 3GS | 600     | SGX     | 256 MB | 163 ppi | YES      | YES | Emulated
iPhone 3G  | 412     | MBX     | 128 MB | 163 ppi | YES      | YES | NO
iPod Touch | 533     | MBX     | 128 MB | 163 ppi | Emulated | NO  | NO
iPad       | 1k      | See CPU | 256 MB | 132 ppi | Emulated | YES | YES
iPad 2     | 1k Dual | See CPU | 512 MB | 132 ppi | YES      | YES | YES

Technical Specifications Auxiliary

Device     | Vibration | Compass | 3G  | Silent Switch | Lock Orientation
iPhone 4   | YES       | YES     | YES | YES           | NO
iPhone 3GS | YES       | NO      | YES | YES           | NO
iPhone 3G  | YES       | NO      | YES | YES           | NO
iPod Touch | NO        | NO      | NO  | NO            | NO
iPad       | NO        | NO      | NO  | NO            | NO
iPad 2     | NO        | YES     | YES | YES           | YES

You can find detailed information on all this at http://unity3d.com/support/documentation/Manual/iphone-Hardware.html.

 

Unity support for multiple iOS devices


There are several Unity3D functions found in desktop and console games that are not, for performance reasons, supported on iOS platforms. This section will outline the functionality that is unsupported and, where possible, the ways that we can work around these platform limitations.

Terrain and trees

The Unity3D terrain engine is designed for platforms that can push high vertex counts. The soft bodies that would be used to simulate wind blowing through the trees and waving grass are available, but not recommended, on current-generation iOS platforms, with the exception of the iPad 2 (and later).

We can still create our own terrain using a 3D modeling program. If the 3D modeling program can programmatically create terrain, we need to make sure that it uses a relatively low number of polygons. If the 3D modeling program does not allow us to control the number of polygons, we can create the terrain object and then remove edges to reduce the triangle count.

An easy way to create a low polygon count terrain is to do the following:

  1. Draw an aerial view of the level

  2. Map the drawing to a plane

  3. Cut the major shapes out of the plane

  4. Extrude the major shapes

Trees can be modeled quite easily in a cartoon or semi-cartoon style. If more realistic trees are desired, then we can use two intersecting (non-backface-culled) planes with an image mapped onto the faces, as shown in the following screenshot:

Tree and grass movements can be simulated on iOS using baked animation. Methods for creating foliage will be covered in more detail later.

It may sound easy, but we shouldn't jump into making our level just yet; first, we need to review the information contained in the section, Planning efficient levels.

Cut scenes

On desktop PC platforms, Unity3D usually handles cutscenes through the use of a camera facing a movie texture. However, movie textures are not supported on iOS devices; instead, fullscreen streaming playback is provided. Unfortunately, streaming playback suffers from a number of limitations, the worst of which is that it does not respect device orientation, so your cutscene may begin playing upside down.

This is clearly one area where Unity3D for iOS requires more work on the part of the Unity3D development team.

A potential alternative to using a movie for cutscenes is Machinima, which refers to cueing animations within the engine rather than playing a pre-rendered clip. This is definitely not for the faint of heart: an effort was started in 2009 as a Summer of Code project to create a Machinima editor for Unity3D, but that project has stalled and may never be completed.

At this stage of Unity3D for iOS, we recommend using cutscenes only in games, where the addition of the cutscenes outweighs the cost of creating them.

Audio

iOS devices can play back only one compressed audio clip at a time. This means that if you want to play more than one sound simultaneously, you need to decide which of the clips should be uncompressed.

Given the limited memory available to games on iOS, it is likely that large uncompressed audio clips will cause our game to get memory warnings and even be forced to terminate by iOS. This kind of application crash can be very difficult to track down, so it is better to do some judicious planning and avoid it altogether.

Therefore, it is recommended that the compressed stream be used for the longest audio clip in a scene, such as background music, and all other audio clips be kept to the shortest duration possible.

The exception to this rule would be when doing Machinima (see cutscenes), where the entire sequence of audio will be consolidated into a single compressed track, where all of the events are queued to match.
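The clip-selection rule above can be sketched as a small planning function. This is plain JavaScript for illustration; the clip object shape and function name are our own, not a Unity3D API:

```javascript
// Given a scene's audio clips, keep only the single longest clip
// compressed (typically the background music) and mark the rest
// uncompressed, following the one-compressed-clip-at-a-time limit
// on iOS devices described above.
function planAudioCompression(clips) {
  var longest = null;
  for (var i = 0; i < clips.length; i++) {
    if (longest === null || clips[i].seconds > longest.seconds) {
      longest = clips[i];
    }
  }
  return clips.map(function (clip) {
    return { name: clip.name, compressed: clip === longest };
  });
}
```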

Lighting

iOS devices only support the forward rendering path. The Unity3D manual describes this path as follows:

Forward rendering

"Forward is a shader-based rendering path. It supports per-pixel lighting (including normal maps and light Cookies) and real-time shadows from one directional light. In the default settings, a small number of the brightest lights are rendered in per-pixel lighting mode. The rest of the lights are calculated at object vertices."

As such, lights are relatively expensive to render.

Note

In scenes with several lights, some lights will only be calculated at object vertices. Vertex lighting, combined with the low-polygon nature of models on iOS devices, can leave our game with patches of light rather than fully lit areas.

For dynamic lighting, in the order of rendering cost from least to most expensive, we have the following:

  • Directional: This is a type of lighting that is infinitely far away and affects everything in the scene, such as sunlight

  • Point: This is a type of lighting that affects everything within the range of its position in the scene

  • Spot: This is a type of lighting that affects everything in a cone shape, defined by its angle and range from its position in the scene

In terms of shader lighting, in order of rendering cost, from least to most expensive, we have the following:

  • Un-lit: This type of lighting is not applied, and the object's illumination depends only on the textures applied to it

  • Vertex-lit (generally, but not always): Lighting from all lights is calculated at the vertices and applied in a single pass, so each object is only drawn once

  • Pixel-lit: This type of lighting is calculated as each pixel is drawn and needs to be recalculated for each light in the scene that affects the pixel. As a result, the object is drawn multiple times, once for each light in the scene that affects it
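The multiplicative cost of pixel lighting can be made concrete with a rough draw-call estimate. This is a simplified model with field names of our own choosing; it ignores batching and other engine optimizations:

```javascript
// Rough draw-call estimate for a scene: a pixel-lit object is drawn
// once per light that affects it, while an unlit or vertex-lit
// object is drawn once. Simplified: real engines batch and cull.
function estimateDrawCalls(objects) {
  var calls = 0;
  for (var i = 0; i < objects.length; i++) {
    var o = objects[i];
    calls += o.pixelLit ? Math.max(1, o.affectingLights) : 1;
  }
  return calls;
}
```

One pixel-lit object touched by three lights costs as much here as three vertex-lit objects, which is why pixel lights must be budgeted carefully on iOS.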

Lights can be configured as either lightmapped-only or as realtime-only with a configured degree of importance.

Configuring lights as lightmapped-only will cause them to appear only in the lightmap bitmap, with no dynamic lighting. This kind of lighting is appropriate where an area needs to be lit, but not dynamically, such as instances where a single dynamic light is the focus, or where we will be baking scenery in the background (this methodology will be explained later).

Note

It is also important to note that lightmapped lights will not display light cookies. Because of that limitation of lightmapping, lights with cookies should be set to realtime-only with a degree of importance set to important or automatic to achieve the desired effect.

Configuring lights as realtime-only is useful for lighting objects that will always be moving around. Moving objects should not be included in the lightmap and should never be marked as static.

Realtime lights can be assigned a degree of importance. They are as follows:

  • Automatic (the default)

  • Important

  • Unimportant

Important lights will always be rendered when visible. Unimportant lights are given lowest priority for rendering and may not be rendered. Automatic lights will also not always render, but are given higher priority than unimportant lights.
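One way to picture how importance could drive light selection is the sketch below. Unity3D's actual heuristic also weighs factors such as brightness and distance, so treat this as an approximation with invented names:

```javascript
// Approximate light selection by importance: Important lights are
// always rendered, Automatic lights fill the remaining budget, and
// Unimportant lights come last and may be dropped. A sketch only;
// Unity3D's real heuristic also considers brightness and distance.
function selectPixelLights(lights, budget) {
  var order = { Important: 0, Automatic: 1, Unimportant: 2 };
  var sorted = lights.slice().sort(function (a, b) {
    return order[a.importance] - order[b.importance];
  });
  var selected = [];
  for (var i = 0; i < sorted.length; i++) {
    if (sorted[i].importance === "Important" || selected.length < budget) {
      selected.push(sorted[i].name);
    }
  }
  return selected;
}
```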

Rather than using dynamic lighting, lightmapping should be used, when possible, to render much higher quality effects with less bandwidth.

Shadows

On desktop platforms, shadows would also be an important consideration, but dynamic shadows are not supported on mobile platforms. On mobile platforms, you will need to use lightmap textures instead.

Modeling and Animation

Depending on their age, iOS devices can push between 10k and 30k vertices. The maximum number of vertices that the original iPhone can push is 10k, and the iPhone 4 can push 30k vertices. In addition, devices will slow down depending on the number of polygons that they need to render. Trying to push too many vertices or render too many polygons will result in degraded frame rates and choppy gameplay.
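A simple budget check makes the vertex limits concrete. The endpoint numbers below come from the 10k to 30k range quoted above; the middle tier is our own guess, included purely for illustration:

```javascript
// Illustrative vertex budgets by device generation, based on the
// 10k to 30k range quoted above (the 3GS tier is our own guess).
var vertexBudget = { iPhone: 10000, iPhone3GS: 20000, iPhone4: 30000 };

function withinVertexBudget(device, sceneVertexCount) {
  var budget = vertexBudget[device];
  if (budget === undefined) return false; // unknown device: be conservative
  return sceneVertexCount <= budget;
}
```

A check like this, run against per-level vertex counts during planning, catches over-budget scenes before they show up as choppy gameplay on hardware.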

The FrameCounter.js script, shown as follows, can be attached to a game object that contains a guiText item to display a Frames Per Second (FPS) counter, which can be used to gauge the performance of our game. Typically, gameplay is smooth until the FPS counter falls below 20. This is shown in the following code:

// Attach this to a GUIText to make a frames/second indicator.
//
// It calculates frames/second over each updateInterval,
// so the display does not keep changing wildly.
//
// It is also fairly accurate at very low FPS counts (<10).
// We do this not by simply counting frames per interval, but
// by accumulating FPS for each frame. This way we end up with
// the correct overall FPS even if the interval renders something
// like 5.5 frames.
//
// You need to create a game object, add a GUIText to it,
// and set the Pixel Offset so that the text appears on
// the screen.

// This is the frequency at which the FPS is displayed
var updateInterval = 0.5;
// This is the FPS accumulated over the interval
private var accum : float = 0.0;
// This is the number of frames rendered over the interval
private var frames : int = 0;
// This is the time left in the current interval
private var timeleft : float;

function Start()
{
    // Make sure a guiText exists on the game object
    if (!guiText)
    {
        print("FrameCounter needs a GUIText component!");
        enabled = false;
        return;
    }
    // Set the time remaining equal to the interval time
    timeleft = updateInterval;
}

function Update()
{
    // Subtract deltaTime from the time left
    timeleft = timeleft - Time.deltaTime;
    // Accumulate the FPS over the interval
    accum = accum + Time.timeScale / Time.deltaTime;
    // Add one to the number of frames rendered
    frames = frames + 1;
    // The interval ended; display the FPS and reset the counters
    if (timeleft <= 0.0)
    {
        guiText.text = (accum / frames).ToString("f2");
        timeleft = updateInterval;
        accum = 0.0;
        frames = 0;
    }
}

The following screenshots show how the game object will appear after you have added the Frame Counter (Script) in the Unity3D editor and what it would look like in a running game:

Characters animated using bones (skinned meshes) should have between 1 and 30 bones. A good compromise is to let minor characters have 10 to 15 bones, while major characters (such as the player character) have 20 to 25 bones. If only one skinned mesh is used, Unity3D has optimizations related to visibility culling and bounding volume updating. These optimizations are useful for games, where only one skinned mesh is needed for a player character (which is often the case).

Rendering performance can be optimized by atlasing textures that will commonly be rendered together. The following screenshot shows the image settings for a very small texture:

The small texture is used on all the spheres, as shown in the following screenshot. The spheres use different meshes with UVW-maps that project different sections of the same texture onto each sphere. In this case, the mapped texture would be referred to as the texture atlas:

This only works with static geometry or objects that are of the same mesh. Also, as previously mentioned in the section, Lighting, for pixel-lit models, if there are multiple lights illuminating the object, the number of draw calls required to render the objects will increase.
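The draw-call saving from sharing one atlased material can be sketched by counting distinct materials rather than objects. This is a simplified model of batching with invented field names; as noted above, real batching also requires static geometry or identical meshes:

```javascript
// Objects sharing one material (for example, one atlased texture)
// can be batched into a single draw call. A rough way to see the
// saving is to count distinct materials instead of objects.
function batchedDrawCalls(objects) {
  var materials = {};
  for (var i = 0; i < objects.length; i++) {
    materials[objects[i].material] = true;
  }
  return Object.keys(materials).length;
}
```

Five spheres sampling different regions of the same atlas collapse to a single material, and so to a single (potential) draw call.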

Cloth

Cloth is unsupported on iOS devices; it can, however, be simulated with a baked animation. For example, using reactor in 3D Studio Max will allow you to make a flowing cape for your character, but this cape will not collide with Unity3D rigid bodies.

Simulating cloth on unsupported platforms is usually more trouble than it is worth and adds only a marginal visual effect.

Having said that, one practical time to use a cloth animation is while making a flag blow in the wind, in an area unreachable by the player.

Code stripping

Unity3D can use code stripping to remove unwanted libraries from the engine. However, code stripping should be used with care: stripping may break your code if you use certain functionality. Unity3D is not particularly careful when it comes to stripping; it is assumed that you know what you are doing. If you set your stripping level too high, your game may simply crash when you run it on an iOS device.

The purpose of code stripping is to reduce the size of the application generated by Unity3D. However, using code stripping may limit your ability to use the .NET framework.

Because the built-in Unity3D support for writing to disk is limited, we make extensive use of the .NET framework for reading and writing game preferences. Therefore, we limit code stripping to the Strip byte code level. If we re-implemented our preferences in a different manner that did not use .NET, we could take advantage of additional stripping.

Note

In the player settings, found under Edit | Project Settings | Player, under Optimization, API Compatibility Level, we need to select .NET 2.0 rather than .NET 2.0 Subset. Failure to do so will result in .NET issues that manifest themselves as a crash when we deploy our game to an iOS device.

The iPhone classes and global enumerations

For legacy reasons, the classes and enumerations to access or describe specific iOS functionality begin with the prefix iPhone. It would be reasonable to expect that a future release of Unity3D will deprecate these items and replace them with iOS equivalents.

These classes and global variables are quite useful for determining which specific functionality is available on the device on which your program is running.

iPhoneGeneration: A global variable that lets you determine the generation of the iOS device on which your application is running.

iPhoneInput: Uses location services to provide the device's last measured geographical location. All other forms of input use the more generic Input class.

iPhoneKeyboard: Interface for the software keyboard. We need to test to make sure that our GUI is positioned correctly, so that it is not covered by the keyboard when it is displayed.

iPhoneMovieControlMode: An enumeration used to describe options for movie playback controls.

iPhoneMovieScalingMode: An enumeration used to describe scaling modes for displaying movies.

iPhoneNetworkReachability: An enumeration used to describe network reachability options.

iPhoneSettings: Interface access to iPhone-specific setting information.

iPhoneUtils: Interface to access functions that play movies, vibrate the device, and check whether our binary is genuine.
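A runtime check along these lines lets the game configure itself per device, as suggested earlier in this chapter. In Unity3D, the generation would come from iPhoneGeneration; here it is passed in as a plain string, and the tier assignments are purely illustrative:

```javascript
// Map a device generation to a quality tier so the game can
// configure itself at startup. In Unity3D, the generation would be
// read from iPhoneGeneration; here it is passed in as a string,
// and the tier assignments are our own illustrative choices.
function qualityTier(generation) {
  switch (generation) {
    case "iPhone4":
    case "iPad2Gen":
      return "high";
    case "iPhone3GS":
    case "iPad1Gen":
      return "medium";
    default:
      return "low"; // older or unknown devices get the safe settings
  }
}
```

The returned tier could then drive choices such as shader selection, texture sizes, and pixel-light counts.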

 

Understanding iOS SDKs and target platforms


When we install an iOS SDK from Apple, it's a unique experience. Apple has decided that they only want us to have the latest SDK, on the most recent version of Mac OS X. This is far from the typical experience you may have had with other platforms, where having multiple SDKs installed simultaneously is the norm.

It is possible, and supported by Apple, to have multiple SDKs, but it is not very practical. For example, we could do the following:

  1. Use System Update to install the latest version of Mac OS X.

    Note

    If we skip the step of installing the latest version of Mac OS X, it is more than likely that the iOS SDK installer will refuse to install the latest SDK until we have installed the latest version of Mac OS X, so we'll just make that assumption and do it.

  2. Download the latest iOS SDK.

  3. Install the latest iOS SDK.

  4. Notice that the installer removed the previous iOS SDK.

  5. Use our Time Machine backup to restore the previous SDK.

But that is not what Apple wants us to do, and really, it's not what we want to do either. There may be some exceptions, but for the most part, we want to jump on the Apple bandwagon and recognize that deploying our game using the latest and greatest SDK is indeed the correct thing to do.

So, what we want to do is this:

  1. Use System Update to install the latest version of Mac OS X.

  2. Download the latest iOS SDK.

  3. Install the latest iOS SDK.

  4. Notice that the installer removed the previous iOS SDK.

  5. Don't worry about it; after all, that is what we wanted it to do.

The reason for this cavalier attitude is that the version of iOS on which your game runs has little to do with the iOS SDK that you are using. The version of the iOS on which your game runs is called the Target Platform, and it's perfectly acceptable to have a target platform of iOS 3.0 with the iOS 4.3 SDK.

It took the folks at Unity3D a while to catch on to, and fully support, this idea. In earlier versions of Unity3D, you would need to go into your project settings after each iOS SDK install and update them to reflect the latest version of the SDK (or use Time Machine to restore the removed previous SDK).

Thankfully, those days are gone. Now, setting up our Unity3D project so that it automatically uses the latest SDK, which for the most part is what we want to do, has been greatly simplified.

What we need to do in Unity3D 3.x is this:

  1. Open our Unity3D project.

  2. From the File menu, choose Build Settings.

  3. Under Platform, select iOS.

  4. If required, click the Switch Platform button and wait for our assets to be re-imported.

  5. Click the Player Settings… button.

  6. In the Inspector panel, click Other Settings.

  7. Under Optimization: SDK Version, choose iOS latest (since 4.2), where 4.2 is an iOS SDK version number.

  8. Under Optimization: Target iOS Version, choose the version of iOS on which we wish our game to run.

And that is all there is to it. Now, when we install the latest iOS SDK version and re-build our project, the generated Xcode project should be correctly configured and built. It doesn't get any easier, and it looks like the following screenshots:

 

Set up an iOS App for multiple target building


When the first version of Unity3D was released for the iPhone, it was called Unity iPhone. Though the first version of Unity iPhone was released a few years ago, the technology used in mobile devices has changed dramatically since then.

It should be clear that Apple renamed the iPhone OS to iOS for a reason. And it should be clear that when we are developing a Unity3D game for iOS, we want to target that game to a large audience. The way we do that is by targeting our Unity3D game to build for both the iPhone and the iPad as well as for the different versions of the ARM CPU used in those devices.

If there is one area where we may want to limit, rather than expand, the size of the market for our game, it is in selecting the latest version of OpenGL rather than constraining ourselves, and our game, to the limits of previous OpenGL versions.

Some developers prefer to take it one step further and limit the target market for their game to the latest and greatest iPhone and iPad platform. And that is something well worth considering, since building a game that runs on older hardware will not only limit what we can do in our game, but also dramatically limit the performance of our game.

Regardless of the generation of device that we will target though, we will always want to target our game for both the iPhone (and by extension, the iPod Touch) and the iPad.

Setting up Unity3D to build our game for these two platforms (three if you count the iPod Touch as a separate platform) is easy. These are the steps that we need to perform. The first step is to set up the Splash Images, as follows:

  1. From the File menu, choose Build Settings.

  2. Make sure iOS is selected as the target platform.

  3. Click the Player Settings… button.

  4. In the Splash Image section, as shown in the following screenshot, assign the Texture2D images to be used for each target platform.

The following screenshot shows our Splash Image settings in the Unity3D editor:

The second step is to set up the Target information:

  1. In the Other Settings section, choose Target Device and click the iPhone + iPad button.

  2. In the Other Settings section, choose Target Platform and click Universal arm6+arm7 (Open GL ES 1.1+2.0).

  3. In the Other Settings section, choose Target Resolution and click Native (default device resolution).

The following screenshot shows our Target settings in the Unity3D editor:

 

Planning efficient levels


The virtual world hierarchy can be described using several key words. You may have noticed that these words have been used at earlier points in the book without rigorous definitions. However, they do have specific and precise meanings, which we will use from this point onwards. We need to take some time to review and understand the following important terminology:

World: The root of the game hierarchy, which refers to the entire virtual reality that we will be designing.

Level: A level is an assembly of different scenes to form a larger structure. An example of changing levels would be going from one virtual country to another, where reaching another country cannot be achieved instantaneously and would typically involve travel by land, sea, or air. Because of the potential load times between levels (if we are very good, our game should have no load time), these transitions are usually facilitated by a transition level (like riding the train) or a cutscene.

Mode: Levels may be reused as the setting for multiple events. Certain areas of a level can be made inaccessible, or accessible, by events in the progression of the story. One minute a level could be a peaceful town and the next moment it could be the site of a final battle. Mode refers to the use of the same setting for a different purpose in the progression of the story.

Scene: A scene is defined using the Unity3D definition of a scene. Levels are assembled by loading one scene and then, optionally, additively loading a number of additional scenes.

Section: Sections are usually differentiated by how terrain is organized or split. Sections can be easily confused with scenes, if each separate scene contains only one terrain section. Sections do not possess the same capabilities as scenes, because separate scenes can contain separate lightmapping and occlusion data.

Area: Sections can be further divided into smaller parts, which would be beneficial for occlusion culling.

Room: If you don't want to deal with large expanses, levels or even the world can be room-based. Rooms are like sections separated by doors. If the next scene has not yet loaded, the transition door remains locked or takes longer to open.

Well then, let's get started!

You can't have a game without at least one scene, even if you plan to make a very simple game such as a board game. Levels and their respective scenes should be planned ahead of time; nothing is more annoying than trying to retrofit sections into scenes.

First consideration: Is my game 2D or 3D?

Will our game take place in all three dimensions? Yes, of course it will; we're using a 3D game engine. But how will the player see it? If we lock the player to the X-Y plane, we can easily make a side scroller, and because our geometry lives in a 3D space, we can add depth when we need it to spice things up, simply by changing the camera view from 2D to 3D for those parts of our 2D game. The same goes for an overhead view (the X-Z plane): we see our world from the sky, but if something explodes, the flames can still fly towards us.

If we lock the player to the X-Z plane, not much can be done to improve the performance of our game, except reducing the field of view and the far clip plane (after all, if we're always looking down, there is no reason to keep rendering after we hit the bottom of the level).
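One way to keep a side scroller honest while still using 3D geometry is to pin the player's depth coordinate every frame. The sketch below is illustrative only: `lockToSideScrollPlane` and `planeZ` are names we have invented, and in Unity the same idea would be a one-line assignment to `transform.position` in `Update()`.

```javascript
// Hypothetical helper: lock a position to the X-Y plane used by a
// side scroller. In Unity this would run in Update() on the player,
// writing the result back to transform.position; here it is shown
// as plain logic.
function lockToSideScrollPlane(position, planeZ) {
  // Keep X and Y free, pin Z to the scroller's plane.
  return { x: position.x, y: position.y, z: planeZ };
}

// Example: wherever physics pushed the player, Z snaps back to 0.
var corrected = lockToSideScrollPlane({ x: 4.2, y: 1.0, z: 0.35 }, 0);
```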

Side scrollers are more straightforward to implement. While some optimization may still be required (for example, keeping the use of textures reasonable), there is typically plenty of time and processing power to load the next scene or section, so we can spend less of our time optimizing the game and more of it developing content.

If we create a full 3D game in the first or third person view, we will need to think about a lot more in terms of what to load when, how to ensure the player passes the load trigger, how to store the state of objects in the scene, and much more.

Users have come to appreciate environments that are both large and seamless. However, we shouldn't be afraid to put in some load screens if we don't want to think about such things; this may drastically reduce our development time. We need to try to keep loads short and/or infrequent to optimize the player's experience.

Second consideration: How will I structure my levels?

Levels for iOS devices should be planned in such a way that they can be split up and then additively loaded using the Application.LoadLevelAdditiveAsync ("sceneName") function.

This stage will take the longest period of time to get through, because if careful consideration is not taken to figure out where scene transitions will occur, terrain geometry will have to be restructured and re-UVW-mapped in order to accommodate new additions.

Is my game extensible?

Do we intend to add more to our game in later updates? Do we want to sell the user more levels? If our game is to appear seamless, we need to leave space for a few locked doors (they may take the form of any barricade).

Note

Apple prefers developers to include all their content in a single Application Bundle and upload it in its entirety, rather than provide downloadable content. If we include all of our assets in the bundle, we can unlock the bundled content, rather than download additional content.

A few pointers

We need to place trigger areas that are used to load and unload scenes at strategic locations.

If we cannot load a scene quickly enough, we need to consider placing a large airlock in the way, the player will be temporarily stuck while they wait for the next door to open, which we will do only once the scene loading is complete.

When breaking levels into pieces, unless we have tools to create global UV maps, the transition between sections will be obvious. We can use clever overlapping of door frames, rock formations, foliage, sidewalks, or doors to mask the texturing error between the separate meshes.

We can make our levels have long curvy areas and bottlenecks when we want to load a big scene or for any transition area.

How to set up unloading?

When setting up a scene, all the contents of that scene should be placed inside a single game object with a name identical to that of the scene.

Using a script, we can then use trigger areas to both load the scene by name (a string) when we need it, and destroy it (by finding the game object whose name matches that string) when it is no longer needed.
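The trigger-driven bookkeeping can be sketched as follows. This is a minimal illustration, not the book's actual script: `loadScene` and `destroyScene` are stand-ins for `Application.LoadLevelAdditiveAsync` and `Destroy(GameObject.Find(name))`, and here they just record what happened so the logic can be followed.

```javascript
// Minimal sketch of name-based scene streaming. Each scene's contents
// live under a single root game object named after the scene, so the
// scene name string is all we need to load or destroy it.
function SceneStreamer(loadScene, destroyScene) {
  var loaded = {};
  return {
    onTriggerEnter: function (sceneName) {
      if (!loaded[sceneName]) {      // never load the same scene twice
        loadScene(sceneName);
        loaded[sceneName] = true;
      }
    },
    onTriggerExit: function (sceneName) {
      if (loaded[sceneName]) {       // destroy the root object named after the scene
        destroyScene(sceneName);
        loaded[sceneName] = false;
      }
    },
    isLoaded: function (sceneName) { return !!loaded[sceneName]; }
  };
}

// Usage: walk into the "Docks" trigger, then leave it again.
var log = [];
var streamer = SceneStreamer(
  function (name) { log.push("load " + name); },
  function (name) { log.push("destroy " + name); }
);
streamer.onTriggerEnter("Docks");
streamer.onTriggerEnter("Docks"); // ignored: already loaded
streamer.onTriggerExit("Docks");
```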

Scene splitting is required to load and unload properly while maintaining occlusion culling and lightmap data. If we do not use occlusion culling or lightmapping, we may simply load prefabs into the scene at their required positions.

Third consideration: How can I make scenes realistic?

If our levels appear linear, the player will get bored easily. Since iOS devices are limited in their capacity, there are a few ways in which we can make the scenes appear more realistic.

A few pointers

Carefully plan out scenes on paper before implementing them in Unity3D or designing them in a 3D modeling application. We can also effectively use the borders of a sheet of graph paper to indicate a transition zone or the impassable edge of a scene that we implement as a large cliff, pit, or body of water.

A world looks bigger and more believable if we put islands in our oceans. Bay and cove sections appear larger, even if most of the section is not accessible by the player. Cliffs can be used to end the play area with an abyss or steep drops to the land below. Canyon and mountain levels will always make worlds using the cliff technique look much bigger.

We will always include some areas that are on the other side of large gaps, but are not accessible by the player. These gaps give the player a sense of intrigue. If we really want to tease the player, we can put a fancy sculpture or other artifact that is visible on a distant section, so that it seems, to the player at least, to be an important component of the game. We will put a collider around this area so it is not accessible, even if the player goes to extraordinary lengths to try to reach it. If, by some miracle, the player did actually manage to reach this area, we could congratulate them with an Easter Egg.

Easter Eggs

Apple may reject our application if we put a secret in it and don't tell them. Apple has this to say about Easter Eggs:

"Easter Eggs, little programs hidden inside other programs, have long been part of the programming universe, most often as jokes or ways to sign otherwise uncredited work for those able to find them. If you want to add an innocuous Easter Egg to your application for that purpose, just use the demo Account Field to let the review team know the unlocking steps. Apple considers this information confidential and will not reveal those steps or their existence.

On the other hand, not telling the review team about an Easter Egg in your code, in order to circumvent the review process is not allowed. When its existence becomes known, as it inevitably will, our first step will be to remove the offending application from the App Store."

Fourth consideration: Embellishment

Since we want to keep the player interested, we can take advantage of several classic methods that require relatively little implementation effort. A great way to see how game developers use these classic methods on limited hardware platforms is to play older games.

Fog: Not such a great idea

Fog is one of the oldest tricks in the book. It is a classic that is used when game developers don't want to render something because it's too far away. It is also used when it may be obvious to the player that an object has vanished. The solution has been to obscure objects with fog.

Another factor that has led to the use of fog as an embellishment device is that fog adds ambience.

Particles make everything better

Particle effects will make your game seem more vibrant. They're great to:

  • Add ambience

  • Flag objectives

  • Add sparkle

  • Create explosions, vents, jets, even lasers

The following are two examples of how vibrant we can make our particles on iOS:

For performance reasons, there are several important things to note about particles on iOS platforms, which are as follows:

  • There should be no more than 50 particles per system

  • There should be no more than 50-200 particles on the screen at a time

  • Particles should be small (because particle shaders use alpha testing, which itself has performance issues). For very small particle textures, it's best to use no alpha channel at all. We should also avoid additive shaders on particles as they contain large numbers of multiply operations

  • Particle collisions should be limited; we can even write a script to disable the particle collider if the distance from the player is too far away for a collision to be noticeable or if there are too many particles on screen.
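The budget rules above can be captured in a couple of small checks. The thresholds (50 particles per system, 200 on screen) come from the text; the helper names and the distance parameter are our own invention. In Unity, a script like this would drive the emitter and toggle the particle collider.

```javascript
// Particle budgets for iOS, as described in the text.
var MAX_PER_SYSTEM = 50;
var MAX_ON_SCREEN = 200;

// Is a single particle system within its per-system budget?
function systemWithinBudget(particleCount) {
  return particleCount <= MAX_PER_SYSTEM;
}

// Disable particle collisions when the player is too far away to
// notice them, or when the screen is already saturated with particles.
function shouldCollide(distanceToPlayer, particlesOnScreen, maxNoticeableDistance) {
  return distanceToPlayer <= maxNoticeableDistance &&
         particlesOnScreen <= MAX_ON_SCREEN;
}
```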

Atlas your textures: Share those materials

Textures that will be frequently rendered together should be consolidated into a single texture atlas.

A texture atlas is simply a single image file that contains multiple sub-images, such that different parts of the large image can be UV Mapped to texture different objects.
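Because an atlas is addressed with normalized UV coordinates, picking out one sub-image is just arithmetic. The following hypothetical helper (the name and the regular grid layout are our assumptions, since real atlases are often irregular) returns the UV rectangle of one cell; a mesh's UVs would then be remapped into that rectangle.

```javascript
// UV rectangle of one cell in a regular rows-by-columns texture atlas.
// Cell (0, 0) is the bottom-left cell, matching the usual UV origin.
function atlasCellUV(col, row, cols, rows) {
  var w = 1 / cols, h = 1 / rows;
  return { u: col * w, v: row * h, width: w, height: h };
}

// Example: the top-right cell of a 4x4 atlas.
var cell = atlasCellUV(3, 3, 4, 4);
```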

Typically related objects that should be atlased include terrain textures, foliage textures, and building textures.

Unity3D's static batching combines meshes that share a material, so they can be rendered in one pass. In order to enable static batching, we need to make sure that the objects don't move and mark them as static in the editor.

The following is an example of a texture atlas that contains flowers and tree bark consolidated into a single image stored in a single file. This means that if trees and flowers that use this texture atlas are visible together, they will require only a single draw call to render. This assumes that they use the same material, which means they also share a shader:

We may also opt to add solid texture and alpha tested textures to larger atlases. This is practical if we have a building with lots of static windows, but also textures for the walls and floors that can all be rendered in a single pass.

Water makes things look bigger

Water features can be added quickly and easily.

The only thing better than water is water in motion, for example, waves on a beach or a flowing river.

To make a non-linear body of water (such as a river) flow, we simply need to give it a linear UVW map, as shown in the following image:

Once the texture is mapped, we can apply a UV offset to the object's texture and the water will appear to flow smoothly, even around curved areas.
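The scrolling itself is one line of arithmetic. In Unity this would be `material.mainTextureOffset` set every frame; the sketch below shows only the wrap-around math, with the function name being our own. Keeping the offset in [0, 1) avoids floating-point drift during long play sessions.

```javascript
// Scroll a texture along its linear UVW map at a constant speed,
// wrapping the offset back into [0, 1) so it never grows unbounded.
function scrollOffset(elapsedSeconds, unitsPerSecond) {
  var raw = elapsedSeconds * unitsPerSecond;
  return raw - Math.floor(raw); // wrap into [0, 1)
}
```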

If an area looks bland, try adding a fountain or waterfall for some extra color. If water doesn't fit the level theme, we can add lava or radioactive ooze (or anything else that we can think of that flows).

You can fly

Giving the player the illusion of flying makes your game environment more interesting. We could decide to grant the player a flying power with a time limit (there should always be a time limit, so that the power stays elusive and interesting). One way to make a power temporary is by adding trigger areas that cancel flying in zones that we have not prepared for flying. Using such trigger areas is ideal if there is something in the game story line that would cause the player to lose this special power (for example, the spell has only a small area of effect). Players always want to fly, but our game becomes too simple if they can do it all the time. So, it's a good idea to put a time limit on flights (15-40 seconds), and to only grant flying powers when the player reaches certain tokens.

In addition, the height at which a player can fly needs to be limited, such that they do not achieve a bird's eye view that would result in the frame rate dropping due to the amount of rendering needed or the scene being clipped due to the far clip plane. If we really want the player to achieve a great height with a bird's eye view of the world, we are going to need to implement some pretty fancy scripts to substitute parts of our scene, when the player achieves those lofty heights.
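The flight rules described above (a timed power, cancelled by no-fly trigger zones, with a hard altitude cap) can be sketched as plain logic. The 15-40 second window comes from the text; everything else here (the names, the `inNoFlyZone` flag) is our own illustration, and in Unity `tick` would be called from `Update()` with `Time.deltaTime`.

```javascript
// A time-limited, altitude-capped flying power.
function FlightPower(durationSeconds, maxAltitude) {
  var remaining = durationSeconds;
  return {
    tick: function (dt, inNoFlyZone) {
      if (inNoFlyZone) { remaining = 0; }            // trigger areas cancel flight
      else { remaining = Math.max(0, remaining - dt); }
    },
    canFly: function () { return remaining > 0; },
    clampAltitude: function (y) { return Math.min(y, maxAltitude); }
  };
}

// Example: 20 seconds of flight, ceiling at y = 120.
var flight = FlightPower(20, 120);
flight.tick(5, false);
```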

Broad views of great expanses are definitely not a strong point of Unity3D on iOS platforms.

Fifth consideration: Teleportation

If we have large distances between important levels, there should be a means to quickly jump between them. We can implement this many ways, but there are, of course, classic ways of doing it.

The warp gate

Once a new destination is reached, the warp gate in the new location may be opened, allowing quick passage to any other open warp gates.
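The bookkeeping behind a warp-gate network is small: travel is allowed only between gates the player has already opened. All names in this sketch are invented for illustration.

```javascript
// Track which warp gates the player has opened; warping is only
// possible when both endpoints are open.
function WarpNetwork() {
  var open = {};
  return {
    openGate: function (name) { open[name] = true; },
    canWarp: function (from, to) { return !!open[from] && !!open[to]; }
  };
}

// Example: the player has discovered two of three gates.
var gates = WarpNetwork();
gates.openGate("Village");
gates.openGate("Mountain");
```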

The airship

The airship is usually used in fantasy games. The airship resembles a seafaring ship, but floats in the air. By boarding the airship, the player can fly to a new location.

The spell

If our game is a Role Playing Game, this is pretty much obligatory. The player will be expected to have an inventory of spells and a teleport spell is something that can be activated when enough mana is available and the player is not in combat.

The world map

If the player's movement is to be unrestricted, then they can simply pull up the pause menu, open the world map, and choose a new location.

The train

It's not glamorous to travel by car or bus, but the train is an excellent touch in steam punk style games (a game in which technology is advanced, yet appears antiquated). If we are doing a mystery game, the train adds atmosphere as the protagonist can think things over on the train, or the train could be attacked (by a mysterious stranger). We could also select the modern alternative to the train, the subway, for urban environments.

To summarize:

The following are the most important things that we need to remember to do:

  • Split levels into sections, so they can be loaded additively

  • Split scenes into blocks of terrain meshes

  • Put curves in our levels to facilitate occlusion culling

  • Atlas textures that will be rendered together

  • Plan ahead: retrofitting levels is a pain

  • Levels should appear bigger than they actually are

  • Water is always nice

  • It's not always what the player can reach, but rather what they can't that intrigues them

  • Some low polygon background art makes the world look vast

  • Particles add realism and shine

  • Give the player limited freedom with the illusion of great freedom

Culling is important

Enough cannot be said about culling. Culling is so important that Unity3D offers multiple ways to cull different objects, and integrates sophisticated tools that pre-process scenes to achieve maximum culling.

Near Clip

The near clip plane refers to the minimum distance away from the camera origin that an object must be before being rendered.

If the near clip plane were zero, then objects right on top of the camera origin would be rendered, which is usually distracting and not visually appealing. If the near clip is set to far away, then objects you don't want clipped will vanish, and if the player stands too close to an object, they may be able to see right through it.

Far clip

The far clip plane refers to the maximum distance away from the camera an object can be, before it will be clipped (not rendered).

It is important to adjust the far clip, so that it is far enough away that it does not clip your objects, however, it should be close enough that it does not do needless depth testing, which would slow down your processing speed.

Arguably, there is no reason to have a far clip plane, because distance culling should take care of it; but because distance culling doesn't have to apply to every object in a scene, the far clip plane is the default.

The drawing library (OpenGL ES) also requires a far clip plane in order to process the image properly: an attempt to set the far clip to zero will result in only the background rendering. Conversely, setting the far clip to the constant INFINITY will result in a strange, cryptic, and unnerving internal error. It is, therefore, essential that the far clip be farther than the distance culling distances, but still reasonably close.
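One way to satisfy both constraints (farther than every per-layer culling distance, but no farther than necessary) is to derive the far clip from the culling distances themselves. This helper is our own sketch, not a Unity API: zero entries mean "use the far clip" and are therefore ignored, and `margin` is an assumed safety buffer.

```javascript
// Pick a far clip plane just beyond the largest per-layer culling
// distance. Entries of zero mean that layer falls back to the far
// clip, so they do not influence the result.
function chooseFarClip(layerCullDistances, margin) {
  var maxDistance = 0;
  for (var i = 0; i < layerCullDistances.length; i++) {
    if (layerCullDistances[i] > maxDistance) {
      maxDistance = layerCullDistances[i];
    }
  }
  return maxDistance + margin;
}

// Example: layers culled at 50, 120, and 300 units, with a 10 unit margin.
var farClip = chooseFarClip([0, 50, 120, 300, 0], 10);
```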

In the following image, items within the field of view are rendered. When item 3 exceeds the distance specified by the far clip plane, it is not rendered. If item 3 is viewed at an angle and part of it exceeds the far clip distance, part of item 3 will render and the rest of it will display the camera background. Imagine a parabolic dish where the center of the dish exceeds the far clip, the edges of the dish will still render, but the center will be clipped and show a background texture, color, or skybox as follows:

Distance culling

Distance culling is used to cull objects based on how far they are from the camera origin. Using layers in Unity3D, smaller, less noticeable objects can be culled before large objects. Distance Culling still complies with the far clip plane, so any culling distance farther than the far clip plane will always be culled. The far clip plane should be considered to be the absolute maximum distance any depth testing and geometry rendering should be done.

The following figure demonstrates distance culling: when object 1 exceeds the culling distance set for its layer, it ceases rendering. (Nothing fancy happens; the object just vanishes. It is therefore crucial to make the distance far enough away that the player doesn't notice the object disappear.)

The following code should be applied to any camera we want to use with distance culling. Distances can be represented as any (floating point, >=0) number and should be less than the far clip plane: you cannot render any object farther than the far clip plane:

var distances = new float[32];

function Start ()
{
    // Setting a value to zero will cause that layer to
    // use the far clip plane.
    // User-defined layers start at layer 8, which
    // is distances[8].
    this.camera.layerCullDistances = distances;
}

function setCullDistance (index : int, distance : float)
{
    distances[index] = distance;
    this.camera.layerCullDistances = distances;
}

Putting an object on the layer associated with the clipping distance will cause the camera to cull it, when it's far away. It is recommended that you use at least three distance culling layers: Near for pickups and enemies, Medium for scenery, and Far for objectives and terrain chunks.

Occlusion culling

Occlusion culling reduces the amount of rendering computation time required by caching the visibility of objects. While iOS devices have dedicated vertex-processing hardware, occlusion culling is still beneficial because Unity3D can avoid passing invisible geometry to that hardware at all.

The following image demonstrates how occlusion culling is beneficial.

Imagine object 3 is a section of terrain and object 2 is a large wall that blocks the player's view of object 3. Finally, imagine object 1 is a small tree in front of the large wall. Since the tree doesn't obscure the wall, the wall is still visible. However, the wall is obscuring the block of terrain, so even though the terrain is in memory, it is not being rendered.

Occlusion culling is achieved by pre-computing a PVS (Potentially Visible Set), an array of bounds that caches which geometry should be rendered when a camera is facing a certain direction. Configuring bounding zones and generating occlusion culling data should be the last step in making your game, as any change in the layout of your level will require that the occlusion culling be computed again.

If you are experiencing problems with performance during testing, try reducing the far clip plane or adjusting your distance culling. You may also generate a quick preview of your occlusion settings by setting the view cell resolution very low.

 

Summary


In this chapter, we have covered the following:

  • The important device differences across the iOS device family and between the desktop platforms and iOS platforms that we need to consider before we begin to create a Unity3D game for iOS

  • How the iOS SDK setup differs from the Mac OS X desktop platform setup, when targeting different OS versions

  • How to ensure that our game can be deployed on all the iOS platform devices

  • The need to consider everything from the kind of game that we want to develop to the kind of level layouts and artwork that we will use to create the most efficient game possible

About the Author

  • Robert Wiebe

    Robert Wiebe has more than 30 years’ experience designing, implementing, and testing software. He wrote his first game in 1979 as a high school student. He has focused on developing games and utilities for Android, iOS, and OS X. He has written the following apps: Video Kiosk, ShredIt, NetShred, Coconut Hut, and many more. His interests include Unity 2D/3D game engine, developing video series, writing books, and researching virtual reality games. He has experience in developing software for a number of industries, including mining, finance, and communications. After working for other people’s companies, he founded his own companies, Mireth Technology, and BurningThumb Studios, which are his primary interests today.
