What's Your Input?


This article by Venita Pereira, the author of the book Learning Unity 2D Game Development by Example, teaches us all about the various input types and states of a game. We will then go on to learn how to create buttons and the game controls by using code snippets for input detection.

"Computers are finite machines; when given the same input, they always produce the same output."

– Greg M. Perry, Sams Teach Yourself Beginning Programming in 24 Hours


Overview

The list of topics that will be covered in this article is as follows:

  • Input versus output
  • Input types
  • Output types
  • Input Manager
  • Input detection
  • Buttons
  • Game controls

Input versus output

We will be looking at exactly what both input and output in games entail. We will look at their functions, importance, and differentiations.

Input in games

Input may not seem a very important part of a game at first glance, but in fact it is very important, as input in games involves how the player will interact with the game. All the controls in our game, such as moving, special abilities, and so forth, depend on what controls and game mechanics we would like in our game and the way we would like them to function.

Most games have the standard control setup of moving your character. This is to help usability, because if players are already familiar with the controls, then the game is more accessible to a much wider audience. This is particularly noticeable with games of the same genre and platform.

For instance, endless runner games usually make use of the tilt mechanic, which the mobile device's motion sensors make possible. However, there are variations on and additions to pre-existing control mechanics; for example, many other endless runners make use of a simple swipe mechanic, and some make use of both.

When designing our games, we can be creative and unique with our controls, thereby innovating a game, but the controls still need to be intuitive for our target players. When first designing our game, we need to know who our target audience of players includes. If we would like our game to be played by young children, for instance, then we need to ensure that they are able to understand, learn, and remember the controls. Otherwise, instead of enjoying the game, they will get frustrated and stop playing it entirely.

As an example, a young player may hold a touchscreen device with their fingers resting on the screen, which can prevent touch input from registering correctly unless the game was designed to take this into account and support it.

Different audiences of players interact with a game differently. Likewise, if a player is more familiar with the controls on a specific device, then they may struggle with different controls. It is important to create prototypes to test the input controls of a game thoroughly. Developing a well-designed input system that supports usability and accessibility will make our game more immersive.

Output in games

Output is the direct opposite of input: it carries information from the game to the player. However, output is just as essential to a game as input. It provides feedback to the player, letting them know whether they have performed an action correctly or done something wrong, how well they have performed, and their progression in the form of goals, missions, and objectives.

Without feedback, a player would feel lost. The player would potentially see the game as being unclear, buggy, or even broken. For certain types of games, output forms the heart of the game.

The input in a game gets processed by the game to provide some form of output, which then provides feedback to the player, helping them learn from their actions. This is the cycle of the game's input-output system.
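
The cycle described above can be sketched in a few lines of plain JavaScript. This is an illustration only, not Unity code; the "jump" action and score feedback are made-up examples of input and output.

```javascript
// Minimal sketch of the input -> process -> output cycle:
// the game interprets an input and produces feedback for the player.
function gameStep(state, input) {
    // Process: the game reacts to the player's input...
    if (input === "jump") {
        state.score += 1;
    }
    // Output: ...and reports back, so the player learns from the action.
    return "score: " + state.score;
}

var state = { score: 0 };
console.log(gameStep(state, "jump")); // "score: 1"
```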

The following diagram represents the cycle of input and output:

Input types

There are many different input types that we can utilize in our games. These various input types can form part of the exciting features that our games have to offer. The following image displays the different input types:

The most widely used input types in games include the following:

  • Keyboard: Key presses from a keyboard are supported by Unity and can be used as input controls in PC games as well as games on any other device that supports a keyboard.
  • Mouse: Mouse clicks, motion (of the mouse), and coordinates are all inputs that are supported by Unity.
  • Game controller: This is an input device that generally includes buttons (including shoulder and trigger buttons), a directional pad, and analog sticks. The game controller input is supported by Unity.
  • Joystick: A joystick has a stick that pivots on a base that provides movement input in the form of direction and angle. It also has a trigger, throttle, and extra buttons. It is commonly used in flight simulation games to simulate the control device in an aircraft's cockpit and other simulation games that simulate controlling machines, such as trucks and cranes. Modern game controllers make use of a variation of joysticks known as analog sticks and are therefore treated as the same class of input device as joysticks by Unity. Joystick input is supported by Unity.
  • Microphone: This provides audio input commands for a game. Unity supports basic microphone input. For greater fidelity, a third-party audio recognition tool would be required.
  • Camera: This provides visual input for a game using image recognition. Unity has webcam support to access RGB data, and for more advanced features, third-party tools would be required.
  • Touchscreen: This provides multiple touch inputs from the player's finger presses on the device's screen. This is supported by Unity.
  • Accelerometer: This provides the proper acceleration force at which the device is moved and is supported by Unity.
  • Gyroscope: This provides the orientation of the device as input and is supported by Unity.
  • GPS: This provides the geographical location of the device as input and is supported by Unity.
  • Stylus: Stylus input is similar to touchscreen input in that you use a stylus to interact with the screen; however, it provides greater precision. The latest version of Unity supports the Android stylus.
  • Motion controller: This provides the player's motions as input. Unity does not support this, and therefore, third-party tools would be required.

Output types

The main output types in games are as follows:

  • Visual output
  • Audio output
  • Controller vibration

Unity supports all three.

Visual output

The Head-Up Display (HUD) is the gaming term for the game's Graphical User Interface (GUI). It presents all the essential information to the player as visual output, along with feedback and progress, as shown in the following image:

HUD, viewed June 22, 2014, http://opengameart.org/content/golden-ui

Other visual output includes images, animations, particle effects, and transitions.

Audio

Audio is anything heard through an audio output, such as a speaker. It provides feedback that supports and emphasizes the visual output and therefore increases immersion. The following image displays a speaker:

Speaker, viewed June 22, 2014, http://pixabay.com/en/loudspeaker-speakers-sound-music-146583/

Controller vibration

Controller vibration provides physical feedback, for instance when the player collides with an object, or environmental feedback such as an earthquake, adding even more immersion, as in the following image:

Having a game that is designed to provide output meaningfully not only makes it clearer and more enjoyable, but can truly bring the world to life, making it truly engaging for the player.

Unity Input Manager

The Input Manager allows us to set up (map) the standard control configuration for our game, and it has two advantages:

  • The Input Manager allows us to simply and easily use the default key mappings in our scripts
  • The Input Manager allows the players of our games to remap the controls to their own configurations

To configure it, we go to Edit | Project Settings | Input.

The following screenshot shows the Input Manager and all the input controls that can be configured:

Input Manager, viewed February 25, 2014, https://docs.unity3d.com/Documentation/Images/manual/class-InputManager-0.jpg

The following screenshot shows how the Input Manager will be displayed to players of our game from the game launcher:

Input Manager, viewed February 25, 2014, https://docs.unity3d.com/Documentation/Images/manual/class-InputManager-1.jpg
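
Conceptually, the Input Manager is a lookup from virtual axis and button names to physical keys, which is why our scripts can refer to a name such as "Horizontal" rather than a hard-coded key. The following plain JavaScript sketch illustrates that indirection; the table contents and function names are illustrative assumptions, not Unity's actual data or API.

```javascript
// Illustrative sketch (not Unity code): the Input Manager maps virtual
// names to physical keys, so gameplay scripts never hard-code a key.
var inputManager = {
    "Horizontal": { negative: "left", positive: "right" },
    "Jump":       { positive: "space" }
};

// Returns -1, 0, or 1 for a named axis, given the keys currently held.
function getRawAxis(name, keysDown) {
    var axis = inputManager[name];
    var value = 0;
    if (axis.positive && keysDown.indexOf(axis.positive) !== -1) value += 1;
    if (axis.negative && keysDown.indexOf(axis.negative) !== -1) value -= 1;
    return value;
}

console.log(getRawAxis("Horizontal", ["right"])); // 1
console.log(getRawAxis("Horizontal", ["left"]));  // -1
```

Because the script only ever asks for "Horizontal", remapping the keys in the table changes the controls without touching any gameplay code.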

Detecting input

To detect input from the various devices, namely, computers and mobiles (iOS and Android), we will make use of the Unity class named Input with its functions and variables.

For movement-based input, we make use of the Input.GetAxis() function, as it returns smoothed values, which makes movement smoother while reducing the size and complexity of the script.

For all other action event buttons, we make use of the Input.GetButton() function. We always call these functions from within the Update() function since they only get reset when the frame updates.

For iOS and Android mobile devices, we can track multiple touch inputs using the Input.touches property. We can also track input via the accelerometer and gyroscope using the properties Input.acceleration and Input.gyro respectively.
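
To illustrate the kind of smoothing Input.GetAxis() applies, here is a plain JavaScript sketch. This is not Unity API code; the sensitivity value and frame rate are illustrative assumptions.

```javascript
// Plain JavaScript sketch (not the Unity API) of the smoothing behind
// Input.GetAxis: the returned value eases toward the raw key state
// (-1, 0, or 1) a little each frame instead of jumping instantly.
function smoothAxis(current, rawTarget, sensitivity, dt) {
    var step = sensitivity * dt; // maximum change allowed this frame
    if (current < rawTarget) {
        return Math.min(current + step, rawTarget);
    }
    return Math.max(current - step, rawTarget);
}

// Simulate holding the right-arrow key (raw target 1) for five frames.
var axis = 0;
for (var i = 0; i < 5; i++) {
    axis = smoothAxis(axis, 1, 3, 1 / 60); // assumed sensitivity 3, 60 FPS
}
console.log(axis.toFixed(2)); // prints "0.25": ramping up toward 1
```

Unity performs this smoothing internally; in a real script we simply read Input.GetAxis("Horizontal") each frame inside Update().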

Buttons

Usually, the very first input that a game requires from the player comes from the buttons on its main menu. We are now going to create our own buttons using the OnGUI function provided by Unity.

OnGUI

The OnGUI function is used for handling GUI events and for defining the creation, look, and functionality of the game's GUI. It is an event function, part of the well-defined set of callbacks that Unity provides, so it gets called automatically like Start() and Update(); we therefore do not call it from within another function.

OnGUI can be called several times per frame, once for each GUI event.

GUILayout.Button

We will use the existing Unity class GUILayout and its function Button to create our buttons. We specify the text that we would like to display in our buttons as well as our buttons' dimensions as parameters within the function as shown in the following script:

function OnGUI()
{

If we click on Button 1 as input, then we print out the debug text Button 1 clicked! to the Console window as output feedback using the Debug.Log() function. The Debug.Log() function is very useful for debugging/testing our games.

  if(GUILayout.Button("Button 1", GUILayout.Width(100), GUILayout.Height(100)))
  {
    Debug.Log("Button 1 clicked!");
  }

If we click on Button 2 as input, then we print out the debug text Button 2 clicked! to the Console window as output feedback using the Debug.Log() function.


  if(GUILayout.Button("Button 2", GUILayout.Width(100), GUILayout.Height(100)))
  {
    Debug.Log("Button 2 clicked!");
  }
}

We will now perform the following steps:

  1. Create a new script by going to Assets | Create | Javascript.

  2. Name the script buttons.

  3. Double-click on it to open the script in MonoDevelop.

  4. Replace the default existing script with the preceding script.

  5. Build our script in MonoDevelop by going to Build | Build All.

  6. Add the preceding script to an empty GameObject by going to GameObject | Create Empty and dragging the script buttons.js onto the Inspector of the empty GameObject.

  7. We should get the result shown in the following screenshot when we click on play and click on Button 1:

Game controls

Many games make use of virtual controls, which are onscreen controls. Therefore, it is worth creating our own game controls onscreen as opposed to entering input on the keyboard.

To do this, we will make use of sprites for the HUD to display the virtual controls and raycasting to detect when the player touches a control for input.

Raycasting

Raycasting is a query on the scene that returns the objects intersected by a given ray (an origin point in space together with a direction). If we cast a ray from the main 2D camera in a straight line into the screen (at the point where the player is touching or clicking), we can then check whether a collider has been hit.
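
Conceptually, the hit test we will use here (Physics2D.OverlapPoint) asks which collider contains a given point. The following plain JavaScript sketch shows the idea; it is not the Unity API, and the collider data and names are illustrative.

```javascript
// Plain JavaScript sketch (not the Unity API) of a 2D overlap-point
// query: test the touch position against each axis-aligned box collider.
function overlapPoint(point, colliders) {
    for (var i = 0; i < colliders.length; i++) {
        var c = colliders[i];
        if (point.x >= c.x && point.x <= c.x + c.width &&
            point.y >= c.y && point.y <= c.y + c.height) {
            return c.name; // the control under the player's finger
        }
    }
    return null; // nothing was hit
}

var controls = [
    { name: "left",  x: 0, y: 0, width: 1, height: 1 },
    { name: "right", x: 2, y: 0, width: 1, height: 1 }
];
console.log(overlapPoint({ x: 2.5, y: 0.5 }, controls)); // "right"
```

In Unity, the colliders are the Box Collider 2D components we attach to the control sprites, and the returned object's name tells us which control was pressed.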

If a collider has been hit, then we can check the name of the collider's GameObject. Depending on which GameObject has been hit, we can call the appropriate script to move the character GameObject in the corresponding direction. Let's use raycasting by executing the given steps:

  1. We download some public domain art from the following URL:

    http://freeartsprites.com/free-art/Space-Pack/

  2. Slice the image so that each control is separated and the left, right, up, and down portions become separate sprites.

  3. Convert each of these into a GameObject, and name them so that they are identical to the following screenshot:

    It is essential that they match as we will be calling those exact names in the script.

  4. Add a Box Collider 2D to each control by going to Add Component | Physics 2D | Box Collider 2D.

  5. We are going to check if a collider overlaps a point using Physics2D.OverlapPoint.

  6. Assign the following script to the right control:

    We store which platform the player is using, as shown in the next lines of code, and declare a string variable named control to store the control object's name.

    var platform : RuntimePlatform = Application.platform;
    var control : String;

    We create a function to check for input as shown in the next line of code.

    function checkTouch(pos)
    {

    We detect whether the player's finger overlaps one of the virtual onscreen controls. The first variable is the touch position converted into world space as a 3D vector, as shown in the next line of code.

    var wp : Vector3 = Camera.main.ScreenToWorldPoint(pos);

    The next variable consists of the x and y coordinates of the player's finger, which is shown in the next line of code.

    var touchPos : Vector2 = new Vector2(wp.x, wp.y);

    The final variable returns whether or not the player's finger coordinates overlap with a Physics 2D collider.

    var hit = Physics2D.OverlapPoint(touchPos);

    If hit returns a collider, then we move the ship. We detect which control has received input by checking the name of the object whose collider was hit. We print it to the log to keep track, and then check whether it was the right control that was pressed so that we can move the ship to the right.

    if(hit)
    {
        control = hit.transform.name;
        print("" + control);
        if(control == "right")
        {
            move();
        }
    }
    }

    We check which device the player is using for input. For instance, if they are using a mobile device, then we use the touch functions for input detection; otherwise, we use the mouse functions for a PC.

    With the respective functions, we check when a finger touches the screen for a mobile, or the mouse is pressed down for a PC, and then check the position in coordinates of the finger or mouse.

    function Update()
    {
        if(platform == RuntimePlatform.Android || platform == RuntimePlatform.IPhonePlayer)
        {
            if(Input.touchCount > 0)
            {
                if(Input.GetTouch(0).phase == TouchPhase.Began)
                {
                    checkTouch(Input.GetTouch(0).position);
                }
            }
        }
        else if(platform == RuntimePlatform.WindowsEditor)
        {
            if(Input.GetMouseButtonDown(0))
            {
                checkTouch(Input.mousePosition);
            }
        }
    }

    To move our ship, we find the Ship GameObject in the scene by its name. We then move it using transform.Translate, which moves the transform by a given translation vector relative to a chosen coordinate space.

    In this case, we translate the ship's position using a 3D vector pointing in the right direction, multiplied by a speed of 10 units per second, moving relative to the camera's orientation.

    function move()
    {
        var ship : GameObject = GameObject.Find("Ship");
        ship.transform.Translate(Vector3.right * (Time.deltaTime * 10), Camera.main.transform);
    }
  7. Ensure that there is a GameObject named Ship in the scene, as the move() function finds it by name.

  8. When we click on play, the ship should move in the corresponding direction.

  9. We copy the script three more times for the left, up, and down controls. Each copy is exactly the same script as before; we simply replace the direction in the line ship.transform.Translate(Vector3.right * (Time.deltaTime*10), Camera.main.transform); (for example, Vector3.left for the left control) and the control name checked in checkTouch.

  10. We build our code by going to Build | Build All and click on the play button in Unity to test it.
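
The per-frame movement performed by the move() function boils down to position = position + direction × speed × deltaTime. The following plain JavaScript sketch illustrates why this is frame-rate independent; it is not Unity code, and the numbers are illustrative.

```javascript
// Illustrative sketch (not Unity code) of frame-rate-independent movement:
// position += direction * speed * deltaTime, called once per rendered frame.
function translate(position, direction, speed, deltaTime) {
    return {
        x: position.x + direction.x * speed * deltaTime,
        y: position.y + direction.y * speed * deltaTime
    };
}

var right = { x: 1, y: 0 };
var ship = { x: 0, y: 0 };
// 60 frames at 60 FPS = one second of holding the "right" control.
for (var i = 0; i < 60; i++) {
    ship = translate(ship, right, 10, 1 / 60);
}
console.log(Math.round(ship.x)); // prints 10: the ship covers 10 units per second
```

Scaling by deltaTime is what makes the ship cover the same distance per second whether the game runs at 30 or 60 frames per second.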

The following screenshot displays our ship with virtual controls:

Summary

In this article, we learned all about the various input types and states and learned to create buttons as well as the game controls by using code snippets for input detection.
