
Leap Motion Development Essentials

By Mischa Spiegelmock
About this book
Leap Motion is a company developing advanced motion sensing technology for human–computer interaction. Originally inspired by the difficulty of using a mouse and keyboard for 3D modeling, Leap Motion believes that moulding virtual clay should be as easy as moulding clay in your hands. Leap Motion now focuses on bringing this motion sensing technology closer to the real world.

Leap Motion Development Essentials explains the concepts and practical applications of gesture input for developers who want to take full advantage of Leap Motion technology. This guide explores the capabilities available to developers and gives you a clear overview of topics related to gesture input, along with usable code samples. It shows you everything you need to know about the Leap Motion SDK, from creating a working program with gesture input to more sophisticated applications covering a range of relevant topics. Sample code is provided and explained along with details of the most important and central API concepts. This book teaches you the essential information you need to design a gesture-enabled interface for your application, from specific gesture detection to best practices for this new input, and offers guidance on practical considerations along with copious runnable demonstrations of API usage explained in step-by-step, reusable recipes.
Publication date: October 2013
Publisher: Packt
Pages: 106
ISBN: 9781849697729

 

Chapter 1. Leap Motion SDK – A Quick Start

The Leap Motion is a peripheral input device that allows users to interact with software through hand gestures. This chapter will explain the features and software interface exposed by the Leap Motion SDK, which will allow us to take advantage of hand motion input.

 

An overview of the SDK


The Leap device uses a pair of cameras and an infrared pattern projected by LEDs to generate an image of your hands with depth information. A very small amount of processing is done on the device itself, in order to keep the cost of the units low.

The images are post-processed on your computer to remove noise, and to construct a model of your hands, fingers, and pointy tools that you are holding.

As an application developer, you can make use of this data via the Leap software development kit, which contains a powerful high-level API for easily integrating gesture input into your applications. Because developers do not want to go to the trouble of processing raw input in the form of depth-mapped images, skeleton models, and point cloud data, the SDK provides abstracted models that report what your user is doing with their hands. With the SDK you can write applications that make use of some familiar concepts (a short sketch of accessing this data follows the list):

  • All hands detected in a frame, including rotation, position, velocity, and movement since an earlier frame

  • All fingers and pointy tools (collectively known as "pointables") recognized as attached to each hand, with rotation, position, and velocity

  • The exact pixel location on a display pointed at by a finger or tool

  • Basic recognition of gestures such as swipes and taps

  • Detection of position and orientation changes between frames
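
Here is the promised sketch: a minimal, hypothetical helper (describeFrame is our own name, not part of the SDK) that walks the data in a single frame using calls introduced properly over the rest of this chapter.

#include <iostream>
#include <Leap.h>

// A minimal sketch of the data available in one frame. describeFrame is a
// hypothetical helper; the calls it uses are covered later in this chapter.
void describeFrame(const Leap::Controller &controller) {
    const Leap::Frame frame = controller.frame();  // the most recent frame

    for (int h = 0; h < frame.hands().count(); h++) {
        const Leap::Hand hand = frame.hands()[h];
        std::cout << "Hand " << h << " palm at ("
                  << hand.palmPosition().x << ", "
                  << hand.palmPosition().y << ", "
                  << hand.palmPosition().z << ")" << std::endl;

        // fingers and tools recognized as attached to this hand
        for (int p = 0; p < hand.pointables().count(); p++) {
            const Leap::Pointable pointable = hand.pointables()[p];
            std::cout << "  " << (pointable.isTool() ? "tool" : "finger")
                      << " moving at " << pointable.tipVelocity().magnitude()
                      << " mm/s" << std::endl;
        }
    }
}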

 

Quick start


Congratulations on your purchase of a genuine fine quality Leap Motion gesture input device! This handy guide will walk you through the assembly, proper usage, and care of your Leap Motion.

To get started, remove your Leap Motion SDK and Leap Motion™ device from the box and unpack the shared object files and headers from their shrink-wrap. Gently place your SDK in a handy directory and fire up your favorite IDE to begin.

Tip

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

We'll get things going right away with a short C++ application to illustrate how to interact with the Leap SDK to receive events and input.

#include <iostream>
#include <Leap.h>

class Quickstart : public Leap::Listener {
public:
    virtual void onConnect(const Leap::Controller &);
    virtual void onFrame(const Leap::Controller &);
};

To interact with the Leap software, we will begin by creating a subclass of Leap::Listener and defining the callback methods we wish to receive. While it is possible to poll the controller for the current frame, generally you will want to make your program as responsive as possible, which is most easily accomplished by acting on input events immediately via callbacks. In case you're wondering, the primary available callback methods are (a logging sketch follows the list):

  • onInit: This indicates that the listener has been added to a controller; it is called only once.

  • onExit: This indicates that the controller is destroyed or the listener is removed.

  • onConnect: This indicates that the device is connected and recognized by the driver and is ready to start processing frames.

  • onDisconnect: This indicates that the device is disconnected. Connection state can also be polled by checking controller.isConnected() so that you don't need to keep track of whether the controller is plugged in or not. (An earlier version of the SDK lacked this state accessor but the kind folks at Leap Motion realized us devs are really lazy).

  • onFrame: This indicates that a new frame of input data has been captured and processed. This is the only handler you really need to implement if you want to make use of the Leap.
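
To make the lifecycle concrete, here is a rough sketch of a listener that does nothing but log each of these callbacks; only onFrame is strictly required for a working application.

#include <iostream>
#include <Leap.h>

// A rough sketch: a listener that logs every lifecycle callback.
class LoggingListener : public Leap::Listener {
public:
    virtual void onInit(const Leap::Controller &)       { std::cout << "listener added\n"; }
    virtual void onConnect(const Leap::Controller &)    { std::cout << "device connected\n"; }
    virtual void onDisconnect(const Leap::Controller &) { std::cout << "device disconnected\n"; }
    virtual void onExit(const Leap::Controller &)       { std::cout << "listener removed\n"; }
    virtual void onFrame(const Leap::Controller &c)     { std::cout << "frame " << c.frame().id() << "\n"; }
};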

Let's implement our onConnect handler real quick so that we can verify that the controller driver and SDK are communicating with the device properly. If everything is working as it should be, the following code should cause a message to be emitted on stdout when the device is plugged in to a USB port:

void Quickstart::onConnect(const Leap::Controller &controller) {
    std::cout << "Hello, Leap user!\n";
}

We can display a friendly contrived greeting to ourselves when the program is run with the controller connected and the driver software is running. This will cause breathless anticipation in your users as they prepare themselves to experience the magic and wonder of this fantastic new input technology.

A listener is attached to a Leap::Controller, which acts as the primary interface between the Leap driver and your application. A controller tracks processed frames, device connection state, configuration parameters, and invokes callback methods on a listener.

To begin receiving events, instantiate a new listener and a controller:

int main() {
    // create instance of our Listener subclass
    Quickstart listener;
    
    // create generic Controller to interface with the Leap device
    Leap::Controller controller;
    
    // tell the Controller to start sending events to our Listener
    controller.addListener(listener);
  …

If you place your hands over the device, Quickstart::onFrame() will start being called. Let's create an onFrame handler that reports on the horizontal velocity of the first finger or tool detected:

void Quickstart::onFrame(const Leap::Controller &controller) {
    const Leap::Frame frame = controller.frame();

controller.frame() returns a Frame instance, which contains information detected about our scene at a specific point in time. It has a single optional parameter history, which allows you to travel backwards through the misty sands of time to compare frames and determine hand changes over time. Unfortunately this time machine is rather limited; only about 60 previous frames are stored in the controller.
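
As a quick aside, here is a hedged sketch of dipping into that history: fetch the frame from ten frames back (if it is still stored) and measure how much time separates it from the current one.

    // A brief aside: peek ten frames into the past, if that frame is still
    // stored, and measure the elapsed time between it and the current frame.
    const Leap::Frame current = controller.frame();    // history = 0, the latest
    const Leap::Frame older   = controller.frame(10);  // ten frames back
    if (older.isValid()) {
        int64_t elapsedMicroseconds = current.timestamp() - older.timestamp();
        std::cout << "10 frames span " << elapsedMicroseconds / 1000 << " ms" << std::endl;
    }

With that aside out of the way, back to our onFrame handler: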

    // do nothing unless hands are detected
    if (frame.hands().empty()) return;

Get used to making these sorts of checks. You'll be seeing a lot more of them. Here, we have no interest in processing this frame unless there are hands in it.

    // first detected hand
    const Leap::Hand firstHand = frame.hands()[0];
    // first pointable object (finger or tool)
    const Leap::PointableList pointables = firstHand.pointables();
    if (pointables.empty()) return;
    const Leap::Pointable firstPointable = pointables[0];

All fingers attached to a hand and all tools that the hand is grasping are returned as a PointableList, which behaves like an std::vector, including providing an iterator for people who are into that. Most commonly we will want to find out where a pointable is in space and how fast it is moving, which we can easily find out with tipPosition() and tipVelocity() respectively. Both return Leap::Vectors consisting of X, Y, and Z components.

    std::cout << "Pointable X velocity: " << firstPointable.tipVelocity()[0] << endl;

If you wave an outstretched finger or a tool (a chopstick works pretty well if you happen to have one lying around) back and forth over the controller you will be rewarded with the following riveting output:

Pointable X velocity: -223.937
Pointable X velocity: -117.421
Pointable X velocity: -242.293
Pointable X velocity: -141.43
Pointable X velocity: -61.9314
Pointable X velocity: 9.85328
Pointable X velocity: 41.9575
Pointable X velocity: 71.7436
Pointable X velocity: 96.0459
Pointable X velocity: 116.465

Leftwards motion is represented by negative values (mm/s). Rightwards motion is positive.
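
Since a PointableList behaves like a standard container, we are not limited to the first pointable; here is a small sketch that reports the X velocity of every finger and tool in the frame using the list's iterator.

    // A small sketch: iterate over every pointable in the frame rather than
    // just the first one, using the list's iterator interface.
    const Leap::PointableList all = frame.pointables();
    for (Leap::PointableList::const_iterator it = all.begin(); it != all.end(); ++it) {
        std::cout << "Pointable " << (*it).id()
                  << " X velocity: " << (*it).tipVelocity().x << std::endl;
    }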

Tip

A note on the sample code

Because of frequent changes to the SDK, your best bet for finding the most up-to-date code samples is to check out the GitHub repository.

All sample code can be found at https://github.com/openleap/leapbook. This program and others like it can be built using the following command on Mac OS X or Linux using GCC or clang:

$ g++ quickstart.cpp -lLeap -Lpath/to/Leap_SDK/lib/libc++ -Ipath/to/Leap_SDK/include -o quickstart

Note that path/to/Leap_SDK should be replaced with the location of your Leap_SDK directory. It may be helpful to set an environment variable with the path or install the libraries and headers system-wide.

 

Major SDK components


Now that we've written our first gesture-enabled program, let's talk about the major components of the Leap SDK. We'll visit each of these in more depth as we continue our journey.

Controller

The Leap::Controller class is a liaison between the Leap device and your code. Whenever you wish to do anything at all with the device you must first go through your controller. From a controller instance we can interact with the device configuration, detected displays, current and past frames, and set up event handling with our listener subclass.
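
As a rough sketch (inside main(), say, reusing the Quickstart listener from earlier in this chapter), the controller acts as the single access point for all of these:

    // A rough sketch of the controller as the central access point.
    Leap::Controller controller;
    Quickstart listener;

    controller.addListener(listener);                                // event-driven input
    const Leap::Frame latest = controller.frame();                   // polled input
    const Leap::Config config = controller.config();                 // device and driver settings
    const Leap::ScreenList screens = controller.calibratedScreens(); // detected displays
    controller.removeListener(listener);                             // detach when finished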

Config

An instance of the Config class can be obtained from a controller. It provides a key/value interface to modify the operation of the Leap device and driver behavior. Some of the options available are listed below, followed by a hedged sketch of the interface:

  • Robust mode: Somewhat slower frame processing but works better with less light.

  • Low resource mode: Less accurate and responsive tracking, but uses less CPU and USB bandwidth.

  • Tracking priority: Can prioritize either precision of tracking data or the rate at which data is sampled (resulting in approximately 4x data frame-rate boost), or a balance between the two (approximately 2x faster than the precise mode).

  • Flip tracking: Allows you to use the controller with the USB cable coming out of either side. This setting simply flips the positive and negative coordinates on the X-axis.
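
Given a controller instance, the Config interface itself is a simple key/value store. The key name in this sketch is a placeholder of our own, not a documented setting, so consult the SDK documentation for the exact keys your driver version supports.

    // A hedged sketch of the key/value Config interface. "robust_mode_enabled"
    // is a placeholder key for illustration only; check the SDK documentation
    // for the exact keys your driver version supports.
    Leap::Config config = controller.config();

    bool robust = config.getBool("robust_mode_enabled");  // read the current value
    if (! robust) {
        config.setBool("robust_mode_enabled", true);       // change the setting
        config.save();                                      // persist it
    }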

Screen

A controller may have one or more calibratedScreens: computer displays within the field of view of the controller that have a known position and dimensions. Given a pointable direction and a screen we can determine what the user is pointing at.

Math

Several math-related functions and types such as Leap::Vector, Leap::Matrix, and Leap::FloatArray are provided by LeapMath.h. All points in space, screen coordinates, directions, and normals are returned by the API as three-element vectors representing X, Y, and Z coordinates or unit vectors.
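
A few of the vector helpers come in handy constantly; here is a small sketch (a fragment you could drop inside any function):

    // A small sketch of the vector helpers provided by LeapMath.h.
    const Leap::Vector a(1.0f, 2.0f, 3.0f);
    const Leap::Vector b(0.0f, 1.0f, 0.0f);

    float length       = a.magnitude();   // Euclidean length (mm, for positions)
    float angleRadians = a.angleTo(b);    // angle between the two vectors
    float dot          = a.dot(b);        // dot product
    Leap::Vector unit  = a.normalized();  // unit vector in the same direction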

Frame

The real juicy information is stored inside each Frame. A Frame instance represents a point in time in which the driver was able to generate an updated view of its world and detect where screens, your hands, and pointables are.

Hand

At present the only body parts you can use with the controller are your hands. Given a frame instance we can inspect the number of hands in the frame, their position and rotation, normal vectors, and gestures. The hand motion API allows you to compare two frames and determine if the user has performed a translation, rotation, or scaling gesture with their hands in that time interval. The methods we can call to check for these interactions are listed below, followed by a combined sketch:

  • Leap::Hand::translation(sinceFrame): Translation (also known as movement) returned as a Leap::Vector including the direction of the movement of the hand and the distance travelled in millimeters.

  • Leap::Hand::rotationMatrix(sinceFrame), ::rotationAxis(sinceFrame), ::rotationAngle(sinceFrame, axisVector): Hand rotation, either described as a rotation matrix, vector around an axis or float angle around a vector between –π and π radians (that's -180° to 180° for those of you who are a little rusty with your trigonometry).

  • Leap::Hand::scaleFactor(sinceFrame): Scaling represents the distance between two hands. If the hands are closer together in the current frame compared to sinceFrame, the return value will be less than 1.0 but greater than 0.0. If the hands are further apart the return value will be greater than 1.0 to indicate the factor by which the distance has increased.
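
Pulling these together, a hedged sketch that compares the current frame with one from roughly ten frames ago (assuming a connected controller and at least one hand in view) might look like this:

    // A hedged sketch: measure how the first hand has moved, rotated, and
    // scaled over roughly the last ten frames.
    const Leap::Frame frame      = controller.frame();
    const Leap::Frame sinceFrame = controller.frame(10);
    if (frame.hands().empty() || ! sinceFrame.isValid()) return;

    const Leap::Hand hand = frame.hands()[0];

    const Leap::Vector moved = hand.translation(sinceFrame);   // millimeters
    float angle              = hand.rotationAngle(sinceFrame); // radians
    const Leap::Vector axis  = hand.rotationAxis(sinceFrame);  // unit vector
    float scale              = hand.scaleFactor(sinceFrame);   // 1.0 means no change

    std::cout << "moved " << moved.magnitude() << " mm, rotated " << angle
              << " rad around " << axis.x << "," << axis.y << "," << axis.z
              << ", scale factor " << scale << std::endl;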

Pointable

A Hand also can contain information about Pointable objects that were recognized in the frame as being attached to the hand. A distinction is made between the two different subclasses of pointable objects, Tool, which can be any slender, long object such as a chopstick or a pencil, and Finger, whose meaning should be apparent. You can request either fingers or tools from a Hand, or a list of pointables to get both if you don't care.
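
For instance, given a Hand instance named hand (an assumption of this fragment), the three accessors look like this:

    // A short sketch: fingers() and tools() each return only that kind of
    // pointable, while pointables() returns both.
    const Leap::FingerList fingers     = hand.fingers();
    const Leap::ToolList tools         = hand.tools();
    const Leap::PointableList anything = hand.pointables();

    for (int i = 0; i < anything.count(); i++) {
        const Leap::Pointable p = anything[i];
        std::cout << (p.isTool() ? "tool" : "finger")
                  << " of length " << p.length() << " mm" << std::endl;
    }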

Finger positioning

Suppose we want to know where a user's fingertips are in space. Here's a short snippet of code to output the spatial coordinates of the tips of the fingers on a hand that is being tracked by the controller:

    if (frame.hands().empty()) return;

    const Leap::Hand firstHand = frame.hands()[0];
    const Leap::FingerList fingers = firstHand.fingers();

Here we obtain a list of the fingers on the first hand of the frame. For an enjoyable diversion let's output the locations of the fingertips on the hand, given in the Leap coordinate system:

for (int i = 0; i < fingers.count(); i++) {
    const Leap::Finger finger = fingers[i];
        
    std::cout << "Detected finger " << i << " at position (" <<
        finger.tipPosition().x << ", " <<
        finger.tipPosition().y << ", " <<
        finger.tipPosition().z << ")" << std::endl;
}

This demonstrates how to get the position of the fingertips of the first hand that is recognized in the current frame. If you hold three fingers out the following dazzling output is printed:

Detected finger 0 at position (-119.867, 213.155, -65.763)
Detected finger 1 at position (-90.5347, 208.877, -61.1673)
Detected finger 2 at position (-142.919, 211.565, -48.6942)

While this is clearly totally awesome, the exact meaning of these numbers may not be immediately apparent. For points in space returned by the SDK the Leap coordinate system is used. Much like our forefathers believed the Earth to be the cornerstone of our solar system, your Leap device has similar notions of centricity. It measures locations by their distance from the Leap origin, a point centered on the top of the device. Negative X values represent a point in space to the left of the device, positive values are to the right. The Z coordinates work in much the same way, with positive values extending towards the user and negative values in the direction of the display. The Y coordinate is the distance from the top of the device, starting 25 millimeters above it and extending to about 600 millimeters (two feet) upwards. Note that the device cannot see below itself, so all Y coordinates will be positive.

An example of cursor control

By now we are feeling pretty saucy, having diligently run the sample code thus far and controlling our computer in a way never before possible. While there is certain utility and endless amusement afforded by printing out finger coordinates while waving your hands in the air and pretending to be a magician, there are even more exciting applications waiting to be written, so let's continue onwards and upwards.

Tip

Until computer-gesture interaction is commonplace, pretending to be a magician while you test the functionality of Leap SDK is not recommended in public places such as coffee shops.

In some cultures it is considered impolite to point at people. Fortunately your computer doesn't have feelings and won't mind if we use a pointing gesture to move its cursor around (you can even use a customarily offensive finger if you so choose). In order to determine where to move the cursor, we must first locate the position on the display that the user is pointing at. To accomplish this we will make use of the screen calibration and detection API in the SDK.

If you happen to leave your controller near a computer monitor it will do its best to try and determine the location and dimensions of the monitor by looking for a large, flat surface in its field of view. In addition you can use the complementary Leap calibration functionality to improve its accuracy if you are willing to take a couple of minutes to point at various dots on your screen. Note that once you have calibrated your screen, you should ensure that the relative positions of the Leap and the screen do not change.

Once your controller has oriented itself within your surroundings, hands and display, you can ask your trusty controller instance for a list of detected screens:

    // get list of detected screens
    const Leap::ScreenList screens = controller.calibratedScreens();
    
    // make sure we have a detected screen
    if (screens.empty()) return;
    const Leap::Screen screen = screens[0];

We now have a screen instance that we can use to find out the physical location in space of the screen as well as its boundaries and resolution. Who cares about all that though, when we can use the SDK to compute where we're pointing to with the intersect() method?

    // find the first finger or tool
    const Leap::Frame frame = controller.frame();
    const Leap::HandList hands = frame.hands();
    if (hands.empty()) return;
    const Leap::PointableList pointables = hands[0].pointables();
    if (pointables.empty()) return;
    const Leap::Pointable firstPointable = pointables[0];

    // get x, y coordinates on the first screen
    const Leap::Vector intersection = screen.intersect(
         firstPointable,
         true,  // normalize
         1.0f   // clampRatio
    );	

The vector intersection contains what we want to know here; the pixel pointed at by our pointable. If the pointable argument to intersect() is not actually pointing at the screen then the return value will be (NaN, NaN, NaN). NaN stands for not a number. We can easily check for the presence of non-finite values in a vector with the isValid() method:

    if (! intersection.isValid()) return;
    // print intersection coordinates
    std::cout << "You are pointing at (" <<
        intersection.x << ", " <<
        intersection.y << ", " <<
        intersection.z << ")" << std::endl;

Prepare to be astounded when you point at the middle of your screen and the transfixing message You are pointing at (0.519522, 0.483496, 0) is revealed. Assuming your screen resolution is larger than one pixel on either side, this output may be somewhat unexpected, so let's talk about what screen.intersect(const Pointable &pointable, bool normalize, float clampRatio=1.0f) is returning.

The intersect() method draws an imaginary ray from the tip of pointable extending in the same direction as your finger or tool and returns a three-element vector containing the coordinates of the point of intersection between the ray and the screen. If the second parameter normalize is set to false then intersect() will return the location in the Leap coordinate system. Since we have no interest in the real world we have set normalize to true, which causes the coordinates of the returned intersection vector to be fractions of the screen width and height.

Tip

When intersect() returns normalized coordinates, (0, 0, 0) is considered the bottom-left pixel, and (1, 1, 0) is the top-right pixel.

It is worth noting that many computer graphics coordinate systems define the top-left pixel as (0, 0) so use caution when using these coordinates with other libraries.

There is one last (optional) parameter to the intersect() method, clampRatio, which is used to expand or contract the boundaries of the area at which the user can point, should you want to allow pointing beyond the edges of the screen.

Now that we have our normalized screen position, we can easily work out the pixel coordinate in the direction of the user's rude gesticulations:

    unsigned int x = screen.widthPixels() * intersection.x;
    // flip y coordinate to standard top-left origin
    unsigned int y = screen.heightPixels() * (1.0f - intersection.y);
    
    std::cout << "You are offending the pixel at (" <<
        x << ", " << y << std::endl;

Since intersection.x and intersection.y are fractions of the screen dimensions, simply multiply by the boundary sizes to get our intersection coordinates on the screen. We'll go ahead and leave out the Z-coordinate since it's usually (OK, always) zero.

Now for the coup de grâce: moving the cursor. Here's how to do it on Mac OS X:

    CGPoint destPoint = CGPointMake(x, y);
    CGDisplayMoveCursorToPoint(kCGDirectMainDisplay, destPoint);

Note

You will need to #include <CoreGraphics/CoreGraphics.h> and link it (-framework CoreGraphics) to make use of CGDisplayMoveCursorToPoint().

Now all of our hard efforts are rewarded, and we can while away the rest of our days making the cursor zip around with nothing more than a twitch of the finger. At least until our arm gets tired. After a few seconds (or minutes, for the easily-amused) it may become apparent that the utility of such an application is severely limited, as we can't actually click on anything.

So maybe you shouldn't throw your mouse away just yet, but read on if you are ready to escape from the shackles of such an antiquated input device.

A gesture-triggered action

Let's go all the way here and implement our first proper gesture—a mouse click. The first question to ask is, what sort of gesture should trigger a click? One's initial response might be a twitch of your pointing finger, perhaps by making a dipping or curling motion. This feels natural and similar enough to using a mouse or trackpad, but there is a major flaw—in the movement of the fingertip to execute the gesture we would end up moving the cursor, resulting in the click taking place somewhere different from where we intended. A different solution is needed.

If we take full advantage of our limbs, and assuming we are not an amputee, we can utilize not just one but both hands, using one as a "pointer" hand and one as a "clicker" hand. We'll retain the outstretched finger as the cursor movement gesture for the pointer hand, and define a "click" gesture to be the touching of two fingers together on the clicker hand.

Let's create a true Leap mouse application to support our newly defined clicking gesture. An important first step is to choose a distance that represents two fingers touching. While at first blush a value of 0 mm would seem to be a reasonable definition of touching, consider the fact that the controller is not always perfect at recognizing two touching fingers as being distinct from each other, or even existing at all. If we choose a suitably small distance that we can call "touching", then the gesture will be triggered in one of the frames generated as the user closes their fingers together.

We'll begin with the obligatory listener class to handle frame events and keep track of our input state.

class MouseListener : public Leap::Listener {
public:
    MouseListener();
    const float clickActivationDistance = 40;
    virtual void onFrame(const Leap::Controller &);
    virtual void postMouseDown(unsigned x, unsigned y);
    virtual void postMouseUp(unsigned x, unsigned y);
…
};

On my revision 3 device, a distance of 40mm seems to work reasonably well.

For our onFrame handler we can build on our previous code. However, now we need to keep track of not just one hand but two, which introduces quite a bit of extra complexity.

For starters, the method Leap::Frame::hands() is defined as returning the hands detected in a frame in an arbitrary order, meaning we cannot always expect the same hand to correspond to the same index in the returned HandList. This makes sense: some frames will likely fail to recognize both hands, a new list of hands must be constructed each time hands are lost and recognized again, and there is no guarantee that the ordering will be the same.

A further problem is that we will need to work out which of the user's hands is the left and which is the right, because we should probably use the most dexterous hand as the pointer hand and the inferior one as the clicker.

Indeed, even determining the primary and secondary hands is not quite as simple as one might think, because the primary and secondary hands will be reversed for left-handed people. Left-handed people have had it hard enough for thousands of years, so it would not be right for us to make assumptions.

Note

The English word "dexterity" comes from the Latin root dexter, relating to the right or right hand, and also meaning "skillful", "fortunate", or "proper" and often having a positive connotation. Contrast this with to the word for left—"sinister".

We'll start by adding some instance variables and initializers:

protected:
    bool clickActive; // currently clicking?
    bool leftHanded;  // user setting
    int32_t clickerHandID, pointerHandID; // last recognized

leftHanded will act as a flag which we can use when we determine which hand is the pointer and which is the clicker. clickerHandID and pointerHandID will be used to keep track of which detected hand from a given frame corresponds to the pointer and clicker.

We can create an initializing constructor like so:

MouseListener::MouseListener()
  : clickActive(false), leftHanded(false),
    clickerHandID(0), pointerHandID(0) {}

Explicitly initializing variables is good practice, in particular because the rules for which types are initialized in various situations in C++ are so multitudinous that memorizing them is discouraged. Using the initializer list syntax is considered good style because it can save unnecessary constructor calls when member objects are assigned new values, although since we are only initializing primitive types, we get no such reduction in overhead here.

    Leap::Hand pointerHand, clickerHand;
    
    if (pointerHandID)
        pointerHand = frame.hand(pointerHandID);
    // fall back to the first detected hand if no pointer hand has been chosen
    // yet, or if the controller has lost track of the one we were following
    if (! pointerHand.isValid())
        pointerHand = hands[0];

If hands are detected, we will always at least have a pointer hand defined. If we've already decided on which hand to use (pointerHandID is set) then we should see if that hand is available in the current frame. When Leap::Frame::hand(int32_t id) is called with a previously detected hand's identifier, it will return a corresponding hand instance. If the controller has lost track of the hand it was following, then you'll still get a Hand back, but isValid() will be false. If we fail to locate our old hand or one hasn't been set yet, we'll assign the first detected hand for the case where we only have one hand in the frame.

    if (clickerHandID)
        clickerHand = frame.hand(clickerHandID);

We attempt to locate the previously detected clicker hand if possible.

    if (! clickerHand.isValid() && hands.count() == 2) {
        // figure out clicker and pointer hand
                
        // which hand is on the left and which is on the right?
        Leap::Hand leftHand, rightHand;
        if (hands[0].palmPosition()[0] <= hands[1].palmPosition()[0]) {
            leftHand = hands[0];
            rightHand = hands[1];
        } else {
            leftHand = hands[1];
            rightHand = hands[0];
        }

Before we try to work out which hand is the clicker and which should be the pointer, we'll need to know which hand is on the left and which is on the right. A simple comparison of the palm X coordinates does the trick nicely.

        if (leftHanded) {
            pointerHand = leftHand;
            clickerHand = rightHand;
        } else {
            pointerHand = rightHand;
            clickerHand = leftHand;
        }

Here we assign the primary hand to be the pointer, and the secondary to be the clicker.

        clickerHandID = clickerHand.id();
        pointerHandID = pointerHand.id();
    }

Now that we've decided the hands, we need to retain references to those particular hands for as long as the controller can keep track.

    const Leap::PointableList pointables = pointerHand.pointables();

Instead of hands[0].pointables() as before, now we'll want to use pointerHand for the screen intersection. The rest of the pointer manipulation code remains the same.

if (! clickerHand.isValid()) return;

Now it is time to handle the detection of a click, but only if there are two hands.

    const Leap::PointableList clickerFingers = clickerHand.pointables();
    if (clickerFingers.count() != 2) return;

If we don't find exactly two fingers on the clicker hand, then there is not going to be much we can do in terms of determining how far apart they are. We want to know if the user has touched two fingers together or not.

    float clickFingerDistance = clickerFingers[0].tipPosition().distanceTo(
        clickerFingers[1].tipPosition()
    );

The Leap::Vector class has a handy distanceTo() method that tells us how far apart two points in space are.

    if (! clickActive && clickFingerDistance < clickActivationDistance) {
        clickActive = true;
        std::cout << "mouseDown\n";
        postMouseDown(x, y);

If we have not already posted a mouse down event and if the clicker hand's two fingers are touching, then we will simulate a click with postMouseDown().

    } else if (clickActive && clickFingerDistance > clickActivationDistance) {
        std::cout << "mouseUp\n";
        clickActive = false;
        postMouseUp(x, y);
    }

And likewise for when the two fingers come apart, we finish the click and release the button. Unfortunately, just as with the cursor movement code, there is no simple cross-platform way to synthesize mouse events, but the OSX code is provided as follows for completeness:

void MouseListener::postMouseDown(unsigned x, unsigned y) {
    CGEventRef mouseDownEvent = CGEventCreateMouseEvent(
                                       NULL, kCGEventLeftMouseDown,
                                       CGPointMake(x, y),
                                       kCGMouseButtonLeft
                                                        );
    CGEventPost(kCGHIDEventTap, mouseDownEvent);
    CFRelease(mouseDownEvent);
}

void MouseListener::postMouseUp(unsigned x, unsigned y) {
    CGEventRef mouseUpEvent = CGEventCreateMouseEvent(
                                       NULL, kCGEventLeftMouseUp,
                                       CGPointMake(x, y),
                                       kCGMouseButtonLeft
                                                        );
    CGEventPost(kCGHIDEventTap, mouseUpEvent);
    CFRelease(mouseUpEvent);
}

And now you can throw away your mouse for good! Actually, don't do that. First be sure to run the screen calibration tool.

Truth be told, there are plenty of improvements that could be made to our simple, modest mouse replacement application. Implementing right-click, a scroll wheel, and click-and-drag is left as an exercise for the reader.

 

Summary


And thus begins our exciting "leap" into the SDK, starting with the basics of reading finger information and tracking where the user is pointing. We'll continue filling in the details of the rest of the functionality in the SDK along with some more fun examples in the rest of the book. While we have only just scratched the surface of what can be done with Leap, you should already be starting to get ideas of how to engage with users using hand motion input. Try out some of the example applications that come with the SDK for inspiration to get a feel for what it can do.

You should now have a working application written in C++ with a frame callback that has access to all of the hand tracking data captured by the controller. Next up, we'll look at making an application interface that is as responsive as possible.

About the Author
  • Mischa Spiegelmock

    Mischa Spiegelmock is an accomplished software engineer from the San Francisco Bay Area. Slightly infamous for light-hearted technical pranks in his youth, he is now a respectable CTO at a healthcare software startup. His passions are architecting elegant and useful programs and sharing his insights into software design with others in a straightforward and entertaining fashion.
