Leap Motion Development Essentials: Leverage the power of Leap Motion to develop and deploy a fully interactive application

By Mischa Spiegelmock
Book Oct 2013 106 pages 1st Edition

Product Details

Publication date : Oct 25, 2013
Length 106 pages
Edition : 1st Edition
Language : English
ISBN-13 : 9781849697729
Vendor : Leap Motion

Leap Motion Development Essentials

Chapter 1. Leap Motion SDK – A Quick Start

The Leap Motion is a peripheral input device that allows users to interact with the software through hand gestures. This chapter will explain the features and software interface exposed by the Leap Motion SDK, which will allow us to take advantage of the hand motion input.

An overview of the SDK

The Leap device uses a pair of cameras and an infrared pattern projected by LEDs to generate an image of your hands with depth information. A very small amount of processing is done on the device itself, in order to keep the cost of the units low.

The images are post-processed on your computer to remove noise, and to construct a model of your hands, fingers, and pointy tools that you are holding.

As an application developer, you can make use of this data via the Leap software development kit, which contains a powerful high-level API for easily integrating gesture input into your applications. Because developers do not want to go to the trouble of processing raw input in the form of depth-mapped images, skeleton models, and point cloud data, the SDK provides abstracted models that report what your user is doing with their hands. With the SDK you can write applications that make use of some familiar concepts:

  • All hands detected in a frame, including rotation, position, velocity, and movement since an earlier frame

  • All fingers and pointy tools (collectively known as "pointables") recognized as attached to each hand, with rotation, position, and velocity

  • The exact pixel location on a display pointed at by a finger or tool

  • Basic recognition of gestures such as swipes and taps

  • Detection of position and orientation changes between frames

Quick start

Congratulations on your purchase of a genuine fine quality Leap Motion gesture input device! This handy guide will walk you through the assembly, proper usage, and care of your Leap Motion.

To get started, remove your Leap Motion SDK and Leap Motion™ device from the box and unpack the shared object files and headers from their shrink-wrap. Gently place your SDK in a handy directory and fire up your favorite IDE to begin.


Downloading the example code

You can download the example code files for all Packt books you have purchased from your account. If you purchased this book elsewhere, you can register to have the files e-mailed directly to you.

We'll get things going right away with a short C++ application to illustrate how to interact with the Leap SDK to receive events and input.

#include <iostream>
#include <Leap.h>

class Quickstart : public Leap::Listener {
public:
    virtual void onConnect(const Leap::Controller &);
    virtual void onFrame(const Leap::Controller &);
};

To interact with the Leap software, we will begin by creating a subclass of Leap::Listener and defining the callback methods we wish to receive. While it is possible to poll the controller for the current frame, generally you will want to make your program as responsive as possible, which is most easily accomplished by acting on input events immediately via callbacks. In case you're wondering, the primary available callback methods are:

  • onInit: This is called only once, when the listener is first added to a controller.

  • onExit: This indicates that the controller is destroyed or the listener is removed.

  • onConnect: This indicates that the device is connected and recognized by the driver and is ready to start processing frames.

  • onDisconnect: This indicates that the device is disconnected. Connection state can also be polled by checking controller.isConnected() so that you don't need to keep track of whether the controller is plugged in or not. (An earlier version of the SDK lacked this state accessor but the kind folks at Leap Motion realized us devs are really lazy).

  • onFrame: This indicates that a new frame of input data has been captured and processed. This is the only handler you really need to implement if you want to make use of the Leap.

Let's implement our onConnect handler real quick so that we can verify that the controller driver and SDK are communicating with the device properly. If everything is working as it should be, the following code should cause a message to be emitted on stdout when the device is plugged in to a USB port:

void Quickstart::onConnect(const Leap::Controller &controller) {
    std::cout << "Hello, Leap user!\n";
}

We can display a friendly contrived greeting to ourselves when the program is run with the controller connected and the driver software is running. This will cause breathless anticipation in your users as they prepare themselves to experience the magic and wonder of this fantastic new input technology.

A listener is attached to a Leap::Controller, which acts as the primary interface between the Leap driver and your application. A controller tracks processed frames, device connection state, configuration parameters, and invokes callback methods on a listener.

To begin receiving events, instantiate a new listener and a controller:

int main() {
    // create instance of our Listener subclass
    Quickstart listener;
    // create generic Controller to interface with the Leap device
    Leap::Controller controller;
    // tell the Controller to start sending events to our Listener
    controller.addListener(listener);
    // keep the process alive until the user presses Enter
    std::cin.get();
    controller.removeListener(listener);
    return 0;
}

If you place your hands over the device, Quickstart::onFrame() will start being called. Let's create an onFrame handler that reports on the horizontal velocity of the first finger or tool detected:

void Quickstart::onFrame(const Leap::Controller &controller) {
    const Leap::Frame frame = controller.frame();

controller.frame() returns a Frame instance, which contains information detected about our scene at a specific point in time. It has a single optional parameter history, which allows you to travel backwards through the misty sands of time to compare frames and determine hand changes over time. Unfortunately this time machine is rather limited; only about 60 previous frames are stored in the controller.
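As a side note: if you need a longer memory than the controller's roughly 60-frame history, you can keep your own. Here is a minimal sketch of that idea in plain C++ (no Leap types; FrameSnapshot and FrameHistory are our own invented names) — a fixed-size ring buffer that you would fill from each Frame inside onFrame() and query later:

```cpp
#include <array>
#include <cstddef>

// Hypothetical per-frame record; in real code you would fill this
// from Leap::Frame inside onFrame().
struct FrameSnapshot {
    float tipX, tipY, tipZ;  // first pointable's tip position (mm)
};

// Fixed-capacity ring buffer of the most recent N snapshots.
template <std::size_t N>
class FrameHistory {
public:
    void push(const FrameSnapshot &s) {
        buffer_[head_] = s;
        head_ = (head_ + 1) % N;
        if (size_ < N) ++size_;
    }
    // 0 = most recent frame, 1 = the one before it, and so on.
    const FrameSnapshot &back(std::size_t ago) const {
        return buffer_[(head_ + N - 1 - ago) % N];
    }
    std::size_t size() const { return size_; }
private:
    std::array<FrameSnapshot, N> buffer_{};
    std::size_t head_ = 0;
    std::size_t size_ = 0;
};
```

Pushing one snapshot per frame and then comparing back(0) against back(30) would give you movement over the last 30 frames without being limited to the controller's built-in window.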

    // do nothing unless hands are detected
    if (frame.hands().empty()) return;

Get used to making these sorts of checks. You'll be seeing a lot more of them. Here, we have no interest in processing this frame unless there are hands in it.

    // first detected hand
    const Leap::Hand firstHand = frame.hands()[0];
    // first pointable object (finger or tool)
    const Leap::PointableList pointables = firstHand.pointables();
    if (pointables.empty()) return;
    const Leap::Pointable firstPointable = pointables[0];

All fingers attached to a hand and all tools that the hand is grasping are returned as a PointableList, which behaves like an std::vector, including providing an iterator for people who are into that. Most commonly we will want to find out where a pointable is in space and how fast it is moving, which we can easily find out with tipPosition() and tipVelocity() respectively. Both return Leap::Vectors consisting of X, Y, and Z components.

    std::cout << "Pointable X velocity: " << firstPointable.tipVelocity()[0] << std::endl;

If you wave an outstretched finger or a tool (a chopstick works pretty well if you happen to have one lying around) back and forth over the controller you will be rewarded with the following riveting output:

Pointable X velocity: -223.937
Pointable X velocity: -117.421
Pointable X velocity: -242.293
Pointable X velocity: -141.43
Pointable X velocity: -61.9314
Pointable X velocity: 9.85328
Pointable X velocity: 41.9575
Pointable X velocity: 71.7436
Pointable X velocity: 96.0459
Pointable X velocity: 116.465

Leftwards motion is represented by negative values (mm/s). Rightwards motion is positive.
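A raw velocity stream like this is noisy, so before acting on it you will usually want a dead zone. As a small illustrative helper (plain C++, not part of the SDK; the function name and the 50 mm/s threshold are our own choices), horizontal direction could be classified like so:

```cpp
#include <string>

// Classify horizontal motion from the X component of tipVelocity()
// (mm/s, negative = leftwards). The dead zone filters out the jitter
// visible in the output above; 50 mm/s is an arbitrary choice.
std::string horizontalDirection(float xVelocity, float deadZone = 50.0f) {
    if (xVelocity < -deadZone) return "left";
    if (xVelocity >  deadZone) return "right";
    return "still";
}
```

Feeding it firstPointable.tipVelocity()[0] each frame yields "left", "right", or "still" instead of raw millimeters per second.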


A note on the sample code

Because of frequent changes to the SDK, your best bet for finding the most up-to-date code samples is to check out the GitHub repository.

All sample code can be found in the repository mentioned above. This program and others like it can be built using the following command on Mac OS X or Linux with GCC or Clang:

$ g++ quickstart.cpp -lLeap -Lpath/to/Leap_SDK/lib/libc++ -Ipath/to/Leap_SDK/include -o quickstart

Note that path/to/Leap_SDK should be replaced with the location of your Leap_SDK directory. It may be helpful to set an environment variable with the path or install the libraries and headers system-wide.

Major SDK components

Now that we've written our first gesture-enabled program, let's talk about the major components of the Leap SDK. We'll visit each of these in more depth as we continue our journey.


The Leap::Controller class is a liaison between the controller and your code. Whenever you wish to do anything at all with the device you must first go through your controller. From a controller instance we can interact with the device configuration, detected displays, current and past frames, and set up event handling with our listener subclass.


An instance of the Config class can be obtained from a controller. It provides a key/value interface to modify the operation of the Leap device and driver behavior. Some of the options available are:

  • Robust mode: Somewhat slower frame processing but works better with less light.

  • Low resource mode: Less accurate and responsive tracking, but uses less CPU and USB bandwidth.

  • Tracking priority: Can prioritize either precision of tracking data or the rate at which data is sampled (resulting in approximately 4x data frame-rate boost), or a balance between the two (approximately 2x faster than the precise mode).

  • Flip tracking: Allows you to use the controller with the USB cable coming out of either side. This setting simply flips the positive and negative coordinates on the X-axis.


A controller may have one or more calibratedScreens, which are computer displays in the field of view of the controller, which have a known position and dimensions. Given a pointable direction and a screen we can determine what the user is pointing at.


Several math-related functions and types such as Leap::Vector, Leap::Matrix, and Leap::FloatArray are provided by LeapMath.h. All points in space, screen coordinates, directions, and normals are returned by the API as three-element vectors representing X, Y, and Z coordinates or unit vectors.


The real juicy information is stored inside each Frame. A Frame instance represents a point in time in which the driver was able to generate an updated view of its world and detect where screens, your hands, and pointables are.


At present the only body parts you can use with the controller are your hands. Given a frame instance we can inspect the number of hands in the frame, their position and rotation, normal vectors, and gestures. The hand motion API allows you to compare two frames and determine if the user has performed a translation, rotation, or scaling gesture with their hands in that time interval. The methods we can call to check for these interactions are:

  • Leap::Hand::translation(sinceFrame): Translation (also known as movement) returned as a Leap::Vector including the direction of the movement of the hand and the distance travelled in millimeters.

  • Leap::Hand::rotationMatrix(sinceFrame), ::rotationAxis(sinceFrame), ::rotationAngle(sinceFrame, axisVector): Hand rotation, either described as a rotation matrix, vector around an axis or float angle around a vector between –π and π radians (that's -180° to 180° for those of you who are a little rusty with your trigonometry).

  • Leap::Hand::scaleFactor(sinceFrame): Scaling represents the distance between two hands. If the hands are closer together in the current frame compared to sinceFrame, the return value will be less than 1.0 but greater than 0.0. If the hands are further apart the return value will be greater than 1.0 to indicate the factor by which the distance has increased.
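Since scaleFactor is a multiplicative factor per frame interval, a natural way to drive something like a pinch-to-zoom control is to multiply successive readings into a running zoom level. Here is one possible sketch (plain C++; ZoomTracker and its clamping range are our own, not SDK API):

```cpp
#include <algorithm>

// Accumulate Leap::Hand::scaleFactor(sinceFrame) readings into a zoom
// level. Factors below 1.0 (hands moving together) zoom out, factors
// above 1.0 zoom in; the result is clamped to a sane range.
class ZoomTracker {
public:
    void apply(float scaleFactor) {
        zoom_ = std::min(maxZoom_, std::max(minZoom_, zoom_ * scaleFactor));
    }
    float zoom() const { return zoom_; }
private:
    float zoom_ = 1.0f;
    const float minZoom_ = 0.25f;
    const float maxZoom_ = 8.0f;
};
```

Inside onFrame() you might call tracker.apply(hand.scaleFactor(controller.frame(1))) and use tracker.zoom() to scale your scene.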


A Hand can also contain information about Pointable objects that were recognized in the frame as being attached to the hand. A distinction is made between the two different subclasses of pointable objects: Tool, which can be any slender, long object such as a chopstick or a pencil, and Finger, whose meaning should be apparent. You can request either fingers or tools from a Hand, or a list of pointables to get both if you don't care.

Finger positioning

Suppose we want to know where a user's fingertips are in space. Here's a short snippet of code to output the spatial coordinates of the tips of the fingers on a hand that is being tracked by the controller:

    if (frame.hands().empty()) return;

    const Leap::Hand firstHand = frame.hands()[0];
    const Leap::FingerList fingers = firstHand.fingers();

Here we obtain a list of the fingers on the first hand of the frame. For an enjoyable diversion let's output the locations of the fingertips on the hand, given in the Leap coordinate system:

for (int i = 0; i < fingers.count(); i++) {
    const Leap::Finger finger = fingers[i];
    std::cout << "Detected finger " << i << " at position (" <<
        finger.tipPosition().x << ", " <<
        finger.tipPosition().y << ", " <<
        finger.tipPosition().z << ")" << std::endl;
}

This demonstrates how to get the position of the fingertips of the first hand that is recognized in the current frame. If you hold three fingers out the following dazzling output is printed:

Detected finger 0 at position (-119.867, 213.155, -65.763)
Detected finger 1 at position (-90.5347, 208.877, -61.1673)
Detected finger 2 at position (-142.919, 211.565, -48.6942)

While this is clearly totally awesome, the exact meaning of these numbers may not be immediately apparent. For points in space returned by the SDK the Leap coordinate system is used. Much like our forefathers believed the Earth to be the cornerstone of our solar system, your Leap device has similar notions of centricity. It measures locations by their distance from the Leap origin, a point centered on the top of the device. Negative X values represent a point in space to the left of the device, positive values are to the right. The Z coordinates work in much the same way, with positive values extending towards the user and negative values in the direction of the display. The Y coordinate is the distance from the top of the device, starting 25 millimeters above it and extending to about 600 millimeters (two feet) upwards. Note that the device cannot see below itself, so all Y coordinates will be positive.
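Those sign conventions are easy to forget, so here is a tiny helper that spells them out (plain C++; describePosition is our own illustrative function, not part of the SDK):

```cpp
#include <string>

// Describe a point in the Leap coordinate system (millimeters from the
// origin on top of the device). Matches the conventions described above:
// +X right, -X left; +Z toward the user, -Z toward the display; Y up.
std::string describePosition(float x, float y, float z) {
    std::string s;
    s += (x < 0 ? "left of device" : "right of device");
    s += ", ";
    s += (z < 0 ? "toward display" : "toward user");
    s += ", ";
    s += std::to_string(static_cast<int>(y)) + " mm above";
    return s;
}
```

Passing in the first fingertip above, (-119.867, 213.155, -65.763), describes a point to the left of the device, toward the display, about 213 mm up.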

An example of cursor control

By now we are feeling pretty saucy, having diligently run the sample code thus far and controlling our computer in a way never before possible. While there is certain utility and endless amusement afforded by printing out finger coordinates while waving your hands in the air and pretending to be a magician, there are even more exciting applications waiting to be written, so let's continue onwards and upwards.


Until computer-gesture interaction is commonplace, pretending to be a magician while you test the functionality of Leap SDK is not recommended in public places such as coffee shops.

In some cultures it is considered impolite to point at people. Fortunately your computer doesn't have feelings and won't mind if we use a pointing gesture to move its cursor around (you can even use a customarily offensive finger if you so choose). In order to determine where to move the cursor, we must first locate the position on the display that the user is pointing at. To accomplish this we will make use of the screen calibration and detection API in the SDK.

If you happen to leave your controller near a computer monitor it will do its best to try and determine the location and dimensions of the monitor by looking for a large, flat surface in its field of view. In addition you can use the complementary Leap calibration functionality to improve its accuracy if you are willing to take a couple of minutes to point at various dots on your screen. Note that once you have calibrated your screen, you should ensure that the relative positions of the Leap and the screen do not change.

Once your controller has oriented itself within your surroundings, hands and display, you can ask your trusty controller instance for a list of detected screens:

    // get list of detected screens
    const Leap::ScreenList screens = controller.calibratedScreens();
    // make sure we have a detected screen
    if (screens.empty()) return;
    const Leap::Screen screen = screens[0];

We now have a screen instance that we can use to find out the physical location in space of the screen as well as its boundaries and resolution. Who cares about all that though, when we can use the SDK to compute where we're pointing to with the intersect() method?

    // find the first finger or tool
    const Leap::Frame frame = controller.frame();
    const Leap::HandList hands = frame.hands();
    if (hands.empty()) return;
    const Leap::PointableList pointables = hands[0].pointables();
    if (pointables.empty()) return;
    const Leap::Pointable firstPointable = pointables[0];

    // get x, y coordinates on the first screen
    const Leap::Vector intersection = screen.intersect(
         firstPointable,  // pointable to cast a ray from
         true,            // normalize
         1.0f);           // clampRatio
The vector intersection contains what we want to know here; the pixel pointed at by our pointable. If the pointable argument to intersect() is not actually pointing at the screen then the return value will be (NaN, NaN, NaN). NaN stands for not a number. We can easily check for the presence of non-finite values in a vector with the isValid() method:

    if (! intersection.isValid()) return;
    // print intersection coordinates
    std::cout << "You are pointing at (" <<
        intersection.x << ", " <<
        intersection.y << ", " <<
        intersection.z << ")" << std::endl;

Prepare to be astounded when you point at the middle of your screen and the transfixing message You are pointing at (0.519522, 0.483496, 0) is revealed. Assuming your screen resolution is larger than one pixel on either side, this output may be somewhat unexpected, so let's talk about what screen.intersect(const Pointable &pointable, bool normalize, float clampRatio=1.0f) is returning.

The intersect() method draws an imaginary ray from the tip of pointable extending in the same direction as your finger or tool and returns a three-element vector containing the coordinates of the point of intersection between the ray and the screen. If the second parameter normalize is set to false then intersect() will return the location in the Leap coordinate system. Since we have no interest in the real world we have set normalize to true, which causes the coordinates of the returned intersection vector to be fractions of the screen width and height.


When intersect() returns normalized coordinates, (0, 0, 0) is considered the bottom-left pixel, and (1, 1, 0) is the top-right pixel.

It is worth noting that many computer graphics coordinate systems define the top-left pixel as (0, 0) so use caution when using these coordinates with other libraries.

There is one last (optional) parameter to the intersect() method, clampRatio, which is used to expand or contract the boundaries of the area at which the user can point, should you want to allow pointing beyond the edges of the screen.

Now that we have our normalized screen position, we can easily work out the pixel coordinate in the direction of the user's rude gesticulations:

    unsigned int x = screen.widthPixels() * intersection.x;
    // flip y coordinate to standard top-left origin
    unsigned int y = screen.heightPixels() * (1.0f - intersection.y);
    std::cout << "You are offending the pixel at (" <<
        x << ", " << y << ")" << std::endl;

Since intersection.x and intersection.y are fractions of the screen dimensions, simply multiply by the boundary sizes to get our intersection coordinates on the screen. We'll go ahead and leave out the Z-coordinate since it's usually (OK, always) zero.

Now for the coup de grace—moving the cursor location, here's how to do it on Mac OS X:

    CGPoint destPoint = CGPointMake(x, y);
    CGDisplayMoveCursorToPoint(kCGDirectMainDisplay, destPoint);


You will need to #include <CoreGraphics/CoreGraphics.h> and link it (-framework CoreGraphics) to make use of CGDisplayMoveCursorToPoint().

Now all of our hard efforts are rewarded, and we can while away the rest of our days making the cursor zip around with nothing more than a twitch of the finger. At least until our arm gets tired. After a few seconds (or minutes, for the easily-amused) it may become apparent that the utility of such an application is severely limited, as we can't actually click on anything.

So maybe you shouldn't throw your mouse away just yet, but read on if you are ready to escape from the shackles of such an antiquated input device.

A gesture-triggered action

Let's go all the way here and implement our first proper gesture—a mouse click. The first question to ask is, what sort of gesture should trigger a click? One's initial response might be a twitch of your pointing finger, perhaps by making a dipping or curling motion. This feels natural and similar enough to using a mouse or trackpad, but there is a major flaw—in the movement of the fingertip to execute the gesture we would end up moving the cursor, resulting in the click taking place somewhere different from where we intended. A different solution is needed.

If we take full advantage of our limbs, and assuming we are not an amputee, we can utilize not just one but both hands, using one as a "pointer" hand and one as a "clicker" hand. We'll retain the outstretched finger as the cursor movement gesture for the pointer hand, and define a "click" gesture to be the touching of two fingers together on the clicker hand.

Let's create a true Leap mouse application to support our newly defined clicking gesture. An important first step would be to choose a distance that represents two fingers touching. While at first blush a value of 0 mm would seem to be a reasonable definition of touching together, consider the fact that the controller is not always perfect in recognizing two touching fingers as being distinct from each other, or even existing at all. If we choose a suitably small distance we can call "touching" then the gesture will be triggered in one of the frames generated as the user closes their fingers together.
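A refinement worth considering: because the measured distance jitters from frame to frame, a single threshold can fire spurious down/up pairs right at the boundary. One common remedy is hysteresis, engaging the click below one distance and releasing it only above a larger one. A sketch of that idea in plain C++ (ClickDetector and its thresholds are our own, not something the SDK provides):

```cpp
// Debounced "fingers touching" detector: the click engages below
// pressDistance and only releases above releaseDistance (mm), so
// noise between the two thresholds cannot toggle the state.
class ClickDetector {
public:
    ClickDetector(float pressDistance, float releaseDistance)
        : press_(pressDistance), release_(releaseDistance) {}

    // Feed the current finger-to-finger distance each frame;
    // returns true while the click is held.
    bool update(float distance) {
        if (!active_ && distance < press_)
            active_ = true;
        else if (active_ && distance > release_)
            active_ = false;
        return active_;
    }
private:
    float press_, release_;
    bool active_ = false;
};
```

The single-threshold version used in the listener below behaves the same way, except that the press and release distances coincide.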

We'll begin with the obligatory listener class to handle frame events and keep track of our input state.

class MouseListener : public Leap::Listener {
public:
    const float clickActivationDistance = 40; // millimeters
    virtual void onFrame(const Leap::Controller &);
    virtual void postMouseDown(unsigned x, unsigned y);
    virtual void postMouseUp(unsigned x, unsigned y);

On my revision 3 device, a distance of 40mm seems to work reasonably well.

For our onFrame handler we can build on our previous code. However now we need to keep track of not just one hand but two, which introduces quite a bit of extra complexity.

For starters, the method Leap::Frame::hands() is defined as returning the hands detected in a frame in an arbitrary order, meaning we cannot always expect the same hand to correspond to the same index in the returned HandList. This makes sense, because some frames will likely fail to recognize both hands and a new list of hands will need to be constructed as the detected hands are unrecognized and recognized again, and there is no guarantee that the ordering will be the same.

A further problem is that we will need to work out which is the user's left and right hands, because we should probably use the most dexterous hand as the pointer hand and the inferior, the clicker.

Indeed, even determining the primary and secondary hands is not quite as simple as one might think, because the primary and secondary hands will be reversed for left-handed people. Left-handed people have had it hard enough for thousands of years, so it would not be right for us to make assumptions.


The English word "dexterity" comes from the Latin root dexter, relating to the right or right hand, and also meaning "skillful", "fortunate", or "proper", often with a positive connotation. Contrast this with the word for left—"sinister".

We'll start by adding some instance variables and initializers:

    bool clickActive; // currently clicking?
    bool leftHanded;  // user setting
    int32_t clickerHandID, pointerHandID; // last recognized

leftHanded will act as a flag which we can use when we determine which hand is the pointer and which is the clicker. clickerHandID and pointerHandID will be used to keep track of which detected hand from a given frame corresponds to the pointer and clicker.

We can create an initializing constructor like so:

    MouseListener()
      : clickActive(false), leftHanded(false),
        clickerHandID(0), pointerHandID(0) {}

Explicitly initializing variables is good practice, in particular because the rules for which types are initialized in various situations in C++ are so multitudinous that memorizing them is discouraged. Using the initializer list syntax is considered good style because it can save unnecessary constructor calls when member objects are assigned new values, although since we are only initializing primitive types, we get no such reduction in overhead here.

    Leap::Hand pointerHand, clickerHand;
    if (pointerHandID) {
        pointerHand = frame.hand(pointerHandID);
        if (! pointerHand.isValid())
            pointerHand = hands[0];
    } else {
        pointerHand = hands[0];
    }

If hands are detected, we will always at least have a pointer hand defined. If we've already decided on which hand to use (pointerHandID is set) then we should see if that hand is available in the current frame. When Leap::Frame::hand(int32_t id) is called with a previously detected hand's identifier, it will return a corresponding hand instance. If the controller has lost track of the hand it was following, then you'll still get a Hand back, but isValid() will be false. If we fail to locate our old hand or one hasn't been set yet, we'll assign the first detected hand for the case where we only have one hand in the frame.

    if (clickerHandID)
        clickerHand = frame.hand(clickerHandID);

We attempt to locate the previously detected clicker hand if possible.

    if (! clickerHand.isValid() && hands.count() == 2) {
        // figure out clicker and pointer hand
        // which hand is on the left and which is on the right?
        Leap::Hand leftHand, rightHand;
        if (hands[0].palmPosition()[0] <= hands[1].palmPosition()[0]) {
            leftHand = hands[0];
            rightHand = hands[1];
        } else {
            leftHand = hands[1];
            rightHand = hands[0];
        }

Before we try to work out which hand is the clicker and which should be the pointer, we'll need to know which is the left hand and which is the right hand. A simple comparison of the palms' X coordinates will do the trick nicely.

        if (leftHanded) {
            pointerHand = leftHand;
            clickerHand = rightHand;
        } else {
            pointerHand = rightHand;
            clickerHand = leftHand;
        }

Here we assign the primary hand to be the pointer, and the secondary to be the clicker.

        clickerHandID = clickerHand.id();
        pointerHandID = pointerHand.id();

Now that we've decided the hands, we need to retain references to those particular hands for as long as the controller can keep track.

    const Leap::PointableList pointables = pointerHand.pointables();

Instead of hands[0].pointables() as before, now we'll want to use pointerHand for the screen intersection. The rest of the pointer manipulation code remains the same.

    if (! clickerHand.isValid()) return;

Now it is time to handle the detection of a click, but only if there are two hands.

    const Leap::PointableList clickerFingers = clickerHand.pointables();
    if (clickerFingers.count() != 2) return;

If we don't find exactly two fingers on the clicker hand, then there is not going to be much we can do in terms of determining how far apart they are. We want to know if the user has touched two fingers together or not.

    float clickFingerDistance = clickerFingers[0].tipPosition().distanceTo(
        clickerFingers[1].tipPosition());

The Leap::Vector class has a handy distanceTo() method that tells us how far apart two points in space are.

    if (! clickActive && clickFingerDistance < clickActivationDistance) {
        clickActive = true;
        std::cout << "mouseDown\n";
        postMouseDown(x, y);

If we have not already posted a mouse down event and if the clicker hand's two fingers are touching, then we will simulate a click with postMouseDown().

    } else if (clickActive && clickFingerDistance > clickActivationDistance) {
        std::cout << "mouseUp\n";
        clickActive = false;
        postMouseUp(x, y);
    }

And likewise for when the two fingers come apart, we finish the click and release the button. Unfortunately, just as with the cursor movement code, there is no simple cross-platform way to synthesize mouse events, but the OSX code is provided as follows for completeness:

void MouseListener::postMouseDown(unsigned x, unsigned y) {
    CGEventRef mouseDownEvent = CGEventCreateMouseEvent(
                                       NULL, kCGEventLeftMouseDown,
                                       CGPointMake(x, y),
                                       kCGMouseButtonLeft);
    CGEventPost(kCGHIDEventTap, mouseDownEvent);
    CFRelease(mouseDownEvent);
}

void MouseListener::postMouseUp(unsigned x, unsigned y) {
    CGEventRef mouseUpEvent = CGEventCreateMouseEvent(
                                       NULL, kCGEventLeftMouseUp,
                                       CGPointMake(x, y),
                                       kCGMouseButtonLeft);
    CGEventPost(kCGHIDEventTap, mouseUpEvent);
    CFRelease(mouseUpEvent);
}

And now you can throw away your mouse for good! Actually, don't do that. First be sure to run the screen calibration tool.

Truth be told, there are plenty of improvements that could be made to our simple, modest mouse replacement application. Implementing right-click, a scroll wheel and click-and-drag are left as an exercise for the reader.


And thus begins our exciting "leap" into the SDK, starting with the basics of reading finger information and tracking where the user is pointing. We'll continue filling in the details of the rest of the functionality in the SDK along with some more fun examples in the rest of the book. While we have only just scratched the surface of what can be done with Leap, you should already be starting to get ideas of how to engage with users using hand motion input. Try out some of the example applications that come with the SDK for inspiration to get a feel for what it can do.

You should now have a working application written in C++ with a frame callback that has access to all of the hand tracking data captured by the controller. Next up, we'll look at making an application interface that is as responsive as possible.


Key benefits

  • Comprehensive and thorough coverage of many SDK features
  • Intelligent usage of gesture interfaces
  • In-depth, functional examples of API usage explained in detail


Leap Motion is a company developing advanced motion sensing technology for human–computer interaction. Originally inspired by how difficult it is to use a mouse and keyboard for 3D modeling, Leap Motion believe that moulding virtual clay should be as easy as moulding clay in your hands. Leap Motion now focus on bringing this motion sensing technology closer to the real world.

Leap Motion Development Essentials explains the concepts and practical applications of gesture input for developers who want to take full advantage of Leap Motion technology. This guide explores the capabilities available to developers and gives you a clear overview of topics related to gesture input, along with usable code samples. It covers everything you need to know about the Leap Motion SDK, from creating a working program with gesture input to more sophisticated applications spanning a range of relevant topics. Sample code is provided and explained along with details of the most important and central API concepts.

This book teaches you the essential information you need to design a gesture-enabled interface for your application, from specific gesture detection to best practices for this new form of input, with guidance on practical considerations and copious runnable demonstrations of API usage explained in step-by-step, reusable recipes.

What you will learn

  • Read finger and hand positions as well as motion information
  • Detect where a user is pointing on a screen
  • Recognize gestures – both built-in and user-defined
  • Deal with multithreaded programming challenges to create responsive interfaces
  • Explore the theory and concepts of gestural interfaces along with best practices
  • Integrate the Leap with 3D web capabilities using WebGL and Three.js
  • Understand the detailed coverage of C++ and JavaScript APIs
  • Add Leap support to a web page with no additional software or downloads required by users



Table of Contents

Leap Motion Development Essentials
Credits
About the Author
About the Reviewers
Preface
1. Leap Motion SDK – A Quick Start
2. Real Talk – Real Time
3. Actual Gestures
4. Leap and the Web
5. HTML5 Antics in 3D
Index



How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, then clicking on the link will download and open the PDF file directly. If you don't, save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the ebook to be usable for you the reader with our needs to protect the rights of us as Publishers and of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment can be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see the errata pages for the title you have.
  • To view your account details or to download a new copy of the book, go to
  • To contact us directly if a problem is not resolved, use
What eBook formats does Packt support?

Our eBooks are currently available in a variety of formats, such as PDF and ePub. This may well change in the future with trends and developments in technology, but please note that our PDFs are not in Adobe eBook Reader format, which has greater security restrictions.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower in price than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.