Building a Gesture-Based Interface for YouTube Playback with Edge Impulse and the Raspberry Pi Pico

Gesture recognition is a technology that enables people to interact with their devices without physically touching buttons or displays. By interpreting human gestures, it has found its way into various consumer electronics, including smartphones and game consoles. At its core, gesture recognition relies on two essential components: a sensor and a software algorithm.

In this chapter, we will show you how to use accelerometer measurements with machine learning (ML) to recognize three hand gestures with the Raspberry Pi Pico. These recognized gestures will then be used to play/pause, mute/unmute, and change YouTube videos on our PC.

The development of this project will start by acquiring the accelerometer data to build the gesture recognition dataset. In this part, we will learn how to interface with the I2C protocol and use the Edge...

Technical requirements

To complete all the practical recipes of this chapter, we will need the following:

  • A Raspberry Pi Pico
  • A SparkFun RedBoard Artemis Nano (optional)
  • A micro-USB data cable
  • A USB-C data cable (optional)
  • 1 x half-size solderless breadboard
  • 1 x MPU-6050 IMU
  • 4 x jumper wires
  • Laptop/PC with either Linux, macOS, or Windows
  • An Edge Impulse account

The source code and additional material are available in the Chapter09 folder in the GitHub repository: https://github.com/PacktPublishing/TinyML-Cookbook_2E/tree/main/Chapter09.

Communicating with the MPU-6050 IMU through I2C

Acquiring sensor data can be a challenging task in tinyML due to the need for direct interaction with hardware at a low level.

In this recipe, we will use the MPU-6050 Inertial Measurement Unit (IMU) to demonstrate the basic principles of Inter-Integrated Circuit (I2C), a communication protocol commonly used with sensors. By the end of this recipe, we will have developed an Arduino sketch that reads out the MPU-6050 device address.
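To make the I2C transaction concrete, here is a minimal sketch of the idea, assuming the Arduino Wire API, the MPU-6050's default 7-bit address of 0x68 (with the AD0 pin tied low), and its WHO_AM_I identity register at 0x75; the recipe's actual listing may differ in the details:

#include <Wire.h>

const uint8_t MPU6050_ADDR = 0x68;  // 7-bit I2C address (AD0 = LOW)
const uint8_t REG_WHO_AM_I = 0x75;  // Identity register

void setup() {
  Serial.begin(115200);
  Wire.begin();

  // Write the register address, then read one byte back.
  Wire.beginTransmission(MPU6050_ADDR);
  Wire.write(REG_WHO_AM_I);
  Wire.endTransmission(false);              // Repeated start: keep the bus
  Wire.requestFrom(MPU6050_ADDR, (uint8_t)1);

  if (Wire.available()) {
    Serial.print("WHO_AM_I: 0x");
    Serial.println(Wire.read(), HEX);       // Expected: 0x68
  }
}

void loop() {}

Reading back 0x68 confirms both that the wiring is correct and that the device responds at the expected address.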

Getting ready

An IMU is an electronic device capable of measuring accelerations, angular rates, and, in some cases, body orientations through a combination of integrated sensors. This device is at the heart of many technologies in various industries, including automotive, aerospace, and consumer electronics, where it is used to estimate position and orientation. For example, an IMU allows the screen of a smartphone to auto-rotate and enables augmented reality/virtual reality ...

Acquiring accelerometer data

In this recipe, we will develop an application to read the accelerometer measurements from the MPU-6050 IMU at a frequency of 50 Hz and transmit them over the serial port.

The data transmitted serially will be acquired in the following recipe to build the training dataset.
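As a sketch of what such an application can look like, here is a minimal version based on the Arduino Wire API (the chapter itself leverages the Mbed OS API, as noted in the Summary). The register addresses come from the MPU-6050 datasheet, and the 20 ms delay approximates the 50 Hz rate:

#include <Wire.h>

const uint8_t MPU6050_ADDR     = 0x68;
const uint8_t REG_PWR_MGMT_1   = 0x6B;  // Power management register
const uint8_t REG_ACCEL_XOUT_H = 0x3B;  // First of six accelerometer bytes

void setup() {
  Serial.begin(115200);
  Wire.begin();

  // The MPU-6050 boots in sleep mode; clear PWR_MGMT_1 to wake it.
  Wire.beginTransmission(MPU6050_ADDR);
  Wire.write(REG_PWR_MGMT_1);
  Wire.write(0x00);
  Wire.endTransmission();
}

void loop() {
  // Read the six accelerometer bytes (X, Y, Z; high byte first).
  Wire.beginTransmission(MPU6050_ADDR);
  Wire.write(REG_ACCEL_XOUT_H);
  Wire.endTransmission(false);
  Wire.requestFrom(MPU6050_ADDR, (uint8_t)6);

  int16_t ax = (Wire.read() << 8) | Wire.read();
  int16_t ay = (Wire.read() << 8) | Wire.read();
  int16_t az = (Wire.read() << 8) | Wire.read();

  // One comma-separated sample per line, ready for the serial monitor
  // (and, later, the Edge Impulse data forwarder).
  Serial.print(ax); Serial.print(',');
  Serial.print(ay); Serial.print(',');
  Serial.println(az);

  delay(20);  // ~50 Hz (a hardware timer would be more precise)
}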

Getting ready

The accelerometer is a sensor that measures accelerations on one, two, or three spatial axes, denoted as X, Y, and Z.

In this project, we will use the three-axis accelerometer integrated into the MPU-6050 IMU to recognize three hand gestures.

However, how does the accelerometer work, and how can we take measurements from the sensor?

Let’s answer these questions in the following subsection.

The basic principles of the accelerometer

Consider the following system, which consists of a mass attached to one end of a spring:

Figure 9.8: Mass-spring system

The preceding...
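The key physical idea is simple: when the system accelerates, the proof mass displaces the spring until the spring force balances the inertial force. Assuming an ideal spring with stiffness k and a proof mass m, Hooke's law gives:

k\,x = m\,a \quad\Longrightarrow\quad a = \frac{k}{m}\,x

So, measuring the displacement x of the mass (done capacitively in MEMS accelerometers) is enough to recover the acceleration a, since k and m are known by design.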

Building the dataset with the Edge Impulse data forwarder tool

Any ML algorithm needs a dataset, and for us, this means getting data samples from the accelerometer.

Recording accelerometer data is not as difficult as it may seem at first glance. This task can easily be carried out with Edge Impulse.

In this recipe, we will use the Edge Impulse data forwarder tool to take the accelerometer measurements when we make the following three movements with the breadboard:

Figure 9.14: Hand gestures to recognize – circle, cross, and pan

When performing the motions indicated by the arrows in the previous figure, ensure that the breadboard is vertical and the Raspberry Pi Pico is in front of you.
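If you have not installed the data forwarder yet, it ships with the Edge Impulse CLI, a Node.js package. With the Raspberry Pi Pico already streaming samples over serial, connecting it to an Edge Impulse project typically boils down to two commands:

$ npm install -g edge-impulse-cli
$ edge-impulse-data-forwarder

The tool prompts for your Edge Impulse credentials and the project to attach the device to, and it infers the sampling frequency and the number of axes from the incoming stream. From then on, the board appears under the project's Devices tab, and new recordings land in Data acquisition.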

Getting ready

The three gestures that we’ve considered for this project are as follows:

  • Circle: Moving the board clockwise in a circular motion.
  • Cross: Moving the board from the top left to the bottom right and then from the...

Designing and training the ML model

The dataset is in our hands. Therefore, we can start designing the ML model.

In this recipe, we will develop the following architecture with Edge Impulse:

Figure 9.20: Fully connected neural network for recognizing hand gestures from spectral features

As you can see from the preceding figure, the model, which consists of two fully connected layers, takes N0 spectral features as input. The following Getting ready section will explain the reasons behind this design.
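Under the hood, Edge Impulse expresses this classifier as a small Keras model, which you can inspect and edit in the Studio's expert mode. As a rough sketch of the equivalent architecture (the layer sizes below are hypothetical, and N0 depends on the spectral-features configuration), it boils down to something like this:

# Hypothetical Keras equivalent of the model in Figure 9.20.
# N0 (the number of spectral features) and the hidden-layer size
# depend on the Edge Impulse configuration; these values are examples.
import tensorflow as tf

N0 = 33           # Example: spectral features per window
NUM_GESTURES = 3  # circle, cross, pan

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N0,)),
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()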

Getting ready

In this recipe, we want to explain why the proposed feedforward neural network illustrated in Figure 9.20 is sufficient for recognizing gestures from accelerometer data.

When developing deep neural network architectures, we commonly feed the model with raw data, leaving the network to learn how to extract the features automatically.

This approach is effective and incredibly accurate in various applications, such as image classification...
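For quasi-periodic signals such as accelerometer streams, however, compact hand-crafted features often work just as well at a fraction of the cost. To give a flavor of what a spectral feature is, here is a simplified, illustrative NumPy sketch (per axis: the RMS value plus the first FFT magnitude bins); Edge Impulse's actual Spectral Analysis block is more elaborate, adding filtering and further spectral statistics:

# Illustrative only: a simplified stand-in for per-axis spectral
# features (RMS plus the first FFT magnitude bins). Edge Impulse's
# Spectral Analysis block applies filtering and computes additional
# statistics on top of this idea.
import numpy as np

def spectral_features(axis_samples: np.ndarray, num_bins: int = 10) -> np.ndarray:
    rms = np.sqrt(np.mean(axis_samples ** 2))
    spectrum = np.abs(np.fft.rfft(axis_samples))[1 : num_bins + 1]  # Skip the DC bin
    return np.concatenate(([rms], spectrum))

# Example: a 2 s window sampled at 50 Hz for a single axis.
window = np.random.randn(100).astype(np.float32)
features = spectral_features(window)
print(features.shape)  # (11,) -> features like these feed the dense network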

Live classifications with the Edge Impulse data forwarder tool

Deploying a model on microcontrollers is error-prone: the code may contain bugs, the integration could be incorrect, or the model might not work reliably in the field. Consequently, model testing becomes essential to rule out at least the ML model as a source of failure. In this recipe, we will use the Live classification tool of Edge Impulse to carry out this investigation.

Getting ready

The most effective approach to assess the performance of an ML model is to evaluate its behavior directly on the target platform, and the Edge Impulse data forwarder tool provides a straightforward method for doing so.

In our specific case, since the dataset was built using the Raspberry Pi Pico, we have gained initial insights into the model’s accuracy during the training phase.

However, there may be instances where the dataset may not be built on top of sensor data coming from the target...

Developing a continuous gesture recognition application with Edge Impulse and Arm Mbed OS

Now that we have tested the model, we are ready to build the sketch in the Arduino IDE to recognize our three motion gestures. In this recipe, we will build a continuous gesture recognition application with the help of Edge Impulse, Arm Mbed OS, and an algorithm to filter out redundant or spurious classification results.

Getting ready

Our goal is to develop a continuous gesture recognition application, which means that the accelerometer data sampling and the ML inference must be performed concurrently. This approach guarantees that we capture and process the entire input data stream, so we don't miss any events.

The main ingredients to accomplish this task are as follows, with a sketch of how they fit together shown after the list:

  • Arm Mbed OS for writing a multithreading program
  • An algorithm to filter out redundant classification results
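Here is that sketch, assuming the Arduino Mbed OS core for the Raspberry Pi Pico (which exposes the rtos API) and leaving the accelerometer read and the Edge Impulse classifier call as hypothetical placeholders:

// Sketch of the threading pattern (not the chapter's exact listing).
#include <mbed.h>

constexpr int SAMPLE_PERIOD_MS = 20;  // 50 Hz
constexpr int CONSECUTIVE_HITS = 3;   // Debounce threshold (example value)

rtos::Thread sampling_thread;

// Producer: fill the inference window at a fixed rate.
void sampling_task() {
  while (true) {
    // Read one accelerometer sample into a shared ring buffer here
    // (buffer management omitted for brevity).
    rtos::ThisThread::sleep_for(std::chrono::milliseconds(SAMPLE_PERIOD_MS));
  }
}

// Filter: accept a gesture only after it wins N consecutive inferences,
// which suppresses spurious one-off classification results.
int last_class = -1;
int hit_count  = 0;

void report_if_stable(int predicted_class) {
  hit_count  = (predicted_class == last_class) ? hit_count + 1 : 1;
  last_class = predicted_class;
  if (hit_count == CONSECUTIVE_HITS) {
    Serial.println(predicted_class);  // Notify the host exactly once
  }
}

void setup() {
  Serial.begin(115200);
  sampling_thread.start(sampling_task);  // Sampling runs concurrently
}

void loop() {
  // Consumer: run inference on the latest full window.
  // int predicted = classify(window);   // Hypothetical classifier call
  // report_if_stable(predicted);
}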

Let’s start by learning how to perform...

Building a gesture-based interface with PyAutoGUI

Now that we can recognize the hand gestures with the Raspberry Pi Pico, our final objective is to create a touchless interface for controlling YouTube video playback.

In this recipe, we will develop a Python script to read the recognized motion transmitted over the serial port and use the PyAutoGUI library to build a gesture-based interface to play, pause, mute, unmute, and change YouTube videos.

Getting ready

The Python script that we are going to develop in this recipe relies primarily on two Python libraries:

  • pySerial, which allows us to read the data transmitted serially
  • PyAutoGUI (https://pyautogui.readthedocs.io/), which allows controlling the mouse and simulating keyboard input

The PyAutoGUI library can be installed with the following pip command:

$ pip install pyautogui

The PyAutoGUI API has been designed to be very simple to use. For example, we can easily...
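As a minimal end-to-end sketch (not the chapter's exact script): the serial port name, the gesture labels, and the gesture-to-shortcut mapping below are illustrative assumptions, and the YouTube tab must have keyboard focus. YouTube's own shortcuts do the heavy lifting: K toggles play/pause, M toggles mute, and Shift + N jumps to the next video.

# Illustrative sketch: assumes the Pico prints one gesture label per
# line ("circle", "cross", or "pan") and that a YouTube tab has focus.
import pyautogui
import serial  # pySerial

PORT = "/dev/ttyACM0"  # Example port; e.g., "COM3" on Windows
BAUD = 115200

def handle(gesture: str) -> None:
    # Example mapping from gestures to YouTube keyboard shortcuts.
    if gesture == "circle":
        pyautogui.press("k")            # Play/pause
    elif gesture == "cross":
        pyautogui.press("m")            # Mute/unmute
    elif gesture == "pan":
        pyautogui.hotkey("shift", "n")  # Next video

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        line = ser.readline().decode("utf-8", errors="ignore").strip()
        if line:
            handle(line)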

Summary

The recipes presented in this chapter demonstrated how to build an end-to-end gesture recognition application with Edge Impulse and the Raspberry Pi Pico.

Initially, we learned how to connect an IMU sensor with the microcontroller using the I2C communication protocol and leverage the Mbed OS API to read the accelerometer measurements.

Afterward, we delved into the dataset preparation, model design, and model training using Edge Impulse. Here, we introduced spectral features, the key ingredient for obtaining a compact gesture recognition model. Then, we discovered how to implement a multithreading program using Arm Mbed OS to run the model inference concurrently with the accelerometer data acquisition.

Finally, we concluded the project by developing a Python script that leverages the PyAutoGUI library to simulate keyboard input, thereby facilitating the control of YouTube playback.

In this chapter, we began discussing how to build compact ML models...

Learn more on Discord

To join the Discord community for this book – where you can share feedback, ask questions to the author, and learn about new releases – use the following link:

https://packt.link/tiny

You have been reading a chapter from TinyML Cookbook - Second Edition (Packt, November 2023, ISBN-13: 9781837637362).

About the author

Gian Marco Iodice is the team and tech lead in the Machine Learning Group at Arm and co-created the Arm Compute Library in 2017. The Arm Compute Library is currently the most performant library for ML on Arm, and it's deployed on billions of devices worldwide – from servers to smartphones. Gian Marco holds an MSc degree, with honors, in electronic engineering from the University of Pisa (Italy) and has several years of experience developing ML and computer vision algorithms on edge devices. He now leads ML performance optimization on Arm Mali GPUs. In 2020, Gian Marco co-founded the TinyML UK meetup group to encourage knowledge sharing and to educate and inspire the next generation of ML developers on tiny and power-efficient devices.