Using Edge Impulse and the Arduino Nano to Control LEDs with Voice Commands

Have you ever wondered how smart assistants enable a hands-free experience with your devices? The answer lies in keyword spotting (KWS) technology, which recognizes popular wake word phrases such as "OK Google," "Alexa," "Hey Siri," or "Cortana." By identifying these phrases, the smart assistant wakes up and listens to your commands. Since KWS relies on real-time speech recognition models, it must be on-device, always on, and running on a low-power system to be effective.

In this chapter, we will demonstrate the use of KWS through Edge Impulse by building an application for the Arduino Nano that voice-controls a light-emitting diode (LED), making it emit a colored light (red, green, or blue) a certain number of times (one, two, or three blinks).

This chapter will begin with dataset preparation, showing you how to acquire audio data with a mobile phone and the built-in microphone on the Arduino Nano....

Technical requirements

To complete all the practical recipes of this chapter, we will need the following:

  • An Arduino Nano 33 BLE Sense
  • A smartphone (an Android phone or Apple iPhone)
  • A micro-USB data cable
  • A laptop/PC with either Linux, macOS, or Windows
  • A Google Drive account
  • An Edge Impulse account

The source code and additional material are available in the Chapter04 folder on the GitHub repository: https://github.com/PacktPublishing/TinyML-Cookbook_2E/tree/main/Chapter04.

Acquiring audio data with a smartphone

As with all ML problems, data acquisition is the first step to take, and Edge Impulse offers several ways to do this directly from the web browser.

In this first recipe, we will learn how to acquire audio samples using a mobile phone.

Getting ready

Edge Impulse offers a straightforward and efficient method for acquiring data with a smartphone over an internet connection. This approach is so simple and intuitive that even people without a technical background will find it easy to use.

The only factor to consider before preparing the dataset is the number of audio recordings to take for training the model, which is outlined in the upcoming subsection.

Collecting audio samples for KWS

The number of samples depends entirely on the nature of the problem; no one approach fits all. In our scenario, we will take 25 recordings for each class, each class corresponding to an utterance to recognize (red, green...

Acquiring audio data with the Arduino Nano

Building the dataset from recordings obtained with the mobile phone's microphone is undoubtedly good enough for many applications. However, to prevent any potential loss of accuracy when the model is deployed, we should also include in the dataset audio clips recorded with the microphone used by the end application.

Therefore, this recipe will show you how to record audio samples with the built-in microphone on the Arduino Nano through Edge Impulse.

Getting ready

In the previous recipe, you may have noticed that the mobile phone was not the only option listed in the Collect data section, as shown in the following screenshot:

Figure 4.9: Some of the options available to collect data in Edge Impulse

As shown in the previous screenshot, you can also collect data using your computer's microphone or a fully supported microcontroller board.

If you click on the Connect...

Extracting MFE features from audio samples

Edge Impulse relies on the impulse to define all data processing tasks, including feature extraction and model inference. In this recipe, we will see how to create an impulse to extract MFE features from our audio samples.

Getting ready

In Edge Impulse, an impulse is responsible for data processing and consists of the following two sequential computational blocks:

  • Processing block: This is the preliminary step in any ML application, and it aims to prepare the data for the ML algorithm.
  • Learning block: This is the block that implements the ML solution, which aims to learn patterns from the data provided by the processing block.

The processing block is critical to the effectiveness of the ML solution because the raw input data is often unsuitable for feeding the model directly. For example, the input signal could be noisy or contain information that is irrelevant or redundant for training the model, just to name...
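
Although Edge Impulse configures the MFE processing block for us, it can help to see roughly what such a block computes. The following Python sketch extracts log-scaled Mel filterbank energies from a 1-second clip with librosa; the frame length, stride, and number of Mel bands are placeholder values for illustration, not necessarily the ones Edge Impulse uses:

    # A minimal sketch of Mel-filterbank energy (MFE) extraction.
    # Frame length, stride, and Mel band count are illustrative
    # placeholders, not necessarily Edge Impulse's defaults.
    import numpy as np
    import librosa

    SAMPLE_RATE = 16000  # 16 kHz, a common rate for KWS audio
    N_FFT = 512          # 32 ms analysis window at 16 kHz
    HOP_LENGTH = 160     # 10 ms stride between frames
    N_MELS = 40          # number of Mel filterbank bands

    def extract_mfe(path):
        # Load exactly one second of audio, zero-padding if shorter
        audio, _ = librosa.load(path, sr=SAMPLE_RATE, duration=1.0)
        audio = np.pad(audio, (0, max(0, SAMPLE_RATE - len(audio))))

        # Mel-scaled power spectrogram: one column per 10 ms frame
        mel = librosa.feature.melspectrogram(
            y=audio, sr=SAMPLE_RATE, n_fft=N_FFT,
            hop_length=HOP_LENGTH, n_mels=N_MELS)

        # Log compression tames the dynamic range before the NN
        return librosa.power_to_db(mel, ref=np.max)

    features = extract_mfe("red_01.wav")  # hypothetical sample file
    print(features.shape)                 # (40, 101) with these settings

The resulting two-dimensional array of Mel bands over time frames is what the learning block consumes, much like a single-channel image.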

Designing and training a CNN

In this recipe, we will be leveraging the following CNN architecture:

Figure 4.28: CNN architecture

The model presented in Figure 4.28 is a modified version of what Edge Impulse will propose when designing the neural network (NN). Our network has two 2D convolution layers with 8 and 16 output feature maps (OFMs), one dropout layer, and one fully connected layer, followed by a softmax activation.

The network's input consists of the MFE features extracted from the 1-second audio sample.

Getting ready

To get ready for this recipe, we need to understand how to design and train an ML model in Edge Impulse. Edge Impulse uses different ML frameworks for training, depending on the chosen learning block. For a classification learning block, training is performed with TensorFlow and Keras. The model can be designed in two ways:

  • Visual mode (simple mode): This is the quickest method performed through the user interface (UI). Edge Impulse...
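
To make the architecture concrete, the following is a minimal Keras sketch of the model described in Figure 4.28: two 2D convolution layers with 8 and 16 OFMs, a dropout layer, and a fully connected layer followed by a softmax activation. The input shape, kernel sizes, pooling, dropout rate, and number of classes are illustrative assumptions, not the exact values Edge Impulse generates:

    # A minimal Keras sketch of the CNN in Figure 4.28. Input shape,
    # kernel sizes, pooling, dropout rate, and class count are
    # illustrative assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 4  # hypothetical: e.g., red, green, blue, plus noise

    model = models.Sequential([
        # 101 time frames x 40 Mel bands as a single-channel image
        # (the earlier MFE sketch's output, transposed)
        layers.Input(shape=(101, 40, 1)),

        # First convolution: 8 output feature maps
        layers.Conv2D(8, kernel_size=3, activation="relu", padding="same"),
        layers.MaxPooling2D(pool_size=2),

        # Second convolution: 16 output feature maps
        layers.Conv2D(16, kernel_size=3, activation="relu", padding="same"),
        layers.MaxPooling2D(pool_size=2),

        # Dropout to reduce overfitting, then the classification head
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()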

Tuning model performance with the EON Tuner

In this recipe, we will use the Edge Impulse EON Tuner to find the best feature extraction method and ML architecture for KWS on the Arduino Nano.

Getting ready

Developing the most efficient ML pipeline for a given target platform is always challenging. One way to do this is through iterative experiments. For example, we can evaluate how some target metrics (latency, memory, and accuracy) change depending on the input feature generation and the model architecture. However, this process is time-consuming because several combinations need to be tested and evaluated. Furthermore, this approach requires familiarity with digital signal processing and NN architectures to know which parameters to tune.

The Edge Impulse EON Tuner (https://docs.edgeimpulse.com/docs/eon-tuner) is a powerful tool designed to automate the discovery of the optimal ML solution for a given target platform. Unlike traditional AutoML tools, which focus solely...

Live classifications with a smartphone

When discussing model testing, we usually refer to evaluating the trained model on the test dataset. However, model testing in Edge Impulse is more than that.

In this recipe, we will learn how to test model performance on the test dataset and show a way to perform live classifications with a smartphone.

Getting ready

In Edge Impulse, there are two ways to evaluate the accuracy of a model:

  • Model testing on the test dataset: We assess the accuracy using the test dataset. The test dataset provides an unbiased evaluation of model effectiveness because the samples are not used directly or indirectly during training.
  • Live classification: This is a unique feature of Edge Impulse whereby we can record new samples from a smartphone or a supported device (for example, the Arduino Nano).

The benefit of the live classification approach is that we can test the trained model in real-world conditions before deploying...
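
Conceptually, the first approach boils down to running the trained model on held-out samples and inspecting where it succeeds and fails. Reusing model and NUM_CLASSES from the earlier Keras sketch, with hypothetical random arrays standing in for the real held-out recordings, a minimal evaluation looks like this:

    # A minimal sketch of test-set evaluation. x_test and y_test are
    # hypothetical stand-ins for the real held-out MFE features/labels.
    import numpy as np
    from sklearn.metrics import accuracy_score, confusion_matrix

    x_test = np.random.rand(20, 101, 40, 1).astype("float32")
    y_test = np.random.randint(0, NUM_CLASSES, size=20)

    probs = model.predict(x_test)      # per-class probabilities
    y_pred = np.argmax(probs, axis=1)  # most likely class per sample

    print("Accuracy:", accuracy_score(y_test, y_pred))
    print(confusion_matrix(y_test, y_pred))  # per-class confusions

Edge Impulse performs an equivalent evaluation for us and reports the results directly in the UI.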

Keyword spotting on the Arduino Nano

As you might have guessed, it is time to deploy the KWS application on the Arduino Nano.

In this recipe, we will show how to do so with the help of Edge Impulse.

Getting ready

The application on the Arduino Nano will be based on the nano_ble33_sense_microphone_continuous.cpp example provided by Edge Impulse, which implements a real-time KWS application. Before adjusting this code sample, we want to examine how it works to get ready for this final recipe.

Learning how a real-time KWS application works

A real-time KWS application, such as the one used in a smart assistant, should capture and process the entire audio stream so that no event is missed. Therefore, the application must record audio and run inference simultaneously.

On a microcontroller, parallel tasks can be performed in two ways:

  • With a real-time OS (RTOS...
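
Whichever mechanism is used, the underlying pattern is the same: one task keeps filling an audio buffer while another classifies the previously captured audio. The following Python sketch illustrates this producer-consumer idea conceptually with threads; it is not microcontroller code, and record_chunk() and run_inference() are hypothetical placeholders for the audio driver and the model:

    # A conceptual illustration of concurrent audio capture and
    # inference. record_chunk() and run_inference() are hypothetical
    # placeholders, not Edge Impulse or Arduino APIs.
    import threading
    import queue

    CHUNK_MS = 250                       # capture audio in small slices
    audio_queue = queue.Queue(maxsize=8)

    def record_chunk():
        # Placeholder: on the Arduino Nano, the microphone driver would
        # fill this buffer from an interrupt while inference runs.
        return bytes(2 * 16 * CHUNK_MS)  # 16-bit mono samples at 16 kHz

    def capture_loop():
        while True:
            audio_queue.put(record_chunk())  # blocks if inference lags

    def run_inference(window):
        # Placeholder: extract MFE features and invoke the model here.
        pass

    threading.Thread(target=capture_loop, daemon=True).start()

    window = b""
    while True:
        # Slide a 1-second window over the stream: append the newest
        # 250 ms chunk and keep only the last 32,000 bytes (1 s).
        window = (window + audio_queue.get())[-2 * 16000:]
        run_inference(window)

Because each new chunk covers only a quarter of the window, the model sees every utterance several times as the window slides through the stream, which reduces the chance of missing a keyword.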

Summary

The recipes presented in this chapter demonstrated how to build an end-to-end KWS application with Edge Impulse and the Arduino Nano.

Initially, we learned how to prepare the dataset by recording audio samples with a smartphone and the Arduino Nano directly from Edge Impulse.

Afterward, we delved into model design. Here, we introduced the MFE (or Mel-spectrogram) as a suitable input feature for training a CNN model for KWS.

Then, we trained a generic CNN and used the Edge Impulse EON Tuner to discover model architectures that are more efficient for our target platform in terms of accuracy, latency, and memory consumption.

Finally, we tested the model’s accuracy on the test dataset and live audio samples recorded with a smartphone and deployed the KWS application on the Arduino Nano.

In this chapter, we have started discussing how to build a tinyML application with a microphone using Edge Impulse and the Arduino Nano. With the next project, we will...

Learn more on Discord

To join the Discord community for this book – where you can share feedback, ask the author questions, and learn about new releases – visit the link below:

https://packt.link/tiny


Author

Gian Marco Iodice

Gian Marco Iodice is team and tech lead in the Machine Learning Group at Arm, where he co-created the Arm Compute Library in 2017. The Arm Compute Library is currently the most performant library for ML on Arm, and it is deployed on billions of devices worldwide, from servers to smartphones. Gian Marco holds an MSc degree with honors in electronic engineering from the University of Pisa (Italy) and has several years of experience developing ML and computer vision algorithms on edge devices. He now leads ML performance optimization on Arm Mali GPUs. In 2020, Gian Marco co-founded the TinyML UK meetup group to encourage knowledge sharing and to educate and inspire the next generation of ML developers on tiny and power-efficient devices.