You're reading from TinyML Cookbook - Second Edition

Product type: Book
Published in: Nov 2023
Publisher: Packt
ISBN-13: 9781837637362
Edition: 2nd Edition
Author: Gian Marco Iodice
Gian Marco Iodice is team and tech lead in the Machine Learning Group at Arm, who co-created the Arm Compute Library in 2017. The Arm Compute Library is currently the most performant library for ML on Arm, and it's deployed on billions of devices worldwide – from servers to smartphones. Gian Marco holds an MSc degree, with honors, in electronic engineering from the University of Pisa (Italy) and has several years of experience developing ML and computer vision algorithms on edge devices. Now, he's leading the ML performance optimization on Arm Mali GPUs. In 2020, Gian Marco cofounded the TinyML UK meetup group to encourage knowledge-sharing, educate, and inspire the next generation of ML developers on tiny and power-efficient devices.

Detecting Objects with Edge Impulse Using FOMO on the Raspberry Pi Pico

Undoubtedly, image classification is one of the most fundamental and well-known tasks in machine learning (ML) and computer vision. It is crucial to automating visual data analysis in applications across domains such as medical diagnosis, autonomous vehicles, security surveillance, and entertainment.

However, image classification only tells us what is in the image. If we want to figure out where objects are located within the scene, especially when there is more than one, we need to turn to a more advanced computer vision technique known as object detection.

Object detection is more computationally intensive than image classification because it involves performing the classification task at multiple regions and scales of the input image to identify the objects and their respective locations.
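To get a feel for why, consider the classic sliding-window approach, where the classifier runs once per window at every scale. The following pure-Python sketch (illustrative only; FOMO, introduced later in this chapter, avoids this brute-force strategy) counts how many inferences that entails:

```python
def sliding_windows(img_w, img_h, win, stride):
    """Yield (x, y, size) for every square window that fits in the image."""
    for y in range(0, img_h - win + 1, stride):
        for x in range(0, img_w - win + 1, stride):
            yield (x, y, win)

def count_classifier_calls(img_w, img_h, scales, stride):
    """Each window at each scale means one classifier inference."""
    return sum(1 for win in scales
                 for _ in sliding_windows(img_w, img_h, win, stride))

# A 96x96 frame scanned at three window sizes already needs 50 inferences,
# while plain image classification needs exactly one.
calls = count_classifier_calls(96, 96, scales=(32, 48, 64), stride=16)
```

On a microcontroller, where a single inference may take tens of milliseconds, this multiplication of work is exactly what makes naive object detection impractical.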

Despite this challenge, this chapter...

Technical requirements

To complete all the practical recipes of this chapter, we will need the following:

  • A Raspberry Pi Pico
  • A SparkFun RedBoard Artemis Nano (optional)
  • A micro-USB data cable
  • A USB-C data cable (optional)
  • A USB webcam
  • A laptop/PC running Linux, macOS, or Windows
  • An Edge Impulse account

Acquiring images with the webcam

In this first recipe, we will prepare the dataset by acquiring images of cans using the USB webcam and labeling them using Edge Impulse.

Getting ready

As with all ML projects, our first step is dataset preparation. In this project, the dataset will be created from scratch using Edge Impulse.

The data acquisition for images does not differ from what we experienced in Chapter 4, Using Edge Impulse and Arduino Nano to Control LEDs with Voice Commands, for building the KWS dataset.

To prevent overfitting and improve model robustness, you should create as diverse a training dataset as possible. For example, you might vary the following to ensure that the model is reliable and effective in real-world scenarios:

  • Different backgrounds
  • Different distances from the camera
  • Different angles
  • Different light conditions
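As an illustration, the variations listed above can be turned into a simple capture checklist so that no combination of conditions is missed. The condition names and filename pattern below are just examples, not the labels used by Edge Impulse:

```python
from itertools import product

def capture_plan(backgrounds, distances, angles, lights, shots_per_combo=3):
    """Enumerate every combination of capture conditions so the dataset
    covers them evenly, producing one filename per shot to take."""
    plan = []
    for bg, dist, ang, light in product(backgrounds, distances, angles, lights):
        for i in range(shots_per_combo):
            plan.append(f"can_{bg}_{dist}_{ang}_{light}_{i}.jpg")
    return plan

# 2 backgrounds x 2 distances x 2 angles x 2 light conditions x 3 shots = 48 images
plan = capture_plan(["desk", "floor"], ["30cm", "60cm"],
                    ["front", "side"], ["daylight", "lamp"])
```

Working through such a checklist during acquisition is a cheap way to guarantee the diversity the model needs.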

We recommend using a USB webcam rather than the laptop’s built-in one because capturing...

Designing the Impulse’s pre-processing block

Now that we have the dataset ready, let’s delve into the Impulse preparation.

In this recipe, our initial focus will be on designing the pre-processing block responsible for preparing the input image fed to the ML model.

Getting ready

The object detection pipeline we are going to build with the help of Edge Impulse is the following:

Figure 7.8: The object detection pipeline

The image preparation block reported in the preceding figure encompasses all the essential operations required to transform the camera frame into the expected input format for the Learning block.
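As a rough sketch of what such a block does, the following pure-Python code resizes a frame with nearest-neighbor interpolation and converts it to grayscale. The 96x96 grayscale target is assumed here only for illustration; the actual resolution and color format are chosen during the Impulse design:

```python
def rgb_to_gray(pixel):
    """ITU-R BT.601 luma approximation, commonly used for grayscale conversion."""
    r, g, b = pixel
    return int(0.299 * r + 0.587 * g + 0.114 * b)

def resize_nearest(img, src_w, src_h, dst_w, dst_h):
    """Nearest-neighbor resize of a row-major list of pixels."""
    out = []
    for y in range(dst_h):
        sy = y * src_h // dst_h
        for x in range(dst_w):
            sx = x * src_w // dst_w
            out.append(img[sy * src_w + sx])
    return out

def prepare_frame(rgb_pixels, src_w, src_h, dst=96):
    """Resize to dst x dst and convert to grayscale (one value per pixel)."""
    resized = resize_nearest(rgb_pixels, src_w, src_h, dst, dst)
    return [rgb_to_gray(p) for p in resized]
```

In practice, Edge Impulse generates this pre-processing for us; the sketch just makes explicit what the block in Figure 7.8 computes.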

In this project, we won’t use any camera module with the microcontroller. Instead, we will emulate the functionality of an actual camera by transmitting the frames captured with the webcam from the computer over the serial connection.

To avoid the image preparation stage reported in Figure 7.8 and make the implementation...

Transfer learning with FOMO

After designing the pre-processing block, it is time to train the ML model.

In this recipe, we will discuss the features that make the FOMO model suitable for highly constrained devices and show how to train it in Edge Impulse.

Getting ready

The design of the FOMO architecture, leveraged in this project to enable object detection on Raspberry Pi Pico, demonstrates that by approaching problems from a different and simple perspective, we can turn the seemingly impossible into reality. tinyML developers should always think this way if they want to unlock novel solutions on microcontrollers, as the computational capabilities of these devices are certainly not the same as those of the cloud, laptops, or smartphones.

In the following subsection, we will dive deep into the technical details of FOMO to learn how this model works and be inspired by its underlying ideas.
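To give a taste of the idea before we dive in: FOMO replaces bounding boxes with a coarse grid of object centroids. Assuming, for illustration, a per-cell object-probability map at 1/8th of the input resolution, the centroids could be extracted as follows (the grid size and threshold are illustrative values):

```python
def fomo_centroids(grid, grid_size, input_size, threshold=0.5):
    """Convert a per-cell object-probability map into centroid coordinates
    expressed in input-image pixels.

    grid: row-major list of probabilities, one per grid cell.
    """
    cell = input_size // grid_size
    found = []
    for gy in range(grid_size):
        for gx in range(grid_size):
            p = grid[gy * grid_size + gx]
            if p >= threshold:
                # Report the centre of the cell, mapped back to image coordinates
                found.append((gx * cell + cell // 2,
                              gy * cell + cell // 2, p))
    return found
```

Because the output is just this small grid rather than thousands of candidate boxes, the post-processing cost is negligible, which is one reason FOMO fits on devices like the Raspberry Pi Pico.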

Behind the design of FOMO

If you are an ML developer, I am confident...

Evaluating the model’s accuracy

The model’s accuracy measured on the validation dataset seems promising. However, we can only confidently confirm the model’s suitability for our needs after evaluating its performance on unseen data.

In this recipe, we will carry out this evaluation with the Model testing and Live classification tools of Edge Impulse.

Getting ready

The test dataset provides an unbiased evaluation of the model’s accuracy since these samples are not used during training. Evaluating how the model behaves on unseen data is essential to determine its alignment with our expectations. For example, this assessment might unveil that the model struggles to identify objects against specific backgrounds. If such a situation arises, it could be due to the training dataset’s limited size. Therefore, we may retrain the model using additional images to address the issue.
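Edge Impulse summarizes the test results with metrics such as the F1 score, which balances precision (how many detections are correct) and recall (how many objects are found). As a reminder of how it is computed from the confusion-matrix counts:

```python
def f1_score(true_positives, false_positives, false_negatives):
    """F1 is the harmonic mean of precision and recall:
    F1 = 2 * P * R / (P + R), equivalently 2*TP / (2*TP + FP + FN)."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# e.g. 8 correct detections, 2 spurious ones, 4 missed objects
score = f1_score(8, 2, 4)  # precision 0.80, recall 0.67 -> F1 ~ 0.73
```

A low F1 on a specific class or background is exactly the kind of signal that should send us back to the data-acquisition step.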

However, what extra steps should we take if we have...

Using OpenCV and pySerial to send images over the serial interface

In this recipe, we will emulate a camera sensor that can communicate with the Raspberry Pi Pico through the serial interface using a local Python script.

This Python script will be able to capture images from the webcam using OpenCV and, upon the microcontroller’s request, transmit its pixels over the serial interface.
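To sketch the transmission side of this idea, a grayscale frame could be serialized with a small header before being written to the serial port. The framing below (magic marker, width, and height, followed by the raw pixel bytes) is a hypothetical convention for illustration; the actual script in this recipe defines its own protocol:

```python
import struct

def pack_frame(pixels, width, height):
    """Serialize one grayscale frame: a 2-byte magic marker, then width and
    height as big-endian uint16, then the raw pixel bytes.
    (Hypothetical framing for illustration only.)"""
    header = struct.pack(">2sHH", b"EI", width, height)
    return header + bytes(pixels)

def unpack_frame(payload):
    """Inverse of pack_frame: validate the marker and split out the pixels."""
    magic, w, h = struct.unpack(">2sHH", payload[:6])
    assert magic == b"EI"
    return list(payload[6 : 6 + w * h]), w, h
```

Framing each transfer this way lets the microcontroller detect when it is out of sync with the sender instead of silently misinterpreting pixel data.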

Getting ready

The Python script we will develop in this recipe can be considered a software program that emulates the functionality of a camera sensor we may intend to use in production. During the initial prototyping stage, our target platform is often not equipped with all the required hardware components. Nevertheless, running the model on the microcontroller might still be necessary, for example, to test the model’s functionality or to gather preliminary latency data.

The Python script we aim to develop in this recipe exploits the OpenCV (https:...

Reading data from the serial port with Arduino-compatible platforms

The Python script implemented in the previous recipe allows us to transmit images over the serial. However, before deploying the model on the microcontroller, it is worth verifying whether the serial communication with the device works and whether we can display camera frames on the screen.

Therefore, in this recipe, we will implement an Arduino sketch to send a read camera request and retrieve the image pixel data transmitted by the Python script.

Getting ready

The only prerequisite for accomplishing this recipe’s objective is knowing how to read data from the serial port with an Arduino-compatible platform.

Arduino provides a handy function for reading bytes sent over the serial: readBytes() (https://www.arduino.cc/reference/en/language/functions/communication/serial/readbytes/).

The readBytes() function is a method of the Serial object and takes two input arguments, which are the following:

...

Deploying FOMO on the Raspberry Pi Pico

Here we are, ready to deploy the FOMO model on the Raspberry Pi Pico.

In this recipe, we will develop a sketch to run the model inference with the Edge Impulse Inferencing SDK and transmit the centroid coordinates of the detected objects over the serial.

These coordinates will be read in the Python script developed previously to highlight the detected objects within the video stream.
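On the host side, the Python script will need to parse these coordinates back out of the serial stream before drawing them on the video. A minimal sketch, assuming a hypothetical "label,x,y;" line format (the sketch in this recipe defines the actual output format):

```python
def parse_detections(line):
    """Parse one serial line of detections into (label, x, y) tuples.
    The 'label,x,y;' convention here is hypothetical, for illustration."""
    detections = []
    for item in line.strip().split(";"):
        if not item:
            continue
        label, x, y = item.split(",")
        detections.append((label, int(x), int(y)))
    return detections

# e.g. a frame in which two cans were detected:
dets = parse_detections("can,28,44;can,60,12;\n")
```

Each parsed centroid can then be overlaid on the corresponding webcam frame, for instance as a circle at (x, y), to visualize the detections live.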

Getting ready

Deploying the model trained with Edge Impulse is easy on any Arduino-compatible platform, thanks to the Arduino library generated by Edge Impulse.

This library contains everything we need to run the model inference successfully on the device, such as the following:

  • The trained model in TensorFlow Lite format.
  • Model parameters, such as the input image resolution and color format or the maximum number of possible detections in a single frame.
  • A library containing a set of functions for Digital Signal Processing...

Summary

The recipes presented in this chapter demonstrated how to build an object detection application for microcontrollers with the help of Edge Impulse using a pre-trained FOMO model.

Initially, we learned how to prepare the dataset by acquiring camera frames with the webcam. Afterward, we delved into the model design. Here, we discussed how to choose a suitable image resolution and color format for an object detection model based on the FOMO architecture. Then, we explored the FOMO architecture to learn why it is ideal for memory-constrained devices.

After introducing the FOMO architecture, we trained the model and tested its accuracy on the test dataset and live images acquired with the webcam.

Finally, we implemented a Python script to emulate a microcontroller camera module and deployed the object detection model on the Raspberry Pi Pico using the Edge Impulse Inferencing SDK.

In this chapter, we have started discussing how to build a tinyML application with...

Learn more on Discord

To join the Discord community for this book – where you can share feedback, ask questions to the author, and learn about new releases – follow the QR code below:

https://packt.link/tiny

