You're reading from TinyML Cookbook - Second Edition (2nd Edition), published by Packt in Nov 2023. ISBN-13: 9781837637362.
Author: Gian Marco Iodice

Gian Marco Iodice is a team and tech lead in the Machine Learning Group at Arm, where he co-created the Arm Compute Library in 2017. The Arm Compute Library is currently the most performant library for ML on Arm, and it's deployed on billions of devices worldwide – from servers to smartphones. Gian Marco holds an MSc degree, with honors, in electronic engineering from the University of Pisa (Italy) and has several years of experience developing ML and computer vision algorithms on edge devices. He is now leading ML performance optimization on Arm Mali GPUs. In 2020, Gian Marco co-founded the TinyML UK meetup group to encourage knowledge sharing and to educate and inspire the next generation of ML developers on tiny and power-efficient devices.

Classifying Desk Objects with TensorFlow and the Arduino Nano

Convolutional neural networks (CNNs) have gained popularity because of their ability to unlock challenging computer vision tasks such as image classification, object recognition, scene understanding, and pose estimation, all once considered impossible to solve. Nowadays, many modern camera applications are powered by these deep learning algorithms, and we just need to open the camera app on a smartphone to see them in action. However, computer vision tasks are not restricted to smartphones or cloud-based systems. In fact, these algorithms can now run on microcontrollers, despite their limited computational capabilities.

In this chapter, we will see the benefit of adding sight to our tiny devices by classifying two desk objects with the OV7670 camera module, in conjunction with the Arduino Nano.

In the first part, we will learn how to acquire images from the OV7670 camera module. Then, we...

Technical requirements

To complete all the practical recipes of this chapter, we will need the following:

  • An Arduino Nano 33 BLE Sense
  • A micro-USB data cable
  • 1 x half-size solderless breadboard
  • 1 x OV7670 camera module
  • 1 x push-button
  • 18 x jumper wires (male to female)
  • Laptop/PC with either Linux, macOS, or Windows
  • Google Drive account

The source code and additional material are available in the Chapter08 folder in the GitHub repository: https://github.com/PacktPublishing/TinyML-Cookbook_2E/tree/main/Chapter08.

Taking pictures with the OV7670 camera module

In this first recipe, we will build an electronic circuit to take pictures with the OV7670 camera module, using the Arduino Nano. After assembling the circuit, we will use the pre-built CameraCaptureRawBytes sketch in the Arduino IDE to transmit the pixel values over the serial.

Getting ready

The OV7670 camera module, illustrated in the following figure, is the main ingredient required in this recipe to take pictures with the Arduino Nano:

Figure 8.1: The OV7670 camera module

This vision sensor is one of the most affordable for tinyML applications, as you can buy it from various electronic distributors for less than $10. Cost is not the only reason we went for this sensor, though. Other factors make this device our preferred option, such as the following:

  • Low frame resolution support: Since microcontrollers have limited memory, we should consider cameras capable of...

Grabbing camera frames from the serial port with Python

In the previous recipe, we showed how to take images with the OV7670, but we didn't provide a method to display them. Therefore, it is now time to implement a Python script locally to read the pixel values transmitted serially and show the resulting image on the screen.

Getting ready

In this recipe, we will develop a Python script locally to create images from the data transmitted over the serial. To facilitate this task, we will need two main Python libraries:

  • pySerial, which allows us to read the data transmitted serially
  • Pillow (https://pypi.org/project/Pillow), which enables us to create image files from the received data

The Pillow library is a fork of the Python Imaging Library (PIL) and can be installed with the following pip command:

$ pip install Pillow

Using the fromarray() method provided by this library, we can generate images from NumPy arrays that hold...
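As a sketch of that pipeline, the helper below decodes a raw RGB565 frame into a Pillow image via fromarray(). The frame size and the big-endian byte order are assumptions here, to be checked against the output of the CameraCaptureRawBytes sketch; the commented pySerial call shows where the raw bytes would come from:

```python
import numpy as np
from PIL import Image

WIDTH, HEIGHT = 320, 240  # QVGA resolution used by the sketch


def rgb565_to_image(raw: bytes, width: int = WIDTH, height: int = HEIGHT) -> Image.Image:
    """Decode a raw RGB565 frame (2 bytes/pixel, big-endian assumed) into a Pillow image."""
    # Interpret the byte stream as 16-bit big-endian pixels
    pixels = np.frombuffer(raw, dtype=">u2").reshape(height, width)
    # Extract the 5-6-5 color channels and expand them to 8 bits
    r = ((pixels >> 11) & 0x1F).astype(np.uint8) << 3
    g = ((pixels >> 5) & 0x3F).astype(np.uint8) << 2
    b = (pixels & 0x1F).astype(np.uint8) << 3
    rgb = np.dstack((r, g, b))
    return Image.fromarray(rgb, mode="RGB")


# With pySerial, the raw frame would be read like this (port and baud rate
# are placeholders to adapt to your setup):
# import serial
# ser = serial.Serial("/dev/ttyACM0", 115200)
# raw = ser.read(WIDTH * HEIGHT * 2)
# rgb565_to_image(raw).show()
```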

Acquiring QQVGA images with the YCbCr422 color format

When compiling the previous sketch for the Arduino Nano, you may have noticed the following warning message in the IDE’s output log: Low memory available, stability problems may occur. This warning message appears because the QVGA image in the RGB565 color format requires a buffer of 153.6 KB, equivalent to roughly 60% of the available SRAM in the microcontroller.

In this recipe, we will show how to acquire an image at a lower resolution and use the YCbCr422 color format to reduce memory requirements, without compromising image quality.

Getting ready

Images are well known to require large amounts of memory, which can be a problem when dealing with microcontrollers.

Lowering the image resolution is a common practice to reduce the image memory size. Common image resolutions for microcontrollers are smaller than the QVGA (320x240) previously used, such as QQVGA (160x120) or QQQVGA (80x60...
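The impact of the resolution on the frame buffer size can be checked with quick arithmetic, given that both RGB565 and YCbCr422 store two bytes per pixel:

```python
BYTES_PER_PIXEL = 2  # both RGB565 and YCbCr422 use 2 bytes per pixel

resolutions = {"QVGA": (320, 240), "QQVGA": (160, 120), "QQQVGA": (80, 60)}

for name, (w, h) in resolutions.items():
    kb = w * h * BYTES_PER_PIXEL / 1000
    print(f"{name} ({w}x{h}): {kb:.1f} KB")
# QVGA (320x240): 153.6 KB
# QQVGA (160x120): 38.4 KB
# QQQVGA (80x60): 9.6 KB
```

Dropping from QVGA to QQVGA alone shrinks the frame buffer from 153.6 KB to 38.4 KB, well below the memory warning threshold.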

Building the dataset to classify desk objects

In this recipe, we will build the dataset by collecting images of the mug and book, with the OV7670 camera and the Arduino Nano. The image files will then be uploaded to Google Drive to train the ML model in the next recipe.

Getting ready

Training a deep neural network from scratch for image classification commonly requires a dataset with 1,000 images per class. As you might guess, collecting such a vast number of pictures would be time-consuming. To overcome this challenge, we will employ the technique we applied in Chapter 7, Detecting Objects with Edge Impulse Using FOMO on the Raspberry Pi Pico: transfer learning.

This ML technique, which we will exploit in the following recipe, allows us to build a dataset with just 20 samples per class.

How to do it…

Before implementing the Python script, remove the test pattern mode (Camera.testPattern()) in the Arduino sketch so that you can acquire live...

Transfer learning with Keras

Transfer learning is an effective technique to train a model when dealing with small datasets.

In this recipe, we will exploit it alongside the MobileNet v2 pre-trained model to recognize our two desk objects.

Getting ready

The basic principle behind transfer learning is to exploit features learned for one problem to address a new and similar one. Therefore, the idea is to take layers from a previously trained model, commonly called a pre-trained model, and add some new trainable layers on top of them:

Figure 8.17: Model architecture with transfer learning

The pre-trained model’s layers are frozen, meaning their weights cannot change during training. These layers are the base (or backbone) of the new architecture and aim to extract features from the input data. These features feed the trainable layers, the only layers to be trained from scratch.

The trainable layers are the head of the new architecture, and for...
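In Keras, this frozen-base/trainable-head structure can be sketched as follows. The 96x96 input resolution and the specific head layers are illustrative choices, and weights=None is used only to keep the snippet offline; in practice, you would pass weights="imagenet" to load the pre-trained features:

```python
import tensorflow as tf

IMG_SIZE = (96, 96)  # illustrative input resolution
NUM_CLASSES = 2      # mug and book

# Pre-trained base (backbone): used only as a feature extractor.
# weights=None keeps this sketch offline; use weights="imagenet" in practice.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,),
    include_top=False,  # drop the original ImageNet classification head
    weights=None,
)
base.trainable = False  # freeze the backbone: its weights cannot change

# Trainable head: the only layers trained from scratch
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
```

Because the backbone is frozen, only the small head is optimized during training, which is what makes 20 samples per class viable.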

Quantizing and testing the trained model with TensorFlow Lite

As we know from previous recipes, the model should be quantized to 8 bits to operate effectively on microcontrollers. Nonetheless, how do we know if the quantized model preserves the accuracy of the floating-point variant?

This question will be answered in this recipe, where we will show you how to evaluate the accuracy of the quantized model generated by the TensorFlow Lite converter. After this analysis, we will convert the TensorFlow Lite model to a C-byte array for deploying it on the Arduino Nano in the next recipe.

Getting ready

Quantization is an essential technique to reduce model size and significantly improve latency. However, adopting arithmetic with limited precision may alter a model’s accuracy. As a result, evaluating the quantized model’s accuracy is critical to ensure that the application performs as intended. Unfortunately, TensorFlow Lite does not provide a built...
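As a minimal sketch of the full-integer quantization flow with the TensorFlow Lite converter (the tiny stand-in model and the random calibration data are placeholders for the trained classifier and its training images):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; in the recipe, this would be the trained classifier
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])


def representative_dataset():
    # Calibration samples; in practice, drawn from the training data
    for _ in range(100):
        yield [np.random.rand(1, 4).astype(np.float32)]


converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# Run the quantized model with the TFLite interpreter to evaluate it
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Quantize a float sample using the input tensor's scale and zero point
scale, zero_point = inp["quantization"]
sample = np.random.rand(1, 4).astype(np.float32)
q_sample = (sample / scale + zero_point).astype(np.int8)
interpreter.set_tensor(inp["index"], q_sample)
interpreter.invoke()
prediction = interpreter.get_tensor(out["index"])
```

Looping this inference over a labeled test set and comparing the predictions against the floating-point model gives the accuracy check discussed above. The resulting tflite_model byte string is also what would later be converted to a C byte array (for example, with xxd -i) for deployment on the Arduino Nano.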

Fusing the pre-processing operators for efficient deployment

In this last recipe, we will develop a sketch to classify desk objects with the Arduino Nano. However, the ML deployment is not the only thing we must take care of. Indeed, a few additional operations must be implemented to supply the correct input image to the model.

Therefore, in this recipe, we will not just discuss model deployment but also delve into implementing a memory-efficient pre-processing pipeline, preparing the input for the model.

Getting ready

RAM usage is impacted by the variables allocated during the program execution, such as the input, output, and intermediate tensors of the ML model. However, the model is not solely responsible for memory utilization. In fact, the image acquired from the OV7670 camera needs to be pre-processed with the following operations to provide the appropriate input to the model:

  • Image cropping to match the input shape aspect ratio of the model
  • ...

Summary

The recipes presented in this chapter demonstrated how to build an end-to-end image classification application with TensorFlow and an Arduino-compatible platform.

In the first part, we learned how to connect the OV7670 camera module to the Arduino Nano and acquire images, with a resolution and color format suitable for memory-constrained devices.

Then, we developed a Python script to create images from the pixels transmitted over the serial by the Arduino Nano. This script was then extended to upload the image files to Google Drive, laying the foundation to build the training dataset.

After the dataset preparation, we delved into the model design, where we leveraged transfer learning with TensorFlow to train a model to classify desk objects.

Finally, we quantized the trained model to 8 bits using the TensorFlow Lite converter and deployed it to the Arduino Nano. However, the development of the Arduino sketch went beyond mere model deployment. Crucially, we...

Learn more on Discord

To join the Discord community for this book – where you can share feedback, ask questions to the author, and learn about new releases – follow the QR code below:

https://packt.link/tiny

