TinyML Cookbook - Second Edition


Product type Book
Published in Nov 2023
Publisher Packt
ISBN-13 9781837637362
Pages 664 pages
Edition 2nd Edition
Author: Gian Marco Iodice

Table of Contents (16 Chapters)

1. Preface
2. Getting Ready to Unlock ML on Microcontrollers
3. Unleashing Your Creativity with Microcontrollers
4. Building a Weather Station with TensorFlow Lite for Microcontrollers
5. Using Edge Impulse and the Arduino Nano to Control LEDs with Voice Commands
6. Recognizing Music Genres with TensorFlow and the Raspberry Pi Pico – Part 1
7. Recognizing Music Genres with TensorFlow and the Raspberry Pi Pico – Part 2
8. Detecting Objects with Edge Impulse Using FOMO on the Raspberry Pi Pico
9. Classifying Desk Objects with TensorFlow and the Arduino Nano
10. Building a Gesture-Based Interface for YouTube Playback with Edge Impulse and the Raspberry Pi Pico
11. Deploying a CIFAR-10 Model for Memory-Constrained Devices with the Zephyr OS on QEMU
12. Running ML Models on Arduino and the Arm Ethos-U55 microNPU Using Apache TVM
13. Enabling Compelling tinyML Solutions with On-Device Learning and scikit-learn on the Arduino Nano and Raspberry Pi Pico
14. Conclusion
15. Other Books You May Enjoy
16. Index

Transfer learning with Keras

Transfer learning is an effective technique for training a model when only a small dataset is available.

In this recipe, we will apply it with the MobileNetV2 pre-trained model to recognize our two desk objects.

Getting ready

The basic principle behind transfer learning is to reuse features learned for one problem to address a new, similar one. In practice, we take layers from a previously trained model, commonly called a pre-trained model, and add some new trainable layers on top of them:

Figure 8.17: Model architecture with transfer learning

The pre-trained model’s layers are frozen, meaning their weights cannot change during training. These layers are the base (or backbone) of the new architecture and aim to extract features from the input data. These features feed the trainable layers, the only layers to be trained from scratch.
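The freeze-and-extend pattern above can be sketched in Keras as follows. This is a minimal illustration, not the recipe's exact code: the input size (96×96×3) and the two-class softmax head are assumptions chosen here to match the two desk objects.

```python
# Hypothetical sketch: MobileNetV2 as a frozen backbone with a new trainable head.
from tensorflow import keras

# Load MobileNetV2 pre-trained on ImageNet, without its classification head.
base_model = keras.applications.MobileNetV2(
    input_shape=(96, 96, 3),  # assumed input resolution
    include_top=False,
    weights="imagenet",
)

# Freeze the backbone: its weights cannot change during training.
base_model.trainable = False

# Add the new trainable layers (the head) on top of the frozen feature extractor.
model = keras.Sequential([
    base_model,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(2, activation="softmax"),  # two desk objects (assumed)
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Only the pooling and dense layers are updated when `model.fit()` is called; the backbone acts purely as a feature extractor.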

The trainable layers are the head of the new architecture, and for...
