Hands-On Meta Learning with Python

Product type: Book
Published: Dec 2018
Publisher: Packt
ISBN-13: 9781789534207
Pages: 226
Edition: 1st
Author: Sudharsan Ravichandiran

Table of Contents (17 Chapters)

Title Page
Dedication
About Packt
Contributors
Preface
1. Introduction to Meta Learning
2. Face and Audio Recognition Using Siamese Networks
3. Prototypical Networks and Their Variants
4. Relation and Matching Networks Using TensorFlow
5. Memory-Augmented Neural Networks
6. MAML and Its Variants
7. Meta-SGD and Reptile
8. Gradient Agreement as an Optimization Objective
9. Recent Advancements and Next Steps
Assessments
Other Books You May Enjoy
Index

MAML


MAML (Model-Agnostic Meta-Learning) is one of the most popular recently introduced meta learning algorithms, and it has been a major breakthrough in meta learning research. Learning to learn is the key focus of meta learning: we learn from many related tasks, each containing only a small number of data points, and the meta learner produces a quick learner that can generalize well on a new, related task even with fewer training samples.

The basic idea of MAML is to find a better initial parameter so that, starting from this good initialization, the model can learn quickly on new tasks with fewer gradient steps.
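In other words, MAML learns an initialization theta through a two-level procedure: for each sampled task T_i, take one (or a few) gradient steps from theta to get adapted parameters theta'_i = theta - alpha * grad L_{T_i}(theta), then update theta itself to reduce the loss of the adapted parameters: theta <- theta - beta * grad sum_i L_{T_i}(theta'_i). The following is a minimal sketch of this procedure in Python, assuming a toy sine-wave regression problem, a hand-crafted linear model in place of a neural network, and the first-order approximation that drops second derivatives; the task distribution, model, and step sizes alpha and beta here are illustrative assumptions, not the book's code.

import numpy as np

def sample_task():
    """A toy regression task: y = a * sin(x + b) with random a, b (an assumption for this sketch)."""
    a = np.random.uniform(0.5, 2.0)
    b = np.random.uniform(0.0, np.pi)
    def draw(n):
        x = np.random.uniform(-5.0, 5.0, size=(n, 1))
        return x, a * np.sin(x + b)
    return draw

def features(x):
    # Hand-crafted features for a linear model; stands in for a neural network.
    return np.hstack([x, np.sin(x), np.ones_like(x)])

def loss_and_grad(theta, x, y):
    # Mean squared error and its gradient with respect to theta.
    err = features(x) @ theta - y
    return np.mean(err ** 2), 2.0 * features(x).T @ err / len(x)

alpha, beta = 0.1, 0.01   # inner (per-task) and outer (meta) step sizes
theta = np.zeros((3, 1))  # the shared initialization that MAML learns

for step in range(1000):
    meta_grad = np.zeros_like(theta)
    for _ in range(5):                    # a batch of sampled tasks
        draw = sample_task()
        x_s, y_s = draw(10)               # support set: used to adapt
        x_q, y_q = draw(10)               # query set: used to meta-update
        # Inner loop: one gradient step from theta on the support set.
        _, g_s = loss_and_grad(theta, x_s, y_s)
        theta_prime = theta - alpha * g_s
        # Outer loop: the query-set gradient at the adapted parameters.
        # Using it directly is the first-order MAML approximation;
        # full MAML would also differentiate through the inner step.
        _, g_q = loss_and_grad(theta_prime, x_q, y_q)
        meta_grad += g_q
    theta -= beta * meta_grad / 5         # update the initialization itself

After meta-training, theta serves as the starting point for any new task: a few support-set gradient steps from it should already reach a low loss, which is exactly the fast adaptation MAML optimizes for.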

So, what do we mean by that? Let's say we are performing a classification task using a neural network. How do we train the network? We start off by initializing random weights and train the network by minimizing the loss. How do we minimize the loss? We do so using gradient descent. Okay, but how do we use gradient descent for minimizing the loss? We use gradient...
