Natural Language Processing with Flair

You're reading from Natural Language Processing with Flair: A practical guide to understanding and solving NLP problems with Flair

Product type: Paperback
Published: Apr 2022
Publisher: Packt
ISBN-13: 9781801072311
Length: 200 pages
Edition: 1st
Author (1): Tadej Magajna

Table of Contents (15 chapters)

Preface
Part 1: Understanding and Solving NLP with Flair
Chapter 1: Introduction to Flair
Chapter 2: Flair Base Types
Chapter 3: Embeddings in Flair
Chapter 4: Sequence Tagging
Part 2: Deep Dive into Flair – Training Custom Models
Chapter 5: Training Sequence Labeling Models
Chapter 6: Hyperparameter Optimization in Flair
Chapter 7: Train Your Own Embeddings
Chapter 8: Text Classification in Flair
Part 3: Real-World Applications with Flair
Chapter 9: Deploying and Using Models in Production
Chapter 10: Hands-On Exercise – Building a Trading Bot with Flair
Other Books You May Enjoy

Technical considerations for NLP models in production

Deploying machine learning (especially NLP) models differs from deploying other software solutions in one key area – the resources needed to run the service. Hosting a generic backend for a simple mobile or web app can, in theory, be done on any modern device, such as a PC or a mobile phone. Only when you start scaling the service to cater to a larger audience do you need to put extra thought, effort, and resources into making it more scalable. When serving machine learning models, things often get complicated right from the start. We are dealing with huge models that a typical web server sometimes can't even load into memory. Each request consumes a significant amount of resources, yet we need to serve requests on demand, in real time, and to a large audience. But how do you do that?
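To make the "load the model once, serve many requests" idea concrete, here is a minimal sketch of serving a Flair tagger behind a web endpoint. The choice of Flask, the /tag route, and the pre-trained 'ner' model are illustrative assumptions rather than the book's deployment recipe; the point is only that the expensive model load happens once at startup, while each request pays only for inference.

    # A minimal sketch (not a production recipe) of serving a Flair model.
    # Flask, the /tag route, and the 'ner' tagger are illustrative choices.
    from flair.data import Sentence
    from flair.models import SequenceTagger
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    # Load the (potentially multi-gigabyte) model once, at startup, so that
    # individual requests only pay for inference, not for loading the model.
    tagger = SequenceTagger.load("ner")

    @app.route("/tag", methods=["POST"])
    def tag():
        # Each request still consumes significant memory and CPU/GPU time,
        # which is what makes scaling an NLP service harder than scaling a
        # typical stateless web app.
        sentence = Sentence(request.json["text"])
        tagger.predict(sentence)
        return jsonify({"tagged": sentence.to_tagged_string()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)

In practice, you would layer batching, request queuing, or GPU-backed inference on top of a skeleton like this – exactly the kind of resource planning this section is concerned with.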

When comparing this chapter's topic with what we have covered in the book so far, you will notice that deploying...
