Chapter 10: Serving Transformer Models

So far, we've explored many aspects of Transformers: you've learned how to train and use a Transformer model from scratch, and how to fine-tune one for many tasks. However, we still don't know how to serve these models in production. Like any other real-life, modern solution, Natural Language Processing (NLP)-based solutions must be able to run in a production environment, and metrics such as response time must be taken into consideration while developing them.

This chapter will explain how to serve a Transformer-based NLP solution in environments where a CPU or GPU is available. We will describe TensorFlow Extended (TFX) as a solution for machine learning deployment, and we will also illustrate other solutions for serving Transformers as APIs, such as fastAPI. You will also learn about the basics of Docker, as well as how to dockerize your service and make it deployable. Lastly...

Technical requirements

We will be using Jupyter notebooks, Python scripts, and Dockerfiles to run our coding exercises, which will require Python 3.6.0. The following packages need to be installed (a sample install command is given after this list):

  • TensorFlow
  • PyTorch
  • Transformers >= 4.0.0
  • fastAPI
  • Docker
  • Locust
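
As a sample, the Python packages above can be installed in one go (Docker itself is installed separately, as described later in this chapter); version pins are omitted here and should follow the list above:

    $ pip install tensorflow torch transformers fastapi uvicorn locust

uvicorn is included here as an assumption: fastAPI needs an ASGI server to run, and uvicorn is the one used later in this chapter.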

Now, let's get started!

All the notebooks for the coding exercises in this chapter will be available at the following GitHub link: https://github.com/PacktPublishing/Mastering-Transformers/tree/main/CH10.

Check out the following link to see the Code in Action video: https://bit.ly/375TOPO

fastAPI Transformer model serving

There are many web frameworks we can use for serving; Sanic, Flask, and fastAPI are just a few examples. However, fastAPI has recently gained a lot of attention because of its speed and reliability. In this section, we will use fastAPI and learn how to build a service according to its documentation. We will also use pydantic to define our data classes. Let's begin!

  1. Before we start, we must install pydantic and fastAPI:
    $ pip install pydantic
    $ pip install fastapi
  2. The next step is to use pydantic to define a data model that validates the input of the API. But before forming the data model, we must know what our model is and identify its input.

    We are going to use a Question Answering (QA) model for this. As you know from Chapter 6, Fine-Tuning Language Models for Token Classification, the input is in the form of a question and a context.

  3. You can define the QA data model as follows:
    from pydantic import BaseModel...
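
Since the listing above is cut off, here is a minimal, self-contained sketch of the whole service, saved as main.py. It is a sketch under stated assumptions: the endpoint path /question_answering and the checkpoint distilbert-base-cased-distilled-squad are illustrative choices, not necessarily the chapter's exact ones.

    from pydantic import BaseModel
    from fastapi import FastAPI
    from transformers import pipeline
    import uvicorn

    # Input schema: a QA request carries a question and a context
    class QADataModel(BaseModel):
        question: str
        context: str

    app = FastAPI()

    # Illustrative checkpoint; any extractive QA model works here
    qa_pipeline = pipeline(
        "question-answering",
        model="distilbert-base-cased-distilled-squad")

    @app.post("/question_answering")
    async def question_answering(data: QADataModel):
        result = qa_pipeline(question=data.question, context=data.context)
        return {"answer": result["answer"], "score": result["score"]}

    if __name__ == '__main__':
        uvicorn.run('main:app', workers=1)

You can start it with python main.py (or uvicorn main:app) and query it with a JSON POST request containing the question and context fields.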

Dockerizing APIs

To save time during production and ease the deployment process, it is essential to use Docker: it isolates your service and application, and it ensures that the same code can run anywhere, regardless of the underlying OS. Before using it, you must install it by following the steps recommended in the Docker documentation (https://docs.docker.com/get-docker/):

  1. First, put the main.py file in the app directory.
  2. Next, remove the last part of your code, the block that runs uvicorn directly, because uvicorn will be started by the Docker container instead:
    if __name__ == '__main__':
        uvicorn.run('main:app', workers=1)
  3. The next step is to create a Dockerfile for the fastAPI application you made previously. To do so, you must create a Dockerfile that contains the following content:
    FROM python:3.7
    RUN pip install torch
    RUN pip install fastapi uvicorn transformers
    EXPOSE 80
    COPY ./app /app
    CMD ["uvicorn...

Faster Transformer model serving using TFX

TFX provides a faster and more efficient way to serve deep learning-based models. But there are some key points you must understand before you use it. The model must be a TensorFlow saved model so that it can be used by TFX Docker or the CLI. Let's take a look:

  1. You can perform TFX model serving by using a saved model format from TensorFlow. For more information about TensorFlow saved models, you can read the official documentation at https://www.tensorflow.org/guide/saved_model. To make a saved model from Transformers, you can simply use the following code:
    from transformers import TFBertForSequenceClassification
    model = TFBertForSequenceClassification.from_pretrained(
        "nateraw/bert-base-uncased-imdb", from_pt=True)
    model.save_pretrained("tfx_model", saved_model=True)
  2. Before we can use it to serve Transformers, we must pull the Docker image for TFX:
    $ docker pull...
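
The image name is truncated above; the standard image for this purpose is tensorflow/serving. A run command along the following lines serves the saved model created earlier over REST, where the model name bert and the port mapping are illustrative assumptions:

    $ docker pull tensorflow/serving
    $ docker run -p 8501:8501 \
        --mount type=bind,source="$(pwd)/tfx_model/saved_model",target=/models/bert \
        -e MODEL_NAME=bert -t tensorflow/serving

Note that save_pretrained with saved_model=True writes a versioned directory (tfx_model/saved_model/1), which matches the layout TFX expects; predictions are then available at http://localhost:8501/v1/models/bert:predict.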

Load testing using Locust

There are many applications we can use to load test services. Most of these applications and libraries provide useful information about the response time and latency of the service, as well as its failure rate. Locust is one of the best tools for this purpose. We will use it to load test three methods of serving a Transformer-based model: using fastAPI on its own, using dockerized fastAPI, and using TFX-based serving with fastAPI. Let's get started:

  1. First, we must install Locust:
    $ pip install locust

    This command will install Locust. The next step is to make all the services that serve an identical task use the same model. Fixing the two most important parameters of this test, the task and the model, ensures that all the services are designed identically to serve a single purpose. Using the same model lets us freeze everything else and focus purely on the deployment performance of each method.

  2. Once everything is ready, you can start load testing your...
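
Since the walkthrough is truncated here, the following is a minimal locustfile sketch. The endpoint and payload assume the illustrative fastAPI QA service from earlier in this chapter:

    from locust import HttpUser, task, between

    class QAUser(HttpUser):
        # Each simulated user waits 1 to 5 seconds between requests
        wait_time = between(1, 5)

        @task
        def question_answering(self):
            # Endpoint and payload match the sketched fastAPI service
            self.client.post("/question_answering", json={
                "question": "What is Locust used for?",
                "context": "Locust is an open source load testing tool."
            })

Running locust -f locustfile.py --host http://localhost:8000 starts the Locust web UI (on port 8089 by default), where you can set the number of users and the spawn rate, then watch response times and failure rates as the test runs.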

Summary

In this chapter, you learned the basics of serving Transformer models using fastAPI, as well as the basics of Docker and how to package your application as a Docker container. You also learned how to serve models in a more advanced and efficient way by using TFX. Finally, you studied the basics of load testing: creating users, making them spawn in groups or one by one, and reporting the results of stress testing.

In the next chapter, you will learn about Transformer deconstruction, the model view, and monitoring training using various tools and techniques.
