Chapter 5: Fine-Tuning Language Models for Text Classification

In this chapter, we will learn how to configure a pre-trained model for text classification and how to fine-tune it to any text classification downstream task, such as sentiment analysis or multi-class classification. We will also discuss how to handle sentence-pair and regression problems by covering an implementation. We will work with well-known datasets such as GLUE, as well as our own custom datasets. We will then take advantage of the Trainer class, which deals with the complexity of processes for training and fine-tuning.

First, we will learn how to fine-tune a model for single-sentence binary sentiment classification with the Trainer class. Then, we will train a sentiment classification model with native PyTorch, without the Trainer class. For multi-class classification, where more than two classes are taken into consideration, we will perform a seven-class classification fine-tuning task. Finally, we will train a text regression...

Technical requirements

We will be using Jupyter Notebook to run our coding exercises. You will need Python 3.6+ for this. Ensure that the following packages are installed:

  • sklearn
  • Transformers 4.0+
  • datasets
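
If any of these packages are missing, they can be installed from within a notebook cell; note that sklearn is distributed under the package name scikit-learn:

!pip install scikit-learn "transformers>=4.0" datasets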

All the notebooks for the coding exercises in this chapter will be available at the following GitHub link: https://github.com/PacktPublishing/Mastering-Transformers/tree/main/CH05.

Check out the following link to see the Code in Action video:

https://bit.ly/3y5Fe6R

Introduction to text classification

Text classification (also known as text categorization) is a way of mapping a document (sentence, Twitter post, book chapter, email content, and so on) to a category out of a predefined list (classes). In the case of two classes with positive and negative labels, we call this binary classification – more specifically, sentiment analysis. For more than two classes, we call this multi-class classification, where the classes are mutually exclusive, or multi-label classification, where the classes are not mutually exclusive, meaning a document can receive more than one label. For instance, the content of a news article may be related to sport and politics at the same time. Beyond these classification tasks, we may want to score documents in a range of [-1, 1] or rank them in a range of [1, 5]. We can solve this kind of problem with a regression model, where the type of the output is numeric, not categorical.

Luckily, the transformer architecture...

Fine-tuning a BERT model for single-sentence binary classification

In this section, we will discuss how to fine-tune a pre-trained BERT model for sentiment analysis by using the popular IMDb sentiment dataset. Working with a GPU will speed up our learning process, but if you do not have such resources, you can work with a CPU as well for fine-tuning. Let's get started:

  1. To detect the current device and store it, we can execute the following lines of code:
    from torch import cuda
    device = 'cuda' if cuda.is_available() else 'cpu'
  2. We will use the DistilBertForSequenceClassification class here, which places a sequence classification head on top of the base DistilBERT model. We can utilize this classification head to train the classification model, where the number of classes is 2 by default (a sketch of the remaining flow follows this listing):
    from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification
    model_path = 'distilbert-base-uncased'
    tokenizer = DistilBertTokenizerFast.from_pretrained(model_path)
    model = DistilBertForSequenceClassification.from_pretrained(model_path)
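
To give a sense of where this step is heading, here is a hedged sketch of the remaining flow with the Trainer class. The subset sizes, sequence length, and training arguments below are illustrative choices, not the chapter's exact settings:

from datasets import load_dataset
from transformers import Trainer, TrainingArguments

# Load the IMDb sentiment dataset and tokenize small, illustrative subsets
imdb = load_dataset("imdb")
def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=128)
train_ds = imdb["train"].shuffle(seed=42).select(range(2000)).map(tokenize, batched=True)
eval_ds = imdb["test"].shuffle(seed=42).select(range(500)).map(tokenize, batched=True)

# The Trainer handles batching, device placement, and the optimization loop
training_args = TrainingArguments(output_dir="./results", num_train_epochs=1, per_device_train_batch_size=16)
trainer = Trainer(model=model, args=training_args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()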

Training a classification model with native PyTorch

The Trainer class is very powerful, and we have the HuggingFace team to thank for providing such a useful tool. However, in this section, we will fine-tune the pre-trained model with our own training loop to see what happens under the hood. Let's get started:

  1. First, let's load the model for fine-tuning. We will select DistilBERT here since it is a small, fast, and cheap version of BERT:
    from transformers import DistilBertForSequenceClassification
    model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
  2. To fine-tune any model, we need to put it into training mode, as follows:
    model.train()
  3. Now, we must load the tokenizer:
    from transformers import DistilBertTokenizerFast
    tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
  4. Since the Trainer class organized the entire process for us, we did not have to deal with optimization and other training settings in...
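
To make the contrast with the Trainer class concrete, here is a minimal, hedged sketch of a single manual training step; the two-example batch is made up, and a real run would iterate over a DataLoader of IMDb batches for several epochs:

import torch
from torch.optim import AdamW

# Hypothetical toy batch; real training iterates over the full dataset
texts = ["A masterpiece.", "Utterly boring."]
labels = torch.tensor([1, 0])

optimizer = AdamW(model.parameters(), lr=5e-5)
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)  # the classification head computes the cross-entropy loss
outputs.loss.backward()                  # backpropagation
optimizer.step()                         # parameter update
optimizer.zero_grad()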

Fine-tuning BERT for multi-class classification with custom datasets

In this section, we will fine-tune the Turkish BERT model, namely BERTurk, to perform a seven-class classification downstream task with a custom dataset. This dataset has been compiled from Turkish newspapers and consists of seven categories. We will start by getting the dataset. Alternatively, you can find it in this book's GitHub repository or get it from https://www.kaggle.com/savasy/ttc4900:

  1. First, run the following code to get data within a Python notebook:
    !wget https://raw.githubusercontent.com/savasy/TurkishTextClassification/master/TTC4900.csv
  2. Start by loading the data:
    import pandas as pd
    data = pd.read_csv("TTC4900.csv")
    data = data.sample(frac=1.0, random_state=42)  # shuffle the rows reproducibly
  3. Let's organize the IDs and labels with id2label and label2id so that the model knows which ID refers to which label. We will also pass the number of labels, NUM_LABELS, to the model to specify the size of a thin...
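
Here is a hedged sketch of how these mappings can be built; it assumes the CSV exposes the label names in a category column (check data.columns for the actual name) and uses a public BERTurk checkpoint from the HuggingFace Hub, which may differ from the chapter's exact choice:

from transformers import BertForSequenceClassification

# Assumed column name; the dataset has seven categories in total
labels = list(data["category"].unique())
NUM_LABELS = len(labels)  # expected to be 7 for TTC4900
id2label = {i: name for i, name in enumerate(labels)}
label2id = {name: i for i, name in enumerate(labels)}

# "dbmdz/bert-base-turkish-uncased" is a public BERTurk checkpoint
model = BertForSequenceClassification.from_pretrained(
    "dbmdz/bert-base-turkish-uncased",
    num_labels=NUM_LABELS, id2label=id2label, label2id=label2id)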

Fine-tuning the BERT model for sentence-pair regression

The regression model is architecturally the same as a classification model, except that the head layer contains only a single unit, and its output is not passed through a softmax but used directly as a continuous score (when num_labels is 1, the transformers library computes a mean squared error loss instead of cross-entropy). To specify the model and put a single-unit head layer at the top, we can either directly pass the num_labels=1 parameter to the from_pretrained() method or pass this information through a Config object. Initially, this needs to be copied from the config object of the pre-trained model, as follows:

from transformers import DistilBertConfig, DistilBertTokenizerFast, DistilBertForSequenceClassification
model_path = 'distilbert-base-uncased'
config = DistilBertConfig.from_pretrained(model_path, num_labels=1)
tokenizer = DistilBertTokenizerFast.from_pretrained(model_path)
model = DistilBertForSequenceClassification.from_pretrained(model_path, config=config)

Well, our pre-trained model has a single-unit head layer thanks to the num_labels...
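
As a quick sanity check of the single-unit head, a hypothetical sentence pair can be encoded and scored as follows; the sentences are made up, and the score is meaningless until the model has been fine-tuned on a sentence-pair regression dataset such as STS-B:

import torch

s1 = "A man is playing a guitar."          # hypothetical sentence pair
s2 = "Someone is playing an instrument."
inputs = tokenizer(s1, s2, truncation=True, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # a single float; no softmax is applied
print(score)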

Utilizing run_glue.py to fine-tune the models

So far, we have designed a fine-tuning pipeline from scratch using both native PyTorch and the Trainer class. The HuggingFace community also provides another powerful script called run_glue.py for the GLUE benchmark and GLUE-like classification downstream tasks. This script can handle and organize the entire training/validation process for us. If you want to do quick prototyping, you should use this script. It can fine-tune any pre-trained model on the HuggingFace Hub, and we can also feed it our own data as CSV or JSON files.

Please go to the following link to access the script and to learn more: https://github.com/huggingface/transformers/tree/master/examples.

The script can perform nine different GLUE tasks. With the script, we can do everything that we have done with the Trainer class so far. The task name could be one of the following GLUE tasks: cola, sst2, mrpc, stsb, qqp, mnli, qnli, rte, or wnli.
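
For instance, fine-tuning on SST-2 might look like the following invocation. The flag names follow the HuggingFace examples, but the hyperparameter values here are illustrative and the exact flags should be checked against the script version you have installed:

python run_glue.py \
  --model_name_or_path distilbert-base-uncased \
  --task_name sst2 \
  --do_train --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir ./sst2_output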

Here is the script scheme for...

Summary

In this chapter, we discussed how to fine-tune a pre-trained model for any text classification downstream task. We fine-tuned models for sentiment analysis, multi-class classification, and sentence-pair classification – more specifically, sentence-pair regression. We worked with the well-known IMDb dataset and our own custom dataset to train the models. While we took advantage of the Trainer class to cope with much of the complexity of training and fine-tuning, we also learned how to write the training loop in native PyTorch to understand what the transformers library handles under the hood, from the forward pass to backpropagation. To summarize, we discussed and conducted fine-tuning for single-sentence classification with Trainer, sentiment classification with native PyTorch without Trainer, single-sentence multi-class classification, and sentence-pair regression.

In the next chapter, we will learn how to fine-tune a pre-trained model to any token classification...
