
Demystifying Large Language Models: Theory, Design, and Langchain Implementation

In this chapter, we delve deep into the intricate world of large language models (LLMs) and the underpinning mathematical concepts that fuel their performance. The advent of these models has revolutionized the field of natural language processing (NLP), offering unparalleled proficiency in understanding, generating, and interacting with human language.

LLMs are a subset of artificial intelligence (AI) models that can understand and generate human-like text. They achieve this by being trained on a diverse range of internet text, thus learning an extensive array of facts about the world. They also learn to predict what comes next in a piece of text, which enables them to generate creative, fluent, and contextually coherent sentences.

As we explore the operations of LLMs, we will introduce the key metric of perplexity, a measure of uncertainty that is pivotal in evaluating the performance of these...
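
To make the idea concrete, here is a minimal sketch of how perplexity is computed: it is the exponential of the average negative log-likelihood that a model assigns to the correct tokens of a text, so lower values mean the model is less "surprised". The per-token probabilities below are made up for illustration rather than taken from a real model.

import math

# Probabilities a hypothetical LM assigned to the correct next token at each step
# (illustrative values only).
token_probs = [0.25, 0.10, 0.60, 0.05, 0.30]

# Perplexity = exp(average negative log-likelihood); lower is better.
avg_neg_log_likelihood = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_neg_log_likelihood)
print(f"Perplexity: {perplexity:.2f}")  # roughly 5.4 for these probabilities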

Technical requirements

For this chapter, you are expected to possess a solid foundation in machine learning (ML) concepts, particularly in the areas of Transformers and reinforcement learning. An understanding of Transformer-based models, which underpin many of today’s LLMs, is vital. This includes familiarity with concepts such as self-attention mechanisms, positional encoding, and the structure of encoder-decoder architectures.

Knowledge of reinforcement learning principles is also essential, as we will delve into the application of reinforcement learning from human feedback (RLHF) in the fine-tuning of LMs. Familiarity with concepts such as policy gradients, reward functions, and Q-learning will greatly enhance your comprehension of this content.

Lastly, coding proficiency, specifically in Python, is crucial. This is because many of the concepts will be demonstrated and explored through the lens of programming. Experience with PyTorch or TensorFlow, popular ML libraries, and Hugging Face’s Transformers...

What are LLMs and how are they different from LMs?

An LM is a type of ML model that is trained to predict the next word (or character or subword, depending on the granularity of the model) in a sequence, given the words that came before it (or in some models, the surrounding words). It’s a probabilistic model that is capable of generating text that follows a certain linguistic style or pattern.
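
As a concrete illustration of "predicting the next word given the words that came before it", the following sketch inspects the next-token distribution of the openly available GPT-2 model via Hugging Face's transformers library. The choice of GPT-2 and the example prompt are assumptions made for this sketch only.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, openly available LM and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Ask the model for its distribution over the next token, given the context.
inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, sequence_length, vocab_size)
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Print the five most likely continuations and their probabilities.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")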

Before the advent of Transformer-based models such as generative pretrained Transformers (GPTs) and Bidirectional Encoder Representations from Transformers (BERT), there were several other types of LMs widely used in NLP tasks. The following subsections discuss a few of them.

n-gram models

These are some of the simplest LMs. An n-gram model uses the (n-1) previous words to predict the nth word in a sentence. For example, in a bigram (2-gram) model, we would use the previous word to predict the next word. These models are easy to implement and computationally efficient, but they...
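
The following is a minimal sketch of a bigram model: it counts word pairs in a toy corpus and predicts the most likely next word from the previous one. The corpus and the helper function are made up for illustration.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigram_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    bigram_counts[prev_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word following `word`, with its estimated probability."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    best, freq = counts.most_common(1)[0]
    return best, freq / total

print(predict_next("the"))  # ('cat', 0.5) for this toy corpus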

How LLMs stand out

LLMs, such as GPT-3 and GPT-4, are simply LMs that are trained on a very large amount of text and have a very large number of parameters. The larger the model (in terms of parameters and training data), the more capable it is of understanding and generating complex and varied texts. Here are some key ways in which LLMs differ from smaller LMs:

  • Data: LLMs are trained on vast amounts of data. This allows them to learn from a wide range of linguistic patterns, styles, and topics.
  • Parameters: LLMs have a huge number of parameters. Parameters in an ML model are the parts of the model that are learned from the training data. The more parameters a model has, the more complex the patterns it can learn (a short sketch after this list shows how to count the parameters of a pretrained model).
  • Performance: Because they’re trained on more data and have more parameters, LLMs generally perform better than smaller models. They’re capable of generating more coherent and diverse texts, and they’re better at understanding context, making...
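
As a small illustration of the parameters point, the following sketch counts the trainable parameters of two publicly available models of different sizes. The specific checkpoints are only an example chosen for this sketch.

from transformers import AutoModel

# Compare the parameter counts of a smaller and a larger pretrained model.
for checkpoint in ("distilbert-base-uncased", "bert-large-uncased"):
    model = AutoModel.from_pretrained(checkpoint)
    num_params = sum(p.numel() for p in model.parameters())
    print(f"{checkpoint}: {num_params / 1e6:.0f}M parameters")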

Motivations for developing and using LLMs

The motivation to develop and use LLMs arises from several factors related to the capabilities of these models and the potential benefits they can bring to diverse applications. The following subsections detail a few of these key motivations.

Improved performance

LLMs, when trained with sufficient data, generally demonstrate better performance than smaller models. They are more capable of understanding context, identifying nuances, and generating coherent and contextually relevant responses. This performance gain applies to a wide range of NLP tasks, including text classification, named entity recognition, sentiment analysis, machine translation, question answering, and text generation. Table 7.1 compares the performance of BERT, one of the first well-known LLMs, and GPT with that of earlier models on the General Language Understanding Evaluation (GLUE) benchmark. The GLUE benchmark is a collection...

Challenges in developing LLMs

Developing LLMs poses a unique set of challenges, including but not limited to handling massive amounts of data, requiring vast computational resources, and the risk of introducing or perpetuating bias. The following subsections explain these challenges in more detail.

Amounts of data

LLMs require enormous amounts of data for training. As the model size grows, so does the need for diverse, high-quality training data. However, collecting and curating such large datasets is a challenging task: it can be time-consuming and expensive, and there is also the risk of inadvertently including sensitive or inappropriate data in the training set. To give a sense of scale, BERT was trained on 3.3 billion words from Wikipedia and BookCorpus, GPT-2 was trained on 40 GB of text data, and GPT-3 on 570 GB of text data. Table 7.2 shows the number of parameters and the size of the training data for a few recent LMs.

...

Different types of LLMs

LLMs are generally neural network architectures that are trained on a large corpus of text data. The term “large” refers to the size of these models in terms of the number of parameters and the scale of training data. Here are some examples of LLMs.

Transformer models

Transformer models have been at the forefront of the recent wave of LLMs. They are based on the “Transformer” architecture, which uses self-attention mechanisms to weigh the relevance of different words in the input when making predictions. Transformers are a type of neural network architecture introduced in the paper Attention Is All You Need by Vaswani et al. One of their significant advantages, particularly for training LLMs, is their suitability for parallel computing.

In traditional recurrent neural network (RNN) models, such as long short-term memory (LSTM) and gated recurrent unit (GRU) networks, the sequence of tokens (words, subwords, or characters in the text) must be processed sequentially. That’s because each token’...
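
The following is a minimal sketch of scaled dot-product self-attention in plain PyTorch; the dimensions and random projection matrices are illustrative only. Notice that every token attends to every other token through a single batch of matrix multiplications, which is what makes the computation easy to parallelize, in contrast with the step-by-step recurrence of an RNN.

import math
import torch

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q, w_k, w_v: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / math.sqrt(k.shape[-1])   # pairwise relevance of every token to every other token
    weights = torch.softmax(scores, dim=-1)     # each row is a probability distribution over tokens
    return weights @ v                          # every output row is a weighted sum of all value vectors

seq_len, d_model, d_k = 5, 16, 8
x = torch.randn(seq_len, d_model)               # embeddings of a 5-token input (illustrative values)
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)   # torch.Size([5, 8])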

Example designs of state-of-the-art LLMs

In this section, we dig deeper into the design and architecture of some of the newest LLMs at the time of writing this book.

GPT-3.5 and ChatGPT

The core of ChatGPT is a Transformer, a type of model architecture that uses self-attention mechanisms to weigh the relevance of different words in the input when making predictions. This allows the model to consider the full context of the input when generating a response.

The GPT model

ChatGPT is based on the GPT version of the Transformer. The GPT models are trained to predict the next word in a sequence of words, given all the previous words. They process text from left to right (unidirectional context), which makes them well-suited for text generation tasks. For instance, GPT-3, one of the versions of GPT on which ChatGPT is based, contains 175 billion parameters.
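
Because GPT-3 and GPT-3.5 are not openly downloadable, the following sketch demonstrates the same left-to-right, next-token generation procedure with the much smaller GPT-2 model from Hugging Face; the prompt and decoding settings are arbitrary choices for this example.

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Large language models are", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=20,                  # tokens are generated one at a time, left to right
    do_sample=True,                     # sample from the next-token distribution instead of greedy decoding
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))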

Two-step training process

The training process for ChatGPT is done in two steps: pretraining and fine-tuning...
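
As an illustration of the fine-tuning step only, here is a minimal sketch of supervised fine-tuning of a pretrained causal LM with the Hugging Face Trainer. It does not reproduce the actual ChatGPT procedure (which also involves RLHF), and the dataset, checkpoint, and hyperparameters are placeholders chosen for this sketch.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token   # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A small slice of a public corpus, used here purely as stand-in fine-tuning data.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
)
trainer.train()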

Summary

In this chapter, we’ve delved into the dynamic and complex world of state-of-the-art LLMs. We’ve discussed their remarkable generalization capabilities, making them versatile tools for a wide range of tasks. We also highlighted the crucial aspect of understanding complex contexts, where these models excel by grasping the nuances of language and the intricacies of various subject matters.

Additionally, we explored the paradigm of RLHF and how it is being employed to enhance LMs. RLHF leverages scalar feedback to improve LMs by mimicking human judgments, thereby helping to mitigate some of the common pitfalls encountered in NLP tasks.

We discussed technical requirements for working with these models, emphasizing the need for foundational knowledge in areas such as Transformers, reinforcement learning, and coding skills.

This chapter also touched on some prominent LMs such as GPT-4 and LLaMA, discussing their architecture, methods, and performance. We highlighted...

References

Authors (2)

Lior Gazit

Lior Gazit is a highly skilled Machine Learning professional with a proven track record of success in building and leading teams that drive business growth. He is an expert in Natural Language Processing and has successfully developed innovative Machine Learning pipelines and products. He holds a Master’s degree and has published in peer-reviewed journals and conferences. As a Senior Director of the Machine Learning group in the financial sector, and a Principal Machine Learning Advisor at an emerging startup, Lior is a respected leader in the industry, with a wealth of knowledge and experience to share. With much passion and inspiration, Lior is dedicated to using Machine Learning to drive positive change and growth in his organizations.

Meysam Ghaffari

Meysam Ghaffari is a Senior Data Scientist with a strong background in Natural Language Processing and Deep Learning. He currently works at MSKCC, where he specializes in developing and improving Machine Learning and NLP models for healthcare problems. He has over 9 years of experience in Machine Learning and over 4 years of experience in NLP and Deep Learning. He received his Ph.D. in Computer Science from Florida State University, his M.S. in Computer Science (Artificial Intelligence) from Isfahan University of Technology, and his B.S. in Computer Science from Iran University of Science and Technology. He also worked as a postdoctoral research associate at the University of Wisconsin-Madison before joining MSKCC.