Mastering NLP from Foundations to LLMs

Product type: Book
Published: April 2024
Publisher: Packt
ISBN-13: 9781804619186
Pages: 340
Edition: 1st
Authors (2): Lior Gazit, Meysam Ghaffari

Table of Contents (14 chapters)

Preface
1. Chapter 1: Navigating the NLP Landscape: A Comprehensive Introduction
2. Chapter 2: Mastering Linear Algebra, Probability, and Statistics for Machine Learning and NLP
3. Chapter 3: Unleashing Machine Learning Potentials in Natural Language Processing
4. Chapter 4: Streamlining Text Preprocessing Techniques for Optimal NLP Performance
5. Chapter 5: Empowering Text Classification: Leveraging Traditional Machine Learning Techniques
6. Chapter 6: Text Classification Reimagined: Delving Deep into Deep Learning Language Models
7. Chapter 7: Demystifying Large Language Models: Theory, Design, and Langchain Implementation
8. Chapter 8: Accessing the Power of Large Language Models: Advanced Setup and Integration with RAG
9. Chapter 9: Exploring the Frontiers: Advanced Applications and Innovations Driven by LLMs
10. Chapter 10: Riding the Wave: Analyzing Past, Present, and Future Trends Shaped by LLMs and AI
11. Chapter 11: Exclusive Industry Insights: Perspectives and Predictions from World Class Experts
12. Index
13. Other Books You May Enjoy

Different types of LLMs

LLMs are neural network architectures trained on large corpora of text data. The term "large" refers both to the number of parameters in these models and to the scale of their training data. Here are some examples of LLM types.

Transformer models

Transformer models have been at the forefront of the recent wave of LLMs. The Transformer is a neural network architecture introduced in the paper Attention Is All You Need by Vaswani et al. (2017); it uses self-attention mechanisms to weigh the relevance of every token in the input against every other token when making predictions. One of its most significant advantages, particularly for training LLMs, is its suitability for parallel computing.
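The core of self-attention fits in a few lines. The following is a minimal sketch of single-head scaled dot-product attention in NumPy; the dimensions, random weight initialization, and variable names are illustrative assumptions, not code from any particular library:

import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """X: (seq_len, d_model) token embeddings; W_*: learned projection matrices."""
    Q = X @ W_q                       # queries, (seq_len, d_k)
    K = X @ W_k                       # keys,    (seq_len, d_k)
    V = X @ W_v                       # values,  (seq_len, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # relevance of every token to every other token
    scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                # each output is a weighted mix of all values

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)  # (4, 8)

Note that every row of the output is computed from the same few matrix multiplications over the whole sequence at once; there is no loop over token positions, which is what makes the architecture so amenable to parallel hardware.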

In traditional RNN models, such as LSTMs and GRUs, the sequence of tokens (words, subwords, or characters in the text) must be processed sequentially. That's because each token's hidden state is computed from the hidden state of the token before it, so the work for step t cannot begin until step t-1 has finished, and training cannot be parallelized across the sequence.
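To make the contrast concrete, here is a minimal sketch of a vanilla RNN forward pass in NumPy; the simple tanh cell and the dimensions are simplifying assumptions (an LSTM or GRU adds gating, but the sequential dependency is the same):

import numpy as np

def rnn_forward(X, W_xh, W_hh, b):
    """X: (seq_len, d_in) token embeddings. Returns one hidden state per token."""
    h = np.zeros(W_hh.shape[0])
    states = []
    for x_t in X:                                    # strictly one step at a time
        h = np.tanh(x_t @ W_xh + h @ W_hh + b)       # h_t depends on h_{t-1}
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
seq_len, d_in, d_h = 5, 8, 16
X = rng.normal(size=(seq_len, d_in))
H = rnn_forward(X,
                rng.normal(size=(d_in, d_h)),
                rng.normal(size=(d_h, d_h)) * 0.1,
                np.zeros(d_h))
print(H.shape)  # (5, 16)

Unlike the self-attention sketch shown earlier, the loop here cannot be vectorized away: each iteration consumes the previous iteration's output, so the computation for a long sequence is inherently serial.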
