Unlocking Data with Generative AI and RAG

What Is Retrieval-Augmented Generation (RAG)?

The field of artificial intelligence (AI) is rapidly evolving. At the center of it all is generative AI. At the center of generative AI is retrieval-augmented generation (RAG). RAG is emerging as a significant addition to the generative AI toolkit, harnessing the intelligence and text generation capabilities of large language models (LLMs) and integrating them with a company’s internal data. This offers a method to enhance organizational operations significantly. This book focuses on numerous aspects of RAG, examining its role in augmenting the capabilities of LLMs and leveraging internal corporate data for strategic advantage.

As this book progresses, we will outline the potential of RAG in the enterprise, suggesting how it can make AI applications more responsive and smarter, aligning them with your organizational objectives. RAG is well-positioned to become a key facilitator of customized, efficient, and insightful AI solutions, bridging the gap between generative AI’s potential and your specific business needs. Our exploration of RAG will encourage you to unlock the full potential of your corporate data, paving the way for you to enter the era of AI-driven innovation.

In this chapter, we will cover the following topics:

  • The basics of RAG and how it combines LLMs with a company’s private data
  • The key advantages of RAG, such as improved accuracy, customization, and flexibility
  • The challenges and limitations of RAG, including data quality and computational complexity
  • Important RAG vocabulary terms, with an emphasis on vectors and embeddings
  • Real-world examples of RAG applications across various industries
  • How RAG differs from conventional generative AI and model fine-tuning
  • The overall architecture and stages of a RAG system from user and technical perspectives

By the end of this chapter, you will have a solid foundation in the core RAG concepts and understand the immense potential it offers organizations so that they can extract more value from their data and empower their LLMs. Let’s get started!

Understanding RAG – Basics and principles

Modern-day LLMs are impressive, but they have never seen your company’s private data (hopefully!). This means the ability of an LLM to help your company fully utilize its own data is very limited. This barrier gave rise to the concept of RAG, where you combine the power and capabilities of the LLM with the knowledge and data contained within your company’s internal data repositories. This is the primary motivation for using RAG: to make new data available to the LLM and significantly increase the value you can extract from that data.

Beyond internal data, RAG is also useful in cases where the LLM has not been trained on the data, even if it is public, such as the most recent research papers or articles about a topic that is strategic to your company. In both cases, we are talking about data that was not present during the training of the LLM. You can have the latest LLM trained on the most tokens ever, but if that data was not present for training, then the LLM will be at a disadvantage in helping you reach your full productivity.

Ultimately, this highlights the fact that, for most organizations, connecting new data to an LLM is a central need, and RAG is the most popular paradigm for meeting it. This book focuses on showing you how to set up a RAG application with your data, as well as how to get the most out of it in various situations. We intend to give you an in-depth understanding of RAG and its importance in leveraging an LLM within the context of a company’s private or specific data needs.

Now that you understand the basic motivations behind implementing RAG, let’s review some of the advantages of using it.

Advantages of RAG

Some of the potential advantages of using RAG include improved accuracy and relevance, customization, flexibility, and expanding the model’s knowledge beyond the training data. Let’s take a closer look:

  • Improved accuracy and relevance: RAG can significantly enhance the accuracy and relevance of responses that are generated by LLMs. RAG fetches and incorporates specific information from a database or dataset, typically in real time, and ensures that the output is based on both the model’s pre-existing knowledge and the most current and relevant data that you are providing directly.
  • Customization: RAG allows you to customize and adapt the model’s knowledge to your specific domain or use case. By pointing RAG to databases or datasets directly relevant to your application, you can tailor the model’s outputs so that they align closely with the information and style that matters most for your specific needs. This customization enables the model to provide more targeted and useful responses.
  • Flexibility: RAG provides flexibility in terms of the data sources that the model can access. You can apply RAG to various structured and unstructured data, including databases, web pages, documents, and more. This flexibility allows you to leverage diverse information sources and combine them in novel ways to enhance the model’s capabilities. Additionally, you can update or swap out the data sources as needed, enabling the model to adapt to changing information landscapes.
  • Expanding model knowledge beyond training data: LLMs are limited by the scope of their training data. RAG overcomes this limitation by enabling models to access and utilize information that was not included in their initial training sets. This effectively expands the knowledge base of the model without the need for retraining, making LLMs more versatile and adaptable to new domains or rapidly evolving topics.
  • Reducing hallucinations: The LLM is a key component within the RAG system. LLMs have the potential to provide wrong information, also known as hallucinations. These hallucinations can manifest in several ways, such as made-up facts, incorrect facts, or even nonsensical verbiage. Often, the hallucination is worded in a way that can be very convincing, making it difficult to identify. A well-designed RAG application can reduce hallucinations far more effectively than using an LLM directly.

With that, we’ve covered the key advantages of implementing RAG in your organization. Next, let’s discuss some of the challenges you might face.

Challenges of RAG

There are some challenges to using RAG as well, which include dependency on the quality of the internal data, the need for data manipulation and cleaning, computational overhead, more complex integrations, and the potential for information overload. Let’s review these challenges and gain a better understanding of how they impact RAG pipelines and what can be done about them:

  • Dependency on data quality: When talking about how data can impact an AI model, the saying in data science circles is garbage in, garbage out. This means that if you give a model bad data, it will give you bad results. RAG is no different. The effectiveness of RAG is directly tied to the quality of the data it retrieves. If the underlying database or dataset contains outdated, biased, or inaccurate information, the outputs generated by RAG will likely suffer from the same issues.
  • Need for data manipulation and cleaning: Data buried in the recesses of a company often has a lot of value, but it is rarely in good, accessible shape. For example, data from PDF-based customer statements needs a lot of massaging before it can be put into a format that is useful to a RAG pipeline.
  • Computational overhead: A RAG pipeline introduces a host of new computational steps into the response generation process, including data retrieval, processing, and integration. LLMs are getting faster every day, but even the fastest response can be more than a second, and some can take several seconds. If you combine that with other data processing steps, and possibly multiple LLM calls, the result can be a very significant increase in the time it takes to receive a response. This all leads to increased computational overhead, affecting the efficiency and scalability of the entire system. As with any other IT initiative, an organization must balance the benefits of enhanced accuracy and customization against the resource requirements and potential latency introduced by these additional processes.
  • Data storage explosion; complexity in integration and maintenance: Traditionally, your data resides in a data source that’s queried in various ways to be made available to your internal and external systems. But with RAG, your data resides in multiple forms and locations, such as vectors in a vector database, that represent the same data, but in a different format. Add in the complexity of connecting these various data sources to LLMs and relevant technical mechanisms such as vector searches and you have a significant increase in complexity. This increased complexity can be resource-intensive. Maintaining this integration over time, especially as data sources evolve or expand, adds even more complexity and cost. Organizations need to invest in technical expertise and infrastructure to leverage RAG capabilities effectively while accounting for the rapid increase in complexities these systems bring with them.
  • Potential for information overload: RAG-based systems can pull in too much information. It is just as important to implement mechanisms to address this issue as it is to handle times when not enough relevant information is found. Determining the relevance and importance of retrieved information to be included in the final output requires sophisticated filtering and ranking mechanisms. Without these, the quality of the generated content could be compromised by an excess of unnecessary or marginally relevant details.
  • Hallucinations: While we listed reducing hallucinations as an advantage of using RAG, hallucinations pose one of the biggest challenges to RAG pipelines if they are not dealt with properly. A well-designed RAG application must take measures to identify and remove hallucinations and undergo significant testing before the final output text is provided to the end user.
  • High levels of complexity within RAG components: A typical RAG application tends to have a high level of complexity, with many components that need to be optimized for the overall application to function properly. The components can interact with each other in several ways, often with many more steps than the basic RAG pipeline you start with. Every component within the pipeline needs extensive testing and iteration, including your prompt design and engineering, the LLMs you use and how you use them, the various retrieval algorithms and their parameters, the interface you use to access your RAG application, and numerous other aspects that you will need to add over the course of your development.

In this section, we explored the key advantages of implementing RAG in your organization, including improved accuracy and relevance, customization, flexibility, and the ability to expand the model’s knowledge beyond its initial training data. We also discussed some of the challenges you might face when deploying RAG, such as dependency on data quality, the need for data manipulation and cleaning, increased computational overhead, complexity in integration and maintenance, and the potential for information overload. Understanding these benefits and challenges provides a foundation for diving deeper into the core concepts and vocabulary used in RAG systems.

To understand the approaches we will introduce, you will need a good understanding of the vocabulary used to discuss these approaches. In the following section, we will familiarize ourselves with some of the foundational concepts so that you can better understand the various components and techniques involved in building effective RAG pipelines.

RAG vocabulary

Now is as good a time as any to review some vocabulary that should help you become familiar with the various concepts in RAG. In the following subsections, we will familiarize ourselves with some of this vocabulary, including LLMs, prompting concepts, inference, context windows, fine-tuning approaches, vector databases, and vectors/embeddings. This is not an exhaustive list, but understanding these core concepts should help you understand everything else we will teach you about RAG in a more effective way.

LLM

Most of this book will deal with LLMs. LLMs are generative AI technologies that focus on generating text. We will keep things simple by concentrating on the type of model that most RAG pipelines use, the LLM. However, we would like to clarify that while we will focus primarily on LLMs, RAG can also be applied to other types of generative models, such as those for images, audio, and videos. We will cover these other types of models and how they are used in RAG in Chapter 14.

Some popular examples of LLMs are the OpenAI ChatGPT models, the Meta Llama models, Google’s Gemini models, and Anthropic’s Claude models.

Prompting, prompt design, and prompt engineering

These terms are sometimes used interchangeably, but technically, while they all have to do with prompting, they do have different meanings:

  • Prompting is the act of sending a query or prompt to an LLM.
  • Prompt design refers to the strategy you implement to design the prompt you will send to the LLM. Many different prompt design strategies work in different scenarios. We will review many of these in Chapter 13.
  • Prompt engineering focuses more on the technical aspects surrounding the prompt that you use to improve the outputs from the LLM. For example, you may break up a complex query into two or three different LLM interactions, engineering it better to achieve superior results. We will also review prompt engineering in Chapter 13.
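The distinction above can be sketched in a few lines of Python. This is a hypothetical illustration, not a real LLM integration: the `generate()` stub stands in for an actual API call, and the template text is made up for the example.

```python
# Hypothetical sketch: a reusable template (prompt design) and a complex
# query split into two smaller LLM interactions (prompt engineering).
# generate() is a stub standing in for a real LLM API call.

def generate(prompt: str) -> str:
    # Placeholder for an actual LLM call
    return f"<LLM response to: {prompt[:40]}...>"

# Prompt design: a reusable template that frames the task for the model
TEMPLATE = (
    "You are a helpful assistant.\n"
    "Answer the question using only the context below.\n\n"
    "Context: {context}\n\nQuestion: {question}"
)

# Prompt engineering: break one complex query into two LLM interactions
step1 = generate(TEMPLATE.format(context="Q3 revenue grew 12%.",
                                 question="Summarize our Q3 results."))
step2 = generate(f"Using this summary, draft a customer announcement:\n{step1}")
print(step2)
```

The template captures the design decision (how the task is framed); splitting the work into `step1` and `step2` is the engineering decision (how the interactions are structured).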

LangChain and LlamaIndex

This book will focus on using LangChain as the framework for building our RAG pipelines. LangChain is an open source framework that supports not just RAG but any application that uses LLMs within a pipeline approach. With over 15 million monthly downloads, LangChain is the most popular generative AI development framework. It supports RAG particularly well, providing a modular and flexible set of tools that make RAG development significantly more efficient than working without a framework.

While LangChain is currently the most popular framework for developing RAG pipelines, LlamaIndex is a leading alternative to LangChain, with similar capabilities in general. LlamaIndex is known for its focus on search and retrieval tasks and may be a good option if you require advanced search or need to handle large datasets.

Many other options focus on various niches. Once you are familiar with building RAG pipelines, be sure to look at some of the other options to see whether another framework works better for your particular project.
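Whichever framework you choose, the underlying flow is the same: retrieve relevant data, augment the prompt with it, and generate a response. The following framework-free sketch makes that flow concrete; the documents, the keyword-overlap retriever, and the `generate()` stub are all toy stand-ins (real systems use vector search and a real LLM call).

```python
# Toy RAG pipeline illustrating the retrieve -> augment -> generate flow
# that frameworks like LangChain wrap in reusable components. Retrieval
# here is naive keyword overlap; real systems use vector similarity search.

DOCS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm, Monday through Friday.",
    "Premium subscribers get priority email support.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Score each document by how many query words it shares
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(prompt: str) -> str:
    return f"<LLM answer based on a prompt of {len(prompt)} chars>"  # stub

query = "What is the refund policy?"
context = "\n".join(retrieve(query, DOCS))
prompt = f"Context:\n{context}\n\nQuestion: {query}"
print(generate(prompt))
```

A framework replaces each of these hand-rolled pieces (document store, retriever, prompt template, model call) with tested, swappable components.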

Inference

We will use the term inference from time to time. Generally, this refers to the process of an LLM generating outputs or predictions from given inputs using a pre-trained language model. For example, when you ask ChatGPT a question, the process it goes through to provide you with a response is called inference.

Context window

A context window, in the context of LLMs, refers to the maximum number of tokens (words, sub-words, or characters) that the model can process in a single pass. It determines the amount of text the model can see or attend to at once when making predictions or generating responses.

The context window size is a key parameter of the model architecture and is typically fixed during model training. It directly relates to the input size of the model as it sets an upper limit on the number of tokens that can be fed into the model at a time.

For example, if a model has a context window size of 4,096 tokens, it means that the model can process and generate sequences of up to 4,096 tokens. When processing longer texts, such as documents or conversations, the input needs to be divided into smaller segments that fit within the context window. This is often done using techniques such as sliding windows or truncation.
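The sliding-window technique mentioned above can be sketched in a few lines. This is a minimal illustration with fake word-like tokens; a real pipeline would count tokens with the model's own tokenizer.

```python
# Minimal sliding-window sketch: split a token sequence into overlapping
# chunks that each fit within a (toy) context window. Tokens are faked as
# short strings here; a real pipeline would use the model's tokenizer.

def sliding_windows(tokens, window_size, overlap):
    step = window_size - overlap  # how far each window advances
    return [tokens[i:i + window_size] for i in range(0, len(tokens), step)]

tokens = [f"t{i}" for i in range(10)]
windows = sliding_windows(tokens, window_size=4, overlap=1)
for w in windows:
    print(w)
```

The overlap carries a little context from one chunk into the next, which helps preserve meaning that would otherwise be cut at a chunk boundary.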

The size of the context window has implications for the model’s ability to understand and maintain long-range dependencies and context. Models with larger context windows can capture and utilize more contextual information when generating responses, which can lead to more coherent and contextually relevant outputs. However, increasing the context window size also increases the computational resources required to train and run the model.

In the context of RAG, the context window size is essential because it determines how much information from the retrieved documents can be effectively utilized by the model when generating the final response. Recent advancements in language models have led to the development of models with significantly larger context windows, enabling them to process and retain more information from the retrieved sources. See Table 1.1 to see the context windows of many popular LLMs, both closed and open sourced:

LLM                                  Context Window (Tokens)
ChatGPT-3.5 Turbo 0613 (OpenAI)      4,096
Llama 2 (Meta)                       4,096
Llama 3 (Meta)                       8,000
ChatGPT-4 (OpenAI)                   8,192
ChatGPT-3.5 Turbo 0125 (OpenAI)      16,385
ChatGPT-4.0-32k (OpenAI)             32,000
Mistral (Mistral AI)                 32,000
Mixtral (Mistral AI)                 32,000
DBRX (Databricks)                    32,000
Gemini 1.0 Pro (Google)              32,000
ChatGPT-4.0 Turbo (OpenAI)           128,000
ChatGPT-4o (OpenAI)                  128,000
Claude 2.1 (Anthropic)               200,000
Claude 3 (Anthropic)                 200,000
Gemini 1.5 Pro (Google)              1,000,000

Table 1.1 – Different context windows for LLMs

Figure 1.1, which is based on Table 1.1, shows that Gemini 1.5 Pro’s context window is far larger than the others.

Figure 1.1 – Different context windows for LLMs

Note that Figure 1.1 shows models that have generally aged from right to left, meaning the older models tended to have smaller context windows, with the newest models having larger context windows. This trend is likely to continue, pushing the typical context window larger as time progresses.

Fine-tuning – full-model fine-tuning (FMFT) and parameter-efficient fine-tuning (PEFT)

FMFT is where you take a foundation model and train it further to gain new capabilities. You could simply give it new knowledge for a specific domain, or you could give it a skill, such as being a conversational chatbot. FMFT updates all the parameters and biases in the model.

PEFT, on the other hand, is a type of fine-tuning where you focus only on specific parts of the parameters or biases when you fine-tune the model, but with a similar goal as general fine-tuning. The latest research in this area shows that you can achieve similar results to FMFT with far less cost, time commitment, and data.

While this book does not focus on fine-tuning, it is a very valid strategy to use a model fine-tuned on your data, giving it more knowledge of your domain or more of a voice from your domain. For example, you could train it to talk more like a scientist than a generic foundation model, if you’re using this in a scientific field. Alternatively, if you are developing in a legal field, you may want it to sound more like a lawyer.

Fine-tuning also helps the LLM to understand your company’s data better, making it better at generating an effective response during the RAG process. For example, if you have a scientific company, you might fine-tune a model with scientific information and use it for a RAG application that summarizes your research. This may improve your RAG application’s output (the summaries of your research) because your fine-tuned model understands your data better and can provide a more effective summary.

Vector store or vector database?

Both! All vector databases are vector stores, but not all vector stores are vector databases. OK, while you get out your chalkboard to draw a Venn diagram, I will continue to explain this statement.

There are ways to store vectors that are not full databases. They are simply storage devices for vectors. So, to encompass all possible ways to store vectors, LangChain calls them all vector stores. Let’s do the same! Just know that not all the vector stores that LangChain connects with are officially considered vector databases, but in general, most of them are and many people refer to all of them as vector databases, even when they are not technically full databases from a functionality standpoint. Phew – glad we cleared that up!

Vectors, vectors, vectors!

Vectors are mathematical representations of your data. They are often referred to as embeddings when talking specifically about natural language processing (NLP) and LLMs. Vectors are one of the most important concepts to understand, and many different parts of a RAG pipeline utilize them.

We just covered many key vocabulary terms that will be important for you to understand the rest of this book. Many of these concepts will be expanded upon in future chapters. In the next section, we will continue to discuss vectors in further depth. And beyond that, we will spend Chapters 7 and 8 going over vectors and how they are used to find similar content.

Vectors

It could be argued that understanding vectors and all the ways they are used in RAG is the most important part of this entire book. As mentioned previously, vectors are simply the mathematical representations of your external data, and they are often referred to as embeddings. These representations capture semantic information in a format that can be processed by algorithms, facilitating tasks such as similarity search, which is a crucial step in the RAG process.

Vectors typically have a specific dimension based on how many numbers are represented by them. For example, this is a four-dimensional vector:

[0.123, 0.321, 0.312, 0.231]

If you didn’t know we were talking about vectors and you saw this in Python code, you might recognize it as a list of four floating-point numbers, and you wouldn’t be too far off. However, when working with vectors in Python, you want to treat them as NumPy arrays rather than lists. NumPy arrays are generally more machine-learning-friendly because they are optimized to be processed much faster and more efficiently than Python lists, and they are broadly recognized as the de facto representation of embeddings across machine learning packages such as SciPy, pandas, scikit-learn, TensorFlow, Keras, PyTorch, and many others. NumPy also enables you to perform vectorized math directly on the array, such as element-wise operations, without having to write loops or other approaches you might need with a different type of sequence.
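A quick sketch of the difference, using the four-dimensional vector from above (the values are the illustrative ones from the text, not real embeddings):

```python
# Element-wise vector math with NumPy vs. a plain Python list. With a
# list you would need a loop or comprehension; a NumPy array vectorizes it.
import numpy as np

as_list = [0.123, 0.321, 0.312, 0.231]   # plain Python list
as_array = np.array(as_list)             # NumPy array (4 dimensions)

doubled = as_array * 2                   # element-wise, no loop needed
print(doubled)
print(as_array.shape)                    # (4,) -- a 4-dimensional vector
```

Trying `as_list * 2` on the plain list would instead concatenate the list with itself (an eight-element list), which is exactly the kind of surprise the NumPy array avoids.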

In real-world applications, vectors often have hundreds or thousands of dimensions, where dimensionality refers to the number of floating-point values in the vector. Higher dimensionality can capture more detailed semantic information, which is crucial for accurately matching query inputs with relevant documents or data in RAG applications.
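That matching is typically done with a similarity measure such as cosine similarity, which we will return to in Chapter 8. Here is a minimal sketch using tiny three-dimensional vectors as stand-ins for real, high-dimensional embeddings; the values are made up for illustration.

```python
# Cosine similarity between embedding vectors: values near 1.0 mean the
# vectors point the same way (semantically similar); near 0.0 means
# orthogonal (unrelated). Toy 3-dimensional vectors stand in for the
# hundreds or thousands of dimensions real embeddings have.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([1.0, 0.0, 1.0])
doc_a = np.array([2.0, 0.0, 2.0])   # same direction as the query
doc_b = np.array([0.0, 1.0, 0.0])   # orthogonal to the query

print(cosine_similarity(query, doc_a))  # ~1.0: very similar
print(cosine_similarity(query, doc_b))  # 0.0: unrelated
```

Note that `doc_a` scores ~1.0 even though it is twice as long as the query: cosine similarity compares direction, not magnitude, which is why it works well for embeddings.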

In Chapter 7, we will cover the key role vectors and vector databases play in RAG implementation. Then, in Chapter 8, we will dive more into the concept of similarity searches, which utilize vectors to search much faster and more efficiently. These are key concepts that will help you gain a much deeper understanding of how to better implement a RAG pipeline.

Understanding vectors can be a crucial underlying concept to understand how to implement RAG, but how is RAG used in practical applications in the enterprise? We will discuss these practical AI applications of RAG in the next section.

Implementing RAG in AI applications

RAG is rapidly becoming a cornerstone of generative AI platforms in the corporate world. RAG combines the power of retrieving internal or new data with generative language models to enhance the quality and relevance of the generated text. This technique can be particularly useful for companies across various industries to improve their products, services, and operational efficiencies. The following are some examples of how RAG can be used:

  • Customer support and chatbots: These can exist without RAG, but when integrated with RAG, they can be connected to past customer interactions, FAQs, support documents, and anything else specific to that customer.
  • Technical support: With better access to customer history and information, RAG-enhanced chatbots can provide a significant improvement to current technical support chatbots.
  • Automated reporting: RAG can assist in creating initial drafts or summarizing existing articles, research papers, and other types of unstructured data into more digestible formats.
  • E-commerce support: For e-commerce companies, RAG can help generate dynamic product descriptions and user content, as well as make better product recommendations.
  • Utilizing knowledge bases: RAG improves the searchability and utility of both internal and general knowledge bases by generating summaries, providing direct answers to queries, and retrieving relevant information across various domains such as legal, compliance, research, medical, academia, patents, and technical documents.
  • Innovation scouting: This is like searching general knowledge bases but with a focus on innovation. With this, companies can use RAG to scan and summarize information from quality sources to identify trends and potential areas for innovations that are relevant to that company’s specialization.
  • Training and education: RAG can be used by education organizations and corporate training programs to generate or customize learning materials based on specific needs and knowledge levels of the learners. With RAG, a much deeper level of internal knowledge from the organization can be incorporated into the educational curriculum in very customized ways to the individual or role.

These are just a few of the ways organizations are using RAG right now to improve their operations. We will dive into each of these areas in more depth in Chapter 3, helping you understand how you can implement all these game-changing initiatives in multiple places in your company.

You might be wondering, “If I am using an LLM such as ChatGPT to answer my questions in my company, does that mean my company is using RAG already?”

The answer is “No.”

If you just log in to ChatGPT and ask questions, that is not the same as implementing RAG. Both ChatGPT and RAG are forms of generative AI, and they are sometimes used together, but they are two different concepts. In the next section, we will discuss the differences between generative AI and RAG.

Comparing RAG with conventional generative AI

Conventional generative AI has already shown to be a revolutionary change for companies, helping their employees reach new levels of productivity. LLMs such as ChatGPT are assisting users with a rapidly growing list of applications that include writing business plans, writing and improving code, writing marketing copy, and even providing healthier recipes for a specific diet. Ultimately, much of what users are doing is getting done faster.

However, conventional generative AI does not know what it does not know. And that includes most of the internal data in your company. Can you imagine what you could do with all the benefits mentioned previously, but combined with all the data within your company – about everything your company has ever done, about your customers and all their interactions, or about all your products and services combined with a knowledge of what a specific customer’s needs are? You do not have to imagine it – that is what RAG does!

Before RAG, most of the services you saw that connected customers or employees with the data resources of the company were just scratching the surface of what is possible compared to if they could access all the data in the company. With the advent of RAG and generative AI in general, corporations are on the precipice of something really, really big.

Another area you might confuse RAG with is the concept of fine-tuning a model. Let’s discuss what the differences are between these types of approaches.

Comparing RAG with model fine-tuning

LLMs can be adapted to your data in two ways:

  • Fine-tuning: With fine-tuning, you are adjusting the weights and/or biases that define the model’s intelligence based on new training data. This directly impacts the model, permanently changing how it will interact with new inputs.
  • Input/prompts: Here, you leave the model unchanged and instead introduce new knowledge through the prompt/input, which the LLM can then act upon.

Why not use fine-tuning in all situations? Once you have introduced the new knowledge, the LLM will always have it! After all, that is how the model was created in the first place: by being trained with data, right? That sounds right in theory, but in practice, fine-tuning has proven more reliable for teaching a model specialized tasks (such as how to converse in a certain way) and less reliable for factual recall.

The reason is complicated, but in general, a model’s knowledge of facts is like a human’s long-term memory. If you memorize a long passage from a speech or book and then try to recall it a few months later, you will likely still understand the context of the information, but you may forget specific details. On the other hand, adding knowledge through the input of the model is like our short-term memory, where the facts, details, and even the order of wording are all very fresh and available for recall. It is this latter scenario that lends itself better in a situation where you want successful factual recall. And given how much more expensive fine-tuning can be, this makes it that much more important to consider RAG.

There is a trade-off, though. While there are generally ways to feed all the data you have to a model for fine-tuning, inputs are limited by the context window of the model. This is an area that is being actively addressed. For example, early versions of ChatGPT-3.5 had a 4,096-token context window, the equivalent of about five pages of text. When ChatGPT-4 was released, the context window was expanded to 8,192 tokens (10 pages), and there was a ChatGPT-4-32k version with a context window of 32,768 tokens (40 pages). This issue is so important that OpenAI included the context window size in the name of the model. That is a strong indicator of how important the context window is!

Interesting fact!

What about the latest Gemini 1.5 model? It has a 1 million token context window, or over 1,000 pages!

As context windows expand, another issue arises. Early models with expanded context windows were shown to lose many details, especially in the middle of the text. This issue is also being addressed. Gemini 1.5, with its 1 million token context window, has performed well in so-called needle-in-a-haystack tests, recalling details from anywhere in the text it takes as input. Unfortunately, it did not perform as well in multiple-needles-in-a-haystack tests. Expect more effort in this area as context windows grow larger, and keep this in mind if you need to work with large amounts of text at a time.

Note

It is important to note that token count differs from word count, as tokens include punctuation, symbols, numbers, and other text representations. How a compound term such as ice cream is tokenized depends on the tokenization scheme and can vary across LLMs, but most well-known LLMs (such as ChatGPT and Gemini) treat ice cream as two tokens. In some NLP contexts, you might argue it should be one token, on the basis that a token should represent a useful semantic unit for processing, but that is not how these models work.
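The gap between token count and word count can be illustrated with a few lines of Python. This is not a real LLM tokenizer (those use byte-pair encoding and would split text differently); it simply counts words and punctuation marks separately, which is enough to show why the two counts diverge:

```python
import re

def rough_token_count(text: str) -> int:
    # Very rough illustration only: count each word and each
    # punctuation mark as its own token. Real BPE tokenizers
    # used by LLMs split text quite differently.
    return len(re.findall(r"\w+|[^\w\s]", text))

print(rough_token_count("Hello, world!"))  # 4 tokens, but only 2 words
print(rough_token_count("ice cream"))      # 2 tokens
```

Even this crude scheme shows that punctuation inflates token counts well past word counts, which matters when budgeting a context window.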

Fine-tuning can also be quite expensive, depending on your environment and available resources. In recent years, fine-tuning costs have come down substantially thanks to new techniques such as representation fine-tuning, LoRA-related methods, and quantization. Even so, in many RAG development efforts, fine-tuning represents an additional cost on top of an already expensive undertaking.

Ultimately, when deciding between RAG and fine-tuning, consider your specific use case and requirements. RAG is generally superior for retrieving factual information that is not present in the LLM’s training data or that is private; it allows you to dynamically integrate external knowledge without modifying the model’s weights. Fine-tuning, on the other hand, is more suitable for teaching the model specialized tasks or adapting it to a specific domain. Keep the limitations of context window sizes in mind with RAG, and the potential for overfitting to a specific dataset in mind with fine-tuning.

Now that we have defined what RAG is, particularly when compared to other approaches that use generative AI, let’s review the general architecture of RAG systems.

The architecture of RAG systems

The following are the stages of a RAG process from a user’s perspective:

  1. A user enters a query/question.
  2. The application pauses briefly while it checks the data it has access to, determining what is most relevant to the question.
  3. The application provides a response that focuses on answering the user’s question, using the data that was provided to it through the RAG pipeline.

From a technical standpoint, this captures two of the stages you will code: retrieval and generation. There is a third stage, known as indexing, which can be (and often is) executed before the user enters the query. During indexing, you turn supporting data into vectors, store them in a vector database, and typically optimize the search functionality so that the retrieval step is as fast and effective as possible.

Once the user passes their query into the system, the following steps occur:

  1. The user query is vectorized.
  2. The vectorized query is passed to a vector search, which retrieves the most relevant data from a vector database representing your external data.
  3. The vector search returns the most relevant results, along with unique keys referencing the original content those vectors represent.
  4. The unique keys are used to pull out the original data associated with those vectors, often as a batch of multiple documents.
  5. The original data might be filtered or post-processed, and is then typically passed to an LLM, depending on what you expect the RAG process to do.
  6. The LLM is given a prompt that generally says something like this: "You are a helpful assistant for question-answering tasks. Take the following question (the user query) and use this helpful information (the data retrieved in the similarity search) to answer it. If you don't know the answer based on the information provided, just say you don't know."
  7. The LLM processes that prompt and provides a response based on the external data you provided.

Depending on the scope of the RAG system, these steps can be done in real time, or steps such as indexing can be done before the query so that the data is ready to be searched when the time comes.
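The steps above can be sketched end to end in a few lines of Python. Everything here is a toy stand-in: the corpus, the vectors, and the embed function are hand-made assumptions, and a real system would use an embedding model, a vector database, and an actual LLM call. The sketch stops at building the prompt (step 6); step 7 would send it to an LLM:

```python
import math

# Toy index: key -> (vector, original text). In practice, the vectors
# come from an embedding model and live in a vector database.
INDEX = {
    "doc1": ([1.0, 0.0], "Our refund policy allows returns within 30 days."),
    "doc2": ([0.0, 1.0], "The office is closed on public holidays."),
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def embed(query: str):
    # Hypothetical stand-in for an embedding model: refund-related
    # queries land near doc1's vector, everything else near doc2's.
    return [1.0, 0.1] if "refund" in query.lower() else [0.1, 1.0]

def retrieve(query: str, k: int = 1):
    # Steps 1-4: vectorize the query, rank by similarity, return the
    # original text behind the top-k vectors.
    qv = embed(query)
    ranked = sorted(INDEX.items(), key=lambda kv: cosine(qv, kv[1][0]),
                    reverse=True)
    return [text for _, (_, text) in ranked[:k]]

def build_prompt(query: str) -> str:
    # Steps 5-6: stuff the retrieved data into the generation prompt.
    context = "\n".join(retrieve(query))
    return (
        "You are a helpful assistant for question-answering tasks.\n"
        f"Question: {query}\nContext: {context}\n"
        "If you don't know the answer based on the context, say you don't know."
    )

print(build_prompt("What is the refund policy?"))
```

Running this prints a prompt whose context contains the refund-policy document, showing how the retrieval stage grounds the generation stage in external data.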

As mentioned previously, we can break these aspects down into three main stages (see Figure 1.2):

  • Indexing
  • Retrieval
  • Generation

Figure 1.2 – The three stages of RAG

As described previously, these three stages make up the overall user pattern and design of a general RAG system. In Chapter 4, we will dive much deeper into understanding these stages. This will help you tie the concepts of this coding paradigm with their actual implementation.

Summary

In this chapter, we explored RAG and its ability to enhance the capabilities of LLMs by integrating them with an organization’s internal data. We learned how RAG combines the power of LLMs with a company’s private data, enabling the model to utilize information it was not originally trained on, making the LLM’s outputs more relevant and valuable for the specific organization. We also discussed the advantages of RAG, such as improved accuracy and relevance, customization to a company’s domain, flexibility in data sources used, and expansion of the model’s knowledge beyond its original training data. Additionally, we examined the challenges and limitations of RAG, including dependency on data quality, the need for data cleaning, added computational overhead and complexity, and the potential for information overload if not properly filtered.

Midway through this chapter, we defined key vocabulary terms and emphasized the critical importance of understanding vectors. We explored examples of how RAG is being implemented across industries for various applications and compared RAG to conventional generative AI and model fine-tuning.

Finally, we outlined the architecture and stages of a typical RAG pipeline from both the user’s perspective and a technical standpoint while covering the indexing, retrieval, and generation stages of the RAG pipeline. In the next chapter, we will walk through these stages using an actual coding example.


Key benefits

  • Optimize data retrieval and generation using vector databases
  • Boost decision-making and automate workflows with AI agents
  • Overcome common challenges in implementing real-world RAG systems
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

Generative AI is helping organizations tap into their data in new ways, with retrieval-augmented generation (RAG) combining the strengths of large language models (LLMs) with internal data for more intelligent and relevant AI applications. The author harnesses his decade of ML experience in this book to equip you with the strategic insights and technical expertise needed when using RAG to drive transformative outcomes. The book explores RAG’s role in enhancing organizational operations by blending theoretical foundations with practical techniques. You’ll work with detailed coding examples using tools such as LangChain and Chroma’s vector database to gain hands-on experience in integrating RAG into AI systems. The chapters contain real-world case studies and sample applications that highlight RAG’s diverse use cases, from search engines to chatbots. You’ll learn proven methods for managing vector databases, optimizing data retrieval, effective prompt engineering, and quantitatively evaluating performance. The book also takes you through advanced integrations of RAG with cutting-edge AI agents and emerging non-LLM technologies. By the end of this book, you’ll be able to successfully deploy RAG in business settings, address common challenges, and push the boundaries of what’s possible with this revolutionary AI technique.

Who is this book for?

This book is for AI researchers, data scientists, software developers, and business analysts looking to leverage RAG and generative AI to enhance data retrieval, improve AI accuracy, and drive innovation. It is particularly suited for anyone with a foundational understanding of AI who seeks practical, hands-on learning. The book offers real-world coding examples and strategies for implementing RAG effectively, making it accessible to both technical and non-technical audiences. A basic understanding of Python and Jupyter Notebooks is required.

What you will learn

  • Understand RAG principles and their significance in generative AI
  • Integrate LLMs with internal data for enhanced operations
  • Master vectorization, vector databases, and vector search techniques
  • Develop skills in prompt engineering specific to RAG and design for precise AI responses
  • Familiarize yourself with AI agents' roles in facilitating sophisticated RAG applications
  • Overcome scalability, data quality, and integration issues
  • Discover strategies for optimizing data retrieval and AI interpretability

Product Details

Publication date: Sep 27, 2024
Length: 346 pages
Edition: 1st
Language: English
ISBN-13: 9781835887912





Table of Contents

Part 1 – Introduction to Retrieval-Augmented Generation (RAG)
Chapter 1: What Is Retrieval-Augmented Generation (RAG)
Chapter 2: Code Lab – An Entire RAG Pipeline
Chapter 3: Practical Applications of RAG
Chapter 4: Components of a RAG System
Chapter 5: Managing Security in RAG Applications
Part 2 – Components of RAG
Chapter 6: Interfacing with RAG and Gradio
Chapter 7: The Key Role Vectors and Vector Stores Play in RAG
Chapter 8: Similarity Searching with Vectors
Chapter 9: Evaluating RAG Quantitatively and with Visualizations
Chapter 10: Key RAG Components in LangChain
Chapter 11: Using LangChain to Get More from RAG
Part 3 – Implementing Advanced RAG
Chapter 12: Combining RAG with the Power of AI Agents and LangGraph
Chapter 13: Using Prompt Engineering to Improve RAG Efforts
Chapter 14: Advanced RAG-Related Techniques for Improving Results
Index
Other Books You May Enjoy

