Getting Started with LangChain

In this book, we’ll write a lot of code and test many different integrations and tools. Therefore, in this chapter, we’ll give basic setup instructions for all the required libraries, using the most common dependency management tools: Docker, Conda, pip, and Poetry. This will ensure that you can run all the practical examples in this book.

Next, we’ll go through the model integrations we can use, such as OpenAI’s ChatGPT, models on Hugging Face, Jina AI, and others. We’ll introduce, set up, and work with a few of these providers in turn, and for each of them, we’ll show how to get an API key.

Finally, as a practical example, we’ll go through a real-world application: an LLM app that could help customer service agents, one of the main areas where LLMs could prove to be game-changing. This will give us a bit more context around using LangChain, and we can introduce tips and tricks...

How to set up the dependencies for this book

We’ll assume at least a basic familiarity with Python, Jupyter, and environments in this book, but let’s quickly walk through this together. You can safely skip this section if you are confident about your setup or if you plan to install libraries separately for each chapter or application.

Please make sure you have Python version 3.10 or higher installed. You can install it from python.org or your platform’s package manager. If you use Docker, Conda, or Poetry, an appropriate Python version should be installed automatically as part of the instructions. You should also install Jupyter Notebook or JupyterLab to run the example notebooks interactively.
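
If you’re unsure which interpreter a notebook is running on, a quick check at the top of a notebook can save debugging later. A minimal sketch:

```python
import sys

# The examples in this book assume Python 3.10 or higher.
assert sys.version_info >= (3, 10), (
    f"Python 3.10+ required, found {sys.version.split()[0]}"
)
print(sys.version)
```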

Environment management tools like Docker, Conda, pip, and Poetry help create reproducible Python environments by installing dependencies and isolating projects from one another. This table gives an overview of these options for managing dependencies:

...
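
Whichever tool you pick, installing the core library follows the same pattern. As a minimal sketch (package names and channels as available at the time of writing; versions are unpinned here, so pin them for reproducibility):

```bash
# pip (ideally inside a virtual environment)
pip install langchain

# Conda (from the conda-forge channel)
conda install -c conda-forge langchain

# Poetry (inside an initialized project)
poetry add langchain

# With Docker, you would typically wrap one of these commands in a Dockerfile.
```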

Exploring API model integrations

Before properly starting with generative AI, we need to set up access to models such as LLMs or text-to-image models so that we can integrate them into our applications. As discussed in Chapter 1, What Is Generative AI?, there are various LLMs from tech giants, such as GPT-4 by OpenAI, BERT and PaLM-2 by Google, LLaMA by Meta, and many more.

For LLMs, OpenAI, Hugging Face, Cohere, Anthropic, Azure, Google Cloud Platform’s Vertex AI (PaLM-2), and Jina AI are among the many providers supported in LangChain; however, this list is growing all the time. You can check out the full list of supported integrations for LLMs at https://integrations.langchain.com/llms.

Here’s a screenshot of this page as of the time of writing (October 2023), which includes both cloud providers and interfaces for local models:

Figure 3.1: LLM integrations in LangChain

LangChain implements three different interfaces – we can use chat models, LLMs...
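
For example, once you have an OpenAI API key, calling a chat model takes only a few lines. This is a minimal sketch: the model name and prompt are illustrative, and the import paths reflect the LangChain version current at the time of writing:

```python
import os
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# The OpenAI integration reads the key from this environment variable.
os.environ["OPENAI_API_KEY"] = "<your-api-key>"

# Any available OpenAI chat model works here; gpt-3.5-turbo is an example.
chat = ChatOpenAI(model_name="gpt-3.5-turbo")

response = chat([HumanMessage(content="Say hello in French!")])
print(response.content)
```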

Exploring local models

We can also run local models with LangChain. The advantages of running a model locally are complete control over the model and not sharing any data over the internet.

Please note that we don’t need an API token for local models!

Let’s preface this with a note of caution: LLMs are big, which means they take up a lot of disk space and system memory. The use cases presented in this section should run even on older hardware, like an old MacBook; however, if you choose a big model, it can take an exceptionally long time to run or may crash the Jupyter notebook. One of the main bottlenecks is the memory requirement: as a rule of thumb, a quantized model (quantization is, roughly, compression; we’ll discuss it in Chapter 8, Customizing LLMs and Their Output) needs about 1 GB of RAM per billion parameters (please note that not all models come quantized).
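
As a minimal sketch of what this looks like in practice (the model choice, google/flan-t5-small, is an illustrative assumption; at roughly 300 MB it fits comfortably on older hardware, and it requires the transformers and torch packages but no API token):

```python
from langchain.llms import HuggingFacePipeline

# Downloads the model on first use and runs it entirely on your machine.
llm = HuggingFacePipeline.from_model_id(
    model_id="google/flan-t5-small",
    task="text2text-generation",
)

print(llm("Translate to German: How old are you?"))
```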

You can also run these models on hosted resources or services such as Kubernetes...

Building an application for customer service

Customer service agents are responsible for answering customer inquiries, resolving issues, and addressing complaints. Their work is crucial for maintaining customer satisfaction and loyalty, which directly affects a company’s reputation and financial success.

Generative AI can assist customer service agents in several ways:

  • Sentiment classification: This helps identify customer emotions and allows agents to personalize their responses.
  • Summarization: This enables agents to understand the key points of lengthy customer messages and save time.
  • Intent classification: Similar to summarization, this helps predict the customer’s purpose and allows for faster problem-solving.
  • Answer suggestions: This provides agents with suggested responses to common inquiries, ensuring that accurate and consistent messaging is provided.

These approaches combined can help customer service agents respond...
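
To illustrate how a few of these pieces fit together, here is a minimal sketch of classifying sentiment and intent in one call (the prompt, labels, and model name are assumptions for illustration, not the full app built in this section):

```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

# Temperature 0 keeps the classification output deterministic.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

prompt = ChatPromptTemplate.from_template(
    "Classify the customer message below.\n"
    "Return the sentiment (positive/neutral/negative) and the intent "
    "(e.g., refund request, technical issue, general question).\n\n"
    "Message: {message}"
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(message="My order arrived broken and I want my money back!"))
```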

Summary

In this chapter, we walked through four distinct ways of setting up an environment with LangChain and the other libraries needed in this book. Then, we introduced several providers of models for text and images. For each of them, we explained where to get the API token and demonstrated how to call a model.

Finally, we developed an LLM app for text categorization (intent classification) and sentiment analysis in a customer service use case. This showcases how easily LangChain can orchestrate multiple models to create useful applications. By chaining together various functionalities in LangChain, we can help reduce response times in customer service and make sure answers are accurate and to the point.

In Chapter 4, Building Capable Assistants, and Chapter 5, Building a Chatbot Like ChatGPT, we’ll dive deeper into use cases such as question answering in chatbots through augmentation with tools and retrieval.

Questions

See whether you can answer these questions. If you’re unsure about any of them, I’d recommend going back to the corresponding sections of this chapter:

  1. How do you install LangChain?
  2. List at least four cloud providers of LLMs apart from OpenAI.
  3. What are Jina AI and Hugging Face?
  4. How do you generate images with LangChain?
  5. How do you run a model locally on your own machine rather than through a service?
  6. How do you perform text classification in LangChain?
  7. How can we help customer service agents in their work through generative AI?

Join our community on Discord

Join our community’s Discord space for discussions with the authors and other readers:

https://packt.link/lang
