You're reading from Vector Search for Practitioners with Elastic

Product type: Book
Published in: Nov 2023
Publisher: Packt
ISBN-13: 9781805121022
Edition: 1st

Model Management and Vector Considerations in Elastic

In this chapter, we will provide an overview of the Hugging Face ecosystem, Elasticsearch’s Eland Python library, and practical strategies for using embedding models in Elasticsearch.

We will start by exploring the Hugging Face platform, discussing how to get started, selecting suitable models, and leveraging its vast collection of datasets. We will also delve into the features offered by Hugging Face’s Spaces and how to use them effectively.

Then, we will introduce the Eland Python library, created by Elastic, and demonstrate its usage through a Jupyter Notebook example.

The topics that we will cover in this chapter are as follows:

  • Eland Python library created by Elastic
  • Index mappings
  • Machine Learning (ML) nodes
  • Integrating ML models into Elasticsearch
  • Critical aspects of planning for cluster capacity
  • Storage efficiency strategies that can help optimize the performance and resource...

Technical requirements

To implement the concepts covered in this chapter, you'll need the following:

Hugging Face

As discussed briefly in the introduction, the primary goal of Hugging Face is to democratize access to state-of-the-art NLP technologies and facilitate their adoption across various industries and applications. By providing an extensive library of pre-trained models (over 120,000 at the time of this writing), user-friendly APIs, and a collaborative environment for model sharing and fine-tuning, Hugging Face empowers developers and researchers to create advanced language processing applications with ease.
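
To make this concrete, here is a minimal sketch of pulling a pre-trained embedding model from the Hub with the sentence-transformers package (the package choice is our assumption; the model is the same one used later in this chapter):

```python
# Minimal sketch: download and use a pre-trained embedding model from the
# Hugging Face Hub. Assumes sentence-transformers is installed
# (pip install sentence-transformers).
from sentence_transformers import SentenceTransformer

# Weights are fetched from the Hub on first use and cached locally
model = SentenceTransformer("sentence-transformers/msmarco-MiniLM-L-12-v3")

# Encode text into a dense vector; this model outputs 384 dimensions
embedding = model.encode("What is vector search?")
print(embedding.shape)  # (384,)
```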

Building on that foundation, Hugging Face doesn't stop at providing an extensive library; it also streamlines access and application management. One standout feature to this end is the Model Hub.

Model Hub

Hugging Face offers resources and services focused on the needs of both researchers and businesses. These include the Model Hub, which serves as a central repository for pre-trained models including inference APIs that...

Eland

Eland is a Python library developed by Elastic that allows users to interface with Elasticsearch seamlessly for data manipulation and analysis. The library is built on top of the official Elasticsearch Python client and extends the pandas API to Elasticsearch. This enables users to interact with Elasticsearch data using familiar pandas-like syntax and conventions, making it easier to integrate Elasticsearch with existing data analysis workflows and tools.
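
As a minimal sketch of what that looks like (the index name and connection details are placeholders):

```python
import eland as ed
from elasticsearch import Elasticsearch

# Placeholder connection; point this at your own cluster
es = Elasticsearch("http://localhost:9200")

# The DataFrame is backed by the index; no documents are pulled into memory yet
df = ed.DataFrame(es, es_index_pattern="book_index")

print(df.shape)    # (document count, mapped field count)
print(df.head(5))  # fetches only the first five documents
```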

Eland is particularly useful for handling large datasets that cannot fit in memory and require distributed processing. Because Elasticsearch scales horizontally by distributing data across multiple nodes in a cluster, users can work efficiently with far larger datasets than would be possible on a single machine. Let's look at some features of Eland:

  • One key use case for Eland is querying data stored in Elasticsearch. Instead of writing raw Elasticsearch queries, users can write Python code that resembles familiar pandas operations, as sketched below.
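
For instance, a pandas-style filter is translated into an Elasticsearch query under the hood. A sketch, assuming a numeric rating field exists in the index from the previous example:

```python
import eland as ed
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder connection
df = ed.DataFrame(es, es_index_pattern="book_index")

# Boolean filtering runs as an Elasticsearch query; only matching
# documents (and only the first ten) are returned to the client
top_rated = df[df["rating"] > 4].head(10)
print(top_rated)
```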

Generating vectors in Elasticsearch

Vectors can be generated during ingest before a document is indexed (written) into Elasticsearch using an ingest pipeline. Each processor in the ingest pipeline performs a different task, such as enriching, modifying, or removing data. The key processor in that pipeline is the inference processor, which passes the text to an embedding model, receives the vector representation, and then adds the vector to the original document payload before moving it along to be stored in an index. This ensures that the document’s vector representation is available for indexing and searching immediately after it is ingested.

The following is an example of an Elasticsearch ingest pipeline configuration that uses the inference processor with the sentence-transformers/msmarco-MiniLM-L-12-v3 model we loaded earlier. The pipeline takes a field named summary from the input document, runs it through the embedding model to generate an embedding, and stores the resulting embedding in the document before it is indexed.
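
Here is a sketch of such a pipeline created with the official Python client (the pipeline ID and target field name are illustrative; the model ID follows Eland's convention of replacing slashes and dots with double underscores):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder connection

# The model is assumed to have been imported with Eland's CLI, e.g.:
#   eland_import_hub_model --url <cluster-url> \
#     --hub-model-id sentence-transformers/msmarco-MiniLM-L-12-v3 \
#     --task-type text_embedding --start
es.ingest.put_pipeline(
    id="summary-embedding-pipeline",
    description="Generate a vector for the summary field at ingest time",
    processors=[
        {
            "inference": {
                "model_id": "sentence-transformers__msmarco-minilm-l-12-v3",
                # Map the document's summary field to the input field the model expects
                "field_map": {"summary": "text_field"},
                "target_field": "summary_embedding",
            }
        }
    ],
)
```

With this configuration, the embedding lands under summary_embedding.predicted_value, and the index mapping would declare that field as dense_vector so it can be searched with kNN.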

Planning for cluster capacity and resources

Planning for sufficient cluster capacity and resources is crucial for any production environment, especially when implementing a vector search use case of considerable size. To ensure optimal performance and efficiency, careful consideration, planning, and testing must be carried out.

In the following chapter, we will delve into load testing, which is an essential part of fine-tuning and optimizing your Elasticsearch deployment. But before that, we will explore what it takes to run embedding models on ML nodes in Elasticsearch, outlining the essential factors to consider in order to strike the right balance between performance and resource utilization. In this section, we will discuss the critical aspects of CPU, RAM, and disk requirements, setting the stage for a comprehensive understanding of resource management in Elasticsearch.

CPU and memory requirements

The CPU requirements for vector search in Elasticsearch are not drastically...
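
As a rough illustration of the memory side, Elastic's approximate-kNN tuning guidance estimates the RAM needed to keep float32 HNSW vector data in the page cache at about num_vectors × 4 × (num_dimensions + 12) bytes. A back-of-the-envelope calculation (the corpus size and dimensionality below are made-up assumptions):

```python
# Rough RAM estimate for serving float32 HNSW vectors from the page cache:
#   bytes ≈ num_vectors * 4 * (num_dimensions + 12)
# Corpus size and dimensionality are illustrative assumptions.
num_vectors = 10_000_000   # one vector per document
num_dimensions = 384       # e.g., msmarco-MiniLM-L-12-v3 output size

estimated_bytes = num_vectors * 4 * (num_dimensions + 12)
print(f"~{estimated_bytes / 1024**3:.1f} GiB")  # ~14.8 GiB
```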

ML node capacity

For embedding models run on ML nodes in Elasticsearch, you will need to ensure your nodes have enough capacity to run the model at inference time. Elastic Cloud supports auto-scaling of ML nodes based on CPU requirements, letting them scale up and out when more compute is required and scale back down when demand drops.

We cover tuning ML nodes for inference in more detail in the next chapter, but at a minimum, you will need an ML node with enough RAM to load at least one instance of the embedding model. As your performance requirements grow, you can increase the number of allocations of the model as well as the number of threads per allocation, as sketched below.
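
A sketch of what that looks like with the 8.x Python client (method and parameter names per the client's ML APIs; the model ID is the one imported earlier):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder connection

# Deploy two copies of the model, each handling one request at a time
es.ml.start_trained_model_deployment(
    model_id="sentence-transformers__msmarco-minilm-l-12-v3",
    number_of_allocations=2,
    threads_per_allocation=1,
)
```

Increasing allocations raises inference throughput, while more threads per allocation lowers per-request latency; both consume additional CPU.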

To check the size of a model and the amount of memory (RAM) required to load the model, you can run the get trained models statistics API (for more information on this API, visit the documentation page at https://www.elastic.co/guide/en/elasticsearch/reference/current...
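
For example, with the Python client (the response fields shown reflect the typical 8.x shape):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder connection

stats = es.ml.get_trained_models_stats(
    model_id="sentence-transformers__msmarco-minilm-l-12-v3"
)
size_stats = stats["trained_model_stats"][0]["model_size_stats"]
print(size_stats["model_size_bytes"])              # on-disk size of the model
print(size_stats["required_native_memory_bytes"])  # RAM needed to load it
```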

Storage efficiency strategies

As your production dataset for vector search grows in size, so do the resources required to store those vectors and search through them in a timely fashion. In this section, we discuss several strategies users can take to reduce those resources. Each strategy has its trade-offs and should be carefully considered and thoroughly tested before being put into production.

Reducing dimensionality

Reducing dimensionality refers to the process of transforming high-dimensional data into a lower-dimensional representation. This process is often employed to mitigate the challenges that arise when working with high-dimensional data, such as the curse of dimensionality (https://en.wikipedia.org/wiki/Curse_of_dimensionality). Dimensionality reduction techniques, such as Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE), can help improve the efficiency and effectiveness of kNN vector search. However, there are advantages...
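
As a minimal sketch of the idea with PCA (the scikit-learn usage is our choice here; the embedding array is a stand-in for vectors produced by a real model):

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for real embeddings: 10,000 vectors of 384 dimensions
embeddings = np.random.rand(10_000, 384).astype("float32")

# Fit a projection down to 128 dimensions on a representative sample
pca = PCA(n_components=128)
reduced = pca.fit_transform(embeddings)

print(reduced.shape)                        # (10000, 128)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```

Remember that query vectors must be projected with the same fitted transform at search time, and the impact on recall should be measured before adopting this in production.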

Summary

In this chapter, we delved into the intricacies of the Hugging Face ecosystem and the capabilities of Elasticsearch’s Eland Python library, offering practical examples for using embedding models within Elasticsearch. We explored the Hugging Face platform, highlighting its datasets, model selection, and the potential of its Spaces. Furthermore, we provided a hands-on approach to the Eland library, illustrating its functionalities and addressing pivotal considerations such as mappings, ML nodes, and model integration. We also touched upon the nuances of cluster capacity planning, emphasizing RAM, disk size, and CPU considerations. Finally, we underscored several storage efficiency tactics, focusing on dimensionality reduction, quantization, and mapping settings to ensure optimal performance and resource conservation for your Elasticsearch cluster.

In the next chapter, we will dive into the operational phase of working with data and learn how to tune performance for...


Authors (2)

Bahaaldine Azarmi

Bahaaldine Azarmi, Global VP Customer Engineering at Elastic, guides companies as they leverage data architecture, distributed systems, machine learning, and generative AI. He leads the customer engineering team, focusing on cloud consumption, and is passionate about sharing knowledge to build and inspire a community skilled in AI.

Jeff Vestal

Jeff Vestal has a rich background spanning over a decade in financial trading firms and extensive experience with Elasticsearch. He offers a unique blend of operational acumen, engineering skills, and machine learning expertise. As a Principal Customer Enterprise Architect, he excels at crafting innovative solutions, leveraging Elasticsearch's advanced search capabilities, machine learning features, and generative AI integrations, adeptly guiding users to transform complex data challenges into actionable insights.