
You're reading from  MLOps with Red Hat OpenShift

Product type: Book
Published in: Jan 2024
Publisher: Packt
ISBN-13: 9781805120230
Edition: 1st
Authors (2):
Ross Brigoli

Ross Brigoli is a consulting architect at Red Hat, where he focuses on designing and delivering solutions around microservices architecture, DevOps, and MLOps with Red Hat OpenShift for various industries. He has two decades of experience in software development and architecture.

Faisal Masood

Faisal Masood is a cloud transformation architect at AWS, where he focuses on helping customers refine and execute their strategic business goals. His main interests are evolutionary architectures, software development, the ML life cycle, continuous delivery (CD), and infrastructure as code (IaC). Faisal has over two decades of experience in software architecture and development.


Building a Face Detector Using the Red Hat ML Platform

In the previous chapter of this book, you learned how the Red Hat platform enables you to build and deploy ML models. In this chapter, you will see that the model is just one part of the puzzle. You have to collect and process data before it can be fed to the model and produce a useful response. You will see how the Red Hat platform enables you to build and deploy all the components required for a real-world application.

The aim of this chapter is to introduce you to how other Red Hat services on the same OpenShift platform provide a complete ecosystem for your needs. In this chapter, you will learn about the following:

  • Building and deploying a TensorFlow model to detect faces
  • Capturing a video feed from your local laptop
  • Storing the results in Redis, running on the OpenShift platform
  • Generating an alert when the model detects a face in the feed
  • Cost optimization strategies for the OpenShift platform...

Architecting a human face detector system

We will start by defining the business use case, its utility, and an architectural diagram of how the components work together.

The idea is to collect a video feed in which you can detect multiple objects and respond accordingly. In our case, we are detecting a human face in a real-time video feed. Such a system could capture the feed from the front of your house and work as a security system. Alternatively, you could apply the same workflow to detect potholes on the road through a continuous video feed collected by a car.

Once the camera captures the feed, it sends the video frame by frame to an application running on your OpenShift cluster, which then calls the model for inference. Once the model detects a face, the calling application displays and stores the results in a Redis cache (you can further enhance the application to store the results in a database), from where you can display the result or generate an alert. The backend application...
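The flow described above can be sketched in framework-free Python. Everything here is illustrative and not the chapter's actual code: `detect_face` stands in for the call to the deployed model, and a plain dictionary stands in for the Redis counter.

```python
# Sketch of the frame-by-frame flow: each frame goes through inference,
# and a counter records how many frames contained a face.
def process_feed(frames, detect_face, counter):
    """Run inference on each frame; increment the counter on detections."""
    for frame in frames:
        if detect_face(frame):
            counter["faces"] = counter.get("faces", 0) + 1
    return counter

# Example run with a stub detector standing in for the deployed model:
stub_detector = lambda frame: frame == "face"
counts = process_feed(["face", "empty", "face"], stub_detector, {})
```

In the real system, `detect_face` would be an HTTP call to the model server, and the counter update would be an atomic Redis operation, as covered later in the chapter.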

Training a model for face detection

In this section, you will use a pre-trained model to build your own model for detecting a human face in a picture. This may be a simple example, but we have chosen it for a reason: our aim is to show you how the different components of such a system work together, while keeping it testable from any laptop with a webcam. You can enhance and rebuild the model for more complicated use cases if needed.

You will use Google’s EfficientNet, a highly efficient convolutional neural network, as the base pre-trained model. With pre-trained models, you do not need a huge amount of data to train the model for your use case. This will save you both time and compute resources. This method of reusing pre-trained models is also called transfer learning.
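As a hedged sketch of the transfer-learning pattern described above (the chapter's actual training code may differ), the pre-trained EfficientNet backbone can be frozen and a small classification head added on top. The three classes follow the chapter's example: face, finger, or something else.

```python
import tensorflow as tf

def build_face_classifier(num_classes=3, weights="imagenet"):
    """EfficientNetB0 backbone with a small classification head.

    Transfer learning: the pre-trained backbone is frozen, so only the
    new head is trained on your (small) dataset.
    """
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights=weights, input_shape=(256, 256, 3))
    base.trainable = False  # freeze the pre-trained feature extractor
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

With `weights="imagenet"`, Keras downloads the pre-trained weights on first use; you would then call `model.fit()` on your labeled face/finger/other images.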

Because EfficientNet is specifically designed for image classification, in this example, we will use it to classify whether an image contains a human face, a human finger, or something else. As a result...

Installing Redis on Red Hat OpenShift

Redis is a fast, in-memory data store that provides a key-value store with various data structures, such as lists, for applications to use. In our case, the video feed generates a lot of frames, and our application will run inference on these frames and keep a count of frames/images containing faces. So, we decided to use Redis to maintain an atomic counter.
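The atomic counter can be implemented with Redis's INCR command, which increments a key server-side in a single operation. The sketch below is illustrative (the key name and service host are assumptions, not the chapter's code); a stand-in client can be injected for testing without a live server.

```python
class FaceCounter:
    """Atomic face counter backed by Redis INCR (key name is an assumption)."""

    def __init__(self, client, key="face-count"):
        self.client = client
        self.key = key

    def increment(self):
        # INCR is atomic on the Redis server, so multiple backend pods
        # can bump the counter concurrently without locks.
        return self.client.incr(self.key)

    def value(self):
        raw = self.client.get(self.key)
        return int(raw) if raw is not None else 0

# Against the in-cluster service (hypothetical host), with redis-py:
#   import redis
#   counter = FaceCounter(redis.Redis(host="redis-server", port=6379))
```

Any object with `incr` and `get` methods works as the client, which keeps the counter logic testable outside the cluster.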

OpenShift will host the Redis server. You will find the complete non-production Redis setup in the chapter7/redis/redis-server.yaml file. Open the file and paste its contents into the OpenShift GUI while you are in the face-detection project. Hit the Create button and you will have a running Redis server on your platform. The following screenshot shows redis-server.yaml in the OpenShift UI.

Figure 7.11 – Installing the Redis server


Validate that the server is running by checking the Services section of the OpenShift console within the face-detection project, identifying the Pods, and validating...

Building and deploying the inferencing application

Before we dive deep into the inferencing application, let’s understand the application components. Our aim is to collect information from a camera, such as the video camera on your laptop, and then send it to the application, where the application will make a call to your model and see whether a face has been detected.

The video-capturing application (we call it the frontend) will capture the video and send every tenth frame, as a 256-by-256 image, to the server via HTTP. The server (or the backend application) will receive the frame, or image, and make an inference call to the model. The backend service will also keep a Redis-based counter: when a face is detected, the application will increment the face counter in the Redis database. The backend service will also expose another HTTP endpoint to read the value of the counter, which will then be displayed in the frontend service. Conceptually, it looks as in the...
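A stdlib-only sketch of that backend shape is shown below. It is illustrative, not the chapter's code: the model call and the Redis counter are stubbed with placeholders, and the routes are assumptions.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

counts = {"faces": 0}  # stand-in for the Redis-based counter

def classify(frame_bytes):
    """Placeholder for the HTTP inference call to the deployed model."""
    return "face"  # stub: pretend every frame contains a face

class Backend(BaseHTTPRequestHandler):
    def do_POST(self):  # the frontend posts each sampled frame here
        length = int(self.headers.get("Content-Length", 0))
        frame = self.rfile.read(length)
        label = classify(frame)
        if label == "face":
            counts["faces"] += 1  # would be an atomic Redis INCR in practice
        self._reply({"label": label})

    def do_GET(self):  # the frontend polls this to refresh its face counter
        self._reply({"faces": counts["faces"]})

    def _reply(self, payload):
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("0.0.0.0", 8080), Backend).serve_forever()
```

In the real deployment, this service runs as a container on OpenShift, and the in-memory dictionary is replaced by the Redis server installed earlier.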

Bringing it all together

Start by updating the frontend code with the HTTP address of face-detection-app in the refreshFaceCounter and takepicture functions. Keep in mind that your URL will be different.

Save the HTML file and load it into your browser. The browser will prompt you that the application is trying to capture the video feed; allow it access. You will see a screen like the one shown in Figure 7.23.

Figure 7.23 – Application UI capturing video and inferencing


The web page captures the video stream from your laptop camera and displays it in the top area of the page. The middle area shows the image captured every 250 milliseconds, as configured on the web page, and the bottom counter displays the number of images captured.

You will notice that the counter is continuously incremented while the person sits in front of the camera. This means that every 250 milliseconds, an image has been captured and...

Optimizing cost for your ML platform

In this section, you will learn how to use different OpenShift capabilities with Red Hat Data Science to optimize the cost for your platform. While we will not dive deep into this topic, we will provide you with some basic concepts to continue optimizing your platform resources.

When you run any software on the Red Hat OpenShift platform, whether Jupyter notebooks, build pipelines, or model serving, it all runs as containers on the platform. These containers run on machines, or worker nodes, which could be VMs in a cloud platform, such as Amazon EC2 instances. Let’s see how OpenShift provisions machines to run containers for your MLOps needs.

Machine management in OpenShift

Machine management is OpenShift’s capability to work with cloud or on-premises infrastructure providers, such as Amazon Web Services (AWS) or VMware, to provision and scale the machines for your workloads. OpenShift adapts to changing workloads...
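As an illustration of what this looks like in practice (the chapter does not show this manifest, and the names below are placeholders), a MachineAutoscaler resource ties a MachineSet to minimum and maximum replica counts, letting OpenShift add or remove worker machines as workload demand changes:

```yaml
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-autoscaler            # hypothetical name
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 4                     # cap machine count to control cost
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: my-cluster-worker-a        # replace with your MachineSet's name
```

Note that a ClusterAutoscaler resource must also be deployed on the cluster for machine autoscaling to take effect.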

Summary

Congratulations! You have just experienced building an end-to-end MLOps workflow from scratch. You have trained and deployed an ML model and built a pipeline to automate your model training and deployment workflow using the tools that come with OpenShift Data Science. You have also successfully built a backend application that hosts your model and exposes it as an HTTP endpoint.

You have seen how OpenShift not only provides a full ML life cycle but also hosts your application and supports technologies such as Redis. All the components that have been deployed will benefit from the scalability of the platform.

Your journey does not stop here. The models we have shown here are just an example. You can deploy open source large language models (LLMs) on the platform.

Happy learning!

