Computer Vision on AWS

By Lauren Mullennex , Nate Bachmeier , Jay Rao
  1. Free Chapter
    Chapter 1: Computer Vision Applications and AWS AI/ML Services Overview
About this book
Computer vision (CV) is a field of artificial intelligence that helps transform visual data into actionable insights to solve a wide range of business challenges. This book provides prescriptive guidance to anyone looking to learn how to approach CV problems for quickly building and deploying production-ready models. You’ll begin by exploring the applications of CV and the features of Amazon Rekognition and Amazon Lookout for Vision. The book will then walk you through real-world use cases such as identity verification, real-time video analysis, content moderation, and detecting manufacturing defects that’ll enable you to understand how to implement AWS AI/ML services. As you make progress, you'll also use Amazon SageMaker for data annotation, training, and deploying CV models. In the concluding chapters, you'll work with practical code examples, and discover best practices and design principles for scaling, reducing cost, improving the security posture, and mitigating bias of CV workloads. By the end of this AWS book, you'll be able to accelerate your business outcomes by building and implementing CV into your production environments with the help of AWS AI/ML services.
Publication date:
March 2023
Publisher
Packt
Pages
324
ISBN
9781801078689

 

Computer Vision Applications and AWS AI/ML Services Overview

In the past decade, the field of computer vision (CV) has rapidly advanced. Research in deep learning (DL) techniques has helped computers mimic human brains to “see” content in videos and images and transform it into actionable insights. There are examples of the wide variety of applications of CV all around us, including self-driving cars, text and handwriting detection, classifying different types of skin cancer in images, industrial equipment inspection, and detecting faces and objects in videos. Despite recent advancements, the availability of vast amounts of data from disparate sources has posed challenges in creating scalable CV solutions that achieve high-quality results. Automating a production CV pipeline is a cumbersome task requiring many steps. You may be asking, “How do I get started?” and “What are the best practices?”.

If you are a machine learning (ML) engineer or data scientist or want to better understand how to build and implement comprehensive CV solutions on Amazon Web Services (AWS), this book is for you. We provide practical code examples, tips, and step-by-step explanations to help you quickly deploy and automate production CV models. We assume that you have intermediate-level knowledge of artificial intelligence (AI) and ML concepts. In this first chapter, we will introduce CV and address implementation challenges, discuss the prevalence of CV across a variety of use cases, and learn about AWS AI/ML services.

In this chapter, we will cover the following:

  • Understanding CV
  • Solving business challenges with CV
  • Exploring AWS AI/ML services
  • Setting up your AWS environment
 

Technical requirements

You will need a computer with internet access to create an AWS account to set up Amazon SageMaker to run the code samples in the following chapters. The Python code and sample datasets for the solutions discussed are available at https://github.com/PacktPublishing/Computer-Vision-on-AWS.

 

Understanding CV

CV is a domain within AI and ML. It enables computers to detect and understand visual inputs (videos and images) to make predictions:

Figure 1.1 – CV is a subdomain of AI and ML

Before we discuss the inner workings of a CV system, let’s summarize the different types of ML algorithms:

  • Supervised learning (SL)—Takes a set of labeled input data and predicts a known target value. For example, a model may be trained on a set of labeled dog images. When a new unlabeled dog image is processed by the model, the model correctly predicts that the image is a dog instead of a cat.
  • Unsupervised learning (UL)—Unlabeled data is provided, and patterns or structures need to be found within the data since no labeled target value is present. One example of UL is a targeted marketing campaign where customers need to be segmented into groups based on various common attributes such as demographics.
  • Semi-supervised learning—Consists of both labeled and unlabeled data. This is beneficial for CV tasks, since labeling individual images is a time-consuming process. With this method, only some of the images in the dataset need to be labeled; the model then uses those labels to label and classify the remaining unlabeled images.
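The difference between SL and UL can be sketched with a toy NumPy example (the points, labels, and centroids below are made up purely for illustration):

```python
import numpy as np

# Toy 2-D points, e.g., feature vectors such as [height, weight].
points = np.array([[1.0, 1.2], [0.9, 1.0], [4.0, 4.2], [4.1, 3.9]])

# Supervised learning: labels are known for the training data.
labels = np.array(["cat", "cat", "dog", "dog"])

def predict_nearest(x):
    """Classify x by the label of its nearest labeled neighbor (1-NN)."""
    dists = np.linalg.norm(points - x, axis=1)
    return labels[np.argmin(dists)]

print(predict_nearest(np.array([3.8, 4.0])))  # lands near the "dog" cluster

# Unsupervised learning: no labels; structure must be discovered instead.
# One k-means assignment step: attach each point to its nearest centroid.
centroids = np.array([[1.0, 1.0], [4.0, 4.0]])
assignments = np.argmin(
    np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2), axis=1
)
print(assignments)  # the two natural groups emerge without any labels
```

The supervised model can only predict because labels were supplied up front; the unsupervised step recovers the same grouping from geometry alone.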

CV architecture and applications

Now that we’ve covered the different types of ML training methods, how does this relate to CV? DL algorithms are commonly used to solve CV problems. These algorithms are composed of artificial neural networks (ANNs) containing layers of nodes, which function like neurons in a human brain. A neural network (NN) has multiple layers: one or more input layers, hidden layers, and output layers. Input data flows through the input layers, the nodes in the hidden layers apply transformations to it, and the results are passed to the output layer, where predictions are made. The following figure shows an example of a deep NN (DNN) architecture:

Figure 1.2 – DNN architecture
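A forward pass through such a network can be sketched in plain NumPy. Random weights stand in for a trained model here, so the prediction itself is meaningless; the point is the flow from input layer through hidden layers to the output layer:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Input layer: a flattened 4x4 "image" (16 pixel values).
x = rng.random(16)

# Hidden layers: each node computes a weighted sum of the previous
# layer's outputs, then applies a nonlinearity.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)

# Output layer: scores for 3 classes, turned into probabilities.
W3, b3 = rng.normal(size=(3, 8)), np.zeros(3)

h1 = relu(W1 @ x + b1)          # first hidden layer
h2 = relu(W2 @ h1 + b2)         # second hidden layer
probs = softmax(W3 @ h2 + b3)   # output layer: class probabilities

print(probs)            # three probabilities summing to 1
print(probs.argmax())   # index of the predicted class
```

Training would adjust the weight matrices via backpropagation; inference is just this sequence of matrix multiplications and nonlinearities.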

How does this architecture apply to real-world applications? With CV and DL technology, you can detect patterns in images and use these patterns for classification. One type of NN that excels at classifying images is the convolutional NN (CNN). CNNs are a specialized kind of ANN whose connectivity pattern was inspired by the way animals visualize the world. One application of CNNs is classifying X-ray images to assist doctors with medical diagnoses:

Figure 1.3 – Image classification of X-rays
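The core building block that lets a CNN detect visual patterns is the convolution operation. The following NumPy sketch slides a hand-written edge-detection kernel over a tiny synthetic image; in a real CNN, the kernel values are learned during training rather than designed by hand:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most DL
    frameworks): slide the kernel over the image, computing dot products."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i+kh, j:j+kw] * kernel).sum()
    return out

# A tiny grayscale "image" with a vertical edge down the middle.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

# A vertical-edge-detection kernel.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

fmap = conv2d(image, kernel)
print(fmap)  # the feature map responds strongly at the edge, zero elsewhere
```

Stacking many such learned filters, interleaved with pooling and nonlinearities, is what allows a CNN to progressively detect edges, textures, and eventually whole objects.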

There are multiple types of problems that CV can solve that we will highlight throughout this book. Localization locates one or more objects in an image and draws a bounding box around the object(s). Object detection uses localization and classification to identify and classify one or multiple objects in an image. These tasks are more complicated than image classification. Faster R-CNN (Region-based CNN), SSD (Single Shot Detector), and YOLO (You Only Look Once) are other types of DNN models that can be used for object detection tasks. These models are designed for performance, reducing latency while maintaining or improving accuracy.

Segmentation—including instance segmentation and semantic segmentation—highlights the pixels of an image, instead of objects, and classifies them. Segmentation can also be applied to videos to detect black frames, color bars, end credits, and shot changes:

Figure 1.4 – Examples of different CV problem types

Despite recent advances in CV and DL, there are still challenges within the field. CV systems are complex, there are vast amounts of data to process, and several considerations must be addressed before training a model. Since a model is only as good as the quality of your data, it is important to understand both the data available and the steps required to prepare that data for model training.

Data processing and feature engineering

CV deals with images and videos, which are a form of unstructured data. Unstructured data does not have a predefined data model and cannot be stored in a database row and column format. This type of data poses unique challenges compared to tabular data, and more processing is required to transform it into a usable format. A computer sees an image as a matrix of pixel values. Each pixel is represented by a set of numbers, one per channel in the red, green, blue (RGB) system, with each value ranging from 0 to 255. Images vary in their resolutions, dimensions, and colors. In order to train a model, CV algorithms require that images are normalized so that they are all the same size. Additional image processing techniques include resizing, rotating, enhancing the resolution, and converting from RGB to grayscale. Another technique is image masking, which allows us to focus on a region of interest. In the following photos, we apply a mask to highlight the motorcycle:

Figure 1.5 – Applying an image mask to highlight the motorcycle

Preprocessing is important since images are often large and take up lots of storage. Resizing an image and converting it to grayscale can speed up the ML training process. However, this technique is not always optimal for the problem we’re trying to solve. For example, in medical image analysis such as skin cancer diagnosis, the colors of the samples are relevant for a proper diagnosis. This is why it’s important to have a complete understanding of the business problem you’re trying to solve before choosing how to process your data. In the following chapters, we’ll provide code examples that detail various image preprocessing steps.
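The preprocessing steps described above (normalizing pixel values, converting to grayscale, resizing, and masking) can be sketched with plain NumPy on a small synthetic image; production pipelines would typically use a library such as Pillow or OpenCV for the resizing step:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic 8x8 RGB image: pixel values are integers from 0 to 255.
image = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)

# Normalize pixel values to [0, 1], a common step before training.
normalized = image.astype(np.float32) / 255.0

# Convert RGB to grayscale using the standard luminance weights.
gray = (0.299 * normalized[..., 0]
        + 0.587 * normalized[..., 1]
        + 0.114 * normalized[..., 2])

# Crude downsampling to 4x4 by keeping every second pixel; a real
# pipeline would use a proper resize with interpolation instead.
small = gray[::2, ::2]

# Image masking: zero out everything outside a region of interest.
mask = np.zeros_like(gray, dtype=bool)
mask[2:6, 2:6] = True             # the region we want to keep
masked = np.where(mask, gray, 0.0)

print(gray.shape, small.shape)    # (8, 8) (4, 4)
```

Note how the grayscale conversion collapses three channels into one, shrinking the data by two-thirds before any resizing; this is exactly the storage and speed trade-off discussed above.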

Features or attributes in ML are important input data characteristics that affect the output or target variable of a model. Distinct features in an image help a model differentiate objects from one another. Determining relevant features depends on the context of your business problem. If you’re trying to identify a Golden Retriever dog in a group of images also containing cats, then height is an important feature. However, if you’re looking to classify different types of dogs, then height is not always a distinguishing feature since Golden Retrievers are similar in height to many other dog breeds. In this case, color and coat length might be more useful features.

Data labeling

Data annotation, or data labeling, is the process of labeling your input datasets. It helps derive value from your unstructured data for SL. Some of the challenges with data labeling are that it is a time-consuming manual process, human labelers can introduce bias, and it is difficult to scale. Amazon SageMaker Ground Truth Plus (https://aws.amazon.com/sagemaker/data-labeling/) helps address these challenges by managing this process for you. It provides a labeling user interface (UI) and quality workflow customizations. The labeling is done by an expert workforce with domain knowledge of the ML tasks to be completed. This improves label quality and leads to better training datasets. In Chapter 9, we will cover a code example using SageMaker Ground Truth Plus.

Amazon Rekognition Custom Labels (https://aws.amazon.com/rekognition/custom-labels-features/) also provides a visual interface to label your images. Labels can be applied to the entire image or you can create bounding boxes to label specific objects. In the next two chapters, we will discuss Amazon Rekognition and Rekognition Custom Labels in more detail.

In this section, we discussed the architecture behind DL CV algorithms. We also covered data processing, feature engineering, and data labeling considerations to create high-quality training datasets. In the next section, we will discuss the evolution of CV and how it can be applied to many different business use cases.

 

Solving business challenges with CV

CV has tremendous business value across a variety of industries and use cases. There have also been recent technological advancements that are generating excitement within the field. The first use case of CV was noted over 60 years ago when a digital scanner was used to transform images into grids of numbers. Today, vision transformers and generative AI allow us to quickly create images and videos from text prompts. The applications of CV are evident across every industry, including healthcare, manufacturing, media and entertainment, retail, agriculture, sports, education, and transportation. Deriving meaningful insights from images and videos has helped accelerate business efficiency and improved the customer experience. In this section, we will briefly cover the latest CV implementations and highlight use cases that we will be diving deeper into throughout this book.

New applications of CV

In 1961, Lawrence Roberts, who is often considered the “father” of CV, presented in his paper Machine Perception of Three-Dimensional Solids (https://dspace.mit.edu/bitstream/handle/1721.1/11589/33959125-MIT.pdf) how a computer could construct a 3D array of objects from a 2D photograph. This groundbreaking paper led researchers to explore the value of image recognition and object detection. Since the discovery of NNs and DL, the field of CV has made great strides in developing more accurate and efficient models. Earlier, we reviewed some of these models, such as CNN and YOLO. These models are widely adopted for a variety of CV tasks. Recently, a new model called vision transformers has emerged that outperforms CNN in terms of accuracy and efficiency. Before we review vision transformers in more detail, let’s summarize the idea of transformers and their relevance in CV.

In order to understand transformers, we first need to explore a DL concept used in natural language processing (NLP), called attention. An introduction to transformers and self-attention was first presented in the paper Attention is All You Need (https://arxiv.org/pdf/1706.03762.pdf). The attention mechanism is used in recurrent NN (RNN) sequence-to-sequence (seq2seq) models. One example of an application of seq2seq models is language translation. Such a model is composed of an encoder and a decoder. The encoder processes the input sequence, and the decoder generates the transformed output. Hidden state vectors take the input sequence and the context vector from the encoder and send them to the decoder to predict the output sequence. The following diagram is an illustration of these concepts:

Figure 1.6 – Translating a sentence from English to German using a seq2seq model

In the preceding example, we pay attention to the context of the words in the input to determine the next sequence when generating the output. Another example of attention from Attention is All You Need weighs the importance of different inputs when making predictions. Here is a sentiment analysis example from the paper for a hotel service task, where the bold words are considered relevant:

Figure 1.7 – Example of attention for sentiment analysis from “Attention is All You Need”

A transformer relies on self-attention, which is defined in the paper as “an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence”. Transformers are important in the application of NLP because they capture the relationship and context of words in text. Take a look at the following sentences:

Andy Jassy is the current CEO of Amazon. He was previously the CEO of Amazon Web Services.

Using transformers, we are able to understand that “He” in the second sentence is referring to Andy Jassy. Without this context of the subject in the first sentence, it is difficult to understand the relationship between the rest of the words in the text.
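The scaled dot-product self-attention at the heart of a transformer can be sketched in a few lines of NumPy. Here, toy random embeddings stand in for real token vectors, so the weights are illustrative only; the structure of the computation follows the definition in Attention is All You Need:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: every position in the sequence
    attends to every other position in the same sequence."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # pairwise token-to-token affinities
    weights = softmax(scores, axis=-1)  # each row is a distribution over tokens
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                 # e.g., 5 token embeddings of width 8

X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)            # (5, 8): one context-aware vector per token
print(weights.sum(axis=1))  # each row of attention weights sums to 1
```

In a trained model, the row of attention weights for “He” would place high mass on “Andy Jassy”, which is how the relationship between the two sentences is captured.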

Now that we’ve reviewed transformers and explained their importance in NLP, how does this relate to CV? The vision transformer was introduced in a 2021 paper, An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (https://arxiv.org/pdf/2010.11929v2.pdf). Vision transformers expand upon the concept of text transformers. The technical details of vision transformers are outside the scope of this book; however, they have shown great improvement over CNNs for image classification tasks. The transformer architecture has also enabled new innovations such as generative AI, which is blurring the lines between separate models for NLP and CV. With generative AI, we can generate images from a text phrase. One image generator, developed by OpenAI, is called DALL-E (https://openai.com/blog/dall-e-now-available-without-waitlist/). Another example, created by Stability AI, is called Stable Diffusion (https://huggingface.co/spaces/stabilityai/stable-diffusion). All that is required to generate an image is to type in an English phrase. The following figure shows an example of images generated by Stable Diffusion:

Figure 1.8 – Images created from the text “An astronaut in the mountains” using Stable Diffusion

The potential use cases for transformers and generative AI are just beginning to be explored. Throughout the rest of this book, we will discuss the following real-world applications of CV and provide code examples.

Contactless check-in and checkout

To improve the customer experience, many businesses have adopted contactless check-in and checkout processes. This provides a frictionless experience that reduces cost and is easier to scale. It also adds a layer of enhanced security. Instead of checking out from a grocery store using a credit card or trying to remember a PIN, you can use a biometric option such as your palm or facial recognition. In Chapter 3, we will walk through a code example to build a contactless hotel check-in system using identity verification.

Video analysis

You can use CV to analyze videos to detect objects in real time. This helps gather analytics for security footage and helps ensure compliance requirements are met in a manufacturing facility. In the media and entertainment industry, companies can monetize their content by automating the analysis of videos to determine when to insert advertisements. In Chapter 5, we will use CV to annotate and automate security video footage.

Content moderation

The amount of digital content is increasing. Often, this content is moderated manually by human reviewers, which is not a scalable or cost-effective solution. Companies in the gaming, social media, financial services, and healthcare industries are looking to protect their brands, create safe online communities that improve the user experience, meet regulatory and compliance requirements, and reduce the cost of content moderation. CV services combined with additional AI services, such as NLP, can automatically moderate image, video, text, and audio workflows to detect unwanted or offensive content and protect sensitive information. In Chapter 6, we teach you how to incorporate these capabilities into an automated pipeline.

CV at the edge

CV at the edge allows you to run your models locally on an edge device to make real-time predictions and reduce latency. Many ML use cases require that models run on the edge. To meet privacy preservation standards, users’ data needs to be kept directly on devices such as mobile phones, smart cameras, and smart speakers. Also, your devices may be running in places with limited connectivity such as oil drills, or even in space where it is impossible to send your data to the cloud to perform inference. Consultancy firm Deloitte estimates that today there are over 750 million AI devices, and that number is only continuing to grow. What types of use cases can CV solve at the edge? Camera streams on a manufacturing floor can trigger multiple models and alert maintenance teams when equipment defects are identified and can also detect issues in product quality. It also has applications in healthcare. CV models can be deployed on X-ray machines and in operating rooms to quickly process medical images, which helps with faster patient diagnosis. In Chapters 7 and 8, we’ll dive deeper into CV at the edge and provide code examples to solve industrial Internet of Things (IoT) scenarios and defect detection.

In this section, we introduced transformers and discussed their impact on CV. We also covered common challenges that can be solved with CV across multiple industries. These use cases are not an exhaustive list and represent only a small sample of how CV can unlock meaningful insights from your content and accelerate your business outcomes. In the next section, we will introduce the AWS AI/ML services and the benefits of using these services in your downstream applications.

 

Exploring AWS AI/ML services

There are many challenges faced when building and deploying a production CV model. It’s often difficult to find the right ML skill sets. Gathering high-quality data and labeling the data is a manual and costly process. Data processing and feature engineering require domain expertise. Developing, training, and testing ML models takes time. Once a model is created and deployed into production, it’s challenging to scale on-premises and difficult to understand which metrics to monitor to detect data and model quality drift. Reducing inference latency, automating the retraining process, and managing the underlying infrastructure are also concerns.

AWS AI/ML services are designed to address these challenges. These services are fully managed, so you don’t have to worry about their underlying architecture. You can also optimize your costs by only paying for what you use. Within the portfolio of AWS AI/ML services, there are several approaches to choose from when building your CV application.

AWS AI services

AWS AI services provide pre-trained models that use DL technology to solve common use cases such as image classification, personalized recommendations, fraud detection, anomaly detection, and NLP. These services don’t require any ML expertise and they’re easily integrated into your applications or with other AWS services by calling APIs. They help remove the undifferentiated heavy lifting of dealing with image preprocessing and feature extraction. This way, you can focus on solving your business problems and moving to production faster.

One of the AI services for CV is Amazon Rekognition. It is a fully managed DL-based service that detects objects, people, activities, scenes, text, and inappropriate content in images and videos. It also provides facial analysis and facial search capabilities. Rekognition offers pre-trained models but also allows you to train your own custom model using Rekognition Custom Labels. In the next two chapters, we provide code examples and applications of Rekognition and Rekognition Custom Labels.
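As a sketch of how Rekognition fits into an application, the following shows a call to the DetectLabels API for an image stored in S3 (the bucket and key are placeholders, and AWS credentials are required to actually invoke the service), along with a small helper for filtering the response by confidence:

```python
def detect_labels(bucket, key):
    """Call Rekognition DetectLabels on an image in S3.
    Requires AWS credentials; bucket/key below are placeholders."""
    import boto3  # AWS SDK for Python
    client = boto3.client("rekognition")
    return client.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=10,
    )

def labels_above(response, min_confidence=90.0):
    """Pull label names from a DetectLabels response, keeping only
    labels at or above the given confidence threshold."""
    return [label["Name"] for label in response.get("Labels", [])
            if label["Confidence"] >= min_confidence]

# A trimmed illustration of the response shape DetectLabels returns:
sample_response = {
    "Labels": [
        {"Name": "Dog", "Confidence": 99.1},
        {"Name": "Pet", "Confidence": 97.5},
        {"Name": "Car", "Confidence": 55.2},
    ]
}
print(labels_above(sample_response))  # ['Dog', 'Pet']
```

Filtering on the per-label confidence score is a common pattern: it lets your application act only on predictions the service is sure about.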

Amazon Lookout for Vision (https://aws.amazon.com/lookout-for-vision/) is another AI service that uses CV to detect anomalies and defects in manufacturing. Using pre-trained models, it helps improve industrial quality assurance by analyzing images to identify objects with visual defects. This helps improve your operational efficiency. In Chapter 7, we go into more detail about using Lookout for Vision.

For building and managing CV applications at the edge, AWS Panorama (https://aws.amazon.com/panorama/) provides ML devices and a software development kit (SDK) to add CV to your cameras. This helps to automate costly inspection tasks by building CV applications that analyze video feeds. The Panorama appliance performs predictions locally for real-time decision-making. With Panorama, you can train your own models or select pre-built applications from AWS or third-party vendors.

These are only a few examples of the AWS AI services we will be focusing on in this book. For more details on the pre-trained services available for your applications, visit the AWS Machine Learning | AI Services page (https://aws.amazon.com/machine-learning/ai-services/).

Amazon SageMaker

If you are interested in fine-tuning a pre-trained model, using built-in algorithms, or building your own custom ML model, Amazon SageMaker (https://aws.amazon.com/sagemaker/) is a comprehensive fully managed ML service that allows you to prepare data, build, train, and deploy ML models for any use case. SageMaker provides the infrastructure, tools, visual interfaces, workflows, and MLOps capabilities for every step of the ML life cycle to help you deploy and manage models at scale. SageMaker also contains an integrated development environment (IDE) called SageMaker Studio where you can perform all steps within the ML life cycle and orchestrate continuous integration/continuous deployment (CI/CD) pipelines. For more information on SageMaker Studio, refer to the book Getting Started with SageMaker Studio, by Michael Hsieh (https://www.packtpub.com/product/getting-started-with-amazon-sagemaker-studio/9781801070157):

Figure 1.9 – Amazon SageMaker features and capabilities

With SageMaker, you can use transfer learning (TL) to fine-tune and reuse a pre-trained model without training a model from scratch. This saves you time and allows you to transfer the domain knowledge you gained previously to solve a new ML problem. This technique can be applied to CV or any type of business problem.

SageMaker contains dozens of pre-built algorithms that are optimized for speed, scale, and accuracy. They include support for supervised and unsupervised algorithms to solve a variety of use cases, including CV-related problems such as image classification, object detection, and semantic segmentation.

If a pre-trained or pre-built solution does not fit your needs, you have the option to build a custom ML model. There is a variety of powerful CPU and GPU compute options available for training and hosting your model on SageMaker. In Chapter 12, we will build a custom CV model on SageMaker to classify different types of skin cancer.

In this section, we provided an overview of the AWS AI/ML services related to CV. Next, we will show you how to set up the AWS environment that you will use throughout this book to build CV solutions.

 

Setting up your AWS environment

In the following chapters, you will need access to an AWS account to run the code examples. If you already have an AWS account, feel free to skip this section and move on to the next chapter.

Note

Please use the AWS Free Tier, which allows you to try services free of charge based on certain service usage limits or time limits. See https://aws.amazon.com/free for more details.

Follow the instructions at https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-creating.html to sign up for an AWS account, then proceed as follows:

  1. Once the AWS account is created, sign in using your email address and password and access the AWS Management Console at https://console.aws.amazon.com/.
  2. Type IAM in the services search bar at the top of the console and select IAM to navigate to the IAM console. Select Users from the left panel of the IAM console, then select Add User.
  3. Enter a User name value, then select Programmatic access and AWS Management Console access for Access type. Keep the Console password setting as Autogenerated password, and keep Require password reset as selected:
Figure 1.10 – Setting your IAM username and access type

  4. Select Next: Permissions. On the Set permissions page, select Attach existing policies directly and select the checkbox to the left of AdministratorAccess. Select Next twice to go to the Review page, then select Create user:
Figure 1.11 – Adding Administrator access for IAM user

  5. Now, go back to the AWS Management Console (console.aws.amazon.com) and select Sign In. Provide the IAM username you created in the previous step along with the autogenerated temporary password, then enter a new password to log in to the console.
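Note that IAM users, unlike the root user, can also sign in through an account-specific URL. As a minimal sketch of its shape (the 12-digit account ID below is a made-up example; you can substitute an account alias if you have configured one):

```python
# Build the account-specific console sign-in URL for IAM users.
# The account ID used in the example call is fictitious.

def iam_sign_in_url(account_id_or_alias: str) -> str:
    return f"https://{account_id_or_alias}.signin.aws.amazon.com/console"

print(iam_sign_in_url("123456789012"))
# https://123456789012.signin.aws.amazon.com/console
```

Using this URL pre-fills the account field on the sign-in page, so the IAM user only needs to enter their username and password.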

Creating an Amazon SageMaker Jupyter notebook instance

We will be using Jupyter Notebooks to run our code in the following chapters. Please execute the following steps to create a notebook instance in Amazon SageMaker:

  1. In the AWS Management Console, type SageMaker in the services search bar at the top of the page and select Amazon SageMaker to access the Amazon SageMaker console.
  2. On the left panel, select Notebook to expand it, then select Notebook instances.
  3. At the top right of the Notebook instances page, select Create notebook instance.
  4. Under Notebook instance settings, type a name for Notebook instance name. For Notebook instance type, select ml.t3.medium since it falls under the AWS Free Tier:
Figure 1.12 – Amazon SageMaker: Notebook instance settings

  5. Under the Permissions and encryption section, select the IAM role list and choose Create a new role. Specify Any S3 bucket to provide access to all S3 buckets.
  6. Leave the rest of the default options in the remaining sections and select Create notebook instance.
  7. It will take a few minutes for the notebook instance to provision. Once its status is InService, you are ready to proceed. The following chapters will provide instructions for executing the code examples.
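The Any S3 bucket option chosen while creating the notebook's IAM role grants broad S3 access. As an illustrative approximation (this is not the exact policy document SageMaker generates, just a sketch of the access level it implies):

```python
import json

# Rough sketch of the S3 permissions implied by "Any S3 bucket".
# Real SageMaker-generated roles also attach managed policies such as
# AmazonSageMakerFullAccess; this only illustrates the S3 portion.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket",
            ],
            "Resource": "*",  # every bucket in the account
        }
    ],
}

print(json.dumps(policy, indent=2))
```

For production workloads, you would typically scope `Resource` down to specific bucket ARNs instead of `"*"`; the permissive option is convenient for working through a book's examples.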

Now, you are ready to deploy the code examples that will show you how to use AWS AI/ML services to deploy CV solutions. Throughout the rest of the book, you will use a SageMaker notebook instance for these steps.

 

Summary

In this chapter, we covered the architecture behind a CV DNN and the common CV problem types. We discussed how to create high-quality datasets by preprocessing your input images, extracting features, and auto-labeling your data. Next, we summarized recent CV advancements and provided a brief overview of common CV use cases and their importance in deriving value for your business. We also explored AWS AI/ML services and how they can be used to quickly deploy production solutions.

In the next chapter, we will introduce Amazon Rekognition. You will learn about the different Rekognition APIs and how to interact with them. We will dive deeper into several use cases and provide Python code examples for execution.

About the Authors
  • Lauren Mullennex

    Lauren Mullennex is a Senior AI/ML Specialist Solutions Architect at AWS. She has broad experience in infrastructure, DevOps, and cloud architecture across multiple industries. She has published multiple AWS AI/ML blogs, spoken at AWS conferences, and focuses on developing solutions using CV and MLOps.

  • Nate Bachmeier

    Nate Bachmeier is a Principal Solutions Architect at AWS (Ph.D. CS, MBA). He nomadically explores the world one cloud integration at a time, focusing on the Financial Service industry.

  • Jay Rao

    Jay Rao is a Principal Solutions Architect at AWS. He enjoys providing technical and strategic guidance to customers and helping them design and implement solutions.
