Fine-Tuning Instance Segmentation Models

Object instance segmentation models build on the results of object detection models. Therefore, all the techniques introduced in the previous chapters for fine-tuning object detection models apply equally to object instance segmentation models. However, object instance segmentation has one additional, important aspect to fine-tune: the quality of the boundaries of the detected objects. For this reason, this chapter introduces PointRend, a project inside Detectron2 that helps improve the sharpness of object boundaries.

By the end of this chapter, you will understand how PointRend works. You will also have hands-on experience developing object instance segmentation applications with better segmentation quality using existing PointRend models, and you will be able to train object instance segmentation models with PointRend on a custom dataset. Specifically, this chapter covers the following topics:

  • Introduction to PointRend...

Technical requirements

You must have completed Chapter 1 to have an appropriate development environment for Detectron2. All the code, datasets, and results are available in this book’s GitHub repository at https://github.com/PacktPublishing/Hands-On-Computer-Vision-with-Detectron2.

Introduction to PointRend

PointRend is a project that is part of Detectron2. It helps provide better segmentation quality for detected object boundaries and can be used for both instance segmentation and semantic segmentation. It is an extension of the Mask R-CNN head, which we discussed in the previous chapter. Instead of predicting every point in the detected mask, it samples a subset of points and performs predictions only on those sampled points. This sampling keeps the computational cost manageable, which allows the mask to be computed at a higher resolution and thus produces higher-quality masks. Figure 11.1 illustrates an example of two images when not using (left) and when using (right) PointRend:

Figure 11.1: Segmentation quality with and without PointRend

PointRend helps render higher-resolution segmentations with object boundaries that are crisp rather than overly smooth. Therefore, it is especially useful when the objects to detect have sharp edges.

At inference time, starting from the coarse prediction...
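The rest of this section describes that adaptive refinement procedure in full. As a rough illustration only, the following minimal sketch (not Detectron2's actual implementation; the helper function and toy tensors are purely illustrative) shows the key step: selecting the most uncertain locations in an upsampled coarse mask, which are the only points a PointRend head would re-predict. Uncertainty is approximated here as closeness of a mask logit to the decision boundary:

# Minimal, self-contained sketch (not Detectron2's actual implementation):
# pick the most uncertain locations in an upsampled coarse mask -- the only
# points a PointRend head would re-predict during refinement.
import torch
import torch.nn.functional as F

def select_uncertain_points(mask_logits: torch.Tensor, num_points: int) -> torch.Tensor:
    """Return normalized (x, y) coordinates of the most uncertain points.

    mask_logits: (N, 1, H, W) per-instance mask logits. Uncertainty is highest
    where a logit is closest to 0, i.e., where the foreground probability is ~0.5.
    """
    n, _, h, w = mask_logits.shape
    uncertainty = -mask_logits.abs().view(n, h * w)             # (N, H*W), larger = more uncertain
    _, idx = uncertainty.topk(num_points, dim=1)                # indices of the top-k uncertain pixels
    ys = torch.div(idx, w, rounding_mode="floor").float() / max(h - 1, 1)
    xs = (idx % w).float() / max(w - 1, 1)
    return torch.stack([xs, ys], dim=2)                         # (N, num_points, 2) in [0, 1]

# Toy example of one 2x refinement step on a 7x7 coarse prediction
coarse = torch.randn(1, 1, 7, 7)                                # coarse mask logits for one instance
upsampled = F.interpolate(coarse, scale_factor=2, mode="bilinear", align_corners=False)
points = select_uncertain_points(upsampled, num_points=14)
print(points.shape)                                             # torch.Size([1, 14, 2])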

Using existing PointRend models

The steps for performing object instance segmentation using existing PointRend models are similar to those for performing object instance segmentation using the existing Detectron2 models in the Detectron2 Model Zoo, as described in the previous chapter. Therefore, this section focuses on the differences. For PointRend, we need to clone the Detectron2 repository so that we can use the configuration files from its PointRend project:

!git clone https://github.com/facebookresearch/detectron2.git detectron2_repo

The repository is stored in the detectron2_repo folder in the current working directory. With this repository cloned, the code to generate the configuration is a little different:

# some other common import statements are removed here
from detectron2.projects import point_rend
config_file = "detectron2_repo/projects/PointRend/configs/InstanceSegmentation/pointrend_rcnn_X_101_32x8d_FPN_3x_coco.yaml"
checkpoint_url = "detectron2...

Training custom PointRend models

This section describes the steps for training a custom PointRend model for object instance segmentation tasks on the brain tumor dataset (described in the previous chapter). Training custom PointRend models involves steps similar to those for training instance segmentation models in Detectron2. Therefore, this section focuses on the differences between the two; the complete source code is available in this book’s GitHub repository.

The source code for downloading the brain tumor segmentation dataset, installing Detectron2, and registering the train and test datasets remains the same. As in the previous section, before we can get a configuration file, we need to clone the Detectron2 repository to use the configuration files from the PointRend project:

!git clone https://github.com/facebookresearch/detectron2.git detectron2_repo

The code for the initial configuration remains the same:

output_dir = "output/pointrend"
os.makedirs...
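The initial configuration is truncated above. As a hedged sketch of the main PointRend-specific difference (the config file, dataset names, and class count below are illustrative assumptions, not the book's exact values), note that PointRend's point head carries its own class count, which must match the custom dataset just like the ROI heads:

# Hedged sketch of the PointRend-specific configuration for a custom dataset
import os
from detectron2.config import get_cfg
from detectron2.projects import point_rend

output_dir = "output/pointrend"
os.makedirs(output_dir, exist_ok=True)

cfg = get_cfg()
point_rend.add_pointrend_config(cfg)              # register PointRend config keys before merging
cfg.merge_from_file("detectron2_repo/projects/PointRend/configs/InstanceSegmentation/pointrend_rcnn_X_101_32x8d_FPN_3x_coco.yaml")
cfg.OUTPUT_DIR = output_dir
cfg.DATASETS.TRAIN = ("braintumors_train",)       # illustrative registered dataset names
cfg.DATASETS.TEST = ("braintumors_test",)
num_classes = 1                                   # placeholder: set to the dataset's class count
cfg.MODEL.ROI_HEADS.NUM_CLASSES = num_classes
cfg.MODEL.POINT_HEAD.NUM_CLASSES = num_classes    # PointRend's point head has its own class count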

Summary

This chapter introduced techniques to fine-tune object instance segmentation applications trained using Detectron2. In general, object instance segmentation applications also include an object detection component. Therefore, all the methods utilized to fine-tune object detection models can also be used for object instance segmentation models. Additionally, this chapter discussed the PointRend project, which helps improve the segmentation boundaries of detected objects. Specifically, it described how PointRend works and the steps for developing object instance segmentation applications using the existing models available in the PointRend Model Zoo. Finally, this chapter also provided code snippets for training custom PointRend models on custom datasets.

Congratulations again! By now, you should have in-depth knowledge of Detectron2 and be able to develop computer vision applications by using existing models or training custom models on custom datasets...
