You're reading from Qt 5 and OpenCV 4 Computer Vision Projects

Product type: Book
Published in: Jun 2019
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781789532586
Edition: 1st Edition

Author: Zhuo Qingliang

Zhuo Qingliang (a.k.a. KDr2 online) is presently working at Beijing Paoding Technology Co. LTD., a start-up fintech company in China that is dedicated to improving the financial industry through artificial intelligence technologies. He has over 10 years' experience in Linux, C, C++, Python, Perl, and Java development. He is interested in programming, consulting, and participating in and contributing to the open source community (including, of course, the Julia community).

Assessments

Chapter 1, Building an Image Viewer

  1. We use a message box to tell users that they are already viewing the first or last image as they attempt to view the image prior to the first image, or the image following the last image. However, there is another way to handle this: disable prevAction when users are viewing the first image, and disable nextAction when users are viewing the last image. How do we go about this?

The QAction class has a bool enabled property and hence a setEnabled(bool) method; we call it in the prevImage and nextImage methods to enable or disable the corresponding action.
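The approach above can be sketched as follows; updateNavigationActions, currentIndex, and imageCount are hypothetical names standing in for the book's actual members:

```cpp
#include <QAction>

// Sketch: call this helper whenever the displayed image changes (e.g. at the
// end of prevImage and nextImage, and after opening a folder). The members
// currentIndex and imageCount are assumptions about how the image list is
// tracked in MainWindow.
void MainWindow::updateNavigationActions()
{
    prevAction->setEnabled(currentIndex > 0);
    nextAction->setEnabled(currentIndex < imageCount - 1);
}
```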

  2. Our menu items and tool buttons contain only text. How can we add an icon image to them?

The QAction class has a QIcon icon property and hence a setIcon method; you can create an icon and set it for the action. To create a QIcon object, please refer to its documentation...
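A minimal sketch, assuming the icon images have been added to a Qt resource file (the resource path and theme name below are placeholders, not from the book):

```cpp
#include <QAction>
#include <QIcon>

// Icons can come from a resource file compiled into the application...
openAction->setIcon(QIcon(":/icons/open.png"));
// ...or, on Linux desktops, from the system icon theme.
prevAction->setIcon(QIcon::fromTheme("go-previous"));
```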

Chapter 2, Editing Images Like a Pro

  1. How would we know whether an OpenCV function supports in-place operations?

As mentioned in the chapter, we can refer to the function's official documentation. If the documentation states that it supports in-place operation, then it does; otherwise, it doesn't.

  2. How can a hotkey be added to each action we added as a plugin?

We can add a new method to the plugin interface class that returns a QList<QKeySequence> instance and implement it in the concrete plugin class. When we load the plugin, we call that method to get the shortcut key sequence and set it as the hotkey of the action for that plugin.
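One way to sketch the extended interface; the method name hotKeys() is an assumption, not taken from the book:

```cpp
#include <QKeySequence>
#include <QList>
#include <QString>

class EditorPluginInterface {
public:
    virtual ~EditorPluginInterface() {}
    virtual QString name() = 0;
    // ... the existing editing method ...

    // New: shortcut keys for this plugin's action. Returning an empty list
    // by default means existing plugins keep compiling without changes.
    virtual QList<QKeySequence> hotKeys() { return {}; }
};

// When loading a plugin and creating its QAction:
//   action->setShortcuts(plugin->hotKeys());
```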

  3. How can a new action be added to discard all the changes in the current image in our application?

First of all, add a class field of the QPixmap type to the MainWindow class. Before editing the current image, we save a...

Chapter 3, Home Security Applications

  1. Can we detect motion from a video file instead of from a camera? How is this achieved?

Yes, we can. Just use the video file path to construct the VideoCapture instance. More details can be found at https://docs.opencv.org/4.0.0/d8/dfe/classcv_1_1VideoCapture.html.
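A minimal sketch (the file path is a placeholder):

```cpp
#include <opencv2/videoio.hpp>

// The same VideoCapture API accepts a file path instead of a camera index.
cv::VideoCapture cap("/path/to/video.mp4");
cv::Mat frame;
while (cap.read(frame)) {   // read() returns false at the end of the file
    // ... run the motion detection on frame, exactly as with a camera ...
}
```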

  2. Can we perform the motion detection work in a thread that differs from the video capturing thread? If so, how is this possible?

Yes, but we should use synchronization mechanisms to keep the shared data safe. Also, if we dispatch frames to different threads, we must ensure that the resulting frames are shown in their original capture order once they are sent back.
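The ordering concern can be sketched with plain standard-library threads; the frame payload is reduced to an int here, and doubling stands in for the detection work:

```cpp
#include <map>
#include <mutex>
#include <thread>
#include <vector>

// Sketch: dispatch numbered "frames" to worker threads, collect the results
// under a mutex, and re-emit them sorted by sequence number so the display
// order matches the capture order.
std::vector<int> processInOrder(const std::vector<int>& frames)
{
    std::mutex resultsMutex;
    std::map<int, int> results;  // sequence number -> processed frame

    auto worker = [&](int seq, int frame) {
        int processed = frame * 2;  // stand-in for the detection work
        std::lock_guard<std::mutex> lock(resultsMutex);
        results[seq] = processed;
    };

    std::vector<std::thread> pool;
    for (int seq = 0; seq < static_cast<int>(frames.size()); ++seq)
        pool.emplace_back(worker, seq, frames[seq]);
    for (auto& t : pool)
        t.join();

    // std::map iterates in key order, so this follows the capture order.
    std::vector<int> ordered;
    for (const auto& kv : results)
        ordered.push_back(kv.second);
    return ordered;
}
```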

  3. IFTTT allows you to include images in the notifications it sends. How could we send an image of the detected motion in the notifications pushed to your mobile phone via this feature...

Chapter 4, Fun with Faces

  1. Can you use the LBP cascade classifier to detect faces yourself?

Yes. Just use the OpenCV built-in lbpcascades/lbpcascade_frontalface_improved.xml classifier data file.
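A minimal sketch; only the classifier file changes, the detection call stays the same:

```cpp
#include <opencv2/objdetect.hpp>

cv::CascadeClassifier classifier;
// Load the LBP data file instead of the Haar one; the path is relative to
// OpenCV's data directory in a typical installation.
classifier.load("lbpcascades/lbpcascade_frontalface_improved.xml");
// Detection is unchanged:
//   classifier.detectMultiScale(grayFrame, faces);
```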

  2. There are a number of other algorithms that can be used to detect facial landmarks in the OpenCV library. The majority of these can be found at https://docs.opencv.org/4.0.0/db/dd8/classcv_1_1face_1_1Facemark.html. Try them for yourself.

The functions listed at https://docs.opencv.org/4.0.0/d4/d48/namespacecv_1_1face.html can be used to create instances of the different algorithms. All these algorithms share the same API as the one we used in the chapter, so you can try them easily by changing only their creation statements.
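For example, switching to another facemark algorithm is roughly a one-line change; the model file name depends on the algorithm chosen and is a placeholder here:

```cpp
#include <opencv2/face.hpp>

// The factory functions in the cv::face namespace all return a
// cv::Ptr<cv::face::Facemark> with the same loadModel()/fit() API.
cv::Ptr<cv::face::Facemark> mark = cv::face::createFacemarkLBF();
// Alternatives: cv::face::createFacemarkAAM(), cv::face::createFacemarkKazemi()
mark->loadModel("lbfmodel.yaml");  // pretrained model for the chosen algorithm
```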

  3. How can a colored ornament be applied to faces?

In our project, both the video frame and the ornament are of the BGR format...

Chapter 5, Optical Character Recognition

  1. How is it possible to recognize characters in non-English languages with Tesseract?

Specify the corresponding language name when initializing the TessBaseAPI instance.
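A minimal sketch, using German ("deu") as an example; the matching traineddata file must be present in the tessdata path:

```cpp
#include <tesseract/baseapi.h>

tesseract::TessBaseAPI api;
// The second argument selects the language; nullptr for the first argument
// lets Tesseract locate the tessdata directory itself.
if (api.Init(nullptr, "deu") != 0) {
    // handle the initialization failure
}
```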

  2. When we used the EAST model to detect text areas, the detected areas are actually rotated rectangles, and we simply use their bounding rectangles instead. Is this always correct? If not, how can this approach be rectified?

It works, but it is not the best approach. We can copy the regions within the bounding boxes of the rotated rectangles into new images, and then rotate and crop them to transform the rotated rectangles into regular rectangles. After that, we will generally get better output by sending the resulting regular rectangles to Tesseract to extract the text.
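The deskewing step can be sketched as follows, assuming the text area arrives as a cv::RotatedRect:

```cpp
#include <opencv2/imgproc.hpp>

// Rotate the whole image so that the detected box becomes axis-aligned,
// then cut out the patch around its center.
cv::Mat deskew(const cv::Mat& image, const cv::RotatedRect& box)
{
    cv::Mat rotation = cv::getRotationMatrix2D(box.center, box.angle, 1.0);
    cv::Mat rotated;
    cv::warpAffine(image, rotated, rotation, image.size(), cv::INTER_CUBIC);
    cv::Mat patch;
    cv::getRectSubPix(rotated, box.size, box.center, patch);
    return patch;  // an axis-aligned rectangle, ready for Tesseract
}
```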

  3. Try to figure out a way to allow users to adjust the selected region after dragging the mouse...

Chapter 6, Object Detection in Real Time

  1. When we trained the cascade classifier for the faces of Boston bulls, we annotated the dog faces on each image by ourselves. The annotation process was very time-consuming. There is a tarball of annotation data for that dataset on its website: http://vision.stanford.edu/aditya86/ImageNetDogs/annotation.tar. Is it possible to generate the info.txt file from this annotation data by using a piece of code? How can this be done?

The annotation data in that tarball relates to the dogs' bodies, and not to the dogs' faces. So, we can't use it to train a classifier for the dogs' faces. However, if you want to train a classifier for the full bodies of the dogs, this can help. The data in that tarball is stored in XML format, and the annotation rectangles are the nodes with the //annotation/object/bndbox path, which we can extract...
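The conversion can be sketched with a naive tag search over one annotation file; a real converter should use a proper XML parser (for example, Qt's QXmlStreamReader), and the helper names below are hypothetical:

```cpp
#include <sstream>
#include <string>

// Extract the integer value of the first <tag>...</tag> occurrence.
// Assumes the tag is present in well-formed annotation data.
int tagValue(const std::string& xml, const std::string& tag)
{
    const std::string open = "<" + tag + ">";
    const std::size_t pos = xml.find(open);
    return std::stoi(xml.substr(pos + open.size()));
}

// Produce one info.txt line in the format the OpenCV cascade trainer
// expects: "<image> <count> <x> <y> <width> <height>".
std::string toInfoLine(const std::string& image, const std::string& xml)
{
    const int xmin = tagValue(xml, "xmin"), ymin = tagValue(xml, "ymin");
    const int xmax = tagValue(xml, "xmax"), ymax = tagValue(xml, "ymax");
    std::ostringstream line;
    line << image << " 1 " << xmin << " " << ymin << " "
         << (xmax - xmin) << " " << (ymax - ymin);
    return line.str();
}
```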

Chapter 7, Real-Time Car Detection and Distance Measurement

  1. Is there a better reference object when measuring the distance between cars?

There are many classes in the COCO dataset whose objects generally have fixed positions; for instance, traffic lights, fire hydrants, and stop signs. We can find some of them in our camera's view, choose any two, measure the distance between them, and then use the chosen objects and their distance as references.
