Images are generally produced using a digital camera, which captures a scene by projecting the light passing through its lens onto an image sensor. The fact that an image is formed by the projection of a 3D scene onto a 2D plane implies the existence of important relationships between a scene and its image, and between different images of the same scene. Projective geometry is the tool used to describe and characterize, in mathematical terms, the process of image formation. In this chapter, we will introduce you to some of the fundamental projective relations that exist in multiview imagery and explain how they can be used in computer vision programming. You will learn how matching can be made more accurate through the use of projective constraints, and how a mosaic can be composited from multiple images using two-view relations. Before...
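To make the projection idea concrete, here is a minimal sketch of the pinhole model: a 3D point expressed in camera coordinates maps to the image plane by dividing by its depth. The focal length and the sample points are illustrative values, not taken from the book's examples.

```python
def project(point_3d, f=1.0):
    """Project a 3D camera-frame point (X, Y, Z) onto the image plane.

    Under the pinhole model, the image of (X, Y, Z) is (f*X/Z, f*Y/Z),
    where f is the focal length.
    """
    X, Y, Z = point_3d
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (f * X / Z, f * Y / Z)

# Two points on the same ray through the camera centre project to the
# same image location: the depth information is lost in the projection.
print(project((1.0, 2.0, 4.0)))   # (0.25, 0.5)
print(project((2.0, 4.0, 8.0)))   # (0.25, 0.5)
```

This depth ambiguity is exactly what the two-view relations in this chapter help to resolve.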
You're reading from OpenCV 4 Computer Vision Application Programming Cookbook - Fourth Edition
Computing the fundamental matrix of an image pair
In this recipe, we will explore the projective relationship that exists between two images displaying the same scene. These two images could have been obtained by moving a camera to two different locations to take pictures from two viewpoints, or by using two cameras, each taking its own picture of the scene. When these two cameras are separated by a rigid baseline, we use the term stereovision.
Getting ready
Let's now consider two cameras observing a given scene point, as shown in the following diagram:
We learned that we can find the image x of a 3D point X by tracing a line joining this 3D point with the camera's center. Conversely, the scene point...
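The constraint that the fundamental matrix encodes can be sketched with a small numerical example. For a rectified stereo pair (pure horizontal baseline, identical cameras), the fundamental matrix reduces to the skew-symmetric matrix of the translation t = (1, 0, 0), so the epipolar constraint x'ᵀFx = 0 simply says that matching points lie on the same image row. The matrix and point coordinates below are illustrative, not from the book's code.

```python
# Fundamental matrix of an idealized rectified stereo pair:
# the skew-symmetric matrix of t = (1, 0, 0).
F = [[0.0, 0.0,  0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0,  0.0]]

def epipolar_line(F, x):
    """l' = F x: the line in the second image on which the match must lie."""
    return [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]

def on_line(l, x, tol=1e-9):
    """A homogeneous point x lies on the line l when their dot product is 0."""
    return abs(sum(a * b for a, b in zip(l, x))) < tol

x   = [120.0, 80.0, 1.0]   # point in the first image
xp  = [95.0,  80.0, 1.0]   # candidate match: same row, shifted by disparity
bad = [95.0,  60.0, 1.0]   # candidate on a different row

l = epipolar_line(F, x)
print(on_line(l, xp))    # True: satisfies the epipolar constraint
print(on_line(l, bad))   # False: cannot be the match of x
```

In the general (unrectified) case, F is estimated from point matches, for example with OpenCV's `cv::findFundamentalMat`, but the constraint is checked in exactly the same way.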
Matching images using a random sample consensus
When two cameras observe the same scene, they see the same elements but under different viewpoints. We have already studied the feature point matching problem in Chapter 8, Detecting Interest Points. In this recipe, we come back to this problem, and we will learn how to exploit the epipolar constraint between two views to match image features more reliably.
The principle we will follow is simple: when matching feature points between two images, we only accept those matches that fall on the corresponding epipolar lines. However, to check this condition, the fundamental matrix must be known, and we need good matches to estimate this matrix. This seems to be a chicken-and-egg problem. In this recipe, however, we propose a solution in which the fundamental matrix and a set of good matches will be jointly computed...
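The joint estimation relies on the RANSAC (random sample consensus) loop: repeatedly fit a model to a minimal random sample of the data, count how many observations agree with it, and keep the model with the largest consensus set. The sketch below illustrates this loop on simple 2D line fitting for brevity; for the fundamental matrix, the minimal sample of two points becomes eight matches and the line fit becomes the 8-point algorithm. All thresholds and data are illustrative.

```python
import random

def ransac_line(points, n_iters=200, tol=0.1, seed=42):
    """Fit a line y = a*x + b to points contaminated by outliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        # 1. Draw a minimal random sample and fit a candidate model.
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, cannot fit a slope
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # 2. Count the points that support this candidate.
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        # 3. Keep the candidate with the largest consensus set.
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Ten points on y = 2x + 1, plus two gross outliers.
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(2.0, 9.0), (5.0, -3.0)]
model, inliers = ransac_line(pts)
print(model)          # close to (2.0, 1.0)
print(len(inliers))   # 10: the two outliers are rejected
```

The key property, which carries over to fundamental matrix estimation, is that the model is eventually fitted from a sample containing only inliers, so outlying matches cannot corrupt it; they are simply left out of the consensus set.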
Computing a homography between two images
The second recipe of this chapter showed you how to compute the fundamental matrix of an image pair from a set of matches. Another very useful mathematical entity exists in projective geometry: the homography. It can also be computed from multiview imagery and, as we will see, is a matrix with special properties.
Getting ready
Again, let's consider the projective relation between a 3D point and its image on a camera, which we introduced in the first recipe of this chapter. Basically, we learned that this equation relates a 3D point with its image using the intrinsic properties of the camera and the position of this camera (specified with a rotation and a translation component). If we...
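Before deriving the homography itself, it helps to see how a 3x3 homography transfers points between two views: the point is written in homogeneous coordinates, multiplied by the matrix, and de-homogenized by dividing by the third coordinate. The matrix H below is an illustrative similarity transform (rotation by 30 degrees, scale 2, and a translation) written as a homography; any invertible 3x3 matrix is applied the same way.

```python
import math

def apply_homography(H, pt):
    """Map a 2D point (x, y) through the 3x3 homography H."""
    x, y = pt
    xh = [H[0][0] * x + H[0][1] * y + H[0][2],
          H[1][0] * x + H[1][1] * y + H[1][2],
          H[2][0] * x + H[2][1] * y + H[2][2]]
    # De-homogenize: divide by the third coordinate.
    return (xh[0] / xh[2], xh[1] / xh[2])

# Illustrative homography: rotate by 30 degrees, scale by 2, translate.
c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
H = [[2 * c, -2 * s, 10.0],
     [2 * s,  2 * c, 20.0],
     [0.0,    0.0,    1.0]]

print(apply_homography(H, (1.0, 0.0)))   # approximately (11.73, 21.0)
```

Since the bottom row here is (0, 0, 1), the division by the third coordinate is trivial; for a general homography between two views of a plane, that division is what produces the perspective distortion.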
Detecting planar targets in an image
In the previous recipe, we explained how homographies can be used to stitch together images separated by a pure rotation in order to create a panorama. There, we also learned that different images of a plane are related by a homography. We will now see how we can use this fact to recognize a planar object in an image.
How to do it...
Suppose you want to detect a planar object in an image. This object could be a poster, a painting, signage, or a book cover (as in the following example). Based on what we learned in this chapter, the strategy consists of detecting feature points on this object and trying to match them with the feature points...
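Once a homography from the object to the scene has been estimated from the matches (for example with OpenCV's `cv::findHomography` using its RANSAC option), a common sanity check is to map the object's four corners into the scene and verify that they still form a convex quadrilateral; a valid perspective view of a plane cannot turn its rectangular outline into a self-intersecting shape. The helper below is a self-contained sketch of that check; the homography H and the corner coordinates are illustrative values.

```python
def apply_h(H, pt):
    """Map (x, y) through a 3x3 homography and de-homogenize."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def is_convex(quad):
    """Cross products of consecutive edges must all share the same sign."""
    signs = []
    for i in range(4):
        (x0, y0), (x1, y1), (x2, y2) = quad[i], quad[(i + 1) % 4], quad[(i + 2) % 4]
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        signs.append(cross > 0)
    return all(signs) or not any(signs)

# Corners of a 100x150 book cover in its own image, and an illustrative
# homography with a gentle perspective component in its bottom row.
corners = [(0.0, 0.0), (100.0, 0.0), (100.0, 150.0), (0.0, 150.0)]
H = [[0.9,   0.1,  40.0],
     [-0.1,  0.8,  60.0],
     [0.0002, 0.0001, 1.0]]

projected = [apply_h(H, c) for c in corners]
print(is_convex(projected))   # True: the detection is geometrically plausible
```

Detections whose projected outline folds over itself, which happens routinely when the homography was fitted to spurious matches, can be rejected by this test before any further processing.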