OpenCV Computer Vision Application Programming Cookbook, Second Edition

Product type: Book
Published: Aug 2014
Publisher: Packt
ISBN-13: 9781782161486
Pages: 374
Author: Robert Laganiere

Table of Contents (18 chapters)

OpenCV Computer Vision Application Programming Cookbook Second Edition
Credits
About the Author
About the Reviewers
www.PacktPub.com
Preface
1. Playing with Images
2. Manipulating Pixels
3. Processing Color Images with Classes
4. Counting the Pixels with Histograms
5. Transforming Images with Morphological Operations
6. Filtering the Images
7. Extracting Lines, Contours, and Components
8. Detecting Interest Points
9. Describing and Matching Interest Points
10. Estimating Projective Relations in Images
11. Processing Video Sequences
Index

Chapter 11. Processing Video Sequences

In this chapter, we will cover the following recipes:

  • Reading video sequences

  • Processing the video frames

  • Writing video sequences

  • Tracking feature points in a video

  • Extracting the foreground objects in a video

Introduction


Video signals constitute a rich source of visual information. They are made of a sequence of images, called frames, that are taken at regular time intervals (specified as the frame rate, generally expressed in frames per second) and show a scene in motion. With the advent of powerful computers, it is now possible to perform advanced visual analysis on video sequences—sometimes at rates close to, or even faster than, the actual video frame rate. This chapter will show you how to read, process, and store video sequences.

We will see that once the individual frames of a video sequence have been extracted, the different image processing functions presented in this book can be applied to each of them. In addition, we will look at a few algorithms that perform a temporal analysis of the video sequence: comparing adjacent frames to track objects, or accumulating image statistics over time in order to extract foreground objects.

Reading video sequences


In order to process a video sequence, we need to be able to read each of its frames. OpenCV has put in place an easy-to-use framework that can help us perform frame extraction from video files or even from USB or IP cameras. This recipe shows you how to use it.

How to do it...

Basically, all you need to do in order to read the frames of a video sequence is create an instance of the cv::VideoCapture class. You then create a loop that will extract and read each video frame. Here is a basic main function that displays the frames of a video sequence:

int main()
{
  // Open the video file
  cv::VideoCapture capture("bike.avi");
  // check if video successfully opened
  if (!capture.isOpened())
    return 1;

  // Get the frame rate
  double rate= capture.get(CV_CAP_PROP_FPS);

  bool stop(false);
  cv::Mat frame; // current video frame
  cv::namedWindow("Extracted Frame");

  // Delay between each frame in ms
  // corresponds to video frame rate
  int delay= 1000/rate;

  // for all frames in video
  while (!stop) {

    // read next frame if any
    if (!capture.read(frame))
      break;

    cv::imshow("Extracted Frame",frame);

    // introduce a delay
    // or press key to stop
    if (cv::waitKey(delay)>=0)
      stop= true;
  }

  // Close the video file
  capture.release();

  return 0;
}

Processing the video frames


In this recipe, our objective is to apply some processing function to each of the frames of a video sequence. We will do this by encapsulating the OpenCV video capture framework into our own class. Among other things, this class will allow us to specify a function that will be called each time a new frame is extracted.

How to do it...

What we want is to be able to specify a processing function (a callback function) that will be called for each frame of a video sequence. This function can be defined as receiving a cv::Mat instance and outputting a processed frame. Therefore, in our framework, the processing function must have the following signature to be a valid callback:

void processFrame(cv::Mat& img, cv::Mat& out);

As an example of such a processing function, consider the following simple function that computes the Canny edges of an input image:

void canny(cv::Mat& img, cv::Mat& out) {
  // Convert to gray
  if (img.channels()==3)
    cv::cvtColor(img,out,CV_BGR2GRAY);
  // Compute Canny edges
  cv::Canny(out,out,100,200);
  // Invert the image
  cv::threshold(out,out,128,255,cv::THRESH_BINARY_INV);
}

Writing video sequences


In the previous recipes, we learned how to read a video file and extract its frames. This recipe will show you how to write frames and, therefore, create a video file. This will allow us to complete the typical video-processing chain: reading an input video stream, processing its frames, and then storing the results in a new video file.

How to do it...

Writing video files in OpenCV is done using the cv::VideoWriter class. An instance is opened by specifying the filename, the codec to be used, the frame rate at which the generated video should play, the size of each frame, and whether or not the video will be created in color:

writer.open(outputFile, // filename
    codec,          // codec to be used 
    framerate,      // frame rate of the video
    frameSize,      // frame size
    isColor);       // color video?

In addition, you must specify the way you want the video data to be saved. This is the codec argument; this will be discussed at the end of this recipe.

Once the video file...

Tracking feature points in a video


This chapter is about reading, writing, and processing video sequences. The objective is to be able to analyze a complete video sequence. As an example, in this recipe, you will learn how to perform temporal analysis of the sequence in order to track feature points as they move from frame to frame.

How to do it...

To start the tracking process, the first thing to do is to detect the feature points in an initial frame. You then try to track these points in the next frame. Obviously, since we are dealing with a video sequence, there is a good chance that the object on which the feature points are found has moved (this motion can also be due to camera movement). Therefore, you must search around a point's previous location in order to find its new location in the next frame. This is what the cv::calcOpticalFlowPyrLK function accomplishes. You input two consecutive frames and a vector of feature points in the first image; the function returns a vector of new point positions in the second image.

Extracting the foreground objects in a video


When a fixed camera observes a scene, the background remains mostly unchanged. In this case, the interesting elements are the moving objects that evolve inside this scene. In order to extract these foreground objects, we need to build a model of the background, and then compare this model with a current frame in order to detect any foreground objects. This is what we will do in this recipe. Foreground extraction is a fundamental step in intelligent surveillance applications.

If we had an image of the background of the scene (that is, a frame that contains no foreground objects) at our disposal, then it would be easy to extract the foreground of a current frame through a simple image difference:

  // compute difference between current image and background
  cv::absdiff(backgroundImage,currentImage,foreground);

Each pixel for which this difference is high enough would then be declared a foreground pixel. However, most of the time, this background...
