Python Multimedia: Video Format Conversion, Manipulations and Effects

by Ninad Sathaye | December 2010 | Open Source

Photographs capture the moment, but it is video that helps us relive that moment! Video has become a major part of our lives. We preserve our memories by capturing the family vacation on a camcorder. When it comes to digitally preserving those recorded memories, digital video processing plays an important role. We will use GStreamer to learn the fundamentals of video processing.

In this article by Ninad Sathaye, author of the book Python Multimedia Beginner's Guide, we shall:

  • Develop a simple command-line video player
  • Perform basic video manipulations such as cropping and resizing, and tweak parameters such as the brightness, contrast, and saturation levels of a streaming video
  • Learn how to convert video between different video formats

So let's get on with it.

Installation prerequisites

We will use the Python bindings of the GStreamer multimedia framework to process video data. See Python Multimedia: Working with Audios for instructions on installing GStreamer and the other dependencies.

For video processing, we will be using several GStreamer plugins not introduced earlier. Make sure these plugins are available in your GStreamer installation by running the gst-inspect-0.10 command from the console (gst-inspect-0.10.exe for Windows XP users). Otherwise, you will need to install them or use an alternative if one is available.
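
For example, to verify that the autoconvert plugin is installed, pass its name to gst-inspect. The command prints the element's details if it is available and reports an error otherwise:

$gst-inspect-0.10 autoconvert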

Following is a list of additional plugins we will use in this article:

  • autoconvert: Determines an appropriate converter based on the capabilities. It is used extensively throughout this article.
  • autovideosink: Automatically selects a video sink to display a streaming video.
  • ffmpegcolorspace: Transforms the color space into a color space format that can be displayed by the video sink.
  • capsfilter: The capabilities filter, used to restrict the type of media data passing downstream; discussed extensively in this article.
  • textoverlay: Overlays a text string on the streaming video.
  • timeoverlay: Adds a timestamp on top of the video buffer.
  • clockoverlay: Puts current clock time on the streaming video.
  • videobalance: Used to adjust brightness, contrast, and saturation of the images. It is used in the Video manipulations and effects section.
  • videobox: Crops the video frames by a specified number of pixels; used in the Cropping section.
  • ffmux_mp4: Provides muxer element for MP4 video muxing.
  • ffenc_mpeg4: Encodes data into MPEG4 format.
  • ffenc_png: Encodes data in PNG format.
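
If you prefer to run this check from Python rather than from the console, the following short sketch does the same thing using the pygst 0.10 bindings used throughout this article. Here, gst.element_factory_find returns None when an element factory is not available:

import pygst
pygst.require("0.10")
import gst

# Element factories this article depends on
required = ["autoconvert", "autovideosink", "ffmpegcolorspace",
    "capsfilter", "textoverlay", "timeoverlay", "clockoverlay",
    "videobalance", "videobox", "ffmux_mp4", "ffenc_mpeg4",
    "ffenc_png"]

for name in required:
    if gst.element_factory_find(name) is None:
        print "Missing plugin: %s" % name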

Playing a video

Earlier, we saw how to play audio. Like audio, there are different ways in which a video can be streamed. The simplest of these is to use the playbin plugin. Another method is to go by the basics, where we create a conventional pipeline and create and link the required pipeline elements. If we only want to play the 'video' track of a video file, then the latter technique is very similar to the one illustrated for audio playback. However, one would almost always like to hear the audio track of the video being streamed, and there is additional work involved to accomplish this. The following diagram is a representative GStreamer pipeline that shows how the data flows in the case of video playback.

[Diagram: a representative GStreamer pipeline for video playback]

In this illustration, the decodebin uses an appropriate decoder to decode the media data from the source element. Depending on the type of data (audio or video), it is then streamed further to the audio or video processing elements through the queue elements. The two queue elements, queue1 and queue2, act as media data buffers for audio and video data respectively. When the queue elements are added and linked in the pipeline, thread creation within the pipeline is handled internally by GStreamer.

Time for action – video player!

Let's write a simple video player utility. Here we will not use the playbin plugin; its use will be illustrated in a later sub-section. We will develop this utility by constructing a GStreamer pipeline. The key here is to use queue elements as data buffers: the audio and video data needs to be directed so that it 'flows' through the audio or video processing sections of the pipeline respectively.

  1. Download the file PlayingVideo.py from the Packt website. The file has the source code for this video player utility.
  2. The following code gives an overview of the Video player class and its methods.

    import time
    import thread
    import gobject
    import pygst
    pygst.require("0.10")
    import gst
    import os

    class VideoPlayer:
        def __init__(self):
            pass
        def constructPipeline(self):
            pass
        def connectSignals(self):
            pass
        def decodebin_pad_added(self, decodebin, pad):
            pass
        def play(self):
            pass
        def message_handler(self, bus, message):
            pass

    # Run the program
    player = VideoPlayer()
    thread.start_new_thread(player.play, ())
    gobject.threads_init()
    evt_loop = gobject.MainLoop()
    evt_loop.run()

    As you can see, the overall structure of the code and the main program execution code remain the same as in the audio processing examples. The thread module is used to create a new thread for playing the video. The method VideoPlayer.play is run on this thread. gobject.threads_init() is an initialization function that facilitates the use of Python threading within the gobject modules. The main event loop for executing this program is created using gobject, and the loop is started by the call evt_loop.run().

    Instead of the thread module, you can use the threading module as well. The code to use it will be something like:

    1. import threading
    2. threading.Thread(target=player.play).start()

    You will need to replace the line thread.start_new_thread(player.play, ()) in the earlier code snippet with line 2 illustrated in this note. Try it yourself!

  3. Now let's discuss a few of the important methods, starting with self.constructPipeline:

    1 def constructPipeline(self):
    2     # Create the pipeline instance
    3     self.player = gst.Pipeline()
    4
    5     # Define pipeline elements
    6     self.filesrc = gst.element_factory_make("filesrc")
    7     self.filesrc.set_property("location",
    8         self.inFileLocation)
    9     self.decodebin = gst.element_factory_make("decodebin")
    10
    11     # audioconvert for audio processing pipeline
    12     self.audioconvert = gst.element_factory_make(
    13         "audioconvert")
    14     # Autoconvert element for video processing
    15     self.autoconvert = gst.element_factory_make(
    16         "autoconvert")
    17     self.audiosink = gst.element_factory_make(
    18         "autoaudiosink")
    19
    20     self.videosink = gst.element_factory_make(
    21         "autovideosink")
    22
    23     # As a precaution, add a video capability filter
    24     # in the video processing pipeline.
    25     videocap = gst.Caps("video/x-raw-yuv")
    26     self.filter = gst.element_factory_make("capsfilter")
    27     self.filter.set_property("caps", videocap)
    28     # Converts the video from one colorspace to another
    29     self.colorSpace = gst.element_factory_make(
    30         "ffmpegcolorspace")
    31
    32     self.videoQueue = gst.element_factory_make("queue")
    33     self.audioQueue = gst.element_factory_make("queue")
    34
    35     # Add elements to the pipeline
    36     self.player.add(self.filesrc,
    37         self.decodebin,
    38         self.autoconvert,
    39         self.audioconvert,
    40         self.videoQueue,
    41         self.audioQueue,
    42         self.filter,
    43         self.colorSpace,
    44         self.audiosink,
    45         self.videosink)
    46
    47     # Link elements in the pipeline.
    48     gst.element_link_many(self.filesrc, self.decodebin)
    49
    50     gst.element_link_many(self.videoQueue, self.autoconvert,
    51         self.filter, self.colorSpace,
    52         self.videosink)
    53
    54     gst.element_link_many(self.audioQueue, self.audioconvert,
    55         self.audiosink)

  4. In various audio processing applications, we have used several of the elements defined in this method. First, the pipeline object, self.player, is created. The self.filesrc element specifies the input video file. This element is connected to a decodebin.
  5. On line 15, the autoconvert element is created. It is a GStreamer bin that automatically selects a converter based on the capabilities (caps). It translates the decoded data coming out of the decodebin into a format playable by the video device. Note that before reaching the video sink, this data travels through a capsfilter and the ffmpegcolorspace converter. The capsfilter element is defined on line 26. It is a filter that restricts the allowed capabilities, that is, the type of media data that will pass through it. In this case, the videocap object defined on line 25 instructs the filter to only allow video/x-raw-yuv capabilities.
  6. The ffmpegcolorspace is a plugin that can convert video frames to a different color space format. At this point, it is necessary to explain what a color space is. A variety of colors can be created by the use of basic colors; such colors form what we call a color space. A common example is the RGB color space, where a range of colors can be created using a combination of red, green, and blue. A color space conversion re-represents a video frame or an image from one color space in another, in such a way that the converted video frame or image is a close representation of the original.

    The video can be streamed even without using the combination of the capsfilter and ffmpegcolorspace. However, the video may appear distorted, so it is recommended to use the capsfilter and ffmpegcolorspace converter. Try linking the autoconvert element directly to the autovideosink to see if it makes any difference.

  7. Notice that we have created two sinks, one for audio output and the other for the video. The two queue elements are created on lines 32 and 33. As mentioned earlier, these act as media data buffers and are used to send the data to audio and video processing portions of the GStreamer pipeline. The code block 35-45 adds all the required elements to the pipeline.
  8. Next, the various elements in the pipeline are linked. As we already know, the decodebin is a plugin that determines the right type of decoder to use. This element uses dynamic pads. While developing the audio processing utilities, we connected the pad-added signal from the decodebin to a method decodebin_pad_added. We will do the same thing here; however, the contents of that method will be different. We will discuss it later. A sketch of the signal connections follows.
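
    The connectSignals method itself is not listed here. The following is a minimal sketch of what it needs to do, based on the signals discussed in this section; the exact body in PlayingVideo.py may differ slightly.

    def connectSignals(self):
        # Watch the pipeline bus for error and EOS messages
        bus = self.player.get_bus()
        bus.add_signal_watch()
        bus.connect("message", self.message_handler)
        # Link the dynamic decodebin pads as they are created
        self.decodebin.connect("pad-added",
            self.decodebin_pad_added)
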
  9. On lines 50-52, the video processing portion of the pipeline is linked. The self.videoQueue receives the video data from the decodebin. It is linked to the autoconvert element discussed earlier. The capsfilter allows only video/x-raw-yuv data to stream further. The capsfilter is linked to the ffmpegcolorspace element, which converts the data into a different color space. Finally, the data is streamed to the videosink, which, in this case, is an autovideosink element. This enables the 'viewing' of the input video.
  10. Now we will review the decodebin_pad_added method.

    1 def decodebin_pad_added(self, decodebin, pad):
    2     compatible_pad = None
    3     caps = pad.get_caps()
    4     name = caps[0].get_name()
    5     print "\n cap name is = %s" % name
    6     if name[:5] == 'video':
    7         compatible_pad = (
    8             self.videoQueue.get_compatible_pad(pad, caps))
    9     elif name[:5] == 'audio':
    10         compatible_pad = (
    11             self.audioQueue.get_compatible_pad(pad, caps))
    12
    13     if compatible_pad:
    14         pad.link(compatible_pad)

  11. This method captures the pad-added signal, emitted when the decodebin creates a dynamic pad. Here, the media data can represent either audio or video data. Thus, when a dynamic pad is created on the decodebin, we must check what caps this pad has. The get_name method of the caps object returns the type of media data handled. For example, the name can be of the form video/x-raw-rgb for video data or audio/x-raw-int for audio data. We just check the first five characters to see if it is a video or audio media type. This is done by the code block on lines 4-11 in the code snippet. The decodebin pad with a video media type is linked with the compatible pad on the self.videoQueue element. Similarly, the pad with audio caps is linked with the one on self.audioQueue.
  12. Review the rest of the code from PlayingVideo.py. Make sure you specify an appropriate video file path for the variable self.inFileLocation, and then run this program from the command prompt as:

    $python PlayingVideo.py

    This should open a GUI window where the video will be streamed. The audio output will be synchronized with the playing video.

What just happened?

We created a command-line video player utility. We learned how to create a GStreamer pipeline that can play synchronized audio and video streams, and saw how the queue element can be used to buffer the audio and video data in a pipeline. In this example, the use of GStreamer plugins such as capsfilter and ffmpegcolorspace was illustrated. The knowledge gained in this section will be applied in the upcoming sections of this article.

Playing video using 'playbin'

The goal of the previous section was to introduce you to the fundamental method of processing input video streams. We will use that method in one way or another in later discussions. If video playback is all that you want, then the simplest way to accomplish it is by means of the playbin plugin. The video can be played just by replacing the VideoPlayer.constructPipeline method in the file PlayingVideo.py with the following code. Here, self.player is a playbin element. The uri property of playbin is set to the input video file path.

def constructPipeline(self):
    self.player = gst.element_factory_make("playbin")
    self.player.set_property("uri",
        "file:///" + self.inFileLocation)

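Note that playbin expects a well-formed URI. If the input file path contains spaces or other special characters, it is safer to build the URI with the standard library instead of plain string concatenation. A small sketch, assuming Python 2's urllib module:

import urllib
uri = "file://" + urllib.pathname2url(self.inFileLocation)
self.player.set_property("uri", uri)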

Video format conversion

Saving a video in a different file format is a frequently performed task; consider, for example, converting footage recorded on your camcorder to a format playable on a DVD player. So let's list the elements we need in a pipeline to carry out the video format conversion.

  • A filesrc element to stream the video file and a decodebin to decode the encoded input media data.
  • Next, the audio processing elements of the pipeline, such as audioconvert, an encoder to encode the raw audio data into an appropriate audio format to be written.
  • The video processing elements of the pipeline, such as a video encoder element to encode the video data.
  • A multiplexer or a muxer that takes the encoded audio and video data streams and puts them into a single channel.
  • There needs to be an element that, depending on the media type, can send the media data to the appropriate processing unit. This is accomplished by queue elements that act as data buffers. Depending on whether it is audio or video data, it is streamed to the audio or video processing elements. A queue is also needed to stream the encoded data from the audio pipeline to the multiplexer.
  • Finally, a filesink element to save the converted video file (containing both audio and video tracks).
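
Before building this pipeline in Python, it can be helpful to prototype it from the console. The following gst-launch-0.10 command is a rough sketch of the OGG case using the elements listed above; treat it as an illustration rather than a tested recipe, since exact element availability depends on your installation (input.avi and output.ogg are placeholder file names):

$gst-launch-0.10 filesrc location=input.avi ! decodebin name=dec \
    dec. ! queue ! ffmpegcolorspace ! theoraenc ! oggmux name=mux \
    dec. ! queue ! audioconvert ! vorbisenc ! queue ! mux. \
    mux. ! filesink location=output.ogg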

Time for action – video format converter

We will create a video conversion utility that will convert an input video file into a format specified by the user. The file you need to download from the Packt website is VideoConverter.py. This file can be run from the command line as:

python VideoConverter.py [options]

The options are as follows:

  • --input_path: The full path of the video file we wish to convert. The format of the input file must be in the list of supported formats. The supported input formats are MP4, OGG, AVI, and MOV.
  • --output_path: The full path of the output video file. If not specified, it will create a folder OUTPUT_VIDEOS within the input directory and save the file there with same name.
  • --output_format: The format of the output video file. The supported output formats are OGG and MP4.

As we will be using a decodebin element for decoding the input media data, there is actually a wider range of input formats this utility can handle. To accept them, modify the code in VideoConverter.processArgs or add more formats to the tuple VideoConverter.supportedInputFormats.

  1. If not done already, download the file VideoConverter.py from the Packt website.
  2. The overall structure of the code is:

    import os, sys, time
    import thread
    import getopt, glob
    import gobject
    import pygst
    pygst.require("0.10")
    import gst

    class VideoConverter:
        def __init__(self):
            pass
        def constructPipeline(self):
            pass
        def connectSignals(self):
            pass
        def decodebin_pad_added(self, decodebin, pad):
            pass
        def processArgs(self):
            pass
        def printUsage(self):
            pass
        def printFinalStatus(self, starttime, endtime):
            pass
        def convert(self):
            pass
        def message_handler(self, bus, message):
            pass

    # Run the converter
    converter = VideoConverter()
    thread.start_new_thread(converter.convert, ())
    gobject.threads_init()
    evt_loop = gobject.MainLoop()
    evt_loop.run()

    A new thread is created by calling thread.start_new_thread to run the application. The method VideoConverter.convert is run on this thread. It is similar to the VideoPlayer.play method discussed earlier. Let's review some key methods of the class VideoConverter.

  3. The __init__ method contains the initialization code. It also calls methods to process command-line arguments and then build the pipeline. The code is illustrated as follows:

    1 def __init__(self):
    2     # Initialize various attrs
    3     self.inFileLocation = ""
    4     self.outFileLocation = ""
    5     self.inputFormat = "ogg"
    6     self.outputFormat = ""
    7     self.error_message = ""
    8     # Create dictionary objects for
    9     # Audio / Video encoders for supported
    10     # file format
    11     self.audioEncoders = {"mp4": "lame",
    12         "ogg": "vorbisenc"}
    13
    14     self.videoEncoders = {"mp4": "ffenc_mpeg4",
    15         "ogg": "theoraenc"}
    16
    17     self.muxers = {"mp4": "ffmux_mp4",
    18         "ogg": "oggmux"}
    19
    20     self.supportedOutputFormats = self.audioEncoders.keys()
    21
    22     self.supportedInputFormats = ("ogg", "mp4",
    23         "avi", "mov")
    24
    25     self.pipeline = None
    26     self.is_playing = False
    27
    28     self.processArgs()
    29     self.constructPipeline()
    30     self.connectSignals()

    To process the video file, we need audio and video encoders. This utility supports conversion to only the MP4 and OGG file formats, but it can easily be extended to include more formats by adding the appropriate encoder and muxer plugins. The values of the self.audioEncoders and self.videoEncoders dictionary objects specify the encoders to use for the streaming audio and video data respectively. Therefore, to store the video data in MP4 format, we use the ffenc_mpeg4 encoder. The encoders illustrated in the code snippet should be a part of the GStreamer installation on your computer. If not, visit the GStreamer website to find out how to install these plugins. The values of the dictionary self.muxers represent the multiplexer to use for a specific output format.
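
    For instance, to add a hypothetical Matroska (MKV) output option, you would add a matching entry to each of the three dictionaries. The sketch below assumes the matroskamux plugin is available in your GStreamer installation:

    self.audioEncoders["mkv"] = "vorbisenc"
    self.videoEncoders["mkv"] = "theoraenc"
    self.muxers["mkv"] = "matroskamux"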

  4. The constructPipeline method does the main conversion job. It builds the required pipeline, which is then set to playing state in the convert method.

    1 def constructPipeline(self):
    2     self.pipeline = gst.Pipeline("pipeline")
    3
    4     self.filesrc = gst.element_factory_make("filesrc")
    5     self.filesrc.set_property("location",
    6         self.inFileLocation)
    7
    8     self.filesink = gst.element_factory_make("filesink")
    9     self.filesink.set_property("location",
    10         self.outFileLocation)
    11
    12     self.decodebin = gst.element_factory_make("decodebin")
    13     self.audioconvert = gst.element_factory_make(
    14         "audioconvert")
    15
    16     audio_encoder = self.audioEncoders[self.outputFormat]
    17     muxer_str = self.muxers[self.outputFormat]
    18     video_encoder = self.videoEncoders[self.outputFormat]
    19
    20     self.audio_encoder = gst.element_factory_make(
    21         audio_encoder)
    22     self.muxer = gst.element_factory_make(muxer_str)
    23     self.video_encoder = gst.element_factory_make(
    24         video_encoder)
    25
    26     self.videoQueue = gst.element_factory_make("queue")
    27     self.audioQueue = gst.element_factory_make("queue")
    28     self.queue3 = gst.element_factory_make("queue")
    29
    30     self.pipeline.add(self.filesrc,
    31         self.decodebin,
    32         self.video_encoder,
    33         self.muxer,
    34         self.videoQueue,
    35         self.audioQueue,
    36         self.queue3,
    37         self.audioconvert,
    38         self.audio_encoder,
    39         self.filesink)
    40
    41     gst.element_link_many(self.filesrc, self.decodebin)
    42
    43     gst.element_link_many(self.videoQueue,
    44         self.video_encoder, self.muxer, self.filesink)
    45
    46     gst.element_link_many(self.audioQueue, self.audioconvert,
    47         self.audio_encoder, self.queue3,
    48         self.muxer)

    In an earlier section, we covered several of the elements used in this pipeline. The code on lines 43 to 48 establishes the linkage for the audio and video processing elements. On line 44, the multiplexer, self.muxer, is linked with the video encoder element. It puts the separate parts of the stream, in this case the video and audio data, into a single file. The data output from the audio encoder, self.audio_encoder, is streamed to the muxer via a queue element, self.queue3. The muxed data coming out of self.muxer is then streamed to self.filesink.

  5. Let's quickly review the VideoConverter.convert method.


    1 def convert(self):
    2     # Record time before beginning Video conversion
    3     starttime = time.clock()
    4
    5     print "\n Converting Video file.."
    6     print "\n Input File: %s, Conversion STARTED..." % \
    7         self.inFileLocation
    8
    9     self.is_playing = True
    10     self.pipeline.set_state(gst.STATE_PLAYING)
    11     while self.is_playing:
    12         time.sleep(1)
    13
    14     if self.error_message:
    15         print "\n Input File: %s, ERROR OCCURRED." % \
    16             self.inFileLocation
    17         print self.error_message
    18     else:
    19         print "\n Input File: %s, Conversion COMPLETE " % \
    20             self.inFileLocation
    21
    22     endtime = time.clock()
    23     self.printFinalStatus(starttime, endtime)
    24     evt_loop.quit()

    On line 10, the GStreamer pipeline built earlier is set to the playing state. When the conversion is complete, it will generate the End Of Stream (EOS) message. The self.is_playing flag is modified in the method self.message_handler. The while loop on line 11 executes until the EOS message is posted on the bus or some error occurs. Finally, on line 24, the main execution loop is terminated.

    On line 3, we make a call to time.clock(). This actually gives the CPU time spent on the process.
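
    The message_handler method is not listed here. The following is a minimal sketch of what it must do to end the while loop shown above; the actual body in VideoConverter.py may differ.

    def message_handler(self, bus, message):
        # Stop the conversion loop on end-of-stream or error
        if message.type == gst.MESSAGE_EOS:
            self.is_playing = False
        elif message.type == gst.MESSAGE_ERROR:
            # parse_error returns a GError and a debug string
            error, debug = message.parse_error()
            self.error_message = "Error: %s" % error
            self.is_playing = False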

  6. The other methods, such as VideoConverter.decodebin_pad_added, are identical to the ones developed in the Playing a video section. Review the remaining methods from the file VideoConverter.py and then run this utility, specifying appropriate command-line arguments. The following screenshot shows sample output messages when the program is run from the console window.

    [Screenshot: a sample run of the video conversion utility from the console]

What just happened?

We created another useful utility that can convert video files from one format to another. We learned how to encode the audio and video data into a desired output format and then use a multiplexer to put the two data streams into a single file.

Have a go hero – batch-convert the video files

The video converter developed in the previous sections can convert a single video file at a time. Can you make it a batch-processing utility? Refer to the code for the audio conversion utility developed in the Working with Audios article; the overall structure will be very similar. However, there could be challenges in converting multiple video files because of the use of queue elements. For example, when the conversion of the first file is done, the data in the queue may not be flushed before we start converting the next file. One crude way to address this would be to reconstruct the whole pipeline and connect the signals for each video file, but there is likely a more efficient way to do it. Think about it!

Video manipulations and effects

Suppose you have a video file that needs to be saved with an adjusted default brightness level. Alternatively, you may want to save another video with a different aspect ratio. In this section, we will learn some of the basic and most frequently performed operations on a video. We will develop code using Python and GStreamer for tasks such as resizing a video or adjusting its contrast level.

Resizing

The data that can flow through an element is described by the capabilities (caps) of a pad on that element. If a decodebin element is decoding video data, the capabilities of its dynamic pad will be described as, for instance, video/x-raw-yuv. Resizing a video with the GStreamer multimedia framework can be accomplished by using a capsfilter element that has the width and height parameters specified. As discussed earlier, the capsfilter element limits the media data type that can be transferred between two elements. For example, a caps object described by the string video/x-raw-yuv, width=800, height=600 will set the width of the video to 800 pixels and the height to 600 pixels.
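
Such a caps object can be built directly in Python before it is set on a capsfilter; a one-line illustration (the full pipeline code follows in the next section):

videocap = gst.Caps("video/x-raw-yuv, width=800, height=600")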

Time for action – resize a video

We will now see how to resize a streaming video using the width and height parameters described by a GStreamer cap object.

  1. Download the file VideoManipulations.py from the Packt website. The overall class design is identical to the one studied in the Playing a video section.
  2. The methods self.constructAudioPipeline() and self.constructVideoPipeline(), respectively, define and link elements related to audio and video portions of the main pipeline object self.player. As we have already discussed most of the audio/video processing elements in earlier sections, we will only review the constructVideoPipeline method here.

    1 def constructVideoPipeline(self):
    2     # Autoconvert element for video processing
    3     self.autoconvert = gst.element_factory_make(
    4         "autoconvert")
    5     self.videosink = gst.element_factory_make(
    6         "autovideosink")
    7
    8     # Set the capsfilter
    9     if self.video_width and self.video_height:
    10         videocap = gst.Caps(
    11             "video/x-raw-yuv, width=%d, height=%d" %
    12             (self.video_width, self.video_height))
    13     else:
    14         videocap = gst.Caps("video/x-raw-yuv")
    15
    16     self.capsFilter = gst.element_factory_make(
    17         "capsfilter")
    18     self.capsFilter.set_property("caps", videocap)
    19
    20     # Converts the video from one colorspace to another
    21     self.colorSpace = gst.element_factory_make(
    22         "ffmpegcolorspace")
    23
    24     self.videoQueue = gst.element_factory_make("queue")
    25
    26     self.player.add(self.videoQueue,
    27         self.autoconvert,
    28         self.capsFilter,
    29         self.colorSpace,
    30         self.videosink)
    31
    32     gst.element_link_many(self.videoQueue,
    33         self.autoconvert,
    34         self.capsFilter,
    35         self.colorSpace,
    36         self.videosink)

    The capsfilter element is defined on line 16. It is a filter that restricts the type of media data that will pass through it. The videocap is a GStreamer caps object created on line 10. This caps object specifies the width and height parameters of the streaming video. It is set as a property of the capsfilter, self.capsFilter, and instructs the filter to only stream video/x-raw-yuv data with the width and height specified by the videocap object.

    In the source file, you will see an additional element self.videobox linked in the pipeline. It is omitted in the above code snippet. We will see what this element is used for in the next section.

  3. The rest of the code is straightforward; we have already covered similar methods in earlier discussions. Develop the rest of the code by reviewing the file VideoManipulations.py. Make sure to specify an appropriate video file path for the variable self.inFileLocation. Then run this program from the command prompt as:

    $python VideoManipulations.py

    This should open a GUI window where the video will be streamed. The default size of this window will be controlled by the parameters self.video_width and self.video_height specified in the code.

What just happened?

The command-line video player developed earlier was extended in the example we just developed. We used the capsfilter plugin to specify the width and height parameters of the streaming video, and thus resize it.

Cropping

Suppose you have a video that has a large 'gutter space' at the bottom or some unwanted portion on a side that you would like to trim off. The videobox GStreamer plugin facilitates cropping the video from left, right, top, or bottom.

Time for action – crop a video

Let's add another video manipulation feature to the command-line video player developed earlier.

  1. The file we need here is the one used in the earlier section, VideoManipulations.py.
  2. Once again, we will focus our attention on the constructVideoPipeline method of the class VideoPlayer. The following code snippet is from this method. The rest of the code in this method is identical to the one reviewed in the earlier section.

    1 self.videobox = gst.element_factory_make("videobox")
    2 self.videobox.set_property("bottom", self.crop_bottom)
    3 self.videobox.set_property("top", self.crop_top)
    4 self.videobox.set_property("left", self.crop_left)
    5 self.videobox.set_property("right", self.crop_right)
    6
    7 self.player.add(self.videoQueue,
    8     self.autoconvert,
    9     self.videobox,
    10     self.capsFilter,
    11     self.colorSpace,
    12     self.videosink)
    13
    14 gst.element_link_many(self.videoQueue,
    15     self.autoconvert,
    16     self.videobox,
    17     self.capsFilter,
    18     self.colorSpace,
    19     self.videosink)

  3. The code is self-explanatory. The videobox element is created on line 1. The properties of videobox that crop the streaming video are set on lines 2-5. The element receives the media data from the autoconvert element. The source pad of videobox is connected to the sink pad of either the capsfilter or directly to the ffmpegcolorspace element.
  4. Develop the rest of the code by reviewing the file VideoManipulations.py. Make sure to specify an appropriate video file path for the variable self.inFileLocation. Then run this program from the command prompt as:

    $python VideoManipulations.py

    This should open a GUI window where the video will be streamed. The video will be cropped from left, right, bottom, and top sides by the parameters self.crop_left, self.crop_right, self.crop_bottom, and self.crop_top respectively.

What just happened?

We extended the video player application further to add a GStreamer element that can crop the video frames from sides. The videobox plugin was used to accomplish this task.

Have a go hero – add borders to a video

  1. In the previous section, we used the videobox element to trim the video from the sides. The same plugin can be used to add a border around the video. If you set negative values for the videobox properties bottom, top, left, and right, then instead of cropping the video, it will add a black border around it. Set negative values for parameters such as self.crop_left to see this effect.
  2. Video cropping can also be accomplished by using the videocrop plugin. It is similar to the videobox plugin, but it doesn't support adding a border to the video frames. Modify the code to use this plugin to crop the video, as in the sketch below.
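
Here is a minimal sketch of swapping in videocrop. It assumes the plugin is available in your installation; its top, bottom, left, and right properties take non-negative pixel counts:

self.videocrop = gst.element_factory_make("videocrop")
self.videocrop.set_property("top", self.crop_top)
self.videocrop.set_property("bottom", self.crop_bottom)
self.videocrop.set_property("left", self.crop_left)
self.videocrop.set_property("right", self.crop_right)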

Adjusting brightness and contrast

If you have a homemade video recorded in poor lighting conditions, you would probably want to adjust its brightness level. The contrast level highlights the difference between the color and brightness levels of each video frame. The videobalance plugin can be used to adjust the brightness, contrast, hue, and saturation. The next code snippet creates this element and sets the brightness and contrast properties. The brightness property accepts values in the range -1 to 1; the default (original) brightness level is 0. The contrast can have values in the range 0 to 2, with the default value 1.

self.videobalance = gst.element_factory_make("videobalance")
self.videobalance.set_property("brightness", 0.5)
self.videobalance.set_property("contrast", 0.5)

The videobalance is then linked in the GStreamer pipeline as:

gst.element_link_many(self.videoQueue,
    self.autoconvert,
    self.videobalance,
    self.capsFilter,
    self.colorSpace,
    self.videosink)

Review the rest of the code from file VideoEffects.py.

Creating gray scale video

The video can be rendered in gray scale by adjusting the saturation property of the videobalance plugin. The saturation property can have a value in the range 0 to 2; the default value is 1. Setting this value to 0.0 converts the images to gray scale. The code is illustrated as follows:

self.videobalance.set_property("saturation", 0.0)

You can refer to the file VideoEffects.py, which illustrates how to use the videobalance plugin to adjust saturation and other parameters discussed in earlier sections.

Summary

This article explained the fundamentals of video processing. It covered topics such as converting video between different video formats, performing basic video manipulations such as cropping, resizing, adjusting brightness, and so on.


