Using Image Processing Techniques


In most of the examples, we will use the following famous test image widely used to illustrate computer vision algorithms and techniques:

You can download Lenna's image from Wikipedia.

Transforming image contrast and brightness

In this recipe we will cover basic image color transformations using the Surface class for pixel manipulation.

How to do it...

We will create an application with a simple GUI for contrast and brightness manipulation of the sample image. Perform the following steps to do so:

  1. Include necessary headers:

    #include "cinder/gl/gl.h"
    #include "cinder/gl/Texture.h"
    #include "cinder/Surface.h"
    #include "cinder/ImageIo.h"

  2. Add properties to the main class:

    float mContrast, mContrastOld;
    float mBrightness, mBrightnessOld;
    Surface32f mImage, mImageOutput;

  3. In the setup method, an image is loaded for processing and a Surface object is prepared to store the processed image:

    mImage = loadImage( loadAsset("image.png") );
    mImageOutput = Surface32f(mImage.getWidth(), mImage.getHeight(), false);

  4. Set window size to default values:

    setWindowSize(1025, 512);
    mContrast = 0.f;
    mContrastOld = -1.f;
    mBrightness = 0.f;
    mBrightnessOld = -1.f;

  5. Add parameter controls to the InterfaceGl window:

    mParams.addParam("Contrast", &mContrast, "min=-0.5 max=1.0 step=0.01");
    mParams.addParam("Brightness", &mBrightness, "min=-0.5 max=0.5 step=0.01");

  6. Implement the update method as follows:

    if(mContrastOld != mContrast || mBrightnessOld != mBrightness) {
        float c = 1.f + mContrast;
        Surface32f::Iter pixelIter = mImage.getIter();
        Surface32f::Iter pixelOutIter = mImageOutput.getIter();
        while( pixelIter.line() ) {
            pixelOutIter.line();
            while( pixelIter.pixel() ) {
                pixelOutIter.pixel();
                // contrast transformation
                pixelOutIter.r() = (pixelIter.r() - 0.5f) * c + 0.5f;
                pixelOutIter.g() = (pixelIter.g() - 0.5f) * c + 0.5f;
                pixelOutIter.b() = (pixelIter.b() - 0.5f) * c + 0.5f;
                // brightness transformation
                pixelOutIter.r() += mBrightness;
                pixelOutIter.g() += mBrightness;
                pixelOutIter.b() += mBrightness;
            }
        }
        mContrastOld = mContrast;
        mBrightnessOld = mBrightness;
    }

  7. Lastly, we will draw the original and processed images by adding the following lines of code inside the draw method:

    gl::draw(mImage);
    gl::draw(mImageOutput, Vec2f(512.f+1.f, 0.f));

How it works...

The most important part is inside the update method. In step 6 we check whether the contrast or brightness parameters have changed. If they have, we iterate through all the pixels of the original image and store the recalculated color values in mImageOutput. While modifying the brightness is just a matter of increasing or decreasing each color component, calculating contrast is a little more complicated. For each color component we apply the formula color = (color - 0.5) * contrast + 0.5, where contrast is a number between 0.5 and 2. In the GUI we set a value between -0.5 and 1.0, which is a more natural range; it is then recalculated at the beginning of step 6. While processing the image we have to change the color value of every pixel, so in step 6 we iterate over the rows and columns of pixels using two nested while loops. To move to the next row we invoke the line method on the Surface iterator, and then the pixel method to move to the next pixel of the current row. This approach is much faster than using, for example, the getPixel and setPixel methods.
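The per-channel transform from step 6 can be sketched as plain C++, independent of Cinder's Surface iterators (the helper name adjust and the use of a flat vector of normalized [0, 1] channel values are our assumptions for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Standalone sketch of the step 6 transform:
// color' = (color - 0.5) * (1 + contrast) + 0.5 + brightness
std::vector<float> adjust(const std::vector<float> &channels,
                          float contrast, float brightness) {
    const float c = 1.0f + contrast; // same recalculation as in step 6
    std::vector<float> out;
    out.reserve(channels.size());
    for (float v : channels)
        out.push_back((v - 0.5f) * c + 0.5f + brightness);
    return out;
}
```

Note that mid-gray (0.5) is a fixed point of the contrast part of the formula: increasing contrast pushes values away from 0.5, decreasing it pulls them toward 0.5.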

Our application is rendering the original image on the left-hand side and the processed image on the right-hand side, so you can compare the results of color adjustment.

Integrating with OpenCV

OpenCV is a very powerful open-source library for computer vision. The library is written in C++ so it can be easily integrated into your Cinder application. There is a very useful OpenCV Cinder block provided within the Cinder package, available in Cinder's GitHub repository.

Getting ready

Make sure you have Xcode up and running with a Cinder project opened.

How to do it…

We will add the OpenCV Cinder block to your project; this also illustrates the usual way of adding any other Cinder block to your project. Perform the following steps to do so:

  1. Add a new group to our Xcode project root and name it Blocks. Next, drag the opencv folder inside the Blocks group. Be sure to select the Create groups for any added folders radio button, as shown in the following screenshot:

  2. You will need only the include folder inside the opencv folder in your project structure, so delete any reference to others. The final project structure should look like the following screenshot:

  3. Add the paths to the OpenCV library files in the Other Linker Flags section of your project's build settings, for example:

    $(CINDER_PATH)/blocks/opencv/lib/macosx/libopencv_imgproc.a
    $(CINDER_PATH)/blocks/opencv/lib/macosx/libopencv_core.a
    $(CINDER_PATH)/blocks/opencv/lib/macosx/libopencv_objdetect.a

    These paths are shown in the following screenshot:

  4. Add the paths to the OpenCV Cinder block headers you are going to use in the User Header Search Paths section of your project's build settings:


    This path is shown in the following screenshot:

  5. Include OpenCV Cinder block header file:

    #include "CinderOpenCV.h"

How it works…

The OpenCV Cinder block provides the toOcv and fromOcv functions for data exchange between Cinder and OpenCV. After setting up your project you can use them, as shown in the following short example:

Surface mImage, mImageOutput;
mImage = loadImage( loadAsset("image.png") );
cv::Mat ocvImage( toOcv(mImage) );
cv::cvtColor( ocvImage, ocvImage, CV_BGR2GRAY );
mImageOutput = Surface( fromOcv(ocvImage) );

You can use the toOcv and fromOcv functions to convert between Cinder and OpenCV types. Image data stored in types such as Surface or Channel is handled through the ImageSourceRef type; the other supported types are shown in the following table:

Cinder types                          OpenCV types
ImageSourceRef (Surface, Channel)     cv::Mat
Color                                 cv::Scalar
Vec2f                                 cv::Point2f
Vec2i                                 cv::Point
Area                                  cv::Rect
In this example we are linking against the following three files from the OpenCV package:

  • libopencv_imgproc.a: This image processing module includes image manipulation functions, filters, feature detection, and more
  • libopencv_core.a: This module provides core functionality and data structures
  • libopencv_objdetect.a: This module has object detection tools such as cascade classifiers

You can find documentation on all OpenCV modules on the official OpenCV website.

There's more…

Some features are not available in the precompiled OpenCV libraries packaged with the OpenCV Cinder block, but you can always compile your own OpenCV libraries and still use the exchange functions from the OpenCV Cinder block in your project.

Detecting edges

In this recipe, we will demonstrate how to use the edge detection function, which is one of the image processing functions implemented directly in Cinder.

Getting ready

Make sure you have Xcode up and running with an empty Cinder project opened. We will need a sample image to proceed, so save it in your assets folder as image.png.

How to do it…

We will process the sample image with the edge detection function. Perform the following steps to do so:

  1. Include necessary headers:

    #include "cinder/gl/Texture.h"
    #include "cinder/Surface.h"
    #include "cinder/ImageIo.h"
    #include "cinder/ip/EdgeDetect.h"
    #include "cinder/ip/Grayscale.h"

  2. Add two properties to your main class:

    Surface8u mImage, mImageOutput;

  3. Load the source image and set up Surface for processed images inside the setup method:

    mImage = loadImage( loadAsset("image.png") );
    mImageOutput = Surface8u(mImage.getWidth(), mImage.getHeight(), false);

  4. Use image processing functions:

    ip::grayscale(mImage, &mImage);
    ip::edgeDetectSobel(mImage, &mImageOutput);

  5. Inside the draw method add the following two lines of code for drawing images:

    gl::draw(mImage);
    gl::draw(mImageOutput, Vec2f(512.f+1.f, 0.f));

How it works…

As you can see, detecting edges in Cinder is pretty easy because basic image processing functions are implemented directly in Cinder, so you don't have to include any third-party libraries. In this case we use the grayscale function to convert the original image's color space to grayscale. This is a common step in image processing because many algorithms work more efficiently on grayscale images, or are even designed to work only with grayscale source images. Edge detection is implemented in the edgeDetectSobel function, which uses the Sobel algorithm. The first parameter is the source grayscale image, and the second is a pointer to the output Surface object in which the result will be stored.
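To make the Sobel step concrete, here is a minimal standalone sketch of what a Sobel edge detector computes (this is an illustration of the algorithm, not Cinder's actual edgeDetectSobel implementation; the function name and flat grayscale buffer layout are our assumptions):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Convolve a grayscale image (row-major, w*h floats) with the horizontal
// and vertical 3x3 Sobel kernels and combine the gradients into a magnitude.
std::vector<float> sobelMagnitude(const std::vector<float> &gray, int w, int h) {
    static const int gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static const int gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
    std::vector<float> out(gray.size(), 0.0f);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            float sx = 0.0f, sy = 0.0f;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx) {
                    float v = gray[(y + ky) * w + (x + kx)];
                    sx += gx[ky + 1][kx + 1] * v;
                    sy += gy[ky + 1][kx + 1] * v;
                }
            out[y * w + x] = std::sqrt(sx * sx + sy * sy);
        }
    }
    return out;
}
```

The magnitude is large where neighboring pixel values change sharply (an edge) and zero in flat regions, which is why the grayscale conversion beforehand matters: the kernels operate on a single intensity channel.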

Inside the draw method we are drawing both images, as shown in the following screenshot:

There's more…

You may find the image processing functions implemented in Cinder insufficient, in which case you can include a third-party library such as OpenCV in your project. We explained how to use Cinder and OpenCV together in the preceding recipe, Integrating with OpenCV.

Other useful functions in the context of edge detection are Canny and findContours. The following is an example of how to use them:

vector<vector<cv::Point> > contours;
cv::Mat inputMat( toOcv( frame ) );
cv::cvtColor( inputMat, inputMat, CV_BGR2GRAY );
// blur
cv::Mat blurMat;
cv::medianBlur(inputMat, blurMat, 11);
// threshold
cv::Mat thresholdMat;
cv::threshold(blurMat, thresholdMat, 50, 255, CV_THRESH_BINARY);
// erode
cv::Mat erodeMat;
cv::erode(thresholdMat, erodeMat, cv::Mat(), cv::Point(-1,-1), 11);
// detect edges
cv::Mat cannyMat;
int thresh = 100;
cv::Canny(erodeMat, cannyMat, thresh, thresh*2, 3);
// find contours
cv::findContours(cannyMat, contours, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);

After executing the preceding code, the points that form the contours are stored in the contours variable.

Detecting faces

In this recipe, we will examine how our application can be used to recognize human faces. Thanks to the OpenCV library, it is really easy.

Getting ready

We will be using the OpenCV library, so please refer to the Integrating with OpenCV recipe for information on how to set up your project. We will need a sample image to proceed, so save it in your assets folder as image.png. Put the Haar cascade classifier file for frontal face recognition inside the assets directory. The cascade file can be found inside the downloaded OpenCV package or in OpenCV's online public repository.

How to do it…

We will create an application that demonstrates the usage of a cascade classifier from OpenCV with Cinder. Perform the following steps to do so:

  1. Include necessary headers:

    #include "cinder/gl/Texture.h"
    #include "cinder/Surface.h"
    #include "cinder/ImageIo.h"

  2. Add the following members to your main class:

    Surface8u mImage;
    cv::CascadeClassifier mFaceCC;
    std::vector<Rectf> mFaces;

  3. Add the following code snippet to the setup method:

    mImage = loadImage( loadAsset("image.png") );
    mFaceCC.load( getAssetPath( "haarcascade_frontalface_alt.xml" ).string() );

  4. Also add the following code snippet at the end of the setup method:

    cv::Mat cvImage( toOcv( mImage, CV_8UC1 ) );
    std::vector<cv::Rect> faces;
    mFaceCC.detectMultiScale( cvImage, faces );
    std::vector<cv::Rect>::const_iterator faceIter;
    for( faceIter = faces.begin(); faceIter != faces.end(); ++faceIter ) {
        Rectf faceRect( fromOcv( *faceIter ) );
        mFaces.push_back( faceRect );
    }

  5. At the end of the draw method add the following code snippet:

    gl::color( Color::white() );
    gl::draw(mImage);
    gl::color( ColorA( 1.f, 0.f, 0.f, 0.45f ) );
    std::vector<Rectf>::const_iterator faceIter;
    for( faceIter = mFaces.begin(); faceIter != mFaces.end(); ++faceIter ) {
        gl::drawStrokedRect( *faceIter );
    }

How it works…

In step 3 we loaded an image file for processing and an XML classifier file that describes the features of the object to be recognized. In step 4 we performed face detection by invoking the detectMultiScale function on the mFaceCC object, passing cvImage as the input and storing the result in a vector structure; cvImage is converted from mImage as an 8-bit, single-channel image (CV_8UC1). We then iterated through all the detected faces, storing for each one a Rectf variable that describes the bounding box around the detected face. Finally, in step 5 we drew the original image and all the recognized faces as stroked rectangles.
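A common optimization (not part of the recipe above, just a sketch under our own assumptions) is to run detectMultiScale on a downscaled copy of the frame for speed, then scale the returned bounding boxes back to the original image's coordinates before drawing. The Box struct and scaleBoxes helper below are hypothetical stand-ins for cv::Rect/Rectf:

```cpp
#include <cassert>
#include <vector>

// Stand-in for Cinder's Rectf / OpenCV's cv::Rect.
struct Box { float x, y, w, h; };

// Map detection boxes from downscaled-image coordinates back to the
// original image by multiplying every component by the downscale factor.
std::vector<Box> scaleBoxes(const std::vector<Box> &boxes, float factor) {
    std::vector<Box> out;
    out.reserve(boxes.size());
    for (const Box &b : boxes)
        out.push_back({b.x * factor, b.y * factor, b.w * factor, b.h * factor});
    return out;
}
```

For example, if detection ran on a half-size frame, passing factor = 2 restores full-resolution coordinates; this is how real-time face tracking on camera frames usually stays fast.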

We are using the cascade classifier implemented in OpenCV, which can be trained to detect a specific object in an image. More on training and using cascade classifiers for object detection can be found in the OpenCV documentation.

There's more…

You can use a video stream from your camera and process each frame to track faces of people in real time.

Detecting features in an image

In this recipe we will use one of the methods of finding characteristic features in the image. We will use the SURF algorithm implemented by the OpenCV library.

Getting ready

We will be using the OpenCV library, so please refer to the Integrating with OpenCV recipe for information on how to set up your project. We will need a sample image to proceed, so save it in your assets folder as image.png, then save a copy of the sample image as image2.png and perform some transformation on it, for example rotation.

How to do it…

We will create an application that visualizes matched features between two images. Perform the following steps to do so:

  1. Add the paths to the OpenCV library files in the Other Linker Flags section of your project's build settings, for example:

    $(CINDER_PATH)/blocks/opencv/lib/macosx/libopencv_imgproc.a
    $(CINDER_PATH)/blocks/opencv/lib/macosx/libopencv_core.a
    $(CINDER_PATH)/blocks/opencv/lib/macosx/libopencv_objdetect.a
    $(CINDER_PATH)/blocks/opencv/lib/macosx/libopencv_features2d.a
    $(CINDER_PATH)/blocks/opencv/lib/macosx/libopencv_flann.a

  2. Include necessary headers:

    #include "cinder/gl/Texture.h"
    #include "cinder/Surface.h"
    #include "cinder/ImageIo.h"

  3. In your main class declaration add the method and properties:

    int matchImages(Surface8u img1, Surface8u img2);
    Surface8u mImage, mImage2;
    gl::Texture mMatchesImage;

  4. Inside the setup method load the images and invoke the matching method:

    mImage = loadImage( loadAsset("image.png") );
    mImage2 = loadImage( loadAsset("image2.png") );
    int numberOfMatches = matchImages(mImage, mImage2);

  5. Now you have to implement the previously declared matchImages method:

    int MainApp::matchImages(Surface8u img1, Surface8u img2)
    {
        cv::Mat image1( toOcv(img1) );
        cv::cvtColor( image1, image1, CV_BGR2GRAY );
        cv::Mat image2( toOcv(img2) );
        cv::cvtColor( image2, image2, CV_BGR2GRAY );

        // detect the keypoints using the SURF detector
        std::vector<cv::KeyPoint> keypoints1, keypoints2;
        cv::SurfFeatureDetector detector;
        detector.detect( image1, keypoints1 );
        detector.detect( image2, keypoints2 );

        // calculate descriptors (feature vectors)
        cv::SurfDescriptorExtractor extractor;
        cv::Mat descriptors1, descriptors2;
        extractor.compute( image1, keypoints1, descriptors1 );
        extractor.compute( image2, keypoints2, descriptors2 );

        // matching
        cv::FlannBasedMatcher matcher;
        std::vector<cv::DMatch> matches;
        matcher.match( descriptors1, descriptors2, matches );

        double max_dist = 0;
        double min_dist = 100;
        for( int i = 0; i < descriptors1.rows; i++ ) {
            double dist = matches[i].distance;
            if( dist < min_dist ) min_dist = dist;
            if( dist > max_dist ) max_dist = dist;
        }
        std::vector<cv::DMatch> good_matches;
        for( int i = 0; i < descriptors1.rows; i++ ) {
            if( matches[i].distance < 2*min_dist )
                good_matches.push_back( matches[i] );
        }

        // draw matches
        cv::Mat img_matches;
        cv::drawMatches( image1, keypoints1, image2, keypoints2, good_matches,
            img_matches, cv::Scalar::all(-1), cv::Scalar::all(-1),
            std::vector<char>(), cv::DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
        mMatchesImage = gl::Texture( fromOcv(img_matches) );

        return good_matches.size();
    }

  6. The last thing is to visualize the matches, so put the following line of code inside the draw method:

    gl::draw(mMatchesImage);
How it works…

Let's discuss the code in step 5. First we convert image1 and image2 to the OpenCV cv::Mat structure, and then convert both images to grayscale. Now we can start processing the images with SURF: we detect keypoints, the characteristic points of the image calculated by this algorithm. We then take the keypoints calculated for the two images and match them using FLANN, or more precisely the FlannBasedMatcher class. After filtering out the proper matches and storing them in the good_matches vector, we can visualize them as follows:
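The filtering step can be isolated as a small standalone function (the Match struct below is a stand-in for cv::DMatch, and the helper name filterGoodMatches is ours): keep only the matches whose descriptor distance is below twice the smallest distance seen, as in step 5.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Stand-in for cv::DMatch: indices of the matched keypoints plus the
// distance between their descriptors (lower = better match).
struct Match { int queryIdx, trainIdx; double distance; };

std::vector<Match> filterGoodMatches(const std::vector<Match> &matches) {
    if (matches.empty()) return {};
    // find the smallest descriptor distance among all matches
    double minDist = matches[0].distance;
    for (const Match &m : matches)
        minDist = std::min(minDist, m.distance);
    // keep matches no worse than twice the best distance
    std::vector<Match> good;
    for (const Match &m : matches)
        if (m.distance < 2.0 * minDist)
            good.push_back(m);
    return good;
}
```

The 2*min_dist threshold is a heuristic: it adapts to how good the best match is, so the same code works whether the images are near-identical or only loosely related.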

Please notice that the second image is rotated; however, the algorithm can still find and link the corresponding keypoints.

There's more…

Detecting characteristic features in the images is crucial for matching pictures and is part of more advanced algorithms used in augmented reality applications.

If images match

It is possible to determine whether one image is a copy of another, or a rotated version of it, using the number of matches returned by the matchImages method.

Other possibilities

SURF is a rather slow algorithm for real-time matching, so if you need to process frames from the camera in real time, you can try the FAST algorithm instead. The FAST algorithm is also included in the OpenCV library.

Converting images to vector graphics

In this recipe, we will try to convert simple, hand-drawn sketches to vector graphics using image processing functions from the OpenCV library, together with the Cairo library for vector drawing and exporting.

Getting ready

We will be using the OpenCV library, so please refer to the Integrating with OpenCV recipe earlier in this article for information on how to set up your project. You may want to prepare your own drawing to be processed. In this example we are using a photo of some simple geometric shapes sketched on paper.

How to do it…

We will create an application to illustrate the conversion to vector shapes. Perform the following steps to do so:

  1. Include necessary headers:

    #include "cinder/gl/Texture.h" #include "cinder/Surface.h" #include "cinder/ImageIo.h" #include "cinder/cairo/Cairo.h"

  2. Add the following declarations to your main class:

    void renderDrawing( cairo::Context &ctx );
    Surface mImage, mIPImage;
    std::vector<std::vector<cv::Point> > mContours, mContoursApprox;
    double mApproxEps;
    int mCannyThresh;
  3. Load your drawing and set default values inside the setup method:

    mImage = loadImage( loadAsset("drawing.jpg") );
    mApproxEps = 1.0;
    mCannyThresh = 200;

  4. At the end of the setup method add the following code snippet:

    cv::Mat inputMat( toOcv( mImage ) );
    cv::Mat bgr, gray, outputFrame;
    cv::cvtColor(inputMat, bgr, CV_BGRA2BGR);
    double sp = 50.0;
    double sr = 55.0;
    cv::pyrMeanShiftFiltering(bgr.clone(), bgr, sp, sr);
    cv::cvtColor(bgr, gray, CV_BGR2GRAY);
    cv::cvtColor(bgr, outputFrame, CV_BGR2BGRA);
    mIPImage = Surface(fromOcv(outputFrame));
    cv::medianBlur(gray, gray, 7);
    // detect edges using Canny
    cv::Mat cannyMat;
    cv::Canny(gray, cannyMat, mCannyThresh, mCannyThresh*2.f, 3);
    mIPImage = Surface(fromOcv(cannyMat));
    // find contours
    cv::findContours(cannyMat, mContours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
    // prepare outline
    for( int i = 0; i < mContours.size(); i++ ) {
        std::vector<cv::Point> approxCurve;
        cv::approxPolyDP(mContours[i], approxCurve, mApproxEps, true);
        mContoursApprox.push_back(approxCurve);
    }

  5. Add implementation for the renderDrawing method:

    void MainApp::renderDrawing( cairo::Context &ctx )
    {
        ctx.setSource( ColorA( 0, 0, 0, 1 ) );
        ctx.paint();
        ctx.setSource( ColorA( 1, 1, 1, 1 ) );
        for( int i = 0; i < mContoursApprox.size(); i++ ) {
            ctx.newSubPath();
            ctx.moveTo(mContoursApprox[i][0].x, mContoursApprox[i][0].y);
            for( int j = 1; j < mContoursApprox[i].size(); j++ ) {
                ctx.lineTo(mContoursApprox[i][j].x, mContoursApprox[i][j].y);
            }
            ctx.closePath();
            ctx.fill();
            ctx.setSource( Color( 1, 0, 0 ) );
            for( int j = 1; j < mContoursApprox[i].size(); j++ ) {
                ctx.circle(mContoursApprox[i][j].x, mContoursApprox[i][j].y, 2.f);
            }
            ctx.fill();
        }
    }

  6. Implement your draw method as follows:

    gl::clear( Color( 0.1f, 0.1f, 0.1f ) );
    gl::color(Color::white());
    gl::pushMatrices();
    gl::scale(Vec3f(0.5f, 0.5f, 0.5f));
    gl::draw(mImage);
    gl::draw(mIPImage, Vec2i(0, mImage.getHeight()+1));
    gl::popMatrices();
    gl::pushMatrices();
    gl::translate(Vec2f(mImage.getWidth()*0.5f+1.f, 0.f));
    gl::color( Color::white() );
    cairo::SurfaceImage vecSurface( mImage.getWidth(), mImage.getHeight() );
    cairo::Context ctx( vecSurface );
    renderDrawing(ctx);
    gl::draw(vecSurface.getSurface());
    gl::popMatrices();

  7. Inside the keyDown method insert the following code snippet:

    if( event.getChar() == 's' ) {
        cairo::Context ctx( cairo::SurfaceSvg(
            getAppPath() / fs::path("..") / "output.svg",
            mImage.getWidth(), mImage.getHeight() ) );
        renderDrawing( ctx );
    }

How it works…

The key part is implemented in step 4, where we detect edges in the image and then find contours. We draw a vector representation of the processed shapes in step 5, inside the renderDrawing method. For drawing vector graphics we use the Cairo library, which can also save the results to a file in several vector formats. As you can see in the following screenshot, the original image is in the upper-left corner and just under it is a preview of the detected contours. The vector version of our simple hand-drawn image is on the right-hand side:
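The contour simplification that cv::approxPolyDP performs in step 4 is the Ramer-Douglas-Peucker algorithm. The following standalone sketch illustrates it (this is our own illustration, not OpenCV's implementation; Pt, simplify, and approx are hypothetical names, and the curve is treated as open rather than closed):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Perpendicular distance from p to the chord through a and b.
static double deviation(const Pt &p, const Pt &a, const Pt &b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
    return std::fabs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Recursively keep only points that deviate from the chord by more than eps.
static void simplify(const std::vector<Pt> &pts, size_t lo, size_t hi,
                     double eps, std::vector<Pt> &out) {
    double maxDev = 0.0;
    size_t idx = lo;
    for (size_t i = lo + 1; i < hi; ++i) {
        double d = deviation(pts[i], pts[lo], pts[hi]);
        if (d > maxDev) { maxDev = d; idx = i; }
    }
    if (maxDev > eps) {
        simplify(pts, lo, idx, eps, out);  // split at the farthest point
        simplify(pts, idx, hi, eps, out);
    } else {
        out.push_back(pts[hi]);            // segment is straight enough
    }
}

std::vector<Pt> approx(const std::vector<Pt> &pts, double eps) {
    std::vector<Pt> out{pts.front()};
    simplify(pts, 0, pts.size() - 1, eps, out);
    return out;
}
```

A larger eps (mApproxEps in the recipe) drops more points, yielding coarser but lighter vector paths; a smaller one follows the detected contour more faithfully.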

Each shape is drawn as a filled path whose points were calculated in step 4. The following is the visualization with the path points highlighted:

You can save a vector graphic as a file by pressing the S key. The file will be saved in the same folder as the application executable, under the name output.svg. SVG is only one of the available export options:

cairo::SurfaceSvg    Preparing context for SVG file rendering
cairo::SurfacePdf    Preparing context for PDF file rendering
cairo::SurfacePs     Preparing context for PostScript file rendering
cairo::SurfaceEps    Preparing context for Illustrator EPS file rendering
The exported graphics look as follows:

In this article, we saw examples of using image processing techniques implemented in Cinder and in third-party libraries, and converted images into vector graphics.

You've been reading an excerpt of:

Cinder Creative Coding Cookbook
