Linking OpenCV to an iOS project

by Alexander Shishkov and Kirill Kornyakov | September 2013 | Open Source

This article, created by Kirill Kornyakov and Alexander Shishkov, the authors of Instant OpenCV for iOS, teaches how to convert UIImage to cv::Mat and how to call the C++ OpenCV library from Objective-C code.

Getting ready

First you should download the OpenCV framework for iOS from the official website at http://opencv.org. In this article, we will use Version 2.4.6.

How to do it...

The following are the main steps to accomplish the task:

  1. Add the OpenCV framework to your project.
  2. Convert the image to the OpenCV format.
  3. Process the image with a simple OpenCV call.
  4. Convert the image back.
  5. Display the image as before.

Let's implement the described steps:

  1. We continue modifying the previous project, so you can reuse it; otherwise, create a new project with a UIImageView component. We'll start by adding the OpenCV framework to the Xcode project. There are two ways to do it.

    You can add the framework as a resource. This is a straightforward approach. Alternatively, the framework can be added through the project properties by navigating to Project | Build Phases | Link Binary With Libraries. To open the project properties, click on the project name in the Project Navigator area.

  2. Next, we'll include the OpenCV header files in our project. To avoid conflicts, we will add the following code to the very beginning of the source file, above all other imports:

    #ifdef __cplusplus
    #import <opencv2/opencv.hpp>
    #endif

    This is needed because OpenCV redefines some names, for example, the min/max functions. (A common alternative is to keep this block in the project's precompiled header; a sketch follows this list.)

  3. Set the value of the Compile Sources As property to Objective-C++. The property is available in the project settings and can be accessed by navigating to Project | Build Settings | Apple LLVM compiler 4.1 - Language.
  4. To convert images between UIImage and cv::Mat, you can use the following functions:

    UIImage* MatToUIImage(const cv::Mat& image)
    {
        NSData* data = [NSData dataWithBytes:image.data
                                      length:image.elemSize()*image.total()];

        CGColorSpaceRef colorSpace;

        if (image.elemSize() == 1) {
            colorSpace = CGColorSpaceCreateDeviceGray();
        } else {
            colorSpace = CGColorSpaceCreateDeviceRGB();
        }

        CGDataProviderRef provider =
            CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

        // Creating CGImage from cv::Mat
        CGImageRef imageRef = CGImageCreate(image.cols,          // width
                                            image.rows,          // height
                                            8,                   // bits per component
                                            8*image.elemSize(),  // bits per pixel
                                            image.step.p[0],     // bytes per row
                                            colorSpace,          // color space
                                            kCGImageAlphaNone|kCGBitmapByteOrderDefault, // bitmap info
                                            provider,            // CGDataProviderRef
                                            NULL,                // decode
                                            false,               // should interpolate
                                            kCGRenderingIntentDefault); // intent

        // Getting UIImage from CGImage
        UIImage* finalImage = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpace);

        return finalImage;
    }

    void UIImageToMat(const UIImage* image, cv::Mat& m,
                      bool alphaExist = false)
    {
        CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
        CGFloat cols = image.size.width, rows = image.size.height;
        CGContextRef contextRef;
        CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
        if (CGColorSpaceGetModel(colorSpace) == 0)
        {
            m.create(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
            bitmapInfo = kCGImageAlphaNone;
            if (!alphaExist)
                bitmapInfo = kCGImageAlphaNone;
            contextRef = CGBitmapContextCreate(m.data, m.cols, m.rows, 8,
                                               m.step[0], colorSpace,
                                               bitmapInfo);
        }
        else
        {
            m.create(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
            if (!alphaExist)
                bitmapInfo = kCGImageAlphaNoneSkipLast |
                             kCGBitmapByteOrderDefault;
            contextRef = CGBitmapContextCreate(m.data, m.cols, m.rows, 8,
                                               m.step[0], colorSpace,
                                               bitmapInfo);
        }
        CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows),
                           image.CGImage);
        CGContextRelease(contextRef);
    }

  5. These functions have been included in the library since Version 2.4.6 of OpenCV. In order to use them, you should include the ios.h header file:

    #import "opencv2/highgui/ios.h"

  6. Let's consider a simple example that extracts edges from the image. In order to do so, you have to add the following code to the viewDidLoad() method:

    - (void)viewDidLoad
    {
        [super viewDidLoad];

        UIImage* image = [UIImage imageNamed:@"lena.png"];

        // Convert UIImage* to cv::Mat
        UIImageToMat(image, cvImage);

        if (!cvImage.empty())
        {
            cv::Mat gray;
            // Convert the image to grayscale
            cv::cvtColor(cvImage, gray, CV_RGBA2GRAY);
            // Apply Gaussian filter to remove small edges
            cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.2, 1.2);

            // Calculate edges with Canny
            cv::Mat edges;
            cv::Canny(gray, edges, 0, 50);

            // Fill image with white color
            cvImage.setTo(cv::Scalar::all(255));
            // Change color on edges
            cvImage.setTo(cv::Scalar(0, 128, 255, 255), edges);

            // Convert cv::Mat to UIImage* and show the resulting image
            imageView.image = MatToUIImage(cvImage);
        }
    }
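
As mentioned in step 2, if you prefer not to repeat the OpenCV import in every source file, a common alternative is to keep it in the project's precompiled header. This is not part of the recipe's steps, and the file name below is only the default one generated by Xcode for your project. A minimal sketch:

// ProjectName-Prefix.pch (the exact name is project-specific)
// The OpenCV import must come before the Apple headers, because
// OpenCV redefines some common names (for example, min/max).
#ifdef __cplusplus
    #import <opencv2/opencv.hpp>
#endif

#ifdef __OBJC__
    #import <UIKit/UIKit.h>
    #import <Foundation/Foundation.h>
#endif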

Now run your application and check whether it finds edges on the image correctly.

How it works...

Frameworks are intended to simplify the process of handling dependencies. They encapsulate header and binary files, so Xcode sees them and you don't need to add all the paths manually. Simply speaking, an iOS framework is just a specially structured folder containing include files and static libraries for different architectures (for example, armv7, armv7s, and x86). But Xcode knows where to find the proper binaries for each build configuration, so this approach is the simplest way to link an external library on iOS. All dependencies are handled automatically and added to the final application package.

Usually, iOS applications are written in the Objective-C language. Header files have the *.h extension and source files have the *.m extension. Objective-C is a superset of C, so you can easily mix these languages in one file. But OpenCV is primarily written in C++, so we need to use C++ in the iOS project, which means enabling support for Objective-C++. That's why we have set the language property to Objective-C++. Source files in Objective-C++ usually have the *.mm extension.
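
As a minimal illustration (not from the book's project; the method name is purely hypothetical, and the OpenCV and ios.h headers are assumed to be imported as described above), an Objective-C method in a *.mm file can call OpenCV's C++ API directly:

// ViewController.mm: Objective-C and C++ mixed in one Objective-C++ file.
- (UIImage*)blurredImage:(UIImage*)source
{
    cv::Mat mat;
    UIImageToMat(source, mat);                        // Objective-C object handed to a C++ helper
    cv::GaussianBlur(mat, mat, cv::Size(5, 5), 1.2);  // a plain OpenCV C++ call
    return MatToUIImage(mat);                         // and back to an Objective-C object
}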

To include the OpenCV header files, we use the #import directive. It is very similar to #include in C++, but there is one distinction: it automatically adds include guards for the imported file, while in C++ we usually add them manually:

#ifndef __SAMPLE_H__
#define __SAMPLE_H__

#endif

In the code of the example, we just convert the loaded image from a UIImage object to cv::Mat by calling the UIImageToMat function. Please be careful with this function, because it entails a memory copy, so frequent calls to it will negatively affect your application's performance.

Please note that this is probably the most important performance tip: be very careful when working with memory in mobile applications. Avoid memory reallocations and copying as much as possible. Images require quite large chunks of memory, and you should reuse them between iterations. For example, if your application has a processing pipeline, you should preallocate all buffers and use the same memory while processing new frames, as illustrated in the sketch below.
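
As an illustration of this tip, the following sketch (hypothetical class and member names, not part of the book's project) allocates the intermediate buffers once as instance variables and reuses them for every incoming frame:

// A hypothetical Objective-C++ processing class with preallocated buffers.
@interface FrameProcessor : NSObject
{
    cv::Mat gray;   // reused grayscale buffer
    cv::Mat edges;  // reused edge map
}
- (void)processFrame:(cv::Mat&)frame;
@end

@implementation FrameProcessor
- (void)processFrame:(cv::Mat&)frame  // frame is assumed to be RGBA, as in the recipe
{
    // cv::Mat::create, called internally by these functions, reallocates
    // only if the size or type changes, so after the first frame
    // the same memory is reused.
    cv::cvtColor(frame, gray, CV_RGBA2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.2, 1.2);
    cv::Canny(gray, edges, 0, 50);
}
@end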

After converting the image, we do some simple image processing with OpenCV. First, we convert the image to a single-channel (grayscale) one. After that, we use the GaussianBlur filter to remove small details. Then we use the Canny method to detect edges in the image. To visualize the results, we create a white image and change the color of the pixels that lie on the detected edges. The resulting cv::Mat object is converted back to UIImage and displayed on the screen.

There's more...

The following is additional advice.

Objective-C++

There is one more way to add support for Objective-C++ to your project. You can simply change the extension to .mm for the source files in which you plan to use C++ code. This extension is specific to Objective-C++ code.

Converting to cv::Mat

If you don't want to use UIImage, but want to load an image into cv::Mat directly, you can do it using the following code:

// Create file handle
NSFileHandle* handle =
    [NSFileHandle fileHandleForReadingAtPath:filePath];
// Read content of the file
NSData* data = [handle readDataToEndOfFile];
// Decode image from the data buffer
cvImage = cv::imdecode(cv::Mat(1, [data length], CV_8UC1,
                               (void*)data.bytes),
                       CV_LOAD_IMAGE_UNCHANGED);

In this example, we read the file content into a buffer and call the cv::imdecode function to decode the image. But there is one important note: if you later want to convert cv::Mat to UIImage, you should change the channel order from BGR to RGB, as BGR is OpenCV's native channel order.
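
For example, a minimal sketch (assuming a three-channel image and the same imageView outlet as in the earlier example; use CV_BGRA2RGBA instead if the image has an alpha channel):

// cv::imdecode returns pixels in BGR order, while MatToUIImage expects RGB,
// so swap the channels before creating the UIImage.
cv::cvtColor(cvImage, cvImage, CV_BGR2RGB);
imageView.image = MatToUIImage(cvImage);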

Summary

This article explained how to link the OpenCV library to an iOS project and how to call its functions from Objective-C code.

About the Authors

Alexander Shishkov

Alexander Shishkov has been working in the field of computer vision for the last five years. He works at Itseez (Nizhny Novgorod, Russia), where he has developed technologies including counting systems, object detection, and image retrieval systems. He has also created continuous integration systems and websites (http://opencv.org) for OpenCV. Alexander has B.Sc. and M.Sc. degrees from Nizhny Novgorod State University, Russia.

Kirill Kornyakov

Kirill Kornyakov has been a member of the core OpenCV development team for the last four years. He works at Itseez (Nizhny Novgorod, Russia), where he leads the development of the OpenCV library for the Android operating system, with a focus on performance optimization for the NVIDIA Tegra platform. He also works on the implementation of real-time computer vision algorithms, mainly Computational Photography applications. Kirill has B.Sc. and M.Sc. degrees from Nizhny Novgorod State University, Russia.
