# 2-Dimensional Image Filtering

by George Siogkas | September 2013 | Beginner's Guides Open Source

In this article by George Siogkas, author of the book Visual Media Processing Using MATLAB Beginner's Guide, we will dig a little deeper into image filtering theory and use some more complex techniques to enhance our images. All the methods we present will once more be accompanied by hands-on examples that demonstrate their significance in both typical and more advanced image processing tasks. We will cover:

• The basic theory on image filtering and processing pixel neighborhoods
• How we can filter an image using convolution
• Alternative ways to filter an image
• Creating image filters in MATLAB
• Using filters for image blurring

So, let's get started!


# An introduction to image filtering

Morphological operations and edge detection are actually types of image filtering, even though we used them in a black-box sense, without really looking under the hood. Hopefully, that exposure will help you get accustomed to the details of image filtering a little faster.

First of all, let's give a general definition of image filtering: it is the process of modifying the values of an image's pixels using a function that is typically applied on a local neighborhood of the image. In many situations, applying the function on a neighborhood involves a special operation called convolution, with an operand called a kernel. In this sense, you have already applied such a process in the case of erosion and dilation, and even in the case of edge detection. The former processes used the strel function to create a kernel, while the latter used a kernel based on your choice of edge detection method. But let's not get ahead of ourselves. We will take things one step at a time, starting by explaining neighborhood processing.

# Processing neighborhoods of pixels

In the previous paragraph, we mentioned that the filtering process typically takes place on a specific neighborhood of pixels. When this neighborhood process is applied to all pixels, it is called a sliding neighborhood operation. In it, we slide a rectangular neighborhood window through all possible positions in the image and modify its central pixel using a function of the pixels in the neighborhood.

Let's see how this is done, using a numeric example. We'll start with something simple: a linear filtering process, namely averaging. Suppose that we have a small image, sized 8x8 pixels, and we want to modify its pixel values so that each pixel gets assigned the rounded average of the pixel values in its 3x3 neighborhood.

This will be easier to explain by using a real numeric example. Let's explain what happens in the step shown in the following image, in which the central pixel of the highlighted 3x3 neighborhood (in the fourth row and sixth column) will be replaced by the average value of all the pixels in the neighborhood (rounded to the nearest integer):

Let the image be called I. The result in pixel I(4,6) will be:

I'(4,6) = round( (1/9) · Σ I(m, n) ),  with m = 3, ..., 5 and n = 5, ..., 7

Substituting the values of the pixels, we can calculate the average value:

I'(4,6) = round( (134 + 132 + 134 + 121 + 115 + 107 + 119 + 113 + 113) / 9 ) = round(1088 / 9) = round(120.89)

Hence, the value of the central pixel of the neighborhood will become 121 (the closest integer to 120.89).

By repeating the process described previously for all the pixels of the image, we get a result commonly known as mean filtering or average filtering. The final result of the entire process is shown in the following figure:
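The sliding-window averaging just described can be verified outside MATLAB as well. The following is an illustrative Python/NumPy sketch (a stand-in for the manual process, not part of the book's MATLAB toolchain), using the same 8x8 image that appears in the code listing later in this article:

```python
import numpy as np

def mean_filter_3x3(image):
    """Slide a 3x3 window over every pixel and replace it with the
    rounded average; missing border neighbors count as zeros."""
    padded = np.pad(image, 1, mode='constant')   # zero-pad the borders
    out = np.zeros_like(image)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 3, c:c + 3]    # 3x3 neighborhood
            out[r, c] = round(window.sum() / 9)  # rounded average
    return out

image = np.array([[132, 101, 101, 107, 115, 121, 110,  92],
                  [120, 124, 122, 120, 129, 123, 121, 129],
                  [134, 146, 144, 134, 134, 132, 134, 138],
                  [143, 147, 136, 121, 121, 115, 107, 107],
                  [145, 147, 138, 129, 119, 113, 113, 122],
                  [162, 155, 152, 149, 142, 129, 118, 122],
                  [127, 122, 115, 113, 117, 102,  95,  94],
                  [ 67,  74,  78,  80,  89,  89, 107, 109]])

# Pixel I(4,6) in MATLAB's 1-based indexing is [3, 5] here.
print(mean_filter_3x3(image)[3, 5])   # prints 121, as computed above
```

Running this reproduces the 121 we computed by hand, as well as the full mean-filtered matrix shown later in the conv2 example.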

You may be wondering now: the choice of neighborhood for the example was very convenient, but what happens when we want to change the value of a pixel on the border of the image, say pixel I(1,4)? Why was it set to 77, as shown in the image?

This is indeed a valid and natural question, and you are very intuitive if you already thought about it. The answer is that, when you want your resulting image to be the same size as your original, the way to tackle this problem is to involve only the neighboring pixels that actually exist in your calculations. However, since in our example the calculation performed is an average over the neighborhood, the denominator is still 9; hence, it is as if we pad the missing part of the neighborhood with zeros. Let's demonstrate this case as well:

As shown in the previous image, the central pixel value gets evaluated as follows:

I'(1,4) = round( (1/9) · (I(0,3) + I(0,4) + I(0,5) + I(1,3) + I(1,4) + I(1,5) + I(2,3) + I(2,4) + I(2,5)) )

Of course, since there is no 0th row, the first three operands of the addition are non-existent, hence set to zero:

I'(1,4) = round( (0 + 0 + 0 + 101 + 107 + 115 + 122 + 120 + 129) / 9 ) = round(694 / 9) = round(77.11)
Therefore, the result of the averaging process for the aforementioned neighborhood is 77 (as shown in the image). This approach is not the only one available for the image borders. We could assign the maximum possible value (255 in our example) to the non-existent pixels, or assign them the mean value of the rest of the neighborhood, and so on. The choice we make affects the quality of the borders of the image, as we will see on real pictures later on.
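For readers who want to experiment with these border choices, SciPy's two-dimensional convolution exposes similar options through its boundary argument. An illustrative Python/SciPy sketch (shown here only for comparison, using a small hypothetical sub-image):

```python
import numpy as np
from scipy.signal import convolve2d

image = np.array([[132, 101, 101, 107],
                  [120, 124, 122, 120],
                  [134, 146, 144, 134]], dtype=float)
kernel = np.ones((3, 3)) / 9          # 3x3 averaging kernel

# Zero padding (the approach described above) versus mirroring the border:
zero_pad = convolve2d(image, kernel, mode='same', boundary='fill', fillvalue=0)
replicate = convolve2d(image, kernel, mode='same', boundary='symm')

# Interior pixels agree; only the border pixels depend on the padding choice.
print(round(zero_pad[1, 1]), round(replicate[1, 1]))   # prints: 125 125
```

The padding choice only changes the values along the borders, which is exactly why it affects the quality of the image borders mentioned above.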

# The basics of convolution

The process described previously was performed in overlapping neighborhoods of the image, but no use of a kernel was mentioned. So, what is this all about? And how does convolution fit into this framework? Well, the truth is that the process described previously actually captures the essence of convolution: passing a kernel over all possible equally sized neighborhoods of the image and using it to modify the value of the central pixel. The only problem in our case is that we did not use a specific kernel in the process described. Or did we? Let's try to find out, using MATLAB code to perform two-dimensional convolution.

The 3x3 neighborhood we used for the described process can be replaced by a 3x3 kernel, as long as the final result remains the same. The kernel that accomplishes this effect is a 3x3 matrix with all elements equal to 1/9. Convolving this kernel with the original image produces the same result as the aforementioned example. To demonstrate the process, we can use the two-dimensional convolution MATLAB function conv2 as follows, to get the result:

```
>> original = [132 101 101 107 115 121 110  92
               120 124 122 120 129 123 121 129
               134 146 144 134 134 132 134 138
               143 147 136 121 121 115 107 107
               145 147 138 129 119 113 113 122
               162 155 152 149 142 129 118 122
               127 122 115 113 117 102  95  94
                67  74  78  80  89  89 107 109];  % Create original image
>> kernel = ones(3,3)*(1/9);                      % Create kernel
>> conv_result = conv2(original, kernel, 'same'); % Perform convolution
>> final_result = round(conv_result)              % Round the result
```

The final result obtained is as follows:

```
final_result =
    53    78    75    77    79    80    77    50
    84   125   122   123   124   124   122    80
    90   135   133   129   125   124   123    82
    96   142   138   131   124   121   120    80
   100   147   142   134   126   120   116    77
    95   140   136   130   124   116   112    74
    79   117   115   115   112   110   107    72
    43    65    65    66    66    67    66    45
```

As expected, the result is the same as the one calculated using the analytical process described before. The convolution kernel has done its job. In our process, we used an 8x8 original image and a 3x3 kernel with all elements equal to 1/9 (this is what you get when you take a 3x3 matrix of ones and multiply it by 1/9, as we did), and we instructed conv2 to produce a result of the same dimensions as the original, using the zero-padding process described earlier for the borders.

But how did it do it? What exactly is convolution? Now it is time to fully understand convolution. But first, you must get acquainted with its mathematical equations. Since learning math is not the purpose of this book, we will try to give you just the basics, so that you get an idea of what this operation is all about, as it is invaluable for image filtering.

# The ugly mathematical truth

Let's start with the mathematical definition of convolution for discrete functions (since in digital image processing all functions are discrete). To form our problem in a signal processing sense, we can define it as passing an input image I, through a Linear Space Invariant (LSI) system, performing convolution with a kernel h (also called a filter), to produce an output image, g. Hence, we get the following block diagram:

This process is described mathematically by the following equation:

g(x, y) = (I * h)(x, y) = Σ_m Σ_n I(m, n) · h(x − m, y − n)

where * is the symbol for convolution and each Σ denotes a sum over one of the two dimensions. The reason we have two sums is that our process is two-dimensional. Without going into too much detail, we can summarize the process described previously using the following steps, which are also followed in the implementation of conv2:

1. Rotate the convolution kernel by 180 degrees to abide by the process in the double sum of the equation.
2. Determine the central pixel of the neighborhood. This is straightforward when the neighborhood has an odd number of rows and columns, but must be based on some rule if either of the dimensions is even.
3. Apply the rotated kernel to each pixel of the input image: multiply each element of the rotated kernel by the corresponding pixel of the neighborhood being processed and sum the products. The result can be thought of as a weighted sum of the neighborhood pixels.
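The role of the 180-degree rotation in step 1 can be checked numerically. The following illustrative Python/SciPy sketch (not part of the MATLAB workflow) confirms that convolution is exactly correlation with a kernel rotated by 180 degrees:

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(6, 6)).astype(float)

# A deliberately non-symmetric kernel, so the rotation actually matters:
kernel = np.array([[1., 2., 0.],
                   [0., 1., 0.],
                   [0., 0., 3.]])

conv = convolve2d(image, kernel, mode='same')
corr_flipped = correlate2d(image, np.rot90(kernel, 2), mode='same')

# Convolution equals correlation with the 180-degree-rotated kernel:
print(np.allclose(conv, corr_flipped))   # prints True
```

For symmetric kernels, such as the averaging kernel, the rotation changes nothing, which is why we could ignore it in the earlier examples.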

The size of the result of conv2 can be any of the following choices:

• full: Larger than the original image, taking into account every position in which the kernel overlaps at least one image pixel, even when the kernel's center falls outside the image. This is the default choice for the function.
• same: The same size as the original image, using zero-padding to calculate border pixel values.
• valid: Smaller than the original image, keeping only the pixels whose entire neighborhood lies inside the image.

This means that when you want to produce a convolution result with the same size as the original image, you will have to use same as an input, as we did in our previous example.
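The three output sizes are easy to check outside MATLAB as well; scipy.signal.convolve2d happens to use the same mode names. A quick illustrative sketch:

```python
import numpy as np
from scipy.signal import convolve2d

image = np.zeros((8, 8))          # 8x8 input, like our numeric example
kernel = np.ones((3, 3)) / 9      # 3x3 averaging kernel

print(convolve2d(image, kernel, mode='full').shape)   # (10, 10): 8 + 3 - 1
print(convolve2d(image, kernel, mode='same').shape)   # (8, 8): matches the input
print(convolve2d(image, kernel, mode='valid').shape)  # (6, 6): 8 - 3 + 1
```

For an N-pixel dimension and a k-pixel kernel, 'full' yields N + k − 1 pixels, 'same' yields N, and 'valid' yields N − k + 1.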

By now, those of you who are not very much into math may be tempted to stop reading. So, let's drop the mathematical jargon and dive into practical examples. We know what convolution does, and we have seen an example on the pixels of a very small image, using an averaging convolution kernel. So, what does this process really do to an image?

# Time for action – applying averaging filters in images

We will start off with an easy-to-follow example, so that all the theory described previously is demonstrated. In this example, we will also introduce some new MATLAB functions, to facilitate your understanding. Let's start:

1. First, we load our image, which is holiday_image2.bmp:
```>> img = imread('holiday_image2.bmp');
```
2. Then, we generate our convolution kernel, using function fspecial and then rotate it 180 degrees:
```>> kernel = fspecial('average',3);
>> kernel = rot90(kernel,2)```
3. The output of the code will be as follows:
```kernel =
0.1111    0.1111    0.1111
0.1111    0.1111    0.1111
0.1111    0.1111    0.1111```
4. Now, it is time to use the three different ways of convolving our image:
```
>> con1 = conv2(img,kernel);          % Default usage ('full')
>> con2 = conv2(img,kernel,'same');   % Convolution using 'same'
>> con3 = conv2(img,kernel,'valid');  % Convolution using 'valid'
```
5. In the previous step, you probably got a warning saying:
```
Warning: CONV2 on values of class UINT8 is obsolete.
Use CONV2(DOUBLE(A),DOUBLE(B)) or CONV2(SINGLE(A),SINGLE(B))
```
6. This actually means that the uint8 type will not be supported by conv2 in the future. To be on the safe side, you might want to follow MATLAB's suggestion and convert your image to single prior to convolving it:
```
>> img = single(img);
>> kernel = fspecial('average',3);   % Create 3x3 averaging kernel
>> con1 = conv2(img,kernel);         % Default usage ('full')
>> con2 = conv2(img,kernel,'same');  % Convolution using 'same'
>> con3 = conv2(img,kernel,'valid'); % Convolution using 'valid'
```
7. Now, we can show our results in one figure, along with the original image. This time, we are going to use an empty matrix as the second argument in imshow, to avoid having to convert our results to uint8:
```>> figure;subplot(2,2,1),imshow(img,[]),title('Original')
>> subplot(2,2,2),imshow(con1,[]),title('full')
>> subplot(2,2,3),imshow(con2,[]),title('same')
>> subplot(2,2,4),imshow(con3,[]),title('valid')```

8. The three results look identical, but there is a small detail: their sizes are not the same. So let's see if we got what we expected. In the Workspace window, you can see the difference in sizes:

9. Let's now discuss the physical, qualitative meaning of averaging an image. What exactly does it do? The answer is: it blurs the image. To examine this effect, we can crop the tower from our original and averaged images and display the results. The tower can be cropped using the following coordinates:
```
>> tower_original = img(51:210,321:440);
>> tower_blurred = con2(51:210,321:440);
>> figure
>> subplot(1,2,1),imshow(tower_original,[]),title('Original tower')
>> subplot(1,2,2),imshow(tower_blurred,[]),title('Blurred tower')
```
10. The original image and the blurred image are as follows:

## What just happened?

The process described in the previous example demonstrated the usage of convolution in its various implementations, using the averaging kernel produced by fspecial. This function is designed to generate kernels for popular filtering tasks, as we will further analyze in the following sections. In our case, we created a 3x3 kernel with all values equal to 1/9 (approximately 0.1111, hence the result shown in step 3). Then, the three different choices of convolution were applied and the results were displayed along with the original image. Of course, a detail such as the size of the borders cannot be easily observed at full scale, so we compared the sizes of the results instead. Finally, we displayed a part of the original image next to the same part of the 'same' convolution result, to show that the averaging process blurs the image.

# Alternatives to convolution

Convolution is not the only way to perform image filtering. There is also correlation which, as long as the kernel is handled appropriately, gives us the same result. Filtering an image using correlation can be accomplished with the MATLAB function filter2, which performs, as its name implies, two-dimensional filtering. Its first input is a kernel (filter) and its second input is an image (or, more generally, a two-dimensional matrix). We will not go into detail here; the main difference between the two methods is that correlation does not rotate the kernel by 180 degrees. The border issue remains, with the same three approaches as in the case of convolution using conv2. The equivalence of the two functions is demonstrated if we type in the following commands:

```
>> img = imread('holiday_image2.bmp');
>> img = img(51:210,321:440);
>> kernel = fspecial('average',3);
>> kernel180 = rot90(kernel,2);   % Rotate the kernel by 180 degrees
>> conv_result = conv2(img,kernel180,'same');
>> corr_result = filter2(kernel,img,'same');
>> subplot(1,3,1),imshow(img),title('Original')
>> subplot(1,3,2),imshow(uint8(conv_result)),title('Blurred - conv2')
>> subplot(1,3,3),imshow(uint8(corr_result)),title('Blurred - filter2')
```

The result of the preceding code is displayed as follows:

In our example, the two kernels used for conv2 and filter2 are identical, since the averaging filter used is square (3x3) and all its elements are equal, so the 180-degree rotation changes nothing. The generalized process shown will be useful when we have a more complex kernel.
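This special case, where rotation changes nothing because the kernel is symmetric, is easy to confirm numerically. An illustrative Python/SciPy sketch:

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

image = np.random.default_rng(4).integers(0, 256, size=(6, 6)).astype(float)
avg = np.ones((3, 3)) / 9   # symmetric averaging kernel

# For a symmetric kernel, convolution and correlation coincide exactly:
print(np.allclose(convolve2d(image, avg, mode='same'),
                  correlate2d(image, avg, mode='same')))   # prints True
```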

# Using imfilter

The two alternative solutions for performing image filtering presented so far have their origin in general two-dimensional signal processing theory. This means that they would have to be expanded to three-dimensional signals when we have to deal with color image filtering. The process is pretty straightforward and involves repeating the filtering for all three separate color channels. But why do that, when we have a function that takes care of checking the image before applying the filter and then selects the correct method?

This specialized function is called imfilter, and it is designed to handle images regardless of whether they are grayscale or color. It can implement both filtering methods described in the previous paragraphs, and it can also produce a 'same' or 'full' result. Its extra functionality comes in the selection of how it handles boundary values and in the automatic processing of color images. Furthermore, this function performs the needed conversions when the image input is integer-valued. Combined with the fspecial function, this will probably be your most valuable tool in MATLAB when it comes to image filtering.
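If you are curious what this per-channel handling looks like under the hood, here is an illustrative Python/NumPy sketch of the same idea (a simplified stand-in for what imfilter automates, not its actual implementation):

```python
import numpy as np
from scipy.signal import convolve2d

def filter_image(image, kernel):
    """Apply a 2-D kernel to a grayscale or color image, channel by channel."""
    image = image.astype(float)
    if image.ndim == 2:                      # grayscale: filter directly
        return convolve2d(image, kernel, mode='same')
    channels = [convolve2d(image[:, :, i], kernel, mode='same')
                for i in range(image.shape[2])]
    return np.stack(channels, axis=2)        # reassemble the color image

rgb = np.random.default_rng(1).integers(0, 256, size=(10, 10, 3))
kernel = np.ones((3, 3)) / 9
print(filter_image(rgb, kernel).shape)   # prints (10, 10, 3)
```

The convenience of imfilter is precisely that this dimensionality check, the per-channel loop, and the integer conversions all happen automatically.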


# Creating filters with fspecial

So far, you have seen the averaging filter kernel, which can be generated by any method that produces an m x n matrix with all its values equal to 1/(mn).

The fspecial function used in our previous examples is one way to produce the averaging kernel mentioned. However, it can be used to produce several other filtering kernels. A simple call to the help of MATLAB on this function shows us its usage in the first few lines of the result:

`>> help fspecial`

The call to the help command of MATLAB will give the following output:

``` fspecial Create predefined 2-D filters.
H = fspecial(TYPE) creates a two-dimensional filter H of the
specified type. Possible values for TYPE are:
'average'   averaging filter
'disk'      circular averaging filter
'gaussian'  Gaussian lowpass filter
'laplacian' filter approximating the 2-D Laplacian operator
'log'       Laplacian of Gaussian filter
'motion'    motion filter
'prewitt'   Prewitt horizontal edge-emphasizing filter
'sobel'     Sobel horizontal edge-emphasizing filter
'unsharp'   unsharp contrast enhancement filter```

This means that fspecial can create nine different filters, depending on the input choice of the user. If we want to categorize them according to their functionality, we will have to use three broad categories:

• Image smoothing or blurring: This is performed using low-pass filters. The ones that fspecial provides are average, disk, motion, and gaussian. Filters of this type are generally called low-pass because they let only the low-frequency content of the image (smooth areas without much detail) pass through unaffected, while attenuating the high-frequency content.
• Edge detection filters: These are the core filters used in edge detection techniques. The ones supported by fspecial are laplacian, log, prewitt, and sobel. All of these suppress pixel values in areas that do not contain many edges and enhance the edges in the image. When thresholded, they produce results like those generated by edge.
• High-pass filtering: Finally, fspecial can create a filter used for high-pass filtering, that is, the enhancement of areas that contain much detail. This has the opposite effect of the first group of filters and can be accomplished using the unsharp kernel.

In the following sections, we will check the functionality of some of these filters, using real and practical examples.
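To make the kernels themselves less of a black box, here is an illustrative Python/NumPy sketch of how the averaging and Gaussian kernels can be constructed. The Gaussian construction mirrors, to a good approximation, what fspecial('gaussian', size, sigma) produces, though this is an outside-MATLAB sketch rather than MATLAB's actual implementation:

```python
import numpy as np

def average_kernel(m, n):
    """m-by-n averaging kernel: every element equals 1/(m*n)."""
    return np.ones((m, n)) / (m * n)

def gaussian_kernel(size, sigma):
    """size-by-size Gaussian low-pass kernel, normalized to sum to 1."""
    half = (size - 1) / 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()                       # normalize so brightness is preserved

print(average_kernel(3, 3)[0, 0])            # 0.1111..., i.e., 1/9
print(round(gaussian_kernel(3, 0.5).sum(), 6))  # 1.0 (kernels sum to one)
```

Both kernels sum to one, which is why low-pass filtering preserves the overall brightness of the image.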

# Different ways to blur an image

Image blurring, or smoothing, can be accomplished in many ways. Three of the most popular techniques can be implemented using imfilter and fspecial. Since the tower from the previous example contains enough detail to show the effect, we will use it again for our example.

# Time for action – how much blurring is enough

We will write a custom function that incorporates a combination of MATLAB functions to make our lives easier. This time, our function will perform image blurring, hence it will be called BlurImage.m:

```
function [output] = BlurImage(input,kernel_choice,kernel_size,method)

% Function for image blurring
% Inputs:
%        input         - Input image
%        kernel_choice - User's choice of filter
%                        (1: disk, 2: average, 3: gaussian)
%        kernel_size   - User's choice of kernel size
%                        (radius for disk,
%                         [rows, columns] for average,
%                         [rows, columns, standard deviation] for gaussian)
%        method        - User's choice of filtering method
%                        (1: correlation, 2: convolution)
% Output:
%        output        - Output image (after blurring)

switch kernel_choice
    case 1
        kernel = fspecial('disk',kernel_size);
    case 2
        kernel = fspecial('average',kernel_size);
    case 3
        kernel = fspecial('gaussian',kernel_size(1:2),kernel_size(3));
end

switch method
    case 1
        output = imfilter(input,kernel,'corr');  % Correlation
    case 2
        output = imfilter(input,kernel,'conv');  % Convolution
end
```

Now we can test our code. Let's filter the same image with three different filters, at two different sizes:

1. First, we will load and crop our image:
```>> img = imread('holiday_image2.bmp');
>> img = img(51:180,321:440);```
2. Then, we will apply the three filters with selected size 3x3:
```>> f1 = BlurImage(img,1,1,1);
>> f2 = BlurImage(img,2,[3,3],1);
>> f3 = BlurImage(img,3,[3,3,1.5],1);```
3. Let's do the same for kernels of size 5x5:
```>> f4 = BlurImage(img,1,2,1);
>> f5 = BlurImage(img,2,[5,5],1);
>> f6 = BlurImage(img,3,[5,5,1.5],1);```
4. Finally, we will display all images in the same figure, next to the original:
```
>> subplot(2,4,1),imshow(img),title('Original')
>> subplot(2,4,2),imshow(f1),title('Blur by disk of radius 1')
>> subplot(2,4,3),imshow(f2),title('Blur by 3x3 averaging kernel')
>> subplot(2,4,4),imshow(f3),title('Blur by 3x3 Gaussian kernel')
>> subplot(2,4,6),imshow(f4),title('Blur by disk of radius 2')
>> subplot(2,4,7),imshow(f5),title('Blur by 5x5 averaging kernel')
>> subplot(2,4,8),imshow(f6),title('Blur by 5x5 Gaussian kernel')
```
5. The result will be as follows:

## What just happened?

We have just created a tool that can be useful for one-step blurring of images. All three blurring methods are included, and you have a choice of filter parameters. The three methods were then demonstrated for different kernel sizes, and the effect they have on an image became obvious. From this example alone, the pros and cons of each choice are not very apparent. The only thing that is very apparent is that they all cause a loss of detail, a side effect that can even be desirable in special cases.

# Time to make art using blurring

Losing information by blurring an image is not always bad; many photographers use this effect to add an artistic touch to their images. A common effect is called bokeh, which is the blurring of the out-of-focus areas of a photograph. Let's see how we can create an out-of-focus effect in one of our photographs. We will use a panoramic night photograph of the city I grew up in, Ioannina. Let's try the disk kernel with a radius of 25:

```
>> img = imread('Ioannina.jpg');
>> kernel = fspecial('disk',25);
>> bokeh = img;                 % Preallocate the output image
>> for i = 1:size(img,3)
       bokeh(:,:,i) = imfilter(img(:,:,i),kernel);
   end
>> subplot(2,1,1),imshow(img),title('Original image of Ioannina')
>> subplot(2,1,2),imshow(bokeh),title('Bokeh image of Ioannina')
```

Now, we will try to add such an effect to our images, by writing a function that will let the user define the Region Of Interest (ROI) that will remain in focus and then perform blurring using the disk kernel.

# Time for action – creating the bokeh effect in an image

We will now work on another night image taken in Berlin, Germany. We will try to isolate the light bulb soldier and perform blurring in the other areas of the image. The function we will use will be able to handle both grayscale and color images. Let's see the function:

```
function [output] = Bokeh(input, radius)

% Function that performs blurring on the whole image except a user-defined
% ROI, using a disk kernel. The effect resembles the bokeh effect.
% Inputs:
%        input  - Input image
%        radius - Radius of the disk kernel, in pixels
% Output:
%        output - Output image (only the user-defined ROI stays in focus)

kernel = fspecial('disk',radius);        % Create disk kernel
disp('Select area to keep in focus!')    % Display message to user
mask = roipoly(input);                   % User selects area of interest
output = [];                             % Initialize output
for i = 1:size(input,3)                  % Covering the case of color images
    channel = input(:,:,i);              % Isolate the current channel
    blurred = imfilter(channel,kernel);  % Blur the entire channel
    blurred(mask) = channel(mask);       % Restore the in-focus ROI pixels
    output = cat(3,output,blurred);      % Concatenate channels
end
```
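The masked-blur idea at the heart of this function (blur everything, then restore the in-focus region from the mask) can be sketched in Python/NumPy as follows. This is an illustrative analog only; the rectangular mask here is a hypothetical stand-in for the polygon that roipoly returns:

```python
import numpy as np
from scipy.signal import convolve2d

def bokeh(image, mask, kernel):
    """Blur everything outside the ROI; mask is True where the
    image should stay in focus."""
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[2]):                  # per-channel processing
        channel = image[:, :, i].astype(float)
        blurred = convolve2d(channel, kernel, mode='same')
        blurred[mask] = channel[mask]                # restore the in-focus region
        out[:, :, i] = blurred
    return out

img = np.random.default_rng(2).integers(0, 256, size=(8, 8, 3))
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                                # hypothetical rectangular ROI
result = bokeh(img, mask, np.ones((3, 3)) / 9)

print(np.array_equal(result[3, 3], img[3, 3]))       # prints True: ROI pixel untouched
```

Pixels inside the mask keep their original values, while everything else is averaged, which is exactly the behavior of the MATLAB function above.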

Now, we can use our function on the image described:

1. First, we load our image:
`>> img = imread('soldier.jpg');`
2. Then, we call our Bokeh function, giving it the input image and the radius of the filter (we will use 15 pixels):
`>> [output] = Bokeh(img,15);`

We get the following message displayed by our function:

Select area to keep in focus!

3. Now we select the area we want to keep in focus:

4. Now, let's show our original image and our artistic result next to each other:
```>> subplot(1,2,1),imshow(img),title('Original')
>> subplot(1,2,2),imshow(output),title('Bokeh effect')```

## What just happened?

In this example, you got to learn a new way to process your images in order to add an artistic effect to them. The ROI selection process was used to define a region that should remain unaffected. Then, for each channel of the image (whether it has one or three), we performed blurring with a disk kernel, which is a good approximation of the bokeh effect caused by the rendering of out-of-focus light sources by a photographic lens. This way, you can make the area outside your selection seem naturally out of focus. In a similar manner, you can add other effects to selected parts of your image. You just have to be careful about which regions you want to keep unaffected (set to 1 in your mask, as roipoly does for the selected polygon) and which ones should be processed (set to 0 in your mask).

## Have a go hero – add a motion effect in your image

Now it is your turn to take the wheel. Try to alter the Bokeh function we wrote so that it performs blurring using the motion filter instead of the disk kernel. The motion filter adds a feeling of motion to your images; the larger the kernel, the faster the apparent motion. Wouldn't it be fun if the cars in our soldier image seemed to move? Let's try it. You can base your code on the function we created earlier. The definition of the new function is given below:

```
function [output] = Motion(input,len,theta)

% Function that performs motion blurring on a user-defined
% ROI, using the motion kernel. The effect resembles local motion.
% Inputs:
%        input  - Input image
%        len    - User's choice of length for the motion, in pixels
%        theta  - User's choice of angle for the motion, in degrees
% Output:
%        output - Output image (only the user-defined ROI appears to
%                 move)
```

To check if your function works, you should type:

```
>> img = imread('soldier.jpg');
>> [output] = Motion(img,25,0);
```
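As a hint, for theta = 0 the motion kernel is, to a good approximation, just a row of equal weights whose length controls the amount of smear (roughly what fspecial('motion', len, 0) returns). An illustrative Python/NumPy sketch of that special case:

```python
import numpy as np
from scipy.signal import convolve2d

def motion_kernel(length):
    """Horizontal (theta = 0) motion-blur kernel: a 1-by-length row of
    equal weights that smears each pixel along the row direction."""
    return np.ones((1, length)) / length

image = np.random.default_rng(3).integers(0, 256, size=(6, 12)).astype(float)
moved = convolve2d(image, motion_kernel(5), mode='same')

print(moved.shape)                         # prints (6, 12): same size as the input
print(round(motion_kernel(5).sum(), 6))    # prints 1.0: brightness is preserved
```

For other angles, the same idea applies along a rotated line of pixels; fspecial takes care of that geometry for you.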

Then, you should be able to use the mouse to define the ROI you want to appear as moving:

When displayed next to the original image, the final result should look something like this:

# Summary

In this article, we covered the basic theory of image filtering and processing pixel neighborhoods. We learned how to filter an image using convolution, as well as alternative ways to filter an image. We also learned how to create image filters in MATLAB and how to use filters for image blurring.


## George Siogkas

George Siogkas is currently the Associate Dean of the Department of Engineering and Informatics at New York College, Greece, where he has been teaching as a senior lecturer for the past four years. He also has more than ten years of research experience in academia. His keen passion for MATLAB programming, especially in the areas of image and video processing, was developed while working towards a PhD in the field of computer vision for intelligent transportation systems.

Dr. Siogkas received his PhD in Electrical and Computer Engineering from the University of Patras, Greece in 2013.