OpenCV Opening/Closing shifts the positions of the pixels - python

I'm currently using morphological transformations on binary images with OpenCV 2.4.
I just noticed that when I use OpenCV's built-in functions, all my pixel positions are shifted right and down by one, i.e. the pixel previously located at (i, j) ends up at (i+1, j+1).
import cv2
import numpy as np
from skimage.morphology import opening

image = cv2.imread('input.png', 0)    # read as grayscale
kernel = np.ones((16, 16), np.uint8)  # 16x16 structuring element (even size)
opening_opencv = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel)
opening_skimage = opening(image, kernel)
cv2.imwrite('opening_opencv.png', opening_opencv)
cv2.imwrite('opening_skimage.png', opening_skimage)
Input:
Output:
As I didn't understand why, I tried the same operation using skimage, and it doesn't produce this "gap" during the morphological transformation.
Output:
Any idea about this issue?
Thanks!

It is the way you commented, but the exact inverse :)
Structuring elements of even size lead to shifts because there is no middle pixel. With an odd size n, you get a middle pixel and (n-1)/2 pixels on each side.
Another way of saying it: a structuring element with odd sizes is symmetric about its centre, while one with even sizes is not.
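As a minimal sketch of the fix (assuming the same input.png as above), an odd-sized kernel such as 15x15 gives the structuring element a well-defined centre pixel, so the opening no longer shifts the result:
import cv2
import numpy as np

image = cv2.imread('input.png', 0)
kernel = np.ones((15, 15), np.uint8)  # odd size: the default anchor is the true centre
opening_odd = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel)
cv2.imwrite('opening_odd.png', opening_odd)
If you must keep an even-sized kernel, OpenCV's morphology functions also accept an explicit anchor argument to control which kernel pixel is treated as its origin.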

Related

Smoothing 2D contour with rough boundary in numpy array

I have a contour that is represented in a numpy array in such a way that the boundary points are 1 and the rest are 0. An example image is shown below. How could I smooth this contour?
I am trying to get a contour that is smoother than the one in the image right now.
You can use active_contour to match a spline with a set number of points to your contour. If you lower the number of points you get a smoother contour.
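A minimal sketch of that idea (the filename, the circular initialization, and the snake parameters are illustrative, not from the question):
import numpy as np
import matplotlib.pyplot as plt
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = plt.imread('contour.png')  # hypothetical binary contour image
if img.ndim == 3:
    img = img[..., 0]

# initialize the snake as a circle roughly enclosing the contour;
# fewer points here means a smoother final contour
s = np.linspace(0, 2 * np.pi, 100)
rows, cols = img.shape
init = np.column_stack([rows / 2 + 0.45 * rows * np.sin(s),
                        cols / 2 + 0.45 * cols * np.cos(s)])

# blur first so the snake has a smooth gradient to settle on
snake = active_contour(gaussian(img, 3), init, alpha=0.015, beta=10, gamma=0.001)

plt.imshow(img, cmap='gray')
plt.plot(snake[:, 1], snake[:, 0], '-r')
plt.show()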
You can use morphological transformations such as erosion and dilation. OpenCV has a nice tutorial here: https://docs.opencv.org/master/d9/d61/tutorial_py_morphological_ops.html
As @Mad Physicist says, you can use morphological techniques. I recommend an opening (erosion followed by dilation) followed by an extra dilation, as follows:
import cv2
import numpy as np

img = cv2.imread('img.png', 0)  # read as grayscale
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel, iterations=5)  # smooth the boundary
opening = cv2.dilate(opening, kernel, iterations=1)                    # grow it back slightly
img_window = np.hstack((img, opening))  # show input and result side by side
cv2.imshow("image", img_window)
cv2.waitKey(0)
cv2.destroyAllWindows()
Results:

Smooth the edges of binary images (Face) using Python and OpenCV

I am looking for a perfect way to smooth the edges of binary images. The problem is that the binary image has staircase-like borders, which is very unpleasing for my further masking process.
I am attaching a raw binary image that is to be converted into smooth edges, and I am also providing the expected outcome. I am also looking for a solution that would work even if we increase the dimensions of the image.
Problem image / Expected outcome
To preserve the sharpness of a binary image, I would recommend applying something like a median or mode filter, which never introduces intermediate gray levels. Here is an example of this:
from PIL import Image, ImageFilter

image = Image.open('input_image.png')
image = image.filter(ImageFilter.ModeFilter(size=13))  # most common pixel value in a 13x13 window
image.save('output_image.png')
which gives us the following results:
Figure 1. Left: The original input image. Right: The output image with a mode filter of size 13.
Increasing the size of the filter would increase the degree of smoothing, but of course this comes as a trade-off because you also lose high-frequency information such as the sharp corner on the bottom-left of this sample image. Unfortunately, high-frequency features are similar in nature to the staircase-like borders.
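To see the trade-off concretely, here is a small sketch (filenames are illustrative) that runs the same filter at several window sizes; the larger windows smooth more of the staircase but also round off sharp corners:
from PIL import Image, ImageFilter

image = Image.open('input_image.png')
for size in (5, 13, 21):
    image.filter(ImageFilter.ModeFilter(size=size)).save(f'output_{size}.png')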
You can do that in Python/OpenCV, with the help of skimage, by blurring the binary image and then applying a one-sided clip.
Input:
import cv2
import numpy as np
import skimage.exposure

# load image
img = cv2.imread('bw_image.png')

# blur the binary image
blur = cv2.GaussianBlur(img, (0,0), sigmaX=3, sigmaY=3, borderType=cv2.BORDER_DEFAULT)

# stretch so that 255 -> 255 and 127.5 -> 0
# C = A*X + B, with 255 = A*255 + B and 0 = A*127.5 + B, thus A = 2 and B = -255
# a plain blur*2.0 - 255.0 does not handle the uint8 range correctly, so use skimage
result = skimage.exposure.rescale_intensity(blur, in_range=(127.5,255), out_range=(0,255))

# save output
cv2.imwrite('bw_image_antialiased.png', result)

# display the result
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
You will have to adjust the amount of blur for the degree of aliasing in the image.

Frequency domain filtering with scipy.fftpack, ifft2 does not give the desired result

I am trying to simply apply a Gaussian filter on a gray-scale input lena image in frequency domain with the following code and here is the wrong output I am getting:
import numpy as np
from scipy import signal
from skimage.io import imread
import scipy.fftpack as fp
import matplotlib.pyplot as plt

im = imread('lena.jpg')  # read grayscale lena image

# create a 2D Gaussian kernel with the same size as the image
kernel = np.outer(signal.gaussian(im.shape[0], 5), signal.gaussian(im.shape[1], 5))

freq = fp.fftshift(fp.fft2(im))
freq_kernel = fp.fftshift(fp.fft2(kernel))
convolved = freq * freq_kernel  # simply multiply in the frequency domain
im_out = fp.ifft2(fp.ifftshift(convolved)).real  # output blurred image
However, if I do the same but use signal.fftconvolve, I get the desired blurred output, as shown below:
im_out = signal.fftconvolve(im, kernel, mode='same')  # output blurred image
My input image is 220x220. Is there any padding issue? If so, how can I solve it and make the first code (without fftconvolve) work? Any help will be highly appreciated.
First of all, there is no need to shift the result of the FFT just to shift it back before doing the IFFT. This just amounts to a lot of shifting that has no effect on the result. Multiplying the two arrays works the same way whether you shift them both or not.
The problem you noticed in your output is that the four quadrants are swapped. The reason this happens is because the filter is shifted by half its size, causing the same shift in the output.
Why is it shifted? Well, because the FFT puts the origin in the top-left corner of the image. This is not only true for the output of the FFT, but also for its input. Thus, you need to generate a kernel whose origin is at the top-left corner. How? Simply apply ifftshift to it before calling fft:
freq = fp.fft2(im)
freq_kernel = fp.fft2(fp.ifftshift(kernel))
convolved = freq*freq_kernel
im_out = fp.ifft2(convolved).real
Note that ifftshift shifts the origin from the center to the top-left corner, whereas fftshift shifts it from the corner to the center.
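The distinction only matters for odd-sized axes, which is easy to see on a small 1-D example (a sketch using NumPy's equivalents of the scipy.fftpack shift functions):
import numpy as np

a = np.arange(5)                             # odd length, origin at index 0
print(np.fft.fftshift(a))                    # [3 4 0 1 2]: origin moved to the centre
print(np.fft.ifftshift(a))                   # [2 3 4 0 1]: centre moved back to index 0
print(np.fft.ifftshift(np.fft.fftshift(a)))  # [0 1 2 3 4]: the round trip is exact
For even-length axes the two functions coincide, which is why mixing them up often goes unnoticed.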

Removing black dots from an image using OpenCV and Python

I am trying to compare two images, and I need to pre-process/clean one of them, a scanned copy, before comparing it with the digital copy.
Scanned copy / Digital copy
I ran this code on the scanned image and got an output which has numerous black dots. I am not sure how to clean these up so that I can compare it with the digital copy.
img = cv2.multiply(img, 1.2)        # brighten the image slightly

kernel = np.ones((1, 1), np.uint8)  # note: a 1x1 kernel makes this erosion a no-op
img = cv2.erode(img, kernel, iterations=1)

# build a sharpening (unsharp-mask) kernel: 2x the centre minus a 9x9 box blur
kernel1 = np.zeros((9, 9), np.float32)
kernel1[4, 4] = 2.0
boxFilter = np.ones((9, 9), np.float32) / 81.0
kernel1 = kernel1 - boxFilter
img = cv2.filter2D(img, -1, kernel1)
Below is the output I got:
Try applying a filter in the frequency domain. After an FFT, your image will show regular bright dots, because the noise in your image is periodic. If you remove these dots and then apply the inverse FFT, the dots will be removed from your image. Check these examples please: example1, example2 and example3.
Yes, @Andrey's method is the right way of solving the problem.
I have tried removing the high-frequency dots in the frequency domain, and here is an example of how it looks when done correctly:
Original image in grayscale.
After running FFT on the image.
Removing all high-frequency noise. Of course, this was done manually by drawing a black circle around each noise source. You could design your program to detect local bright spots and remove them cleanly.
Here is the final result after the inverse FFT of the above frequency image. It is somewhat degraded due to the crude way I removed the noise, but it should give you a rough idea of how it can be done.
Only the area around the dots is affected by this process, leaving all other patterns in their original form.
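A minimal sketch of this notch-filtering idea (the filename and the peak offsets are illustrative; in practice you would read the peak positions off the spectrum or detect them automatically):
import cv2
import numpy as np

img = cv2.imread('scanned.png', 0)  # hypothetical grayscale input

# go to the frequency domain with the origin at the centre
f = np.fft.fftshift(np.fft.fft2(img))

# zero out small disks around the noise peaks, in mirrored pairs
# about the centre so the inverse transform stays real
rows, cols = img.shape
crow, ccol = rows // 2, cols // 2
Y, X = np.ogrid[:rows, :cols]
for dy, dx in [(40, 0), (-40, 0), (0, 40), (0, -40)]:  # example peak offsets
    disk = (Y - (crow + dy)) ** 2 + (X - (ccol + dx)) ** 2 <= 5 ** 2
    f[disk] = 0

# back to the spatial domain
out = np.fft.ifft2(np.fft.ifftshift(f)).real
cv2.imwrite('cleaned.png', np.clip(out, 0, 255).astype(np.uint8))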

Remove spurious small islands of noise in an image - Python OpenCV

I am trying to get rid of background noise from some of my images. This is the unfiltered image.
To filter, I used this code to generate a mask of what should remain in the image:
# mask is a binary image produced earlier (non-zero where pixels should be kept)
element = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
mask = cv2.erode(mask, element, iterations=1)
mask = cv2.dilate(mask, element, iterations=1)
mask = cv2.erode(mask, element)
With this code, when I mask out the unwanted pixels from the original image, what I get is:
As you can see, all the tiny dots in the middle area are gone, but a lot of those coming from the denser area are also gone. To reduce the filtering, I tried changing the second parameter of getStructuringElement() to (1,1), but doing this gives me the first image, as if nothing had been filtered.
Is there any way I can apply some filter that is between these two extremes?
In addition, can anyone explain what exactly getStructuringElement() does? What is a "structuring element"? What does it do, and how does its size (the second parameter) affect the level of filtering?
A lot of your questions stem from the fact that you're not sure how morphological image processing works, but we can put your doubts to rest. You can interpret the structuring element as the "base shape" to compare to. 1 in the structuring element corresponds to a pixel that you want to look at in this shape and 0 is one you want to ignore. There are different shapes, such as rectangular (as you have figured out with MORPH_RECT), ellipse, circular, etc.
As such, cv2.getStructuringElement returns a structuring element for you. The first parameter specifies the type you want and the second parameter specifies the size you want. In your case, you want a 2 x 2 "rectangle"... which is really a square, but that's fine.
In a more bastardized sense, you use the structuring element and scan from left to right and top to bottom of your image and you grab pixel neighbourhoods. Each pixel neighbourhood has its centre exactly at the pixel of interest that you're looking at. The size of each pixel neighbourhood is the same size as the structuring element.
Erosion
For an erosion, you scan over the image and, at each position, look at the pixels in the neighbourhood that line up with the 1 entries of the structuring element. If every one of those structuring-element pixels sits on top of a non-zero image pixel, then the output pixel at the corresponding centre position is 1. If at least one of them sits on a zero pixel, the output is 0.
In terms of the rectangular structuring element, you need every pixel covered by the structuring element to be non-zero in the image for that neighbourhood. If it isn't, the output is 0, else 1. This effectively eliminates small spurious areas of noise and also slightly shrinks the area of objects.
Size factors in here: the larger the rectangle, the more shrinking is performed. The size of the structuring element is a baseline; any object smaller than this rectangular structuring element can be considered filtered out, never appearing in the output. Basically, choosing a 1 x 1 rectangular structuring element gives you back the input image itself, because that structuring element fits inside every single pixel, and a pixel is the smallest unit of information in an image.
Dilation
Dilation is the opposite of erosion. If at least one non-zero image pixel touches a structuring-element pixel that is 1, then the output is 1, else the output is 0. You can think of this as slightly enlarging object areas and making small islands bigger.
The implication of size here is that the larger the structuring element, the larger the object areas become and the larger the isolated islands grow.
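A tiny sketch of both effects on a synthetic image (the array and sizes are just for illustration):
import cv2
import numpy as np

a = np.zeros((7, 7), np.uint8)
a[2:5, 2:5] = 255           # a 3 x 3 white square
a[0, 0] = 255               # a single-pixel island of noise

se = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
eroded = cv2.erode(a, se)   # the island vanishes; the square shrinks to its 1-pixel centre
dilated = cv2.dilate(a, se) # the island and the square both grow by one pixel on each side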
What you're doing is an erosion first followed by a dilation. This is what is known as an opening operation. The purpose of this operation is to remove small islands of noise while (trying to) maintain the areas of the larger objects in your image. The erosion removes those islands while the dilation grows back the larger objects to their original sizes.
You follow this with an erosion again for some reason, which I can't quite understand, but that's ok.
What I would personally do is perform a closing operation first, which is a dilation followed by an erosion. Closing helps group areas that are close together into a single object. As you can see, there are some larger areas that are close to each other that should probably be joined before we do anything else. As such, I would do a closing first, then an opening, so that we can remove the isolated noisy areas. Take note that I'm going to make the closing structuring element larger, as I want to make sure I capture nearby pixels, and the opening structuring element smaller, so that I don't mistakenly remove any of the larger areas.
Once you do this, I would mask out any extra information with the original image so that you leave the larger areas intact while the small islands go away.
Instead of chaining an erosion followed by a dilation, or a dilation followed by an erosion, use cv2.morphologyEx, where you can specify MORPH_OPEN and MORPH_CLOSE as the flags.
As such, I would personally do this, assuming your image is called spots.png:
import cv2
import numpy as np

img = cv2.imread('spots.png')
img_bw = 255 * (cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) > 5).astype('uint8')

se1 = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
se2 = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
mask = cv2.morphologyEx(img_bw, cv2.MORPH_CLOSE, se1)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, se2)

mask = np.dstack([mask, mask, mask]) / 255
out = (img * mask).astype('uint8')  # cast back to uint8 so imshow/imwrite behave

cv2.imshow('Output', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.imwrite('output.png', out)
The above code is pretty self-explanatory. First, I read in the image, convert it to grayscale, and threshold it at an intensity of 5 to create a mask of what is considered object pixels. This is a rather clean image, so anything larger than 5 seems to have worked. For the morphology routines, I need to convert the image to uint8 and scale the mask to 255. Next, we create two structuring elements: a 5 x 5 rectangle for the closing operation and a 2 x 2 one for the opening operation. I run cv2.morphologyEx twice, for the closing and opening operations respectively, on the thresholded image.
Once I do that, I stack the mask so that it becomes a 3D matrix and divide by 255 so that it becomes a mask over [0,1]. Then we multiply this mask with the original image so that we can grab the original pixels back and keep only what the mask considers a true object.
The rest is just for illustration: I show the image in a window and also save it to a file called output.png, so that you can see what the image looks like in this post.
I get this:
Bear in mind that it isn't perfect, but it's much better than how you had it before. You'll have to play around with the structuring element sizes to get something that you consider as a good output, but this is certainly enough to get you started.
C++ version
There have been some requests to translate the code I wrote above into the C++ version using OpenCV. I have finally gotten around to writing a C++ version of the code and this has been tested on OpenCV 3.1.0. The code for this is below. As you can see, the code is very similar to that seen in the Python version. However, I used cv::Mat::setTo on a copy of the original image and set whatever was not part of the final mask to 0. This is the same thing as performing an element-wise multiplication in Python.
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int argc, char *argv[])
{
    // Read in the image
    Mat img = imread("spots.png", IMREAD_COLOR);

    // Convert to black and white
    Mat img_bw;
    cvtColor(img, img_bw, COLOR_BGR2GRAY);
    img_bw = img_bw > 5;

    // Define the structuring elements
    Mat se1 = getStructuringElement(MORPH_RECT, Size(5, 5));
    Mat se2 = getStructuringElement(MORPH_RECT, Size(2, 2));

    // Perform closing then opening
    Mat mask;
    morphologyEx(img_bw, mask, MORPH_CLOSE, se1);
    morphologyEx(mask, mask, MORPH_OPEN, se2);

    // Filter the output
    Mat out = img.clone();
    out.setTo(Scalar(0), mask == 0);

    // Show image and save
    namedWindow("Output", WINDOW_NORMAL);
    imshow("Output", out);
    waitKey(0);
    destroyWindow("Output");
    imwrite("output.png", out);

    return 0;
}
The results should be the same as what you get in the Python version.
One can also remove small pixel clusters using the remove_small_objects function in skimage:
import matplotlib.pyplot as plt
from skimage import morphology
import numpy as np
import skimage
# read the image, grayscale it, binarize it, then remove small pixel clusters
im = plt.imread('spots.png')
grayscale = skimage.color.rgb2gray(im)
binarized = np.where(grayscale>0.1, 1, 0)
processed = morphology.remove_small_objects(binarized.astype(bool), min_size=2, connectivity=2).astype(int)
# black out pixels
mask_x, mask_y = np.where(processed == 0)
im[mask_x, mask_y, :3] = 0
# plot the result
plt.figure(figsize=(10,10))
plt.imshow(im)
This displays:
To retain only larger clusters, try increasing min_size (the smallest size of retained clusters) and decreasing connectivity (the size of the pixel neighbourhood used when forming clusters). Using just those two parameters, one can retain only pixel clusters of an appropriate size, as in the sketch below.
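For example, a stricter (hypothetical) setting that keeps only clusters of at least 10 pixels, counting only edge-adjacent neighbours:
# stricter filtering: larger min_size, tighter connectivity
processed = morphology.remove_small_objects(binarized.astype(bool),
                                            min_size=10, connectivity=1).astype(int)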
