I am looking for a good way to smooth the edges of binary images. The problem is that the binary image has staircase-like borders, which is undesirable for my further masking process.
I am attaching a raw binary image whose edges should be smoothed, and I am also providing the expected outcome. I am looking for a solution that would still work if the dimensions of the image were increased.
Problem Image Expected Outcome
To smooth the edges while preserving the sharpness of a binary image, I would recommend applying something like a median filter. PIL's ModeFilter, which replaces each pixel with the most common value in its neighbourhood, behaves just like a median filter on a two-valued image. Here is an example:
from PIL import Image, ImageFilter

# Load the binary image
image = Image.open('input_image.png')
# Replace each pixel with the most common value in a 13x13 window
image = image.filter(ImageFilter.ModeFilter(size=13))
image.save('output_image.png')
which gives us the following results:
Figure 1. Left: The original input image. Right: The output image with a mode filter of size 13.
Increasing the size of the filter would increase the degree of smoothing, but of course this comes as a trade-off because you also lose high-frequency information such as the sharp corner on the bottom-left of this sample image. Unfortunately, high-frequency features are similar in nature to the staircase-like borders.
You can do that in Python/OpenCV, with the help of skimage, by blurring the binary image and then applying a one-sided stretch (clip).
Input:
import cv2
import numpy as np
import skimage.exposure
# load image
img = cv2.imread('bw_image.png')
# blur threshold image
blur = cv2.GaussianBlur(img, (0,0), sigmaX=3, sigmaY=3, borderType = cv2.BORDER_DEFAULT)
# stretch so that 255 -> 255 and 127.5 -> 0
# C = A*X+B
# 255 = A*255+B
# 0 = A*127.5+B
# Thus A=2 and B=-127.5
#aa = a*2.0-255.0 does not work correctly, so use skimage
result = skimage.exposure.rescale_intensity(blur, in_range=(127.5,255), out_range=(0,255))
# save output
cv2.imwrite('bw_image_antialiased.png', result)
# Display various images to see the steps
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
You will have to adjust the amount of blur for the degree of aliasing in the image.
I am trying to detect each small circle (the bead part of a radial tire in the cross-sectional image) located as shown in the image, and optionally get their information. To improve the detection I have performed a few image-processing steps, including median blurring and binary thresholding (both ordinary binary thresholding and inverse binary thresholding). I am using the HoughCircles transform to detect the circles, but I am stuck and have not been able to detect them yet.
Any suggestions? Thank you very much.
This is the original image
Cropped image (the area where the circles I want to detect appear)
This is the binary image output, cropped to remove the unnecessary part
I'm trying to detect each circle in the binary image shown below, as marked in red.
Final preprocessed image
I used the following code
import cv2
import numpy as np
import os
import matplotlib.pyplot as plt

############# preprocessing ##################
img = cv2.imread('BD-2021.png')
median_5 = cv2.medianBlur(img, 5)  # median filtering
image_masked = cv2.cvtColor(median_5, cv2.COLOR_BGR2GRAY)  # convert to grayscale
res, thresh_img = cv2.threshold(image_masked, 230, 255, cv2.THRESH_BINARY_INV)  # inverse binary
# res, thresh_img_b = cv2.threshold(image_masked, 200, 255, cv2.THRESH_BINARY)  # global binary
height, width = thresh_img.shape
y_offset = int(0.7 * height)
img_crop = thresh_img[y_offset:height, :width]

############# Hough circle detection ##################
c = cv2.HoughCircles(img_crop, cv2.HOUGH_GRADIENT,
                     dp=1, minDist=2, param1=70,
                     param2=12, minRadius=0, maxRadius=5)
c = np.uint16(np.around(c))
for i in c[0, :]:
    # draw the outer circle (adding the crop offset back to the y coordinate)
    cv2.circle(img, (i[0], i[1] + y_offset), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(img, (i[0], i[1] + y_offset), 2, (0, 0, 255), 3)
cv2.imshow('circle detected', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
I would appreciate any recommendations. Thank you once again.
I would recommend using a blob detection algorithm. Blob detection is the first stage of the SIFT descriptor and is fairly easy to implement: it finds blobs in the picture using the Laplacian of Gaussian (LoG) convolution, which can be approximated as a difference of Gaussians. A good explanation of scale space and how to implement blob detection is given here.
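As a minimal sketch of this idea, using skimage.feature.blob_log on the thresholded image from the question (the sigma range and threshold here are assumptions you would need to tune):

import cv2
import numpy as np
from skimage.feature import blob_log

# Reproduce the preprocessing from the question
img = cv2.imread('BD-2021.png')
gray = cv2.cvtColor(cv2.medianBlur(img, 5), cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY_INV)

# blob_log expects floats and looks for bright blobs on a dark background
blobs = blob_log(thresh / 255.0, min_sigma=1, max_sigma=6, num_sigma=10, threshold=0.1)

# Each row is (y, x, sigma); the blob radius is approximately sigma * sqrt(2)
for y, x, sigma in blobs:
    r = int(sigma * np.sqrt(2))
    cv2.circle(img, (int(x), int(y)), r, (0, 255, 0), 2)

cv2.imshow('blobs', img)
cv2.waitKey(0)
cv2.destroyAllWindows()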
Using the dlib library I was able to mask the mouth feature from one image (masked).
masked
Similarly, I have another cropped image of the mouth that does not have the mask (colorlip).
colorlip
I scaled and replaced the images (replaced) using np.where, as shown in the code below.
replaced
# pred_toblackscreen, roi, bbox_lip and bbox_mask come from the earlier
# dlib landmark/masking code (not shown)
# Get the values of the lip and the target mask
lip = pred_toblackscreen[bbox_lip[0]:bbox_lip[1], bbox_lip[2]:bbox_lip[3], :]
target = roi[bbox_mask[0]:bbox_mask[1], bbox_mask[2]:bbox_mask[3], :]
cv2.namedWindow('masked', cv2.WINDOW_NORMAL)
cv2.imshow('masked', target)
#Resize the lip to be the same scale/shape as the mask
lip_h, lip_w, _ = lip.shape
target_h, target_w, _ = target.shape
fy = target_h / lip_h
fx = target_w / lip_w
scaled_lip = cv2.resize(lip,(0,0),fx=fx,fy=fy)
cv2.namedWindow('colorlip', cv2.WINDOW_NORMAL)
cv2.imshow('colorlip', scaled_lip)
update = np.where(target==[0,0,0],scaled_lip,target)
cv2.namedWindow('replaced', cv2.WINDOW_NORMAL)
cv2.imshow('replaced', update)
But the feature shape (lip) in 'colorlip' does not match the 'masked' image. There is a misalignment, and the edges of the mask look sharp, as if the image has simply been overlaid. How can I solve this problem, and how can I make the final replaced image look more subtle and natural?
**Update #2:** OpenCV image inpainting to smooth jagged borders.
OpenCV's Python inpainting should help with the rough borders. The border location can be found using the mouth landmark model, a mouth segmentation mask from a DL model, or whatever was used originally. From that, draw a border of a small chosen width around the mouth contour in a new image and use it as the mask for inpainting; a sketch of drawing such a mask is shown below. The masks I provided need to be inverted to work.
Of the input masks, one is wide, one has a shadow, and the last one is narrow. The six output images were generated with radius values of 5 and 20 for all three masks.
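Creating the border mask is not part of the code below; here is a minimal sketch of drawing one from a mouth contour, where mouth_points is a hypothetical Nx2 array standing in for whatever landmark or segmentation output you have:

import cv2
import numpy as np

# mouth_points: hypothetical (x, y) mouth-contour landmarks
mouth_points = np.array([[120, 200], [160, 190], [200, 200],
                         [200, 230], [160, 240], [120, 230]], dtype=np.int32)

h, w = 400, 400  # assumed image size
mask = np.zeros((h, w), dtype=np.uint8)

# Draw only the contour line with a small thickness, so inpainting is
# restricted to a thin band around the mouth border. White pixels mark
# the region to fill, so this mask needs no inversion before cv2.inpaint.
cv2.drawContours(mask, [mouth_points.reshape(-1, 1, 2)], -1, 255, thickness=5)

cv2.imwrite('images/my_border_mask.png', mask)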
Code
import numpy as np
import cv2

img = cv2.imread('images/lip_img.png')
# mask = cv2.imread('images/lip_img_border_mask.png', 0)
mask = cv2.imread('images/lip_img_border_mask2.png', 0)
# mask = cv2.imread('images/lip_img_border_mask3.png', 0)

# cv2.inpaint expects non-zero mask pixels to mark the region to fill
mask = np.invert(mask)

# Choose an appropriate method and radius.
radius = 20
dst = cv2.inpaint(img, mask, radius, cv2.INPAINT_TELEA)
# dst = cv2.inpaint(img, mask, radius, cv2.INPAINT_NS)

cv2.imwrite('images/inpainted_lip.jpg', dst)
cv2.imshow('dst', dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
Input Image and Masks
Output Images
**Update #1:** Added deep image harmonization based blending methods.
Try OpenCV seamless cloning for a subtle replacement that gets rid of the sharp edges. Deep learning based image inpainting of the sharp corners, or combining it with seamless cloning, may provide better results.
Deep learning based image harmonization is another approach to blending two images so that the cropped part matches the style of the background image. Even in this case the pixel intensities will change to match the background, but the blending will be smoother. Links are added at the bottom of the post.
Example
This code example is based on the learnopencv seamless cloning example:
import cv2
import numpy as np

src = cv2.imread("images/src_img.jpg")
dst = cv2.imread("images/dest_img.jpg")

src_mask = cv2.imread("images/src_img_rough_mask.jpg")
src_mask = np.invert(src_mask)

cv2.namedWindow('src_mask', cv2.WINDOW_NORMAL)
cv2.imshow('src_mask', src_mask)
cv2.waitKey(0)

# Where to place the image.
center = (500, 500)

# Clone seamlessly.
output = cv2.seamlessClone(src, dst, src_mask, center, cv2.NORMAL_CLONE)

# Write the result.
cv2.imwrite("images/opencv-seamless-cloning-example.jpg", output)
cv2.namedWindow('output', cv2.WINDOW_NORMAL)
cv2.imshow('output', output)
cv2.waitKey(0)
Source Image
Rough Mask Image
Destination Image
Final Image
Reference
https://docs.opencv.org/4.5.4/df/da0/group__photo__clone.html
https://learnopencv.com/seamless-cloning-using-opencv-python-cpp/
https://learnopencv.com/face-swap-using-opencv-c-python/
https://github.com/JiahuiYu/generative_inpainting
https://docs.opencv.org/4.x/df/d3d/tutorial_py_inpainting.html
Deep Image Harmonization
https://github.com/bcmi/Image-Harmonization-Dataset-iHarmony4
https://github.com/wasidennis/DeepHarmonization
https://github.com/saic-vul/image_harmonization
https://github.com/wuhuikai/GP-GAN
https://github.com/junleen/RainNet
https://github.com/bcmi/BargainNet-Image-Harmonization
https://github.com/vinthony/s2am
I am using the following image.
I would like to calculate properties such as area, mean_intensity, and solidity of the maroon-colored region. I used some Python code to read the image and convert it to grayscale, then Otsu thresholding to convert it to a binary image:
import matplotlib.pyplot as plt
from skimage import io
from skimage.util import img_as_ubyte
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu

image = img_as_ubyte(rgb2gray(io.imread("1.jpg")))
plt.imshow(image, cmap='gray')

threshold = threshold_otsu(image)

# Generate the thresholded image
thresholded_img = image < threshold
plt.imshow(thresholded_img, cmap='gray')
After applying this little bit of code I got the following binary image.
I can see a few scattered black pixels around the solid area. First of all, I want to clear those, and then calculate the properties of my region of interest, which is black.
What would be the next step? I have seen measure.regionprops() in skimage, but I am not sure whether I can use it here.
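From my reading of the skimage docs, a minimal sketch of what I am after might look like this (the min_size and area_threshold cleanup values are guesses I would need to tune): remove the small specks with skimage.morphology, then pass the labeled mask, together with the grayscale image, to measure.regionprops:

from skimage import io, measure, morphology
from skimage.util import img_as_ubyte
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu

image = img_as_ubyte(rgb2gray(io.imread("1.jpg")))
mask = image < threshold_otsu(image)  # True where the dark (maroon) region is

# Remove small scattered specks and fill small holes
mask = morphology.remove_small_objects(mask, min_size=64)
mask = morphology.remove_small_holes(mask, area_threshold=64)

# Label connected components and measure them against the grayscale image
labels = measure.label(mask)
for region in measure.regionprops(labels, intensity_image=image):
    print(region.area, region.mean_intensity, region.solidity)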
Update: as @Fix suggested, I converted BGR to RGB, but the outputs are still not the same as the paper's output.
(Small note: I already posted this on https://dsp.stackexchange.com/posts/60670, but since I need help quickly I reposted it here; I hope this doesn't violate any policy.)
I tried to create a synthetic blurred image from a ground-truth image using PSF kernels (in PNG format). Some papers only mention that I need to convolve the two, but it seems I need more than that.
What I did
import matplotlib.pyplot as plt
import cv2 as cv
import scipy
from scipy import ndimage
import matplotlib.image as mpimg
import numpy as np
img = cv.imread('../dataset/text_01.png')
norm_image = cv.normalize(img, None, alpha=-0.1, beta=1.8, norm_type=cv.NORM_MINMAX, dtype=cv.CV_32F)
f = cv.imread('../matlab/uniform_kernel/kernel_01.png')
norm_f = cv.normalize(f, None, alpha=0, beta=1, norm_type=cv.NORM_MINMAX, dtype=cv.CV_32F)
result = ndimage.convolve(norm_image, norm_f, mode='nearest')
result = np.clip(result, 0, 1)
imgplot = plt.imshow(result)
plt.show()
And this only gives me an entirely white image.
I tried decreasing beta to a lower value, e.g. norm_f = cv.normalize(f, None, alpha=0, beta=0.03, norm_type=cv.NORM_MINMAX, dtype=cv.CV_32F), and then an image appears, but its colors are very different.
The paper I got the idea from, together with the dataset (images with ground truth and PSF kernels in PNG format), is here.
This is what they say:
We create the synthetic saturated images in a way similar to [3, 10].
Specifically, we first stretch the intensity range of the latent image
from [0,1] to [−0.1,1.8], and convolve the blur kernels with the
images. We then clip the blurred images into the range of [0,1]. The
same process is adopted for generating non-uniform blurred images.
These are some images I got from my source.
And this is the ground-truth image:
And this is the PSF kernel in PNG format file:
And this is their output (synthetic image):
Please help me out. Any solution is fine: a piece of software, another language, other tools. All I care about is ending up with a synthetic blurred image generated from the original (sharp) image and a PSF kernel, with good performance. (I tried this in Matlab with imfilter and ran into a similar problem; on top of that, Matlab is slow.)
(Please don't judge me for caring only about the output of this process. I am not using a deconvolution method to restore the blurred image to the original; I just want enough (original, blurred) dataset pairs to test my hypothesis/method.)
Thanks.
OpenCV reads/writes images in BGR format, while Matplotlib uses RGB. So if you want to display the right colours, you should first convert the image to RGB:
result_rgb = cv.cvtColor(result, cv.COLOR_BGR2RGB)
imgplot = plt.imshow(result_rgb)
plt.show()
Edit: You could convolve each channel separately and normalise your convolved image like this:
f = cv.cvtColor(f, cv.COLOR_BGR2GRAY)
norm_image = img / 255.0
norm_f = f / 255.0
result0 = ndimage.convolve(norm_image[:,:,0], norm_f)/(np.sum(norm_f))
result1 = ndimage.convolve(norm_image[:,:,1], norm_f)/(np.sum(norm_f))
result2 = ndimage.convolve(norm_image[:,:,2], norm_f)/(np.sum(norm_f))
result = np.stack((result0, result1, result2), axis=2).astype(np.float32)
Then you should get the right colors. Note, though, that this normalises both the image and the kernel to the range 0.0 to 1.0, rather than stretching the image to [-0.1, 1.8] as the paper suggests.
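If you want to follow the paper's procedure literally, here is a minimal sketch (file names taken from the question; normalising the kernel to sum to 1 is an assumption, made so the convolution preserves overall brightness): stretch the image to [-0.1, 1.8], convolve each channel with the PSF, then clip back to [0, 1]:

import cv2 as cv
import numpy as np
from scipy import ndimage

img = cv.imread('../dataset/text_01.png')
f = cv.imread('../matlab/uniform_kernel/kernel_01.png')

# Kernel: grayscale, normalised so its weights sum to 1
f = cv.cvtColor(f, cv.COLOR_BGR2GRAY).astype(np.float64)
f /= f.sum()

# Stretch the latent image from [0, 1] to [-0.1, 1.8] as the paper describes
norm_image = img.astype(np.float64) / 255.0
stretched = norm_image * 1.9 - 0.1

# Convolve each channel with the PSF, then clip back into [0, 1]
channels = [ndimage.convolve(stretched[:, :, c], f, mode='nearest') for c in range(3)]
result = np.clip(np.stack(channels, axis=2), 0, 1)

cv.imwrite('synthetic_blurred.png', (result * 255).astype(np.uint8))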
I'm using OpenCV to process some images, and one of the first steps I need to perform is increasing the image contrast on a color image. The fastest method I've found so far uses this code (where np is the numpy import) to multiply and add as suggested in the original C-based cv1 docs:
if self.array_alpha is None:
    self.array_alpha = np.array([1.25])
    self.array_beta = np.array([-100.0])

# add a beta value to every pixel
cv2.add(new_img, self.array_beta, new_img)

# multiply every pixel value by alpha
cv2.multiply(new_img, self.array_alpha, new_img)
Is there a faster way to do this in Python? I've tried using numpy's scalar multiply instead, but the performance is actually worse. I also tried using cv2.convertScaleAbs (the OpenCV docs suggested using convertTo, but cv2 seems to lack an interface to this function) but again the performance was worse in testing.
Simple arithmetic on numpy arrays is the fastest, as Abid Rahaman K commented.
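For the specific alpha/beta transform in the question, a minimal numpy sketch (the file name is hypothetical; the clip is done manually because plain uint8 arithmetic wraps around on overflow) looks like this:

import cv2
import numpy as np

img = cv2.imread('input.png')  # any 8-bit image

# contrast gain and brightness offset from the question
alpha, beta = 1.25, -100.0

# compute in float, then clip back into the valid 8-bit range
new_img = np.clip(alpha * img.astype(np.float32) + beta, 0, 255).astype(np.uint8)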
Use this image for example: http://i.imgur.com/Yjo276D.png
Here is a bit of image processing that resembles brightness/contrast manipulation:
'''
Simple and fast image transforms to mimic:
- brightness
- contrast
- erosion
- dilation
'''
import cv2
from pylab import array, plot, show, axis, arange, figure, uint8
# Image data
image = cv2.imread('imgur.png',0) # load as 1-channel 8bit grayscale
cv2.imshow('image',image)
maxIntensity = 255.0 # depends on dtype of image data
x = arange(maxIntensity)
# Parameters for manipulating image data
phi = 1
theta = 1
# Increase intensity such that
# dark pixels become much brighter,
# bright pixels become slightly bright
newImage0 = (maxIntensity/phi)*(image/(maxIntensity/theta))**0.5
newImage0 = array(newImage0,dtype=uint8)
cv2.imshow('newImage0',newImage0)
cv2.imwrite('newImage0.jpg',newImage0)
y = (maxIntensity/phi)*(x/(maxIntensity/theta))**0.5
# Decrease intensity such that
# dark pixels become much darker,
# bright pixels become slightly dark
newImage1 = (maxIntensity/phi)*(image/(maxIntensity/theta))**2
newImage1 = array(newImage1,dtype=uint8)
cv2.imshow('newImage1',newImage1)
z = (maxIntensity/phi)*(x/(maxIntensity/theta))**2
# Plot the figures
figure()
plot(x,y,'r-') # Increased brightness
plot(x,x,'k:') # Original image
plot(x,z, 'b-') # Decreased brightness
#axis('off')
axis('tight')
show()
# Close figure window and click on other window
# Then press any keyboard key to close all windows
closeWindow = -1
while closeWindow<0:
closeWindow = cv2.waitKey(1)
cv2.destroyAllWindows()
Original image in grayscale:
Brightened image that appears to be dilated:
Darkened image that appears to be eroded, sharpened, with better contrast:
How the pixel intensities are being transformed:
If you play with the values of phi and theta you can get really interesting outcomes. You can also implement this trick for multichannel image data.
--- EDIT ---
Have a look at the concepts of 'levels' and 'curves' in this YouTube video showing image editing in Photoshop. The equation for a linear transform creates the same amount, i.e. 'level', of change on every pixel. If you write an equation which can discriminate between types of pixels (e.g. those which are already of a certain value), then you can change the pixels based on the 'curve' described by that equation.
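As an illustration of such a curve (a sketch; the gain and midpoint values are arbitrary), a sigmoid applied through a lookup table boosts mid-tone contrast while barely touching pixels that are already very dark or very bright:

import cv2
import numpy as np

img = cv2.imread('imgur.png', 0)  # same grayscale image as above

# Build a sigmoid curve as a 256-entry lookup table: pixels near the
# midpoint change the most, while the extremes barely move
x = np.arange(256, dtype=np.float32)
gain, midpoint = 10.0, 128.0
curve = 255.0 / (1.0 + np.exp(-gain * (x - midpoint) / 255.0))
lut = np.clip(curve, 0, 255).astype(np.uint8)

# Apply the curve to every pixel in one call
out = cv2.LUT(img, lut)
cv2.imwrite('sigmoid_contrast.png', out)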
Try this code:
import cv2
img = cv2.imread('sunset.jpg', 1)
cv2.imshow("Original image",img)
# CLAHE (Contrast Limited Adaptive Histogram Equalization)
clahe = cv2.createCLAHE(clipLimit=3., tileGridSize=(8,8))
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB) # convert from BGR to LAB color space
l, a, b = cv2.split(lab) # split on 3 different channels
l2 = clahe.apply(l) # apply CLAHE to the L-channel
lab = cv2.merge((l2,a,b)) # merge channels
img2 = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR) # convert from LAB to BGR
cv2.imshow('Increased contrast', img2)
#cv2.imwrite('sunset_modified.jpg', img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Sunset before:
Sunset after increased contrast:
Use the cv2.addWeighted function. It will be faster than any of the other methods presented so far. It's designed to work on two images:
dst = cv2.addWeighted(src1, alpha, src2, beta, gamma[, dst[, dtype]])
But if you use the same image twice and set beta to zero, you can get the effect you want:
dst = cv2.addWeighted(src1, alpha, src1, 0, gamma)
The big advantage of using this function is that you do not have to worry about what happens when values go below 0 or above 255. With numpy, you have to figure out how to do all of the clipping yourself. The OpenCV function does all of the clipping for you, and it's fast.
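A minimal runnable sketch (the file name and the alpha/gamma values here are arbitrary):

import cv2

img = cv2.imread('input.png')  # hypothetical input file

alpha, gamma = 1.25, -100.0  # contrast gain and brightness offset

# Same image passed twice with beta=0: dst = alpha*img + gamma,
# automatically saturated (clipped) into the 0..255 range
contrasted = cv2.addWeighted(img, alpha, img, 0, gamma)

cv2.imwrite('contrasted.png', contrasted)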