Image processing: find image tearing (Python)

I'm looking for a method to detect when an image has a tear in the data.
All I could think of is scanning vertically, pixel by pixel, and flagging major changes in the data.
Example of tearing in an image (the tears are always horizontal):
Any suggestion will be helpful.
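For concreteness, a minimal sketch of that row-scanning idea: since tears are horizontal, a tear shows up as an unusually large jump between adjacent pixel rows. The filename and the outlier threshold are assumptions to tune.
import numpy as np
import cv2

img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)  # placeholder filename
row_diff = np.abs(np.diff(img, axis=0)).mean(axis=1)  # mean absolute change between adjacent rows
threshold = row_diff.mean() + 4 * row_diff.std()      # flag rows that change far more than usual
tear_rows = np.where(row_diff > threshold)[0]
print('possible tear boundaries at rows:', tear_rows)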

To solve a similar issue I was having with my images, I was able to filter them using the standard deviation of the Laplacian. If your untorn images are otherwise similar to the torn ones, you may be able to differentiate between them and discard the images with a standard deviation above some value. Other edge-detection algorithms such as Canny may work as well.
A simple implementation that reads an image and calculates the standard deviation of its Laplacian can be written with OpenCV in Python:
import cv2 as cv

ddepth = cv.CV_16S  # desired depth of the destination image
kernel_size = 3     # aperture size used to compute the second-derivative filters
path = r"C:\Your\filepath\here"         # path to the image file
img = cv.imread(path, cv.IMREAD_COLOR)  # read the image
std = cv.Laplacian(img, ddepth,
                   ksize=kernel_size).std()  # std of the Laplacian; if too high, possible tearing
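As a hedged variant of the same idea using Canny instead of the Laplacian (the hysteresis thresholds and the decision rule are assumptions you would tune):
import cv2 as cv

img = cv.imread(r"C:\Your\filepath\here", cv.IMREAD_GRAYSCALE)
edges = cv.Canny(img, 100, 200)     # hysteresis thresholds are assumptions; tune them
edge_fraction = (edges > 0).mean()  # fraction of pixels marked as edges
# an unusually high edge fraction may likewise indicate a torn image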
I am sure there are better approaches to this problem, so hopefully you will get other answers as well.

Related

Bokeh-like Blur with Mask as Intensity of Blur Radius

I know how to Gaussian-blur with Pillow, but I can't figure out how to vary the blur by the intensity of a mask.
I am using the MiDaS package to produce depth maps from 2D images. What I want to do is blur the original image according to the depth mask, as a pseudo depth of field.
Here is a visual demonstration of the result I'm after, with CV2 or Pillow (I'm not sure which one can do what I'm after).
Note: I'm sorry if this is considered junk; I've sat on this question for a month. I tried scouring the net for something like this, and all I found was Poor Man's Portrait Mode, which I could not get to work, and which would also reproduce depth maps when I already have them from my script, used for the 3D image creation.
Edit:
I did come up with this, using Image.composite. Not sure why I didn't take note of it before. Though I have to say, the results aren't too great; I think I really do need to emulate some sort of shaped blur like bokeh.
from PIL import Image, ImageFilter

sharpen = 3
boxBlur = 5

oimg = Image.open('2.png').convert('RGB')
width, height = oimg.size
mimg = Image.open('2_depth.png').resize((width, height)).convert('L')

bimg = oimg.filter(ImageFilter.BoxBlur(int(boxBlur)))
bimg = bimg.filter(ImageFilter.BLUR)
for i in range(sharpen):
    bimg = bimg.filter(ImageFilter.SHARPEN)

rimg = Image.composite(oimg, bimg, mimg)
Basically: get your image and mask, and ensure the mask matches the image (I had an issue where the images didn't match even though they were the same size, just because they had been saved differently; saving both the same way fixed it).
Blur your image into a new variable however you like (Gaussian, etc.; Gaussian was too soft for me), and add whatever extra filtering you want.
Composite the results together, using the depth map as the mask for the composite.
Note: If someone knows how to achieve a different sort of blur that mimics bokeh, I'd like to know, and I have adjusted the question title accordingly. I read about a disc blur but couldn't find anything for PIL/CV2.
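For what it's worth, convolving with a disc (circular) kernel is one common way to approximate bokeh; here is a minimal sketch with OpenCV, where the filename and radius are assumptions:
import cv2
import numpy as np

def disc_kernel(radius):
    # circular averaging kernel; convolving with it approximates the
    # disc-shaped point spread of an out-of-focus lens (bokeh)
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (x**2 + y**2 <= radius**2).astype(np.float32)
    return kernel / kernel.sum()

img = cv2.imread('2.png')                      # filename assumed from the question
bokeh = cv2.filter2D(img, -1, disc_kernel(7))  # radius 7 is an assumption
cv2.imwrite('2_bokeh.png', bokeh)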
I've got only a brute-force solution with iteration over pixels: variable blur intensity.
My code works, but not as efficiently as I would like.
You can try it: open your image as the input and put your depth map in the variable blur_map.
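As a rough sketch of the same idea without per-pixel iteration, you can blend a few pre-blurred copies, choosing the level per pixel from the depth map. Filenames, the number of levels, the blur radii, and the depth orientation are assumptions.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open('2.png').convert('RGB')                            # filenames assumed
blur_map = Image.open('2_depth.png').convert('L').resize(img.size)

levels = [img,
          img.filter(ImageFilter.GaussianBlur(3)),
          img.filter(ImageFilter.GaussianBlur(8))]  # radii are assumptions
arrs = [np.asarray(l, dtype=np.float32) for l in levels]

depth = np.asarray(blur_map, dtype=np.float32) / 255.0  # 0 = sharp, 1 = most blurred (assumed)
idx = np.minimum((depth * len(levels)).astype(int), len(levels) - 1)

out = arrs[0].copy()
for i in range(1, len(arrs)):
    mask = idx == i
    out[mask] = arrs[i][mask]  # pick the blur level chosen by the depth map
Image.fromarray(out.astype(np.uint8)).save('variable_blur.png')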

How to remove noise from an image using Pillow?

I am trying to de-noise an image that I've made, in order to read the numbers on it using Tesseract.
Noisy image:
Is there any way to do so? I am kind of new to image manipulation.
from PIL import ImageFilter

# im is the noisy image, e.g. im = Image.open(...)
im1 = im.filter(ImageFilter.BLUR)          # simple blur
im2 = im.filter(ImageFilter.MinFilter(3))  # 3x3 minimum filter
im3 = im.filter(ImageFilter.MinFilter)     # equivalent: MinFilter defaults to size 3
The Pillow library provides the ImageFilter module that can be used to enhance images. Per the documentation:
The ImageFilter module contains definitions for a pre-defined set of filters, which can be used with the Image.filter() method.
These filters work by passing a window or kernel over the image and computing some function of the pixels in that box to modify the pixels (usually the central pixel).
The MedianFilter seems to be widely used and resembles the description given in nishthaneeraj's answer.
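A minimal sketch of that approach (the filenames are placeholders):
from PIL import Image, ImageFilter

im = Image.open('noisy.png')                     # placeholder filename
im_med = im.filter(ImageFilter.MedianFilter(3))  # 3x3 median suppresses salt-and-pepper noise
im_med.save('denoised.png')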
You should read the Pillow documentation:
https://pillow.readthedocs.io/en/stable/
Pillow ImageFilter module:
https://pillow.readthedocs.io/en/stable/reference/ImageFilter.html#module-PIL.ImageFilter
How do you remove noise from an image in Python?
The mean filter is used to blur an image in order to remove noise. It involves determining the mean of the pixel values within an n x n kernel; the pixel intensity of the center element is then replaced by the mean. This eliminates some of the noise in the image and smooths its edges.
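In Pillow, a 3x3 mean filter corresponds to BoxBlur with radius 1 (the window is 2*radius+1 pixels wide); a minimal sketch with placeholder filenames:
from PIL import Image, ImageFilter

im = Image.open('noisy.png')                 # placeholder filename
im_mean = im.filter(ImageFilter.BoxBlur(1))  # radius 1 -> 3x3 box, i.e. a 3x3 mean filter
im_mean.save('denoised_mean.png')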

Shape detection

I have tried 3 algorithms:
Comparison with compare_ssim (structural similarity).
Difference detection with PIL (ImageChops.difference).
Image subtraction with OpenCV.
The first algorithm:
from skimage.metrics import structural_similarity as compare_ssim  # in older scikit-image: skimage.measure.compare_ssim

(score, diff) = compare_ssim(img1, img2, full=True)
diff = (diff * 255).astype("uint8")
The second algorithm:
from PIL import Image, ImageChops

img1 = Image.open("canny1.jpg")
img2 = Image.open("canny2.jpg")
diff = ImageChops.difference(img1, img2)
if diff.getbbox():
    diff.show()
The third algorithm:
image3 = cv2.subtract(image1, image2)
The problem is that these algorithms are very sensitive: if the images contain different noise, they report the two images as totally different. Any ideas how to fix that?
These pictures differ in many ways (deformation, lighting, colors, shape), and simple image processing just cannot handle all of this.
I would recommend a higher-level method that tries to extract the geometry and color of those tubes, in the form of a simple geometric graph, and then compares the graphs rather than the images.
I acknowledge that this is easier said than done and will only work with this particular kind of scene.
It is very difficult to help since we don't really know which parameters you can change. Can you keep your camera fixed? Will it always be just about tubes? What about the tubes' colors?
Nevertheless, I think what you are looking for is a framework for image registration, and I propose you use SimpleElastix. It is mainly used for medical images, so you might have to get familiar with the SimpleITK library. What's interesting is that you have a lot of parameters to control the registration. I think you will have to look into the documentation to find out how to control the specific image frequency that creates the waves and deforms the images. In the example below I did not configure it to allow enough local distortion; you'll have to find the best trade-off, but I think it should be flexible enough.
Anyway, you can get the following result with the code below. I don't know if it helps; I hope so:
import numpy as np
import cv2
import SimpleITK as sitk

# Read the fixed and moving images
fixedImage = sitk.ReadImage('1.jpg', sitk.sitkFloat32)
movingImage = sitk.ReadImage('2.jpg', sitk.sitkFloat32)

# Configure an affine registration followed by a b-spline (non-rigid) one
elastixImageFilter = sitk.ElastixImageFilter()
affine_registration_parameters = sitk.GetDefaultParameterMap('affine')
affine_registration_parameters["NumberOfResolutions"] = ['6']
affine_registration_parameters["WriteResultImage"] = ['false']
affine_registration_parameters["MaximumNumberOfSamplingAttempts"] = ['4']

parameterMapVector = sitk.VectorOfParameterMap()
parameterMapVector.append(affine_registration_parameters)
parameterMapVector.append(sitk.GetDefaultParameterMap("bspline"))

elastixImageFilter.SetFixedImage(fixedImage)
elastixImageFilter.SetMovingImage(movingImage)
elastixImageFilter.SetParameterMap(parameterMapVector)
elastixImageFilter.Execute()

# Absolute difference between the registered image and the fixed image
registeredImage = elastixImageFilter.GetResultImage()
transformParameterMap = elastixImageFilter.GetTransformParameterMap()
resultImage = sitk.Subtract(registeredImage, fixedImage)
resultImageNp = np.sqrt(sitk.GetArrayFromImage(resultImage) ** 2)

cv2.imwrite('gray_1.png', sitk.GetArrayFromImage(fixedImage))
cv2.imwrite('gray_2.png', sitk.GetArrayFromImage(movingImage))
cv2.imwrite('gray_2r.png', sitk.GetArrayFromImage(registeredImage))
cv2.imwrite('gray_diff.png', resultImageNp)
Your first image resized to 256x256:
Your second image:
Your second image registered with the first one:
Here is the difference between the first and second image which could show what's different:
This is one of the classical problems of image processing, and one that does not have a universally valid answer. The possible answers depend highly on what type of images you have and what type of information you want to extract from them and from the differences between them.
You can reduce noise by two means:
a) Take several images of the same object, such that the object does not change. You can stack the images, and noise is reduced by the square root of the number of images (see the sketch below).
b) Run a blur filter over the image. The more you blur, the more noise is averaged out; noise is here reduced by the square root of the number of pixels you average over, but so is detail in the image.
In both cases (a) and (b), you run the difference analysis after applying either method.
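A minimal sketch of option (a), assuming a static scene and placeholder filenames:
import numpy as np
import cv2

paths = ['shot_0.png', 'shot_1.png', 'shot_2.png']  # placeholder filenames
stack = np.stack([cv2.imread(p).astype(np.float32) for p in paths])
mean_img = stack.mean(axis=0)  # uncorrelated noise falls off roughly as 1/sqrt(N)
cv2.imwrite('stacked.png', mean_img.astype(np.uint8))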
Probably not applicable to you, as you likely cannot get hold of either: it helps if you can obtain flat-fields, which capture the inhomogeneity of illumination and the pixel sensitivity of your camera, and allow correcting the images prior to any processing. The same goes for dark-fields, which give an estimate of the camera's read-out noise and allow correcting for it.
There is also a third, more high-level option: run your object analysis first, at a detailed-enough level, and compare the results.

Denoise and filter an image

I am doing license-plate recognition. I have cropped out the plate, but it is very blurred, so I cannot split out the digits/characters and recognize them.
Here is my image:
I have tried to denoise it using scikit-image functions.
First, import the libraries:
import cv2
from skimage import restoration
from skimage.filters import threshold_otsu
from skimage.morphology import closing, square
Then I read the image and convert it to grayscale:
image = cv2.imread("plate.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
I try to remove the noise:
denoise = restoration.denoise_tv_chambolle(image, weight=0.1)
thresh = threshold_otsu(denoise)
bw = closing(denoise > thresh, square(2))
What I got is:
As you can see, all the digits are mixed together, so I cannot separate them and recognize the characters one by one.
What I expect is something like this (I drew it):
How can I filter the image better? Thank you.
=====================================================================
UPDATE:
After using skimage.morphology.erosion, I got:
First, this image seems to be degraded more by blur than by noise, so there is no good reason to denoise it; try deblurring instead.
The simplest approach would be inverse filtering or even Wiener filtering. Then you'll need to separate the image's background from the letters, by luminosity level, for example with the watershed algorithm. Then you'll get separate letters, which you can pass through a classifier, for example one based on neural networks (even a simplistic feed-forward net would be OK).
And then you'll finally get the textual representation. That's how such recognition is usually done.
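A minimal sketch of the Wiener-filtering step alone, using scikit-image and assuming a uniform 5x5 PSF (the real PSF is unknown here and would need estimating; the balance parameter is an assumption to tune):
import numpy as np
from skimage import color, io, restoration

img = color.rgb2gray(io.imread('plate.jpg'))           # filename from the question
psf = np.ones((5, 5)) / 25                             # assumed uniform 5x5 PSF
deblurred = restoration.wiener(img, psf, balance=0.1)  # balance trades sharpness vs. noise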
There's a good book by Gonzalez & Woods (Digital Image Processing); try looking for a detailed explanation there.
I concur with the opinion that you should probably try to optimize the input image quality.
Number-plate blur is a typical example of motion blur. How well you can deblur depends on how big or small the blur radius is; generally, the greater the speed of the vehicle, the larger the blur radius, and therefore the more difficult it is to restore.
A simple solution that somewhat works is de-interlacing of images. Here I have dropped every alternate line and resized the image to half its size using PIL/Pillow; this is what I get (note that it is only slightly more readable than your input image):
from PIL import Image

img = Image.open("license.jpeg")
size = list(img.size)
size[0] //= 2  # integer division so resize() receives ints
size[1] //= 2
smaller_image = img.resize(size, Image.NEAREST)
smaller_image.save("smaller_image.png")
The next and more formal approach is deconvolution. Since blurring is achieved by convolving the image, deblurring requires the inverse operation, i.e. deconvolution of the image. There are various kinds of deconvolution algorithms, such as Wiener deconvolution, the Richardson-Lucy method, the Radon transform, and a few types of Bayesian filtering.
You can apply the Wiener deconvolution algorithm using this code; play with the angle, diameter and signal-to-noise ratio and see if it provides some improvement.
The skimage.restoration module also provides implementations of both unsupervised_wiener and richardson_lucy deconvolution. In the code below I have shown both implementations, but you will have to modify the psf to see which one suits better.
import numpy as np
import matplotlib.pyplot as plt
import cv2
from skimage import color, restoration

img = cv2.imread('license.jpg')
licence_grey_scale = color.rgb2gray(img)
psf = np.ones((5, 5)) / 25  # assumed uniform 5x5 blur kernel

# comment/uncomment the next two lines one at a time to compare
# unsupervised_wiener and richardson_lucy deconvolution
deconvolved, _ = restoration.unsupervised_wiener(licence_grey_scale, psf)
deconvolved = restoration.richardson_lucy(licence_grey_scale, psf)

fig, ax = plt.subplots()
plt.gray()
ax.imshow(deconvolved)
ax.axis('off')
plt.show()
Unfortunately, most of these deconvolution algorithms require you to know the blur kernel (a.k.a. the Point Spread Function, PSF) in advance. Since you do not know the PSF here, you will have to use blind deconvolution, which attempts to estimate the original image without any knowledge of the blur kernel.
I have not tried this with your image, but here is a Python implementation of a blind deconvolution algorithm:
https://github.com/alexis-mignon/pydeconv
Note that an effective general-purpose blind deconvolution algorithm has not yet been found; this is an active field of research.
Chan-Vese binarization with an image-enhanced binarized kernel gave me this result; it helps to highlight the 4, 8, 1 and 2. I guess you need to do a separate convolution with each character, and if the peak of the convolution is higher than a threshold, we can assume that letter is present at the location of the peak. To take care of distortion, you would need to do the convolution with a few different fonts of a given character.
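A hedged sketch of that per-character matching idea, using OpenCV's normalized cross-correlation (the filenames, the template image, and the threshold are assumptions):
import cv2

plate = cv2.imread('plate.png', cv2.IMREAD_GRAYSCALE)          # placeholder filenames
template = cv2.imread('template_K.png', cv2.IMREAD_GRAYSCALE)  # one character, one font
res = cv2.matchTemplate(plate, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)
if max_val > 0.6:  # threshold is an assumption; tune per font and noise level
    print('character found near', max_loc)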
Another potential improvement uses a derivative filter and a little Gaussian smoothing; the K and X are not as distorted as in the previous result.

Suggestions for detecting straight lines on a sand plate (Python)

I am working on a project that requires detecting lines on a plate of sand. The lines are hand-drawn by the user, so they are not exactly "straight" (see photo), and because of the sand, the lines are quite hard to distinguish.
I tried cv2.HoughLines from OpenCV but didn't achieve good results. Any suggestions on the detection method? Suggestions to improve the clarity of the lines are also welcome; I am thinking of putting a few LED lights around the plate.
Thanks
The detection method depends a lot on how much generality you require: is the exposure and contrast going to change from one image to another? Is the typical width of the lines going to change? In the following, I assume that such parameters do not vary much for your application; please correct me if I'm wrong.
I'll be using scikit-image, a common image processing package for Python. If you're not familiar with this package, documentation can be found on http://scikit-image.org/, and the package is bundled with all installations of Scientific Python. However, the algorithms that I use are also available in other tools, like OpenCV.
My solution is written below. Basically, the principle is:
first, denoise the image. Life is usually simpler after a denoising step. Here I use a total-variation filter, since it results in a piecewise-constant image that is easier to threshold; I then enhance dark regions using a morphological erosion (on the gray-level image).
then apply an adaptive threshold that varies locally in space, since the contrast varies across the image. This operation results in a binary image.
erode the binary image to break spurious links between regions, and keep only large regions.
compute a measure of the elongation of the regions to keep only the most elongated ones. Here I use the ratio of the eigenvalues of the inertia tensor.
The parameters that are most difficult to tune are the block size for the adaptive thresholding and the minimum size of regions to keep. I also tried a Canny filter on the denoised image (skimage.filters.canny) and the results were quite good, although the edges were not always closed; you might nevertheless want to try such an edge-detection method as well.
The result is shown below:
# Import modules
import numpy as np
from skimage import io, measure, morphology, restoration, filters
from skimage import img_as_float
import matplotlib.pyplot as plt

# Open the image
im = io.imread('sand_lines.png')
im = img_as_float(im)

# Denoising
tv = restoration.denoise_tv_chambolle(im, weight=0.4)
ero = morphology.erosion(tv, morphology.disk(5))

# Threshold the image (threshold_adaptive was renamed threshold_local in
# newer scikit-image, which instead returns the threshold values:
# binary = ero > filters.threshold_local(ero, 181))
binary = filters.threshold_adaptive(ero, 181)

# Clean the binary image
binary = morphology.binary_dilation(binary, morphology.disk(8))
clean = morphology.remove_small_objects(np.logical_not(binary), 4000)
labels = measure.label(clean, background=0) + 1

# Keep only elongated regions
props = measure.regionprops(labels)
eigvals = np.array([prop.inertia_tensor_eigvals for prop in props])
eigvals_ratio = eigvals[:, 1] / eigvals[:, 0]
eigvals_ratio = np.concatenate(([0], eigvals_ratio))
color_regions = eigvals_ratio[labels]

# Plot the result
plt.figure()
plt.imshow(color_regions, cmap='spectral')
