Binarization of an image in OpenCV - Python

I'm having a problem with the binarization of an image (which may just be blurry in general).
I have this image:
and after binarization I get this:
How can I get a better binarization? My goal is to have just a black background and white letters, nothing else. I used adaptive threshold binarization:
cv2.adaptiveThreshold(image_gs, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 41, 3)
and I also have
kernel = np.ones((1, 1))
Does anyone have an idea how to do that?

You should try deblurring methods; see these:
Deblurring image by deconvolution using opencv
Experiments with deblurring using OpenCV
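Both of those ultimately amount to deconvolving the image with an estimated point-spread function (PSF). As a rough illustration of the idea (not the code from those posts), here is a minimal Wiener-style deconvolution using NumPy's FFT; the Gaussian PSF and the noise constant K are pure assumptions you would have to estimate for your own image:

import cv2
import numpy as np

def wiener_deconvolve(blurred, psf, K=0.01):
    # Frequency-domain Wiener deconvolution: F = G * conj(H) / (|H|^2 + K).
    psf_padded = np.zeros_like(blurred, dtype=np.float64)
    ph, pw = psf.shape
    psf_padded[:ph, :pw] = psf
    # Centre the PSF at the origin so the restored image is not shifted.
    psf_padded = np.roll(psf_padded, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    G = np.fft.fft2(blurred.astype(np.float64))
    H = np.fft.fft2(psf_padded)
    F = G * np.conj(H) / (np.abs(H) ** 2 + K)
    restored = np.real(np.fft.ifft2(F))
    return np.clip(restored, 0, 255).astype(np.uint8)

# Assumption: the blur was roughly a 5x5 Gaussian.
image_gs = cv2.imread("blurred.png", cv2.IMREAD_GRAYSCALE)
psf = cv2.getGaussianKernel(5, 0) @ cv2.getGaussianKernel(5, 0).T
restored = wiener_deconvolve(image_gs, psf, K=0.01)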

Try out the following:
1. De-noise your image first, using a Median, Bilateral, Gaussian, or Adaptive Smooth filter (a Gaussian filter works pretty well for images with textual content).
2. De-blur the image by referring to http://www.pyimagesearch.com/2015/09/07/blur-detection-with-opencv/ or https://github.com/tvganesh/deconv
3. Check out Adaptive Gaussian thresholding instead. In case it's a scene-text image, you can use Otsu's algorithm after shadow removal (a rough sketch combining steps 1 and 3 follows below).
The 'Image Processing in OpenCV' tutorials have detailed documentation on Image Thresholding.
The Image Filtering section of the OpenCV 3.0.0-dev documentation explains the implementation of the median blur applied to an image.
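A minimal sketch of steps 1 and 3, assuming a greyscale input image; the kernel size, block size, and constants are values you would still need to tune for your own image:

import cv2

# Load the image in greyscale (the filename is just a placeholder).
image_gs = cv2.imread("text.png", cv2.IMREAD_GRAYSCALE)

# Step 1: de-noise. A Gaussian blur tends to work well for textual content;
# a median or bilateral filter is worth trying as well.
denoised = cv2.GaussianBlur(image_gs, (3, 3), 0)

# Step 3a: adaptive Gaussian thresholding instead of the mean variant.
binary_adaptive = cv2.adaptiveThreshold(
    denoised, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 41, 3)

# Step 3b: for scene text (after shadow removal), Otsu's global threshold is often enough.
_, binary_otsu = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)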

Related

How many Blur Filter or Deblur models/modules are available in OpenCV or other python libraries?

I am doing a project where I have to remove noise, reduce blur, and apply many other image-preprocessing steps to real-time video in order to enhance its quality. So first I broke the video down into frames, and then I wanted to apply all of these mechanisms.
So, my question is: which deblurring approach should I apply to get my desired result, or is there a Python library that would work better for this?
Maybe you could start with scikit-image's Image Deconvolution to "filter blur" and median filtering to remove salt-and-pepper noise.
Seeing some of your images would make it easier to help you further.
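For example, a rough sketch with scikit-image and SciPy, assuming you can at least guess a point-spread function for the blur; the PSF, the filter size, and the iteration count below are placeholders to tune:

import numpy as np
from scipy.ndimage import median_filter
from skimage import io
from skimage.restoration import richardson_lucy

# Load one video frame as greyscale in [0, 1] (the filename is a placeholder).
frame = io.imread("frame.png", as_gray=True)

# Median filtering removes salt-and-pepper noise.
denoised = median_filter(frame, size=3)

# Richardson-Lucy deconvolution with an assumed point-spread function
# (a flat 5x5 kernel here, purely as a stand-in for the real blur kernel).
psf = np.ones((5, 5)) / 25.0
deblurred = richardson_lucy(denoised, psf, 30)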

Inverse filtering a blurred image - Python

For an assignment, I have an image corrupted by atmospheric turbulence. I want to deblur it using inverse image filtering. I have done some research, and it seems I need the original image for this procedure, but I only have the blurred image. How can I construct the degradation function that was used to blur this image? I am not allowed to use the original image. Thank you in advance.
This is the image:
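For reference, one common textbook approach (e.g. Gonzalez & Woods) is to model atmospheric turbulence directly in the frequency domain as H(u, v) = exp(-k * (u^2 + v^2)^(5/6)) and then divide it out, cutting the inverse filter off at some radius so that it does not amplify noise where H is nearly zero. A rough sketch of that idea; the constant k and the cutoff radius are guesses you would have to tune against your image:

import cv2
import numpy as np

def inverse_filter_turbulence(blurred, k=0.0025, radius=70):
    # Textbook turbulence model: H(u, v) = exp(-k * (u^2 + v^2)^(5/6)).
    rows, cols = blurred.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)                  # centred frequency coordinates
    D2 = U ** 2 + V ** 2
    H = np.exp(-k * D2 ** (5.0 / 6.0))

    G = np.fft.fftshift(np.fft.fft2(blurred.astype(np.float64)))
    # Only divide by H near the DC term; further out H is nearly zero and
    # 1/H would amplify noise enormously.
    mask = D2 <= radius ** 2
    F = np.where(mask, G / H, G)
    restored = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
    return cv2.normalize(restored, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

blurred = cv2.imread("turbulence.png", cv2.IMREAD_GRAYSCALE)
restored = inverse_filter_turbulence(blurred, k=0.0025, radius=70)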

Dash blurred part of the image

Is it possible to dash the blurred part of the image?
Right now I am using Python with OpenCV. So far I only know how to load images and detect whether an image is blurred.
My input is a blurred image:
I would like to get:
I do not have the original/unblurred image.
The output can still contain blurred parts, as long as they are marked with dashed outlines.
Thanks a lot for your help!
You could try computing the "variance of the Laplacian" on parts of the image to detect which regions have low variation in greyscale (= assumed blurry) and which regions have high variation in greyscale (= assumed non-blurry).
There is a nice tutorial on how to check whether an image is blurry; it can be found here.
There is also a post here that explains the theory behind it.
It ain't a complete solution, but it might be a way to start.
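A rough sketch of that idea: split the image into tiles, compute the variance of the Laplacian per tile, and outline the tiles that fall below an (assumed, tune-it-yourself) threshold with dashed lines:

import cv2
import numpy as np

def draw_dashed_rect(img, p1, p2, dash=8, color=(0, 0, 255)):
    # Draw a dashed rectangle from short line segments along each edge.
    x1, y1 = p1
    x2, y2 = p2
    for x in range(x1, x2, 2 * dash):         # top and bottom edges
        cv2.line(img, (x, y1), (min(x + dash, x2), y1), color, 1)
        cv2.line(img, (x, y2), (min(x + dash, x2), y2), color, 1)
    for y in range(y1, y2, 2 * dash):         # left and right edges
        cv2.line(img, (x1, y), (x1, min(y + dash, y2)), color, 1)
        cv2.line(img, (x2, y), (x2, min(y + dash, y2)), color, 1)

def dash_blurry_tiles(image, tile=64, threshold=100.0):
    # Outline tiles whose variance of the Laplacian is below `threshold`.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    out = image.copy()
    h, w = gray.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = gray[y:y + tile, x:x + tile]
            if cv2.Laplacian(patch, cv2.CV_64F).var() < threshold:
                draw_dashed_rect(out, (x, y), (min(x + tile, w) - 1, min(y + tile, h) - 1))
    return out

image = cv2.imread("input.png")
cv2.imwrite("dashed.png", dash_blurry_tiles(image))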

Imitating the "magic wand" photoshop tool in OpenCV

I'm trying to isolate the sky region from a series of grayscale images in OpenCV. All of the images are fairly similar: the top of the image is always a sky region, and it is always a bright, gray-white colour. I've attempted contour-based approaches and written my own algorithm to extract the line of the horizon and divide the image into two masks accordingly. However, I've noticed that the magic wand tool in Photoshop is MUCH more accurate on this image set.
Here's the image that I'm processing:
and the result that I hope to achieve:
How can this be imitated in OpenCV?
I think what you're looking for is the GrabCut algorithm.
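A minimal GrabCut sketch; the rectangle here is only a guess that the sky occupies roughly the top part of the frame, which you would replace with something based on your horizon estimate:

import cv2
import numpy as np

image = cv2.imread("scene.png")               # grabCut needs a 3-channel image
h, w = image.shape[:2]

mask = np.zeros((h, w), np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Initial guess: treat the top ~40% of the frame as "probably sky".
rect = (0, 0, w - 1, int(h * 0.4))
cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked as definite or probable foreground form the sky mask.
sky_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
sky_only = cv2.bitwise_and(image, image, mask=sky_mask)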

Blur part of an Image and blend it with the Background

I need to blur faces to protect the privacy of people in street view images like Google does in Google Street View. The blur should not make the image aesthetically unpleasant. I read in the paper titled Large-scale Privacy Protection in Google Street View by Google (link) that Google does the following to blur the detected faces.
We chose to apply a combination of noise and aggressive Gaussian blur that we alpha-blend smoothly with the background starting at the edge of the box.
Can someone explain how to perform this task? I understand Gaussian Blur, but how to blend it with the background?
Code will be helpful but not required
My question is not how to blur a part of an image; it is how to blend the blurred portion with the background so that the blur is not unpleasant. Please refer to the quote I provided from the paper.
I have large images, and a lot of them. An iterative process like the one in the possible duplicate would be time-consuming.
EDIT
If someone ever wants to do something like this, I wrote a Python implementation. It isn't exactly what I was asking for but it does the job.
Link: pyBlur
I'm reasonably sure the general idea is:
Create a shape for the area you want to blur (say a rectangle).
Extend your shape by X pixels outwards.
Apply a gradient on alpha from 0.0 .. 1.0 (or similar) over the extended area.
Apply blur to the extended area (ignoring alpha).
Now use an alpha-blend to apply the modified image to the original image.
Adding noise similar to that of the original image would make it even less obvious that the region has been blurred (because the blur will of course also smooth away the original noise).
I don't know the exact parameters for how much to grow, what values to use for the alpha gradient, etc, but that's what I understand from the quoted text.
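A rough sketch of those steps for a single face box; the padding, blur strength, and noise level are guesses you would have to tune:

import cv2
import numpy as np

def soft_blur_box(image, box, pad=20, blur_ksize=31):
    # Blur `box` = (x, y, w, h) and alpha-blend it smoothly with the background.
    x, y, w, h = box
    h_img, w_img = image.shape[:2]

    # Steps 1-3: an alpha mask that is 1.0 inside the box and falls off to 0.0
    # over roughly `pad` pixels outside it (the "extended area").
    alpha = np.zeros((h_img, w_img), np.float32)
    alpha[y:y + h, x:x + w] = 1.0
    ksize = 2 * pad + 1
    alpha = cv2.GaussianBlur(alpha, (ksize, ksize), 0)

    # Step 4: aggressive Gaussian blur plus a little noise (blurring the whole
    # frame is wasteful, but it keeps the sketch short).
    blurred = cv2.GaussianBlur(image, (blur_ksize, blur_ksize), 0).astype(np.float32)
    blurred += np.random.normal(0, 5, image.shape).astype(np.float32)

    # Step 5: alpha-blend the blurred and original images.
    alpha = alpha[:, :, None]                  # broadcast over the colour channels
    out = alpha * blurred + (1.0 - alpha) * image.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

image = cv2.imread("street.jpg")
result = soft_blur_box(image, (120, 80, 60, 60))   # (x, y, w, h) of a detected face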
