Eliminate borders for automatic license plate recognition - Python

I'm working on an ALPR system. After the binarization step, the images end up with a border that prevents the OCR from reading the correct values.
As in this image:
How can I delete the black border and keep only the characters?
EDIT:
This is the input:
That's the quality of the images.

The resolution is at the limit of what is reasonable.
You can improve it a little by:
doubling the resolution (bilinear interpolation),
applying a top-hat morphological filter,
binarizing.
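
A minimal OpenCV sketch of those three steps might look like the following; the filename, kernel size, and the use of a white top-hat are assumptions (for dark characters on a light plate, swap in cv2.MORPH_BLACKHAT):

import cv2

plate = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input crop

# 1. Double the resolution with bilinear interpolation.
plate = cv2.resize(plate, None, fx=2, fy=2, interpolation=cv2.INTER_LINEAR)

# 2. Top-hat filter: keeps bright details smaller than the kernel (the characters)
#    and suppresses the larger uneven background and border.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 25))
tophat = cv2.morphologyEx(plate, cv2.MORPH_TOPHAT, kernel)

# 3. Binarize, here with Otsu's threshold.
_, binary = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("plate_clean.png", binary)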

Related

How do I segment a dirty 7-seg LCD with reflections?

The LCD display cannot be cleaned, the lighting conditions cannot be changed, and the background can be tricky, so I cannot use any kind of segmentation by color, search for rectangles, or Otsu thresholding. MSER doesn't give a good result. I even tried to locate the display relative to the "DEXP" logo, but the logo turned out to be too small to do this with sufficient accuracy. Bilateral filtering or Gaussian blur helps, but not much. Even supposing that I found the ROI, local thresholding gives results that are too noisy, and morphological transformations don't help. Is there a way to extract the digits for further OCR?

OCR image cleansing with Python OpenCV

I'm currently learning about computer vision and OCR. I have an image that needs to be scanned, and I'm facing a problem during the image cleansing.
I use OpenCV (cv2) in Python. This is the original image:
import cv2
image = cv2.imread(image_path)
cv2.imshow("imageWindow", image)
cv2.waitKey(0)
I want to clean the above image; the number in the middle (64) is the area I want to scan. However, the number gets cleaned away as well:
import numpy as np
image[np.where((image > [0, 0, 105]).all(axis=2))] = [255, 255, 255]
cv2.imshow("imageWindow", image)
cv2.waitKey(0)
What should I do to correct the cleansing here? I want to clean the screen area where the number 64 is located, because I will perform an OCR scan afterwards.
Please help, thank you in advance.
What you're trying to do is called "thresholding". It looks like your technique recolors pixels on one side of a fixed threshold, but the brightness of the LCD digits varies enough in that image to throw it off.
I'd spend some time reading about thresholding; here's a good starting place:
Thresholding in OpenCV with Python. You're probably going to need an adaptive technique (like Adaptive Gaussian Thresholding), but you may find other ways that work for your images.
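As a rough sketch of the adaptive route, reusing image_path from the question above (the block size and constant are guesses you would tune for your image):

import cv2

image = cv2.imread(image_path)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The threshold is computed per pixel from a local neighbourhood, so uneven
# LCD brightness matters less than with a single global cutoff.
clean = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY, 25, 10)

cv2.imshow("imageWindow", clean)
cv2.waitKey(0)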

OCR: Retrieve essential parts of original image after noise reduction with Python

I'm trying to denoise an image (photographed text) in order to improve OCR. I'm using Python with skimage for the task, but I'm open to other library recommendations (PIL, cv2, ...).
Example image (should read "i5"):
I used skimage.morphology.erosion and skimage.morphology.remove_small_objects quite successfully, resulting in:
The noise is gone, but so is part of the 5 and the dot on the i.
Now the question: I had an idea of how to repair the 5. I add the original image to the denoised one, so that some parts are black and some parts are gray:
Then I make all gray parts connected to black parts black (propagating over the structure). Finally, by erasing all parts which are still gray, I get a clean image.
But I don't know how to do the propagation part using one of the above libraries. Is there an algorithm for that?
Bonus question: How can I preserve the dot on the i?
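
That propagation step sounds like grayscale morphological reconstruction, which skimage provides. A minimal sketch, assuming the original and denoised images live in hypothetical files and are loaded as grayscale floats in [0, 1] with dark text on a light background:

import numpy as np
from skimage import io, util
from skimage.morphology import reconstruction

original = util.img_as_float(io.imread("i5_original.png", as_gray=True))   # hypothetical filenames
denoised = util.img_as_float(io.imread("i5_denoised.png", as_gray=True))

# Invert so the text is bright: the surviving strokes in `denoised` act as
# markers (the seed) that grow back inside the full strokes of `original` (the mask).
mask = util.invert(original)
seed = np.minimum(util.invert(denoised), mask)  # reconstruction requires seed <= mask

restored = util.invert(reconstruction(seed, mask, method='dilation'))

Note that this only brings back structures that still have at least one marker pixel after denoising, so the dot on the i would need a marker of its own (for instance, a less aggressive remove_small_objects threshold).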

Removing text while processing the image

I am working on an application where I need a feature like Cam Scanner, where a document is to be detected in an image. For that I am using Canny edge detection followed by a Hough transform.
The results look promising, but the text in the document is creating issues, as shown in the images below:
Original Image
After canny edge detection
After hough transform
My issue lies in the third image: the text near the bottom of the original image has forced the Hough transform to detect a horizontal line (second cluster from the bottom).
I know I can take the largest quadrilateral, and that would work fine in most cases, but I still want to know other ways in which I can ignore the effect of the text on the edges during this processing.
Any help would be appreciated.
I solved the text issue with the help of a median filter of size 15 (square kernel) on a 500x700 image.
The median filter doesn't affect the boundaries of the paper, but it can eliminate the text completely.
Using that, I was able to get much more effective boundaries.
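
A short sketch of that idea with OpenCV (the filename and Canny thresholds are assumptions):

import cv2

img = cv2.imread("document.jpg")

# A large median filter (15x15 on a roughly 500x700 image) wipes out the thin
# text strokes but leaves the strong paper boundary mostly intact.
no_text = cv2.medianBlur(img, 15)

gray = cv2.cvtColor(no_text, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)  # the edge map is now dominated by the paper outline
cv2.imwrite("edges.png", edges)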
Another approach you could try is to use thresholding to find the paper boundaries. This would create a binary image. You can then examine the blobs of white pixels and see if any are large enough to be the paper and have the right dimensions. If one fits the criteria, you can find the min/max points of that blob to represent the paper.
There are several ways to do the thresholding, including iterative, Otsu, and adaptive methods.
Also, for best results, you may have to dilate the binary image to close the black lines in the table shown in your example.
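
A rough sketch of that approach, assuming OpenCV 4 (the filename, area cutoff, and aspect-ratio criteria are placeholders to adjust):

import cv2

img = cv2.imread("document.jpg", cv2.IMREAD_GRAYSCALE)

# Otsu threshold -> binary image, then dilate to close thin dark lines (e.g. the table).
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
binary = cv2.dilate(binary, cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))

# Examine the white blobs and keep those large and page-shaped enough to be the paper.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if cv2.contourArea(c) > 0.2 * img.size and 0.5 < w / float(h) < 2.0:
        print("paper candidate (min/max points):", (x, y), (x + w, y + h))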

Blur part of an Image and blend it with the Background

I need to blur faces to protect the privacy of people in street view images like Google does in Google Street View. The blur should not make the image aesthetically unpleasant. I read in the paper titled Large-scale Privacy Protection in Google Street View by Google (link) that Google does the following to blur the detected faces.
We chose to apply a combination of noise and aggressive Gaussian blur that we alpha-blend smoothly with the background starting at the edge of the box.
Can someone explain how to perform this task? I understand Gaussian blur, but how do I blend it with the background?
Code would be helpful but is not required.
My question is not how to blur a part of an image; it is how to blend the blurred portion with the background so that the blur is not unpleasant. Please refer to the quote I provided from the paper.
I have large images, and a lot of them. An iterative process as in the possible duplicate would be time-consuming.
EDIT
If someone ever wants to do something like this, I wrote a Python implementation. It isn't exactly what I was asking for but it does the job.
Link: pyBlur
I'm reasonably sure the general idea is:
Create a shape for the area you want to blur (say a rectangle).
Extend your shape by X pixels outwards.
Apply a gradient on alpha from 0.0 .. 1.0 (or similar) over the extended area.
Apply a blur to the extended area (ignoring alpha).
Now use an alpha-blend to apply the modified image to the original image.
Adding noise to the result in a way similar to the original image would make it even less obvious that the region has been blurred (since the blur will, of course, also smooth away the image's original noise).
I don't know the exact parameters for how much to grow the shape, what values to use for the alpha gradient, etc., but that's what I understand from the quoted text.
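
A minimal sketch of those steps with OpenCV and NumPy; the face box, kernel sizes, noise level, and fade width are made-up values you would tune:

import cv2
import numpy as np

img = cv2.imread("street.jpg").astype(np.float32)
x, y, w, h = 220, 140, 80, 80      # hypothetical detected face box
grow = 20                          # how far the blend extends past the box

# Steps 1-2 + noise: heavily blurred (and slightly noisy) copy of the image.
blurred = cv2.GaussianBlur(img, (31, 31), 0)
blurred += np.random.normal(0, 5, img.shape).astype(np.float32)

# Step 3: alpha mask that is 1.0 inside the box and fades to 0.0 over `grow` pixels.
mask = np.zeros(img.shape[:2], np.float32)
mask[y:y + h, x:x + w] = 1.0
mask = cv2.GaussianBlur(mask, (2 * grow + 1, 2 * grow + 1), 0)
mask = mask[..., None]             # broadcast over the colour channels

# Steps 4-5: alpha-blend, blurred where the mask is 1, original where it is 0.
out = mask * blurred + (1.0 - mask) * img
cv2.imwrite("blurred_face.jpg", np.clip(out, 0, 255).astype(np.uint8))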
