Remove points which contain fewer than N pixels - python

I tried almost all the filters in PIL, but failed.
Is there any function in numpy or scipy to remove the noise, like bwareaopen() in Matlab?
For example:
PS: If there is also a way to fill the letters in with black, I will be grateful.

Numpy/Scipy can do morphological operations just as well as Matlab can.
See scipy.ndimage.morphology, which contains, among other things, binary_opening(), roughly the equivalent of Matlab's bwareaopen().
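For illustration, a minimal sketch of binary_opening on a toy binary array (the structuring-element size is a guess; in practice you would build the binary array from your thresholded image):
import numpy as np
from scipy import ndimage
# Toy binary image: one large blob plus two isolated noise pixels.
binary = np.zeros((20, 20), dtype=bool)
binary[5:15, 5:15] = True            # large object, survives the opening
binary[1, 1] = binary[18, 2] = True  # single-pixel specks, removed
# Opening with a 3x3 structuring element removes anything the element
# cannot fit inside, similar in spirit to bwareaopen for small specks.
cleaned = ndimage.binary_opening(binary, structure=np.ones((3, 3)))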

Numpy/Scipy solution: scipy.ndimage.morphology.binary_opening. A more powerful solution: use scikit-image.
from skimage import morphology
cleaned = morphology.remove_small_objects(YOUR_IMAGE, min_size=64, connectivity=2)
See http://scikit-image.org/docs/0.9.x/api/skimage.morphology.html#remove-small-objects
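Note that remove_small_objects expects a boolean (or labeled integer) array, so you would binarize first. A small self-contained sketch (the threshold and min_size values are just placeholders), which also renders the surviving letters in black as asked in the question:
import numpy as np
from skimage import morphology
# Toy grayscale image: a thick dark stroke on a light background plus noise specks.
gray = np.full((40, 40), 255, dtype=np.uint8)
gray[10:30, 10:15] = 0          # dark stroke of 100 pixels, kept
gray[2, 2] = gray[35, 37] = 0   # isolated dark specks, removed
binary = gray < 128             # True where the dark letters are
cleaned = morphology.remove_small_objects(binary, min_size=64, connectivity=2)
# Letters filled with black on a white background
out = np.where(cleaned, 0, 255).astype(np.uint8)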

I don't think this is exactly what you want, but this works (it uses OpenCV, which uses Numpy):
import cv2
# load the image as grayscale
fname = 'Myimage.jpg'
im = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
# blur the image
im = cv2.blur(im, (4, 4))
# apply a threshold (cv2.threshold returns a (retval, image) tuple)
_, im = cv2.threshold(im, 175, 250, cv2.THRESH_BINARY)
# show the image
cv2.imshow('', im)
cv2.waitKey(0)
Output (image in a window):
You can save the image using cv2.imwrite.
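For example (the output file name is arbitrary):
cv2.imwrite('cleaned.jpg', im)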

Related

Why skimage white_tophat is different from manually achieved top hat?

I am trying to remove a gradient background from an image using a morphological top-hat operation.
For this purpose I use the skimage morphology library functions (opening, white_tophat).
By definition, the white top-hat is: initial image - opened image.
In my code I compare the result of skimage's white_tophat function to a manually computed top-hat.
import numpy as np
from skimage import morphology
import cv2 as cv
img = cv.imread('images/TEST.jpg', cv.IMREAD_GRAYSCALE)
img_not = cv.bitwise_not(img)
se = np.ones((50,50), np.uint8)
opened = morphology.opening(img, se)
wth_my= img_not - opened
wth=morphology.white_tophat(img_not, se)
cv.imwrite('images/TEST_Opened.jpg', opened)
cv.imwrite('images/TEST_WTH_MY.jpg', wth_my)
cv.imwrite('images/TEST_WTH.jpg', wth)
cv.waitKey(0)
cv.destroyAllWindows()
Results are quite different (see screenshots). Please advise what's wrong in my code.
As you said, the top-hat filter is I - opening(I). You wrote:
opened = morphology.opening(img, se)
wth_my= img_not - opened
Note how one line uses img, and the other uses img_not. You need to apply the opening to img_not as well, so that it is the same image that the two parts of the operation work on.
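A minimal corrected sketch, reusing the variable names from the question's code:
opened_not = morphology.opening(img_not, se)   # open the same image that is subtracted
wth_my = img_not - opened_not                  # manual top-hat, now matching the definition
wth = morphology.white_tophat(img_not, se)     # should now agree with wth_my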

Extract car images without Mask RCNN

I want to extract car images without using Mask RCNN. I tried a couple of methods but couldn't decide how to proceed with any of them. I need a recommendation on which method would be best and how to go through with it.
Method 1 - Using XML files and a Haar cascade classifier
I was thinking of using the XML cascade files to detect and crop car images. The problems I faced were:
They only detect cars as rectangular boxes, while I needed the car itself cropped out. So ultimately I ended up with rectangular crops of cars, which didn't solve my problem.
The detections often didn't cover the car as a whole but only small parts of it, maybe due to the XML file's configuration.
My code:
!wget https://raw.githubusercontent.com/shaanhk/New-GithubTest/master/cars.xml
import numpy as np
import cv2
car_cascade = cv2.CascadeClassifier('cars.xml')
img = cv2.imread('im1.jpg')
cars = car_cascade.detectMultiScale(img, 1.1, 1)
for (x, y, w, h) in cars:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Resulting image:
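For reference, one way a rectangular detection could be refined into a car-only cutout is OpenCV's GrabCut. This is only a sketch: it assumes the img and cars variables from the cascade code above, that at least one box was detected, and the iteration count is arbitrary.
import numpy as np
import cv2
x, y, w, h = map(int, cars[0])                 # one detection box from the cascade
mask = np.zeros(img.shape[:2], np.uint8)
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, (x, y, w, h), bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)
# Keep definite and probable foreground, black out everything else
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
car_only = img * fg[:, :, None]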
Method 2 - Using Canny Edge Detection
I tried to perform Canny edge detection on the car. It worked to some extent, in that I managed to reduce the edges to mostly the car object, but I don't know how to proceed from there.
My code:
import cv2
import numpy as np
image = cv2.imread('im1.jpg')
imagecopy = np.copy(image)
# cv2.imread returns BGR, so convert from BGR (not RGB) to grayscale
grayimage = cv2.cvtColor(imagecopy, cv2.COLOR_BGR2GRAY)
canny = cv2.Canny(grayimage, 300, 150)
cv2.imshow('Highway Edge Detection Image', canny)
cv2.waitKey(0)
cv2.destroyAllWindows()
Resulting Image:
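For reference, a rough sketch of one common way to continue from an edge map like this: close the gaps, find contours, and crop the largest one. The kernel size and thresholds are guesses, and it assumes the car is the largest contour.
import cv2
import numpy as np
image = cv2.imread('im1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 150, 300)
# Close small gaps so the car outline becomes one connected blob
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
# [-2] keeps this working with both the 2-value and 3-value findContours signatures
contours = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
largest = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(largest)
car_crop = image[y:y + h, x:x + w]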
Method 3 - Extract car image using color gradients
While googling I found a method that uses an HSV transformation and then builds a custom mask to extract the cars. I don't know much about this method and have no idea how to go about it; I used the code provided and am posting it below.
Code:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
image = mpimg.imread('im1.jpg')
hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
# HSV channels
h = hsv[:,:,0]
s = hsv[:,:,1]
v = hsv[:,:,2]
background_hue = h[10,10]
lower_hue = np.array([background_hue-10,0,0])
upper_hue = np.array([background_hue+10,255,255])
mask = cv2.inRange(hsv, lower_hue, upper_hue)
# Mask the image to let the car show through
masked_image = np.copy(image)
masked_image[mask != 0] = [0, 0, 0]
cv2.imwrite('mask.jpg',masked_image)
# Display it!
plt.imshow(masked_image)
Image:
I'd like to mention that I'm a complete beginner in computer vision and am trying to learn by doing small projects like this. My code is probably very flawed, and hopefully I can improve it along the way. Please feel absolutely free to suggest any other method (except Mask RCNN) or point out any problems with the code.

Counting Objects in an image using OPENCV and Python

I'm currently trying to count the number of shrimp in a given image. I'm using this test image:
The code I have used so far is the following:
import cv2
import numpy as np
from matplotlib import pyplot as plt
#Load image
path = r'C:\Users\...' #the path to the image
original = cv2.imread(path, cv2.IMREAD_COLOR)
img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
#Histogram to help choose the binarization threshold
hist = cv2.calcHist([img], [0], None, [256], [0, 256])
#Apply the threshold
ret, thresh = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)
From this point I have tried different morphological transformations such as erode, dilate, open and close, but they don't seem to separate the objects as I want.
I've read that I can apply a watershed transform to separate touching elements, but I don't have experience with it (I'm working on that at the moment).
After that I am planning to use a Simple Blob Detector to count the blobs; I don't know if these steps are correct.
Any help is very welcome!
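For reference, the usual OpenCV watershed recipe looks roughly like this; it is only a sketch (the file name, kernel sizes and the 0.5 distance fraction are placeholders to tune):
import cv2
import numpy as np
img = cv2.imread('shrimps.jpg')                      # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
# Clean up specks, then estimate "sure foreground" from the distance transform
kernel = np.ones((3, 3), np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)
dist = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)
sure_bg = cv2.dilate(opening, kernel, iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)
# Each sure-foreground blob becomes a seed; watershed then splits touching shrimp
n_markers, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)
count = len(np.unique(markers)) - 2                  # drop the background and boundary labels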

Why does pytesseract fail to recognise digits from image with darker background?

I have this Python code which I use to convert text in a picture to a string. It works for certain images that have large characters, but not for the one I'm trying right now, which contains only digits.
This is the picture:
This is my code:
import pytesseract
from PIL import Image
img = Image.open('img.png')
pytesseract.pytesseract.tesseract_cmd = 'C:/Program Files (x86)/Tesseract-OCR/tesseract'
result = pytesseract.image_to_string(img)
print (result)
Why is it failing at recognising this specific image and how can I solve this problem?
I have two suggestions.
First, and this is by far the most important: in OCR, preprocessing the images is key to obtaining good results. In your case I suggest binarization. Your images look quite clean, so you shouldn't have any problems, but if you do, try binarizing them:
import cv2
from PIL import Image
img = cv2.imread('gradient.png')
# If your image is not already grayscale :
# img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
threshold = 180 # to be determined
_, img_binarized = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)
pil_img = Image.fromarray(img_binarized)
Then try the OCR again with the binarized image.
Check whether your image is already grayscale and uncomment the conversion line if needed.
This is simple thresholding. Adaptive thresholding also exists, but it is noisy and does not add anything in your case.
Binarized images will be much easier for Tesseract to handle. This is already done internally (https://github.com/tesseract-ocr/tesseract/wiki/ImproveQuality) but sometimes things can be messed up and very often it's useful to do your own preprocessing.
You can check if the threshold value is right by looking at the images:
import matplotlib.pyplot as plt
plt.imshow(img, cmap='gray')
plt.imshow(img_binarized, cmap='gray')
Second, if what I said above still doesn't work: I know this doesn't answer "why doesn't pytesseract work here", but I suggest you try out tesserocr. It is a maintained Python wrapper for Tesseract.
You could try:
import tesserocr
text_from_ocr = tesserocr.image_to_text(pil_img)
Here is the doc for tesserocr on PyPI: https://pypi.org/project/tesserocr/
And for opencv-python: https://pypi.org/project/opencv-python/
As a side note, black and white are treated symmetrically in Tesseract, so having white digits on a black background is not a problem.
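A further tweak worth trying, not covered in the answer above: since the image contains only digits, you can restrict Tesseract's character set and page segmentation mode (note that some Tesseract 4 LSTM builds ignore the whitelist, so results may vary):
import pytesseract
from PIL import Image
img = Image.open('img.png')
# --psm 7: treat the image as a single text line; the whitelist keeps only digits
config = '--psm 7 -c tessedit_char_whitelist=0123456789'
result = pytesseract.image_to_string(img, config=config)
print(result)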

Detect object with openCV and python

I'm trying to detect the white dots in the following image using OpenCV and Python.
I tried using the function cv2.HoughCircles but without any success.
Do I need to use a different method?
This is my code:
import cv2, cv
import numpy as np
import sys

if len(sys.argv) > 1:
    filename = sys.argv[1]
else:
    filename = 'p.png'

img_gray = cv2.imread(filename, cv2.CV_LOAD_IMAGE_GRAYSCALE)

if img_gray == None:
    print "cannot open ", filename
else:
    img = cv2.GaussianBlur(img_gray, (0, 0), 2)
    cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    circles = cv2.HoughCircles(img, cv2.cv.CV_HOUGH_GRADIENT, 4, 10, param1=200, param2=100, minRadius=3, maxRadius=100)
    if circles:
        circles = np.uint16(np.around(circles))
        for i in circles[0, :]:
            cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 1)
            cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
    cv2.imshow('detected circles', cimg)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
If you can reproduce a morphological reconstruction in OpenCV, you can easily build an h-dome transform, which simplifies the task significantly. Otherwise, a simple threshold on a Gaussian-filtered image might be enough too. For example (in Mathematica):
Binarize[FillingTransform[GaussianFilter[f, 2], 0.4, Padding -> 1]]
The Gaussian filtering in the code above effectively suppresses the noise around the border of the input, which would otherwise remain after the h-dome transform.
Next is the result of a simple threshold after Gaussian filtering (Binarize[GaussianFilter[f, 2], 0.5]), as well as another result given by direct binarization using Kapur's thresholding method (see the paper "A new method for gray-level picture thresholding using the entropy of the histogram", which is no longer a new method, it is from 1985):
The right image above has a lot of small points all over the border (which cannot be seen at this image resolution), but is fully automatic. From these 3 options, only the second one is already present in OpenCV.
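The OpenCV equivalent of that second option is just a Gaussian blur followed by a fixed threshold; a minimal sketch (128 corresponds to the 0.5 level used above):
import cv2
img = cv2.imread('p.png', cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (0, 0), 2)           # sigma = 2, as in the example above
_, dots = cv2.threshold(blurred, 128, 255, cv2.THRESH_BINARY)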
I think a median filter will improve your image. Try experimenting with some kernels, 3x3 or 7x7. After that, some (local) thresholding algorithm will get you shapes. You can then either use HoughCircles, or just find contours and check them for roundness.
Convert the image to a binary image using a suitable thresholding technique (Otsu might help). Then use morphological operations like erosion to make the circles smaller, and then you can easily find their centers.
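A small sketch of that idea with OpenCV (the erosion kernel size is a guess; connectedComponentsWithStats does the center-finding):
import cv2
import numpy as np
img = cv2.imread('p.png', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
eroded = cv2.erode(binary, np.ones((3, 3), np.uint8))
# Centroids of the remaining blobs; row 0 is the background component
n, labels, stats, centroids = cv2.connectedComponentsWithStats(eroded)
centers = centroids[1:]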
