I am trying to remove a gradient background from an image using the morphological top-hat operation.
For this purpose I use the skimage morphology library (its opening and white_tophat functions).
By definition, the white top-hat is: initial image - opened image.
In my code I compare the result of skimage's white_tophat function with a manually computed top-hat.
import numpy as np
from skimage import morphology
import cv2 as cv
img = cv.imread('images/TEST.jpg', cv.IMREAD_GRAYSCALE)
img_not = cv.bitwise_not(img)
se = np.ones((50,50), np.uint8)
opened = morphology.opening(img, se)
wth_my= img_not - opened
wth=morphology.white_tophat(img_not, se)
cv.imwrite('images/TEST_Opened.jpg', opened)
cv.imwrite('images/TEST_WTH_MY.jpg', wth_my)
cv.imwrite('images/TEST_WTH.jpg', wth)
cv.waitKey(0)
cv.destroyAllWindows()
Results are quite different (see screenshots). Please advise what's wrong in my code.
As you said, the top-hat filter is I - opening(I). You wrote:
opened = morphology.opening(img, se)
wth_my= img_not - opened
Note how one line uses img and the other uses img_not. You need to apply the opening to img_not as well, so that both parts of the operation work on the same image.
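For reference, a minimal corrected sketch (applying the opening to the same inverted image that the subtraction uses):
import numpy as np
import cv2 as cv
from skimage import morphology

img = cv.imread('images/TEST.jpg', cv.IMREAD_GRAYSCALE)
img_not = cv.bitwise_not(img)
se = np.ones((50, 50), np.uint8)
opened = morphology.opening(img_not, se)        # open the inverted image
wth_my = img_not - opened                       # manual top-hat
wth = morphology.white_tophat(img_not, se)      # library top-hat
print(np.array_equal(wth_my, wth))              # should now print True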
Related
I want to extract car images without using Mask RCNN. I tried a couple of methods but couldn't decide how to proceed with any of them. I need a recommendation on which method would be best and how to go through with it.
Method 1 - Using XML files and haar cascade classifier
I was thinking of using XML cascade files to detect and crop car images. The problems I faced were:
They only detect cars as square regions. I needed the car itself cropped out, so ultimately all I ended up with were square crops of cars, which didn't solve my problem.
The classifier didn't detect the car as a whole but only small parts of it, maybe due to the XML file's configuration.
My code:
!wget https://raw.githubusercontent.com/shaanhk/New-GithubTest/master/cars.xml
import numpy as np
import cv2

car_cascade = cv2.CascadeClassifier('cars.xml')
img = cv2.imread('im1.jpg')
# Detect cars and draw a rectangle around each detection
cars = car_cascade.detectMultiScale(img, 1.1, 1)
for (x, y, w, h) in cars:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Resulting image:
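If rectangular crops are acceptable, here is a small sketch (my addition, not from the original post) that saves each detected region as its own image:
import cv2

car_cascade = cv2.CascadeClassifier('cars.xml')
img = cv2.imread('im1.jpg')
cars = car_cascade.detectMultiScale(img, 1.1, 1)

# Save every detected bounding box as a separate crop
for i, (x, y, w, h) in enumerate(cars):
    crop = img[y:y + h, x:x + w]
    cv2.imwrite('car_{}.jpg'.format(i), crop)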
Method 2 - Using Canny Edge Detection
I tried to perform Canny edge detection on the car image. It worked to some extent, in that I managed to reduce the edges to mostly the car object, but I don't know how to proceed from there.
My code:
import cv2
import numpy as np

image = cv2.imread('im1.jpg')
imagecopy = np.copy(image)
# cv2.imread returns BGR, so convert from BGR to grayscale
grayimage = cv2.cvtColor(imagecopy, cv2.COLOR_BGR2GRAY)
canny = cv2.Canny(grayimage, 300, 150)
cv2.imshow('Highway Edge Detection Image', canny)
cv2.waitKey(0)
cv2.destroyAllWindows()
Resulting Image:
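One possible way to continue from the edge map (a sketch of my own, not from the original post): close the gaps in the edges, take the largest contour, and crop its bounding box.
import cv2
import numpy as np

image = cv2.imread('im1.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
canny = cv2.Canny(gray, 150, 300)

# Close small gaps so the car outline becomes one connected blob
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel)

# Largest external contour -> bounding box -> crop (OpenCV 4.x signature)
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    crop = image[y:y + h, x:x + w]
    cv2.imwrite('car_canny_crop.jpg', crop)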
Method 3 - Extract car image using color gradients
On googling I found a method using HSV transformation and then creating a custom mask to extract cars. But I don't know much about this method and have no idea how to go about it. I used the code provided and am posting it below.
Code:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
image = mpimg.imread('im1.jpg')
hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
# HSV channels
h = hsv[:,:,0]
s = hsv[:,:,1]
v = hsv[:,:,2]
background_hue = h[10,10]
lower_hue = np.array([background_hue-10,0,0])
upper_hue = np.array([background_hue+10,255,255])
mask = cv2.inRange(hsv, lower_hue, upper_hue)
# Mask the image to let the car show through
masked_image = np.copy(image)
masked_image[mask != 0] = [0, 0, 0]
cv2.imwrite('mask.jpg',masked_image)
# Display it!
plt.imshow(masked_image)
Image:
I'd like to mention that I'm a complete beginner in Computer Vision and am trying to learn by doing small projects like these. My code is probably very flawed, and hopefully I can improve it along the way. Please feel absolutely free to mention any other method (except Mask RCNN) or any problems with the code.
I'm currently trying to count the number of shrimps in a given image. I'm using this test image:
The code I have used so far is the following:
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load the image
path = r'C:\Users\...' # the path to the image
original = cv2.imread(path)                          # BGR image
original = cv2.cvtColor(original, cv2.COLOR_BGR2RGB)
img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
# Histogram to guide the binarization
hist = cv2.calcHist([img], [0], None, [256], [0, 256])
# Apply the threshold
ret, thresh = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)
From this point I have tried different morphological transformations such as erode, dilate, open and close, but they don't seem to separate the objects the way I want.
I've read that I can apply a watershed transformation to separate touching elements, but I don't have experience with it (that is what I'm working on at the moment).
After that I am planning to use a Simple Blob Detector to count the blobs; I don't know if these steps are correct.
Any help is very welcomed!
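For the watershed step, here is a rough sketch of how it could continue (my assumption, not from the original post: a distance transform to find the shrimp centres, then watershed, then a count of the labels):
import cv2
import numpy as np

img = cv2.imread('shrimps.jpg')                        # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)

# Distance transform: peaks roughly mark the centres of the shrimps
dist = cv2.distanceTransform(thresh, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)

# Sure background and the unknown band in between
sure_bg = cv2.dilate(thresh, np.ones((3, 3), np.uint8), iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)

# Label the sure-foreground blobs and run the watershed
n_labels, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)

print('approximate count:', n_labels - 1)              # label 0 is the background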
Using cv2, I am able to find the contours of text in an image. I would like to remove said text and replace it with the average pixel of the surrounding area.
However, the contours are just a bit smaller than I would like, resulting in a blurred edge where one can barely tell what the original text was:
I once chanced upon a cv2 tutorial with a stylized "j" as the sample image. It showed how to "expand" a contour in a manner similar to adding a positive sample next to every pre-existing positive sample in a mask.
If such a method does not already exist in cv2, how may I do this manually?
The function I sought was dilation, as detailed here:
https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html
import cv2
import numpy as np

img = cv2.imread('j.png', 0)                      # read as grayscale
kernel = np.ones((5, 5), np.uint8)                # 5x5 structuring element
dilation = cv2.dilate(img, kernel, iterations=1)  # grow white regions by the kernel size
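As a follow-up sketch (my addition, not part of the original answer), the dilated text mask could then be handed to cv2.inpaint, which fills the masked region from the surrounding pixels (inpainting rather than a plain average):
import cv2
import numpy as np

img = cv2.imread('page.jpg')                  # hypothetical input image
mask = cv2.imread('text_mask.png', 0)         # hypothetical binary mask of the text

# Grow the mask so it fully covers the glyph edges
kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(mask, kernel, iterations=1)

# Fill the masked region from its surroundings
result = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite('page_clean.jpg', result)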
I'm writing a script in Python for my image processing class which should read a directory of images, display them, and eventually perform Otsu thresholding on them. I can get a reference image to display properly, including Otsu thresholding; however, I run into trouble when I attempt to display the remaining images in the directory. I am not sure my images are being read from the directory correctly, as I am trying to store them in an array; however, the output window displays grey squares whose dimensions correspond to the actual image resolutions, which suggests they are at least partly read correctly.
I've already tried isolating the image loading and display code into a separate file and running it, because I was concerned that the successful processing of my sample image (which included a black/white binarization) was somehow affecting the later image display. This was not the case, as the separate script produced the same grey-square output.
**Update**
I've managed to tweak the script below (not yet updated here) to run almost correctly. By writing the full file path explicitly for each file, I can get the output to display correctly. As best I can tell, there is some issue with loading the images into an array; a potential workaround for future testing is storing the file locations as an array of strings and loading each image from its path, rather than loading the images into an array directly.
import cv2 as cv
import numpy as np
from PIL import Image
import glob
from matplotlib import pyplot as plot
import time
image=cv.imread('Fig ref.jpg')
image2=cv.cvtColor(image, cv.COLOR_BGR2GRAY) # imread returns BGR, not RGB
cv.imshow('Image', image)
# global thresholding
ret1,th1 = cv.threshold(image2,127,255,cv.THRESH_BINARY)
# Otsu's thresholding
ret2,th2 = cv.threshold(image2,0,255,cv.THRESH_BINARY+cv.THRESH_OTSU)
# Otsu's thresholding after Gaussian filtering
blur = cv.GaussianBlur(image2,(5,5),0)
ret3,th3 = cv.threshold(blur,0,255,cv.THRESH_BINARY+cv.THRESH_OTSU)
# plot all the images and their histograms
images = [image2, 0, th1,
image2, 0, th2,
blur, 0, th3]
titles = ['Original Noisy Image','Histogram','Global Thresholding (v=127)',
'Original Noisy Image','Histogram',"Otsu's Thresholding",
'Gaussian filtered Image','Histogram',"Otsu's Thresholding"]
for i in range(3):
    plot.subplot(3,3,i*3+1),plot.imshow(images[i*3],'gray')
    plot.title(titles[i*3]), plot.xticks([]), plot.yticks([])
    plot.subplot(3,3,i*3+2),plot.hist(images[i*3].ravel(),256)
    plot.title(titles[i*3+1]), plot.xticks([]), plot.yticks([])
    plot.subplot(3,3,i*3+3),plot.imshow(images[i*3+2],'gray')
    plot.title(titles[i*3+2]), plot.xticks([]), plot.yticks([])
plot.show()
imageFolderPath = r'D:\Google Drive\Engineering\Senior Year\Image processing\Image processing group work' # raw string so the backslashes are not treated as escapes
imagePath = glob.glob(imageFolderPath + '/*.JPG')
im_array = np.array( [np.array(Image.open(img).convert('RGB')) for img in imagePath] )
temp=cv.imread("D:\Google Drive\Engineering\Senior Year\Image processing\Image processing group work\Fig ref.jpg")
cv.imshow('image', temp)
time.sleep(15)
for i in range(9):
    cv.imshow('Image', im_array[i])
    time.sleep(2)
plot.subplot(3,3,i*3+3),plot.imshow(images[i*3+2],'gray'): the second argument says you are using a gray color map. Get rid of it and you will get color displays.
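A small sketch of the workaround mentioned in the update (hypothetical folder path; the file locations are kept as strings and each image is loaded from its path, with cv.waitKey used so the window actually refreshes):
import glob
import cv2 as cv

image_paths = glob.glob(r'D:\images' + '/*.JPG')       # hypothetical folder

for path in image_paths:
    frame = cv.imread(path)
    if frame is None:                                   # skip unreadable files
        continue
    cv.imshow('Image', frame)
    cv.waitKey(2000)                                    # show each image for ~2 s
cv.destroyAllWindows()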
I tried almost all the filters in PIL, but failed.
Is there any function in numpy or scipy to remove the noise?
Like bwareaopen() in Matlab?
e.g.:
PS: If there is a way to fill the letters with black, I would be grateful.
Numpy/Scipy can do morphological operations just as well as Matlab can.
See scipy.ndimage.morphology, containing, among other things, binary_opening(), the equivalent of Matlab's bwareaopen().
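A small usage sketch (my addition; 'img' is assumed to be a grayscale NumPy array):
import numpy as np
from scipy import ndimage

binary = img > 128                                           # binarize first
cleaned = ndimage.binary_opening(binary, structure=np.ones((3, 3)))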
Numpy/Scipy solution: scipy.ndimage.morphology.binary_opening. More powerful solution: use scikits-image.
from skimage import morphology
cleaned = morphology.remove_small_objects(YOUR_IMAGE, min_size=64, connectivity=2)
See http://scikit-image.org/docs/0.9.x/api/skimage.morphology.html#remove-small-objects
I don't think this is exactly what you want, but it works (it uses OpenCV, which uses NumPy):
import cv2
# load image as grayscale
fname = 'Myimage.jpg'
im = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
# blur image
im = cv2.blur(im, (4, 4))
# apply a threshold (cv2.threshold returns a (retval, image) tuple)
im = cv2.threshold(im, 175, 250, cv2.THRESH_BINARY)
im = im[1]
# show image
cv2.imshow('', im)
cv2.waitKey(0)
Output (image in a window):
You can save the image using cv2.imwrite
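For example (hypothetical output filename):
cv2.imwrite('Myimage_cleaned.png', im)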