Obtain features inside image and remove boundary - python

I want to detect features inside an image (retina scan). The image consists of a retina scan inside a rectangular box with black background.
I am working with Python 3.6 and am using Canny edge detection to detect features inside the image. I understand that the Canny algorithm uses edge gradients to find edges. While Canny edge detection gives me the features inside the retina scan for a suitable choice of threshold values, it always keeps the circular rim between the retina scan and the black background in the output image.
In the output image, I want to have only the features inside the image (retina scan), and not the outer rim. How can I do this? I am searching for solutions which use Python. I am also open to the use of techniques other than Canny Edge Detection if they help to achieve the required task.
Below is the actual image, and the output image that I get from Canny Edge Detection.
Below is the circular rim that I am talking about (highlighted in red).
Given below is the expected output image:
My code is given underneath:
import cv2
import matplotlib.pyplot as plt

# read the retina scan as a grayscale image
img_DR = cv2.imread('img.tif', 0)
# Canny edge detection with thresholds 20 (low) and 40 (high)
edges_DR = cv2.Canny(img_DR, 20, 40)

plt.figure(1)
plt.subplot(121), plt.imshow(img_DR)
plt.title('Original Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(edges_DR, cmap='gray')
plt.title('Edge Image'), plt.xticks([]), plt.yticks([])
plt.show()
You can find the image used in this code here.
Thanks in advance.

You could fix this in 3 steps:
1) Threshold your input image at a very low intensity, so your retina is the only foreground region. Looking at your image, this should work just fine, since there are no truly black areas in your foreground region:
img = cv2.imread('retina.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# any pixel brighter than 5 becomes foreground (255)
ret, binary = cv2.threshold(gray, 5, 255, cv2.THRESH_BINARY)
2) Use erosion to remove a small margin from your foreground; you want to remove the area where the outer rim artifacts appear after you apply Canny:
import numpy as np

kernel = np.ones((5, 5), np.uint8)
erosion = cv2.erode(binary, kernel, iterations=1)
(visualised in red: the eroded area)
3) Use this eroded image as a binary mask on your current result image. This will remove the outer border while keeping all inner structures intact:
edges_DR = cv2.Canny(gray, 20, 40)
result = cv2.bitwise_and(edges_DR, edges_DR, mask=erosion)
You may have to experiment with the kernel size of the erosion so that it removes the full border, but only the border. Generally, though, this should produce good and robust results, even if the orientation or size of your scan is not consistent.
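Putting the three steps together with the question's grayscale image, a complete sketch might look like this (the kernel size is a starting point you would tune):

import cv2
import numpy as np

# load the retina scan as grayscale ('img.tif' is the question's file)
img = cv2.imread('img.tif', 0)

# step 1: low threshold so the scan is the only foreground region
_, binary = cv2.threshold(img, 5, 255, cv2.THRESH_BINARY)

# step 2: erode the foreground so the mask sits inside the rim
kernel = np.ones((5, 5), np.uint8)
mask = cv2.erode(binary, kernel, iterations=1)

# step 3: run Canny, then mask away the outer rim
edges = cv2.Canny(img, 20, 40)
result = cv2.bitwise_and(edges, edges, mask=mask)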

Related

Change background color for thresholded image

I have been trying to write code to extract cracks from an image using thresholding. However, I want to keep the background black. What would be a good solution to keep the outer boundary visible and the background black? Attached below are the original image, the thresholded image, and the code used to produce it.
import cv2
import numpy as np
import matplotlib.pyplot as plt

# Read image
img = cv2.imread('Original.png')
# Convert into grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Image processing (smoothing) by averaging
blur = cv2.blur(gray, (3, 3))
# Binary threshold, then invert the result
ret, th1 = cv2.threshold(blur, 145, 255, cv2.THRESH_BINARY)
inverted = np.invert(th1)

plt.figure(figsize=(20, 20))
plt.subplot(121), plt.imshow(img)
plt.title('Original'), plt.xticks([]), plt.yticks([])
plt.subplot(122), plt.imshow(inverted, cmap='gray')
plt.title('Threshold'), plt.xticks([]), plt.yticks([])
plt.show()
Method 1
Assuming the circle in your images stays in the same spot throughout your image set, you can manually create a black 'mask' image with a white hole in the middle, then overlay it on the final inverted image.
You can easily make the mask image using your favorite image editor's magic wand tool.
I made this[1] by also expanding the circle inwards by one pixel to take into account some of the pixels the magic wand tool couldn't catch.
You would then use the mask image like this:
# the mask for bitwise_and must be a single-channel image
mask = cv2.imread('/path/to/mask.png', cv2.IMREAD_GRAYSCALE)
masked = cv2.bitwise_and(inverted, inverted, mask=mask)
Method 2
If the circle does NOT stay in the same spot throughout your entire image set, you can try to make the mask from all the fully black pixels in your original image. This assumes that the 'sample' itself (the thing with the cracks) contains no fully black pixels. Note that this will leave the text in the bottom left white as well.
# make all the non black pixels white
_,mask = cv2.threshold(gray,1,255,cv2.THRESH_BINARY)
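You would then apply it exactly as in Method 1, masking the inverted result:
# keep only the pixels inside the non-black region
masked = cv2.bitwise_and(inverted, inverted, mask=mask)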
[1] The original is not the same size as your inverted image, so the mask I made won't actually fit; you're going to have to make it yourself.

Using image colors to draw boundaries on image

I am trying to use OpenCV to take an RGB image and identify a boundary, or draw a line, where the sidewalk meets the grass. Typical methods like Canny edge detection followed by Hough lines are not very helpful, since they are easily influenced by other potential lines in the environment.
Let's say I have the RGB image below (RGB sidewalk image). In the image there is a clear boundary where the sidewalk meets the grass. This boundary becomes even more prominent when you convert into HSV space and blur the image, as shown in the HSV sidewalk image. I believe color segmentation is the best bet; I am just not sure how to approach it. I am using the code:
hsv_img = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
green_low = np.array([25, 0, 50])
green_high = np.array([75, 255, 255])
curr_mask = cv2.inRange(hsv_img, green_low, green_high)
With this I was able to generate a mask that almost gets me where I want, as shown in the grass mask figure. I just need to use this mask to draw my line without getting mixed up with the other greens detected in the picture.
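One possible way to turn that mask into a single boundary line is sketched below; the file name, blur size and kernel size are placeholders, and it assumes OpenCV 4.x, where findContours returns two values:

import cv2
import numpy as np

# reproduce the mask from the question (image path is a placeholder)
img = cv2.imread('sidewalk.jpg')
blur = cv2.GaussianBlur(img, (9, 9), 0)
hsv_img = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
curr_mask = cv2.inRange(hsv_img, np.array([25, 0, 50]), np.array([75, 255, 255]))

# remove small speckles of green elsewhere in the scene
kernel = np.ones((15, 15), np.uint8)
clean = cv2.morphologyEx(curr_mask, cv2.MORPH_OPEN, kernel)

# keep only the largest green region (the grass) and trace its outline
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
cv2.drawContours(img, [largest], -1, (0, 0, 255), 2)  # boundary drawn in red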

How can I use Blob detection to Isolate region in an image

I want to measure the area of land in an aerial-view image, and I was advised to first use blob detection to isolate the region and threshold the image. Here is what I have done, but I am not sure if this is correct.
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('landarea.jpg', cv2.IMREAD_COLOR)

# Set up the detector with default parameters.
detector = cv2.SimpleBlobDetector_create()

# Detect blobs.
keypoints = detector.detect(img)

# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle
# corresponds to the size of the blob.
im_with_keypoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0, 0, 255),
                                      cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

# Show keypoints
print(im_with_keypoints.size)
cv2.imshow("Blob", im_with_keypoints)
cv2.waitKey(0)
cv2.destroyAllWindows()

# Convert to gray
gray = cv2.cvtColor(im_with_keypoints, cv2.COLOR_BGR2GRAY)

# Threshold the image with Otsu's method
ret3, th3 = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

titles = ["Otsu's Thresholding"]
images = [th3]
plt.figure(figsize=(15, 10))
for i in range(1):
    plt.subplot(1, 1, i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
Image Link
In summary, this is what I am trying to achieve.
Task: Land Area Measurement
I am currently working on getting the area, width and height measurements of a piece of land from aerial mapping images. The steps taken to achieve this are listed below:
I was advised that it is better to write Python code from scratch to do my image processing.
Also, from SO, I was advised to use a blob detector to isolate regions, threshold my image and count the number of white pixels. Then I can calibrate the dimensions of the image against ground-truth dimensions.
I have been able to detect blobs, threshold the image and get the count of white pixels. My main challenge is with the last two steps and how to get the measurements from them.
Also, a friend said that the shape of any photo is normally square, rectangular, etc., so the area might not vary if I measure it from photos.
I do not think blob detection is going to work well here. You would need to threshold the image in some way to separate the land area from everything else. From what part of the image do you want to get the area?
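Regarding the pixel-counting and calibration steps, a minimal sketch might look like this; the metres-per-pixel value is hypothetical and would come from your ground-truth measurement:

import cv2

# threshold the grayscale aerial image (Otsu, as in the code above)
gray = cv2.imread('landarea.jpg', cv2.IMREAD_GRAYSCALE)
_, th = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# count the white (land) pixels
white_pixels = cv2.countNonZero(th)

# hypothetical calibration: ground distance covered by one pixel,
# obtained by measuring a known reference object in the image
metres_per_pixel = 0.05
area_m2 = white_pixels * metres_per_pixel ** 2
print('estimated area: %.1f square metres' % area_m2)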

OpenCV detect and trace darkest lines on image and overlay a grid

I am trying to recreate the tracings on an EKG and then overlay them onto a new grid, but I am stuck on how best to trace the actual tracings. In the images that follow, there are six separate tracings I'd like to recreate on what is essentially a white background with a grid. Any help would be appreciated.
I have managed to find the edges and crop this from a jpg, so all I am left with is this image:
I am trying to detect the tracings with either OpenCV's findContours or Hough line transformations, but my edge detection after a Gaussian blur leaves me with the image below, which isn't very helpful:
The hough lines look like this:
Can someone point me in the right direction? Thanks in advance.
Edit:
I did the local histogram step, then a Gaussian blur and another Canny edge detection. The local histogram image was:
and the Canny edge detection was:
You can try using the Sobel and Laplacian detectors as follows:
import cv2
import matplotlib.pyplot as plt

img = cv2.imread('experiment.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = cv2.GaussianBlur(img, (3, 3), 0)

# second-derivative (Laplacian) and horizontal-gradient (Sobel) responses
laplacian = cv2.Laplacian(img, cv2.CV_64F)
sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=1)

figure = plt.figure(figsize=(10, 10))
sobel = figure.add_subplot(1, 2, 1)
sobel.imshow(sobelx, cmap='gray')
sobel.set_title('Sobel in x')
sobel.set_axis_off()
laplacianfig = figure.add_subplot(1, 2, 2)
laplacianfig.imshow(laplacian, cmap='gray')
laplacianfig.set_title('Laplacian')
laplacianfig.set_axis_off()
plt.show()
This will give you the following output:
As you can see, the Sobel operator can be used to detect the lines. Maybe you can then plot those points where the pixel intensities are below the mean.
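As a sketch of that last suggestion (reusing the preprocessing from the snippet above, with the same placeholder file name), you could threshold below the mean intensity and plot the surviving dark pixels:

import cv2
import numpy as np
import matplotlib.pyplot as plt

# same preprocessing as in the answer above
img = cv2.imread('experiment.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = cv2.GaussianBlur(img, (3, 3), 0)

# keep only the pixels darker than the image mean - the tracings
_, dark = cv2.threshold(img, img.mean(), 255, cv2.THRESH_BINARY_INV)

# coordinates of the dark pixels, ready to re-plot on a fresh grid
ys, xs = np.where(dark > 0)
plt.scatter(xs, ys, s=1, c='black')
plt.gca().invert_yaxis()  # image y coordinates grow downwards
plt.show()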

OpenCV (Python): Construct Rectangle from thresholded image

The image below shows an aerial photo of a house block (re-oriented with the longest side vertical), and the same image subjected to Adaptive Thresholding and Difference of Gaussians.
Images: Base; Adaptive Thresholding; Difference of Gaussians
The roof-print of the house is obvious (to the human eye) on the AdThresh image: it's a matter of connecting some obvious dots. The goal, in the sample image, is to find the blue-bounded box shown below.
Image with desired rectangle marked in blue
I've had a crack at implementing HoughLinesP() and findContours(), but get nothing sensible (probably because there's some nuance that I'm missing). The Python script-chunk that fails to find anything remotely like the blue box is as follows:
import cv2
import numpy as np
from matplotlib import pyplot as plt

# read in full (RGBA) image - to get alpha layer to use as mask
img = cv2.imread('rotated_12.png', cv2.IMREAD_UNCHANGED)
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu's thresholding after Gaussian filtering
blur_base = cv2.GaussianBlur(grey, (9, 9), 0)
blur_diff = cv2.GaussianBlur(grey, (15, 15), 0)
_, thresh1 = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
thresh = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)
DoG_01 = blur_base - blur_diff
edges_blur = cv2.Canny(blur_base, 70, 210)

# Find Contours
(ed, cnts, h) = cv2.findContours(grey, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:4]
for c in cnts:
    approx = cv2.approxPolyDP(c, 0.1 * cv2.arcLength(c, True), True)
    cv2.drawContours(grey, [approx], -1, (0, 255, 0), 1)

# Hough Lines
minLineLength = 30
maxLineGap = 5
lines = cv2.HoughLinesP(edges_blur, 1, np.pi / 180, 20, minLineLength, maxLineGap)
print "lines found:", len(lines)
for line in lines:
    cv2.line(grey, (line[0][0], line[0][1]), (line[0][2], line[0][3]), (255, 0, 0), 2)

# plot all the images
images = [img, thresh, DoG_01]
titles = ['Base', 'AdThresh', 'DoG01']
for i in xrange(len(images)):
    plt.subplot(1, len(images), i + 1), plt.imshow(images[i], 'gray')
    plt.title(titles[i]), plt.xticks([]), plt.yticks([])
plt.savefig('a_edgedetect_12.png')
cv2.destroyAllWindows()
I am trying to set things up without excessive parameterisation. I'm wary of 'tailoring' an algorithm for just this one image, since this process will be run on hundreds of thousands of images (with roofs of different colours which may be less distinguishable from the background). That said, I would love to see a solution that hits the blue-box target - that way I could at the very least work out what I've done wrong.
If anyone has a quick-and-dirty way to do this sort of thing, it would be awesome to get a Python code snippet to work with.
The 'base' image ->
Base Image
You should apply the following:
1. Contrast Limited Adaptive Histogram Equalization (CLAHE) and conversion to grayscale.
2. Gaussian blur & morphological transforms (dilation, erosion, etc.), as mentioned by @bad_keypoints. This will help you get rid of the background noise. This is the trickiest step, as the results will depend on the order in which you apply them (first Gaussian blur and then morphological transforms, or vice versa) and the window sizes you choose.
3. Apply adaptive thresholding.
4. Apply Canny edge detection.
5. Find the contour having four corner points.
As said earlier, you need to tweak the input parameters of these functions and validate them against other images, as settings that work for this case may not work for others; you will need to fix the parameter values by trial and error. A rough sketch of this pipeline is given below.
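Every window size and threshold in this sketch is a placeholder to tune and validate as described above, and it assumes OpenCV 4.x, where findContours returns two values:

import cv2
import numpy as np

img = cv2.imread('rotated_12.png')

# 1) CLAHE on the grayscale image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
eq = clahe.apply(gray)

# 2) Gaussian blur + morphological closing to suppress background noise
blur = cv2.GaussianBlur(eq, (9, 9), 0)
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(blur, cv2.MORPH_CLOSE, kernel)

# 3) adaptive thresholding
thresh = cv2.adaptiveThreshold(closed, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)

# 4) Canny edge detection
edges = cv2.Canny(thresh, 70, 210)

# 5) look for the largest contour that approximates to four corner points
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in sorted(contours, key=cv2.contourArea, reverse=True):
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:
        cv2.drawContours(img, [approx], -1, (255, 0, 0), 2)  # draw the box in blue
        break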
