Detect the path a robot can take between crop rows - Python

I am working on a problem where I need to find the path the robot can take without hitting any crop rows (raw image).
My initial approach was to convert this into a bird's-eye view and then use Canny and skeletonization techniques, followed by a Hough transform to find the crop rows. This works well when the rows are straight, but if I rotate the image by 45 degrees the Hough transform no longer finds any rows. So I decided to use another approach.
First I selected only the green region and applied morphological filters to remove the small branches that stick out:
import cv2
import numpy as np

img = cv2.imread('image.png')  # placeholder path to the raw image
min_green2 = np.array([45, 50, 50])
max_green2 = np.array([75, 250, 250])
image_blur = cv2.GaussianBlur(img, (5, 5), 0)
image_blur_hsv = cv2.cvtColor(image_blur, cv2.COLOR_BGR2HSV)
image_green = cv2.inRange(image_blur_hsv, min_green2, max_green2)  # was min_red2/max_red2, which are undefined
se1 = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
se2 = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
mask = cv2.morphologyEx(image_green, cv2.MORPH_OPEN, se1)
I ended up with this result (final output image).
Now I want to detect the path the robot can take, which is the black region, so only the first row is my region of interest. I tried different methods to draw a line in the center of the row but couldn't find anything suitable in OpenCV. I did manage a workaround: I split the image in two vertically, used cv2.fitLine to fit a line to each side of the row, and then plotted the center line between them (a sketch of this workaround follows). But this doesn't feel like an ideal approach, and I suspect there are OpenCV functions that do it in a much better way. Can someone help me with this or point me in the right direction?
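For reference, a minimal sketch of that workaround, assuming mask is the binary image from the code above; the split point and the use of the largest contour on each side are assumptions, not the exact original code:
h, w = mask.shape
halves = [mask[:, :w // 2], mask[:, w // 2:]]
fitted = []
for i, half in enumerate(halves):
    # OpenCV 4 return signature for findContours
    cnts, _ = cv2.findContours(half, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(cnts, key=cv2.contourArea)  # largest blob on this side of the row
    vx, vy, x0, y0 = cv2.fitLine(c, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    if i == 1:
        x0 += w // 2  # shift the right half back into full-image coordinates
    fitted.append((vx, vy, x0, y0))
# the center line passes midway between the two fitted lines
cx = (fitted[0][2] + fitted[1][2]) / 2
cy = (fitted[0][3] + fitted[1][3]) / 2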
This is the final output I am looking for: the expected result, with green showing the center of the path.

So, here is my approach using numpy and scipy, which yielded this result (see the final image).
Without doing any blurring or morphological operations, use the Canny edge detector:
import cv2
import numpy as np
import scipy.signal

edges = cv2.Canny(image, 100, 200, None, 3, True)  # the last argument is L2gradient, a boolean; cv2.DIST_L2 was the wrong constant here
Notice that most of the edges surround the track your robot wants to follow. Since each edge is a collection of white pixels, we could calculate a column's total intensity:
normalized = cv2.normalize(edges, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX)
column_intensity = normalized.sum(axis=0)
Plotting the results, we get the following graph:
If we were to find the minimum of the graph, then we would find the x direction, where most of the edges are avoided. But first, let's smooth the function so as to avoid some noise.
# smooth function through moving average
window_size = 30
window = np.ones((window_size,)) / window_size
smoothed = np.convolve(column_intensity, window, mode="valid")
Since there are a lot of local minima, our additional constraint is that the x-direction the robot should take is the closest to the center of the image.
# find indices of local minima and select the one closest to the center
height, width = edges.shape  # image dimensions; width gives us the center column
indices = scipy.signal.argrelmin(smoothed)[0]
indices = indices + window_size // 2  # compensate for the shift introduced by mode="valid"
distances = np.abs(indices - int(width / 2))
x = indices[np.argmin(distances)]
Now that we have the x-direction, we need to determine a y coordinate so as to estimate the angle the robot should rotate (tan(angle)=y/x). There are as many choices as there are rows in the image, which means the y coordinate needs to be manually set. If we choose a y closer to the robot, the angle will be more volatile as the robot advances. Conversely, if we choose a y that is far from the robot, then it will be less volatile but less accurate as well. That is up to you; the final image was created with a y = 400.
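A hedged sketch of that last step, assuming the robot sits at the bottom-center of the image and that a zero angle means straight ahead (both conventions are assumptions):
y = 400                                 # manually chosen look-ahead row
dx = x - width / 2                      # horizontal offset from the image center
dy = height - y                         # vertical distance from the robot
angle = np.degrees(np.arctan2(dx, dy))  # positive values mean steering right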
I hope this fits your needs :)

Related

How to crop the largest circle out of an image (shooting target) using OpenCV

As the title states, I'm trying to crop the largest circle out of an image using OpenCV in Python. To be exact, it's a shooting target, which always has the same format, but the picture of it can be taken with any mobile device and in different lighting conditions (I will include some examples below).
I'm completely new to image recognition, so I have been trying out many different ways of doing this, but couldn't figure out a universal solution that would work on all of my target images.
Why I'm trying to do this:
My assignment is to calculate the score of one or multiple shots on the given target image. I have tried color segmentation to find the shots, but since the shots can be on different backgrounds, this wouldn't work properly. So now I'm trying to see the difference between the empty shooting target and the target that has been shot at. I also need to be able to tell which target type was shot at (there are two target types). So I'm trying to crop only the target out of the image to get rid of background interference and then continue with the shot identification.
What I have tried so far:
1) Finding the largest circle with HoughCircles. My next step would be to somehow remove the outer part of that found circle. I have played with the configuration of the HoughCircles method for quite some time, but there was always one example image where the outermost circle wasn't highlighted correctly, or where no circles were highlighted at all :/
My final configuration looked something like this:
img = cv2.GaussianBlur(img, (3, 3), 0)
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 2, 10000, param1=50, param2=100, minRadius=200, maxRadius=0)  # expects a single-channel (grayscale) image; the result was not assigned in the original
It seemed like using HoughCircles wouldn't be the right way to do this, so I moved on to another possible solution I found on the internet.
2) Finding all the contours by filtering the 'black' color range in which the circles appear in the pictures, and then finding the largest one. The problem with this solution was that sometimes a shadow in the picture broke the outer circle, which made it impossible to crop by it.
My code looked like this:
# black color boundaries [B, G, R]
lower = [0, 0, 0]
upper = [150, 150, 150]
# create NumPy arrays from the boundaries
lower = np.array(lower, dtype="uint8")
upper = np.array(upper, dtype="uint8")
# find the colors within the specified boundaries and apply the mask
mask = cv2.inRange(img, lower, upper)
output = cv2.bitwise_and(img, img, mask=mask)
ret, thresh = cv2.threshold(mask, 40, 255, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
if len(contours) != 0:
    # draw in blue the contours that were found
    cv2.drawContours(output, contours, -1, 255, 3)
    # find the biggest contour (c) by the area
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
After that, I would try to draw a circle around the largest found contour (c) and crop by it. But I had already seen that the drawn circles weren't complete (probably due to some shadow in the picture), so this wouldn't work anyway.
After those failures, I have tried so many solutions from other questions on here, but none would work for my problem.
Example images:
Target example 1
Target example 2
Target to calc score 1
Target to calc score 2
To be completely honest with you, I'm really lost on how to go about this. I would appreciate any help, advice, anything.
There are two different types of target in your samples. You may want to process them separately or ask the user what kind of target it is. Basically, you want to know how large the black part of the target is: does it cover rings 7-10 or 4-10?
Binarize your image and build a histogram along X and Y; you'll find the position of the black part of your target as (x_left, x_right, y_top, y_bottom). Once you know that, you can calculate the center as ((y_top + y_bottom)/2, (x_left + x_right)/2). After that you can easily calculate the score for every pixel of the image, since you know the center, the size of the black spot, and the number of score rings within it.
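A minimal sketch of this, assuming a grayscale input and that the black part covers rings 7-10, i.e. four rings (both assumptions; adjust for the other target type):
import cv2
import numpy as np

img = cv2.imread('target.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical path
_, binary = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY_INV)  # black area -> white
col_hist = binary.sum(axis=0)  # projection along X
row_hist = binary.sum(axis=1)  # projection along Y
xs = np.where(col_hist > 0.5 * col_hist.max())[0]
ys = np.where(row_hist > 0.5 * row_hist.max())[0]
x_left, x_right, y_top, y_bottom = xs[0], xs[-1], ys[0], ys[-1]
center = ((y_top + y_bottom) / 2.0, (x_left + x_right) / 2.0)
black_radius = (x_right - x_left) / 2.0

def score(px, py, rings_in_black=4, max_score=10):
    # rings are assumed equally spaced; rings_in_black=4 means black covers 7-10
    ring_width = black_radius / rings_in_black
    r = np.hypot(px - center[1], py - center[0])
    return max(max_score - int(r / ring_width), 0)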

Homography from football (soccer) field lines

I am working on using video from a football (soccer) match and trying to map the frames to a top view of the pitch using homography. I have started by finding all the white lines in the frames using both Hough lines and the line segment detector, where the line segment detector seems to work slightly better. See my code and examples below:
import cv2
import numpy as np

frame = cv2.imread("frame-27.jpg")  # the original line did not assign the result
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # imread returns BGR, so BGR2HSV (was RGB2HSV)
mask_green = cv2.inRange(hsv, (36, 25, 25), (86, 255, 255))  # green mask to select only the field
frame_masked = cv2.bitwise_and(frame, frame, mask=mask_green)
gray = cv2.cvtColor(frame_masked, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
canny = cv2.Canny(gray, 50, 150, apertureSize=3)
# Hough line detection
lines = cv2.HoughLinesP(canny, 1, np.pi / 180, 50, None, 50, 20)
# Line segment detection
lsd = cv2.createLineSegmentDetector(0)  # was used without being created
lines_lsd = lsd.detect(canny)[0]
This uses this input frame
and returns this image for the Hough lines
and this image for the line segment detection.
My question is twofold: (1) any ideas on how I can further refine the line detection (i.e. decrease false positives such as lines around players and outside the field)? And (2) what is a good way to use the detected lines to create a homography so I can map the frame to a top view of the field (like this)? Any help is greatly appreciated!
Pencils of lines
Cluster line segments in two pencils of lines using RANSAC. A pencil of lines is a set of lines that intersect at a common point. With homogeneous coordinates, the intersection point may potentially be at infinity (e.g. if all the lines are parallel).
You can pick two random line segments, compute their intersection, and then collect all the lines that pass reasonably close to that intersection point (within some threshold). Repeat this until you find the two pencils with the greatest number of line segments. We can then assume that these pencils correspond to the two vanishing points.
Here, the blue and red segments correspond to two pencils of lines. The green segments are outliers. As you can see the RANSAC algorithm is extremely good at rejecting outliers.
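A hedged sketch of this RANSAC step, assuming segments is an (N, 4) array of x1, y1, x2, y2 endpoints (the representation, thresholds, and iteration count are all assumptions):
import numpy as np

def to_homog_line(seg):
    # homogeneous line through the two endpoints of a segment
    p1 = np.array([seg[0], seg[1], 1.0])
    p2 = np.array([seg[2], seg[3], 1.0])
    return np.cross(p1, p2)

def ransac_pencil(segments, iters=1000, thresh=5.0, seed=0):
    lines = np.array([to_homog_line(s) for s in segments])
    rng = np.random.default_rng(seed)
    best = np.array([], dtype=int)
    for _ in range(iters):
        i, j = rng.choice(len(lines), size=2, replace=False)
        p = np.cross(lines[i], lines[j])  # candidate vanishing point (homogeneous)
        # point-to-line distance, valid for finite points (p[2] != 0)
        denom = np.linalg.norm(lines[:, :2], axis=1) * abs(p[2]) + 1e-12
        d = np.abs(lines @ p) / denom
        inliers = np.where(d < thresh)[0]
        if len(inliers) > len(best):
            best = inliers
    return best

# find the first pencil, remove its segments, then run again for the second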
Rectifying
I am not aware of a built-in OpenCV rectification for line segments specifically; the existing functions are designed to work with point correspondences.
To deal with orientation, run an optimization to recover the homography. Generally the homography H has the form H = KRK^-1, where K is the intrinsic matrix and R is the rotation matrix.
For example, you can run a nonlinear least-squares optimization on the manifold of the Lie group SO(3) to recover the R matrix, e.g. using LocalParameterization in the Ceres solver. It is also pretty simple to implement this yourself in Python. If the focal length is unknown, you'll have to add it as an optimization parameter as well.
Instead of nonlinear least squares there may be other methods. Some methods estimate the homography directly, but may not preserve the correct aspect ratio.
You can preview the homography by calling OpenCV's cv2.warpPerspective function.
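A short sketch of that preview, assuming rvec is the rotation recovered by the optimization and f is a guessed focal length (both placeholders):
import cv2
import numpy as np

f = 1000.0  # assumed focal length in pixels
h, w = frame.shape[:2]
K = np.array([[f, 0, w / 2],
              [0, f, h / 2],
              [0, 0, 1.0]])
R, _ = cv2.Rodrigues(rvec)    # rvec: rotation vector from the optimization
H = K @ R @ np.linalg.inv(K)  # H = K R K^-1 as described above
warped = cv2.warpPerspective(frame, H, (w, h))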
Estimate translation and scale
This will require knowing something about the geometry of the football field. You can detect salient features unique to the field and use them to estimate the scale. For example, you can detect the circular arcs using the circle Hough transform.
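A tiny hedged example of that idea, assuming the frame has already been rectified and converted to grayscale (gray_rectified is a placeholder; all parameters are guesses to be tuned per footage):
import cv2

circles = cv2.HoughCircles(gray_rectified, cv2.HOUGH_GRADIENT, dp=1.5, minDist=300,
                           param1=120, param2=60, minRadius=40, maxRadius=200)
# the pixel radius of the detected circle against the known 9.15 m
# center-circle radius gives the scale (pixels per metre)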

Python OpenCV: Hough Transform does not detect obvious lines

Problem:
I want to detect lines in a given image using OpenCV in Python. Although there are multiple obvious vertical lines, neither standard HoughLines nor probabilistic HoughLinesP finds them. As I have spent plenty of time playing around with the parameters, I guess I am doing something fundamentally wrong here. I am aware that the Hough transform is usually applied to edges, e.g. after using Canny. Due to Canny's non-maximum suppression, Canny does not give good results here.
Image where detecting the vertical lines fails:
Why:
Given this image of a water meter:
I want to detect the rectangle around each digit. To detect the rectangles, I used Sobel filters in the x and y directions and calculated the magnitude and angle/phase of the gradient. As I assume the image to be rotated correctly in this step, I extract vertical and horizontal edges as shown in the image. My hope was to use HoughLines to find the bounding boxes. Finding the horizontal lines works perfectly, as seen in the debug plot below, whereas it does not work on the vertical components (second row):
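For context, the gradient decomposition described above would look roughly like this (a sketch, assuming gray is the grayscale meter image; the question does not show this code):
import cv2

gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)  # gradient in x
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)  # gradient in y
mag, ang = cv2.cartToPolar(gx, gy, angleInDegrees=True)  # magnitude and angle/phase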
Detecting the rectangles around each digit would help me to
locate the region of interest
cut out the region inside the rectangle, in other words the digit
Several other approaches to detect the digits directly using contours all had the problem of the outer rectangles interfering with the digit.
Update: the code for detecting the vertical lines:
import cv2
import numpy as np
import matplotlib.pyplot as plt

# img is initialized with the binarized, vertical-component image, as shown above
minLength = 30
maxGap = 7
angle_res = np.pi / 180
rad_res = 2
threshold_val = 100
linesP = cv2.HoughLinesP(img, rad_res, angle_res, threshold_val, minLineLength=minLength, maxLineGap=maxGap)
cdst = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cdstP = np.copy(cdst)
if linesP is None:
    print("Error when finding lines (probabilistic Hough transform). No lines detected")
else:
    # Copy edges to the images that will display the results in BGR
    for i in range(0, len(linesP)):
        l = linesP[i][0]
        cv2.line(cdstP, (l[0], l[1]), (l[2], l[3]), (255, 0, 0), 3, cv2.LINE_AA)
plt.imshow(cdstP); plt.show()
First apply Canny edge detection with proper threshold settings, then apply the probabilistic Hough line transform. After the Hough transform, filter the lines by slope: you want to find the boxes, so you need to keep only horizontal and vertical lines (a short sketch of this filtering follows). After filtering the lines, apply morphological dilation and erosion back to back on the resulting image to get a neat box around each digit. When applying the Hough transform, choose the minimum line length and maximum line gap parameters appropriately.
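A hedged sketch of that slope filtering, assuming linesP comes from cv2.HoughLinesP and using an assumed tolerance of 10 degrees:
import numpy as np

def filter_by_slope(linesP, tol_deg=10):
    # keep only segments that are nearly horizontal or nearly vertical
    kept = []
    for l in linesP[:, 0]:
        angle = np.degrees(np.arctan2(l[3] - l[1], l[2] - l[0])) % 180
        near_horizontal = angle < tol_deg or angle > 180 - tol_deg
        near_vertical = abs(angle - 90) < tol_deg
        if near_horizontal or near_vertical:
            kept.append(l)
    return kept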
You can use a trackbar to help select appropriate parameters. Sample code for choosing the Canny edge thresholds is given below.
import cv2
import numpy as np

cv2.namedWindow('Result')
img = cv2.imread('qkEuE.png')
v1 = 0
v2 = 0

def doEdges():
    edges = cv2.Canny(img, v1, v2)
    edges = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
    res = np.concatenate((img, edges), axis=0)
    cv2.imshow('Result', res)

def setVal1(val):
    global v1
    v1 = val
    doEdges()

def setVal2(val):
    global v2
    v2 = val
    doEdges()

cv2.createTrackbar('Val1', 'Result', 0, 500, setVal1)
cv2.createTrackbar('Val2', 'Result', 0, 500, setVal2)
cv2.imshow('Result', img)
cv2.waitKey(0)
cv2.destroyAllWindows()  # was missing the parentheses, so it never ran
Hope it helps you.

How to detect rectangular items in image with Python

I have found a plethora of questions about finding "things" in images using OpenCV et al. in Python, but so far I have been unable to piece them together into a reliable solution for my problem.
I am attempting to use computer vision to help count tiny surface mount electronics parts. The idea is for me to dump parts onto a solid color piece of paper, snap a picture, and have the software tell me how many items are in it.
The "things" differ from one picture to the next but will always be identical in any one image. I seem to be able to manually tune the parameters for things like hue/saturation for a particular part but it tends to require tweaking every time I change to a new part.
My current, semi-functioning code is posted below:
import imutils
import numpy
import cv2
import sys

def part_area(contours, round=10):
    """Finds the mode of the contour area. The idea is that most of the parts in an image will be separated and that
    finding the most common area in the list of areas should provide a reasonable value to approximate by. The areas
    are rounded to the nearest multiple set by the round argument to reduce the list of options."""
    # Start with a list of all of the areas for the provided contours.
    areas = [cv2.contourArea(contour) for contour in contours]
    # Determine a threshold for the minimum amount of area as 1% of the overall range.
    threshold = (max(areas) - min(areas)) / 100
    # Trim the list of areas down to only those that exceed the threshold.
    thresholded = [area for area in areas if area > threshold]
    # Round the areas to the nearest value set by the round argument.
    rounded = [int((area + (round / 2)) / round) * round for area in thresholded]
    # Remove any areas that rounded down to zero.
    cleaned = [area for area in rounded if area != 0]
    # Count the areas with the same values.
    counts = {}
    for area in cleaned:
        if area not in counts:
            counts[area] = 0
        counts[area] += 1
    # Reduce the areas down to only those that are in groups of three or more with the same area.
    above = []
    for area, count in counts.items():  # iteritems() is Python 2 only
        if count > 2:
            for _ in range(count):
                above.append(area)
    # Take the mean of the areas as the average part size.
    average = sum(above) / len(above)
    return average

def find_hue_mode(hsv):
    """Given an HSV image as an input, compute the mode of the list of hue values to find the most common hue in the
    image. This is used to determine the center for the background color filter."""
    pixels = {}
    for row in hsv:
        for pixel in row:
            hue = pixel[0]
            if hue not in pixels:
                pixels[hue] = 0
            pixels[hue] += 1
    counts = sorted(pixels.keys(), key=lambda key: pixels[key], reverse=True)
    return counts[0]

if __name__ == "__main__":
    # load the image and resize it to a smaller factor so that the shapes can be approximated better
    image = cv2.imread(sys.argv[1])
    # find the most common hue, which is assumed to be the background color
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    center = find_hue_mode(hsv)
    print('Center Hue:', center)
    lower = numpy.array([center - 10, 50, 50])
    upper = numpy.array([center + 10, 255, 255])
    # Threshold the HSV image to get only the background color
    mask = cv2.inRange(hsv, lower, upper)
    inverted = cv2.bitwise_not(mask)
    blurred = cv2.GaussianBlur(inverted, (5, 5), 0)
    edged = cv2.Canny(blurred, 50, 100)
    dilated = cv2.dilate(edged, None, iterations=1)
    eroded = cv2.erode(dilated, None, iterations=1)
    # find contours in the thresholded image and initialize the shape detector
    contours = cv2.findContours(eroded.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = imutils.grab_contours(contours)  # handles the differing findContours return values across OpenCV versions
    # Compute the area for a single part to use when setting the threshold and calculating the number of parts within
    # a contour area.
    part_area = part_area(contours)
    # The threshold for a part's area - can't be too much smaller than the part itself.
    threshold = part_area * 0.5
    part_count = 0
    for contour in contours:
        if cv2.contourArea(contour) < threshold:
            continue
        # Sometimes parts are close enough together that they become one in the image. To battle this, the total area
        # of the contour is divided by the area of a part (derived earlier).
        part_count += int((cv2.contourArea(contour) / part_area) + 0.1)  # this 0.1 "rounds up" slightly and was determined empirically
        # Draw an approximate contour around each detected part to give the user an idea of what the tool has computed.
        epsilon = 0.1 * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, epsilon, True)
        cv2.drawContours(image, [approx], -1, (0, 255, 0), 2)
    # Print the part count and show off the processed image.
    print('Part Count:', part_count)
    cv2.imshow("Image", image)
    cv2.waitKey(0)
Here's an example of the type of input image I am using:
or this:
And I'm currently getting results like this:
The results clearly show that the script has trouble identifying some parts, and its true Achilles' heel seems to be when parts touch one another.
So my question/challenge is, what can I do to improve the reliability of this script?
The script is to be integrated into an existing Python tool so I am searching for a solution using Python. The solution does not need to be pure Python as I am willing to install whatever 3rd party libraries might be needed.
If the objects are all of similar types, you might have more success isolating a single example in the image and then using feature matching to detect them.
A full solution would be out of scope for Stack Overflow, but my suggestion for progress would be to first somehow find one or more "correct" examples using your current rectangle retrieval method. You could probably look for all your samples that are of the expected size, or that are accurate rectangles.
Once you have isolated a few positive examples, use some feature matching techniques to find the others. There is a lot of reading up you probably need to do on it but that is a potential solution.
A general summary is that you use your positive examples to find "features" of the object you want to detect. These "features" are generally things like corners or changes in gradient. OpenCV contains many methods you can use.
Once you have the features, there are several algorithms in OpenCV you can look at that will search the image for all matching features. You'll want one that is rotation invariant (it can detect the same features arranged at different rotations), but you probably don't need scale invariance (detecting the same features at multiple scales).
My one concern with this method is that the items you are searching for in your images are quite small. It might be difficult to find good, consistent features to match on.
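For what it's worth, a minimal sketch of such a pipeline using ORB (the detector choice, file names, and parameters are all assumptions, not from the answer above):
import cv2

template = cv2.imread('one_part.png', cv2.IMREAD_GRAYSCALE)  # hypothetical cropped positive example
scene = cv2.imread('all_parts.png', cv2.IMREAD_GRAYSCALE)    # hypothetical full photo

orb = cv2.ORB_create(nfeatures=500)  # ORB features are rotation invariant
kp_t, des_t = orb.detectAndCompute(template, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_t, des_s), key=lambda m: m.distance)
vis = cv2.drawMatches(template, kp_t, scene, kp_s, matches[:25], None)
cv2.imshow('matches', vis)
cv2.waitKey(0)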
You're tackling a 2D object recognition problem, for which there are many possible approaches. You've gone about it using background/foreground segmentation, which is OK as you have control over the scene (laying down the background paper sheet). However, this will always have fundamental limitations when objects touch. A simple solution to your problem could be this:
1) You assume that touching objects are rare events (which is a fine assumption in your problem). Therefore you can compute the area of each segmented region and take the median of these, which gives a robust estimate of the object's area. Let's call this robust estimate A (in square pixels). This will be fine if fewer than 50% of the regions correspond to touching objects.
2) You then proceed to measure the number of objects in each segmented region. Let Ai be the area of the ith region. You then compute the number of objects in each region as Ni = round(Ai/A), and sum the Ni to give the total number of objects (a short sketch follows the conditions below).
This approach will be fine as long as the following conditions are met:
A) The touching objects do not significantly overlap
B) You do not have objects lying on their sides. If you do you might be able to deal with this using two area estimates (side and flat). Better to eliminate this scenario if you can for simplicity.
C) The objects are all roughly the same distance to the camera. If this is not the case then the areas of the objects (in pixels) cannot be modelled well by a single value.
D) There are not partially visible objects at the borders of the image.
E) You ensure that only the same type of object is visible in each image.
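A minimal sketch of steps 1) and 2), assuming contours comes from cv2.findContours on the segmented mask:
import cv2
import numpy as np

areas = np.array([cv2.contourArea(c) for c in contours])
A = np.median(areas)                 # robust estimate of a single object's area
Ni = np.rint(areas / A).astype(int)  # objects per region: Ni = round(Ai / A)
total = int(Ni.sum())                # total number of objects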

How do I programmatically find the pixel locations of specific features in an image?

I'm building an automated electricity / gas meter reader using OpenCV and Python. I've got as far as taking shots with a webcam:
I can then use an affine transform to unwarp the image (an adaptation of this example):
import cv2
import numpy as np

def unwarp_image(img):
    rows, cols = img.shape[:2]
    # Source points
    left_top = 12
    left_bottom = left_top + 2
    top_left = 24
    top_right = 13
    bottom = 47
    right = 180
    srcTri = np.array([(left_top, top_left), (right, top_right), (left_bottom, bottom)], np.float32)
    # Corresponding destination points. Remember, both sets are of float32 type
    dst_height = 30
    dstTri = np.array([(0, 0), (cols - 1, 0), (0, dst_height)], np.float32)
    # Affine transformation
    warp_mat = cv2.getAffineTransform(srcTri, dstTri)  # generates a 2x3 affine transform matrix
    dst = cv2.warpAffine(img, warp_mat, (cols, dst_height))  # note dsize is (cols, dst_height), i.e. (width, height)
    # cv2.imshow("crop_img", dst)
    # cv2.waitKey(0)
    return dst
...which gives me an image something like this:
I still need to extract the text using some sort of OCR routine, but first I'd like to automate the part that identifies which pixel locations to apply the affine transform to, so the software keeps working if someone knocks the webcam.
Since your image is pretty much planar, you can look into finding the homography between the image you get from the webcam and the desired image (in the upright position).
Edit: this will rotate the image into the upright position. Once you've registered your image (brought it into the upright position), you can do row-wise or column-wise projections (sum all the pixels along the columns to get one vector, sum all the pixels along the rows to get another). You can use these vectors to figure out where you have a jump in color, and crop there.
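A hedged sketch of those projections, assuming registered is the upright grayscale image (the variable names and the jump heuristic are assumptions):
import numpy as np

col_proj = registered.sum(axis=0)  # one value per column
row_proj = registered.sum(axis=1)  # one value per row
# large jumps in the column projection mark the left and right edges of the display
col_jumps = np.abs(np.diff(col_proj.astype(np.int64)))
x_left, x_right = sorted(np.argsort(col_jumps)[-2:])
cropped = registered[:, x_left:x_right]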
Alternatively you can use the Hough transform, which gives you lines in an image. You can probably get away with not registering the image if you do this.
