I have a series of horizontal lines detected in an image using OpenCV through Python.
For each line, I have a series of (x,y) points along the lines. The points go right across the image on the X axis at regular intervals.
I want to draw what is essentially a filled rectangle, coloured white, covering the image from one line to the next so as to hide the contents of the row whose top and bottom are the two sets of line points.
The problem is that the Y position of the lines changes by small amounts as I go across the image so a rectangle won't fit the row nicely.
I think I essentially need to draw a freeform shape using the points that I know and fill it
so that the area between the points on the top and bottom line is coloured in.
I have tried creating a polygon using the points that I have with the following command:
cv2.fillPoly(image, np.array([polygon_points], np.int32), 255)
However, rather than producing a fully filled shape that covers a row of the image, I get two triangles which start at the correct point but which meet in the middle, leaving the rest of the 'rectangle' unfilled.
How can I draw a freeform shape in OpenCV which covers the points along the top and bottom lines that I have, but which also fills in all the pixels in between the two lines?
I hope that this makes sense. Thanks for any help.
This is because your points are likely ordered in your list as the following (attempted) picture shows:
 _________________________
| pt1   pt2   ...    ptm  |
|                         |
|                         |
| ptm+1 ptm+2 ...    pt2m |
|_________________________|
Because of this, fillPoly tries to fill from ptm straight across to ptm+1, producing your triangles (personally I would describe it as an hourglass shape). There are two solutions to your problem. Either you can reverse the second set of points, changing your list from
points = [pt1, pt2, ..., ptm, ptm+1, ptm+2, ..., pt2m]
to
points = [pt1, pt2, ..., ptm, pt2m, pt2m-1, ..., ptm+1]
followed by your fillPoly call (although fillConvexPoly is apparently much faster)
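For example, a minimal sketch of this first option (assuming top_points and bottom_points are your two rows of (x, y) tuples, each ordered left to right; the names are mine):
top_points = [...]     # points along the upper line, left to right
bottom_points = [...]  # points along the lower line, left to right
polygon_points = top_points + list(reversed(bottom_points))  # walk the boundary in one loop
cv2.fillPoly(image, np.array([polygon_points], np.int32), 255)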
or the alternative
x1 = min(x_points)  # bounding box of all the line points
y1 = min(y_points)
x2 = max(x_points)
y2 = max(y_points)
cv2.rectangle(img, (x1, y1), (x2, y2), 255, thickness=-1)  # the color argument is required; thickness=-1 fills
EDIT: If you are looking to create a polygon from the minimum enclosing set of points, you can use the OpenCV function convexHull to determine the convex hull (minimum enclosing set of points) and fill that polygon instead. Documentation is listed below:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=convexhull#convexhull
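For example (a sketch, reusing polygon_points from the question):
points = np.array(polygon_points, np.int32)
hull = cv2.convexHull(points)         # minimum enclosing convex set of points
cv2.fillConvexPoly(image, hull, 255)  # fill the hull in white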
Related
I have an image with some points, and I need to draw the line of best fit on the image. The points would make a polynomial line.
This is what I've got so far:
#The coordinates are filled in earlier (self.lx, self.ly)
z = np.polyfit(self.lx, self.ly, 2)
lspace = np.linspace(0, 100, 100)
draw_x = lspace
draw_y = np.polyval(z, draw_x) #I am unsure of how to draw it on to the image
To draw a polyline on an image you can use OpenCV's polylines:
Drawing Polygon
To draw a polygon, first you need coordinates of vertices. Make those points into an array of shape ROWSx1x2 where ROWS is the number of vertices, and it should be of type int32. Here we draw a small polygon with four vertices in yellow color.
pts = np.array([[10,5],[20,30],[70,20],[50,10]], np.int32)
pts = pts.reshape((-1,1,2))
cv.polylines(img,[pts],True,(0,255,255))
Note
If the third argument is False, you will get a polyline joining all the points, not a closed shape.
cv.polylines() can be used to draw multiple lines. Just create a list of all the lines you want to draw and pass it to the function. All lines will be drawn individually. It is a much better and faster way to draw a group of lines than calling cv.line() for each line.
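Putting that together with your fit, a minimal sketch (assuming img is the image you want to draw on, and reusing draw_x and draw_y from your code):
draw_points = np.asarray([draw_x, draw_y]).T.astype(np.int32)  # Nx2 int32 points, as polylines expects
cv2.polylines(img, [draw_points], False, (0, 0, 255), 2)  # closed=False: a curve, not a polygon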
I'm looking for a way to split a number of images into proper rectangles. These rectangles are ideally shaped such that each of them take on the largest possible size without containing a lot of white.
So let's say that we have the following image
I would like to get an output such as this:
Note the overlapping rectangles, the hole and the non-axis-aligned rectangle; all of these are likely scenarios I have to deal with.
I'm aiming to get the coordinates describing the corner pieces of the rectangles so something like
[[(73,13),(269,13),(269,47),(73,47)],
[(73,13),(73,210),(109,210),(109,13)],
...]
In order to do this I have already looked at cv2.findContours, but I couldn't get it to work with overlapping rectangles (though I could use the hierarchy model to deal with holes, as that causes the contours to be merged into one).
Note that although not shown holes can be nested.
An algorithm that works roughly as follows should be able to give you the result you seek:
1. Get all the corner points in the image.
2. Randomly select 3 points to create a rectangle.
3. Count the ratio of yellow pixels within the rectangle; accept the rectangle if the ratio satisfies a threshold.
4. Repeat steps 2 and 3 until:
a) every single combination of points has been tried, or
b) all yellow pixels are accounted for, or
c) n iterations have passed.
The difficult part of this algorithm lies in step 2, creating a rectangle from 3 points.
If all the rectangles were axis-aligned, you could simply take the minimum x and y as the top-left corner and the maximum x and y as the bottom-right corner of your new rectangle.
But since you have off-axis rectangles, you will need to check that the two vectors created from the 3 points are at a 90-degree angle to each other before generating the rectangle.
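As a sketch of that test (plain NumPy; the function name and tolerance are my own): treat the middle point as the corner candidate and accept the triple only when the two edge vectors are nearly perpendicular; the fourth corner then follows by vector addition.
import numpy as np

def rectangle_from_three(p0, p1, p2, tol=1e-3):
    # Treat p1 as the corner; v1 and v2 are the two candidate edges.
    v1 = np.asarray(p0, dtype=float) - np.asarray(p1, dtype=float)
    v2 = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    # Perpendicular edges have a normalized dot product of ~0.
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    if abs(cos_angle) > tol:
        return None  # not a right angle, reject this triple
    p3 = np.asarray(p1, dtype=float) + v1 + v2  # fourth corner, opposite p1
    return np.array([p0, p1, p2, p3])  # corners in traversal order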
I have a set of contour points drawn on an image which is stored as a 2D numpy array. The contours are represented by 2 numpy arrays of float values for x and y coordinates each. These coordinates are not integers and do not align perfectly with pixels, but they do tell you the location of the contour points with respect to pixels.
I would like to be able to select the pixels that fall within the contours. I wrote some code that is pretty much the same as answer given here: Access pixel values within a contour boundary using OpenCV in Python
temp_list = []
for a, b in zip(x_pixel_nos, y_pixel_nos):
    temp_list.append([[a, b]])  # 2D array of shape 1x2
temp_array = np.array(temp_list)
contour_array_list = []
contour_array_list.append(temp_array)

lst_intensities = []
# For each list of contour points...
for i in range(len(contour_array_list)):
    # Create a mask image that contains the contour filled in
    cimg = np.zeros_like(pixel_array)
    cv2.drawContours(cimg, contour_array_list, i, color=255, thickness=-1)
    # Access the image pixels and create a 1D numpy array then add to list
    pts = np.where(cimg == 255)
    lst_intensities.append(pixel_array[pts[0], pts[1]])
When I run this, I get the error: error: OpenCV(3.4.1) /opt/conda/conda-bld/opencv-suite_1527005509093/work/modules/imgproc/src/drawing.cpp:2515: error: (-215) npoints > 0 in function drawContours
I am guessing that at this point OpenCV will not work for me because my contours are floats, not integers, which drawContours does not handle. If I convert the coordinates of the contours to integers, I lose a lot of precision.
So how can I get at the pixels that fall within the contours?
This should be a trivial task but so far I was not able to find an easy way to do it.
I think that the simplest way of finding all pixels that fall within the contour is as follows.
The contour is described by a set of non-integer points. We can think of these points as vertices of a polygon, the contour is a polygon.
We first find the bounding box of the polygon. Any pixel outside of this bounding box is not inside the polygon, and doesn't need to be considered.
For the pixels inside the bounding box, we test if they are inside the polygon using the classical test: Trace a line from some point at infinity to the point, and count the number of polygon edges (line segments) crossed. If this number is odd, the point is inside the polygon. It turns out that Matplotlib contains a very efficient implementation of this algorithm.
I'm still getting used to Python and NumPy; this might be a bit awkward code if you're a Python expert. But it is straightforward what it does, I think. First it computes the bounding box of the polygon, then it creates an array points with the coordinates of all pixels that fall within this bounding box (I'm assuming the pixel centroid is what counts). It applies the matplotlib.path.Path.contains_points method to this array, yielding a boolean array mask. Finally, it reshapes this array to match the bounding box.
import math
import matplotlib.path
import numpy as np
x_pixel_nos = [...]
y_pixel_nos = [...] # Data from https://gist.github.com/sdoken/173fae1f9d8673ffff5b481b3872a69d
temp_list = []
for a, b in zip(x_pixel_nos, y_pixel_nos):
    temp_list.append([a, b])
polygon = np.array(temp_list)
left = np.min(polygon, axis=0)
right = np.max(polygon, axis=0)
x = np.arange(math.ceil(left[0]), math.floor(right[0])+1)
y = np.arange(math.ceil(left[1]), math.floor(right[1])+1)
xv, yv = np.meshgrid(x, y, indexing='xy')
points = np.hstack((xv.reshape((-1,1)), yv.reshape((-1,1))))
path = matplotlib.path.Path(polygon)
mask = path.contains_points(points)
mask.shape = xv.shape
After this code, what is necessary is to locate the bounding box within the image, and color the pixels. left contains the pixel in the image corresponding to the top-left pixel of mask.
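For example, to white out those pixels (a sketch reusing left and mask from above, and assuming the image array is called image):
x0, y0 = math.ceil(left[0]), math.ceil(left[1])  # image position of the top-left pixel of mask
image[y0:y0 + mask.shape[0], x0:x0 + mask.shape[1]][mask] = 255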
It is possible to improve the performance of this algorithm. If the ray traced to test a pixel is horizontal, you can imagine that all the pixels along a horizontal line can benefit from the work done for the pixels to the left. That is, it is possible to compute the in/out status for all pixels on an image line with a little bit more effort than the cost for a single pixel.
The matplotlib.path.contains_points algorithm is much more efficient than performing a single-point test for all points, since sorting the polygon edges and vertices appropriately make each test much cheaper, and that sorting only needs to be done once when testing many points at once. But this algorithm doesn't take into account that we want to test many points on the same line.
These are what I see when I do
pp.plot(x_pixel_nos, y_pixel_nos)
pp.imshow(mask)
after running the code above with your data. Note that the y axis is inverted with imshow, hence the vertically mirrored shapes.
With the help of the Shapely library in Python, this can easily be done as:
from shapely.geometry import Point, Polygon
Convert all the x, y coords to a Shapely Polygon:
coords = [(0, 0), (0, 2), (1, 1), (2, 2), (2, 0), (1, 1), (0, 0)]
pl = Polygon(coords)
Now find the pixels inside each polygon:
minx, miny, maxx, maxy = pl.bounds
minx, miny, maxx, maxy = int(minx), int(miny), int(maxx), int(maxy)
box_patch = [[x, y] for x in range(minx, maxx+1) for y in range(miny, maxy+1)]
pixels = []
for pb in box_patch:
    pt = Point(pb[0], pb[1])
    if pl.contains(pt):
        pixels.append([int(pb[0]), int(pb[1])])
Run this loop once per set of coords, i.e. once per polygon; pixels then holds all the pixels inside that polygon.
good to go :)
skimage.draw.polygon can handle this; see the example code of this function on that page.
If you want just the contour, you can use skimage.segmentation.find_boundaries.
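For example, a minimal sketch using the names from the question (x_pixel_nos, y_pixel_nos, pixel_array); skimage.draw.polygon accepts the float coordinates directly, so no rounding is needed beforehand:
import numpy as np
from skimage.draw import polygon

cimg = np.zeros_like(pixel_array)
rr, cc = polygon(y_pixel_nos, x_pixel_nos, shape=cimg.shape)  # rows are y, columns are x
cimg[rr, cc] = 255  # cimg is now a filled mask of the contour interior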
In order to get circles whose coloring is radially symmetric, with the center being the darkest and an exponential decay in color as one moves farther from the center along the radius, I used imshow with clip_path set to Circle patches.
Here's a toy script that overlaps two such circles: https://gist.github.com/bmer/7063cc2dd09f1b80a252
Here's the output of the script:
As you can see, even though the alpha is set at 0.5 for both clipped images, there doesn't seem to be proper "color mixing" occurring (we should see a result that is symmetric along the x-axis).
Why is that, and what could I do to fix this issue?
Let's say I have a contour which is meant to represent the shape of the hand. The issue is, the contour also contains other parts of the arm (i.e. wrist, forearm, upper arm, etc.) To find the position of the hand's center, I'm looking at the combinations (size 3) of the defect points of the convex hull, finding the center of circle which is tangent to these 3 points, and averaging the most reasonable ones together to gain a rough understanding of where the hand's center is.
With this averaged center, I'd like to be able to remove points on my given contour which don't fall inside some radius that's likely to determine the width of the hand - in other words, cutoff points that don't fall inside this circle. I could simply iterate through each contour point and remove these points, but that would be horribly inefficient because of Python loops' speed. Is there a faster or more efficient way of doing this, perhaps using some inbuilt OpenCV functions or otherwise?
Thanks!
Interesting follow-up to your other question.
You can remove the unwanted points by boolean indexing:
import numpy as np
hand_contour = np.random.rand(60,2) # you can use np.squeeze on the data from opencv to get rid of that annoying singleton axis (60,1,2)->(60,2)
# You have found the center of the palm and a possible radius
center = np.array([.3, .1])
radius = .3
mask = (hand_contour[:,0] - center[0])**2 + (hand_contour[:,1] - center[1])**2 < radius**2
within_palm = hand_contour[mask,:] # Only selects those values within that circle.
You could also mask the unwanted values, with a masked_array, but if you're not interested in keeping the original data, the above method is the way to go.
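For completeness, a sketch of that masked-array variant (reusing mask from above), which keeps the original array shape:
outside = np.broadcast_to(~mask[:, None], hand_contour.shape)  # mask both coordinates of excluded points
masked_contour = np.ma.masked_array(hand_contour, mask=outside)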