I have three arrays: one contains the centre of a circle, one contains the radius of the circle and one contains the ID of the circle.
The ID refers to a 128x128 array on which the circles should be drawn, and for any ID there can be one or many circles.
There is a function to draw circles in the skimage library, called disk (skimage.draw.disk).
I am struggling with cycling through the IDs and matching them up with the centres and radii in the other two arrays.
I have a data frame which stores the information and the arrays are below:
allIDs = getNumpyArrayFromPandas(data, ['Id'])
allCentreXY = getNumpyArrayFromPandas(data, ['centre x', 'centre y'])
allRadii = getNumpyArrayFromPandas(data, ['radius'])
i.e. there will be 7 circles drawn for the first ID (4000), 6 circles drawn for the next ID (4001), etc.
I have tried
import numpy as np
from skimage.draw import disk

def draw_one_circle(img, one_circle):
    radius = one_circle[0]
    centre = one_circle[1]
    rr, cc = disk(centre, radius, shape=(128, 128))
    img[rr, cc] = 1

def draw_circles(img, circles):
    for circle in circles:
        draw_one_circle(img, circle)

circles = read_input(x)
img = np.zeros((128, 128), dtype=np.uint8)
draw_circles(img, circles)
but I don't know how to read the coordinates in from the arrays
To get the coordinates, you can iterate over the three arrays in parallel, using zip():
for ident, centre, radius in zip(allIDs, allCentreXY, allRadii):
    # Draw one circle with these values.
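Putting that together with the drawing functions above, a minimal sketch (assuming allIDs and allRadii come back as N x 1 arrays, allCentreXY as an N x 2 array of (x, y) values, and that each ID gets its own 128x128 image; the images dictionary is my own naming) could look like this:

import numpy as np
from skimage.draw import disk

images = {}  # one 128x128 array per ID
for ident, centre, radius in zip(allIDs, allCentreXY, allRadii):
    ident = int(ident[0])  # each row of allIDs is a length-1 array
    img = images.setdefault(ident, np.zeros((128, 128), dtype=np.uint8))
    # disk() expects the centre as (row, col), i.e. (y, x)
    rr, cc = disk((centre[1], centre[0]), float(radius[0]), shape=(128, 128))
    img[rr, cc] = 1

This groups the circles by ID as it goes, so the 7 circles for ID 4000 all land on images[4000], the 6 for ID 4001 on images[4001], and so on.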
I have an image with some points, and I need to draw the line of best fit on the image. The points would make a polynomial line.
This is what I've got so far:
#The coordinates are filled in earlier (self.lx, self.ly)
z = np.polyfit(self.lx, self.ly, 2)
lspace = np.linspace(0, 100, 100)
draw_x = lspace
draw_y = np.polyval(z, draw_x) #I am unsure of how to draw it on to the image
To draw a polyline on an image you can use the polylines function of OpenCV:
Drawing Polygon
To draw a polygon, first you need the coordinates of the vertices. Make those points into an array of shape ROWSx1x2, where ROWS is the number of vertices, and it should be of type int32. Here we draw a small polygon with four vertices in yellow color.
pts = np.array([[10,5],[20,30],[70,20],[50,10]], np.int32)
pts = pts.reshape((-1,1,2))
cv.polylines(img,[pts],True,(0,255,255))
Note
If the third argument is False, you will get a polyline joining all the points, not a closed shape.
cv.polylines() can be used to draw multiple lines. Just create a list of all the lines you want to draw and pass it to the function. All lines will be drawn individually. It is a much better and faster way to draw a group of lines than calling cv.line() for each line.
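Applied to the fitted curve from the question above, a rough sketch (assuming img is the image you want to draw on, e.g. loaded with cv.imread; draw_x and draw_y are the arrays from the question) might be:

import numpy as np
import cv2 as cv

# Stack the fitted x/y values into a ROWSx1x2 int32 array and draw an open polyline.
curve = np.column_stack((draw_x, draw_y)).astype(np.int32)
curve = curve.reshape((-1, 1, 2))
cv.polylines(img, [curve], False, (0, 255, 255), thickness=2)

Passing False keeps the curve open instead of closing it back to the first point.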
I have some GeoTiff files that are relatively large (10980 x 10980 pixels), all corresponding to the same geographic area (and sharing the same coordinate reference system), and I have a large number of polygons (100,000+) corresponding to land parcels. I want to extract from each image file the pixels corresponding to each polygon. Currently, the way I'm doing this is using shapely Polygons and the rasterio.mask.mask function, like this:
for filename in image_files:
    with rasterio.open(filename) as src:
        for shape in shapes:
            data, _ = rasterio.mask.mask(src, [shape], crop=True)
This is empirically rather slow. If I have the mask indices precomputed, then I just need to read each image's entire data once and then use the pre-computed indices to pull out the relevant pixels for each polygon (I don't need them to be in the correct 2-dimensional configuration, I just need the values), and this is very fast. But I don't know if there's a fast way to get these pixel indices. I know that I could use rasterio's raster_geometry_mask function to get a mask the size of the whole image, and then use numpy to get the indices of the elements inside the polygon, but then it would be needlessly constructing a 10980 x 10980 array for each polygon to make the mask, and that's very very slow.
What I ended up doing is, when I open the first image, then for each polygon:
1. Use the image transform to convert the polygon to pixel coordinates, and find the rectangular bounding box containing the polygon in integer pixel coordinates.
2. To figure out which pixels in the bounding box are actually in the polygon, construct a shapely Polygon for each pixel and use the .intersects() method (if you wanted to include only pixels that are completely inside the polygon, you could use .contains() instead). (I wasn't sure if this would be slow, but it turned out not to be.)
3. Save the list of coordinate pairs for all pixels in each polygon.
Then for every new image you open, you just read the entire image data and index out the parts for each polygon because you already have the pixel indices.
Code looks approximately like this:
import math
import numpy
import pyproj
import rasterio.mask
from shapely.geometry import Polygon

pixels_per_shape = None
for filename in image_files:
    with rasterio.open(filename) as src:
        if pixels_per_shape is None:
            # Only for the first image: convert each polygon to pixel coordinates
            # and precompute the pixel indices that intersect it.
            projector = pyproj.Proj(src.crs)
            pixelcoord_shapes = [
                Polygon(zip(*(~src.transform * numpy.array(projector(*zip(*shape.boundary.coords))))))
                for shape in shapes
            ]
            pixels_per_shape = []
            for shape in pixelcoord_shapes:
                xmin = max(0, math.floor(shape.bounds[0]))
                ymin = max(0, math.floor(shape.bounds[1]))
                xmax = math.ceil(shape.bounds[2])
                ymax = math.ceil(shape.bounds[3])
                # One 1x1 square per pixel in the bounding box, keyed by (row, col).
                pixel_squares = {}
                for j in range(xmin, xmax + 1):
                    for i in range(ymin, ymax + 1):
                        pixel_squares[(i, j)] = Polygon.from_bounds(j, i, j + 1, i + 1)
                pixels_per_shape.append([
                    coords for (coords, pixel) in pixel_squares.items()
                    if shape.intersects(pixel)
                ])
        # For every image: read all the data once and index out each polygon's pixels.
        whole_data = src.read()
        for pixels in pixels_per_shape:
            ivals, jvals = zip(*pixels)
            shape_data = whole_data[0, ivals, jvals]
            ...
I am coding for a project which requires me to draw a grid of 15x15 black circles. The program will then randomly choose a circle to fill "gold." The circles surrounding the "gold" circle are to be "tan," the circles surrounding "tan" should be "grey," and all other circles are "white." The colors are revealed when a mouse click is detected over the circle. I was able to draw the black circles, but am having difficulty with randomizing the "gold" circle and filling in the rest of the colors.
def circle_grid(game):
    # Create a list that creates 15x15 grid of black filled circles
    Center = Point(30, 70)
    # append to a list
    Y = []
    for y in range(15):
        for x in range(15):
            CIRCLES = Circle(Center, 15)
            CIRCLES.setFill("black")
            Center = Point((Center.getX() + 30), (Center.getY()))
            CIRCLES.draw(game)
            Y.append(CIRCLES)
        Center = Point(30, Center.getY() + 30)
This is the specific description (and image) of what is supposed to happen.
I would suggest making the grid two-dimensional—a list-of-lists—so that the Circles in it can be referenced by the row and column they are in. Here's what I mean:
def circle_grid(game):
    grid_width, grid_height = 15, 15
    radius = 15  # of each Circle in grid
    diameter = radius * 2
    x, y = radius, radius  # Center of upper-left-most Circle of grid
    grid = []
    for i in range(grid_width):
        row = []
        for j in range(grid_height):
            circle = Circle(Point(x + (i * diameter), y + (j * diameter)), radius)
            circle.setFill('black')
            row.append(circle)
        grid.append(row)
    return grid

grid = circle_grid(None)

# Print grid of Circles created.
for row in range(len(grid)):
    line = []
    for col in range(len(grid[0])):
        line.append(str(grid[row][col]))
    print(', '.join(line))
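If you want the circles actually shown in a graphics.py window rather than just printed, a small sketch (the window size here is just a guess that fits a 15x15 grid of radius-15 circles) would be:

win = GraphWin("Circle grid", 15 * 30, 15 * 30)  # hypothetical window sized to fit the grid
grid = circle_grid(win)
for row in grid:
    for circle in row:
        circle.draw(win)  # graphics.py: render each Circle in the window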
Doing this will make it relatively easy to access them via grid[row][col], so after deciding on the position of the gold one, changing the color of groups of them around it becomes a matter of adding or subtracting values from the row, col of the gold one.
For example, say you want to put the gold one at a random position on the grid:
row_gold, col_gold = random.randrange(grid_width), random.randrange(grid_width)
grid[row_gold][col_gold].setFill('gold')
Afterwards, the eight tan Circles immediately around it can be accessed relative to its position like this:
grid[row_gold-1][col_gold-1].setFill('tan')
grid[row_gold-1][col_gold].setFill('tan')
grid[row_gold-1][col_gold+1].setFill('tan')
grid[row_gold][col_gold-1].setFill('tan')
# grid[row_gold][col_gold] # don't change the gold one itself
grid[row_gold][col_gold+1].setFill('tan')
grid[row_gold+1][col_gold-1].setFill('tan')
grid[row_gold+1][col_gold].setFill('tan')
grid[row_gold+1][col_gold+1].setFill('tan')
and the indices of all the grey ones could also be calculated relative to it in a similar manner (i.e. based on the values of row_gold and col_gold).
You should be able to find basic documentation to give you random x and y values for the gold circle.
Now, what defines "adjacent" in a square lattice? The tan layer is all the circles that have x and/or y differing by 1 from the gold's position. Gray circles have to have one coordinate (or both) that differs by exactly 2.
That's the algorithm. Can you take it from there?
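Putting the two answers together, here is a minimal sketch (the loop itself is my own) that sets the fills based on Chebyshev distance from the gold circle, using the grid, row_gold and col_gold variables from above:

# Colour every circle by how far it is from the gold one:
# distance 0 -> gold, 1 -> tan, 2 -> grey, everything else -> white.
for row in range(len(grid)):
    for col in range(len(grid[0])):
        d = max(abs(row - row_gold), abs(col - col_gold))
        if d == 0:
            grid[row][col].setFill('gold')
        elif d == 1:
            grid[row][col].setFill('tan')
        elif d == 2:
            grid[row][col].setFill('grey')
        else:
            grid[row][col].setFill('white')

Because the loop covers the whole grid, it also avoids the out-of-range indexing you would get from the fixed +1/-1 offsets when the gold circle sits on an edge. If the colours should only appear on a mouse click, store the intended colour per circle and call setFill in the click handler instead.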
I have a set of contour points drawn on an image which is stored as a 2D numpy array. The contours are represented by two numpy arrays of float values, one for the x and one for the y coordinates. These coordinates are not integers and do not align perfectly with pixels, but they do tell you the location of the contour points with respect to pixels.
I would like to be able to select the pixels that fall within the contours. I wrote some code that is pretty much the same as answer given here: Access pixel values within a contour boundary using OpenCV in Python
temp_list = []
for a, b in zip(x_pixel_nos, y_pixel_nos):
    temp_list.append([[a, b]])  # 2D array of shape 1x2
temp_array = np.array(temp_list)
contour_array_list = []
contour_array_list.append(temp_array)
lst_intensities = []
# For each list of contour points...
for i in range(len(contour_array_list)):
    # Create a mask image that contains the contour filled in
    cimg = np.zeros_like(pixel_array)
    cv2.drawContours(cimg, contour_array_list, i, color=255, thickness=-1)
    # Access the image pixels and create a 1D numpy array then add to list
    pts = np.where(cimg == 255)
    lst_intensities.append(pixel_array[pts[0], pts[1]])
When I run this, I get the error: OpenCV(3.4.1) /opt/conda/conda-bld/opencv-suite_1527005509093/work/modules/imgproc/src/drawing.cpp:2515: error: (-215) npoints > 0 in function drawContours
I am guessing that OpenCV will not work for me at this point because my contours are floats, not integers, which drawContours does not handle. If I convert the coordinates of the contours to integers, I lose a lot of precision.
So how can I get at the pixels that fall within the contours?
This should be a trivial task but so far I was not able to find an easy way to do it.
I think that the simplest way of finding all pixels that fall within the contour is as follows.
The contour is described by a set of non-integer points. We can think of these points as the vertices of a polygon; the contour is a polygon.
We first find the bounding box of the polygon. Any pixel outside of this bounding box is not inside the polygon, and doesn't need to be considered.
For the pixels inside the bounding box, we test if they are inside the polygon using the classical test: Trace a line from some point at infinity to the point, and count the number of polygon edges (line segments) crossed. If this number is odd, the point is inside the polygon. It turns out that Matplotlib contains a very efficient implementation of this algorithm.
I'm still getting used to Python and NumPy, so this might be a bit awkward code if you're a Python expert. But it is straightforward what it does, I think. First it computes the bounding box of the polygon, then it creates an array points with the coordinates of all pixels that fall within this bounding box (I'm assuming the pixel centroid is what counts). It applies the matplotlib.path.contains_points method to this array, yielding a boolean array mask. Finally, it reshapes this array to match the bounding box.
import math
import matplotlib.path
import numpy as np
x_pixel_nos = [...]
y_pixel_nos = [...] # Data from https://gist.github.com/sdoken/173fae1f9d8673ffff5b481b3872a69d
temp_list = []
for a, b in zip(x_pixel_nos, y_pixel_nos):
    temp_list.append([a, b])
polygon = np.array(temp_list)
left = np.min(polygon, axis=0)
right = np.max(polygon, axis=0)
x = np.arange(math.ceil(left[0]), math.floor(right[0])+1)
y = np.arange(math.ceil(left[1]), math.floor(right[1])+1)
xv, yv = np.meshgrid(x, y, indexing='xy')
points = np.hstack((xv.reshape((-1,1)), yv.reshape((-1,1))))
path = matplotlib.path.Path(polygon)
mask = path.contains_points(points)
mask.shape = xv.shape
After this code, what is necessary is to locate the bounding box within the image, and color the pixels. left contains the pixel in the image corresponding to the top-left pixel of mask.
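To then pull out (or colour) those pixels in the full image, a short sketch (pixel_array is the image from the question; the offset names are mine, and I'm assuming the contour lies inside the image) could be:

# Row 0 / column 0 of mask correspond to the first pixel centres inside the
# polygon's bounding box, i.e. ceil(left[1]) rows down and ceil(left[0]) columns across.
x0 = math.ceil(left[0])
y0 = math.ceil(left[1])
window = pixel_array[y0:y0 + mask.shape[0], x0:x0 + mask.shape[1]]
inside_values = window[mask]  # values of the pixels that fall within the contour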
It is possible to improve the performance of this algorithm. If the ray traced to test a pixel is horizontal, you can imagine that all the pixels along a horizontal line can benefit from the work done for the pixels to the left. That is, it is possible to compute the in/out status for all pixels on an image line with a little bit more effort than the cost for a single pixel.
The matplotlib.path.contains_points algorithm is much more efficient than performing a single-point test for all points, since sorting the polygon edges and vertices appropriately makes each test much cheaper, and that sorting only needs to be done once when testing many points at once. But this algorithm doesn't take into account that we want to test many points on the same line.
These are what I see when I do
pp.plot(x_pixel_nos, y_pixel_nos)
pp.imshow(mask)
after running the code above with your data. Note that the y axis is inverted with imshow, hence the vertically mirrored shapes.
With the help of the Shapely library in Python, this can easily be done as:
from shapely.geometry import Point, Polygon
Convert all the x, y coords to a shapely Polygon as:
coords = [(0, 0), (0, 2), (1, 1), (2, 2), (2, 0), (1, 1), (0, 0)]
pl = Polygon(coords)
Now find the pixels in each polygon; wrapping the lookup in a small function makes it easy to reuse for every polygon:

def pixels_in_polygon(pl):
    minx, miny, maxx, maxy = pl.bounds
    minx, miny, maxx, maxy = int(minx), int(miny), int(maxx), int(maxy)
    box_patch = [[x, y] for x in range(minx, maxx + 1) for y in range(miny, maxy + 1)]
    pixels = []
    for pb in box_patch:
        pt = Point(pb[0], pb[1])
        if pl.contains(pt):
            pixels.append([int(pb[0]), int(pb[1])])
    return pixels
Run this for each set of coords, and so for each polygon.
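For example, with the coords above:

pl = Polygon([(0, 0), (0, 2), (1, 1), (2, 2), (2, 0), (1, 1), (0, 0)])
inside = pixels_in_polygon(pl)  # integer [x, y] pairs strictly inside the polygon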
good to go :)
skimage.draw.polygon can handle this; see the example code for this function on its documentation page.
If you want just the contour, you can use skimage.segmentation.find_boundaries.
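A minimal sketch of the skimage route (pixel_array, x_pixel_nos and y_pixel_nos are the arrays from the question; the other names are mine):

import numpy as np
from skimage.draw import polygon

# polygon() accepts non-integer vertex coordinates; it returns the row/column
# indices of the pixels whose centres fall inside the contour.
rr, cc = polygon(y_pixel_nos, x_pixel_nos, shape=pixel_array.shape)
filled = np.zeros_like(pixel_array)
filled[rr, cc] = 1
inside_values = pixel_array[rr, cc]  # the pixel values inside the contour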
I have two points in a 2D space:
(255.62746737327373, 257.61185343423432)
(247.86430198019812, 450.74937623762395)
Plotting them over a PNG with matplotlib, I get this result:
Now i would like to calculate the real distance (in meters) between these two points. I know that the real dimension for that image is 125 meters x 86 meters.
How can I do this?
Let ImageDim be the size of the image in x and y pixels; in this case it would be ImageDim = (700, 500). Let StadionDim be the size of the stadium in meters: StadionDim = (125, 86).
So the function to calculate the point in the stadium corresponding to a point in the image would be:
def calc(ImageDim, StadionDim, Point):
    return (Point[0] * StadionDim[0]/ImageDim[0], Point[1] * StadionDim[1]/ImageDim[1])
So now you would get two points in the stadium. Calculate the distance:
from math import sqrt

Point_one = calc((700, 500), (125, 86), (257, 255))
Point_two = calc((700, 500), (125, 86), (450, 247))
Distance = sqrt((Point_one[0]-Point_two[0])**2 + (Point_one[1]-Point_two[1])**2)
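With the assumed 700 x 500 figure size, those two points map to roughly (45.9, 43.9) and (80.4, 42.5) in stadium coordinates, so Distance comes out at about 34.5 meters.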
I believe your input coordinates are in world space. But when you plot the image without any scaling, you will have plot coordinates in image space, from (0, 0) in the bottom-left corner to (image_width, image_height) in the top-right corner. So to plot your points correctly on the image, you need to transform them to image space, and vice versa whenever any real-world calculations need to be done. I suppose you will not want to calculate, let's say, a soccer ball's speed in pixels per second, but rather in meters per second.
So why not draw the image in world coordinates, to avoid the pain of converting between the two coordinate spaces? You can do this easily in matplotlib; use the extent parameter.
extent : scalars (left, right, bottom, top), optional, default: None
The location, in data-coordinates, of the lower-left and upper-right corners. If None, the image is positioned such that the pixel centers fall on zero-based (row, column) indices.
For example this way:
imshow(image_data, origin='upper', extent=[0, field_width, 0, field_height])
Then you may plot your points on the image in world coordinates. The distance calculation also becomes straightforward:
import math

dx = x2 - x1
dy = y2 - y1
distance = math.sqrt(dx*dx + dy*dy)