Measuring width of contour at given angle in OpenCV - Python

Given a contour in OpenCV, I can extract the width and height by using cv2.boundingRect(contour). This returns the width and height of a bounding rectangle, as illustrated by the left figure.
Given an angle, is it possible to extract the width/height of a rotated bounding rectangle, as illustrated in the right figure?
I am trying to measure the length of moving objects, but I need to measure the length in the movement direction, which may sometimes be up to 45 degrees from the horizontal line.
I know there is a way of getting a rotated bounding rectangle, using cv2.minAreaRect and cv2.boxPoints, but the rotation I need will not always match the minimum area rectangle, so I need to be able to specify the angle somehow.
I only need the rotated width and height values; I don't really need the rotated contour itself, if that makes things easier.

As my comment: why not? Suppose you have an angle; then you can construct two orthogonal lines. Project the points onto each direction and compute max_dist - min_dist for each: that gives you the width and height.
My native language is Chinese and I'm not good at English writing, so I'll just turn it into code:
#!/usr/bin/python3
# 2017.12.13 22:50:16 CST
# 2017.12.14 00:13:41 CST
import numpy as np

def calcPointsWH(pts, theta=0):
    # Measure the width/height of a set of points at a given angle
    th = theta * np.pi / 180
    # two orthogonal unit vectors: (cos th, sin th) and (-sin th, cos th)
    es = np.array([
        [np.cos(th), np.sin(th)],
        [-np.sin(th), np.cos(th)],
    ]).T
    # project the points onto both directions; the spread of the
    # projections is the width/height along that direction
    dists = np.dot(pts, es)
    wh = dists.max(axis=0) - dists.min(axis=0)
    print("==> theta: {}\n{}".format(theta, wh))
Take this diamond for testing:
pts = np.array([[100, 200], [200, 26], [300, 200], [200, 373]], np.int32)
for theta in range(0, 91, 30):
    calcPointsWH(pts, theta)
==> theta: 0
[ 200. 347.]
==> theta: 30
[ 173.60254038 300.51081511]
==> theta: 60
[ 300.51081511 173.60254038]
==> theta: 90
[ 347. 200.]
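Note that calcPointsWH expects points shaped (N, 2), while a contour returned by cv2.findContours is shaped (N, 1, 2). A minimal usage sketch, assuming cnt is such a contour:
cnt_pts = cnt.reshape(-1, 2)
calcPointsWH(cnt_pts, theta=45)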
Now it's 2017.12.14 00:20:55 CST, goodnight.

Use
cv2.minAreaRect(cnt)
Here you can find a complete example and explanation.
Edit: minAreaRect is actually what you need; what's the problem with taking the width and height from an oriented rectangle?
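For completeness, a minimal sketch of reading the size and angle from the oriented rectangle (cnt is assumed to be a contour from cv2.findContours):
import cv2
import numpy as np

rect = cv2.minAreaRect(cnt)         # ((center_x, center_y), (width, height), angle)
(cx, cy), (w, h), angle = rect
box = np.intp(cv2.boxPoints(rect))  # the four corner points, e.g. for drawing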

Related

Can we draw a rectangle using matplotlib.Patches by just specifying its corners?

I want to conceal a few points on a plot. I am using patches to draw a rectangle, so is there any way of plotting a rectangle by just specifying its corners?
I only know how to draw one from the height and width parameters:
patch = ax1.add_patch(patches.Rectangle((x, y), 0.3, 0.5))
How can I modify the code to draw a rectangle just from coordinates like (x1,y1), (x2,y2), (x3,y3), (x4,y4)?
I assume that the coordinates are ordered in the following way:
top_left = [2, 2]
bottom_left = [2, 1]
top_right = [4, 2]
bottom_right = [4, 1]
So you can easily calculate the width and height and pass them to patches:
w = top_right[0] - top_left[0]
h = top_left[1] - bottom_left[1]
NOTE
If they are not ordered, the logic is simple: find two points whose x positions are identical and take the absolute difference of their y values to obtain the height (and symmetrically, two points sharing a y position give the width); see the sketch below.
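As a sketch, the order-independent version can simply take the spread of the coordinates, which is equivalent for an axis-aligned rectangle (the corner values here are illustrative):
corners = [(2, 2), (4, 1), (2, 1), (4, 2)]  # any order
xs = [p[0] for p in corners]
ys = [p[1] for p in corners]
w = max(xs) - min(xs)  # width: spread of the x positions
h = max(ys) - min(ys)  # height: spread of the y positions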
The selected answer still just calculates the length and width (and ignores any angle if one was desired). It could be made to work by calculating the angle and adding that too, but it's still hacking around your intention if you've already calculated all of the vertices.
Another option you have is to just use the patches.Polygon class.
points = [(x1, y1), (x2, y2), (x3, y3), (x4, y4)]
rect = patches.Polygon(points, linewidth=1, edgecolor='r', facecolor='none')
ax.add_patch(rect)
will end up drawing a rectangle if that's what those points specify. Note that the order of the points matters, but that isn't a big deal. Here is an image of where I just did this: the green + marks are my calculated points, and the red rectangles are my polygons.
sample of where I did this

Aruco Marker World Coordinates

I've been working with Python's OpenCV library, using ArUco for object tracking.
The goal is to get the x/y/z coordinates at the center of the ArUco marker, and the angle in relation to the calibrated camera.
I am able to display axes on the aruco marker with the code I have so far, but cannot find how to get x/y/z coordinates from the rotation and translation vectors (if that's even the right way to go about it).
This is the line of code which defines the rotation/translation vectors:
rvec, tvec, _ = aruco.estimatePoseSingleMarkers(corners, markerLength, camera_matrix, dist_coeffs) # For a single marker
Any ideas on how to get angle/marker position in the camera world?
Thanks!
After some tribulation, I found that the x and y coordinates of the marker center can be determined by averaging its four corners:
x = (corners[i-1][0][0][0] + corners[i-1][0][1][0] + corners[i-1][0][2][0] + corners[i-1][0][3][0]) / 4
y = (corners[i-1][0][0][1] + corners[i-1][0][1][1] + corners[i-1][0][2][1] + corners[i-1][0][3][1]) / 4
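Since corners[i-1][0] is a 4x2 array, the same averages can be written more compactly with NumPy (a sketch under that assumption):
x, y = corners[i-1][0].mean(axis=0)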
And the angle relative to the camera can be determined from the Rodrigues transform of the rotation vector, which produces a 3x3 rotation matrix:
rotM, _ = cv2.Rodrigues(rvec[i-1])
Finally, yaw, pitch, and roll can be obtained from the RQ decomposition of the rotation matrix:
ypr = cv2.RQDecomp3x3(rotM)[0]
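For reference, cv2.RQDecomp3x3 returns a 6-tuple, and only its first element holds the three Euler angles (in degrees). A sketch of unpacking it; note that mapping the three angles to yaw/pitch/roll depends on your axis convention:
angles, mtxR, mtxQ, Qx, Qy, Qz = cv2.RQDecomp3x3(rotM)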
As said by chungzuwalla, tvec represents the position of the marker center in the camera coordinate system, and it doesn't change as the marker rotates in place. If you want to know the location of the corners in the camera coordinate system, you need both rvec and tvec.
Here is a perfect explanation
Aruco markers with openCv, get the 3d corner coordinates?

Real distance between two points over image

I have two points in a 2D space:
(255.62746737327373, 257.61185343423432)
(247.86430198019812, 450.74937623762395)
Plotting them over a PNG with matplotlib, I get this result:
Now I would like to calculate the real distance (in meters) between these two points. I know that the real dimensions of that image are 125 meters x 86 meters.
How can I do this?
Let ImageDim be the size of the image in the x and y coordinates; in this case it would be ImageDim = (700, 500). Let StadionDim be the size of the stadium: StadionDim = (125, 86).
So the function to map a point in the image to the corresponding point in the stadium would be:
def calc(ImageDim, StadionDim, Point):
    return (Point[0] * StadionDim[0] / ImageDim[0],
            Point[1] * StadionDim[1] / ImageDim[1])
So now you get the two points in stadium coordinates and can calculate the distance:
from math import sqrt

Point_one = calc((700, 500), (125, 86), (257, 255))
Point_two = calc((700, 500), (125, 86), (450, 247))
Distance = sqrt((Point_one[0] - Point_two[0])**2 + (Point_one[1] - Point_two[1])**2)
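With these numbers, Point_one is roughly (45.89, 43.86) and Point_two roughly (80.36, 42.48), so Distance comes out to about 34.5 meters.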
I believe your input coordinates are in world space. But when you plot the image without any scaling, the plot coordinates are in image space, from (0, 0) in the bottom-left corner to (image_width, image_height) in the top-right corner. So to plot your points correctly on the image, you need to transform them into image space, and vice versa whenever you do any real-world calculation. I suppose you will not want to calculate, say, a soccer ball's speed in pixels per second, but in meters per second.
So why not draw the image in world coordinates and avoid the pain of converting between the two coordinate spaces? You can do this easily in matplotlib with the extent parameter.
extent : scalars (left, right, bottom, top), optional, default: None
The location, in data-coordinates, of the lower-left and upper-right corners. If None, the image is positioned such that the pixel centers fall on zero-based (row, column) indices.
For example this way:
imshow(image_data, origin='upper', extent=[0, field_width, 0, field_height])
Then you can plot your points on the image in world coordinates, and the distance calculation also becomes straightforward:
import math

dx = x2 - x1
dy = y2 - y1
distance = math.sqrt(dx*dx + dy*dy)
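Putting it together, a minimal sketch (the 125 m x 86 m field size comes from the question; the filename and point coordinates are illustrative):
import math
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

field_width, field_height = 125, 86  # real-world size in meters

image_data = mpimg.imread('stadium.png')  # hypothetical filename
plt.imshow(image_data, origin='upper', extent=[0, field_width, 0, field_height])

# two points already expressed in world (meter) coordinates
x1, y1 = 45.9, 43.9
x2, y2 = 80.4, 42.5
plt.plot([x1, x2], [y1, y2], 'ro-')
print(math.hypot(x2 - x1, y2 - y1), "meters")
plt.show()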

Efficient way to find the pixel coordinates in image for circle circumference intersection

I would like to efficiently find the coordinates of the line described by the intersection between the circumference of a circle and an image (the origin of the circle is outside the image). Right now I'm using a loop in Python to start at one edge of the image and move through it one step at a time. Each step moves a certain distance (say 0.01 inches); I calculate the angle needed to move that distance and then use polar geometry formulas to compute the next pixel coordinate. This all works just fine; however, it takes a long time, since I'm creating many of these lines through the image as the radius of the circle increases.
Is there a way to use a built-in function or an array-based formula so that I don't need so many steps in my algorithm? Basically, what is the most efficient way to accomplish this in Python 2?
Thanks,
rb3
import numpy as np

# circle parameters (center outside the image)
x0, y0 = -5, -5
R = 25

# image size
max_x, max_y = 100, 100

# sample points along the circle; use more if you have huge images
theta = np.linspace(0, 2*np.pi, 2048)

# the pixels that get hit, de-duplicated and clipped to the image
xs = (x0 + R*np.cos(theta)).astype(int)
ys = (y0 + R*np.sin(theta)).astype(int)
xy = list(set((x, y) for x, y in zip(xs, ys)
              if 0 <= x < max_x and 0 <= y < max_y))
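For example, the hit pixels can then be painted into a mask with plain indexing (a sketch, reusing xy and the image size from above):
import numpy as np

mask = np.zeros((max_y, max_x), dtype=np.uint8)
for x, y in xy:
    mask[y, x] = 255  # rows index y, columns index x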

python arrange images on canvas in a circle

I have a bunch of images (say 10) that I have generated, both as arrays and as PIL objects.
I need to arrange them in a circular fashion for display, and the layout should adjust itself to the resolution of the screen. Is there anything in Python that can do this?
I have tried using paste, but figuring out the canvas resolution and the positions to paste at is painful. I'm wondering if there is an easier solution?
We can say that points are arranged evenly in a circle when there is a constant angle theta between neighboring points. theta can be calculated as 2*pi radians divided by the number of points. The first point is at angle 0 with respect to the x axis, the second point at angle theta*1, the third point at angle theta*2, etc.
Using simple trigonometry, you can find the X and Y coordinates of any point that lies on the edge of a circle. For a point at angle ohm lying on a circle with radius r:
xFromCenter = r*cos(ohm)
yFromCenter = r*sin(ohm)
Using this math, it is possible to arrange your images evenly on a circle:
import math
from PIL import Image

def arrangeImagesInCircle(masterImage, imagesToArrange):
    imgWidth, imgHeight = masterImage.size

    # We want the circle to be as large as possible,
    # but it shouldn't extend all the way to the edge of the image.
    # If it did, the pasted images would partially fall over the edge,
    # so we reduce the diameter by the width/height of the widest/tallest image.
    diameter = min(
        imgWidth - max(img.size[0] for img in imagesToArrange),
        imgHeight - max(img.size[1] for img in imagesToArrange)
    )
    radius = diameter / 2

    circleCenterX = imgWidth // 2
    circleCenterY = imgHeight // 2
    theta = 2 * math.pi / len(imagesToArrange)
    for i, curImg in enumerate(imagesToArrange):
        angle = i * theta
        dx = int(radius * math.cos(angle))
        dy = int(radius * math.sin(angle))

        # dx and dy give the coordinates of where the center of each image goes,
        # so subtract half the width/height to find the top-left corner.
        pos = (
            circleCenterX + dx - curImg.size[0] // 2,
            circleCenterY + dy - curImg.size[1] // 2,
        )
        masterImage.paste(curImg, pos)

img = Image.new("RGB", (500, 500), (255, 255, 255))

# red.png, blue.png, green.png are simple 50x50 PNGs of solid color
imageFilenames = ["red.png", "blue.png", "green.png"] * 5
images = [Image.open(filename) for filename in imageFilenames]
arrangeImagesInCircle(img, images)
img.save("output.png")
Result:
