An image contains two ellipses (for the simple case, assume both ellipses are identical). One ellipse is rotated and translated relative to the other. We only have the image of these ellipses. How can I estimate the rotation angle (the angle between the two ellipses)?
I will present some initial pre-processing steps, and then, using some of OpenCV's built-in methods, we can get what you are asking for.
If the image is RGB or RGBA:
Convert it to grayscale [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)].
Threshold the image to get a binary image with the 2 ellipses [cv2.threshold()].
Find contours in the binary image [cv2.findContours()].
Call cv2.fitLine() on each contour to get a line equation.
Apply the formula for the angle between 2 lines (see the sketch below).
For more operations on contours visit http://docs.opencv.org/trunk/dd/d49/tutorial_py_contour_features.html
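Putting the steps together, here is a minimal sketch; the file name 'ellipses.png' and the threshold value 127 are assumptions, so adjust them to your image.

import cv2
import numpy as np

img = cv2.imread('ellipses.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# [-2] keeps this working with both the OpenCV 3.x and 4.x return signatures
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
# Keep the two largest contours, assumed here to be the two ellipses
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:2]

angles = []
for cnt in contours:
    # fitLine returns a unit direction vector (vx, vy) and a point (x0, y0) on the line
    vx, vy, x0, y0 = cv2.fitLine(cnt, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    angles.append(np.degrees(np.arctan2(vy, vx)))

# Angle between the two fitted lines, folded into the range [0, 90] degrees
diff = abs(angles[0] - angles[1]) % 180
print(min(diff, 180 - diff))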
Off the top of my head, I'd do this (considering the tags, I assume you're using OpenCV):
1. Use the "findContours" command to get the boundary pixels of each ellipse separately.
2. For each ellipse, calculate the distance between every pair of boundary pixels (a double loop over all pixels in the boundary) using D = sqrt((y1-y2)^2 + (x1-x2)^2), and find the pair with the largest distance. These two points are the two ends of the major axis of the ellipse.
3. Using those two points, calculate the angle of the major axis with respect to the x-axis of the image:
angle = arctan((y2-y1)/(x2-x1))
4. Find the angle for the other ellipse and subtract the two angles to get the angle between them (a sketch follows below).
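A rough sketch of that idea, assuming cnt1 and cnt2 are the two contours returned by cv2.findContours() (those names are placeholders); note that the pairwise-distance step is O(n^2) in the number of boundary pixels.

import numpy as np

def major_axis_angle(cnt):
    pts = cnt.reshape(-1, 2).astype(float)
    # Pairwise squared distances between all boundary pixels
    d = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    (x1, y1), (x2, y2) = pts[i], pts[j]
    # Angle of the major axis with respect to the image x-axis
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))

rotation = abs(major_axis_angle(cnt1) - major_axis_angle(cnt2)) % 180
print(min(rotation, 180 - rotation))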
I have a list of (x, y) points that constitute several circles with different centers; they all have the same diameter (which is known).
I need to detect the total number of circles (it is not necessary to determine their parameters). Is there a simple way to do that in Python (preferably without OpenCV)?
If all circles have the same size and they do not intersect, you can just scan the picture line by line, pixel by pixel.
When you meet a pixel of the circle color, apply a flood-fill algorithm from that point and mark all connected pixels of the same color with the same integer label (1 for the first circle, and so on).
After that, the last label value is the number of objects.
You can also use a connected-component labelling algorithm; a sketch follows below.
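A minimal sketch of the flood-fill labelling, assuming img is a 2-D boolean NumPy array in which True marks circle pixels (that name and representation are assumptions):

import numpy as np
from collections import deque

def count_circles(img):
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(img)):
        if labels[y, x]:
            continue                      # pixel already belongs to a counted circle
        current += 1                      # start a new label
        labels[y, x] = current
        queue = deque([(y, x)])
        while queue:                      # 4-connected flood fill
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and img[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return current                        # number of connected circles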
I have a contour image and I want to obtain a simpler contour. For example, if there are some little curves, such as an animal's wool or a man's beard, I would like to replace them with a simple line. For instance, from the first picture I want to get the second. I use cv2 for it.
OpenCV gives you very basic control with the ContourApproximationModes enum that you pass to the findContours() function:
CHAIN_APPROX_NONE
stores absolutely all the contour points. That is, any 2 subsequent points (x1,y1) and (x2,y2) of the contour will be either horizontal, vertical or diagonal neighbors, that is, max(abs(x1-x2),abs(y2-y1))==1.
CHAIN_APPROX_SIMPLE
compresses horizontal, vertical, and diagonal segments and leaves only their end points.
For example, an up-right rectangular contour is encoded with 4 points.
CHAIN_APPROX_TC89_L1
applies one of the flavors of the Teh-Chin chain approximation algorithm [241]
CHAIN_APPROX_TC89_KCOS
applies one of the flavors of the Teh-Chin chain approximation algorithm [241]
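As an illustration of how the first two flags affect the result (the file name 'shape.png' and the threshold value are assumptions):

import cv2

img = cv2.imread('shape.png', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Keep every boundary pixel
full = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]
# Compress straight horizontal/vertical/diagonal runs to their end points
simple = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

print(len(full[0]), 'points with CHAIN_APPROX_NONE')
print(len(simple[0]), 'points with CHAIN_APPROX_SIMPLE')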
I have an image in which I have two sets of coordinates between which I have drawn lines.
# Assumed imports for this snippet
import cv2 as cv
import numpy as np
import requests
from io import BytesIO
from PIL import Image

# Get image
im_res = requests.get(image_url)
img = Image.open(BytesIO(im_res.content))
img = np.array(img)  # copy into a writable array so cv.line can draw on it
# Draw first line (A to B)
lineThickness = 3
cv.line(img, (ax, ay), (bx, by), (0, 255, 0), lineThickness)
# Draw second line (C to D)
cv.line(img, (cx, cy), (dx, dy), (0, 255, 0), lineThickness)
cv.imshow("Image", img)
cv.waitKey(0)
cv.destroyAllWindows()
The coordinates are A, B, C & D. I know the distance from C to D. However, the distance from A to B is unknown. What is the best way to calculate this in OpenCV?
Is there an OpenCV-specific function or method to do this, especially since the distance we are talking about is in pixels? I am sorry if this question is foolish; I really don't want to end up getting wrong values due to a lack of understanding of this topic.
I saw some references to cv2.norm() and cv2.magnitude() as solutions to this problem. However, I didn't quite understand how to choose between them for my situation, keeping in mind that in this case the distance is within an image/photo.
Compute the Euclidean distance from C to D and take the ratio of the known measurement to it:
ratio = known / euclidean
Then find the Euclidean distance between A & B and use the ratio found earlier to convert it to the actual distance:
distance = euclidean * ratio
where euclidean = sqrt((x2-x1)**2 + (y2-y1)**2).
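A small sketch of that calculation, assuming (ax, ay) through (dx, dy) are the pixel coordinates used to draw the lines and known is the real-world C-D length (all assumed to be defined already):

import numpy as np

cd_pixels = np.linalg.norm(np.array([cx, cy]) - np.array([dx, dy]))
ab_pixels = np.linalg.norm(np.array([ax, ay]) - np.array([bx, by]))

ratio = known / cd_pixels        # real-world units per pixel
distance_ab = ab_pixels * ratio  # estimated real-world A-B distance
print(distance_ab)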
Okay, so I have 2 images, and I want to compare them:
from PIL import Image, ImageChops

im1 = Image.open('im1.png')
im2 = Image.open('im2.png')

def compare(im1, im2):
    # blah blah blah
Basically, the 2 images are practically the same, but one is larger and the other is smaller, so one has more pixels and the other has fewer pixels. I want a function that compares the 2 images and, for example, expresses the difference as a number. If the number is small, I know the difference is almost non-existent, but if the number is large, they are different.
Or suggest any other function that compares images. If you want, use these 2 images, which I have used, so the result will be the same. Thanks.
You could just subtract the values of the images after resizing OR cropping them to the same shape:
import numpy as np

# Bring both images to the same size, then convert to float arrays
img1 = np.asarray(im1.resize((200, 100)), dtype=float)
img2 = np.asarray(im2.resize((200, 100)), dtype=float)
# Calculate the absolute difference on each channel separately
dif = np.fabs(np.subtract(img2, img1))
If you want to see the difference visually, you could create a heatmap of the difference between the two images.
import matplotlib.pyplot as plt

# Collapse the per-channel difference to a single channel and show it
heat = dif.mean(axis=-1)
imgplot = plt.imshow(heat)
# Choose a color palette
imgplot.set_cmap('jet')
plt.axis('off')
plt.show()
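If you need the difference as a single number, one possible choice (an assumption on my part, not the only option) is the mean absolute difference of the array computed above:

score = dif.mean()
print(score)  # close to 0 means nearly identical images; larger means more different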
I'll use this picture as an example
I need to extract the RGB values and compare them with a set of known color values to figure out which color it is, without hard-coding it.
For example, if I get (4, 5, 0), I determine that this color = red. I don't know if those are the real values of red, but it's just an example.
How can I extract the RGB values from inside the red box, and how can I search for the color that corresponds to those values?
This is what I tried:
img = Image('car.png')
pixel = img.getPixel(120, 150)
print(pixel)
This retrieves the RGB value at that coordinate, but I need an average over that whole box.
Please explain the solution, thanks.
Here's an idea of what you should do:
width = XX   # image width in pixels
height = YY  # image height in pixels
# Crop the central box: NumPy indexing is img[y1:y2, x1:x2]
frame = img[height//4:(height//4 + height//2), width//4:(width//4 + width//2)]
And then,
r = np.array(frame[:,:,0])
avg_r = np.average(r)
Repeat for G and B.
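Or, to average all channels in one go (the channel order depends on how the image was loaded, e.g. RGB for PIL vs. BGR for OpenCV):

# Mean value of each color channel over the cropped box
avg_per_channel = frame.mean(axis=(0, 1))
print(avg_per_channel)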