Dealing with pixels, and the troubles with concentric circles, in Python

I'm having some trouble drawing perfect concentric circles, or rather getting perfect spacing between them. I'm using John Zelle's graphics library, but the problem I'm dealing with is more conceptual (and about graphics in general) than it is about the limitations of the library. When I draw a circle with a 200 pixel radius and try to create 50 circles within the main circle, the library doesn't take the outline of a circle into account, which means I don't get perfect partitions. The more circles I add, the further away I get from the perimeter of the main circle. The 50 circles are evenly spaced apart; the problem is they come up short of the main circle.
for i in range(1, numPartition + 1):  # numPartition is 50, for 50 circles
    cInsideRadius = mainCirRadius / (numPartition + 1) * i
    c = Circle(Point(x, y), cInsideRadius)  # (x, y) is the shared center; cInsideRadius is the radius of circle c
    c.draw(window)
Figured it out: it has to do with the partition sizes being cast as ints instead of floats.
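In Python 2, dividing two ints truncates the result, so each partition radius loses its fractional part. A minimal sketch of the fix, assuming that integer division was the culprit (variable names as in the snippet above; under Python 3, / already performs float division):

# cast to float before dividing so the partition size is not truncated
partitionSize = float(mainCirRadius) / (numPartition + 1)
cInsideRadius = partitionSize * i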


Related

How do I fit rectangles to an image in python and obtain their coordinates

I'm looking for a way to split a number of images into proper rectangles. These rectangles are ideally shaped such that each of them takes on the largest possible size without containing a lot of white.
So let's say that we have the following image
I would like to get an output such as this:
Note the overlapping rectangles, the hole and the non-axis-aligned rectangle; all of these are likely scenarios I have to deal with.
I'm aiming to get the coordinates describing the corner pieces of the rectangles so something like
[[(73,13),(269,13),(269,47),(73,47)],
[(73,13),(73,210),(109,210),(109,13)]
...]
In order to do this I have already looked at cv2.findContours, but I couldn't get it to work with overlapping rectangles (though I could use the hierarchy model to deal with holes, as those cause the contours to be merged into one).
Note that, although not shown, holes can be nested.
An algorithm that works roughly as follows should be able to give you the result you seek.
1. Get all the corner points in the image.
2. Randomly select 3 points to create a rectangle.
3. Count the ratio of yellow pixels within the rectangle; accept it if the ratio satisfies a threshold.
4. Repeat steps 2 and 3 until:
a) every single combination of points has been tried, or
b) all yellow pixels are accounted for, or
c) n iterations have been performed.
The difficult part of this algorithm lies in step 2, creating a rectangle from 3 points.
If all the rectangles were axis-aligned, you could simply take the minimum x and y as the top-left corner and the maximum x and y as the bottom-right corner of your new rectangle.
But since you have off-axis rectangles, you will need to check that the two vectors created from the 3 points are at a 90 degree angle to each other before generating the rectangle, as in the sketch below.
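Here is a rough sketch of that 90-degree check (my own illustration, not from the answer; it assumes the three points are (x, y) pairs and treats the second point as the shared corner):

import numpy as np

def rectangle_from_three_points(p1, p2, p3, tol=1e-6):
    # Treat p2 as the shared corner; the two edges are p2->p1 and p2->p3.
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    v1 = p1 - p2
    v2 = p3 - p2
    # 90 degree check: the dot product of the two edge vectors must be (near) zero
    if abs(np.dot(v1, v2)) > tol * np.linalg.norm(v1) * np.linalg.norm(v2):
        return None
    p4 = p1 + v2  # fourth corner, opposite the shared one
    return np.array([p1, p2, p3, p4])  # corners in traversal order

print(rectangle_from_three_points((73, 13), (269, 13), (269, 47)))  # recovers (73, 47) as the fourth corner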

Randomly placing N circles in rectangle without overlapping

I want to place N circles with a given, common radius inside a rectangle of a given size, such that the circles do not overlap, in Python. My current solutions are:
1) to create a set of every point in the space and remove from it the points that would cause an overlap before generating the next circle (but this is slow when the rectangle is big).
2) to draw the circle centers from a set of non-overlapping positions (e.g. every 2r + const) (but the positions are not random enough here).
Do you have other, more efficient ideas?
The most efficient packing in 2D is hexagonal packing, so you can just hard-code your program to produce that packing for the circles; a sketch of what that could look like is below.
Read more about it here: https://en.wikipedia.org/wiki/Circle_packing
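A minimal sketch of such a hard-coded hexagonal layout (a hypothetical helper, not from the answer; it assumes the rectangle spans (0, 0) to (width, height) and that r is the common radius):

import math

def hexagonal_centers(width, height, r, n):
    # Rows are r*sqrt(3) apart and every other row is shifted by r,
    # so neighbouring centers are exactly 2r apart (tangent, never overlapping).
    centers = []
    dy = r * math.sqrt(3)
    row = 0
    y = r
    while y <= height - r and len(centers) < n:
        x = r if row % 2 == 0 else 2 * r   # shift odd rows to interlock
        while x <= width - r and len(centers) < n:
            centers.append((x, y))
            x += 2 * r
        y += dy
        row += 1
    return centers

print(hexagonal_centers(100, 100, 10, 12))

If you want the result to look less regular, you could use a spacing of 2r + const instead (as in option 2 of the question) and jitter each center by up to const/2; the pairwise distances then stay at least 2r, so the circles still cannot overlap.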

Find the Area of a Strange Shape

I need to find the area of the shaded region using code but I have no idea how to write a program that can do this. Can someone help me?
You have four circles of radius r, situated in a square shape and tangent to each other. Then, you have a square connecting the centers of the four circles.
Since each side of the square is two radii (2r), the total area of the square is 4r**2.
We can find the area between the circles by subtracting the area of the parts of the circles that lie within the square. A quarter of each circle is inside the square. Since the area of a full circle is pi * r**2, the area of one quarter of a circle is 1/4 * pi * r**2. There are four of these inside the square, so we add them all up to find that the total area of the "parts of the circles" inside the square is pi * r**2.
Finally, we subtract that from the area of the square. Whatever's left must be the area of the space between the circles inside the square:
4*r**2 - pi*r**2 = (4 - pi) * r**2
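In code that formula is a one-liner; here is a small sketch (gap_area is just an illustrative name, r is the common radius):

import math

def gap_area(r):
    # square area (4*r**2) minus the four quarter-circles inside it (pi*r**2)
    return (4 - math.pi) * r**2

print(gap_area(1.0))  # about 0.858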
This is a mathematical question, not a programming one. Hopefully you can adapt this solution to whichever problem you're trying to solve; but if you want us to be more helpful or provide a solution more targeted to your particular problem, you're gonna have to provide some code or a more generalized description of what you want your code to do, in terms of inputs and outputs.

How to measure image coincidence in an optical rangefinder

I have a couple of USB webcams (fixed focal length) set up as a simple stereoscopic rangefinder, spaced N mm apart with each rotated by M degrees towards the centerline, and I've calibrated the cameras to ensure alignment.
When adjusting the angle, how would I measure the coincidence between the images (preferably in Python/PIL/OpenCV) to know when the cameras are focused on an object? Is it as simple as choosing a section of pixels in each image (A rows by B columns) and calculating the sum of the difference between the pixels?
The problem is that you cannot assume pixel-perfect alignment of the cameras.
So let's assume the x-axis is the parallax-shifted axis and the y-axis is aligned. You need to identify the image shift along the x-axis to detect parallax alignment, even when the cameras are mechanically aligned as well as possible. The result of an absolute difference is not guaranteed to reach its minimum exactly at alignment, so instead of subtracting individual pixels, subtract the average color of the area around each pixel, with a radius/size bigger than the alignment error along the y-axis. Let's call this radius (or size) r; this way the resulting difference should be minimal when aligned, as in the sketch below.
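A possible sketch of that blurred difference (my own illustration, assuming OpenCV and two equally sized grayscale images; cv2.blur serves as the neighbourhood average):

import cv2
import numpy as np

def blurred_difference(img_a, img_b, r):
    # average each pixel's (2r+1)x(2r+1) neighbourhood before differencing,
    # so a small residual misalignment along y does not dominate the result
    k = 2 * int(r) + 1
    a = cv2.blur(img_a, (k, k)).astype(np.float32)
    b = cv2.blur(img_b, (k, k)).astype(np.float32)
    return float(np.mean(np.abs(a - b)))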
Approximation search
You can even speed up the process by adapting r:
1. select a big r
2. scan the whole x-range with a step of, for example, 0.25*r
3. choose the x-position with the lowest difference (x0)
4. halve r
5. go to step 2 (but this time the scanned x-range is just <x0-2.0*r, x0+2.0*r>)
6. stop when r is smaller than a few pixels
This way you can search in O(log2(n)) instead of O(n)
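A rough sketch of that coarse-to-fine search (my own illustration; difference(x) stands for any callable returning the blurred image difference for a horizontal shift of x, e.g. built from blurred_difference above after shifting one image by x):

def approx_align(difference, x_min, x_max, r, min_r=2.0):
    # coarse-to-fine 1D search for the x-shift that minimizes difference(x)
    lo, hi = x_min, x_max
    x0 = lo
    while r > min_r:
        best = float('inf')
        x = lo
        while x <= hi:
            d = difference(x)
            if d < best:
                best, x0 = d, x
            x += 0.25 * r                         # step 2: scan with step 0.25*r
        r *= 0.5                                  # step 4: halve r
        lo, hi = x0 - 2.0 * r, x0 + 2.0 * r       # step 5: narrow the range around x0
    return x0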
Computer vision approach
This should be even faster:
1. detect points of interest in both images (specific changes in gradient, etc.)
2. cross-match the points of interest between the images
3. compute the average x-distance between the cross-matched points
4. change the parallax alignment by the found distance
5. go to step 1 until the x-distance is small enough
This way you can avoid scanning the whole x-range, because the alignment distance is obtained directly... You just need to convert it to an angle, or whatever you use to align the parallax. A sketch of steps 1-3 is below.
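A possible sketch of steps 1-3 using ORB keypoints in OpenCV (ORB is my own choice of interest-point detector for illustration; the answer does not prescribe a specific one):

import cv2
import numpy as np

def average_x_shift(img_left, img_right):
    # 1. detect points of interest in both (grayscale) images
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    if des1 is None or des2 is None:
        return None
    # 2. cross-match them (crossCheck keeps only mutually best matches)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if not matches:
        return None
    # 3. average x-distance between matched points (median is more robust to bad matches)
    shifts = [kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0] for m in matches]
    return float(np.median(shifts))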
[notes]
You do not need to do this on the whole image area; just select a few horizontal lines across the images and scan their nearby area.
There are also other ways to detect alignment. For example, at short distances the skew is a significant marker of alignment, so compare the height of an object on its left and right sides between the cameras... If they are nearly the same, you are aligned; if one is bigger/smaller, you are not aligned, and you know which way to turn...

Is there a way to remove points from a contour outside a given circle in Python OpenCV?

Let's say I have a contour which is meant to represent the shape of the hand. The issue is, the contour also contains other parts of the arm (i.e. wrist, forearm, upper arm, etc.). To find the position of the hand's center, I'm looking at the combinations (size 3) of the defect points of the convex hull, finding the center of the circle which is tangent to these 3 points, and averaging the most reasonable ones together to gain a rough understanding of where the hand's center is.
With this averaged center, I'd like to be able to remove points on my given contour which don't fall inside some radius that's likely to determine the width of the hand - in other words, cutoff points that don't fall inside this circle. I could simply iterate through each contour point and remove these points, but that would be horribly inefficient because of Python loops' speed. Is there a faster or more efficient way of doing this, perhaps using some inbuilt OpenCV functions or otherwise?
Thanks!
Interesting follow-up to your other question.
You can remove the unwanted points by boolean indexing:
import numpy as np
hand_contour = np.random.rand(60,2) # you can use np.squeeze on the data from opencv to get rid of that annoying singleton axis (60,1,2)->(60,2)
# You have found the center of the palm and a possible radius
center = np.array([.3, .1])
radius = .3
mask = (hand_contour[:,0] - center[0])**2 + (hand_contour[:,1] - center[1])**2 < radius**2
within_palm = hand_contour[mask,:] # Only selects those values within that circle.
You could also mask the unwanted values, with a masked_array, but if you're not interested in keeping the original data, the above method is the way to go.
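As a hypothetical follow-up (not part of the answer), the same boolean mask applies directly to a real OpenCV contour once the singleton axis is squeezed out; the blob, center and radius below are just stand-ins:

import cv2
import numpy as np

# synthetic binary blob standing in for the thresholded hand image
binary = np.zeros((200, 200), np.uint8)
cv2.circle(binary, (100, 100), 60, 255, -1)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 return signature
pts = np.squeeze(contours[0]).astype(np.float64)             # (N, 1, 2) -> (N, 2)

center = np.array([100.0, 80.0])    # assumed palm center
radius = 50.0                       # assumed palm radius
mask = np.sum((pts - center) ** 2, axis=1) < radius ** 2     # same inside-circle test as above
within_palm = pts[mask]
print(within_palm.shape)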
