I've managed to create a bit of code that creates various discs and checks for collisions, but how would I go about doing it for ellipses instead?
from random import randint
from math import sqrt

height = 100
width = 100
circles = 15
radius = 4
xL = []
yL = []
count = 0

print("Placing ", circles, "circles on a", width,
      "x", height, "grid...")

for j in range(circles):
    x = randint(0, width-1)
    y = randint(0, height-1)
    if len(xL) > 0:
        for i in range(len(xL)):
            distance = sqrt((xL[i] - x)**2 + (yL[i] - y)**2)
            if distance < radius*2:
                print("Circle ", j, "collides with circle ", i)
                count += 1
    xL.append(x)
    yL.append(y)

print("Complete.", count, "collisions.")
Would I need to rewrite the code completely?
As a follow-up, I've been able to implement an algorithm that moves the discs ever so slightly and then accepts or rejects the move depending on whether they've come closer together. With ellipses it's a bit different, as I'm going to need to rotate them as well. How would I do that?
I need to draw slanted lines like this programmatically using opencv-python, and it has to be similar in terms of the slant angle and the distance between the lines:
If I use OpenCV's cv.line(), I need to supply the function with the line's start and end points.
Following this accepted StackOverflow answer, I think I will be able to find those two points, but first I need to work out the line equation itself.
So far I have measured the slant angle of the line with the measure tool in Adobe Illustrator (the graphic designer supplied the actual image as an .ai file); I got 67 degrees, and from that I solved for the gradient of the line. The problem is that I don't know how to get the horizontal spacing/distance between the lines, which I need in order to supply the line's start.X. I measured the distance between the lines in Illustrator, but how do I map that to OpenCV coordinates?
Overall, is my idea feasible? Or is there a better way to achieve this?
Update 1:
I managed to draw this experimental image:
And this is the code:
import math
import cv2
import numpy as np

def show_image_scaled(window_name, image, height, width):
    cv2.namedWindow(window_name, cv2.WINDOW_NORMAL)
    cv2.resizeWindow(window_name, width, height)
    cv2.imshow(window_name, image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

def slanted_lines_background():
    canvas = np.ones((200, 300)) * 255
    start_y = 0
    end_x = 0
    m = 2.35
    for x in range(0, canvas.shape[1], 10):
        start_x = x
        end_y = start_y + compute_length(m, start_x, start_y, end_x)
        cv2.line(canvas, (start_x, start_y), (end_x, end_y), (0, 0, 0), 2)
    show_image_scaled("Slant", canvas, 200, 300)

def compute_length(m, start_x, start_y, end_x=0):
    c = start_y - (m * start_x)
    length_square = (end_x - start_x)**2 + ((m * end_x) + c - start_y)**2
    length = math.sqrt(length_square)
    return int(length)
Still working on filling the left part of the rectangle.
This code "shades" every pixel in a given image to produce your hatched pattern. Don't worry about the math; it's mostly correct. I've checked the edge cases for thin and wide lines. The sampling isn't exactly correct, but nobody's going to notice because the imperfection amounts to small fractions of a pixel. I've used numba to make it fast.
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def hatch(im, angle=45, stride=10, dc=None):
    stride = float(stride)
    if dc is None:
        dc = stride * 0.5
    assert 0 <= dc <= stride
    stride2 = stride / 2
    dc2 = dc / 2

    angle = angle / 180 * np.pi
    c = np.cos(angle)
    s = np.sin(angle)

    (height, width) = im.shape[:2]
    for y in prange(height):
        for x in range(width):
            # distance to origin along normal
            dist_origin = c*x - s*y
            # distance to center of nearest line
            dist_center = stride2 - abs((dist_origin % stride) - stride2)
            # distance to edge of nearest line
            dist_edge = dist_center - dc2

            # shade pixel, with antialiasing
            # use edge-0.5 to edge+0.5 as "gradient" <=> 1-sized pixel straddles edge
            # for thick/thin lines, needs hairline handling
            # thin line -> gradient hits far edge of line / pixel may span both edges of line
            # thick line -> gradient hits edge of adjacent line / pixel may span adjacent line
            if dist_edge > 0.5: # background
                val = 0
            else: # pixel starts covering line
                val = 0.5 - dist_edge
                if dc < 1: # thin line, clipped to line width
                    val = min(val, dc)
                elif stride - dc < 1: # thick line, little background
                    val = max(val, 1 - (stride - dc))
            im[y,x] = val
canvas = np.zeros((128, 512), 'f4')
hatch(canvas, angle=-23, stride=5, dc=2.5)
# mind the gamma mapping before imshow
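To actually look at the result, something along these lines should work (a minimal sketch; the 1/2.2 gamma and the white-background convention are assumptions, not part of the code above):
import cv2
# invert the coverage so lines are dark on a white background, then gamma-encode for display
display = (1.0 - np.clip(canvas, 0.0, 1.0)) ** (1 / 2.2)
cv2.imshow("hatch", display)
cv2.waitKey(0)
cv2.destroyAllWindows()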
I (will) have a list of coordinates; using Python's Pillow module, I want to save a series of (cropped) smaller images to disk. Currently, I am using a for loop to determine one coordinate at a time, then crop/save the image before proceeding to the next coordinate.
Is there a way to divide this job up so that multiple images can be cropped/saved simultaneously? I understand that this would take up more RAM, but it would decrease the overall processing time.
I'm sure this is possible, but I'm not sure it is simple. I've heard terms like 'vectorization' and 'multi-threading' that sound vaguely appropriate to this situation, but these topics extend beyond my experience.
I've attached the code for reference. However, I'm simply trying to solicit recommended strategies (i.e. what techniques should I learn about to better tailor my approach, take multiple crops at once, etc.?).
from PIL import Image

def parse_image(source, square_size, count, captures, offset=0, offset_type=0, print_coords=False):
    """
    Starts at the top-left corner of the image. Iterates through the image by square_size
    (width = height) across x values and, after exhausting the row, begins the next row lower
    by square_size. The offset parameter is available so that, with multiple function calls,
    overlapping images can be generated.
    """
    src = Image.open(source)
    dimensions = src.size
    max_down = int(src.height/square_size) * square_size + square_size
    max_right = int(src.width/square_size) * square_size + square_size

    if offset_type == 1:
        tl_x = 0 + offset
        tl_y = 0
        br_x = square_size + offset
        br_y = square_size
        for y in range(square_size, max_down, square_size):
            for x in range(square_size + offset, max_right - offset, square_size):
                if (tl_x, tl_y) not in captures:
                    sample = src.crop((tl_x, tl_y, br_x, br_y))
                    sample.save(f"{source[:-4]}_sample_{count}_x{tl_x}_y{tl_y}.jpg")
                    captures.append((tl_x, tl_y))
                    if print_coords == True:
                        print(f"image {count}: top-left (x,y): {(tl_x,tl_y)}, bottom-right (x,y): {(br_x,br_y)}")
                    tl_x = x
                    br_x = x + square_size
                    count += 1
                else:
                    continue
            tl_x = 0 + offset
            br_x = square_size + offset
            tl_y = y
            br_y = y + square_size
    else:
        tl_x = 0
        tl_y = 0 + offset
        br_x = square_size
        br_y = square_size + offset
        for y in range(square_size + offset, max_down - offset, square_size):
            for x in range(square_size, max_right, square_size):
                if (tl_x, tl_y) not in captures:
                    sample = src.crop((tl_x, tl_y, br_x, br_y))
                    sample.save(f"{source[:-4]}_sample_{count}_x{tl_x}_y{tl_y}.jpg")
                    captures.append((tl_x, tl_y))
                    if print_coords == True:
                        print(f"image {count}: top-left (x,y): {(tl_x,tl_y)}, bottom-right (x,y): {(br_x,br_y)}")
                    tl_x = x
                    br_x = x + square_size
                    count += 1
                else:
                    continue
            tl_x = 0
            br_x = square_size
            tl_y = y + offset
            br_y = y + square_size + offset
    return count
What you want to achieve here is a higher degree of parallelism. The first thing to do is to understand what the minimum unit of work is, and from that, think about how to better distribute it.
The first thing to notice is that there are two behaviours: one if you have offset_type 0 and another if you have offset_type 1. Split them into two different functions.
The second thing is: given an image, you're taking crops of a given size at a given offset (x, y) over the whole image. You could, for instance, simplify this to a function that takes one crop of the image at a given offset (x, y). Then you could call this function for all the x and y of the image in parallel. That's pretty much what most image processing frameworks try to achieve, even more so the ones that run code on the GPU: small blocks of code that operate locally on the image.
So let's say your image has width=100, height=100, and you're trying to make crops of w=10, h=10. Given the simple function I described, which I will call crop(img, x, y, crop_size_x, crop_size_y), all you have to do is create the image and collect the crops:
img = Image.open(source)
crop_size_x = 10
crop_size_y = 10
crops = [crop(img, x, y, crop_size_x, crop_size_y)
         for x in range(0, img.width, crop_size_x)
         for y in range(0, img.height, crop_size_y)]
Later on, you can replace the list comprehension with the multiprocessing library to actually spawn many processes and get real parallelism, or even write such code inside a GPU kernel/shader and use the GPU's parallelism to achieve high performance.
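As a rough sketch of that last step (the crop_and_save helper, the tile size, and the file name "image.jpg" are just assumptions for illustration, not the asker's code):
from concurrent.futures import ProcessPoolExecutor
from itertools import product
from PIL import Image

def crop_and_save(args):
    # hypothetical helper: crop one tile and write it to disk
    source, x, y, w, h = args
    img = Image.open(source)
    tile = img.crop((x, y, x + w, y + h))
    tile.save(f"{source[:-4]}_tile_x{x}_y{y}.jpg")

if __name__ == "__main__":
    source = "image.jpg"   # assumed input file
    w = h = 10
    img = Image.open(source)
    coords = product(range(0, img.width, w), range(0, img.height, h))
    tasks = [(source, x, y, w, h) for x, y in coords]
    # each worker process opens the file itself, so nothing large is pickled between processes
    with ProcessPoolExecutor() as pool:
        list(pool.map(crop_and_save, tasks))  # consume the iterator so errors surface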
In a 2D plane, I have a line segment (P0 and P1) and a triangle, defined by three points (t0, t1 and t2).
My goal is to test, as efficiently as possible (in terms of computational time), whether the line touches, cuts through, or overlaps with one of the edges of the triangle.
Everything is in 2D! I hope that someone can help me write this in Python.
Thanks everyone for the help!
You can take inspiration from the Cohen-Sutherland line clipping algorithm (https://en.wikipedia.org/wiki/Cohen%E2%80%93Sutherland_algorithm), which handles the intersections based on a "region code". In the case of a triangle, there are 7 regions to consider (hence 21 cases, but with much symmetry), and it takes three tests per segment endpoint to perform the classification.
For some combinations of the region codes, the conclusion is immediate; for others, you will need an extra test to tell whether the supporting line crosses the triangle.
You can (and should) ease the computation by means of a change of coordinates: using the affine frame defined by the three triangle vertices, the equations of the sides simplify to X=0, Y=0 and X+Y=1.
Depending on the exact distribution of the triangles and segments, a bounding-box pre-test may or may not be useful.
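A minimal sketch of the endpoint classification described above (my own illustration; it assumes the points are given as numpy arrays):
import numpy as np

def region_code(p, t0, t1, t2):
    # Express p in the affine frame of the triangle: p = t0 + X*(t1 - t0) + Y*(t2 - t0)
    M = np.column_stack((t1 - t0, t2 - t0))
    X, Y = np.linalg.solve(M, p - t0)
    # One bit per side the point lies strictly outside of (X < 0, Y < 0, X + Y > 1)
    code = 0
    if X < 0:
        code |= 1
    if Y < 0:
        code |= 2
    if X + Y > 1:
        code |= 4
    return code

# If both endpoints have code 0 they are inside the triangle; if the two codes share a
# set bit, the segment lies entirely outside one side and cannot intersect; otherwise
# an extra test on the supporting line is needed, as described above.
t0, t1, t2 = np.array([0., 0.]), np.array([4., 0.]), np.array([0., 4.])
print(region_code(np.array([1., 1.]), t0, t1, t2))   # 0 -> inside
print(region_code(np.array([5., 5.]), t0, t1, t2))   # 4 -> beyond the side X+Y=1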
Luckily for me, the line segment can never start and end inside the triangle; it will always intersect. Below is my code in Python. It is based on Kevin's idea that you just test the line segment against each triangle edge one at a time and see if they intersect. Since it is a line and a triangle, the check runs 3 times. Does anyone see an issue? (implemented in Python):
# Check to see if two line segments intersect
#   return 1 if they intersect
#   return 0 if they do not intersect
def line_intersection_test(p0_x, p0_y, p1_x, p1_y, p2_x, p2_y, p3_x, p3_y):
    s1_x = float(p1_x - p0_x)
    s1_y = float(p1_y - p0_y)
    s2_x = float(p3_x - p2_x)
    s2_y = float(p3_y - p2_y)
    outcome = 0
    denom = -s2_x * s1_y + s1_x * s2_y
    if denom == 0:
        # segments are parallel (or collinear); treat as no intersection
        return outcome
    s = float(-s1_y * (p0_x - p2_x) + s1_x * (p0_y - p2_y)) / denom
    t = float( s2_x * (p0_y - p2_y) - s2_y * (p0_x - p2_x)) / denom
    if s >= 0 and s <= 1 and t >= 0 and t <= 1:
        outcome = 1
    return outcome

# Feed the line segment and the three sides of the triangle separately to
# line_intersection_test to see if any of them intersect
def triangle_intersection_test(vecX1, vecY1, vecX2, vecY2, triX1, triY1, triX2, triY2, triX3, triY3):
    side_1_test = line_intersection_test(vecX1, vecY1, vecX2, vecY2, triX1, triY1, triX2, triY2)
    side_2_test = line_intersection_test(vecX1, vecY1, vecX2, vecY2, triX2, triY2, triX3, triY3)
    side_3_test = line_intersection_test(vecX1, vecY1, vecX2, vecY2, triX1, triY1, triX3, triY3)
    result = side_1_test + side_2_test + side_3_test
    if result > 0:
        outcome = "motion detected"
    else:
        outcome = "motion not detected"
    return outcome
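A quick sanity check with made-up coordinates: the segment from (0,0) to (5,5) crosses the triangle side between (4,0) and (0,4) at (2,2), so this should print "motion detected":
# segment (0,0)-(5,5) against the triangle (4,0), (0,4), (4,4)
print(triangle_intersection_test(0, 0, 5, 5, 4, 0, 0, 4, 4, 4))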
Using Python 3, I want to find out how many complete and how many partial square tiles (side length 1 unit) one would need to fill a circular area with a given radius r. Multiple partial tiles cannot be summed up to form a complete tile, and the remainder of a partial tile may not be reused anywhere else.
The center of the circle will always be on a boundary between four tiles, so we can calculate the need for 1/4 of the circle and multiply it with 4.
So if e.g. r=1, there would be 0 complete and 4 partial tiles.
For r=2 the result would be 4 complete and 12 partial tiles, and so on...
What approaches could I use? The code should be as short as possible.
I think the following should do the trick.
import math

# Input argument is the radius
circle_radius = 2.

# This is specified as 1, but doesn't have to be
tile_size = 1.

# Make a square box covering a quarter of the circle
tile_length_1d = int(math.ceil(circle_radius / tile_size))

# How many tiles do you need?
num_complete_tiles = 0
num_partial_tiles = 0

# Now loop over all tile_length_1d x tile_length_1d tiles and check if they
# are needed
for i in range(tile_length_1d):
    for j in range(tile_length_1d):
        # Does the near corner of the tile lie inside the circle?
        intersect_len = ((i * tile_size)**2 + (j * tile_size)**2)**0.5
        # Does the *far* corner lie inside too (i.e. is the whole tile in the circle)?
        far_intersect_len = (((i+1) * tile_size)**2 + ((j+1) * tile_size)**2)**0.5
        if intersect_len > circle_radius:
            # Don't need this tile, continue
            continue
        elif far_intersect_len > circle_radius:
            # Partial tile
            num_partial_tiles += 1
        else:
            # Keep tile. Maybe you want to store this or something
            # instead of just adding 1 to the count?
            num_complete_tiles += 1

# Multiply by 4 for symmetry
num_complete_tiles = num_complete_tiles * 4
num_partial_tiles = num_partial_tiles * 4

print("You need %d complete tiles" % num_complete_tiles)
print("You need %d partial tiles" % num_partial_tiles)
I'm currently trying to simulate many particles in a box bouncing around.
I've taken into account @kalhartt's suggestions, and this is the improved code to initialize the particles inside the box:
import numpy as np
import scipy.spatial.distance as d
import matplotlib.pyplot as plt
# 2D container parameters
# Actual container is 50x50 but chose 49x49 to account for particle radius.
limit_x = 20
limit_y = 20
#Number and radius of particles
number_of_particles = 350
radius = 1
def force_init(n):
    # equivalent to np.array(list(range(number_of_particles)))
    count = np.linspace(0, number_of_particles-1, number_of_particles)
    x = (count + 2) % (limit_x-1) + radius
    y = (count + 2) / (limit_x-1) + radius
    return np.column_stack((x, y))
position = force_init(number_of_particles)
velocity = np.random.randn(number_of_particles, 2)
The initialized positions look like this:
Once I have the particles initialized I'd like to update them at each time-step. The code for updating follows the previous code immediately and is as follows:
# Updating
while np.amax(abs(velocity)) > 0.01:
    # Assume that the velocity is slowly dying out
    position += velocity
    velocity *= 0.995

    # Get pair-wise distance matrix
    pair_dist = d.cdist(position, position)
    pair_d = pair_dist <= 4

    # If pair_dist[i,j] is <= 4 then the particles are too close, so treat as a collision
    for i in range(len(pair_d)):
        for j in range(i):
            # Only looking at upper triangular matrix (not inc. diagonal)
            if pair_d[i, j] == True:
                # If two particles are too close then swap velocities
                # It's a bad hack but it'll work for now.
                vel_1 = velocity[j][:]
                velocity[j] = velocity[i][:]*0.9
                velocity[i] = vel_1*0.9

    # Masks for particles beyond the boundary
    xmax = position[:, 0] > limit_x
    xmin = position[:, 0] < 0
    ymax = position[:, 1] > limit_y
    ymin = position[:, 1] < 0

    # flip velocity and assume that it loses 10% of its energy
    velocity[xmax | xmin, 0] *= -0.9
    velocity[ymax | ymin, 1] *= -0.9

    # Force maximum positions to be +/- 2*radius from the edge
    position[xmax, 0] = limit_x - 2*radius
    position[xmin, 0] = 2*radius
    position[ymax, 1] = limit_y - 2*radius
    position[ymin, 1] = 2*radius
After updating it and letting it run to completion I get this result:
This is infinitely better than before but there are still patches that are too close together - such as:
Too close together. I think the updating works... and thanks to @kalhartt my code is wayyyy better and faster (and I learnt some things about numpy... props @kalhartt), but I still don't know where it's screwing up. I've tried changing the order of the updates, with the pair-wise distance check going last or the position += velocity going last, but to no avail. I added the *0.9 to make the entire thing die down faster, and I tried it with 4 to make sure that 2*radius (=2) wasn't too tight a criterion... but nothing seems to work.
Any and all help would be appreciated.
There are just two typos standing in your way. First, for i in range(len(positions)/2): only iterates over half of your particles. This is why only half the particles stay within the x bounds (if you watch for large iterations it's more clear). Second, the second y condition should be a minimum (I assume): position[i][1] < 0. The following block bounds the particles correctly for me (I didn't test with the collision code, so there could be problems there).
for i in range(len(position)):
    if position[i][0] > limit_x or position[i][0] < 0:
        velocity[i][0] = -velocity[i][0]
    if position[i][1] > limit_y or position[i][1] < 0:
        velocity[i][1] = -velocity[i][1]
As an aside, try to leverage numpy to eliminate loops when possible. It is faster, more efficient, and in my opinion more readable. For example force_init would look like this:
def force_init(n):
    # equivalent to np.array(list(range(number_of_particles)))
    count = np.linspace(0, number_of_particles-1, number_of_particles)
    x = (count * 2) % limit_x + radius
    y = (count * 2) / limit_x + radius
    return np.column_stack((x, y))
And your boundary conditions would look like this:
while np.amax(abs(velocity)) > 0.01:
    position += velocity
    velocity *= 0.995
    # Masks for particles beyond the boundary
    xmax = position[:, 0] > limit_x
    xmin = position[:, 0] < 0
    ymax = position[:, 1] > limit_y
    ymin = position[:, 1] < 0
    # flip velocity
    velocity[xmax | xmin, 0] *= -1
    velocity[ymax | ymin, 1] *= -1
Final note: it is probably a good idea to hard clip position to the bounding box with something like position[xmax, 0] = limit_x; position[xmin, 0] = 0. There may be cases where the velocity is small and a particle outside the box is reflected but doesn't make it back inside on the next iteration, so it will just sit outside the box being reflected forever.
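For concreteness, that clip would look something like this, reusing the four masks from the loop above (the y clamps are added by analogy and are my assumption):
# hard clip so reflected particles are placed back on the boundary
position[xmax, 0] = limit_x
position[xmin, 0] = 0
position[ymax, 1] = limit_y
position[ymin, 1] = 0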
EDIT: Collision
Collision detection is a much harder problem, but let's see what we can do. Let's take a look at your current implementation.
pair_dist = d.cdist(position, position)
pair_d = pair_dist <= 4
for i in range(len(pair_d)):
    for j in range(i):
        # Only looking at upper triangular matrix (not inc. diagonal)
        if pair_d[i, j] == True:
            # If two particles are too close then swap velocities
            # It's a bad hack but it'll work for now.
            vel_1 = velocity[j][:]
            velocity[j] = velocity[i][:]*0.9
            velocity[i] = vel_1*0.9
Overall a very good approach: cdist will efficiently calculate the distance between sets of points, and you find which points collide with pair_d = pair_dist <= 4.
The nested for loops are the first problem. We need to iterate over the True values of pair_d where j > i. First, your code actually iterates over the lower triangular region by using for j in range(i), so that j < i; this is not particularly important here, as long as i, j pairs are not repeated. However, Numpy has two builtins we can use instead: np.triu lets us set all values below a diagonal to 0, and np.nonzero gives us the indices of the non-zero elements in a matrix. So this:
pair_dist = d.cdist(position, position)
pair_d = pair_dist <= 4
for i in range(len(pair_d)):
    for j in range(i+1, len(pair_d)):
        if pair_d[i, j]:
            ...
is equivalent to
pair_dist = d.cdist(position, position)
pair_d = np.triu(pair_dist <= 4, k=1) # k=1 to exclude the diagonal
for i, j in zip(*np.nonzero(pair_d)):
    ...
The second problem (as you noted) is that the velocities are just switched and scaled instead of reflected. What we really want to do is negate and scale the component of each particle's velocity along the axis that connects them. Note that to do this we will need the vector connecting them, position[j] - position[i], and the length of that vector (which we already calculated). So unfortunately part of the cdist calculation gets repeated. Let's stop using cdist and do it ourselves instead. The goal here is to make two arrays, diff and norm, where diff[i][j] is the vector pointing from particle i to j (so diff is a 3D array) and norm[i][j] is the distance between particles i and j. We can do this with numpy like so:
nop = number_of_particles
# Give pos a 3rd index so we can use np.repeat below
# equivalent to pos3d = np.array([ position ])
pos3d = position.reshape(1, nop, 2)

# 3D arrays with a repeated index so we can form combinations
# diff_i[i][j] = position[i] (for all j)
# diff_j[i][j] = position[j] (for all i)
diff_i = np.repeat(pos3d, nop, axis=1).reshape(nop, nop, 2)
diff_j = np.repeat(pos3d, nop, axis=0)

# diff[i][j] = vector pointing from position[i] to position[j]
diff = diff_j - diff_i

# norm[i][j] = sqrt( diff[i][j]**2 )
norm = np.linalg.norm(diff, axis=2)

# check for collisions and take the region above the diagonal
collided = np.triu(norm < radius, k=1)

for i, j in zip(*np.nonzero(collided)):
    # unit vector from i to j
    unit = diff[i][j] / norm[i][j]

    # flip the velocity component along the axis connecting the particles
    velocity[i] -= 1.9 * np.dot(unit, velocity[i]) * unit
    velocity[j] -= 1.9 * np.dot(unit, velocity[j]) * unit

    # push particle j to be radius units from i
    # This isn't particularly effective when 3+ points are close together
    position[j] += (radius - norm[i][j]) * unit
...
Since this post is long enough already, here is a gist of the code with my modifications.