Is there any function in Python similar to MATLAB's ODE event location? For example, how can I write code similar to
function [value,isterminal,direction] = event(~,x)
value = x(1); % detect x(1)=0
isterminal = true; % Stop the integration
direction = -1; % positive direction=1, negative =-1, all=0
In other words, I want to detect the time at which a bouncing ball hits the ground and then bounces back; at that moment the initial conditions change.
Look at the documentation of solve_ivp. I think the bouncing ball is even an example there.
The equivalent is
def event(t,x): return x[0]
event.terminal=True
event.direction=-1
The difference shows up with multiple events: where MATLAB uses a single vector-valued event function, solve_ivp takes a list of scalar-valued event functions.
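For completeness, here is a rough sketch of how this fits together for the bouncing ball; the restitution factor 0.9 and the per-bounce loop are my own choices, not something prescribed by solve_ivp:

from scipy.integrate import solve_ivp

def ball(t, x):                  # x[0] = height, x[1] = velocity
    return [x[1], -9.81]

def event(t, x):                 # fires when the height crosses zero
    return x[0]
event.terminal = True            # stop the integration there
event.direction = -1             # only on the way down

t0, x0 = 0.0, [1.0, 0.0]
for _ in range(5):               # five bounces
    sol = solve_ivp(ball, (t0, t0 + 10), x0, events=event)
    t0 = sol.t_events[0][0]              # time of impact
    x0 = [0.0, -0.9 * sol.y[1, -1]]      # new initial conditions: reverse and damp the velocity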
I am currently trying to improve my Python skills by solving some example problems. One of them is about checking whether a grid point is located behind a barrier; the output is the number of grid points that are not behind any barrier.
The given hint is to calculate the angle of the direction to the point and compare it with the angles of the left and right sides of the barrier.
My code works and gives correct results; however, for large grids (1000x1000) it is very slow, so I was wondering whether there are ways to make it faster.
The part that takes the longest is checking whether a point is behind an obstacle, so I will only include that part of the code here. If you need the rest of the code as well, I am happy to include it :)
import math
import numpy as np

def CheckIfObstacle(point, Obstacle):
    for obs in Obstacle:
        # condition for a point being behind a barrier that is on positive side of Grid
        if obs[0] > 0 and point[0] >= obs[0] and (obs[1] >= point[1] >= obs[2]):
            return True
        # condition for a point being behind a barrier that is on negative side of Grid
        elif obs[0] < 0 and point[0] <= obs[0] and (obs[1] >= point[1] >= obs[2]):
            return True
    return False
Obstacle = []  # [(y1, angle_big1, angle_small1), (y2, angle_big2, angle_small2), ...]
for i in range(2, nObs + 2):
    ...  # some code that puts data in Obstacle

Grid = calcGrid(S)  # [(y1, angle1), (y2, angle2), ...]

count = 0
p = 0
for point in Grid:
    if p % 10000 == 0:
        print(round(p / len(Grid) * 100, 3), '%')
    p += 1
    if not CheckIfObstacle(point, Obstacle):
        count += 1
print(count)
This is the fastest of all of my versions. The 1000x1000 grid takes around 15 minutes, I think, but now I have an even bigger grid, and after running for an hour it was only at about 5%. If anyone has any ideas on how to improve the code, I would be happy about some feedback.
Thanks in advance!
I suggest that you split the computation into 16 different programs that run separately if you are using cloud software. If not, have the smaller pieces run consecutively. That way you don't have one massive program that has to run all at once.
Using numba, you can compile CheckIfObstacle and make it very fast and memory efficient. In my case I needed a 49000x12000 grid, so using numpy together with numba was a lifesaver for me, and you can find speed comparisons of numba against the best you can do with plain numpy.
Or you can use Cython, which will be a bit more complicated, and parallelism is more difficult than with numba.
Add the numba jit decorator at the top of the function, including a proper type specification. This is just an example and does not store the result of the calculation, but it is a good idea to add numba to your list of optimizations:
import numba as nb

# note: early exits from an nb.prange loop are not supported with parallel=True,
# so this sketch compiles a plain sequential loop instead
@nb.njit((nb.float64[:], nb.float64[:, :]))
def CheckIfObstacle(point, Obstacle):
    for i in range(Obstacle.shape[0]):
        obs = Obstacle[i]
        # condition for a point being behind a barrier that is on positive side of Grid
        if obs[0] > 0 and point[0] >= obs[0] and (obs[1] >= point[1] >= obs[2]):
            return True
        # condition for a point being behind a barrier that is on negative side of Grid
        elif obs[0] < 0 and point[0] <= obs[0] and (obs[1] >= point[1] >= obs[2]):
            return True
    return False
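A possible way to call the compiled function over the whole grid, assuming Grid and Obstacle are first converted to float64 arrays with the shapes the signature expects (the conversion and the column layout are assumptions on my part):

import numpy as np

Grid_arr = np.asarray(Grid, dtype=np.float64)          # shape (n_points, 2)
Obstacle_arr = np.asarray(Obstacle, dtype=np.float64)  # shape (n_obstacles, 3)

count = 0
for k in range(Grid_arr.shape[0]):
    if not CheckIfObstacle(Grid_arr[k], Obstacle_arr):
        count += 1
print(count)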
I had a similar problem; for performance comparisons and an implementation, look at "Why are np.hypot and np.subtract.outer very fast?".
I'm trying to solve a dynamic food web with JiTCODE. One aspect of the network is that populations that fall below a threshold are set to zero, so I end up with a non-differentiable equation. Is there a way to implement that in JiTCODE?
A similar problem is a Heaviside dependency within the network.
Example code:
import numpy as np
from jitcode import jitcode, y, t

def f():
    for i in range(N):
        if i < 5:
            # if y(N-1) > y(N-2):  # Heaviside; how to write this if-statement?
            #     yield (y(i)*y(N-2))**(0.02)
            # else:
            yield (y(i)*y(N-1))**(0.02)
        else:
            # if y(i) > thr:
            #     yield y(i)**(0.2)  # ?? how to set the population to 0 ??
            # else:
            yield y(i)**(0.3)

N = 10
thr = 0.0001
initial_value = np.zeros(N) + 1

ODE = jitcode(f)
ODE.set_integrator("vode", interpolate=True)
ODE.set_initial_value(initial_value, 0.0)
Python conditionals will be evaluated during the code-generation and not during the simulation (using the generated code). Therefore you cannot use them here. Instead you need to use special conditional objects that provide a reasonably smooth approximation of a step function (or build such a thing yourself):
def f():
    for i in range(N):
        if i < 5:
            yield ( y(i)*conditional(y(N-1),y(N-2),y(N-2),y(N-1)) )**0.2
        else:
            yield y(i)**conditional(y(i),thr,0.2,0.3)
For example, conditional(y(i),thr,0.2,0.3) will be evaluated (at simulation time) as 0.2 if y(i) > thr and as 0.3 otherwise.
how to set the population to 0 ??
You cannot do such a discontinuous jump within JiTCODE or the framework of differential equations in general. Usually, you would use a sharp population decline to simulate this, possibly introducing a delay (and thus JiTCDDE). If you really need this, you can either:
Detect threshold crossings after each integration step and reinitialise the integrator with the respective initial conditions. If you just want to fully kill populations that went below a reproductive threshold, this seems to be a valid solution (see the sketch below).
Implement a binary-switch dynamical variable.
Also see this GitHub issue.
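A minimal sketch of the first option (my own illustration; it assumes the integrator can simply be restarted via set_initial_value after modifying the state, and it reuses f, thr and initial_value from above; the step size and end time are arbitrary):

import numpy as np

ODE = jitcode(f)
ODE.set_integrator("dopri5")
ODE.set_initial_value(initial_value, 0.0)

trajectory = []
for time in np.arange(0.1, 50.0, 0.1):
    state = ODE.integrate(time)
    if np.any((state > 0.0) & (state < thr)):
        state = np.where(state < thr, 0.0, state)  # fully kill sub-threshold populations
        ODE.set_initial_value(state, time)         # reinitialise from the modified state
    trajectory.append(state.copy())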
I have code that is simulating interactions between lots of particles. Using profiling, I've worked out that the function that's causing the most slowdown is a loop which iterates over all my particles and works out the time for the collision between each of them. This generates a symmetric matrix, which I then take the minimum value out of.
def find_next_collision(self, print_matrix=False):
    """
    Sets up a matrix of collision times.
    Returns the indices of the balls in self.list_of_balls that are due to
    collide next and the time to the next collision.
    """
    self.coll_time_matrix = np.zeros((np.size(self.list_of_balls), np.size(self.list_of_balls)))
    for i in range(np.size(self.list_of_balls)):
        for j in range(i+1):
            if j == i:
                self.coll_time_matrix[i][j] = np.inf
            else:
                self.coll_time_matrix[i][j] = self.list_of_balls[i].time_to_collision(self.list_of_balls[j])
    matrix = self.coll_time_matrix + self.coll_time_matrix.T
    self.coll_time_matrix = matrix
    ind = np.unravel_index(np.argmin(self.coll_time_matrix, axis=None), self.coll_time_matrix.shape)
    dt = self.coll_time_matrix[ind]
    if print_matrix:
        print(self.coll_time_matrix)
    return dt, ind
This code is a method inside a class that defines the positions of all of the particles. Each of these particles is an object stored in self.list_of_balls (which is a list). As you can see, I'm already only iterating over half of the matrix, but it's still quite a slow function. I've tried using numba, but this is one section of quite a large program, and I don't want to have to optimize every function with numba when this is the slow one.
Does anyone have any ideas for a more efficient way to write this function?
Thank you in advance!
As Raubsauger mentioned in their answer, evaluating ifs is slow:

for j in range(i+1):
    if (j==i):

You can get rid of this if by simply writing for j in range(i); that way j goes from 0 to i-1.
You should also try to avoid loops when possible. You can do this by expressing your problem in a vectorized way, and using numpy or scipy functions that leverage SIMD operations to speed up calculations. Here's a simplified example assuming the time_to_collision simply divides Euclidean distance by speed. If you store the coordinates and speeds of the balls in a numpy array instead of storing ball objects in a list, you could do:
from scipy.spatial.distance import pdist
rel_distances = pdist(ball_coordinates)
rel_speeds = pdist(ball_speeds)
time = rel_distances / rel_speeds
pdist documentation
Of course, this isn't going to work verbatim if your time_to_collision function is more complicated, but it should point you in the right direction.
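If you also need the index pair of the earliest collision, as the original function returns, one possible continuation of this sketch (my own addition) is to expand the condensed pdist output back into a square matrix:

import numpy as np
from scipy.spatial.distance import squareform

time_matrix = squareform(time)          # condensed vector -> symmetric (n, n) matrix
np.fill_diagonal(time_matrix, np.inf)   # a ball never collides with itself
ind = np.unravel_index(np.argmin(time_matrix), time_matrix.shape)
dt = time_matrix[ind]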
First Question: How many particles do you have?
If you have a lot of particles: one improvement would be
for i in range(np.size(self.list_of_balls)):
    for j in range(i):
        self.coll_time_matrix[i][j] = self.list_of_balls[i].time_to_collision(self.list_of_balls[j])
    self.coll_time_matrix[i][i] = np.inf
Frequently executed ifs slow everything down; avoid them in inner loops.
Second question: is it necessary to compute this every time? Wouldn't it be faster to compute the collision times once and then only refresh the rows and columns that were involved in a collision?
Edit:
The idea here is to initially calculate either the time left or (better) the timestamp for each collision, as you already do, as well as the collision order. But instead of throwing the calculated results away, you only update the values when needed. This way you need to calculate only about 2*n values per collision instead of n^2/2.
Sketch:
# init step, done once at the beginning, might need its own function
matrix = ...  # calculate the matrix as before; I assume that you use timestamps instead of time left
min_times = np.zeros(np.size(self.list_of_balls))
for i in range(np.size(self.list_of_balls)):
    min_times[i] = min(self.coll_time_matrix[i])
order_coll = np.argsort(min_times)
ind = order_coll[0]
dt = min_times[ind]
return dt, ind

# function step: if a collision happened, order_coll[0] and order_coll[1] hit each other
for balls in order_coll[0:2]:
    for i in range(np.size(self.list_of_balls)):
        self.coll_time_matrix[balls][i] = self.list_of_balls[balls].time_to_collision(self.list_of_balls[i])
        self.coll_time_matrix[i][balls] = self.coll_time_matrix[balls][i]
    self.coll_time_matrix[balls][balls] = np.inf
for i in range(np.size(self.list_of_balls)):
    min_times[i] = min(self.coll_time_matrix[i])
order_coll = np.argsort(min_times)
ind = order_coll[0]
dt = min_times[ind]
return dt, ind
If you store the time left in the matrix instead of timestamps, you have to subtract the elapsed time from the matrix after each step. You also need to store the matrix and (optionally) min_times and order_coll somewhere.
I am writing a simple program in Python that involves moving my mouse (I do this with PyUserInput).
However, the mouse can only be moved in integer steps (i.e. whole pixels).
So mouse.move(250.3, 300.2) won't work.
I call the move function about 30 times a second and move the mouse a few pixels each time. The speed varies from 0.5 to 2.5 px/call. Rounding gives me 1-3 (move only accepts ints), which does not really represent the speed.
I am looking for a solution (maybe a generator?) that takes my current speed (e.g. 0.7 px) and gives me back a pattern of 0s and 1s (like a PWM signal, e.g. 1,1,0,1,1,0,...) that yields 0.7 px on average.
However, this generator has to be adaptive because the speed is constantly changing.
I am quite new to Python and stuck on this last point: the variability of the generator function.
Here is what I have so far:
# for 0.75 px/call
def getPWM(n):
    nums = [1, 0, 1, 1]
    yield nums[n % 4]
What you need to do is keep track of the previous position and the desired current position, and hand out the rounded coordinate. You could track the previous position in a function but it's much easier to do it in a class.
class pwm:
    def __init__(self):
        self.desired_position = 0.0
        self.actual_position = 0

    def nextPWM(self, speed):
        self.desired_position += speed
        movement = round(self.desired_position - self.actual_position)
        self.actual_position += movement
        return movement
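A quick usage example (the speed value and the call count are arbitrary illustrations):

stepper = pwm()
moves = [stepper.nextPWM(0.7) for _ in range(10)]
print(moves)       # a 0/1 pattern whose running average tracks 0.7 px/call
print(sum(moves))  # 7 pixels moved over 10 calls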
I have to write a collide method inside a Rectangle class that takes another Rectangle object as a parameter and returns True if it collides with the rectangle performing the method and False if it doesn't. My solution was to use a for loop that iterates through every value of x and y in one rectangle to see if it falls within the other, but I suspect there might be more efficient or elegant ways to do it. This is the method (I think all the names are pretty self explanatory, just ask if anything isn't clear):
def collide(self, target):
    result = False
    for x in range(self.x, self.x + self.width):
        if x in range(target.get_x(), target.get_x() + target.get_width()):
            result = True
    for y in range(self.y, self.y + self.height):
        if y in range(target.get_y(), target.get_y() + target.get_height()):
            result = True
    return result
Thanks in advance!
The problem of collision detection is a well-known one, so I thought rather than speculate I might search for a working algorithm using a well-known search engine. It turns out that good literature on rectangle overlap is less easy to come by than you might think. Before we move on to that, perhaps I can comment on your use of constructs like
if x in range(target.get_x(),target.get_x()+target.get_width()):
It is to Python's credit that such an obvious expression of your idea actually succeeds as intended. What you may not realize is that (in Python 2, anyway) each use of range() creates a list and the in test then scans it (in Python 3 it creates a range object instead, which can test integer membership cheaply, but you are still constructing a new object on every iteration). What I suspect you may have meant is
if target.get_x() <= x < target.get_x()+target.get_width():
(I am using half-open interval testing to reflect your use of range().) This has the merit of replacing N equality comparisons with two chained comparisons. By a relatively simple mathematical operation (subtracting target.get_x() from each term in the comparison) we can transform this into
if 0 <= x-target.get_x() < target.get_width():
Do not overlook the value of eliminating such redundant method calls, though it's often simpler to save evaluated expressions by assignment for future reference.
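For instance, caching the accessor result once (a small illustrative tweak of the line above, not a full rewrite):

tx = target.get_x()
if tx <= x < tx + target.get_width():
    result = True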
Of course, after that scrutiny we have to look with renewed vigor at
for x in range(self.x,self.x+self.width):
This sets a lower and an upper bound on x, and for the rectangles not to collide the inner test has to be false for every one of those values of x. Delving beyond the code into the purpose of the algorithm, however, is worth doing, because any list creation the inner test does is now repeated many times over (by the width of the object, to be precise). I take the liberty of paraphrasing
for x in range(self.x, self.x + self.width):
    if x in range(target.get_x(), target.get_x() + target.get_width()):
        result = True
into pseudocode: "if any x between self.x and self.x+self.width lies between the target's x and the target's x+width, then the objects are colliding". In other words, whether two ranges overlap. But you sure are doing a lot of work to find that out.
Also, just because two objects collide in the x dimension doesn't mean they collide in space. In fact, if they do not also collide in the y dimension then the objects are disjoint, otherwise you would assess these rectangles as colliding:
+----+
| |
| |
+----+
+----+
| |
| |
+----+
So you want to know if they collide in BOTH dimensions, not just one. Ideally one would define a one-dimensional collision detection (which by now we just about have ...) and then apply it in both dimensions. I also hope that those accessor functions can be replaced by simple attribute access, and my code is from now on going to assume that's the case.
Having gone this far, it's probably time to take a quick look at the principles in this YouTube video, which makes the geometry relatively clear but doesn't express the formula at all well; it explains the principles well enough as long as you are using the same coordinate system. Basically two objects A and B overlap horizontally if A's left side is between B's left and right sides. They also overlap if B's left side is between A's left and right sides. Both conditions might be true, but in Python you should think about using the keyword or, which short-circuits and so avoids unnecessary comparisons.
So let's define a one-dimensional overlap function:
def oned_ol(aleft, aright, bleft, bright):
    return (bleft <= aleft < bright) or (aleft <= bleft < aright)
I'm going to cheat and use this for both dimensions, since the inside of my function doesn't know which dimension's data I am calling it with. If I am correct, the following formulation should do:
def rect_overlap(self, target):
    return oned_ol(self.x, self.x+self.width, target.x, target.x+target.width) \
       and oned_ol(self.y, self.y+self.height, target.y, target.y+target.height)
If you insist on using those accessor methods you will have to re-cast the code to include them. I've done sketchy testing on the 1-D overlap function, and none at all on rect_overlap, so please let me know - caveat lector. Two things emerge.
A superficial examination of code can lead to "optimization" of a hopelessly inefficient algorithm, so sometimes it's better to return to first principles and look more carefully at your algorithm.
If you use expressions as arguments to a function they are available by name inside the function body without the need to make an explicit assignment.
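Picking up the testing caveat above, here is a quick sanity check of rect_overlap, calling it as a plain function on throwaway objects (SimpleNamespace is just my stand-in for a rectangle class here):

from types import SimpleNamespace as Rect

a = Rect(x=0, y=0, width=5, height=3)
b = Rect(x=0, y=5, width=5, height=3)   # the two stacked rectangles from the sketch above
c = Rect(x=3, y=1, width=5, height=3)

print(rect_overlap(a, b))   # False: same x range, but no overlap in y
print(rect_overlap(a, c))   # True: overlap in both dimensions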
def collide(self, target):
    # self left of target?
    if self.x + self.width < target.x:
        return False
    # self right of target?
    if self.x > target.x + target.width:
        return False
    # self above target?
    if self.y + self.height < target.y:
        return False
    # self below target?
    if self.y > target.y + target.height:
        return False
    return True
Something like that (it depends on your coordinate system, i.e. whether y is positive up or down).
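For reference, here is a minimal, hypothetical Rectangle class wired up with this method (the class itself and the test values are my own, just to show the call):

class Rectangle:
    def __init__(self, x, y, width, height):
        self.x, self.y, self.width, self.height = x, y, width, height

    def collide(self, target):
        if self.x + self.width < target.x:      # self left of target
            return False
        if self.x > target.x + target.width:    # self right of target
            return False
        if self.y + self.height < target.y:     # self above target
            return False
        if self.y > target.y + target.height:   # self below target
            return False
        return True

a = Rectangle(0, 0, 4, 4)
print(a.collide(Rectangle(2, 2, 4, 4)))   # True: the rectangles overlap
print(a.collide(Rectangle(0, 6, 4, 4)))   # False: same x range, but disjoint in y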