I'm trying to solve a dynamic food web with JiTCODE. One aspect of the network is that populations which fall below a threshold are set to zero, so the right-hand side of my equation is not differentiable. Is there a way to implement that in JiTCODE?
A similar problem is a Heaviside dependency within the network.
Example code:
import numpy as np
from jitcode import jitcode, y, t

N = 10
thr = 0.0001

def f():
    for i in range(N):
        if i < 5:
            #if y(N-1) > y(N-2): # Heaviside; how to make the if-statement?
            #    yield (y(i)*y(N-2))**(0.02)
            #else:
            yield (y(i)*y(N-1))**(0.02)
        else:
            #if y(i) > thr:
            #    yield y(i)**(0.2) # ?? how to set the population to 0 ??
            #else:
            yield y(i)**(0.3)

initial_value = np.zeros(N) + 1
ODE = jitcode(f)
ODE.set_integrator("vode", interpolate=True)
ODE.set_initial_value(initial_value, 0.0)
Python conditionals are evaluated during code generation, not during the simulation (which runs the generated code); therefore you cannot use them here. Instead you need special conditional objects that provide a reasonably smooth approximation of a step function (or you can build such a thing yourself):
def f():
    for i in range(N):
        if i < 5:
            yield (y(i)*conditional(y(N-1), y(N-2), y(N-2), y(N-1)))**0.2
        else:
            yield y(i)**conditional(y(i), thr, 0.2, 0.3)
For example, conditional(y(i),thr,0.2,0.3) can be treated as evaluating to 0.2 if y(i)>thr and to 0.3 otherwise (at simulation time, up to the smoothness of the approximation).
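For completeness, a minimal end-to-end sketch of this approach; the import location of conditional, the integrator choice, and the time grid are my assumptions:
import numpy as np
from jitcode import jitcode, y, conditional  # conditional: exported by recent JiTCODE versions

N = 10
thr = 0.0001

def f():
    for i in range(N):
        if i < 5:
            # smooth stand-in for: use y(N-2) if y(N-1) > y(N-2), else y(N-1)
            yield (y(i)*conditional(y(N-1), y(N-2), y(N-2), y(N-1)))**0.2
        else:
            # smooth stand-in for: exponent 0.2 if y(i) > thr, else 0.3
            yield y(i)**conditional(y(i), thr, 0.2, 0.3)

ODE = jitcode(f, n=N)
ODE.set_integrator("dopri5")
ODE.set_initial_value(np.ones(N), 0.0)
states = np.vstack([ODE.integrate(t) for t in np.arange(0.1, 10.0, 0.1)])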
how to set the population to 0 ??
You cannot make such a discontinuous jump within JiTCODE, or within the framework of differential equations in general. Usually you would simulate this with a sharp population decline, possibly introducing a delay (and thus JiTCDDE). If you really need the jump, you can either:
Detect threshold crossings after each integration step and reinitialise the integrator with the respective initial conditions (see the sketch below). If you just want to fully kill populations that went below a reproductive threshold, this is a valid solution.
Implement a binary-switch dynamical variable.
Also see this GitHub issue.
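A minimal sketch of the first option (assuming f, N, and thr from above, and that set_initial_value can be called again to restart the integrator from a modified state):
import numpy as np
from jitcode import jitcode

ODE = jitcode(f, n=N)
ODE.set_integrator("dopri5")

state = np.ones(N)
t_now = 0.0
trajectory = []
for _ in range(100):                  # 100 chunks of length 1.0 (arbitrary)
    ODE.set_initial_value(state, t_now)
    t_now += 1.0
    state = np.array(ODE.integrate(t_now))
    state[state < thr] = 0.0          # fully kill sub-threshold populations
    trajectory.append(state.copy())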
I'm using Z3 to optimize an SMT problem. I have a variable h (bounded by some constraints) that I want to minimize with the Z3 Optimize class. The point is that I know the lower bound of h but not its upper bound, so if I write something like
optimizer.add(h >= lower_bound)
what happens is that the solver spends a lot of time trying suboptimal values of h. If instead I write
optimizer.add(h == lower_bound)
the optimizer finds the solution for h fairly quickly, if there is one. My problem is that the optimal solution clearly doesn't always have h == lower_bound, but it's usually close to it. It would be nice if there were a way to tell the optimizer to start searching from the lower bound and then go up. A workaround I found is to use the Z3 Solver class instead of Optimize and iterate over the possible values of h starting from the lower bound, like this:
from z3 import Solver, Int, sat

h = lower_bound
result = None
while result != sat:
    solver = Solver()
    h_var = Int('h')
    solver.add(h_var == h)
    # all the other constraints here...
    result = solver.check()
    h += 1
But it's not really elegant. Can some of you help me? Thank you very much.
If you know an upper bound as well, then you can do a binary search. That'd be logarithmically optimal in terms of the number of calls to check you have to make.
If you don't have an upper limit, first find one by incrementing h not by 1 but by a larger amount, to "jump" to an upper bound. (For instance, increment by 1000 till you hit unsat.) Then do a binary search, since you'll have both upper and lower bounds at that point. Of course, a good value for the increment will depend on the exact problem you have.
I'm afraid these are your only options. The "official" way of doing this is to say opt.add(h >= lower_limit), which doesn't seem to be working for your problem. Perhaps the above trick can help.
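A hedged sketch of that gallop-then-bisect strategy (the add_constraints callback, the step value, and the assumption that relaxing the upper bound on h can only help satisfiability are mine):
from z3 import Solver, Int, sat

def minimize_h(add_constraints, lower, step=1000):
    # add_constraints(solver, h) is a hypothetical callback that installs
    # the problem's other constraints on a fresh solver
    h = Int('h')

    def feasible(hi):
        s = Solver()
        add_constraints(s, h)
        s.add(h >= lower, h <= hi)
        return s.check() == sat

    # galloping phase: jump upward until some h <= hi becomes satisfiable
    hi = lower
    while not feasible(hi):
        hi += step

    # binary search between the last unsat point and the first sat point
    lo = max(lower, hi - step + 1)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo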
Another thing to try is a different solver: OptiMathSAT has different algorithms and optimization techniques implemented. Perhaps it'll perform better: https://optimathsat.disi.unitn.it
I am currently trying to improve my Python skills by solving some example problems. One of them is about checking whether a grid point is located behind a barrier. The output is the number of all grid points that are not behind a barrier.
The given hint is to calculate the angle of the direction to the point and compare it with the angles of the left and right sides of the barrier.
My code works and gives correct results, but for large grids (1000x1000) it is very slow, so I was wondering if there are ways to make it faster.
The part that takes the longest is checking whether a point is behind an obstacle, so I will only include that part of the code here. If you need the rest of the code as well, I am happy to include it :)
import math
import numpy as np

def CheckIfObstacle(point, Obstacle):
    for obs in Obstacle:
        # condition for a point being behind a barrier on the positive side of the grid
        if obs[0] > 0 and point[0] >= obs[0] and (obs[1] >= point[1] >= obs[2]):
            return True
        # condition for a point being behind a barrier on the negative side of the grid
        elif obs[0] < 0 and point[0] <= obs[0] and (obs[1] >= point[1] >= obs[2]):
            return True
    return False
Obstacle = []  # [(y1,angle_big1,angle_small1),(y2,angle_big2,angle_small2),...]
for i in range(2, nObs+2):
    ...  # some code that puts data in Obstacle

Grid = calcGrid(S)  # [(y1,angle1),(y2,angle2),...]

count = 0
p = 0
for point in Grid:
    if p % 10000 == 0:
        print(round(p/len(Grid)*100, 3), '%')
    p += 1
    if not CheckIfObstacle(point, Obstacle):
        count += 1
print(count)
This is the fastest of all my versions. The 1000x1000 grid takes around 15 minutes, I think, but now I have an even bigger grid; it ran for an hour and was only at about 5%. If anyone has ideas on how to improve the code, I would be happy about some feedback.
Thanks in advance!
I suggest that you split the work into 16 different programs that run separately, if you are using cloud software. If not, have the smaller jobs run consecutively. That way you don't have one massive program that has to process everything at once.
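Read in concrete Python terms, that suggestion amounts to partitioning the grid across worker processes. A hedged sketch using multiprocessing (assuming CheckIfObstacle, Grid, and Obstacle from the question, and a fork-based start method so the workers can see them):
from multiprocessing import Pool

def count_visible_chunk(chunk):
    # count the points in one slice of the grid that are not behind a barrier
    return sum(1 for point in chunk if not CheckIfObstacle(point, Obstacle))

if __name__ == "__main__":
    nproc = 16
    chunks = [Grid[i::nproc] for i in range(nproc)]  # 16 interleaved slices
    with Pool(nproc) as pool:
        count = sum(pool.map(count_visible_chunk, chunks))
    print(count)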
Using numba, you can compile CheckIfObstacle and make it very fast and memory efficient. In my case, I needed a 49000x12000 grid, so using NumPy with numba was a lifesaver for me, and you can compare the speed of numba against the best you can achieve with plain NumPy.
Alternatively you can use Cython, which is a bit more complicated, and parallelism is harder there than with numba.
Below, the numba jit decorator is added at the top of the function, including a proper type specification. This is just an example, but it is a good idea to add numba to your list of optimizations:
import numba as nb

# The early returns make this loop inherently sequential, so a plain range
# is used here rather than nb.prange with parallel=True.
@nb.njit(nb.boolean(nb.float64[:], nb.float64[:, :]))
def CheckIfObstacle(point, Obstacle):
    for i in range(len(Obstacle)):
        obs = Obstacle[i]
        # condition for a point being behind a barrier on the positive side of the grid
        if obs[0] > 0 and point[0] >= obs[0] and (obs[1] >= point[1] >= obs[2]):
            return True
        # condition for a point being behind a barrier on the negative side of the grid
        elif obs[0] < 0 and point[0] <= obs[0] and (obs[1] >= point[1] >= obs[2]):
            return True
    return False
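The signature above expects float64 arrays rather than lists of tuples, so the inputs need converting first; a usage sketch:
import numpy as np

Obstacle_arr = np.asarray(Obstacle, dtype=np.float64)  # shape (nObs, 3)
Grid_arr = np.asarray(Grid, dtype=np.float64)          # shape (nPoints, 2)
count = sum(1 for point in Grid_arr if not CheckIfObstacle(point, Obstacle_arr))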
I had a similar problem; for performance comparisons and implementation details, see: Why are np.hypot and np.subtract.outer very fast?
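For comparison, the check can also be fully vectorized in plain NumPy, trading the Python loop for one boolean points-by-obstacles mask. A sketch under the question's data layout (Grid rows are (y, angle), Obstacle rows are (y, angle_big, angle_small)); note the mask takes M*K bytes, so chunk the grid if both dimensions are huge:
import numpy as np

def count_visible(Grid, Obstacle):
    Grid = np.asarray(Grid, dtype=float)          # (M, 2): rows (y, angle)
    Obstacle = np.asarray(Obstacle, dtype=float)  # (K, 3): rows (y, angle_big, angle_small)
    gy = Grid[:, 0][:, None]       # (M, 1)
    ga = Grid[:, 1][:, None]       # (M, 1)
    oy = Obstacle[None, :, 0]      # (1, K)
    hi = Obstacle[None, :, 1]
    lo = Obstacle[None, :, 2]
    in_angle = (ga <= hi) & (ga >= lo)                                # (M, K)
    behind = (((oy > 0) & (gy >= oy)) | ((oy < 0) & (gy <= oy))) & in_angle
    return int(np.count_nonzero(~behind.any(axis=1)))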
I have the following first-order differential equation (example):
dn/dt = A*n,  n(0) = 28
When A is constant, it is perfectly solved with Python's odeint.
But I have an array of different values of A from a .txt file (not a function, just an array of values):
A = [0.1,0.2,0.3,-0.4,0.7,...,0.0028]
and I want A to take a new value from the array in each iteration (or at each moment of time t) of solving the ODE. That is:
First iteration (or t=0): A=0.1
Second iteration (or t=1): A=0.2, and so on through the array.
How can I do this using Python's odeint?
Yes, you can do that, but not directly in odeint, as it has no event mechanism, and what you propose needs an event-action mechanism.
But you can separate your problem into steps, use odeint inside each step with the now-constant parameter A, and then join the steps at the end.
import numpy as np
from scipy.integrate import odeint

n0 = 28.0  # initial value n(0)
T = [[0.0]]
N = [[n0]]
for k in range(len(A)):
    t = np.linspace(k, k+1, 11)
    n = odeint(lambda u, t: A[k]*u, [n0], t)
    n0 = n[-1, 0]          # end state of this step is the next initial value
    T.append(t[1:])
    N.append(n[1:, 0])
T = np.concatenate(T)
N = np.concatenate(N)
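To sanity-check the joined solution, it can be plotted directly (matplotlib assumed):
import matplotlib.pyplot as plt

plt.plot(T, N)
plt.xlabel("t")
plt.ylabel("n(t)")
plt.show()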
If you are satisfied with less efficiency, both in the evaluation of the ODE function and in the number of internal steps, you can also implement the parameter as a piecewise constant function:
from scipy.interpolate import interp1d

tA = np.arange(len(A))
A_func = interp1d(tA, A, kind="zero", fill_value="extrapolate")

T = np.linspace(0, len(A)+1, 10*len(A)+11)
N = odeint(lambda u, t: A_func(t)*u, [n0], T)
The internal step-size controller works on the assumption that the ODE function is differentiable to 5th or higher order. Through the implicit numerical differentiation inherent in the step-error calculation, the jumps are then seen as highly oscillatory events requiring a very small step size. There is some mitigation inside the code that usually allows the solver to eventually step over such a jump, but it will require many more internal steps, and thus function evaluations, than the first variant above.
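One possible mitigation within this second variant: SciPy's odeint accepts a tcrit argument listing critical points where integration care should be taken, which can be pointed at the jump locations. A sketch:
# hint the solver about the jumps of A_func at the integer times
N = odeint(lambda u, t: A_func(t)*u, [n0], T, tcrit=np.arange(len(A)+1))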
I need to find the coefficient of a term in a rather long, nasty expansion. I have a polynomial, say f(x) = (x + x^2)/2, and a function defined recursively: g_k(x,y) = y*f(g_{k-1}(x,y)) with g_0(x,y) = y*x.
I want to know, say, the coefficient of x^2*y^4 in g_10(x,y).
I've coded this up as
import sympy

x, y = sympy.symbols('x y')

def f(x):
    return (x + x**2)/2

def g(x, y, k):
    if k == 0:
        return y*x
    else:
        return y*f(g(x, y, k-1))

fxn = g(x, y, 2)
fxn.expand().coeff(x**2).coeff(y**4)
# output: 1/4
So far so good.
But now I want to find a coefficient for k = 10. Now fxn = g(x,y,10) and then fxn.expand() is very slow. Obviously there are a lot of steps going on, so it's not a surprise. But my knowledge of sympy is rudimentary - I've only started using it specifically because I need to be able to find these coefficients. I could imagine that there may be a way to get sympy to recognize that everything is a polynomial and so it can more quickly find a particular coefficient, but I haven't been able to find examples doing that.
Is there another approach through sympy to get this coefficient, or anything I can do to speed it up?
I assume you are only interested in the given coefficient and not in the whole polynomial g(x,y,10). In that case you can redefine your function g to discard higher orders at every step of the recursion, which will significantly speed up your calculation.
def g(x, y, k):
    if k == 0:
        return y*x
    else:
        temp = y*f(g(x, y, k-1)) + sympy.O(y**5) + sympy.O(x**3)
        return temp.expand().removeO()
This works as follows: first, everything of order O(y**5) or O(x**3) (and higher) is grouped and then discarded. Keep in mind that you lose lots of information!
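With this truncated recursion the k = 10 case finishes quickly; a usage sketch (f, x, y as defined in the question; the returned expression is already expanded, so coeff applies directly):
fxn = g(x, y, 10)
print(fxn.coeff(x**2).coeff(y**4))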
Also have a look here: Sympy: Drop higher order terms in polynomial
I really need help, as I am stuck at the beginning of the code.
I am asked to create a function to investigate the exponential distribution on a histogram. The function is x = −log(1−y)/λ. λ is a constant that I refer to as lamdr in the code, and I simply gave it the value 10. I gave N (the number of random numbers) the value 10 and ran the code, yet the results and the generated random numbers were totally different. Below you can find the code; I don't know what went wrong, and I hope you guys can help me! (I use Python 2.)
import random
import math

N = raw_input('How many random numbers you request?: ')
N = int(N)
lamdr = raw_input('Enter a value:')
lamdr = int(lamdr)

def exprand(lamdr):
    y = []
    for i in range(N):
        y.append(random.uniform(0, 1))
    return y

y = exprand(lamdr)
print 'Randomly generated numbers:', (y)

x = []
for w in y:
    x.append((math.log((1 - w) / lamdr)) * -1)
print 'Results:', x
After viewing the code you provided, it looks like you have the pieces you need but you're not putting them together.
You were asked to write function exprand(lambdr) using the specified formula. Python already provides a function called random.expovariate(lambd) for generating exponentials, but what the heck, we can still make our own. Your formula requires a "random" value for y which has a uniform distribution between zero and one. The documentation for the random module tells us that random.random() will give us a uniform(0,1) distribution. So all we have to do is replace y in the formula with that function call, and we're in business:
def exprand(lambdr):
    return -math.log(1.0 - random.random()) / lambdr
An historical note: Mathematically, if y has a uniform(0,1) distribution, then so does 1-y. Implementations of the algorithm dating back to the 1950's would often leverage this fact to simplify the calculation to -math.log(random.random()) / lambdr. Mathematically this gives distributionally correct results since P{X = c} = 0 for any continuous random variable X and constant c, but computationally it will blow up in Python for the 1 in 2⁶⁴ occurrence where you get a zero from random.random(). One historical basis for doing this was that when computers were many orders of magnitude slower than now, ditching the one additional arithmetic operation was considered worth the minuscule risk. Another was that Prime Modulus Multiplicative PRNGs, which were popular at the time, never yield a zero. These days it's primarily of historical interest, and an interesting example of where math and computing sometimes diverge.
Back to the problem at hand. Now you just have to call that function N times and store the results somewhere. Likely candidates to do so are loops or list comprehensions. Here's an example of the latter:
abuncha_exponentials = [exprand(0.2) for _ in range(5)]
That will create a list of 5 exponentials with λ=0.2. Replace 0.2 and 5 with suitable values provided by the user, and you're in business. Print the list, make a histogram, use it as input to something else...
Replacing exprand with expovariate in the list comprehension should produce equivalent results using Python's built-in exponential generator. That's the beauty of functions as an abstraction: once somebody writes them, you can use them to your heart's content.
Note that because of the use of randomness, this will give different results every time you run it unless you "seed" the random generator to the same value each time.
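For instance, a reproducible run might start with:
import random

random.seed(12345)  # fixed seed: the same "random" sequence on every run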
What @pjs wrote is true up to a point. While the statement "mathematically, if y has a uniform(0,1) distribution, so does 1-y" appears to be correct, the proposal to replace the code with -math.log(random.random()) / lambdr is just wrong. Why? Because Python's random module provides U(0,1) in the range [0,1) (as mentioned here), which makes such a replacement non-equivalent.
In more layman's terms: if your U(0,1) actually generates numbers in the [0,1) range, then the code
import math
import random

def exprand(lambdr):  # note: "lambda" is a Python keyword and cannot be a parameter name
    return -math.log(1.0 - random.random()) / lambdr
is correct, but the code
import math
import random

def exprand(lambdr):
    return -math.log(random.random()) / lambdr
is wrong: it will occasionally raise an exception, because log(0) will be called when random.random() returns exactly 0. With 1.0 - random.random(), the argument stays in the half-open interval (0, 1], so the logarithm is always defined.