I have the following first-order differential equation (example):
dn/dt = A*n; n(0) = 28
When A is constant, it is solved perfectly well with Python's odeint.
But I have an array of different values of A from a .txt file [not a function, just an array of values]:
A = [0.1, 0.2, 0.3, -0.4, 0.7, ..., 0.0028]
And I want A to take a new value from the array at each iteration (or at each moment of time t) while the ODE is being solved.
I mean:
First iteration (or t=0): A = 0.1
Second iteration (or t=1): A = 0.2, and so on through the array.
How can I do this using Python's odeint?
Yes, you can do that, but not directly in odeint, as it has no event mechanism, and what you propose needs an event-action mechanism.
But you can split your problem into steps, use odeint inside each step with the now-constant A parameter, and join the steps together at the end.
import numpy as np
from scipy.integrate import odeint

n0 = 28            # initial value n(0)
# A is the array of parameter values loaded from the .txt file
T = [np.array([0.0])]
N = [np.array([float(n0)])]
for k in range(len(A)):
    t = np.linspace(k, k+1, 11)
    n = odeint(lambda u, t: A[k]*u, [n0], t)
    n0 = n[-1, 0]                  # end state of this step seeds the next step
    T.append(t[1:])
    N.append(n[1:, 0])
T = np.concatenate(T)
N = np.concatenate(N)
If you are satisfied with less efficiency, both in the evaluation of the ODE and in the number of internal steps, you can also implement the parameter as a piecewise constant function.
from scipy.interpolate import interp1d

tA = np.arange(len(A))
A_func = interp1d(tA, A, kind="zero", fill_value="extrapolate")
T = np.linspace(0, len(A)+1, 10*len(A)+11)
N = odeint(lambda u, t: A_func(t)*u, [n0], T)
The internal step size controller works on the assumption that the ODE function is differentiable to 5th order or higher. The jumps are then seen, via the implicit numerical differentiation inherent in the step error calculation, as highly oscillatory events requiring a very small step size. There is some mitigation inside the code that usually allows the solver to eventually step over such a jump, but it will require many more internal steps, and thus function evaluations, than the first variant above.
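One way to observe this cost difference, assuming the piecewise-constant setup above, is odeint's optional info dict, which records the cumulative number of right-hand-side evaluations:
# full_output=True also returns an info dict; info['nfe'] holds the
# cumulative number of right-hand-side evaluations at each output time
N, info = odeint(lambda u, t: A_func(t)*u, [n0], T, full_output=True)
print(info['nfe'][-1])   # total evaluations for the piecewise-constant variant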
I have a very large multiply and sum operation that I need to implement as efficiently as possible. The best method I've found so far is bsxfun in MATLAB, where I formulate the problem as:
L = 10000;
x = rand(4,1,L+1);
A_k = rand(4,4,L);
tic
for k = 2:L
    i = 2:k;
    x(:,1,k+1) = x(:,1,k+1) + sum(sum(bsxfun(@times, A_k(:,:,2:k), x(:,1,k+1-i)), 2), 3);
end
toc
Note that L will be larger in practice. Is there a faster method? It's strange that I need to first add the singleton dimension to x and then sum over it, but I can't get it to work otherwise.
It's still much faster than any other method I've tried, but not enough for our application. I've heard rumors that the Python function numpy.einsum may be more efficient, but I wanted to ask here first before I consider porting my code.
I'm using MATLAB R2017b.
I believe both of your summations can be removed, but I only removed the easier one for the time being. The summation over the second dimension is trivial, since it only affects the A_k array:
B_k = sum(A_k,2);
for k = 2:L
    i = 2:k;
    x(:,1,k+1) = x(:,1,k+1) + sum(bsxfun(@times, B_k(:,1,2:k), x(:,1,k+1-i)), 3);
end
With this single change the runtime is reduced from ~8 seconds to ~2.5 seconds on my laptop.
The second summation could also be removed by transforming times+sum into a matrix-vector product. It needs some singleton fiddling to get the dimensions right, but if you define an auxiliary array that is B_k with its second dimension reversed, you can generate the remaining sum roughly as x*C_k with this auxiliary array C_k, give or take a few calls to reshape.
So after a closer look I realized that my original assessment was overly optimistic: you have multiplications in both dimensions in your remaining term, so it's not a simple matrix product. Anyway, we can rewrite that term to be the diagonal of a matrix product. This implies that we're computing a bunch of unnecessary matrix elements, but this still seems to be slightly faster than the bsxfun approach, and we can get rid of your pesky singleton dimension too:
L = 10000;
x = rand(4,L+1);
A_k = rand(4,4,L);
B_k = squeeze(sum(A_k,2)).';
tic
for k = 2:L
    ii = 1:k-1;
    x(:,k+1) = x(:,k+1) + diag(x(:,ii)*B_k(k+1-ii,:));
end
toc
This runs in ~2.2 seconds on my laptop, somewhat faster than the ~2.5 seconds obtained previously.
Since you're using a new version of MATLAB, you might try broadcasting / implicit expansion instead of bsxfun:
x(:,1,k+1) = x(:,1,k+1)+sum(sum(A_k(:,:,2:k).*x(:,1,k-1:-1:1),3),2);
I also changed the order of summation and removed the i variable for further improvement. On my machine, and with Matlab R2017b, this was about 25% faster for L = 10000.
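Not part of either answer above, but since the question mentions numpy.einsum: below is a rough, untimed NumPy port of the B_k variant, assuming the same random test data, with the inner sum collapsed first and einsum doing the multiply-and-sum over the remaining axis.
import numpy as np

L = 10000
rng = np.random.default_rng(0)
x = rng.random((4, L + 1))
A_k = rng.random((4, 4, L))

B = A_k.sum(axis=1)            # collapse the trivial inner sum; shape (4, L)

for k in range(2, L + 1):      # k matches the MATLAB loop variable
    i = np.arange(2, k + 1)
    # elementwise product of B(:,i) with x(:,k+1-i), summed over i
    x[:, k] += np.einsum('ij,ij->i', B[:, i - 1], x[:, k - i])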
I need to find the coefficient of a term in a rather long, nasty expansion. I have a polynomial, say f(x) = (x+x^2)/2, and a function defined recursively: g_k(x,y) = y*f(g_{k-1}(x,y)) with g_0(x,y) = y*x.
I want to know, say, the coefficient of x^2*y^4 in g_10(x,y).
I've coded this up as
import sympy

x, y = sympy.symbols('x y')

def f(x):
    return (x + x**2)/2

def g(x, y, k):
    if k == 0:
        return y*x
    else:
        return y*f(g(x, y, k-1))

fxn = g(x, y, 2)
fxn.expand().coeff(x**2).coeff(y**4)
> 1/4
So far so good.
But now I want to find a coefficient for k = 10. Now fxn = g(x,y,10) and then fxn.expand() is very slow. Obviously there are a lot of steps going on, so it's not a surprise. But my knowledge of sympy is rudimentary - I've only started using it specifically because I need to be able to find these coefficients. I could imagine that there may be a way to get sympy to recognize that everything is a polynomial and so it can more quickly find a particular coefficient, but I haven't been able to find examples doing that.
Is there another approach through sympy to get this coefficient, or anything I can do to speed it up?
I assume you are only interested in specific coefficients and not in the whole polynomial g(x,y,10). In that case you can redefine your function g to get rid of higher orders in every step of the recursion, which will significantly speed up your calculation.
def g(x, y, k):
    if k == 0:
        return y*x
    else:
        temp = y*f(g(x, y, k-1)) + sympy.O(y**5) + sympy.O(x**3)
        return temp.expand().removeO()
It works as follows: first, everything of order O(y**5) or O(x**3) (and higher) is grouped and then discarded. Keep in mind that you lose lots of information!
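For instance, with the truncated g above the original query should now run quickly (exact timings will of course vary):
fxn = g(x, y, 10)
print(fxn.expand().coeff(x**2).coeff(y**4))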
Also have a look here: Sympy: Drop higher order terms in polynomial
I have a Python code that maximizes a function over 8 parameters using a nested for loop. It takes approximately 16 minutes to execute, which is way too much because I have to run the optimization numerous times for the problem I am trying to solve.
I tried:
1.) Replacing the for loop with a list comprehension, but there was no change in performance.
2.) Jug to parallelize, but the entire system freezes and restarts.
My Questions:
1.) Is there any other way to parallelize a nested for loop using the multiprocessing module?
2.) Is there any way I can replace the nested loop with a completely different method to maximize the function?
Code Snippet:
def SvetMaxmization():  # maximization function
    Max = 0
    res = 1.0  # step size; execution time grows exponentially if the value is reduced
    for a1 in np.arange(0, pi, res):
        for a2 in np.arange(0, pi, res):
            for b1 in np.arange(0, pi, res):
                for b2 in np.arange(0, pi, res):
                    for c1 in np.arange(0, pi, res):
                        for c2 in np.arange(0, pi, res):
                            for d1 in np.arange(0, pi, res):
                                for d2 in np.arange(0, pi, res):
                                    present = Svet(a1, a2, b1, b2, c1, c2, d1, d2)  # function to be maximized
                                    if present > Max:
                                        Max = present
Svet() function:
def Svet(a1, a2, b1, b2, c1, c2, d1, d2):
    Rho = Desnitystate(3,1)  # Rho is a matrix of dimension 4x4
    CHSH1 = tensor(S(a1),S(b1)) + tensor(S(a1),S(b2)) + tensor(S(a2),S(b1)) - tensor(S(a2),S(b2))  # S returns a matrix of dimension 2x2
    CHSH2 = tensor(S(a1),S(b1)) + tensor(S(a1),S(b2)) + tensor(S(a2),S(b1)) - tensor(S(a2),S(b2))
    SVet3x1 = tensor(CHSH1, S(c2)) + tensor(CHSH2, S(c1))
    SVet3x2 = tensor(CHSH2, S(c1)) + tensor(CHSH1, S(c2))
    SVet4x1 = tensor(SVet3x1, S(d2)) + tensor(SVet3x2, S(d1))
    Svd = abs((SVet4x1*Rho).tr())
    return Svd
System details: Intel Core I5 clocked at 3.2GHz
Thanks for your time!!
It's hard to give a single "right" answer, as it will depend a lot on the behaviour of your cost function.
But, considering that you are now doing a grid search over the parameter space (basically brute-forcing the solution), I think there are some things worth trying.
See if you can use a more sophisticated optimization algorithm.
See the scipy.optimize module, e.g. check whether something as simple as
from scipy.optimize import minimize
import numpy as np

x0 = ...  # some initial guess
bounds = [(0, np.pi)] * 8   # one (0, pi) interval per parameter
# minimize the negated cost to maximize it; unpack the parameter vector
result = minimize(lambda p: -Svet(*p), x0, bounds=bounds)
can solve the problem.
If the cost function is so badly behaved that none of those methods work, your only hope is probably to speed up the execution of the cost function itself. In my own experience, I would try the following:
numba is a good first alternative because it is very simple to try, since it does not require you to change anything in your current code (a toy sketch follows this list). It doesn't always speed up your code, though.
Rewrite the cost function with Cython. This requires some work on your part, but will likely give a large boost in speed. Again, this depends on the nature of your cost function.
Rewrite using e.g. C, C++, or any other "fast" language.
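As a toy sketch of the numba route, with a stand-in cost function (the OP's Svet builds on library objects such as tensor and S that numba cannot compile), the pattern is simply to decorate the hot function and the grid loop:
import numba
import numpy as np

@numba.njit
def toy_cost(a, b):
    # stand-in for the real cost function
    return np.sin(a) * np.cos(b)

@numba.njit
def grid_max(res):
    # brute-force grid search over [0, pi)^2, compiled by numba
    best = -np.inf
    a = 0.0
    while a < np.pi:
        b = 0.0
        while b < np.pi:
            v = toy_cost(a, b)
            if v > best:
                best = v
            b += res
        a += res
    return best

print(grid_max(0.01))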
Why does the following code return a ValueError?
from scipy.optimize import fsolve
import numpy as np

def f(p, a=0):
    x, y = p
    return (np.dot(x,y) - a, np.outer(x,y) - np.ones((3,3)), x + y - np.array([1,2,3]))

x, y = fsolve(f, (np.ones(3), np.ones(3)), 9)
ValueError: setting an array element with a sequence.
The basic problem here is that your function f does not satisfy the criteria required for fsolve to work. These criteria are described in the documentation - although arguably not very clearly.
The particular things that you need to be aware of are:
the input to the function that will be solved for must be an n-dimensional vector (referred to in the docs as ndarray), such that the value of x you want is the solution to f(x, *args) = 0.
the output of f must be the same shape as the x input to f.
Currently, your function takes a 2-member tuple of 1x3 arrays (in p) and a fixed scalar offset (in a). It returns a 3-member tuple of types (scalar, 3x3 array, 1x3 array).
As you can see, neither condition 1 nor 2 is met.
It is hard to advise you on exactly how to fix this without knowing exactly which equation you are trying to solve. It seems you are trying to solve some particular equation f(x,y,a) = 0 for x and y, with x0 = (1,1,1) and y0 = (1,1,1) and a = 9 as a fixed value. You might be able to do this by passing in x and y concatenated (e.g. pass in p0 = (1,1,1,1,1,1) and in the function use x = p[:3] and y = p[3:]), but then you must modify your function to output x and y similarly concatenated into a 6-dimensional vector. This depends on the exact function you are solving for, and I can't work it out from the output of your existing f (i.e. the tuple based on a dot product, an outer product and a sum). A hypothetical sketch follows the note below.
Note that arguments that you don't pass in the vector (e.g. a in your case) will be treated as fixed values and won't be varied as part of the optimisation or returned as part of any solution.
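To make the shape requirement concrete, here is a hypothetical repacking of f that fsolve will at least accept: x and y go into one length-6 vector, and six of the original thirteen residuals are kept, chosen arbitrarily just so input and output shapes match. It is not the system you actually want to solve; which equations to keep (if any square subsystem makes sense) depends on your underlying problem.
from scipy.optimize import fsolve
import numpy as np

def f6(p, a=0):
    x, y = p[:3], p[3:]                     # unpack the length-6 vector
    r1 = np.array([np.dot(x, y) - a])       # 1 residual from the dot product
    r2 = x + y - np.array([1.0, 2.0, 3.0])  # 3 residuals from the sum
    r3 = (np.outer(x, y) - 1.0)[0, :2]      # 2 residuals from the outer product, picked arbitrarily
    return np.concatenate([r1, r2, r3])     # shape (6,), same as p

sol = fsolve(f6, np.ones(6), args=(9,))
x, y = sol[:3], sol[3:]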
Note for those who like the full story...
As the docs say:
fsolve is a wrapper around MINPACK’s hybrd and hybrj algorithms.
If we look at the MINPACK hybrd documentation, the conditions for the input and output vectors are more clearly stated. See the relevant bits below (I've cut some stuff out for clarity - indicated with ... - and added the comment to show that the input and output must be the same shape - indicated with <--)
1 Purpose.
The purpose of HYBRD is to find a zero of a system of N non-
linear functions in N variables by a modification of the Powell
hybrid method. The user must provide a subroutine which calcu-
lates the functions. The Jacobian is then calculated by a for-
ward-difference approximation.
2 Subroutine and type statements.
SUBROUTINE HYBRD(FCN,N,X, ...
...
FCN is the name of the user-supplied subroutine which calculates
the functions. FCN must be declared in an EXTERNAL statement
in the user calling program, and should be written as follows.
SUBROUTINE FCN(N,X,FVEC,IFLAG)
INTEGER N,IFLAG
DOUBLE PRECISION X(N),FVEC(N) <-- input X is an array length N, so is output FVEC
----------
CALCULATE THE FUNCTIONS AT X AND
RETURN THIS VECTOR IN FVEC.
----------
RETURN
END
N is a positive integer input variable set to the number of
functions and variables.
X is an array of length N. On input X must contain an initial
estimate of the solution vector. On output X contains the
final estimate of the solution vector.
Optimization on a set of data using Python.
The following data sets are available:
x, y, f(x), f(y).
Function to be optimized (maximize):
f(x,y) = f(x)*y - f(y)*x
subject to the following constraints:
V >= sqrt(f(x)^2 + f(y)^2)
I >= sqrt(x^2 + y^2)
where V and I are constants.
Can anyone please let me know which optimization module I need to use? From what I understand, I need to perform a discrete optimization, as I have fixed sets of values for x, y, f(x) and f(y).
Using complex optimizers (http://docs.scipy.org/doc/scipy/reference/optimize.html) for such a problem is rather a bad idea.
It looks like a problem which can be quite easily solved in under O(n^2) where n=max(|x|,|y|), simply:
sort x,y,f(x),f(y) creating sorted(x), sorted(y), sorted(f(x)), sorted(f(y))
for each x, find the positions in sorted(y) for which I^2 >= x^2 + y^2 holds, and similarly for f(x) and sorted(f(y)) with V^2 >= f(x)^2 + f(y)^2 (two binary searches: since I^2 >= x^2 + y^2 <=> |y| <= sqrt(I^2 - x^2), you can find the "barrier" in constant time and then use binary searches to find the actual data points that are closest "on the right side of the inequality")
Iterate through sorted(x) and for each x:
Iterate simultaneously through the elements of y and f(y) and discard (in this loop) points which are not in both intervals found in step 2. (linear complexity)
Record argument pairs x_max,y_max for which f(x_max,y_max) is maximized
Return x_max,y_max
Total complexity is under quadratic: step 1 takes O(n log n); each iteration of the loop in step 2 is O(log n), so the whole of step 2 takes O(n log n); the loop in step 3 is O(n), and the loop in its first substep is O(n) (though in practice it should be almost constant due to the constraints). That makes the whole algorithm O(n^2), and in most cases it will behave as O(n log n). It also does not depend on the definition of f(x,y) (it is used as a black box), so you can optimize an arbitrary function in this way.
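A simplified NumPy sketch of the filtering idea, assuming the data come as paired arrays (x[i] with fx[i], y[j] with fy[j]) and using boolean masks in place of the sorting and binary searches; the asymptotics are worse than the scheme above, but the pruning logic is the same:
import numpy as np

def maximize_pairs(x, fx, y, fy, V, I):
    """Return the feasible (x_i, y_j) maximizing f(x,y) = fx_i*y_j - fy_j*x_i."""
    best, best_pair = -np.inf, None
    for xi, fxi in zip(x, fx):
        # keep only the y_j satisfying both constraints for this x_i
        ok = (y**2 <= I**2 - xi**2) & (fy**2 <= V**2 - fxi**2)
        if not ok.any():
            continue
        vals = fxi * y[ok] - fy[ok] * xi
        j = np.argmax(vals)
        if vals[j] > best:
            best, best_pair = vals[j], (xi, y[ok][j])
    return best_pair, best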