scipy.integrate.odeint fails depending on time steps - python

I use Python for scientific applications, especially for solving differential equations. I've already used the odeint function successfully on simple systems of equations.
Right now my goal is to solve a rather complex system of over 300 equations. At this point odeint gives me reasonable results as long as the time steps in the t-array are 1e-3 or smaller. But I need bigger time steps, since the system has to be integrated over several thousand seconds, and bigger time steps yield the "excess work done.." error.
Does anyone have experience with odeint and can tell me why this happens, even though odeint seems to choose its internal time steps automatically and then reports results at the time steps I supply?
I simply don't understand why this occurs. I think I can work around the problem by integrating in multiple pieces, but maybe someone knows a better solution. I apologize in advance if there is already a solution elsewhere and I haven't seen it.
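For reference, a stripped-down sketch of the kind of call I mean; rhs and y0 here are hypothetical placeholders for my actual 300-equation system:

import numpy as np
from scipy.integrate import odeint

def rhs(y, t):
    # placeholder right-hand side standing in for the real 300-equation system
    return -0.5 * y

y0 = np.ones(300)

# Works: output points spaced 1e-3 apart
t_fine = np.arange(0.0, 10.0, 1e-3)
sol_fine = odeint(rhs, y0, t_fine)

# Fails with the "excess work done" error for my real system:
# much coarser output points over several thousand seconds
t_coarse = np.linspace(0.0, 5000.0, 501)
sol_coarse = odeint(rhs, y0, t_coarse)

(For what it's worth, odeint also has an mxstep keyword that raises the cap on internal steps taken between two requested output times; by default it falls back to the underlying solver's internal limit.)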

Related

How to interpret cProfile results of PuLP

I fear I am a bit in over my head. I have profiled a Mixed Integer Problem (MIP) with cProfile and gprof2dot. The MIP is implemented via the pulp library. The MIP is solvable, which I verified on smaller problems. I then profiled it on a larger problem, for which it could not find a solution.
In the following picture of the cProfile output, it can be seen that 88.83% of the time the program is in the function WaitForSingleObject. I am not familiar enough with the pulp source code to know how much time is reasonable for this part of ActualSolve. Intuitively, I expected most of the time to be spent in ActualSolve itself. However, it seems that within this function most of the time is spent waiting. Is it possible to reduce this waiting time?
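For reference, this is roughly how I run the profiler; the tiny model below is just a hypothetical stand-in for my real one:

import cProfile
import pstats
from pulp import LpMaximize, LpProblem, LpVariable

# Hypothetical toy MIP standing in for the real model.
prob = LpProblem("toy_mip", LpMaximize)
x = LpVariable("x", lowBound=0, cat="Integer")
y = LpVariable("y", lowBound=0, cat="Integer")
prob += 3 * x + 2 * y      # objective
prob += 2 * x + y <= 10    # constraint

# Profile the whole solve call; this is where ActualSolve and the
# WaitForSingleObject time show up in the stats.
profiler = cProfile.Profile()
profiler.enable()
prob.solve()
profiler.disable()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(15)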
Thanks in advance and kind regards.

How to improve speed of sympy.solvers for an exponential equation?

I am using sympy as part of a graph drawing program and it works pretty much perfectly for anything I throw at it, but there is one particular equation that causes it to struggle. Other exponential equations work fine, but this particular one just seems to make my code hang indefinitely.
My code is as follows:
import sympy

x = sympy.Symbol("x")
# This is the call that appears to hang indefinitely:
print(sympy.solvers.solve(19.9 * sympy.exp(-1.257 * x) - 275.0, x))
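One way to sidestep the general solver here (just a sketch; the initial guess of -2 is a rough hypothetical choice) is to ask for a numeric root with sympy.nsolve, or to use the closed form via the logarithm:

import sympy

x = sympy.Symbol("x")
expr = 19.9 * sympy.exp(-1.257 * x) - 275.0

# Numeric root near a rough initial guess.
print(sympy.nsolve(expr, x, -2))

# Closed form of the same root, from exp(-1.257*x) = 275/19.9:
print(-sympy.log(sympy.Rational(2750, 199)) / 1.257)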
Thanks for any and all help!

Accessing earlier values using scipy's odeint

I am solving a system of differential equations using scipy's odeint and I have a problem now that I can't seem to fix:
if a certain condition is met, I have to 'go back in time' to an earlier state, change a (global) parameter and continue integrating from there.
At first I thought that, although you can't really go back in time with odeint, I could store the time(s) at which the condition is met, change the state of the system (the variable being integrated) and just continue the integration from there.
Unfortunately I can't find a way to access the values of the variable for past times from within odeint. Does somebody know a way to solve my dilemma?
If that is not possible, can somebody suggest a different way to do this? Performance is an issue, so I would really rather not write my own solver in Python. I could do it in C, but that would require me to change the whole implementation, and that would cost me a lot of time.
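Roughly what I have in mind is sketched below; rhs, the trigger condition, and the parameter change are hypothetical placeholders for my actual system:

import numpy as np
from scipy.integrate import odeint

param = 1.0  # the (global) parameter that may be changed mid-run

def rhs(y, t):
    # hypothetical right-hand side that depends on the adjustable parameter
    return -param * y

y = np.array([1.0])
t0, t_end, dt = 0.0, 10.0, 0.1
history = [(t0, y.copy())]   # keep the whole trajectory myself, so earlier states stay accessible
already_reset = False

t = t0
while t < t_end:
    seg = np.array([t, t + dt])
    y = odeint(rhs, y, seg)[-1]   # integrate one short chunk
    t += dt
    history.append((t, y.copy()))
    # hypothetical trigger: the first time the state drops below 0.1,
    # jump back to a stored earlier state, change the parameter,
    # and continue integrating from there
    if not already_reset and y[0] < 0.1:
        t, y = history[0]
        y = y.copy()
        param = 0.5
        history = history[:1]
        already_reset = True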

Long solving time for MILP; want to automatically observe how results change as parameters change

I am trying to solve a MILP problem and it takes quite a long time to find the optimal solution (around an hour or so). I have set a time limit and also changed the optimality gap (separately) to reduce the time consumed, but in those cases I cannot read the relaxed solution values because the problem is not solved to completion.
I need to find the values of some of the variables in the model while changing some parameters. However, since one solve already takes so long, it is not practical to change those parameters by hand after every solve (waking up in the middle of the night to change them). Is it possible, and how, to automate changing some parameters and re-solving the problem again and again until the whole parameter range I defined has been covered? (There are multiple parameters I need to change each time, so there are a lot of combinations.)
I have a server online 24/7, and after automating the model in this way I want to run it there and collect all the results later, once they have all been solved.
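Roughly, the sweep I want to automate looks like the sketch below. I have written it with the Gurobi Python API, since the log below appears to be Gurobi output, and build_model is a hypothetical placeholder for my actual model construction; results are written to disk so I can collect them later on the server:

import csv
import itertools
import gurobipy as gp

def build_model(params):
    # Hypothetical helper: rebuild and return the MILP for one parameter set.
    m = gp.Model("milp")
    # ... add variables and constraints using params ...
    return m

# Hypothetical parameter ranges to sweep over.
demands = [100, 200, 300]
capacities = [50, 75]

with open("sweep_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["demand", "capacity", "status", "objective", "gap"])
    for demand, capacity in itertools.product(demands, capacities):
        model = build_model({"demand": demand, "capacity": capacity})
        model.setParam("TimeLimit", 3600)   # cap each solve at one hour
        model.setParam("MIPGap", 0.01)      # accept a 1% optimality gap
        model.optimize()
        obj = model.ObjVal if model.SolCount > 0 else None
        gap = model.MIPGap if model.SolCount > 0 else None
        writer.writerow([demand, capacity, model.Status, obj, gap])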
Here is part of the log from my problem, in case it gives some idea of the problem size:
Cutting planes:
MIR: 4
StrongCG: 2
Flow cover: 1
Explored 19098 nodes (6494268 simplex iterations) in 22971.46 seconds
Thread count was 32 (of 160 available processors)
Solution count 10: 2.03677e+07 4.57223e+06 4.01329e+06 ... -0
Pool objective bound 2.03677e+07
Optimal solution found (tolerance 1.00e-04)
Best objective 2.036765174668e+07, best bound 2.036765174668e+07, gap 0.0000%
If anyone is willing to help, I can also share my model. I am somewhat desperate about this model because of the long solving time.
Thank you for your time!

Computational cost of scipy.newton_krylov

I am trying to compare the computational cost of several methods of solving a nonlinear partial differential equation. One of the methods uses scipy.optimize.newton_krylov. The other methods use scipy.sparse.linalg.lgmres.
My first thought was to count the number of iterations of newton_krylov using the 'callback' keyword. This worked, to a point. The counter that I set up counts the number of iterations made by the newton_krylov solver.
I would also like to count the number of steps it takes in each (Newton) iteration to solve the linear system. I thought that I could use a similar counter and try using a keyword like 'inner_callback', but that gave the number of iterations the newton_krylov solver used rather than the number of steps to solve the inner linear system.
It is possible that I don't understand how newton_krylov is implemented and that it is impossible to separate the two types of iterations (Newton steps vs lgmres steps), but it would be great if there were a way.
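One possible workaround, sketched below with a hypothetical residual, would be to count evaluations of F instead: newton_krylov approximates Jacobian-vector products by finite differences, so each inner Krylov step costs roughly one extra residual evaluation, and differencing the counts recorded by the callback at each Newton iteration gives a rough estimate of the inner work per Newton step.

import numpy as np
from scipy.optimize import newton_krylov

class CountingResidual:
    # Wrap the residual so every evaluation is counted.
    def __init__(self, func):
        self.func = func
        self.calls = 0

    def __call__(self, x):
        self.calls += 1
        return self.func(x)

def residual(x):
    # hypothetical nonlinear residual F(x) = 0
    return x**2 + 2.0 * x - 1.0

F = CountingResidual(residual)
evals_per_newton_step = []

def callback(x, f):
    # called once per Newton iteration; record the cumulative evaluation count
    evals_per_newton_step.append(F.calls)

sol = newton_krylov(F, np.zeros(5), method="lgmres", callback=callback)

# Differences between successive entries approximate the residual
# evaluations (roughly the inner Krylov matvecs) spent in each Newton step.
print(evals_per_newton_step)
print(np.diff(evals_per_newton_step))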
I tried posting this on scicomp.stackexchange, but I didn't get any answers. I think that the question might be too specific to Scipy to get a good answer there, so I hope that someone here can help me. Thank you for your help!
