Does anyone know how to feed in an initial solution or matrix of initial solutions into the differential evolution function from the Scipy library?
The documentation doesn't explain whether it's possible, but I know that supplying an initial solution is not an unusual feature, and SciPy is so widely used that I would expect it to have that kind of functionality.
Ok, after review and testing I believe I now understand it.
Among the parameters that the scipy.optimize.differential_evolution(...) function accepts is init, which allows you to supply an array of candidate solutions as the initial population. Personally, I was looking at a set of coordinates, so I enumerated them into an array, generated 99 other variations of it (100 different solutions), and fed this matrix into the init parameter. I believe it needs to have more than 4 solutions or you are going to get a tuple error.
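For reference, here is a minimal sketch of passing a custom initial population through init; the two-parameter objective, the bounds, and the 100-member population built from one known starting point are all made up for illustration:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical objective: just the sum of squared coordinates.
def objective(x):
    return np.sum(x ** 2)

bounds = [(-5, 5), (-5, 5)]

# Build a (100, 2) initial population: one known solution plus 99 small
# random variations of it. The array shape must be
# (population size, number of parameters), and the population needs
# at least a handful of members.
rng = np.random.default_rng(0)
base = np.array([1.0, -2.0])
init_pop = base + rng.normal(scale=0.1, size=(100, 2))

result = differential_evolution(objective, bounds, init=init_pop)
print(result.x, result.fun)
```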
I probably didn't need to ask and answer my own question, but it may help others who got equally confused.
Related
I am fitting datasets with some broken power laws. The data has asymmetrical errors in X and Y, and I'd like to be able to introduce constraints on the fitted parameters (e.g. not below 0, or within a certain range).
Using Scipy.ODR, I can fit the data well, including the asymmetrical errors on both axes; however, I can't seem to find any way in the documentation to introduce bounds on my fitted parameters, and discussions online seem to suggest this is flat out impossible with this module: https://stackoverflow.com/a/17786438/19086741
Using Lmfit, I can also fit the data well and can introduce bounds on the fitted parameters. However, discussions online once again state that Lmfit is not able to handle asymmetrical errors, and errors on both axes.
Is there some module, or perhaps I am missing something with one of these modules, that would allow me to meet both of my requirements in this case? Many thanks.
Sorry, I don't have a good answer for you. As you note, Lmfit does not support ODR regression which allows for uncertainties in the (single) independent variable as well as uncertainties in the dependent variables.
I think this would be possible in principle. Unfortunately, ODR has a very different interface to the other minimization routines making a wrapper as "another possible solving algorithm for lmfit" a bit challenging. I am sure that none of the developers would object to someone trying this, but it would take some effort.
FWIW, you say "both axes" as if you are certain there are exactly 2 axes. ODR supports exactly one independent variable; lmfit is not limited to this assumption.
You also say that lmfit cannot handle asymmetric uncertainties. That is only partially true. The lmfit.Model interface allows only a single uncertainty value per data point, but with the lmfit.minimize interface you write your own objective function that returns the array to be minimized, so you can weight the residual of "data" and "model" any way you want.
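To illustrate that last point, here is a rough sketch of an lmfit.minimize objective that chooses between a lower and an upper uncertainty per point depending on which side of the data the model lands; the power-law model, the data arrays, and the sig_lo/sig_hi values are all hypothetical:

```python
import numpy as np
import lmfit

# Made-up data with asymmetric y-uncertainties.
rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * x ** -1.5 + rng.normal(0.0, 0.01, x.size)
sig_lo = np.full_like(y, 0.02)   # downward error bar
sig_hi = np.full_like(y, 0.05)   # upward error bar

def residual(params, x, y, sig_lo, sig_hi):
    model = params['amp'].value * x ** params['slope'].value
    # One possible convention: if the model sits above the data point,
    # weight by the upward error bar, otherwise by the downward one.
    sigma = np.where(model > y, sig_hi, sig_lo)
    return (y - model) / sigma

params = lmfit.Parameters()
params.add('amp', value=1.0, min=0)             # bounded below at 0
params.add('slope', value=-1.0, min=-3, max=0)  # kept within a range

out = lmfit.minimize(residual, params, args=(x, y, sig_lo, sig_hi))
print(lmfit.fit_report(out))
```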
This question may be half computational math, half programming.
I'm trying to estimate log[\int_0^\infty\int_0^\infty f(x,y)dxdy] [actually thousands of such integrals] in Python. The function f(x,y) involves some very large/very small numbers that are bound to cause overflow/underflow errors; so I'd really prefer to work with log[f(x,y)] instead of f(x,y).
Thus my question is two parts:
1) Is there a way to estimate log[\int_0^\infty\int_0^\infty f(x,y)dxdy] using the log of the function instead of the function itself?
2) Is there an implementation of this in Python?
Thanks
I would be surprised if the math and/or numpy libraries, or perhaps some more specialized third-party libraries, could not handle a problem like this. Here are some of their log functions:
math.log(x[, base]), math.log1p(x), math.log2(x), math.log10(x) (https://docs.python.org/3.3/library/math.html)
numpy.log, numpy.log10, numpy.log2, numpy.log1p, numpy.logaddexp, numpy.logaddexp2 (https://numpy.org/doc/stable/reference/routines.math.html#exponents-and-logarithms)
Generally, just google "logarithm python library" and try to identify similar Stack Overflow problems, which will help you find the right libraries and functions to try out. Once you do that, you can follow this guide so that someone can help you get from input to expected output: How to make good reproducible pandas examples
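For the log-of-an-integral part specifically, one common approach is to evaluate log f on a grid and combine the values with numpy.logaddexp, so that f itself is never exponentiated into an overflow. A rough sketch, assuming the infinite domain can be truncated and using a made-up Gaussian log-density as a stand-in for log f(x, y):

```python
import numpy as np

# Stand-in for log f(x, y): the log of a 2-D standard normal density,
# written directly in log space so f is never formed explicitly.
def log_f(x, y):
    return -0.5 * (x ** 2 + y ** 2) - np.log(2 * np.pi)

# Truncate [0, inf) at a point where log f is negligible (an assumption
# that has to be justified for the real f).
x = np.linspace(0.0, 15.0, 1000)
y = np.linspace(0.0, 15.0, 1000)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y)

log_vals = log_f(X, Y)

# log of the Riemann sum: log-sum-exp over the grid plus log of the cell area.
log_integral = np.logaddexp.reduce(log_vals.ravel()) + np.log(dx * dy)
print(log_integral)   # for this toy density, the exact value is log(1/4) ≈ -1.386
```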
I'm working with scipy.integrate.odeint and want to understand it better. For this I have two slightly related questions:
Which mathematical method is it using? Runge-Kutta? Adams-Bashforth? I found this site, but it seems to be for C++, though as far as I know the Python function uses the C++ version as well... It states that it switches automatically between an implicit and an explicit solver; does anybody know how it does this?
To understand/reuse the information, I would like to know at which time points it evaluates the function and how exactly it computes the solution of the ODE, but full_output does not seem to help, or at least I wasn't able to find out how from it. To be more precise, taking Runge-Kutta-Fehlberg as an example: I want the different time points at which it evaluated f and the weights it used to multiply those evaluations.
Additional information (what for this Info is needed):
I want to reuse this information to use automatic differentiation. So I would call odeint as a black box, find out all the relevant steps it made and reuse this info to calculate the differential dx(T_end)/dx0.
If you know of any other method to solve my problem, please go ahead. Also let me know if another ODE solver might be more appropriate to do this.
PS: I'm new here, so would it be better to split this question into two questions, i.e. separate 1. and 2.?
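Not a full answer, but regarding point 2: odeint wraps the LSODA solver, and full_output=True does expose some of its internal bookkeeping (step sizes, the time reached at each step, and which method family was used), even if not the individual stage evaluations and weights. A small sketch with a made-up test ODE:

```python
import numpy as np
from scipy.integrate import odeint

# Toy ODE dy/dt = -y, chosen only to illustrate full_output.
def f(y, t):
    return -y

t = np.linspace(0.0, 5.0, 11)
y, info = odeint(f, [1.0], t, full_output=True)

print(info['hu'])     # step sizes successfully used
print(info['tcur'])   # value of t reached for each output time
print(info['mused'])  # method indicator: 1 = Adams (non-stiff), 2 = BDF (stiff)
print(info['nfe'])    # cumulative number of f evaluations
```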
Usually I use Mathematica, but I am now trying to shift to Python, so this might be a trivial question; sorry about that.
Anyway, is there any built-in function in Python similar to the Interval[{min,max}] function in Mathematica? The link is: http://reference.wolfram.com/language/ref/Interval.html
What I am trying to do is minimize a function, but it is a constrained minimization; by that I mean the parameters of the function are only allowed to vary within particular intervals.
For a very simple example, let's say f(x) is a function with parameter x, and I am looking for the value of x which minimizes the function, but x is constrained to an interval (min, max). [Obviously the actual problem is not one-dimensional but a multi-dimensional optimization, so different parameters may have different intervals.]
Since it is an optimization problem, of course I do not want to pick the parameters randomly from their intervals.
Any help will be highly appreciated, thanks!
If it's a highly non-linear problem, you'll need to use an algorithm such as the Generalized Reduced Gradient (GRG) Method.
The idea of the generalized reduced gradient algorithm (GRG) is to solve a sequence of subproblems, each of which uses a linear approximation of the constraints. (Ref)
You'll need to ensure that certain conditions known as the KKT conditions are met, etc. but for most continuous problems with reasonable constraints, you'll be able to apply this algorithm.
This is a good reference for such problems with a few examples provided. Ref. pg. 104.
Regarding implementation:
While I am not familiar with Python, I have built solver libraries in C++ using templates as well as function pointers, so you can pass functions (for the objective as well as the constraints) as arguments to the solver and get your result, hopefully in polynomial time for convex problems or in cases where the initial values are reasonable.
If an ability to do that exists in Python, it shouldn't be difficult to build a generalized GRG solver.
The Python Solution:
Edit: Here is the python solution to your problem: Python constrained non-linear optimization
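To sketch how that looks in SciPy, per-parameter intervals map onto the bounds argument of scipy.optimize.minimize (with a method that supports bounds, such as L-BFGS-B or SLSQP); the two-parameter objective and the intervals below are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-parameter objective (Rosenbrock-style).
def f(p):
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

# One (min, max) interval per parameter, much like Interval[{min, max}].
bounds = [(0.0, 2.0), (-0.5, 2.5)]

res = minimize(f, x0=[0.5, 0.5], method='L-BFGS-B', bounds=bounds)
print(res.x, res.fun)
```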
First, let me say that I lack experience with scientific math or statistics, so this might be a very well-known problem, but I don't know where to start.
I have a function f(x1, x2, ..., xn) where I need to guess the x's and find the highest value of f. The function has the following properties:
the total number of parameters is usually around 40 to 60, so a brute-force approach is impossible.
the possible values for each x range from 0.01 to 2.99
the function is steady, meaning that a higher f value means that the guess for the parameters is better and vice versa.
So far, I have implemented a pretty basic method in Python. It initially sets all parameters to 1, randomly guesses new values, and checks if f is higher than before. If not, it rolls back to the previous values.
In a loop with 10,000 iterations this seems to work somehow, but the result is probably far from perfect.
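For concreteness, that basic loop might look roughly like this; the objective and the way new guesses are drawn are assumptions on my part:

```python
import numpy as np

rng = np.random.default_rng(0)
n_params = 50  # somewhere in the stated 40-60 range

def f(x):
    # Placeholder for the real objective; higher is better.
    return -np.sum((x - 1.5) ** 2)

params = np.ones(n_params)   # start with every parameter at 1
best = f(params)

for _ in range(10_000):
    # Guess new values near the current ones, clipped to the allowed range.
    candidate = np.clip(params + rng.normal(scale=0.1, size=n_params), 0.01, 2.99)
    value = f(candidate)
    if value > best:         # keep the guess only if f improved, otherwise roll back
        params, best = candidate, value
```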
Any suggestions on how to improve the search for the optimal parameters would be appreciated. When googling this issue, things like MCMC came up, but that seems like a very advanced method and I would need a lot of time to even understand it.
Basic hints or concepts would help me more than elaborated methods and algorithms.
Don't do it yourself. Install SciPy and use its optimization routines. scipy.optimize.minimize looks like a good fit.
I think you want to take a look at scipy.optimize (http://docs.scipy.org/doc/scipy-0.10.0/reference/tutorial/optimize.html). A maximization is just the minimization of -1 times the function.
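To make that concrete, here is a hedged sketch of maximizing such a function by minimizing its negative with scipy.optimize.minimize, every parameter bounded to [0.01, 2.99]; the objective is a placeholder for the real f:

```python
import numpy as np
from scipy.optimize import minimize

n_params = 50  # somewhere in the stated 40-60 range

def f(x):
    # Placeholder for the real objective; higher is better.
    return -np.sum((x - 1.5) ** 2)

res = minimize(lambda x: -f(x),            # minimize the negative to maximize f
               x0=np.ones(n_params),
               method='L-BFGS-B',
               bounds=[(0.01, 2.99)] * n_params)

best_params = res.x
best_value = -res.fun
print(best_value)
```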