I was reading this question and trying to do the same, but I want the function to have a single parameter, say x. That parameter is an array of "values" to be filled in by an optimization solver. For instance:
def f(x):
    return x[0]**2 + 3*x[1]
That function represents f(x) = x^2 + 3y, meaning x is an array of variables. Any given variable may or may not appear in this particular function, because the array holds all the variables of the whole optimization problem, so some of them may only appear in the constraints. I would like to find that function's partial derivatives with respect to all the variables. In this case, I need 2 callable functions that I can use to form a new array that is the Jacobian of the function. Is there a way to do that? How?
Disclaimer: I am the author of pyneqsys.
If you are open to using a library, pyneqsys does exactly this. If not, you can look at the source of pyneqsys/symbolic.py, which (approximately) does this to calculate the Jacobian:
f = sympy.Matrix(self.nf, 1, self.exprs)  # column vector of the system's expressions
x = sympy.Matrix(self.nx, 1, self.x)      # column vector of the variables
J = f.jacobian(x)                         # symbolic Jacobian matrix
You then need to use sympy.lambdify to obtain a callable with the expected syntax of your particular solver.
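For the example above, a minimal standalone sketch of that approach (the names x0, x1, and J_func are illustrative, not part of pyneqsys):
import sympy

x0, x1 = sympy.symbols('x0 x1')         # the solver's variable array
f = sympy.Matrix([x0**2 + 3*x1])        # f(x) = x[0]**2 + 3*x[1]
J = f.jacobian(sympy.Matrix([x0, x1]))  # symbolic Jacobian: [[2*x0, 3]]
J_func = sympy.lambdify((x0, x1), J)    # callable, e.g. J_func(3.0, 7.0)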
For starters, let's set up the kind of function that is needed: a cubic represented as a function:
def solve(x):
    return 5*x*x*x + 4*x*x + 3*x + 2
This function would then be used as part of another function to solve this cubic, where values would be substituted for x to eventually find the correct value. Simple enough.
However, I am now given a prompt where I need to grab numbers that will serve as coefficients in this function; these are stored in a list named termsList. With this list, I grab the numbers and need to use a function named cubic(). I am told that the only parameters used in cubic() are the terms I will be using for this function, and that cubic() must generate a function of its own, solve(). It's difficult to describe, but based on my understanding of the instructions the result should be vaguely similar to this:
def cubic(a, b, c, d):
    def solve(x):
        return float(a)*x*x*x, float(b)*x*x, float(c)*x, float(d)

solve1 = cubic(termsList[0], termsList[1], termsList[2], termsList[3])
solving(solve(x))
All of my attempts to make this work have failed, and I'm not sure where to go from here. The only things that cannot change at all are:
The result of using cubic() must be stored in variable solve1.
The function named solve() must be created as a result of running cubic().
The only 4 acceptable parameters for cubic() are the 4 values that will be used to make the function.
The resulting function named solve() must be runnable in a separate function after running cubic() as shown above.
I've omitted other parts of the code for simplicity's sake but that's the situation I'm in. All other code, including the function that will be using solve() later, has been tested to work. I'm really and truly stumped. No libraries can be used.
Your naming convention is odd; let's use more descriptive names:
def make_cubic(a, b, c, d):
    def func(x):
        return float(a)*x*x*x + float(b)*x*x + float(c)*x + float(d)
    return func

cubic = make_cubic(*termsList)
whatever(cubic(x))
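If the assignment insists on the exact names from your list of constraints, the same closure pattern applies unchanged. A sketch (termsList and solving() come from your omitted code):
def cubic(a, b, c, d):
    def solve(x):
        # sum the terms instead of returning a tuple
        return float(a)*x*x*x + float(b)*x*x + float(c)*x + float(d)
    return solve  # cubic() hands the inner function back

solve1 = cubic(termsList[0], termsList[1], termsList[2], termsList[3])
solving(solve1)  # pass the function object itself, not solve(x)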
I would like to create a NumPy function that computes the Jacobian of a function at a certain point - with the Jacobian hard coded into the function.
Say I have a vector containing two arbitrary scalars X = np.array([[x],[y]]), and a function f(X) = np.array([[2xy],[3xy]]).
This function has Jacobian J = np.array([[2y, 2x],[3y, 3x]]).
How can I write a function that takes in the array X and returns the Jacobian? Of course, I could do this using array indices (e.g. x = X[0,0]), but I am wondering if there is a way to do this directly, without accessing the individual elements of X.
I am looking for something that works like this:
def foo(x, y):
    return np.array([[2*y, 2*x], [3*y, 3*x]])

X = np.array([[3], [7]])
J = foo(X)
After all, this kind of thing is possible on 1-dimensional arrays; e.g., the following works:
def foo(x):
    return np.array([x, x, x])

X = np.array([1, 2, 3, 4])
J = foo(X)
You want the Jacobian, which is the differential of the function. Is that correct? I'm afraid numpy is not the right tool for that.
Numpy works with fixed numbers, not with variables. That is, given some numbers, you can calculate the value of a function. The differential is a different function that has a special relationship to the original function but is not the same. You cannot just calculate the differential; you must deduce it from the functional form of the original function using differentiation rules. Numpy cannot do that.
As far as I know, you have three options:
use a numeric library to calculate the differential at a specific point. However, you will only get the Jacobian at a specific point (x, y), not a formula for it.
take a look at a Python CAS library, e.g. sympy. There you can define expressions in terms of variables and compute the differential with respect to those variables.
use a library that performs automatic differentiation. Machine learning toolkits like pytorch or tensorflow have excellent support for automatic differentiation and good integration with numpy arrays. They essentially calculate the differential by knowing the differential of every basic operation, like multiplication or addition; for composed functions the chain rule is applied, so the differential can be calculated for arbitrarily complex functions (see the sketch after this list).
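A minimal sketch of the third option, assuming pytorch is installed (f is the function from the question):
import torch

def f(v):
    x, y = v
    return torch.stack([2*x*y, 3*x*y])

X = torch.tensor([3.0, 7.0])
J = torch.autograd.functional.jacobian(f, X)
# tensor([[14., 6.], [21., 9.]]), i.e. [[2y, 2x], [3y, 3x]] at (x, y) = (3, 7)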
I am having a problem with a function I am trying to fit to some data. I have a model, given by the equation inside the function, which I am using to find a value for v. However, the order in which I write the variables in the function definition greatly affects the value the fit gives for v. If, as in the code block below, I have def MAR_fit(v, x), where x is the independent variable, the fit gives a value for v hugely different from the one I get with the definition def MAR_fit(x, v). I haven't had much experience with the curve_fit function in the scipy package, and the docs still left me wondering.
Any help would be great!
def MAR_fit(v, x):
    return (3.*((2.-1.)**2.)*0.05*v)/(2.*(2.-1.)*(60.415**2.)) * (((3.*x*((2.-1.)**2.)*v)/(60.415**2.))+1.)**(-((5./2.)-1.)/(2.-1.))

x = newCD10_AVB1_AMIN01['time_phys'][1:]
y = newCD10_AVB1_AMIN01['MAR'][1:]
popt_tf, pcov = curve_fit(MAR_fit, x, y)
Have a look at the documentation again: it says that the callable you pass to curve_fit (the function you are trying to fit) must take the independent variable as its first argument; further arguments are the parameters you are trying to fit. You must use MAR_fit(x, v), because that is what curve_fit expects.
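Concretely, only the signature needs to change (a sketch reusing the model from the question):
from scipy.optimize import curve_fit

def MAR_fit(x, v):  # independent variable first, fit parameter second
    return (3.*((2.-1.)**2.)*0.05*v)/(2.*(2.-1.)*(60.415**2.)) * (((3.*x*((2.-1.)**2.)*v)/(60.415**2.))+1.)**(-((5./2.)-1.)/(2.-1.))

popt, pcov = curve_fit(MAR_fit, x, y)
v_fit = popt[0]  # the fitted value of v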
Say I have an equation f(x) = x**2 + 1 and I need to find the value of f(2).
The easiest way is to create a function that accepts a parameter and returns the value.
But the problem is, f(x) is created dynamically, so a function cannot be written beforehand to get the value.
I am using cvxpy for an optimization problem. The equation would look something like this:
x = cvx.Variable()
Si = [cvx.square(prev[i] + cvx.sqrt(200 - cvx.square(x))) for i in range(3)]
prev is an array of numbers, so there will be Si[0], Si[1], and Si[2].
How do I find the value of Si[0] for x = 20?
Basically, is there any way to substitute for the said Variable and find the value of the expression when using cvxpy?
Set the value of the variables and then you can obtain the value of the expression, like so:
>>> x.value = 3
>>> Si[0].value
250.281099844341
(although it won't work for x = 20 because then you'd be taking the square root of a negative number).
The general solution to interpreting code on the fly in Python is to use the built-in eval(), but eval is dangerous with user-supplied input, which could do all sorts of nasty things to your system.
Fortunately, there are ways to "sandbox" eval using its additional parameters so that the expression only has access to known "safe" operations. There is an example of how to limit eval to white-listed operations and specifically deny it access to the built-ins. A quick look at that implementation looks close to correct, but I won't claim it is foolproof.
The sympy.sympify I mentioned in my comment uses eval() inside and carries the same warning.
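A minimal sketch of the sandboxing idea (illustrative only; it reduces, but does not eliminate, the risk of eval):
import math

safe_names = {'__builtins__': {}, 'sqrt': math.sqrt}  # whitelist of allowed names
expr = '(prev0 + sqrt(200 - x*x))**2'  # hypothetical dynamically built expression
value = eval(expr, safe_names, {'prev0': 4.0, 'x': 3.0})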
In parallel to your cvx versions, you can use lambda to define functions on the fly:
f = [lambda x, i=j: (prev[i] + (200 - x*x)**.5)**2 for j in range(3)]  # (*)
Then you can evaluate f[0](20), f[1](20), and so on.
(*) The i=j default argument is needed to bind each j to its associated function.
I am facing a problem with the scipy 'leastsq' optimisation routine. If I execute the following program, it says:
raise errors[info][1], errors[info][0]
TypeError: Improper input parameters.
and sometimes index out of range for an array...
from scipy import *
import numpy
from scipy import optimize
from numpy import asarray
from math import *

def func(apar):
    apar = numpy.asarray(apar)
    x = apar[0]
    y = apar[1]
    eqn = abs(x - y)
    return eqn

Init = numpy.asarray([20.0, 10.0])
x = optimize.leastsq(func, Init, full_output=0, col_deriv=0, factor=100, diag=None, warning=True)
print('optimized parameters:', x)
print('******* The End ******')
I don't know what the problem is with my func or the optimize.leastsq() call. Please help me.
leastsq works with vectors, so the residual function, func, needs to return a vector of length at least two. If you replace return eqn with return [eqn, 0.], your example will work. Running it gives:
optimized parameters: (array([10., 10.]), 2)
which is one of the many correct answers for the minimum of the absolute difference.
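In code, the fix looks like this (a sketch reusing func and Init from the question):
def func(apar):
    x, y = apar
    return [abs(x - y), 0.]  # pad so leastsq receives a vector, not a scalar

x = optimize.leastsq(func, Init)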
If you want to minimize a scalar function, fmin is the way to go, optimize.fmin(func, Init).
The issue here is that these two functions, although they look the same for scalars, are aimed at different goals. leastsq finds the least squared error, generally from a set of idealized curves, and is just one way of doing a "best fit". On the other hand, fmin finds the minimum value of a scalar function.
Obviously yours is a toy example, for which neither of these really makes sense, so which way you go will depend on what your final goal is.
Since you want to minimize a simple scalar function (func() returns a single value, not a list of values), scipy.optimize.leastsq() should be replaced by a call to one of the fmin functions (with the appropriate arguments):
x = optimize.fmin(func, Init)
works correctly!
In fact, leastsq() minimizes the sum of squares of a list of values. It does not appear to work on a (list containing a) single value, as in your example (even though it could, in theory).
Just looking at the least squares docs, it might be that your function func is defined incorrectly. You're assuming that you always receive an array of at least length 2, but the optimize function is insanely vague about the length of the array you will receive. You might try printing whatever apar is, to see what you're actually getting.
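A quick diagnostic sketch of that suggestion, reusing the question's func:
def func(apar):
    print('apar =', apar, 'shape:', numpy.shape(apar))  # inspect what leastsq passes in
    x, y = apar
    return abs(x - y)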
If you're using something like ipython or the python shell, you ought to be getting stack traces that show you exactly which line the error is occurring on, so start there. If you can't figure it out from there, posting the stack trace would probably help us.