Python odeint, parameter

I'm trying to solve a simple equation: dM/dr = r*p(r) in python.
I have the values of p at certain values of r:
p(0)=1, p(1)=3, p(2)=5, p(3)=7, p(4)=9, p(5)=11.
I tried using the following code but I get the error
The size of the array returned by func (6) does not match the size of
y0 (1).
I think the problem is that I'm not matching the p values with the r values correctly. There should only be one initial condition since I am only trying to solve one equation. Any help would be greatly appreciated.
This is my code:
from scipy import integrate
import numpy as np
r = np.array([0, 1, 2, 3, 4, 5])
p = np.array([1, 3, 5, 7, 9, 11])
def deriv(z, r, data):
    M = r*p
    return M
init = np.array([0])
soln = integrate.odeint(deriv, init, p, (r,), full_output=True)
print(soln)

You are seeing this error because the size of init does not match the size of the array returned by deriv().
To solve the problem, change the following line
init = np.array([0])
to
init = np.array([0, 0, 0, 0, 0, 0])
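That resolves the shape mismatch by giving odeint six state variables. If the goal is the single ODE dM/dr = r*p(r), here is a minimal sketch of an alternative, assuming p varies linearly between the tabulated points: interpolate p so that deriv can be evaluated at whatever r values the integrator visits.
from scipy import integrate
from scipy.interpolate import interp1d
import numpy as np
r = np.array([0, 1, 2, 3, 4, 5])
p = np.array([1, 3, 5, 7, 9, 11])
p_of_r = interp1d(r, p, fill_value="extrapolate")  # linear between the tabulated points
def deriv(M, r):
    # dM/dr = r * p(r), a single state variable
    return r * p_of_r(r)
init = np.array([0.0])
soln = integrate.odeint(deriv, init, r)
print(soln)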
For more examples on using 'odeint', see:
http://scipy-cookbook.readthedocs.org/items/numpy_scipy_ordinary_differential_equations.html
http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html

Related

Python array optimization with two constraints

I have an optimization problem where I'm trying to find an array that needs to optimize two functions simultaneously.
In the minimal example below I have two known arrays w and x and an unknown array y. I initialize array y to contain only 1s.
I then define the distance function np.sqrt(np.sum((x-y)**2)) and want to find the array y where
np.sqrt(np.sum((x-y)**2)) approaches 5
np.sqrt(np.sum((w-y)**2)) approaches 8
The code below can be used to successfully optimize y with respect to a single array, but I would like to find the solution that optimizes y with respect to both x and w simultaneously; I am unsure how to specify the two constraints.
y should only consist of values greater than 0.
Any ideas on how to go about this ?
w = np.array([6, 3, 1, 0, 2])
x = np.array([3, 4, 5, 6, 7])
y = np.array([1, 1, 1, 1, 1])
def func(x, y):
    z = np.sqrt(np.sum((x-y)**2)) - 5
    return np.zeros(x.shape[0],) + z
r = opt.root(func, x0=y, method='hybr')
print(r.x)
# array([1.97522498 3.47287981 5.1943792 2.10120135 4.09593969])
print(np.sqrt(np.sum((x-r.x)**2)))
# 5.0
One option is to use scipy.optimize.minimize instead of root. Here you have multiple solver options, and some of them (e.g. SLSQP) allow you to specify multiple constraints. Note that I changed the variable names so that x is the array you want to optimize and y and z define the constraints.
from scipy.optimize import minimize
import numpy as np
x0 = np.array([1, 1, 1, 1, 1])
y = np.array([6, 3, 1, 0, 2])
z = np.array([3, 4, 5, 6, 7])
constraint_x = dict(type='ineq',
                    fun=lambda x: x)  # fulfilled if > 0
constraint_y = dict(type='eq',
                    fun=lambda x: np.linalg.norm(x-y) - 5)  # fulfilled if == 0
constraint_z = dict(type='eq',
                    fun=lambda x: np.linalg.norm(x-z) - 8)  # fulfilled if == 0
res = minimize(fun=lambda x: np.linalg.norm(x), constraints=[constraint_y, constraint_z], x0=x0,
               method='SLSQP', options=dict(ftol=1e-8))  # default 1e-6
print(res.x) # [1.55517124 1.44981672 1.46921122 1.61335466 2.13174483]
print(np.linalg.norm(res.x-y)) # 5.00000000137866
print(np.linalg.norm(res.x-z)) # 8.000000000930026
This is a minimizer, so besides the constraints it also wants a function to minimize. I chose the norm of the solution vector x, but setting the function to a constant (e.g. lambda x: 1) would have also worked; see the sketch below.
Note also that the constraints are not exactly fulfilled; you can increase the accuracy by setting the optional argument ftol to a smaller value, e.g. 1e-10.
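As an illustration of that constant-objective variant, a minimal sketch (reusing x0, constraint_y and constraint_z from the snippet above); with a flat objective, SLSQP effectively just searches for any feasible point:
# Sketch: constant objective, so only the two equality constraints matter.
res_feas = minimize(fun=lambda x: 1.0,
                    constraints=[constraint_y, constraint_z],
                    x0=x0, method='SLSQP')
print(res_feas.x)  # some point with norm(x-y) == 5 and norm(x-z) == 8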
For more information see also the documentation and the corresponding sections for each solver.

<lambda>() takes 1 positional argument but 2 were given

I am trying to implement the same Sage code here: find vector center in python, as follows:
import numpy as np
from scipy.optimize import minimize
def norm(x):
    return x/np.linalg.norm(x)
vectors = np.array([[1,2,3],[4,5,6],[7,8,9]])
unit_vectors = [np.divide(v,norm(v)) for v in vectors]
constraints = [lambda x: np.dot(x,u)-1 for u in unit_vectors]
target = lambda x: norm(x)
res = minimize(target,[3,3,3],constraints)
But I keep getting the same problem:
TypeError: <lambda>() takes 1 positional argument but 2 were given
I am not a mathematician; I just want to write code that can find the center of multidimensional vectors. I tried many things to solve the problem but nothing worked.
Thanks.
The algorithm in the answer you point to is not written in Python, so a direct translation can easily fail. Based on the official docs, I have implemented the following solution. (Note also that the third positional argument of minimize is args, not constraints, so in your code the constraints list was never being interpreted as constraints at all.)
import numpy as np
from scipy.optimize import minimize
x0 = 10, 10, 10
vectors = [
    np.array([1, 2, 3]),
    np.array([1, 0, 2]),
    np.array([3, 2, 4]),
    np.array([5, 2, -1]),
    np.array([1, 1, -1]),
]
unit_vectors = [vector / np.linalg.norm(vector) for vector in vectors]
constraints = [
    {"type": "ineq", "fun": lambda x, u=u: (np.dot(x, u) - 1)} for u in unit_vectors
]
target = lambda x: np.linalg.norm(x)
res = minimize(fun=target, x0=x0, constraints=constraints)
print(res.x)
Output:
[1.38118173 0.77831221 0.42744313]
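One detail worth noting in the constraints list above: the u=u default argument binds each unit vector at definition time. A plain lambda x: np.dot(x, u) - 1 would close over the loop variable, so every constraint would end up using the last u.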

Scipy minimize function on matrix x0

Scipy optimize.minimize seems to accept only a one-dimensional x0. I have a problem where my x0 has shape (n, m). Constraints exist such that each row of x0 should match a certain value.
I could simply iterate through each row and perform the optimization on that; however, I'm hoping to add constraints to the columns at some point.
Is there a known way of handling this? I can't find much discussion of it. I've tried various versions of broadcasting, flattening, etc., but haven't had much luck in creating a reasonable structure.
EDIT: I've added a minimal code example. The constraint condition returns proper zeros when tested with test_x.
import numpy as np
import scipy.optimize
def cost(x, p):
    x.reshape(3, 4)
    p.reshape(3, 4)
    return (x * p).sum()
def demand_constraint(x, d):
    x = x.reshape(3, 4)
    b = x.sum(axis=0) - d
    return np.broadcast_to(b, (3, 4)).flatten()
demand = np.array([10, 14, 8, 26])
prices = np.array([[4, 4, 5, 5], [2, 8, 6, 2], [3, 2, 9, 8]])
x0 = np.zeros_like(prices).flatten()
p0 = prices.flatten()
test_x = np.array([[4, 14, 8, 26], [5, 0, 0, 0], [0, 0, 0, 0]])
cost(x0, p0)
cons = ({'type': 'eq', 'fun': demand_constraint, 'args': (demand,)})
output = scipy.optimize.minimize(cost, x0, args=p0, constraints=cons)
For anyone who may encounter this in a search, the way to handle it is to add a constraint for every individual row: demand_constraint takes a row index and returns a single value.
That single value is then incorporated into its own constraint, and you keep adding constraints (each its own dictionary) until the whole shape is covered. My mistake was to assume that one constraint could apply to all of x; a constraint is better suited to a single slice of x, as sketched below.
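A minimal sketch of that approach (with a hypothetical helper demand_constraint_i; it assumes each entry of demand constrains one column of the 3x4 layout, matching the axis=0 sum in the original demand_constraint, and adds nonnegativity bounds so the cost is bounded below):
import numpy as np
import scipy.optimize
demand = np.array([10, 14, 8, 26])
prices = np.array([[4, 4, 5, 5], [2, 8, 6, 2], [3, 2, 9, 8]])
def cost(x, p):
    return (x * p).sum()
def demand_constraint_i(x, i, d):
    # scalar residual: column i of the flattened 3x4 layout must meet demand
    return x.reshape(3, 4)[:, i].sum() - d[i]
cons = [{'type': 'eq', 'fun': demand_constraint_i, 'args': (i, demand)}
        for i in range(len(demand))]
output = scipy.optimize.minimize(cost, np.zeros(12), args=(prices.flatten(),),
                                 bounds=[(0, None)] * 12, constraints=cons)
print(output.x.reshape(3, 4))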

interpolation between arrays in python

What is the easiest and fastest way to interpolate between two arrays to get a new array?
For example, I have 3 arrays:
x = np.array([0,1,2,3,4,5])
y = np.array([5,4,3,2,1,0])
z = np.array([0,5])
x and y correspond to data points and z is an argument. So at z=0 the x array is valid, and at z=5 the y array is valid. But I need to get a new array for z=1. Linearly, this could be easily solved by:
a = (y-x)/(z[1]-z[0])*1+x
The problem is that the data is not linearly dependent and there are more than 2 arrays of data. Maybe it is possible to somehow use spline interpolation?
This is a univariate to multivariate regression problem. Scipy supports univariate to univariate regression, and multivariate to univariate regression. But you can instead iterate over the outputs, so this is not such a big problem. Below is an example of how it can be done. I've changed the variable names a bit and added a new point:
import numpy as np
from scipy.interpolate import interp1d
X = np.array([0, 5, 10])
Y = np.array([[0, 1, 2, 3, 4, 5],
              [5, 4, 3, 2, 1, 0],
              [8, 6, 5, 1, -4, -5]])
XX = np.array([0, 1, 5])  # Find YY for these
YY = np.zeros((len(XX), Y.shape[1]))
for i in range(Y.shape[1]):
    f = interp1d(X, Y[:, i])
    for j in range(len(XX)):
        YY[j, i] = f(XX[j])
So YY is the result for XX. Hope it helps.
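Since the question mentions splines: interp1d also accepts an axis argument and vectorized query points, so the two loops can be collapsed. A minimal sketch, reusing X, Y and XX from above (kind='quadratic' because X has only 3 points, one short of what the cubic spline needs):
# Spline interpolation along axis 0, vectorized over columns and queries.
f = interp1d(X, Y, kind='quadratic', axis=0)
YY = f(XX)  # shape (len(XX), Y.shape[1])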

Python scipy optimize fmin from matlab fminsearch error

I am converting this MATLAB function handle to Python and am receiving this error (ValueError: setting an array element with a sequence.). I'm pretty new to Python, sorry if there is an obvious error.
In MATLAB:
P = [1 1; 6 1; 6 5]
fh = @(x) sqrt(sum((ones(3,1)*x - P).^2, 2))
[x,fval] = fminsearch(@(x) max(fh(x)),[0 0])
In Python:
P = np.matrix([[1, 1],[ 6, 1],[ 6, 5]])
fh = lambda x:np.sqrt(sum(np.power((np.ones((3,1))*x - P),2),axis = 0))
xopt = scipy.optimize.fmin(func=fh,x0 = np.matrix([0, 0]))
The code works in MATLAB but not in Python. Thanks.
In your MATLAB code, fminsearch is minimizing the max of fh(x). In the Python code, therefore, the func passed to fmin should be the max of fh as well. (fmin also requires the objective to return a scalar, which is why the array-valued fh raised the ValueError.)
import numpy as np
from scipy import optimize
P = np.array([[1, 1],[ 6, 1],[ 6, 5]])
def fh(x):
    return np.max(np.sqrt(np.sum((x - P)**2, axis=1)))
xopt = optimize.fmin(func=fh, x0=np.array([0, 0]))
print(xopt)
yields
Optimization terminated successfully.
Current function value: 3.201562
Iterations: 117
Function evaluations: 222
[ 3.50007127 2.99991092]
