Use scipy.optimize.root on a numpy array with additional arguments - python

Given the optimization problem (1) below, where p_i, p'_i and w_ji are given for i = 0, ..., 6889, I want to use the Levenberg-Marquardt method to find an optimal solution for R_j and v_j using scipy.optimize.root (I'm open to any other suggestions):

min over R_j, v_j of: sum_i ||p'_i - sum_j w_ji (R_j p_i + v_j)||^2    (1)

However, I don't know how to set up the callable function that needs to be passed to root. So far, all I have is this, which is obviously wrong.
def fun(x, old_points, new_points, weights, n_joints):
    """
    :param x: variable to optimize. It is supposed to encapsulate R and v from (1)
    :param old_points: original vertex positions, (6890,3) numpy array
    :param new_points: transformed vertex positions, (6890,3) numpy array
    :param weights: weight matrix obtained from spectral clustering, (n_joints, 6890) numpy array
    :param n_joints: number of joints
    :return: non-linear cost function to find the root of
    """
    # Extract rotations and offsets
    R = np.array([np.array(x[j * 15:j * 15 + 9]).reshape(3, 3) for j in range(n_joints)])
    v = np.array([np.array(x[j * 15 + 9:j * 15 + 12]) for j in range(n_joints)])
    # Use equation (1) for the non-linear pass.
    # R_j p_i
    Rp = np.einsum('jkl,il', x, old_points)  # x shall replace R
    # w_ji (Rp_ij + v_j)
    wRpv = np.einsum('ji,ijk->ik', weights, Rp + x)  # x shall replace v
    # Set up a non-linear cost function, then compute the squared norm.
    d = new_points - wRpv
    result = np.einsum('ik,ik', d, d)
    return result
EDIT: This is now the correct result.

Use your original fun (but give it a better name), with R and v substituted into the einsum calls:
def fun(x, old_points, new_points, weights, n_joints):
    """
    :param x: variable to optimize. It is supposed to encapsulate R and v from (1)
    :param old_points: original vertex positions, (6890,3) numpy array
    :param new_points: transformed vertex positions, (6890,3) numpy array
    :param weights: weight matrix obtained from spectral clustering, (n_joints, 6890) numpy array
    :param n_joints: number of joints
    :return: non-linear cost function to find the root of
    """
    # Extract rotations and offsets
    R = np.array([np.array(x[j * 15:j * 15 + 9]).reshape(3, 3) for j in range(n_joints)])
    v = np.array([np.array(x[j * 15 + 9:j * 15 + 12]) for j in range(n_joints)])
    # Use equation (1) for the non-linear pass.
    # R_j p_i
    Rp = np.einsum('jkl,il', R, old_points)
    # w_ji (Rp_ij + v_j)
    wRpv = np.einsum('ji,ijk->ik', weights, Rp + v)
    # Set up a non-linear cost function, then compute the squared norm.
    d = new_points - wRpv
    result = np.einsum('ik,ik', d, d)
    return result
Make a closure on it so that it takes a single input (the variable you are solving for):
old_points = ...
new_points = ...
weights = ...
n_joints = ...
rv = ...  # initial guess for x, the flattened R_j and v_j

def cost_function(x):
    return fun(x, old_points, new_points, weights, n_joints)

Now try using cost_function (with rv as the starting point) in root.
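Two further notes (a sketch building on this answer, not part of it). First, scipy.optimize.root also accepts extra parameters directly via its args keyword, so the closure is optional; note, though, that root expects the callable to return an array, not a scalar. Second, since (1) is a sum of squares, Levenberg-Marquardt is more naturally reached through scipy.optimize.least_squares, which wants a vector of residuals rather than the scalar cost:

import numpy as np
from scipy.optimize import least_squares

def residuals(x, old_points, new_points, weights, n_joints):
    # Same body as fun above, but returning the per-coordinate residuals
    # instead of their squared sum, as Levenberg-Marquardt expects.
    R = np.array([x[j * 15:j * 15 + 9].reshape(3, 3) for j in range(n_joints)])
    v = np.array([x[j * 15 + 9:j * 15 + 12] for j in range(n_joints)])
    Rp = np.einsum('jkl,il', R, old_points)
    wRpv = np.einsum('ji,ijk->ik', weights, Rp + v)
    return (new_points - wRpv).ravel()

# x0 is a placeholder initial guess: each R_j starts at the identity.
x0 = np.zeros(n_joints * 15)
for j in range(n_joints):
    x0[j * 15:j * 15 + 9] = np.eye(3).ravel()

sol = least_squares(residuals, x0, method='lm',
                    args=(old_points, new_points, weights, n_joints))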

Related

Get an X(std) value of efficient frontier from a given Y(mean) value with cvxopt?

I'm trying to do portfolio optimization with cvxopt (Python). I'm able to get the efficient frontier with the following code; however, I'm not able to specify a Y value (mean, or return) and get the corresponding X value (std, or risk). If anyone has knowledge about this, I would be more than grateful if you could share it:
import numpy as np
import cvxopt as opt
from cvxopt import blas, solvers
import matplotlib.pyplot as plt

def optimal_portfolio(returns_vec, cov_matrix):
    n = len(returns_vec)
    N = 1000
    mus = [10 ** (5 * t / N - 1.0) for t in range(N)]
    # Convert to cvxopt matrices
    S = opt.matrix(cov_matrix)
    pbar = opt.matrix(returns_vec)
    # Create constraint matrices
    G = -opt.matrix(np.eye(n))  # negative n x n identity matrix
    h = opt.matrix(0.0, (n, 1))
    A = opt.matrix(1.0, (1, n))
    b = opt.matrix(1.0)
    # solvers.options["feastol"] = 1e-9
    # Calculate efficient frontier weights using quadratic programming
    portfolios = [solvers.qp(mu * S, -pbar, G, h, A, b)['x']
                  for mu in mus]
    # Calculate risks and returns for the frontier
    returns = [blas.dot(pbar, x) for x in portfolios]
    risks = [np.sqrt(blas.dot(x, S * x)) for x in portfolios]
    # Fit a 2nd-degree polynomial to the frontier curve
    m1 = np.polyfit(returns, risks, 2)
    x1 = np.sqrt(m1[2] / m1[0])
    # Calculate the optimal portfolio
    wt = solvers.qp(opt.matrix(x1 * S), -pbar, G, h, A, b)['x']
    return np.asarray(wt), returns, risks
return_vec = [0.055355,
0.010748,
0.041505,
0.074884,
0.039795,
0.065079
]
cov_matrix =[[ 0.005329, -0.000572, 0.003320, 0.006792, 0.001580, 0.005316],
[-0.000572, 0.000625, 0.000606, -0.000266, -0.000107, 0.000531],
[0.003320, 0.000606, 0.006610, 0.005421, 0.000990, 0.006852],
[0.006792, -0.000266, 0.005421, 0.011385, 0.002617, 0.009786],
[0.001580, -0.000107, 0.000990, 0.002617, 0.002226, 0.002360],
[0.005316, 0.000531, 0.006852, 0.009786, 0.002360, 0.011215]]
weights, returns, risks = optimal_portfolio(return_vec, cov_matrix)
fig = plt.figure()
plt.ylabel('Return')
plt.xlabel('Risk')
plt.plot(risks, returns, 'y-o')
print(weights)
plt.show()
I would like to find the corresponding risk value for a given return value.
Thank you very much!
Not sure I properly understood your question, but if you are looking for the solution at a given return, then you should impose the minimum required return Y as an extra constraint: append the row -pbar^T to G and the entry -Y to h, so that the QP also enforces pbar^T x >= Y. Roughly:

def optimal_portfolio(returns_vec, cov_matrix, Y):
    ...
    h1 = np.zeros((n, 1))
    h2 = -np.ones((1, 1)) * Y
    h = opt.matrix(np.vstack([h1, h2]))

This will return the optimal portfolio based on a lower bound Y on the return. You can then add an upper bound in the same way, so that you can fix the value of Y. A fuller sketch follows.
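Filling that in (a hypothetical sketch building on the question's setup, not the original answer's code), min_risk_for_return finds the minimum-variance portfolio whose expected return is at least Y:

import numpy as np
import cvxopt as opt
from cvxopt import solvers

def min_risk_for_return(returns_vec, cov_matrix, Y):
    # Sketch: minimise portfolio variance subject to the weights summing
    # to 1, no short selling, and an expected return of at least Y.
    n = len(returns_vec)
    S = opt.matrix(np.asarray(cov_matrix))
    pbar = np.asarray(returns_vec, dtype=float)
    # G x <= h encodes: -x <= 0 (long only) and -pbar.x <= -Y (return >= Y)
    G = opt.matrix(np.vstack([-np.eye(n), -pbar.reshape(1, n)]))
    h = opt.matrix(np.append(np.zeros(n), -Y))
    A = opt.matrix(1.0, (1, n))
    b = opt.matrix(1.0)
    w = np.asarray(solvers.qp(S, opt.matrix(np.zeros(n)), G, h, A, b)['x'])
    risk = float(np.sqrt(w.T @ np.asarray(cov_matrix) @ w))
    return w.ravel(), risk

For example, min_risk_for_return(return_vec, cov_matrix, 0.05) would give the weights and risk of the cheapest portfolio returning at least 5%.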

Improve speed of gradient descent

I am trying to maximize a target function f(x) with the function scipy.optimize.minimize. But it usually takes 4-5 hours to run the code because the function f(x) involves a lot of computation with complex matrices. To improve its speed, I want to use a GPU, and I have already tried the tensorflow package. Since I use numpy to define f(x), I have to convert it into tensorflow's format; however, tensorflow does not support computations on complex matrices. What other package or means can I use? Any suggestions?
To be specific, here is my calculation scheme:
- Calculate the expectation value, where H = x * H_0 and x is the parameter.
- Let phi go through the dynamics of the Schrödinger equation. A different H corresponds to a different phi_end, so the parameter x determines the expectation value.
- Change x and calculate the corresponding expectation value.
- Find the specific x that minimizes the expectation value.
Here is a simple example of part of my code:
import numpy as np
from scipy.linalg import expm
import scipy.optimize as opt

# create the initial complex matrices
N = 2  # dimension of the matrix
H = np.array([[1.0 + 1.0j] * N] * N)  # a complex matrix with shape (N, N)
A = np.array([[0.0j] * N] * N)
A[0][0] = 1.0 + 1j

# calculate the expectation value
def value(phi):
    exp_H = expm(H)  # matrix exponential of H
    new_phi = exp_H @ phi
    # calculate the expectation value of the matrix
    x = H @ new_phi
    expectation = np.inner(np.conj(phi), x)
    return expectation

# constants
tmax = 1
dt = 0.1
nstep = int(tmax / dt)
phi_init = [1.0 + 1.0j] * N

# 1st derivative (right-hand side) of the Schrödinger equation
def dXdt(t, phi, H):
    return -1j * (H @ phi)

def f(X):
    phi = [[0j] * N] * nstep  # store phi at every time step
    phi[0] = phi_init
    # let phi go through the dynamics of the Schrödinger equation
    # (explicit Euler step; note the argument order of dXdt)
    for i in range(nstep - 1):
        phi[i + 1] = phi[i] + dXdt(i * dt, phi[i], X[i] * H) * dt
    # calculate the corresponding value
    f_result = value(phi[-1])
    return np.real(f_result)  # minimize needs a real scalar

# initialize the parameter
X0 = np.ones(nstep)
results = opt.minimize(f, X0)  # minimize the target function
opt_x = results.x
PS:
Python version: 3.7
Operating system: Windows 10
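One workaround for the missing complex support worth noting (a framework-agnostic sketch, not from the question): a complex matrix product can be emulated with four real products, since (A + iB)(C + iD) = (AC - BD) + i(AD + BC). Verified here with numpy; the same pattern can be written with the real-tensor operations of any GPU library:

import numpy as np

def complex_matmul_via_real(Ar, Ai, Br, Bi):
    # (Ar + i*Ai) @ (Br + i*Bi) = (Ar@Br - Ai@Bi) + i*(Ar@Bi + Ai@Br)
    return Ar @ Br - Ai @ Bi, Ar @ Bi + Ai @ Br

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
re, im = complex_matmul_via_real(A.real, A.imag, B.real, B.imag)
print(np.allclose(re + 1j * im, A @ B))  # True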

Finding Fourier coefficients algorithm

Ok, so I have been trying to code a "naive" method to calculate the coefficients for a standard Fourier series in complex form. I am getting very close, I think, but there is some odd behavior. This may be more of a math question than a programming one, but I already asked on math.stackexchange and got zero answers. Here is my working code:
import matplotlib.pyplot as plt
import numpy as np

def coefficients(fn, dx, m, L):
    """
    Calculate the complex form Fourier series coefficients for the first M
    waves.
    :param fn: function to sample
    :param dx: sampling frequency
    :param m: number of waves to compute
    :param L: we are solving on the interval [-L, L]
    :return: an array containing M Fourier coefficients c_m
    """
    N = 2 * L / dx
    coeffs = np.zeros(m, dtype=complex)
    xk = np.arange(-L, L + dx, dx)
    # Calculate the coefficients for each wave
    for mi in range(m):
        coeffs[mi] = 1 / N * sum(fn(xk) * np.exp(-1j * mi * np.pi * xk / L))
    return coeffs

def fourier_graph(range, L, c_coef, function=None, plot=True, err_plot=False):
    """
    Given a range to plot and an array of complex Fourier series coefficients,
    this function plots the representation.
    :param range: the x-axis values to plot
    :param c_coef: the complex Fourier coefficients, calculated by coefficients()
    :param plot: default True; plot the Fourier representation
    :param function: for calculating relative error, provide the function definition
    :param err_plot: plot the relative error; requires a function to compare the solution to
    :return: the Fourier series values for the given range
    """
    # Number of coefficients to sum over
    w = len(c_coef)
    # Initialize solution array
    s = np.zeros(len(range))
    for i, ix in enumerate(range):
        for iw in np.arange(w):
            s[i] += c_coef[iw] * np.exp(1j * iw * np.pi * ix / L)
    # If a plot is desired:
    if plot:
        plt.suptitle("Fourier Series Plot")
        plt.xlabel(r"$t$")
        plt.ylabel(r"$f(x)$")
        plt.plot(range, s, label="Fourier Series")
        if err_plot:
            plt.plot(range, function(range), label="Actual Solution")
            plt.legend()
        plt.show()
    # If an error plot is desired:
    if err_plot:
        err = abs(function(range) - s) / function(range)
        plt.suptitle("Plot of Relative Error")
        plt.xlabel("Steps")
        plt.ylabel("Relative Error")
        plt.plot(range, err)
        plt.show()
    return s

if __name__ == '__main__':
    # Assuming the interval [-l, l], apply the discrete Fourier transform:
    # number of waves to sum
    wvs = 50
    # step size for calculating c_m coefficients (trap rule)
    deltax = .025 * np.pi
    # length of interval for the Fourier series is 2*l
    l = 2 * np.pi
    c_m = coefficients(np.exp, deltax, wvs, l)
    # The x range at which we would like to interpolate function values
    x = np.arange(-l, l, .01)
    sol = fourier_graph(x, l, c_m, np.exp, err_plot=True)
Now, there is a factor of 2/N multiplying each coefficient. However, I have a derivation of this sum in my professor's typed notes that does not include this factor of 2/N. When I derived the form myself, I arrived at a formula with a factor of 1/N that did not cancel no matter what tricks I tried. I asked over at math.stackexchange what was going on, but got no answers.
What I did notice is that adding the 1/N decreased the difference between the actual solution and the Fourier series by a massive amount, but it's still not right. So I tried 2/N and got even better results. I am really trying to figure this out so I can write a nice, clean algorithm for basic Fourier series before I try to learn about fast Fourier transforms.
So what am I doing wrong here?
Assuming c_n is given by A_n as on MathWorld, i.e.

c_n = (1/T) \int_{-T/2}^{T/2} f(x) e^{-2i\pi n x / T} dx,

we can compute the coefficients c_n analytically (which is a good way to compare against your trapezoidal integral). For f(x) = e^x on [-2\pi, 2\pi], i.e. T = 4\pi, this gives, with k = 1 - in/2,

c_n = 1/(4\pi k) * (e^{2\pi k} - e^{-2\pi k}).

So your coefficients are likely to be properly computed (both wrong curves look alike).
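For reference, here is the short derivation of that closed form, plugging f(x) = e^x and T = 4\pi into the integral above:

c_n = (1/4\pi) \int_{-2\pi}^{2\pi} e^x e^{-inx/2} dx = (1/4\pi) \int_{-2\pi}^{2\pi} e^{kx} dx = (e^{2\pi k} - e^{-2\pi k}) / (4\pi k), with k = 1 - in/2.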
Now notice that when you reconstruct f, you sum the coefficients from c_0 up to c_m, but the reconstruction should run from c_{-m} to c_m. So you are missing half of the coefficients.
Below is a fix with your adapted coefficients function and the theoretical coefficients:
import matplotlib.pyplot as plt
import numpy as np

def coefficients(fn, dx, m, L):
    """
    Calculate the complex form Fourier series coefficients for the first M
    waves.
    :param fn: function to sample
    :param dx: sampling frequency
    :param m: number of waves to compute
    :param L: we are solving on the interval [-L, L]
    :return: an array containing M Fourier coefficients c_m
    """
    N = 2 * L / dx
    coeffs = np.zeros(m, dtype=complex)
    xk = np.arange(-L, L + dx, dx)
    # Calculate the coefficients for each wave, now centred around zero:
    # index mi runs over 0..m-1, while the frequency n runs over -m/2..m/2-1
    for mi in range(m):
        n = mi - m / 2
        coeffs[mi] = 1 / N * sum(fn(xk) * np.exp(-1j * n * np.pi * xk / L))
    return coeffs

def fourier_graph(range, L, c_coef, ref, function=None, plot=True, err_plot=False):
    """
    Given a range to plot and an array of complex Fourier series coefficients,
    this function plots the representation.
    :param range: the x-axis values to plot
    :param c_coef: the complex Fourier coefficients, calculated by coefficients()
    :param ref: the analytic (theoretical) coefficients, for comparison
    :param plot: default True; plot the Fourier representation
    :param function: for calculating relative error, provide the function definition
    :param err_plot: plot the relative error; requires a function to compare the solution to
    :return: the Fourier series values for the given range
    """
    # Number of coefficients to sum over
    w = len(c_coef)
    # Initialize solution arrays (complex dtype, so nothing is silently dropped)
    s = np.zeros(len(range), dtype=complex)
    t = np.zeros(len(range), dtype=complex)
    for i, ix in enumerate(range):
        for iw in np.arange(w):
            n = iw - w / 2
            s[i] += c_coef[iw] * np.exp(1j * n * ix * 2 * np.pi / L)
            t[i] += ref[iw] * np.exp(1j * n * ix * 2 * np.pi / L)
    # If a plot is desired:
    if plot:
        plt.suptitle("Fourier Series Plot")
        plt.xlabel(r"$t$")
        plt.ylabel(r"$f(x)$")
        plt.plot(range, s, label="Fourier Series")
        plt.plot(range, t, label="Expected Solution")
        plt.legend()
        if err_plot:
            plt.plot(range, function(range), label="Actual Solution")
            plt.legend()
        plt.show()
    return s

def ref_coefficients(m):
    """
    Calculate the first M theoretical coefficients c_n analytically,
    for f(x) = e^x on [-2*pi, 2*pi] (period T = 4*pi).
    """
    coeffs = np.zeros(m, dtype=complex)
    # Calculate the coefficients for each wave
    for iw in range(m):
        n = iw - m / 2
        k = 1 - (1j * n) / 2
        coeffs[iw] = 1 / (4 * np.pi * k) * (np.exp(2 * np.pi * k) - np.exp(-2 * np.pi * k))
    return coeffs

if __name__ == '__main__':
    # Assuming the interval [-l, l], apply the discrete Fourier transform:
    # number of waves to sum
    wvs = 50
    # step size for calculating c_m coefficients (trap rule)
    deltax = .025 * np.pi
    # length of interval for the Fourier series is 2*l
    l = 2 * np.pi
    c_m = coefficients(np.exp, deltax, wvs, l)
    # The x range at which we would like to interpolate function values
    x = np.arange(-l, l, .01)
    ref = ref_coefficients(wvs)
    sol = fourier_graph(x, 2 * l, c_m, ref, np.exp, err_plot=True)
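As a quick sanity check (a sketch, not part of the answer's code), the numerically integrated coefficients should approach the theoretical ones as the sampling step shrinks:

# Sketch: the gap between numerical and analytic coefficients should shrink
# roughly linearly with the step size.
for step in (0.025 * np.pi, 0.0025 * np.pi):
    num = coefficients(np.exp, step, wvs, 2 * np.pi)
    print(step, np.max(np.abs(num - ref_coefficients(wvs))))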

Elements of a matrix inverse for ill-conditioned matrix

I am trying to find the elements of a matrix inverse for an ill-conditioned matrix.
Consider the complex non-Hermitian matrix M. I know this matrix has one zero eigenvalue, and is therefore singular. However, I need to find the sum of the matrix elements v @ f(M) @ u, where u and v are both vectors and f(x) = 1/x (effectively the matrix inverse). I know that the zero eigenvalue does not contribute to this sum, so there is no explicit issue with the singularity. However, my code is very numerically unstable; I presume this is a consequence of an error in finding the eigenvalues of the system.
Starting by building the preliminary matrices:
import numpy as np
import scipy.linalg

g0 = np.array([0, 0, 1])
g1 = np.array([0, 1, 0])
e0 = np.array([1, 0, 0])

sm = np.outer(g0, e0)
sp = np.outer(e0, g0)

def spre(op):
    return np.kron(np.eye(op.shape[0]), op)

def spost(op):
    return np.kron(op.T, np.eye(op.shape[0]))

def sprepost(op1, op2):
    return np.kron(op1.T, op2)

sm_reg = spre(sm)
sp_reg = spre(sp)
spsm_reg = spre(sp @ sm)

hil_dim = int(g0.shape[0])
cav_proj = np.eye(hil_dim).reshape(hil_dim ** 2,)
rho0 = (np.outer(e0, e0)).reshape(hil_dim ** 2,)

def ham(g):
    return g * (np.outer(g1, e0) + np.outer(e0, g1))

def lind_op(A):
    L = 2 * sprepost(A, A.conj().T) - spre(A.conj().T @ A)
    L += -spost(A.conj().T @ A)
    return L

def JC_lio(g, kappa, gamma):
    unit = -1j * (spre(ham(g)) - spost(ham(g)))
    lind = gamma * lind_op(np.outer(g0, e0)) + kappa * lind_op(np.outer(g0, g1))
    return unit + lind
Now define a function that finds the left and right eigenvectors, and then computes the sum of the matrix elements:
def power_int(g, kappa, gamma):
    # Construct the non-Hermitian matrix of interest
    lio = JC_lio(g, kappa, gamma)
    # Find its eigenvalues and its left and right eigenvectors:
    ev, left, right = scipy.linalg.eig(lio, left=True, right=True)
    # Find the appropriate normalisation factors
    norm = np.array([(left.conj().T[ii]).dot(right.conj().T[ii]) for ii in range(len(ev))])
    # Find the similarity transformation for the problem
    P = right
    Pinv = (left / norm).conj().T
    # Find the projectors onto the eigenbasis
    Proj = [np.outer(P.conj().T[ii], Pinv[ii]) for ii in range(len(ev))]
    # Find the relevant matrix elements between the eigenbasis and the
    # projectors --- this is where the zero eigenvector gets removed
    PowList = [(spsm_reg @ Proj[ii] @ rho0).dot(cav_proj) for ii in range(len(ev))]
    # Apply the function f(x) = 1/x, skipping the zero-eigenvalue term
    Pow = 0
    for ii in range(len(ev)):
        if PowList[ii] != 0:
            Pow += PowList[ii] / ev[ii]
    return -np.pi * np.real(Pow)

# example run:
grange = np.linspace(0.001, 10, 40)
dat = np.array([power_int(g, 1, 1) for g in grange])
Running this code leads to extremely oscillatory results where I expect a smooth curve. I suspect this error is due to poor accuracy in determining the eigenvectors, but I can't seem to find any documentation on this. Any insights would be welcome.
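A quick way to test that suspicion (a diagnostic sketch, not part of the question's code): the left and right eigenvectors of a diagonalisable matrix should be biorthogonal, so left.conj().T @ right should be close to diagonal, and the condition number of the right-eigenvector matrix measures how much the similarity transform amplifies rounding error:

# Diagnostic sketch: probe the conditioning of the eigenbasis at one sample
# point of the grange grid (g = 0.5 is an arbitrary choice).
lio = JC_lio(0.5, 1, 1)
ev, left, right = scipy.linalg.eig(lio, left=True, right=True)
# Biorthogonality check: left^H @ right should be (close to) diagonal.
D = left.conj().T @ right
off_diag = np.max(np.abs(D - np.diag(np.diag(D))))
print("max off-diagonal of left^H right:", off_diag)
# Condition number of the right-eigenvector matrix: large values mean the
# projectors built from it amplify rounding error.
print("cond(P):", np.linalg.cond(right))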

Elegant way to translate set of weighted sums to python

Given a set of terms ||p'_i - sum_j w_ji (R_j p_i + v_j)||^2, where ||...||^2 denotes the squared norm, I want to efficiently set up an array (or a list) in Python filled with these terms. p'_i, p_i and v_j are three-dimensional vectors, and R_j is a 3x3 matrix.
I've already tried this but I don't know how to incorporate the sum over j.
new_points = r_mesh.points()  # p', returns an Nx3 array
old_points = avg_mesh.points()  # p
n_joints = 3
rv = np.arange(n_joints * 15)  # R_j and v_j are stored in rv
weights = np.random.rand(n_joints, len(new_points))  # w
func = [[np.linalg.norm(
    new_points[i] - (weights[j, i] * ((np.array(rv[j * 15:j * 15 + 9]).reshape(3, 3) @ old_points[i])
                                      + np.array(rv[j * 15 + 9:j * 15 + 12]))))
    for j in range(n_joints)] for i in range(len(new_points))]
To make things clearer, here is the original equation that I transformed into a non-linear function in order to feed it to the Levenberg-Marquardt method; it is the same as problem (1) in the first question above.
EDIT: I'm sorry, the image shown before was wrong.
The simplest ("auto pilot", no actual thinking required) method would be np.einsum:
import numpy as np

# set up example:
n_i, n_j = 20, 30
p = np.random.random((n_i, 3))
pp = np.random.random((n_i, 3))
R = np.random.random((n_j, 3, 3))
w = np.random.random((n_j, n_i))
v = np.random.random((n_j, 3))

# now just tell einsum which index is where and let it do its magic

# R_j p_i
Rp = np.einsum('jkl,il', R, p)
# by Einstein convention this will sum over l,
# so Rp has indices ijk

# w_ji (Rp_ij + v_j)
wRpv = np.einsum('ji,ijk->ik', w, Rp + v)
# pure Einstein convention would sum over i and j;
# we override this by passing explicit output indices
# ik to keep i alive

# squared norm
d = pp - wRpv
result = np.einsum('ik,ik', d, d)
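To convince yourself that the einsum calls implement the formula, you can compare against a direct transcription with explicit loops (a sanity-check sketch, using the example arrays set up above):

# Sketch: recompute the cost with plain loops and compare.
result_loops = 0.0
for i in range(n_i):
    acc = np.zeros(3)
    for j in range(n_j):
        acc += w[j, i] * (R[j] @ p[i] + v[j])   # w_ji (R_j p_i + v_j)
    result_loops += np.sum((pp[i] - acc) ** 2)  # squared norm of the residual
print(np.allclose(result_loops, result))        # True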
