I am using SciPy to optimize a storage facility that uses forward prices for a deal term of one year. Gas can be injected into and withdrawn from this facility whenever the monthly spread (e.g. the March 21 vs May 20 spread) is high enough to cover the variable cost of operation. The attached picture represents the problem (the values there are arbitrary and don't match the values in the code; the picture is just for the concept).
The cells in blue are the "changing cells": volumes that SciPy will adjust to maximize profit. The constraints need to be set up for each month separately. I get errors when I attempt to set up these constraints in SciPy. Here's a reproducible version of the problem:
import numpy as np
import scipy.optimize as opt
p= np.array([4, 5, 6.65, 12]) #p = prices
pmx = np.triu(p - p[:, np.newaxis]) #pmx = price matrix, upper triangular
q =np.triu(np.ones((4,4))) # q = quantity, upper triangular
def profit(q):
    profit = -np.sum(q.flatten() * pmx.flatten())
    return profit

bnds = (0, 10)
bnds = [bnds for i in q.flatten()]

def cons1(q):
    return np.sum(q, axis=1) - 10

#def cons2(q):
#    return np.sum(q, axis=0) - 8

con1 = {'type': 'ineq', 'fun': cons1}
#con2 = {'type': 'ineq', 'fun': cons2}
cons = [con1]  # using only one constraint (con1) to test the model

#sol = opt.minimize(profit, q, method='SLSQP', bounds=bnds, constraints=cons)
sol = opt.minimize(profit, q, method='SLSQP', bounds=bnds)
sol
The model runs fine when I exclude the constraints. When I add one of the constraints, the error I get is:
AxisError: axis 1 is out of bounds for array of dimension 1
I think this has to do with the way I'm specifying the constraints, but I'm not sure. For the constraints, I do need to identify injections and withdrawals and set the constraints as shown in the picture. Help would be appreciated. Thanks!
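A note on the error: scipy.optimize.minimize flattens the 4x4 starting guess into a 1-D vector and passes that flattened vector to the objective and to each constraint function, so np.sum(q, axis=1) inside cons1 is applied to a 1-D array and raises the AxisError. A minimal sketch of constraints that reshape the flattened vector first, assuming the row limit of 10 and column limit of 8 used in the code above:
# Sketch only: SLSQP hands the constraint function the flattened 1-D variable vector,
# so reshape back to 4x4 before summing along rows/columns.
def cons1(qflat):
    q2 = qflat.reshape(4, 4)
    return 10 - np.sum(q2, axis=1)   # row sums at most 10 (vector-valued 'ineq' constraint)

def cons2(qflat):
    q2 = qflat.reshape(4, 4)
    return 8 - np.sum(q2, axis=0)    # column sums at most 8

cons = [{'type': 'ineq', 'fun': cons1},
        {'type': 'ineq', 'fun': cons2}]
sol = opt.minimize(profit, q, method='SLSQP', bounds=bnds, constraints=cons)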
As an alternative to scipy.optimize.minimize, here is a solution with Python GEKKO.
import numpy as np
from gekko import GEKKO

p = np.array([4, 5, 6.65, 12])       # p = prices
pmx = np.triu(p - p[:, np.newaxis])  # pmx = price matrix, upper triangular

m = GEKKO(remote=False)
q = m.Array(m.Var, (4, 4), lb=0, ub=10)

# only the strictly upper triangle can change; diagonal and below are fixed at 0
for i in range(4):
    for j in range(4):
        if j <= i:
            q[i, j].upper = 0  # set upper bound = 0

def profit(q):
    return np.sum(q.flatten() * pmx.flatten())

for i in range(4):
    m.Equation(np.sum(q[i, :]) <= 10)  # row sums limited to 10
    m.Equation(np.sum(q[:, i]) <= 8)   # column sums limited to 8

m.Maximize(profit(q))
m.solve()
print(q)
This gives the solution:
[[[0.0] [2.5432017412] [3.7228765674] [3.7339217013]]
[[0.0] [0.0] [4.2771234426] [4.2660783187]]
[[0.0] [0.0] [0.0] [0.0]]
[[0.0] [0.0] [0.0] [0.0]]]
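As a quick check, the solved values can be pulled out of the GEKKO variable array (each solved variable exposes its result as a one-element .value list) and compared against the limits; a small sketch, assuming the 10/8 limits above:
# Extract the solved values and confirm the row/column constraints are respected.
qv = np.array([[q[i, j].value[0] for j in range(4)] for i in range(4)])
print(qv.sum(axis=1))     # each row total should be <= 10
print(qv.sum(axis=0))     # each column total should be <= 8
print(np.sum(qv * pmx))   # objective value (profit)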
In the following code I want to optimize (maximize the power output of) a wind farm using scipy.optimize. The decision variable in each iteration is c, where c = 0 means a wind turbine is off and c = 1 means it is running. So in each iteration I want to change the c values and get the power output. The problem is that the initial guess x0 (which is c) stays the same at every iteration and never updates, so the optimization does nothing: neither c nor the power output changes. How can I solve this? I have added some print statements to show that c and the power output (the function to be optimized) do not change; whatever c values I pass as the initial guess x0 give a power output that never changes.
import time
from py_wake.examples.data.hornsrev1 import V80
from py_wake.examples.data.hornsrev1 import Hornsrev1Site  # We work with the Horns Rev 1 site, which comes already set up with PyWake.
from py_wake import BastankhahGaussian
from py_wake.turbulence_models import GCLTurbulence
from py_wake.deflection_models.jimenez import JimenezWakeDeflection
from scipy.optimize import minimize
from py_wake.wind_turbines.power_ct_functions import PowerCtFunctionList, PowerCtTabular
import numpy as np

def newSite(x, y):
    xNew = np.array([x[0] + 560 * i for i in range(4)])
    yNew = np.array([y[0] + 560 * i for i in range(4)])
    x_newsite = np.array([xNew[0], xNew[0], xNew[0], xNew[1]])
    y_newsite = np.array([yNew[0], yNew[1], yNew[2], yNew[0]])
    return (x_newsite, y_newsite)

def wt_simulation(c):
    c = c.reshape(4, 360, 23)
    site = Hornsrev1Site()
    x, y = site.initial_position.T
    x_newsite, y_newsite = newSite(x, y)
    windTurbines = V80()

    # repeat each 10-degree sector's mid value across the whole sector
    for item in range(4):
        for j in range(10, 370, 10):
            for i in range(j - 10, j):
                c[item][i] = c[item][j - 5]

    windTurbines.powerCtFunction = PowerCtFunctionList(
        key='operating',
        powerCtFunction_lst=[PowerCtTabular(ws=[0, 100], power=[0, 0], power_unit='w', ct=[0, 0]),  # 0=No power and ct
                             windTurbines.powerCtFunction],  # 1=Normal operation
        default_value=1)

    operating = np.ones((4, 360, 23))  # shape=(#wt, wd, ws)
    operating[c <= 0.5] = 0

    wf_model = BastankhahGaussian(site, windTurbines,
                                  deflectionModel=JimenezWakeDeflection(),
                                  turbulenceModel=GCLTurbulence())
    # run wind farm simulation
    sim_res = wf_model(
        x_newsite, y_newsite,  # wind turbine positions
        h=None,   # wind turbine heights (defaults to the heights defined in windTurbines)
        wd=None,  # wind direction (defaults to site.default_wd (0, 1, ..., 360 if not overridden))
        ws=None,  # wind speed (defaults to site.default_ws (3, 4, ..., 25 m/s if not overridden))
        operating=operating
    )
    print(-float(np.sum(sim_res.Power)))
    return -float(np.sum(sim_res.Power))  # negative because scipy minimizes

t0 = time.perf_counter()

def solve():
    wt = 4  # for V80
    wd = 360
    ws = 23
    x0 = np.ones((wt, wd, ws)).reshape(-1)  # initial value for c
    b = (0, 1)
    bounds = np.full((wt, wd, ws, 2), b).reshape(-1, 2)
    res = minimize(wt_simulation, x0=x0, bounds=bounds)
    return res

res = solve()
print(f'success status: {res.success}')
print(f'aep: {-res.fun}')  # negative to get the true maximum aep
print(f'c values: {res.x}\n')
print(f'elapse: {round(time.perf_counter() - t0)}s')
sim_res = wt_simulation(res.x)
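A likely explanation for why x0 never moves (my assumption, not stated in the post): with bounds and no method given, scipy.optimize.minimize defaults to L-BFGS-B, which estimates gradients by finite differences with a step of about 1e-8. Since wt_simulation only thresholds c at 0.5, such a tiny perturbation never switches a turbine on or off, so every estimated gradient component is zero and the optimizer stops at the initial guess. A minimal sketch of the same effect on a toy thresholded objective:
# Toy demonstration: a thresholded objective has zero finite-difference gradient
# almost everywhere, so a gradient-based optimizer never moves off x0.
import numpy as np
from scipy.optimize import approx_fprime, minimize

def thresholded_obj(c):
    operating = (c > 0.5).astype(float)  # same on/off rounding as wt_simulation
    return -operating.sum()              # stand-in for the (negative) power output

x0 = np.ones(10)
print(approx_fprime(x0, thresholded_obj, 1e-8))   # all zeros
res = minimize(thresholded_obj, x0=x0, bounds=[(0, 1)] * 10)
print(res.x)                                      # unchanged from x0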
import numpy as np
from numpy import sin, cos, pi
from matplotlib.pyplot import *
rng = np.random.default_rng(42)
N = 200
center = 10, 15
sigmas = 10, 2
theta = 20 / 180 * pi
# covariance matrix
rotmat = np.array([[cos(theta), -sin(theta)],[sin(theta), cos(theta)]])
diagmat = np.diagflat(sigmas)
mean = np.array([-1, -2, -3])
# covar = rotmat @ diagmat @ rotmat.T
covar = np.array([[2, 2, 0], [2, 3, 1], [0, 1, 19]])
print('covariance matrix:')
print(covar)
eigval, eigvec = np.linalg.eigh(covar)
print(f'eigenvalues: {eigval}\neigenvectors:\n{eigvec}')
print('angle of eigvector corresponding to larger eigenvalue:',
180 /pi * np.arctan2(eigvec[1,1], eigvec[0,1]))
# PCA
mean = data.mean(axis=0)
print('mean:', mean)
# S1: explicit sum
S1 = np.zeros((2,2), dtype=float)
print(len(data))
for i in range(len(data)):
    S1 += np.outer(data[i] - mean, data[i] - mean)
S1 /= len(data)
print(f'S1= (explicit sum)\n{S1}')
# S2:
S2 = np.cov(data, rowvar=False, bias=True)
print(f'S2= (np.cov)\n{S2}')
# PCA:
lambdas, u = np.linalg.eigh(S2)
print(f'\nPCA\nlambda={lambdas}\nu=\n{u}')
u1 = u[:,1] # largest
print('u1=\n',u1)
print(f'first principal component angle: {180/pi*np.arctan2(u1[1], u1[0])}')
After that I need to perform PCA on the above data with one principal component and with two principal components. What is the fractional explained variance in these two cases?
For generating the data, you need two tricks:
Compute a "square root" of the covariance matrix S using an eigenvalue-eigenvector factorization.
Use the standard formula for generating a random normal with a given mean and covariance. With NumPy it works on vectors (quoting from help(np.random.randn)):
For random samples from :math:`N(\mu, \sigma^2)`, use:
``sigma * np.random.randn(...) + mu``
Example:
import numpy as np
import matplotlib.pyplot as plt
# Part 1: generating random normal data with the given mean and covariance
N = 200
# covariance matrix
S = np.array([[2, 2, 0], [2, 3, 1], [0, 1, 19]])
# mean
mu = np.array([[-1, -2, -3]]).T
# get "square root" of covariance matrix via eigenfactorization
w, v = np.linalg.eig(S)
sigma = np.sqrt(w) * v
# ready, set, go!
A = sigma @ np.random.randn(3, N) + mu
print(f'sample covariance:\n{np.cov(A)}')
# sample covariance:
# [[ 1.70899164 1.74288639 0.21190326]
# [ 1.74288639 2.59595547 1.2822817 ]
# [ 0.21190326 1.2822817 22.04077608]]
print(f'sample mean:\n{A.mean(axis=1)}')
# sample mean:
# [-1.02385787 -1.87783415 -2.96077204]
# --------------------------------------------
# Part 2: principal component analysis on random data A
# estimate the sample covariance
R = np.cov(A)
# do the PCA
lam, u = np.linalg.eig(R)
# fractional explained variance is the relative magnitude of
# the accumulated eigenvalues
# reorder the eigenvalues & vectors with hottest eigenvalues first
col_order = np.argsort(lam)[::-1]
lam = lam[col_order]
u = u[:, col_order]
print(f'eigenvalues: {lam}')
# eigenvalues: [22.13020272 3.87946467 0.3360558 ]
var_explained = lam.cumsum() / lam.sum()
print(f'fractional explained variance: {var_explained}')
# fractional explained variance: [0.83999223 0.98724439 1. ]
# ^^ 84% in first dimension alone,
# 99% in first two dimensions
# do the projection
B = u.T @ A
# now the variance in B is concentrated in the first two dimensions
print(f'covariance after PCA projection:\n{np.cov(B)}')
# covariance after PCA projection:
# [[ 2.21302027e+01 -2.68545720e-15 -1.60675493e-15]
#  [-2.68545720e-15  3.87946467e+00 -1.19613978e-15]
#  [-1.60675493e-15 -1.19613978e-15  3.36055802e-01]]
# scatter plot
plt.plot(B[0], B[1], '.')
plt.axis('equal')
plt.grid('on')
plt.xlabel('principal axis 0')
plt.ylabel('principal axis 1')
plt.title('Random data projected onto two principal axes')
# project back using ONLY a two dimensional subspace of B
# i.e. drop the last eigenvector
A_approx = u[:, :2] @ B[:2, :]
# error analysis
err3 = A - A_approx
mse = (err3**2).sum(axis=0).mean()
print(f'predicted error variance: {lam[-1]}')
print(f'measured error variance: {mse}')
# predicted error variance: 0.3360558019705344
# measured error variance: 0.41137559916273914
I am trying to fit a quadratic function to some data, and I'm trying to do this without using numpy's polyfit function.
Mathematically I tried to follow this website https://neutrium.net/mathematics/least-squares-fitting-of-a-polynomial/ but somehow I don't think that I'm doing it right. If anyone could assist me that would be great, or if you could suggest another way to do it, that would also be awesome.
What I've tried so far:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
ones = np.ones(3)
A = np.array( ((0,1),(1,1),(2,1)))
xfeature = A.T[0]
squaredfeature = A.T[0] ** 2
b = np.array( (1,2,0), ndmin=2 ).T
b = b.reshape(3)
features = np.concatenate((np.vstack(ones), np.vstack(xfeature), np.vstack(squaredfeature)), axis = 1)
featuresc = features.copy()
print(features)
m_det = np.linalg.det(features)
print(m_det)
determinants = []
for i in range(3):
    featuresc.T[i] = b
    print(featuresc)
    det = np.linalg.det(featuresc)
    determinants.append(det)
    print(det)
    featuresc = features.copy()
determinants = determinants / m_det
print(determinants)
plt.scatter(A.T[0],b)
u = np.linspace(0,3,100)
plt.plot(u, u**2*determinants[2] + u*determinants[1] + determinants[0] )
p2 = np.polyfit(A.T[0],b,2)
plt.plot(u, np.polyval(p2,u), 'b--')
plt.show()
As you can see, my curve doesn't compare well to numpy's polyfit curve.
Update:
I went through my code and removed all the stupid mistakes, and now it works when I fit over three points, but I have no idea how to fit over more than three points.
This is the new code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
ones = np.ones(3)
A = np.array( ((0,1),(1,1),(2,1)))
xfeature = A.T[0]
squaredfeature = A.T[0] ** 2
b = np.array( (1,2,0), ndmin=2 ).T
b = b.reshape(3)
features = np.concatenate((np.vstack(ones), np.vstack(xfeature), np.vstack(squaredfeature)), axis = 1)
featuresc = features.copy()
print(features)
m_det = np.linalg.det(features)
print(m_det)
determinants = []
for i in range(3):
    featuresc.T[i] = b
    print(featuresc)
    det = np.linalg.det(featuresc)
    determinants.append(det)
    print(det)
    featuresc = features.copy()
determinants = determinants / m_det
print(determinants)
plt.scatter(A.T[0],b)
u = np.linspace(0,3,100)
plt.plot(u, u**2*determinants[2] + u*determinants[1] + determinants[0] )
p2 = np.polyfit(A.T[0],b,2)
plt.plot(u, np.polyval(p2,u), 'r--')
plt.show()
Instead of using Cramer's Rule, actually solve the system using least squares. Remember that Cramer's Rule will only work if the total number of points you have equals the desired order of polynomial plus 1.
If you don't have this, then Cramer's Rule will not work, as you're trying to find an exact solution to the problem. If you have more points, the method is unsuitable because you would be creating an overdetermined system of equations.
To adapt this to more points, numpy.linalg.lstsq is a better fit, as it solves Ax = b by computing the vector x that minimizes the Euclidean norm of the residual. Therefore, remove the y values from the last column of the features matrix and use numpy.linalg.lstsq to solve for the coefficients:
import numpy as np
import matplotlib.pyplot as plt
ones = np.ones(4)
xfeature = np.asarray([0,1,2,3])
squaredfeature = xfeature ** 2
b = np.asarray([1,2,0,3])
features = np.concatenate((np.vstack(ones),np.vstack(xfeature),np.vstack(squaredfeature)), axis = 1) # Change - remove the y values
determinants = np.linalg.lstsq(features, b)[0] # Change - use least squares
plt.scatter(xfeature,b)
u = np.linspace(0,3,100)
plt.plot(u, u**2*determinants[2] + u*determinants[1] + determinants[0] )
plt.show()
I get this plot now, which matches the dashed curve in your graph and also matches what numpy.polyfit gives you:
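For a quick sanity check, the least-squares coefficients can be compared directly with numpy.polyfit, which fits the same polynomial but returns coefficients ordered from highest degree to lowest; a small sketch using the same points as above:
import numpy as np

xfeature = np.asarray([0, 1, 2, 3])
b = np.asarray([1, 2, 0, 3])
features = np.vstack([np.ones_like(xfeature), xfeature, xfeature**2]).T

coeffs = np.linalg.lstsq(features, b, rcond=None)[0]   # [c0, c1, c2] for c0 + c1*x + c2*x^2
p2 = np.polyfit(xfeature, b, 2)                        # [c2, c1, c0], highest degree first
print(coeffs, p2[::-1])                                # should agree to numerical precision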
I need to find the minimum variance portfolio using scipy.optimize. I have the following covariance matrix:
[[0.00076593 0.00087803 0.00082423 0.00094616 0.00090782]
[0.00087803 0.0015638 0.00086395 0.00097013 0.00107642]
[0.00082423 0.00086395 0.00092068 0.00104474 0.00099357]
[0.00094616 0.00097013 0.00104474 0.00133651 0.00119894]
[0.00090782 0.00107642 0.00099357 0.00119894 0.00132278]]
And the initial guess for the portfolio weights is:
[0.2,0.2,0.2,0.2,0.2]
Running the following code, where V is the covariance matrix from above:
import pandas as pd
import numpy as np
from scipy.optimize import minimize
from tqdm import tqdm

# Minimum variance
cov_rol = df_smartbeta.rolling(window=36).cov()
cov_rol = cov_rol.values.reshape(665, 5, 5)
V = cov_rol

def objective(w, V):  # portfolio variance, 1x1 array
    w1 = np.matrix(w)
    wt = np.transpose(w1)
    return (w1 * V) * wt

def constraint1(x):  # constraint 1: sum of weights = 1
    return np.sum(x) - 1

w0 = np.array([0.2, 0.2, 0.2, 0.2, 0.2])  # initial guess
cons = ({"type": "eq", "fun": constraint1})  # merge constraints
b = (0, None)  # define bounds. Lower bound: 0. Upper bound: None
bnds = (b, b, b, b, b)

count = 0
w_min = []
for i in tqdm(range(len(cov_rol))):
    res = minimize(objective, w0, args=(V[count],), method="SLSQP", bounds=bnds, constraints=cons)
    w_min.append(res.x)
    count += 1

w_min_dataframe = pd.DataFrame(w_min)
The code just returns the initial guess, although it should yield
[1.0,0,0,0,0]
If I rescale the covariance matrix by multiplying it by 100, the code works. Does anyone know why this is happening and how to solve the issue without rescaling? I have seen some similar questions before but haven't found any solutions that work...
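A plausible cause (an assumption, not verified here): with covariance entries on the order of 1e-3, the objective and its changes between iterates are tiny relative to SLSQP's default convergence tolerance (ftol defaults to 1e-6), so the solver can report convergence at the initial guess; multiplying by 100 simply makes the numbers large enough to register. If that is the cause, tightening the tolerance should work without rescaling; a minimal sketch on a single covariance matrix:
# Sketch only: same objective, but with a much tighter SLSQP tolerance so the
# very small variance values still register as progress.
import numpy as np
from scipy.optimize import minimize

V0 = np.array([[0.00076593, 0.00087803, 0.00082423, 0.00094616, 0.00090782],
               [0.00087803, 0.0015638,  0.00086395, 0.00097013, 0.00107642],
               [0.00082423, 0.00086395, 0.00092068, 0.00104474, 0.00099357],
               [0.00094616, 0.00097013, 0.00104474, 0.00133651, 0.00119894],
               [0.00090782, 0.00107642, 0.00099357, 0.00119894, 0.00132278]])

def objective(w, V):
    return float(w @ V @ w)  # portfolio variance as a plain scalar

w0 = np.full(5, 0.2)
cons = {"type": "eq", "fun": lambda w: np.sum(w) - 1}
bnds = [(0, None)] * 5
res = minimize(objective, w0, args=(V0,), method="SLSQP",
               bounds=bnds, constraints=cons, options={"ftol": 1e-12})
print(res.x)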
I previously implemented the original Bayesian Probabilistic Matrix Factorization (BPMF) model in pymc3. See my previous question for reference, data source, and problem setup. Per the answer to that question from @twiecki, I've implemented a variation of the model using LKJCorr priors for the correlation matrices and uniform priors for the standard deviations. In the original model, the covariance matrices are drawn from Wishart distributions, but due to current limitations of pymc3, the Wishart distribution cannot be sampled from properly. This answer to a loosely related question provides a succinct explanation for the choice of LKJCorr priors. The new model is below.
import logging

import pymc3 as pm
import numpy as np
import theano.tensor as t

n, m = train.shape
dim = 10     # dimensionality
beta_0 = 1   # scaling factor for lambdas; unclear on its use
alpha = 2    # fixed precision for likelihood function
std = .05    # how much noise to use for model initialization

# We will use separate priors for sigma and correlation matrix.
# In order to convert the upper triangular correlation values to a
# complete correlation matrix, we need to construct an index matrix:
n_elem = dim * (dim - 1) // 2
tri_index = np.zeros([dim, dim], dtype=int)
tri_index[np.triu_indices(dim, k=1)] = np.arange(n_elem)
tri_index[np.triu_indices(dim, k=1)[::-1]] = np.arange(n_elem)

logging.info('building the BPMF model')
with pm.Model() as bpmf:
    # Specify user feature matrix
    sigma_u = pm.Uniform('sigma_u', shape=dim)
    corr_triangle_u = pm.LKJCorr(
        'corr_u', n=1, p=dim,
        testval=np.random.randn(n_elem) * std)
    corr_matrix_u = corr_triangle_u[tri_index]
    corr_matrix_u = t.fill_diagonal(corr_matrix_u, 1)
    cov_matrix_u = t.diag(sigma_u).dot(corr_matrix_u.dot(t.diag(sigma_u)))
    lambda_u = t.nlinalg.matrix_inverse(cov_matrix_u)

    mu_u = pm.Normal(
        'mu_u', mu=0, tau=beta_0 * lambda_u, shape=dim,
        testval=np.random.randn(dim) * std)
    U = pm.MvNormal(
        'U', mu=mu_u, tau=lambda_u,
        shape=(n, dim), testval=np.random.randn(n, dim) * std)

    # Specify item feature matrix
    sigma_v = pm.Uniform('sigma_v', shape=dim)
    corr_triangle_v = pm.LKJCorr(
        'corr_v', n=1, p=dim,
        testval=np.random.randn(n_elem) * std)
    corr_matrix_v = corr_triangle_v[tri_index]
    corr_matrix_v = t.fill_diagonal(corr_matrix_v, 1)
    cov_matrix_v = t.diag(sigma_v).dot(corr_matrix_v.dot(t.diag(sigma_v)))
    lambda_v = t.nlinalg.matrix_inverse(cov_matrix_v)

    mu_v = pm.Normal(
        'mu_v', mu=0, tau=beta_0 * lambda_v, shape=dim,
        testval=np.random.randn(dim) * std)
    V = pm.MvNormal(
        'V', mu=mu_v, tau=lambda_v,
        shape=(m, dim), testval=np.random.randn(m, dim) * std)

    # Specify rating likelihood function
    R = pm.Normal(
        'R', mu=t.dot(U, V.T), tau=alpha * np.ones((n, m)),
        observed=train)

# `start` is the start dictionary obtained from running find_MAP for PMF.
# See the previous post for PMF code.
for key in bpmf.test_point:
    if key not in start:
        start[key] = bpmf.test_point[key]

with bpmf:
    step = pm.NUTS(scaling=start)
The goal with this reimplementation was to produce a model that could be estimated using the NUTS sampler. Unfortunately, I'm still getting the same error at the last line:
PositiveDefiniteError: Scaling is not positive definite. Simple check failed. Diagonal contains negatives. Check indexes [ 0 1 2 3 ... 1030 1031 1032 1033 1034 ]
I've made all the code for PMF, BPMF, and this modified BPMF available in this gist to make it simple to replicate the error. All you need to do is download the data (also referenced in the gist).
It looks like you are passing the complete precision matrix into the normal distribution:
mu_u = pm.Normal(
    'mu_u', mu=0, tau=beta_0 * lambda_u, shape=dim,
    testval=np.random.randn(dim) * std)
I assume you only want to pass the diagonal values:
mu_u = pm.Normal(
    'mu_u', mu=0, tau=beta_0 * t.diag(lambda_u), shape=dim,
    testval=np.random.randn(dim) * std)
Does this change to mu_u and mu_v fix it for you?