Can the Python module gekko handle complex numbers? - python

I'm faced with a Hessenberg index-2 DAE, and I'm trying to solve it using the Python module gekko.
After a few days of trial and error, I think I'm not too far from a working code. But I've just realized that maybe gekko is not able to handle complex numbers?
Here is a minimal working example:
import numpy as np
from gekko import GEKKO
# Define the simulation and its parameters
g = GEKKO()
g.options.IMODE = 7
g.options.NODES = 1
# define the time array
n_steps = 100
Time = np.linspace(0, 2 * np.pi, n_steps)
g.time = Time
# Initialise the variables
x = g.Var(0.0)
# Write the model's equations
g.Equation(x.dt() == 1.0j * x)
# solve the equations
g.solve(disp = False)
print(x.value)
If I try to run this code, I expect to find the standard complex exponential.
But instead, I get the following error:
File "gekko.py", line 2185, in solve
raise Exception(response)
Exception: #error: Model Expression
*** Error in syntax of function string: Missing operator
Position: 9
$v1-(((1j)*(v1)))
Could you confirm that gekko cannot handle complex numbers? And maybe suggest another python DAE solver that does?
Thank you so much!

Gekko does not natively handle complex numbers; the automatic differentiation and gradient-based solvers haven't been programmed with them in mind. As discussed in the comments, a common workaround is to split each complex variable into its real and imaginary parts and solve the resulting real system. There are additional suggestions at Application of complex numbers in Linear Programming?
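For this particular example, here is a minimal sketch of that splitting (the variable names xr and xi and the initial condition 1 + 0j are my own choices, not from the question): the single complex equation dx/dt = 1j*x becomes the real pair dxr/dt = -xi, dxi/dt = xr.
import numpy as np
from gekko import GEKKO

g = GEKKO()
g.options.IMODE = 7
g.time = np.linspace(0, 2 * np.pi, 100)

# split x = xr + 1j*xi into two real variables
xr = g.Var(1.0)  # real part, x(0) = 1
xi = g.Var(0.0)  # imaginary part, x(0) = 0

# dx/dt = 1j*x  =>  dxr/dt = -xi, dxi/dt = xr
g.Equation(xr.dt() == -xi)
g.Equation(xi.dt() == xr)

g.solve(disp=False)
# xr should follow cos(t) and xi should follow sin(t)
print(xr.value[-1], xi.value[-1])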

Related

Problems regarding Pyomo-provided math functions

I am trying to solve a bilevel problem using Pyomo in Python. However, when I try to run the code, I am getting the following error:
"Implicit conversion of Pyomo NumericValue type `mon' to a float is disabled. This error is often the result of using Pyomo components as arguments to one of the Python built-in math module functions when
defining expressions. Avoid this error by using Pyomo-provided math functions."
In Pyomo's documentation there is no reference to Pyomo-provided functions. I want to know how I can modify the penultimate line of the code shown so that model.rn[i,j] meets the integer requirement.
The following is my code:
import random
import matplotlib.pyplot as plt
import numpy as np
from pyomo.environ import *
from pyomo.bilevel import *
from pyomo.bilevel.components import SubModel
from pyomo.opt import SolverFactory

capacity = [150, 80, 65]
model = ConcreteModel()
model.sub = SubModel()
model.M = RangeSet(1, 3)
model.N = RangeSet(1, 12)
model.f = Param(model.M, model.N, within=NonNegativeIntegers, initialize=20)
model.v = Param(model.M, model.N, within=NonNegativeIntegers)
model.sub.x = Param(within=Binary)
model.r = Var(model.M, model.N, within=PercentFraction)
model.rp = Var(model.M, model.N, within=NonNegativeReals, bounds=(0, 10))
model.rn = Var(model.M, model.N, within=NonNegativeIntegers)
model.un = Var(model.M, model.N, within=NonNegativeIntegers)
for j in range(1, 13):
    model.v[1, j] = capacity[0] - model.f[1, j]
    model.v[2, j] = capacity[1] - model.f[2, j]
    model.v[3, j] = capacity[2] - model.f[3, j]
for j in range(1, 13):
    for i in range(1, 4):
        model.rn[i, j] = floor(model.v[i, j] * model.r[i, j])
        model.un[i, j] = model.v[i, j] - model.rn[i, j]
That's tricky to do. As far as I know, floor can only be applied to the values of a Pyomo object, and model.r is a Pyomo variable rather than a plain value. It is not a problem for a parameter, but it is for a variable.
You may want to write out constraints that model the Python floor function instead.
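A minimal sketch of that constraint-based idea (the placeholder data v and the tolerance eps are my own assumptions, not from the question): for an integer variable rn, the pair of constraints rn <= v*r and rn >= v*r - 1 + eps forces rn to equal floor(v*r), as long as v*r is not within eps of an integer.
from pyomo.environ import (ConcreteModel, RangeSet, Var, Constraint,
                           NonNegativeIntegers, PercentFraction)

eps = 1e-6
m = ConcreteModel()
m.M = RangeSet(1, 3)
m.N = RangeSet(1, 12)
m.r = Var(m.M, m.N, within=PercentFraction)
m.rn = Var(m.M, m.N, within=NonNegativeIntegers)

v = {(i, j): 100 for i in m.M for j in m.N}  # placeholder data for v[i,j]

# rn <= v*r and rn >= v*r - 1 + eps together give rn == floor(v*r)
m.floor_ub = Constraint(m.M, m.N, rule=lambda m, i, j: m.rn[i, j] <= v[i, j] * m.r[i, j])
m.floor_lb = Constraint(m.M, m.N, rule=lambda m, i, j: m.rn[i, j] >= v[i, j] * m.r[i, j] - 1 + eps)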

Python: Using scipy optimize minimize does not minimize function

I'm new to Python and I'm trying to figure out how everything works. I have a little problem with the minimize function of the scipy.optimize package. I try to minimize a given function with some start values, but Python gives me very high parameter values.
This is my simple code:
import numpy as np
from scipy.optimize import minimize

global array
y_wert = np.array([1,2,3,4,5,6,7,8])
global x_wert
x_wert = np.array([1,2,3,4,5,6,7,8])

def Test(x):
    Summe = 0
    for i in range(0,len(y_wert)):
        Summe = Summe + (y_wert[i] - (x[0]*x_wert[i]+x[1]))
    return(Summe)

x_0 = [1,0]
xopt = minimize(Test,x_0, method='nelder-mead',options={'xatol': 1e-8, 'disp': True})
print(xopt)
If I run this script, the best parameters found are:
[1.02325529e+44, 9.52347084e+40]
which really doesn't solve this problem. I've also tried some slightly different start values, but that doesn't solve my problem.
Can anyone give me a clue as to where my mistake lies?
Thanks a lot for your help!
Your test function is effectively a straight line with negative gradient, so there is no minimum; it is an infinitely decreasing function, which explains your large results. Try summing the squared residuals instead.
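A minimal sketch of that change, keeping the rest of the script as posted: squaring each residual makes the cost bounded below, and the minimizer then recovers roughly slope 1 and intercept 0 for this data.
def Test(x):
    # sum of squared residuals instead of a plain signed sum,
    # so the cost is bounded below and has a minimum
    Summe = 0
    for i in range(0, len(y_wert)):
        Summe = Summe + (y_wert[i] - (x[0]*x_wert[i] + x[1]))**2
    return Summe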

sympy issue solving a linear system

I am using Python v.3.6 running on the Jupyter QtConsole. I am attempting to do some linear algebra on a dataset using Sympy for a personal project linking predictions with survey scores.
In essence, I set up an augmented matrix, with N = 14 linear equations and M = 5 unknowns, and am trying to solve the system. My problem is that when I use the solve_linear_system command on my augmented matrix, I don't get any output for my code:
import sympy
from sympy import *
from sympy import Matrix, solve_linear_system
from sympy.abc import x, y, z, u, v
system = Matrix(((1,1,-1,0,0,1),(1,1,-1,0,0,2),(0,0,-1,0,-1,3),
(0,0,-1,0,-1,2),(0,0,0,1,0,1),(1,0,1,1,-1,2),(0,0,-1,0,-1,2),(1,0,1,0,0,1),
(1,1,1,0,1,3),(1,1,1,0,0,2),(-1,1,0,0,-1,3),(1,-1,-1,-1,0,2),(-1,1,1,1,-1,3),
(0,-1,0,0,0,2)))
solve_linear_system(system, x, y, z, u, v)
>>
Can someone explain what might be the issue and how to remedy the situation? I have tried other matrices and it seems to work with them, so is there something fundamentally wrong with what I am asking Sympy to do, or is it the method?
Thank you.
The reason is that there are no solutions to the augmented system in question
(probably too many constraints; you could try to relax it by eliminating some of the superfluous equations).
If you stare at your matrix for a little while, you will find that there are incompatible equations, for instance rows 2 and 3: (0,0,-1,0,-1,3), (0,0,-1,0,-1,2), or rows 0 and 1: (1,1,-1,0,0,1), (1,1,-1,0,0,2). There may also be redundant ones.
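One quick way to confirm this is sketched below with linsolve and rref (my choice of functions, not what the question used): an inconsistent augmented matrix yields the empty set, and the reduced row echelon form contains a row equivalent to 0 = 1.
from sympy import Matrix, linsolve
from sympy.abc import x, y, z, u, v

system = Matrix(((1,1,-1,0,0,1),(1,1,-1,0,0,2),(0,0,-1,0,-1,3),
(0,0,-1,0,-1,2),(0,0,0,1,0,1),(1,0,1,1,-1,2),(0,0,-1,0,-1,2),(1,0,1,0,0,1),
(1,1,1,0,1,3),(1,1,1,0,0,2),(-1,1,0,0,-1,3),(1,-1,-1,-1,0,2),(-1,1,1,1,-1,3),
(0,-1,0,0,0,2)))

# linsolve returns the empty set when the augmented system is inconsistent
print(linsolve(system, x, y, z, u, v))  # EmptySet

# the reduced row echelon form makes the contradiction explicit:
# a row of the form (0, 0, 0, 0, 0 | 1) means 0 == 1
print(system.rref()[0])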

ValueError: Linkage 'Z' uses the same cluster more than once in Python scipy fcluster

I'm getting ValueError: Linkage 'Z' uses the same cluster more than once. when trying to get flat clusters in Python with scipy.cluster.hierarchy.fcluster. This error happens only sometimes, usually only with really big matrices, i.e. 10000x10000.
import scipy.cluster.hierarchy as sch
Z = sch.linkage(d, method="ward")
# some computation here, returning n (usually between 5-30)
clusters = sch.fcluster(Z, t=n, criterion='maxclust')
Why does it happen? How can I avoid it? Unfortunately I couldn't find any useful info by googling...
EDIT: The error also occurs when trying to get a dendrogram.
No such error appears if method='average' is used.
It seems that using fastcluster instead of scipy.cluster.hierarchy solves the problem. In addition, the fastcluster implementation is slightly faster than scipy's.
For more details have a look at the paper.
import fastcluster
import scipy.cluster.hierarchy as sch

# build the linkage with fastcluster, a drop-in replacement for scipy's linkage
Z = fastcluster.linkage(d, method="ward")
# some computation here, returning n (usually between 5-30)
# fastcluster only provides the linkage step; flat clusters still come from scipy
clusters = sch.fcluster(Z, t=n, criterion='maxclust')

Vectorization and Optimization of function in Python

I am fairly new to Python and trying to transfer some code from MATLAB to Python. I am trying to optimize a function in Python using fmin_bfgs. I always try to vectorize the code when possible, but I ran into the following problem that I can't figure out. Here is a test example.
from pylab import *
from scipy.optimize import fmin_bfgs

## Create some linear data
L = linspace(0, 10, 100).reshape(100, 1)
n = L.shape[0]
M = 2 * L + 5
L = hstack((ones((n, 1)), L))
m = L.shape[0]

## Define sum of squared errors as non-vectorized and vectorized
def Cost(theta, X, Y):
    return 1.0/(2.0*m)*sum((theta[0] + theta[1]*X[:,1:2] - Y)**2)

def CostVec(theta, X, Y):
    err = X.dot(theta) - Y
    resid = err**2
    return 1.0/(2.0*m)*sum(resid)

## Initialize the theta
theta = array([[0.0], [0.0]])

## Run the minimization on the two functions
print fmin_bfgs(Cost, x0=theta, args=(L, M))
print fmin_bfgs(CostVec, x0=theta, args=(L, M))
The first answer, with the unvectorized function, gives the correct answer, which is just the vector [5, 2]. But the second answer, using the vectorized form of the cost function, returns roughly [15, 0]. I have figured out that the 15 doesn't appear from nowhere: it is 2 times the mean of the data plus the intercept, i.e., $2\times 5+5$. Any help is greatly appreciated.
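A hedged guess at one likely cause (my own diagnosis, not from the question), sketched below: fmin_bfgs flattens x0 to a 1-D array, so inside CostVec the product X.dot(theta) has shape (100,) while Y has shape (100, 1), and the subtraction broadcasts to a (100, 100) array instead of the intended residual vector.
import numpy as np

X = np.hstack((np.ones((100, 1)), np.linspace(0, 10, 100).reshape(100, 1)))
Y = 2 * X[:, 1:2] + 5      # shape (100, 1)
theta = np.zeros(2)        # fmin_bfgs passes a flattened 1-D theta

err = X.dot(theta) - Y
print(err.shape)           # (100, 100) -- broadcasting, not element-wise residuals

# keeping Y 1-D (or reshaping the dot product) restores the intended behaviour
err_ok = X.dot(theta) - Y.ravel()
print(err_ok.shape)        # (100,)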
