I am trying to combine cvxopt (an optimization solver) and PyMC (a sampler) to solve convex stochastic optimization problems.
For reference, installing both packages with pip is straightforward:
pip install cvxopt
pip install pymc
Both packages work perfectly well on their own. Here is an example of how to solve an LP problem with cvxopt:
# Testing that cvxopt works
from cvxopt import matrix, solvers
# Example from http://cvxopt.org/userguide/coneprog.html#linear-programming
c = matrix([-4., -5.])
G = matrix([[2., 1., -1., 0.], [1., 2., 0., -1.]])
h = matrix([3., 3., 0., 0.])
sol = solvers.lp(c, G, h)
# The solution sol['x'] is correct: (1,1)
However, when I try using it with PyMC (e.g. by putting a distribution on one of the coefficients), PyMC gives an error:
import numpy as np
import pymc as pm
from cvxopt import matrix, solvers

c1 = pm.Normal('c1', mu=-4, tau=.5**-2)

@pm.deterministic
def my_lp_solver(c1=c1):
    c = matrix([c1, -5.])
    G = matrix([[2., 1., -1., 0.], [1., 2., 0., -1.]])
    h = matrix([3., 3., 0., 0.])
    sol = solvers.lp(c, G, h)
    solution = np.array(sol['x'], dtype=float).flatten()
    return solution

m = pm.MCMC(dict(c1=c1, x=my_lp_solver))
m.sample(20000, 10000, 10)
I get the following PyMC error:
<ipython-input-21-5ce2909be733> in x(c1)
14 #pm.deterministic
15 def x(c1=c1):
---> 16 c = matrix([c1, -5.])
17 G = matrix([[2., 1., -1., 0.], [1., 2., 0., -1.]])
18 h = matrix([3., 3., 0., 0.])
TypeError: invalid type in list
Why? Is there any way to make cvxopt play nicely with PyMC?
Background:
In case anyone wonders, PyMC allows you to sample from any function of your choice. In this particular case, the function from which we sample is one that maps an LP problem to a solution. We are sampling from this function because our LP problem contains stochastic coefficients, so one cannot just apply an LP solver off-the-shelf.
More specifically in this case, a single PyMC output sample is simply a solution to the LP problem. As parameters of the LP problem vary (according to distributions of your choice), the output samples from PyMC would be different, and the hope is to get a posterior distribution.
The approach above is inspired by this answer; the only difference is that I am hoping to use a true general-purpose solver (in this case cvxopt).
The value of c1 generated by pm.Normal is a NumPy array; you just need to unwrap it with float(c1), and then it works fine:
>>> @pm.deterministic
... def my_lp_solver(c1=c1):
... c = matrix([float(c1), -5.])
... G = matrix([[2., 1., -1., 0.], [1., 2., 0., -1.]])
... h = matrix([3., 3., 0., 0.])
... sol = solvers.lp(c, G, h)
... solution = np.array(sol['x'],dtype=float).flatten()
... return solution
...
pcost dcost gap pres dres k/t
0: -8.1223e+00 -1.8293e+01 4e+00 0e+00 7e-01 1e+00
1: -8.8301e+00 -9.4605e+00 2e-01 1e-16 4e-02 3e-02
2: -9.0229e+00 -9.0297e+00 2e-03 2e-16 5e-04 4e-04
3: -9.0248e+00 -9.0248e+00 2e-05 3e-16 5e-06 4e-06
4: -9.0248e+00 -9.0248e+00 2e-07 2e-16 5e-08 4e-08
Optimal solution found.
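For reference, here is a minimal end-to-end sketch of how the pieces can fit together under PyMC2. The solvers.options call and the final trace lookup are my additions (standard cvxopt/PyMC2 usage, but not part of the original post), and the variable names are illustrative:

# Minimal sketch, assuming PyMC2 and cvxopt
import numpy as np
import pymc as pm
from cvxopt import matrix, solvers

solvers.options['show_progress'] = False  # silence the per-sample solver log

c1 = pm.Normal('c1', mu=-4, tau=.5**-2)

@pm.deterministic
def x(c1=c1):
    # Solve the LP for the current draw of c1; float() unwraps the 0-d array
    c = matrix([float(c1), -5.])
    G = matrix([[2., 1., -1., 0.], [1., 2., 0., -1.]])
    h = matrix([3., 3., 0., 0.])
    sol = solvers.lp(c, G, h)
    return np.array(sol['x'], dtype=float).flatten()

m = pm.MCMC(dict(c1=c1, x=x))
m.sample(20000, 10000, 10)
lp_solutions = m.trace('x')[:]  # posterior samples of the LP solution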
Steps to reproduce
import nevergrad as ng
import numpy as np
loc = ng.p.Scalar(lower=-5,upper=5)
scale = ng.p.Scalar(lower=0, upper=5)
s = ng.p.Scalar(lower=0, upper=10)
k = ng.p.Choice(list(range(2,6)))
w = ng.p.Array(shape=(self.times.shape[0],)).set_bounds(-10,10)
instru = ng.p.Instrumentation(loc=loc,
                              scale=scale,
                              s=s,
                              k=k,
                              w=w)
optimizer = ng.optimizers.DE(parametrization=instru,
                             budget=budget)
optimizer.suggest((),{'k':3,'loc':-2,'s':2,'scale':2,'w':np.ones(self.times.shape[0])})
Observed Results
ValueError: Tuple value must be a tuple of size 0, got: ((), {'k': 3, 'loc': -2, 's': 2, 'scale': 2, 'w': array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1.])}).
Current value: ()
Expected Results
For initial values to be set in an optimizer run
Has anyone had success using the suggest method in Nevergrad?
If so, would you mind copying/pasting working code? I've been trying different forms of the example in the documentation, but cannot seem to get it to work.
The question was answered in a relevant Github thread:
Basically, suggest should be called the same way as the function to optimize. In your case, given that you are using an Instrumentation, I guess it should be:
optimizer.suggest(k=3, loc=-2, s=2, scale=2, w=np.ones(self.times.shape[0]))
Another option, which can work for everything except the Choice parameter, would be to use the init option of Array and Scalar (e.g. loc = ng.p.Scalar(init=-2, lower=-5, upper=5)).
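Putting that together, a minimal self-contained sketch of the keyword-style call might look like the following (the array length n_times and the budget are placeholder values standing in for self.times.shape[0] and budget from the original report):

import nevergrad as ng
import numpy as np

n_times = 23  # placeholder for self.times.shape[0]

instru = ng.p.Instrumentation(
    loc=ng.p.Scalar(lower=-5, upper=5),
    scale=ng.p.Scalar(lower=0, upper=5),
    s=ng.p.Scalar(lower=0, upper=10),
    k=ng.p.Choice(list(range(2, 6))),
    w=ng.p.Array(shape=(n_times,)).set_bounds(-10, 10),
)
optimizer = ng.optimizers.DE(parametrization=instru, budget=100)

# suggest is called with the same signature as the function being optimized
optimizer.suggest(k=3, loc=-2, s=2, scale=2, w=np.ones(n_times))
candidate = optimizer.ask()  # the suggested point should come back on the next ask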
I have an initial value problem that needs to be solved; the differential equations are derived from a dictionary that looks like:
eqs = {'a': array([-1.,   2.,  4.,  0., ...]),
       'b': array([ 1., -10.,  0.,  0., ...]),
       'c': array([ 0.,   3., -4.,  0., ...]),
       'd': array([ 0.,   5.,  0., -0., ...]),
       ...}
The differential equation da/dt is given as -1*[a]+2*[b]+4*[c]+0*[d]....
Using the dictionary above, I write a function dXdt as:
def dXdt (X, t):
sys_a, sys_b, sys_c, sys_d,... = eqs['a'], eqs['b'], eqs['c'], eqs['d'],...
dadt = sys_a[0]*X[0]+sys_a[1]*X[1]+sys_a[2]*X[2]+sys_a[3]*X[3]+...
dbdt = sys_b[0]*X[0]+sys_b[1]*X[1]+sys_b[2]*X[2]+sys_b[3]*X[3]+...
dcdt = sys_c[0]*X[0]+sys_c[1]*X[1]+sys_c[2]*X[2]+sys_c[3]*X[3]+...
dddt = sys_d[0]*X[0]+sys_d[1]*X[1]+sys_d[2]*X[2]+sys_d[3]*X[3]+...
...
return [dadt, dbdt, dcdt, dddt, ...]
The initial conditions are:
X0 = [1, 0, 0, 0, ...]
and the solution is given as:
X = integrate.odeint(dXdt, X0, np.linspace(0,10,11))
This works well for a small system, where I can write the equations by hand. However, I have a system that has ~150 differential equations, and I need to automate the way I write dXdt to be used with scipy.integrate.odeint, given the dictionary of eqs. Is there a way to do so?
Any time something follows a simple linear pattern, you can use an iteration or a comprehension to express it. If you have multiple such patterns, you can just nest them. So this:
sys_a, sys_b, sys_c, sys_d,... = eqs['a'], eqs['b'], eqs['c'], eqs['d'],...
dadt = sys_a[0]*X[0]+sys_a[1]*X[1]+sys_a[2]*X[2]+sys_a[3]*X[3]+...
dbdt = sys_b[0]*X[0]+sys_b[1]*X[1]+sys_b[2]*X[2]+sys_b[3]*X[3]+...
dcdt = sys_c[0]*X[0]+sys_c[1]*X[1]+sys_c[2]*X[2]+sys_c[3]*X[3]+...
dddt = sys_d[0]*X[0]+sys_d[1]*X[1]+sys_d[2]*X[2]+sys_d[3]*X[3]+...
...
[dadt, dbdt, dcdt, dddt, ...]
can be expressed simply as:
[sum(eqs[char][i] * X[i] for i in range(len(X))) for char in eqs.keys()]
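Note that this relies on iterating eqs in the same order as the entries of X, which holds for insertion-ordered dicts (Python 3.7+). Since each right-hand side is a linear combination of the state, an equivalent sketch (assuming every value in eqs is a full-length coefficient row, as in the question) is to stack the dictionary into a matrix once and let NumPy do the summing:

import numpy as np
from scipy import integrate

species = list(eqs.keys())                     # fixes the ordering of the state vector
A = np.array([eqs[name] for name in species])  # one coefficient row per species

def dXdt(X, t):
    # dX/dt = A @ X for the whole system at once
    return A.dot(X)

X0 = np.zeros(len(species))
X0[0] = 1.0                                    # e.g. everything starts as species 'a'
X = integrate.odeint(dXdt, X0, np.linspace(0, 10, 11))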
I was wondering if there is any function in numpy to determine whether a matrix is Unitary?
This is the function I wrote but it is not working. I would be thankful if you guys can find an error in my function and/or tell me another way to find out if a given matrix is unitary.
def is_unitary(matrix: np.ndarray) -> bool:
unitary = True
n = matrix.size
error = np.linalg.norm(np.eye(n) - matrix.dot( matrix.transpose().conjugate()))
if not(error < np.finfo(matrix.dtype).eps * 10.0 *n):
unitary = False
return unitary
Let's take an obviously unitary array:
>>> a = 0.7
>>> b = (1-a**2)**0.5
>>> m = np.array([[a,b],[-b,a]])
>>> m.dot(m.conj().T)
array([[ 1., 0.],
[ 0., 1.]])
and try your function on it:
>>> is_unitary(m)
Traceback (most recent call last):
File "<ipython-input-28-8dc9ddb462bc>", line 1, in <module>
is_unitary(m)
File "<ipython-input-20-3758c2016b67>", line 5, in is_unitary
error = np.linalg.norm(np.eye(n) - matrix.dot( matrix.transpose().conjugate()))
ValueError: operands could not be broadcast together with shapes (4,4) (2,2)
which happens because
>>> m.size
4
>>> np.eye(m.size)
array([[ 1., 0., 0., 0.],
[ 0., 1., 0., 0.],
[ 0., 0., 1., 0.],
[ 0., 0., 0., 1.]])
If we replace n = matrix.size with len(m) or m.shape[0] or something, we get
>>> is_unitary(m)
True
I might just use
>>> np.allclose(np.eye(len(m)), m.dot(m.T.conj()))
True
where allclose has rtol and atol parameters.
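Putting the size fix and allclose together, a corrected version of the helper might look like this (a sketch; using allclose's default tolerances is my choice rather than the original eps-based threshold):

import numpy as np

def is_unitary(matrix: np.ndarray) -> bool:
    # U is unitary iff U @ U^H equals the identity of matching size
    n = matrix.shape[0]
    return matrix.shape == (n, n) and np.allclose(np.eye(n), matrix.dot(matrix.conj().T))

a = 0.7
b = (1 - a**2) ** 0.5
m = np.array([[a, b], [-b, a]])
print(is_unitary(m))  # True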
If you are using NumPy's matrix class, there is a property for the Hermitian conjugate, so:
def is_unitary(m):
return np.allclose(np.eye(m.shape[0]), m.H * m)
e.g.
In [79]: P = np.matrix([[0,-1j],[1j,0]])
In [80]: is_unitary(P)
Out[80]: True
For my astronomy homework, I need to simulate the elliptical orbit of a planet around a sun. To do this, I need to use a for loop to repeatedly calculate the motion of the planet. However, every time I try to run the program, I get the following error:
RuntimeWarning: invalid value encountered in power
r=(x**2+y**2)**1.5
Traceback (most recent call last):
File "planetenstelsel3-4.py", line 25, in <module>
ax[i] = a(x[i],y[i])*x[i]
ValueError: cannot convert float NaN to integer
I've done some testing, and I think the problem is that the calculated values are larger than what fits in an integer, and the arrays are defined as int arrays. So if there were a way to define them as float arrays, maybe it would work. Here is my code:
import numpy as np
import matplotlib.pyplot as plt
dt = 3600 #s
N = 5000
x = np.tile(0, N)
y = np.tile(0, N)
x[0] = 1.496e11 #m
y[0] = 0.0
vx = np.tile(0, N)
vy = np.tile(0, N)
vx[0] = 0.0
vy[0] = 28000 #m/s
ax = np.tile(0, N)
ay = np.tile(0, N)
m1 = 1.988e30 #kg
G = 6.67e-11 #Nm^2kg^-2
def a(x,y):
r=(x**2+y**2)**1.5
return (-G*m1)/r
for i in range (0,N):
r = x[i],y[i]
ax[i] = a(x[i],y[i])*x[i]
ay[i] = a(x[i],y[i])*y[i]
vx[i+1] = vx[i] + ax[i]*dt
vy[i+1] = vy[i] + ay[i]*dt
x[i+1] = x[i] + vx[i]*dt
y[i+1] = y[i] + vy[i]*dt
plt.plot(x,y)
plt.show()
The first few lines are just some starting parameters.
Thanks for the help in advance!
When you are doing physics simulations you should definitely use floats for everything. 0 is an integer constant in Python, so np.tile(0, N) creates integer arrays; use 0.0 as the argument to np.tile to get floating-point arrays, or preferably use np.zeros(N) instead.
You can check the datatype of any array via its dtype attribute:
>>> np.tile(0, 10).dtype
dtype('int64')
>>> np.tile(0.0, 10).dtype
dtype('float64')
>>> np.zeros(10)
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
>>> np.zeros(10).dtype
dtype('float64')
To get a zeroed array of float32 you'd need to give a float32 as the argument:
>>> np.tile(np.float32(0), 10)
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)
or, preferably, use zeros with a defined dtype:
>>> np.zeros(10, dtype='float32')
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)
You need x = np.zeros(N), etc.: this declares the arrays as float arrays.
This is the standard way of creating an array of zeros (np.tile() is better suited to building an array by repeating a fixed block).
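Applied to the simulation above, the setup could look like the sketch below. Note that I also stop the loop at N-1 so the i+1 updates stay in bounds; that is an assumption about the intent, separate from the integer-dtype problem:

import numpy as np
import matplotlib.pyplot as plt

dt = 3600.0       # s
N = 5000

x = np.zeros(N);  y = np.zeros(N)    # positions (float64 by default)
vx = np.zeros(N); vy = np.zeros(N)   # velocities
ax = np.zeros(N); ay = np.zeros(N)   # accelerations

x[0] = 1.496e11   # m
vy[0] = 28000.0   # m/s

m1 = 1.988e30     # kg
G = 6.67e-11      # N m^2 kg^-2

def a(x, y):
    r = (x**2 + y**2)**1.5
    return (-G * m1) / r

for i in range(N - 1):               # stop at N-1 so the i+1 indices exist
    ax[i] = a(x[i], y[i]) * x[i]
    ay[i] = a(x[i], y[i]) * y[i]
    vx[i + 1] = vx[i] + ax[i] * dt
    vy[i + 1] = vy[i] + ay[i] * dt
    x[i + 1] = x[i] + vx[i] * dt
    y[i + 1] = y[i] + vy[i] * dt

plt.plot(x, y)
plt.show()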
I am very new to Python (in the past I used Mathematica, Maple, or Matlab scripts). I am very impressed by how NumPy can evaluate functions over arrays, but I am having problems trying to implement this in several dimensions. My question is very simple (please don't laugh): is there a more elegant and efficient way to evaluate some function f (which is defined over R^2) without using loops?
import numpy

def evaluate_on_grid(f):
    # evaluate f on a 10x10 grid of integer points
    M = numpy.zeros((10, 10))
    for i in range(0, 10):
        for j in range(0, 10):
            M[i, j] = f(i, j)
    return M
The goal when coding with numpy is to implement your computation on the whole array, as much as possible. So if your function is, for example, f(x,y) = x**2 +2*y and you want to apply it to all integer pairs x,y in [0,10]x[0,10], do:
x,y = np.mgrid[0:10, 0:10]
fxy = x**2 + 2*y
If you don't find a way to express your function in such a way, then:
Ask how to do it (and state explicitly the function definition)
use numpy.vectorize
Same example using vectorize:
def f(x,y): return x**2 + 2*y
x,y = np.mgrid[0:10, 0:10]
fxy = np.vectorize(f)(x.ravel(),y.ravel()).reshape(x.shape)
Note that in practice I only use vectorize similarly to Python's map when the contents of the arrays are not numbers. A typical example is to compute the length of every list in an array of lists:
# construct a sample array of lists
list_of_lists = np.array([list(range(i)) for i in range(1000)], dtype=object)
print(np.vectorize(len)(list_of_lists))
# [0,1 ... 998,999]
Yes, many numpy functions operate on N-dimensional arrays. Take this example:
>>> M = numpy.zeros((3,3))
>>> M[0][0] = 1
>>> M[2][2] = 1
>>> M
array([[ 1., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 1.]])
>>> M > 0.5
array([[ True, False, False],
[False, False, False],
[False, False, True]], dtype=bool)
>>> numpy.sum(M)
2.0
Note the difference between numpy.sum, which operates on N-dimensional arrays, and sum, which only goes 1 level deep:
>>> sum(M)
array([ 1., 0., 1.])
So if you build your function f() out of operations that work on n-dimensional arrays, then f() itself will work on n-dimensional arrays.
You can also use numpy multi-dimension slicing, like below. You just provide slices for each dimension:
arr = np.zeros((5,5)) # 5 rows, 5 columns
# update only first column
arr[:,0] = 1
# update only last row ... same as arr[-1] = 1
arr[-1,:] = 1
# update center
arr[1:-1, 1:-1] = 1
print(arr)
output:
array([[ 1., 0., 0., 0., 0.],
[ 1., 1., 1., 1., 0.],
[ 1., 1., 1., 1., 0.],
[ 1., 1., 1., 1., 0.],
[ 1., 1., 1., 1., 1.]])
A pure Python answer, not depending on numpy tools, is to take the Cartesian product of two sequences:
from itertools import product
for i, j in product(range(0, 10), range(0, 10)):
M[i,j]=f(i,j)
Edit: Actually, I should have read the question properly. This still uses loops, just one less loop.