Can we add a numpy object to a PuLP objective function? - python

I came across following code:
from pulp import *
import numpy as np
prob = LpProblem("lp_prob", LpMinimize)
decision_variables = LpVariable.dicts('x', range(5))
prob += np.sum(decision_variables.values())
When I tried the same code on my machine, it gave the following error on the last line:
TypeError: Can only add LpConstraintVar, LpConstraint, LpAffineExpression or True objects
Can't I add a numpy array to an LpProblem? Now I am guessing that the given code is incorrect. Also, is there another way, or some version (of Python and/or numpy and/or PuLP), in which adding a numpy object to an LpProblem works?

decision_variables is not a numpy array. It is a dictionary, which is why you can call decision_variables.values().
decision_variables.values() is not a numpy array either; it is a dict_values object.
The result of np.sum is not a numpy array. Applied to a numpy array it returns a scalar, but for a dict_values object it does nothing useful (no sum, at least).
I'm not sure why you would need np.sum here at all.
In pulp, the sum of a set of pulp variables (list, dict_values, dict_keys, etc.) is done with the lpSum function (https://www.coin-or.org/PuLP/pulp.html#pulp.lpSum):
prob += lpSum(decision_variables.values())
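For completeness, a minimal sketch of the corrected snippet (using only documented pulp calls; explicit imports replace the original wildcard import):
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

prob = LpProblem("lp_prob", LpMinimize)
decision_variables = LpVariable.dicts('x', range(5))
# objective: minimize the sum of all five variables
prob += lpSum(decision_variables.values())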

Related

python: Plotting and optimizing the same function

Let's say I have the following function:
def f(x):
    return log(3*exp(3*x) + 7*exp(7*x))
I want to do two things:
1) plot the function over a range of x-values
2) find the root of the function using the Newton method from scipy
My problem is that plotting seems best done with a numpy array x = np.linspace(-2, 2, 1000), but evaluating the function then raises the error TypeError: only size-1 arrays can be converted to Python scalars. I can fix this by simply changing log and exp to np.log and np.exp, respectively.
But doing so then makes scipy.optimize.newton unhappy.
It seems like I need to define the function twice, once for use in plotting (with np. ...) and once for optimizing in the form given above.
I can't imagine that this is actually the case. Any hints would be greatly appreciated.
Seems legit; you just need to use numpy functions instead of base math functions:
import numpy as np
from scipy import optimize
import matplotlib.pyplot as plt
%matplotlib inline
def f(x):
    return np.log(3*np.exp(3*x) + 7*np.exp(7*x))
x = np.linspace(-2,2,1000)
y = f(x)
plt.scatter(x, y)
optimize.root(f, 1)
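Because np.log and np.exp are ufuncs, the same f works elementwise on arrays for plotting and on scalars for the optimizer, so one definition suffices. If you specifically want Newton's method as in the question, a sketch (x0 = -1.0 is an assumed starting guess near the sign change of f):
root = optimize.newton(f, x0=-1.0)  # Newton's method from scipy, same f as above
print(root)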

Exporting large array variables (type = object) to CSV files

I have used Gekko from APM in Python to solve an optimization problem. The two main decision variables (DVs) are large arrays. The problem has converged successfully, however, I need the results of these tables in an excel worksheet for further work.
An example variable name is 's'. Since the arrays created within Gekko hold GKVariable/object types, I cannot simply use:
pd.DataFrame(s).to_csv(r'C:\Users\...\s.csv')
because the result gives every cell of the array the label of each variable defined in the model (i.e. v1, v2, etc.)
Printing s in the kernel shows the numbers from the optimization results, but in a format that doesn't guarantee each line is a new row of the matrix, because of the many columns.
Is there another solution to copy just the resulting value of the DV 's' so it becomes a normal np.array instead of the object type variable? Open to any ideas for this.
You can use s[i].value[0] for steady state problems (IMODE=1 or IMODE=3) or s[i].value[:] to access the array of values for all other IMODE options. Here is a simple example that writes the results to a file with pandas and numpy.
import numpy as np
from gekko import GEKKO
import pandas as pd
m = GEKKO(remote=False)
# Random 3x3
A = np.random.rand(3,3)
# Random 3x1
b = np.random.rand(3,1)
# Ax = b
y = m.axb(A,b)
m.solve()
# extract plain scalar results from the GEKKO variable objects
yn = [y[i].value[0] for i in range(3)]
print(yn)
# write the numeric values with pandas or numpy
pd.DataFrame(yn).to_csv(r'y1.csv')
np.savetxt('y2.csv', yn, delimiter=',', comments='')
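For a 2D array variable like s in the question, the same extraction applies elementwise. A hypothetical sketch, assuming s was built with m.Array(m.Var, (rows, cols)) and solved with IMODE=1 or IMODE=3 (rows, cols, and s come from the original model):
s_values = np.array([[s[i][j].value[0] for j in range(cols)]
                     for i in range(rows)])
pd.DataFrame(s_values).to_csv(r's.csv', index=False)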

Declare a function to do exponential smoothing on data

I am trying to do exponential smoothing in Python on some detrended data in a Jupyter notebook. I try to import
from statsmodels.tsa.api import ExponentialSmoothing
but the following error comes up
ImportError: cannot import name 'SimpleExpSmoothing'
I don't know how to solve that problem from a Jupyter notebook, so I am trying to declare a function that does the exponential smoothing.
Let's say the function is named expsmoth(list, a): it takes a list list and a number a and returns another list explist whose elements are given by the following recurrence relation:
explist[0] == list[0]
explist[i] == a*list[i] + (1-a)*explist[i-1]
I am still learning Python. How do I declare a function that takes a list and a number as arguments and returns a list whose elements are given by the above recurrence relation?
A simple solution to your problem would be
def explist(data, a):
    smooth_data = data.copy()  # make a copy to avoid changing the original list
    for i in range(1, len(data)):
        smooth_data[i] = a*data[i] + (1-a)*smooth_data[i-1]
    return smooth_data
The function works with both native Python lists and numpy arrays.
import matplotlib.pyplot as plt
import numpy as np
data = np.random.random(100) # some random data
smooth_data = explist(data, 0.2)
plt.plot(data, label='original')
plt.plot(smooth_data, label='smoothed')
plt.legend()
plt.show()
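If the import issue is resolved (often by upgrading statsmodels), the same smoothing is available via SimpleExpSmoothing. A sketch, assuming statsmodels >= 0.9; note that statsmodels' fitted values are one-step-ahead forecasts, so they are shifted by one step relative to explist:
from statsmodels.tsa.api import SimpleExpSmoothing

fit = SimpleExpSmoothing(data).fit(smoothing_level=0.2, optimized=False)
smooth_statsmodels = fit.fittedvalues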

Result of 3D FFT using pyculib is wrong

I use pyculib to perform a 3D FFT on a matrix in Anaconda 3.5. I just followed the example code posted on the website, but I found something interesting and don't understand why.
Performing a 3D FFT on a matrix with pyculib is correct only when using numpy.arange to create the matrix.
Here is the code:
from pyculib.fft.binding import Plan, CUFFT_C2C
import numpy as np
from numba import cuda
data = np.random.rand(26, 256, 256).astype(np.complex64)
orig = data.copy()
d_data = cuda.to_device(data)
fftplan = Plan.three(CUFFT_C2C, *data.shape)
fftplan.forward(d_data, d_data)
fftplan.inverse(d_data, d_data)
d_data.copy_to_host(data)
n = data.size  # cuFFT's inverse transform is unnormalized, so divide by the number of points
result = data / n
np.allclose(orig, result.real)
Finally, it turns out to be False, and the difference between orig and result is not a small number; it is not negligible.
I tried some other data sets (not random numbers) and got the same wrong results.
I also tested without the inverse FFT:
from pyculib.fft.binding import Plan, CUFFT_C2C
import numpy as np
from numba import cuda
from scipy.fftpack import fftn,ifftn
data = np.random.rand(26,256,256).astype(np.complex64)
orig = data.copy()
orig_fft = fftn(orig)
d_data = cuda.to_device(data)
fftplan = Plan.three(CUFFT_C2C, *data.shape)
fftplan.forward(d_data, d_data)
d_data.copy_to_host(data)
np.allclose(orig_fft, data)
The result is also wrong.
The test code on the website uses numpy.arange to create the matrix, so I tried:
n = 26*256*256
data = np.arange(n, dtype=np.complex64).reshape(26,256,256)
And the FFT result of this matrix is right.
Could anyone help to point out why?
I don't use CUDA, but I think your problem is numerical in nature. The difference lies in the two data sets you are using: random.rand has a dynamic range of 0-1, while arange spans 0 to 26*256*256. The FFT attempts to resolve spatial frequency components on the order of (range of values) / (number of points). For arange this is about unity and the FFT is numerically accurate; for rand it is 1/(26*256*256) ≈ 5.9e-7, which is close to single-precision (complex64) round-off.
Just running FFT/IFFT on your numpy arrays without using CUDA shows similar differences.
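A minimal sketch of that CUDA-free check, using scipy.fftpack (which keeps complex64 input in single precision, unlike numpy.fft, which computes in double precision):
import numpy as np
from scipy.fftpack import fftn, ifftn

data = np.random.rand(26, 256, 256).astype(np.complex64)
roundtrip = ifftn(fftn(data))  # scipy's ifftn already normalizes by N
print(np.abs(roundtrip - data).max())
# the error is on the order of single-precision round-off accumulated
# over the transform, which can fail np.allclose at its default tolerances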

The equivalent of Matlab sprand() in Python?

I am trying to translate a Matlab code snippet into a Python one. However, I am not very sure how to correctly implement the sprand() function.
This is how the Matlab code use sprand():
% n_z is an integer, n_dw is a matrix
n_p_z_dw = cell(n_z, 1); % n(d,w) * p(z|d,w)
for z = 1:n_z
    n_p_z_dw{z} = sprand(n_dw);
end
And this is how I implement the above logic in Python:
n_p_z_dw = [None]*n_z  # n(d,w) * p(z|d,w)
density = np.count_nonzero(n_dw)/float(n_dw.size)
for i in range(0, n_z):
    n_p_z_dw[i] = scipy.sparse.rand(n_d, n_w, density=density)
It seems to work, but I am not very sure about this. Any comment or suggestion?
The following should be a relatively fast way, I think, for a sparse array A:
import scipy.sparse as sparse
import numpy as np
sparse.coo_matrix((np.random.rand(A.nnz), A.nonzero()), shape=A.shape)
This will construct a COO format sparse matrix: it uses A.nonzero() as the coordinates, and A.nnz (the number of nonzero entries in A) to find the number of random numbers to generate.
I wonder, though, whether this might be a useful addition to the scipy.sparse.rand function.
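Applied to the question's loop, a hypothetical usage sketch (n_dw and n_z are assumed to exist, with n_dw a scipy sparse matrix):
import numpy as np
import scipy.sparse as sparse

def sprand_like(A):
    # mirror Matlab's sprand(A): keep A's sparsity pattern,
    # but replace the nonzero entries with uniform random values
    data = np.random.rand(A.nnz)
    return sparse.coo_matrix((data, A.nonzero()), shape=A.shape)

n_p_z_dw = [sprand_like(n_dw) for _ in range(n_z)]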
