Python SciPy Optimize not recognising lambdas while using numpy array - python

While trying to create an optimisation algorithm for work, I ran into a particular problem.
Here is some basic information about the code:
LZ is a nested list.
M is a numpy array converted from the nested list.
Here is the code:
from scipy.optimize import minimize

# MAXQ, PENTEMAX, LL, objective and cons are defined elsewhere in the script.
for i in range(len(LZ)):
    for j in range(len(LZ[i])):
        constraints1 = lambda MA, i=i, j=j: MAXQ - abs(M[i][j] - MA[i][j])
        print(M[i][j])
        if j < len(LZ[i]) - 1:
            constraints2 = lambda MA, i=i, j=j: PENTEMAX + ((MA[i][j] - MA[i][j+1]) / LL[i][j])
            constraints3 = lambda MA, i=i, j=j: PENTEMAX - ((MA[i][j] - MA[i][j+1]) / LL[i][j])
            cons.append({'type': 'ineq', 'fun': constraints1})
            cons.append({'type': 'ineq', 'fun': constraints2})
            cons.append({'type': 'ineq', 'fun': constraints3})

x0 = M
sol = minimize(objective, x0, method='SLSQP', constraints=cons)
I run the code, and here is what I get: it prints M[i][j] just fine (the printout is long, so I did not copy it here):
Traceback (most recent call last):
File "D:/Opti Assainissement/VOIRIE5.py", line 118, in <module>
sol = minimize(objective,x0,method='SLSQP',constraints=cons)
File "C:\Users\Asus\AppData\Local\Programs\Python\Python37\lib\site-packages\scipy\optimize\_minimize.py", line 611, in minimize
constraints, callback=callback, **options)
File "C:\Users\Asus\AppData\Local\Programs\Python\Python37\lib\site-packages\scipy\optimize\slsqp.py", line 315, in _minimize_slsqp
for c in cons['ineq']]))
File "C:\Users\Asus\AppData\Local\Programs\Python\Python37\lib\site-packages\scipy\optimize\slsqp.py", line 315, in <listcomp>
for c in cons['ineq']]))
File "D:/Opti Assainissement/VOIRIE5.py", line 101, in <lambda>
constraints1 = lambda MA, i=i,j=j: cdt(MA,i,j)
File "D:/Opti Assainissement/VOIRIE5.py", line 98, in cdt
return MAXQ - abs(M[i][j]-MA[i][j])
IndexError: invalid index to scalar variable.
My first guess was that SciPy doesn't recognise MA as an array, but I can't tell whether it's related to SciPy, to the lambda construction, or to my lack of knowledge in the matter. I'll be glad to get some help from the community!
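A likely explanation, judging from the traceback: scipy.optimize.minimize flattens x0 into a 1-D vector before evaluating the objective and constraints, so each constraint receives MA as a 1-D array; MA[i] is then a scalar, and MA[i][j] fails with exactly this IndexError. A minimal sketch of a fix, reshaping inside each lambda (shape is an illustrative helper variable, not from the question):
shape = M.shape  # remember the original 2-D shape

for i in range(len(LZ)):
    for j in range(len(LZ[i])):
        # Reshape the flat 1-D vector back to 2-D before indexing it.
        constraints1 = lambda MA, i=i, j=j: (
            MAXQ - abs(M[i][j] - MA.reshape(shape)[i][j])
        )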


Using `xarray.apply_ufunc` with `np.linalg.pinv` returns an error with `dask.array`

I get an error when running the following MWE:
import xarray as xr
import numpy as np
from numpy.linalg import pinv
import dask

data = np.random.randn(4, 4, 3, 2)
da = xr.DataArray(data=data, dims=("x", "y", "i", "j"))
da = da.chunk(x=1, y=1)
da_inv = xr.apply_ufunc(
    pinv,
    da,
    input_core_dims=[["i", "j"]],
    output_core_dims=[["i", "j"]],
    exclude_dims=set(("i", "j")),
    dask="parallelized",
)
This throws the following error:
Traceback (most recent call last):
File "/glade/scratch/tomasc/tracer_inversion2/mwe.py", line 14, in <module>
da_inv = xr.apply_ufunc(pinv, da,
File "/glade/u/home/tomasc/miniconda3/envs/py310/lib/python3.10/site-packages/xarray/core/computation.py", line 1204, in apply_ufunc
return apply_dataarray_vfunc(
File "/glade/u/home/tomasc/miniconda3/envs/py310/lib/python3.10/site-packages/xarray/core/computation.py", line 315, in apply_dataarray_vfunc
result_var = func(*data_vars)
File "/glade/u/home/tomasc/miniconda3/envs/py310/lib/python3.10/site-packages/xarray/core/computation.py", line 771, in apply_variable_ufunc
result_data = func(*input_data)
File "/glade/u/home/tomasc/miniconda3/envs/py310/lib/python3.10/site-packages/xarray/core/computation.py", line 747, in func
res = da.apply_gufunc(
File "/glade/u/home/tomasc/miniconda3/envs/py310/lib/python3.10/site-packages/dask/array/gufunc.py", line 489, in apply_gufunc
core_output_shape = tuple(core_shapes[d] for d in ocd)
File "/glade/u/home/tomasc/miniconda3/envs/py310/lib/python3.10/site-packages/dask/array/gufunc.py", line 489, in <genexpr>
core_output_shape = tuple(core_shapes[d] for d in ocd)
KeyError: 'dim0'
Meanwhile, when using dask.array.map_blocks directly, things work right out of the box:
data_inv = dask.array.map_blocks(pinv, da.data).compute() # works!
What am I missing here?
(Same question answered on the xarray repository here.)
You were almost there; you just needed to add the sizes of the new output dimensions by including the kwarg
dask_gufunc_kwargs={'output_sizes': {'i': 2, 'j': 3}}
It does sort of say this in the docstring for apply_ufunc, but it could definitely be clearer!
That's a very unhelpful error, but it's ultimately being thrown because the keys 'i' and 'j' don't exist in the dict of expected output sizes (because you didn't provide them).
The actual error message has been improved in xarray version v2023.2.0.
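Putting it together, the corrected call for the MWE above would be (output sizes as given in the answer; for this (..., 3, 2) input, pinv produces (..., 2, 3) blocks):
da_inv = xr.apply_ufunc(
    pinv,
    da,
    input_core_dims=[["i", "j"]],
    output_core_dims=[["i", "j"]],
    exclude_dims=set(("i", "j")),
    dask="parallelized",
    # Tell dask the sizes of the freshly created output core dimensions.
    dask_gufunc_kwargs={"output_sizes": {"i": 2, "j": 3}},
)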

Manual integration in Sympy doesn't work correctly with noncommutative symbols

I have the following:
from sympy import Symbol, integrate

x = Symbol('x', commutative=False)
y = Symbol('y', commutative=False)
expr = 2*x + 87*x*y + 7*y
Now, this works
integrate(expr,y,manual=True)
because it gives
2*x*y + 87*x*y**2/2 + 7*y**2/2
but the same exact thing with x fails:
integrate(expr,x,manual=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/sympy/integrals/integrals.py", line 1295, in integrate
risch=risch, manual=manual)
File "/usr/local/lib/python2.7/dist-packages/sympy/integrals/integrals.py", line 486, in doit
conds=conds)
File "/usr/local/lib/python2.7/dist-packages/sympy/integrals/integrals.py", line 774, in _eval_integral
poly = f.as_poly(x)
File "/usr/local/lib/python2.7/dist-packages/sympy/core/basic.py", line 706, in as_poly
poly = Poly(self, *gens, **args)
File "/usr/local/lib/python2.7/dist-packages/sympy/polys/polytools.py", line 113, in __new__
opt = options.build_options(gens, args)
File "/usr/local/lib/python2.7/dist-packages/sympy/polys/polyoptions.py", line 731, in build_options
return Options(gens, args)
File "/usr/local/lib/python2.7/dist-packages/sympy/polys/polyoptions.py", line 154, in __init__
preprocess_options(args)
File "/usr/local/lib/python2.7/dist-packages/sympy/polys/polyoptions.py", line 152, in preprocess_options
self[option] = cls.preprocess(value)
File "/usr/local/lib/python2.7/dist-packages/sympy/polys/polyoptions.py", line 293, in preprocess
raise GeneratorsError("non-commutative generators: %s" % str(gens))
sympy.polys.polyerrors.GeneratorsError: non-commutative generators: (x,)
Why is SymPy so weird? How can I fix this?
You seem satisfied with
integrate(2*x + 87*x*y + 7*y, y, manual=True)
returning
2*x*y + 87*x*y**2/2 + 7*y**2/2
But the first term of this answer could also be 2*y*x. Or x*y + y*x. And these are all different answers. So, is the notion of an integral with noncommutative symbols well-defined to begin with? Maybe it's not that SymPy is weird, but the question you are asking it is.
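To see concretely that these candidates are distinct objects when the symbols are noncommutative (a quick check I've added, not part of the original answer):
from sympy import Symbol

x = Symbol('x', commutative=False)
y = Symbol('y', commutative=False)

# For noncommutative symbols these are structurally different expressions.
print(2*x*y == 2*y*x)      # False: order matters
print(x*y + y*x == 2*x*y)  # False as well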
The concrete reason for this behavior is that manual integration is based on matching certain patterns, such as the "constant times something" pattern:
coeff, f = integrand.as_independent(symbol)
The method as_independent splits the product as independent * possibly_dependent, in this order. So,
(x*y).as_independent(y) # returns (x, y)
(x*y).as_independent(x) # returns (1, x*y)
As a result, constant factors are recognized only in front of the expression, when the product is noncommutative.
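The same two calls can be run directly to verify this split (continuing with the noncommutative x and y from the question):
from sympy import Symbol

x = Symbol('x', commutative=False)
y = Symbol('y', commutative=False)

# Only a leading independent factor is split off a noncommutative product.
print((x*y).as_independent(y))  # (x, y)
print((x*y).as_independent(x))  # (1, x*y)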
I don't think this can be fixed without rewriting the core method as_independent to support noncommutative products (possibly returning independent * dependent * independent2), which looks like a lot of work to me. Before doing that work, I'd want to know whether the objective (an antiderivative with noncommuting variables) is well defined.

I am trying to solve a system of equations with two variables in tkinter python

Good morning,
I am trying to solve a system of equations with 2 variables in Python, using Tkinter to display the answers on the screen. I did most of it, but I cannot display the answers.
This is the error I am seeing:
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\edwin\AppData\Local\Programs\Python\Python36-32\lib\tkinter\__init__.py", line 1699, in __call__
return self.func(*args)
File "C:\Users\edwin\AppData\Local\Programs\Python\Python36-32\ed.py", line 122, in Calculate
z = np.linalg.solve ( a, b)
File "C:\Users\edwin\AppData\Local\Programs\Python\Python36-32\lib\site-packages\numpy\linalg\linalg.py", line 375, in solve
r = gufunc(a, b, signature=signature, extobj=extobj)
File "C:\Users\edwin\AppData\Local\Programs\Python\Python36-32\lib\site-packages\numpy\linalg\linalg.py", line 90, in _raise_linalgerror_singular
raise LinAlgError("Singular matrix")
numpy.linalg.linalg.LinAlgError: Singular matrix
A singular matrix is not invertible: it does not satisfy the property that the equation Ax = b has exactly one solution for each b in K^n. This means that the system you are attempting to solve was either converted into matrix form incorrectly, or does not have a unique solution.
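One way to guard against this before calling the solver (a sketch; the rank check and least-squares fallback are suggestions of mine, not part of the original answer):
import numpy as np

def solve_system(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    if np.linalg.matrix_rank(a) < a.shape[0]:
        # Singular (or nearly so): no unique solution exists.
        # Fall back to the minimum-norm least-squares solution.
        return np.linalg.lstsq(a, b, rcond=None)[0]
    return np.linalg.solve(a, b)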

python pandas rolling function with two arguments in a grouped DataFrame

This is an extension of my previous problem,
python pandas rolling function with two arguments.
How do I perform the same by group? Let's say the 'C' column below is used for grouping.
I am struggling to:
Group by column 'C'
Within each group, sort by 'A'
Within each group, apply a rolling function taking two arguments, like kendalltau, to columns 'A' and 'B'.
The expected result would be the original DataFrame with an added rolling 'tau' column per group.
I have been trying the 'pass an index' workaround described in the link above, but the complexity of this case is beyond my skills :-( . This is a toy example, not far from what I am working with, so for simplicity I used randomly generated data.
import numpy as np
import pandas as pd
import scipy as sp
import scipy.stats

rand = np.random.RandomState(1)
dff = pd.DataFrame({'A': np.arange(20),
                    'B': rand.randint(100, 120, 20),
                    'C': rand.randint(0, 2, 20)})

def my_tau_indx(indx):
    x = dff.iloc[indx, 0]
    y = dff.iloc[indx, 1]
    tau = sp.stats.mstats.kendalltau(x, y)[0]
    return tau

dff['tau'] = dff.sort_values(['C', 'A']).groupby('C').rolling(window=5).apply(my_tau_indx, args=([dff.index.values]))
Every fix I make creates yet another bug...
The above issue has been solved by Nickil Maveli, and the fix works with numpy 1.11.0, pandas 0.18.1, scipy 0.17.1, and conda 4.1.4. It generates some warnings, but works.
On another machine with the latest and greatest numpy 1.12.0, pandas 0.19.2, scipy 0.18.1, conda 3.10.0 and BLAS/LAPACK, it does not work and I get the traceback below. This seems version-related: after I upgraded the first machine, it also stopped working... In the name of science... ;-)
As Nickil suggested, this was due to an incompatibility between numpy 1.11 and 1.12. Downgrading numpy helped. Since I had BLAS/LAPACK on Windows, I installed numpy 1.11.3+mkl from http://www.lfd.uci.edu/~gohlke/pythonlibs/.
Traceback (most recent call last):
File "<ipython-input-4-bbca2c0e986b>", line 16, in <module>
t = grp.apply(func)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\groupby.py", line 651, in apply
return self._python_apply_general(f)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\groupby.py", line 655, in _python_apply_general
self.axis)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\groupby.py", line 1527, in apply
res = f(group)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\groupby.py", line 647, in f
return func(g, *args, **kwargs)
File "<ipython-input-4-bbca2c0e986b>", line 15, in <lambda>
func = lambda x: pd.Series(pd.rolling_apply(np.arange(len(x)), 5, my_tau_indx), x.index)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\stats\moments.py", line 584, in rolling_apply
kwargs=kwargs)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\stats\moments.py", line 240, in ensure_compat
result = getattr(r, name)(*args, **kwds)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\window.py", line 863, in apply
return super(Rolling, self).apply(func, args=args, kwargs=kwargs)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\window.py", line 621, in apply
center=False)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\window.py", line 560, in _apply
result = calc(values)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\window.py", line 555, in calc
return func(x, window, min_periods=self.min_periods)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\window.py", line 618, in f
kwargs)
File "pandas\algos.pyx", line 1831, in pandas.algos.roll_generic (pandas\algos.c:51768)
File "<ipython-input-4-bbca2c0e986b>", line 8, in my_tau_indx
x = dff.iloc[indx, 0]
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\indexing.py", line 1294, in __getitem__
return self._getitem_tuple(key)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\indexing.py", line 1560, in _getitem_tuple
retval = getattr(retval, self.name)._getitem_axis(key, axis=axis)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\indexing.py", line 1614, in _getitem_axis
return self._get_loc(key, axis=axis)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\indexing.py", line 96, in _get_loc
return self.obj._ixs(key, axis=axis)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\core\frame.py", line 1908, in _ixs
label = self.index[i]
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\indexes\range.py", line 510, in __getitem__
return super_getitem(key)
File "C:\Apps\Anaconda\v2_1_0_x64\envs\python35\lib\site-packages\pandas\indexes\base.py", line 1275, in __getitem__
result = getitem(key)
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
One way to achieve this would be to iterate through every group and use pd.rolling_apply on each of those groups.
import scipy.stats as ss

def my_tau_indx(indx):
    x = dff.iloc[indx, 0]
    y = dff.iloc[indx, 1]
    tau = ss.mstats.kendalltau(x, y)[0]
    return tau

grp = dff.sort_values(['A', 'C']).groupby('C', group_keys=False)
func = lambda x: pd.Series(pd.rolling_apply(np.arange(len(x)), 5, my_tau_indx), x.index)
t = grp.apply(func)
dff.reindex(t.index).assign(tau=t)
EDIT:
def my_tau_indx(indx):
    x = dff.ix[indx, 0]
    y = dff.ix[indx, 1]
    tau = ss.mstats.kendalltau(x, y)[0]
    return tau

grp = dff.sort_values(['A', 'C']).groupby('C', group_keys=False)
t = grp.rolling(5).apply(my_tau_indx).get('A')
grp.head(dff.shape[0]).reindex(t.index).assign(tau=t)
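For what it's worth, on current pandas (where pd.rolling_apply and .ix no longer exist) the same per-group rolling tau can be sketched with a plain loop inside groupby.apply; this is my adaptation, not part of the original answer, and it assumes the same dff as above:
import numpy as np
import pandas as pd
import scipy.stats as ss

def rolling_tau(group, window=5):
    # Kendall's tau over each trailing window of rows within one group.
    taus = [np.nan] * len(group)
    for end in range(window, len(group) + 1):
        chunk = group.iloc[end - window:end]
        taus[end - 1] = ss.mstats.kendalltau(chunk['A'], chunk['B'])[0]
    return pd.Series(taus, index=group.index)

dff['tau'] = (dff.sort_values(['C', 'A'])
                 .groupby('C', group_keys=False)
                 .apply(rolling_tau))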

broadcasting linalg.pinv on a 3D theano tensor

In the example below there is a 3D numpy array of shape (4, 3, 3), together with a solution for computing the pinv of each of the four 3x3 matrices in numpy. I tried to use the same approach in Theano, hoping it was implemented the same way, but it failed. Any idea how to do it in Theano?
import numpy as np
import theano
import theano.tensor as T

dt = np.dtype(np.float32)
a = [[[12, 3, 1],
      [2, 4, 1],
      [2, 4, 2]],
     [[12, 3, 3],
      [2, 4, 4],
      [2, 4, 5]],
     [[12, 3, 6],
      [2, 4, 5],
      [2, 4, 4]],
     [[12, 3, 3],
      [2, 4, 5],
      [2, 4, 6]]]
a = np.asarray(a, dtype=dt)
print(a.shape)

apinv = np.zeros((4, 3, 3))
print(np.linalg.pinv(a[0, :, :]).shape)

# numpy solution (Python 2; under Python 3, wrap map(...) in list(...))
apinv = map(lambda n: np.linalg.pinv(n), a)
apinv = np.asarray(apinv, dtype=dt)

# theano solution (not working)
at = T.tensor3('a')
apinvt = map(lambda n: T.nlinalg.pinv(n), at)
The error is:
Original exception was:
Traceback (most recent call last):
File "pydevd.py", line 2403, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "pydevd.py", line 1794, in run
launch(file, globals, locals) # execute the script
File "exp_thn_pinv_map.py", line 35, in <module>
apinvt = map(lambda n: T.nlinalg.pinv(n), at)
File "theano/tensor/var.py", line 549, in __iter__
raise TypeError(('TensorType does not support iteration. '
TypeError: TensorType does not support iteration. Maybe you are using builtin.sum instead of theano.tensor.sum? (Maybe .max?)
This is occurring because, as the error message indicates, the symbolic variable at is not iterable.
The fundamental problem here is that you're mixing immediately-executed Python code with delayed-execution symbolic Theano code.
You need to use a symbolic loop, not a Python loop. The correct solution is to use Theano's scan operator:
at = T.tensor3('a')
apinvt, _ = theano.scan(lambda n: T.nlinalg.pinv(n), at, strict=True)
f = theano.function([at], apinvt)
print(np.allclose(f(a), apinv))
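As an aside, newer NumPy versions broadcast np.linalg.pinv over stacks of matrices, so the numpy half of this example no longer needs map (a quick check of mine, independent of the Theano question):
import numpy as np

a = np.random.randn(4, 3, 3).astype(np.float32)
apinv = np.linalg.pinv(a)  # pinv of each 3x3 slice, shape (4, 3, 3)

# Same result as inverting each slice individually.
print(np.allclose(apinv, np.stack([np.linalg.pinv(m) for m in a]), atol=1e-4))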
