Calculating the exponential of a matrix in Python

I want to calculate the exponential of a 200x200 matrix with expm(B), but I get the following error. Thanks a lot for your help.
exp_matrix2 = expm(B)
File ".../python2.7/site-packages/scipy/linalg/matfuncs.py", line 261, in expm
return scipy.sparse.linalg.expm(A)
File ".../python2.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 582, in expm
return _expm(A, use_exact_onenorm='auto')
File ".../python2.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 618, in _expm
eta_1 = max(h.d4_loose, h.d6_loose)
File ".../python2.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 457, in d4_loose
structure=self.structure)**(1/4.)
File ".../python2.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 301, in _onenormest_matrix_power
MatrixPowerOperator(A, p, structure=structure))
File ".../python2.7/site-packages/scipy/sparse/linalg/_onenormest.py", line 95, in onenormest
est, v, w, nmults, nresamples = _onenormest_core(A, A.H, t, itmax)
File "/python2.7/site-packages/scipy/sparse/linalg/_onenormest.py", line 424, in _onenormest_core
Z = np.asarray(AT_linear_operator.matmat(S))
File ".../python2.7/site-packages/scipy/sparse/linalg/interface.py", line 326, in matmat
Y = self._matmat(X)
File ".../python2.7/site-packages/scipy/sparse/linalg/interface.py", line 468, in _matmat
return super(_CustomLinearOperator, self)._matmat(X)
File ".../python2.7/site-packages/scipy/sparse/linalg/interface.py", line 174, in _matmat
return np.hstack([self.matvec(col.reshape(-1,1)) for col in X.T])
File "/home/dk2518/anaconda2/lib/python2.7/site-packages/scipy/sparse/linalg/interface.py", line 219, in matvec
y = self._matvec(x)
File ".../python2.7/site-packages/scipy/sparse/linalg/interface.py", line 471, in _matvec
return self.__matvec_impl(x)
File ".../python2.7/site-packages/scipy/sparse/linalg/interface.py", line 266, in rmatvec
y = self._rmatvec(x)
File ".../python2.7/site-packages/scipy/sparse/linalg/matfuncs.py", line 203, in _rmatvec
x = A_T.dot(x)
ValueError: shapes (207,207) and (1,207) not aligned: 207 (dim 1) != 1 (dim 0)

As discussed in @TomNash's link, a large np.matrix is the problem.
ndarray and sparse matrix work fine:
In [309]: slg.expm(np.ones((200,200)));
In [310]: slg.expm(sparse.csc_matrix(np.ones((200,200))));
In [311]: slg.expm(np.matrix(np.ones((200,200))));
ValueError: shapes (200,200) and (1,200) not aligned: 200 (dim 1) != 1 (dim 0)
Not every np.matrix gives problems:
In [313]: slg.expm(np.matrix(np.eye(200)));
Turning the np.matrix back into an ndarray works:
In [315]: slg.expm(np.matrix(np.ones((200,200))).A);
This uses slg.matfuncs._expm(A, use_exact_onenorm='auto')
which has an early test:
if use_exact_onenorm == "auto":
    # Hardcode a matrix order threshold for exact vs. estimated one-norms.
    use_exact_onenorm = A.shape[0] < 200
That explains, in part, why we get the problem with a (200,200) matrix, but not a (199,199).
This works:
slg.matfuncs._expm(M, use_exact_onenorm=True);
It fails with False. But from there I get lost in the details of how it sets up _ExpmPadeHelper and attempts a Pade order 3.
In short: avoid np.matrix, especially if it is (200,200) or larger.
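For completeness, here is a minimal workaround sketch (the stand-in B is hypothetical): convert the np.matrix to a plain ndarray, via np.asarray or the .A attribute, before calling expm.
import numpy as np
from scipy.linalg import expm

# Hypothetical stand-in for the original B, which was an np.matrix
B = np.matrix(np.ones((200, 200)))

# Passing a plain ndarray avoids the "shapes ... not aligned" ValueError above
exp_matrix2 = expm(np.asarray(B))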

Related

MinMaxScaler for dataframe: ValueError: setting an array element with a sequence

I am preprocessing data so that I can apply K-means clustering to time-series data grouped by hour. When I normalize the data, it shows this error:
Traceback (most recent call last):
File ".venv\lib\site-packages\pandas\core\series.py", line 191, in wrapper
raise TypeError(f"cannot convert the series to {converter}")
TypeError: cannot convert the series to <class 'float'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".venv\timesequence.py", line 210, in <module>
matrix = pd.DataFrame(scaler.fit_transform(x_calls), columns=df_hours.columns, index=df_hours.index)
File ".venv\lib\site-packages\sklearn\base.py", line 867, in fit_transform
return self.fit(X, **fit_params).transform(X)
File ".venv\lib\site-packages\sklearn\preprocessing\_data.py", line 420, in fit
return self.partial_fit(X, y)
File ".venv\lib\site-packages\sklearn\preprocessing\_data.py", line 457, in partial_fit
X = self._validate_data(
File ".venv\lib\site-packages\sklearn\base.py", line 577, in _validate_data
X = check_array(X, input_name="X", **check_params)
File ".venv\lib\site-packages\sklearn\utils\validation.py", line 856, in check_array
array = np.asarray(array, order=order, dtype=dtype)
File ".venv\lib\site-packages\pandas\core\generic.py", line 2064, in __array__
return np.asarray(self._values, dtype=dtype)
ValueError: setting an array element with a sequence.
#-------------------- Preprocessing ds --------------------
counter_ = 0
zero = 0
df_hours = pd.DataFrame({
    'Hour': [],
    'SumView': [],
    'CountStudent': []
}, dtype=object)
while counter_ < 24:
    if counter_ in sub_data_hour['Hour']:
        row = sub_data_hour.loc[(pd.to_numeric(sub_data_hour['Hour'], errors='coerce')) == counter_]
        df_hours.loc[len(df_hours.index)] = [counter_, row['SumView'], row['CountStudent']]
    else:
        df_hours.loc[len(df_hours.index)] = [counter_, zero, zero]
    counter_ += 1
#---------- Normalize dataset ------------
x_calls = df_hours.columns[2:]
scaler = MinMaxScaler()
matrix = pd.DataFrame(scaler.fit_transform(df_hours[x_calls]), columns=x_calls, index=df_hours.index)
I tried .to_numpy(), .values, and [['column1','column2']] following this post: pandas dataframe columns scaling with sklearn. But it did not work. Could anyone please help me fix this? Thanks.
The problem here is the datatype of the df_hours I preprocessed: row['SumView'] and row['CountStudent'] are Series, so each cell of df_hours ends up holding a sequence instead of a scalar.
Solution: change row['SumView'] to row['SumView'].values[0] and do the same with row['CountStudent'].
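For reference, a minimal sketch of the corrected loop body under that fix, using the names from the question:
while counter_ < 24:
    if counter_ in sub_data_hour['Hour']:
        row = sub_data_hour.loc[pd.to_numeric(sub_data_hour['Hour'], errors='coerce') == counter_]
        # .values[0] extracts the scalar, so each cell holds a number rather than a Series
        df_hours.loc[len(df_hours.index)] = [counter_, row['SumView'].values[0], row['CountStudent'].values[0]]
    else:
        df_hours.loc[len(df_hours.index)] = [counter_, zero, zero]
    counter_ += 1
With scalar cells, scaler.fit_transform(df_hours[x_calls]) no longer raises the "setting an array element with a sequence" error.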

Issue using mpcalc.advection( ) to calculate advection of a scalar field

I'm attempting to follow this training example to calculate QG omega on NCEP/NCAR data but I'm getting hung up on mpcalc.advection().
It appears as if my dx and dy variables have different shapes, but I'm directly following the routine in an online example that supposedly works.
import numpy as np
import xarray as xr
import metpy.calc as mc
import metpy.constants as mpconstants
from metpy.units import units

# CONSTANTS
# ---------
sigma = 2.0e-6 * units('m^2 Pa^-2 s^-2')
f0 = 1e-4 * units('s^-1')
Rd = mpconstants.Rd

path = './'
uf = 'uwnd.2018.nc'
vf = 'vwnd.2018.nc'
af = 'air.2018.nc'

ads = xr.open_dataset(path+af).metpy.parse_cf()
uds = xr.open_dataset(path+uf).metpy.parse_cf()
vds = xr.open_dataset(path+vf).metpy.parse_cf()

a700 = ads['air'].metpy.sel(
    level=700 * units.hPa,
    time='2018-01-04T12')
u700 = uds['uwnd'].metpy.sel(
    level=700 * units.hPa,
    time='2018-01-04T12')
v700 = vds['vwnd'].metpy.sel(
    level=700 * units.hPa,
    time='2018-01-04T12')

lats = ads['lat'].metpy.unit_array
lons = ads['lon'].metpy.unit_array
X, Y = np.meshgrid(lons, lats)
dx, dy = mc.lat_lon_grid_deltas(lons, lats)

avort = mc.absolute_vorticity(u700, v700,
                              dx, dy, lats[:, None])

print('Array shape:', avort.shape)
print('DX shape:', dx.shape)
print('DY shape:', dy.shape)
print('U700 shape:', u700.shape)
print('V700 shape:', v700.shape)

vortadv = mc.advection(avort, (u700, v700), (dx, dy)).to_base_units()
Here's the error message; it also looks like I may have a unit issue:
Found lat/lon values, assuming latitude_longitude for projection grid_mapping variable
Found lat/lon values, assuming latitude_longitude for projection grid_mapping variable
Found lat/lon values, assuming latitude_longitude for projection grid_mapping variable
/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/metpy/calc/basic.py:1033: UserWarning: Input over 1.5707963267948966 radians. Ensure proper units are given.
'Ensure proper units are given.'.format(max_radians))
/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/pint/quantity.py:888: RuntimeWarning: invalid value encountered in true_divide
magnitude = magnitude_op(new_self._magnitude, other._magnitude)
Array shape: (73, 144)
DX shape: (73, 143)
DY shape: (72, 144)
U700 shape: (73, 144)
V700 shape: (73, 144)
Traceback (most recent call last):
File "metpy.decomp.py", line 56, in <module>
vortadv = mc.advection(avort, (u700,v700), (dx,dy)).to_base_units()
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/metpy/xarray.py", line 570, in wrapper
return func(*args, **kwargs)
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/metpy/calc/kinematics.py", line 61, in wrapper
ret = func(*args, **kwargs)
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/metpy/calc/kinematics.py", line 320, in advection
wind = _stack(wind)
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/metpy/calc/kinematics.py", line 24, in _stack
return concatenate([a[np.newaxis] if iterable(a) else a for a in arrs], axis=0)
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/metpy/calc/kinematics.py", line 24, in <listcomp>
return concatenate([a[np.newaxis] if iterable(a) else a for a in arrs], axis=0)
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/xarray/core/dataarray.py", line 642, in __getitem__
return self.isel(indexers=self._item_key_to_dict(key))
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/xarray/core/dataarray.py", line 1040, in isel
indexers, drop=drop, missing_dims=missing_dims
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/xarray/core/dataset.py", line 2014, in _isel_fancy
name, var, self.indexes[name], var_indexers
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/xarray/core/indexes.py", line 106, in isel_variable_and_index
new_variable = variable.isel(indexers)
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/xarray/core/variable.py", line 1118, in isel
return self[key]
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/xarray/core/variable.py", line 766, in __getitem__
dims, indexer, new_order = self._broadcast_indexes(key)
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/xarray/core/variable.py", line 612, in _broadcast_indexes
return self._broadcast_indexes_outer(key)
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/xarray/core/variable.py", line 688, in _broadcast_indexes_outer
return dims, OuterIndexer(tuple(new_key)), None
File "/work1/jsa/anconda3/envs/earth/lib/python3.7/site-packages/xarray/core/indexing.py", line 410, in __init__
f"invalid indexer array, does not have integer dtype: {k!r}"
TypeError: invalid indexer array, does not have integer dtype: array(None, dtype=object)
Thanks in advance for any help!
So the problem is that in MetPy 1.0 the signature of advection changed, to make it easier to use, to advection(scalar, u, v). Also, since you're working with xarray data and MetPy 1.0, it can handle all of the coordinate work for you; you don't even need to deal with dx and dy manually:
subset = dict(level=700 * units.hPa, time='2018-01-04T12')
a700 = ads['air'].metpy.sel(**subset)
u700 = uds['uwnd'].metpy.sel(**subset)
v700 = vds['vwnd'].metpy.sel(**subset)
avort = mc.absolute_vorticity(u700, v700)
vortadv = mc.advection(avort, u700, v700).to_base_units()
We need to update that training example, but right now I'd recommend looking at the gallery examples 500 hPa Geopotential Heights, Absolute Vorticity, and Winds and 500 hPa Vorticity Advection.

Librosa feature tonnetz ends up in TypeError

I'm trying to extract tonnetz from the harmonic components of my audio. My code is basically a copy paste from the tutorial https://librosa.github.io/librosa/generated/librosa.feature.tonnetz.html
My code:
import librosa
def extract_feature(file_name):
    y, sr = librosa.load(file_name)
    y = librosa.effects.harmonic(y)
    tonnetz = librosa.feature.tonnetz(y=y, sr=sr)
    return tonnetz

print extract_feature("out.wav")
Here's the stack trace:
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/librosa/core/pitch.py:160: DeprecationWarning: object of type <type 'numpy.float64'> cannot be safely interpreted as an integer.
bins = np.linspace(-0.5, 0.5, np.ceil(1./resolution), endpoint=False)
Traceback (most recent call last):
File "test_python.py", line 10, in <module>
print extract_feature("out.wav")
File "test_python.py", line 6, in extract_feature
tonnetz = librosa.feature.tonnetz(y=y, sr=sr)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/librosa/feature/spectral.py", line 1157, in tonnetz
chroma = chroma_cqt(y=y, sr=sr)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/librosa/feature/spectral.py", line 936, in chroma_cqt
real=False))
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/librosa/core/constantq.py", line 251, in cqt
cqt_resp.append(__cqt_response(my_y, n_fft, my_hop, fft_basis))
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/librosa/core/constantq.py", line 531, in __cqt_response
D = stft(y, n_fft=n_fft, hop_length=hop_length, window=np.ones)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/librosa/core/spectrum.py", line 167, in stft
y_frames = util.frame(y, frame_length=n_fft, hop_length=hop_length)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/librosa/util/utils.py", line 102, in frame
strides=(y.itemsize, hop_length * y.itemsize))
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/stride_tricks.py", line 102, in as_strided
array = np.asarray(DummyArray(interface, base=x))
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/numeric.py", line 531, in asarray
return array(a, dtype, copy=False, order=order)
TypeError: 'float' object cannot be interpreted as an index
Any idea how to fix this?
I rolled back to numpy 1.10.1 to fix this issue (although I was running chroma_cqt when I got this error).
In numpy > 1.11, as_strided makes a DummyArray and asarray does not handle the float array properly; numpy 1.10.1 apparently has better stride_tricks.
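If you take that route, here is a minimal check before calling librosa (the pin itself happens outside Python, e.g. pip install numpy==1.10.1):
import numpy as np

# Confirm the rollback took effect before calling librosa.feature.tonnetz
print(np.__version__)  # expected: 1.10.1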

Multilayer perceptron with target variable as array instead of a single value

I am new to deep learning and I have been trying to use the Theano library to train my data. The MLP tutorial here has a scalar output value, while my use case has an array with a 1 at the position corresponding to the output value.
For example (assume the possible scalar values are 0,1,2,3,4,5),
0 = [1,0,0,0,0,0]
1 = [0,1,0,0,0,0]
2 = [0,0,1,0,0,0]
I have only modified the code to read my input and output (the output is now a 2-dimensional array, or a matrix in Theano parlance). The other parts of the code are as in the MLP tutorial linked above.
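For illustration, a minimal NumPy sketch of that one-hot encoding (not from the original post; assumes the six label values 0 to 5 listed above):
import numpy as np

labels = np.array([0, 1, 2])                  # hypothetical scalar targets
one_hot = np.eye(6, dtype='int32')[labels]    # one row per target
# one_hot[0] -> [1, 0, 0, 0, 0, 0], one_hot[1] -> [0, 1, 0, 0, 0, 0], ...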
The error I am getting is in the following function
test_model = theano.function(inputs=[index],
        outputs=classifier.errors(y),
        givens={
            x: test_set_x[index * batch_size:(index + 1) * batch_size],
            y: test_set_y[index * batch_size:(index + 1) * batch_size]})  # line 286
Error stack:
Traceback (most recent call last):
File "mlp.py", line 398, in <module>
test_mlp()
File "mlp.py", line 286, in test_mlp
y: test_set_y[index * batch_size:(index + 1) * batch_size]})
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function.py", line 223, in function
profile=profile)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 490, in pfunc
no_default_updates=no_default_updates)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 241, in rebuild_collect_shared
cloned_v = clone_v_get_shared_updates(outputs, copy_inputs_over)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 92, in clone_v_get_shared_updates
clone_a(v.owner, copy_inputs_over)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 131, in clone_a
clone_v_get_shared_updates(i, copy_inputs_over)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 92, in clone_v_get_shared_updates
clone_a(v.owner, copy_inputs_over)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 131, in clone_a
clone_v_get_shared_updates(i, copy_inputs_over)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 92, in clone_v_get_shared_updates
clone_a(v.owner, copy_inputs_over)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 135, in clone_a
strict=rebuild_strict)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/graph.py", line 213, in clone_with_new_inputs
new_inputs[i] = curr.type.filter_variable(new)
File "/usr/local/lib/python2.7/dist-packages/theano/tensor/type.py", line 205, in filter_variable
self=self)
TypeError: Cannot convert Type TensorType(int64, matrix) (of Variable Subtensor{int64:int64:}.0) into Type TensorType(int32, vector). You can try to manually convert Subtensor{int64:int64:}.0 into a TensorType(int32, vector).
I would like to know how to change this theano.function to accommodate the y value as a matrix.
You need to define y as T.imatrix() instead of T.lvector().
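A minimal sketch of that change (variable name as in the tutorial; the exact dtype depends on how test_set_y is stored):
import theano.tensor as T

# Before (scalar labels): y was declared as an integer vector
# y = T.lvector('y')

# After (one-hot rows): declare y as an integer matrix instead
y = T.imatrix('y')   # use T.lmatrix('y') if test_set_y holds int64 data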

Matplotlib: Quiver 2D array

I am struggling with the quiver function in Python.
I want to create a 2D vector field, using quiver, from a given array that contains the two components of the vectors.
Given the array b:
>>> b
VigraArray(shape=(512, 512, 2), axistags=x y c, dtype=float32, data=
[[[ 0.59471679 0.51902866 0.38904327 ..., -0.56878477 -0.50834674
-0.48382956]
[ 0.58222073 0.50713873 0.37990916 ..., -0.56091702 -0.50057167
-0.47613338]
[ 0.53815156 0.46551338 0.34787226 ..., -0.54245669 -0.48314109
-0.45911735]
...,
and using quiver:
>>> X,Y = meshgrid(range(b.shape[0]),range(b.shape[1]))
>>> quiver(X,Y,b[...,0],b[...,1])
returns:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/pymodules/python2.7/matplotlib/pyplot.py", line 2892, in quiver
ret = ax.quiver(*args, **kw)
File "/usr/lib/pymodules/python2.7/matplotlib/axes.py", line 6641, in quiver
q = mquiver.Quiver(self, *args, **kw)
File "/usr/lib/pymodules/python2.7/matplotlib/quiver.py", line 419, in __init__
self.set_UVC(U, V, C)
File "/usr/lib/pymodules/python2.7/matplotlib/quiver.py", line 463, in set_UVC
U = ma.masked_invalid(U, copy=False).ravel()
File "/usr/lib/python2.7/dist-packages/numpy/ma/core.py", line 3969, in ravel
r._mask = ndarray.ravel(self._mask).reshape(r.shape)
File "/usr/lib/python2.7/dist-packages/vigra/arraytypes.py", line 1308, in reshape
res = numpy.ndarray.reshape(self, shape, order)
TypeError: an integer is required
I have never had problems using VigraArray with matplotlib, so I don't think that is the problem. Thanks for any help!
As a matter of fact it seems that pylab.quiver does not handle vigra.VigraArray correctly. The following substitution worked for me:
b[...,0] = numpy.array(b[...,0])
b[...,1] = numpy.array(b[...,1])
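Along the same lines, a minimal sketch (assuming b is the VigraArray from the question) that converts both components to plain NumPy arrays before handing them to quiver:
import numpy as np
import matplotlib.pyplot as plt

U = np.asarray(b[..., 0])   # plain ndarrays keep the masked-array handling in quiver happy
V = np.asarray(b[..., 1])

X, Y = np.meshgrid(range(b.shape[0]), range(b.shape[1]))
plt.quiver(X, Y, U, V)
plt.show()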
