skcuda.linalg.PCA's fit_transform throws error - python

I am trying to run PCA (Principal component analysis) on GPU. I am using skcuda.linalg.PCA for that purpose, but it's not working. From their tutorial (https://scikit-cuda.readthedocs.io/en/latest/generated/skcuda.linalg.PCA.html):
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
import numpy as np
import skcuda.linalg as linalg
from skcuda.linalg import PCA as cuPCA
pca = cuPCA(n_components=4) # map the data to 4 dimensions
X = np.random.rand(1000,100) # 1000 samples of 100-dimensional data vectors
X_gpu = gpuarray.GPUArray((1000,100), np.float64, order="F") # note that order="F" or a transpose is necessary. fit_transform requires row-major matrices, and column-major is the default
X_gpu.set(X) # copy data to gpu
T_gpu = pca.fit_transform(X_gpu) # calculate the principal components
When I run it I get the following error:
cublasInternalError Traceback (most recent call last)
<ipython-input-31-02aaf0fa19e4> in <module>
8 X_gpu = gpuarray.GPUArray((1000,100), np.float64, order="F") # note that order="F" or a transpose is necessary. fit_transform requires row-major matrices, and column-major is the default
9 X_gpu.set(X) # copy data to gpu
---> 10 T_gpu = pca.fit_transform(X_gpu)
/opt/conda/lib/python3.7/site-packages/skcuda/linalg.py in fit_transform(self, X_gpu)
204 cuGemv (self.h, 'n', p, k, -1.0, P_gpu.gpudata, p, U_gpu.gpudata, 1, 1.0, P_gpu[:,k].gpudata, 1)
205
--> 206 l2 = cuNrm2(self.h, p, P_gpu[:,k].gpudata, 1)
207 cuScal(self.h, p, 1.0/l2, P_gpu[:,k].gpudata, 1)
208 cuGemv(self.h, 'n', n, p, 1.0, R_gpu.gpudata, n, P_gpu[:,k].gpudata, 1, 0.0, T_gpu[:,k].gpudata, 1)
/opt/conda/lib/python3.7/site-packages/skcuda/cublas.py in cublasDnrm2(handle, n, x, incx)
1295 n, int(x), incx,
1296 ctypes.byref(result))
-> 1297 cublasCheckStatus(status)
1298 return np.float64(result.value)
1299
/opt/conda/lib/python3.7/site-packages/skcuda/cublas.py in cublasCheckStatus(status)
177 raise cublasError
178 else:
--> 179 raise e
180
181 # Helper functions:
cublasInternalError
Initially I was running this on my own data and got this error. Then I decided to run the example and got exactly the same error. Does anyone know what the problem is here? I am using a Kaggle notebook with a Tesla T4 GPU. Thanks.
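As a sanity check (a sketch assuming scikit-learn is installed; this is not from the original post), the same reduction can be run on the CPU to confirm that the data and the PCA itself are fine, which would point the finger at the GPU/cuBLAS stack:
# Hypothetical sanity check: if this works, the failure is in the
# GPU/cuBLAS stack rather than in the data or the PCA settings.
import numpy as np
from sklearn.decomposition import PCA
X = np.random.rand(1000, 100)
T = PCA(n_components=4).fit_transform(X)
print(T.shape)  # (1000, 4)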

Related

curve_fit multivariable arrays non-linear regression

I am trying to fit the coefficients of a multivariate function with curve_fit. All variables are arrays of shape (1000,). Manually I can fit the curves as follows. First I define my function, where the variables are [dphi1,dphi2,phi1,phi2,M] and the coefficients are [c1,c2,c3,c4,c5,c6,c7,c8,c9]:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import math
# Function definition
def ddphi1(dphi1,dphi2,phi1,phi2,M,c1,c2,c3,c4,c5,c6,c7,c8,c9):
    return (-(c1*np.sin(phi1-phi2)*np.cos(phi1-phi2)*dphi1**2)-(c2*np.sin(phi1-phi2)*dphi2**2)+(c3*np.cos(phi1-phi2)*np.sin(phi2))+(c4*np.cos(phi1-phi2)*(dphi2-dphi1))-(c5*np.sin(phi1))+c6*M-c7*dphi1)/(c8-(c9*np.cos(phi1-phi2)*np.cos(phi1-phi2)))
my first prediction of the coefficients:
p = [0.5625, 0.375, 27.590625000000003, 0.09375, 55.18125, 62.5, 0.425, 1, 0.5625]
I calculate the values of the function iteratively. I take the length of one of the variables, since they all have the same size:
n = len(time1)
y = np.empty(n)
for i in range(n):
    y[i] = ddphi1(dphi11[i],dphi22[i],phi11[i],phi22[i],M[i],p[0],p[1],p[2],p[3],p[4],p[5],p[6],p[7],p[8])
plt.plot(time1, ddphi11)
plt.plot(time1, y, 'r')
[Plot: predicted vs. real data]
Now the idea is to calculate the coefficients automatically with curve_fit, as follows. Note that ddphi1 is my callback function and ddphi11 is my data of shape (1000,), as are the other variables.
from scipy.optimize import curve_fit
g = [0.56, 0.37, 27.63, 0.094, 55.18, 62.5, 0.625, 1, 0.56]
c, cov = curve_fit(ddphi1, (dphi1, dphi2, phi1, phi2, M), ddphi11, g)
print(c)
and I receive this error
--------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-158-e8e42e7b1216> in <module>()
1 from scipy.optimize import curve_fit
2
----> 3 c,cov =curve_fit(ddphi1,(dphi1,dphi2,phi1,phi2,M),ddphi11,g=[0.56, 0.37, 27.63, 0.094, 55.18, 62.5, 0.625, 1, 0.56])
4 print(c)
1 frames
/usr/local/lib/python3.7/dist-packages/scipy/optimize/minpack.py in curve_fit(f, xdata, ydata, p0, sigma, absolute_sigma, check_finite, bounds, method, jac, **kwargs)
719 # non-array_like `xdata`.
720 if check_finite:
--> 721 xdata = np.asarray_chkfinite(xdata, float)
722 else:
723 xdata = np.asarray(xdata, float)
/usr/local/lib/python3.7/dist-packages/numpy/lib/function_base.py in asarray_chkfinite(a, dtype, order)
484
485 """
--> 486 a = asarray(a, dtype=dtype, order=order)
487 if a.dtype.char in typecodes['AllFloat'] and not np.isfinite(a).all():
488 raise ValueError(
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (5,) + inhomogeneous part.
I have seen that most of the data passed into curve_fit is in list form. Maybe there is a solution for dealing with arrays? I hope you can help me, as I am new to Python.
I was finally able to solve it: the arrays just had to be stacked into a single global variable
X=np.column_stack([dphi11,dphi22,phi11,phi22,M])
Then define the model in terms of that stacked variable:
def model(X,c1,c2,c3,c4,c5,c6,c7,c8,c9):
    dphi1 = X[:,0]
    dphi2 = X[:,1]
    phi1 = X[:,2]
    phi2 = X[:,3]
    M = X[:,4]
    f = (-(c1*np.sin(phi1-phi2)*np.cos(phi1-phi2)*dphi1**2)-(c2*np.sin(phi1-phi2)*dphi2**2)+(c3*np.cos(phi1-phi2)*np.sin(phi2))+(c4*np.cos(phi1-phi2)*(dphi2-dphi1))-(c5*np.sin(phi1))+c6*M-c7*dphi1)/(c8-(c9*np.cos(phi1-phi2)*np.cos(phi1-phi2)))
    return f
and the magic begins
guesses = [0.56, 0.37, 27.63, 0.094, 55.18, 62.5, 0.625, 1, 0.56]
from scipy.optimize import curve_fit
popt, pcov = curve_fit(model, X, ddphi11, guesses)
print(popt)
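For reference, here is a minimal self-contained sketch of the same column_stack pattern with synthetic data and a made-up two-parameter model (toy_model and its parameters are illustrative, not the original pendulum equation):
import numpy as np
from scipy.optimize import curve_fit

def toy_model(X, a, b):
    x1, x2 = X[:, 0], X[:, 1]  # unpack the stacked columns inside the model
    return a * np.sin(x1) + b * x2

rng = np.random.default_rng(0)
x1 = rng.uniform(0, np.pi, 1000)
x2 = rng.uniform(0, 1, 1000)
X = np.column_stack([x1, x2])  # shape (1000, 2): one row per sample
y = toy_model(X, 2.0, -0.5) + rng.normal(0, 0.01, 1000)

popt, pcov = curve_fit(toy_model, X, y, p0=[1.0, 1.0])
print(popt)  # should come out close to [2.0, -0.5]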

TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'

I need to compute an integral of the type g(u)*J_n(u), where g(u) is a smooth function without zeros and J_n(u) is the Bessel function with infinitely many zeros, but I got the following error:
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
First I need to change variables from x to u and integrate in the new variable u, but since the function u(x) is not analytically invertible, I need to use interpolation to perform the inversion numerically.
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline
x = np.linspace(0.1, 100, 1000)
u = lambda x: x*np.exp(x)
dxdu_x = lambda x: 1/((1+x) * np.exp(x)) ## dxdu as function of x: not invertible
dxdu_u = InterpolatedUnivariateSpline(u(x), dxdu_x(x)) ## dxdu as function of u: change of variable
After this, the integral is:
from mpmath import mp
def f(n):
    integrand = lambda U: dxdu_u(U) * mp.besselj(n,U)
    bjz = lambda nth: mp.besseljzero(n, nth)
    return mp.quadosc(integrand, [0,mp.inf], zeros=bjz)
I use quadosc from mpmath and not quad from scipy because quadosc is more appropriate for integrals of rapidly oscillating functions, like Bessel functions. On the other hand, this forces me to use two different packages: scipy to calculate dxdu_u by interpolation, and mpmath to calculate the Bessel functions mp.besselj(n,U) and the integral of the product dxdu_u(U) * mp.besselj(n,U). So I suspect that this mix of two different packages could cause a conflict. When I run:
print(f(0))
I got the error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-38-ac2976a6b736> in <module>
12 return mp.quadosc(integrand, [0,mp.inf], zeros=bjz)
13
---> 14 f(0)
<ipython-input-38-ac2976a6b736> in f(n)
10 integrand = lambda U: dxdu_u(U) * mp.besselj(n,U)
11 bjz = lambda nth: mp.besseljzero(n, nth)
---> 12 return mp.quadosc(integrand, [0,mp.inf], zeros=bjz)
13
14 f(0)
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
Does anyone know how I can solve this problem?
Thanks
The full traceback (the part you snipped) shows that the error is in the __call__ method of the UnivariateSpline object. So indeed the problem is that the mpmath integration routine feeds in its mpf decimals, and scipy has no way of dealing with them.
The simplest fix is then to manually cast the offending part of the integrand's argument to a float:
integrand = lambda U: dxdu_u(float(U)) * mp.besselj(n,U)
In general this is prone to numerical errors (mpmath uses its high-precision variables on purpose!), so proceed with caution. In this specific case it is probably OK, because the interpolation is actually done in double precision anyway. Still, it is best to check the results.
A possible alternative is to avoid mpmath and use the weights argument of scipy.integrate.quad; see the docs (scroll down to the weights="sin" part).
Another alternative is to stick with mpmath all the way and implement the interpolation yourself in pure Python (this way, mpf objects are probably fine, since they support the usual arithmetic). It's likely that simple linear interpolation is enough. If not, it's not too big a deal to code up your own cubic spline interpolator. A sketch of the linear version follows.
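For instance, a pure-Python linear interpolator that keeps mpf values intact could look like this (a sketch, assuming a sorted grid and a query point inside it; mp_linear_interp is a made-up name):
import bisect

def mp_linear_interp(u_val, u_grid, y_grid):
    # Locate the interval [u_grid[i], u_grid[i+1]] containing u_val;
    # mpf values compare fine against plain floats, so bisect works.
    i = bisect.bisect_right(u_grid, u_val) - 1
    i = max(0, min(i, len(u_grid) - 2))
    t = (u_val - u_grid[i]) / (u_grid[i + 1] - u_grid[i])
    return y_grid[i] + t * (y_grid[i + 1] - y_grid[i])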
The full traceback:
In [443]: f(0)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-443-6bfbdbfff9c4> in <module>
----> 1 f(0)
<ipython-input-440-7ebeff3611f6> in f(n)
2 integrand = lambda U: dxdu_u(U) * mp.besselj(n,U)
3 bjz = lambda nth: mp.besseljzero(n, nth)
----> 4 return mp.quadosc(integrand, [0,mp.inf], zeros=bjz)
5
/usr/local/lib/python3.6/dist-packages/mpmath/calculus/quadrature.py in quadosc(ctx, f, interval, omega, period, zeros)
998 # raise ValueError("zeros do not appear to be correctly indexed")
999 n = 1
-> 1000 s = ctx.quadgl(f, [a, zeros(n)])
1001 def term(k):
1002 return ctx.quadgl(f, [zeros(k), zeros(k+1)])
/usr/local/lib/python3.6/dist-packages/mpmath/calculus/quadrature.py in quadgl(ctx, *args, **kwargs)
807 """
808 kwargs['method'] = 'gauss-legendre'
--> 809 return ctx.quad(*args, **kwargs)
810
811 def quadosc(ctx, f, interval, omega=None, period=None, zeros=None):
/usr/local/lib/python3.6/dist-packages/mpmath/calculus/quadrature.py in quad(ctx, f, *points, **kwargs)
740 ctx.prec += 20
741 if dim == 1:
--> 742 v, err = rule.summation(f, points[0], prec, epsilon, m, verbose)
743 elif dim == 2:
744 v, err = rule.summation(lambda x: \
/usr/local/lib/python3.6/dist-packages/mpmath/calculus/quadrature.py in summation(self, f, points, prec, epsilon, max_degree, verbose)
230 print("Integrating from %s to %s (degree %s of %s)" % \
231 (ctx.nstr(a), ctx.nstr(b), degree, max_degree))
--> 232 results.append(self.sum_next(f, nodes, degree, prec, results, verbose))
233 if degree > 1:
234 err = self.estimate_error(results, prec, epsilon)
/usr/local/lib/python3.6/dist-packages/mpmath/calculus/quadrature.py in sum_next(self, f, nodes, degree, prec, previous, verbose)
252 case the quadrature rule is able to reuse them.
253 """
--> 254 return self.ctx.fdot((w, f(x)) for (x,w) in nodes)
255
256
/usr/local/lib/python3.6/dist-packages/mpmath/ctx_mp_python.py in fdot(ctx, A, B, conjugate)
942 hasattr_ = hasattr
943 types = (ctx.mpf, ctx.mpc)
--> 944 for a, b in A:
945 if type(a) not in types: a = ctx.convert(a)
946 if type(b) not in types: b = ctx.convert(b)
/usr/local/lib/python3.6/dist-packages/mpmath/calculus/quadrature.py in <genexpr>(.0)
252 case the quadrature rule is able to reuse them.
253 """
--> 254 return self.ctx.fdot((w, f(x)) for (x,w) in nodes)
255
256
<ipython-input-440-7ebeff3611f6> in <lambda>(U)
1 def f(n):
----> 2 integrand = lambda U: dxdu_u(U) * mp.besselj(n,U)
3 bjz = lambda nth: mp.besseljzero(n, nth)
4 return mp.quadosc(integrand, [0,mp.inf], zeros=bjz)
5
at this point it starts using the scipy interpolation code
/usr/local/lib/python3.6/dist-packages/scipy/interpolate/fitpack2.py in __call__(self, x, nu, ext)
310 except KeyError:
311 raise ValueError("Unknown extrapolation mode %s." % ext)
--> 312 return fitpack.splev(x, self._eval_args, der=nu, ext=ext)
313
314 def get_knots(self):
/usr/local/lib/python3.6/dist-packages/scipy/interpolate/fitpack.py in splev(x, tck, der, ext)
366 return tck(x, der, extrapolate=extrapolate)
367 else:
--> 368 return _impl.splev(x, tck, der, ext)
369
370
/usr/local/lib/python3.6/dist-packages/scipy/interpolate/_fitpack_impl.py in splev(x, tck, der, ext)
596 shape = x.shape
597 x = atleast_1d(x).ravel()
--> 598 y, ier = _fitpack._spl_(x, der, t, c, k, ext)
599
600 if ier == 10:
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
_fitpack._spl_ is probably compiled code (for speed). It can't take the mpmath objects directly; it needs their values as C-compatible doubles.
To illustrate the problem, make a numpy array of mpmath objects:
In [444]: one,two = mp.mpmathify(1), mp.mpmathify(2)
In [445]: arr = np.array([one,two])
In [446]: arr
Out[446]: array([mpf('1.0'), mpf('2.0')], dtype=object)
In [447]: arr.astype(float) # default 'unsafe' casting
Out[447]: array([1., 2.])
In [448]: arr.astype(float, casting='safe')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-448-4860036bcca8> in <module>
----> 1 arr.astype(float, casting='safe')
TypeError: Cannot cast array from dtype('O') to dtype('float64') according to the rule 'safe'
With integrand = lambda U: dxdu_u(float(U)) * mp.besselj(n,U),
In [453]: f(0) # a minute or so later
Out[453]: mpf('0.61060303588231069')

Python MemoryError with metric-learn

I am trying to learn a new distance metric using neighborhood component analysis (using this source)
My data matrix has the shape 11000x128 (11k elements and 128 features). However, I keep getting a MemoryError; I even downsized my input data to 1000x128, but I still get it. Here is my code snippet. I am using a C4.8xlarge machine on AWS (36 cores, 60 GiB memory). Is there a way I can bypass this? How do I make sure that I am using all the available cores?
from metric_learn import NCA
num_el = 1000
nca = NCA(max_iter=1, learning_rate=0.01)
nca.fit(feats_annotated[:num_el], labels_annotated[:num_el])
MemoryError Traceback (most recent call last)
<ipython-input-5-bfd6fe47f16d> in <module>()
1 num_el = 1000
2 nca = NCA(max_iter=1, learning_rate=0.01)
----> 3 nca.fit(feats_annotated[:num_el], labels_annotated[:num_el])
/home/user/anaconda2/lib/python2.7/site-packages/metric_learn/nca.pyc in fit(self, X, labels)
34 # Run NCA
35 dX = X[:,None] - X[None] # shape (n, n, d)
---> 36 tmp = np.einsum('...i,...j->...ij', dX, dX) # shape (n, n, d, d)
37 masks = labels[:,None] == labels[None]
38 learning_rate = self.params['learning_rate']
/home/user/anaconda2/lib/python2.7/site-packages/numpy/core/einsumfunc.pyc in einsum(*operands, **kwargs)
946 # If no optimization, run pure einsum
947 if optimize_arg is False:
--> 948 return c_einsum(*operands, **kwargs)
949
950 valid_einsum_kwargs = ['out', 'dtype', 'order', 'casting']
MemoryError:
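A note on why this fails even on 60 GiB (a back-of-envelope estimate based on the traceback, not part of the original post): the einsum at line 36 builds an (n, n, d, d) float64 array, so even the downsized input is far too large:
# Rough memory estimate for the (n, n, d, d) intermediate in nca.fit:
n, d = 1000, 128
gib = n * n * d * d * 8 / 2**30  # float64 is 8 bytes per element
print(gib)  # ~122 GiB -- well beyond the machine's 60 GiB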

hierarchical clustering in scipy - memory error

Here is my code:
import numpy as np
from scipy.cluster.hierarchy import fclusterdata
def mydist(p1,p2):
    return 1
Y = np.random.randn(100000,2)
fclust1 = fclusterdata(Y, 1.0, metric=mydist)
It produces the following error:
MemoryError Traceback (most recent call last)
<ipython-input-52-818db8791e96> in <module>()
----> 1 fclust1 = fclusterdata(Y, 1.0, metric=mydist)
C:\Anaconda3\lib\site-packages\scipy\cluster\hierarchy.py in fclusterdata(X, t, criterion, metric, depth, method, R)
1682 'array.')
1683
-> 1684 Y = distance.pdist(X, metric=metric)
1685 Z = linkage(Y, method=method)
1686 if R is None:
C:\Anaconda3\lib\site-packages\scipy\spatial\distance.py in pdist(X, metric, p, w, V, VI)
1218
1219 m, n = s
-> 1220 dm = np.zeros((m * (m - 1)) // 2, dtype=np.double)
1221
1222 wmink_names = ['wminkowski', 'wmi', 'wm', 'wpnorm']
MemoryError:
So I am guessing my vector is too large. I am a bit surprised, since my distance function is trivial. What is the maximum size of vector that fclusterdata can accept?
Hierarchical clustering usually requires a pairwise distance matrix.
That means you need O(n^2) memory. And it does not 'see' that your distance is constant (and it doesn't make sense to optimize for this either).
It's not a very scalable algorithm.
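To put a number on it (an illustration, not part of the original answer): the pdist call in the traceback allocates a condensed distance matrix of n*(n-1)/2 doubles up front, which for the 100000 points above is already enormous:
# Memory needed just for the condensed pairwise distance matrix:
n = 100000
gib = n * (n - 1) // 2 * 8 / 2**30  # float64 entries
print(gib)  # ~37 GiB before any clustering happens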

Sparse Matrix Addition yields 'ValueError: setting an array element with a sequence.'

The lines in question are:
# Make efficient matrix that can be built
K = sparse.lil_matrix((N, N))
# Calculate K matrix (<i|pHp|j> in the LGL-nodes basis)
for i in range(Ne):
    idx_s, idx_e = i*(Np-1), i*(Np-1)+Np
    print(shape(K[idx_s:idx_e, idx_s:idx_e]))
    print(shape(dmat.T.dot(sparse.spdiags(w*peq[idx_s:idx_e], 0, Np, Np)).dot(dmat)))
    K[idx_s:idx_e, idx_s:idx_e] += dmat.T.dot(sparse.spdiags(w*peq[idx_s:idx_e], 0, Np, Np)).dot(dmat)
But currently numpy yields the following error:
(8, 8)
(8, 8)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-62-cc7cc21f07e5> in <module>()
22
23 for _ in range(N):
---> 24 ll, q = getLL(Ne, Np, x_d, w_d, dmat_d, x, w, dL, peq*peq, data)
25 peq = (peq*q)
26
<ipython-input-61-a52c13d48b87> in getLL(Ne, Np, x_d, w_d, dmat_d, x, w, dmat, peq, data)
15 print(shape(K[idx_s:idx_e, idx_s:idx_e]))
16 print(shape(dmat.T.dot(sparse.spdiags(w*peq[idx_s:idx_e], 0, Np, Np)).dot(dmat)))
---> 17 K[idx_s:idx_e, idx_s:idx_e] += dmat.T.dot(sparse.spdiags(w*peq[idx_s:idx_e], 0, Np, Np)).dot(dmat)
18
19 # Re-make matrix for efficient vector products
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/sparse/lil.py in __iadd__(self, other)
157
158 def __iadd__(self,other):
--> 159 self[:,:] = self + other
160 return self
161
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/sparse/lil.py in __setitem__(self, index, x)
307
308 # Make x and i into the same shape
--> 309 x = np.asarray(x, dtype=self.dtype)
310 x, _ = np.broadcast_arrays(x, i)
311
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/numpy/core/numeric.py in asarray(a, dtype, order)
460
461 """
--> 462 return array(a, dtype, copy=False, order=order)
463
464 def asanyarray(a, dtype=None, order=None):
ValueError: setting an array element with a sequence.
This is a little cryptic, as it seems that the error is happening somewhere inside the numpy library, not in my code. But I'm not terribly familiar with numpy per se, so perhaps I'm indirectly causing the error. Both slices are of the same shape, so that doesn't seem to be the actual problem.
The problem is that
dmat.T.dot(sparse.spdiags(w*peq[idx_s:idx_e], 0, Np, Np)).dot(dmat)
is not a simple array. It has the right shape, but the elements are sparse matrices (the 'sequence' in the error message).
Turning the inner sparse matrix into a dense array should solve the problem:
dmat.T.dot(sparse.spdiags(w*peq[idx_s:idx_e], 0, Np, Np).A).dot(dmat)
The np.dot method is not aware of sparse matrices, at least not in your version of numpy (1.8?), so it treats them as sequences. Newer versions are 'sparse'-aware.
Another solution is to use the sparse matrix product (dot or *).
sparse.spdiags(...).dot(dmat etc)
I had to play around to get reasonable values for N, Np, Ne, dmat, and peq. You really should have given us small samples; it makes testing ideas much easier. A small self-contained sketch is below.
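Here is a minimal demonstration of both fixes on toy shapes (the dimensions and values are made up, since the question did not include them):
import numpy as np
from scipy import sparse

Np = 4
w = np.ones(Np)
peq = np.ones(Np)
dmat = np.arange(Np * Np, dtype=float).reshape(Np, Np)
S = sparse.spdiags(w * peq, 0, Np, Np)

# Fix 1: densify the sparse diagonal before np.dot sees it
K1 = dmat.T.dot(S.toarray()).dot(dmat)

# Fix 2: let the sparse matrix drive the product
K2 = dmat.T.dot(S.dot(dmat))

print(np.allclose(K1, K2))  # True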
