I'm having a really frustrating problem with numpy's inverse fast Fourier transform function. I know the fft function works well based on my other results; the error seems to be introduced after calling ifft. For example, the following should print zeros:
temp = Eta[50:55]
print(temp)
print(temp-np.fft.fft(np.fft.ifft(temp)))
Output:
[ -4.70429130e+13 -3.15161484e+12j -2.45515846e+13 +5.43230842e+12j -2.96326088e+13 -4.55029496e+12j 2.99158889e+13 -3.00718375e+13j -3.87978563e+13 +9.98287428e+12j]
[ 0.00781250+0.00390625j -0.02734375+0.01757812j 0.05078125-0.02441406j 0.01171875-0.01171875j -0.01562500+0.015625j ]
Please help!
You are seeing normal floating point imprecision. Here's what I get with your data:
In [58]: temp = np.array([ -4.70429130e+13 -3.15161484e+12j, -2.45515846e+13 +5.43230842e+12j, -2.96326088e+13 -4.55029496e+12j, 2.99158889e+13 -3.00718375e+13j, -3.87978563e+13 +9.98287428e+12j])
In [59]: delta = temp - np.fft.fft(np.fft.ifft(temp))
In [60]: delta
Out[60]:
array([ 0.0000000+0.00390625j, -0.0312500+0.01953125j,
0.0390625-0.02539062j, 0.0078125-0.015625j , -0.0156250+0.015625j ])
Relative to the input, those values are, in fact, "small", and reasonable for 64-bit floating point calculations:
In [61]: np.abs(delta)/np.abs(temp)
Out[61]:
array([ 8.28501685e-17, 1.46553699e-15, 1.55401584e-15,
4.11837758e-16, 5.51577805e-16])
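If you just want a programmatic confirmation that the round-trip error is at the level of floating-point noise, compare with a tolerance rather than expecting exact zeros. A minimal sketch using the values above:
import numpy as np

temp = np.array([-4.70429130e+13 - 3.15161484e+12j,
                 -2.45515846e+13 + 5.43230842e+12j,
                 -2.96326088e+13 - 4.55029496e+12j,
                  2.99158889e+13 - 3.00718375e+13j,
                 -3.87978563e+13 + 9.98287428e+12j])

roundtrip = np.fft.fft(np.fft.ifft(temp))
# allclose uses a relative tolerance, so differences around 1e-15 of the
# magnitude of the inputs are treated as equal
print(np.allclose(temp, roundtrip))   # True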
I have a numpy matrix a defined as:
>>> a
array([[ 1.920941165 , 0.9518795607, 1.5358781432],
[-0.2418292026, 0.0851087409, -0.2760766872],
[-0.4161812806, 0.7409229185, -0.3248560283],
[-0.3439163186, 1.4052927665, -1.612850871 ],
[ 1.5810794171, 1.1820622504, 1.8063415367]])
If I typecast it to float32, it gives:
>>> a.astype(np.float32)
array([[ 1.9209411 , 0.95187956, 1.5358782 ],
[-0.2418292 , 0.08510874, -0.27607667],
[-0.41618127, 0.7409229 , -0.32485604],
[-0.34391633, 1.4052927 , -1.6128509 ],
[ 1.5810794 , 1.1820623 , 1.8063415 ]], dtype=float32)
When I convert the original matrix a to a tensor, I get:
>>> torch.tensor(a)
tensor([[ 1.9209411650, 0.9518795607, 1.5358781432],
[-0.2418292026, 0.0851087409, -0.2760766872],
[-0.4161812806, 0.7409229185, -0.3248560283],
[-0.3439163186, 1.4052927665, -1.6128508710],
[ 1.5810794171, 1.1820622504, 1.8063415367]], dtype=torch.float64)
which looks correct, as it retains the original values from matrix a.
But when I convert the float32-typecast matrix to a tensor, I get different floating point numbers.
>>> torch.tensor(a.astype(np.float32))
tensor([[ 1.9209411144, 0.9518795609, 1.5358781815],
[-0.2418292016, 0.0851087421, -0.2760766745],
[-0.4161812663, 0.7409229279, -0.3248560429],
[-0.3439163268, 1.4052927494, -1.6128509045],
[ 1.5810793638, 1.1820622683, 1.8063415289]])
Why isn't the second tensor (the tensor of the type-cast matrix) equal to the second matrix (the type-cast one) provided above?
float32 has a 24-bit fraction (about 7.2 significant decimal digits); whatever you see beyond that is not meaningful. For example, 1.920941165 has 9 decimal digits.
This means that if you want to retain all the digits you should represent the values as float64. However, once you convert to float32, NumPy and torch hold the same values; only the printing differs. torch prints up to the precision you have set, while NumPy truncates to the digits that are actually significant.
For example:
import numpy as np
import torch

np.set_printoptions(precision=10)
torch.set_printoptions(precision=10)   # so torch also shows 10 digits

a = np.array([1.920941165], dtype=np.float32)
# repr(a) -> array([1.9209411], dtype=float32)
t = torch.tensor(a, dtype=torch.float32)
# repr(t) -> tensor([1.9209411144])
However, if you look at the underlying memory of both (reinterpreted as uint32), the values are the same:
np.ndarray(1, dtype=np.uint32,buffer=a )
array([1073078630], dtype=uint32)
import ctypes
ptr = ctypes.cast(t.data_ptr(), ctypes.POINTER(ctypes.c_uint32))
ptr[0]
1073078630
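To double-check that NumPy and torch hold identical float32 values and only print them differently, here is a small sketch (assumes torch is installed; the bit pattern 1073078630 is the one shown above):
import numpy as np
import torch

a = np.array([1.920941165], dtype=np.float32)
t = torch.tensor(a)

# the stored values are identical; only the repr precision differs
print(np.array_equal(a, t.numpy()))                    # True
print(a.view(np.uint32), t.numpy().view(np.uint32))    # [1073078630] [1073078630]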
Hi, I am trying to vectorise the QR decomposition in numpy as the documentation suggests here; however, I keep getting dimension issues. I am confused as to what I am doing wrong, as I believe the following follows the documentation. Does anyone know what is wrong with this:
import numpy as np
X = np.random.randn(100,50,50)
vecQR = np.vectorize(np.linalg.qr)
vecQR(X)
From the doc: "By default, pyfunc is assumed to take scalars as input and output."
So you need to give it a signature:
vecQR = np.vectorize(np.linalg.qr, signature='(m,n)->(m,p),(p,n)')
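For example, with the signature in place the call goes through and returns the stacked Q and R factors (a quick sketch; the shapes assume the 100x50x50 input from the question):
import numpy as np

X = np.random.randn(100, 50, 50)
vecQR = np.vectorize(np.linalg.qr, signature='(m,n)->(m,p),(p,n)')
Q, R = vecQR(X)          # a tuple of two stacked outputs
print(Q.shape, R.shape)  # (100, 50, 50) (100, 50, 50)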
How about just mapping np.linalg.qr over the 1st axis of the array?
In [35]: np.array(list(map(np.linalg.qr, X)))
Out[35]:
array([[[[-3.30595447e-01, -2.06613421e-02, 2.50135751e-01, ...,
2.45828025e-02, 9.29150994e-02, -5.02663489e-02],
[-1.04193390e-01, -1.95327811e-02, 1.54158438e-02, ...,
2.62127499e-01, -2.21480958e-02, 1.94813279e-01],
[ 1.62712767e-01, -1.28304663e-01, -1.50172509e-01, ...,
1.73740906e-01, 1.31272690e-01, -2.47868876e-01]
I'm trying to implement an algorithm in Python. For the sake of documentation and a clear understanding of the flow details I use sympy. As it turned out, it fails on computing the inverse of a float matrix.
So I'm getting:
TypeError Traceback (most recent call last)
<ipython-input-20-c2193b2ae217> in <module>()
10 np.linalg.inv(xx)
11 symInv = lambdify(X0,X0Inv)
---> 12 symInv(xx)
/opt/anaconda3/lib/python3.6/site-packages/numpy/__init__.py in <lambda>(X0)
TypeError: ufunc 'bitwise_xor' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
If the matrix is integer it works fine:
import numpy as np
from sympy import *
init_printing()
X0 = MatrixSymbol('X0',2,2)
xx = np.random.rand(4,4)
#xx = np.random.randint(10,size=(4,4)) # this line makes it workable
X0Inv = X0**-1
np.linalg.inv(xx)
symInv = lambdify(X0,X0Inv)
symInv(xx)
Link to a live version of the code
If anybody knows any workaround it would be great if you could share. Thanks in advance.
UPDATE. As pointed out by @hpaulj and @tel, the issue is how lambdify translates ** to numpy code for matrix symbols: for some reason it tries to XOR the elements. I will try to find an easy way to alter this behavior. Any help/hints are appreciated.
As hpaulj points out, the error seems to stem from a conversion of ** to ^ that happens in lambdify, for some reason.
You can fix the error that you're getting by using np.power instead of **:
import numpy as np
from sympy import MatrixSymbol, lambdify
X0 = MatrixSymbol('X0',2,2)
xx = np.random.rand(4,4)
X0Inv = np.power(X0, -1)
symInv = lambdify(X0,X0Inv)
print('matrix xx')
print(xx, end='\n\n')
print('result of symInv(xx)')
print(symInv(xx), end='\n\n')
Output:
matrix xx
[[0.4514882 0.84588859 0.02431252 0.25468078]
[0.46767727 0.85748153 0.51207567 0.59636962]
[0.84557537 0.38459205 0.76814414 0.96624407]
[0.0933803 0.43467119 0.77823338 0.58770188]]
result of symInv(xx)
[[2.214897321138516, 1.1821887747951494], [2.1382266426713077, 1.1662058776397513]]
However, as you have it set up, symInv doesn't produce the matrix inverse; instead it only does element-wise exponentiation of each value in xx. In other words, symInv(xx)[i,j] == xx[i,j]**-1. This code shows the difference between element-wise exponentiation and the true inverse:
print('result of xx**-1')
print(xx**-1, end='\n\n')
print('result of np.linalg.inv(xx)')
print(np.linalg.inv(xx))
Output:
result of xx**-1
[[ 2.21489732 1.18218877 41.13107402 3.92648394]
[ 2.13822664 1.16620588 1.95283638 1.67681243]
[ 1.18262669 2.60015778 1.301839 1.0349352 ]
[10.7088969 2.30058954 1.28496159 1.70154295]]
result of np.linalg.inv(xx)
[[-118.7558445 171.37619558 -20.37188041 -88.94733652]
[ -0.56274492 2.49107626 -1.00812489 -0.62648633]
[-160.35674704 230.3266324 -28.87548299 -116.75862026]
[ 231.62940572 -334.07044947 42.21936405 170.90926978]]
Edit: workaround
I'm 95% sure that what you've run into is a bug in the Sympy code. It seems that X0^-1 was valid syntax for Sympy Matrix objects at some point, but no longer. However, it seems that someone forgot to tell that to whoever maintains the lambdify code, since it still translates every matrix exponentiation into the caret ^ syntax.
So what you should do is submit an issue on the Sympy GitHub. Just post your code and the error it produces, and ask if that's the intended behavior. In the meantime, here's a filthy hack to work around the problem:
import numpy as np
from sympy import MatrixSymbol, lambdify
class XormulArray(np.ndarray):
    def __new__(cls, input_array):
        return np.asarray(input_array).view(cls)

    def __xor__(self, other):
        return np.linalg.matrix_power(self, other)
X0 = MatrixSymbol('X0',2,2)
xx = np.random.rand(4,4)
X0Inv = X0.inv()
symInv = lambdify(X0,X0Inv,'numpy')
print('result of symInv(XormulArray(xx))')
print(symInv(XormulArray(xx)), end='\n\n')
print('result of np.linalg.inv(xx)')
print(np.linalg.inv(xx))
Output:
result of symInv(XormulArray(xx))
[[ 3.50382881 -3.84573344 3.29173896 -2.01224981]
[-1.88719742 1.86688465 0.3277883 0.0319487 ]
[-3.77627792 4.30823019 -5.53247103 5.53412775]
[ 3.89620805 -3.30073088 4.27921307 -4.68944191]]
result of np.linalg.inv(xx)
[[ 3.50382881 -3.84573344 3.29173896 -2.01224981]
[-1.88719742 1.86688465 0.3277883 0.0319487 ]
[-3.77627792 4.30823019 -5.53247103 5.53412775]
[ 3.89620805 -3.30073088 4.27921307 -4.68944191]]
Basically, you'll have to cast all of your arrays to the thin wrapper type XormulArray right before you pass them into symInv. This hack is not best practice for a bunch of reasons (including the fact that it apparently breaks the (2,2) shape restriction you placed on X0), but it'll probably be the best you can do until the Sympy codebase is fixed.
I have two dask arrays, a and b. I get the dot product of a and b as below:
>>>z2 = da.from_array(a.dot(b),chunks=1)
>>> z2
dask.array<from-ar..., shape=(3, 3), dtype=int32, chunksize=(1, 1)>
But when I do
sigmoid(z2)
the shell stops working. I can't even kill it.
Sigmoid is given as below:
def sigmoid(z):
    return 1 / (1 + np.exp(-z))
When working with Dask Arrays, it is normally best to use the functions provided in dask.array. The problem with using NumPy functions directly is that they will pull all of the data from the Dask Array into memory, which could be the cause of the shell freezing that you experienced. The functions provided in dask.array are designed to avoid this by lazily chaining computations until you wish to evaluate them. In this case, it would be better to use da.exp instead of np.exp. An example of this is provided below.
I have provided a modified version of your code to demonstrate how this would be done. In the example I have called .compute(), which also pulls the full result into memory. It is possible that this could also cause issues for you if your data is very large. Hence I have demonstrated taking a small slice of the data before calling compute, to keep the result small and memory friendly. If your data is large and you wish to keep the full result, I would recommend storing it to disk instead.
Hope this helps.
In [1]: import dask.array as da
In [2]: def sigmoid(z):
...: return 1 / (1 + da.exp(-z))
...:
In [3]: d = da.random.uniform(-6, 6, (100, 110), chunks=(10, 11))
In [4]: ds = sigmoid(d)
In [5]: ds[:5, :6].compute()
Out[5]:
array([[ 0.0067856 , 0.31701817, 0.43301395, 0.23188129, 0.01530903,
0.34420555],
[ 0.24473798, 0.99594466, 0.9942868 , 0.9947099 , 0.98266004,
0.99717379],
[ 0.92617922, 0.17548207, 0.98363658, 0.01764361, 0.74843615,
0.04628735],
[ 0.99155315, 0.99447542, 0.99483032, 0.00380505, 0.0435369 ,
0.01208241],
[ 0.99640952, 0.99703901, 0.69332886, 0.97541982, 0.05356214,
0.1869447 ]])
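If the full result is too large for memory, one option (as mentioned above) is to store it to disk instead of calling .compute(). A rough sketch using HDF5 (assumes h5py is installed; the filename and dataset path are just placeholders):
import dask.array as da

def sigmoid(z):
    return 1 / (1 + da.exp(-z))

d = da.random.uniform(-6, 6, (100, 110), chunks=(10, 11))
ds = sigmoid(d)

# stream the result into an HDF5 file chunk by chunk, without
# materialising the whole array in memory
da.to_hdf5('sigmoid_result.hdf5', '/sigmoid', ds)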
Got it... I tried and it worked!
ans = z2.map_blocks(sigmoid)
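For reference, a minimal self-contained sketch of that map_blocks approach, with random data standing in for z2:
import numpy as np
import dask.array as da

def sigmoid(z):
    return 1 / (1 + np.exp(-z))   # plain NumPy version, applied per block

z2 = da.random.uniform(-6, 6, (100, 100), chunks=(10, 10))
ans = z2.map_blocks(sigmoid)      # lazy; each block is passed as a NumPy array
print(ans[:2, :3].compute())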
I have something like this:
coefs = [28, -36, 50, -22]
print(numpy.roots(coefs))
Of course the result is:
[ 0.35770550+1.11792657j 0.35770550-1.11792657j 0.57030329+0.j ]
However, using this method, how do I get it to print only the real roots, if any (as floats)? Meaning just this for my example:
0.57030329
Do NOT use .iscomplex() or .isreal(), because roots() is a numerical algorithm and it returns a numerical approximation of the actual roots of the polynomial. This can leave spurious, tiny imaginary parts on roots that are actually real, which those methods then misclassify.
Example:
# create a polynomial with these real-valued roots:
p = numpy.poly([2,3,4,5,56,6,5,4,2,3,8,0,10])
# calculate the roots from the polynomial:
r = numpy.roots(p)
print(r) # real-valued roots, with spurious imaginary part
array([ 56.00000000 +0.00000000e+00j, 10.00000000 +0.00000000e+00j,
8.00000000 +0.00000000e+00j, 6.00000000 +0.00000000e+00j,
5.00009796 +0.00000000e+00j, 4.99990203 +0.00000000e+00j,
4.00008066 +0.00000000e+00j, 3.99991935 +0.00000000e+00j,
3.00000598 +0.00000000e+00j, 2.99999403 +0.00000000e+00j,
2.00000000 +3.77612207e-06j, 2.00000000 -3.77612207e-06j,
0.00000000 +0.00000000e+00j])
# using isreal() fails: many correct solutions are discarded
print(r[numpy.isreal(r)])
[ 56.00000000+0.j 10.00000000+0.j 8.00000000+0.j 6.00000000+0.j
5.00009796+0.j 4.99990203+0.j 4.00008066+0.j 3.99991935+0.j
3.00000598+0.j 2.99999403+0.j 0.00000000+0.j]
Use some threshold depending on your problem at hand instead. Moreover, since you're interested in the real roots, keep only the real part:
real_valued = r.real[abs(r.imag)<1e-5] # where I chose 1-e5 as a threshold
print(real_valued)
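Applied to the coefficients from the question, a complete sketch of this approach (the 1e-5 threshold is only an example; pick one suited to your problem):
import numpy as np

coefs = [28, -36, 50, -22]
r = np.roots(coefs)

# keep roots whose imaginary part is negligible, then drop that part
real_valued = r.real[abs(r.imag) < 1e-5]
print(real_valued)   # [0.57030329]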
You can do it using iscomplex as follows:
r = numpy.roots(coefs)
In [15]: r[~numpy.iscomplex(r)]
Out[15]: array([ 0.57030329+0.j])
You can also use isreal, as pointed out in the comments:
In [17]: r[numpy.isreal(r)]
Out[17]: array([ 0.57030329+0.j])
I hope this helps.
roots = np.roots(coefs)
for i in range(len(roots)):
    if np.isreal(roots[i]):
        print(np.real(roots[i]))