Print real roots only in numpy - python

I have something like this:
coefs = [28, -36, 50, -22]
print(numpy.roots(coefs))
Of course the result is:
[ 0.35770550+1.11792657j 0.35770550-1.11792657j 0.57030329+0.j ]
However, by using this method, how do I get it only to print the real roots if any (as floats)? Meaning just this for my example:
0.57030329

Do NOT use .iscomplex() or .isreal(): roots() is a numerical algorithm, and it returns numerical approximations of the actual roots of the polynomial. Real roots can therefore come back with tiny spurious imaginary parts, which the above methods then treat as genuinely complex, discarding valid solutions.
Example:
# create a polynomial with these real-valued roots:
p = numpy.poly([2,3,4,5,56,6,5,4,2,3,8,0,10])
# calculate the roots from the polynomial:
r = numpy.roots(p)
print(r) # real-valued roots, with spurious imaginary part
array([ 56.00000000 +0.00000000e+00j, 10.00000000 +0.00000000e+00j,
8.00000000 +0.00000000e+00j, 6.00000000 +0.00000000e+00j,
5.00009796 +0.00000000e+00j, 4.99990203 +0.00000000e+00j,
4.00008066 +0.00000000e+00j, 3.99991935 +0.00000000e+00j,
3.00000598 +0.00000000e+00j, 2.99999403 +0.00000000e+00j,
2.00000000 +3.77612207e-06j, 2.00000000 -3.77612207e-06j,
0.00000000 +0.00000000e+00j])
# using isreal() fails: many correct solutions are discarded
print(r[numpy.isreal(r)])
[ 56.00000000+0.j 10.00000000+0.j 8.00000000+0.j 6.00000000+0.j
5.00009796+0.j 4.99990203+0.j 4.00008066+0.j 3.99991935+0.j
3.00000598+0.j 2.99999403+0.j 0.00000000+0.j]
Use some threshold depending on your problem at hand instead. Moreover, since you're interested in the real roots, keep only the real part:
real_valued = r.real[abs(r.imag) < 1e-5]  # where I chose 1e-5 as a threshold
print(real_valued)
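Applied to the coefficients from the question, a minimal sketch of this threshold approach (1e-5 is an arbitrary cutoff; pick one suited to your problem) looks like this:
import numpy

coefs = [28, -36, 50, -22]
r = numpy.roots(coefs)
# keep only the real part of roots whose imaginary part is numerically negligible
real_valued = r.real[abs(r.imag) < 1e-5]
print(real_valued)   # [0.57030329]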

You can do it using iscomplex as follows:
r = numpy.roots(coefs)
In [15]: r[~numpy.iscomplex(r)]
Out[15]: array([ 0.57030329+0.j])
Also you can use isreal as pointed out in comments:
In [17]: r[numpy.isreal(r)]
Out[17]: array([ 0.57030329+0.j])
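Note that the filtered result still carries a +0.j. Since the question asks for plain floats, you can additionally take the real part (a small sketch):
real_roots = r[numpy.isreal(r)].real   # array([ 0.57030329])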

I hope this helps.
import numpy as np

roots = np.roots(coefs)
for root in roots:
    if np.isreal(root):
        print(np.real(root))

Related

numpy vs pytorch precision

I have a numpy matrix a defined as:
>>> a
>>> array([[ 1.920941165 , 0.9518795607, 1.5358781432],
[-0.2418292026, 0.0851087409, -0.2760766872],
[-0.4161812806, 0.7409229185, -0.3248560283],
[-0.3439163186, 1.4052927665, -1.612850871 ],
[ 1.5810794171, 1.1820622504, 1.8063415367]])
If I typecast it to float32, it gives:
>>> a.astype(np.float32)
>>> array([[ 1.9209411 , 0.95187956, 1.5358782 ],
[-0.2418292 , 0.08510874, -0.27607667],
[-0.41618127, 0.7409229 , -0.32485604],
[-0.34391633, 1.4052927 , -1.6128509 ],
[ 1.5810794 , 1.1820623 , 1.8063415 ]], dtype=float32)
When I convert original a matrix to a tensor, I get:
>>> torch.tensor(a)
>>> tensor([[ 1.9209411650, 0.9518795607, 1.5358781432],
[-0.2418292026, 0.0851087409, -0.2760766872],
[-0.4161812806, 0.7409229185, -0.3248560283],
[-0.3439163186, 1.4052927665, -1.6128508710],
[ 1.5810794171, 1.1820622504, 1.8063415367]], dtype=torch.float64)
which looks correct as it retains original values from matrix a.
But when I convert the float32-typecast matrix to a tensor, I get different floating-point numbers.
>>> torch.tensor(a.astype(np.float32))
>>> tensor([[ 1.9209411144, 0.9518795609, 1.5358781815],
[-0.2418292016, 0.0851087421, -0.2760766745],
[-0.4161812663, 0.7409229279, -0.3248560429],
[-0.3439163268, 1.4052927494, -1.6128509045],
[ 1.5810793638, 1.1820622683, 1.8063415289]])
Why isn't the second tensor (the tensor of the type-cast matrix) equal to the second matrix (the type-cast one) shown above?
float32 has a 24-bit significand (roughly 7.2 decimal digits of precision); anything you see beyond that is not meaningful. For example, 1.920941165 has 9 digits after the decimal point.
This means that if you want to retain all of those digits you should keep the data as float64. When you convert to float32, however, NumPy and PyTorch store exactly the same values; only the printing differs: torch prints as many digits as its print options specify, while NumPy prints only the digits that are actually significant.
for example:
import numpy as np
import torch

np.set_printoptions(precision=10)
torch.set_printoptions(precision=10)   # so torch also shows 10 digits
a = np.array([1.920941165], dtype=np.float32)
a
# array([1.9209411], dtype=float32)
t = torch.tensor(a, dtype=torch.float32)
t
# tensor([1.9209411144])
However, if you look at the underlying memory of both (as uint32), the bits are the same:
np.ndarray(1, dtype=np.uint32, buffer=a)
# array([1073078630], dtype=uint32)

import ctypes
ptr = ctypes.cast(t.data_ptr(), ctypes.POINTER(ctypes.c_uint32))
ptr[0]
# 1073078630
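An equivalent, perhaps simpler check (a sketch re-creating a and t as above) is to reinterpret both buffers as uint32 and compare them directly:
import numpy as np
import torch

a = np.array([1.920941165], dtype=np.float32)
t = torch.tensor(a, dtype=torch.float32)

# reinterpret the float32 bits as uint32 on both sides
print(a.view(np.uint32))             # [1073078630]
print(t.numpy().view(np.uint32))     # [1073078630]
print(np.array_equal(a, t.numpy()))  # True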

Dot product for correlation with complex numbers

OK, this question probably has a very simple answer, but I've been searching for quite a while with no luck...
I want to get the dot product of 2 complex numbers in complex-plane-space. However, np.dot and np.vdot both give the wrong result.
Example of what I WANT to do:
a = 1+1j
b = 1-1j
dot(a,b) == 0
What I actually get:
np.dot(a,b) == 2+0j
np.vdot(a,b) == 0-2j
np.conj(a)*b == 0-2j
I am able to get what I want using this rather clumsy expression (edit for readability):
a.real*b.real + a.imag*b.imag
But I am very surprised not to find a nice ufunc to do this. Does it not exist? I was not expecting to have to write my own ufunc to vectorize such a common operation.
Part of my concern here, is that it seems like my expression is doing a lot of extra work extracting out the real/imaginary parts when they should be already in adjacent memory locations (considering a,b are actually already combined in a data type like complex64). This has the potential to cause a pretty severe slowdown.
EDIT:
Using Numba I ended up defining a ufunc:
from numba import vectorize

@vectorize
def cdot(a, b):
    return a.real*b.real + a.imag*b.imag
This allowed me to correlate complex data properly. Here's a correlation image for the guys who helped me!
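As a usage sketch (assuming the cdot ufunc defined above), it broadcasts over complex arrays and returns plain floats:
import numpy as np

a = np.array([1+1j, 2+0j])
b = np.array([1-1j, 3+4j])
print(cdot(a, b))   # [0. 6.]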
For arrays and np.complex scalars, but not plain Python complex numbers, you can view-cast to float. For example:
a = np.exp(1j*np.arange(4))
b = np.exp(-1j*np.arange(4))
a
# array([ 1. +0.j , 0.54030231+0.84147098j,
# -0.41614684+0.90929743j, -0.9899925 +0.14112001j])
b
# array([ 1. -0.j , 0.54030231-0.84147098j,
# -0.41614684-0.90929743j, -0.9899925 -0.14112001j])
ar = a[...,None].view(float)
br = b[...,None].view(float)
ar
# array([[ 1. , 0. ],
# [ 0.54030231, 0.84147098],
# [-0.41614684, 0.90929743],
# [-0.9899925 , 0.14112001]])
br
# array([[ 1. , -0. ],
# [ 0.54030231, -0.84147098],
# [-0.41614684, -0.90929743],
# [-0.9899925 , -0.14112001]])
Now, for example, all pairwise dot products:
np.inner(ar,br)
# array([[ 1. , 0.54030231, -0.41614684, -0.9899925 ],
# [ 0.54030231, -0.41614684, -0.9899925 , -0.65364362],
# [-0.41614684, -0.9899925 , -0.65364362, 0.28366219],
# [-0.9899925 , -0.65364362, 0.28366219, 0.96017029]])
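If you only want the element-wise dot products rather than all pairwise combinations, a short sketch using the same view-cast arrays is:
# dot product of each a[i] with the corresponding b[i]
np.einsum('ij,ij->i', ar, br)
# array([ 1.        , -0.41614684, -0.65364362,  0.96017029])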

python - Get an inverted float matrix with sympy and numpy

I'm trying to implement an algorithm in Python. For the sake of documentation and a clear understanding of the flow details I use sympy. As it turns out, it fails when computing the inverse of a float matrix.
So I'm getting
TypeError Traceback (most recent call last)
<ipython-input-20-c2193b2ae217> in <module>()
10 np.linalg.inv(xx)
11 symInv = lambdify(X0,X0Inv)
---> 12 symInv(xx)
/opt/anaconda3/lib/python3.6/site-packages/numpy/__init__.py in <lambda>(X0)
TypeError: ufunc 'bitwise_xor' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
If the matrix is integer it works fine:
import numpy as np
from sympy import *
init_printing()
X0 = MatrixSymbol('X0',2,2)
xx = np.random.rand(4,4)
#xx = np.random.randint(10,size=(4,4)) # this line makes it workable
X0Inv = X0**-1
np.linalg.inv(xx)
symInv = lambdify(X0,X0Inv)
symInv(xx)
Link to a live version of the code
If anybody knows any workaround it would be great if you could share. Thanks in advance.
UPDATE. As pointed out by @hpaulj and @tel, the issue is how lambdify translates ** to numpy code for matrix symbols: for some reason it tries to XOR the elements. I will try to find an easy way to alter this behavior. Any help/hints are appreciated.
As hpaulj points out, the error seems to stem from a conversion of ** to ^ that happens in lambdify, for some reason.
You can fix the error that you're getting by using np.power instead of **:
import numpy as np
from sympy import MatrixSymbol, lambdify
X0 = MatrixSymbol('X0',2,2)
xx = np.random.rand(4,4)
X0Inv = np.power(X0, -1)
symInv = lambdify(X0,X0Inv)
print('matrix xx')
print(xx, end='\n\n')
print('result of symInv(xx)')
print(symInv(xx), end='\n\n')
Output:
matrix xx
[[0.4514882 0.84588859 0.02431252 0.25468078]
[0.46767727 0.85748153 0.51207567 0.59636962]
[0.84557537 0.38459205 0.76814414 0.96624407]
[0.0933803 0.43467119 0.77823338 0.58770188]]
result of symInv(xx)
[[2.214897321138516, 1.1821887747951494], [2.1382266426713077, 1.1662058776397513]]
However, as you have it set up symInv doesn't produce the matrix inverse, but instead only does the element-wise exponentiation of each value in xx. In other words, symInv(xx)[i,j] == xx[i,j]**-1. This code shows the difference between element-wise exponentiation and the true inverse.
print('result of xx**-1')
print(xx**-1, end='\n\n')
print('result of np.linalg.inv(xx)')
print(np.linalg.inv(xx))
Output:
result of xx**-1
[[ 2.21489732 1.18218877 41.13107402 3.92648394]
[ 2.13822664 1.16620588 1.95283638 1.67681243]
[ 1.18262669 2.60015778 1.301839 1.0349352 ]
[10.7088969 2.30058954 1.28496159 1.70154295]]
result of np.linalg.inv(xx)
[[-118.7558445 171.37619558 -20.37188041 -88.94733652]
[ -0.56274492 2.49107626 -1.00812489 -0.62648633]
[-160.35674704 230.3266324 -28.87548299 -116.75862026]
[ 231.62940572 -334.07044947 42.21936405 170.90926978]]
Edit: workaround
I'm 95% sure that what you've run into is a bug in the Sympy code. It seems that X0^-1 was valid syntax for Sympy Matrix objects at some point, but no longer. However, it seems that someone forgot to tell that to whoever maintains the lambdify code, since it still translates every matrix exponentiation into the caret ^ syntax.
So what you should do is submit an issue on the Sympy github. Just post your code and the error it produces, and ask if that's the intended behavior. In the meantime, here's a filthy hack to work around the problem:
import numpy as np
from sympy import MatrixSymbol, lambdify
class XormulArray(np.ndarray):
    def __new__(cls, input_array):
        return np.asarray(input_array).view(cls)
    def __xor__(self, other):
        return np.linalg.matrix_power(self, other)
X0 = MatrixSymbol('X0',2,2)
xx = np.random.rand(4,4)
X0Inv = X0.inv()
symInv = lambdify(X0,X0Inv,'numpy')
print('result of symInv(XormulArray(xx))')
print(symInv(XormulArray(xx)), end='\n\n')
print('result of np.linalg.inv(xx)')
print(np.linalg.inv(xx))
Output:
result of symInv(XormulArray(xx))
[[ 3.50382881 -3.84573344 3.29173896 -2.01224981]
[-1.88719742 1.86688465 0.3277883 0.0319487 ]
[-3.77627792 4.30823019 -5.53247103 5.53412775]
[ 3.89620805 -3.30073088 4.27921307 -4.68944191]]
result of np.linalg.inv(xx)
[[ 3.50382881 -3.84573344 3.29173896 -2.01224981]
[-1.88719742 1.86688465 0.3277883 0.0319487 ]
[-3.77627792 4.30823019 -5.53247103 5.53412775]
[ 3.89620805 -3.30073088 4.27921307 -4.68944191]]
Basically, you'll have to cast all of your arrays to the thin wrapper type XormulArray right before you pass them into symInv. This hack is not best practice for a bunch of reasons (including the fact that it apparently breaks the (2,2) shape restriction you placed on X0), but it'll probably be the best you can do until the Sympy codebase is fixed.

Cannot cast array data from dtype('complex128') to dtype('float64') according to the rule 'safe'

I am trying to solve an optimization problem as shown below. But every time I get an error Cannot cast array data from dtype('complex128') to dtype('float64') according to the rule 'safe'.
Can anyone help me to see what is wrong here in the code?
def func(vec):
    linspec = -(kx**2)+((1.-nu)*(kx**4))
    lin = linspec*np.fft.fft(vec)
    nlin = np.zeros_like(lin)
    nlinre = vec*vec
    nlinspec = np.fft.fft(nlinre)
    nlin = (0.5*1j*kx*nlinspec)
    sol = lin+nlin
    rhs = np.zeros_like(sol, dtype='complex')
    sol -= rhs
    sol = np.fft.ifft(sol).real
    return sol

def kssol(u0):
    u1 = np.ones((2*Mx,), dtype='complex')
    #u1 = 100.*u0
    u = scipy.optimize.fsolve(func, u1)
    return u
scipy.optimize.fsolve finds the roots of functions from ℝⁿ ↦ ℝⁿ. Your function func is effectively ℂⁿ ↦ ℝⁿ: in particular, the starting point u1 has an explicitly complex dtype (even though its values are real), and that is exactly what the error is complaining about.
If your input is always real, simply set dtype='float' for u1. Otherwise you'd have to use a different root-finding routine, e.g. mpmath.findroot from the mpmath package (a sympy dependency).
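For illustration, here is a minimal, self-contained sketch (with a hypothetical toy function f, not the OP's func) showing the failing and the working case:
import numpy as np
import scipy.optimize

def f(x):
    return x**2 - 4.0

# complex-typed starting point: fsolve raises
#   "Cannot cast array data from dtype('complex128') to dtype('float64') according to the rule 'safe'"
# scipy.optimize.fsolve(f, np.ones(1, dtype='complex'))

# real-typed starting point works
print(scipy.optimize.fsolve(f, np.ones(1, dtype='float')))   # [2.]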

Numpy ifft error

I'm having a really frustrating problem using numpy's inverse fast Fourier transform function. I know the fft function works well based on my other results. The error seems to be introduced after calling ifft. The following should print zeros, for example:
temp = Eta[50:55]
print(temp)
print(temp-np.fft.fft(np.fft.ifft(temp)))
Output:
[ -4.70429130e+13 -3.15161484e+12j -2.45515846e+13 +5.43230842e+12j -2.96326088e+13 -4.55029496e+12j 2.99158889e+13 -3.00718375e+13j -3.87978563e+13 +9.98287428e+12j]
[ 0.00781250+0.00390625j -0.02734375+0.01757812j 0.05078125-0.02441406j 0.01171875-0.01171875j -0.01562500+0.015625j ]
Please help!
You are seeing normal floating point imprecision. Here's what I get with your data:
In [58]: temp = np.array([ -4.70429130e+13 -3.15161484e+12j, -2.45515846e+13 +5.43230842e+12j, -2.96326088e+13 -4.55029496e+12j, 2.99158889e+13 -3.00718375e+13j, -3.87978563e+13 +9.98287428e+12j])
In [59]: delta = temp - np.fft.fft(np.fft.ifft(temp))
In [60]: delta
Out[60]:
array([ 0.0000000+0.00390625j, -0.0312500+0.01953125j,
0.0390625-0.02539062j, 0.0078125-0.015625j , -0.0156250+0.015625j ])
Relative to the input, those values are, in fact, "small", and reasonable for 64 bit floating point calculations:
In [61]: np.abs(delta)/np.abs(temp)
Out[61]:
array([ 8.28501685e-17, 1.46553699e-15, 1.55401584e-15,
4.11837758e-16, 5.51577805e-16])
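A quick sanity check that the round trip is accurate to floating point precision (a sketch reusing the temp array from above):
np.allclose(temp, np.fft.fft(np.fft.ifft(temp)))
# True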
