I'm using Python's numdifftools library to compute derivatives. However, a quick test suggests the library is highly inaccurate:
import numpy as np
from numdifftools import Derivative
# Result should be 1/2 or 0.5
Derivative(np.log, 1)(2.0)
>>> array(0.5493061443340549)
Is there a way to fix this inaccuracy?
Using numdifftools 0.9.16 and numpy 1.9.3, the following code gives an essentially correct result:
import numpy as np
from numdifftools import Derivative
# Result should be 1/2 or 0.5
Derivative(np.log)(2.0)
Output:
array(0.5000000000000238)
The issue is the second positional argument. In
Derivative(np.log, 1)(2.0)
the 1 is taken as the step size, not the derivative order, so the result is a central difference with step 1: (ln(3) - ln(1))/2 = ln(3)/2 ≈ 0.5493. The order n must be passed explicitly by keyword:
Derivative(np.log, n=1)(2.0)
>>> array(0.5000000000000234)
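To see where the wrong value comes from: 0.5493... is exactly what a two-point central difference with step size 1 produces, consistent with the positional 1 being consumed as the step argument (a quick illustrative check):
import numpy as np
# Central difference of log at x = 2 with step 1:
# (log(2 + 1) - log(2 - 1)) / (2 * 1) = log(3) / 2
x, step = 2.0, 1.0
print((np.log(x + step) - np.log(x - step)) / (2 * step))
>>> 0.5493061443340549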
Related
I'm trying to calculate the gamma function of a number, and the result overflows to infinity. When I use Maple (for instance) it returns the correct answer, 2.57e1133 (yes, it's huge). I tried using Decimal, but with no success. Is there a solution? Thanks in advance.
The code:
import scipy as sp
from scipy.special import gamma
from decimal import Decimal
from scipy import special

def teste(k):
    Bk2 = Decimal(gamma((1 / (2 * k)) + (3 / 4)))
    return Bk2
print(teste(0.001))
Result
Infinity
I used gammaln to prevent the overflow and cast the result to Decimal. Lastly, I exponentiate with Decimal's exp(), which works in arbitrary precision, to recover the intended result.
from scipy.special import gammaln
from decimal import Decimal

def teste(k):
    # gammaln returns log(gamma(x)), which stays comfortably within float range
    log_gamma = Decimal(gammaln((1 / (2 * k)) + (3 / 4)))
    # Decimal.exp() exponentiates in arbitrary precision, so nothing overflows
    return log_gamma.exp()
print(teste(0.001))
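As an alternative that is not part of the original answer: if the mpmath library is available, it can evaluate the gamma function directly in arbitrary precision, avoiding the overflow altogether (a minimal sketch):
from mpmath import mp, gamma

mp.dps = 30  # carry 30 significant digits
k = 0.001
print(gamma(1 / (2 * k) + 3 / 4))  # approximately 2.57e+1133, matching the Maple result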
When I try to integrate a periodic array with the scipy function sp.fftpack.diff(x, order=-1), it sometimes works and sometimes doesn't.
For example, when integrating x = sin(alpha) to obtain an array of the values of the integral evaluated from 0 to discrete values up to 2*pi, I get the expected result, -cos(alpha). However, when I use it to calculate the values of the integrals of x = sin(alpha) + cos(alpha) + 1 over the same ranges, I do not get the right answer, even though the function is periodic.
I do not understand how this function works. Does someone have an idea?
https://docs.scipy.org/doc/scipy/reference/generated/scipy.fftpack.diff.html
For example, with the code below I obtain the results in the image. I am also comparing the results with those obtained by the trapezoidal rule, which does work once the offset is fixed.
import numpy as np
from scipy import fftpack as sp
from scipy import integrate as inte
import matplotlib.pyplot as plt

N = 150
h = (2 * np.pi) / N
x = np.arange(-np.pi, np.pi, h)
y = np.sin(x) + np.cos(x) + 1
arrExact = -np.cos(x) + np.sin(x) + x
st = inte.cumtrapz(y, x, initial=0) - 2.1  # ad-hoc offset to line up with the exact curve
di = sp.diff(y, order=-1) - 1              # ad-hoc offset as well
plt.plot(x, di, label='diff')
plt.plot(x, arrExact, label='Exact')
plt.plot(x, st, label='cumtrapz')
plt.legend()
plt.show()
Edit: Well, reading the documentation again, I realized scipy assumes x[0] = 0. However, I need to spectrally integrate arrays that do not satisfy this condition. How can I proceed?
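One way to proceed (a sketch, not from the original thread): fftpack.diff sets the zero-frequency Fourier coefficient of the result to zero, so the mean of y, which integrates to a linear, non-periodic term, is silently dropped. Subtract the mean, integrate the oscillatory part spectrally, then restore the linear term and the integration constant by hand:
import numpy as np
from scipy import fftpack as sp

N = 150
h = (2 * np.pi) / N
x = np.arange(-np.pi, np.pi, h)
y = np.sin(x) + np.cos(x) + 1
exact = -np.cos(x) + np.sin(x) + x

mean = y.mean()                    # zero-frequency component (here 1)
di = sp.diff(y - mean, order=-1)   # spectral antiderivative of the oscillatory part
di += mean * (x - x[0])            # restore the linear term coming from the mean
di += exact[0] - di[0]             # pin the integration constant to a known value
print(np.max(np.abs(di - exact)))  # agrees with the exact integral to machine precision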
I have been porting code for an isomap algorithm from MATLAB to Python. I am trying to visualize the sparsity pattern using the spy function.
MATLAB command:
spy(sparse(A));
drawnow;
Python command:
matplotlib.pyplot.spy(scipy.sparse.csr_matrix(A))
plt.show()
I am not able to reproduce the MATLAB result in Python using the above command. Using the command with A in dense (non-sparse) format gives quite a similar result to MATLAB, but it takes quite long (A being 2000-by-2000). What is the SciPy equivalent of MATLAB's sparse function?
Maybe it's your version of matplotlib that is causing trouble; for me, scipy.sparse and matplotlib.pylab work well together.
See the sample code below, which produces the attached 'spy' plot.
import matplotlib.pylab as plt
import scipy.sparse as sps
A = sps.rand(10000, 10000, density=0.00001)
M = sps.csr_matrix(A)
plt.spy(M)
plt.show()
# Check the matplotlib version ('1.3.0' here)
import matplotlib
matplotlib.__version__
This gives this plot:
I just released betterspy, which arguably does a better job here. Install with
pip install betterspy
and run with
import betterspy
from scipy import sparse
A = sparse.rand(20, 20, density=0.1)
betterspy.show(A)
betterspy.write_png("out.png", A)
With smaller markers:
import matplotlib.pylab as pl
import scipy.io
import sys

# Read a matrix in Matrix Market format from the command line
A = scipy.io.mmread(sys.argv[1])
pl.spy(A, precision=0.01, markersize=1)
pl.show()
I'm a Python newbie coming from using MATLAB extensively. I was converting some code that uses log2 in MATLAB, used the NumPy log2 function, and got a different result than I expected for such a small number. I was surprised, since the precision of the numbers should be the same (i.e., MATLAB double vs. NumPy float64).
MATLAB Code
a = log2(64);
--> a=6
Base Python Code
import math
a = math.log2(64)
--> a = 6.0
NumPy Code
import numpy as np
a = np.log2(64)
--> a = 5.9999999999999991
Modified NumPy Code
import numpy as np
a = np.log(64) / np.log(2)
--> a = 6.0
So the native NumPy log2 function gives a result that causes the code to fail a test, since the test checks that a number is a power of 2. The expected result is exactly 6, which both the built-in Python log2 function and the modified NumPy code give using the properties of logarithms. Am I doing something wrong with the NumPy log2 function? I changed the code to use the built-in Python log2 for now, but I just wanted to know the answer.
No, there is nothing wrong with the code; it is just that floating-point numbers cannot always be represented exactly on our computers. Always use an epsilon value to allow a margin of error when comparing float values. Read The Floating Point Guide and this post to learn more.
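For instance, a minimal sketch of such a check (the function name and tolerance are illustrative, not from the original post):
import numpy as np

def is_power_of_two(x, eps=1e-9):
    # Compare the exponent to the nearest integer instead of testing exact equality.
    exponent = np.log2(x)
    return abs(exponent - round(exponent)) < eps

print(is_power_of_two(64))  # True, even if np.log2(64) returns 5.999...
print(is_power_of_two(96))  # False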
EDIT - As cgohlke has pointed out in the comments,
Depending on the compiler used to build numpy np.log2(x) is either computed by the C library or as 1.442695040888963407359924681001892137*np.log(x) See this link.
This may be the reason for the inexact output.
In a project using SciPy and NumPy, should I use scipy.pi, numpy.pi, or math.pi?
>>> import math
>>> import numpy as np
>>> import scipy
>>> math.pi == np.pi == scipy.pi
True
So it doesn't matter, they are all the same value.
The only reason all three modules provide a pi value is so that if you are using just one of them, you can conveniently access pi without having to import another module. They're not providing different values for pi.
One thing to note is that not all libraries use the same meaning for pi, of course, so it never hurts to know what you're using. For example, the symbolic math library SymPy's representation of pi is not the same as that of math and numpy:
import math
import numpy
import scipy
import sympy
print(math.pi == numpy.pi)
> True
print(math.pi == scipy.pi)
> True
print(math.pi == sympy.pi)
> False
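That is because sympy.pi is a symbolic constant, not a float; once it is evaluated numerically, it agrees with the others (a quick illustrative check):
import math
import sympy

print(type(sympy.pi))              # <class 'sympy.core.numbers.Pi'>
print(float(sympy.pi) == math.pi)  # True once evaluated numerically
print(sympy.pi.evalf(30))          # 3.14159265358979323846264338328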
If we look at its source code, scipy.pi is precisely math.pi; in fact, it's defined as
import math as _math
pi = _math.pi
In their source code, math.pi is defined as 3.14159265358979323846 and numpy.pi as 3.141592653589793238462643383279502884; both literals carry far more digits than the 15-17 significant digits of a Python float (an IEEE 754 double), so they round to exactly the same value and it doesn't matter which one you use.
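A quick illustrative check that both literals collapse to the same 64-bit float:
# Both literals round to the same IEEE 754 double:
print(3.14159265358979323846 == 3.141592653589793238462643383279502884)  # True
print(3.14159265358979323846 == 3.141592653589793)                       # True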
That said, if you're not already using NumPy or SciPy, importing them just for np.pi or scipy.pi would add an unnecessary dependency, while math is part of the Python standard library, so there are no dependency issues. For example, to get pi in TensorFlow code in Python, one could use tf.constant(math.pi).