Should I use scipy.pi, numpy.pi, or math.pi? - python

In a project using SciPy and NumPy, should I use scipy.pi, numpy.pi, or math.pi?

>>> import math
>>> import numpy as np
>>> import scipy
>>> math.pi == np.pi == scipy.pi
True
So it doesn't matter; they are all the same value.
The only reason all three modules provide a pi value is convenience: if you are using just one of the three modules, you have access to pi without having to import another module. They are not providing different values for pi.

One thing to note is that not all libraries use the same representation of pi, of course, so it never hurts to know what you're using. For example, the symbolic math library SymPy's pi is not the same object as the float in math and numpy:
import math
import numpy
import scipy
import sympy
print(math.pi == numpy.pi)
> True
print(math.pi == scipy.pi)
> True
print(math.pi == sympy.pi)
> False

If we look at its source code, scipy.pi is precisely math.pi; in fact, it's defined as
import math as _math
pi = _math.pi
In their source code, math.pi is defined as 3.14159265358979323846 and numpy.pi as 3.141592653589793238462643383279502884; both exceed the roughly 15–17 significant digits a 64-bit Python float can hold, so both literals round to exactly the same float and it doesn't matter which one you use.
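To see this concretely, both literals can be compared after rounding to a 64-bit float, using only the standard library:

```python
import math

# Both source literals round to the same IEEE-754 double, so the extra
# digits in numpy's definition make no runtime difference.
short_literal = float("3.14159265358979323846")                  # math's literal
long_literal = float("3.141592653589793238462643383279502884")   # numpy's literal
print(short_literal == long_literal == math.pi)  # True
```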
That said, if you're not already using numpy or scipy, importing them just for np.pi or scipy.pi adds an unnecessary dependency, whereas math is part of the Python standard library, so there are no dependency issues. For example, for pi in TensorFlow code in Python, one could use tf.constant(math.pi).

Related

convert Symengine I to python 1j

I want to convert a SymEngine complex number I to Python's 1j so that I can use these numbers in normal Python. Currently I have code for this, but it runs through sympy, which makes it slower. Is there an alternative solution, or a way to speed this up? Here is the code I have:
import sympy
from time import time
from symengine import *
def sym2num(x):
    return complex(sympy.sympify(x))
Here is the reference from which I took the code
Thank you
There's no built-in way to do this in SymEngine yet (PRs are welcome).
Here's a workaround that is reasonably fast.
def sym2num(x):
    return float(x.real_part()) + float(x.imaginary_part())*1j
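The workaround can be exercised even without SymEngine installed by mimicking its interface; the `StubSymComplex` class below is a hypothetical stand-in (not part of symengine) that exposes the same `real_part()`/`imaginary_part()` methods:

```python
# Stand-in for a SymEngine complex number, for illustration only.
class StubSymComplex:
    def __init__(self, re, im):
        self._re, self._im = re, im

    def real_part(self):
        return self._re

    def imaginary_part(self):
        return self._im

def sym2num(x):
    # Pull out the two real components and rebuild a native Python complex.
    return float(x.real_part()) + float(x.imaginary_part()) * 1j

print(sym2num(StubSymComplex(3, 4)))  # (3+4j)
```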

Is there a way to serialize a sympy ufuncify-ied function?

Is there a way to serialize or save a function that has been binarized using the ufuncify tool from SymPy's autowrap module?
There is a solution here using dill: How to serialize sympy lambdified function?, but this only works for lambdified functions.
Here's a minimal working example to illustrate:
>>> import pickle
>>> import dill
>>> import numpy
>>> from sympy import *
>>> from sympy.utilities.autowrap import ufuncify
>>> x, y, z = symbols('x y z')
>>> fx = ufuncify([x], sin(x))
>>> dill.settings['recurse'] = True
>>> dill.detect.errors(fx)
pickle.PicklingError("Can't pickle <ufunc 'wrapper_module_0'>: it's not found as __main__.wrapper_module_0")
Motivation: I have a few 50-million-character-long SymPy expressions that I'd like to speed up; ufuncify works wonderfully (a three-orders-of-magnitude improvement over lambdify alone), but it takes 3 hr to ufuncify each expression. I would like to be able to take advantage of the ufuncify-ied expressions from time to time (without waiting a day to re-process them). Update: to illustrate the problem, my computer going to sleep killed my Python kernel, and now I will need to wait ~10 hr to recover the binarized functions.
Got it. This is actually simpler than using dill/pickle, as the autowrap module produces and compiles C code (by default) that can be imported as a C-extension module [1] [2].
One way to save the generated code is to simply specify the temp directory for autowrap [3], e.g.:
Python environment 1:
import numpy
from sympy import *
from sympy.utilities.autowrap import ufuncify
x, y, z = symbols('x y z')
fx = ufuncify([x], sin(x), tempdir='tmp')
Python environment 2 (same root directory):
import sys
sys.path.append('tmp')
import wrapper_module_0
wrapper_module_0.wrapped_8859643136(1)
There's probably a better approach (e.g., is there an easy way to name the module and its embedded function?), but this will work for now.
[1] http://www.scipy-lectures.org/advanced/interfacing_with_c/interfacing_with_c.html
[2] https://docs.python.org/2/extending/extending.html
[3] http://docs.sympy.org/latest/modules/utilities/autowrap.html
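As a follow-up sketch (not part of the original answer): rather than hard-coding the auto-generated module name, it can be discovered at import time. This assumes exactly one `wrapper_module_*` file in the temp directory; `load_wrapped` is a hypothetical helper, not part of sympy:

```python
import glob
import importlib
import os
import sys

def load_wrapped(tempdir='tmp'):
    """Import the single autowrap-generated wrapper_module_* from tempdir."""
    sys.path.insert(0, tempdir)
    # autowrap names its output wrapper_module_<n>; take the first match and
    # strip any extension(s) to recover the importable module name.
    candidates = sorted(glob.glob(os.path.join(tempdir, 'wrapper_module_*')))
    name = os.path.basename(candidates[0]).split('.')[0]
    return importlib.import_module(name)
```

The embedded function name is still auto-generated, but at least the module itself no longer has to be known in advance.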

Sympy: Expanding sum that involves Kets from its quantum module

Today I started using sympy and its quantum module to implement some basic calculations in Bra-Ket notation.
Executing the code:
from sympy.physics.quantum import *
from sympy.physics.quantum.qubit import *
from sympy import *
from sympy.abc import k
print(Sum(Ket(k), (k, 0, 5)))
yields the expected result, that is, Sum(|k>, (k, 0, 5)) is printed.
Now I'd like to expand the sum and therefore write:
print(Sum(Ket(k), (k, 0, 5)).doit())
However, this doesn't give the correct result, but prints out 6*|k> which obviously is not the desired output. Apparently, the program doesn't recognize Ket(k) as depending on the index k.
How could I work around or solve this issue?
Looks like a bug. You can probably work around it by doing the sum outside of sympy, with standard python functions like sum(Ket(i) for i in range(6)).

How can I make a transfer function for an RC circuit in python

I'm fairly new to programming, but this problem happens in python and in excel as well.
I'm using the following formulas for the RC transfer function
s/(s+1) for High Pass
1/(s+1) for Low Pass
with s = jwRC
below is the code I used in python
from pylab import *
from numpy import *
from cmath import *
"""
Generating a transfer function for RC filters.
Importing modules for complex math and plotting.
"""
f = arange(1, 5000, 1)
w = 2.0j*pi*f
R=100
C=1E-5
hp_tf = (w*R*C)/(w*R*C+1) # High Pass Transfer function
lp_tf = 1/(w*R*C+1) # Low Pass Transfer function
plot(f, hp_tf) # plot high pass transfer function
plot(f, lp_tf, '-r') # plot low pass transfer function
xscale('log')
I can't post images yet so I can't show the plot. But the issue here is the cutoff frequency is different for each one. They should cross at y=0.707, but they actually cross at about 0.5.
I figure my formula or method is wrong somewhere, but I can't find the mistake can anyone help me out?
Also, on a related note, I tried to convert to dB scale and I get the following error:
TypeError: only length-1 arrays can be converted to Python scalars
I'm using the following
debl=20*log(hp_tf)
This is a classic example of why you should avoid pylab and, more generally, imports of the form
from module import *
unless you know exactly what they do, since they hopelessly clutter the namespace.
Using,
import matplotlib.pyplot as plt
import numpy as np
and then calling np.log and plt.plot etc. will solve your problem.
Further explanation
What's happening here is that,
from pylab import *
defines a log function from numpy that operates on arrays (the one you want).
However, the later import,
from cmath import *
overwrites it with a version that only accepts scalars, hence your error.
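As a sanity check on the cutoff question itself, the transfer functions can be evaluated at f_c = 1/(2πRC) with only the standard library. A minimal sketch (the 0.5 crossing reported in the question is what you would see if only the real part of the complex response were drawn, which is what plotting a complex array directly typically does):

```python
import math

# Evaluate both RC filters at the cutoff frequency f_c = 1/(2*pi*R*C).
R, C = 100, 1e-5
f_c = 1 / (2 * math.pi * R * C)   # ~159.15 Hz
s = 2j * math.pi * f_c * R * C    # s = j*w*R*C; equals 1j exactly at cutoff
hp = s / (s + 1)                  # high-pass response
lp = 1 / (s + 1)                  # low-pass response

# The magnitudes cross at 1/sqrt(2) ~ 0.707; the real parts cross at 0.5.
print(abs(hp), abs(lp))

# dB conversion of a scalar magnitude needs no numpy:
print(20 * math.log10(abs(hp)))   # ~ -3.01 dB
```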

sympy error - AttributeError: sqrt

I am new to Python, and I keep getting the following error
..., line 27, in <module>
eq=(p**2+2)/p/sqrt(p**2+4)
AttributeError: sqrt
I tried math.sqrt and numpy.sqrt, but neither of these works. Does anyone know where I'm going wrong?
My code is:
from numpy import *
from matplotlib import *
from pylab import *
from sympy import Symbol
from sympy.solvers import solve
p=Symbol('p')
eq=(p**2+2)/p/sqrt(p**2+4)
solve(eq,1.34,set=True)
sqrt is also defined in the math module; importing it this way removes the AttributeError:
from math import sqrt
Note, though, that math.sqrt only accepts numeric arguments, so it will still fail (with a TypeError) on the symbolic expression (p**2+4); for symbolic work you need sympy.sqrt.
You are using a sympy symbol: either you wanted to do numerical sqrt (in which case use numpy.sqrt on an actual number) or you wanted symbolic sqrt (in which case use sympy.sqrt). Each of the imports replaces the definition of sqrt in the current namespace, from math, sympy or numpy. It's best to be explicit and not use "import *".
I suspect from the line which follows, you want sympy.sqrt here.
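The namespace shadowing described above is easy to demonstrate with only the standard library: each later star-import (or explicit import of the same name) silently replaces `sqrt` in the current namespace.

```python
from math import sqrt
print(sqrt(4.0))   # 2.0 -- math.sqrt: numeric input, returns a float

from cmath import sqrt
print(sqrt(4.0))   # (2+0j) -- the name now refers to cmath.sqrt, returns complex
```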
