I want to convert SymEngine's complex number I to Python's 1j so that I can use these numbers in plain Python. Currently I have code that works, but it goes through sympy, which makes it slower. Is there an alternative solution, or a way to speed this code up? Here is the code I have:
import sympy
from time import time
from symengine import *
def sym2num(x):
    return complex(sympy.sympify(x))
Here is the reference from which I took the code.
Thank you
There's no built-in way to do this in SymEngine yet (PRs are welcome).
Here's a workaround that is reasonably fast.
def sym2num(x):
    return float(x.real_part()) + float(x.imaginary_part())*1j
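For example, a quick check of the workaround (a small sketch, assuming the real_part()/imaginary_part() accessors used above and that symengine is installed; 3 + 4*I is just an illustrative value):

from symengine import I

def sym2num(x):
    # convert a SymEngine complex value to a native Python complex
    return float(x.real_part()) + float(x.imaginary_part())*1j

print(sym2num(3 + 4*I))  # expected output: (3+4j)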
I enjoy using a lot of functional programming features when playing with Python lists. When I switch to Numpy for big datasets, I would expect operations on the array to be significantly more efficient than native Python list operations over ndarray.tolist(), since the data is stored differently.
So when I try to apply FP things like map, reduce, and filter to a Numpy array, I first search Numpy's docs for some "optimized" equivalents. What I find is numpy.ufunc.reduce, which seems to be the right thing. Out of curiosity, I did a simple test of both approaches:
Use Numpy reduce
import numpy as np
a = np.array(range(100000000))
adf = lambda res, a: res + a
u_adf = np.frompyfunc(adf, 2, 1)
print(u_adf.reduce(a, initial=0))
Use ndarray.tolist() and then use Python native reduce
import numpy as np
from functools import reduce
a = np.array(range(100000000))
adf = lambda res, a: res + a
print(reduce(adf, a.tolist(), 0))
Here comes the most unexpected thing:
> python 1.py
4999999950000000
python 1.py 28.00s user 5.71s system 102% cpu 32.925 total
> python 2.py
4999999950000000
python 2.py 26.38s user 6.38s system 103% cpu 31.792 total
The so-called "stupid" approach is actually the more efficient way?
How can that be? Can anyone please explain this to me, and hopefully give some advice on using functional programming features on Numpy arrays?
Appreciate ^_^
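One thing worth checking here (a sketch, not a full answer): np.frompyfunc only wraps the Python lambda, so u_adf.reduce still calls back into Python once per element, which is why it is no faster than reduce over a list. NumPy's built-in ufuncs, such as np.add, run the whole reduction in compiled code:

import numpy as np

a = np.arange(100000000, dtype=np.int64)
# np.add is a compiled ufunc, so the reduction stays in C; a.sum() is equivalent
print(np.add.reduce(a))   # should match the 4999999950000000 printed above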
I am trying to apply Q-function values in a problem. I don't know which function is available for it in Python.
What is the python equivalent for the following code in octave?
>> f=0:0.01:1;
>> qfunc(f)
The Q-function can be expressed in terms of the error function. Check here for more info. scipy has the error function, special.erf(), which can be used to calculate the Q-function.
import numpy as np
from scipy import special
f = np.linspace(0,1,101)
0.5 - 0.5*special.erf(f/np.sqrt(2)) # Q(f) = 0.5 - 0.5 erf(f/sqrt(2))
Take a look at this https://docs.scipy.org/doc/scipy-0.19.1/reference/generated/scipy.stats.norm.html
Looks like the norm.sf method (survival function) might be what you're looking for.
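A minimal sketch of that approach (assuming scipy is installed): norm.sf(x) is 1 - norm.cdf(x), which for the standard normal is exactly Q(x).

import numpy as np
from scipy.stats import norm

f = np.linspace(0, 1, 101)   # same grid as the Octave example
q = norm.sf(f)               # survival function P(Z > f), i.e. the Q-function
print(q[:3])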
I've used this Q function for my code and it worked perfectly well,
from math import sqrt
from scipy import special as sp

def qfunc(x):
    return 0.5 - 0.5*sp.erf(x/sqrt(2))

I haven't used this one, but I think it should work:

def invQfunc(x):
    return sqrt(2)*sp.erfinv(1 - 2*x)
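As a quick sanity check (a sketch using the two functions defined just above; the probe value 1.5 is arbitrary), qfunc and invQfunc should be inverses of each other:

print(qfunc(0))              # 0.5, since Q(0) = 0.5
print(invQfunc(qfunc(1.5)))  # ~1.5, round-trip through Q and its inverse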
references:
https://mail.python.org/pipermail/scipy-dev/2016-February/021252.html
Python equivalent of MATLAB's qfuncinv()
Thanks @Anton for letting me know how to write a good answer.
Today I started using sympy and its quantum module to implement some basic calculations in Bra-Ket notation.
Executing the code:
from sympy.physics.quantum import *
from sympy.physics.quantum.qubit import *
from sympy import *
from sympy.abc import k
print(Sum(Ket(k), (k, 0, 5)))
yields the expected result, that is, Sum(|k>, (k, 0, 5)) is printed.
Now I'd like to expand the sum and therefore write:
print(Sum(Ket(k), (k, 0, 5)).doit())
However, this doesn't give the correct result; it prints 6*|k>, which obviously is not the desired output. Apparently, the program doesn't recognize Ket(k) as depending on the index k.
How could I work around or solve this issue?
This looks like a bug. You can probably work around it by doing the sum outside of sympy, with standard Python functions like sum(Ket(i) for i in range(6)).
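A minimal sketch of that workaround (output formatting may differ between sympy versions):

from sympy.physics.quantum import Ket

# expand the sum in plain Python instead of relying on Sum(...).doit()
expr = sum(Ket(i) for i in range(6))
print(expr)  # |0> + |1> + |2> + |3> + |4> + |5>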
I'm trying to speed up the following code:
from math import log
from random import random
def logtest1(N):
    tr = 0
    for i in range(1, N):
        T = 40 + 10*random()       # uniform draw in [40, 50)
        tr += -log(random())/T     # -log(U)/T: exponential sample with mean 1/T
    return tr
I'm fairly new to python (coming from matlab)... and this same code runs 5x slower in python than matlab (and Julia), which got my attention.
I tried using numba and parakeet wrappers, and numpy functions instead of plain Python ones, but didn't get any improvement at all.
I'd appreciate any help.
Thanks.
edit: the whole thing is a Monte Carlo simulation, so N is very large... 10e6 for testing purposes
You should really be looking into numpy. And Scipy, while you're at it. Numpy is a speed-optimized package for N-dimensional array numerics, and Scipy is a collection of scientific computing stuff built upon numpy.
If you write the function using numpy arrays, it looks like this:
import numpy as np

def logtest2(N):
    T = 40. + 10. * np.random.rand(N)                # draw all N values at once
    return np.sum(-1*np.log(np.random.rand(N)) / T)
It's also a lot faster. Testing with N = 1000000 gave me a runtime of 500ms for your version and 75ms for this one.
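For reference, a minimal timing harness along these lines (a sketch assuming both logtest1 and logtest2 are defined as above; absolute numbers will vary by machine and NumPy version):

import timeit

N = 1000000
print(timeit.timeit(lambda: logtest1(N), number=1))   # pure-Python loop
print(timeit.timeit(lambda: logtest2(N), number=1))   # vectorized numpy version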
Right off the bat, I'd say use xrange if you are using Python 2.7.x.
So in 2.7 it would be:
def logtest1(N):
    tr = 0
    for i in xrange(N):
        a = random()       # just generate the random number once
        T = 40 + 10*a
        tr += -log(a)/T
    return tr
Here is a summary on why xrange is better: Should you always favor xrange() over range()?
In a project using SciPy and NumPy, should I use scipy.pi, numpy.pi, or math.pi?
>>> import math
>>> import numpy as np
>>> import scipy
>>> math.pi == np.pi == scipy.pi
True
So it doesn't matter, they are all the same value.
The only reason all three modules provide a pi value is so if you are using just one of the three modules, you can conveniently have access to pi without having to import another module. They're not providing different values for pi.
One thing to note is that not all libraries will use the same meaning for pi, of course, so it never hurts to know what you're using. For example, the symbolic math library Sympy's representation of pi is not the same as math and numpy:
import math
import numpy
import scipy
import sympy
print(math.pi == numpy.pi)
> True
print(math.pi == scipy.pi)
> True
print(math.pi == sympy.pi)
> False
If we look at its source code, scipy.pi is precisely math.pi; in fact, it's defined as
import math as _math
pi = _math.pi
In their source code, math.pi is defined as 3.14159265358979323846 and numpy.pi is defined as 3.141592653589793238462643383279502884; both carry more digits than a Python float can represent (about 15-17 significant digits), so it doesn't matter which one you use.
That said, if you're not already using numpy or scipy, importing them just for np.pi or scipy.pi would add an unnecessary dependency, while math is part of the Python standard library, so there are no dependency issues. For example, for pi in TensorFlow code in Python, one could use tf.constant(math.pi).
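For instance, a small sketch (assumes TensorFlow is installed; the name PI is just illustrative):

import math
import tensorflow as tf

# math.pi comes from the standard library, so no numpy/scipy import is needed here
PI = tf.constant(math.pi)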