Is there a way to serialize or save a function that has been binarized using the ufuncify tool from SymPy's autowrap module?
There is a solution here using dill: How to serialize sympy lambdified function?, but this only works for lambdified functions.
Here's a minimal working example to illustrate:
>>> import pickle
>>> import dill
>>> import numpy
>>> from sympy import *
>>> from sympy.utilities.autowrap import ufuncify
>>> x, y, z = symbols('x y z')
>>> fx = ufuncify([x], sin(x))
>>> dill.settings['recurse'] = True
>>> dill.detect.errors(fx)
pickle.PicklingError("Can't pickle <ufunc 'wrapper_module_0'>: it's not found as __main__.wrapper_module_0")
Motivation: I have a few 50-million-character-long SymPy expressions that I'd like to speed up; ufuncify works wonderfully (a three-orders-of-magnitude improvement over lambdify alone), but it takes ~3 hours to ufuncify each expression. I would like to be able to take advantage of the ufuncify-ied expressions from time to time (without waiting a day to re-process them).
Update: to illustrate the problem, my computer going to sleep killed my Python kernel, and now I will need to wait ~10 hours to recover the binarized functions.
Got it. This is actually simpler than using dill/pickle, as the autowrap module produces and compiles C code (by default) that can be imported as a C-extension module [1] [2].
One way to save the generated code is to simply specify the temp directory for autowrap [3], e.g.:
Python environment 1:
import numpy
from sympy import *
from sympy.utilities.autowrap import ufuncify
x, y, z = symbols('x y z')
fx = ufuncify([x], sin(x), tempdir='tmp')
Python environment 2 (same root directory):
import sys
sys.path.append('tmp')
import wrapper_module_0
wrapper_module_0.wrapped_8859643136(1)
There's probably a better approach (e.g., is there an easy way to name the module and its embedded function?), but this will work for now.
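To avoid recompiling on every run, this pattern can be wrapped in a small caching helper. A minimal sketch; cached_ufuncify is a hypothetical name, and the wrapper_module_0 / wrapped_* identifiers are assumptions based on the autogenerated names shown above:
import os
import sys
import importlib
from sympy import symbols, sin
from sympy.utilities.autowrap import ufuncify

def cached_ufuncify(args, expr, tempdir, module_name='wrapper_module_0'):
    # If the module was compiled on a previous run, just re-import it.
    if os.path.isdir(tempdir):
        sys.path.append(tempdir)
        try:
            mod = importlib.import_module(module_name)
            # The wrapped ufunc's name is autogenerated, so search for it.
            for name in dir(mod):
                if name.startswith('wrapped_'):
                    return getattr(mod, name)
        except ImportError:
            pass
    # Otherwise compile it (slow) and persist the generated code in tempdir.
    return ufuncify(args, expr, tempdir=tempdir)

x = symbols('x')
fx = cached_ufuncify([x], sin(x), tempdir='tmp')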
[1] http://www.scipy-lectures.org/advanced/interfacing_with_c/interfacing_with_c.html
[2] https://docs.python.org/2/extending/extending.html
[3] http://docs.sympy.org/latest/modules/utilities/autowrap.html
One can use Julia's built-in modules and functions in Python via the juliacall package. For example:
>>> from juliacall import Main as jl
>>> import numpy as np
# Create a 2x2 random matrix
>>> arr = jl.rand(2,2)
>>> arr
<jl [0.28133223988783074 0.22498491616860727; 0.008312971104033062 0.12927167014532326]>
# Check whether NumPy recognizes the shape of the array
>>> np.array(arr).shape
(2, 2)
>>> type(np.array(arr))
<class 'numpy.ndarray'>
Then I'm curious to know whether importing an installed Julia package into Python is possible. For example, let's say that one wants to import Flux.jl into Python. Is there a way to achieve this?
I found the answer by scrutinizing the second example on the juliacall GitHub page. Following that example, I'm able to import Flux.jl by taking these steps:
>>> from juliacall import Main as jl
>>> jl.seval("using Flux")
Also, one can install any registered Julia package using Pkg in Python:
>>> from juliacall import Main as jl
>>> jl.Pkg.add("Flux")
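Once the package is loaded, its exported names become attributes reachable from Python. A minimal sketch, assuming Flux is installed (relu is just a convenient exported Flux function to demonstrate the call path):
from juliacall import Main as jl

jl.seval("using Flux")       # load the installed Julia package
print(jl.Flux.relu(-1.0))    # call an exported Flux function from Python -> 0.0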
I have some code that currently returns the pylint error:
source.py:2:0: W0611: Unused sympy imported as sp (unused-import)
There is a clue: "Note: parse_expr() is available for you to use in this function and sympy has been imported into this question as sp", but I don't understand what to do.
It works fine in my WingIDE but is failing on the server I submit it to. Any input on how to change the code so it doesn't produce this error would be appreciated. Code is below :)
from sympy.parsing.sympy_parser import parse_expr
from sympy import Eq, solve
def num_intersections(expressions):
    """return a list of the number of times each expression
    in this list intersects with every other expression."""
    lst = []
    for i in range(len(expressions)):
        expressions[i] = parse_expr(expressions[i])
    new_exp_lst = expressions
    for expp in expressions:
        intersection_cnt = 0
        new_exp_lst.remove(expp)
        for n_expp in new_exp_lst:
            intersection_cnt += len(solve(Eq(expp, n_expp), list=True))
        lst.append(intersection_cnt)
        new_exp_lst.append(expp)
    return lst
It looks like your professor has already imported sympy for you as sp and is doing some magic to append your code to their own file, which they then lint with pylint. Use the sp. prefix whenever you need sympy, i.e.
Instead of
from sympy.parsing.sympy_parser import parse_expr
from sympy import Eq, solve
Use:
import sympy as sp # Remove that before submitting
And then replace len(solve(Eq(expp, n_expp), list=True)) with len(sp.solve(sp.Eq(expp, n_expp), list=True)), and do the same for the other sympy calls (parse_expr becomes sp.parse_expr, etc.).
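Putting it together, the submitted version might look like this (a sketch, assuming the grader provides sp for you):
# import sympy as sp  # provided by the grader; remove before submitting

def num_intersections(expressions):
    """return a list of the number of times each expression
    in this list intersects with every other expression."""
    lst = []
    for i in range(len(expressions)):
        expressions[i] = sp.parse_expr(expressions[i])
    new_exp_lst = expressions
    for expp in expressions:
        intersection_cnt = 0
        new_exp_lst.remove(expp)
        for n_expp in new_exp_lst:
            intersection_cnt += len(sp.solve(sp.Eq(expp, n_expp), list=True))
        lst.append(intersection_cnt)
        new_exp_lst.append(expp)
    return lst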
I am trying to print my Sympy-expression as a string ready to be used with Numpy. I just cannot figure out how to do it.
I found that there is sp.printing.pycode: https://docs.sympy.org/latest/_modules/sympy/printing/pycode.html
The web page states that "This module contains python code printers for plain python as well as NumPy & SciPy enabled code.", but I just cannot figure out how to get it to output the expression in NumPy format.
sp.printing.pycode(expr)
'math.cos((1/2)*alpha)*math.cos((1/2)*beta)'
That web page also contains class NumPyPrinter(PythonCodePrinter), but I do not know how to use it. def pycode(expr, **settings) just seems to use return PythonCodePrinter(settings).doprint(expr) as a default all the time.
The definition of pycode is almost trivial:
def pycode(expr, **settings):
    # docstring skipped
    return PythonCodePrinter(settings).doprint(expr)
It should be straightforward to run NumPyPrinter().doprint(expr) instead. The problem is that sympy.printing re-exports the pycode function, which shadows the sympy.printing.pycode module of the same name. However, we can still import the class directly and use it:
import sympy as sy
from sympy.printing.pycode import NumPyPrinter
x = sy.Symbol('x')
y = x * sy.cos(x * sy.pi)
code = NumPyPrinter().doprint(y)
print(code)
# x*numpy.cos(numpy.pi*x)
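Note: in more recent SymPy releases the code printers were reorganized, so the import above may fail; under that assumption (roughly SymPy 1.8 and later), the class lives in sympy.printing.numpy instead:
import sympy as sy
from sympy.printing.numpy import NumPyPrinter  # newer SymPy versions

x = sy.Symbol('x')
print(NumPyPrinter().doprint(sy.cos(x)))  # numpy.cos(x)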
I'm using the pp module of python.
What I need to do is run in parallel the "fmin" function of "scipy.optimize".
I'm importing fmin like this:
from scipy.optimize import fmin
Next, I'm defining a function which executes the fmin function like this:
def fitting():
    v = fmin(e, v0, args=(x, y), maxiter=10000, maxfun=10000)
    return v
And for this to run in parallel I'm using:
job5 = job_server.submit(fitting, (e, v0, x, y,), (fitting,), ("scipy.optimize",))
v = job5()
Then I get a PicklingError for the module of job5, which I guess is "scipy.optimize".
I also tried import scipy.optimize as sth, but job_server.submit does not accept "sth" as a module.
Any solutions?
Thank you.
Put the from scipy.optimize import fmin line inside the fitting function directly, and stop passing it to submit.
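A minimal sketch of that suggestion, assuming pp's submit(func, args, depfuncs, modules) signature; note that e, v0, x, and y are now passed as explicit arguments, since the workers don't share your globals:
def fitting(e, v0, x, y):
    # Import inside the function so each pp worker resolves it locally,
    # instead of trying to pickle the scipy.optimize module.
    from scipy.optimize import fmin
    return fmin(e, v0, args=(x, y), maxiter=10000, maxfun=10000)

job5 = job_server.submit(fitting, (e, v0, x, y))
v = job5()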
You can't do this with pp very easily. However, if you use dill and the fork of pp that is in pathos (i.e. pathos.pp) then it works in most cases.
See several examples in the mystic optimization package, which provides parallel and distributed optimization using extensions of the scipy optimizers.
For example, this works with both pathos.multiprocessing and pathos.pp:
https://github.com/uqfoundation/mystic/blob/master/examples/buckshot_example06.py
The above code launches several fmin_powell instances in parallel, which can give you pseudo-global optimization with steepest-descent speeds.
Get the code here: https://github.com/uqfoundation
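For reference, a minimal sketch of that pattern; ParallelPool comes from pathos.pools, and the objective function here is a stand-in for your own e:
from pathos.pools import ParallelPool
from scipy.optimize import fmin_powell

def objective(v):
    # Stand-in objective; replace with your own function of v (and x, y).
    return (v - 3.0) ** 2

def fit(v0):
    # Each call runs an independent fmin_powell instance.
    return fmin_powell(objective, v0, disp=False)

pool = ParallelPool(nodes=4)
# Launch several local optimizations from different starting points in parallel.
print(pool.map(fit, [0.0, 1.0, 5.0, 10.0]))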
In a project using SciPy and NumPy, should I use scipy.pi, numpy.pi, or math.pi?
>>> import math
>>> import numpy as np
>>> import scipy
>>> math.pi == np.pi == scipy.pi
True
So it doesn't matter; they are all the same value.
The only reason all three modules provide a pi value is so if you are using just one of the three modules, you can conveniently have access to pi without having to import another module. They're not providing different values for pi.
One thing to note is that not all libraries will use the same meaning for pi, of course, so it never hurts to know what you're using. For example, the symbolic math library SymPy's representation of pi is not the same as that of math and numpy:
import math
import numpy
import scipy
import sympy
print(math.pi == numpy.pi)
> True
print(math.pi == scipy.pi)
> True
print(math.pi == sympy.pi)
> False
If we look at its source code, scipy.pi is precisely math.pi; in fact, it is defined as
import math as _math
pi = _math.pi
In their source code, math.pi is defined to be 3.14159265358979323846 and numpy.pi to be 3.141592653589793238462643383279502884; both literals carry more digits than the 15-17 significant digits a Python float can hold, so it doesn't matter which one you use.
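A quick check shows that the extra digits are simply discarded when each literal is parsed into a 64-bit float, so all the definitions collapse to the same value:
import math

# Both literals round to the same IEEE 754 double:
print(float('3.14159265358979323846') == math.pi)                  # True
print(float('3.141592653589793238462643383279502884') == math.pi)  # True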
That said, if you're not already using numpy or scipy, importing them just for np.pi or scipy.pi would add an unnecessary dependency, while math is part of the Python standard library, so there are no dependency issues. For example, to use pi in TensorFlow code in Python, one could write tf.constant(math.pi).