One can use Julia's built-in modules and functions from Python using the juliacall package. For example:
>>> from juliacall import Main as jl
>>> import numpy as np
# Create a 2x2 random matrix
>>> arr = jl.rand(2,2)
>>> arr
<jl [0.28133223988783074 0.22498491616860727; 0.008312971104033062 0.12927167014532326]>
# Check whether NumPy recognizes the shape of the array
>>> np.array(arr).shape
(2, 2)
>>> type(np.array(arr))
<class 'numpy.ndarray'>
I'm now curious whether it is possible to import an installed Julia package into Python. For example, say one wants to import Flux.jl into Python. Is there a way to achieve this?
I found the answer by scrutinizing the screenshots in the second example on the juliacall GitHub page. According to that example, I can import Flux.jl with these steps:
>>> from juliacall import Main as jl
>>> jl.seval("using Flux")
Also, one can install any registered Julia package from Python using Pkg:
>>> from juliacall import Main as jl
>>> jl.Pkg.add("Flux")
This might be a pretty niche problem, but I've come across it and found a solution which may help someone eventually. Basically, I'm trying to call the GUDHI C++ library through R's TDA package using rpy2, with the following code:
import gudhi
import rpy2.robjects.packages as rpackages
import rpy2.robjects.vectors as rvectors
import rpy2.robjects as robjects
import rpy2.robjects.numpy2ri
import numpy as np
point_cloud = np.random.uniform(0, 10, size=(30, 3))
# tda.point_cloud_to_pl(point_cloud, method="GUDHI_R")
rpy2.robjects.numpy2ri.activate()
# import R's "base" package
base = rpackages.importr('base')
# import R's "utils" package
utils = rpackages.importr('utils')
# select a mirror for R packages
utils.chooseCRANmirror(ind=1) # select the first mirror in the list
# R package names
packnames = ('TDA', 'deldir', 'Matrix', 'SparseM')
names_to_install = [x for x in packnames if not rpackages.isinstalled(x)]
if len(names_to_install) > 0:
    utils.install_packages(rvectors.StrVector(names_to_install))
# Importing packages
rpackages.importr('TDA')
rpackages.importr('deldir')
rpackages.importr('Matrix')
rpackages.importr('SparseM')
X = robjects.r.assign("X", point_cloud)
alpha_complex_diag = robjects.r['alphaComplexDiag']
ph_output = alpha_complex_diag(X)
print(ph_output)
But I keep getting a segmentation fault:
*** caught segfault ***
R[write to console]: address 0x7fa6ac2c6090, cause 'invalid permissions'
The solution is to remove the import gudhi at the top of the Python script. For some reason that import prevents R from using the library. Perhaps because both are trying to load the same shared library?
It may indeed be a conflict between dynamically loaded libraries used by both Python and R. One such example is scipy doing what seem to be odd things with BLAS, which causes a crash: https://github.com/rpy2/rpy2/issues/505
I do not know GUDHI at all, unfortunately. If this is a blocker, you'd need to run this through a debugger to find out what is happening in the C library.
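A general workaround for such shared-library clashes, if you cannot simply drop one of the imports, is to isolate one side in its own process so the two sets of native libraries never share an address space. A minimal sketch (the R snippet here is just a placeholder, not GUDHI-specific):
import subprocess
import sys
import textwrap

# Run the rpy2/R portion in a child interpreter; its dynamic libraries
# are then loaded in a separate address space from `import gudhi`.
r_side = textwrap.dedent("""
    import rpy2.robjects as robjects
    print(robjects.r('R.version.string')[0])
""")
subprocess.run([sys.executable, "-c", r_side], check=True)

import gudhi  # safe here: the R work happened in another process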
To get a feel for NumPy's source code I want to start by adding my own dummy custom function. I've followed their docs to set up a development environment and have done an in-place build of NumPy as advised ($ python setup.py build_ext -i; export PYTHONPATH=$PWD).
Now I want to add this function:
def multiplybytwo(x):
    """
    Return the double of the input.
    """
    y = x * 2
    return y
But I do not know where to place it so that this code would run properly:
import numpy as np
a = np.array([10])
b = np.multiplybytwo(a)
This answer has been added at OP’s request.
As mentioned in my comments above, a starting point (of many) would be to investigate the top-level __init__.py file in the numpy library. Crack this open and read through the various imports and initialisation code. This will help you get your bearings as to where the new function could be placed, as you'll likely see where other (familiar) top-level functions are imported from.
Caveat:
While I’d generally discourage adding custom functionality to a library (as all changes will be lost on upgrade), I understand why you’re looking into this. Just keep in mind what is stated in the previous sentence.
If you want to add your function to NumPy, I'd:
clone the GitHub repo: git clone https://github.com/numpy/numpy.git
write your function somewhere in /numpy/core (see the sketch after this list)
compile it: python setup.py build_ext --inplace (more info about compilation/installation: https://numpy.org/doc/stable/user/building.html)
for a deeper feel for NumPy, check how to contribute: https://numpy.org/devdocs/dev/index.html
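As a concrete and heavily simplified sketch of the "write your function" step: the file name numpy/core/_custom.py below is hypothetical, and the exact layout of numpy/core varies between NumPy versions, so treat the paths and import lines as assumptions rather than the actual NumPy structure.
# numpy/core/_custom.py  (hypothetical new file in your clone)
def multiplybytwo(x):
    """Return the double of the input."""
    return x * 2

# numpy/core/__init__.py  (lines appended; illustrative only)
from ._custom import multiplybytwo
__all__ += ['multiplybytwo']
Because numpy's top-level __init__.py re-exports the core namespace, an in-place rebuild should then make np.multiplybytwo(np.array([10])) return array([20]).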
Everything in Python is an object!
So, you can add your function to NumPy the same way you would assign x = 1.
See below:
>>> def multiplybytwo(x):
...     """
...     Return the double of the input.
...     """
...     y = x * 2
...     return y
>>> import numpy as np
>>> np.multiplybytwo = multiplybytwo ## HERE IS THE "TRICK"
>>> a = np.array([10])
>>> b = np.multiplybytwo(a)
>>> b
array([20])
>>> print(b)
[20]
In the line marked as the "trick", you attach your own function to the numpy module object, so np.multiplybytwo now refers to the function you defined. Note that this only lasts for the current session; it does not modify NumPy itself.
You can prove this using id:
>>> id(np.multiplybytwo)
140489045965856
>>> id(multiplybytwo)
140489045965856
>>>
Both ids are the same: np.multiplybytwo and multiplybytwo are the very same object.
Is there a way to serialize or save a function that has been binarized using the ufuncify tool from SymPy's autowrap module?
There is a solution here using dill: How to serialize sympy lambdified function?, but this only works for lambdified functions.
Here's a minimal working example to illustrate:
>>> import pickle
>>> import dill
>>> import numpy
>>> from sympy import *
>>> from sympy.utilities.autowrap import ufuncify
>>> x, y, z = symbols('x y z')
>>> fx = ufuncify([x], sin(x))
>>> dill.settings['recurse'] = True
>>> dill.detect.errors(fx)
pickle.PicklingError("Can't pickle <ufunc 'wrapper_module_0'>: it's not found as __main__.wrapper_module_0")
Motivation: I have a few 50-million-character-long SymPy expressions that I'd like to speed up; ufuncify works wonderfully (a three-orders-of-magnitude improvement over lambdify alone), but it takes 3 hr to ufuncify each expression. I would like to be able to take advantage of the ufuncify-ed expressions from time to time (without waiting a day to re-process them). Update: to illustrate, my computer going to sleep killed my Python kernel, and now I will need to wait ~10 hr to recover the binarized functions.
Got it. This is actually simpler than using dill/pickle, as the autowrap module produces and compiles C code (by default) that can be imported as a C-extension module [1] [2].
One way to save the generated code is simply to specify the temp directory for autowrap [3], e.g.:
Python environment 1:
import numpy
from sympy import *
from sympy.utilities.autowrap import ufuncify
x, y, z = symbols('x y z')
fx = ufuncify([x], sin(x), tempdir='tmp')
Python environment 2 (same root directory):
import sys
sys.path.append('tmp')
import wrapper_module_0
wrapper_module_0.wrapped_8859643136(1)
There's probably a better approach (e.g., is there an easy way to name the module and its embedded function?), but this will work for now.
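On the naming question: the generated names are hashed and not stable across runs, so one workaround (a sketch assuming a Unix-style .so extension module sitting in tmp/) is to discover the module and function names at import time rather than hardcode them:
import glob
import importlib
import os
import sys

sys.path.append('tmp')
# Find the generated extension module (e.g. wrapper_module_0.cpython-*.so):
so_path = glob.glob('tmp/wrapper_module_*.so')[0]
module_name = os.path.basename(so_path).split('.')[0]
wrapper = importlib.import_module(module_name)
# The wrapped function's hashed name can be discovered the same way:
func_name = [n for n in dir(wrapper) if n.startswith('wrapped_')][0]
fx = getattr(wrapper, func_name)
print(fx(1.0))  # same result as sin(1.0)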
[1] http://www.scipy-lectures.org/advanced/interfacing_with_c/interfacing_with_c.html
[2] https://docs.python.org/2/extending/extending.html
[3] http://docs.sympy.org/latest/modules/utilities/autowrap.html
I am writing a script in Spyder 2.2.5 on Windows 7 with Python 2.7.
At the very beginning, I have tried all of these ways of importing:
from numpy import *
or
import numpy
and also
import numpy as np
And for each and every line where I use numpy, I get an error when compiling:
QR10 = numpy.array(QR10,dtype=float)
QR20 = numpy.array(QR20,dtype=float)
QR11 = numpy.array(QR11,dtype=float)
QR21 = numpy.array(QR21,dtype=float)
However, even with these 30 errors, the script works if I run it.
Any help with this?
Python is not actually being compiled here. Spyder just performs static code analysis using Pylint. Depending on the version of Pylint being used, this could be a bug or a case it cannot detect.
For example, the import statement (or the path that gets to it) could be within a conditional block, which cannot be resolved until runtime. Given that you are using Spyder, it could also be that you put your import statement directly on the console, or in a separate file, and then use the imported module from the script.
You may try to see if you receive the same error with a script like the following:
import numpy
QR10 = [1, 2, 3]
QR20 = [1, 2, 3]
QR11 = [1, 2, 3]
QR21 = [1, 2, 3]
QR10 = numpy.array(QR10,dtype=float)
QR20 = numpy.array(QR20,dtype=float)
QR11 = numpy.array(QR11,dtype=float)
QR21 = numpy.array(QR21,dtype=float)
You should not see E0602 here. Funnily enough, however, you may receive [E1101] Module 'numpy' has no 'array' member, because it turns out that numpy defines some members dynamically, so Pylint cannot know about them; this is a known Pylint bug that has actually been solved already.
The moral of the story is that Pylint errors shouldn't keep you awake at night. It's good to see the report, but if you are sure that your code makes sense and it runs just right, you may just ignore them - although trying to know why it is giving an error is always a good exercise.
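If a particular warning is known to be spurious, Pylint also lets you silence it inline with a comment, so the report stays clean without hiding real problems; for example:
import numpy

QR10 = [1, 2, 3]
# Silence the spurious no-member warning on this line only:
QR10 = numpy.array(QR10, dtype=float)  # pylint: disable=no-member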
If you import NumPy with
import numpy as np
then use the alias consistently:
QR10 = np.array(QR10, dtype=float) # instead of numpy.array
In a project using SciPy and NumPy, should I use scipy.pi, numpy.pi, or math.pi?
>>> import math
>>> import numpy as np
>>> import scipy
>>> math.pi == np.pi == scipy.pi
True
So it doesn't matter, they are all the same value.
The only reason all three modules provide a pi value is convenience: if you are using just one of the three modules, you can access pi without having to import another module. They are not providing different values for pi.
One thing to note is that not all libraries use the same representation for pi, of course, so it never hurts to know which one you're using. For example, the symbolic math library SymPy's representation of pi is not the same as math's and numpy's:
import math
import numpy
import scipy
import sympy
print(math.pi == numpy.pi)
> True
print(math.pi == scipy.pi)
> True
print(math.pi == sympy.pi)
> False
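SymPy's pi is a symbolic constant rather than a float, which is why the comparison above is False; it can still be evaluated numerically when needed:
import math
import sympy

print(float(sympy.pi) == math.pi)  # both round to the same double
> True
print(sympy.pi.evalf(30))  # arbitrary precision on demand
> 3.14159265358979323846264338328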
If we look at its source code, scipy.pi is precisely math.pi; in fact, it's defined as
import math as _math
pi = _math.pi
(Note that newer SciPy releases have deprecated scipy.pi, so prefer math.pi or numpy.pi in new code.)
In their source code, math.pi is defined as 3.14159265358979323846 and numpy.pi as 3.141592653589793238462643383279502884; both literals go well beyond the 15-17 significant digits a Python float can hold, so it doesn't matter which one you use.
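Since both literals round to the same IEEE 754 double, they compare equal:
>>> 3.14159265358979323846 == 3.141592653589793238462643383279502884
True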
That said, if you're not already using numpy or scipy, importing them just for np.pi or scipy.pi would add an unnecessary dependency, while math is part of the Python standard library, so there are no dependency issues. For example, for pi in TensorFlow code in Python, one could use tf.constant(math.pi).