Python module CuPy gives an error when using cupy.einsum()

I am dealing with a problem with CuPy. It generally works great for me, at a very satisfying speed, but I have a problem when I use the cupy.einsum() method.
The same syntax works with NumPy without any error, but with CuPy it raises an exception. Here is the code:
import numpy as np
A = np.random.randn(2,3,10)
B = np.random.randn(3,4)
C = np.einsum('ijk,jl->ijl',A,B)
This works quite well and I get the result that I want consistently. However, when I write the same code with Cupy
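For reference, in 'ijk,jl->ijl' only k is missing from the output, so it is the only summed index: C[i,j,l] = (sum over k of A[i,j,k]) * B[j,l], giving shape (2, 3, 4) here. A tiny pure-Python check of that contraction on small nested lists (so it runs without NumPy):

```python
# einsum 'ijk,jl->ijl' sums over k (the only index missing from the output),
# so C[i][j][l] = (sum over k of A[i][j][k]) * B[j][l].
A = [[[1, 2], [3, 4]]]    # shape (1, 2, 2): i=1, j=2, k=2
B = [[10, 20], [30, 40]]  # shape (2, 2):    j=2, l=2

C = [[[sum(A[i][j][k] for k in range(len(A[i][j]))) * B[j][l]
       for l in range(len(B[0]))]
      for j in range(len(B))]
     for i in range(len(A))]

print(C)  # [[[30, 60], [210, 280]]]
```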
import cupy as cp
A = cp.random.randn(2,3,10)
B = cp.random.randn(3,4)
C = cp.einsum('ijk,jl->ijl',A,B)
When I run this, A and B are computed, but it raises an error when computing C. This is the error:
Traceback (most recent call last):
File "", line 4, in
C = cp.einsum('ijk,jl->ijl',A,B)
File "C:\Users\Okan\anaconda3\lib\site-packages\cupy\linalg\einsum.py", line 389, in einsum
result_dtype = cupy.result_type(*operands) if dtype is None else dtype
File "<__array_function__ internals>", line 6, in result_type
TypeError: no implementation found for 'numpy.result_type' on types
that implement __array_function__: [<class 'cupy.core.core.ndarray'>]
I would be glad to hear any idea or solution for this issue.
Thank you.

For those who are experiencing the same problem: create a new Conda environment with a Python version above 3.9. After that, installing CuPy with
conda install cupy
will directly install a recent version (v7.8 or higher). The problem was caused by the version of CuPy; after upgrading, it was fixed.

Related

'ValueError: numpy.ndarray size changed, may indicate binary incompatibility' - but 2nd attempt succeeds

I am getting the following error when I deserialize a causalml (0.10.0) model on Linux (x86-64) that was serialized on OS X (darwin):
ValueError: numpy.ndarray size changed, may indicate binary incompatibility.
Unexpectedly, trying to deserialize it again in the same Python session does succeed!
The environment
On the serializing machine:
Python 3.8, in a poetry .venv
numpy 1.18.5 (the latest version compatible with causalml 0.10.0)
os-x
On the deserializing machine:
docker based on AWS lambda python 3.8
Python 3.8
linux x86_64
Both have cython version 0.28 and causalml version 0.10.0.
Rerunning with cython version 0.29.26 (compatible according to pip) does not succeed either.
The error gets raised in causaltree.cpython-38-x86_64-linux-gnu.so.
Joblib or Pickle
I tried both Python's pickle and joblib; both raise the error.
In the case of using joblib, the following stacktrace occurs:
File "/var/task/joblib/numpy_pickle.py", line 577, in load
obj = _unpickle(fobj)
File "/var/task/joblib/numpy_pickle.py", line 506, in _unpickle
obj = unpickler.load()
File "/var/lang/lib/python3.8/pickle.py", line 1212, in load
dispatch[key[0]](self)
File "/var/lang/lib/python3.8/pickle.py", line 1537, in load_stack_global
self.append(self.find_class(module, name))
File "/var/lang/lib/python3.8/pickle.py", line 1579, in find_class
__import__(module, level=0)
File "/var/task/causalml/inference/tree/__init__.py", line 3, in <module>
from .causaltree import CausalMSE, CausalTreeRegressor
File "__init__.pxd", line 238, in init causalml.inference.tree.causaltree
Using a more recent numpy version
Other answers mention that upgrading (on the deserializing environment) to a more recent numpy, which should be backwards compatible, could help. In my case it did not help.
After installing causalml, I separately pip3 install --upgrade numpy==XXX to replace the numpy version on the deserializing machine.
With both numpy 1.18.5 and 1.19.5, the error mentions: Expected 96 from C header, got 80 from PyObject
With numpy 1.20.3, the error mentions: Expected 96 from C header, got 88 from PyObject
Can other numpy arrays be serialized & deserialized? Yes
To verify that numpy serialization & deserialization is actually possible, I've tested serializing a random array (both with pickle and joblib):
arr = np.random.rand(10)  # a random test array
with open(str(path / "numpy.pkl"), "wb") as f:
    pickle.dump(arr, f, protocol=5)
with open(str(path / "numpy.joblib"), "wb") as f:
    joblib.dump(arr, f, compress=True)
These actually deserialize without errors:
with open(str(path / "numpy.pkl"), "rb") as f:
    read_object = pickle.load(f)
with open(str(path / "numpy.joblib"), "rb") as f:
    read_object = joblib.load(f)
Numpy source
If I look at the numpy source code at the line raising this error, it seems the error only gets raised when the retrieved size is bigger than the expected size.
Some other (older) Stack Overflow answers mention that the warnings can be silenced as follows, but that didn't help either:
import warnings
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
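For context, the message argument to filterwarnings is a regex matched against the start of the warning text, which is why these filters can only silence warnings, not the ValueError raised here. A self-contained sketch of how such a filter behaves:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # record every warning by default
    # same style of filter as above; it is inserted at the front, so it wins
    warnings.filterwarnings("ignore", message="numpy.dtype size changed")
    warnings.warn("numpy.dtype size changed, may indicate binary incompatibility")
    warnings.warn("some other warning")

print(len(caught))             # 1: only the non-matching warning was recorded
print(str(caught[0].message))  # some other warning
```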
Trying twice solves it
I found one way to solve this: in the same python session, load the serialized model twice. The first time raises the error, the second time it does not.
The loaded model then does behave as expected.
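Until the root cause is understood, that load-twice behavior can be wrapped in a small retry helper. A minimal sketch; load_with_retry and fake_load are hypothetical names, and the fake loader only simulates the observed first-call failure:

```python
def load_with_retry(load_model, retries=1):
    """Call load_model(); on ValueError retry, mirroring the observed
    'first load fails, second load succeeds' behavior."""
    for attempt in range(retries + 1):
        try:
            return load_model()
        except ValueError:
            if attempt == retries:
                raise

# Demonstration with a fake loader that fails only on the first call:
calls = {"n": 0}
def fake_load():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ValueError("numpy.ndarray size changed")
    return "model"

result = load_with_retry(fake_load)
print(result)  # model
```

In the real setting, load_model would be e.g. `lambda: joblib.load(model_path)`.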
What is happening? And is there a way to make it succeed the first time?

Why am I getting "module 'rpy2.robjects.conversion' has no attribute 'py2rpy'" error?

I'm trying to convert a pandas dataframe into an R dataframe using the guide here. The code I have so far is:
import pandas as pd
import rpy2.robjects as ro
from rpy2.robjects import pandas2ri
from rpy2.robjects.conversion import localconverter
pd_df = pd.DataFrame({'int_values': [1, 2, 3],
                      'str_values': ['abc', 'def', 'ghi']})
with localconverter(ro.default_converter + pandas2ri.converter):
    r_from_pd_df = ro.conversion.py2rpy(pd_df)
However, this is giving me the following error:
Traceback (most recent call last):
File <my_file_ref>, line 13, in <module>
r_from_pd_df = ro.conversion.py2rpy(pd_df)
AttributeError: module 'rpy2.robjects.conversion' has no attribute 'py2rpy'
I have found this relevant question where the OP refers to function names being changed, but it doesn't explain what the changes are. I've tried looking at the module, but I'm not quite advanced enough to make sense of it.
The accepted answer refers to checking versions, which I've done, and I'm definitely using rpy2 v3.3.3, the same version as the guide I'm following.
Has anyone encountered this error and found a solution?
The section of the rpy2 documentation you are pointing to is built by running the code example, which means that the example did work with the corresponding version of rpy2.
Are you sure you are using that version of rpy2 at runtime? For example, add print(rpy2.__version__) to check that this is the case.
For what it's worth, the latest release in the rpy2 3.3.x series is 3.3.6, and there is probably no good reason to stay with 3.3.3. Otherwise, rpy2 3.4.0 was just released; if you are using R 4.0 or greater, or the latest releases of the R packages dplyr or ggplot2 together with their rpy2 wrappers, I would recommend using that release.
[PS: I just tried your example with rpy2-3.4.0 and it runs without error]

Using QGIS and shapely, error: GEOSGeom_createLinearRing_r returned a NULL pointer

I tried to create a polygon shapefile in QGIS and read it in Python with shapely. An example code looks like this:
import fiona
from shapely.geometry import shape
multipolys = fiona.open(somepath)
multi = multipolys[0]
coord = shape(multi['geometry'])
This raises: GEOSGeom_createLinearRing_r returned a NULL pointer
I checked in QGIS that the polygon is valid, and no error was reported. Actually, it does not work even for a simple triangle generated in QGIS. Does anyone know how to solve this?
Thank you
Like J. P., I had this issue with creating LineStrings as well. There is an old issue (2016) in the Shapely github repository that seems related. Changing the order of the imports solved the problem for me:
from shapely.geometry import LineString
import fiona
LineString([[0, 0], [1, 1]]).to_wkt()
# 'LINESTRING (0.0000000000000000 0.0000000000000000, 1.0000000000000000 1.0000000000000000)'
whereas
import fiona
from shapely.geometry import LineString
LineString([[0, 0], [1, 1]]).to_wkt()
# Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
# File "C:\Users\xxxxxxx\AppData\Roaming\Python\Python37\site-packages\shapely\geometry\linestring.py", line 48, in __init__
# self._set_coords(coordinates)
# File "C:\Users\xxxxxxx\AppData\Roaming\Python\Python37\site-packages\shapely\geometry\linestring.py", line 97, in _set_coords
# ret = geos_linestring_from_py(coordinates)
# File "shapely\speedups\_speedups.pyx", line 208, in shapely.speedups._speedups.geos_linestring_from_py
# ValueError: GEOSGeom_createLineString_r returned a NULL pointer
Some other issues in the Shapely repository to look at:
#553, import order issues on a Mac
#887, the same reverse-import-order trick with osgeo and shapely
#919
I had a similar problem but with the shapely.geometry.LineString. The error I got was
ValueError: GEOSGeom_createLineString_r returned a NULL pointer
I don't know the reason behind this message, but there are two ways to avoid it:
Do the following:
...
from shapely import speedups
...
speedups.disable()
Import the speedups module and disable the speedups. This needs to be done, since they are enabled by default.
From shapely's speedups init method:
"""
The shapely.speedups module contains performance enhancements written in C.
They are automaticaly installed when Python has access to a compiler and
GEOS development headers during installation, and are enabled by default.
"""
If you disable them, you won't get the NULL pointer error, because the plain Python implementation is used instead of the C implementation.
If you call python in a command shell, type:
from shapely.geometry import shape
This loads the shape function you need. Then load your program:
import yourscript
and run your script:
yourscript.main()
This should also work. I think in this variant the C modules get loaded properly, so you don't get the NULL pointer error. But this only works if you open a Python terminal by hand and import shape by hand. If you import shape from within your program, you will run into the same error again.
I faced the same issue, and this worked for me:
import shapely
shapely.speedups.disable()

Open3d Python, Type error using TriangleMesh.create_from_point_cloud_alpha_shape()

I am trying to use open3d to create an "alphahull" around a set of 3D points using TriangleMesh. However, I get a TypeError.
import open3d as o3d
import numpy as np
xx = np.asarray([[10,21,18], [31,20,25], [36,20,24], [33,19,24], [22,25,13], [25,19,24], [22,26,10],[29,19,24]])
cloud = o3d.geometry.PointCloud()
cloud.points = o3d.utility.Vector3dVector(xx)
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd=cloud, alpha=10.0)
output:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: create_from_point_cloud_alpha_shape(): incompatible function arguments. The following argument types are supported:
1. (pcd: open3d.open3d.geometry.PointCloud, alpha: float, tetra_mesh: open3d::geometry::TetraMesh, pt_map: List[int]) -> open3d.open3d.geometry.TriangleMesh
The error says the object I am passing to the function is of the wrong type. But when I check the type, I get this:
>>> print(type(cloud))
<class 'open3d.open3d.geometry.PointCloud'>
Please can someone help me with this error?
Note: A comment on this post Python open3D no attribute 'create_coordinate_frame' suggested it might be a problem with the installation and that a solution was to compile the library from source. So I compiled the library from source and afterwards ran make install-pip-package. Though I am not sure it completed correctly, because I couldn't import open3d in Python yet; see the output of the installation: https://pastebin.com/sS2TZfTL
(I wasn't sure if that command was supposed to complete the installation, or if you were still required to run pip? After I ran python3 -m pip install Open3d, I could import the library in Python.)
Bug in bindings of current release (v0.9.0):
https://github.com/intel-isl/Open3D/issues/1621
Fix for master:
https://github.com/intel-isl/Open3D/pull/1624
Workaround:
tetra_mesh, pt_map = o3d.geometry.TetraMesh.create_from_point_cloud(pcd)
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha, tetra_mesh, pt_map)

Accessing R packages in Python

I am using rpy2 to call R packages in Python and I am facing technical problems. Some functions in R packages have a dot "." in their names, which Python cannot parse as part of an identifier. For example, there is a function called "Granger.conditional()" in the R package "grangers". When I use rpy2 to call the function like this:
grangers = rpackages.importr('grangers')
res = grangers.Granger.conditional(trnsetmdl_i.iloc[:, i], trnsetmdl_i.iloc[:, j], trnsetmdl_i.iloc[:, k])
I am getting the following error message:
Traceback (most recent call last):
File "D:/CMAPSSRUL/TimeSeriesModelling/GrangerCausality/ConditionalGC.py", line 53, in <module>
res = grangers.Granger.conditional(trnsetmdl_i.iloc[:, i], trnsetmdl_i.iloc[:, j], trnsetmdl_i.iloc[:, k])
AttributeError: module 'grangers' has no attribute 'Granger'
Does anyone have a solution to this problem?
The answer to the question is in the documentation:
https://rpy2.github.io/doc/v3.2.x/html/robjects_rpackages.html#importing-r-packages
Also, I see from your error message that you are using Windows. Note that rpy2 is not supported on Windows, and pre-built binaries you may have found are almost certainly several releases behind the current one.
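For the dot problem specifically: importr sanitizes R symbol names into valid Python identifiers by replacing "." with "_" (this is the name translation the linked documentation section describes), so the function is reachable as Granger_conditional. A minimal sketch; the rpy2 lines are commented out because they need R and the grangers package installed:

```python
# importr makes R symbols valid Python identifiers by replacing "." with "_",
# so R's Granger.conditional becomes grangers.Granger_conditional in Python.
r_name = "Granger.conditional"
py_name = r_name.replace(".", "_")
print(py_name)  # Granger_conditional

# With R and the 'grangers' package installed (not runnable here):
# from rpy2.robjects.packages import importr
# grangers = importr('grangers')
# res = grangers.Granger_conditional(x, y, z)
```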
