How can I test HDBSCAN using RAPIDS without getting an error - Python

Good morning,
I want to test HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise) on a GPU, so I should use the RAPIDS framework.
When I try to follow the steps described here https://colab.research.google.com/drive/1rY7Ln6rEE1pOlfSHCYOVaqt8OvDO35J0#forceEdit=true&sandboxMode=true&scrollTo=EwaJSKuswsNi
taken from the RAPIDS website: https://rapids.ai/start.html
I get the following error when I run the cuDF example code:
import cudf
import io, requests
# download CSV file from GitHub
url="https://github.com/plotly/datasets/raw/master/tips.csv"
content = requests.get(url).content.decode('utf-8')
# read CSV from memory
tips_df = cudf.read_csv(io.StringIO(content))
tips_df['tip_percentage'] = tips_df['tip']/tips_df['total_bill']*100
# display average tip by dining party size
print(tips_df.groupby('size').tip_percentage.mean())
ValueError Traceback (most recent call last)
<ipython-input-1-a95ca25217db> in <module>()
----> 1 import cudf
2 import io, requests
3
4 # download CSV file from GitHub
5 url="https://github.com/plotly/datasets/raw/master/tips.csv"
2 frames
/usr/local/lib/python3.7/site-packages/cudf/_lib/__init__.py in <module>()
2 import numpy as np
3
----> 4 from . import (
5 avro,
6 binaryop,
cudf/_lib/avro.pyx in init cudf._lib.avro()
cudf/_lib/column.pyx in init cudf._lib.column()
cudf/_lib/scalar.pyx in init cudf._lib.scalar()
cudf/_lib/interop.pyx in init cudf._lib.interop()
ValueError: pyarrow.lib.Codec size changed, may indicate binary incompatibility. Expected 48 from C header, got 40 from PyObject
Could you please help me?
Thanks in advance.

Colab made some enhancements this week that affected the RAPIDS installation process. Work toward a resolution is active, and progress is being tracked in this issue (which includes a potential workaround).
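Until that issue is resolved, a commonly suggested workaround for this class of pyarrow binary-incompatibility error is to reinstall a pyarrow version matching the one your cuDF build was compiled against, then restart the runtime. A minimal sketch (the exact pin below is an assumption; check the tracking issue for the version matching your RAPIDS release):
# Reinstall pyarrow to match the cuDF build, then restart the Colab runtime.
!pip uninstall -y pyarrow
!pip install pyarrow==1.0.1  # hypothetical pin; match it to your RAPIDS release
Once import cudf succeeds, a minimal HDBSCAN smoke test with cuML might look like this (assuming a RAPIDS release that ships cuml.cluster.HDBSCAN):
# Cluster random points on the GPU; label -1 marks noise, as in CPU hdbscan.
import cupy as cp
from cuml.cluster import HDBSCAN
X = cp.random.standard_normal((1000, 2), dtype=cp.float32)
clusterer = HDBSCAN(min_cluster_size=10)
labels = clusterer.fit_predict(X)
print(cp.unique(labels))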

Related

jax.experimental import error (Python 3.9)

I'd seen previous errors importing from JAX from several years ago (https://github.com/google/jax/issues/372), but that post implied an update would fix it. I just installed JAX and am trying to get set up in a Jupyter notebook. Could you let me know what might be going wrong?
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Input In [1], in <cell line: 4>()
1 ########## JAX ON MNIST #####################
2 # Import some additional JAX and dataloader helpers
3 from jax.scipy.special import logsumexp
----> 4 from jax.experimental import optimizers
6 import torch
7 from torchvision import datasets, transforms
ImportError: cannot import name 'optimizers' from 'jax.experimental' (/Users/XXX/opt/anaconda3/lib/python3.9/site-packages/jax/experimental/__init__.py)
The similar error I found was from 2019, and that thread implied a version difference would fix it, but I did not know where to go from there.
According to the CHANGELOG
jax 0.3.16
Deprecations:
Removed jax.experimental.optimizers; it has long been a deprecated alias of jax.example_libraries.optimizers.
So it sounds like if you're using JAX version 0.3.16 or newer, you should do
from jax.example_libraries import optimizers
But as noted in the jax.example_libraries.optimizers documentation, this is not well-supported code and you'll probably have a better experience with something like Optax or JAXopt.
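If you want to avoid the deprecated module entirely, here is a minimal training-step sketch with Optax, the replacement the answer above recommends (the toy model, loss, and data are placeholders for your own):
import jax
import jax.numpy as jnp
import optax

# Toy linear model and squared-error loss; stand-ins for your own model.
params = {"w": jnp.zeros(3)}
def loss_fn(params, x, y):
    pred = x @ params["w"]
    return jnp.mean((pred - y) ** 2)

opt = optax.adam(learning_rate=1e-3)
opt_state = opt.init(params)

@jax.jit
def step(params, opt_state, x, y):
    grads = jax.grad(loss_fn)(params, x, y)
    updates, opt_state = opt.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    return params, opt_state

x, y = jnp.ones((8, 3)), jnp.zeros(8)
params, opt_state = step(params, opt_state, x, y)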

I get an error: cannot import from file helper

Please help, I get the error below when running a Jupyter notebook.
import numpy as np
import pandas as pd
from helper import boston_dataframe
np.set_printoptions(precision=3, suppress=True)
Error:
ImportError Traceback (most recent call last)
<ipython-input-3-a6117bd64450> in <module>
1 import numpy as np
2 import pandas as pd
----> 3 from helper import boston_dataframe
4
5
ImportError: cannot import name 'boston_dataframe' from 'helper' (/Users/irina/opt/anaconda3/lib/python3.8/site-packages/helper/__init__.py)
Since you did not say where you got the notebook, I have to guess that it is from the course Supervised Learning: Regression provided by IBM.
The week 1 zip folder of that course provides helper.py.
What you need to do is change the notebook's working directory to wherever that file is: Change IPython/Jupyter notebook working directory
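A minimal sketch of that (the path below is a placeholder for wherever you unpacked the course materials):
# Point the notebook at the folder that actually contains helper.py.
import os
os.chdir("/path/to/course/materials/week1")  # hypothetical location
from helper import boston_dataframe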
Alternatively, you can load the Boston data from scikit-learn and then load it into a pandas DataFrame.
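For example (note that load_boston is deprecated and was removed in scikit-learn 1.2, so this assumes an older scikit-learn version):
import pandas as pd
from sklearn.datasets import load_boston

# Build roughly the DataFrame the course's helper presumably returns.
boston = load_boston()
boston_df = pd.DataFrame(boston.data, columns=boston.feature_names)
boston_df["MEDV"] = boston.target  # median home value, the conventional target column
print(boston_df.head())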
Advice for you:
Learn how to use Jupyter notebooks
Learn how Python imports work
Learn how to provide enough information in a question that no one needs to guess

Runtime error while executing code from a Google Colab document for creating a Deepfakes image animation

I'm getting a runtime error while executing code from a Google Colab document for creating a Deepfakes image animation.
RuntimeError Traceback (most recent call last)
<ipython-input-5-dbd18151b569> in <module>()
1 from demo import load_checkpoints
2 generator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml',
----> 3 checkpoint_path='/content/gdrive/My Drive/first-order-motion-model/vox-cpk.pth.tar')
10 frames
/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py in _lazy_init()
188 raise AssertionError(
189 "libcudart functions unavailable. It looks like you have a broken build?")
--> 190 torch._C._cuda_init()
191 # Some of the queued calls may reentrantly call _lazy_init();
192 # we need to just return without initializing in that case.
RuntimeError: cuda runtime error (100) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:47
You may be running in a Colab environment that only has TPUs available and not GPUs, in which case you need to use XLA with PyTorch. You might find this notebook and repository very helpful if that is the case:
https://colab.research.google.com/github/pytorch/xla/blob/master/contrib/colab/resnet18-training.ipynb
https://github.com/pytorch/xla
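Before going down the XLA route, it is worth checking what the runtime actually exposes; if this prints False, switch the runtime type to GPU (Runtime > Change runtime type) and rerun:
# Quick check for a CUDA-capable device in the current Colab runtime.
import torch
print(torch.cuda.is_available())
print(torch.cuda.device_count())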
1. Check the path, and whether the file exists or not.
2. In your screenshot there is no drive mounted; try mounting it first.
3. If the top two are done and you are still facing the problem, check whether the given file is correct.
Hope this is helpful :)
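A short sketch of steps 1 and 2, using the checkpoint path from the traceback above:
# Mount Google Drive so '/content/gdrive/My Drive/...' resolves, then verify the file.
from google.colab import drive
drive.mount('/content/gdrive')

import os
print(os.path.exists('/content/gdrive/My Drive/first-order-motion-model/vox-cpk.pth.tar'))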

Generating Captions from the Images Using Python (and Pythia)

Some months ago I stumbled upon an article on SEJ explaining how to generate captions from images using Python and the Pythia framework.
The script used to work fine until some months ago. You can find the original copy here in Colab. However, at some point the Pythia repository on GitHub was redirected to the MMF framework and the script stopped working. Colab returns the following error:
<ipython-input-9-b64f6c325603> in <module>()
34
35
---> 36 from pythia.utils.configuration import ConfigNode
37 from pythia.tasks.processors import VocabProcessor, CaptionProcessor
38 from pythia.models.butd import BUTD
ModuleNotFoundError: No module named 'pythia'
After replacing the Pythia imports with their MMF equivalents:
from mmf.utils.configuration import ConfigNode
from mmf.datasets.processors import BaseProcessor
from mmf.models.base_model import BaseModel
from mmf.common.registry import registry
from mmf.common.sample import Sample, SampleList
the script goes a little further, but it stops again with these two errors:
---> 32 from maskrcnn_benchmark.config import cfg
33 from maskrcnn_benchmark.layers import nms
34 from maskrcnn_benchmark.modeling.detector import build_detection_model
----> 4 from yacs.config import CfgNode as CN
Right now I am stuck at this point.
Does anyone have an idea of which workaround might be the right resolution for this script?
Thank you
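Not a confirmed fix, but both remaining failures are missing modules, so the obvious first step is to install them: yacs is on PyPI, while maskrcnn_benchmark has to be built from its GitHub repository (the build command below is taken from that repository's install notes and may have changed):
!pip install yacs
!git clone https://github.com/facebookresearch/maskrcnn-benchmark.git
%cd maskrcnn-benchmark
!python setup.py build develop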

Unable to import normalize_corpus (Python 3)

I am working on an NLP task, trying to import normalize_corpus from the normalization module, and I am getting the error below.
from normalization import normalize_corpus
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-13-5919bba55473> in <module>()
----> 1 from normalization import normalize_corpus
ImportError: cannot import name 'normalize_corpus'
The normalization module you have installed (probably https://pypi.org/project/normalization/) does not correspond to the code you are trying to run (possibly from "Text Analytics with Python").
Uninstall the normalization module and track down the code from the book. (A place to start: https://github.com/dipanjanS/text-analytics-with-python)
For whoever lands here because you are reading "Text Analytics with Python": the author is referring to a custom script they built, hosted in the book's repository (first edition, chapter 4 or 5):
https://github.com/dipanjanS/text-analytics-with-python/blob/master/Chapter-4/normalization.py
https://github.com/dipanjanS/text-analytics-with-python/blob/b4f6cefc9dd96e2a3e74e01bda391019bd7fb053/Old-First-Edition/Ch05_Text_Summarization/normalization.py
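A sketch of the whole fix, assuming the Chapter-4 path above is still current (the raw URL below is derived from the first link):
# Remove the unrelated PyPI package so it stops shadowing the book's module.
!pip uninstall -y normalization
# Fetch the book's normalization.py next to the notebook, then import from it.
!wget https://raw.githubusercontent.com/dipanjanS/text-analytics-with-python/master/Chapter-4/normalization.py
from normalization import normalize_corpus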
