I have a package called boo which I install on Google Colab from a GitHub repository. The installation looks fine and ends with the success message Successfully installed boo-0.1. However, import boo fails on its first internal import.
I replicated the same installation steps in a local virtual environment and the package worked there, but not on Colab.
Here are my steps and error trace:
!rm -rf sandbox
!git clone https://github.com/ru-corporate/sandbox.git
!pip install -r sandbox/requirements.txt
!pip install sandbox/.
Alternatively, I tried
!pip install git+https://github.com/ru-corporate/sandbox.git#master
The error trace is:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-7-fc0b1d036b35> in <module>()
----> 1 import boo
/usr/local/lib/python3.6/dist-packages/boo/__init__.py in <module>()
----> 1 from boo.boo import download, build, read_dataframe, files
2 from boo.views.whatis import whatis
/usr/local/lib/python3.6/dist-packages/boo/boo.py in <module>()
3 from tqdm import tqdm
4
----> 5 from boo.file.download import curl
Basically, from the root __init__.py the import goes to the root boo.py and then stumbles on finding boo/file/download.py.
How do I make this package work on Colab?
I could fix the subpackage behavior by editing setup.py as suggested here:
# ...
packages=setuptools.find_packages()
# ...
Somehow Colab is more restrictive about this parameter than a local installation.
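For reference, a minimal setup.py along these lines might look like the sketch below; the name and version come from the install output above, and the rest of the project layout is assumed:
import setuptools

setuptools.setup(
    name="boo",
    version="0.1",
    packages=setuptools.find_packages(),  # pick up boo and all its subpackages
)
find_packages() collects every directory containing an __init__.py, so subpackages such as boo.file end up in the installed distribution rather than only the top-level boo package.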
Related
I was running this Google Colab notebook today and everything was working fine, but eventually I started getting these errors in the Set Up Environment cell. I can't find a fix. Any help would be appreciated; let me know if I need to provide more info.
Here's a link to the colab: https://colab.research.google.com/github/entmike/disco-diffusion-1/blob/main/Simplified_Disco_Diffusion.ipynb
📁 Google Drive already mounted.
✅ Disco Diffusion root path will be "/content/gdrive/MyDrive/disco-diffusion-1"
Google Colab detected.
📄 Pulling updates from GitHub...
M download_models.sh
M examples/docker/disco-file.sh
M examples/docker/disco.sh
M examples/docker/interactive.sh
M examples/docker/unittest.sh
M examples/linux/configfile.sh
M examples/linux/simple.sh
Your branch is up to date with 'origin/main'.
Already up to date.
📦 Upgrading pyyaml...
📦 Installing pip requirements...
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-11-4fe875dbbe21> in <module>()
70 # Import DD helper modules
71 sys.path.append(PROJECT_DIR)
---> 72 import dd, dd_args
73
74 # Unsure about these:
1 frames
/content/gdrive/MyDrive/disco-diffusion-1/disco_xform_utils.py in <module>()
3
4 # import pytorch3dlite.pytorch3dlite as p3d
----> 5 from pytorch3d import renderer
6 from midas import utils as midas_utils
7 from PIL import Image
ModuleNotFoundError: No module named 'pytorch3d'
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
I've opened an issue with the author of the repo (here).
In the meantime, open the "Set Up Environment" cell and update the section under "if is_colab" like so:
#Upgrade pyyaml if in Colab
if is_colab:
    print(f'📦 checking out specific commit...')
    for cmd in ['git clean -df', f'git reset --hard 3fc1dddb043f7f814db49fe951b4abb7eebd22fd', f'git log -1']:
        gitresults = subprocess.run(f'{cmd}'.split(' '), stdout=subprocess.PIPE).stdout.decode("utf-8")
        print(f'{gitresults}')
    print(f'📦 Upgrading pyyaml...')
    subprocess.run(f'pip install --upgrade pyyaml --quiet'.split(' '), stdout=subprocess.PIPE).stdout.decode("utf-8")
    print(f'📦 Installing pip requirements...')
    subprocess.run(f'pip install -r colab-requirements.txt --quiet'.split(' '), stdout=subprocess.PIPE).stdout.decode("utf-8")
What this does is reset the repository to an earlier commit, effectively reverting the latest commit, which appears to have broken things.
Just re-run the cell (and the ones after)... it works for me.
It's fixed now. Thanks for reporting it!
I need your help. I tried to install chembl_webresource_client on Colab. It usually works fine, but today, to my surprise, there is an error at the very first step.
! pip install chembl_webresource_client # install the client
from chembl_webresource_client.new_client import new_client # this is the line that fails
molecule = new_client.molecule
res = molecule.search('viagra')
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-7-19aed0e54aea> in <module>()
----> 1 from chembl_webresource_client.new_client import new_client
2 molecule = new_client.molecule
3 res = molecule.search('viagra')
4 frames
/usr/local/lib/python3.7/dist-packages/chembl_webresource_client/cache.py in <module>()
1 __author__ = 'mnowotka'
2
----> 3 from requests_cache.backends.base import BaseCache, hashlib, _to_bytes
4
5 def create_key(self, request):
ImportError: cannot import name 'hashlib' from 'requests_cache.backends.base' (/usr/local/lib/python3.7/dist-packages/requests_cache/backends/base.py)
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
This is an issue with requests-cache. I downgraded it to 0.5.2 and the error went away.
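On Colab, that downgrade can be done in a cell before importing the client; the pin below is the version mentioned above, and a runtime restart may be needed if requests_cache was already imported:
!pip install requests-cache==0.5.2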
This issue was fixed in version 0.10.3 of the chembl_webresource_client. Upgrading should fix it.
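For example, in a Colab cell (0.10.3 is the version named above; any later release should also carry the fix):
!pip install --upgrade chembl_webresource_client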
I have installed the chembl_webresource_client package.
Then I tried to import a module from the package:
from chembl_webresource_client.new_client import new_client
But it fails to execute and this error appears:
ImportError Traceback (most recent call last)
in <module>()
1 # Import necessary libraries
2 import pandas as pd
----> 3 from chembl_webresource_client.new_client import new_client
4 frames
/usr/local/lib/python3.7/dist-packages/chembl_webresource_client/cache.py in <module>()
1 __author__ = 'mnowotka'
2
----> 3 from requests_cache.backends.base import BaseCache, hashlib, _to_bytes
4
5 def create_key(self, request):
ImportError: cannot import name 'hashlib' from 'requests_cache.backends.base' (/usr/local/lib/python3.7/dist-packages/requests_cache/backends/base.py)
Is there a fix for this?
I faced the same issue today and resolved it by updating the chembl-webresource-client library's version from 0.10.2 to 0.10.3 in my project's requirements.txt file.
For example: chembl-webresource-client==0.10.3
After making this change and activating your project's virtual environment, don't forget to re-install the libraries listed in your requirements.txt using the following command:
pip install -r .\requirements.txt
Hi, I followed the installation instructions here
and installed with
pip install --upgrade azureml-sdk[notebooks,automl] azureml-dataprep --ignore-installed PyYAML
It seemed to work, but a simple
import azureml.core
azureml.core.VERSION
throws a numpy error:
AttributeError Traceback (most recent call last)
<ipython-input-3-08b704cd5542> in <module>
----> 1 import azureml.core
2 azureml.core.VERSION
c:\users\werth\appdata\local\continuum\anaconda3\envs\azuresdk\lib\site-packages\azureml\core\__init__.py in <module>
4
5 """Setup file for core package."""
----> 6 from azureml.core.workspace import Workspace
7 from azureml.core.experiment import Experiment
8 from azureml.core.runconfig import RunConfiguration
... I did not include the full traceback, as it is apparently an Azure import problem.
AttributeError: type object 'numpy.ndarray' has no attribute '__array_function__'
It seems that the workspace import has a problem, but I cannot think why. The notebook is in a subfolder of the working directory, and numpy is installed.
If you have any ideas, I would be thankful.
Hi greatest Floridaman,
the answer is simple: during the installation of the azureml-train-automl 1.0.8 package, the numpy package needs to be at most version 1.15.0.
So just downgrade numpy to that version:
conda install numpy=1.15.0
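If the environment was set up with pip rather than conda, the equivalent pin (same version as above) would be:
pip install numpy==1.15.0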
pip install qiskit-aqua completed successfully, but the import fails.
Following is the stack trace:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-8f03022943b2> in <module>()
2 import sys
3 from datasets import *
----> 4 from qiskit_aqua.algorithms.many_sample.qsvm.data_preprocess import *
5 from qiskit_aqua.input import get_input_instance
6 from qiskit_aqua import run_algorithm
ModuleNotFoundError: No module named 'qiskit_aqua.algorithms'
After some searching, I found the code that you're running: it appears to come from one of the tutorials in the repository at https://github.com/Qiskit/aqua-tutorials. The current version of that repository is compatible with the current master branch of the Qiskit Aqua repository at https://github.com/Qiskit/aqua, which is somewhat ahead of the latest version available on PyPI (i.e. the one you installed using pip). I expect PyPI will be updated with the latest version soon, but in the meantime I'd recommend that you clone the master branch of the Qiskit Aqua repository from GitHub. You can then install it using pip install -e if desired.
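As a rough sketch, the clone-and-install steps described above could look like this (run as shell commands, or prefixed with ! in a notebook cell; the -e editable flag is optional, and a kernel restart may be needed before re-importing):
git clone https://github.com/Qiskit/aqua.git
pip install -e ./aqua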