How to run a notebook on Google Colab when it throws an error - python

I understand very little about programming, so please answer as simply as possible. One of the notebooks on Google Colab throws an error. I just want to run this demo. I am attaching a screenshot, a link, and the error message.
Link to the notebook
ERROR: tensorflow 2.5.0 has requirement h5py~=3.1.0, but you'll have h5py 2.10.0 which is incompatible.

The versions are clashing.
To fix the error, run these two commands in a cell before running other cells of the notebook:
Install a compatible TensorFlow version:
!pip install tensorflow==2.2.0
Install a compatible h5py version:
!pip install h5py==2.7.0
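For context, the `~=3.1.0` in the error message is pip's "compatible release" operator (PEP 440): it means at least 3.1.0 but still within the 3.1 series, so the installed h5py 2.10.0 can never satisfy it. A minimal sketch of that check (the function name is mine, not pip's):

```python
def satisfies_compatible_release(installed: str, spec: str) -> bool:
    """Check `installed` against `~=spec` (PEP 440 compatible release).

    `~=X.Y.Z` means: version >= X.Y.Z and version matches X.Y.*
    """
    inst = tuple(int(p) for p in installed.split("."))
    base = tuple(int(p) for p in spec.split("."))
    # Must be at least the base version and share all but its last component.
    return inst >= base and inst[: len(base) - 1] == base[:-1]

print(satisfies_compatible_release("2.10.0", "3.1.0"))  # h5py from the error: False
print(satisfies_compatible_release("3.1.2", "3.1.0"))   # in the 3.1 series: True
```

This is why pinning a matching pair of versions, as above, resolves the warning.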

Related

Installation always stuck on PyCaret 2.2.2 + Package problems

I'm stuck on an issue that I can't seem to solve. I was fine using PyCaret on my other PC and had recently got a new desktop.
I was working on one dataset on my old PC and had no problems with setup(); PyCaret preprocessed my data without any issues. When I worked on the same dataset on my new desktop with a fresh Jupyter install, I ran into a ValueError: Setting a random_state has no effect since shuffle is False. You should leave random_state to its default (None), or set shuffle=True. I thought it was strange but set fold_shuffle=True to get past this.
Next, I encountered AttributeError: 'Simple_Imputer' object has no attribute 'fill_value_categorical'. It seems I'm getting failures at every step of setup(). I went through the forums and found a thread where, near the bottom, #eddygeek mentioned that PyCaret is set up to fail if the sklearn version is wrong. This got me looking into whether my installed packages meet the dependencies between packages.
I noticed several issues and errors:
ERROR: Command errored out with exit status 1: C:\Users\%%USER%%\anaconda3\python.exe'
Ignoring numpy: markers 'python_version >= "3.8" and platform_system == "AIX"' don't match your environment
ERROR: Could not find a version that satisfies the requirement scikit-learn==0.23.2
Screenshot of more errors attached
Jupyter Notebook fails to launch because of a Pandas Profiling import error: cannot import name 'soft_unicode' from 'markupsafe'. I got around this by installing markupsafe===2.0.1, but this leads to an incompatibility warning from pandas-profiling 3.2.0 saying it needs markupsafe 2.1.1.
PyCaret keeps getting installed as 2.2.2 version. I think that's why it keeps looking for scikit-learn 0.23.2 when the latest PyCaret 2.3.10 works with scikit-learn >=1.0. I've tried uninstalling and reinstalling PyCaret several times but it's still the same.
What I've done
I'm on Python 3.9.12 that was installed together with Anaconda3. My PyCaret was installed with pip install pycaret[full] --user on Anaconda Prompt.
In my pip list, I have:
scikit-learn 1.1.2
markupsafe 2.1.1
pandas-profiling 3.2.0
pycaret 2.2.2
I've added C:\Users\%%USER%%\AppData\Roaming\Python\Python39\Scripts to PATH
I'm really at my wits' end, so I hope I can get some advice on this. Thank you.
I've encountered the very same issues and solved them as follows.
According to the documentation, there are a few problems with your setup:
PyCaret is not yet compatible with sklearn>=0.23.2
PyCaret is tested and supported on the following 64-bit systems:
Python 3.6 – 3.8
Python 3.9 for Ubuntu only
So if you're using python 3.9 on Windows, I'd start with that.
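That support matrix can be turned into a quick self-check before installing; a sketch based only on the list quoted above (the function name is mine):

```python
import platform
import sys


def pycaret2_python_supported(version=None, system=None):
    """Rough check against the PyCaret 2.x support matrix quoted above."""
    major, minor = (version or sys.version_info)[:2]
    system = system or platform.system()
    if (major, minor) in {(3, 6), (3, 7), (3, 8)}:
        return True
    if (major, minor) == (3, 9):
        # Python 3.9 is listed for Ubuntu only.
        return system == "Linux"
    return False


# The asker's setup: Python 3.9.12 on Windows.
print(pycaret2_python_supported((3, 9, 12), "Windows"))  # → False
```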
I went into a rabbit hole of downgrading the packages and getting one error after another.
Long story short, the setup that finally worked was:
sklearn 0.23.1
scipy 1.5.2
Both installed in a conda virtual environment, but in the end I had to run:
pip3 install pycaret[full]
Notice pip3 instead of pip, because I was getting permission errors.
You are using a very old version of pycaret which does not work in Python 3.9. Please install the latest version in a fresh (conda) environment. Make sure it is a new environment in order to avoid any package issues.
# This installs the 3.0.0 pre-release, which has reduced dependencies.
pip install --pre pycaret
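After reinstalling, it's worth confirming which version pip actually resolved before importing anything; a small standard-library sketch (works for any distribution name):

```python
from importlib import metadata


def installed_version(dist_name: str):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None


print(installed_version("pycaret"))  # e.g. a 3.0.0 pre-release after the command above
```

If this still prints a 2.2.x version, the old install is shadowing the new one (for example via a `--user` site-packages directory), which matches the symptom described in the question.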

Sagemaker Torch Installation Failing

I try to install torch on SageMaker with the !pip shell command in a notebook cell.
!pip install torch==1.6.0
However, I can only run a notebook once on that specific version. The next time I install that torch version from the notebook, it fails.
The exact error message is:
Collecting torch==1.6.0
Killed
The only workaround would be to install slightly different versions for subsequent notebook runs.

Transformer: Error importing packages. "ImportError: cannot import name 'SAVE_STATE_WARNING' from 'torch.optim.lr_scheduler'"

I am working on a machine learning project on Google Colab, it seems recently there is an issue when trying to import packages from transformers. The error message says:
ImportError: cannot import name 'SAVE_STATE_WARNING' from 'torch.optim.lr_scheduler' (/usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py)
The code is simple as follow:
!pip install transformers==3.5.1
from transformers import BertTokenizer
So far I've tried to install different versions of the transformers, and import some other packages, but it seems importing any package with:
from transformers import *Package
is not working, and will result in the same error. I wonder if anyone is running into the same issue as well?
Change the torch version in Colab by running this command:
!pip install torch==1.4.0
Then it worked for me.
Just change the version of transformers to the latest one (4.5.1 at the time of writing). That worked in Colab.
!pip install transformers
The same issue occurred for me after the PyTorch version was upgraded.
As a solution, downgrade the PyTorch version to 1.4.0.
Use the command below to install it:
!pip install -q torch==1.4.0 -f https://download.pytorch.org/whl/cu101/torch_stable.html
It's solved a lot of problems with transformers also.
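The underlying problem is that transformers 3.5.1 imports SAVE_STATE_WARNING at load time, a symbol that newer torch releases removed, so the pinned pair has to stay consistent. A hypothetical fail-fast guard along those lines (the 1.8.0 boundary is my assumption for illustration; the answers above only report that torch 1.4.0 works):

```python
def check_torch_pin(installed: str, max_exclusive: str = "1.8.0") -> None:
    """Raise early if the installed torch is too new for old transformers.

    The 1.8.0 default is an assumption for illustration; the answers above
    simply report that torch 1.4.0 works with transformers 3.5.1.
    """
    inst = tuple(int(p) for p in installed.split("."))
    limit = tuple(int(p) for p in max_exclusive.split("."))
    if inst >= limit:
        raise RuntimeError(
            f"torch {installed} >= {max_exclusive}: "
            "downgrade torch or upgrade transformers"
        )


check_torch_pin("1.4.0")  # the version the answers recommend: no error raised
```

Failing in your own cell with a clear message beats the opaque ImportError from deep inside the library.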
The above from udara vimukthi worked for me after trying a lot of different things. I was trying to get the code for "Getting started with Google BERT" to work after cloning the GitHub repository locally; now ALL of the chapter code works while I'm showing my daughter the models.
Operating system: Windows, running locally with GPU support in an Anaconda environment.
pip install -q --user torch==1.4.0 -f https://download.pytorch.org/whl/cu101/torch_stable.html
then I ran into some more issues and had to install ipywidgets:
pip install ipywidgets
Now it all works, as far as I've gotten. Thanks for the help with the above suggestion; it saved me a lot of headaches. :)

How to get back to default tensorflow version on google colab

I did not know that tensorflow and keras were installed by default on the machine used by Google Colab, so I installed my own versions. But they were buggy, so I decided to go back to the previous versions. I did:
!pip install tensorflow==1.6.0
and
!pip install keras==2.1.5
But now, when I do import keras, I get the following error:
AttributeError: module 'tensorflow' has no attribute 'name_scope'
Note:
I asked a friend for the default tensorflow and keras versions, and he gave me these:
!pip show tensorflow # 1.6.0
!pip show keras # 2.1.5
So I suspect my installations were wrong somehow. What can I do so I can import keras again?
To get back to the default versions, I had to restart the VM.
To do so, just do:
!kill -9 -1
Then, wait 30 seconds, and reconnect.
I got the information by opening an issue on the github repository.
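Note that `pip show` reports what is installed on disk, not necessarily what the running kernel has already imported; after reconnecting, you can confirm the versions actually active in the kernel with a small sketch like this:

```python
import importlib


def active_version(module_name: str):
    """Import a module and report its __version__, or None if unavailable."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return None
    return getattr(module, "__version__", None)


# In Colab this would be active_version("tensorflow") and active_version("keras").
print(active_version("nonexistent_module_xyz"))  # → None
```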

AttributeError: module 'tensorflow.python.pywrap_tensorflow' has no attribute 'TFE_Py_RegisterExceptionClass'

I am trying to develop some time-series sequence prediction, using the latest resources available. To that end, I did check the example code from TensorFlow time-series, but I'm getting this error:
AttributeError: module 'tensorflow.python.pywrap_tensorflow' has no attribute 'TFE_Py_RegisterExceptionClass'
I'm using Anaconda. The current environment is Python 3.5 and TensorFlow 1.2.1. Also tried TensorFlow 1.3, but nothing changed.
Here is the code I'm trying to run. I did not find anything useful related to the issue on Google. Any ideas on how to solve it?
As Conan.Net wrote:
I tried to remove/clean some environments from anaconda and install
all again, and it worked this time.
This solution worked for me as well, so though not ideal, it will solve the problem. If you are using anaconda, this can happen when installing some packages and then removing them (e.g. tensorflow vs tensorflow-gpu), which leaves some dependencies hanging. In my case, I used:
conda remove --name py2_tf_gpu --all
then
conda create --name py2_tf_gpu python=2 anaconda pandas numpy scipy jupyter
source activate py2_tf_gpu
pip install --ignore-installed --upgrade tensorflow-gpu
pip currently installs a later version (1.4) than anaconda (1.3), and I had need for it.
Maybe the version of tensorflow doesn't match the version of keras.
Using a lower version of keras solves this problem.
