How to get back to default tensorflow version on google colab - python

I did not know that tensorflow and keras were already installed by default on the machine used by Google Colab, so I installed my own versions. But that setup was buggy, and I decided to go back to the previous versions. I ran:
!pip install tensorflow==1.6.0
and
!pip install keras==2.1.5
But now, when I do import keras, I get the following error:
AttributeError: module 'tensorflow' has no attribute 'name_scope'
Note:
I asked a friend for the default tensorflow and keras versions, and he gave me these:
!pip show tensorflow # 1.6.0
!pip show keras # 2.1.5
So I suspect my installations were somehow wrong. What can I do so I can import keras again?

To get back to the default versions, I had to restart the VM.
To do so, just do:
!kill -9 -1
Then, wait 30 seconds, and reconnect.
I got this information by opening an issue on the GitHub repository.
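After reconnecting, a quick way to confirm that the default versions are back is to query the package metadata from Python. A generic sketch (the package names queried are the only assumption):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg):
    """Return the installed version string for pkg, or None if it is absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

for pkg in ("tensorflow", "keras"):
    print(pkg, installed_version(pkg))
```

This reads the same metadata `pip show` does, without shelling out from the notebook.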

Related

Cant install Tensorflow due to protobuf issue?

I am not too sure why I am suddenly getting this issue when trying to import tensorflow into my Jupyter notebooks. The error is related to protobuf, and I am not sure what changed to cause it.
I did install WSL on my system yesterday and thought that might be the problem, but I have since uninstalled it.
Here is the code:
!pip install tensorflow
!pip install protobuf
from tensorflow.keras import models, layers
from tensorflow.keras.utils import to_categorical
And it produces an error:
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
Downgrade the protobuf package to 3.20.x or lower.
Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
How do I sort this issue out?
I have tried installing protobuf==3.19.5, but that still doesn't work. I am just baffled by what causes this.
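Of the two workarounds the error message itself suggests, the environment-variable one can be applied from inside a notebook, as long as it runs before the first protobuf import in the process. A minimal sketch:

```python
import os

# Must be set before tensorflow (and hence protobuf) is imported for the
# first time in this Python process; restart the kernel if it already was.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

# Now the import should fall back to the slower pure-Python protobuf parser:
# import tensorflow as tf
```

The trade-off, as the error message notes, is that pure-Python parsing is much slower; downgrading protobuf to 3.20.x or lower avoids that cost.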

Transformer: Error importing packages. "ImportError: cannot import name 'SAVE_STATE_WARNING' from 'torch.optim.lr_scheduler'"

I am working on a machine learning project on Google Colab, and it seems there is recently an issue when trying to import packages from transformers. The error message says:
ImportError: cannot import name 'SAVE_STATE_WARNING' from 'torch.optim.lr_scheduler' (/usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py)
The code is simple as follow:
!pip install transformers==3.5.1
from transformers import BertTokenizer
So far I've tried installing different versions of transformers and importing some other packages, but it seems importing any package with:
from transformers import *Package
is not working and results in the same error. I wonder if anyone else is running into the same issue?
Change the torch version in Colab by running:
!pip install torch==1.4.0
Then it worked for me.
Just change the version of transformers to the latest one (4.5.1 at this time). That worked in Colab.
!pip install transformers
The same issue occurred for me after the PyTorch version was upgraded.
As a solution, downgrade PyTorch to version 1.4.0.
Use the below command to install
!pip install -q torch==1.4.0 -f https://download.pytorch.org/whl/cu101/torch_stable.html
It solved a lot of problems with transformers as well.
The above from udara vimukthi worked for me after trying a lot of different things. I was trying to get the code for "Getting started with Google BERT" to work after cloning the GitHub repository locally, and now ALL of the chapter code works while I'm showing my daughter the models.
Operating system: Windows, running locally with GPU support in an Anaconda environment.
pip install -q --user torch==1.4.0 -f https://download.pytorch.org/whl/cu101/torch_stable.html
Then I ran into some more issues and had to install ipywidgets:
pip install ipywidgets
Now it all works, as far as I've gotten. Thanks for the above suggestion; it saved me a lot of headaches. :)
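The common thread in these answers is a version mismatch: SAVE_STATE_WARNING was removed from torch.optim.lr_scheduler in later PyTorch releases, so transformers 3.5.1 only imports cleanly against older torch. A small sketch of the kind of version guard you could run before importing (the 1.8 cutoff is my assumption, not something stated in the answers above):

```python
from itertools import takewhile

def version_tuple(v):
    """Parse a dotted version like '1.4.0' or '1.8.0+cu101' into a tuple of ints."""
    parts = []
    for piece in v.split("."):
        digits = "".join(takewhile(str.isdigit, piece))
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def torch_ok_for_transformers_351(torch_version):
    # Assumption: SAVE_STATE_WARNING disappeared around torch 1.8,
    # so transformers 3.5.1 needs something older than that.
    return version_tuple(torch_version) < (1, 8)

print(torch_ok_for_transformers_351("1.4.0"))        # older torch: import should work
print(torch_ok_for_transformers_351("1.8.0+cu101"))  # newer torch: import will fail
```

In a notebook you would pass `torch.__version__` to the check; either pinning torch down or moving transformers up past 4.x resolves the mismatch.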

Tensorflow - The kernel appears to have died. It will restart automatically

I am reading "Hands-On Machine Learning with Scikit-Learn, Keras and Tensorflow" and installed Tensorflow 2 as follows:
$ python3 -m pip install --upgrade tensorflow
In the jupyter notebook I tried to import Tensorflow as follows:
import tensorflow as tf
But then I get the following error message:
The kernel appears to have died. It will restart automatically
I know there is a bunch of StackOverflow threads about this topic; I have read them all, some old and some new. Most of them suggest downgrading the TensorFlow version to 1.5, but when I do that I cannot use some methods of the Keras API (e.g. load_data() cannot be found).
Has anyone found a solution for this?
The second edition of the textbook is all about TensorFlow 2, so you have to use TensorFlow 2 to run its code. If that is a problem, get the first edition of the textbook, which uses TensorFlow 1.
But I suggest learning TensorFlow 2, as it is the latest version.
If you are using Anaconda, try installing TensorFlow 2 in a new environment.
To create a new environment, open the Anaconda prompt:
conda create -n envname python=3.6
and then activate the environment:
activate envname
Now try installing TensorFlow 2 and the other necessary modules, and check again.
If that does not work, the best solution is to use Google Colab (colab.research.google.com), where you can do everything online and even get a free GPU.
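Beyond environment problems, one common cause of this exact crash is the CPU itself: prebuilt TensorFlow wheels from 1.6 onward are compiled with AVX instructions, and importing them on a CPU without AVX kills the kernel silently. A quick Linux-only check (a sketch; it returns None on platforms without /proc/cpuinfo):

```python
def cpu_has_avx(cpuinfo_path="/proc/cpuinfo"):
    """Return True/False if AVX appears in cpuinfo, or None if unreadable (non-Linux)."""
    try:
        with open(cpuinfo_path) as f:
            return "avx" in f.read()
    except OSError:
        return None

print("AVX support:", cpu_has_avx())
```

If this prints False, no amount of reinstalling will help; you need either an older TensorFlow, a wheel built without AVX, or a machine (such as Colab) whose CPU supports it.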

cuDNN on Google colab

When I run following line:
import tensorflow
I'm getting error like:
ImportError: libcudnn.so.6: cannot open shared object file: No such file or directory
I'm using Google Colab for my project, with tensorflow-gpu 1.4. Somehow I managed to install CUDA 8.0 on Google Colab, but I still need to install cuDNN, which is a necessary requirement of tensorflow-gpu 1.4. I cannot upgrade the tensorflow version to 1.12.
How to install cuDNN on Google Colab?
Getting the various packages' versions correct for using a GPU is complicated, which is why Colab does it for you. You're going to have a bad time if you try to use another set of versions, but if you really want to try then the answer is to follow NVIDIA's documentation for how to install their stuff.
Note that there's a definite cutoff on how far back you can go, because userland libraries and driver versions are not independent, and you will not be able to change the driver version on Colab no matter what you do.
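If you do go down that road, you can at least verify whether a cuDNN shared library is loadable before importing tensorflow-gpu, instead of letting the import fail. A small sketch using ctypes (the sonames listed are the usual ones, including the libcudnn.so.6 that TF 1.4 asks for):

```python
import ctypes

def find_loadable_cudnn(names=("libcudnn.so.6", "libcudnn.so.7", "libcudnn.so.8")):
    """Return the first cuDNN soname that dlopen can load, or None if none load."""
    for name in names:
        try:
            ctypes.CDLL(name)
            return name
        except OSError:
            continue
    return None

print("loadable cuDNN:", find_loadable_cudnn())
```

A None result reproduces the ImportError's root cause: the dynamic loader cannot find the library on its search path.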

Import Error on Keras: 'cannot import name 'abs''

I am trying to use keras for image classification. I want to load an already trained model (VGG16) for my project, but when I run
from keras.applications.vgg16 import VGG16
I get an error:
ImportError: cannot import name 'abs'
I reinstalled both tensorflow and keras using:
pip install --ignore-installed --upgrade tensorflow
conda install -c conda-forge keras
since I found some suggestions here that reinstalling could help, though they were related to tfp, not VGG16.
Could someone help me, please? Why am I getting this error, and how can I fix it?
OS: Windows
TensorFlow and Keras installed on CPU
After all, trying to install tensorflow and keras in a virtual environment solved the problem. I still don't know why the problem existed in the first place. The steps taken:
conda create --name vgg16project python # you can name it other than vgg16project
activate vgg16project
then install the other packages you need, such as pandas, seaborn, etc. Then install tensorflow and keras with pip:
pip install --upgrade tensorflow
pip install --upgrade keras
simply solved it. I guess there must be a reason why using tensorflow and keras in a virtual environment is recommended.
I was having a similar issue with keras 'cannot import abs'. I tried updating and found that a tensorflow file was still in use:
Could not install packages due to an EnvironmentError: [WinError 32] The process cannot access the file because it is being used by another process: 'c:\program files (x86)\microsoft visual studio\shared\python36_64\Lib\site-packages\tensorflow\python\ops\gen_dataset_ops.py'
Consider using the --user option or check the permissions.
After uninstalling keras and tensorflow, I deleted the whole tensorflow folder and reinstalled both tensorflow 1.10 and keras. This resolved my issue.
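The "delete the whole tensorflow folder" step can be checked programmatically: leftover files in site-packages after an interrupted uninstall are a plausible cause of this import error. A hedged sketch for spotting remnants:

```python
import pathlib
import sysconfig

# Location of installed packages for the current interpreter.
site = pathlib.Path(sysconfig.get_paths()["purelib"])

# Any surviving tensorflow* entries after `pip uninstall tensorflow`
# are candidates for manual deletion before reinstalling.
leftovers = sorted(p.name for p in site.glob("tensorflow*"))
print("leftover tensorflow entries:", leftovers or "none")
```

An empty result means the uninstall was clean and a fresh `pip install` starts from nothing.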
I had the same issue and just uninstalled (and deleted) tensorflow.
After that I installed it again with:
pip install tensorflow-gpu==2.0.0-rc1
I tried something like three different versions before getting it to work.
