I am not too sure why I am suddenly getting this issue when trying to import TensorFlow into my Jupyter notebooks. The issue is related to protobuf, and I am not sure what changed to cause this error.
I did install WSL on my system yesterday and thought that might be the problem, so I have since uninstalled it.
Here is the code:
!pip install tensorflow
!pip install protobuf
from tensorflow.keras import models, layers
from tensorflow.keras.utils import to_categorical
And it produces an error:
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
Downgrade the protobuf package to 3.20.x or lower.
Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
How do I sort this issue out?
I have tried installing protobuf==3.19.5, but that still doesn't work. I am just baffled by what causes this.
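Of the two workarounds the error message suggests, the environment variable one can be applied directly from the notebook. A minimal sketch, using only the variable name quoted in the error message itself:

```python
import os

# Must be set BEFORE tensorflow (or any *_pb2 module) is imported.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

# With the variable set, an import like the one below would fall back to
# the slower pure-Python protobuf parser instead of raising the error:
# from tensorflow.keras import models, layers
```

If you downgrade protobuf instead, restart the kernel afterwards: a `!pip install` inside a running notebook does not replace modules that are already loaded, which may be why pinning 3.19.5 appeared not to work.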
I'm having some basic issues running the torch and cuda modules in my Python script.
I think that this has something to do with the different versions of Python that I have installed. I have two versions of Python installed:
I think I have torch and cuda installed for the wrong one or something. I don't know how to fix this.
Per the PyTorch website, I installed torch as follows:
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
I've installed cuda as follows: pip3 install cuda-python==11.7 (It doesn't matter if I use the newest version -- I still get the same results.)
When I look up the versions, I get the following:
So it seems like it is all installed correctly.
However, if I run the following:
import torch
print(torch.cuda.is_available())
I get False.
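Since two Python versions are installed, a useful first step is confirming which interpreter actually runs the script, and whether it matches the one `pip3` installed into. A minimal diagnostic sketch using only the standard library:

```python
import sys

# The interpreter executing this script; compare it with the path that
# `pip3 -V` prints, which shows the interpreter pip is bound to.
print(sys.executable)
print("%d.%d.%d" % sys.version_info[:3])
```

If the two paths differ, the CUDA-enabled torch wheel went to the other interpreter, and `torch.cuda.is_available()` returning False from this one is expected.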
If I try to run my code that uses torch, I get the following error:
I don't get what I'm doing wrong. As far as I can tell, I've installed torch with CUDA enabled. So I'm not sure why it's telling me otherwise.
Any ideas? Thanks for any help.
Edit: Here is the info for my GPU:
The issue had to do with the different versions of Python. I ended up using a Conda virtual environment, which solved the issue by ensuring I was using the correct version of PyTorch.
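The fix described above can be sketched as a setup recipe; the environment name `proj-a` is hypothetical, and the install command is the one quoted earlier in the question:

```shell
# Create an isolated environment so exactly one interpreter is in play
conda create -n proj-a python=3.10 -y
conda activate proj-a

# Install the CUDA-enabled wheel into that environment
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117

# Verify from the same environment
python -c "import torch; print(torch.cuda.is_available())"
```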
I have some models I trained using TF and have been using for a while now, but since v2.8 came out I am having issues with the models based on MobileNetV3 (large and small). I posted the issue on the TensorFlow GitHub and am waiting for a solution. In the meantime I want to make some predictions on Colab using v2.7 instead of 2.8. I know this involves installing CUDA and cuDNN. I am really inexperienced at this level of setting up TF. Does anyone know how to proceed? I saw this post but was hoping for a less intensive solution, like: can I 'flash' an old Colab machine that has 2.7 set up?
As a side note, shouldn't Colab have options like this? The main reason I am using Colab is that I can run my code anywhere and that it is repeatable.
Also, I can install and run my code for v2.7 on the CPU version, but I want to run on the GPU.
Thanks for your help!
Edit: sorry, I did a poor job of explaining what I already tried. I have tried using pip:
!pip install --upgrade tensorflow-gpu==2.7.*
!pip install --upgrade tensorflow==2.7.*
but I get this error
UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
I have also pip-uninstalled keras, TF, and TF-GPU before installing, and I get the same error. Yes, I restart the runtime as well. Someone mentioned that conda tries to install everything when installing TF; is this a possible solution?
I have been dealing with this for much too long. I am using a MacBook Pro. This suddenly started happening when running Jupyter, and I could not fix it. I completely uninstalled Anaconda (which was using Python 3.8), then completely uninstalled Python 3.9. I installed Python 3.9.6, then went back to basics and reinstalled numpy, pandas, and sklearn. Everything seems to be there in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages
This command fails in IDLE and gives the error in the title of my question.
from sklearn.model_selection import train_test_split
It does not give the error in Mac's Terminal
The search path for IDLE looks correct.
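One way to verify that claim is to print the interpreter and search path from both environments and compare them, since IDLE and the Terminal can silently use different Python installations. A small sketch:

```python
import sys

print("interpreter:", sys.executable)
print("search path:")
for entry in sys.path:
    print("  ", entry)
```

Run these same lines in IDLE and in the Terminal; if the site-packages entries differ, the two are not importing from the same installation.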
You can use pip to install it.
And it's called scikit-learn instead of sklearn in pip. If you are using Jupyter, do:
!pip install scikit-learn
If you already have it, try
!pip install --upgrade scikit-learn
So, after a few weeks of simply using Google's Colab, I did the following. I purchased App Cleaner and Uninstaller for Mac and got rid of all the Python-related apps I no longer need after moving to Colab and Kaggle's API. I removed and reinstalled Python, ran the IDLE program again, and it worked! Yay! The problem started while using Jupyter with Anaconda; it just randomly appeared. When I decided to work on it again yesterday, I thought it might have been some malware on my Mac. The reason I suspected malware is that IDLE kept telling me that sklearn was not a package and that it could not find train_test_split, even when I asked for another sklearn module, e.g. datasets. It always referred to train_test_split even when I did not request it. Hence the cleanup of my Mac (everything Python-related and more). It's all good now.
I created a virtualenv for my project A, and ran the same project A again after a long time.
I was using the same virtualenv for other projects as well, so depending on their requirements I have installed other libraries too.
Now when I run the project, it gives me an import error with sklearn, which was working fine earlier.
What can be the reason it now gives an import error with the sklearn package?
Since you are using the code after a long time, I suspect your old code is outdated.
You can actually use import joblib directly instead of importing it via sklearn.externals, since the latter is deprecated in recent versions of scikit-learn.
DeprecationWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib.
You might want to run this first:
pip install joblib
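To make the migration robust across scikit-learn versions, the import can be guarded. A minimal sketch; the helper name joblib_available is just illustrative:

```python
import importlib.util

def joblib_available():
    """Return True if the standalone joblib package can be imported."""
    return importlib.util.find_spec("joblib") is not None

if joblib_available():
    import joblib          # preferred: direct import
else:
    # tell the user what to install instead of failing later
    print("joblib is missing; install it with: pip install joblib")
```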
I did not know that tensorflow and keras were installed by default on the machine used by Google Colab, and I installed my own versions. But it was buggy, so I decided to go back to the previous versions. I did:
!pip install tensorflow==1.6.0
and
!pip install keras==2.1.5
But now, when I do import keras, I get the following error:
AttributeError: module 'tensorflow' has no attribute 'name_scope'
Note: I asked a friend for the default tensorflow and keras versions, and he gave me these:
!pip show tensorflow # 1.6.0
!pip show keras # 2.1.5
So I suspect my installations were wrong somehow. What can I do so I can import keras again?
To get back to the default versions, I had to restart the VM.
To do so, just do:
!kill -9 -1
Then, wait 30 seconds, and reconnect.
I got this information by opening an issue on the GitHub repository.