I developed a deep learning model with Keras a while ago and do not remember which versions of Keras and TensorFlow I used. Now I have created a Kivy app that uses that model. The problem is that the app works perfectly on my Ubuntu device, which has Python 3.8, Keras 2.5.0, and TensorFlow 2.5.0. But when I try to use the app on my Windows 10 device, with Python 3.7, Keras 2.5.0, and TensorFlow 2.5.0, it fails and raises the error: "bad marshal data (unknown code type)". The reason I use Python 3.7 on Windows 10 is that I want to create a .exe file out of my project, and PyInstaller does not work properly with Python 3.8.
How can I fix this? I would appreciate your help, since I have been struggling with this issue for about a month.
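For what it's worth, a "bad marshal data" error usually means the saved model file contains marshalled Python bytecode (Keras serializes Lambda-layer functions this way), and the marshal format is tied to the Python version that wrote it, so a model saved under one interpreter can fail to load under another. A minimal stdlib sketch of the underlying mechanism, with no Keras involved:

```python
import marshal

# Keras stores Lambda-layer functions as marshalled code objects.
# marshal's wire format is specific to the CPython version that produced it,
# so bytes dumped under one interpreter may fail to load under another.
code = compile("x * 2", "<lambda>", "eval")
blob = marshal.dumps(code)        # version-specific byte string

restored = marshal.loads(blob)    # only safe on the same CPython version
print(eval(restored, {"x": 21}))  # 42
```

A common workaround is to avoid putting bytecode in the file at all: save the architecture with model.to_json() plus a separate model.save_weights() call, or replace Lambda layers with custom layer subclasses, and reconstruct the model on the target machine.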
As the title says, my requirement is simple: since I will probably need to use the latest features of Python at work, I want to know the latest Python version that can be used with TensorFlow 2.x without any compatibility trouble. I must emphasize that I need to use the tensorflow.keras module, and I don't want to get an error message during model training. Any advice?
I did try to follow the GitHub issue on supporting Python 3.9. While the issue is closed, most of the comments there are NOT from the contributors/maintainers, and the last comment is from June 2021. Is Python 3.9 the latest version compatible with TensorFlow 2.x?
TensorFlow 2 is supported from Python 3.7 to 3.10 according to their website: https://www.tensorflow.org/install?hl=en
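For completeness, that support window can be expressed as a small helper; the 3.7–3.10 range below is taken from the install page linked above and would need adjusting if a newer TF release widens it:

```python
def tf2_supports(major: int, minor: int) -> bool:
    """True if official TF 2.x wheels exist for this CPython version (3.7-3.10)."""
    return major == 3 and 7 <= minor <= 10

print(tf2_supports(3, 9))   # True
print(tf2_supports(3, 11))  # False
```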
I've made a piece of code using a tutorial based on TensorFlow 1.6, which uses 'contrib', and this is not compatible with my current TensorFlow version (2.1.0).
I haven't been able to run the upgrade script, and downgrading my version of TF causes another host of problems.
I've also tried using other modules in TensorFlow 2, such as tensorflow-addons, and disabling v2 behaviour.
What should I do?
Thanks to @jdehesa. Here is the information from the official TensorFlow website.
Warning: The tf.contrib module is not included in TensorFlow 2. Many of its submodules have been integrated into TensorFlow core, or spun off into other projects like tensorflow_io or tensorflow_addons. For instructions on how to upgrade, see the Migration guide.
https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib
https://www.tensorflow.org/guide/migrate
Or, you can just convert the code to an appropriate version for TF 2.x.
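If a full rewrite isn't feasible yet, TF 2.x also ships a compatibility shim that runs most contrib-free 1.x code unchanged; a sketch, guarded so it degrades gracefully where TensorFlow isn't installed:

```python
try:
    # TF 2.x exposes the 1.x API surface under tensorflow.compat.v1
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()  # graph mode, 1.x-style sessions
except ImportError:
    tf = None                 # TensorFlow not available in this environment

# tf.contrib itself is gone either way and must be replaced piece by piece:
# e.g. tf.contrib.rnn.BasicLSTMCell has a close Keras equivalent in
# tf.keras.layers.LSTMCell.
```

The shim covers the core 1.x API but not contrib, which is why the migration guide points to tensorflow_addons and tensorflow_io for the spun-off pieces.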
I am trying to run an ML model on Google Cloud ML. I am using PyTorch and want to use the GPU. Using the standard Python 3.6 installed on the Google Cloud VM, I get the error described below; I tried solving it by upgrading to Python 3.7, but that version does not recognize the GPU that comes with the Google Cloud VM.
Whenever I run my code (which works when run locally) on the Google Cloud VM (with Python 3.6), I get the error
python: symbol lookup error: /home/julsoles/anaconda3/lib/python3.6/site-packages/torch/lib/libtorch_python.so: undefined symbol: PySlice_Unpack
Trying to find a solution online, I figured out that this was an issue with that build of Python 3.6, and that the only solution was to upgrade my version of Python.
I was able to upgrade to Python 3.7 on the Google Cloud VM and can run code with the new version using the command python3.7 file.py.
Now the issue is that whenever I run code with this version of Python, the VM does not recognize the GPU that comes with the system. I get the error:
  File "pred.py", line 75, in <module>
    predict(model_list, test_dataset)
  File "pred.py", line 28, in predict
    x = Variable(torch.from_numpy(x).float()).cuda()
  File "/opt/anaconda3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 161, in _lazy_init
    _check_driver()
  File "/opt/anaconda3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 75, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Right now, the only solution I have found is to run my code on the CPU, but it is painfully slow. Is there any way to make Python 3.7 recognize the GPU so that I can run my code on it?
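One quick sanity check is to ask the Python 3.7 interpreter's own torch build whether it was compiled with CUDA at all; the traceback above ("Torch not compiled with CUDA enabled") suggests it wasn't. A hedged diagnostic sketch (the filename is hypothetical; run it as python3.7 check_cuda.py):

```python
try:
    import torch
    build = torch.version.cuda           # None for a CPU-only wheel
    cuda_ok = torch.cuda.is_available()  # False if no driver or a CPU-only build
except ImportError:
    build, cuda_ok = None, False         # torch not installed for this interpreter

print("CUDA toolkit in build:", build, "| GPU usable:", cuda_ok)
```

If build comes back as None, the Python 3.7 environment simply has a CPU-only PyTorch wheel, and installing a CUDA-enabled build for that specific interpreter (rather than changing anything on the VM itself) should restore GPU access.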
Thanks for your help!
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Raspberry Pi 3B+ (latest dist)
TensorFlow installed from (source or binary): pip
TensorFlow version (or github SHA if from source): 1.11.0 (latest from pip)
Keras version: 2.2.4 (latest from pip)
Hello,
I am trying to accelerate a CNN model so that I can run it in Python on my Raspberry Pi. I've tried TensorFlow Lite as a solution, but I still have no success converting my Keras model to a Lite one and using it afterwards. I've tried tflite_convert on the command line, but I get the error quoted below. I've also tried using TocoConverter inside Python, and TFLiteConverter and tflite_convert on a machine with fully updated TensorFlow and Keras, but I still get the same error. Could you help me?
Provide the text output from tflite_convert
raise TypeError('Keyword argument not understood:', kwarg)
TypeError: ('Keyword argument not understood:', u'output_padding')
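For context, an error like this typically means the Keras that wrote the HDF5 file (standalone Keras 2.2.4, whose Conv2DTranspose knows output_padding) is newer than the Keras bundled inside TF 1.11 that the converter uses to read it back. One way around the mismatch is to build, save, and convert the model entirely with tf.keras so writer and reader agree; a guarded sketch (model.h5 is a hypothetical path, and the API names are version-dependent):

```python
try:
    import tensorflow as tf
    # Save the model with tf.keras so the converter's bundled Keras matches.
    # from_keras_model_file is the TF 1.13+ API; in TF 2.x, load the model
    # first and use tf.lite.TFLiteConverter.from_keras_model(model) instead.
    converter = tf.lite.TFLiteConverter.from_keras_model_file("model.h5")
    tflite_blob = converter.convert()
except Exception:        # TF missing here, a 2.x API surface, or no such file
    tflite_blob = None
```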
In development, I have been using the GPU-accelerated TensorFlow wheel:
https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp35-cp35m-linux_x86_64.whl
I am attempting to deploy my trained model along with an application binary for my users. I compile with PyInstaller (3.3.dev0+f0df2d2bb) on Python 3.5.2 to package my application as a binary.
For deployment, I install the CPU version: https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl
However, upon successful compilation, I run my program and receive the infamous TensorFlow CUDA error:
tensorflow.python.framework.errors_impl.NotFoundError:
tensorflow/contrib/util/tensorflow/contrib/cudnn_rnn/python/ops/_cudnn_rnn_ops.so:
cannot open shared object file: No such file or directory
Why is it looking for CUDA when I've only got the CPU version installed? (Not to mention that I'm still on my development machine, which has CUDA, so it should find it anyway; I can use tensorflow-gpu/CUDA fine in uncompiled scripts. But this is irrelevant, because deployment machines won't have CUDA.)
My first thought was that somehow I was importing the wrong tensorflow, but I not only ran pip uninstall tensorflow-gpu, I also deleted the tensorflow-gpu directory in /usr/local/lib/python3.5/dist-packages/.
Any ideas what could be happening? Maybe I need to start using a virtualenv...
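Before rebuilding, it may be worth checking exactly which tensorflow package the build interpreter (and therefore PyInstaller) resolves, since stale tensorflow-gpu leftovers on sys.path would explain the CUDA lookup; a small sketch:

```python
import importlib.util

# PyInstaller bundles whatever 'tensorflow' the build interpreter imports,
# so the path printed here is the one that ends up inside the binary.
spec = importlib.util.find_spec("tensorflow")
print("tensorflow resolves to:", spec.origin if spec else "not installed")
```

A clean virtualenv with only the CPU wheel installed, as you suggest, is the most reliable way to guarantee the bundle contains no GPU ops.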