I'm working with Python and I would like to use TensorFlow with my GTX 2080 Ti, but TensorFlow is only using the CPU.
When I ask for the devices on my computer, it always returns an empty list:
In [3]: tf.config.list_physical_devices('GPU')
Out[3]: []
I tried this post: How do I use TensorFlow GPU? but I don't use CUDA that way and tensorflow-gpu seems outdated.
I also tried this well-made tutorial https://www.youtube.com/watch?v=hHWkvEcDBO0 without success.
I installed the card drivers, CUDA and cuDNN, but I still get the same issue.
I also uninstalled TensorFlow and Keras and installed them again, without success.
I don't know how to find out what is missing or whether I did something wrong.
Python: 3.10
TensorFlow version: 2.11.0
CUDA version: 11.2
cuDNN: 8.1
This code tells me that CUDA support is not built in:
from tensorflow.python.platform import build_info as tf_build_info
print(tf_build_info.build_info)
OrderedDict([('is_cuda_build', False), ('is_rocm_build', False), ('is_tensorrt_build', False), ('msvcp_dll_names', 'msvcp140.dll,msvcp140_1.dll')])
Starting with TensorFlow 2.11, GPU support was dropped for native Windows. You can optionally use the DirectML plugin.
Tutorial here
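A rough sketch of the two usual workarounds (the package names come from the DirectML tutorial and the TensorFlow release notes; treat exact versions as assumptions and check the current docs): either stay on the last native-Windows GPU build, or keep 2.11+ and add the plugin, then re-check the device list.
pip install "tensorflow<2.11"                        # last releases with native Windows GPU support
# or keep 2.11+ and add the DirectML plugin:
pip install tensorflow-cpu tensorflow-directml-plugin

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))        # should no longer be empty
Alternatively, running TensorFlow 2.11+ inside WSL2 keeps the regular CUDA/cuDNN path.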
I first created my TensorFlow code in Python on my GPU using:
import tensorflow-gpu as tf
I used it for training purposes and everything went very well. But now I want to deploy my Python scripts on a device without a GPU. So I uninstalled tensorflow-gpu with pip and imported normal TensorFlow:
import tensorflow as tf
But when I run the script, it is still using the GPU.
I tried this code:
try:
    # Disable all GPUs
    tf.config.set_visible_devices([], 'GPU')
    visible_devices = tf.config.get_visible_devices()
    print(visible_devices)
    for device in visible_devices:
        assert device.device_type != 'GPU'
except Exception as e:
    # Invalid device or cannot modify virtual devices once initialized.
    print(e)
But it is still not working (even though the GPU seems disabled, as you can see highlighted in white on the screenshot).
I just want to return to the default TensorFlow installation without GPU features.
I tried to uninstall and reinstall TensorFlow, and to remove the virtual environment and create a new one; nothing worked.
Thanks for your help!
TensorFlow 2.x has GPU support by default. To learn more, please visit the TensorFlow site.
As per the above screenshot, it is showing only the CPU.
Also observe the log line Adding visible GPU devices: 0.
If you still want a CPU-only TensorFlow, use tensorflow==1.15.
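As a sketch of another common way to force CPU-only execution under TensorFlow 2.x (a general workaround, not part of the answer above): hide the GPU from the CUDA runtime before TensorFlow is imported.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"        # must be set before TensorFlow is imported

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))    # expected: []
Installing the tensorflow-cpu package instead of tensorflow should achieve the same thing without any code changes.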
There have been many questions on this topic before, but none of the answers fit my specific question.
I am trying to train Mask R-CNN using Keras 2.2.5 on Google Colab. I am getting a message saying "You are connected to a GPU runtime, but not utilizing the GPU". I looked up similar questions on Stack Overflow:
Why isn't my colab notebook using the GPU?
Anyone experienced the warning about Google colaboratory:You are connected to a GPU runtime, but not utilizing the GPU
but all of them only say to use a package that uses the GPU, such as TensorFlow. However, I am using Keras 2.2.5 (presumably with a TensorFlow 1.14 backend, as I had to install TensorFlow 1.14 for Keras 2.2.5 to work), which is compatible with the GPU. Is there any reason why this is happening?
More info:
Google Colab
Python 3.6
I am using Keras 2.2.5 (with TensorFlow 1.14) instead of Keras 2.4.3 because I am using some code from 2017 which works only with TensorFlow 1. Thus, TensorFlow 2 is not an option.
I have used Google Colab to train a GAN before, and this error never appeared. However, I used TensorFlow 2 in that situation.
I have never needed to add any code to specify that a GPU should be used, and I don't know if I should do that here.
I had to download Keras 2.2.5 and TensorFlow 1.14 to replace the existing versions (Keras 2.4.3 and TensorFlow 2.2), and I'm unclear whether that has anything to do with it.
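One quick check is whether the TensorFlow 1.14 backend can see the Colab GPU at all (a minimal sketch; both calls are part of the TF 1.x API):
import tensorflow as tf  # 1.14

print(tf.test.gpu_device_name())     # '/device:GPU:0' if a GPU is visible, '' otherwise
print(tf.test.is_gpu_available())    # True only if TF was built with CUDA and finds a GPU
If this comes back empty, note that in the 1.x line the plain tensorflow package on PyPI is CPU-only; the GPU build is published separately as tensorflow-gpu==1.14.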
I've made a piece of code using a tutorial based on TensorFlow 1.6 which uses 'contrib', and this is not compatible with my current TensorFlow version (2.1.0).
I haven't been able to run the upgrade script, and downgrading my version of TF causes a host of other problems.
I've also tried using other modules in TensorFlow 2 such as tensorflow-addons and disabling version 2 behaviour.
What should I do?
Thank you to @jdehesa.
Here is the information from the TensorFlow official website:
Warning: The tf.contrib module is not included in TensorFlow 2. Many of its submodules have been integrated into TensorFlow core, or spun off into other projects like tensorflow_io or tensorflow_addons. For instructions on how to upgrade, see the Migration guide.
https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib
https://www.tensorflow.org/guide/migrate
Or, you can just convert the code to an appropriate version for TF 2.x.
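A rough sketch of both routes (the specific contrib symbols below are hypothetical examples; substitute whatever the tutorial actually uses):
# Option 1: run the old graph-mode code under the v1 compatibility layer
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Option 2: replace each contrib symbol with its TF 2.x / tensorflow_addons equivalent, e.g.
#   tf.contrib.layers.xavier_initializer()  ->  tf.keras.initializers.GlorotUniform()
#   tf.contrib.rnn.LSTMCell(...)            ->  tf.compat.v1.nn.rnn_cell.LSTMCell(...)
Option 1 usually gets a TF 1.6 tutorial running with the fewest edits, but contrib itself is still gone, so any direct tf.contrib.* call has to be mapped as in option 2.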
I'm using Keras to make a model.
While compiling, my model doesn't work and an error message pops up: tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
My computer's specs are as follows:
GPU: RTX 2070
TensorFlow version: 1.13.1
Python version: 3.6.5
CUDA: 10.0
cuDNN: 7.4.2
I tried cuDNN 7.5.0 and this link: cannot train Keras convolution network on GPU, but changing the cuDNN version doesn't work for me.
So, I tried this code:
>>> import tensorflow as tf
>>> a = tf.constant([1])
>>> b = tf.constant([2])
>>> sess = tf.Session()
>>> with tf.device('/gpu:0'):
...     print(sess.run(a + b))
...
[3]
It works! Does anyone know why I am running into this problem?
This issue might be of help: https://github.com/tensorflow/tensorflow/issues/24828
Try to check which versions of cuDNN and TensorFlow you have.
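A quick sketch for checking what the environment actually reports (both calls exist in the TF 1.x API; the cuDNN version itself has to be read from the installed cudnn.h or DLL):
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)                                      # expect 1.13.1
print([d.name for d in device_lib.list_local_devices()])   # should include '/device:GPU:0'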
I solved this problem with conda install tensorflow-gpu. It automatically installed cuDNN 7.3.1, and the problem was solved.
System information
OS Platform and Distribution: Linux Ubuntu 16.04
TensorFlow version: tensorflow-gpu (1.7.0)
Python version: Python 3.5.2
CUDA/cuDNN version: CUDA 9.0 cuDNN 7
Describe the problem
I have a CUDA library built from C++ for post-processing the prediction results of a TensorFlow model.
I use the following approach to make Python able to use the CUDA code from C++:
import ctypes

lib = ctypes.cdll.LoadLibrary("my.so")   # path to the compiled CUDA library
result = lib.post_process(tensorflow_result)
If I test the CUDA code alone, without TensorFlow, it works fine. (I save the result from TensorFlow, then use cv2.imread to feed it into my CUDA code.)
But when TensorFlow is used in my project, my CUDA code becomes 10 times slower....
My timing log is inside the CUDA .so library, so there is no way the gap comes from the Python-to-.so wrapping.
I have tried to set the fraction of GPU memory to be allocated by TensorFlow:
# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
but it was useless....
So I wonder: does TensorFlow take all the resources of the GPU, making other CUDA code slow?
Is the only solution to register my CUDA code as a TensorFlow op?
Any suggestions? Thanks~~~
----------------------Update----------------------
I have tested what @AnandCU suggested:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
but it doesn't make my CUDA code as fast as when I run it alone, without TensorFlow.
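One way to confirm how much of the GPU TensorFlow has actually reserved from inside the same process (a sketch; the CUDA runtime library name below is an assumption for this CUDA 9.0 setup, adjust the path as needed):
import ctypes

# Query free/total device memory through the same CUDA runtime the .so uses
cudart = ctypes.CDLL("libcudart.so.9.0")
free, total = ctypes.c_size_t(), ctypes.c_size_t()
cudart.cudaMemGetInfo(ctypes.byref(free), ctypes.byref(total))
print("free MiB:", free.value // 2**20, "total MiB:", total.value // 2**20)
If free memory stays high once the session is created, the slowdown is probably not memory pressure but contention between TensorFlow's kernels and the post-processing kernels running on the same device.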