I first created my TensorFlow code in Python and trained it on my GPU, with the tensorflow-gpu package installed:
import tensorflow as tf  # tensorflow-gpu installed
I used it for training and everything went very well. But now I want to deploy my Python scripts on a device without a GPU, so I uninstalled tensorflow-gpu with pip and imported the normal TensorFlow package:
import tensorflow as tf
But when I run the script, it still uses the GPU. I tried this code:
try:
    # Disable all GPUs
    tf.config.set_visible_devices([], 'GPU')
    visible_devices = tf.config.get_visible_devices()
    print(visible_devices)
    for device in visible_devices:
        assert device.device_type != 'GPU'
except Exception as e:
    # Invalid device or cannot modify virtual devices once initialized.
    print(e)
But it still doesn't work (even though the GPU seems disabled, as you can see in white on the screenshot).
I just want to return to the default TensorFlow installation without GPU features.
I tried uninstalling and reinstalling tensorflow, and removing the virtual environment and creating a new one; nothing worked.
Thanks for your help!
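One approach worth adding here (a sketch, not from the original post): CUDA devices can be hidden through the environment, but only if the variable is set before TensorFlow is imported, because TensorFlow enumerates devices at import time.

```python
import os

# Hide every CUDA device from TensorFlow. This must run *before*
# `import tensorflow` (assumption: standard CUDA_VISIBLE_DEVICES semantics,
# where "-1" means "no devices visible").
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# import tensorflow as tf  # imported afterwards, TF sees no GPUs
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

If the variable is set after TensorFlow has already initialized CUDA, it has no effect, which may explain why in-script attempts appear to fail.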
TensorFlow 2.x ships GPU support in the default package; to know more, please visit the TensorFlow site.
As per the above screenshot, it is showing only the CPU.
Also observe the log line: Adding visible GPU devices: 0
If you still want a CPU-only TensorFlow build, use tensorflow==1.15, the last series in which the CPU and GPU builds were separate packages.
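For completeness, the 1.x-era split between the two builds looked like this (package names from the pip registry of that era):

```shell
# CPU-only build (the 1.x series shipped CPU and GPU as separate packages)
pip install tensorflow==1.15
# GPU build
# pip install tensorflow-gpu==1.15
```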
I'm working with Python and I would like TensorFlow to use my RTX 2080 Ti, but it only uses the CPU.
When I ask for the GPU devices on my computer, it always returns an empty list:
In [3]: tf.config.list_physical_devices('GPU')
Out[3]: []
I tried this post: How do I use TensorFlow GPU?, but I don't use conda and the tensorflow-gpu package seems outdated.
I also tried this well-made tutorial https://www.youtube.com/watch?v=hHWkvEcDBO0 without success.
I installed the card drivers, CUDA, and cuDNN, but I still get the same issue.
I also uninstalled tensorflow and keras and installed them again, without success.
I don't know how to find out what is missing or whether I did something wrong.
Python 3.10
TensorFlow version: 2.11.0
CUDA version: 11.2
cuDNN: 8.1
This line of code tells me that the installed build has no CUDA support:
from tensorflow.python.platform import build_info as tf_build_info
print(tf_build_info.build_info)
OrderedDict([('is_cuda_build', False), ('is_rocm_build', False), ('is_tensorrt_build', False), ('msvcp_dll_names', 'msvcp140.dll,msvcp140_1.dll')])
Starting with TensorFlow 2.11, GPU support was dropped for native Windows. You can optionally use the DirectML plugin.
Tutorial here
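A minimal install sketch for the DirectML route (assumption: package names as published for the Windows DirectML plugin, which requires the CPU build of TensorFlow):

```shell
# CPU build plus the DirectML plugin for GPU acceleration on native Windows
pip install tensorflow-cpu
pip install tensorflow-directml-plugin
```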
The loss is calculated from a target model created with PyTorch (not TensorFlow). When backpropagating with the code below, I ran into the following error message:
loss.backward()
(Forward propagation works without problems.)
terminate called after throwing an instance of 'std::runtime_error'
what(): tensorflow/compiler/xla/xla_client/computation_client.cc:280 : Missing XLA configuration
Aborted
- pytorch (1.12.0+cu102)
- torchvision (0.13.0+cu102) <- the target model contains a pre-trained CNN that can be installed from torchvision.models
- google-compute-engine
- GPU (NVIDIA Tesla T4 x 1, CUDA 11.6) <- the code worked in an environment with CUDA 11.2 installed, but not in the current one; the same error occurs even when using the CPU instead of the GPU
- no TPU installed (I want to use the GPU, not a TPU)
The code works locally and also worked in other GPU environments, as mentioned above. It stopped working when the environment was updated.
Please help me…
I solved this problem with the command:
$ pip uninstall torch_xla
The error seemed to be caused by the combination of pytorch-ignite and torch_xla.
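A quick way to confirm whether torch_xla is actually gone from the environment after the uninstall (a stdlib-only sketch; the helper name is mine):

```python
import importlib.util

def has_package(name):
    """Return True if `name` is importable in the current environment."""
    return importlib.util.find_spec(name) is not None

# After `pip uninstall torch_xla`, this should report False.
print(has_package("torch_xla"))
```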
My problem is the same as 1: How to set specific gpu in tensorflow?, but that didn't solve it.
I have 4 GPUs in my PC and I want to run my code on GPU 0, but whenever I run my TensorFlow code, it only ever runs on GPU 2. Following these (2, 3, 4) solutions and discussions, I tried adding:
os.environ['CUDA_VISIBLE_DEVICES'] = '0' in the Python code,
or CUDA_VISIBLE_DEVICES as an environment variable in the PyCharm project configuration settings.
Furthermore, I also added CUDA_LAUNCH_BLOCKING=2 in code or as an environment variable to block GPU 2. Is that the right way to block a GPU?
The above solutions did not work for me; the code always runs on GPU 2. I checked with watch nvidia-smi.
My system environment is
Ubuntu 16.04
RTX2080Ti (all 4 GPUs)
Driver version 418.74
CUDA 9.0 and cuDNN 7.5
Tensorflow-gpu 1.9.0
Any suggestions for this problem? It's weird that even after adding the environment variable in the PyCharm project settings or in the Python code, only GPU 2 is visible. When I remove CUDA_VISIBLE_DEVICES, TensorFlow detects all 4 GPUs, but the code still runs only on GPU 2.
I tried this in TensorFlow 2.0.0:
physical_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_visible_devices(physical_devices[0], 'GPU')
logical_devices = tf.config.experimental.list_logical_devices('GPU')
This should make your code run on GPU index 0.
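A common pitfall with this kind of GPU pinning is ordering: CUDA_VISIBLE_DEVICES only takes effect if it is set before TensorFlow initializes CUDA, i.e. before the import. A minimal stdlib sketch of the safe pattern:

```python
import os

# Pin this process to physical GPU 0. This must run *before*
# `import tensorflow` (assumption: standard CUDA runtime behavior;
# the chosen GPU then appears inside the process as /GPU:0).
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# import tensorflow as tf  # imported afterwards, only GPU 0 is visible
```

If another library in the script imports TensorFlow first, the variable is set too late, which can produce exactly the "still runs on GPU 2" symptom described above.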
I have a problem with Keras and multiprocessing. I have already searched a lot and found many questions on the same subject:
Importing Keras breaks multiprocessing
Keras + Tensorflow and Multiprocessing in Python
(and lot more)
I tried these solutions, which basically amount to importing Keras only after multiprocessing has been instantiated. In fact, I see this message:
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Using TensorFlow backend.
Before, this message was printed only once, so I assume the backend is now loaded per process; however, all my processes run on the same core. If I run the main process again, it creates more processes that also run on the same processor. Something seems to block execution on the other processors.
Any idea on how to fix it?
PS: I am using the second solution linked above, in particular the following:
DO NOT LOAD KERAS TO YOUR MAIN ENVIRONMENT
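One cause that has been reported for this exact symptom is that importing certain native libraries resets the process's CPU affinity, so every child process inherits a single allowed core. Resetting the affinity inside each worker is a possible workaround (Linux-specific API; a sketch under that assumption, not the accepted fix below):

```python
import os

def reset_cpu_affinity():
    """Allow the current process to run on every available core (Linux-only API)."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, range(os.cpu_count()))
        return len(os.sched_getaffinity(0))
    return os.cpu_count()  # platforms without affinity control

# Call at the top of each worker, after the heavy imports, before computing.
print(reset_cpu_affinity())
```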
The problem was in the installation of tensorflow and keras; the methods for achieving parallelization were correct.
The TensorFlow documentation clearly states that installing the package with pip is highly recommended, since the conda package is maintained only by the community (https://www.tensorflow.org/install/pip).
I fixed the problem by uninstalling keras and tensorflow and reinstalling them with:
pip install tensorflow
pip install keras
I have two Windows computers, one with and one without a GPU.
I would like to deploy the same Python script on both (TensorFlow 1.8 Object Detection) without changing the packaged version of TensorFlow. In other words, I want to run tensorflow-gpu on a CPU.
For the case where my script cannot find nvcuda.dll, I've tried using a session config to disable the GPU, like so:
config = tf.ConfigProto(
    device_count={'GPU': 0}
)
and:
with tf.Session(graph=detection_graph, config=config) as sess:
However, this is not sufficient, as TensorFlow still returns the error:
ImportError: Could not find 'nvcuda.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable.
Typically it is installed in 'C:\Windows\System32'. If it is not present, ensure that you have a CUDA-capable GPU with the correct driver installed.
Is there any way to disable the GPU/CUDA check entirely and default to the CPU?
EDIT: I have read the year-old answer regarding tensorflow-gpu==1.0 on Linux posted here, which suggests this is impossible. I'm interested to know whether this is still how tensorflow-gpu is compiled, nine versions later.
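Since the DLL check in tensorflow-gpu happens at import time, one option is to probe for the driver yourself before deciding what to import (a sketch; the helper is hypothetical, Windows-specific, and only helps if a CPU-only fallback build is available):

```python
import ctypes

def cuda_driver_present():
    """Return True if the CUDA driver DLL (nvcuda.dll) can be loaded."""
    try:
        ctypes.WinDLL("nvcuda.dll")  # the same DLL the TF import looks for
        return True
    except (OSError, AttributeError):  # AttributeError: WinDLL absent off Windows
        return False

# Decide which code path (or which installed TF variant) to use at startup.
print(cuda_driver_present())
```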