How to prevent a specific GPU from being used in Python code - python

My problem is the same as (1): How to set specific gpu in tensorflow?
but it didn't solve my problem.
I have 4 GPUs in my PC and I want to run my code on GPU 0, but whenever I run my TensorFlow code it always runs only on GPU 2. After reading these (2, 3, 4) solutions and related information, I tried to solve the problem by adding:
os.environ['CUDA_VISIBLE_DEVICES'] = '0' in the Python code,
or CUDA_VISIBLE_DEVICES as an environment variable in the PyCharm run configuration settings.
Furthermore, I also added CUDA_LAUNCH_BLOCKING=2 in code or as an environment variable to block GPU 2. Is that the right way to block a particular GPU?
The above solutions are not working for me; the code still runs only on GPU 2. I checked it with watch nvidia-smi.
My system environment is
Ubuntu 16.04
RTX2080Ti (all 4 GPUs)
Driver version 418.74
CUDA 9.0 and CuDNN 7.5
Tensorflow-gpu 1.9.0
Any suggestions for this problem? It's weird that even after adding the environment variable in the PyCharm project settings or in the Python code, still only GPU 2 is visible. When I remove CUDA_VISIBLE_DEVICES, TensorFlow detects all 4 GPUs, but the code still runs only on GPU 2.
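For reference, a minimal sketch of the pattern that usually works with TensorFlow 1.x (the version used above). The assumptions here are that both variables are set before tensorflow is imported, and that CUDA_DEVICE_ORDER=PCI_BUS_ID makes the device indices match the ordering shown by nvidia-smi:

import os

# Set these before TensorFlow is imported, otherwise the CUDA runtime
# may already have enumerated the devices.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"   # make indices match nvidia-smi
os.environ["CUDA_VISIBLE_DEVICES"] = "0"         # expose only GPU 0

import tensorflow as tf

# Inside this process, the single visible GPU is now /gpu:0.
with tf.device('/gpu:0'):
    c = tf.constant([1.0, 2.0]) + tf.constant([3.0, 4.0])

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))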

I tried this in TensorFlow 2.0.0:
import tensorflow as tf

# make only the first physical GPU visible to TensorFlow
physical_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_visible_devices(physical_devices[0], 'GPU')
logical_devices = tf.config.experimental.list_logical_devices('GPU')
This should make your code run on GPU index 0.
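One caveat worth adding (an assumption about TF 2.x behaviour, not something stated in the answer): set_visible_devices has to run before any GPU has been initialized, otherwise it raises a RuntimeError, so a defensive variant might look like:

import tensorflow as tf

try:
    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
        # Must run before the first op initializes the GPUs.
        tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
except RuntimeError as e:
    # Visible devices cannot be modified once they have been initialized.
    print(e)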

Related

Run python code on specific gpu for lower python versions

I am trying to run a Python script on a specific GPU on our server. The server has four GPUs. When I run the code in a virtual environment with Python 3.8 and TensorFlow 2.2, it runs correctly on the specified GPU just by adding the few lines below at the top of the script.
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "2" # run the code on a specified GPU
Many threads recommend using the code above to run Python scripts on a specific GPU, such as here and here.
However, when I tried the same approach to run another Python script in another virtual environment (with lower specifications), installed with Python 3.6.9 and TensorFlow 1.12, it does not run on the GPU but on the CPU.
How can I run python code on a specific GPU in the case of the second virtual environment?
You can use export CUDA_VISIBLE_DEVICES to define which GPUs are visible to the application. For example, if you want GPUs 0 and 2 visible, use export CUDA_VISIBLE_DEVICES=0,2.
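If the environment variable somehow does not take effect in the older TensorFlow 1.12 environment, here is a sketch of an in-code alternative using the graph/session API (my own illustration, not part of the answer above):

import tensorflow as tf

# TF 1.x alternative: restrict the session itself to GPU 2.
config = tf.ConfigProto()
config.gpu_options.visible_device_list = "2"
config.gpu_options.allow_growth = True  # optional: don't grab all of the GPU's memory

with tf.Session(config=config) as sess:
    print(sess.run(tf.constant("session pinned to the selected GPU")))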

How to disable TensorFlow GPU?

I first created my TensorFlow code in Python on my GPU using:
import tensorflow-gpu as tf
I used it for training purposes and everything went very well. But now I want to deploy my Python scripts on a device without a GPU. So I uninstalled tensorflow-gpu with pip and imported the normal TensorFlow:
import tensorflow as tf
But when I run the script, it is still using the GPU.
I tried this code:
try:
    # Disable all GPUs
    tf.config.set_visible_devices([], 'GPU')
    visible_devices = tf.config.get_visible_devices()
    print(visible_devices)
    for device in visible_devices:
        assert device.device_type != 'GPU'
except Exception as e:
    # Invalid device or cannot modify virtual devices once initialized.
    print(e)
But it is still not working (even though the GPU seems disabled, as you can see in white on the screenshot).
I just want to return to the default TensorFlow installation without GPU features.
I tried uninstalling and reinstalling tensorflow, and removing the virtual environment and creating a new one; nothing worked.
Thanks for your help!
TensorFlow 2.x has GPU support by default. To know more, please visit the TensorFlow site.
As per the above screenshot, it is showing only the CPU.
Also observe the log line Adding visible GPU devices: 0.
If you still want to use a CPU-only TensorFlow, use tensorflow==1.15.
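Another commonly used way to force CPU execution, offered here as a sketch rather than as part of the answer above, is to hide the GPUs from the CUDA runtime entirely before TensorFlow is imported:

import os

# Hide every GPU from the CUDA runtime; must be set before TensorFlow is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))  # expected to print an empty list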

Is it possible to run tensorflow-gpu on a computer without a GPU or CUDA?

I have two Windows computers, one with and one without a GPU.
I would like to deploy the same python script on both (TensorFlow 1.8 Object Detection), without changing the packaged version of TensorFlow. In other words, I want to run tensorflow-gpu on a CPU.
In the event where my script cannot detect nvcuda.dll, I've tried using a Session config to disable the GPU, like so:
config = tf.ConfigProto(
    device_count={'GPU': 0}
)
and:
with tf.Session(graph=detection_graph, config=config) as sess:
However, this is not sufficient, as TensorFlow still returns the error:
ImportError: Could not find 'nvcuda.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable.
Typically it is installed in 'C:\Windows\System32'. If it is not present, ensure that you have a CUDA-capable GPU with the correct driver installed.
Is there any way to disable checking for a GPU/CUDA entirely and default to CPU?
EDIT: I have read the year-old answer regarding tensorflow-gpu==1.0 on Linux posted here, which suggests this is impossible. I'm interested to know if this is still how tensorflow-gpu is compiled, 9 versions later.
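As an aside, here is an illustration only, not a fix (the 1.x tensorflow-gpu build still needs the CUDA DLLs at import time on Windows): one way to detect the missing driver up front, using a hypothetical helper cuda_driver_available built on ctypes:

import ctypes

def cuda_driver_available():
    # nvcuda.dll is the Windows CUDA driver library that tensorflow-gpu loads.
    try:
        ctypes.WinDLL("nvcuda.dll")
        return True
    except OSError:
        return False

if cuda_driver_available():
    import tensorflow as tf  # the GPU build can initialize normally
else:
    print("No CUDA driver found; install the CPU-only tensorflow package on this machine.")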

Tensorflow GPU doesn't work in Pycharm

When I run my TensorFlow training module in the PyCharm IDE on Ubuntu 16.04, it does not train on the GPU and falls back to the CPU. But when I run the same Python script from the terminal, it trains on the GPU. I want to know how to configure GPU training in the PyCharm IDE.
Actually, the problem was that the Python environment for the PyCharm project was not the same as the one set in the run configuration. The issue was fixed by changing the environment in the run configuration.
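A quick generic check (not part of the answer) is to print, from inside the script, which interpreter PyCharm is actually using and whether that environment's TensorFlow sees the GPU:

import sys
import tensorflow as tf

print("interpreter:", sys.executable)                # should point at the venv that has tensorflow-gpu
print("GPU available:", tf.test.is_gpu_available())  # TF 1.x helper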

nvcc fatal : Value 'sm_61' is not defined for option 'gpu-architecture' error with theano

I was setting up Python and Theano for GPU use on:
Ubuntu 14.04,
GeForce GTX 1080
I have already installed the NVIDIA driver (367.27) and CUDA toolkit (7.5) successfully on the system,
but when testing the Theano GPU implementation I get the above error (for example, when importing Theano with the GPU enabled).
I have tried to look for possible solutions but didn't succeed.
I'm a little new to Ubuntu and GPU programming, so I would appreciate any insight into how I can solve this problem.
Thanks
As Robert Crovella said, SM 6.1 (sm_61) is only supported in CUDA 8.0 and above, and thus you should download CUDA 8.0 Release Candidate from https://developer.nvidia.com/cuda-toolkit
Ubuntu 14.04 is supported, and the instructions on the website on how to setup should be straightforward (copy and paste lines to the console).
I would also recommend downloading CUDA 8.0 when it comes out, since the RC is not the final version.
I was able to find a solution to this problem (since I still want to use CUDA 7.5) by including the following line in the .theanorc file:
flags = -arch=sm_52
No more nvcc fatal error.
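The same flag can also be passed through the THEANO_FLAGS environment variable instead of .theanorc; a sketch, where the device and floatX values are just typical settings and not something taken from the answer:

import os

# Must be set before theano is imported; mirrors the .theanorc line above.
os.environ["THEANO_FLAGS"] = "device=gpu,floatX=float32,nvcc.flags=-arch=sm_52"

import theano
print(theano.config.device)  # should report 'gpu' if the GPU backend initialized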
