self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
The PyTorch code above checks whether a GPU is available and falls back to the CPU if it is not.
How can the same thing be implemented with TensorFlow?
As long as you have all the necessary libraries installed (drivers, CUDA, etc.), TensorFlow will automatically run on the GPU; no changes are required. You can use the following code to check whether TensorFlow recognizes your GPU:
import tensorflow as tf
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    print("Name:", gpu.name, " Type:", gpu.device_type)
If no GPU is available, this code simply prints nothing, so you know that TensorFlow did not find a GPU to run your model on. In that case, TensorFlow runs normally on the CPU. No explicit code is necessary to switch between GPU and CPU.
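If you do want an explicit counterpart to the PyTorch snippet above, here is a minimal sketch (the shapes are purely illustrative) that pins operations to the GPU when one is visible and to the CPU otherwise:

import tensorflow as tf

# Choose a device string the same way the PyTorch snippet does:
# the first GPU if TensorFlow sees one, otherwise the CPU.
device = '/GPU:0' if tf.config.list_physical_devices('GPU') else '/CPU:0'

# Ops created inside this context are placed on the chosen device.
with tf.device(device):
    a = tf.random.uniform((1000, 1000))
    b = tf.random.uniform((1000, 1000))
    c = tf.matmul(a, b)

print("Ran on:", c.device)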
Related
I first created my TensorFlow code in Python on my GPU using:
import tensorflow-gpu as tf
I used it for training and everything went very well. But now I want to deploy my Python scripts on a device without a GPU. So I uninstalled tensorflow-gpu with pip and imported the normal TensorFlow:
import tensorflow as tf
But when I run the script, it is still using the GPU.
I tried this code:
try:
    # Disable all GPUs
    tf.config.set_visible_devices([], 'GPU')
    visible_devices = tf.config.get_visible_devices()
    print(visible_devices)
    for device in visible_devices:
        assert device.device_type != 'GPU'
except Exception as e:
    # Invalid device or cannot modify virtual devices once initialized.
    print(e)
But it is still not working (even though the GPU seems disabled, as you can see in white on the screenshot).
I just want to return to the default TensorFlow installation without GPU features.
I tried uninstalling and reinstalling tensorflow, and removing the virtual environment and creating a new one; nothing worked.
Thanks for your help!
TensorFlow 2.x has GPU support by default. To know more, please visit the TensorFlow site.
As per the above screenshot, it is showing only the CPU.
Also observe: Adding visible GPU devices: 0
If you still want a CPU-only TensorFlow, use tensorflow==1.15.
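If you just want the GPU build to behave like a CPU-only installation, one workaround is to hide the GPU from CUDA. A minimal sketch (the environment variable must be set before TensorFlow is imported, or it has no effect):

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'      # hide all GPUs from CUDA

import tensorflow as tf                        # import only after setting the variable
print(tf.config.list_physical_devices('GPU'))  # prints [] -> everything runs on the CPU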
I have successfully set up TensorFlow 2.1.0 with access to my GPU:
If I use Keras (from tensorflow import keras) to fit some Sequential model (like in the example here), will the GPU or the CPU be used by default? Is there some command to see which one Keras is using, and can I set this myself? I would really like to see some very basic Keras model trained on GPU vs. CPU to get a better feeling for the difference in performance.
Since TensorFlow 2.1, the GPU and CPU builds are shipped in the same package, tensorflow, unlike previous versions, which had separate packages for CPU and GPU: tensorflow and tensorflow-gpu.
You can test it to get a better feeling this way:
# Use only the CPU (this must be set before TensorFlow is imported)
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
Or you can make your video card visible to TensorFlow, either by leaving the default configuration as it is or by forcing it explicitly via:
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
Note that in the above setting, if you had 4 GPUs for example, you would set:
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'
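To actually see the difference, here is a minimal sketch (the model, data shapes, and epoch count are made up for illustration) that trains the same small Sequential model once on the CPU and once on the GPU and prints the wall-clock time for each:

import time
import tensorflow as tf
from tensorflow import keras

# Dummy data, just so there is something to fit.
x = tf.random.uniform((10000, 100))
y = tf.random.uniform((10000, 1))

def train_on(device):
    with tf.device(device):
        model = keras.Sequential([
            keras.layers.Dense(256, activation='relu', input_shape=(100,)),
            keras.layers.Dense(1),
        ])
        model.compile(optimizer='adam', loss='mse')
        start = time.time()
        model.fit(x, y, epochs=3, batch_size=64, verbose=0)
        return time.time() - start

print("CPU:", train_on('/CPU:0'), "seconds")
if tf.config.list_physical_devices('GPU'):
    print("GPU:", train_on('/GPU:0'), "seconds")

Calling tf.debugging.set_log_device_placement(True) before building the model additionally logs the device each operation is placed on.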
I have installed Keras with GPU support in R, based on TensorFlow with GPU support. It was installed following these steps:
https://towardsdatascience.com/installing-tensorflow-with-cuda-cudnn-and-gpu-support-on-windows-10-60693e46e781
If I run the Boston Housing example code from the book Deep Learning with R, I get this screen:
Can I conclude that the code runs on the GPU?
Or is this line from the picture above giving an error:
GPU libraries are statically linked, skip dlopen check.
While the code is running, the GPU is at only about 3% utilization while the CPU is at 20-25%.
The code is NOT running faster than when I initially ran it without GPU support.
Thank you!
Yes, TensorFlow is running with the GPU enabled. Boston Housing is a relatively small dataset and probably does not benefit much from using the GPU. The line below indicates it is running on the GPU: "Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0)".
From the guide on the TensorFlow site: you can set tf.debugging.set_log_device_placement(True) in order to see explicitly where each operation is running. The R equivalent is below.
library(tensorflow)
tf$debugging$set_log_device_placement(TRUE)
I have converted a tensorflow inference graph to tflite model file (*.tflite), according to instructions from https://www.tensorflow.org/lite/convert.
I tested the tflite model on my GPU server, which has 4 NVIDIA TITAN GPUs. I used tf.lite.Interpreter to load and run the tflite model file.
It works like the original TensorFlow graph; however, the inference became far too slow. When I looked into the reason, I found that GPU utilization is simply 0% while tf.lite.Interpreter is running.
Is there any way to run tf.lite.Interpreter with GPU support?
https://github.com/tensorflow/tensorflow/issues/34536
The CPU is generally good enough for tflite, especially with multiple cores.
NVIDIA GPUs are likely not supported for tflite, which targets mobile GPU platforms.
Conspiracy theory: did they (TF and NVIDIA) shake hands to keep TFLite from working on desktop GPUs? It would be too easy to make one.
Steve
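If you stay on the CPU, it can still help to give the interpreter several threads. A sketch (the model path is a placeholder, and the num_threads argument assumes a recent TF 2.x tf.lite.Interpreter):

import numpy as np
import tensorflow as tf

# Load the converted model and let the TFLite runtime use several CPU threads.
interpreter = tf.lite.Interpreter(model_path="model.tflite", num_threads=4)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input with the expected shape and dtype, then run inference.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])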
System information
OS Platform and Distribution: Linux Ubuntu 16.04
TensorFlow version: tensorflow-gpu (1.7.0)
Python version: Python 3.5.2
CUDA/cuDNN version: CUDA 9.0 cuDNN 7
Describe the problem
I have a CUDA library built from C++ for post-processing the prediction results of a TensorFlow model.
I use the following approach to make the CUDA code callable from Python:
lib = ctypes.cdll.LoadLibrary("my.so")
result = lib.post_process(tensorflow_result)
If I test the CUDA code alone, without TensorFlow, it works fine. (I save the result from TensorFlow, then use cv2.imread to feed it into my CUDA code.)
But when TensorFlow is used in my project, my CUDA code becomes 10 times slower....
My timing log is inside the CUDA .so library, so there is no way the gap comes from the Python-to-.so wrapper.
I have tried to set the fraction of GPU memory to be allocated by TensorFlow:
# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
but it was useless....
So I wonder: does TensorFlow take all the resources of the GPU, making other CUDA code slow?
Is the only solution to register my CUDA code as a TensorFlow op?
Any suggestions? Thanks~~~
----------------------Update----------------------
I have tested what #AnandCU suggested:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
but it does not make my CUDA code as fast as when I test it alone without TensorFlow.
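One thing that might be worth trying, if the machine has more than one GPU (a sketch only, not verified against this setup): pin TensorFlow to a single GPU so the custom CUDA library can run on a different one.

import tensorflow as tf

# Restrict TensorFlow to GPU 0; the custom CUDA .so can then select another
# GPU for itself (e.g. via cudaSetDevice(1) inside the library).
gpu_options = tf.GPUOptions(visible_device_list='0', allow_growth=True)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))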