I have successfully set up TensorFlow 2.1.0 with access to my GPU:
If I use Keras (from tensorflow import keras) to fit a Sequential model (like in the example here), will the GPU or the CPU be used by default? Is there a command to see which one Keras is using, and can I set this up myself? I would really like to see a very basic Keras model trained on GPU vs. CPU to get a better feel for the difference in performance.
Since TensorFlow 2.1, the GPU and CPU builds ship in a single package, tensorflow, unlike previous versions, which had separate packages for CPU and GPU: tensorflow and tensorflow-gpu.
You can compare the two yourself to get a feel for the difference:
# Use only the CPU: hide all GPUs from TensorFlow
# (this must be set before TensorFlow is imported)
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
Or you can make your video card visible to TensorFlow, either by leaving the default configuration (all GPUs visible) or by forcing it explicitly:
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
Note that with this setting, if you had four GPUs, for example, you would set:
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'
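Since you asked to see a very basic model timed on both devices, here is a minimal sketch (the layer sizes and the synthetic data are my own illustration): run the script twice, once with CUDA_VISIBLE_DEVICES set to '0' and once set to '-1', and compare the printed times.

# Minimal CPU-vs-GPU timing sketch; set the variable before importing TensorFlow
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'   # change to '-1' for the CPU-only run

import time
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Synthetic data, just to have something to fit
x = np.random.random((10000, 784)).astype('float32')
y = np.random.randint(0, 10, size=(10000,))

model = keras.Sequential([
    keras.layers.Dense(512, activation='relu', input_shape=(784,)),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

start = time.time()
model.fit(x, y, epochs=5, batch_size=128, verbose=0)
device = 'GPU' if tf.config.list_physical_devices('GPU') else 'CPU'
print('Training took %.2f s on the %s' % (time.time() - start, device))

On a typical dense model like this, the GPU run should be noticeably faster, and the gap grows with layer width and batch size.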
Related
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
The code above, written with PyTorch, checks whether a GPU is available and, if not, falls back to the CPU.
How can I implement this using TensorFlow?
As long as you have all the necessary libraries installed (drivers, CUDA, etc.), TensorFlow will automatically run on the GPU; no changes are required. You can use the following code to check whether TensorFlow recognizes your GPU:
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    print("Name:", gpu.name, " Type:", gpu.device_type)
If there is no GPU available, this code will simply print nothing, so you know that TensorFlow did not find a GPU to run your model on. In that case, TensorFlow will run normally on the CPU. No explicit code is necessary to switch between GPU and CPU.
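If you want a direct TensorFlow analogue of the PyTorch one-liner from the question, here is a minimal sketch (assuming TensorFlow 2.x; the device strings are standard, the rest is illustrative):

import tensorflow as tf

# Pick the GPU if TensorFlow found one, else fall back to the CPU
device = '/GPU:0' if tf.config.list_physical_devices('GPU') else '/CPU:0'

with tf.device(device):
    x = tf.random.normal([1000, 1000])
    y = tf.matmul(x, x)  # runs on the chosen device

Note that the explicit tf.device context is optional here, since TensorFlow already prefers the GPU whenever one is visible.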
I am training Keras CNN models for two different applications in Jupyter Notebooks. Given that I want to utilize the full resources of my PC, can I use Keras on the GPU in one notebook while another notebook uses the CPU?
I learned that Keras uses the GPU by default, if one is available, and that I can force Keras to use the CPU, as in "Can Keras with Tensorflow backend be forced to use CPU or GPU at will?". My question is: by running this line of code,
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
will the default settings change in all the running notebooks or in that particular notebook only?
By running this line of code,
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
the settings change in that particular notebook only, because each notebook runs in its own kernel process and environment variables are per-process.
You can also use
os.environ['CUDA_VISIBLE_DEVICES'] = ''
to train on the CPU.
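One caveat worth verifying in your own setup: CUDA_VISIBLE_DEVICES is read when TensorFlow initializes the GPU, so it must be set before TensorFlow is imported in that notebook's kernel. A minimal sketch to check the effect (the GPU listing call assumes TensorFlow 2.x; on 1.x you could use tf.test.is_gpu_available() instead):

# Run this as the first cell of the CPU-only notebook
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # expected: []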
For some complicated reasons I use both TensorFlow and Theano in my Python code, and I have two GPUs that I want them to share. But, as stated in another question, there is a problem with this. I want to know if there is a trick to achieve it, like telling TensorFlow to use only one GPU while Theano uses the other.
For now I can only disable Theano's GPU usage with os.environ['THEANO_FLAGS'] = 'device=cpu,floatX=float64' and let TensorFlow use everything:
os.environ['KERAS_BACKEND'] = 'theano'
os.environ['THEANO_FLAGS'] = 'device=cpu,floatX=float64'
import tensorflow as tf
import keras as ks
I haven't tried this. However, if you have multiple GPUs, you can pin the code to a specific GPU using the following trick:
import tensorflow as tf

with tf.device('/gpu:0'):
    ...  # run the TensorFlow code here

with tf.device('/gpu:1'):
    ...  # run the Theano code here
Hope this helps!
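One caveat about the trick above: tf.device only controls where TensorFlow places its own ops; Theano picks its device from THEANO_FLAGS, just like in the question. Here is an untested sketch of splitting the two cards within one process, assuming TensorFlow 1.x and Theano's libgpuarray backend (where 'cuda1' means the second GPU):

import os

# Pin Theano to the second GPU; this must be set before Theano is imported
os.environ['THEANO_FLAGS'] = 'device=cuda1,floatX=float32'

import tensorflow as tf
import theano

# Restrict TensorFlow to the first GPU and stop it from pre-allocating
# all of that card's memory
config = tf.ConfigProto()
config.gpu_options.visible_device_list = '0'
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

with tf.device('/gpu:0'):
    out = tf.constant([1.0, 2.0]) * 2
print(sess.run(out))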
System information
OS Platform and Distribution: Linux Ubuntu 16.04
TensorFlow version: tensorflow-gpu (1.7.0)
Python version: Python 3.5.2
CUDA/cuDNN version: CUDA 9.0 cuDNN 7
Describe the problem
I have a CUDA library built from C++ for post-processing the prediction results of a TensorFlow model.
I use the following approach to make Python able to call the CUDA code from C++:
lib = ctypes.cdll.LoadLibrary('my.so')
result = lib.post_process(tensorflow_result)
If I test the CUDA code alone, without TensorFlow, it works fine (I save the result from TensorFlow, then use cv2.imread to feed it into my CUDA code).
But when TensorFlow is used in my project, my CUDA code becomes 10 times slower.
My timing is logged inside the CUDA .so library itself, so there is no way the gap comes from the Python-to-.so wrapping.
I have tried to limit the fraction of GPU memory allocated by TensorFlow:
# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
but it did not help.
So I wonder: does TensorFlow take all the resources of the GPU, making other CUDA code slow? Is the only solution to register my CUDA code as a custom TensorFlow op?
Any suggestions? Thanks!
----------------------Update----------------------
I have tested what @AnandCU suggested:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
but it does not bring my CUDA code back to the speed I see when testing it alone, without TensorFlow.
I have little knowledge of using GPUs to train models.
I am using K-means from scikit-learn to train my model.
Since my data is very large, is it possible to train this model on the GPU to reduce computation time? Or could you suggest other ways to use the GPU's power?
My other question: if I use TensorFlow to build K-means as shown in this blog post,
https://blog.altoros.com/using-k-means-clustering-in-tensorflow.html
will it use the GPU or not?
Thank you in advance.
To check if your GPU supports CUDA: https://developer.nvidia.com/cuda-gpus
Scikit-learn does not support CUDA so far. You may want to use TensorFlow instead: https://www.tensorflow.org/install/install_linux
I hope this helps.
If you have a CUDA-enabled GPU with compute capability 3.0 or higher and install the GPU-supported version of TensorFlow, then it will definitely use the GPU for training.
For additional information on NVIDIA's requirements to run TensorFlow with GPU support, check the following link:
https://www.tensorflow.org/install/install_linux#nvidia_requirements_to_run_tensorflow_with_gpu_support
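To make the "will it use the GPU" part concrete: any K-means written with plain TensorFlow ops is placed on the GPU automatically when one is visible. Here is a minimal sketch of a single K-means iteration (my own illustration, written against the TensorFlow 2.x API rather than the 1.x code in that blog post):

import tensorflow as tf

def kmeans_step(points, centroids):
    # Squared distance from every point to every centroid: [n_points, k]
    d = tf.reduce_sum((points[:, None, :] - centroids[None, :, :]) ** 2, axis=2)
    assignments = tf.argmin(d, axis=1, output_type=tf.int32)
    # Each new centroid is the mean of the points assigned to it
    new_centroids = tf.math.unsorted_segment_mean(
        points, assignments, tf.shape(centroids)[0])
    return assignments, new_centroids

points = tf.random.normal([100000, 2])
centroids = points[:5]  # naive initialization: first k points
for _ in range(20):
    assignments, centroids = kmeans_step(points, centroids)

If a GPU is visible, the distance computation and the segment mean both run on it without any extra code.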