I have an AMD graphics card, so I have to use OpenCL. After a long installation process I almost got it working, and the only thing I am unable to do is use convolutional layers. I get this error:
AssertionError: AbstractConv2d Theano optimization failed: there is no implementation available supporting the requested options. Did you exclude both "conv_dnn" and "conv_gemm" from the optimizer? If on GPU, is cuDNN available and does the GPU support it? If on CPU, do you have a BLAS library installed Theano can link against?
So, is there a way to use convolutional layers in lasagne on GPU using OpenCL?
The Lasagne docs note that when compiling on the GPU, it will use a cuDNN implementation, and if that fails it will fall back to a CPU-based implementation. Unfortunately, there seems to be no way to use Lasagne with a card that only supports OpenCL.
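If the goal is just to get the convolutional layers running at all, you can force Theano onto its CPU backend so the CPU convolution implementation is used. A minimal sketch, assuming the flags are set before theano or lasagne is imported anywhere in the process:
import os
# Force the CPU backend so convolutions use the CPU implementation
# (must run before the first `import theano`)
os.environ['THEANO_FLAGS'] = 'device=cpu,floatX=float32'
import theano
import lasagne
This only works around the error; it does not give you OpenCL-accelerated convolutions.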
I have successfully set up TensorFlow 2.1.0 with access to my GPU:
If I use Keras (from tensorflow import keras) to fit some Sequential model (like in the example here), will the GPU or the CPU be used by default? Is there some command to see which one Keras is using, and can I set this myself? I would really like to see a very basic Keras model trained on GPU vs. CPU to get a better feeling for the difference in performance.
Since TensorFlow 2.1, the GPU and CPU builds ship in the same package, tensorflow, unlike previous versions, which had separate packages for CPU and GPU: tensorflow and tensorflow-gpu.
You can test to have a better feeling in this way:
# Use only the CPU (set before TensorFlow is imported)
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
Or you can make your video card visible to TensorFlow, either by leaving the default configuration as it is, or by forcing it explicitly via:
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
Note that in the above setting, if you had 4 GPUs for example, you would set:
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'
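To answer the "which one is in use" part, here is a quick check you can run, assuming TensorFlow 2.1 or newer (the same API Keras uses under the hood):
import tensorflow as tf
# List the GPUs TensorFlow can see; an empty list means Keras will train on the CPU
print(tf.config.list_physical_devices('GPU'))
# Optionally log where each operation is placed while the model trains
tf.debugging.set_log_device_placement(True)
If at least one GPU is listed, a plain Keras Sequential model will be placed on it by default; the CUDA_VISIBLE_DEVICES settings above only change which devices end up in that list.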
I have installed Keras with GPU support in R, based on TensorFlow with GPU support. It was installed following these steps:
https://towardsdatascience.com/installing-tensorflow-with-cuda-cudnn-and-gpu-support-on-windows-10-60693e46e781
If I run the Boston housing example code from the book Deep Learning with R, I receive this screen:
Can I conclude that the code runs on the GPU?
Or is this line from the picture above giving an error:
GPU libraries are statically linked, skip dlopen check.
During running the code the GPU is running only on 3% of capacity while the CPU is running on 20-25%.
The code is NOT running faster than when I initially ran it without GPU support.
Thank you!
Yes, TensorFlow is running with the GPU enabled. Boston Housing is a relatively small dataset and probably does not benefit much from using the GPU. The line below indicates it is running on the GPU: "Created tensorflow device (/job:localhost/replica:0/task:0/device:GPU:0)".
From the TensorFlow guide:
You can set tf.debugging.set_log_device_placement(True) in order to explicitly see where each operation is running. The R equivalent is below.
library(tensorflow)
tf$debugging$set_log_device_placement(TRUE)
I have converted a tensorflow inference graph to tflite model file (*.tflite), according to instructions from https://www.tensorflow.org/lite/convert.
I tested the tflite model on my GPU server, which has 4 Nvidia TITAN GPUs. I used the tf.lite.Interpreter to load and run tflite model file.
It works like the original TensorFlow graph; however, the inference has become too slow. When I looked for the reason, I found that GPU utilization is simply 0% while tf.lite.Interpreter is running.
Is there any way to run tf.lite.Interpreter with GPU support?
https://github.com/tensorflow/tensorflow/issues/34536
The CPU is usually good enough for TFLite, especially with multiple cores.
NVIDIA GPUs are likely not supported by TFLite, which targets mobile GPU platforms.
Conspiracy theory: did they (TF and NVIDIA) agree not to let TFLite work on desktop GPUs? It would be too easy to make one.
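If you do stay on the CPU, you can at least make use of several cores. A minimal sketch of driving the interpreter this way, assuming a recent TensorFlow where tf.lite.Interpreter accepts num_threads ('model.tflite' is a placeholder file name):
import numpy as np
import tensorflow as tf

# Load the converted model and let the CPU kernels use several cores
interpreter = tf.lite.Interpreter(model_path='model.tflite', num_threads=4)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run inference on a dummy input with the expected shape and dtype
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])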
I trained my tf model in python:
with sv.managed_session(master='') as sess:
    with tf.device("/gpu:1"):  # my system has 4 NVIDIA cards
and used the command line to freeze the model:
freeze_graph.py --clear_devices False
and during the test phase, I set the device as follows:
tensorflow::graph::SetDefaultDevice("/gpu:1", &tensorflow_graph);
but something is wrong:
Could not create TensorFlow Graph:
Invalid argument: Cannot assign a device to node '.../RNN_backword/while/Enter':
Could not satisfy explicit device specification '/gpu:1'
because no devices matching that specification are registered in this process;
available devices: /job:localhost/replica:0/task:0/cpu:0
So, how can I use the GPU correctly?
Could anyone help?
Is it possible you're using a version of TensorFlow without GPU support enabled? If you're building a binary you may need to add additional BUILD rules from //tensorflow that enable GPU support. Also ensure you enabled GPU support when running configure.
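One quick way to confirm whether this process registers a GPU at all: run the following in the same Python environment (a sketch using the TF 1.x Python API from the question):
from tensorflow.python.client import device_lib
# Prints every device registered in this process; if only /cpu:0 shows up,
# the installed or compiled TensorFlow has no GPU support
print(device_lib.list_local_devices())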
EDIT: Can you file a bug on TF's github issues with:
1) your BUILD rule
2) much more of your code so we can see how you're building your model and creating your session
3) how you ran configure
While this API is not yet marked "public", we want to see if there's indeed a bug you are running into so we can fix it.
I have little knowledge of using a GPU to train a model.
I am using K-means from scikit-learn to train my model.
Since my data is very large, is it possible to train this model using a GPU to reduce computation time?
Or could you please suggest any methods for using the GPU's power?
The other question is: if I use TensorFlow to build the K-means as shown in this blog,
https://blog.altoros.com/using-k-means-clustering-in-tensorflow.html
will it use the GPU or not?
Thank you in advance.
To check if your GPU supports CUDA: https://developer.nvidia.com/cuda-gpus
Scikit-learn does not support CUDA so far. You may want to use TensorFlow instead: https://www.tensorflow.org/install/install_linux
I hope this helps.
If you have a CUDA-enabled GPU with compute capability 3.0 or higher and install the GPU-supported version of TensorFlow, then it will definitely use the GPU for training.
For additional information on the NVIDIA requirements to run TensorFlow with GPU support, check the following link:
https://www.tensorflow.org/install/install_linux#nvidia_requirements_to_run_tensorflow_with_gpu_support
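As a rough illustration of why a TensorFlow K-means runs on the GPU: the heavy work is ordinary tensor ops (broadcasted distances and reductions), which TensorFlow places on the GPU automatically when one is available. A minimal sketch of Lloyd's algorithm using the TF 2.x eager API (not the code from the linked blog):
import tensorflow as tf

def kmeans(points, k, num_iters=20):
    # points: (n, d) float32 tensor; start from k randomly chosen points
    centroids = tf.random.shuffle(points)[:k]
    for _ in range(num_iters):
        # Squared distance from every point to every centroid: shape (n, k)
        dists = tf.reduce_sum(tf.square(points[:, None, :] - centroids[None, :, :]), axis=-1)
        assignments = tf.argmin(dists, axis=1)
        # Recompute each centroid as the mean of its assigned points
        # (note: an empty cluster would produce NaN; ignored in this sketch)
        centroids = tf.stack([
            tf.reduce_mean(tf.boolean_mask(points, tf.equal(assignments, c)), axis=0)
            for c in range(k)
        ])
    return centroids, assignments

points = tf.random.normal((100000, 2))
centroids, assignments = kmeans(points, k=5)
On a GPU-enabled install, calling tf.debugging.set_log_device_placement(True) before running this will show the ops landing on /device:GPU:0.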