I am training Keras CNN models for two different applications in Jupyter Notebook. Since I want to utilize the full resources of my PC, can I use Keras with the GPU in one notebook and with the CPU in another notebook?
I learned that Keras uses the GPU by default, if one is available, and that I can force Keras to use the CPU, as described in Can Keras with Tensorflow backend be forced to use CPU or GPU at will?. My question is: by running this line of code,
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
will the default settings change in all the running notebooks or in that particular notebook only?
By running this line of code,
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
the default settings change in that particular notebook only.
You can use
os.environ['CUDA_VISIBLE_DEVICES'] = ''
to train on the CPU.
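A minimal sketch of how this plays out across two notebooks, assuming TensorFlow 2.x for the device-listing call: each notebook runs in its own kernel process, and CUDA_VISIBLE_DEVICES only affects the process it is set in, provided it is set before TensorFlow is first imported.
# In the CPU-only notebook: hide the GPU from this kernel process only.
# This must run before TensorFlow is imported for the first time.
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # expected: []

# The GPU notebook is a separate kernel process; importing TensorFlow there
# without setting the variable leaves the GPU visible as usual.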
When I predict with a pre-trained VGG16-based checkpoint loaded via load_weights, I always get the same class and output tensor. This only happens when I use my GPU (I have 2 GPUs).
If, on the other hand, I only use my CPU by setting:
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
I get the expected results.
This happens for me in Jupyter Notebooks, but it does not happen in Google Colab. In Google Colab I get accurate results no matter what.
I am using Keras 2.4.3
TensorFlow 2.4.1
CUDA 11.2
I first created my TensorFlow code in Python on my GPU using:
import tensorflow-gpu as tf
I used it for training purposes and everything went very well. But now I want to deploy my Python scripts on a device without a GPU, so I uninstalled tensorflow-gpu with pip and imported the normal TensorFlow:
import tensorflow as tf
But when I run the script, it is still using the GPU:
I tried this code:
try:
    # Disable all GPUs
    tf.config.set_visible_devices([], 'GPU')
    visible_devices = tf.config.get_visible_devices()
    print(visible_devices)
    for device in visible_devices:
        assert device.device_type != 'GPU'
except Exception as e:
    # Invalid device or cannot modify virtual devices once initialized.
    print(e)
But it is still not working (even though the GPU seems disabled, as you can see in white on the screenshot).
I just want to return to the default TensorFlow installation without GPU features.
I tried uninstalling and reinstalling tensorflow, and removing the virtual environment and creating a new one, but nothing worked.
Thanks for your help!
TensorFlow 2.x has GPU support by default. To know more, please visit the TensorFlow site.
As per the above screenshot, it is showing only the CPU.
Also observe Adding visible GPU devices: 0 in the log.
If you still want a CPU-only TensorFlow, use tensorflow==1.15.
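Not part of the answer above, but assuming the script stays on TensorFlow 2.x: tf.config.set_visible_devices only takes effect if it runs before the GPU has been initialized, so a minimal sketch is to hide the GPU immediately after the import, before any other TensorFlow calls.
import tensorflow as tf

# Hide the GPU before any tensors, models, or layers are created; once the
# runtime has initialized the GPU, the visible-device list can no longer change.
tf.config.set_visible_devices([], 'GPU')

print(tf.config.get_visible_devices('GPU'))  # expected: []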
I have a transformers model saved using BertModel.from_pretrained('test_model').
I trained this model using Google Colab's GPUs.
Then I want to open it with BertModel.from_pretrained('test_model/'),
but I do not have a GPU in my local PC. I get this:
/home/seiji/.local/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
What should I do? I have no idea how to open it using a CPU. Is it even possible?
The best thing you can do is save the CPU version of the model, i.e.:
model.cpu().save_pretrained("model_directory")
All the pre-trained Huggingface models are saved as CPU models anyway, and you always need to move them to the GPU explicitly.
PyTorch allows loading GPU models on the CPU (see https://discuss.pytorch.org/t/on-a-cpu-device-how-to-load-checkpoint-saved-on-gpu-device/349), but the arguments of torch.load you would need to set are not exposed via the API, so you would need to write your own from_pretrained method.
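For reference, a minimal sketch of the torch.load argument the answer is alluding to; the checkpoint file name is an assumption based on the usual save_pretrained layout and may differ in your directory.
import torch

# Remap tensors that were stored on a GPU onto the CPU at load time.
state_dict = torch.load('test_model/pytorch_model.bin',
                        map_location=torch.device('cpu'))
print(type(state_dict))  # a dict of parameter tensors, now on the CPU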
I have successfully set up TensorFlow 2.1.0 with access to my GPU:
If I use Keras (from tensorflow import keras) to fit some Sequential model (like in the example here), will the GPU or the CPU be used by default? Is there some command to see which one Keras is using, and can I somehow set this up myself? I would really like to see some very basic Keras model trained on GPU vs. CPU to get a better feeling for the difference in performance.
Since TensorFlow 2.1, the GPU and CPU builds ship in the same package, tensorflow, unlike previous versions, which had separate packages for CPU and GPU: tensorflow and tensorflow-gpu.
You can experiment to get a better feeling for the difference in this way:
# Use only CPU
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
Or you can make your video card visible to TensorFlow, either by relying on the default configuration (the GPU is visible by default) or by forcing it explicitly via:
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
Note that in the above setting, if you had 4 GPUs for example, you would set:
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'
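Not from the answer above, but a small sketch of how one could check where operations actually run in TensorFlow 2.x, using device-placement logging and the physical-device list:
import tensorflow as tf

# Print the device (CPU or GPU) chosen for each operation.
tf.debugging.set_log_device_placement(True)

print(tf.config.list_physical_devices())  # lists the CPU and any visible GPUs

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)  # the log line shows whether this ran on /GPU:0 or /CPU:0
print(b)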
I want to train a 5-layer DNN using TensorFlow in a Jupyter Notebook. It performs well in normal training.
But when I use cross-validation to find a good dropout rate, Jupyter reports during training that the kernel has died.
The Jupyter log:
terminate called after throwing an instance of 'std::system_error'
what(): Resource temporarily unavailable
My code is here.
I Googled and found that it may be because it runs out of memory. I tried reducing the batch size, but the error still occurred.
The code runs on Ubuntu 16.04 with 32 GB RAM and a GTX 1080 Ti GPU. The environment is Python 3.5, tensorflow (1.3.0) & tensorflow-gpu (1.3.0).
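One way to test the out-of-memory hypothesis on tensorflow-gpu 1.3, sketched here as an assumption rather than a confirmed fix, is to stop the session from pre-allocating the whole GPU:
import tensorflow as tf

# Ask TensorFlow 1.x to allocate GPU memory on demand instead of grabbing
# the whole 11 GB of the 1080 Ti up front.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    x = tf.constant(1.0)
    print(sess.run(x))  # the actual training graph would run here instead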