I am using machine learning with Detecto in Python. However, whenever I run it, I get a warning saying:
It looks like you're training your model on a CPU. Consider switching to a GPU; otherwise,
this method can take hours upon hours or even days to finish. For more information, see
https://detecto.readthedocs.io/en/latest/usage/quickstart.html#technical-requirements
I have a GPU in the form of an Intel(R) HD Graphics 4600, but for some reason the code is running on the CPU. I have checked out the link it gives, which says:
By default, Detecto will run all heavy-duty code on the GPU if it’s available and on the CPU otherwise.
It recommends using Google Colab if the computer doesn't have a GPU it can use, but I do have one, and I don't want to use Google Colab.
Why is it running on the CPU instead of the GPU? And how can I fix it? The part of my code where I get the warning is:
losses = fitmodel(loader, Test_dataset, epochs=25, lr_step_size=5,
                  learning_rate=0.001, verbose=True)
The code does work; however, it takes ages to run, so I want to be able to run it on the GPU to save time.
The GPU that Detecto is referring to would need to be a CUDA-capable Nvidia GPU, so your Intel(R) HD Graphics 4600 does not meet this criterion.
Detecto uses PyTorch internally, whose GPU support is based on CUDA. So in order to use a GPU, you would need to move to a machine that has a CUDA-capable card.
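A quick way to confirm what Detecto/PyTorch will actually pick up is to check CUDA availability directly; a minimal sketch (this only reports what PyTorch detects, it does not change anything):

import torch

print(torch.cuda.is_available())            # True only if a CUDA-capable Nvidia GPU is detected
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))    # name of the CUDA device PyTorch would use

On a machine with only an Intel HD Graphics 4600, this will print False, which is why Detecto falls back to the CPU.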
I am currently working on 1D convolutional neural networks for time series classification. Recently, I got CUDA working on my GeForce 3080 (which was a pain in itself). However, I noticed some odd behavior when using TensorFlow with CUDA: after training a model, the GPU memory is not released, even after deleting the variables and running garbage collection. I tried resetting the TF graph and closing the TF sessions, but the GPU memory stays allocated. This causes cross-validation to crash, and I have to restart my Python environment every time I want to make changes and retrain my model.
After a tedious search, I found out that people were already struggling with this five years ago. However, I am using TF 2.7 on Ubuntu 20.04.3. Some of my colleagues are using Windows and are not experiencing these problems; they seem to have no issues with models failing to retrain because of already allocated memory.
I found the workaround of using multiple processes, but I wasn't able to get it to work for my model with 10-fold cross-validation.
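For context, that workaround boils down to running each training in its own process, so that all GPU memory is released when the process exits; a minimal sketch of the pattern (train_fold is just a placeholder for the actual per-fold training code):

from multiprocessing import Process

def train_fold(fold_index):
    # placeholder: build and train the model for one CV fold here;
    # all TensorFlow/GPU state lives and dies with this process
    ...

if __name__ == "__main__":
    for fold_index in range(10):
        p = Process(target=train_fold, args=(fold_index,))
        p.start()
        p.join()  # GPU memory is released when the child process exits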
Since the issue has been open for more than five years now and my colleagues are not having any problems, I was wondering whether I am doing something wrong. The issue has most likely been fixed after five years, which is why I think my code is the problem here.
Is there any solution or guide for TF 2.7 and GPU memory allocation?
I have written a piece of scientific code in Python, mainly using the Numpy library (especially fast Fourier transforms), and a bit of Cython. There is nothing in CUDA or anything GPU-related that I am aware of. There is no graphical interface; everything runs in the terminal (I'm using WSL2 on Windows). The whole code is mostly number crunching, nothing fancy at all.
When I run my program, I see that CPU usage is ~100% (to be expected, of course), but GPU usage also rises, to around 5%.
Is it possible that part of the work gets offloaded automatically to the GPU? How else can I explain this small but consistent increase in GPU usage?
Thanks for the help.
No, there is no automatic offloading in Numpy, at least not with the standard Numpy implementation. Note that some specific FFT libraries can use the GPU, but the standard implementation of Numpy uses its own FFT implementation called PocketFFT, based on FFTPack, which does not use the GPU. Cython does not perform any automatic implicit GPU offloading either; the code needs to do that explicitly/manually.
No GPU offloading is performed automatically because GPUs are not faster than CPUs for all tasks, and offloading data to the GPU is expensive, especially with small arrays (due to the relatively high latency of the PCI bus and of kernel calls in such cases). Moreover, this is hard to do efficiently even in cases where the GPU could theoretically be faster.
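To make the contrast concrete, offloading an FFT would have to be done explicitly with a GPU library; a minimal sketch with CuPy (assuming CuPy and a CUDA-capable GPU are installed, which plain Numpy code never requires):

import numpy as np
import cupy as cp  # GPU array library; needs a CUDA-capable Nvidia GPU

x = np.random.rand(1_000_000)

x_gpu = cp.asarray(x)      # explicit transfer from host RAM to GPU memory
y_gpu = cp.fft.fft(x_gpu)  # FFT computed on the GPU
y = cp.asnumpy(y_gpu)      # explicit transfer back to host RAM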
The 5% GPU usage is relative to the current frequency of the GPU, which is often configured to scale adaptively. For example, my discrete Nv-1660S GPU frequency is currently 300 MHz, while it can automatically reach 1.785 GHz. Actually using 5% of a GPU running at 17% of its maximum frequency for the 2D rendering of a terminal is entirely possible. On my machine, printing lines in a loop at 10 FPS in a Windows terminal takes 6% of my GPU, still running at low frequency (0-1% without running anything).
If you want to check the frequency and load of your GPU, there are plenty of tools for that, from vendor tools often installed with the driver to software like GPU-Z on Windows. For Nvidia GPUs, you can list the processes currently using your GPU with nvidia-smi (the equivalent on AMD GPUs is rocm-smi).
I have just started working on this quite big dataset with my new and shiny GTX 1080. I have installed the drivers as well as CUDA, cuDNN, etc.
Observation:
When I run nvidia-smi I get the following picture:
[screenshot of nvidia-smi output]
So, both of the GPUs are using loads of memory (that is good, I think?); however, the GPU utilization is very low for both of them, especially the second GPU.
Explanation: why is this happening?
Any tips on how I can improve the performance, and does anyone know why the GPU utilization is so low?
First of all, the output of nvidia-smi may be misleading.
Have a look at the discussion: nvidia-smi Volatile GPU-Utilization explanation?
Second: unfortunately, Keras doesn't provide an out-of-the-box solution for using multiple GPUs. With TensorFlow as the backend, Keras automatically uses one GPU. You have to use TensorFlow directly if you want to make use of both GPUs.
If you are using Keras version 1.x.x, you can try this solution: Transparent Multi-GPU Training on TensorFlow with Keras.
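For what it's worth, newer TensorFlow releases (2.x) do ship a built-in multi-GPU option via tf.distribute.MirroredStrategy; a minimal sketch (this does not apply to the Keras 1.x setup above):

import tensorflow as tf

# MirroredStrategy replicates the model on all visible GPUs and splits each batch across them
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # model and optimizer must be created inside the strategy scope
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model.compile(optimizer="adam", loss="mse")

# model.fit(...) will then run the training step on both GPUs in parallel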
I have a question regarding TensorFlow that is somewhat critical to the task I'm trying to accomplish.
My scenario is as follows:
1. I have a TensorFlow script that has been set up, trained, and tested. It is working well.
2. The training and testing were done on a devBox with 2 Titan X cards.
3. We now need to port this system to a live-pilot testing stage and are required to deploy it on a virtual machine running Ubuntu 14.04.
Here lies the problem: the VM will not have access to the underlying GPUs and must validate the incoming data in CPU-only mode. My questions:
Will the absence of GPUs hinder the validation process of my ML system? Does TensorFlow, by default, use GPUs for CNN computation, and will the absence of a GPU affect the execution?
How do I run my script in CPU-only mode?
Will setting CUDA_VISIBLE_DEVICES to none help with the validation in CPU-only mode after the system has been trained on GPU boxes?
I'm sorry if this comes across as a noob question, but I am new to TF, and any advice would be much appreciated. Please let me know if you need any further information about my scenario.
Testing with CUDA_VISIBLE_DEVICES set to an empty string will make sure that you don't have anything that depends on a GPU being present, and in theory it should be enough. In practice, there are some bugs in the GPU code path which can get triggered when there are no GPUs (like this one), so you want to make sure your GPU software environment (CUDA version) is the same.
Alternatively, you could compile TensorFlow without GPU support (bazel build -c opt tensorflow); this way you don't have to worry about matching CUDA environments or setting CUDA_VISIBLE_DEVICES.
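For reference, hiding the GPUs can also be done from inside the script, as long as the environment variable is set before TensorFlow is imported; a minimal sketch:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""  # must be set before TensorFlow is imported

import tensorflow as tf  # TensorFlow now sees no GPUs and falls back to the CPU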
I've seen several questions about GPU memory with TensorFlow, but I've installed it on a Pine64 with no GPU support.
That means I'm running it with very limited resources (CPU and RAM only), and TensorFlow seems to want it all, completely freezing my machine.
Is there a way to limit the amount of processing power and memory allocated to Tensorflow? Something similar to bazel's own --local_resources flag?
This will create a session that runs one op at a time, and only one thread per op:

import tensorflow as tf

# Limit TensorFlow to a single thread for both inter-op and intra-op parallelism
sess = tf.Session(config=tf.ConfigProto(inter_op_parallelism_threads=1,
                                        intra_op_parallelism_threads=1))
I'm not sure about limiting memory; it seems to be allocated on demand. I've had TensorFlow freeze my machine when my network wanted 100 GB of RAM, so my solution was to make networks that need less RAM.
For TensorFlow 2.x this has been answered in the following thread:
In TensorFlow 2.x, there is no session anymore. Use the config API directly to set the parallelism at the start of the program:
import tensorflow as tf

tf.config.threading.set_intra_op_parallelism_threads(2)
tf.config.threading.set_inter_op_parallelism_threads(2)

with tf.device('/CPU:0'):
    model = tf.keras.models.Sequential([...])
https://www.tensorflow.org/api_docs/python/tf/config/threading