How to set all tensors to cuda device? - python

My project needs to run its computations on the GPU, but moving every tensor by hand with .to(device) is too tedious.
I used the following, but the tensors still end up on the CPU:
if torch.cuda.is_available():
    torch.set_default_tensor_type(torch.cuda.FloatTensor)

To move an individual tensor to a CUDA device, use the tensor's .to() method, which lets you specify the target device. To make all newly created tensors land on the GPU by default, set the default tensor type instead. Note that torch.set_default_tensor_type() expects a tensor type, not a torch.device. For example, to default to the first CUDA device:
import torch

# Make newly created tensors default to the (first) CUDA device.
# set_default_tensor_type takes a tensor type, not a torch.device.
torch.set_default_tensor_type(torch.cuda.FloatTensor)
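A quick check that the default took effect (this assumes a CUDA device is actually present):
x = torch.zeros(3)
print(x.device)  # cuda:0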
Alternatively, you can specify the device when you create a new tensor using the 'device' argument. For example:
import torch
# Set all tensors to the first CUDA device
device = torch.device("cuda:0")
x = torch.zeros(10, device=device)
This will create a tensor 'x' on the first CUDA device.
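Continuing that snippet, to move an already existing tensor, call .to() and keep the returned value; a tensor's .to() is not in-place:
y = torch.ones(10)  # created on the CPU
y = y.to(device)    # .to() returns a copy on cuda:0, so reassign it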

Related

Unable to train PyTorch model in GPU. Keep getting errors that tensors are not on same device

I have been stuck trying to train my PyTorch model on the GPU. The model works perfectly on the CPU, though. I have been using Google Colab's GPU resources to use cuda.
I know that in order to run a model on the GPU, the model, the input features, and the target need to be on the cuda device.
But no matter what I do in my code, I keep getting one of these errors:
RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu
OR
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Here is my notebook:
https://colab.research.google.com/drive/1rviS_4hmdzPQUncZyi8FsRH7y3jL0isQ
It would be really helpful if someone could tell me exactly which variables need to be moved using .to('cuda').
Additionally, explanations/suggestions for ensuring that this does not recur in the future would be highly appreciated. Thank you!
Your self.hidden is a tuple of torch.tensors. PyTorch doesn't automatically move this kind of tensor to the GPU when .to(device) is invoked on your model.
You can either:
Implement your own to(self, type, device) method for your BiLSTM_CRF class (not recommended).
Make self.hidden a registered buffer. This way all methods of nn.Module such as .to(), .float(), etc. will also be applied to self.hidden; see the sketch below.
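A minimal sketch of the buffer approach, using a made-up module in place of the asker's BiLSTM_CRF (the sizes are illustrative):
import torch
from torch import nn

class TinyLSTM(nn.Module):
    def __init__(self, input_size=4, hidden_size=8):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size)
        # register_buffer takes single tensors, so register the tuple's
        # elements separately; .to(), .cuda(), .float() etc. now move them
        self.register_buffer("h0", torch.zeros(1, 1, hidden_size))
        self.register_buffer("c0", torch.zeros(1, 1, hidden_size))

    def forward(self, x):
        out, _ = self.lstm(x, (self.h0, self.c0))
        return out

model = TinyLSTM()
if torch.cuda.is_available():
    model = model.to("cuda")  # h0 and c0 travel with the parameters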
First, you have to configure the device you want to use; if you are on the GPU, change it to the CPU, and the reverse is also true.

RuntimeError: Attempted to set the storage of a tensor on device "cuda:0" to a storage on different device "cpu"

Earlier I configured the following project
https://github.com/zllrunning/face-makeup.PyTorch
using PyTorch with CUDA 10.2. Now PyTorch with CUDA 10.2 support is no longer available for Windows.
So, when I configure the same project using PyTorch with CUDA 11.3, I get the following error:
RuntimeError: Attempted to set the storage of a tensor on device "cuda:0" to a storage on different device "cpu". This is no longer allowed; the devices must match.
Please help me in solving this problem.
I solved this by adding map_location=lambda storage, loc: storage.cuda() to the model_zoo.load_url call. I think in torch 1.12 they changed the default location from GPU to CPU (which does not make any sense to me).
Edit:
In the file resnet.py, under the function def init_weight(self):, the following line
state_dict = modelzoo.load_url(resnet18_url)
is replaced with
state_dict = modelzoo.load_url(resnet18_url, map_location=lambda storage, loc: storage.cuda())
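If the same code also has to run on CPU-only machines, a more defensive variant (my own suggestion, not from the repo) maps the checkpoint to whichever device is available:
import torch
import torch.utils.model_zoo as modelzoo

# resnet18_url is the URL constant defined in the repo's resnet.py
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
state_dict = modelzoo.load_url(resnet18_url, map_location=device)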
Not specifically related to ResNet, but the code below should work for the majority of issues like "don't know how to restore data location of torch.storage._UntypedStorage (tagged with gpu)":
import torch
import whisper

devices = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = whisper.load_model("medium", device=devices)
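The same map_location idea applies when loading a local checkpoint directly with torch.load, reusing devices from above (the file path here is illustrative):
checkpoint = torch.load("checkpoint.pth", map_location=devices)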

Problems about torch.nn.DataParallel

I am new to the deep learning area. I am currently reproducing a paper's code. Since the authors use several GPUs, the code contains the command torch.nn.DataParallel(model, device_ids=args.gpus).cuda(). But I only have one GPU; what should I change in this code to match my GPU?
Thank you!
DataParallel should work on a single GPU as well, but you should check whether args.gpus only contains the id of the device that is to be used (it should be 0) or is None.
Passing None will make the module use all available devices.
Alternatively, you could remove DataParallel, as you do not need it, and move the model to the GPU by calling model.cuda() or, as I prefer, model.to(device), where device is the device's name.
Example:
This example shows how to use a model on a single GPU, setting the device using .to() instead of .cuda().
from torch import nn
import torch
# Set device to cuda if cuda is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create model
model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)
# Move the model to the selected device
model.to(device)
If you want to use DataParallel, you could do it like this:
# Optional DataParallel, not needed for single GPU usage
model1 = torch.nn.DataParallel(model, device_ids=[0]).to(device)
# Or, using default 'device_ids=None'
model1 = torch.nn.DataParallel(model).to(device)
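For completeness, a minimal forward pass through the wrapped model from the example above (the input shape is my own assumption):
x = torch.randn(8, 1, 28, 28, device=device)  # batch of 8 single-channel 28x28 images
out = model1(x)
print(out.shape)  # torch.Size([8, 64, 20, 20]) after the two 5x5 convolutions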

Using CUDA with pytorch?

Is there a way to reliably enable CUDA on the whole model?
I want to run the training on my GPU. I found on some forums that I need to apply .cuda() to anything I want to use CUDA with (I've applied it to everything I could without making the program crash). Surprisingly, this makes the training even slower.
Then I found that you could use torch.set_default_tensor_type('torch.cuda.FloatTensor') to use CUDA. With both enabled, nothing changes. What is happening?
You can use the tensor.to(device) command to move a tensor to a device.
The .to() command is also used to move a whole model to a device, like in the post you linked to.
Another possibility is to set the device of a tensor during creation using the device= keyword argument, like in t = torch.tensor(some_list, device=device)
To set the device dynamically in your code, you can use
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
to set cuda as your device if possible.
There are various code examples on PyTorch Tutorials and in the documentation linked above that could help you.
With both enabled, nothing changes.
That is because you have already moved every tensor to the GPU.
Is there a way to reliably enable CUDA on the whole model?
model.to('cuda')
I've applied it to everything I could
You only need to apply it to the tensors the model will be interacting with, generally (see the sketch after this list):
the model's parameters: model.to('cuda')
the features data: features = features.to('cuda')
the target data: targets = targets.to('cuda')
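Putting those three together, a minimal device-agnostic training step (the model, data, and hyperparameters are illustrative):
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 1).to(device)        # model parameters on the device
features = torch.randn(32, 10).to(device)  # features on the device
targets = torch.randn(32, 1).to(device)    # targets on the device

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss = nn.MSELoss()(model(features), targets)  # no device-mismatch error
loss.backward()
optimizer.step()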

How to run PyTorch on GPU by default?

I want to run PyTorch using cuda. I set model.cuda() and use torch.cuda.LongTensor() for all tensors.
Do I have to create tensors using .cuda explicitly if I have used model.cuda()?
Is there a way to make all computations run on GPU by default?
I do not think you can specify that you want to use cuda tensors by default.
However, you should have a look at the official PyTorch examples.
In the ImageNet training/testing script, they use a wrapper over the model called DataParallel.
This wrapper has two advantages:
it handles the data parallelism over multiple GPUs
it handles the casting of cpu tensors to cuda tensors
As you can see at L164 of that script, you don't have to manually cast your inputs/targets to cuda.
Note that, if you have multiple GPUs and you want to use a single one, launch any python/pytorch scripts with the CUDA_VISIBLE_DEVICES prefix. For instance CUDA_VISIBLE_DEVICES=0 python main.py.
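If you prefer doing this from inside the script rather than the shell, you can set the environment variable in Python, as long as it happens before torch initializes CUDA (setting it before the import is the safe option):
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # must run before CUDA is initialized

import torch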
Yes. You can set the default tensor type to cuda with:
torch.set_default_tensor_type('torch.cuda.FloatTensor')
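On recent PyTorch versions (2.0 and later, as far as I know) there is also a dedicated switch for the default device:
import torch

# Newly created tensors will be allocated on the GPU by default
torch.set_default_device("cuda")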
Do I have to create tensors using .cuda explicitly if I have used model.cuda()?
Yes, you need to not only set your model [parameter] tensors to cuda, but also those of the data features and targets (and any other tensors used by the model).
