AttributeError in Classification report - python

The thing is I want to output the precision, recall, and f1-score using classification_report. But when I run the code below, this error occurs. How can I fix the AttributeError?
print(classification_report(test.targets.cpu().numpy(),
File "C:\Users\Admin\PycharmProjects\ImageRotation\venv\lib\site-packages\torch\utils\data\dataset.py", line 83, in __getattr__
raise AttributeError
AttributeError
This is where I load the data from my directory.
data_loader = ImageFolder(data_dir,transform = transformer)
lab = data_loader.classes
num_classes = int(len(lab))
print("Number of Classes: ", num_classes)
print("The classes are as follows : \n",data_loader.classes)
batch_size = 128
train_size = int(len(data_loader) * 0.8)
test_size = len(data_loader) - train_size
train,test = random_split(data_loader,[train_size,test_size])
train_size = int(len(train) * 0.8)
val_size = len(train) - train_size
train_data, val_data = random_split(train,[train_size,val_size])
#load the train and validation into batches.
print(f"Length of Train Data : {len(train_data)}")
print(f"Length of Validation Data : {len(val_data)}")
print(f"Length of Test Data : {len(test)}")
train_dl = DataLoader(train_data, batch_size, shuffle = True)
val_dl = DataLoader(val_data, batch_size*2)
test_dl = DataLoader(test, batch_size, shuffle=True)
The model.eval() code:
with torch.no_grad():
    # set the model in evaluation mode
    model.eval()
    # initialize a list to store our predictions
    preds = []
    # loop over the test set
    for (x, y) in test_dl:
        # send the input to the device
        x = x.to(device)
        # make the predictions and add them to the list
        pred = model(x)
        preds.extend(pred.argmax(axis=1).cpu().numpy())

# generate a classification report
print(classification_report(test.targets.cpu().numpy(),
                            np.array(preds), target_names=test.classes))

It seems ImageFolder is your dataset object, but it is not inherited from torch.utils.data.Dataset.
The torch DataLoader tries to call the __getitem__ method on your dataset object, but since it is not a torch.utils.data.Dataset it does not have that method, which is what causes the AttributeError you are getting.
Convert ImageFolder to a torch Dataset. For further library details: torch doc
Practical implementation: ast_dataloader
Also, you can freeze the model (no backpropagation) to speed up the inference process:
with torch.no_grad():
    # make the predictions and add them to the list
    pred = model(x)
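On the classification_report call itself: since test comes from random_split, reading test.targets is what triggers the AttributeError. A minimal sketch (reusing test_dl, model, device and the lab class list from the question) that instead collects the ground-truth labels while iterating, so nothing has to be read back from the split object:
preds, trues = [], []
model.eval()
with torch.no_grad():
    for x, y in test_dl:
        x = x.to(device)
        out = model(x)
        # collect predictions and ground-truth labels batch by batch
        preds.extend(out.argmax(dim=1).cpu().numpy())
        trues.extend(y.numpy())

print(classification_report(trues, preds, target_names=lab))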
Update:
A sample torch Dataset:
from torch.utils.data import Dataset

class Dataset_train(Dataset):
    def __init__(self, list_IDs, labels, base_dir, noises=None):
        """self.list_IDs : list of strings (each string: utt key),
           self.labels   : dictionary (key: utt key, value: label integer)"""
        self.list_IDs = list_IDs
        self.labels = labels
        self.base_dir = base_dir
        self.noises = noises

    def __len__(self):
        return len(self.list_IDs)

    def __getitem__(self, index):
        key = self.list_IDs[index]
        X, _ = get_sample(f"{self.base_dir}/{key}", self.noises)
        y = self.labels[key]
        return X, y
[Note] get_sample is a custom-built function for reading .wav files. You could replace it with any function.
torch example-1
torch example-2
medium example

Related

Pytorch Data Loader: getattr(): attribute name must be string

Could someone please explain to me why this code:
import torch
from torch_geometric.datasets import TUDataset
from torch.nn import Linear
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.nn import global_mean_pool
from torch_geometric.data import Data, Dataset,DataLoader,DenseDataLoader,InMemoryDataset
from torch_geometric.data import Data, Dataset
from sklearn import preprocessing
device = torch.device('cpu')
torch.backends.cudnn.benchmark = True
import joblib
edge_origins = [0,1,2,3,4,5,6,7,8,10,11,12,13]
edge_destinations = [1,2,3,4,5,6,7,8,9,11,12,13,14]
target = [0,1]
x = [[0.1,0.5,0.2],[0.5,0.6,0.23]]
edge_index = torch.tensor([edge_origins, edge_destinations], dtype=torch.long)
x = torch.tensor(x, dtype=torch.float)
y = torch.tensor(target, dtype=torch.long)
dataset = Data(x=x, edge_index=edge_index, y=y, num_classes = len(set(target))) #making the graph of nodes and edges
train_loader = DataLoader(dataset, batch_size=64, shuffle=True)
for x, y in train_loader:
    print(x)
Generates this error:
for x,y in train_loader:
  File "/root/miniconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 346, in __next__
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/root/miniconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/root/miniconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/root/miniconda3/lib/python3.7/site-packages/torch_geometric/data/data.py", line 92, in __getitem__
    return getattr(self, key, None)
TypeError: getattr(): attribute name must be string
Edit 1, as an update: if I type:
train_loader = DataLoader(dataset, batch_size=64, shuffle=True)
it = iter(train_loader)
print(it)
It returns:
<torch.utils.data.dataloader._SingleProcessDataLoaderIter object at 0x7f4aeb009590>
but then if I try to iterate through this object like this:
for x, i in enumerate(it):
    print(i)
it returns the same error as before.
Edit 2: Just to mention, I am not particularly interested in printing out the data loader attributes; the next thing I want to do is feed the data loader into the code below. When I run the code below with the current data loader, I get the same "attribute name must be string" error at the for data in train_loader line of the train() function:
class GCN(torch.nn.Module):
    def __init__(self, hidden_channels):
        super(GCN, self).__init__()
        torch.manual_seed(12345)
        self.conv1 = GCNConv(dataset.num_node_features, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, hidden_channels)
        self.conv3 = GCNConv(hidden_channels, hidden_channels)
        self.lin = Linear(hidden_channels, dataset.num_classes)

    def forward(self, x, edge_index, batch):
        # 1. Obtain node embeddings
        x = self.conv1(x, edge_index)
        x = x.relu()
        x = self.conv2(x, edge_index)
        x = x.relu()
        x = self.conv3(x, edge_index)
        # 2. Readout layer
        x = global_mean_pool(x, batch)  # [batch_size, hidden_channels]
        # 3. Apply a final classifier
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.lin(x)
        return x

model = GCN(hidden_channels=64)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()

def train():
    model.train()
    for data in train_loader:  # Iterate in batches over the training dataset.
        out = model(data.x, data.edge_index, data.batch)  # Perform a single forward pass.
        loss = criterion(out, data.y)  # Compute the loss.
        loss.backward()  # Derive gradients.
        optimizer.step()  # Update parameters based on gradients.
        optimizer.zero_grad()  # Clear gradients.

def test(loader):
    model.eval()
    correct = 0
    for data in loader:  # Iterate in batches over the training/test dataset.
        out = model(data.x, data.edge_index, data.batch)
        pred = out.argmax(dim=1)  # Use the class with highest probability.
        correct += int((pred == data.y).sum())  # Check against ground-truth labels.
    return correct / len(loader.dataset)  # Derive ratio of correct predictions.

for epoch in range(1, 171):
    train()
    train_acc = test(train_loader)
    test_acc = test(test_loader)
    print(f'Epoch: {epoch:03d}, Train Acc: {train_acc:.4f}, Test Acc: {test_acc:.4f}')
Just do:
for x, i in enumerate(train_loader):
DataLoader objects are already iterables, and enumerate adds a counter to an iterable object. iter produces an iterator over the DataLoader object, and I suspect calling enumerate on an iterator rather than an iterable is resulting in the above error.
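A minimal sketch of the idiomatic loop (assuming train_loader wraps a proper dataset, e.g. a list of Data objects; with torch_geometric's DataLoader each batch is then a Batch object exposing .x, .edge_index, .batch and .y, not an (x, y) pair):
for step, batch in enumerate(train_loader):
    # step is the counter added by enumerate, batch is what the DataLoader yields
    print(step, batch.x, batch.y)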

How to use DistilBERT Huggingface NLP model to perform sentiment analysis on new data?

I am using DistilBERT to do sentiment analysis on my dataset. The dataset contains text and a label for each row which identifies whether the text is a positive or negative movie review (eg: 1 = positive and 0 = negative). Here is the code from the huggingface documentation (https://huggingface.co/transformers/custom_datasets.html?highlight=imdb)
#This dataset can be explored in the Hugging Face model hub (IMDb), and can be alternatively downloaded with the 🤗 Datasets library with load_dataset("imdb").
wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
tar -xf aclImdb_v1.tar.gz
#This data is organized into pos and neg folders with one text file per example. Let’s write a function that can read this in.
from pathlib import Path

def read_imdb_split(split_dir):
    split_dir = Path(split_dir)
    texts = []
    labels = []
    for label_dir in ["pos", "neg"]:
        for text_file in (split_dir/label_dir).iterdir():
            texts.append(text_file.read_text())
            labels.append(0 if label_dir == "neg" else 1)
    return texts, labels

train_texts, train_labels = read_imdb_split('aclImdb/train')
test_texts, test_labels = read_imdb_split('aclImdb/test')
from sklearn.model_selection import train_test_split
train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2)
from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
train_encodings = tokenizer(train_texts, truncation=True, padding=True)
val_encodings = tokenizer(val_texts, truncation=True, padding=True)
test_encodings = tokenizer(test_texts, truncation=True, padding=True)
import torch
class IMDbDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)

train_dataset = IMDbDataset(train_encodings, train_labels)
val_dataset = IMDbDataset(val_encodings, val_labels)
test_dataset = IMDbDataset(test_encodings, test_labels)
#Now that our datasets our ready, we can fine-tune a model either #with the 🤗 Trainer/TFTrainer or with native PyTorch/TensorFlow. See #training.
#Fine-tuning with Trainer
#The steps above prepared the datasets in the way that the trainer is #expected. Now all we need to do is create a model to fine-tune, #define the TrainingArguments/TFTrainingArguments and instantiate a #Trainer/TFTrainer.
from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments
training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=64,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=10,
)
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
trainer = Trainer(
    model=model,                  # the instantiated 🤗 Transformers model to be trained
    args=training_args,           # training arguments, defined above
    train_dataset=train_dataset,  # training dataset
    eval_dataset=val_dataset      # evaluation dataset
)
trainer.train()
#We can also train with Pytorch/Tensorflow
from torch.utils.data import DataLoader
from transformers import DistilBertForSequenceClassification, AdamW
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
model.to(device)
model.train()
train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)
optim = AdamW(model.parameters(), lr=5e-5)
for epoch in range(3):
    for batch in train_loader:
        optim.zero_grad()
        input_ids = batch['input_ids'].to(device)
        attention_mask = batch['attention_mask'].to(device)
        labels = batch['labels'].to(device)
        outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
        loss = outputs[0]
        loss.backward()
        optim.step()

model.eval()
I want to test this model on a new piece of data. So, I have a dataframe which contains a piece of text/review in each row, and I want to predict its label. Does anyone know how I would go about doing that? I apologize, I am very new to this and would greatly appreciate any help! I tried taking in the text, cleaning it, and then doing
prediction = model.predict(text)
and I got an error saying DistilBERT has no attribute .predict.
If you just want to use the model, you can use the corresponding pipeline:
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
Then you can use it:
classifier("I hate this book")
The code that you've shared from the documentation essentially covers the training and evaluation loop. Beware that it contains two ways of fine-tuning: one with the Trainer, which also includes evaluation, and one with native PyTorch/TF, which covers just the training portion and not the evaluation portion.
Here is how the native method can be tweaked to generate predictions on the test set:
# Wrap the test set in a DataLoader so the loop below sees padded batches
# of tensors (the batch size here is an arbitrary choice)
test_loader = DataLoader(test_dataset, batch_size=64)

# Put model in evaluation mode
model.eval()

# Tracking variables for storing ground truth and predictions
predictions, true_labels = [], []

# Prediction Loop
for batch in test_loader:
    # Unpack the inputs from our dataloader and move to GPU/accelerator
    input_ids = batch['input_ids'].to(device)
    attention_mask = batch['attention_mask'].to(device)
    labels = batch['labels'].to(device)

    # Telling the model not to compute or store gradients, saving memory and
    # speeding up prediction
    with torch.no_grad():
        # Forward pass, calculate logit predictions
        outputs = model(input_ids, attention_mask=attention_mask,
                        labels=labels)
    logits = outputs[0]

    # Move logits and labels to CPU
    logits = logits.detach().cpu().numpy()
    label_ids = labels.to('cpu').numpy()

    # Store predictions and true labels
    predictions.append(logits)
    true_labels.append(label_ids)
After the execution of this loop, predictions will contain logits, i.e., the model's raw scores before any normalization into a probability distribution.
You can use the following to pick the label with the maximum score from the logits, and produce a classification report
import numpy as np
from sklearn.metrics import classification_report, accuracy_score
# Combine the results across all batches.
flat_predictions = np.concatenate(predictions, axis=0)
# For each sample, pick the label (0 or 1) with the higher score.
flat_predictions = np.argmax(flat_predictions, axis=1).flatten()
# Combine the correct labels for each batch into a single list.
flat_true_labels = np.concatenate(true_labels, axis=0)
# Accuracy
print(accuracy_score(flat_true_labels, flat_predictions))
# Classification Report
report = classification_report(flat_true_labels, flat_predictions)
For a more elegant way of performing predictions, you can create a BERTModel Class that would contain different methods and variables for handling the tokenization, creation of dataloader, running the predictions, etc.
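A rough sketch of what such a wrapper could look like (the class and method names are illustrative only, and it assumes a recent transformers version whose model outputs expose .logits):
import numpy as np
import torch

class BertPredictor:
    """Illustrative wrapper: tokenize raw texts, batch them, return predicted label ids."""
    def __init__(self, model, tokenizer, device, batch_size=32):
        self.model = model.to(device).eval()
        self.tokenizer = tokenizer
        self.device = device
        self.batch_size = batch_size

    def predict(self, texts):
        preds = []
        for start in range(0, len(texts), self.batch_size):
            chunk = texts[start:start + self.batch_size]
            # tokenize and pad the current chunk of raw strings
            enc = self.tokenizer(chunk, truncation=True, padding=True, return_tensors='pt')
            enc = {k: v.to(self.device) for k, v in enc.items()}
            with torch.no_grad():
                logits = self.model(**enc).logits
            preds.extend(logits.argmax(dim=1).cpu().numpy())
        return np.array(preds)
Something like BertPredictor(model, tokenizer, device).predict(list_of_texts) would then return one label id per text.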
You can try code like this example: Link-BERT
You'll arrange the dataset according to the BERT model; in Section D of that link you can just change the model name and your dataset.

creating a train and a test dataloader

I actually have a directory RealPhotos containing 17,000 JPG photos. I would be interested in creating a train dataloader and a test dataloader.
ls RealPhotos/
2007_000027.jpg 2008_007119.jpg 2010_001501.jpg 2011_002987.jpg
2007_000032.jpg 2008_007120.jpg 2010_001502.jpg 2011_002988.jpg
2007_000033.jpg 2008_007123.jpg 2010_001503.jpg 2011_002992.jpg
2007_000039.jpg 2008_007124.jpg 2010_001505.jpg 2011_002993.jpg
2007_000042.jpg 2008_007129.jpg 2010_001511.jpg 2011_002994.jpg
2007_000061.jpg 2008_007130.jpg 2010_001514.jpg 2011_002996.jpg
2007_000063.jpg 2008_007131.jpg 2010_001515.jpg 2011_002997.jpg
2007_000068.jpg 2008_007133.jpg 2010_001516.jpg 2011_002999.jpg
2007_000121.jpg 2008_007134.jpg 2010_001518.jpg 2011_003002.jpg
2007_000123.jpg 2008_007138.jpg 2010_001520.jpg 2011_003003.jpg
...
I know I can subclass TensorDataset to make it compatible with unlabeled data:
class UnlabeledTensorDataset(TensorDataset):
    """Dataset wrapping unlabeled data tensors.

    Each sample will be retrieved by indexing tensors along the first
    dimension.

    Arguments:
        data_tensor (Tensor): contains sample data.
    """
    def __init__(self, data_tensor):
        self.data_tensor = data_tensor

    def __getitem__(self, index):
        return self.data_tensor[index]
And something along these lines for training the autoencoder
X_train = rnd.random((300, 100))
train = UnlabeledTensorDataset(torch.from_numpy(X_train).float())
train_loader = data_utils.DataLoader(train, batch_size=1)

for epoch in range(50):
    for batch in train_loader:
        data = Variable(batch)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, data)
You first need to define a Dataset (torch.utils.data.Dataset); then you can use a DataLoader on it. There is no difference between your train and test dataset: you can define a generic dataset that looks into a particular directory and maps each index to a unique file.
class MyDataset(Dataset):
    def __init__(self, directory):
        # keep the full path to each file so __getitem__ can open it later
        self.files = [os.path.join(directory, f) for f in os.listdir(directory)]

    def __len__(self):
        return len(self.files)

    def __getitem__(self, index):
        img = Image.open(self.files[index]).convert('RGB')
        return T.ToTensor()(img)
Here T refers to torchvision.transforms and Image is imported from PIL.
You can then instantiate a dataset with:
data_set = MyDataset('./RealPhotos')
From there you can use torch.utils.data.random_split to perform the split:
train_len = int(len(data_set)*0.7)
train_set, test_set = random_split(data_set, [train_len, len(data_set)-train_len])
Then use torch.utils.data.DataLoader as you did:
train_loader = DataLoader(train_set, batch_size=1, shuffle=True)
test_loader = DataLoader(test_set, batch_size=16, shuffle=False)

Training Error in PyTorch - RuntimeError: Expected object of type FloatTensor vs ByteTensor

A minimal working sample will be difficult to post here but basically I am trying to modify this project http://torch.ch/blog/2015/09/21/rmva.html which works smoothly with MNIST. I am trying to run it with my own dataset with a custom dataloader.py as below:
from __future__ import print_function, division #ds
import numpy as np
from utils import plot_images
import os #ds
import pandas as pd #ds
from skimage import io, transform #ds
import torch
from torchvision import datasets
from torch.utils.data import Dataset, DataLoader #ds
from torchvision import transforms
from torchvision import utils #ds
from torch.utils.data.sampler import SubsetRandomSampler
class CDataset(Dataset):
    def __init__(self, csv_file, root_dir, transform=None):
        """
        Args:
            csv_file (string): Path to the csv file with annotations.
            root_dir (string): Directory with all the images.
            transform (callable, optional): Optional transform to be applied
                on a sample.
        """
        self.frame = pd.read_csv(csv_file)
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.frame)

    def __getitem__(self, idx):
        img_name = os.path.join(self.root_dir,
                                self.frame.iloc[idx, 0]+'.jpg')
        image = io.imread(img_name)
        # image = image.transpose((2, 0, 1))
        labels = np.array(self.frame.iloc[idx, 1])  #.as_matrix() #ds
        #landmarks = landmarks.astype('float').reshape(-1, 2)
        #print(image.shape)
        #print(img_name,labels)
        sample = {'image': image, 'labels': labels}
        if self.transform:
            sample = self.transform(sample)
        return sample


class ToTensor(object):
    """Convert ndarrays in sample to Tensors."""
    def __call__(self, sample):
        image, labels = sample['image'], sample['labels']
        #print(image)
        #print(labels)
        # swap color axis because
        # numpy image: H x W x C
        # torch image: C X H X W
        image = image.transpose((2, 0, 1))
        #print(image.shape)
        #print((torch.from_numpy(image)))
        #print((torch.from_numpy(labels)))
        return {'image': torch.from_numpy(image),
                'labels': torch.from_numpy(labels)}
def get_train_valid_loader(data_dir,
                           batch_size,
                           random_seed,
                           #valid_size=0.1, #ds
                           #shuffle=True,
                           show_sample=False,
                           num_workers=4,
                           pin_memory=False):
    """
    Utility function for loading and returning train and valid
    multi-process iterators over the MNIST dataset. A sample
    9x9 grid of the images can be optionally displayed.
    If using CUDA, num_workers should be set to 1 and pin_memory to True.

    Args
    ----
    - data_dir: path directory to the dataset.
    - batch_size: how many samples per batch to load.
    - random_seed: fix seed for reproducibility.
    - #ds valid_size: percentage split of the training set used for
      the validation set. Should be a float in the range [0, 1].
      In the paper, this number is set to 0.1.
    - shuffle: whether to shuffle the train/validation indices.
    - show_sample: plot 9x9 sample grid of the dataset.
    - num_workers: number of subprocesses to use when loading the dataset.
    - pin_memory: whether to copy tensors into CUDA pinned memory. Set it to
      True if using GPU.

    Returns
    -------
    - train_loader: training set iterator.
    - valid_loader: validation set iterator.
    """
    #ds
    #error_msg = "[!] valid_size should be in the range [0, 1]."
    #assert ((valid_size >= 0) and (valid_size <= 1)), error_msg
    #ds

    # define transforms
    #normalize = transforms.Normalize((0.1307,), (0.3081,))
    trans = transforms.Compose([
        ToTensor(), #normalize,
    ])

    # load train dataset
    #train_dataset = datasets.MNIST(
    #    data_dir, train=True, download=True, transform=trans
    #)
    train_dataset = CDataset(csv_file='/home/Desktop/6June17/util/train.csv',
                             root_dir='/home/caffe/data/images/', transform=trans)

    # load validation dataset
    #valid_dataset = datasets.MNIST( #ds
    #    data_dir, train=True, download=True, transform=trans #ds
    #)
    valid_dataset = CDataset(csv_file='/home/Desktop/6June17/util/eval.csv',
                             root_dir='/home/caffe/data/images/', transform=trans)

    num_train = len(train_dataset)
    train_indices = list(range(num_train))
    #ds split = int(np.floor(valid_size * num_train))
    num_valid = len(valid_dataset) #ds
    valid_indices = list(range(num_valid)) #ds

    #if shuffle:
    #    np.random.seed(random_seed)
    #    np.random.shuffle(indices)

    #ds train_idx, valid_idx = indices[split:], indices[:split]
    train_idx = train_indices #ds
    valid_idx = valid_indices #ds

    train_sampler = SubsetRandomSampler(train_idx)
    valid_sampler = SubsetRandomSampler(valid_idx)

    train_loader = torch.utils.data.DataLoader(
        train_dataset, batch_size=batch_size, sampler=train_sampler,
        num_workers=num_workers, pin_memory=pin_memory,
    )
    print(train_loader)

    valid_loader = torch.utils.data.DataLoader(
        valid_dataset, batch_size=batch_size, sampler=valid_sampler,
        num_workers=num_workers, pin_memory=pin_memory,
    )

    # visualize some images
    if show_sample:
        sample_loader = torch.utils.data.DataLoader(
            dataset, batch_size=9, #shuffle=shuffle,
            num_workers=num_workers, pin_memory=pin_memory
        )
        data_iter = iter(sample_loader)
        images, labels = data_iter.next()
        X = images.numpy()
        X = np.transpose(X, [0, 2, 3, 1])
        plot_images(X, labels)

    return (train_loader, valid_loader)
def get_test_loader(data_dir,
                    batch_size,
                    num_workers=4,
                    pin_memory=False):
    """
    Utility function for loading and returning a multi-process
    test iterator over the MNIST dataset.
    If using CUDA, num_workers should be set to 1 and pin_memory to True.

    Args
    ----
    - data_dir: path directory to the dataset.
    - batch_size: how many samples per batch to load.
    - num_workers: number of subprocesses to use when loading the dataset.
    - pin_memory: whether to copy tensors into CUDA pinned memory. Set it to
      True if using GPU.

    Returns
    -------
    - data_loader: test set iterator.
    """
    # define transforms
    #normalize = transforms.Normalize((0.1307,), (0.3081,))
    trans = transforms.Compose([
        ToTensor(), #normalize,
    ])

    # load dataset
    #dataset = datasets.MNIST(
    #    data_dir, train=False, download=True, transform=trans
    #)
    test_dataset = CDataset(csv_file='/home/Desktop/6June17/util/test.csv',
                            root_dir='/home/caffe/data/images/', transform=trans)

    test_loader = torch.utils.data.DataLoader(
        test_dataset, batch_size=batch_size, shuffle=False,
        num_workers=num_workers, pin_memory=pin_memory,
    )

    return test_loader

#for i_batch, sample_batched in enumerate(dataloader):
#    print(i_batch, sample_batched['image'].size(),
#          sample_batched['landmarks'].size())
#
#    # observe 4th batch and stop.
#    if i_batch == 3:
#        plt.figure()
#        show_landmarks_batch(sample_batched)
#        plt.axis('off')
#        plt.ioff()
#        plt.show()
#        break
The other main change I have made is closing off the parameter intake for validation size and shuffling (as I am using a pre-existing train, validation and test split, and I have already shuffled these splits).
My last change is in the train_one_epoch(self, epoch) function in trainer.py, while iterating. I changed this part because formerly x, y were being returned as the strings "image" and "labels" (the keys of the Python dictionary) rather than the batched values.
for i, batch in enumerate(self.train_loader):
    x, y = batch["image"], batch["labels"]
But now I get errors during network training that I cannot figure out, as I am new to PyTorch:
[*] Train on 64034 samples, validate on 18951 samples
Epoch: 1/200 - LR: 0.000300
<torch.utils.data.dataloader.DataLoader object at 0x7fe065fd4f60>
  0%| | 0/64034 [00:00<?, ?it/s]
/home/duygu/recurrent-visual-attention-master/modules.py:106: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
  from_x, to_x = from_x.data[0], to_x.data[0]
/home/duygu/recurrent-visual-attention-master/modules.py:107: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
  from_y, to_y = from_y.data[0], to_y.data[0]

Traceback (most recent call last):
  File "main.py", line 49, in <module>
    main(config)
  File "main.py", line 40, in main
    trainer.train()
  File "/home/duygu/recurrent-visual-attention-master/trainer.py", line 168, in train
    train_loss, train_acc = self.train_one_epoch(epoch)
  File "/home/duygu/recurrent-visual-attention-master/trainer.py", line 252, in train_one_epoch
    h_t, l_t, b_t, p = self.model(x, l_t, h_t)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/duygu/recurrent-visual-attention-master/model.py", line 101, in forward
    g_t = self.sensor(x, l_t_prev)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/duygu/recurrent-visual-attention-master/modules.py", line 214, in forward
    phi_out = F.relu(self.fc1(phi))
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/linear.py", line 55, in forward
    return F.linear(input, self.weight, self.bias)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py", line 992, in linear
    return torch.addmm(bias, input, weight.t())
RuntimeError: Expected object of type torch.FloatTensor but found type torch.ByteTensor for argument #4 'mat1'
I am seeking recommendations on how to fix this error and to understand what is causing it. I get this error even when I run without GPU support. Looking at the initial warning, I wonder if somehow my parameters are being passed empty.
As far as I can tell, because you commented out the normalize / transforms.Normalize operations applied to your dataset, your images don't have their values normalized to floats in [0, 1], and instead keep their byte values in [0, 255].
Try applying data normalization, or at least converting your images to float (32-bit, not 64-bit) values (e.g. in ToTensor, add image = image.float(), or while it is still a numpy array use image.astype(numpy.float32)), before feeding them to your network.
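For instance, the ToTensor transform from the question could be adjusted along these lines, a minimal sketch (the division by 255.0 is an optional rescaling to [0, 1] on top of the float conversion):
import numpy as np
import torch

class ToTensor(object):
    """Convert ndarrays in sample to float tensors."""
    def __call__(self, sample):
        image, labels = sample['image'], sample['labels']
        # numpy image: H x W x C  ->  torch image: C x H x W
        image = image.transpose((2, 0, 1))
        # convert the byte image to 32-bit float (and optionally rescale to [0, 1])
        image = image.astype(np.float32) / 255.0
        return {'image': torch.from_numpy(image),
                'labels': torch.from_numpy(labels)}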

predict_generator and class labels

I am using ImageDataGenerator to generate new augmented images and extract bottleneck features from a pretrained model, but most of the tutorials I see on Keras sample the same number of training samples as there are images in the directory.
train_generator = train_datagen.flow_from_directory(
    train_path,
    target_size=image_size,
    shuffle = "false",
    class_mode='categorical',
    batch_size=1)

bottleneck_features_train = model.predict_generator(
    train_generator, 2 * nb_train_samples // batch_size)
Suppose I want 2 times more images from the above code; how can I get the corresponding class labels for the features extracted from the bottleneck layer, which are stored in the tuple train_generator?
Shouldn't the code in training_generator.py at line 422
x, _ = generator_output
do something like this
=> x, y = generator_output
and return the tuple ([np.concatenate(out) for out in all_outs], y) from predict_generator,
i.e. return the corresponding class labels along with the predicted features all_outs, since there is no way to get the corresponding labels without running the generator twice?
If you're using predict, normally you simply don't want Y, because Y will be the result of the prediction. (You're not training, so you don't need the true labels)
But you can do it yourself:
bottleneck = []
labels = []

for i in range(2 * nb_train_samples // batch_size):
    x, y = next(train_generator)
    bottleneck.append(model.predict(x))
    labels.append(y)

bottleneck = np.concatenate(bottleneck)
labels = np.concatenate(labels)
If you want it with indexing (if your generator supports that):
#...
for epoch in range(2):
    for i in range(nb_train_samples // batch_size):
        x, y = train_generator[i]
        #...
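A fuller sketch of that indexed variant (assuming the generator supports indexing, as a keras.utils.Sequence does, and reusing the names from the loop above):
bottleneck = []
labels = []

# two passes over the generator, mirroring 2 * nb_train_samples // batch_size above
for epoch in range(2):
    for i in range(nb_train_samples // batch_size):
        x, y = train_generator[i]
        bottleneck.append(model.predict(x))
        labels.append(y)

bottleneck = np.concatenate(bottleneck)
labels = np.concatenate(labels)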
