I am trying to use this huggingface model and have been following the example provided, but I am getting an error when loading the tokenizer:
from transformers import AutoTokenizer
task = 'sentiment'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
OSError: Can't load tokenizer for 'cardiffnlp/twitter-roberta-base-sentiment'. Make sure that:
'cardiffnlp/twitter-roberta-base-sentiment' is a correct model identifier listed on 'https://huggingface.co/models'
or 'cardiffnlp/twitter-roberta-base-sentiment' is the correct path to a directory containing relevant tokenizer files
What I find very weird is that I was able to run my script several times, but then ran into this error at some point, even though I don't recall changing anything in the meantime. Does anyone know what the solution is here?
EDIT: Here is my entire script:
from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
task = 'sentiment'
MODEL = f"nlptown/bert-base-multilingual-uncased-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
labels = ['very_negative', 'negative', 'neutral', 'positive', 'very_positive']
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "I love you"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
print(scores)
The error seems to start happening when I run model.save_pretrained(MODEL), but this might be a coincidence.
I just came across this same issue. It seems like a bug with model.save_pretrained(), as you noted.
I was able to resolve it by deleting the directory where the model had been saved (cardiffnlp/) and running again without model.save_pretrained().
Not sure what your application is. For me, re-downloading the model each time takes ~5 s, which is acceptable.
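If you still want a local copy, here is a minimal sketch (my own suggestion, not part of the fix above) that saves the model and tokenizer to a separate local directory so it does not shadow the hub identifier, and loads from that directory on later runs; the directory name is made up:
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "cardiffnlp/twitter-roberta-base-sentiment"
LOCAL_DIR = "./twitter-roberta-base-sentiment-local"  # hypothetical local path

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

# save both the tokenizer and the model so the directory is loadable later
tokenizer.save_pretrained(LOCAL_DIR)
model.save_pretrained(LOCAL_DIR)

# later runs can load from the local directory instead of the hub
tokenizer = AutoTokenizer.from_pretrained(LOCAL_DIR)
model = AutoModelForSequenceClassification.from_pretrained(LOCAL_DIR)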
I want to download the GPT-2 model and tokeniser. For open-ended generation, HuggingFace sets the padding token ID equal to the end-of-sentence token ID, so I configured it manually using:
import tensorflow as tf
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
However, it gives me the following error:
TypeError: ('Keyword argument not understood:', 'pad_token_id')
I haven't been able to find a solution for this, nor do I understand why I am getting this error. Any insights would be appreciated.
Your code does not throw any error for me. I would try re-installing the most recent version of transformers, if that is a viable option for you.
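If upgrading is not possible, a workaround I would try (my own sketch, assuming the loaded model exposes its config object) is to load the model without the keyword argument and set the padding token ID on the config afterwards:
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")

# set the padding token ID after loading instead of passing it as a kwarg
model.config.pad_token_id = tokenizer.eos_token_id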
I'm trying to create a question-answering AI, and I would like it to be as accurate as possible without having to train the model myself.
I can create a simple one using the existing base models, following their documentation:
from transformers import AlbertTokenizer, AlbertForQuestionAnswering
import torch
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForQuestionAnswering.from_pretrained('albert-base-v2')
question, text = "What does He like?", "He likes bears"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
answer_start = torch.argmax(start_scores) # get the most likely beginning of answer with the argmax of the score
answer_end = torch.argmax(end_scores) + 1  # get the most likely end of answer with the argmax of the score
answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0][answer_start:answer_end]))
print(answer)
However, this model doesn't answer questions as accurately as others. On the Hugging Face site I found an example of a fine-tuned model that I'd like to use.
However, the instructions only show how to train such a model. The example on the page works, so clearly a pretrained version of the model exists.
Does anyone know how I can reuse the existing models so I don't have to train one from scratch?
Turns out I just needed to grab the model's full identifier when requesting it:
from transformers import AlbertTokenizer, AlbertForQuestionAnswering
import torch
MODEL_PATH = 'ktrapeznikov/albert-xlarge-v2-squad-v2'
tokenizer = AlbertTokenizer.from_pretrained(MODEL_PATH)
model = AlbertForQuestionAnswering.from_pretrained(MODEL_PATH)
For future reference, this information can be grabbed from the "Use in Transformers" button on the model page.
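As a quick sanity check, the same inference pattern from the base-model example above should work with the fine-tuned checkpoint (a sketch, reusing the tokenizer and model loaded above):
question, text = "What does He like?", "He likes bears"
inputs = tokenizer(question, text, return_tensors='pt')

with torch.no_grad():
    outputs = model(**inputs)

# pick the most likely start and end of the answer span
answer_start = torch.argmax(outputs.start_logits)
answer_end = torch.argmax(outputs.end_logits) + 1
answer = tokenizer.convert_tokens_to_string(
    tokenizer.convert_ids_to_tokens(inputs["input_ids"][0][answer_start:answer_end]))
print(answer)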
Every time I run GPT-2, I receive this message. Is there a way I can get this to go away?
Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Yes, you need to change the log level before you import anything from the transformers library:
import logging
logging.basicConfig(level='ERROR')
from transformers import GPT2LMHeadModel
model = GPT2LMHeadModel.from_pretrained('gpt2')
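Alternatively, assuming a reasonably recent transformers release, you can lower the library's own log level instead of the root logger:
from transformers import logging as hf_logging

hf_logging.set_verbosity_error()  # only show errors from transformers

from transformers import GPT2LMHeadModel
model = GPT2LMHeadModel.from_pretrained('gpt2')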
I have a problem with the fastai library. My code is below:
import os
import pandas as pd
import fastai
from fastai import *
from fastai.text import *
lab = df.columns[0]  # df is the DataFrame holding my data (loaded earlier, not shown)
data_lm = TextLMDataBunch.from_csv(r'/AWD', 'data.csv', label_cols=lab, text_cols=['text'])
data_clas = TextClasDataBunch.from_csv(r'/AWD', 'data.csv', vocab=data_lm.train_ds.vocab, bs=256, label_cols=lab, text_cols=['text'])
data_lm.save('data_lm_export.pkl')
data_clas.save('data_clas.pkl')
learn = language_model_learner(data_lm,AWD_LSTM,drop_mult = 0.3)
learn.lr_find()
learn.recorder.plot(skip_end=10)
learn.fit_one_cycle(10,1e-2,moms=(0.8,0.7))
learn.save('fit_head')
learn.load('fit_head')
My data is quite big, so each epoch in fit_one_cycle lasts about 6 hours. My resources only allow me to train within a 70-hour SLURM job, so my whole script would be cancelled. I wanted to divide my script into pieces, and the first, longest part has to train and save fit_head. Everything was OK, and after that I wanted to load my model to train it again, but I got this error:
RuntimeError: Error(s) in loading state_dict for SequentialRNN:
size mismatch for 0.encoder.weight: copying a param with shape torch.Size([54376, 400]) from checkpoint, the shape in current model is torch.Size([54720, 400]).
I have checked similar problems in GitHub issues and Stack Overflow posts and tried solutions like the one below, but I cannot find anything useful.
data_clas.vocab.stoi = data_lm.vocab.stoi
data_clas.vocab.itos = data_lm.vocab.itos
Is there any way to load the trained model without running into this issue?
When you do learn.save(), only the model weights (the state dict) are saved to disk, not the code that defines the model architecture.
To train the model in a different session, you must first define the model itself; remember to use the same code to define your new model. Since your data is quite heavy, as you mentioned, you can use a very small subset (~16 records) of it to create this new learner, then do learn.load(model_path) and you should be able to resume training.
You can then modify the training data with learn.data.train_dl = new_dl.
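Here is a minimal sketch of that workflow, assuming fastai v1 and reusing lab and new_dl from the code above; 'data_small.csv' is a hypothetical ~16-row subset of the original file:
from fastai.text import TextLMDataBunch, language_model_learner, AWD_LSTM

# rebuild the learner from the tiny subset so the model architecture exists in memory
data_small = TextLMDataBunch.from_csv(r'/AWD', 'data_small.csv', label_cols=lab, text_cols=['text'])
learn = language_model_learner(data_small, AWD_LSTM, drop_mult=0.3)

# reload the weights saved in the first session
learn.load('fit_head')

# swap in the DataLoader for the full training data and continue training
learn.data.train_dl = new_dl
learn.fit_one_cycle(10, 1e-2, moms=(0.8, 0.7))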
I am using theano, sklearn and numpy in Python. I found this code for saving my trained network and predicting on my new dataset at this link: https://github.com/lzhbrian/RBM-DBN-theano-DL4J/blob/master/src/theano/code/logistic_sgd.py. The part of the code I am using is this:
"""
An example of how to load a trained model and use it
to predict labels.
"""
def predict():
# load the saved model
classifier = pickle.load(open('best_model.pkl'))
# compile a predictor function
predict_model = theano.function(
inputs=[classifier.input],
outputs=classifier.y_pred)
# We can test it on some examples from test test
dataset='mnist.pkl.gz'
datasets = load_data(dataset)
test_set_x, test_set_y = datasets[2]
test_set_x = test_set_x.get_value()
predicted_values = predict_model(test_set_x[:10])
print("Predicted values for the first 10 examples in test set:")
print(predicted_values)
if __name__ == '__main__':
sgd_optimization_mnist()
The code for the neural network model I want to save, load and predict with is https://github.com/aseveryn/deep-qa. I can save and load the model with cPickle, but I keep getting errors in the "# compile a predictor function" part:
predict_model = theano.function(inputs=[classifier.input],outputs=classifier.y_pred)
Actually, I am not certain what I need to pass as inputs for my code. Which of the following is right?
inputs=[main.predict_prob_batch.batch_iterator],
outputs=test_nnet.layers[-1].y_pred)

inputs=[predict_prob_batch.batch_iterator],
outputs=test_nnet.layers[-1].y_pred)

inputs=[MiniBatchIteratorConstantBatchSize.dataset],
outputs=test_nnet.layers[-1].y_pred)

inputs=[sgd_trainer.MiniBatchIteratorConstantBatchSize.dataset],
outputs=test_nnet.layers[-1].y_pred)
Or none of them?
For each one I tried, I got one of these errors:
ImportError: No module named MiniBatchIteratorConstantBatchSize
or
NameError: global name 'predict_prob_batch' is not defined
I would really appreciate it if you could help me.
I also tried running the code with these commands, but I still get the errors:
python -c 'from run_nnet import predict; from sgd_trainer import MiniBatchIteratorConstantBatchSize; from MiniBatchIteratorConstantBatchSize import dataset; print predict()'
python -c 'from run_nnet import predict; from sgd_trainer import *; from MiniBatchIteratorConstantBatchSize import dataset; print predict()'
Thank you, and please let me know if you know a better way to run predictions on a new dataset with a loaded, trained model.