Use a saved trained model to predict on a new dataset - Python

I am using Theano, sklearn and NumPy in Python. I found this code for saving my trained network and predicting on my new dataset at this link: https://github.com/lzhbrian/RBM-DBN-theano-DL4J/blob/master/src/theano/code/logistic_sgd.py. The part of the code I am using is this:
"""
An example of how to load a trained model and use it
to predict labels.
"""
def predict():
# load the saved model
classifier = pickle.load(open('best_model.pkl'))
# compile a predictor function
predict_model = theano.function(
inputs=[classifier.input],
outputs=classifier.y_pred)
# We can test it on some examples from test test
dataset='mnist.pkl.gz'
datasets = load_data(dataset)
test_set_x, test_set_y = datasets[2]
test_set_x = test_set_x.get_value()
predicted_values = predict_model(test_set_x[:10])
print("Predicted values for the first 10 examples in test set:")
print(predicted_values)
if __name__ == '__main__':
sgd_optimization_mnist()
The code for the neural network model I want to save, load and predict with is https://github.com/aseveryn/deep-qa. I could save and load the model with cPickle, but I continuously get errors in the # compile a predictor function part:
predict_model = theano.function(inputs=[classifier.input],outputs=classifier.y_pred)
Actually I am not certain what I need to pass as the inputs for my code. Which of these is right?
inputs=[main.predict_prob_batch.batch_iterator],
outputs=test_nnet.layers[-1].y_pred)

inputs=[predict_prob_batch.batch_iterator],
outputs=test_nnet.layers[-1].y_pred)

inputs=[MiniBatchIteratorConstantBatchSize.dataset],
outputs=test_nnet.layers[-1].y_pred)

inputs=[sgd_trainer.MiniBatchIteratorConstantBatchSize.dataset],
outputs=test_nnet.layers[-1].y_pred)
Or none of them?
With each one I tried, I got one of these errors:
ImportError: No module named MiniBatchIteratorConstantBatchSize
or
NameError: global name 'predict_prob_batch' is not defined
I would really appreciate it if you could help me.
I also tried these commands to run the code, but I still get the errors:
python -c 'from run_nnet import predict; from sgd_trainer import MiniBatchIteratorConstantBatchSize; from MiniBatchIteratorConstantBatchSize import dataset; print predict()'
python -c 'from run_nnet import predict; from sgd_trainer import *; from MiniBatchIteratorConstantBatchSize import dataset; print predict()'
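From what I understand so far, theano.function expects its inputs to be the symbolic Theano variable(s) the graph was built on, not a data iterator, and the actual data is only passed when the compiled function is called. A minimal sketch of that pattern (the attribute names and the input shape here are illustrative assumptions, not taken from the deep-qa repository):

import pickle
import numpy
import theano

# Load the pickled network. For this pattern to work, the object must expose
# its symbolic input variable and its symbolic prediction output.
classifier = pickle.load(open('best_model.pkl', 'rb'))

# 'inputs' must be the symbolic variable(s) the graph was built on.
predict_model = theano.function(
    inputs=[classifier.input],
    outputs=classifier.y_pred)

# The real data (a plain numpy array with the expected shape/dtype) is only
# supplied here, when the compiled function is called.
new_data = numpy.zeros((10, 784), dtype=theano.config.floatX)  # hypothetical shape
print(predict_model(new_data))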
Thank you, and please let me know if you know a better way to predict on a new dataset with a loaded trained model.

Related

Loading TFlite model for Inference (Python)

I'm using TensorFlow Lite to train an image classifier. I now have a bunch of *.tflite models stored, and I'm trying to write some code that lets me pick a tflite model file, pick a dataset, and test that model on that dataset (inference).
When I train a model using:
model = image_classifier.create(trainData, validation_data=valData, shuffle=True, use_augmentation=False)
I am able to easily test this model on a test dataset right away because the model is actually stored in the variable 'model', by using:
model.evaluate_tflite('model.tflite', test_data)
or
loss, accuracy = model.evaluate(test_data)
However, if I simply want to load an already existing *.tflite model, without having trained it in the same run, I can't figure out a simple way to do that.
Following these instructions, it seems to be a lot of steps for what I'm trying to do. In other machine learning libraries (like PyTorch), you can define the model, quickly load the saved weights, and get straight to testing, like:
model = models.densenet201(progress=True, pretrained=pretrained)
model.load_state_dict(torch.load("models/model.pt"))
Is there a simple way for me to initialise the model into the 'model' variable, load the saved weights from a *.tflite file, and then run inference?
Thank you for your help
A simple example of image classification:
import tensorflow as tf
import numpy as np
import cv2


class TFLiteModel:
    def __init__(self, model_path: str):
        # Load the .tflite file and allocate its tensors once, up front.
        self.interpreter = tf.lite.Interpreter(model_path)
        self.interpreter.allocate_tensors()
        self.input_details = self.interpreter.get_input_details()
        self.output_details = self.interpreter.get_output_details()

    def predict(self, *data_args):
        # Expect one array per model input, copy them in, run, and read the first output.
        assert len(data_args) == len(self.input_details)
        for data, details in zip(data_args, self.input_details):
            self.interpreter.set_tensor(details["index"], data)
        self.interpreter.invoke()
        return self.interpreter.get_tensor(self.output_details[0]["index"])


model = TFLiteModel("mobilenet_v2_1.0_224_1_default_1.tflite")

# Preprocess a single image to the model's expected 224x224 float input in [-1, 1].
image = cv2.imread("hand_blower.png")
image = cv2.resize(image, (224, 224))
image = image.astype(np.float32)[np.newaxis]
image = (image - 127.5) / 127.5

label = model.predict(image)[0].argmax()
print(label)
Please refer to the official documentation for detailed information:
https://www.tensorflow.org/api_docs/python/tf/lite/Interpreter
The model was loaded from:
https://tfhub.dev/tensorflow/lite-model/mobilenet_v2_1.0_224/1/default/1
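If it is unclear what shape or dtype a given *.tflite model expects, the interpreter's input details can be printed first (a small sketch using the same tf.lite.Interpreter API as above):

import tensorflow as tf

interpreter = tf.lite.Interpreter("mobilenet_v2_1.0_224_1_default_1.tflite")
interpreter.allocate_tensors()

# Each entry describes one input tensor: its name, shape, dtype, quantization, etc.
for details in interpreter.get_input_details():
    print(details["name"], details["shape"], details["dtype"])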

How to load a Fastai model and predict on a single image

I have trained a fastai model using a Kaggle notebook and it has saved the model, but how to load the model is the problem. I have tried different methods, like the one given below. Even when it does load the model, it doesn't have any predict function; the only thing I can see is model.eval().
The second problem is that when the model was trained on Google Colab it didn't even accept a single image; I tried converting the image to a NumPy array, and another way, but neither worked out.
I am attaching the Kaggle link of the model training, the saved model, and the test images at the end, after this code.
#Code for Loading model
from fastai import *
from fastai.vision import *
import torch
loc = torch.load('/content/gdrive/MyDrive/Data Exports/35k data/stage-1.pth')
body = create_body(models.resnet18, True, None)
data_classes = 4
nf = callbacks.hooks.num_features_model(body) * 2
head = create_head(nf, data_classes, None, ps=0.5, bn_final=False)
model = nn.Sequential(body, head)
Kaggle Model
Test Images From Kaggle Dataset
Saved Model
How to load PyTorch models:
loc = torch.load('/content/gdrive/MyDrive/Data Exports/35k data/stage-1.pth')
model = ... # build your model
model.load_state_dict(loc)
model.eval()
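Concretely, with the architecture code from the question above, that could look roughly like this (a sketch, not tested against your checkpoint; note that fastai's learn.save may wrap the weights in a dict under a 'model' key, which the sketch checks for):

from fastai import *
from fastai.vision import *
import torch

# Rebuild the architecture exactly as it was built for training (data_classes = 4, as in the question).
body = create_body(models.resnet18, True, None)
nf = callbacks.hooks.num_features_model(body) * 2
head = create_head(nf, 4, None, ps=0.5, bn_final=False)
model = nn.Sequential(body, head)

# Load the checkpoint; if it was saved with optimizer state, the weights live under 'model'.
loc = torch.load('/content/gdrive/MyDrive/Data Exports/35k data/stage-1.pth')
state = loc['model'] if isinstance(loc, dict) and 'model' in loc else loc
model.load_state_dict(state)
model.eval()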
Now you should be able to simply use the forward pass to generate your predictions:
input = ... # your input image
pred = model(input) # your class predictions
Don't forget to convert your inputs to torch tensors first; you might want to use a DataLoader for ease of use.
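For a single image, that conversion to a tensor could look like this (a sketch; the file name, input size, and normalisation are assumptions, so match them to whatever was used for training):

import torch
from PIL import Image
import torchvision.transforms as T

# Hypothetical preprocessing; adjust the size and normalisation to your training setup.
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
])

img = Image.open('test_image.jpg').convert('RGB')
x = preprocess(img).unsqueeze(0)  # add a batch dimension: (1, 3, 224, 224)

model.eval()
with torch.no_grad():
    pred = model(x)
print(pred.argmax(dim=1))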

OSError when loading tokenizer for huggingface model

I am trying to use this huggingface model and have been following the example provided, but I am getting an error when loading the tokenizer:
from transformers import AutoTokenizer
task = 'sentiment'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
OSError: Can't load tokenizer for 'cardiffnlp/twitter-roberta-base-sentiment'. Make sure that:
'cardiffnlp/twitter-roberta-base-sentiment' is a correct model identifier listed on 'https://huggingface.co/models'
or 'cardiffnlp/twitter-roberta-base-sentiment' is the correct path to a directory containing relevant tokenizer files
What I find very weird is that I was able to run my script several times, but after some time I ran into this error, and I don't recall changing anything in the meantime. Does anyone know what the solution is here?
EDIT: Here is my entire script:
from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
task = 'sentiment'
MODEL = f"nlptown/bert-base-multilingual-uncased-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
labels = ['very_negative', 'negative', 'neutral', 'positive', 'very_positive']
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "I love you"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
print(scores)
The error seems to start happening when I run model.save_pretrained(MODEL), but this might be a coincidence.
I just came across this same issue. It seems like a bug with model.save_pretrained(), as you noted.
I was able to resolve it by deleting the directory where the model had been saved (cardiffnlp/) and running again without model.save_pretrained().
Not sure what your application is. For me, re-downloading the model each time takes ~5s and that is acceptable.
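If you do want a local copy, one workaround that should avoid this (an assumption about the cause: saving into a directory whose name matches the hub identifier can shadow that identifier on later runs, and the saved directory does not contain the tokenizer files) is to save both the tokenizer and the model to a separate local path and load from there:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "cardiffnlp/twitter-roberta-base-sentiment"
SAVE_DIR = "./twitter-roberta-base-sentiment-local"  # hypothetical path that does not shadow the hub id

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

# Save both pieces; the missing tokenizer files are what the OSError complains about.
tokenizer.save_pretrained(SAVE_DIR)
model.save_pretrained(SAVE_DIR)

# Later runs can load entirely from the local directory.
tokenizer = AutoTokenizer.from_pretrained(SAVE_DIR)
model = AutoModelForSequenceClassification.from_pretrained(SAVE_DIR)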

How to load a finetuned sciBERT model in AllenNLP?

I have finetuned the SciBERT model on the SciIE dataset. The repository uses AllenNLP to finetune the model. The training is executed as follows:
python -m allennlp.run train $CONFIG_FILE --include-package scibert -s "$#"
After a successful training I have a model.tar.gz file as output that contains weights.th, config.json, and a vocabulary folder. I have tried to load it with the AllenNLP predictor:
from allennlp.predictors.predictor import Predictor
predictor = Predictor.from_path("model.tar.gz")
But I get the following error:
ConfigurationError: bert-pretrained not in acceptable choices for
dataset_reader.token_indexers.bert.type: ['single_id', 'characters',
'elmo_characters', 'spacy', 'pretrained_transformer',
'pretrained_transformer_mismatched']. You should either use the
--include-package flag to make sure the correct module is loaded, or use a fully qualified class name in your config file like {"model":
"my_module.models.MyModel"} to have it imported automatically.
I have never worked with AllenNLP, so I am quite lost about what to do.
For reference, this is the part of the config that describes the token indexers:
"token_indexers": {
"bert": {
"type": "bert-pretrained",
"do_lowercase": "false",
"pretrained_model": "/home/tomaz/neo4j/scibert/model/vocab.txt",
"use_starting_offsets": true
}
}
I am using allenlp version
Name: allennlp
Version: 1.2.1
Edit:
I think I have made a lot of progress. I have to use the same version that was used to train the model, and I can import the modules like so:
from allennlp.predictors.predictor import Predictor
from scibert.models.bert_crf_tagger import *
from scibert.models.bert_text_classifier import *
from scibert.models.dummy_seq2seq import *
from scibert.dataset_readers.classification_dataset_reader import *
predictor = Predictor.from_path("scibert_ner/model.tar.gz",
                                dataset_reader="classification_dataset_reader")
predictor.predict(
    sentence="Did Uriah honestly think he could beat The Legend of Zelda in under three hours?"
)
Now I get an error:
No default predictor for model type bert_crf_tagger.
Please specify a predictor explicitly
I know that I can use predictor_name to specify a predictor explicitly, but I haven't got the faintest idea which name to pick that would work.
I have seen a lot of people having this problem. Upon going through the repository code, I found this to be the easiest way to run the predictions:
python -m allennlp.run predict /path/to/saved_model/model.tar.gz /path/to/test.txt \
    --include-package scibert --use-dataset-reader \
    --output-file /path/to/where/you/want/predict.txt \
    --predictor sentence-tagger --batch-size 16
What did I add? The predictor sentence-tagger. Once you go through the repository, you will find that the registered predictor is sentence-tagger, although the DEFAULT_DICT of the taggers contains sentence_tagger. A lot of confusion, right? Tell me!
This answer also saves you from writing a predictor.
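If you would rather stay in Python than use the CLI, the same fix should translate to passing predictor_name to Predictor.from_path (a sketch, assuming the scibert packages are importable and that sentence-tagger is the registered name, as described above):

from allennlp.predictors.predictor import Predictor
# Importing the package registers its custom models and readers, mirroring --include-package scibert.
from scibert.models.bert_crf_tagger import *
from scibert.dataset_readers.classification_dataset_reader import *

predictor = Predictor.from_path(
    "scibert_ner/model.tar.gz",
    predictor_name="sentence-tagger",
)
print(predictor.predict(sentence="Did Uriah honestly think he could beat The Legend of Zelda in under three hours?"))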

Problem with loading language_model_learner fastai

I have a problem with the fastai library. My code is below:
import fastai
from fastai.text import *
import os
import pandas as pd
import fastai
from fastai import *
lab = df.columns[0]
data_lm = TextLMDataBunch.from_csv(r'/AWD', 'data.csv', label_cols = lab, text_cols = ['text'])
data_clas = TextClasDataBunch.from_csv(r'/AWD', 'data.csv', vocab = data_lm.train_ds.vocab, bs = 256,label_cols = lab, text_cols=['text'])
data_lm.save('data_lm_export.pkl')
data_clas.save('data_clas.pkl')
learn = language_model_learner(data_lm,AWD_LSTM,drop_mult = 0.3)
learn.lr_find()
learn.recorder.plot(skip_end=10)
learn.fit_one_cycle(10,1e-2,moms=(0.8,0.7))
learn.save('fit_head')
learn.load('fit_head')
My data is quite big, so each epoch in fit_one_cycle lasts about 6 h. My resources only allow me to train the model in a 70 h SLURM job, so my whole script will get cancelled. I wanted to divide my script into pieces, and the first, longest part has to train and save fit_head. Everything was OK, but when I then wanted to load my model to train it again, I got this error:
RuntimeError: Error(s) in loading state_dict for SequentialRNN:
size mismatch for 0.encoder.weight: copying a param with shape torch.Size([54376, 400]) from checkpoint, the shape in current model is torch.Size([54720, 400]).
I have checked similar problems in GitHub issues and Stack Overflow posts and tried solutions like the one below, but I cannot find anything useful.
data_clas.vocab.stoi = data_lm.vocab.stoi
data_clas.vocab.itos = data_lm.vocab.itos
Is there any way to load the trained model without running into this issue?
When you do learner.save(), only the model weights are saved to your disk, not the model architecture information.
To train the model in a different session, you must first define the model itself. Remember to use the same code to define your new model. Since your data is quite heavy, as you mentioned, you can use a very small subset (~16 records) of it to create this new model, then do learn.load(model_path), and you should be able to resume training.
You can modify the training data afterwards with learn.data.train_dl = new_dl.
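One way to make sure the vocabulary (and therefore the embedding sizes) matches the checkpoint is to reload the DataBunch that was already saved with data_lm.save in the first session, rather than rebuilding it from the CSV. A minimal sketch of that second session, assuming the fastai v1 API and the paths used in the question:

from fastai.text import *

# Reload the exact DataBunch saved earlier so the vocab matches the saved weights.
data_lm = load_data(r'/AWD', 'data_lm_export.pkl')

learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)

# Load the weights saved with learn.save('fit_head') and continue training.
learn.load('fit_head')
learn.fit_one_cycle(10, 1e-2, moms=(0.8, 0.7))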
