How to load the bert-base-cased Simple Transformers model offline - python

This is my first time using Simple Transformers' NERModel, and I want to use offline mode to load the pretrained bert-base-cased model. I downloaded the files and put them in the same directory, but now I can't load the model and can't start the training part. This is the code:
# test.ipynb
args = NERArgs()
args.num_train_epochs = 1
args.learning_rate = 4e-5
args.overwrite_output_dir =True
args.train_batch_size = 32
args.eval_batch_size = 32
args.save_steps = -1
args.save_model_every_epoch = False
model = NERModel('bert', 'bert-base-cased', labels=label, args=args, use_cuda=False)
This is the directory map:
root
|__down_model
|  |__config.json
|  |__flax_model.msgpack
|  |__.gitattributes
|  |__pytorch_model.bin
|  |__README.md
|  |__tf_model.h5
|  |__tokenizer.json
|  |__tokenizer_config.json
|  |__Unconfirmed 363105.crdownload
|  |__vocab.txt
|__test.ipynb
How can I use the model I downloaded (root/down_model) in this line?
model = NERModel('bert', 'bert-base-cased', labels=label, args=args, use_cuda=False)
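A minimal sketch of one common way to do this (not from the original post): Simple Transformers passes the model name straight to Hugging Face's from_pretrained, which also accepts a local directory, so pointing the second argument at the downloaded folder should work as long as it contains config.json, pytorch_model.bin, vocab.txt and the tokenizer files. The relative path below assumes test.ipynb is run from the root directory.
# Hypothetical offline usage: pass the local directory instead of the hub name.
# Assumes the notebook runs from root/, so ./down_model is the downloaded folder.
model = NERModel(
    'bert',
    './down_model',   # local path instead of 'bert-base-cased'
    labels=label,
    args=args,
    use_cuda=False,
)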

Related

Amazon SageMaker how to predict new data

I built the model in Amazon SageMaker; the code is attached below.
Now I would like to be able to upload new data to s3 and get predictions based on this model without having to recalculate it every time.
sess = sagemaker.Session()
bucket = "innogy-bda-germany-dev-landing-dc3-retailpl"
prefix = "sagemaker/xgboost-upsell"
role = get_execution_role()
container = sagemaker.image_uris.retrieve("xgboost", boto3.Session().region_name, "latest")
display(container)

train_path = 's3://innogy-bda-germany-dev-landing-dc3-retailpl/UPSELL/LIST/train.csv'
test_path = 's3://innogy-bda-germany-dev-landing-dc3-retailpl/UPSELL/LIST/validation.csv'

s3_input_train = sagemaker.TrainingInput(s3_data=train_path, content_type='csv')
s3_input_validation = sagemaker.TrainingInput(s3_data=test_path, content_type='csv')

xgb = sagemaker.estimator.Estimator(
    container,
    role,
    instance_count=1,
    instance_type="ml.m5.4xlarge",
    output_path="s3://innogy-bda-germany-dev-landing-dc3-retailpl/UPSELL/LIST/output",
    sagemaker_session=sess,
)

xgb.set_hyperparameters(
    alpha=1.340343927865692,
    colsample_bytree=0.525162855476281,
    eta=0.06451533130134757,
    gamma=0.9683995477068462,
    max_depth=10,
    min_child_weight=3.851108988963441,
    num_round=987,
    subsample=0.8725573749114485,
    silent=0,
    objective="binary:logistic",
    early_stopping_rounds=50,
)

xgb.fit({"train": s3_input_train, "validation": s3_input_validation})
I am asking for a code example of how to load this model from S3 into a new notebook and use it to predict new data.
Additionally, I wonder why the target variable is not dropped when using the built-in XGBoost model in SageMaker, since when making a prediction on a new set I will not know the target.
train_data, validation_data, test_data = np.split(
    df_smote.sample(frac=1, random_state=1729),
    [int(0.7 * len(df_smote)), int(0.9 * len(df_smote))],
)
You will need to follow the steps outlined here to deploy your model to an EC2 instance, so you can do batch or on-demand predictions.
https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-model-deployment.html
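As a rough illustration (not part of the original answer), with the SageMaker Python SDK v2 used above you can re-attach to the finished training job and either deploy a real-time endpoint or run a batch transform over new CSVs in S3; the job name, instance types, and S3 URI below are placeholders. Note that the built-in XGBoost container expects the target in the first column of the training CSV, while the inference payload must omit it.
from sagemaker.estimator import Estimator
from sagemaker.serializers import CSVSerializer

# Re-attach to the completed training job instead of retraining
# (placeholder job name - look it up in the SageMaker console)
xgb = Estimator.attach("xgboost-upsell-2021-01-01-00-00-00-000")

# Option 1: real-time endpoint for on-demand predictions
predictor = xgb.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    serializer=CSVSerializer(),
)
result = predictor.predict("0.1,0.2,0.3")  # one CSV row of features, no target column

# Option 2: batch transform over a CSV already uploaded to S3 (placeholder URI)
transformer = xgb.transformer(instance_count=1, instance_type="ml.m5.xlarge")
transformer.transform(
    "s3://your-bucket/UPSELL/LIST/new_data.csv",
    content_type="text/csv",
    split_type="Line",
)
transformer.wait()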

Is there a preferred method of saving and loading h2o word2vec models in python?

I have trained a word2vec model in the python h2o package.
Is there a simple way for me to save that word2vec model and load it back later for use?
I have tried the h2o.save_model() and h2o.load_model() functions with no luck.
I get an error like the following when using that approach:
ERROR: Unexpected HTTP Status code: 412 Precondition Failed (url = http://localhost:54321/99/Models.bin/)
water.exceptions.H2OIllegalArgumentException
[1] "water.exceptions.H2OIllegalArgumentException: Illegal argument: dir of function: importModel:
I am using the same version of h2o to train and load the model back in, so the issue outlined in this question is not applicable: Can't import binay h2o model with h2o.loadModel() function: 412 Precondition Failed
Does anyone have any insights on how to save and load an h2o word2vec model?
Here is my sample code with a few of the important snippets:
import h2o
from h2o.estimators import H2OWord2vecEstimator
df['text'] = df['text'].ascharacter()
# Break text into sequence of words
words = tokenize(df["text"])
# Initializing h2o
print('Initializing h2o.')
h2o.init(ip=h2o_ip, port=h2o_port, min_mem_size=h2o_min_memory)
# Build word2vec model:
w2v_model = H2OWord2vecEstimator(sent_sample_rate = 0.0, epochs = 10)
w2v_model.train(training_frame=words)
# Calculate a vector for each row
word_vecs = w2v_model.transform(words, aggregate_method = "AVERAGE")
#Save model to path
wv_path = '/models/wordvec/'
model_path = h2o.save_model(model = w2v_model, path= wv_path ,force=True)
# Load model in later script
w2v_model = h2o.load_model(model_path)
It sounds like you might have an access issue with the directory you are trying to read from. I just tested on H2O 3.30.0.1, following the word2vec example from the docs, and it ran fine:
import h2o
from h2o.estimators import H2OWord2vecEstimator

# assumes an H2O cluster is already running (h2o.init())
job_titles = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/craigslistJobTitles.csv",
                             col_names=["category", "jobtitle"],
                             col_types=["string", "string"],
                             header=1)

STOP_WORDS = ["ax","i","you","edu","s","t","m","subject","can",
              "lines","re","what","there","all","we","one","the",
              "a","an","of","or","in","for","by","on","but","is",
              "in","a","not","with","as","was","if","they","are",
              "this","and","it","have","from","at","my","be","by",
              "not","that","to","from","com","org","like","likes",
              "so"]

# Make the 'tokenize' function:
def tokenize(sentences, stop_word=STOP_WORDS):
    tokenized = sentences.tokenize("\\W+")
    tokenized_lower = tokenized.tolower()
    tokenized_filtered = tokenized_lower[(tokenized_lower.nchar() >= 2) | (tokenized_lower.isna()), :]
    tokenized_words = tokenized_filtered[tokenized_filtered.grep("[0-9]", invert=True, output_logical=True), :]
    tokenized_words = tokenized_words[(tokenized_words.isna()) | (~ tokenized_words.isin(STOP_WORDS)), :]
    return tokenized_words

# Break job titles into a sequence of words:
words = tokenize(job_titles["jobtitle"])

# Build word2vec model:
w2v_model = H2OWord2vecEstimator(sent_sample_rate=0.0, epochs=10)
w2v_model.train(training_frame=words)

# Save model
wv_path = 'models/'
model_path = h2o.save_model(model=w2v_model, path=wv_path, force=True)

# Load model
w2v_model2 = h2o.load_model(model_path)
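As a quick check that the reload worked, the restored model can be used exactly like the original one, e.g. with the same transform call as in the question:
# Reuse the reloaded model on the tokenized text, as in the original snippet
word_vecs = w2v_model2.transform(words, aggregate_method="AVERAGE")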

Tensorflow.keras.Model served by a Flask app + uwsgi gets stuck in model.predict

I am trying to serve a tensorflow.keras.Model in a Flask + nginx + uwsgi application, using Tensorflow v1.14.
I load the model in the constructor of a class named Prediction in my Flask application factory function and save the graph as an attribute of the Flask app, as suggested here.
Then I run the prediction by calling a method Prediction.process in a route named _process of my Flask app, but it gets stuck during the call to tf.keras.Model.predict (the self.model.summary() call in predict.py is executed, i.e. the summary is shown, but print("Never gets here :(") is never reached).
If I initialize my class Prediction in _process (which I want to avoid, so as not to load the model for every prediction), everything works fine.
If I use the Flask development server, it works fine too, so it seems to be related to the uwsgi configuration.
Any suggestions?
__init__.py
def create_app():
    app = Flask(__name__)
    # (...)
    app.register_blueprint(bp)
    load_tf_model(app)
    return app

def load_tf_model(app):
    sess = tf.Session(graph=tf.Graph())
    app.sess = sess
    with sess.graph.as_default():
        weights = os.path.join(app.static_folder, 'weights/model.32-0.81.h5')
        app.prediction = Prediction(weights)
predict.py
class Prediction:
    def __init__(self, weights):
        # build model and set weights
        inputs = tf.keras.Input(shape=SHAPE, batch_size=1)
        outputs = simple_cnn.build_model(inputs, N_CLASSES)
        self.model = tf.keras.Model(inputs=inputs, outputs=outputs)
        self.model.load_weights(weights)
        self.model._make_predict_function()

        # create TF mel extractor
        self.melspec_ex = tf_feature_utils.MelSpectrogram()

    def process(self, audio, sr):
        # compute features (in NCHW format) and labels
        data = audio2data(
            audio,
            sr,
            class_list=np.arange(N_CLASSES))
        features = np.asarray([d[0] for d in data])
        features = tf.reshape(features, (features.shape[0], 1, features.shape[1], features.shape[2]))
        labels = np.asarray([d[1] for d in data])

        # make tf.data.Dataset
        dataset = tf.data.Dataset.from_tensor_slices((features, labels))
        dataset = dataset.batch(1)
        dataset = dataset.map(lambda data, labels: (
            tf.expand_dims(self.melspec_ex.process(tf.squeeze(data, axis=[1,2])), 1)))

        # show model (debug)
        self.model.summary()

        # run prediction
        predictions = self.model.predict(dataset)
        print("Never gets here :(")

        # integrate predictions over time
        return np.mean(predictions, axis=0)
routes.py
@bp.route('/_process', methods=['POST'])
def _process():
    with current_app.graph.as_default():
        # load audio
        filepath = session['filepath']
        audio, sr = librosa.load(filepath)

        # predict
        predictions = current_app.prediction.process(audio, sr)

        # delete file
        os.remove(filepath)

        return jsonify(prob=predictions.tolist())
It was a threading issue. I had to configure uwsgi with the following options:
master = false
processes = 1
cheaper = 0
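These are uwsgi options, so in an ini-based setup they would go in the [uwsgi] section; a minimal sketch, where the module and http entries are placeholders for your own entry point:
[uwsgi]
; placeholders - adapt to your own WSGI entry point and port
module = wsgi:app
http = :5000

; options from the answer above
master = false
processes = 1
cheaper = 0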

How to export a TF model for serving directly from a session (without creating a TF checkpoint) to minimize export time

I wanted to share my findings on how to export a TF model for serving directly from a session, without creating a model checkpoint. My use case requires minimal time to create a pb file, so I wanted to get a model.pb file directly from the session.
Most examples online (and the documentation) refer to the common case of creating a model checkpoint and loading it in order to create a TF Serving (pb) file. That approach is of course fine when export time is not an issue.
import os

import tensorflow as tf
from tensorflow.python.framework import importer

output_path = '/export_directory'  # be sure to create it before export
input_ops = ['name/s_of_model_input/s']
output_ops = ['name/s_of_model_output/s']

session = tf.compat.v1.Session()

def get_ops_dict(ops, graph, name='op_'):
    out_dict = dict()
    for i, op in enumerate(ops):
        out_dict[name + str(i)] = tf.compat.v1.saved_model.build_tensor_info(graph.get_tensor_by_name(op + ':0'))
    return out_dict

def add_meta_graph(pbtxt_tmp_path, graph_def):
    with tf.Graph().as_default() as graph:
        importer.import_graph_def(graph_def, name="")
        os.unlink(pbtxt_tmp_path)

        # used to rename model inputs/outputs
        inputs_dict = get_ops_dict(input_ops, graph, name='input_')
        outputs_dict = get_ops_dict(output_ops, graph, name='output_')

        prediction_signature = (
            tf.compat.v1.saved_model.signature_def_utils.build_signature_def(
                inputs=inputs_dict,
                outputs=outputs_dict,
                method_name=tf.saved_model.PREDICT_METHOD_NAME))

        legacy_init_op = tf.group(tf.compat.v1.tables_initializer(), name='legacy_init_op')

        builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(output_path + '/export')
        builder.add_meta_graph_and_variables(
            session,
            tags=[tf.saved_model.SERVING],
            signature_def_map={
                tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature},
            legacy_init_op=legacy_init_op)
        builder.save()
        return prediction_signature

def export_model(session, output_path, output_ops):
    graph_def = session.graph_def
    tf.io.write_graph(graph_or_graph_def=graph_def, logdir=output_path,
                      name='model.pbtxt', as_text=False)
    frozen_graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(
        session, graph_def, output_ops)
    prediction_signature = add_meta_graph(output_path + '/model.pbtxt', frozen_graph_def)
    return prediction_signature
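For completeness, a hypothetical call site, assuming session already holds the trained graph and variables and that the input/output op names above match your model:
# Hypothetical usage: export the current session's graph for TF Serving
export_model(session, output_path, output_ops)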

Tensorflow: define placeholders/operation name in image pipeline

I would like to save my trained Tensorflow model, so it can be deployed by restoring the model file (I'm following this example, which seems to make sense). To do this, however, I need to have named tensors, so that I can reload the variables with something like:
graph = tf.get_default_graph()
w1 = graph.get_tensor_by_name("my_tensor:0")
I am queuing images from a list of filenames using string_input_producer (code below), but how do I name the tensors so that I can reload them at a later stage?
import tensorflow as tf

flags = tf.app.flags
conf = flags.FLAGS

class ImageDataSet(object):
    def __init__(self, img_list_path, num_epoch, batch_size):
        # Build the record list queue
        input_file = open(img_list_path, 'r')
        self.record_list = []
        for line in input_file:
            line = line.strip()
            self.record_list.append(line)

        filename_queue = tf.train.string_input_producer(self.record_list, num_epochs=num_epoch)
        image_reader = tf.WholeFileReader()
        _, image_file = image_reader.read(filename_queue)
        image = tf.image.decode_jpeg(image_file, conf.img_colour_channels)

        # preprocess
        # ...

        min_after_dequeue = 1000
        capacity = min_after_dequeue + 400 * batch_size
        self.images = tf.train.shuffle_batch(image, batch_size=batch_size, capacity=capacity,
                                             min_after_dequeue=min_after_dequeue)
I assume that you want to restore the graph for testing or deployment.
For these purposes, you can edit your graph by inserting a placeholder as the entry point for the test data.
To edit the graph, you can use TF's graph editor, or build a new graph with a placeholder and save it.
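As a rough sketch of the naming side of the question (not from the original answer): most TF 1.x ops accept a name argument, and a named tensor can later be fetched with get_tensor_by_name; the names below are example placeholders.
# Name the tensors you will need after restoring the graph (names are examples)
input_images = tf.placeholder(tf.float32, shape=[None, 224, 224, 3], name="input_images")
images = tf.train.shuffle_batch(image, batch_size=batch_size, capacity=capacity,
                                min_after_dequeue=min_after_dequeue, name="image_batch")
print(images.name)  # check the exact tensor name to use when reloading

# Later, after restoring the graph:
graph = tf.get_default_graph()
input_images = graph.get_tensor_by_name("input_images:0")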
