Transformer library cache path is not changing - python

I have tried this but it's not working for me. I am using this Git repo. I am building a desktop app and don't want users to have to download the model; I want to ship the models with the build. I know the transformers library looks for models in cache/torch/transformers and downloads them if they are not there. I also know you can pass the cache_dir parameter to from_pretrained.
I am trying this.
cache = os.path.join(os.path.abspath(os.getcwd()), 'Transformation/Annotators/New Sentiment Analysis/transformers')
os.environ['TRANSFORMERS_CACHE'] = cache

if args.model_name_or_path is None:
    args.model_name_or_path = 'barissayil/bert-sentiment-analysis-sst'

# Configuration for the desired transformer model
config = AutoConfig.from_pretrained(args.model_name_or_path, cache_dir=cache)
I have tried the solution in the above-mentioned question and tried cache_dir as well. The transformers folder is in the same directory as analyze.py. The whole repo, along with the transformers folder, is in the New Sentiment Analysis directory.

You actually haven't shown the code which is not working, but I assume you did something like the following:
from transformers import AutoConfig
import os
os.environ['TRANSFORMERS_CACHE'] = '/blabla/cache/'
config = AutoConfig.from_pretrained('barissayil/bert-sentiment-analysis-sst')
os.path.isdir('/blabla/cache/')
Output:
False
This will not create a new default location for caching, because you imported the transformers library before setting the environment variable (I modified your linked question to make this clearer). The proper way to change the default caching directory is to set the environment variable before importing the transformers library:
import os
os.environ['TRANSFORMERS_CACHE'] = '/blabla/cache/'
from transformers import AutoConfig
config = AutoConfig.from_pretrained('barissayil/bert-sentiment-analysis-sst')
os.path.isdir('/blabla/cache/')
Output:
True
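Side note: since the stated goal is to ship the model with a desktop build, you can avoid the cache entirely by bundling the model files with save_pretrained and then loading from that local directory. A minimal sketch, reusing the bundled path from the question:

from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

bundled = 'Transformation/Annotators/New Sentiment Analysis/transformers'

# Run once at build time, with internet access, to bundle the files:
AutoModelForSequenceClassification.from_pretrained('barissayil/bert-sentiment-analysis-sst').save_pretrained(bundled)
AutoTokenizer.from_pretrained('barissayil/bert-sentiment-analysis-sst').save_pretrained(bundled)

# In the shipped app, from_pretrained accepts a local directory, so nothing is downloaded:
config = AutoConfig.from_pretrained(bundled)
model = AutoModelForSequenceClassification.from_pretrained(bundled)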

Related

How do I call out to a specific model version with Python and a TensorFlow serving model?

I have a few machine learning models running via TensorFlow Serving on Kubernetes. I'd like to be able to have one deployment of a particular model, and then load multiple versions.
This seems like it would be easier than having to maintain a separate Kubernetes deployment for each version of each model that we have.
But it's not obvious how to pass the version or model flavor I want to call using the Python gRPC interface to TF Serving. How do I specify the version and pass it in?
For whatever reason, it's not possible to update the model spec in place as you're building the prediction request. Instead, you need to separately build an instance of ModelSpec that includes the version you want, and then pass that to the constructor of the prediction request.
Also worth pointing out: you need to use the Google-specific Int64Value wrapper for the version.
from google.protobuf.wrappers_pb2 import Int64Value
from tensorflow_serving.apis.model_pb2 import ModelSpec
from tensorflow_serving.apis import predict_pb2, get_model_metadata_pb2, \
    prediction_service_pb2_grpc
from tensorflow import make_tensor_proto
import grpc
import numpy as np

model_name = 'mymodel'
input_name = 'model_input'
model_uri = 'mymodel.svc.cluster.local:8500'
MESSAGE_OPTIONS = []  # not defined in the original post; gRPC channel options go here
X = # something that works

channel = grpc.insecure_channel(model_uri, options=MESSAGE_OPTIONS)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Build the ModelSpec separately, with the version wrapped in Int64Value,
# then hand it to the PredictRequest constructor
version = Int64Value(value=1)
model_spec = ModelSpec(version=version, name=model_name, signature_name='serving_default')
request = predict_pb2.PredictRequest(model_spec=model_spec)
request.inputs[input_name].CopyFrom(make_tensor_proto(X.astype(np.float32), shape=X.shape))
result = stub.Predict(request, 1.0)  # second argument is the timeout in seconds
channel.close()
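As an aside (not part of the answer above), the otherwise unused get_model_metadata_pb2 import suggests a handy sanity check: you can ask the same stub which signatures a given version actually serves, which confirms the Int64Value wiring. A sketch:

metadata_request = get_model_metadata_pb2.GetModelMetadataRequest()
metadata_request.model_spec.name = model_name
metadata_request.model_spec.version.value = 1  # the nested Int64Value wrapper
metadata_request.metadata_field.append('signature_def')
metadata_response = stub.GetModelMetadata(metadata_request, 1.0)
print(metadata_response.model_spec.version.value)  # version actually served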

How to load a finetuned sciBERT model in AllenNLP?

I have finetuned the SciBERT model on the SciIE dataset. The repository uses AllenNLP to finetune the model. The training is executed as follows:
python -m allennlp.run train $CONFIG_FILE --include-package scibert -s "$#"
After successful training I have a model.tar.gz file as output, containing weights.th, config.json, and a vocabulary folder. I have tried to load it with the AllenNLP predictor:
from allennlp.predictors.predictor import Predictor
predictor = Predictor.from_path("model.tar.gz")
But I get the following error:
ConfigurationError: bert-pretrained not in acceptable choices for
dataset_reader.token_indexers.bert.type: ['single_id', 'characters',
'elmo_characters', 'spacy', 'pretrained_transformer',
'pretrained_transformer_mismatched']. You should either use the
--include-package flag to make sure the correct module is loaded, or use a fully qualified class name in your config file like {"model":
"my_module.models.MyModel"} to have it imported automatically.
I have never worked with allenNLP, so I am quite lost about what to do.
For reference, this is the part of the config that describes the token indexers:
"token_indexers": {
"bert": {
"type": "bert-pretrained",
"do_lowercase": "false",
"pretrained_model": "/home/tomaz/neo4j/scibert/model/vocab.txt",
"use_starting_offsets": true
}
}
I am using allennlp version:
Name: allennlp
Version: 1.2.1
Edit:
I think I have made a lot of progress. I have to use the same version that was used to train the model, and I can import the modules like so:
from allennlp.predictors.predictor import Predictor
from scibert.models.bert_crf_tagger import *
from scibert.models.bert_text_classifier import *
from scibert.models.dummy_seq2seq import *
from scibert.dataset_readers.classification_dataset_reader import *
predictor = Predictor.from_path("scibert_ner/model.tar.gz")

predictor.predict(
    sentence="Did Uriah honestly think he could beat The Legend of Zelda in under three hours?"
)
Now I get an error:
No default predictor for model type bert_crf_tagger.
Please specify a predictor explicitly
I know that I can use predictor_name to specify a predictor explicitly, but I haven't the faintest idea which name to pick that would work.
I have seen a lot of people having this problem. Upon going through the repository code, I found this to be the easiest way to run the predictions:
python -m allennlp.run predict /path/to/saved_model/model.tar.gz /path/to/test.txt \
    --include-package scibert --use-dataset-reader \
    --output-file /path/to/where/you/want/predict.txt \
    --predictor sentence-tagger --batch-size 16
What did I add? The predictor: sentence-tagger. If you go through the repository, you will find that the registered predictor is sentence-tagger, although the DEFAULT_DICT of the taggers contains sentence_tagger. A lot of confusion, right? Tell me!
This answer also saves you from writing a predictor.
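For completeness, the Python-API equivalent would be the predictor_name argument to Predictor.from_path. A minimal sketch, assuming the same registered name 'sentence-tagger' resolves in your installed version:

from allennlp.predictors.predictor import Predictor
from scibert.models.bert_crf_tagger import *  # registers the bert_crf_tagger model type

predictor = Predictor.from_path("scibert_ner/model.tar.gz", predictor_name="sentence-tagger")
print(predictor.predict(sentence="Did Uriah honestly think he could beat The Legend of Zelda in under three hours?"))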

Google AI Platform: Unexpected error when loading the model: 'str' object has no attribute 'decode' [Keras 2.3.1, TF 1.15]

I am trying to use the beta Google Custom Prediction Routine in Google's AI Platform to run a live version of my model.
I include in my package predictor.py which contains a Predictor class as such:
import os
import numpy as np
import pickle
import keras
from keras.models import load_model

class Predictor(object):
    """Interface for constructing custom predictors."""

    def __init__(self, model, preprocessor):
        self._model = model
        self._preprocessor = preprocessor

    def predict(self, instances, **kwargs):
        """Performs custom prediction.
        Instances are the decoded values from the request. They have already
        been deserialized from JSON.
        Args:
            instances: A list of prediction input instances.
            **kwargs: A dictionary of keyword args provided as additional
                fields on the predict request body.
        Returns:
            A list of outputs containing the prediction results. This list must
            be JSON serializable.
        """
        # pre-processing
        preprocessed_inputs = self._preprocessor.preprocess(instances[0])
        # predict
        outputs = self._model.predict(preprocessed_inputs)
        # post-processing (the original referenced an undefined x_test here;
        # flipping the model outputs is presumably what was intended)
        outputs = np.array([np.fliplr(x) for x in outputs])
        return outputs.tolist()

    @classmethod
    def from_path(cls, model_dir):
        """Creates an instance of Predictor using the given path.
        Loading of the predictor should be done in this method.
        Args:
            model_dir: The local directory that contains the exported model
                file along with any additional files uploaded when creating the
                version resource.
        Returns:
            An instance implementing this Predictor class.
        """
        model_path = os.path.join(model_dir, 'keras.model')
        model = load_model(model_path, compile=False)
        preprocessor_path = os.path.join(model_dir, 'preprocess.pkl')
        with open(preprocessor_path, 'rb') as f:
            preprocessor = pickle.load(f)
        return cls(model, preprocessor)
The full error Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: 'str' object has no attribute 'decode' (Error code: 0)" indicates that the issue is in this script, specifically when loading the model. However, I am able to successfully load the model in my notebook locally with the same code block in predict.py:
from keras.models import load_model
model = load_model('keras.model', compile=False)
I have seen similar posts which suggest setting the version of h5py<3.0.0, but this hasn't helped. I can set the versions of modules for my custom prediction routine in a setup.py file:
from setuptools import setup

REQUIRED_PACKAGES = ['keras==2.3.1', 'h5py==2.10.0', 'opencv-python', 'pydicom', 'scikit-image']

setup(
    name='my_custom_code',
    install_requires=REQUIRED_PACKAGES,
    include_package_data=True,
    version='0.23',
    scripts=['predictor.py', 'preprocess.py'])
Unfortunately, I haven't found a good way to debug model deployment on Google's AI Platform, and the troubleshooting guide is unhelpful. Any pointers would be much appreciated. Thanks!
Edit 1:
The h5py module's version is wrong: it is 3.1.0, despite setting it to 2.10.0 in setup.py. Does anyone know why? I confirmed that the Keras version and other modules are set properly, however. I've tried 'h5py==2.9.0' and 'h5py<3.0.0' to no avail. More on including PyPI package dependencies here.
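One way to see which versions actually land in the serving environment (not something from the original post, just a debugging sketch) is to fail fast at model load time, so the version string surfaces in the otherwise opaque Create Version error message:

import h5py
import keras

# Raises during version creation if the pin didn't take effect; the message
# then shows up inside the "Bad model detected" error.
assert h5py.__version__.startswith('2.'), 'h5py is {}, expected 2.x'.format(h5py.__version__)
print('keras', keras.__version__, 'h5py', h5py.__version__)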
Edit 2:
So it turns out Google currently does not support this capability.
StackOverflow, enzed01
I encountered the same problem when using AI Platform with code that was running fine two months ago, when we last trained our models. Indeed, it is due to the dependency on h5py, which fails to load the h5 model out of the blue.
After a while I was able to make it work with runtime 2.2 and Python version 3.7. I am also using the custom prediction routine, and my model was a simple 2-layer bidirectional LSTM serving classifications.
I had a notebook VM set up with TF == 2.1 and downgraded h5py to <3.0.0 with:
!pip uninstall -y h5py
!pip install 'h5py < 3.0.0'
My setup.py looks like this:
from setuptools import setup

REQUIRED_PACKAGES = ['tensorflow==2.1', 'h5py<3.0.0']

setup(
    name="my_package",
    version="0.1",
    include_package_data=True,
    scripts=["preprocess.py", "model_prediction.py"]
)
I added compile=False to my model load code. Without it, I ran into another problem with deployment, which gave the following error: Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: 'sample_weight_mode' (Error code: 0)"
The code change from the OP:
model = keras.models.load_model(
    os.path.join(model_dir, 'model.h5'), compile=False)
And this made the model deploy as before without a problem. I suspect compile=False might mean slower prediction serving, but I have not noticed anything so far.
Hope this helps anyone stuck and googling these issues!

Change Logdir of Ray RLlib Training instead of ~/ray_results

I'm using Ray & RLlib to train RL agents on an Ubuntu system. Tensorboard is used to monitor the training progress by pointing it to ~/ray_results where all the log files for all runs are stored. Ray Tune is not being used.
For example, on starting a new Ray/RLlib training run, a new directory will be created at
~/ray_results/DQN_ray_custom_env_2020-06-07_05-26-32djwxfdu1
To visualize the training progress, we need to start Tensorboard using
tensorboard --logdir=~/ray_results
Question: Is it possible to configure Ray/RLlib to change the output directory of the log files from ~/ray_results to another location?
Additionally, instead of logging to a directory named something like DQN_ray_custom_env_2020-06-07_05-26-32djwxfdu1, can this directory name be set by ourselves?
Failed attempt: I tried setting
os.environ['TUNE_RESULT_DIR'] = '~/another_dir'
before running ray.init(), but the result log files were still being written to ~/ray_results.
Without using Tune, you can change the logdir using RLlib's Trainer. The Trainer class takes an optional logger_creator if you want to specify where to save the logs (see here).
A concrete example:
Define your customized logger creator (you can simply modify the default one):
import os
import tempfile
from datetime import datetime
from ray.tune.logger import UnifiedLogger

def custom_log_creator(custom_path, custom_str):
    timestr = datetime.today().strftime("%Y-%m-%d_%H-%M-%S")
    logdir_prefix = "{}_{}".format(custom_str, timestr)

    def logger_creator(config):
        if not os.path.exists(custom_path):
            os.makedirs(custom_path)
        logdir = tempfile.mkdtemp(prefix=logdir_prefix, dir=custom_path)
        return UnifiedLogger(config, logdir, loggers=None)

    return logger_creator
Pass this logger_creator to the trainer, and start training:
from ray.rllib.agents.ppo import PPOTrainer

trainer = PPOTrainer(config=config, env='CartPole-v0',
                     logger_creator=custom_log_creator(os.path.expanduser("~/another_ray_results/subdir"), 'custom_dir'))

for i in range(ITER_NUM):
    result = trainer.train()
You will find the training results (i.e., TensorBoard events file, params, model, ...) saved under "~/another_ray_results/subdir" with your specified naming convention.
Is it possible to configure Ray/RLlib to change the output directory of the log files from ~/ray_results to another location?
There is currently no way to configure this using the RLlib CLI tool (rllib).
If you're okay with the Python API then, as described in the documentation, the local_dir parameter of tune.run is responsible for specifying the output directory; the default is ~/ray_results.
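A minimal sketch of that, assuming a registered trainable such as RLlib's "DQN" string:

from ray import tune

tune.run(
    "DQN",
    config={"env": "CartPole-v0"},
    local_dir="~/another_dir",  # replaces the ~/ray_results default
)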
Additionally, instead of logging to a directory named something like DQN_ray_custom_env_2020-06-07_05-26-32djwxfdu1, can this directory name be set by ourselves?
This is governed by the trial_name_creator parameter of tune.run. It must be a function that accepts a trial object and formats it into a string, like so:
def trial_name_id(trial):
    return f"{trial.trainable_name}_{trial.trial_id}"

tune.run(..., trial_name_creator=trial_name_id)
Just for anyone who bumps into this problem with Ray Tune.
You can specify local_dir for run_config within tune.Tuner:
from ray import air, tune

# This logs to 2 different trial folders:
# ./results/test_experiment/trial_name_1 and ./results/test_experiment/trial_name_2
# Only trial_name is autogenerated.
tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(num_samples=2),
    run_config=air.RunConfig(local_dir="./results", name="test_experiment"))
results = tuner.fit()
Please see this link for more info.

Install issues with 'lr_utils' in python

I am trying to complete some homework in a DeepLearning.ai course assignment.
When I try the assignment on the Coursera platform everything works fine; however, when I try to do the same imports on my local machine it gives me an error:
ModuleNotFoundError: No module named 'lr_utils'
I have tried resolving the issue by installing lr_utils, but to no avail.
There is no mention of this module online, and now I am starting to wonder if it's proprietary to deeplearning.ai?
Or can we resolve this issue in some other way?
You will be able to find lr_utils.py and all the other .py files (and thus the code inside them) required by the assignments as follows:
Go to the first assignment (i.e. Python Basics with numpy), which you can always access whether you are a paid user or not,
and then click on the 'Open' button in the menu bar above.
Then you can include the code of the modules directly in your own code.
As per the answer above, lr_utils is part of the deep learning course and is a utility for downloading the data sets. It should readily work with the paid version of the course, but in case you 'lost' access to it, I noticed this GitHub project has lr_utils.py as well as some data sets:
https://github.com/andersy005/deep-learning-specialization-coursera/tree/master/01-Neural-Networks-and-Deep-Learning/week2/Programming-Assignments
Note: The Chinese website links did not work when I looked at them. Maybe the server storing the files expired. I did see that this GitHub project had some datasets as well as the lr_utils file, though.
EDIT: The link no longer seems to work. Maybe this one will do?
https://github.com/knazeri/coursera/blob/master/deep-learning/1-neural-networks-and-deep-learning/2-logistic-regression-as-a-neural-network/lr_utils.py
Download the datasets from the answer above, and use this code (it's better than the above since it closes the files after usage):
import h5py
import numpy as np

def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        train_set_x_orig = np.array(train_dataset["train_set_x"][:])
        train_set_y_orig = np.array(train_dataset["train_set_y"][:])

    with h5py.File('datasets/test_catvnoncat.h5', "r") as test_dataset:
        test_set_x_orig = np.array(test_dataset["test_set_x"][:])
        test_set_y_orig = np.array(test_dataset["test_set_y"][:])
        classes = np.array(test_dataset["list_classes"][:])

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
"lr_utils" is not official library or something like that.
Purpose of "lr_utils" is to fetch the dataset that is required for course.
option (didn't work for me): go to this page and there is a python code for downloading dataset and creating "lr_utils"
I had a problem with fetching data from provided url (but at least you can try to run it, maybe it will work)
option (worked for me): in the comments (at the same page 1) there are links for manually downloading dataset and "lr_utils.py", so here they are:
link for dataset download
link for lr_utils.py script download
Remember to extract dataset when you download it and you have to put dataset folder and "lr_utils.py" in the same folder as your python script that is using it (script with this line "import lr_utils").
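Once the files are in place, the assignment's own import works unchanged, e.g.:

import lr_utils

train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = lr_utils.load_dataset()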
The way I fixed this problem was by:
clicking File -> Open -> you will see the lr_utils.py file (it does not matter whether you have the paid or free version of the course), then
opening the lr_utils.py file in Jupyter Notebooks and clicking File -> Download (store it in your own folder), and rerunning the module imports. It will work like magic.
I did the same process for the datasets folder.
You can download train and test dataset directly here: https://github.com/berkayalan/Deep-Learning/tree/master/datasets
And you need to add this code to the beginning:
import numpy as np
import h5py
import os

def load_dataset():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:]) # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:]) # your train set labels

    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:]) # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:]) # your test set labels

    classes = np.array(test_dataset["list_classes"][:]) # the list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
I faced a similar problem and followed these steps:
1. Import the following libraries:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
2. Download train_catvnoncat.h5 and test_catvnoncat.h5 from either of the links below:
https://github.com/berkayalan/Neural-Networks-and-Deep-Learning/tree/master/datasets
or
https://github.com/JudasDie/deeplearning.ai/tree/master/Improving%20Deep%20Neural%20Networks/Week1/Regularization/datasets
3. Create a folder named datasets and paste these two files into it.
[Note: the datasets folder and your source code file should be in the same directory.]
4. Run the following code:
def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        train_set_x_orig = np.array(train_dataset["train_set_x"][:])
        train_set_y_orig = np.array(train_dataset["train_set_y"][:])

    with h5py.File('datasets/test_catvnoncat.h5', "r") as test_dataset:
        test_set_x_orig = np.array(test_dataset["test_set_x"][:])
        test_set_y_orig = np.array(test_dataset["test_set_y"][:])
        classes = np.array(test_dataset["list_classes"][:])

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
5. Load the data:
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
Check the datasets:
print(len(train_set_x_orig))
print(len(test_set_x_orig))
Your data set is now ready; you can check the lengths of the train and test sets as above. For mine, they were 209 and 50.
I could download the dataset directly from the Coursera page.
Once you open the Coursera notebook, go to File -> Open and a window will be displayed showing the notebooks and datasets. You can go to the datasets folder and download the required data for the assignment. The package lr_utils.py is also available for downloading.
Below is the code; just save it to a file named "lr_utils.py" and you can use it.
import numpy as np
import h5py

def load_dataset():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:]) # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:]) # your train set labels

    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:]) # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:]) # your test set labels

    classes = np.array(test_dataset["list_classes"][:]) # the list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
If your code cannot find the newly created lr_utils.py file, just add this:
import sys
sys.path.append("full path of the directory where you saved the lr_utils.py file")
Here is the way to get the dataset, as per #ThinkBonobo:
https://github.com/andersy005/deep-learning-specialization-coursera/tree/master/01-Neural-Networks-and-Deep-Learning/week2/Programming-Assignments/datasets
Write a lr_utils.py file as in the answer above by #StationaryTraveller, and put it into any directory on sys.path.
def load_dataset():
    with h5py.File('train_catvnoncat.h5', "r") as train_dataset:
        ....
BUT make sure that you delete 'datasets/' from the path, because the name of your data file is now just train_catvnoncat.h5.
Restart the kernel and good luck.
I may add to the answers that you can save the file with the lr_utils script to disk and import it as a module using the importlib util functions in the following way.
The code below comes from the general thread about importing functions from external files into the current user session:
How to import a module given the full path?
import importlib.util

### Source the load_dataset() function from a file
# Specify a name (I think it can be whatever) and the path to the lr_utils.py script locally on your PC:
util_script = importlib.util.spec_from_file_location("utils function", "D:/analytics/Deep_Learning_AI/functions/lr_utils.py")
# Make a module
load_utils = importlib.util.module_from_spec(util_script)
# Execute it on the fly
util_script.loader.exec_module(load_utils)
# Then you can use the load_dataset() coming from the above specified 'module' called load_utils
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_utils.load_dataset()
# This could be a general way of calling different user-specified modules, so I did the same for the rest of the
# neural network functions and put them into a separate file to keep my script clean.
# Just remember that Python treats it like a module, so you need to prefix the function name with the 'module' name, e.g.:
# d = nnet_utils.model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations=1000, learning_rate=0.005, print_cost=True)
nnet_script = importlib.util.spec_from_file_location("utils function", "D:/analytics/Deep_Learning_AI/functions/lr_nnet.py")
nnet_utils = importlib.util.module_from_spec(nnet_script)
nnet_script.loader.exec_module(nnet_utils)
That has been the most convenient way for me to source functions/methods from different files in Python so far.
I come from an R background, where you can call just one function, source(), to bring the contents of external scripts into your current session.
The above answers didn't help me, and some links had expired.
lr_utils is not a pip library but a file in the same notebook on the Coursera website.
You can click on "Open", and it'll open the explorer where you can download everything that you would want to run in another environment.
(I used this in a browser.)
This is how I solved mine: I copied the lr_utils file and pasted it into my notebook, and then downloaded the dataset by zipping it and extracting it with the following code. Note: run the code in the Coursera notebook and select only the zipped file in the directory to download.
!pip install zipfile36
import zipfile

zf = zipfile.ZipFile('datasets/train_catvnoncat_h5.zip', mode='w')
try:
    zf.write('datasets/train_catvnoncat.h5')
    zf.write('datasets/test_catvnoncat.h5')
finally:
    zf.close()
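On your own machine, the matching extraction step is just the standard library again (a sketch; the zip name is the one created above, and zipfile ships with Python, so no pip install is needed locally):

import zipfile

with zipfile.ZipFile('train_catvnoncat_h5.zip') as zf:
    zf.extractall()  # recreates datasets/train_catvnoncat.h5 and datasets/test_catvnoncat.h5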
