Pass a function/method to a python script - python

I have defined this method/function in a Google Colab cell:

    def lstm_model2(input_features, timesteps, regularizers=regularizers, batch_size=10):
        model = Sequential()
        model.add(LSTM(15, batch_input_shape=(batch_size, timesteps, input_features),
                       stateful=True, recurrent_regularizer=regularizers))
        model.add(Dense(1))
        model.compile(loss='mae', optimizer='adam')
        return model
I want to pass this method to a script I am executing in the next cell using argparse:

    !python statefulv2.py --model=lstm_model2
I tried an approach similar to the type argument in argparse: defining an identically named abstract method inside the statefulv2.py script, so that argparse.add_argument('--model', help='LSTM model', type=lstm_model2, required=True) could be written inside statefulv2.py. But this raises an invalid argument error.
Is there a neat way to pass methods as arguments in argparse?
The reason for keeping the method outside the script is so I can edit it to try different functions, since Google Colab does not provide a separate editor for changing the model in a separate file.

It's best that you don't try to pass arguments like that; stick to the basic types. An alternative would be to store the different models in a file like models.py, such as:

    def lstm_model1(...):
        # Definition of model1

    def lstm_model2(...):
        # Definition of model2

and so on.
In statefulv2.py you can have:

    import argparse
    import models

    parser = ...
    parser.add_argument('-m', '--model', help='LSTM model',
                        type=str, choices=['model1', 'model2'], required=True)

    model_dict = {'model1': models.lstm_model1(...),
                  'model2': models.lstm_model2(...)}

    args = parser.parse_args()
    my_model = model_dict[args.model]
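Since the parser only ever sees a string, the same lookup can also be written with a dictionary of builder *functions* that are called only after parsing. A self-contained sketch (the builder bodies below are stubs standing in for the real Keras code):

```python
import argparse

# Stand-in builders; in practice these would live in models.py
# and be imported with `import models`.
def lstm_model1():
    return "model1-instance"

def lstm_model2():
    return "model2-instance"

BUILDERS = {"model1": lstm_model1, "model2": lstm_model2}

parser = argparse.ArgumentParser()
# choices accepts any container; a dict contributes its keys.
parser.add_argument("-m", "--model", choices=BUILDERS, required=True)

args = parser.parse_args(["--model", "model2"])
build = BUILDERS[args.model]   # look up the builder by name
model = build()                # only now construct the model
print(model)                   # -> model2-instance
```

Storing the function rather than its result delays construction until after parsing, so batch size and other hyperparameters can also come from the command line.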
EDIT: if saving the models to a separate file is not allowed.

In case you absolutely have to find a workaround, you can save the code in a buffer file which can be read into statefulv2.py as an argument of type open. Here is a demo.
In your Colab cell, you write the function code in a string:

    def save_code_to_file():
        func_string = """
    def lstm_model3():
        print("Hey!")
    """
        with open("buffer", "w") as f:
            f.write(func_string)
In statefulv2.py, you can have:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('-m', '--model', help='LSTM model', type=open, required=True)
    args = parser.parse_args()

    model_def = "".join(args.model.readlines())
    exec(model_def)
    lstm_model3()
Output (this is in IPython; change it accordingly for Colab):

    In [25]: %run statefulv2.py -m buffer
    Hey!
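A workaround that avoids exec entirely is to import the buffer file as a real module with importlib and look the function up by name. A minimal sketch, assuming a module:function naming convention on the command line (buffer_models and lstm_model3 are made-up names for the demo):

```python
import argparse
import importlib
import os
import sys
import tempfile

# Write a throwaway module the way the Colab cell would.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "buffer_models.py"), "w") as f:
    f.write("def lstm_model3():\n    return 'Hey!'\n")

parser = argparse.ArgumentParser()
parser.add_argument("-m", "--model", required=True,
                    help="module:function, e.g. buffer_models:lstm_model3")
args = parser.parse_args(["-m", "buffer_models:lstm_model3"])

mod_name, func_name = args.model.split(":")
sys.path.insert(0, workdir)                # make the buffer module importable
func = getattr(importlib.import_module(mod_name), func_name)
print(func())  # -> Hey!
```

Importing keeps the function in a real module namespace, which is easier to debug (tracebacks, reload) than exec'ing a string.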


adding command line arguments to multiple scripts in python

I have a use case where I have a main Python script with many command-line arguments, and I need to break its functionality into multiple smaller scripts. A few of the command-line arguments are common to more than one of the smaller scripts, and I want to reduce code duplication. I tried to use decorators to register each argument to one or more scripts, but I am not able to get around an error. Another caveat: I want to set default values for shared arguments according to which script is being run. This is what I have currently:
argument_parser.py
    import argparse
    import functools
    import itertools
    from collections import defaultdict

    from scripts import Scripts

    _args_register = defaultdict(list)

    def argument(scope):
        """
        Decorator to add an argument to the argument registry.
        :param scope: The module name to register the current argument function to;
                      can also be a list of modules.
        :return: The decorated function, after adding it to the registry.
        """
        def register(func):
            if isinstance(scope, Scripts):
                _args_register[scope].append(func)
            elif isinstance(scope, list) and Scripts.ALL in scope:
                _args_register[Scripts.ALL].append(func)
            else:
                for module in scope:
                    _args_register[module].append(func)
            return func
        return register

    class ArgumentHandler:
        def __init__(self, script, parser=None):
            self._parser = parser or argparse.ArgumentParser(description=__doc__)
            assert script in Scripts
            self._script = script

        @argument(scope=Scripts.ALL)
        def common_arg(self):
            self._parser.add_argument("--common-arg",
                                      default=self._script,
                                      help="An arg common to all scripts")

        @argument(scope=[Scripts.TRAIN, Scripts.TEST])
        def train_test_arg(self):
            self._parser.add_argument("--train-test-arg",
                                      default=self._script,
                                      help="An arg common to train-test scripts added in argument handler")

        def parse_args(self):
            for argument in itertools.chain(_args_register[Scripts.ALL],
                                            _args_register[self._script]):
                argument()
            _args = self._parser.parse_args()
            return _args
One of the smaller scripts, train.py:
    """
    A Train script to abstract away training tasks
    """
    import argparse

    from argument_parser import ArgumentHandler
    from scripts import Scripts

    current = Scripts.TRAIN
    parser = argparse.ArgumentParser(description=__doc__)

    def get_args() -> argparse.Namespace:
        parser.add_argument('--train-arg',
                            default='blah',
                            help='a train argument set in the train script')
        args_handler = ArgumentHandler(parser=parser, script=current)
        return args_handler.parse_args()

    if __name__ == '__main__':
        print(get_args())
When I run train.py I get the following error:

    File "../argument_parser.py", line 68, in parse_args
        argument()
    TypeError: common_arg() missing 1 required positional argument: 'self'

    Process finished with exit code 1
I think this is because decorators are run at import time, but I am not sure. Is there any workaround for this, or any other better way to reduce code duplication? Any help will be highly appreciated. Thanks!
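The decorators do run early: they execute while the class body is being evaluated, before any instance exists, so the registry ends up holding plain functions with no instance bound to them. Calling argument() inside parse_args therefore supplies no self. One minimal workaround is to pass the handler instance explicitly when invoking each registered function. A condensed sketch (the Scripts enum here is a made-up stand-in, since its real definition isn't shown):

```python
import argparse
import itertools
from collections import defaultdict
from enum import Enum

# Stand-in for the question's Scripts enum (assumed shape).
class Scripts(Enum):
    ALL = "all"
    TRAIN = "train"
    TEST = "test"

_args_register = defaultdict(list)

def argument(scope):
    """Register the decorated function under one or more script scopes."""
    def register(func):
        for s in (scope if isinstance(scope, list) else [scope]):
            _args_register[s].append(func)
        return func
    return register

class ArgumentHandler:
    def __init__(self, script, parser=None):
        self._parser = parser or argparse.ArgumentParser()
        self._script = script

    @argument(scope=Scripts.ALL)
    def common_arg(self):
        self._parser.add_argument("--common-arg", default=self._script.value)

    def parse_args(self, argv=None):
        for func in itertools.chain(_args_register[Scripts.ALL],
                                    _args_register[self._script]):
            func(self)  # the registry holds plain functions; pass the instance
        return self._parser.parse_args(argv)

handler = ArgumentHandler(Scripts.TRAIN)
print(handler.parse_args([]))  # Namespace(common_arg='train')
```

Because the registered entries are ordinary functions, func(self) binds the instance manually at parse time, which is exactly what normal attribute access would have done.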

TypeError: (): incompatible function arguments. The following argument types are supported: 1. (self: fasttext_pybind.args, arg0: float) -> None

I would like to train a model with the train.py file, but I keep getting the following error:

    setattr(a, k, v)
    TypeError: (): incompatible function arguments. The following argument types are supported:
        1. (self: fasttext_pybind.args, arg0: float) -> None
    Invoked with: <fasttext_pybind.args object at 0x7f6bbed0c030>,
    '/home/van/Download/classification/egs/vntc_fasttext/snapshots/model'
This is my code:

    import argparse
    import os
    import sys
    from os.path import join, dirname, abspath

    import fasttext

    cwd = dirname(abspath(__file__))
    sys.path.append(dirname(dirname(cwd)))

    parser = argparse.ArgumentParser("train.py")
    parser.add_argument("--train", help="train data path", required=True)
    parser.add_argument("-s", "--serialization-dir",
                        help="directory in which to save the model and its logs",
                        required=True)
    args = parser.parse_args()

    train_path = os.path.abspath(join(cwd, args.train))
    serialization_dir = os.path.abspath(join(cwd, args.serialization_dir))

    fasttext.train_supervised(train_path, '{}/model'.format(serialization_dir))
    print("Done!!!")
Could someone please help me fix this issue?
The error comes from the fact that the method expects a learning-rate number as its second positional argument, not an output-path string (see the train_supervised parameters).
The Python module is slightly different from the command-line interface (see the supervised tutorial and the help on the Python module).
To train the model, use the following command:

    model = fasttext.train_supervised(input=train_path)

Then, to save the model, use:

    model.save_model('{}/model'.format(serialization_dir))

How to pass arguments to scoring file when deploying a Model in AzureML

I am deploying a trained model to an ACI endpoint on Azure Machine Learning, using the Python SDK.
I have created my score.py file, but I would like that file to be called with an argument being passed (just like with a training file) that I can interpret using argparse.
However, I don't seem to find how I can pass arguments.
This is the code I have to create the InferenceConfig and environment, and it obviously does not work. Should I fall back on using the extra Docker file steps instead?
    from azureml.core.conda_dependencies import CondaDependencies
    from azureml.core.environment import Environment
    from azureml.core.model import InferenceConfig

    env = Environment('my_hosted_environment')
    env.python.conda_dependencies = CondaDependencies.create(
        conda_packages=['scikit-learn'],
        pip_packages=['azureml-defaults'])

    scoring_script = 'score.py --model_name ' + model_name
    inference_config = InferenceConfig(entry_script=scoring_script, environment=env)
Adding the score.py for reference, to show how I'd love to use the arguments in that script:
    # removed imports
    import argparse

    def init():
        global model
        parser = argparse.ArgumentParser(description="Load sklearn model")
        parser.add_argument('--model_name', dest="model_name", required=True)
        args, _ = parser.parse_known_args()
        model_path = Model.get_model_path(model_name=args.model_name)
        model = joblib.load(model_path)

    def run(raw_data):
        try:
            data = json.loads(raw_data)['data']
            data = np.array(data)
            result = model.predict(data)
            return result.tolist()
        except Exception as e:
            result = str(e)
            return result
Interested to hear your thoughts.
This question is a year old; providing a solution to help those who may still be looking for an answer. My answer to a similar question is here. You can pass native Python datatype variables into the inference config and access them as environment variables within the scoring script.
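Following that approach, the deployment side sets the value on the environment and the scoring script reads it back from the process environment instead of parsing argv. A minimal sketch of the score.py side (the MODEL_NAME variable name and the azureml calls in the comment are illustrative assumptions, not verified against a live workspace):

```python
import os

# Deployment side (sketch; requires azureml-core):
#   env = Environment('my_hosted_environment')
#   env.environment_variables = {'MODEL_NAME': model_name}
#   inference_config = InferenceConfig(entry_script='score.py', environment=env)

# score.py side: read the value back instead of parsing sys.argv.
def get_model_name(default="sklearn-model"):
    # Fall back to a default so local testing works without the variable set.
    return os.environ.get("MODEL_NAME", default)

os.environ["MODEL_NAME"] = "my-registered-model"   # simulate the deployment
print(get_model_name())  # -> my-registered-model
```

Environment variables survive however the hosting stack decides to invoke the entry script, which is the root problem with trying to smuggle flags into entry_script.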
I tackled this problem differently. I could not find a proper, easy-to-follow way to pass arguments to score.py when it is consumed by InferenceConfig. Instead, I followed these four steps:

1. Create score_template.py and define the variables which should be assigned
2. Read the content of score_template.py and modify it by replacing the variables with the desired values
3. Write the modified contents into score.py
4. Finally, pass score.py to InferenceConfig
STEP 1 in score_template.py:
    import json
    import os

    import joblib
    import numpy as np
    import pandas as pd
    from azureml.core.model import Model

    def init():
        global model
        # model = joblib.load('recommender.pkl')
        model_name = "#MODEL_NAME#"
        model_saved_file = '#MODEL_SAVED_FILE#'
        try:
            model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), model_saved_file)
            model = joblib.load(model_path)
        except:
            model_path = Model.get_model_path(model_name)
            model = joblib.load(model_path)

    def run(raw_data):
        try:
            # data = pd.json_normalize(data)
            # data = np.array(data['data'])
            data = json.loads(raw_data)["data"]
            data = np.array(data)
            result = model.predict(data)
            # you can return any datatype as long as it is JSON-serializable
            return {"result": result.tolist()}
        except Exception as e:
            error = str(e)
            # error = data
            return error
STEPS 2-4 in deploy_model.py:
    # --Modify entry script / pass model name--
    entry_script = "score.py"
    entry_script_temp = "score_template.py"

    # Read in the entry script template
    print("Prepare Entry Script")
    with open(entry_script_temp, 'r') as file:
        entry_script_contents = file.read()

    # Replace the target strings
    entry_script_contents = entry_script_contents.replace('#MODEL_NAME#', model_name)
    entry_script_contents = entry_script_contents.replace('#MODEL_SAVED_FILE#', model_file_name)

    # Write the result to the entry script
    with open(entry_script, 'w') as file:
        file.write(entry_script_contents)

    # --Define configs for the deployment--
    print("Get Environment")
    env = Environment.get(workspace=ws, name=env_name)
    env.inferencing_stack_version = "latest"

    print("Inference Configuration")
    inference_config = InferenceConfig(entry_script=entry_script,
                                       environment=env,
                                       source_directory=base_path)
    aci_config = AciWebservice.deploy_configuration(cpu_cores=int(cpu_cores),
                                                    memory_gb=int(memory_gb),
                                                    location=location)

    # --Deploy the service--
    print("Deploy Model")
    print("model version:", model_artifact.version)
    service = Model.deploy(workspace=ws,
                           name=service_name,
                           models=[model_artifact],
                           inference_config=inference_config,
                           deployment_config=aci_config,
                           overwrite=True)
    service.wait_for_deployment(show_output=True)
How to deploy using environments can be found here: model-register-and-deploy.ipynb. The InferenceConfig class accepts source_directory and entry_script parameters, where source_directory is a path to the folder that contains all the files (score.py and any other additional files) needed to create the image.
The multi-model-register-and-deploy.ipynb notebook has code snippets on how to create an InferenceConfig with source_directory and entry_script:
    from azureml.core.webservice import Webservice
    from azureml.core.model import InferenceConfig
    from azureml.core.environment import Environment

    myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
    inference_config = InferenceConfig(entry_script="score.py", environment=myenv)

    service = Model.deploy(workspace=ws,
                           name='sklearn-mnist-svc',
                           models=[model],
                           inference_config=inference_config,
                           deployment_config=aciconfig)
    service.wait_for_deployment(show_output=True)
    print(service.scoring_uri)

Trying to assign a path to the ArgumentParser

I'm trying to access the "resources" folder with ArgumentParser. This code and the "resources" folder are in the same folder.
Just to try to run the code, I've put a print call in the predict function. However, this error occurs:

    predict.py: error: the following arguments are required: resources_path

How can I fix it?
    from argparse import ArgumentParser

    def parse_args():
        parser = ArgumentParser()
        parser.add_argument("resources_path", help='/resources')
        return parser.parse_args()

    def predict(resources_path):
        print(resources_path)

    if __name__ == '__main__':
        args = parse_args()
        predict(args.resources_path)
I am guessing from your error message that you are trying to call your program like this:

    python predict.py

The argument parser by default gets its arguments from sys.argv, i.e. the command line. You'll have to pass the path yourself, like this:

    python predict.py resources
It's possible that you want the resources argument to default to ./resources if you don't pass anything. (And I further assume you want ./resources, not /resources.) There are keyword arguments for that; note that a positional argument also needs nargs='?' for its default to apply:

    ...
    parser.add_argument('resources_path', nargs='?', default='./resources')
    ...
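The nargs='?' detail is easy to miss: without it, argparse still treats the positional as required and the default is never consulted. A quick self-contained check:

```python
import argparse

parser = argparse.ArgumentParser()
# nargs='?' makes the positional optional, so the default can kick in.
parser.add_argument("resources_path", nargs="?", default="./resources")

print(parser.parse_args([]).resources_path)        # -> ./resources
print(parser.parse_args(["data"]).resources_path)  # -> data
```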

Can't solve Python argparse error 'object has no attribute'

When I run this code I get:

    AttributeError: 'ArgumentParser' object has no attribute 'max_seed'

Here's the code:
    import argparse
    import ConfigParser

    CFG_FILE = '/my.cfg'

    # Get command line arguments
    args = argparse.ArgumentParser()
    args.add_argument('verb', choices=['new'])
    args.add_argument('--max_seed', type=int, default=1000)
    args.add_argument('--cmdline')
    args.parse_args()

    if args.max_seed:
        pass
    if args.cmdline:
        pass
My source file is called "fuzz.py"
You should first initialize the parser, add the arguments, and only then get the actual parsed arguments from parse_args() (see the example in the docs):
    import argparse
    import ConfigParser

    CFG_FILE = '/my.cfg'

    # Get command line arguments
    parser = argparse.ArgumentParser()
    parser.add_argument('verb', choices=['new'])
    parser.add_argument('--max_seed', type=int, default=1000)
    parser.add_argument('--cmdline')
    args = parser.parse_args()

    if args.max_seed:
        pass
    if args.cmdline:
        pass
Hope that helps.
If you use argparse's parsed arguments inside another class (somewhere you do self.args = parser.parse_args()), you might need to explicitly tell your lint parser to ignore Namespace type checking, as told by @frans in "Avoid Pylint warning E1101: 'Instance of .. has no .. member' for class with dynamic attributes":

Just to provide the answer that works for me now: as [The Compiler][1] suggested, you can add a rule for the problematic class in your project's .pylintrc:

    [TYPECHECK]
    ignored-classes=Namespace

[1]: https://stackoverflow.com/users/2085149/the-compiler
