Kedro template configuration does not load/parse variables - python

A follow-up to this question. I am using Kedro v0.18.2 and trying to use the TemplatedConfigLoader, so I have created a globals.yml under conf/base, which looks like this:
paths:
  base_path: s3://my_project

datasets:
  pdf: base.PDFDataSet
  png: pillow.ImageDataSet
  csv: pandas.CSVDataSet
  excel: pandas.ExcelDataSet

data_folders:
  raw: 01_raw
  intermediate: 02_intermediate
  primary: 03_primary
  feature: 04_feature
  model_input: 05_model_input
  models: 06_models
  model_output: 07_model_output
  reporting: 08_reporting
I have followed the documentation and uncommented the relevant lines in settings.py, like so:
"""Project settings. There is no need to edit this file unless you want to change values
from the Kedro defaults. For further information, including these default values, see
https://kedro.readthedocs.io/en/stable/kedro_project_setup/settings.html."""
# Instantiated project hooks.
# from certifai.hooks import ProjectHooks
# HOOKS = (ProjectHooks(),)
# Installed plugins for which to disable hook auto-registration.
# DISABLE_HOOKS_FOR_PLUGINS = ("kedro-viz",)
# Class that manages storing KedroSession data.
# from kedro.framework.session.store import ShelveStore
# SESSION_STORE_CLASS = ShelveStore
# Keyword arguments to pass to the `SESSION_STORE_CLASS` constructor.
# SESSION_STORE_ARGS = {
#     "path": "./sessions"
# }
# Class that manages Kedro's library components.
# from kedro.framework.context import KedroContext
# CONTEXT_CLASS = KedroContext
# Directory that holds configuration.
# CONF_SOURCE = "conf"
# Class that manages how configuration is loaded.
from kedro.config import TemplatedConfigLoader
CONFIG_LOADER_CLASS = TemplatedConfigLoader
CONFIG_LOADER_ARGS = {
    "globals_pattern": "*globals.yml",
}
# Class that manages the Data Catalog.
# from kedro.io import DataCatalog
# DATA_CATALOG_CLASS = DataCatalog
catalog.yml looks like this:
_label_images: &label_images
  type: PartitionedDataSet
  path: ${paths.base_path}/data/${data_folders.raw}/label_images
  dataset: ${datasets.png}

label_images_png:
  <<: *label_images
  filename_suffix: .png

label_images_jpg:
  <<: *label_images
  filename_suffix: .jpg

label_images_jpeg:
  <<: *label_images
  filename_suffix: .jpeg

label_images_pdf:
  <<: *label_images
  dataset: base.PDFDataSet
  filename_suffix: .pdf

my_project_label_extracts:
  type: PartitionedDataSet
  path: s3://my_project/data/01_raw/label_extracts
  dataset: pandas.ExcelDataSet
My testing script looks like this:
from kedro.config import ConfigLoader
from kedro.framework.project import settings
from pathlib import Path
from kedro.extras.datasets import pillow
project_path = Path(__file__).parent.parent.parent
conf_path = str(project_path / settings.CONF_SOURCE)
conf_loader = ConfigLoader(conf_source=conf_path, env="base")
conf_catalog = conf_loader.get("catalog*", "catalog*/**")
images_dataset = pillow.ImageDataSet.from_config("label_images_png", conf_catalog["label_images_png"])
images_loader = images_dataset.load()
images_loader["00337180800086"]().show()
With hard-coded values inside the catalog.yml, the script runs and outputs an image. However, with the templated config it does not work. Am I missing something?
P.S. Apologies if this question is a duplicate.

The first bug I've noticed is in your catalog for the entry:
_label_images: &label_images
  type: PartitionedDataSet
  path: ${paths.base_path}/data/${data_folders.raw}/label_images
  dataset: ${datasets.png}
You've missed the type key for the dataset. The correct entry should be:
_label_images: &label_images
  type: PartitionedDataSet
  path: ${paths.base_path}/data/${data_folders.raw}/label_images
  dataset:
    type: ${datasets.png}
If you now run your script with the TemplatedConfigLoader, you should hopefully no longer get the errors you mentioned:
from kedro.config import ConfigLoader, TemplatedConfigLoader
from kedro.framework.project import settings
from pathlib import Path
from kedro.extras.datasets import pillow
project_path = Path(__file__).parent.parent.parent
conf_path = str(project_path / settings.CONF_SOURCE)
conf_loader = TemplatedConfigLoader(conf_source=conf_path, env="base", globals_pattern="*globals.yml")
conf_catalog = conf_loader.get("catalog*", "catalog*/**")
images_dataset = pillow.ImageDataSet.from_config("label_images_png", conf_catalog["label_images_png"])
images_loader = images_dataset.load()
images_loader["00337180800086"]().show()
For ease of communication you might want to join the Kedro Discord channel so we can respond to you in real time: https://discord.gg/akJDeVaxnB

Related

Argo - submit workflow from python with input parameter file

I basically want to run this command: argo submit -n argo workflows/workflow.yaml -f params.json through the official Python SDK.
This example covers how to submit a workflow manifest, but I don't know where to add the input parameter file.
import os
from pprint import pprint
import yaml
from pathlib import Path
import argo_workflows
from argo_workflows.api import workflow_service_api
from argo_workflows.model.io_argoproj_workflow_v1alpha1_workflow_create_request import \
    IoArgoprojWorkflowV1alpha1WorkflowCreateRequest

configuration = argo_workflows.Configuration(host="https://localhost:2746")
configuration.verify_ssl = False

with open("workflows/workflow.yaml", "r") as f:
    manifest = yaml.safe_load(f)

api_client = argo_workflows.ApiClient(configuration)
api_instance = workflow_service_api.WorkflowServiceApi(api_client)

api_response = api_instance.create_workflow(
    namespace="argo",
    body=IoArgoprojWorkflowV1alpha1WorkflowCreateRequest(workflow=manifest, _check_type=False),
    _check_return_type=False)
pprint(api_response)
Where do I pass in the params.json file?
I found this snippet in the docs of WorkflowServiceApi.md (which was apparently too big to render as markdown):
import time
import argo_workflows
from argo_workflows.api import workflow_service_api
from argo_workflows.model.grpc_gateway_runtime_error import GrpcGatewayRuntimeError
from argo_workflows.model.io_argoproj_workflow_v1alpha1_workflow_submit_request import IoArgoprojWorkflowV1alpha1WorkflowSubmitRequest
from argo_workflows.model.io_argoproj_workflow_v1alpha1_workflow import IoArgoprojWorkflowV1alpha1Workflow
from pprint import pprint

# Defining the host is optional and defaults to http://localhost:2746
# See configuration.py for a list of all supported configuration parameters.
configuration = argo_workflows.Configuration(
    host = "http://localhost:2746"
)

# Enter a context with an instance of the API client
with argo_workflows.ApiClient() as api_client:
    # Create an instance of the API class
    api_instance = workflow_service_api.WorkflowServiceApi(api_client)
    namespace = "namespace_example"  # str |
    body = IoArgoprojWorkflowV1alpha1WorkflowSubmitRequest(
        namespace="namespace_example",
        resource_kind="resource_kind_example",
        resource_name="resource_name_example",
        submit_options=IoArgoprojWorkflowV1alpha1SubmitOpts(
            annotations="annotations_example",
            dry_run=True,
            entry_point="entry_point_example",
            generate_name="generate_name_example",
            labels="labels_example",
            name="name_example",
            owner_reference=OwnerReference(
                api_version="api_version_example",
                block_owner_deletion=True,
                controller=True,
                kind="kind_example",
                name="name_example",
                uid="uid_example",
            ),
            parameter_file="parameter_file_example",
            parameters=[
                "parameters_example",
            ],
            pod_priority_class_name="pod_priority_class_name_example",
            priority=1,
            server_dry_run=True,
            service_account="service_account_example",
        ),
    )  # IoArgoprojWorkflowV1alpha1WorkflowSubmitRequest |
    # example passing only required values which don't have defaults set
    try:
        api_response = api_instance.submit_workflow(namespace, body)
        pprint(api_response)
    except argo_workflows.ApiException as e:
        print("Exception when calling WorkflowServiceApi->submit_workflow: %s\n" % e)
Have you tried using an IoArgoprojWorkflowV1alpha1WorkflowSubmitRequest? It looks like it has submit_options of type IoArgoprojWorkflowV1alpha1SubmitOpts, which has a parameter_file param.
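As a rough sketch of how that could look for your case, reading params.json locally and passing its values as key=value parameters. The module path for IoArgoprojWorkflowV1alpha1SubmitOpts is assumed from the SDK's naming convention, and the example assumes your workflow is already registered as a WorkflowTemplate named my-template, so adjust resource_kind/resource_name to your setup:

import json
import argo_workflows
from argo_workflows.api import workflow_service_api
from argo_workflows.model.io_argoproj_workflow_v1alpha1_workflow_submit_request import \
    IoArgoprojWorkflowV1alpha1WorkflowSubmitRequest
# Module path assumed from the SDK's naming convention:
from argo_workflows.model.io_argoproj_workflow_v1alpha1_submit_opts import \
    IoArgoprojWorkflowV1alpha1SubmitOpts

configuration = argo_workflows.Configuration(host="https://localhost:2746")
configuration.verify_ssl = False

# Turn a flat params.json ({"key": "value", ...}) into the "key=value" strings
# that `argo submit -p key=value` would pass.
with open("params.json") as f:
    parameters = [f"{k}={v}" for k, v in json.load(f).items()]

api_client = argo_workflows.ApiClient(configuration)
api_instance = workflow_service_api.WorkflowServiceApi(api_client)

body = IoArgoprojWorkflowV1alpha1WorkflowSubmitRequest(
    namespace="argo",
    resource_kind="WorkflowTemplate",   # assumes the workflow exists as a template
    resource_name="my-template",        # hypothetical template name
    submit_options=IoArgoprojWorkflowV1alpha1SubmitOpts(parameters=parameters),
    _check_type=False,
)
api_response = api_instance.submit_workflow("argo", body, _check_return_type=False)
print(api_response)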

Cannot load azure.ml workspace from within a webservice entry_script! Where is `/var/azureml-app/` located?

I am creating an azure-ml webservice. The following script shows the code for creating the webservice and deploying it locally.
from azureml.core.model import InferenceConfig
from azureml.core.environment import Environment
from azureml.core import Workspace
from azureml.core.model import Model
ws = Workspace.from_config()
model = Model(ws,'textDNN-20News')
ws.write_config(file_name='config.json')
env = Environment(name="init-env")
python_packages = ['numpy', 'pandas']
for package in python_packages:
    env.python.conda_dependencies.add_pip_package(package)

dummy_inference_config = InferenceConfig(
    environment=env,
    source_directory="./source_dir",
    entry_script="./init_score.py",
)
from azureml.core.webservice import LocalWebservice
deployment_config = LocalWebservice.deploy_configuration(port=6789)
service = Model.deploy(
    ws,
    "myservice",
    [model],
    dummy_inference_config,
    deployment_config,
    overwrite=True,
)
service.wait_for_deployment(show_output=True)
As can be seen, the above code deploys the entry script init_score.py to my local machine. Within the entry script, I need to load the workspace again to connect to an Azure SQL database. I do it like the following:
from azureml.core import Dataset, Datastore
from azureml.data.datapath import DataPath
from azureml.core import Workspace
def init():
    pass

def run(data):
    try:
        ws = Workspace.from_config()
        # create tabular dataset from a SQL database in datastore
        datastore = Datastore.get(ws, 'sql_db_name')
        query = DataPath(datastore, 'SELECT * FROM my_table')
        tabular = Dataset.Tabular.from_sql_query(query, query_timeout=10)
        df = tabular.to_pandas_dataframe()
        return len(df)
    except Exception as e:
        output0 = "{}:".format(type(e).__name__)
        output1 = "{} ".format(e)
        output2 = f"{type(e).__name__} occurred at line {e.__traceback__.tb_lineno} of {__file__}"
        return output0 + output1 + output2
The try/except block is there to catch any exception that is thrown and return it as the output.
The exception that I keep getting is:
UserErrorException: The workspace configuration file config.json, could not be found in /var/azureml-app or its
parent directories. Please check whether the workspace configuration file exists, or provide the full path
to the configuration file as an argument. You can download a configuration file for your workspace,
via http://ml.azure.com and clicking on the name of your workspace in the right top.
I have actually tried saving the config file by passing an absolute path to the path argument of ws.write_config(path='my_absolute_path'), and also passing that path when loading via Workspace.from_config(path='my_absolute_path'), but I got pretty much the same error:
UserErrorException: The workspace configuration file config.json, could not be found in /var/azureml-app/my_absolute_path or its
parent directories. Please check whether the workspace configuration file exists, or provide the full path
to the configuration file as an argument. You can download a configuration file for your workspace,
via http://ml.azure.com and clicking on the name of your workspace in the right top.
It looks like even providing the path does not change the root directory from which the entry script starts searching.
I also tried saving the file directly to /var/azureml-app/, but this path is not recognized when I pass it to ws.write_config(path='/var/azureml-app/').
Do you have any idea where exactly /var/azureml-app/ is located?
Any idea on how to fix this?

Azure DevOps ML Error: We could not find config.json in: /home/vsts/work/1/s or in its parent directories

I am trying to create an Azure DevOps ML pipeline. The following code works 100% fine in Jupyter notebooks, but when I run it in Azure DevOps I get this error:
Traceback (most recent call last):
  File "src/my_custom_package/data.py", line 26, in <module>
    ws = Workspace.from_config()
  File "/opt/hostedtoolcache/Python/3.8.7/x64/lib/python3.8/site-packages/azureml/core/workspace.py", line 258, in from_config
    raise UserErrorException('We could not find config.json in: {} or in its parent directories. '
azureml.exceptions._azureml_exception.UserErrorException: UserErrorException:
        Message: We could not find config.json in: /home/vsts/work/1/s or in its parent directories. Please provide the full path to the config file or ensure that config.json exists in the parent directories.
        InnerException None
        ErrorResponse
{
    "error": {
        "code": "UserError",
        "message": "We could not find config.json in: /home/vsts/work/1/s or in its parent directories. Please provide the full path to the config file or ensure that config.json exists in the parent directories."
    }
}
The code is:
#import
from sklearn.model_selection import train_test_split
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core.experiment import Experiment
from datetime import date
from azureml.core import Workspace, Dataset
import pandas as pd
import numpy as np
import logging
#getdata
subscription_id = 'mysubid'
resource_group = 'myrg'
workspace_name = 'mlplayground'
workspace = Workspace(subscription_id, resource_group, workspace_name)
dataset = Dataset.get_by_name(workspace, name='correctData')
#auto ml
ws = Workspace.from_config()
automl_settings = {
    "iteration_timeout_minutes": 2880,
    "experiment_timeout_hours": 48,
    "enable_early_stopping": True,
    "primary_metric": 'spearman_correlation',
    "featurization": 'auto',
    "verbosity": logging.INFO,
    "n_cross_validations": 5,
    "max_concurrent_iterations": 4,
    "max_cores_per_iteration": -1,
}
cpu_cluster_name = "computecluster"
compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)
print(compute_target)
automl_config = AutoMLConfig(task='regression',
                             compute_target=compute_target,
                             debug_log='automated_ml_errors.log',
                             training_data=dataset,
                             label_column_name="paidInDays",
                             **automl_settings)
today = date.today()
d4 = today.strftime("%b-%d-%Y")
experiment = Experiment(ws, "myexperiment"+d4)
remote_run = experiment.submit(automl_config, show_output = True)
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
remote_run.wait_for_completion()
There is something weird happening in your code: you are getting the data from a first workspace (workspace = Workspace(subscription_id, resource_group, workspace_name)), then using the resources of a second one (ws = Workspace.from_config()). I would suggest avoiding code that relies on two different workspaces, especially since an underlying datasource can be registered (linked) to multiple workspaces (documentation).
In general, using a config.json file when instantiating a Workspace object results in interactive authentication: when your code is processed, a log message will ask you to open a specific URL and enter a code. This uses your Microsoft account to verify that you are authorized to access the Azure resource (in this case your Workspace('mysubid', 'myrg', 'mlplayground')). This approach has its limitations once you start deploying the code onto virtual machines or agents, since you will not always be there to check the logs, open the URL and authenticate yourself.
For this reason it is strongly recommended to set up a more advanced authentication method, and personally I would suggest the service principal one, since it is simple, convenient and secure if done properly.
You can follow Azure's official documentation here.
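As a minimal sketch of what that could look like on a build agent (the environment variable names and the app registration details below are placeholders of mine, not values prescribed by the SDK):

import os
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

# Credentials of an app registration that has been granted access to the workspace,
# kept in pipeline secrets / environment variables rather than in code.
sp_auth = ServicePrincipalAuthentication(
    tenant_id=os.environ["AZURE_TENANT_ID"],
    service_principal_id=os.environ["AZURE_CLIENT_ID"],
    service_principal_password=os.environ["AZURE_CLIENT_SECRET"],
)

# No config.json and no interactive prompt needed on the agent.
ws = Workspace(
    subscription_id="mysubid",
    resource_group="myrg",
    workspace_name="mlplayground",
    auth=sp_auth,
)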
You need to provide a config path to Workspace.from_config().
Under https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py you will find the following explanation of how to create a config file:
Create a Workspace:
from azureml.core import Workspace
ws = Workspace.create(name='myworkspace',
                      subscription_id='<azure-subscription-id>',
                      resource_group='myresourcegroup',
                      create_resource_group=True,
                      location='eastus2'
                      )
Save the workspace config:
ws.write_config(path="./file-path", file_name="config.json")
load the config from the default path:
ws = Workspace.from_config()
ws.get_details()
or load the config from a specified path:
ws = Workspace.from_config(path="my/path/config.json")
More details about how to create a Workspace from_config can be found here:
https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py#from-config-path-none--auth-none---logger-none---file-name-none-

How to fake a module in unittest?

I have code that is based on a configuration file called config.py, which defines a class called Config and contains all the configuration options. As the config file can be located anywhere in the user's storage, I use importlib.util to import it (as specified in this answer). I want to test this functionality with unittest for different configurations. How do I do it? A simple answer could be to make a different file for every possible config I want to test and then pass its path to the config loader, but this is not what I want. What I basically need is to implement the Config class and fake it as if it were the actual config file. How do I achieve this?
EDIT Here is the code I want to test:
import os
import re
import traceback
import importlib.util
from typing import Any

from blessings import Terminal

term = Terminal()


class UnknownOption(Exception):
    pass


class MissingOption(Exception):
    pass


def perform_checks(config: Any):
    checklist = {
        "required": {
            "root": [
                "flask",
                "react",
                "mysql",
                "MODE",
                "RUN_REACT_IN_DEVELOPMENT",
                "RUN_FLASK_IN_DEVELOPMENT",
            ],
            "flask": ["HOST", "PORT", "config"],
            # More options
        },
        "optional": {
            "react": [
                "HTTPS",
                # More options
            ],
            "mysql": ["AUTH_PLUGIN"],
        },
    }
    # Check for missing required options
    for kind in checklist["required"]:
        prop = config if kind == "root" else getattr(config, kind)
        for val in kind:
            if not hasattr(prop, val):
                raise MissingOption(
                    "Error while parsing config: "
                    + f"{prop}.{val} is a required config "
                    + "option but is not specified in the configuration file."
                )

    def unknown_option(option: str):
        raise UnknownOption(
            "Error while parsing config: Found an unknown option: " + option
        )

    # Check for unknown options
    for val in vars(config):
        if not re.match("__[a-zA-Z0-9_]*__", val) and not callable(val):
            if val in checklist["optional"]:
                for ch_val in vars(val):
                    if not re.match("__[a-zA-Z0-9_]*__", ch_val) and not callable(
                        ch_val
                    ):
                        if ch_val not in checklist["optional"][val]:
                            unknown_option(f"Config.{val}.{ch_val}")
            else:
                unknown_option(f"Config.{val}")

    # Check for illegal options
    if config.react.HTTPS == "true":
        # HTTPS was set to true but no cert file was specified
        if not hasattr(config.react, "SSL_KEY_FILE") or not hasattr(
            config.react, "SSL_CRT_FILE"
        ):
            raise MissingOption(
                "config.react.HTTPS was set to True without specifying a key file and a crt file, which is illegal"
            )
        else:
            # Files were specified but are non-existent
            if not os.path.exists(config.react.SSL_KEY_FILE):
                raise FileNotFoundError(
                    f"The file at { config.react.SSL_KEY_FILE } was set as the key file"
                    + "in configuration but was not found."
                )
            if not os.path.exists(config.react.SSL_CRT_FILE):
                raise FileNotFoundError(
                    f"The file at { config.react.SSL_CRT_FILE } was set as the certificate file"
                    + "in configuration but was not found."
                )


def load_from_pyfile(root: str = None):
    """
    This loads the configuration from a `config.py` file located in the project root
    """
    PROJECT_ROOT = root or os.path.abspath(
        ".." if os.path.abspath(".").split("/")[-1] == "lib" else "."
    )
    config_file = os.path.join(PROJECT_ROOT, "config.py")
    print(f"Loading config from {term.green(config_file)}")
    # Load the config file
    spec = importlib.util.spec_from_file_location("", config_file)
    config = importlib.util.module_from_spec(spec)
    # Execute the script
    spec.loader.exec_module(config)
    # Not needed anymore
    del spec, config_file
    # Load the mode from environment variable and
    # if it is not specified use development mode
    MODE = int(os.environ.get("PROJECT_MODE", -1))
    conf: Any
    try:
        conf = config.Config()
        conf.load(PROJECT_ROOT, MODE)
    except Exception:
        print(term.red("Fatal: There was an error while parsing the config.py file:"))
        traceback.print_exc()
        print("This error is non-recoverable. Aborting...")
        exit(1)
    print("Validating configuration...")
    perform_checks(conf)
    print(
        "Configuration",
        term.green("OK"),
    )
Without seeing a bit more of your code, it's tough to give a terribly direct answer, but most likely you want to use mocks.
In the unit test, you would use a mock to replace the Config class for the caller/consumer of that class. You then configure the mock to give the return values or side effects that are relevant to your test case.
Based on what you've posted, you may not need any mocks, just fixtures. That is, examples of Config that exercise a given case. In fact, it would probably be best to do exactly what you suggested originally--just make a few sample configs that exercise all the cases that matter.
It's not clear why that is undesirable--in my experience, it's much easier to read and understand a test with a coherent fixture than it is to deal with mocking and constructing objects in the test class. Also, you'd find this much easier to test if you broke the perform_checks function into parts, e.g., where you have comments.
However, you can construct the Config objects as you like and pass them to the check function in a unit test. It's a common pattern in Python development to use dict fixtures. Remembering that in Python, objects (including modules) have an interface much like a dictionary, suppose you had a unit test like this:
from unittest import TestCase
from your_code import perform_checks, MissingOption

class TestConfig(TestCase):
    def test_perform_checks(self):
        dummy_callable = lambda x: x
        config_fixture = {
            'key1': 'string_val',
            'key2': ['string_in_list', 'other_string_in_list'],
            'key3': {'sub_key': 'nested_val_string', 'callable_key': dummy_callable},
            # this is your in-place fixture
            # you make the keys and values that correspond to the feature of the Config file under test.
        }
        perform_checks(config_fixture)
        self.assertTrue(True)  # i would suggest returning True from the function instead, but this will cover the happy path case

    def test_perform_checks_invalid(self):
        config_fixture = {}
        with self.assertRaises(MissingOption):
            perform_checks(config_fixture)

    # more tests of more cases
You can also override the setUp() method of the unittest class if you want to share fixtures among tests. One way to do this would be to set up a valid fixture, then make the invalidating changes you want to test in each test method.
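A minimal sketch of that shared-fixture idea, using the same hypothetical your_code imports as above. The option names mirror the checklist in your perform_checks, and SimpleNamespace just stands in for the attribute-style access a real config module gives you; whether the happy-path test actually passes of course depends on your final perform_checks implementation:

from types import SimpleNamespace
from unittest import TestCase

from your_code import perform_checks, MissingOption

class TestConfigWithSharedFixture(TestCase):
    def setUp(self):
        # A valid baseline config shared by every test
        self.config = SimpleNamespace(
            flask=SimpleNamespace(HOST="0.0.0.0", PORT=5000, config="production"),
            react=SimpleNamespace(HTTPS="false"),
            mysql=SimpleNamespace(),
            MODE=1,
            RUN_REACT_IN_DEVELOPMENT=False,
            RUN_FLASK_IN_DEVELOPMENT=False,
        )

    def test_valid_config_passes(self):
        perform_checks(self.config)

    def test_missing_required_section_raises(self):
        # Invalidate the shared fixture for just this test
        del self.config.flask
        with self.assertRaises(MissingOption):
            perform_checks(self.config)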

How to pass arguments to scoring file when deploying a Model in AzureML

I am deploying a trained model to an ACI endpoint on Azure Machine Learning, using the Python SDK.
I have created my score.py file, but I would like that file to be called with an argument being passed (just like with a training file) that I can interpret using argparse.
However, I can't seem to find out how I can pass arguments.
This is the code I have to create the InferenceConfig environment, which obviously does not work. Should I fall back on using the extra Dockerfile steps instead?
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig
env = Environment('my_hosted_environment')
env.python.conda_dependencies = CondaDependencies.create(
    conda_packages=['scikit-learn'],
    pip_packages=['azureml-defaults'])
scoring_script = 'score.py --model_name ' + model_name
inference_config = InferenceConfig(entry_script=scoring_script, environment=env)
Adding the score.py for reference on how I'd love to use the arguments in that script:
# removed imports
import argparse

def init():
    global model
    parser = argparse.ArgumentParser(description="Load sklearn model")
    parser.add_argument('--model_name', dest="model_name", required=True)
    args, _ = parser.parse_known_args()
    model_path = Model.get_model_path(model_name=args.model_name)
    model = joblib.load(model_path)

def run(raw_data):
    try:
        data = json.loads(raw_data)['data']
        data = np.array(data)
        result = model.predict(data)
        return result.tolist()
    except Exception as e:
        result = str(e)
        return result
Interested to hear your thoughts
This question is a year old; providing a solution to help those who may still be looking for an answer. My answer to a similar question is here. You can pass native Python datatype variables into the inference config and access them as environment variables within the scoring script.
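As a minimal sketch of that idea (the registered model name my_model and the variable name MODEL_NAME are placeholders of mine, not values prescribed by the SDK):

from azureml.core.environment import Environment
from azureml.core.model import InferenceConfig

env = Environment('my_hosted_environment')
# Values set here are exposed as environment variables inside the scoring container
env.environment_variables = {"MODEL_NAME": "my_model"}

inference_config = InferenceConfig(entry_script="score.py", environment=env)

and then score.py reads the value back instead of parsing arguments:

import os
import joblib
from azureml.core.model import Model

def init():
    global model
    model_path = Model.get_model_path(model_name=os.environ["MODEL_NAME"])
    model = joblib.load(model_path)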
I tackled this problem differently. I could not find a (proper and easy to follow) way to pass arguments to score.py when it is consumed by InferenceConfig. Instead, what I did was follow these 4 steps:
1. Create score_template.py and define the variables that should be assigned
2. Read the content of score_template.py and modify it by replacing the variables with the desired values
3. Write the modified contents into score.py
4. Finally, pass score.py to InferenceConfig
STEP 1 in score_template.py:
import json
from azureml.core.model import Model
import os
import joblib
import pandas as pd
import numpy as np

def init():
    global model
    #model = joblib.load('recommender.pkl')
    model_name = "#MODEL_NAME#"
    model_saved_file = '#MODEL_SAVED_FILE#'
    try:
        model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), model_saved_file)
        model = joblib.load(model_path)
    except:
        model_path = Model.get_model_path(model_name)
        model = joblib.load(model_path)

def run(raw_data):
    try:
        #data=pd.json_normalize(data)
        #data=np.array(data['data'])
        data = json.loads(raw_data)["data"]
        data = np.array(data)
        result = model.predict(data)
        # you can return any datatype as long as it is JSON-serializable
        return {"result": result.tolist()}
    except Exception as e:
        error = str(e)
        #error= data
        return error
STEPS 2-4 in deploy_model.py:
#--Modify Entry Script/Pass Model Name--
entry_script = "score.py"
entry_script_temp = "score_template.py"
# Read in the entry script template
print("Prepare Entry Script")
with open(entry_script_temp, 'r') as file:
    entry_script_contents = file.read()
# Replace the target strings
entry_script_contents = entry_script_contents.replace('#MODEL_NAME#', model_name)
entry_script_contents = entry_script_contents.replace('#MODEL_SAVED_FILE#', model_file_name)
# Write the modified contents to the entry script
with open(entry_script, 'w') as file:
    file.write(entry_script_contents)

#--Define configs for the deployment---
print("Get Environment")
env = Environment.get(workspace=ws, name=env_name)
env.inferencing_stack_version = "latest"
print("Inference Configuration")
inference_config = InferenceConfig(entry_script=entry_script, environment=env, source_directory=base_path)
aci_config = AciWebservice.deploy_configuration(cpu_cores=int(cpu_cores), memory_gb=int(memory_gb), location=location)

#--Deploy the service---
print("Deploy Model")
print("model version:", model_artifact.version)
service = Model.deploy(workspace=ws,
                       name=service_name,
                       models=[model_artifact],
                       inference_config=inference_config,
                       deployment_config=aci_config,
                       overwrite=True)
service.wait_for_deployment(show_output=True)
An example of how to deploy using environments can be found here: model-register-and-deploy.ipynb. The InferenceConfig class accepts source_directory and entry_script parameters, where source_directory is a path to the folder that contains all the files (score.py and any other additional files) needed to create the image.
This multi-model-register-and-deploy.ipynb has code snippets on how to create InferenceConfig with source_directory and entry_script.
from azureml.core.webservice import Webservice
from azureml.core.model import InferenceConfig
from azureml.core.environment import Environment
myenv = Environment.from_conda_specification(name="myenv", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
service = Model.deploy(workspace=ws,
                       name='sklearn-mnist-svc',
                       models=[model],
                       inference_config=inference_config,
                       deployment_config=aciconfig)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
