I would like to gather all the pipeline runs from a particular pipeline folder in Azure Data Factory using Python and the Azure API.
The code below returns pipeline runs from the entire factory; I would like to pass an additional filter with a folder name so that only pipeline runs from that folder are returned:
import pandas as pd
from datetime import datetime, timedelta
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import *

look_back = 1  # days

# credentials, subscription_id, resource_group_name and factory_name are defined elsewhere
adf_client = DataFactoryManagementClient(credentials, subscription_id)

pfilter = PipelineRunFilterParameters(
    last_updated_after=datetime.now() - timedelta(days=look_back),
    last_updated_before=datetime.now() + timedelta(1))

paged_results = adf_client.pipeline_runs.query_by_factory(
    factory_name=factory_name,
    resource_group_name=resource_group_name,
    filter_parameters=pfilter,
    content_type="'application/json'")

print(paged_results.as_dict()['value'][0]['run_id'])
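One possible approach, sketched rather than verified: as far as I know, PipelineRunFilterParameters has no folder operand, but you can first list the pipelines that sit in the target folder (recent SDK versions expose the folder on PipelineResource) and then narrow the run query with a RunQueryFilter on PipelineName. The folder name "MyFolder" below is a placeholder, and the snippet reuses the variables from the code above:
from azure.mgmt.datafactory.models import RunQueryFilter, PipelineRunFilterParameters

folder_name = "MyFolder"  # hypothetical folder name

# Collect the names of all pipelines whose folder matches the target folder
pipelines_in_folder = [
    p.name
    for p in adf_client.pipelines.list_by_factory(
        resource_group_name=resource_group_name, factory_name=factory_name)
    if p.folder is not None and p.folder.name == folder_name
]

# Restrict the run query to those pipeline names
pfilter = PipelineRunFilterParameters(
    last_updated_after=datetime.now() - timedelta(days=look_back),
    last_updated_before=datetime.now() + timedelta(1),
    filters=[RunQueryFilter(operand='PipelineName', operator='In', values=pipelines_in_folder)])

paged_results = adf_client.pipeline_runs.query_by_factory(
    factory_name=factory_name,
    resource_group_name=resource_group_name,
    filter_parameters=pfilter)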
I'm trying to create a new experiment on mlflow but I have this problem:
Exception: Run with UUID l142ae5a7cf04a40902ae9ed7326093c is already active.
This is my code:
import mlflow
import mlflow.sklearn
import sys
mlflow.set_experiment("New experiment 2")
mlflow.set_tracking_uri('http://mlflow:5000')
st= mlflow.start_run(run_name='Test2')
id = st.info.run_id
mlflow.log_metric("score", score)
mlflow.sklearn.log_model(model, "wineModel")
You have to call mlflow.end_run() to finish the currently active run. Then you can start another one.
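A minimal sketch of how the question's snippet could look with that fix (score and model are assumed to be computed earlier in the script; using start_run as a context manager ends the run even if logging fails, and setting the tracking URI before set_experiment makes sure the experiment is created on the remote server):
import mlflow
import mlflow.sklearn

mlflow.set_tracking_uri('http://mlflow:5000')
mlflow.set_experiment("New experiment 2")

mlflow.end_run()  # closes any run left active by an earlier attempt; a no-op otherwise

with mlflow.start_run(run_name='Test2') as run:
    run_id = run.info.run_id
    # score and model are assumed to exist already
    mlflow.log_metric("score", score)
    mlflow.sklearn.log_model(model, "wineModel")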
I want to create an endpoint for a scikit-learn logistic regression model in AWS SageMaker. I have a train.py file which contains the training code for the SageMaker scikit-learn estimator.
import subprocess as sb
import pandas as pd
import numpy as np
import pickle, json
import sys

def install(package):
    sb.call([sys.executable, "-m", "pip", "install", package])

install('s3fs')

import argparse
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script.
    parser.add_argument('--solver', type=str, default='liblinear')

    # Data, model, and output directories
    parser.add_argument('--output_data_dir', type=str, default=os.environ.get('SM_OUTPUT_DIR'))
    parser.add_argument('--model_dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
    parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAIN'))

    args, _ = parser.parse_known_args()

    # ... load from args.train and args.test, train a model, write model to args.model_dir.
    input_files = [os.path.join(args.train, file) for file in os.listdir(args.train)]
    if len(input_files) == 0:
        raise ValueError(('There are no files in {}.\n' +
                          'This usually indicates that the channel ({}) was incorrectly specified,\n' +
                          'the data specification in S3 was incorrectly specified or the role specified\n' +
                          'does not have permission to access the data.').format(args.train, "train"))

    raw_data = [pd.read_csv(file, header=None, engine="python") for file in input_files]
    df = pd.concat(raw_data)

    y = df.iloc[:, 0]
    X = df.iloc[:, 1:]

    solver = args.solver

    from sklearn.linear_model import LogisticRegression
    lr = LogisticRegression(solver=solver).fit(X, y)

from sklearn.externals import joblib

def model_fn(model_dir):
    lr = joblib.dump(lr, "model.joblib")
    return lr
In my SageMaker notebook I ran the following code:
import os
import boto3
import re
import copy
import time
from time import gmtime, strftime
from sagemaker import get_execution_role
import sagemaker
role = get_execution_role()
region = boto3.Session().region_name
bucket=<bucket> # Replace with your s3 bucket name
prefix = <prefix>
output_path = 's3://{}/{}/{}'.format(bucket, prefix,'output_data_dir')
train_data = 's3://{}/{}/{}'.format(bucket, prefix, 'train')
train_channel = sagemaker.session.s3_input(train_data, content_type='text/csv')
from sagemaker.sklearn.estimator import SKLearn
sklearn = SKLearn(
    entry_point='train.py',
    train_instance_type="ml.m4.xlarge",
    role=role,
    output_path=output_path,
    sagemaker_session=sagemaker.Session(),
    hyperparameters={'solver': 'liblinear'})
I'm fitting my model here
sklearn.fit({'train': train_channel})
Now, to create the endpoint:
from sagemaker.predictor import csv_serializer
predictor = sklearn.deploy(1, 'ml.m4.xlarge')
While trying to create the endpoint, it throws:
ClientError: An error occurred (ValidationException) when calling the CreateModel operation: Could not find model data at s3://<bucket>/<prefix>/output_data_dir/sagemaker-scikit-learn-x-y-z-000/output/model.tar.gz.
I checked my S3 bucket. Inside my output_data_dir there is a sagemaker-scikit-learn-x-y-z-000 directory which contains a debug-output/training_job_end.ts file. An additional directory named sagemaker-scikit-learn-x-y-z-000 got created outside my <prefix> folder, and it holds source/sourcedir.tar.gz. Generally, whenever I train models with SageMaker built-in algorithms, files like output_data_dir/sagemaker-scikit-learn-x-y-z-000/output/model.tar.gz get created. Can someone please tell me where my scikit-learn model got stored, how to get source/sourcedir.tar.gz placed under my <prefix> without doing it manually, and how to see the contents of sourcedir.tar.gz?
Edit: I elaborated the question regarding the prefix. Whenever I run sklearn.fit(), two objects with the same job name sagemaker-scikit-learn-x-y-z-000 are created in my S3 bucket: one at <bucket>/<prefix>/output_data_dir/sagemaker-scikit-learn-x-y-z-000/debug-output/training_job_end.ts and the other at <bucket>/sagemaker-scikit-learn-x-y-z-000/source/sourcedir.tar.gz. Why is the second one not created inside my <prefix> like the first one? What is contained in the sourcedir.tar.gz file?
I am not sure your model is actually being stored if you can't find it in S3. You define a function that wraps the joblib.dump call in your entry-point script; I make that call at the end of the main block instead. For example:
# persist model
path = os.path.join(args.model_dir, "model.joblib")
joblib.dump(myestimator, path)
print('model persisted at ' + path)
Then the file can be found in .../output/model.tar.gz, just as in your other cases. To double-check that it is created, you may want to add a print statement and look for its output in the training job's log.
You must dump the model as the last step of your training code. Currently you are doing it in the wrong place, as the goal of model_fn is to load the model for inference, not to save it during training.
Add the dump after training:
lr = LogisticRegression(solver=solver).fit(X, y)
joblib.dump(lr, os.path.join(args.model_dir, "model.joblib"))
Change model_fn() to load the model instead of dumping it.
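A minimal sketch of what that model_fn could look like, assuming the model was dumped as model.joblib as in the snippet above:
import os
from sklearn.externals import joblib  # or plain "import joblib" on newer scikit-learn

def model_fn(model_dir):
    # SageMaker calls this at inference time; load the artifact written during training
    return joblib.load(os.path.join(model_dir, "model.joblib"))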
This post here explains it well:
https://towardsdatascience.com/deploying-a-pre-trained-sklearn-model-on-amazon-sagemaker-826a2b5ac0b6
In short, the tar.gz gets created by tar-gzipping the model.joblib binary, which was first created with joblib.dump. To quote the article:
import subprocess

# Build tar file with model data + inference code
bashCommand = "tar -cvpzf model.tar.gz model.joblib inference.py"
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
The inference.py is probably optional.
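Regarding the rest of the question: sourcedir.tar.gz is just your entry-point code (train.py plus any files in source_dir), tar-gzipped and uploaded by the SageMaker Python SDK so the training container can fetch it. If I remember correctly, the framework estimators accept a code_location argument that controls the S3 prefix it is uploaded to, which would let you keep it under your <prefix> without moving it manually. To look inside the archive, a small boto3/tarfile sketch like the following should work (the bucket and key below are placeholders for your actual values):
import tarfile
import boto3

# Placeholders -- substitute your own bucket name and training job name
bucket = "my-bucket"
key = "sagemaker-scikit-learn-x-y-z-000/source/sourcedir.tar.gz"

# Download the archive locally and list what it contains
boto3.client("s3").download_file(bucket, key, "sourcedir.tar.gz")
with tarfile.open("sourcedir.tar.gz", "r:gz") as tar:
    print(tar.getnames())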
I'm trying to write my own chatbot with the RASA framework.
Right now I'm just playing around with it and I have the following piece of code for training purposes.
from rasa.nlu.training_data import load_data
from rasa.nlu.config import RasaNLUModelConfig
from rasa.nlu.model import Trainer
from rasa.nlu import config
training_data = load_data("./data/nlu.md")
trainer = Trainer(config.load("config.yml"))
interpreter = trainer.train(training_data)
model_directory = trainer.persist("./models/nlu",fixed_model_name="current")
Now, I read that if I wanted to test it I should do something like this.
from rasa.nlu.evaluate import run_evaluation
run_evaluation("nlu.md", model_directory)
But this function is not available anymore, neither in rasa.nlu.evaluate nor in rasa.nlu.test!
What's the way, then, of testing a RASA model?
The module was renamed.
Please import
from rasa.nlu.test import run_evaluation
Alternatively, you can now also do
from rasa.nlu import test
test_result = test(path_to_test_data, unpacked_model)
intent_evaluation_report = test_result["intent_evaluation"]["report"]
print(intent_evaluation_report)
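With the renamed import, the evaluation snippet from the question then simply becomes:
from rasa.nlu.test import run_evaluation

run_evaluation("nlu.md", model_directory)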
I'm trying to use Ansible's Python API in order to write a test API (in Python) which can run a playbook programmatically and add new nodes to a Hadoop cluster. As we know, at least one node in the cluster has to be the Namenode and JobTracker (MRv1). For simplicity, let's say the JobTracker and the Namenode are on the same node (namenode_ip).
Thus, in order to use Ansible to create a new node and have it register itself with the Namenode, I've created the following Python utility:
from ansible.playbook import PlayBook
from ansible.inventory import Inventory
from ansible.inventory import Group
from ansible.inventory import Host
from ansible import constants as C
from ansible import callbacks
from ansible import utils
import os
import logging
import config

def run_playbook(ipaddress, namenode_ip, playbook, keyfile):
    utils.VERBOSITY = 0
    playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
    stats = callbacks.AggregateStats()
    runner_cb = callbacks.PlaybookRunnerCallbacks(stats, verbose=utils.VERBOSITY)

    # Build an in-memory inventory with the new node in a "new-nodes" group
    host = Host(name=ipaddress)
    group = Group(name="new-nodes")
    group.add_host(host)

    inventory = Inventory(host_list=[], vault_password="Hello123")
    inventory.add_group(group)

    key_file = keyfile
    playbook_path = os.path.join(config.ANSIBLE_PLAYBOOKS, playbook)

    pb = PlayBook(
        playbook=playbook_path,
        inventory=inventory,
        remote_user='deploy',
        callbacks=playbook_cb,
        runner_callbacks=runner_cb,
        stats=stats,
        private_key_file=key_file
    )

    results = pb.run()
    print results
However, the Ansible documentation for the Python API is very poorly written (it doesn't give any detail beyond a simple example). What I need is the equivalent of:
ansible-playbook -i hadoop_config -e "namenode_ip=1.2.3.4"
That is, I want to pass the value of namenode_ip dynamically to the inventory using the Python API. How can I do that?
This should be as simple as adding one or more of these lines to your script after instantiating your group object and before running your playbook:
group.set_variable("foo", "BAR")
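Applied to the run_playbook utility above, that would presumably look like this (assuming your playbook references a variable named namenode_ip):
host = Host(name=ipaddress)
group = Group(name="new-nodes")
group.add_host(host)
# Make namenode_ip available to the plays that target this group,
# similar in effect to -e "namenode_ip=1.2.3.4" on the command line
group.set_variable("namenode_ip", namenode_ip)

inventory = Inventory(host_list=[], vault_password="Hello123")
inventory.add_group(group)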