Create mlflow experiment: Run with UUID is already active - python

I'm trying to create a new experiment on mlflow but I have this problem:
Exception: Run with UUID l142ae5a7cf04a40902ae9ed7326093c is already active.
This is my code:
import mlflow
import mlflow.sklearn
import sys
mlflow.set_experiment("New experiment 2")
mlflow.set_tracking_uri('http://mlflow:5000')
st= mlflow.start_run(run_name='Test2')
id = st.info.run_id
mlflow.log_metric("score", score)
mlflow.sklearn.log_model(model, "wineModel")

You have to call mlflow.end_run() to finish the run that is still active. Then you can start another one.
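For example, a minimal sketch of the fix (the tracking URI, experiment name, and run name are the ones from the question; the metric value is a placeholder since score isn't shown):

import mlflow

mlflow.set_tracking_uri('http://mlflow:5000')
mlflow.set_experiment("New experiment 2")

# End any run that is still active, e.g. one left over from a previous cell or crashed script
if mlflow.active_run() is not None:
    mlflow.end_run()

# Using start_run as a context manager ends the run automatically when the block exits
with mlflow.start_run(run_name='Test2') as run:
    run_id = run.info.run_id
    mlflow.log_metric("score", 0.9)  # placeholder value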

Related

tqdm.notebook progress bar is not running

I am running a model on Python 3.8 and using tqdm.notebook to check its progress. However, the progress bar is not updating, even though the generation itself works okay.
It just keeps showing the same thing on and on.
Here is my code and the model I'm running.
import numpy as np
import tensorflow as tf
from midi_ddsp.utils.midi_synthesis_utils import synthesize_mono_midi, conditioning_df_to_audio
from midi_ddsp.utils.inference_utils import get_process_group
from midi_ddsp.midi_ddsp_synthesize import load_pretrained_model
from midi_ddsp.data_handling.instrument_name_utils import INST_NAME_TO_ID_DICT
from tqdm.notebook import tqdm
# -----MIDI Synthesis-----
midi_file = '/Users/midi-ddsp/midi_example/ode_to_joy.mid'
# Load pre-trained model
synthesis_generator, expression_generator = load_pretrained_model()
# Synthesize with violin:
instrument_name = 'violin'
instrument_id = INST_NAME_TO_ID_DICT[instrument_name]
# Run model prediction
midi_audio, midi_control_params, midi_synth_params, conditioning_df = synthesize_mono_midi(
    synthesis_generator, expression_generator, midi_file, instrument_id, output_dir=None)
synthesized_audio = midi_audio # The synthesized audio
conditioning_df_changed = conditioning_df.copy()
I don't know what the problem is. I hope someone can tell me. I appreciate it!

pm4py Error: cannot import name 'factory' from 'pm4py.algo.discovery.alpha'

I am trying to run the following code:
import os
from pm4py.algo.discovery.alpha import factory as alpha_miner
from pm4py.objects.log.importer.xes import factory as xes_importer
event_log = xes_importer.import_log(os.path.join("tests","input_data","running-example.xes"))
net, initial_marking, final_marking = alpha_miner.apply(event_log)
gviz = pn_vis_factory.apply(net, initial_marking, final_marking)
pn_vis_factory.view(gviz)
However, when I run the alpha miner, I get an error message that factory cannot be imported.
What could be the reason, or does anyone know a solution for this?
Many thanks in advance for any answer.
The factory modules were renamed in newer pm4py versions; import the alpha miner as:
from pm4py.algo.discovery.alpha import algorithm as alpha_miner
You can find all process discovery algorithms and their documentation at:
https://pm4py.fit.fraunhofer.de/documentation#discovery
Try this:
import os
# Alpha Miner
from pm4py.algo.discovery.alpha import algorithm as alpha_miner
# XES Reader
from pm4py.objects.log.importer.xes import importer as xes_importer
# Visualize
from pm4py.visualization.petri_net import visualizer as pn_visualizer
log = xes_importer.apply(os.path.join("tests","input_data","running-example.xes"))
net, initial_marking, final_marking = alpha_miner.apply(log)
gviz = pn_visualizer.apply(net, initial_marking, final_marking)
pn_visualizer.view(gviz)
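If you want to save the diagram to a file instead of opening it in a viewer, the same visualizer module also offers a save function; continuing from the snippet above (the output filename is just an example):

# Save the rendered Petri net to disk instead of viewing it interactively
pn_visualizer.save(gviz, "alpha_petri_net.png")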

How to load an experiment in azureml?

I have many experiments, like:
and now I want to load an experiment:
#%% importing the packages and checking azureml.core
import azureml.core
import pandas as pd
import numpy as np
import logging
print("AzureML SDK Version: ", azureml.core.VERSION)
#%% connecting to Azure and creating the experiment
from azureml.core import Workspace, Experiment
ws = Workspace.from_config()
print(Experiment.list(ws))
#%%
Experiment = Experiment.from_directory('teste2-Monitor-Runs')
but I get:
"error": {
    "message": "No cache found for current project, try providing resource group and workspace arguments"
}
Reference: azureml.core.Experiment class - Azure Machine Learning Python documentation
Following AzureML's Experiment class documentation, an Experiment is defined as a container of trials or runs. If you wish to access the container, i.e. the Experiment as a whole, you can just access it by name (as long as you have the right workspace configured). See the snippet below:
from azureml.core import Experiment, Workspace
ws = Workspace.from_config()
my_exp = Experiment(ws, name="teste2-Monitor-Runs")
I believe it is done this way:
from azureml.core import Experiment, Workspace
ws = Workspace.from_config()
experiment = ws.experiments["teste2-Monitor-Runs"]
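Once you have the Experiment object, you can work with the runs it contains, for example (a minimal sketch; the experiment name is the one from the question):

from azureml.core import Experiment, Workspace

ws = Workspace.from_config()
experiment = Experiment(ws, name="teste2-Monitor-Runs")

# Iterate over the runs recorded under this experiment
for run in experiment.get_runs():
    print(run.id, run.get_status())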

How to test a RASA model?

I'm trying to write my own chatbot with the RASA framework.
Right now I'm just playing around with it and I have the following piece of code for training purposes.
from rasa.nlu.training_data import load_data
from rasa.nlu.config import RasaNLUModelConfig
from rasa.nlu.model import Trainer
from rasa.nlu import config
training_data = load_data("./data/nlu.md")
trainer = Trainer(config.load("config.yml"))
interpreter = trainer.train(training_data)
model_directory = trainer.persist("./models/nlu",fixed_model_name="current")
Now, I read that if I wanted to test it I should do something like this.
from rasa.nlu.evaluate import run_evaluation
run_evaluation("nlu.md", model_directory)
But this function is not available anymore, neither in rasa.nlu.evaluate nor in rasa.nlu.test!
What's the way, then, of testing a RASA model?
The module was renamed.
Please import
from rasa.nlu.test import run_evaluation
Alternatively, you can now also do:
from rasa.nlu import test
test_result = test(path_to_test_data, unpacked_model)
intent_evaluation_report = test_result["intent_evaluation"]["report"]
print(intent_evaluation_report)
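Putting the training code from the question and the renamed test module together, a minimal train-then-evaluate sketch (assuming the same Rasa 1.x NLU-only API and the nlu.md / config.yml files from the question):

from rasa.nlu.training_data import load_data
from rasa.nlu.model import Trainer
from rasa.nlu import config
from rasa.nlu.test import run_evaluation

# Train and persist the model, as in the question
training_data = load_data("./data/nlu.md")
trainer = Trainer(config.load("config.yml"))
interpreter = trainer.train(training_data)
model_directory = trainer.persist("./models/nlu", fixed_model_name="current")

# Evaluate the persisted model against an NLU data file
run_evaluation("./data/nlu.md", model_directory)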

Access to Spark from Flask app

I wrote a simple Flask app to pass some data to Spark. The script works in IPython Notebook, but not when I try to run it in its own server. I don't think that the Spark context is running within the script. How do I get Spark working in the following example?
from flask import Flask, request
from pyspark import SparkConf, SparkContext

app = Flask(__name__)

conf = SparkConf()
conf.setMaster("local")
conf.setAppName("SparkContext1")
conf.set("spark.executor.memory", "1g")
sc = SparkContext(conf=conf)

@app.route('/accessFunction', methods=['POST'])
def toyFunction():
    posted_data = sc.parallelize([request.get_data()])
    return str(posted_data.collect()[0])

if __name__ == '__main__':
    app.run(port=8080)
In IPython Notebook I don't define the SparkContext because it is automatically configured. I don't remember exactly how I did this; I followed some blogs.
On the Linux server I have set the .py to always be running and installed the latest Spark by following up to step 5 of this guide.
Edit:
Following the advice by davidism I have now instead resorted to simple programs with increasing complexity to localise the error.
First, I created a .py file with just the script from the answer below (after appropriately adjusting the links):
import sys
try:
    sys.path.append("your/spark/home/python")
    from pyspark import context
    print("Successfully imported Spark Modules")
except ImportError as e:
    print("Can not import Spark Modules", e)
This returns "Successfully imported Spark Modules". However, the next .py file I made returns an exception:
from pyspark import SparkContext
sc = SparkContext('local')
rdd = sc.parallelize([0])
print(rdd.count())
This returns the exception:
"Java gateway process exited before sending the driver its port number"
Searching around for similar problems I found this page, but when I run that code nothing happens: no print on the console and no error messages. Similarly, this did not help either; I get the same Java gateway exception as above. I have also installed Anaconda as I heard this may help unite Python and Java, but again no success...
Any suggestions about what to try next? I am at a loss.
Okay, so I'm going to answer my own question in the hope that someone out there won't suffer the same days of frustration! It turns out it was a combination of missing code and a bad setup.
Editing the code:
I did indeed need to initialise a Spark Context by appending the following in the preamble of my code:
from pyspark import SparkContext
sc = SparkContext('local')
So the full code will be:
from pyspark import SparkContext
sc = SparkContext('local')

from flask import Flask, request
app = Flask(__name__)

@app.route('/whateverYouWant', methods=['POST'])  # can set first param to '/'
def toyFunction():
    posted_data = sc.parallelize([request.get_data()])
    return str(posted_data.collect()[0])

if __name__ == '__main__':
    app.run(port=8080)  # note set to 8080!
Editing the setup:
It is essential that the file (yourfilename.py) is in the correct directory, namely it must be saved to the folder /home/ubuntu/spark-1.5.0-bin-hadoop2.6.
Then issue the following command within the directory:
./bin/spark-submit yourfilename.py
which initiates the service at 10.0.0.XX:8080/accessFunction/.
Note that the port must be set to 8080 or 8081: by default Spark only allows the web UI on these ports, for the master and worker respectively.
You can test out the service with a RESTful client or by opening up a new terminal and sending POST requests with cURL:
curl --data "DATA YOU WANT TO POST" http://10.0.0.XX:8080/accessFunction/
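If you prefer testing from Python rather than cURL, a minimal sketch using the requests library (same placeholder host and endpoint as above):

import requests

# POST some data to the Flask endpoint served via spark-submit
resp = requests.post(
    "http://10.0.0.XX:8080/accessFunction/",
    data="DATA YOU WANT TO POST",
)
print(resp.status_code)
print(resp.text)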
I was able to fix this problem by adding the location of PySpark and py4j to the path in my flaskapp.wsgi file. Here's the full content:
import sys
sys.path.insert(0, '/var/www/html/flaskapp')
sys.path.insert(1, '/usr/local/spark-2.0.2-bin-hadoop2.7/python')
sys.path.insert(2, '/usr/local/spark-2.0.2-bin-hadoop2.7/python/lib/py4j-0.10.3-src.zip')
from flaskapp import app as application
Modify your .py file as shown in the linked guide 'Using IPython Notebook with Spark', second point of that section. Instead of sys.path.insert, use sys.path.append. Try inserting this snippet:
import sys
try:
    sys.path.append("your/spark/home/python")
    from pyspark import context
    print("Successfully imported Spark Modules")
except ImportError as e:
    print("Can not import Spark Modules", e)
