ignore_invalid_triggers not working - python

I am using the pytransitions library (documented here) to implement a Finite State Machine. One of the features outlined is the ability to ignore invalid triggers. Here is the example as per the documentation:
# Globally suppress invalid trigger exceptions
m = Machine(lump, states, initial='solid', ignore_invalid_triggers=True)
If ignore_invalid_triggers is set to True, no error should be thrown for triggers that are invalid.
Here is a sample of the code I am trying to construct:
from transitions import Machine
states = ['changes ongoing', 'changes complete', 'changes pushed', 'code reviewed', 'merged']
triggers = ['git commit', 'git push', 'got plus2', 'merged']
# Initialize the state machine
git_user = Machine(states=states, initial=states[0], ignore_invalid_triggers=True, ordered_transitions=True)
# Create the FSM using the data provided
for i in range(len(triggers)):
    git_user.add_transition(trigger=triggers[i], source=states[i], dest=states[i+1])
print(git_user.state)
git_user.trigger('git commit')
print(git_user.state)
git_user.trigger('invalid') # This line will throw an AttributeError
The produced error:
changes ongoing
changes complete
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/transitions/core.py", line 58, in _get_trigger
    raise AttributeError("Model has no trigger named '%s'" % trigger_name)
AttributeError: Model has no trigger named 'invalid'
Process finished with exit code 1
I am unsure of why an error is being thrown when ignore_invalid_triggers=True.
There is limited information on this library besides the documentation on the official github page. If anyone has any insight on this I would appreciate the help.
Thanks in advance.

For a trigger to count as merely invalid under the rules set out in the documentation, the trigger name has to be defined somewhere in the model. For instance, try the trigger "merged" from the state "changes ongoing". You get an AttributeError because "invalid" is not a trigger at all: you defined a list of four, and that's not one of them.
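For example, continuing from the code in the question (a quick sketch):
# 'merged' IS a defined trigger, just not valid from 'changes ongoing',
# so with ignore_invalid_triggers=True it is silently ignored:
git_user.trigger('merged')
print(git_user.state)  # still 'changes ongoing'

# 'invalid' is not a trigger anywhere in the model, so the lookup itself
# fails and an AttributeError is raised regardless of the flag:
git_user.trigger('invalid')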
To see the effect of establishing "invalid" as a trigger, add an end-to-start transition (the last line below) after your nice linear loop:
# Create the FSM using the data provided
for i in range(len(triggers)):
    git_user.add_transition(trigger=triggers[i], source=states[i], dest=states[i+1])
git_user.add_transition(trigger="invalid", source=states[-1], dest=states[0])
Now your code should run as expected, ignoring that invalid transition.


Smartsheet API in Python: how to access response data from client

I am trying to understand / plan for the rate limits, particularly for the get_cell_history method in the python api.
When I run the code below, I see the following response printed to my console:
{"response": {"statusCode": 429, "reason": "Too Many Requests", "content": {"errorCode": 4003, "message": "Rate limit exceeded.", "refId": "SOME_REF_ID"}}}
But I am trying to force my if statement to read the response and check for the 429 status code:
import smartsheet
smartsheet_client = smartsheet.Smartsheet('My_API_KEY')
smartsheet_client.errors_as_exceptions(True)
sheet_id = 'My_Sheet_ID'
edit_sheet = smartsheet_client.Sheets.get_sheet(sheet_id) # type: smartsheet.smartsheet.models.Sheet
for row in edit_sheet.rows:  # type: smartsheet.smartsheet.models.Row
    for cell in row.cells:  # type: smartsheet.smartsheet.models.Cell
        cell_history = smartsheet_client.Cells.get_cell_history(sheet_id, row.id, cell.column_id,
                                                                include_all=True)
        if cell_history.request_response.status_code == 429:
            print('Rate limit exceeded.')
        elif cell_history.request_response.status_code == 200:
            print(f'Found Cell History: {len(cell_history.data)} edits')
Where can I access this response? Why does my program run without printing out the "Rate limit exceeded" string?
I am using Python 3.8 and the package smartsheet-python-sdk==2.105.1
The following code shows how to successfully catch an API error when using the Smartsheet Python SDK. For simplicity, I've just used the Get Sheet operation and handled the error with status code 404 in this example, but you should be able to implement the same approach in your code. Note that you need to include the smartsheet_client.errors_as_exceptions() line (as shown in this example) so that Smartsheet errors get raised as exceptions.
# raise Smartsheet errors as exceptions
smartsheet_client.errors_as_exceptions()

try:
    # call the Get Sheet operation
    # if the request is successful, my_sheet contains the Sheet object
    my_sheet = smartsheet_client.Sheets.get_sheet(4183817838716804)
except smartsheet.exceptions.SmartsheetException as e:
    # handle the API error
    if isinstance(e, smartsheet.exceptions.ApiError):
        if e.error.result.status_code == 404:
            print('Sheet not found.')
        else:
            print('Another type of error occurred: ' + str(e.error.result.status_code))
Finally, here's a bit of additional info about error handling with the Smartsheet Python SDK that may be helpful (from this other SO thread).
SmartsheetException is the base class for all of the exceptions raised by the SDK. The two most common types of SmartsheetException are ApiError and HttpError. After trapping the exception, first determine the exception type using an isinstance test.
The ApiError exception has an Error class object accessible through the error property, and then that in turn points to an ErrorResult class accessible through the result property.
The details of the API error are stored in that ErrorResult.
Note that to make the Python SDK raise exceptions for API errors, you must call the errors_as_exceptions() method on the client object.
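Putting that together, a minimal sketch of the isinstance pattern (sheet_id is assumed to be defined, and the ErrorResult attribute names follow the SDK description above):
try:
    my_sheet = smartsheet_client.Sheets.get_sheet(sheet_id)
except smartsheet.exceptions.SmartsheetException as e:
    if isinstance(e, smartsheet.exceptions.ApiError):
        # API-level error: the details live in e.error.result (an ErrorResult)
        print(e.error.result.status_code, e.error.result.message)
    elif isinstance(e, smartsheet.exceptions.HttpError):
        # transport-level HTTP error
        print(e)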
---------------------
UPDATE (retry logic):
To repro the behavior you've described in your first comment below, I used (most of) the code from your original post above, and added some print statements to it. My code is as follows:
# raise Smartsheet errors as exceptions
smartsheet_client.errors_as_exceptions()

sheet_id = 5551639177258884
edit_sheet = smartsheet_client.Sheets.get_sheet(sheet_id)  # type: smartsheet.smartsheet.models.Sheet

# initialize ctr
row_ctr = 0
for row in edit_sheet.rows:  # type: smartsheet.smartsheet.models.Row
    row_ctr += 1
    print('----ROW # ' + str(row_ctr) + '----')
    cell_ctr = 0
    for cell in row.cells:  # type: smartsheet.smartsheet.models.Cell
        cell_ctr += 1
        try:
            cell_history = smartsheet_client.Cells.get_cell_history(sheet_id, row.id, cell.column_id, include_all=True)
            print(f'FOUND CELL HISTORY: {len(cell_history.data)} edits [cell #{cell_ctr} ... rowID|columnID= {row.id}|{cell.column_id}]')
        except smartsheet.exceptions.SmartsheetException as e:
            # handle the exception
            if isinstance(e, smartsheet.exceptions.ApiError):
                if e.error.result.status_code == 429:
                    print(f'RATE LIMIT EXCEEDED! [cell #{cell_ctr}]')
Interestingly, it seems that the Smartsheet Python SDK has built-in retry logic (with exponential backoff). The exception handling portion of my code isn't ever reached -- because the SDK is recognizing the Rate Limiting error and automatically retrying (with exponential backoff) any requests that fail with that error, instead of raising an exception for my code to catch.
I can see this happening in my print output. For example, the cell history output from Row #2 in my sheet (each row has 8 columns / cells) showed all of the Get Cell History calls being processed without any exceptions being thrown.
Things continue without error until the 7th row is being processed, when a Rate Limiting error occurs while trying to get cell history for the fifth column/cell in that row. Instead of this being a fatal error though -- I see processing pause for a few seconds (i.e., no additional output is printed to my console for a few seconds)...and then processing resumes. Another Rate Limiting error occurs, followed by a slightly longer pause in processing, after which processing resumes and cell history is successfully retrieved for the fifth through eighth cells in that row.
As my program continues to run, I see this same thing happen several times -- i.e., several API calls are successfully issued, then a Rate Limiting error occurs, processing pauses momentarily, and then processing resumes with API calls successfully issued, until another Rate Limiting error occurs, etc. etc. etc.
To confirm that the SDK is automatically retrying requests that fail with the Rate Limiting error, I dug into the code a bit. Sure enough, I see that the request_with_retry function within /smartsheet/smartsheet.py does indeed include logic to retry failing requests with exponential backoff (if should_retry is true).
Which raises the question: how does should_retry get set (i.e., in response to which error codes will the SDK automatically retry a failing request)? Once again, the answer is found within /smartsheet/smartsheet.py (SDK source code). The upshot is that you don't need exception handling logic in your code for the Rate Limiting error (or for any of the other 3 errors where should_retry is set to True), as the SDK is built to automatically retry any requests that fail with those errors.
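For contexts where you do need your own retry (e.g., calls made outside the SDK's built-in handling), here's a generic sketch of the same exponential-backoff idea. The function name, retry counts, and delays are illustrative, not the SDK's:
import random
import time

import smartsheet

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    # retry fn() on rate-limit errors, doubling the delay on each attempt
    for attempt in range(max_retries):
        try:
            return fn()
        except smartsheet.exceptions.ApiError as e:
            if e.error.result.status_code != 429 or attempt == max_retries - 1:
                raise
            # sleep 1s, 2s, 4s, ... plus a little jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))

# usage:
# cell_history = call_with_backoff(lambda: smartsheet_client.Cells.get_cell_history(
#     sheet_id, row.id, cell.column_id, include_all=True))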

Getting the error message: "r is null" whenever running a Dash application

I couldn't find any similar issue. This error appears ever since I started developing a Dash application.
The error stack is very long, and I'm not sure it's informative, but I'm adding it in case it is.
(This error originated from the built-in JavaScript code that runs Dash apps. Click to see the full stack trace or open your browser's console.)
value#http://127.0.0.1:8050/_dash-component-suites/dash/dcc/async-graph.js:1:7832
value#http://127.0.0.1:8050/_dash-component-suites/dash/dcc/async-graph.js:1:12619
callComponentWillReceiveProps#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:13111:16
updateClassInstance#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:13313:38
updateClassComponent#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:17242:22
beginWork#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:18755:18
callCallback#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:182:16
invokeGuardedCallbackDev#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:231:18
invokeGuardedCallback#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:286:33
beginWork$1#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:23338:30
performUnitOfWork#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:22292:14
workLoopSync#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:22265:24
performSyncWorkOnRoot#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:21891:11
flushSyncCallbackQueueImpl/<#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:11224:26
unstable_runWithPriority#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react#16.v2_0_0m1637923178.14.0.js:2685:14
runWithPriority$1#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:11174:12
flushSyncCallbackQueueImpl#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:11219:26
flushSyncCallbackQueue#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:11207:5
batchedUpdates$1#http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom#16.v2_0_0m1637923178.14.0.js:21997:9
notify#webpack://dash_renderer/./node_modules/react-redux/es/utils/Subscription.js?:24:12
notifyNestedSubs#webpack://dash_renderer/./node_modules/react-redux/es/utils/Subscription.js?:95:20
handleChangeWrapper#webpack://dash_renderer/./node_modules/react-redux/es/utils/Subscription.js?:100:12
dispatch#webpack://dash_renderer/./node_modules/redux/es/redux.js?:307:7
createThunkMiddleware/</</<#webpack://dash_renderer/./node_modules/redux-thunk/es/index.js?:12:16
applyProps#webpack://dash_renderer/./src/observers/executedCallbacks.ts?:74:15
observer/</<#webpack://dash_renderer/./src/observers/executedCallbacks.ts?:115:40
forEach#webpack://dash_renderer/./node_modules/ramda/es/forEach.js?:50:7
_checkForMethod/<#webpack://dash_renderer/./node_modules/ramda/es/internal/_checkForMethod.js?:27:116
f2#webpack://dash_renderer/./node_modules/ramda/es/internal/_curry2.js?:34:14
observer/<#webpack://dash_renderer/./src/observers/executedCallbacks.ts?:102:55
forEach#webpack://dash_renderer/./node_modules/ramda/es/forEach.js?:50:7
_checkForMethod/<#webpack://dash_renderer/./node_modules/ramda/es/internal/_checkForMethod.js?:27:116
f2#webpack://dash_renderer/./node_modules/ramda/es/internal/_curry2.js?:34:14
observer#webpack://dash_renderer/./src/observers/executedCallbacks.ts?:84:51
StoreObserver/</<#webpack://dash_renderer/./src/StoreObserver.ts?:100:9
forEach#webpack://dash_renderer/./node_modules/ramda/es/forEach.js?:50:7
_checkForMethod/<#webpack://dash_renderer/./node_modules/ramda/es/internal/_checkForMethod.js?:27:116
f2#webpack://dash_renderer/./node_modules/ramda/es/internal/_curry2.js?:34:14
StoreObserver/<#webpack://dash_renderer/./src/StoreObserver.ts?:98:51
dispatch#webpack://dash_renderer/./node_modules/redux/es/redux.js?:307:7
createThunkMiddleware/</</<#webpack://dash_renderer/./node_modules/redux-thunk/es/index.js?:12:16
_callee$#webpack://dash_renderer/./src/observers/executingCallbacks.ts?:78:25
c#http://127.0.0.1:8050/_dash-component-suites/dash_bootstrap_components/_components/dash_bootstrap_components.v1_0_0m1637926595.min.js:14:4600
l/s._invoke</<#http://127.0.0.1:8050/_dash-component-suites/dash_bootstrap_components/_components/dash_bootstrap_components.v1_0_0m1637926595.min.js:14:4354
y/</<#http://127.0.0.1:8050/_dash-component-suites/dash_bootstrap_components/_components/dash_bootstrap_components.v1_0_0m1637926595.min.js:14:4963
asyncGeneratorStep#webpack://dash_renderer/./src/observers/executingCallbacks.ts?:13:103
_next#webpack://dash_renderer/./src/observers/executingCallbacks.ts?:15:212
I'd be glad to get your opinions and ideas.
Thanks.
The first thing to check is that PreventUpdate is being used in callbacks to handle null conditions:
from dash.exceptions import PreventUpdate

def some_callback(some_input):
    if some_input is None:
        raise PreventUpdate
The "r is null" error also occurs for me when errors raised in my server-side code leave Dash with nothing to display.
For example, my server-side code throws an error when no traces meet the user-specified criteria. In the screenshot below, you can see that I set a threshold of 0.98 for the y-axis, and no traces satisfy that condition. This triggers the error, which means there is no component for Dash to display.
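A quick sketch of that guard (make_figure and the trace format here are hypothetical): when the filter leaves nothing, return an empty-but-valid figure instead of raising server-side:
import plotly.graph_objects as go

def make_figure(traces, threshold):
    kept = [t for t in traces if max(t['y']) >= threshold]
    if not kept:
        return go.Figure()  # empty but valid; Dash still has something to render
    return go.Figure(data=kept)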
I wrote the Dash callback that updates my graphs to call another function defined in the script above the Dash app.
I once accidentally forgot to return the fig from it and experienced this error, too.
In my experience this error tends to occur at the beginning of development of a Dash application. Once the callback functions were being called properly, the error no longer popped up.
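Putting the suggestions above together, here is a minimal self-contained sketch (the component ids and toy data are made up) that avoids these failure modes: guard None inputs with PreventUpdate, handle the empty-filter case, and return a figure on every path:
import plotly.graph_objects as go
from dash import Dash, dcc, html, Input, Output
from dash.exceptions import PreventUpdate

# toy stand-in for real traces
all_traces = [
    {'x': [1, 2, 3], 'y': [0.2, 0.5, 0.9], 'name': 'a'},
    {'x': [1, 2, 3], 'y': [0.4, 0.7, 0.99], 'name': 'b'},
]

app = Dash(__name__)
app.layout = html.Div([
    dcc.Input(id='threshold', type='number', value=0.5),
    dcc.Graph(id='my-graph'),
])

@app.callback(Output('my-graph', 'figure'), Input('threshold', 'value'))
def update_graph(threshold):
    if threshold is None:
        raise PreventUpdate  # don't hand Dash a null update
    kept = [t for t in all_traces if max(t['y']) >= threshold]
    return go.Figure(data=kept)  # forgetting this return reproduces "r is null"

if __name__ == '__main__':
    app.run_server(debug=True)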

How to add more metrics to a finished MLflow run?

Once an MLflow run is finished, external scripts can access its parameters and metrics using the Python MLflow client and the mlflow.get_run(run_id) method, but the Run object returned by get_run seems to be read-only.
Specifically, .log_param, .log_metric, and .log_artifact cannot be used on the object returned by get_run, raising errors like this:
AttributeError: 'Run' object has no attribute 'log_param'
If we attempt to call any of the .log_* methods on mlflow itself, they are logged to a new run with an auto-generated run ID in the Default experiment.
Example:
import mlflow

final_model_mlflow_run = mlflow.get_run(final_model_mlflow_run_id)
with mlflow.ActiveRun(run=final_model_mlflow_run) as myrun:
    # this read operation uses the correct run
    run_id = myrun.info.run_id
    print(run_id)
    # this write operation writes to a new run
    # (with an auto-generated random run ID)
    # in the "Default" experiment (with exp. ID of 0)
    mlflow.log_param("test3", "This is a test")
Note that the above problem exists regardless of the Run status (.info.status can be either "FINISHED" or "RUNNING", without making any difference).
I wonder if this read-only behavior is by design (given that immutable modeling runs improve experiment reproducibility)? I can appreciate that, but it also goes against code modularity if everything has to be done within a single monolith like the with mlflow.start_run() context...
As was pointed out to me by Hans Bambel, and as documented here, mlflow.start_run (in contrast to mlflow.ActiveRun) accepts the run_id parameter of an existing run.
Here's an example tested to work in v1.13 through v1.19 - as you can see, one can even overwrite an existing metric to correct a mistake:
with mlflow.start_run(run_id=final_model_mlflow_run_id):
    # print(mlflow.active_run().info)
    mlflow.log_param("start_run_test", "This is a test")
    mlflow.log_metric("start_run_test", 1.23)
    mlflow.log_metric("start_run_test", 1.33)
    mlflow.log_artifact("/home/jovyan/_tmp/formula-features-20201103.json", "start_run_test")
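A more modular alternative (a sketch; it reuses the run ID from above and assumes the same tracking URI is configured) is the low-level MlflowClient, which takes an explicit run_id and needs no with block:
from mlflow.tracking import MlflowClient

client = MlflowClient()
# both methods accept the run_id of an existing run as their first argument
client.log_param(final_model_mlflow_run_id, "client_test", "This is a test")
client.log_metric(final_model_mlflow_run_id, "client_test_metric", 4.56)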

Unable to pass custom values to AWS Lambda function

I am trying to pass custom input to my lambda function (Python 3.7 runtime) in JSON format from the rule set in CloudWatch.
However I am facing difficulty accessing elements from the input correctly.
Here's what the CW rule looks like.
Here is what the lambda function is doing.
from datetime import date

import pandas as pd
import sqlalchemy  # Package for accessing SQL databases via Python
import psycopg2

def lambda_handler(event, context):
    today = date.today()
    engine = sqlalchemy.create_engine("postgresql://some_user:userpassword@som_host/some_db")
    con = engine.connect()
    dest_table = "dest_table"
    print(event)
    s = {'upload_date': today, 'data': 'Daily Ingestion Data', 'status': event["data"]}  # Error points here
    ingestion = pd.DataFrame(data=[s])
    ingestion.to_sql(dest_table, con, schema="some_schema", if_exists="append", index=False, method="multi")
When I test the function with the default test event, the print(event) statement prints the default test values ("key1": "value1"), yet the ingestion.to_sql() call still succeeds: the payload from the rule's input, "Testing Input Data", is inserted into the database.
However, the Lambda function still reports a KeyError at event["data"] while running.
1) Am I accessing the Constant JSON input the right way?
2) If not, why is the data still being ingested as intended despite the error thrown at that line of code?
3) The data is ingested when the function is triggered by the schedule expression, but when I test the event it shows a KeyError. Is the test event being different from the actual input what is causing this error?
There is a lot of documentation and many articles on how to pass input, but I could not find anything that shows how to access the input inside the function. I have been stuck at this point for a while, and it frustrates me that this is not documented anywhere.
Would really appreciate if someone could give me some clarity to this process.
Thanks in advance
Edit:
Image of the monitoring Logs:
[ERROR] KeyError: 'data' Traceback (most recent call last): File "/var/task/test.py"
I am writing this answer based on the comments.
The syntax as originally written is valid, and I am able to access the data correctly. The real issue was the function timeout: the function was constantly hitting that threshold, so the fix was to adjust the timeout, along with some changes to the iteration.
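For completeness, one way to make the handler robust to both the rule's Constant (JSON text) input and an arbitrary console test event is a defensive dict.get lookup (a sketch; the fallback value is made up):
def lambda_handler(event, context):
    # event is the parsed Constant (JSON text) from the rule,
    # e.g. {"data": "Testing Input Data"}
    status = event.get("data", "no data supplied")  # avoids KeyError on other test events
    print(status)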

Azure ML Studio: How to change input value with Python before it goes through data process

I am currently attempting to change the value of the input as it goes through the data process in Azure ML. However, I cannot find a clue about how to access the input data with Python.
For example, in Python you can access a column of the data with:
print(dataframe1["Hello World"])
I tried changing the name of the Web Service Input and accessing it the way I did for other dataframe columns (e.g. sample):
print(dataframe["sample"])
But it returns an error, and from what I can read of the error, the object is not a dataframe:
object of type 'NoneType' has no len()
I tried looking up solutions for the NoneType error, but found no good solution.
The whole error message:
requestId = 1f0f621f1d8841baa7862d5c05154942 errorComponent=Module. taskStatusCode=400.
Error 0085: The following error occurred during script evaluation, please view the output log for more information:
---------- Start of error message from Python interpreter ----------
Caught exception while executing function: Traceback (most recent call last):
  File "C:\server\invokepy.py", line 211, in batch
    xdrutils.XDRUtils.DataFrameToRFile(outlist[i], outfiles[i], True)
  File "C:\server\XDRReader\xdrutils.py", line 51, in DataFrameToRFile
    attributes = XDRBridge.DataFrameToRObject(dataframe)
  File "C:\server\XDRReader\xdrbridge.py", line 40, in DataFrameToRObject
    if (len(dataframe) == 1 and type(dataframe[0]) is pd.DataFrame):
TypeError: object of type 'NoneType' has no len()
Process returned with non-zero exit code 1
---------- End of error message from Python interpreter ----------
Process exited with error code -2
I have also tried passing a Python script over the data, but it was not able to change the Web Service Input value the way I want.
I have tried looking on forums like MSDN or SO, but it's been difficult to find any information about this. Please let me know if you need any more information. I would greatly appreciate your help!
tl;dr; You need to also link the dataset you used for training to the same port the Web service input is linked to, so that the Execute Python Script module has something to work on - see the image below for how this should look.
You need to keep in mind that the Predictive experiment has some conventions that need to be followed (or learned the hard way :) ). One of them is that in order to use the Web service input, you need to pair it with an actual dataset, which Azure ML Studio can then use to infer structure and to provide you with some data while testing your predictive experiment. You can see it as some sort of 'ghost' module that doesn't do anything by itself.
Hope this helps.
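For reference, once the dataset is paired with the Web service input port, the Execute Python Script module's entry point receives it as a pandas DataFrame; a sketch of that entry point in (classic) Azure ML Studio, where the column name "sample" is an assumption:
import pandas as pd

def azureml_main(dataframe1=None, dataframe2=None):
    # dataframe1 is the dataset linked to the first input port;
    # with a dataset paired to the Web service input, it is no longer None
    assert isinstance(dataframe1, pd.DataFrame)
    print(dataframe1["sample"])
    dataframe1["sample"] = dataframe1["sample"].astype(str)  # example transformation
    return dataframe1,  # the module must return its output as a tuple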
