Smartsheet API in Python: how to access response data from client

I am trying to understand / plan for the rate limits, particularly for the get_cell_history method in the python api.
When I run the code below, I see the following response printed to my console:
{"response": {"statusCode": 429, "reason": "Too Many Requests", "content": {"errorCode": 4003, "message": "Rate limit exceeded.", "refId": "SOME_REF_ID"}}}
But I am trying to get my if statement to read the response and check for the 429 status code:
import smartsheet

smartsheet_client = smartsheet.Smartsheet('My_API_KEY')
smartsheet_client.errors_as_exceptions(True)

sheet_id = 'My_Sheet_ID'
edit_sheet = smartsheet_client.Sheets.get_sheet(sheet_id)  # type: smartsheet.smartsheet.models.Sheet

for row in edit_sheet.rows:  # type: smartsheet.smartsheet.models.Row
    for cell in row.cells:  # type: smartsheet.smartsheet.models.Cell
        cell_history = smartsheet_client.Cells.get_cell_history(sheet_id, row.id, cell.column_id,
                                                                include_all=True)
        if cell_history.request_response.status_code == 429:
            print(f'Rate limit exceeded.')
        elif cell_history.request_response.status_code == 200:
            print(f'Found Cell History: {len(cell_history.data)} edits')
Where can I access this response? Why does my program run without printing out the "Rate limit exceeded" string?
I am using Python 3.8 and the package smartsheet-python-sdk==2.105.1

The following code shows how to successfully catch an API error when using the Smartsheet Python SDK. For simplicity, I've just used the Get Sheet operation and handled the error with status code 404 in this example, but you should be able to implement the same approach in your code. Note that you need to include the smartsheet_client.errors_as_exceptions() line (as shown in this example) so that Smartsheet errors get raised as exceptions.
# raise Smartsheet errors as exceptions
smartsheet_client.errors_as_exceptions()

try:
    # call the Get Sheet operation
    # if the request is successful, my_sheet contains the Sheet object
    my_sheet = smartsheet_client.Sheets.get_sheet(4183817838716804)
except smartsheet.exceptions.SmartsheetException as e:
    # handle the API error
    if isinstance(e, smartsheet.exceptions.ApiError):
        if e.error.result.status_code == 404:
            print('Sheet not found.')
        else:
            print('Another type of error occurred: ' + str(e.error.result.status_code))
Finally, here's a bit of additional info about error handling with the Smartsheet Python SDK that may be helpful (from this other SO thread).
SmartsheetException is the base class for all of the exceptions raised by the SDK. The two most common types of SmartsheetException are ApiError and HttpError. After trapping the exception, first determine the exception type using an isinstance test.
The ApiError exception has an Error class object accessible through the error property, and then that in turn points to an ErrorResult class accessible through the result property.
The details of the API error are stored in that ErrorResult.
Note that to make the Python SDK raise exceptions for API errors, you must call the errors_as_exceptions() method on the client object.
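To make that concrete, here's a minimal sketch of the isinstance pattern described above. It relies only on the status_code attribute already used in the earlier example; the printed messages are just illustrations.

try:
    my_sheet = smartsheet_client.Sheets.get_sheet(sheet_id)
except smartsheet.exceptions.SmartsheetException as e:
    if isinstance(e, smartsheet.exceptions.ApiError):
        # ApiError -> Error (e.error) -> ErrorResult (e.error.result)
        print('API error, status code: ' + str(e.error.result.status_code))
    elif isinstance(e, smartsheet.exceptions.HttpError):
        # HTTP-level failure rather than a structured API error
        print('HTTP error: ' + str(e))
    else:
        print('Other SDK error: ' + str(e))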
---------------------
UPDATE (retry logic):
To repro the behavior you've described in your first comment below, I used (most of) the code from your original post above, and added some print statements to it. My code is as follows:
# raise Smartsheet errors as exceptions
smartsheet_client.errors_as_exceptions()

sheet_id = 5551639177258884
edit_sheet = smartsheet_client.Sheets.get_sheet(sheet_id)  # type: smartsheet.smartsheet.models.Sheet

# initialize ctr
row_ctr = 0
for row in edit_sheet.rows:  # type: smartsheet.smartsheet.models.Row
    row_ctr += 1
    print('----ROW # ' + str(row_ctr) + '----')
    cell_ctr = 0
    for cell in row.cells:  # type: smartsheet.smartsheet.models.Cell
        cell_ctr += 1
        try:
            cell_history = smartsheet_client.Cells.get_cell_history(sheet_id, row.id, cell.column_id, include_all=True)
            print(f'FOUND CELL HISTORY: {len(cell_history.data)} edits [cell #' + str(cell_ctr) + ' ... rowID|columnID= ' + str(row.id) + '|' + str(cell.column_id) + ']')
        except smartsheet.exceptions.SmartsheetException as e:
            # handle the exception
            if isinstance(e, smartsheet.exceptions.ApiError):
                if e.error.result.status_code == 429:
                    print('RATE LIMIT EXCEEDED! [cell #' + str(cell_ctr) + ']')
Interestingly, it seems that the Smartsheet Python SDK has built-in retry logic (with exponential backoff). The exception handling portion of my code isn't ever reached -- because the SDK is recognizing the Rate Limiting error and automatically retrying (with exponential backoff) any requests that fail with that error, instead of raising an exception for my code to catch.
I can see this happening in my print output. For example, the cell history output for Row #2 in my sheet (each row has 8 columns/cells) showed all of the Get Cell History calls completing without any exceptions being thrown.
Things continue without error until the 7th row is being processed, when a Rate Limiting error occurs when it's trying to get cell history for the fifth column/cell in that row. Instead of this being a fatal error though -- I see processing pause for a few seconds (i.e., no additional output is printed to my console for a few seconds)...and then processing resumes. Another Rate Limiting error occurs, followed by a slightly longer pause in processing, after which processing resumes and cell history is successfully retrieved for the fifth through eighth cells in that row.
As my program continues to run, I see this same thing happen several times -- i.e., several API calls are successfully issued, then a Rate Limiting error occurs, processing pauses momentarily, and then processing resumes with API calls successfully issued, until another Rate Limiting error occurs, etc. etc. etc.
To confirm that the SDK is automatically retrying requests that fail with the Rate Limiting error, I dug into the code a bit. Sure enough, I see that the request_with_retry function within /smartsheet/smartsheet.py does indeed include logic to retry failing requests with exponential backoff (if should_retry is true).
Which raises the question: how does should_retry get set (i.e., in response to which error codes will the SDK automatically retry a failing request)? Once again, the answer is found within /smartsheet/smartsheet.py (SDK source code). The upshot is that you don't need exception handling logic in your code for the Rate Limiting error (or for any of the other 3 errors where should_retry is set to True), as the SDK is built to automatically retry any requests that fail with those errors.
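For reference, the general shape of retry-with-exponential-backoff looks something like the sketch below. This is a simplified, hand-written illustration, not the SDK's actual implementation; the call_with_backoff name, the delays, and the retry budget are all made up.

import random
import time

def call_with_backoff(fn, max_elapsed=15, base_delay=1.0):
    """Retry fn() with exponential backoff while it reports a retryable error."""
    start = time.time()
    attempt = 0
    while True:
        result = fn()
        # stop as soon as the result is not flagged as retryable
        if not getattr(result, 'should_retry', False):
            return result
        # give up once the retry budget is spent and hand back the last result
        if time.time() - start > max_elapsed:
            return result
        # sleep 1s, 2s, 4s, ... plus a little jitter, then try again
        time.sleep(base_delay * (2 ** attempt) + random.random())
        attempt += 1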

Related

Error in Prefect flow when passing large dataframe as argument

I am trying to implement orchestration of a ml-pipeline with Prefect. The entire pipeline works fine when passing a small enough batch of raw data. But once I pass a larger batch, some of the flows suddenly cannot be created.
The first flows in the pipeline, which perform some data cleaning and feature generation, still work fine. But once I have a cleaned dataframe of about 10,000 rows with one column containing text data and try to process it in a flow that makes predictions based on a pretrained model, I get one of the following errors:
1.
Exception has occurred: ReadTimeout
The operation did not complete (read) (_ssl.c:2633)
ssl.SSLWantReadError: The operation did not complete (read) (_ssl.c:2633)

During handling of the above exception, another exception occurred:
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:
TimeoutError

During handling of the above exception, another exception occurred:
httpcore.ReadTimeout

The above exception was the direct cause of the following exception:
File "C:\Users\EmilEne\OneDrive - Brand Delta\Documents\GitHub\airflow_ds_test\ds_nomad\model_age\age_prediction.py", line 108, in <module>
    test = new_predictions(df, m['model'], column='cleaned_message', language='italian', market='italy')
httpx.ReadTimeout
Or 2.
Exception has occurred: LocalProtocolError
1
h2.exceptions.StreamClosedError: 1

During handling of the above exception, another exception occurred:
httpcore.LocalProtocolError: 1

The above exception was the direct cause of the following exception:
File "C:\Users\EmilEne\OneDrive - Brand Delta\Documents\GitHub\airflow_ds_test\ds_nomad\model_age\age_prediction.py", line 105, in <module>
    test = new_predictions(df, m['model'], column='cleaned_message', language='italian', market='italy')
httpx.LocalProtocolError: 1
I tried using the debugger in VS Code to see where the flow's function breaks, but it never enters the function; it seems the flow is never even created. I tried replacing the flow's whole function with just a print statement like this:
from prefect import task, flow
import pandas as pd

@flow()
def predictions(df):
    print(df.info())

df_test = pd.read_parquet(".../path")
test = predictions(df_test)
But it still doesn't create the flow and gives me the same error; df_test is also one column and 7,000 rows of cleaned text data.
I really don't understand the errors; I have no knowledge of SSL or h2. Can anybody point me in the right direction?
df_test = pd.read_parquet(".../path") isn't a valid relative file path.
If you want to go up two levels you would do df_test = pd.read_parquet("../../path")
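If the file's location relative to the script is known, another option is to build the path explicitly instead of relying on the working directory. A minimal sketch; the two-levels-up layout and the file name data.parquet are assumptions for illustration:

from pathlib import Path
import pandas as pd

# resolve the parquet file relative to this script rather than the current working directory
# (parents[2] is the equivalent of ../../ from the script's directory)
data_path = Path(__file__).resolve().parents[2] / "data" / "data.parquet"
df_test = pd.read_parquet(data_path)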

Python binance client.create_test_order returning nothing with no error

I'm trying to use the python-binance API to trade Ethereum. While developing the code I've been using the create_test_order function so that running tests doesn't cost me money.
The issue, however, is that the function doesn't seem to be returning anything at all:
try:
    avg_price = float(client.get_avg_price(symbol="ETHGBP")['price'])
    logging.info(f"Test Buy at {avg_price}")
    order = client.create_test_order(
        symbol="ETHGBP",
        side=SIDE_BUY,
        type=ORDER_TYPE_LIMIT,
        timeInForce=TIME_IN_FORCE_GTC,
        quantity=100,
        price="{:.2f}".format(float(avg_price)))
    logging.info(order)
except Exception as e:
    logging.error(f"Exception occurred: {e}", exc_info=True)
    quit()
This is the output I'm getting:
travis_1 | 2021-05-27 15:08:02,617 - root - INFO - Test Buy at 2003.53136248
travis_1 | 2021-05-27 15:08:02,852 - root - INFO - {}
I've been following this guide, and I was expecting the returned order to have something in it, but it doesn't: https://algotrading101.com/learn/binance-python-api-guide/
I'm also confused that it's returning nothing but no exception is being raised.
Any ideas where I'm going wrong?
According to the documentation, create_test_order returns an empty JSON object ({}).
See: https://python-binance.readthedocs.io/en/latest/binance.html?highlight=create_test_order#binance.client.AsyncClient.create_test_order
According to the python-binance documentation, the create_test_order() method only returns an empty JSON object ({}).
Documentation link: https://python-binance.readthedocs.io/en/latest/binance.html
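In other words, "no exception and an empty dict" is the success signal for a test order. A minimal sketch building on the question's code (the log message is just an illustration):

order = client.create_test_order(
    symbol="ETHGBP",
    side=SIDE_BUY,
    type=ORDER_TYPE_LIMIT,
    timeInForce=TIME_IN_FORCE_GTC,
    quantity=100,
    price="{:.2f}".format(avg_price))

# create_test_order only validates the order; an empty dict means it was accepted
if order == {}:
    logging.info("Test order validated - a real create_order call would return the order details")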

Code By Zapier: Python - How to make it return error

I have written some Python code to make an HTTP request using the requests library.
How do I make it error out if, after examining the response, I figure out that I have not gotten the desired data?
Please note that I want to raise an error only in certain cases.
Right now, I return an output object and the step always shows as passed.
David here, from the Zapier Platform team.
There's no secret sauce for Python on Zapier - if you want to raise an error, you can just raise it. How you do that depends on which situations you want to trigger an error state.
response = requests.get(url)

# raises for all error codes >= 400
response.raise_for_status()

result = response.json()

# or more manually, for specific cases
if result['my_key'] != 'key_i_want':
    raise Exception('bad key')
Does that make sense? Let me know if you've got any other questions!
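Put into the shape of a Code by Zapier step, it might look like the sketch below. The URL, the desired_field name, and the check itself are assumptions; output is the dictionary the step hands back on success.

# requests is available in Code by Zapier steps, as in the snippet above
response = requests.get('https://example.com/api/items')  # assumed URL
response.raise_for_status()  # fail the step on any HTTP error
data = response.json()

# raise only in the specific case we care about: the field we need is missing
if 'desired_field' not in data:
    raise ValueError('Response did not contain desired_field')

# otherwise hand the data back to Zapier as the step's output
output = {'desired_field': data['desired_field']}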

ignore_invalid_triggers not working

I am using the pytransitions library (documented here) to implement a Finite State Machine. One of the features outlined is the ability to ignore invalid triggers. Here is the example as per the documentation:
# Globally suppress invalid trigger exceptions
m = Machine(lump, states, initial='solid', ignore_invalid_triggers=True)
If ignore_invalid_triggers is set to True, no error should be thrown for triggers that are invalid.
Here is a sample of the code I am trying to construct:
from transitions import Machine

states = ['changes ongoing', 'changes complete', 'changes pushed', 'code reviewed', 'merged']
triggers = ['git commit', 'git push', 'got plus2', 'merged']

# Initialize the state machine
git_user = Machine(states=states, initial=states[0], ignore_invalid_triggers=True, ordered_transitions=True)

# Create the FSM using the data provided
for i in range(len(triggers)):
    git_user.add_transition(trigger=triggers[i], source=states[i], dest=states[i+1])

print(git_user.state)
git_user.trigger('git commit')
print(git_user.state)
git_user.trigger('invalid')  # This line will throw an AttributeError
The produced error:
changes ongoing
changes complete
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/transitions/core.py", line 58, in _get_trigger
raise AttributeError("Model has no trigger named '%s'" % trigger_name)
AttributeError: Model has no trigger named 'invalid'
Process finished with exit code 1
I am unsure of why an error is being thrown when ignore_invalid_triggers=True.
There is limited information on this library besides the documentation on the official github page. If anyone has any insight on this I would appreciate the help.
Thanks in advance.
Under the rules set out in the documentation, an "invalid trigger" is a trigger name that is valid somewhere in the model but can't fire from the current state. For instance, try the trigger "merged" from state "changes ongoing". You get an AttributeError because "invalid" is not a trigger at all: you defined a list of four triggers, and that's not one of them.
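To see that in action, here's a minimal sketch reusing git_user from the question while it's still in its initial state: a known trigger fired from the wrong state is silently ignored, while an unknown name still raises AttributeError.

print(git_user.state)         # changes ongoing
git_user.trigger('merged')    # a real trigger, but not valid from this state -> silently ignored
print(git_user.state)         # still: changes ongoing, no exception raised
git_user.trigger('nonsense')  # not a trigger anywhere in the model -> AttributeError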
To see the effect of establishing "invalid" as a trigger, add an end-to-start transition (the last line below) after your nice linear loop:
# Create the FSM using the data provided
for i in range(len(triggers)):
    git_user.add_transition(trigger=triggers[i], source=states[i], dest=states[i+1])

git_user.add_transition(trigger="invalid", source=states[-1], dest=states[0])
Now your code should run as expected, ignoring that invalid transition.

How to catch specific Postgres exceptions in Python psycopg2

Default psycopg2 error messages are too broad. Most of the time it simply throws:
psycopg2.OperationalError
without any additional information, so it is hard to guess the real reason for the error - incorrect user credentials, or simply the fact that the server is not running. So, I need some more appropriate error handling, like the error codes in the pymysql library.
I've seen this page, but it does not help. When I do
except Exception as err:
    print(err.pgcode)
it always prints None. And as for errorcodes, it is simply undefined; I tried to import it, but failed. So, I need some help.
For anyone looking for a quick answer:
Short Answer
import traceback  # Just to show the full traceback
from psycopg2 import errors

InFailedSqlTransaction = errors.lookup('25P02')

try:
    feed = self._create_feed(data)
except InFailedSqlTransaction:
    traceback.print_exc()
    self._cr.rollback()
    pass  # Continue / throw ...
Long Answer
Go to psycopg2 -> errors.py and you will find the lookup function, lookup(code).
Go to psycopg2/_psycopg/__init__.py and, in sqlstate_errors, you will find all the codes you can pass as a string to the lookup function.
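As an alternative to looking the code up by hand, psycopg2 2.8+ also exposes the common SQLSTATE classes by name in psycopg2.errors. A minimal sketch; the connection parameters and the users table are assumptions:

import psycopg2
from psycopg2 import errors

conn = psycopg2.connect(host='localhost', dbname='mydb', user='me', password='secret')
cur = conn.cursor()

try:
    cur.execute("INSERT INTO users (id) VALUES (1)")
except errors.UniqueViolation:
    # same class that errors.lookup('23505') returns
    conn.rollback()
except psycopg2.OperationalError as e:
    # connection-level problems (e.g., the server going away) land here
    conn.rollback()
    print('Operational error:', e)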
This is not an answer to the question but merely my thoughts that don't fit in a comment. This is a scenario in which I think setting pgcode in the exception object would be helpful but, unfortunately, it is not the case.
I'm implementing a pg_isready-like Python script to test if a Postgres service is running on the given address and port (e.g., localhost and 49136). The address and port may or may not be used by any other program.
pg_isready internally calls internal_ping(). Pay attention to the comment that begins "Here begins the interesting part of 'ping': determine the cause...":
/*
 * Here begins the interesting part of "ping": determine the cause of the
 * failure in sufficient detail to decide what to return. We do not want
 * to report that the server is not up just because we didn't have a valid
 * password, for example. In fact, any sort of authentication request
 * implies the server is up. (We need this check since the libpq side of
 * things might have pulled the plug on the connection before getting an
 * error as such from the postmaster.)
 */
if (conn->auth_req_received)
    return PQPING_OK;
So pg_isready also relies on the fact that an authentication error on connection means the connection itself succeeded, so the service is up, too. Therefore, I can implement it as follows:
ready = True
try:
    psycopg2.connect(
        host=address,
        port=port,
        password=password,
        connect_timeout=timeout,
    )
except psycopg2.OperationalError as ex:
    ready = ("fe_sendauth: no password supplied" in str(ex))
However, when the exception psycopg2.OperationalError is caught, ex.pgcode is None. Therefore, I can't use the error codes to check whether the exception is about authentication/authorization. I'll have to check whether the exception carries a certain message (as #dhke pointed out in the comment), which I think is fragile because the error message may change in a future release, whereas the error codes are much less likely to change.
