Python: Why is the traceback getting printed?

I have a function that gets started in several threads. I tried to print my own error message, but no matter what I do, the traceback still gets printed. My function:
def getSuggestengineResultForThree(suggestengine, seed, dynamoDBLocation):
    results[seed][suggestengine] = getsuggestsforsearchengine(seed, suggestengine)
    for keyword_result in results[seed][suggestengine]:
        o = 0
        while True:
            try:
                allKeywords.put_item(
                    Item={
                        'keyword': keyword_result
                    }
                )
                break
            except ProvisionedThroughputExceededException as pe:
                if (o > 9):
                    addtoerrortable(keyword_result)
                    print('ProvisionedThroughputExceededException 10 times in getSuggestengineResultForThree for allKeywords')
                    break
                sleep(1)
                o = o + 1
                print("ProvisionedThroughputExceededException in getSugestengineResult")
But for every thread I get an output like this:
Exception in thread Thread-562:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/Users/iTom/ownCloud/Documents/Workspace/PyCharm/Keywords/TesterWithDB.py", line 362, in getSuggestengineResultForThree
'keyword': keyword_result
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/boto3/resources/factory.py", line 518, in do_action
response = action(self, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/boto3/resources/action.py", line 83, in __call__
response = getattr(parent.meta.client, operation_name)(**params)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/botocore/client.py", line 252, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/botocore/client.py", line 542, in _make_api_call
raise ClientError(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ProvisionedThroughputExceededException) when calling the PutItem operation: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API
Can someone help me to get my own print instead of the traceback? :)

This answer is a bit late for your question, but here it is in case anyone else is searching for it.
The exception that boto3 throws is a botocore.exceptions.ClientError, as Neil has answered. However, you should check the response error code for 'ProvisionedThroughputExceededException', because the ClientError could be for another issue.
from botocore.exceptions import ClientError

except ClientError as e:
    if e.response['Error']['Code'] != 'ProvisionedThroughputExceededException':
        raise
    # do something else with 'e'
I am using Python 2.7, which may or may not make a difference. The exception that I receive suggests that boto3 is already doing retries (up to 9 times), which is different from your exception:
An error occurred (ProvisionedThroughputExceededException) when calling the PutItem operation (reached max retries: 9): The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.
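For completeness, here is a rough sketch of how that check could be folded into the question's retry loop. It assumes the same allKeywords table object, addtoerrortable helper, and sleep import as in the original code:
from time import sleep
from botocore.exceptions import ClientError

def put_keyword_with_retry(keyword_result, max_attempts=10):
    # Retry only on throughput errors; re-raise any other ClientError unchanged.
    for attempt in range(max_attempts):
        try:
            allKeywords.put_item(Item={'keyword': keyword_result})
            return
        except ClientError as e:
            if e.response['Error']['Code'] != 'ProvisionedThroughputExceededException':
                raise
            sleep(1)
    # Still throttled after max_attempts tries.
    addtoerrortable(keyword_result)
    print('ProvisionedThroughputExceededException %d times in a row' % max_attempts)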

It's possible that ProvisionedThroughputExceededException is not actually the error. Try:
except botocore.exceptions.ClientError as pe:
instead.
If that doesn't work, figure out what line the error is occurring on and put the except statement there.

Related

What is the correct way to attach messages to an error?

I have a project structured something like this, with multiple functionalities and often more possible sources of error. One functionality may also call something else that raises an error.
def functionality_one(arguments) -> str:
    try:
        status_feedback = attempt_functionality_one(arguments)
        # this would usually be multiple lines
    except ValueError as e:
        return "known-failure-code"
    except ConnectionError as e:
        raise ConnectionError("Some user-friendly message for unexpected error") from e
    else:
        return status_feedback


def main():
    ## when the relevant CLI argument is passed:
    try:
        status = functionality_one(arguments)
    except Exception as e:
        send_notification_to_user(e.args[0])
    else:
        send_notification_to_user(USER_FRIENDLY_SUCCESS_MESSAGES.get(status, "Success!"))


if __name__ == "__main__":
    main()
Focus on this bit about re-raising errors:
except ConnectionError as e:
    raise ConnectionError("Some user-friendly message for unexpected error") from e
I do this to attach a user-friendly message in the error that I can later display to the user. Is there a better way to accomplish this?
In particular, error tracebacks normally just show the errors as they propagate. With this method, the traceback includes a message like "... was the direct cause of the following exception ..." and I don't know whether this is the norm in Python. Here's an example from the log file:
Traceback (most recent call last):
File "D:\username\Documents\tech-projects\project-name\src\auth.py", line 157, in login
login_request = post(
File "D:\username\Documents\tech-projects\project-name\.venv\lib\site-packages\requests\api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
File "D:\username\Documents\tech-projects\project-name\.venv\lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "D:\username\Documents\tech-projects\project-name\.venv\lib\site-packages\requests\sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "D:\username\Documents\tech-projects\project-name\.venv\lib\site-packages\requests\sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "D:\username\Documents\tech-projects\project-name\.venv\lib\site-packages\requests\adapters.py", line 565, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='wifi-login.university-website.domain', port=80): Max retries exceeded with url: /cgi-bin/authlogin (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001FF681B27D0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\username\Documents\tech-projects\project-name\login_cli.py", line 269, in main
status_message: str = parsed_namespace.func(parsed_namespace)
File "D:\username\Documents\tech-projects\project-name\login_cli.py", line 197, in connect
return src.auth.login(credentials)
File "D:\username\Documents\tech-projects\project-name\src\auth.py", line 164, in login
raise ConnectionError(f"Server-side error. Contact IT support or wait until morning.") from e
requests.exceptions.ConnectionError: Server-side error. Contact IT support or wait until morning.
So what's the right way to do this? Feel free to suggest a change that completely changes the structure of the program too, if you feel that's necessary.
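For reference, here is a minimal, self-contained sketch of the same raise ... from e pattern (not code from the project above); it reproduces the "direct cause" chaining seen in the log and shows that the original error stays reachable via __cause__:
def login():
    try:
        # stands in for the low-level requests.exceptions.ConnectionError
        raise ConnectionError("getaddrinfo failed")
    except ConnectionError as e:
        raise ConnectionError("Server-side error. Contact IT support.") from e

try:
    login()
except ConnectionError as e:
    print(e)            # the user-friendly message
    print(e.__cause__)  # the original low-level error is still attached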

Error: trying to add members to group by Telegram ID

This is the code, but it's giving me an error. Maybe get_entity is not working in this code. I have tried changing the entity type and everything else, but that also doesn't work. I give the parameters as required by the script, but it still doesn't do anything and only gives me the error "some error in adding".
try:
    user_to_add = client.get_entity(int(user['user_id']))
    print(user_to_add)
    client(InviteToChannelRequest(entity, [user_to_add]))
    usr_id = user['user_id']
    print(f'{attempt}{g} Adding {usr_id}{rs}')
    print(f'{sleep}{g} Sleep 20s{rs}')
    time.sleep(20)
except PeerFloodError:
    #time.sleep()
    os.system(f'del {file}')
    sys.exit(f'\n{error}{r} Aborted. Peer Flood Error{rs}')
except UserPrivacyRestrictedError:
    print(f'{error}{r} User Privacy Restriction{rs}')
    continue
except KeyboardInterrupt:
    print(f'{error}{r} Aborted. Keyboard Interrupt{rs}')
    update_list(users, added_users)
    if not len(users) == 0:
        print(f'{info}{g} Remaining users logged to {file}')
        logger = Relog(users, file)
        logger.start()
except:
    print(f'{error}{r} Some Other error in adding{rs}')
    continue

#os.system(f'del {file}')
input(f'{info}{g}Adding complete...Press enter to exit...')
sys.exit()
It gives me this error:
Traceback (most recent call last):
File "C:\Users\Noni\Desktop\python\telegramscraper-main\addbyid1.py", line 90, in <module>
user_to_add = client.get_entity(int(user['user_id']))
File "C:\Users\Noni\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\telethon\sync.py", line 39, in syncified
return loop.run_until_complete(coro)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\asyncio\base_events.py", line 647, in run_until_complete
return future.result()
File "C:\Users\Noni\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\telethon\client\users.py", line 292, in get_entity
inputs.append(await self.get_input_entity(x))
File "C:\Users\Noni\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\telethon\client\users.py", line 466, in get_input_entity
raise ValueError(
ValueError: Could not find the input entity for PeerUser(user_id=807250194) (PeerUser). Please read https://docs.telethon.dev/en/latest/concepts/entities.html to find out more details.
Can you provide the error?
I think you know that you can't add more than 200 subscribers to a channel manually. Please check the current number of subscribers. If it is more than 200, your code will not work.
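If that is the suspicion, a quick way to check is to ask Telethon for the member count before attempting the invite. This is only a rough sketch, assuming client is the already-connected client and entity is the target channel from the question:
# Check the channel's current member count before trying to add anyone.
participants = client.get_participants(entity, limit=0)  # limit=0 still fills in .total
if participants.total >= 200:
    print(f'Channel already has {participants.total} members; cannot add more by ID.')
else:
    user_to_add = client.get_entity(int(user['user_id']))
    client(InviteToChannelRequest(entity, [user_to_add]))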

Python3.5 Asyncio - Preventing task exception from dumping to stdout?

I have a text-based interface (asciimatics module) for my program that uses asyncio and the discord.py module, and occasionally when my wifi adapter goes down I get an exception like so:
Task exception was never retrieved
future: <Task finished coro=<WebSocketCommonProtocol.run() done, defined at /home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py:428> exception=ConnectionResetError(104, 'Connection reset by peer')>
Traceback (most recent call last):
File "/usr/lib/python3.5/asyncio/tasks.py", line 241, in _step
result = coro.throw(exc)
File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 434, in run
msg = yield from self.read_message()
File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 456, in read_message
frame = yield from self.read_data_frame(max_size=self.max_size)
File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 511, in read_data_frame
frame = yield from self.read_frame(max_size)
File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 546, in read_frame
self.reader.readexactly, is_masked, max_size=max_size)
File "/home/mike/.local/lib/python3.5/site-packages/websockets/framing.py", line 86, in read_frame
data = yield from reader(2)
File "/usr/lib/python3.5/asyncio/streams.py", line 670, in readexactly
block = yield from self.read(n)
File "/usr/lib/python3.5/asyncio/streams.py", line 627, in read
yield from self._wait_for_data('read')
File "/usr/lib/python3.5/asyncio/streams.py", line 457, in _wait_for_data
yield from self._waiter
File "/usr/lib/python3.5/asyncio/futures.py", line 361, in __iter__
yield self # This tells Task to wait for completion.
File "/usr/lib/python3.5/asyncio/tasks.py", line 296, in _wakeup
future.result()
File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/lib/python3.5/asyncio/selector_events.py", line 662, in _read_ready
data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer
This exception is non-fatal and the program is able to re-connect despite it - what I want to do is prevent this exception from dumping to stdout and mucking up my text interface.
I tried using ensure_future to handle it, but it doesn't seem to work. Am I missing something?
@asyncio.coroutine
def handle_exception():
    try:
        yield from WebSocketCommonProtocol.run()
    except Exception:
        print("SocketException-Retrying")

asyncio.ensure_future(handle_exception())

#start discord client
client.run(token)
"Task exception was never retrieved" is not actually an exception propagated to stdout, but a log message that warns you that you never retrieved the exception in one of your tasks. You can find details here.
I guess the easiest way to avoid this message in your case is to retrieve the exception from the task manually:
coro = WebSocketCommonProtocol.run()  # you don't need any wrapper
task = asyncio.ensure_future(coro)
try:
    #start discord client
    client.run(token)
finally:
    # retrieve exception if any:
    if task.done() and not task.cancelled():
        task.exception()  # this doesn't raise anything, just mark exception retrieved
The answer provided by Mikhail is perfectly acceptable, but I realized it wouldn't work for me since the task that is raising the exception is buried deep in some module, so trying to retrieve its exception is kind of difficult. I found that instead I can simply set a custom exception handler for my asyncio loop (the loop is created by the discord client):
def exception_handler(loop, context):
    print("Caught the following exception")
    print(context['message'])

client.loop.set_exception_handler(exception_handler)
client.run(token)
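A slightly more selective variant of that handler (a sketch, not part of the answer above) would swallow only the connection resets and hand everything else back to the loop's default handler, so unrelated task failures still get reported:
def exception_handler(loop, context):
    exc = context.get('exception')
    if isinstance(exc, ConnectionResetError):
        # expected when the wifi adapter drops; the client reconnects on its own
        print("Connection reset - retrying")
    else:
        # fall back to asyncio's normal reporting for anything unexpected
        loop.default_exception_handler(context)

client.loop.set_exception_handler(exception_handler)
client.run(token)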

Surviving icinga2 restart in a python requests stream

I have been working on a chatbot interface to icinga2 and have not found a way to reliably survive a restart/reload of the icinga2 server. After a week of moving try/except blocks around, using requests sessions, et al., it's time to reach out to the community.
Here is the current iteration of the request function:
def i2api_request(url, headers={}, data={}, stream=False, *, auth=api_auth, ca=api_ca):
    ''' Do not call this function directly; it's a helper for the i2* command functions '''
    # Adapted from http://docs.icinga.org/icinga2/latest/doc/module/icinga2/chapter/icinga2-api
    # Section 11.10.3.1
    try:
        r = requests.post(url,
                          headers=headers,
                          auth=auth,
                          data=json.dumps(data),
                          verify=ca,
                          stream=stream
                          )
    except (requests.exceptions.ChunkedEncodingError, requests.packages.urllib3.exceptions.ProtocolError, http.client.IncompleteRead, ValueError) as drop:
        return("No connection to Icinga API")
    if r.status_code == 200:
        for line in r.iter_lines():
            try:
                if stream == True:
                    yield(json.loads(line.decode('utf-8')))
                else:
                    return(json.loads(line.decode('utf-8')))
            except:
                debug("Could not produce JSON from "+line)
                continue
    else:
        #r.raise_for_status()
        debug('Received a bad response from Icinga API: '+str(r.status_code))
        print('Icinga2 API connection lost.')
(The debug function just flags and prints the indicated error to the console.)
This code works fine handling events from the API and sending them to the chatbot, but if the icinga server is reloaded, as would be needed after adding a new server definition in /etc/icinga2..., the listener crashes.
Here is the error response I get when the server is restarted:
Exception in thread Thread-11:
Traceback (most recent call last):
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 447, in _update_chunk_length
self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 228, in _error_catcher
yield
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 498, in read_chunked
self._update_chunk_length()
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 451, in _update_chunk_length
raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/errbot/err3/lib/python3.4/site-packages/requests/models.py", line 664, in generate
for chunk in self.raw.stream(chunk_size, decode_content=True):
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 349, in stream
for line in self.read_chunked(amt, decode_content=decode_content):
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 526, in read_chunked
self._original_response.close()
File "/usr/lib64/python3.4/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 246, in _error_catcher
raise ProtocolError('Connection broken: %r' % e, e)
requests.packages.urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.4/threading.py", line 920, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.4/threading.py", line 868, in run
self._target(*self._args, **self._kwargs)
File "/home/errbot/plugins/icinga2bot.py", line 186, in report_events
for line in queue:
File "/home/errbot/plugins/icinga2bot.py", line 158, in i2events
for line in queue:
File "/home/errbot/plugins/icinga2bot.py", line 98, in i2api_request
for line in r.iter_lines():
File "/home/errbot/err3/lib/python3.4/site-packages/requests/models.py", line 706, in iter_lines
for chunk in self.iter_content(chunk_size=chunk_size, decode_unicode=decode_unicode):
File "/home/errbot/err3/lib/python3.4/site-packages/requests/models.py", line 667, in generate
raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
With Icinga2.4, this crash happened every time the server was restarted. I thought the problem had gone away after we upgraded to 2.5, but it now appears to have turned into a heisenbug.
I wound up getting advice on IRC to reorder the try/except blocks and make sure they were in the right places. Here's the working result.
def i2api_request(url, headers={}, data={}, stream=False, *, auth=api_auth, ca=api_ca):
    ''' Do not call this function directly; it's a helper for the i2* command functions '''
    # Adapted from http://docs.icinga.org/icinga2/latest/doc/module/icinga2/chapter/icinga2-api
    # Section 11.10.3.1
    debug(url)
    debug(headers)
    debug(data)
    try:
        r = requests.post(url,
                          headers=headers,
                          auth=auth,
                          data=json.dumps(data),
                          verify=ca,
                          stream=stream
                          )
        debug("Connecting to Icinga server")
        debug(r)
        if r.status_code == 200:
            try:
                for line in r.iter_lines():
                    debug('in i2api_request: '+str(line))
                    try:
                        if stream == True:
                            yield(json.loads(line.decode('utf-8')))
                        else:
                            return(json.loads(line.decode('utf-8')))
                    except:
                        debug("Could not produce JSON from "+line)
                        return("Could not produce JSON from "+line)
            except (requests.exceptions.ChunkedEncodingError, ConnectionRefusedError):
                return("Connection to Icinga lost.")
        else:
            debug('Received a bad response from Icinga API: '+str(r.status_code))
            print('Icinga2 API connection lost.')
    except (requests.exceptions.ConnectionError,
            requests.packages.urllib3.exceptions.NewConnectionError) as drop:
        debug("No connection to Icinga API. Error received: "+str(drop))
        sleep(5)
        return("No connection to Icinga API.")

How to retry urlfetch.fetch a few more times in case of error?

Quite often GAE is not able to upload the file, and I get the following error:
ApplicationError: 2
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 636, in __call__
handler.post(*groups)
File "/base/data/home/apps/picasa2vkontakte/1.348093606241250361/picasa2vkontakte.py", line 109, in post
headers=headers
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 260, in fetch
return rpc.get_result()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 592, in get_result
return self.__get_result_hook(self)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 355, in _get_fetch_result
raise DownloadError(str(err))
DownloadError: ApplicationError: 2
How should I perform retries in case of such an error?
try:
    result = urlfetch.fetch(url=self.request.get('upload_url'),
                            payload=''.join(data),
                            method=urlfetch.POST,
                            headers=headers
                            )
except DownloadError:
    # how to retry 2 more times?
    # and how to verify result here?
If you can, move this work into the task queue. When tasks fail, they retry automatically. If they continue to fail, the system gradually backs off the retry frequency to as slow as once per hour. This is an easy way to handle API requests to rate-limited services without implementing one-off retry logic.
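As a rough sketch of that approach (the /upload_worker URL and its handler are assumptions for illustration, not part of the question), enqueueing the work looks something like this:
from google.appengine.api import taskqueue

# Enqueue the upload instead of doing it inline; the task queue retries the
# worker automatically if it raises or returns a non-2xx status.
taskqueue.add(url='/upload_worker',
              params={'upload_url': self.request.get('upload_url')})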
If you really need to handle requests synchronously, something like this should work:
for i in range(3):
    try:
        result = urlfetch.fetch(...)
        # run success conditions here
        break
    except DownloadError:
        #logging.debug("urlfetch failed!")
        pass
You can also pass deadline=10 to urlfetch.fetch to double the default timeout deadline.
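Putting the pieces together with the fetch call from the question (a sketch; it assumes the same self.request, data, and headers, and that a 200 status counts as success):
from google.appengine.api import urlfetch
from google.appengine.api.urlfetch import DownloadError

result = None
for attempt in range(3):
    try:
        result = urlfetch.fetch(url=self.request.get('upload_url'),
                                payload=''.join(data),
                                method=urlfetch.POST,
                                headers=headers,
                                deadline=10)  # double the default timeout
        if result.status_code == 200:
            break  # success; stop retrying
    except DownloadError:
        result = None  # transient fetch failure; try again

if result is None or result.status_code != 200:
    # all three attempts failed; handle the error here
    pass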
