Surviving icinga2 restart in a python requests stream

I have been working on a chatbot interface to icinga2, and have not found a persistent way to survive the restart/reload of the icinga2 server. After a week of moving try/except blocks, using requests sessions, et al, it's time to reach out to the community.
Here is the current iteration of the request function:
def i2api_request(url, headers={}, data={}, stream=False, *, auth=api_auth, ca=api_ca):
    ''' Do not call this function directly; it's a helper for the i2* command functions '''
    # Adapted from http://docs.icinga.org/icinga2/latest/doc/module/icinga2/chapter/icinga2-api
    # Section 11.10.3.1
    try:
        r = requests.post(url,
                          headers=headers,
                          auth=auth,
                          data=json.dumps(data),
                          verify=ca,
                          stream=stream
                          )
    except (requests.exceptions.ChunkedEncodingError,
            requests.packages.urllib3.exceptions.ProtocolError,
            http.client.IncompleteRead,
            ValueError) as drop:
        return("No connection to Icinga API")
    if r.status_code == 200:
        for line in r.iter_lines():
            try:
                if stream == True:
                    yield(json.loads(line.decode('utf-8')))
                else:
                    return(json.loads(line.decode('utf-8')))
            except:
                debug("Could not produce JSON from " + str(line))
                continue
    else:
        #r.raise_for_status()
        debug('Received a bad response from Icinga API: ' + str(r.status_code))
        print('Icinga2 API connection lost.')
(The debug function just flags and prints the indicated error to the console.)
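For context, a minimal stand-in for that helper might look something like the following sketch; the real plugin's version surely differs, and the log prefix here is just an assumption:

def debug(msg):
    # Hypothetical stand-in for the plugin's debug helper: tag the message
    # and print it to the console so errors stand out in the bot's log.
    print('[icinga2bot debug] ' + str(msg))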
This code works fine handling events from the API and sending them to the chatbot, but if the icinga server is reloaded, as would be needed after adding a new server definition in /etc/icinga2..., the listener crashes.
Here is the error response I get when the server is restarted:
Exception in thread Thread-11:
Traceback (most recent call last):
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 447, in _update_chunk_length
self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 228, in _error_catcher
yield
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 498, in read_chunked
self._update_chunk_length()
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 451, in _update_chunk_length
raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/errbot/err3/lib/python3.4/site-packages/requests/models.py", line 664, in generate
for chunk in self.raw.stream(chunk_size, decode_content=True):
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 349, in stream
for line in self.read_chunked(amt, decode_content=decode_content):
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 526, in read_chunked
self._original_response.close()
File "/usr/lib64/python3.4/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 246, in _error_catcher
raise ProtocolError('Connection broken: %r' % e, e)
requests.packages.urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.4/threading.py", line 920, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.4/threading.py", line 868, in run
self._target(*self._args, **self._kwargs)
File "/home/errbot/plugins/icinga2bot.py", line 186, in report_events
for line in queue:
File "/home/errbot/plugins/icinga2bot.py", line 158, in i2events
for line in queue:
File "/home/errbot/plugins/icinga2bot.py", line 98, in i2api_request
for line in r.iter_lines():
File "/home/errbot/err3/lib/python3.4/site-packages/requests/models.py", line 706, in iter_lines
for chunk in self.iter_content(chunk_size=chunk_size, decode_unicode=decode_unicode):
File "/home/errbot/err3/lib/python3.4/site-packages/requests/models.py", line 667, in generate
raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
With Icinga 2.4, this crash happened every time the server was restarted. I thought the problem had gone away after we upgraded to 2.5, but it now appears to have turned into a heisenbug.

I wound up getting advice on IRC to reorder the try/except blocks and make sure they were in the right places. Here's the working result.
def i2api_request(url, headers={}, data={}, stream=False, *, auth=api_auth, ca=api_ca):
    ''' Do not call this function directly; it's a helper for the i2* command functions '''
    # Adapted from http://docs.icinga.org/icinga2/latest/doc/module/icinga2/chapter/icinga2-api
    # Section 11.10.3.1
    debug(url)
    debug(headers)
    debug(data)
    try:
        r = requests.post(url,
                          headers=headers,
                          auth=auth,
                          data=json.dumps(data),
                          verify=ca,
                          stream=stream
                          )
        debug("Connecting to Icinga server")
        debug(r)
        if r.status_code == 200:
            try:
                for line in r.iter_lines():
                    debug('in i2api_request: ' + str(line))
                    try:
                        if stream == True:
                            yield(json.loads(line.decode('utf-8')))
                        else:
                            return(json.loads(line.decode('utf-8')))
                    except:
                        debug("Could not produce JSON from " + str(line))
                        return("Could not produce JSON from " + str(line))
            except (requests.exceptions.ChunkedEncodingError, ConnectionRefusedError):
                return("Connection to Icinga lost.")
        else:
            debug('Received a bad response from Icinga API: ' + str(r.status_code))
            print('Icinga2 API connection lost.')
    except (requests.exceptions.ConnectionError,
            requests.packages.urllib3.exceptions.NewConnectionError) as drop:
        debug("No connection to Icinga API. Error received: " + str(drop))
        sleep(5)
        return("No connection to Icinga API.")

Related

impossible to connect to a websocket

I'm trying to connect as a client to my websocket, but every time I try, this error comes up. I've tried literally everything, but the result is always the same
(with other languages, for example Node.js, I can connect without problems).
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/api/.venv/lib/python3.9/site-packages/websockets/legacy/client.py", line 138, in read_http_response
status_code, reason, headers = await read_response(self.reader)
File "/root/api/.venv/lib/python3.9/site-packages/websockets/legacy/http.py", line 122, in read_response
raise EOFError("connection closed while reading HTTP status line") from exc
EOFError: connection closed while reading HTTP status line
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/api/.venv/lib/python3.9/site-packages/websockets/legacy/client.py", line 666, in __await_impl__
await protocol.handshake(
File "/root/api/.venv/lib/python3.9/site-packages/websockets/legacy/client.py", line 326, in handshake
status_code, response_headers = await self.read_http_response()
File "/root/api/.venv/lib/python3.9/site-packages/websockets/legacy/client.py", line 144, in read_http_response
raise InvalidMessage("did not receive a valid HTTP response") from exc
websockets.exceptions.InvalidMessage: did not receive a valid HTTP response
code:
import websockets
from websockets import client

async def receiver(ws):
    for message in ws:
        print(f"{message}")

async for websocket in client.connect('wss://localhost:8777/password/AAAAAA/1/175/'):
    try:
        print('connecting')
    except websockets.ConnectionClosed:
        print('error')
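For what it's worth, "connection closed while reading HTTP status line" means the endpoint closed the socket before sending any HTTP response at all, which usually points at a TLS/plain-text mismatch or a port that isn't actually serving WebSocket. Separately, the snippet never reads messages and uses a plain for over the connection. A minimal working pattern with websockets >= 10, assuming the URL really is a TLS WebSocket endpoint, might look like this sketch:

import asyncio
import websockets

async def listen(uri):
    # connect() used as an async iterator reconnects automatically on failures
    async for ws in websockets.connect(uri):
        try:
            async for message in ws:   # read messages as they arrive
                print(message)
        except websockets.ConnectionClosed:
            continue  # reconnect and keep listening

asyncio.run(listen('wss://localhost:8777/password/AAAAAA/1/175/'))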

How can I send a large message to a Kafka producer using Python?

If I send a large JSON message to the Kafka server, it shows the error below. How can I increase message.max.bytes=15728640 and replica.fetch.max.bytes=15728640 in Kafka? I tried to increase the socket buffer sizes as below, but it didn't work:
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=15728640
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=15728640
Error:=>
[2022-01-06 12:36:51,281] [9015] [ERROR] [^-App]: Crashed reason=ProducerSendError("Error while sending: MessageSizeTooLargeError('The message is 6677420 bytes when serialized which is larger than the maximum request size you have configured with the max_request_size configuration',)",)
Traceback (most recent call last):
File "/home/twilightuser/faust_library/venv/lib/python3.6/site-packages/faust/transport/drivers/aiokafka.py", line 1059, in send
transactional_id=transactional_id,
File "/home/twilightuser/faust_library/venv/lib/python3.6/site-packages/aiokafka/producer/producer.py", line 310, in send
key_bytes, value_bytes = self._serialize(topic, key, value)
File "/home/twilightuser/faust_library/venv/lib/python3.6/site-packages/aiokafka/producer/producer.py", line 231, in _serialize
" max_request_size configuration" % message_size)
kafka.errors.MessageSizeTooLargeError: [Error 10] MessageSizeTooLargeError: The message is 6677420 bytes when serialized which is larger than the maximum request size you have configured with the max_request_size configuration
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/twilightuser/faust_library/venv/lib/python3.6/site-packages/mode/services.py", line 779, in _execute_task
await task
File "/home/twilightuser/faust_library/venv/lib/python3.6/site-packages/faust/app/base.py", line 941, in _wrapped
return await task()
File "/home/twilightuser/faust_library/venv/lib/python3.6/site-packages/faust/app/base.py", line 991, in around_timer
await fun(*args)
File "/home/twilightuser/faust_library/producer.py", line 14, in my_send
await topic.send(value=value)
File "/home/twilightuser/faust_library/venv/lib/python3.6/site-packages/faust/topics.py", line 193, in send
callback=callback,
File "/home/twilightuser/faust_library/venv/lib/python3.6/site-packages/faust/channels.py", line 303, in _send_now
schema, key_serializer, value_serializer, callback))
File "/home/twilightuser/faust_library/venv/lib/python3.6/site-packages/faust/topics.py", line 417, in publish_message
headers=headers,
File "/home/twilightuser/faust_library/venv/lib/python3.6/site-packages/faust/transport/drivers/aiokafka.py", line 1062, in send
raise ProducerSendError(f'Error while sending: {exc!r}') from exc
faust.exceptions.ProducerSendError: Error while sending: MessageSizeTooLargeError('The message is 6677420 bytes when serialized which is larger than the maximum request size you have configured with the max_request_size configuration',)
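Note that the socket.*.buffer.bytes settings control TCP buffers, not the maximum message size, and the limit being hit in this traceback is on the client side (aiokafka, via Faust) before the broker is even contacted. A hedged sketch of the producer-side change, assuming Faust's producer_max_request_size setting (which maps through to aiokafka's max_request_size); the app id and broker URL are placeholders, and the broker's message.max.bytes / replica.fetch.max.bytes plus the consumers' fetch limits still need to be raised separately:

import faust

app = faust.App(
    'my-app',                              # placeholder app id
    broker='kafka://localhost:9092',       # placeholder broker
    producer_max_request_size=15728640,    # allow ~15 MiB messages from the producer
)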

Python http requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

I'm writing a chat program. Each of my clients has an open get request to the server in a separate thread (and another thread for posting their own messages). I don't want to have a lot of overhead. That is, clients don't send get requests frequently to see if there have been any unseen messages. Instead, they always have exactly one open get request to get the new messages, and as soon as the server responded to them with new unseen messages, they immediately send another get request to the server to stay updated and so on.
So on the client-side, I have something like this:
def coms():
    headers = {'data': myAut.strip()}
    resp = requests.get("http://localhost:8081/receive", headers=headers, timeout=1000000)
    print(resp.text)

t = threading.Thread(target=coms, args=())
t.start()
On the server-side, I have something like this:
def do_GET(self):
    if self.path == '/receive':
        auth = self.headers['data']
        # Using auth, find who has sent this message
        u = None
        for i in range(len(users)):
            print(users[i].aut, auth)
            if users[i].aut == auth:
                u = users[i]
                break
        t = threading.Thread(target=long_Poll, args=(u, self))
        t.start()
and
def long_Poll(client, con):
    while True:
        if len(client.unreadMessages) != 0:
            print("IM GONNA RESPOND")
            con.end_headers()
            con.wfile.write(bytes(client.unreadMessages, "utf8"))
            client.unreadMessages = []
            break
    con.send_response(200)
    con.end_headers()
The logic behind this is that the server does the long polling: it keeps the GET /receive request open in another busy-waiting thread. When any client sends a message via POST /message, the server adds it to the other clients' unseenMessages, so once their threads run again they drop out of the while True: loop and the server hands them the new messages. In other words, I want to hold the client's GET /receive open and not respond to it for as long as I like.
This process might take a long time; maybe the chatroom is idle and there are no messages for a while.
Right now the problem is that as soon as my client sends its first GET /receive request, it gets the error below, even though I have set the timeout on the request to a very large value.
C:\Users\erfan\Desktop\web\client\venv\Scripts\python.exe C:\Users\erfan\Desktop\web\client\Client.py
Hossein
Welcome to chatroom Hossein ! Have a nice time !
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\urllib3\connectionpool.py", line 677, in urlopen
chunked=chunked,
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\urllib3\connectionpool.py", line 426, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\urllib3\connectionpool.py", line 421, in _make_request
httplib_response = conn.getresponse()
File "C:\Users\erfan\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 1321, in getresponse
response.begin()
File "C:\Users\erfan\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 296, in begin
version, status, reason = self._read_status()
File "C:\Users\erfan\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 265, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\requests\adapters.py", line 449, in send
timeout=timeout
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\urllib3\connectionpool.py", line 727, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\urllib3\util\retry.py", line 410, in increment
raise six.reraise(type(error), error, _stacktrace)
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\urllib3\packages\six.py", line 734, in reraise
raise value.with_traceback(tb)
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\urllib3\connectionpool.py", line 677, in urlopen
chunked=chunked,
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\urllib3\connectionpool.py", line 426, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\urllib3\connectionpool.py", line 421, in _make_request
httplib_response = conn.getresponse()
File "C:\Users\erfan\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 1321, in getresponse
response.begin()
File "C:\Users\erfan\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 296, in begin
version, status, reason = self._read_status()
File "C:\Users\erfan\AppData\Local\Programs\Python\Python37\lib\http\client.py", line 265, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\erfan\AppData\Local\Programs\Python\Python37\lib\threading.py", line 917, in _bootstrap_inner
self.run()
File "C:\Users\erfan\AppData\Local\Programs\Python\Python37\lib\threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\erfan\Desktop\web\client\Client.py", line 13, in coms
resp = requests.get("http://localhost:8081/receive", headers=headers,timeout=1000000)
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\requests\api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\requests\api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\requests\sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\requests\sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "C:\Users\erfan\Desktop\web\client\venv\lib\site-packages\requests\adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
===========================================================================
UPDATE:
The strange part is whenever I edit the GET/receive module to this:
def do_GET(self):
    while True:
        pass
everything works fine.
But when I do:
def do_GET(self):
    t = threading.Thread(target=long_Poll, args=(self))
    t.start()

def long_Poll(con):
    client = None
    while True:
        pass
It gives the same error to the client!
I mean, is the problem that I pass the self object to another function to do the responding? Maybe that interrupts the connection? I remember having a similar problem in Java socket programming, where I would hit bugs when I tried to use a single socket to communicate from two functions. Here, however, I only want to communicate in the long-polling function, not anywhere else.
=======================================
update:
I also put my server and client code here. For brevity, I post the paste.ubuntu links here.
Client:
https://paste.ubuntu.com/p/qJmRjYy4Y9/
Server:
https://paste.ubuntu.com/p/rVyHPs4Rjz/
The first time a client connects, they enter their name, and after that the client starts sending GET /receive requests. A client can then send messages to other people with POST /message requests. Any time a user sends a message, the server finds them (by their auth) and updates every other client's unseenMessages, so that whenever their long-polling threads resume they get the new messages and their clients immediately send another GET /receive request.
I have found the answer: I was trying to run a multithreaded server with single-threaded syntax!
I followed this thread to build a multithreaded HTTP server:
Multithreaded web server in python
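For reference, on Python 3.7+ the standard library already ships a threaded variant, so each GET /receive can block in its own thread without stalling other clients. A minimal sketch (the handler body is a placeholder, not the chat logic from the question):

from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class ChatHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # each request runs in its own thread, so blocking here (long polling)
        # no longer prevents other clients from being served
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'ok')

if __name__ == '__main__':
    ThreadingHTTPServer(('localhost', 8081), ChatHandler).serve_forever()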

How to collect files on Windows machines with pywinrm?

How can I collect files from Windows machines? The password is rejected on the pywinrm connection, even though the password is correct and the connection port is listening.
Script:
import winrm
s = winrm.Session('192.168.9.102', auth=('domain\\username', 'password'))
r = s.run_cmd('ipconfig', ['/all'])
print(r.status_code)
print(r.std_out)
Output:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/winrm/transport.py", line 329, in _send_message_request
response.raise_for_status()
File "/usr/local/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: for url: http://192.168.9.102:5985/wsman
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "get.py", line 4, in <module>
r = s.run_cmd('ipconfig', ['/all'])
File "/usr/local/lib/python3.7/site-packages/winrm/__init__.py", line 39, in run_cmd
shell_id = self.protocol.open_shell()
File "/usr/local/lib/python3.7/site-packages/winrm/protocol.py", line 166, in open_shell
res = self.send_message(xmltodict.unparse(req))
File "/usr/local/lib/python3.7/site-packages/winrm/protocol.py", line 243, in send_message
resp = self.transport.send_message(message)
File "/usr/local/lib/python3.7/site-packages/winrm/transport.py", line 323, in send_message
response = self._send_message_request(prepared_request, message)
File "/usr/local/lib/python3.7/site-packages/winrm/transport.py", line 333, in _send_message_request
raise InvalidCredentialsError("the specified credentials were rejected by the server")
winrm.exceptions.InvalidCredentialsError: the specified credentials were rejected by the server
telnet 192.168.9.102 5985
Trying 192.168.9.102...
Connected to 192.168.9.102.
Escape character is '^]'.
^CConnection closed by foreign host.
Have you tried using another authentication transport, such as ntlm, depending on your server configuration:
winrm.Session('192.168.9.102', auth=('username#domain', 'password'), transport='ntlm')
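Once authentication works, one way to actually collect a file over WinRM is to have PowerShell Base64-encode it remotely and decode it locally; this stays entirely within pywinrm but is only practical for smallish files. A sketch, with the remote path and credentials as placeholders:

import base64
import winrm

s = winrm.Session('192.168.9.102', auth=('username@domain', 'password'), transport='ntlm')
# Encode the remote file on the Windows side, then decode it locally.
ps = "[Convert]::ToBase64String([System.IO.File]::ReadAllBytes('C:\\temp\\report.log'))"
r = s.run_ps(ps)
if r.status_code == 0:
    with open('report.log', 'wb') as f:
        f.write(base64.b64decode(r.std_out))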

Python: rerun function on except block with clean call stack

I have code which produces a potentially infinite call stack (simplified):
def listen(self, pipeline):
    try:
        for message in self.channel.consume(self.queue_name):
            pipeline.process(message)
            self.channel.basic_ack(delivery_tag=method_frame.delivery_tag)
    except (pika.exceptions.StreamLostError,
            pika.exceptions.ConnectionClosed,
            pika.exceptions.ChannelClosed,
            ConnectionResetError) as e:
        logging.warning(f'Connection dropped for queue {self.queue_name}. Exception: {e}. Reconnecting...')
        self._reconnect()
        self.listen(pipeline)
If there are any network issues, it logs a warning, reconnects and moves on. But it also adds one extra call to the call stack, so my stack trace on error looks like this:
...
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 1336, in _flush_output
self._connection._flush_output(lambda: self.is_closed, *waiters)
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 522, in _flush_output
raise self._closed_result.value.error
pika.exceptions.StreamLostError: Stream connection lost: ConnectionResetError(104, 'Connection reset by peer')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/msworker/queue.py", line 81, in listen
self.channel.basic_ack(delivery_tag=method_frame.delivery_tag)
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 2113, in basic_ack
self._flush_output()
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 1336, in _flush_output
self._connection._flush_output(lambda: self.is_closed, *waiters)
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 522, in _flush_output
raise self._closed_result.value.error
pika.exceptions.StreamLostError: Stream connection lost: ConnectionResetError(104, 'Connection reset by peer')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/msworker/queue.py", line 81, in listen
self.channel.basic_ack(delivery_tag=method_frame.delivery_tag)
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 2113, in basic_ack
self._flush_output()
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 1336, in _flush_output
self._connection._flush_output(lambda: self.is_closed, *waiters)
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/blocking_connection.py", line 522, in _flush_output
raise self._closed_result.value.error
pika.exceptions.StreamLostError: Stream connection lost: ConnectionResetError(104, 'Connection reset by peer')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/utils/io_services_utils.py", line 1097, in _on_socket_writable
self._produce()
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/utils/io_services_utils.py", line 820, in _produce
self._tx_buffers[0])
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/utils/io_services_utils.py", line 79, in retry_sigint_wrap
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/pika/adapters/utils/io_services_utils.py", line 861, in _sigint_safe_send
return sock.send(data)
ConnectionResetError: [Errno 104] Connection reset by peer
How can I rerun listen function from scratch, without old calls in call stack?
UPDATE
To avoid this issue, the right approach is to move the work into a nested function and rerun that, rather than the function itself:
def listen(self, pipeline):
    try:
        self._listen(pipeline)
    except (pika.exceptions.StreamLostError,
            pika.exceptions.ConnectionClosed,
            pika.exceptions.ChannelClosed,
            ConnectionResetError) as e:
        logging.warning(f'Connection dropped for queue {self.queue_name}. Exception: {e}. Reconnecting...')
        self._reconnect()
        self._listen(pipeline)

def _listen(self, pipeline):
    for message in self.channel.consume(self.queue_name):
        pipeline.process(message)
But still, is there a way to rerun the recursive function with a clean call stack?
Why use recursion when you can use simple iteration?
def listen(self, pipeline):
    while True:
        try:
            for message in self.channel.consume(self.queue_name):
                pipeline.process(message)
                self.channel.basic_ack(delivery_tag=method_frame.delivery_tag)
            return
        except (pika.exceptions.StreamLostError,
                pika.exceptions.ConnectionClosed,
                pika.exceptions.ChannelClosed,
                ConnectionResetError) as e:
            logging.warning(f'Connection dropped for queue {self.queue_name}. Exception: {e}. Reconnecting...')
            self._reconnect()
But still, is there a way to rerun the recursive function with a clean call stack?
Actually, what you currently have IS a "clean call stack" - it's the real call stack, with one distinct frame per call (recursive or not). Some languages do "optimize" tail-recursive calls (by squashing / reusing frames); Python's designers chose not to, to make debugging easier.
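
One refinement worth considering on top of the iterative version: a small, capped backoff so a broker outage doesn't turn into a tight reconnect loop. A sketch using the question's self._reconnect() and self.queue_name; the delay values are arbitrary:

import logging
import time
import pika

def listen(self, pipeline):
    delay = 1
    while True:
        try:
            for message in self.channel.consume(self.queue_name):
                pipeline.process(message)
            return
        except (pika.exceptions.StreamLostError,
                pika.exceptions.ConnectionClosed,
                pika.exceptions.ChannelClosed,
                ConnectionResetError) as e:
            logging.warning('Connection dropped for queue %s: %s. Reconnecting in %ss...',
                            self.queue_name, e, delay)
            time.sleep(delay)
            delay = min(delay * 2, 60)  # exponential backoff, capped at 60s
            self._reconnect()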
