from gevent import monkey
monkey.patch_all()

import gevent
from gevent import pywsgi
from gevent import queue
import redis

REDIS_CONNECTION_POOL = redis.ConnectionPool(host=REDIS_HOST, port=REDIS_PORT, db=REDIS_DB)

def redis_wait(environ, body, channel, wait_once):
    server = redis.Redis(connection_pool=REDIS_CONNECTION_POOL)
    client = server.pubsub()
    client.subscribe(channel)
    messages = client.listen()
    while True:
        message = messages.next()
This is the error:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/gevent/greenlet.py", line 327, in run
    result = self._run(*self.args, **self.kwargs)
  File "/home/ubuntu/www/app.wsgi", line 110, in wait_messages
    redis_wait(environ, body, channel, False)
  File "/home/ubuntu/www/app.wsgi", line 47, in redis_wait
    message = messages.next()
StopIteration
<Greenlet at 0x1c190f0: wait_messages({}, ['5386E49C1CEB16573ACBD90566F3B740983768CB,1358532, <Queue at 0x19fd6d0>, '1410290151', None)> failed with StopIteration
I have tried to google the error, but nothing comes up. The error only occurs intermittently. Does anyone know what it means? Is it a timeout of some sort perhaps?
StopIteration is the exception that Python throws when an iterator (such as messages) has reached the end of its values. It is not an error, but a normal, expected condition that will be automatically handled by Python in some circumstances. For example, if you loop over the iterator using a for loop like so:
for message in messages:
    print message  # Or do something with it
then the StopIteration exception will end the for loop normally.
However, a while loop does not handle StopIteration itself, but lets it continue through to your code so that you can handle it in whatever way you see fit. Your code is currently not handling it, so the exception ends up terminating your program.
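If you want to keep the while loop, a minimal sketch of handling the exception explicitly (Python 2 syntax, matching the question):
while True:
    try:
        message = messages.next()
    except StopIteration:
        break  # the pubsub generator is exhausted; leave the loop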
Replace your while True: message = messages.next() loop with for message in messages and everything should work.
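A minimal sketch of the fixed function; handle_message is a hypothetical placeholder for whatever you do with each message:
def redis_wait(environ, body, channel, wait_once):
    server = redis.Redis(connection_pool=REDIS_CONNECTION_POOL)
    client = server.pubsub()
    client.subscribe(channel)
    # the for loop absorbs StopIteration and exits cleanly
    for message in client.listen():
        handle_message(message)  # hypothetical handler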
More reading on iterators, generators, and StopIteration:
http://anandology.com/python-practice-book/iterators.html
http://www.python-course.eu/generators.php
http://chimera.labs.oreilly.com/books/1230000000393/ch04.html
I am trying to integrate asyncpg with discord.py, but I ran into one very annoying issue.
Whenever I try to stop the bot using ^C, I get spammed with a ton of errors. This is bothersome because, when I'm trying to debug something, I often lose the original error.
Here is my code:
import asyncio
import os

import asyncpg
# `client` (the discord.py client) and `dbutils` are defined elsewhere in the bot

loop = asyncio.get_event_loop()

async def connection_init(conn):
    await conn.execute("SET CLIENT_ENCODING to 'utf-8';")
    conn.client = client

try:
    client.pool = loop.run_until_complete(asyncpg.create_pool(
        host=os.environ.get("postgres_host"),
        database=os.environ.get("postgres_database"),
        user=os.environ.get("postgres_user"),
        password=os.environ.get("postgres_password"),
        connection_class=dbutils.DBUtils,
        init=connection_init
    ))
    print('PostgreSQL connection successful')
except Exception as e:
    print(e)
    # the bot basically cannot function without the database
    print('PostgreSQL connection failed - aborting')
    exit()

client.run(os.environ.get("main"))
Here is the error that floods my terminal. It's the same error each time, but it pops up about 20 times:
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x033DF148>
Traceback (most recent call last):
  File "C:\Users\zghan\AppData\Local\Programs\Python\Python38-32\lib\asyncio\proactor_events.py", line 116, in __del__
  File "C:\Users\zghan\AppData\Local\Programs\Python\Python38-32\lib\asyncio\proactor_events.py", line 108, in close
  File "C:\Users\zghan\AppData\Local\Programs\Python\Python38-32\lib\asyncio\base_events.py", line 719, in call_soon
  File "C:\Users\zghan\AppData\Local\Programs\Python\Python38-32\lib\asyncio\base_events.py", line 508, in _check_closed
RuntimeError: Event loop is closed
That error happens because asyncio.get_event_loop() cannot retrieve a usable loop: at the moment you try to retrieve it, it is already closed.
The correct approach is to start a new loop instead of trying to get the current one:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
# ...
loop = asyncio.get_event_loop()  # now returns the loop set above
Alternatively, since Python 3.7 asyncio offers a new way of managing loops: asyncio.run(), which does not require you to create, set, or retrieve the current loop, as it manages all of that internally.
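A minimal sketch of that approach; main() is an illustrative coroutine standing in for the pool setup and bot startup from the question:
import asyncio

async def main():
    # create the asyncpg pool and start the bot here;
    # asyncio.run() creates a fresh event loop, runs this
    # coroutine to completion, then closes the loop for you
    ...

asyncio.run(main())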
I have a place in my code where I made a mistake in the name of a dict key. It took some time to understand why the code was not running past that place, because no traceback was thrown.
The code is below; I include it for completeness, highlighting with →→→ the place where the issue is:
class Alert:
    lock = threading.Lock()
    sent_alerts = {}

    @staticmethod
    def start_alert_listener():
        # load existing alerts to keep persistence
        try:
            with open("sent_alerts.json") as f:
                json.load(f)
        except FileNotFoundError:
            # there is no file, never mind - it will be created at some point
            pass
        # start the listener
        log.info("starting alert listener")
        client = paho.mqtt.client.Client()
        client.on_connect = Alert.mqtt_connection_alert
        client.on_message = Alert.alert
        client.connect("mqtt.XXXX", 1883, 60)
        client.loop_forever()

    @staticmethod
    def mqtt_connection_alert(client, userdata, flags, rc):
        if rc:
            log.critical(f"error connecting to MQTT: {rc}")
            sys.exit()
        topic = "monitor/+/state"
        client.subscribe(topic)
        log.info(f"subscribed alert to {topic}")

    @staticmethod
    def alert(client, userdata, msg):
        event = json.loads(msg.payload)
        log.debug(f"received alert: {event}")
→→→     if event['ok']:
            # remove existing sent flag, not thread safe!
            with Alert.lock:
                Alert.sent_alerts.pop(msg['id'], None)
            return
        (...)
The log coming from the line just above is
2021-01-14 22:03:02,617 [monitor] DEBUG received alert: {'full_target_name': 'ThisAlwaysFails → a failure', 'isOK': False, 'why': 'explicit fail', 'extra': None, 'id': '6507a61c9688199a34cb006b354c8433', 'last': '2021-01-14T22:03:02.612912+01:00', 'last_ko': '2021-01-14T22:03:02.612912+01:00'}
This is the dict in which I am erroneously trying to access the key ok, which should raise an exception and a traceback. But nothing happens: the code does not go any further, as if the error were silently discarded (and the method silently fails).
I tried to put a raise Exception("wazaa") between the log.debug() and the if; same thing, the method fails at that point but no exception is raised.
I am at a loss as to how an exception could occur without being visible through a traceback.
The alert() method is called in a separate thread, if this matters. For completeness, I tried the following code just to make sure threading does not interfere, and it does not (I do not see a reason why it should):
import threading

class Hello:
    @staticmethod
    def a():
        raise Exception("I am an exception")

threading.Thread(target=Hello.a).start()
outputs
Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\Python38\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "C:\Python38\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "C:/Users/yop/AppData/Roaming/JetBrains/PyCharm2020.3/scratches/scratch_1.py", line 7, in a
    raise Exception("I am an exception")
Exception: I am an exception
Looking at the paho-mqtt source, it appears to call your callback within a try and then log the error:
try:
    self.on_message(self, self._userdata, message)
except Exception as err:
    self._easy_log(
        MQTT_LOG_ERR, 'Caught exception in on_message: %s', err)
    if not self.suppress_exceptions:
        raise
What I can't explain, however, is why the exception isn't being re-raised. I can't see why self.suppress_exceptions would be true for you, since you never set it, but try the following (both sketched below):
Manually setting suppress_exceptions using client.suppress_exceptions = False. This shouldn't be necessary, since that appears to be the default, but it's worth a try.
Checking the log that it apparently maintains. You'll need to refer to the docs on how to do that, since I've never touched this library before.
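A minimal sketch of both suggestions, assuming the client from the question; enable_logger() is paho-mqtt's built-in way of routing its internal log through the standard logging module:
import logging
import paho.mqtt.client

logging.basicConfig(level=logging.DEBUG)

client = paho.mqtt.client.Client()
client.suppress_exceptions = False  # should already be the default
client.enable_logger()  # surfaces internal messages such as "Caught exception in on_message"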
I have set up a Python program that relies on asyncio and socketio to transfer messages between chat participants using two different chat systems (one interface being a CRM backend and the other a JavaScript chat widget on a website).
The relevant part of the code is below. I need to open a separate thread for each independent chat. Therefore, in order to open a channel to the backend, I create a separate thread and run a "long_polling" function in it, which constantly checks for new messages from the backend via pull_messages() and sends them to the widget asynchronously via socketio AsyncServer (forwarding messages from widget to backend works fine and I leave it out here):
thread = threading.Thread(target=long_poll, args=())
thread.daemon = True
thread.start()
with the long_poll() function being defined as
self.server = AsyncServer(async_mode="sanic")

async def send_aio(self, msg):
    await self.server.emit(msg)

def long_poll(self):
    while chat:
        response = self.pull_messages(...)
        if response == "/end":
            chat = False
        asyncio.set_event_loop(asyncio.new_event_loop())
        loop = asyncio.get_event_loop()
        loop.run_until_complete(self.send_aio(response))
        loop.close()
When the long_poll function is executed, just after the human response is fetched, I get the following error:
ERROR asyncio - Task exception was never retrieved
future: <Task finished coro=<AsyncServer._emit_internal() done, defined at /usr/local/lib/python3.6/site-packages/socketio/asyncio_server.py:344> exception=RuntimeError('Non-thread-safe operation invoked on an event loop other than the current one',)>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/socketio/asyncio_server.py", line 354, in _emit_internal
    binary=None))
  File "/usr/local/lib/python3.6/site-packages/socketio/asyncio_server.py", line 365, in _send_packet
    await self.eio.send(sid, encoded_packet, binary=False)
  File "/usr/local/lib/python3.6/site-packages/engineio/asyncio_server.py", line 89, in send
    binary=binary))
  File "/usr/local/lib/python3.6/site-packages/engineio/asyncio_socket.py", line 74, in send
    await self.queue.put(pkt)
  File "/usr/local/lib/python3.6/asyncio/queues.py", line 141, in put
    return self.put_nowait(item)
  File "/usr/local/lib/python3.6/asyncio/queues.py", line 153, in put_nowait
    self._wakeup_next(self._getters)
  File "/usr/local/lib/python3.6/asyncio/queues.py", line 74, in _wakeup_next
    waiter.set_result(None)
  File "uvloop/loop.pyx", line 1251, in uvloop.loop.Loop.call_soon
  File "uvloop/loop.pyx", line 644, in uvloop.loop.Loop._check_thread
RuntimeError: Non-thread-safe operation invoked on an event loop other than the current one
The strange thing is, when I test locally via Docker, this error does not happen; it only appears once I migrate to Kubernetes (images and all other settings are the same).
What I have tried so far to analyze the situation:
Put logger.info("Using thread: {}".format(threading.current_thread().name)) into the long_poll() function, just before the error occurs. It tells me the code runs in Thread-n (n is always 1 locally and usually some higher integer in the Kubernetes pod), which is different from the MainThread where the other asyncio parts of my code run; I therefore thought I was safe (I am aware the asyncio code above is not thread-safe). If you have any suggestions, let me know.
UPDATE:
As user4815162342 suggested, instead of creating and destroying event loops within the while loop, I created a single event loop in a dedicated thread and then pass the coroutine I need to run to that loop via asyncio.run_coroutine_threadsafe(). I am not sure whether I did everything correctly here, but now this part of the code blocks the rest of the program (e.g. no more messages from user to backend are possible...):
self.server = AsyncServer(async_mode="sanic")

async def send_aio(self, msg):
    await self.server.emit(msg)

def long_poll(self, loop):
    while chat:
        response = self.pull_messages(...)
        if response == "/end":
            chat = False
        future = asyncio.run_coroutine_threadsafe(self.send_aio(response), loop)
        _ = future.result()
In the main program:
...
loop = asyncio.new_event_loop()
lpt = LongPollingObject()
threading.Thread(target=lpt.long_poll, args=(loop,), daemon=True).start()
...
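For context, asyncio.run_coroutine_threadsafe() only produces a result if the target loop is actually running somewhere, and the snippet above never starts it, which would explain the blocking future.result(). A minimal sketch of the usual pattern, with run_loop as an illustrative helper:
import asyncio
import threading

loop = asyncio.new_event_loop()

def run_loop():
    # the dedicated thread owns and runs the loop forever, so that
    # futures submitted from other threads can actually complete
    asyncio.set_event_loop(loop)
    loop.run_forever()

threading.Thread(target=run_loop, daemon=True).start()

# from any other thread:
# future = asyncio.run_coroutine_threadsafe(some_coro(), loop)
# result = future.result()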
I'm trying to catch exceptions thrown inside a run_until_complete call, but whatever I try, I can't seem to catch them properly.
Here's my latest attempt (note: I'm using Pyppeteer, a Python port of Puppeteer, which uses asyncio):
import asyncio
from pyppeteer.launcher import launch

async def test(instance):
    page = await instance.newPage()
    await page.goto('http://www.google.com', {'waitUntil': 'load', 'timeout': 1})
    await page.pdf({'path': 'example.pdf'})

async def test2():
    instance = launch(headless=True)
    try:
        task = asyncio.ensure_future(test(instance))
        print(task)
        await task
    except:
        print("Caught!")
    instance.close()

def __main__():
    try:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        loop.run_until_complete(test2())
    except:
        print("ERROR")
    return 'ok'
The issue I'm having with this code is twofold:
If I use asyncio.get_event_loop instead, I get the following error:
There is no current event loop in thread 'Thread-1'.
If I change the timeout to a decent value, I get the following error (at loop.run_until_complete(test2())):
RuntimeError: This event loop is already running
If I set the timeout to 1 (to force the error), the exception indicated below is shown in the console and the text "ERROR" is printed, but the exception itself is not caught.
Here's the stacktrace:
Exception in callback NavigatorWatcher.waitForNavigation.<locals>.watchdog_cb(<Task finishe...> result=None>) at /home/user/www/project/api/env/lib/python3.6/site-packages/pyppeteer/navigator_watcher.py:49
handle: <Handle NavigatorWatcher.waitForNavigation.<locals>.watchdog_cb(<Task finishe...> result=None>) at /home/user/www/project/api/env/lib/python3.6/site-packages/pyppeteer/navigator_watcher.py:49>
Traceback (most recent call last):
  File "/usr/lib64/python3.6/asyncio/events.py", line 145, in _run
    self._callback(*self._args)
  File "/home/user/www/project/api/env/lib/python3.6/site-packages/pyppeteer/navigator_watcher.py", line 52, in watchdog_cb
    self._timeout)
  File "/home/user/www/project/api/env/lib/python3.6/site-packages/pyppeteer/navigator_watcher.py", line 40, in _raise_error
    raise error
concurrent.futures._base.TimeoutError: Navigation Timeout Exceeded: 1 ms exceeded
So, TLDR, how can I catch exceptions thrown inside a run_until_complete call of asyncio?
Thank you so much!
You can't catch this error because it doesn't happen before the event loop finishes its work:
loop.run_until_complete(test2())
print('TEST !!!') # You will see this line, because there was no exception before
But if you look at the whole traceback, you will see:
Error in atexit._run_exitfuncs:
It means the exception happens inside one of the functions Pyppeteer registered with an atexit handler. You would have to find a way to catch the exception there, but I'm not sure it's possible. From the atexit documentation:
If an exception is raised during execution of the exit handlers, a traceback is printed (unless SystemExit is raised) and the exception information is saved. After all exit handlers have had a chance to run, the last exception to be raised is re-raised.
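A minimal sketch demonstrating that behaviour, independent of Pyppeteer:
import atexit

def cleanup():
    raise RuntimeError("raised at exit")  # printed at interpreter shutdown

atexit.register(cleanup)

try:
    print("main program done")
except RuntimeError:
    print("never reached")  # exit handlers run only after this code has finished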
Not related, but never do this:
except:
A bare except: catches everything, including SystemExit and KeyboardInterrupt; catch specific exceptions instead.
As a simple example, consider the network equivalent of /dev/zero, below. (Or more realistically, just a web server sending a large file.)
If a client disconnects early, you get a barrage of log messages:
WARNING:asyncio:socket.send() raised exception.
But I'm not finding any way to catch said exception. The hypothetical server continues reading gigabytes from disk and sending them to a dead socket, with no effort on the client's part, and you've got yourself a DoS attack.
The only thing I've found from the docs is to yield from a read, with an empty string indicating closure. But that's no good here because a normal client isn't going to send anything, blocking the write loop.
What's the right way to detect failed writes, or be notified that the TCP connection has been closed, with the streams API or otherwise?
Code:
from asyncio import *
import logging
#coroutine
def client_handler(reader, writer):
while True:
writer.write(bytes(1))
yield from writer.drain()
logging.basicConfig(level=logging.INFO)
loop = get_event_loop()
coro = start_server(client_handler, '', 12345)
server = loop.run_until_complete(coro)
loop.run_forever()
I did some digging into the asyncio source to expand on dano's answer on why the exceptions aren't being raised without explicitly passing control to the event loop. Here's what I've found.
Calling yield from writer.drain() hands control over to the StreamWriter.drain coroutine. This coroutine checks for and raises any exception that the StreamReaderProtocol set on the StreamReader. But since we passed control straight to drain, the protocol hasn't had a chance to set the exception yet. drain then hands control to the FlowControlMixin._drain_helper coroutine, which returns immediately because some more flags haven't been set yet, and control ends up back with the coroutine that called yield from writer.drain().
And so we have gone full circle without giving control to the event loop, which would allow it to handle other coroutines and bubble the exception up to writer.drain().
Yielding before a drain() gives the transport/protocol a chance to set the appropriate flags and exceptions.
Here's a mock-up of what's going on, with all the nested calls collapsed:
import asyncio as aio

def set_exception(ctx, exc):
    ctx["exc"] = exc

@aio.coroutine
def drain(ctx):
    if ctx["exc"] is not None:
        raise ctx["exc"]
    return

@aio.coroutine
def client_handler(ctx):
    i = 0
    while True:
        i += 1
        print("write", i)
        # yield  # Uncommenting this allows the loop.call_later call to be scheduled.
        yield from drain(ctx)

CTX = {"exc": None}

loop = aio.get_event_loop()
# Set the exception in 5 seconds
loop.call_later(5, set_exception, CTX, Exception("connection lost"))
loop.run_until_complete(client_handler(CTX))
loop.close()
This should probably be fixed upstream in the Streams API by the asyncio developers.
This is a little bit strange, but you can actually allow an exception to reach the client_handler coroutine by forcing it to yield control to the event loop for one iteration:
import asyncio
import logging

@asyncio.coroutine
def client_handler(reader, writer):
    while True:
        writer.write(bytes(1))
        yield  # Yield to the event loop
        yield from writer.drain()

logging.basicConfig(level=logging.INFO)
loop = asyncio.get_event_loop()
coro = asyncio.start_server(client_handler, '', 12345)
server = loop.run_until_complete(coro)
loop.run_forever()
If I do that, I get this output when I kill the client connection:
ERROR:asyncio:Task exception was never retrieved
future: <Task finished coro=<client_handler() done, defined at aio.py:4> exception=ConnectionResetError(104, 'Connection reset by peer')>
Traceback (most recent call last):
  File "/usr/lib/python3.4/asyncio/tasks.py", line 238, in _step
    result = next(coro)
  File "aio.py", line 9, in client_handler
    yield from writer.drain()
  File "/usr/lib/python3.4/asyncio/streams.py", line 301, in drain
    raise exc
  File "/usr/lib/python3.4/asyncio/selector_events.py", line 700, in write
    n = self._sock.send(data)
ConnectionResetError: [Errno 104] Connection reset by peer
I'm really not quite sure why you need to explicitly let the event loop get control for the exception to get through - don't have time at the moment to dig into it. I assume some bit needs to get flipped to indicate the connection dropped, and calling yield from writer.drain() (which can short-circuit going through the event loop) in a loop is preventing that from happening, but I'm really not sure. If I get a chance to investigate, I'll update the answer with that info.
The stream-based API doesn't have a callback you can specify for when the connection is closed, but the Protocol API does, so use it instead: https://docs.python.org/3/library/asyncio-protocol.html#connection-callbacks
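A minimal sketch of a Protocol-based version of the server above; ZeroServer is an illustrative name, and the continuous write loop is elided since the point is the callback:
import asyncio

class ZeroServer(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        self.closed = False
        transport.write(b"\x00")

    def connection_lost(self, exc):
        # invoked as soon as the TCP connection drops, so the server
        # can stop writing instead of feeding a dead socket
        self.closed = True

loop = asyncio.get_event_loop()
server = loop.run_until_complete(loop.create_server(ZeroServer, '', 12345))
loop.run_forever()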