Deadlock in asyncpg - how to resolve? - python

I have a simple execution statement that updates a table if a condition is met. I understand a deadlock happens when processes A and B fight for the same resources and each waits for the other to finish.
However, I just have a simple UPDATE statement and I can't find the other party in the deadlock - I'm confused how this single line can cause one:
await self.bot.pg_con.execute("UPDATE gameusers SET raid_pass = raid_pass + 1 WHERE raid_pass < 10")
File "/home/debian/.local/lib/python3.7/site-packages/asyncpg/pool.py", line 518, in execute
return await con.execute(query, *args, timeout=timeout)
File "/home/debian/.local/lib/python3.7/site-packages/asyncpg/connection.py", line 272, in execute
return await self._protocol.query(query, timeout)
File "asyncpg/protocol/protocol.pyx", line 316, in query
asyncpg.exceptions.DeadlockDetectedError: deadlock detected
DETAIL: Process 24326 waits for ShareLock on transaction 75790925; blocked by process 24240.
Process 24240 waits for ShareLock on transaction 75790924; blocked by process 24326.
How can it occur in a simple UPDATE statement?
Thanks for your time.
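(A single multi-row UPDATE can still deadlock if two of them run concurrently and acquire row locks on overlapping rows in a different order. Below is a minimal sketch of one common workaround - retrying when PostgreSQL picks the statement as the deadlock victim - assuming the same pool as in the question; the function name, retry count and back-off are placeholders, not a verified fix.)
import asyncio
import asyncpg

async def bump_raid_passes(pool, retries=3):
    for attempt in range(retries):
        try:
            return await pool.execute(
                "UPDATE gameusers SET raid_pass = raid_pass + 1 WHERE raid_pass < 10"
            )
        except asyncpg.exceptions.DeadlockDetectedError:
            if attempt == retries - 1:
                raise
            # PostgreSQL aborted this statement as the deadlock victim;
            # back off briefly and let the other transaction finish.
            await asyncio.sleep(0.1 * (attempt + 1))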

Related

Raise Exception if thread hangs

I have scripts running 24/7 that sometimes get stuck when a thread in concurrent.futures gets no response for a request.
The hanging-threads 2.0.5 module prints out which thread hangs and why.
The print looks something like this:
Thread 139646566659840 "ThreadPoolExecutor-666849_1" hangs -
File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
self._bootstrap_inner()
File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
self.run()
File "/usr/lib/python3.9/threading.py", line 892, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.9/concurrent/futures/thread.py", line 77, in _worker
work_item.run()
File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run
result = self.fn(*self.args, **self.kwargs)
How can I, instead of just printing out the hanging threads and files, raise an exception when a thread does not respond within a given time? The script should just restart itself if hanging occurs, instead of waiting for a response.
I have tried with a timeout, but concurrent futures cannot be cancelled while running.
concurrent futures can not be cancelled while running
This is your problem. A hanging thread is still 'running'. Cancelling it from outside is not possible.
Thus you have two options:
switch to something which can be cancelled, like a ProcessPoolExecutor, or
rewrite the blocking code so it fails.
Since you say 'response to a request': if this is a network request and you are early enough/frustrated enough in the dev cycle, I thoroughly recommend switching to an asynchronous concurrency framework like asyncio - this is exactly what it was developed for. In particular you may be interested in trio's implementation of cancel scopes.
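To illustrate the cancel-scope idea, here is a minimal sketch using trio.move_on_after; trio.sleep stands in for the network request that might hang, and the 10-second deadline is a placeholder:
import trio

async def fetch_with_deadline():
    # If the body does not finish within 10 seconds, everything inside
    # the scope is cancelled and execution continues after the block.
    with trio.move_on_after(10) as cancel_scope:
        await trio.sleep(60)  # stand-in for the request that might hang
    if cancel_scope.cancelled_caught:
        raise TimeoutError("request hung - restart the script")

trio.run(fetch_with_deadline)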

Stopping cocotb forked coroutine

I have a coroutine that waits for a rising edge on a signal:
@cocotb.coroutine
def wait_for_rise(self):
    yield RisingEdge(self.dut.mysignal)
I'm launching it in my «main» test function like this:
mythread = cocotb.fork(wait_for_rise())
I want to stop it after a while, even if no rising edge happens. I tried to «kill» it:
mythread.kill()
But an exception happens:
Send raised exception: 'RunningCoroutine' object has no attribute '_join'
File "/opt/cocotb/cocotb/decorators.py", line 121, in send
return self._coro.send(value)
File "/myproject.py", line 206, in i2c_read
wTXDRwthread.kill()
File "/opt/cocotb/cocotb/decorators.py", line 151, in kill
cocotb.scheduler.unschedule(self)
File "/opt/cocotb/cocotb/scheduler.py", line 453, in unschedule
if coro._join in self._trigger2coros:
Is there a solution to stop a forked coroutine properly?
This very much looks like it is the same problem as in https://github.com/potentialventures/cocotb/issues/650 - you can subscribe to the issue to be notified when its status changes.
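As a possible workaround while that issue is open (not a verified fix): in generator-based cocotb you can race the edge against a Timer instead of killing the forked coroutine, since yielding a list of triggers resumes on whichever fires first. A rough sketch, with a placeholder timeout value:
import cocotb
from cocotb.triggers import RisingEdge, Timer

@cocotb.coroutine
def wait_for_rise_or_timeout(dut):
    # Resumes on the rising edge or after the timeout, whichever fires first,
    # so nothing has to be killed from outside.
    yield [RisingEdge(dut.mysignal), Timer(1000)]  # timeout value is a placeholder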

Python3.5 Asyncio - Preventing task exception from dumping to stdout?

I have a text-based interface (asciimatics module) for my program that uses asyncio and the discord.py module, and occasionally when my wifi adapter goes down I get an exception like so:
Task exception was never retrieved
future: <Task finished coro=<WebSocketCommonProtocol.run() done, defined at /home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py:428> exception=ConnectionResetError(104, 'Connection reset by peer')>
Traceback (most recent call last):
File "/usr/lib/python3.5/asyncio/tasks.py", line 241, in _step
result = coro.throw(exc)
File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 434, in run
msg = yield from self.read_message()
File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 456, in read_message
frame = yield from self.read_data_frame(max_size=self.max_size)
File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 511, in read_data_frame
frame = yield from self.read_frame(max_size)
File "/home/mike/.local/lib/python3.5/site-packages/websockets/protocol.py", line 546, in read_frame
self.reader.readexactly, is_masked, max_size=max_size)
File "/home/mike/.local/lib/python3.5/site-packages/websockets/framing.py", line 86, in read_frame
data = yield from reader(2)
File "/usr/lib/python3.5/asyncio/streams.py", line 670, in readexactly
block = yield from self.read(n)
File "/usr/lib/python3.5/asyncio/streams.py", line 627, in read
yield from self._wait_for_data('read')
File "/usr/lib/python3.5/asyncio/streams.py", line 457, in _wait_for_data
yield from self._waiter
File "/usr/lib/python3.5/asyncio/futures.py", line 361, in __iter__
yield self # This tells Task to wait for completion.
File "/usr/lib/python3.5/asyncio/tasks.py", line 296, in _wakeup
future.result()
File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/lib/python3.5/asyncio/selector_events.py", line 662, in _read_ready
data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer
This exception is non-fatal and the program is able to re-connect despite it - what I want to do is prevent this exception from dumping to stdout and mucking up my text interface.
I tried using ensure_future to handle it, but it doesn't seem to work. Am I missing something?
@asyncio.coroutine
def handle_exception():
    try:
        yield from WebSocketCommonProtocol.run()
    except Exception:
        print("SocketException-Retrying")

asyncio.ensure_future(handle_exception())
# start discord client
client.run(token)
"Task exception was never retrieved" is not actually an exception propagated to stdout, but a log message that warns you that you never retrieved the exception from one of your tasks. You can find details here.
I guess the easiest way to avoid this message in your case is to retrieve the exception from the task manually:
coro = WebSocketCommonProtocol.run()  # you don't need any wrapper
task = asyncio.ensure_future(coro)
try:
    # start discord client
    client.run(token)
finally:
    # retrieve exception if any:
    if task.done() and not task.cancelled():
        task.exception()  # this doesn't raise anything, just marks the exception as retrieved
The answer provided by Mikhail is perfectly acceptable, but I realized it wouldn't work for me, since the task that raises the exception is buried deep in some module, so trying to retrieve its exception is kind of difficult. I found that instead I can simply set a custom exception handler for my asyncio loop (the loop is created by the discord client):
def exception_handler(loop, context):
    print("Caught the following exception")
    print(context['message'])

client.loop.set_exception_handler(exception_handler)
client.run(token)

Django-Celery worker hangs when SQL Server Database crashes using FreeTDS driver

I have a celery worker, running on an Ubuntu 14.04 server, that reads from and writes to an MSSQL Server database using pyodbc and the FreeTDS driver. When the SQL Server goes down, the function fails as expected and the celery worker starts trying to clean up and get ready for the next task. At this point the worker calls Django's connection.close() method. This method appears to send a command to roll back any incomplete transactions. Since the server is down, this throws an exception that is not caught by the celery worker. The worker then hangs and neither releases the task nor moves on to the next one.
I tried overriding the on_failure and after_return methods for the task and calling connection.close() there (as specified in other answers), but that didn't work. I suspect it is because when I call connection.close() it hits the same issue and just bubbles the exception up, or because celery's cleanup code runs before those two methods get called.
Any ideas on how to either catch this exception before it gets to celery, or avoid it altogether?
Below is the stack trace of the exception:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 283, in trace_task
state, retval, uuid, args, kwargs, None,
File "/var/www/cortex/corespring/tasks.py", line 13, in after_return
connections['xxx'].close()
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/init.py", line 317, in close
self.connection.close()
File "/usr/local/lib/python2.7/dist-packages/pyodbc.py", line 2642, in close
self.rollback()
File "/usr/local/lib/python2.7/dist-packages/pyodbc.py", line 2565, in rollback
check_success(self, ret)
File "/usr/local/lib/python2.7/dist-packages/pyodbc.py", line 987, in check_success
ctrl_err(SQL_HANDLE_DBC, ODBC_obj.dbc_h, ret, ODBC_obj.ansi)
File "/usr/local/lib/python2.7/dist-packages/pyodbc.py", line 965, in ctrl_err
raise DatabaseError(state,err_text)
DatabaseError: (u'08S01', u'[08S01] [FreeTDS][SQL Server]Write to the server failed')
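For reference, here is a rough sketch of the kind of after_return override described above, with the close() call wrapped so the FreeTDS failure cannot bubble up into celery (the base-class name and the 'xxx' connection alias are illustrative, taken from the question):
from celery import Task
from django.db import connections

class MSSQLCleanupTask(Task):
    def after_return(self, status, retval, task_id, args, kwargs, einfo):
        try:
            # close() issues a rollback; with the server already gone this
            # raises the 08S01 "Write to the server failed" DatabaseError.
            connections['xxx'].close()
        except Exception:
            # Swallow it here so the worker can release the task and move on.
            pass
Tasks would then use this as their base class (e.g. via the task decorator's base argument); as the question notes, this may still not help if celery's own cleanup runs before the override.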

call dbus method on proxy without blocking (or no timeout)

I'm trying to lock my GNOME screensaver; however, the dbus .Lock method waits for a response. I would like it to not wait for a response (just send the signal to lock the screensaver and continue on with life). How do I do this? (In practice this code is in a thread so I do continue on with life, but I still get the nasty error.)
session_bus = dbus.SessionBus()
screensaver_proxy = session_bus.get_object('org.gnome.ScreenSaver', '/org/gnome/ScreenSaver')
locked = screensaver_proxy.Lock(dbus_interface='org.gnome.ScreenSaver')
print "HELLO" # will never get called, due to:
/*
locked = screensaver_proxy.Lock(dbus_interface='org.gnome.ScreenSaver')
File "/usr/lib/pymodules/python2.6/dbus/proxies.py", line 68, in __call__
return self._proxy_method(*args, **keywords)
File "/usr/lib/pymodules/python2.6/dbus/proxies.py", line 140, in __call__
**keywords)
File "/usr/lib/pymodules/python2.6/dbus/connection.py", line 620, in call_blocking
message, timeout)
DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken
*/
Bah. Solution: http://dbus.freedesktop.org/doc/dbus-python/doc/tutorial.html#making-asynchronous-method-calls
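Following that tutorial, the call becomes asynchronous by passing reply_handler and error_handler to the proxy method, so it returns immediately instead of blocking for a reply. Note the handlers only fire if a main loop (e.g. dbus.mainloop.glib) is running; a sketch, with placeholder handler names:
import dbus

def on_lock_reply(*args):
    pass  # Lock acknowledged

def on_lock_error(exc):
    print("Lock failed: %s" % exc)

session_bus = dbus.SessionBus()
screensaver_proxy = session_bus.get_object('org.gnome.ScreenSaver', '/org/gnome/ScreenSaver')
# With handlers supplied, the call is dispatched asynchronously and
# returns immediately instead of waiting for the reply.
screensaver_proxy.Lock(dbus_interface='org.gnome.ScreenSaver',
                       reply_handler=on_lock_reply,
                       error_handler=on_lock_error)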
