Using APScheduler with an asyncpg DB as job store: MissingGreenlet error - python

I'm trying to use APScheduler with a PostgreSQL database via an asyncpg connection. I thought it would work, because asyncpg supports SQLAlchemy (ref). But it isn't working, and to make it even worse, I don't understand the error message, so I don't even have a guess what to google for.
import asyncio
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

def simple_job():
    print('This was an easy job!')

scheduler = AsyncIOScheduler()
jobstore = SQLAlchemyJobStore(url='postgresql+asyncpg://user:password@localhost:5432/public')
scheduler.add_jobstore(jobstore)

# schedule a simple job
scheduler.add_job(simple_job, 'cron', second='15', id='heartbeat',
                  coalesce=True, misfire_grace_time=5, replace_existing=True)
scheduler.start()
Versions:
python 3.7
APScheduler==3.7.0
asyncpg==0.22.0
SQLAlchemy==1.4.3
Error Message and traceback:
Traceback (most recent call last):
File "C:/Users/d/PycharmProjects/teamutils/utils/automation.py", line 320, in <module>
scheduler.start()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\apscheduler\schedulers\asyncio.py", line 45, in start
super(AsyncIOScheduler, self).start(paused)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\apscheduler\schedulers\base.py", line 163, in start
store.start(self, alias)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\apscheduler\jobstores\sqlalchemy.py", line 68, in start
self.jobs_t.create(self.engine, True)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\sql\schema.py", line 940, in create
bind._run_ddl_visitor(ddl.SchemaGenerator, self, checkfirst=checkfirst)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2979, in _run_ddl_visitor
with self.begin() as conn:
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2895, in begin
conn = self.connect(close_with_result=close_with_result)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3067, in connect
return self._connection_cls(self, close_with_result=close_with_result)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 91, in __init__
else engine.raw_connection()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3146, in raw_connection
return self._wrap_pool_connect(self.pool.connect, _connection)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\base.py", line 3113, in _wrap_pool_connect
return fn()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 301, in connect
return _ConnectionFairy._checkout(self)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 755, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 419, in checkout
rec = pool._do_get()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\impl.py", line 145, in _do_get
self._dec_overflow()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\compat.py", line 198, in raise_
raise exception
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\impl.py", line 142, in _do_get
return self._create_connection()
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 247, in _create_connection
return _ConnectionRecord(self)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 362, in __init__
self.__connect(first_connect_check=True)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 605, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\compat.py", line 198, in raise_
raise exception
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\pool\base.py", line 599, in __connect
connection = pool._invoke_creator(self)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\create.py", line 578, in connect
return dialect.connect(*cargs, **cparams)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\engine\default.py", line 548, in connect
return self.dbapi.connect(*cargs, **cparams)
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\dialects\postgresql\asyncpg.py", line 744, in connect
await_only(self.asyncpg.connect(*arg, **kw)),
File "C:\Users\d\PycharmProjects\teamutils\venv\lib\site-packages\sqlalchemy\util\_concurrency_py3k.py", line 48, in await_only
"greenlet_spawn has not been called; can't call await_() here. "
sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_() here. Was IO attempted in an unexpected place? (Background on this error at: http://sqlalche.me/e/14/xd2s)
sys:1: RuntimeWarning: coroutine 'connect' was never awaited
I looked up the provided link, but couldn't make much sense of it. It would be nice if somebody could tell me what is going on, so I can search for a solution on my own (a solution would be okay too, of course).
Sorry for this open-ended question, but my understanding is so limited that I don't know what to ask for.

I think the problem is in APScheduler.
What is happening is that scheduler.start() attempts to create the job table in your database. But your database URL specifies the +asyncpg driver, and no coroutine (i.e. async def) is running when APScheduler tries to create the table; hence the "coroutine 'connect' was never awaited" error.
After reading the APScheduler code, I think "integrates with asyncio" is a little misleading: the scheduler itself can run on asyncio, but the job store has no provision for an asyncio database connection.
You can get it working by removing +asyncpg from the connection URL used with APScheduler, as in the sketch below.
Note that it would still be possible to use async db calls within job functions via a separate asyncpg connection.
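For illustration, a minimal sketch of that fix (credentials and host are placeholders): the job store uses SQLAlchemy's default synchronous PostgreSQL driver, while the scheduler itself still runs on asyncio.
import asyncio
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore

def simple_job():
    print('This was an easy job!')

scheduler = AsyncIOScheduler()
# no "+asyncpg" here, so SQLAlchemy uses its default synchronous driver
jobstore = SQLAlchemyJobStore(url='postgresql://user:password@localhost:5432/public')
scheduler.add_jobstore(jobstore)
scheduler.add_job(simple_job, 'cron', second='15', id='heartbeat',
                  coalesce=True, misfire_grace_time=5, replace_existing=True)
scheduler.start()
asyncio.get_event_loop().run_forever()  # keep the event loop alive so jobs can fire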

Related

How to access an external MySQL database from within a PythonAnywhere Flask webapp (via Pandas & SQLAlchemy): receiving sqlalchemy.exc.OperationalError

I am attempting to convert a Flask webapp to run on PythonAnywhere instead of my Raspberry Pi, on which it is currently hosted and functions perfectly.
One of the core features of the webapp is using Pandas to query an externally hosted MySQL database (NOT hosted on PythonAnywhere).
Previously I have done this using the following method, with no trouble:
import pandas as pd

URI = f"mysql://{user}@{host}:{port}/{schema}"
my_dataframe = pd.read_sql_query(sql="select * from users", con=URI)
Attempting this with the webapp hosted on PythonAnywhere results in a 502 backend error, with an error log of:
File "./webapp.py", line 240, in ages
message_, success=scripts.ages.main()
File "./scripts/ages.py", line 22, in main
my_dataframe=pd.read_sql_query(sql="select * from users",con=URI)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/pandas/io/sql.py", line 383, in read_sql_query
chunksize=chunksize,
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/pandas/io/sql.py", line 1295, in read_query
result = self.execute(*args)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/pandas/io/sql.py", line 1162, in execute
*args, **kwargs
File "<string>", line 2, in execute
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/util/deprecations.py", line 401, in warned
return fn(*args, **kwargs)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 3145, in execute
connection = self.connect(close_with_result=True)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 3204, in connect
return self._connection_cls(self, close_with_result=close_with_result)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 96, in __init__
else engine.raw_connection()
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 3283, in raw_connection
return self._wrap_pool_connect(self.pool.connect, _connection)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 3254, in _wrap_pool_connect
e, dialect, self
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2101, in _handle_dbapi_exception_noconnection
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 3250, in _wrap_pool_connect
return fn()
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 310, in connect
return _ConnectionFairy._checkout(self)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 868, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 476, in checkout
rec = pool._do_get()
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/pool/impl.py", line 146, in _do_get
self._dec_overflow()
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/pool/impl.py", line 143, in _do_get
return self._create_connection()
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 256, in _create_connection
return _ConnectionRecord(self)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 371, in __init__
self.__connect()
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 666, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 661, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/engine/create.py", line 590, in connect
return dialect.connect(*cargs, **cparams)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 597, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/MySQLdb/__init__.py", line 130, in Connect
return Connection(*args, **kwargs)
File "/home/mywebapp/.virtualenvs/my-virtualenv/lib/python3.6/site-packages/MySQLdb/connections.py", line 185, in __init__
super().__init__(*args, **kwargs2)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (2003, "Can't connect to MySQL server on '34.105.08.12:8080' (111)")
(Background on this error at: https://sqlalche.me/e/14/e3q8)
(I've changed the IP)
I've taken a look at the linked site https://sqlalche.me/e/14/e3q8, but didn't see anything useful there.
I've also tried googling how to use SQLAlchemy within PythonAnywhere. This mostly turned up questions about connecting to databases hosted by PythonAnywhere, which doesn't help me.
I came across this link:
https://help.pythonanywhere.com/pages/UsingSQLAlchemywithMySQL/
I don't actually know whether it refers to using SQLAlchemy to connect to a PythonAnywhere-hosted MySQL database or to an external one, but I tried implementing it anyway. I doubt it makes a difference, but the URI is in a separate file from the Pandas SQL query.
# credentials.py
from sqlalchemy import create_engine

URI_string = f"mysql://{user}@{host}:{port}/{schema}"
URI = create_engine(URI_string, pool_recycle=280)

import pandas as pd
import credentials

my_dataframe = pd.read_sql_query(sql="select * from users", con=credentials.URI)
The webapp is still functional when testing on the Pi with this change; however, it made no difference on PythonAnywhere, and the same error log output is received.
If anyone has any ideas, please let me know!
If you're using a free account, you will only be able to connect out of PythonAnywhere via HTTP(S) to a list of approved sites, so external database connections will not work from a free account.

How to implement SQLAlchemy Postgres async driver in a Django project?

I have come across really weird behavior implementing SQLAlchemy in my Django project using the asynchronous Postgres driver.
When I create a SQLAlchemy session using a sessionmaker and then execute a simple SELECT statement within a session context manager, it produces the correct result the first time, but subsequent attempts result in a RuntimeError stating "Task ... attached to a different loop". I looked up the error, and it usually indicates that a session is persisted across multiple Django requests; however, in my code I don't see how that is possible, since I create the database session within the request using the sessionmaker and close it via the session context manager.
It gets weirder when reloading the page a third time using the context manager approach: the RuntimeError changes to an InterfaceError stating "cannot perform operation: another operation is in progress". When I instead create the session manually without the context manager and close it immediately after use, it only ever produces the RuntimeError, even from the second attempt onward.
And it continues to get weird: when I take the second approach but do not await the session close, everything works fine, apart from a log message stating that the method was never awaited.
I have created a GitHub repository with a docker-compose.yml that reproduces the issue if anyone wants to see it for themselves.
https://github.com/enots227/Django-SQLAlchemy-Async
Django Setup
settings.py
# Database
DB_ENGINE = create_async_engine(URL.create(
    "postgresql+asyncpg",
    username=os.getenv('DB_USERNAME'),
    password=os.getenv('DB_PASSWORD'),
    host=os.getenv('DB_HOST'),
    port=os.getenv('DB_PORT'),
    database=os.getenv('DB_NAME')), echo=True)
DB_SESSION_MAKER = sessionmaker(DB_ENGINE, AsyncSession, expire_on_commit=False)
Approach 1: Using Context Manager
Here is my code for the Django view using the context manager (http://127.0.0.1:8000/sql/1/):
async def view1(request):
    html = '<html><body><ul>'
    async with settings.DB_SESSION_MAKER() as db_session:
        items = await list_accounts(db_session)
    for item in items:
        html += '<li>' + item.name + '</li>'
    html += '</ul></body></html>'
    return HttpResponse(html)
For the first execution, it returns the correct response:
<html><body><ul><li>test1</li></ul></body></html>
But if I try to reload the page after the first execution, it raises a RuntimeError stating "Task ... attached to a different loop".
Internal Server Error: /sql/1/
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
File "/usr/local/lib/python3.10/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.10/site-packages/asgiref/sync.py", line 204, in __call__
return call_result.result()
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/site-packages/asgiref/sync.py", line 270, in main_wrap
result = await self.awaitable(*args, **kwargs)
File "/usr/src/app/src/sql/views.py", line 12, in view1
items = await list_accounts(db_session)
File "/usr/src/app/src/sql/models.py", line 20, in list_accounts
return await db_session.scalars(select(Account))
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/ext/asyncio/session.py", line 269, in scalars
result = await self.execute(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/ext/asyncio/session.py", line 212, in execute
result = await greenlet_spawn(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 134, in greenlet_spawn
result = context.throw(*sys.exc_info())
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1692, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1631, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
return connection._execute_clauseelement(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1498, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1862, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2047, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 479, in execute
self._adapt_connection.await_(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 76, in await_only
return current.driver.switch(awaitable)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 129, in greenlet_spawn
value = await result
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 408, in _prepare_and_execute
await adapt_connection._start_transaction()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 716, in _start_transaction
self._handle_exception(error)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 684, in _handle_exception
raise error
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 714, in _start_transaction
await self._transaction.start()
File "/usr/local/lib/python3.10/site-packages/asyncpg/transaction.py", line 138, in start
await self._connection.execute(query)
File "/usr/local/lib/python3.10/site-packages/asyncpg/connection.py", line 318, in execute
return await self._protocol.query(query, timeout)
File "asyncpg/protocol/protocol.pyx", line 338, in query
RuntimeError: Task <Task pending name='Task-5' coro=<AsyncToSync.main_wrap() running at /usr/local/lib/python3.10/site-packages/asgiref/sync.py:270> cb=[_run_until_complete_cb() at /usr/local/lib/python3.10/asyncio/base_events.py:184]> got Future <Future pending cb=[Protocol._on_waiter_completed()]> attached to a different loop
And when I reload for a third time, it raises an InterfaceError stating "cannot perform operation: another operation is in progress".
Internal Server Error: /sql/1/
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 714, in _start_transaction
await self._transaction.start()
File "/usr/local/lib/python3.10/site-packages/asyncpg/transaction.py", line 138, in start
await self._connection.execute(query)
File "/usr/local/lib/python3.10/site-packages/asyncpg/connection.py", line 318, in execute
return await self._protocol.query(query, timeout)
File "asyncpg/protocol/protocol.pyx", line 323, in query
File "asyncpg/protocol/protocol.pyx", line 707, in asyncpg.protocol.protocol.BaseProtocol._check_state
asyncpg.exceptions._base.InterfaceError: cannot perform operation: another operation is in progress
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 479, in execute
self._adapt_connection.await_(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 76, in await_only
return current.driver.switch(awaitable)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 129, in greenlet_spawn
value = await result
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 408, in _prepare_and_execute
await adapt_connection._start_transaction()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 716, in _start_transaction
self._handle_exception(error)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 682, in _handle_exception
raise translated_error from error
sqlalchemy.dialects.postgresql.asyncpg.AsyncAdapt_asyncpg_dbapi.InterfaceError: <class 'asyncpg.exceptions._base.InterfaceError'>: cannot perform operation: another operation is in progress
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
File "/usr/local/lib/python3.10/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.10/site-packages/asgiref/sync.py", line 204, in __call__
return call_result.result()
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/site-packages/asgiref/sync.py", line 270, in main_wrap
result = await self.awaitable(*args, **kwargs)
File "/usr/src/app/src/sql/views.py", line 12, in view1
items = await list_accounts(db_session)
File "/usr/src/app/src/sql/models.py", line 20, in list_accounts
return await db_session.scalars(select(Account))
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/ext/asyncio/session.py", line 269, in scalars
result = await self.execute(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/ext/asyncio/session.py", line 212, in execute
result = await greenlet_spawn(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 134, in greenlet_spawn
result = context.throw(*sys.exc_info())
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1692, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1631, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 325, in _execute_on_connection
return connection._execute_clauseelement(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1498, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1862, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2043, in _handle_dbapi_exception
util.raise_(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 479, in execute
self._adapt_connection.await_(
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 76, in await_only
return current.driver.switch(awaitable)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 129, in greenlet_spawn
value = await result
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 408, in _prepare_and_execute
await adapt_connection._start_transaction()
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 716, in _start_transaction
self._handle_exception(error)
File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 682, in _handle_exception
raise translated_error from error
sqlalchemy.exc.InterfaceError: (sqlalchemy.dialects.postgresql.asyncpg.InterfaceError) <class 'asyncpg.exceptions._base.InterfaceError'>: cannot perform operation: another operation is in progress
[SQL: SELECT account.acct_id, account.name
FROM account]
(Background on this error at: https://sqlalche.me/e/14/rvf5)
Approach 2: Manually Creating and Closing Session
Here is my code for the Django view that manages the session manually (http://127.0.0.1:8000/sql/2/):
async def view2(request):
    html = '<html><body><ul>'
    db_session: AsyncSession = settings.DB_SESSION_MAKER()
    items = await list_accounts(db_session)
    await db_session.close()
    for item in items:
        html += '<li>' + item.name + '</li>'
    html += '</ul></body></html>'
    return HttpResponse(html)
For the first execution, it returns the correct response, as in approach 1. After the first execution, I get the same RuntimeError as in approach 1 and never get the InterfaceError.
Approach 3: Manually Creating and Closing Session (Not Awaiting Close)
Here is my code for the Django view that manages the session manually without awaiting the close method (http://127.0.0.1:8000/sql/2/):
async def view2(request):
    html = '<html><body><ul>'
    db_session: AsyncSession = settings.DB_SESSION_MAKER()
    items = await list_accounts(db_session)
    db_session.close()  # logs an async warning since await is missing, but works
    for item in items:
        html += '<li>' + item.name + '</li>'
    html += '</ul></body></html>'
    return HttpResponse(html)
All executions return the correct response.

"KeyError: 0" when connecting to MySQL (get_default_isolation_level)

I'm currently developing against Python 3.9.6 with SQLAlchemy 1.4.23, and whenever I use SQLAlchemy to connect to a database, I get an error with this stack trace:
Traceback (most recent call last):
File "/mnt/c/Importer/common/db.py", line 106, in db_cur
with engine.connect() as con:
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3166, in connect
return self._connection_cls(self, close_with_result=close_with_result)
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__
else engine.raw_connection()
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3245, in raw_connection
return self._wrap_pool_connect(self.pool.connect, _connection)
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3212, in _wrap_pool_connect
return fn()
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 307, in connect
return _ConnectionFairy._checkout(self)
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 767, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 425, in checkout
rec = pool._do_get()
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/pool/impl.py", line 146, in _do_get
self._dec_overflow()
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/pool/impl.py", line 143, in _do_get
return self._create_connection()
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 253, in _create_connection
return _ConnectionRecord(self)
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 368, in __init__
self.__connect()
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 622, in __connect
pool.dispatch.connect.for_modify(
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/event/attr.py", line 329, in _exec_w_sync_on_first_run
self(*args, **kw)
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/event/attr.py", line 343, in __call__
fn(*args, **kw)
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 1691, in go
return once_fn(*arg, **kw)
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/engine/create.py", line 674, in first_connect
dialect.initialize(c)
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/dialects/mysql/base.py", line 2961, in initialize
default.DefaultDialect.initialize(self, connection)
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 407, in initialize
self.default_isolation_level = self.get_default_isolation_level(
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 473, in get_default_isolation_level
return self.get_isolation_level(dbapi_conn)
File "/home/jimmy/.local/share/virtualenvs/Importer-0-6a4YAh/lib/python3.9/site-packages/sqlalchemy/dialects/mysql/base.py", line 2720, in get_isolation_level
val = row[0]
KeyError: 0
I can fix the issue by manually changing the file from
val = row[0]
to
val = row['@@transaction_isolation']
but I don't want to have to manually change dependency files after installing them. I have tried setting the transaction isolation manually when creating the engine, but that doesn't prevent get_isolation_level() from being called, so the error persists. Is SQLAlchemy not compatible with Python 3.9.6? Does this have anything to do with the MySQL server version?
The problem was the DictCursor being used in the connection. Changing _fetch_type so that SELECT @@transaction_isolation returns a tuple instead of a dictionary solved the issue.
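For illustration, a minimal sketch of why the positional lookup fails with a dict-style cursor (the row value here is made up):
# What a DictCursor hands back for SELECT @@transaction_isolation:
row = {'@@transaction_isolation': 'REPEATABLE-READ'}
row['@@transaction_isolation']  # works
row[0]                          # KeyError: 0 -- the dialect expects a positional tuple
One way to avoid patching the installed package might be to force a plain tuple cursor at engine creation, e.g. assuming the PyMySQL driver (URL is a placeholder):
import pymysql
from sqlalchemy import create_engine

engine = create_engine(
    "mysql+pymysql://user:password@host/db",
    connect_args={"cursorclass": pymysql.cursors.Cursor},  # default tuple cursor
)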

PyMongo authentication failing

I set up a database with mongolab where the db had the same name as the collection. I decided I didn't like this, deleted it, and made a better naming scheme. I added a new user/password for the new db and tried to authenticate. It keeps failing and I'm not sure why. I have double-checked my credentials and they are correct. I have checked the URI that mongolab provides and that is correct too. This code worked well for the first db.
This is my login code:
def __init__(self, user_id, password, database, collection):
    # mongodb://<dbuser>:<dbpassword>@ds017205.mlab.com:17205/words
    mongodb_uri = "mongodb://" + user_id + ":" + password + "@ds017205.mlab.com:17205/" + database
    client = pymongo.MongoClient(mongodb_uri)
    db = client[database]
    self.collection = db[collection]  # is declared in class
This generates the following error:
Error
Traceback (most recent call last):
File "C:\Users\Austin\PycharmProjects\Words\test_mongodb.py", line 15, in testUpdate
results = self.mongodb_obj.update(test)
File "C:\Users\Austin\PycharmProjects\Words\mongodb.py", line 38, in update
upsert=True
File "C:\Python33\lib\site-packages\pymongo\collection.py", line 2235, in update
with self._socket_for_writes() as sock_info:
File "C:\Python33\lib\contextlib.py", line 48, in __enter__
return next(self.gen)
File "C:\Python33\lib\site-packages\pymongo\mongo_client.py", line 718, in _get_socket
with server.get_socket(self.__all_credentials) as sock_info:
File "C:\Python33\lib\contextlib.py", line 48, in __enter__
return next(self.gen)
File "C:\Python33\lib\site-packages\pymongo\server.py", line 152, in get_socket
with self.pool.get_socket(all_credentials, checkout) as sock_info:
File "C:\Python33\lib\contextlib.py", line 48, in __enter__
return next(self.gen)
File "C:\Python33\lib\site-packages\pymongo\pool.py", line 541, in get_socket
sock_info.check_auth(all_credentials)
File "C:\Python33\lib\site-packages\pymongo\pool.py", line 306, in check_auth
auth.authenticate(credentials, self)
File "C:\Python33\lib\site-packages\pymongo\auth.py", line 436, in authenticate
auth_func(credentials, sock_info)
File "C:\Python33\lib\site-packages\pymongo\auth.py", line 416, in _authenticate_default
return _authenticate_scram_sha1(credentials, sock_info)
File "C:\Python33\lib\site-packages\pymongo\auth.py", line 216, in _authenticate_scram_sha1
res = sock_info.command(source, cmd)
File "C:\Python33\lib\site-packages\pymongo\pool.py", line 213, in command
read_concern)
File "C:\Python33\lib\site-packages\pymongo\network.py", line 99, in command
helpers._check_command_response(response_doc, None, allowable_errors)
File "C:\Python33\lib\site-packages\pymongo\helpers.py", line 196, in _check_command_response
raise OperationFailure(msg % errmsg, code, response)
pymongo.errors.OperationFailure: Authentication failed.
Help with this would be appreciated. Thanks
Nothing was wrong with the code... the db user had been set up with the same incorrect password entered twice in the setup box, so it was accepted; then, when the code tried to use the actual intended password, it failed for obvious reasons. Thanks all for looking.

Celery creating a new connection for each task

I'm using Celery with Redis to run some background tasks, but each time a task is called, it creates a new connection to Redis. I'm on Heroku and my Redis to Go plan allows for 10 connections. I'm quickly hitting that limit and getting a "max number of clients reached" error.
How can I ensure that Celery queues the tasks on a single connection rather than opening a new one each time?
EDIT - including the full traceback
File "/app/.heroku/venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/object_wrapper.py", line 166, in __call__
self._nr_instance, args, kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/hooks/framework_django.py", line 447, in wrapper
return wrapped(*args, **kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 77, in wrapped_view
return view_func(*args, **kwargs)
File "/app/feedback/views.py", line 264, in zencoder_webhook_handler
tasks.process_zencoder_notification.delay(webhook)
File "/app/.heroku/venv/lib/python2.7/site-packages/celery/app/task.py", line 343, in delay
return self.apply_async(args, kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/celery/app/task.py", line 458, in apply_async
with app.producer_or_acquire(producer) as P:
File "/usr/local/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/app/.heroku/venv/lib/python2.7/site-packages/celery/app/base.py", line 247, in producer_or_acquire
with self.amqp.producer_pool.acquire(block=True) as producer:
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 705, in acquire
R = self.prepare(R)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/pools.py", line 54, in prepare
p = p()
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/pools.py", line 45, in <lambda>
return lambda: self.create_producer()
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/pools.py", line 42, in create_producer
return self.Producer(self._acquire_connection())
File "/app/.heroku/venv/lib/python2.7/site-packages/celery/app/amqp.py", line 160, in __init__
super(TaskProducer, self).__init__(channel, exchange, *args, **kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/messaging.py", line 83, in __init__
self.revive(self.channel)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/messaging.py", line 174, in revive
channel = self.channel = maybe_channel(channel)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 879, in maybe_channel
return channel.default_channel
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 617, in default_channel
self.connection
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 610, in connection
self._connection = self._establish_connection()
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/connection.py", line 569, in _establish_connection
conn = self.transport.establish_connection()
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 722, in establish_connection
self._avail_channels.append(self.create_channel(self))
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 705, in create_channel
channel = self.Channel(connection)
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/transport/redis.py", line 271, in __init__
self.client.info()
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/object_wrapper.py", line 166, in __call__
self._nr_instance, args, kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/function_trace.py", line 81, in literal_wrapper
return wrapped(*args, **kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/client.py", line 344, in info
return self.execute_command('INFO')
File "/app/.heroku/venv/lib/python2.7/site-packages/kombu/transport/redis.py", line 536, in execute_command
conn.send_command(*args)
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 273, in send_command
self.send_packed_command(self.pack_command(*args))
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 256, in send_packed_command
self.connect()
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/object_wrapper.py", line 166, in __call__
self._nr_instance, args, kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/newrelic-1.4.0.137/newrelic/api/function_trace.py", line 81, in literal_wrapper
return wrapped(*args, **kwargs)
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 207, in connect
self.on_connect()
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 233, in on_connect
if self.read_response() != 'OK':
File "/app/.heroku/venv/lib/python2.7/site-packages/redis/connection.py", line 283, in read_response
raise response
ResponseError: max number of clients reached
I ran into the same problem on Heroku with CloudAMQP. I do not know why, but I had no luck assigning low integers to the BROKER_POOL_LIMIT setting.
Ultimately, I found that setting BROKER_POOL_LIMIT=None or BROKER_POOL_LIMIT=0 mitigated my issue. According to the Celery docs, this disables the connection pool. So far this has not been a noticeable issue for me; however, I'm not sure whether it might be for you.
Link to relevant info: http://celery.readthedocs.org/en/latest/configuration.html#broker-pool-limit
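As a sketch, the workaround above is a one-line setting in your Celery configuration (a Django settings.py or a celeryconfig module, assuming Celery 3.x setting names):
BROKER_POOL_LIMIT = None  # disables the broker connection pool; 0 has the same effect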
I wish I was using Redis, because there is a specific option to limit the number of connections: CELERY_REDIS_MAX_CONNECTIONS.
http://docs.celeryproject.org/en/3.0/configuration.html#celery-redis-max-connections (for 3.0)
http://docs.celeryproject.org/en/latest/configuration.html#celery-redis-max-connections (for 3.1)
http://docs.celeryproject.org/en/master/configuration.html#celery-redis-max-connections (for dev)
The MongoDB backend has a similar setting.
Given these backend settings, I have no idea what BROKER_POOL_LIMIT actually does. Hopefully CELERY_REDIS_MAX_CONNECTIONS solves your problem; a sketch follows.
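For reference, capping the Redis result-backend connections is also a single setting (the value is illustrative; keep it below your plan's connection limit):
CELERY_REDIS_MAX_CONNECTIONS = 8  # upper bound for the Redis result-backend connection pool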
I'm one of those folks using CloudAMQP, and the AMQP backend does not have its own connection limit parameter.
Try these settings:
CELERY_IGNORE_RESULT = True
CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True
I had a similar issue involving the number of connections and Celery. It wasn't on Heroku, and it was Mongo rather than Redis, though.
I initiated the connection outside of the task function definition, at the task module level. At least for Mongo, this allowed the tasks to share the connection.
Hope that helps.
https://github.com/instituteofdesign/wander/blob/master/wander/tasks.py
import mongoengine

mongoengine.connect('stored_messages')

@celery.task(default_retry_delay=61)
def pull(settings, google_settings, user, folder, messageid):
    '''
    Pulls a message from zimbra and stores it in Mongo
    '''
    try:
        imap = imap_connect(settings, user)
        imap.select(folder, True)
        .......
