I wonder if it is possible to get the database name after a connection. I know that it is possible with the engine created by the function 'create_engine', and I would like to have the same possibility after a connection.
from sqlalchemy import create_engine, inspect
engine = create_engine('mysql+mysqldb://login:pass@localhost/MyDatabase')
print(engine.url.database)  # prints the database name from an engine
con = engine.connect()
I looked at the inspector tool, but there is no way to retrieve the database name like:
db_name = inspect(con.get_database_name())
Maybe it is not possible. Any ideas?
Thanks a lot!
For MySQL, executing select DATABASE() as name_of_current_database should be sufficient. For SQL Server, it would be select DB_NAME() as name_of_current_database. I do not know of any inherently portable way of doing this that will work irrespective of the backend.
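One way to keep that per-backend dispatch in one place is a small mapping keyed on the SQLAlchemy dialect name. This is a sketch of my own, not from the answer above; the PostgreSQL entry and the helper name are my additions:

```python
# Backend-specific queries that return the name of the current database.
# Keys follow SQLAlchemy dialect names; extend the mapping as needed.
CURRENT_DB_QUERY = {
    "mysql": "SELECT DATABASE()",
    "mssql": "SELECT DB_NAME()",
    "postgresql": "SELECT current_database()",
}

def current_database_query(dialect_name):
    """Return the query for the given backend, or None if unknown."""
    return CURRENT_DB_QUERY.get(dialect_name)

# With a live connection you would then run something like:
# name = con.execute(current_database_query(con.dialect.name)).scalar()
```

This stays a lookup table rather than anything portable: as the answer says, there is no backend-independent SQL for it.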
I am connecting to an Oracle Database Server using a SessionPool from cx_Oracle. When I look at the description of the opened session in the Oracle developer, I see that its name is "python.exe". How can I set the application/module name in cx_oracle?
You may be able to physically rename python.exe but there is no programmatic way to change what is shown in Oracle Database as the executable.
You can set the driver name by calling cx_Oracle.init_oracle_client. This changes the CLIENT_DRIVER column of V$SESSION_CONNECT_INFO.
Other settable attributes, including 'module' (which in Oracle terminology is not the program name), are shown in the documentation under Oracle Database End-to-End Tracing.
# Set the tracing metadata on an existing connection
connection.client_identifier = "pythonuser"
connection.action = "Query Session tracing parameters"
connection.module = "End-to-end Demo"

cursor = connection.cursor()
for row in cursor.execute("""
        SELECT username, client_identifier, module, action
        FROM V$SESSION
        WHERE username = 'SYSTEM'"""):
    print(row)
I'm using an SQLite database in my Python application and query it using parameter substitution.
For example:
cursor.execute('SELECT * FROM table WHERE id > ?', (10,))
Some queries do not return results properly and I would like to log them and try to query sqlite manually.
How can I log these queries with parameters instead of question marks?
Python 3.3 has sqlite3.Connection.set_trace_callback:
import sqlite3
connection = sqlite3.connect(':memory:')
connection.set_trace_callback(print)
The function you provide as argument gets called for every SQL statement that is executed through that particular Connection object. Instead of print, you may want to use a function from the logging module.
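For example, a minimal sketch that routes the trace through the logging module (the table name and statements here are made up for illustration):

```python
import logging
import sqlite3

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("sqlite")

connection = sqlite3.connect(':memory:')
# Every statement executed on this connection is now passed to logger.debug.
connection.set_trace_callback(logger.debug)

connection.execute('CREATE TABLE t (id INTEGER)')
connection.execute('INSERT INTO t VALUES (?)', (10,))
```

The callback receives the statement text, so you get a log line per executed statement without touching your query code.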
Assuming that you have a log function, you could call it first:
query, param = 'SELECT * FROM table WHERE id > ?', (10,)
log(query.replace('?', '%s') % param)
cursor.execute(query, param)
This way you don't modify your query at all. Moreover, this approach is not SQLite-specific.
I have a Django application, and I'm trying to test it using pytest and pytest-django. However, quite often, when the tests finish running, I get the error that the database failed to be deleted: DETAIL: There is 1 other session using the database.
Basically, the minimum test code that I could narrow it down to is:
@pytest.fixture
def make_bundle():
    a = MyUser.objects.create(key_id=uuid.uuid4())
    return a

class TestThings:
    def test_it(self, make_bundle):
        all_users = list(MyUser.objects.all())
        assert_that(all_users, has_length(1))
Every now and again the tests will fail with the above error. Is there something I am doing wrong? Or how can I fix this?
The database that I am using is PostgreSQL 9.6.
I am posting this as an answer because I need to post a chunk of code and because this worked. However, this looks like a dirty hack to me, and I'll be more than happy to accept anybody else's answer if it is better.
Here's my solution: basically, add the raw SQL that kicks out all the users of the given database to the method that destroys the test database, and do that by monkeypatching. To ensure that the monkeypatching happens before the tests run, add it to the root conftest.py file as an autouse fixture:
def _destroy_test_db(self, test_database_name, verbosity):
    """
    Internal implementation - remove the test db tables.
    """
    # Remove the test database to clean up after
    # ourselves. Connect to the previous database (not the test database)
    # to do so, because it's not allowed to delete a database while being
    # connected to it.
    with self.connection._nodb_connection.cursor() as cursor:
        cursor.execute(
            "SELECT pg_terminate_backend(pg_stat_activity.pid) "
            "FROM pg_stat_activity "
            "WHERE pg_stat_activity.datname = '{}' "
            "AND pid <> pg_backend_pid();".format(test_database_name)
        )
        cursor.execute("DROP DATABASE %s"
                       % self.connection.ops.quote_name(test_database_name))

@pytest.fixture(autouse=True)
def patch_db_cleanup():
    creation.BaseDatabaseCreation._destroy_test_db = _destroy_test_db
Note that the kicking-out code may depend on your database engine, and the method that needs monkeypatching may be different in different Django versions.
I am facing a problem to connect to an Azure MS SQL Server 2014 database in Apache Airflow 1.10.1 using pymssql.
I want to use the MsSqlHook class provided by Airflow, for the convenience of creating my connection in the Airflow UI, and then create a context manager for my connection using SQLAlchemy:
@contextmanager
def mssql_session(dt_conn_id):
    sqla_engine = MsSqlHook(mssql_conn_id=dt_conn_id).get_sqlalchemy_engine()
    session = sessionmaker(bind=sqla_engine)()
    try:
        yield session
    except:
        session.rollback()
        raise
    else:
        session.commit()
    finally:
        session.close()
But when I do that, I get this error when I run a query:
sqlalchemy.exc.InterfaceError: (pyodbc.InterfaceError) ('IM002',
'[IM002] [unixODBC][Driver Manager]Data source name not found, and no
default driver specified (0) (SQLDriverConnect)') (Background on this
error at: http://sqlalche.me/e/rvf5)
It seems to come from pyodbc, whereas I want to use pymssql (and in MsSqlHook, the method get_conn uses pymssql!).
I searched for the cause in the Airflow source code.
I noticed that the method get_uri from the class DbApiHook (from which MsSqlHook inherits) builds the connection string passed to SQLAlchemy like this:
'{conn.conn_type}://{login}{host}/{conn.schema}'
But conn.conn_type is simply equal to 'mssql' whereas we need to specify the DBAPI as described here:
https://docs.sqlalchemy.org/en/latest/core/engines.html#microsoft-sql-server
(for example: 'mssql+pymssql://scott:tiger@hostname:port/dbname')
So, by default, I think it uses pyodbc.
But how can I properly set the conn_type of the connection to 'mssql+pymssql' instead of 'mssql'?
In the Airflow UI, you can simply select SQL Server in a dropdown list, but you cannot set the value you want.
To work around the issue, I override the get_uri method from DbApiHook in a new class inheriting from MsSqlHook, in which I build my own connection string, but it's not clean at all...
Thanks for any help
You're right. There's no easy, straightforward way to get Airflow to do what you want. Personally I would build the sqlalchemy engine inside of your context manager, something like create_engine(hook.get_uri().replace("://", "+pymssql://")) -- then I would toss the code somewhere reusable.
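As a sketch of that workaround (with_pymssql is a name I made up; it only rewrites the URI string, assuming the hook's URI starts with 'mssql://'):

```python
def with_pymssql(uri):
    """Rewrite an 'mssql://...' URI so SQLAlchemy uses the pymssql DBAPI."""
    return uri.replace("://", "+pymssql://", 1)

# With a live hook you would then build the engine like:
# engine = create_engine(with_pymssql(hook.get_uri()))
print(with_pymssql("mssql://scott:tiger@hostname:1433/dbname"))
# mssql+pymssql://scott:tiger@hostname:1433/dbname
```

Keeping the rewrite in one helper means the rest of the context manager stays unchanged.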
You can create a connection by passing it as an environment variable to Airflow. See the docs. The value of the variable is the database URL in the format SqlAlchemy accepts.
The name of the env var follows the pattern AIRFLOW_CONN_ to which you append the connection ID. For example AIRFLOW_CONN_MY_MSSQL, in this case, the conn_id would be 'my_mssql'.
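For example, a sketch of such a variable (the credentials, host, and database name are placeholders):

```shell
# Connection id "my_mssql" -> env var AIRFLOW_CONN_MY_MSSQL (uppercased)
export AIRFLOW_CONN_MY_MSSQL='mssql+pymssql://user:password@myhost:1433/mydb'
```

Airflow resolves the connection from the environment at runtime, so nothing needs to be created in the UI or the metadata database.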
Can anyone tell me if this will work: open a SQLite database connection to your database file in the main part of your Python application, and then use that variable as a global connection each time a function needs to execute against the database.
As seen below. Will this even work?
import sqlite3 as lite

con = lite.connect(database)

def db_add_records(_number, _data):
    global con
    cur = con.cursor()
    cur.execute(...)  # whatever SQL you think of
    con.commit()
Instead of creating a new connection to the database inside each function every time, as I have seen some people do. Cleaner?
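A minimal runnable sketch of the pattern above (the in-memory database and the records table are made up for illustration); it does work in a single-threaded program, though note that by default a sqlite3 connection may only be used from the thread that created it:

```python
import sqlite3 as lite

con = lite.connect(':memory:')  # one module-level connection
con.execute('CREATE TABLE records (number INTEGER, data TEXT)')

def db_add_records(_number, _data):
    cur = con.cursor()  # reuses the global connection
    cur.execute('INSERT INTO records (number, data) VALUES (?, ?)',
                (_number, _data))
    con.commit()

db_add_records(1, 'first')
db_add_records(2, 'second')
print(con.execute('SELECT COUNT(*) FROM records').fetchone()[0])  # 2
```

The `global` statement in the question is only needed if a function rebinds `con`; merely calling methods on it works without it.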