I am trying to identify the number of connections to a Postgres database. This is in the context of the connection limit on Heroku Postgres dev and hobby plans, which is 20. I have a Python Django application using the database. I want to understand what constitutes a connection. Will each instance of a user using the application count as one connection, or is the connection from the application to the database counted as one?
To figure this out I tried the following.
Opened multiple instances of the application from different clients (3 separate machines).
Connected to the database using an online Adminer tool (https://adminer.cs50.net/)
Connected to the database using pgAdmin installed in my local system.
Created and ran dataclips (query reports) on the database from heroku.
Ran the following query from adminer and pgadmin to observe the number of records:
select * from pg_stat_activity where datname ='db_name';
Initially it seemed there was a new record for each instance of the application I opened and one record for the Adminer instance. After some time the query from Adminer showed 6 records (2 connections for Adminer, 2 for pgAdmin and 2 for the web app).
Unfortunately I am still not sure whether each user of my web application is counted as a separate connection, or whether all connections from the web app to the database are counted as one.
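A variant of the same query groups the records in pg_stat_activity by client, which makes it easier to see where each connection comes from (the column names are the standard pg_stat_activity ones):

```sql
SELECT usename, application_name, client_addr, count(*) AS connections
FROM pg_stat_activity
WHERE datname = 'db_name'
GROUP BY usename, application_name, client_addr;
```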
Thanks in advance.
Best Regards!
Using the PostgreSQL parameters that log connections and disconnections (with the right log_line_prefix parameter to include client information) should help:
log_connections (boolean)
Causes each attempted connection to the server to be logged, as well as successful completion of client authentication. Only superusers can change this parameter at session start, and it cannot be changed at all within a session. The default is off.
log_disconnections (boolean)
Causes session terminations to be logged. The log output provides information similar to log_connections, plus the duration of the session. Only superusers can change this parameter at session start, and it cannot be changed at all within a session. The default is off.
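In postgresql.conf that could look like the following (the prefix format is just one reasonable choice; note that on a managed service such as Heroku Postgres these settings may not be directly editable):

```
log_connections = on
log_disconnections = on
# %m = timestamp, %h = remote host, %u = user, %d = database, %a = application name
log_line_prefix = '%m [%h] %u@%d (%a) '
```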
Related
I've been trying to use Continuous Query Notification (CQN) in a Python script to get notifications from the database about changes made to a specific table.
I have followed the tutorial here:
https://python-oracledb.readthedocs.io/en/latest/user_guide/cqn.html
The connection to the Oracle database was successful and I can query the table to get results, but I never get any output from the callback function, which looks like this:
def cqn_callback(message):
    print("Notification:")
    for query in message.queries:
        for tab in query.tables:
            print("Table:", tab.name)
            print("Operation:", tab.operation)
            for row in tab.rows:
                if row.operation & oracledb.OPCODE_INSERT:
                    print("INSERT of rowid:", row.rowid)
                if row.operation & oracledb.OPCODE_DELETE:
                    print("DELETE of rowid:", row.rowid)

subscr = connection.subscribe(callback=cqn_callback,
                              operations=oracledb.OPCODE_INSERT | oracledb.OPCODE_DELETE,
                              qos=oracledb.SUBSCR_QOS_QUERY | oracledb.SUBSCR_QOS_ROWIDS)
subscr.registerquery("select * from regions")
input("Hit enter to stop CQN demo\n")
I can see that the registration was created in the database after I run the script, but I just don't receive any message about inserts or deletes after I perform either of those operations through SQL*Plus or SQL Developer.
I have been reading other questions and blogs about this functionality, but so far without success, so if anyone has any recommendations or has encountered a similar problem, please comment or answer here.
Oracle Database 12c running in Docker.
Python version is 3.10.7.
I am running in thick mode, and for the Oracle Client libraries I am using this call:
oracledb.init_oracle_client(lib_dir=".../instantclient_21_3")
P.S. This is my first time posting a question here, so if I didn't correctly follow the structure or rules of asking a question, please correct me. Thanks in advance :)
Please take a look at the requirements for CQN in the documentation. Note in particular that the database needs to connect back to the application. If it cannot, no notifications will take place even though the registration succeeds in the database. Oracle Database 19.4 introduced a new mode (client-initiated connections) that eliminates this requirement, but since you are still using 12c that won't work for you. You will need to ensure that the database can connect back to the application: open any ports, and make sure an IP address is specified directly in the subscription parameters or can be looked up from the name of the client machine connecting to the database.
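For the connect-back case on 12c, the relevant knobs are the ip_address and port arguments of connection.subscribe() in python-oracledb, which pin down where the database should reach the application. A sketch (the address and port below are placeholders; they must be an address of the machine running the script that the database server can actually reach):

```python
import oracledb

APP_HOST = "192.0.2.10"  # placeholder: the client machine's IP, reachable from the DB
APP_PORT = 9000          # placeholder: a port opened in the client's firewall

def make_subscription(connection, callback):
    """Register a CQN subscription that tells the DB where to connect back."""
    return connection.subscribe(
        callback=callback,
        operations=oracledb.OPCODE_INSERT | oracledb.OPCODE_DELETE,
        qos=oracledb.SUBSCR_QOS_QUERY | oracledb.SUBSCR_QOS_ROWIDS,
        ip_address=APP_HOST,  # address the database connects back to
        port=APP_PORT,        # fixed listening port instead of a random one
    )
```

With a fixed ip_address and port it also becomes straightforward to verify connectivity from the database host (for example with a plain TCP probe) before debugging CQN itself.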
I would like to have some help with the parameters passed under the connection string through a .py file trying connection to my Oracle Apex workspace database:
connection = cx_Oracle.connect("user", "password", "dbhost.example.com/dbinstance", encoding="UTF-8")
On the login page at "apex.oracle.com", we have to pass the following information:
Can I assume that the "user" parameter is equal to the USERNAME info, the "password" parameter is equal to the PASSWORD info and the "dbinstance" parameter is equal to the WORKSPACE info?
And what about the hostname? What is it expected as parameter? How do I find it?
Thank you very much for any support.
Those parameters are not equivalent. An APEX workspace is a logical construct that exists only within APEX; it does not correspond to a physical database instance. Username and password do not necessarily correspond to database users, as APEX is capable of multiple methods of authentication.
APEX itself runs entirely within a single physical database. An APEX instance supports multiple logical workspaces, each of which may have its own independent APEX user accounts that often (usually) do not correspond to database users at all. APEX-based apps may have entirely separate authentication methods of their own, too, and generally do not use the same users defined for the APEX workspaces.
When an APEX application does connect to a database to run, it connects as a proxy user using an otherwise unprivileged database account like APEX_PUBLIC_USER.
If you want to connect Python to APEX, you would have to connect like you would any other web app: through the URL using whatever credentials are appropriate to the user interface and then parsing the HTML output, or through an APEX/ORDS REST API (that you would have to first build and deploy).
If you want to connect to the database behind APEX, then you would need an appropriately provisioned database (not APEX) account, credentials and connectivity information provided by the database administrator.
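As a sketch of the REST route: once an ORDS endpoint exists, plain urllib is enough to call it. Everything below (the workspace path, module and resource names) is hypothetical, standing in for a service you would build and deploy yourself:

```python
import json
import urllib.request

# Hypothetical ORDS endpoint -- workspace path, module and resource are placeholders.
url = "https://apex.oracle.com/pls/apex/your_workspace/hr/regions/"
req = urllib.request.Request(url, headers={"Accept": "application/json"})

# With a real deployed endpoint you would then do:
# with urllib.request.urlopen(req) as resp:
#     data = json.load(resp)
#     for item in data["items"]:
#         print(item)
```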
I am trying to commit to two different databases, one hosted on MSSQL and the other on PostgreSQL. I have two different session objects. I know I can do the following:
session1.add(record)  # MSSQL session
session1.commit()
session2.add(record)  # PostgreSQL session
session2.commit()
But I am trying to keep them in sync, so that either both commits succeed or both fail (if one fails, don't commit the other). I would appreciate any help or thoughts.
You need to use the distributed transaction coordinator to create a distributed transaction.
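Without a coordinator, a common best-effort pattern is to perform both writes first and only then commit both, rolling both back if either write fails. A minimal sketch of that pattern, using two stdlib sqlite3 databases as stand-ins for the MSSQL and PostgreSQL sessions:

```python
import sqlite3

# Two separate databases standing in for the MSSQL and PostgreSQL sessions.
db1 = sqlite3.connect(":memory:")
db2 = sqlite3.connect(":memory:")
for db in (db1, db2):
    db.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")

def write_both(payload):
    """Best-effort atomic write: commit both only if both inserts succeed."""
    try:
        db1.execute("INSERT INTO records (payload) VALUES (?)", (payload,))
        db2.execute("INSERT INTO records (payload) VALUES (?)", (payload,))
    except sqlite3.Error:
        db1.rollback()
        db2.rollback()
        raise
    db1.commit()
    db2.commit()

write_both("hello")
```

Note the remaining window: if the first commit succeeds and the second then fails, the databases diverge. Only a true distributed (two-phase) transaction, which is what the coordinator provides, closes that gap.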
There is an old saying: A man who has one watch knows what time it is, a man who has two is never sure.
I have an MS SQL database deployed on AWS RDS that I'm writing a Flask front end for.
I've been following some intro Flask tutorials, all of which seem to pass the DB credentials in the connection string URI. I'm following the tutorial here:
https://medium.com/@rodkey/deploying-a-flask-application-on-aws-a72daba6bb80#.e6b4mzs1l
For deployment, do I prompt for the DB login info and add it to the connection string? If so, where? Using SQLAlchemy, I don't see any calls to create_engine (using the code in the tutorial); I just see an initialization using config.from_object, referencing the config.py where the SQLALCHEMY_DATABASE_URI is stored, which points to the DB location. Trying to call config.update(dict(UID='****', PASSWORD='******')) from my application has no effect, and the config dict doesn't seem to have any applicable entries for this purpose. What am I doing wrong?
Or should I be authenticating using Flask-User, and then get rid of the DB level authentication? I'd prefer authenticating at the DB layer, for ease of use.
The tutorial you are using relies on Flask-SQLAlchemy to abstract the database setup, which is why you don't see any calls to create_engine or engine.connect().
Frameworks like Flask-SQLAlchemy are designed around the idea that you create a connection pool to the database at launch and share that pool among your worker threads. You will not be able to use that for what you are doing... it takes care of initializing the session and engine early in the process.
Because of your requirements, I don't know that you'll be able to make any use of things like connection pooling. Instead, you'll have to handle that yourself. The actual connection isn't too hard...
engine = create_engine('dialect://username:password@host/db')
connection = engine.connect()
result = connection.execute("SOME SQL QUERY")
for row in result:
    # Do something with each row
    pass
connection.close()
The issue is that you're going to have to do that in every endpoint. A database connection isn't something you can store in the session; you'll have to store the credentials there and do a connect/disconnect loop in every endpoint you write. Worse, you'll have to figure out either encrypted sessions or server-side sessions (without a DB connection!) to keep those stored credentials from becoming a horrible security leak.
I promise you, it will be easier both now and in the long run to figure out a simple way to authenticate users so that they can share a connection pool that is abstracted out of your app endpoints. But if you HAVE to do it this way, this is how you will do it. (make sure you are closing those connections every time!)
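If you do go the per-endpoint route, the connect/close cycle can at least be factored into a context manager so no endpoint can forget to close its connection. A sketch using stdlib sqlite3 as a stand-in for engine.connect() (in the real app the DSN would be built from the credentials kept in the session):

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def per_request_connection(dsn):
    """Open a fresh connection for one request and always close it."""
    conn = sqlite3.connect(dsn)  # stand-in for engine.connect() with per-user credentials
    try:
        yield conn
    finally:
        conn.close()

# Inside an endpoint:
with per_request_connection(":memory:") as conn:
    rows = conn.execute("SELECT 1").fetchall()
```

The with-block guarantees the close even when the query raises, which is exactly the discipline the per-endpoint approach depends on.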
I've read many posts about this problem. My understanding is that the application has a setting which says how long to keep idle database connections before dropping them and creating new ones. MySQL has a setting that says how long to keep idle connections. After no site activity, MySQL times out the application's connections, but the application doesn't know this and still tries to use an existing connection, which fails. After the failure the application drops the connection, makes a new one, and is then fine.
I have wait_timeout set to 10 seconds on my local mysql server. I have pool_recycle set to 5 seconds on my locally running application. After 10 seconds of inactivity, I make a request, and am still getting this error. Making another request afterwards within 10 seconds, it is then fine. Waiting longer than 10 seconds, it gives this error again.
Any thoughts?
mysql> SELECT @@global.wait_timeout\G
*************************** 1. row ***************************
@@global.wait_timeout: 10
1 row in set (0.00 sec)
.
sqlalchemy.twelvemt.pool_recycle = 5
.
engine = engine_from_config(settings, 'sqlalchemy.twelvemt.')
DBSession.configure(bind=engine)
.
OperationalError: (OperationalError) (2006, 'MySQL server has gone away') 'SELECT beaker_cache.data \nFROM beaker_cache \nWHERE beaker_cache.namespace = %s' ('7cd57e290c294c499e232f98354a1f70',)
It looks like the error you're getting is being thrown by your Beaker connection, not your DBSession connection; the pool_recycle option needs to be set for each connection.
Assuming you're configuring Beaker in your x.ini file, you can pass sqlalchemy options via session.sa.*, so session.sa.pool_recycle = 5
See http://docs.pylonsproject.org/projects/pylons-webframework/en/v0.9.7/sessions.html#sa
Try setting sqlalchemy.pool_recycle for your connection
I always add this into my config file when using mySQL
sqlalchemy.pool_recycle = 3600
Without this I get MySQL server has gone away on the first request after any long pause in activity.
I fixed this by calling remove() on sessions after each request. You can do this by defining a global function:
def remove_session(request, response):
    request.dbsession.remove()
Afterwards, you set this function to be run by every class involving requests and a database session:
def __init__(self, request):
    request.dbsession = DBSession
    request.add_response_callback(remove_session)
This works because SQLAlchemy expects its users to handle the opening and closing of database sessions. More information can be found in the documentation.