Running Python application after UPDATE trigger in SQL Server 2014 - python

The task: when a column in a database table is updated, "report" this to a Python application. Working with SQL Server is new to me, so maybe I'm missing something.
The database has a trigger on the table I need. I tried adding EXEC xp_cmdshell to it, but because of this the application that makes changes to the database hangs. In Task Manager you can see that the Python process is running, but its window never opens.
After several attempts the Python process exits, and the main application reports the error "The creator of the error did not provide a reason." I would appreciate your help.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[OnEmployeePhotoUpdate]
ON [dbo].[EmployeePhoto]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE EmployeePhoto
    SET ModificationDateTime = GETDATE()
    WHERE _id IN (SELECT _id FROM inserted);

    EXEC xp_cmdshell 'C:\Users\IT\AppData\Local\Programs\Python\Python39\pythonw.exe D:\untitled\1.pyw'
END
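Note that xp_cmdshell runs synchronously: the trigger, and therefore the UPDATE statement that fired it, waits until the launched process exits, which is why the calling application hangs. One alternative (not from the original post; a minimal sketch assuming the pyodbc driver and hypothetical connection details) is to leave the trigger as a pure ModificationDateTime update and have the Python application poll for changes instead:

import time
import pyodbc

# Hypothetical connection details; adjust driver/server/database to your setup.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=MyDb;Trusted_Connection=yes;"
)

last_seen = None
while True:
    cur = conn.cursor()
    cur.execute("SELECT MAX(ModificationDateTime) FROM dbo.EmployeePhoto")
    latest = cur.fetchone()[0]
    if latest is not None and latest != last_seen:
        print("EmployeePhoto changed at", latest)  # react to the update here
        last_seen = latest
    time.sleep(5)  # poll every few seconds instead of blocking the trigger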

Related

Python oracledb continuous query notification not sending messages from database

I've been trying to use Continuous Query Notification (CQN) in a Python script to get notifications from the database about changes made to a specific table.
I followed the tutorial at this link:
https://python-oracledb.readthedocs.io/en/latest/user_guide/cqn.html
The connection to the Oracle database was successful and I can query the table to get results, but I never get any message from the callback function, which looks like this:
def cqn_callback(message):
    print("Notification:")
    for query in message.queries:
        for tab in query.tables:
            print("Table:", tab.name)
            print("Operation:", tab.operation)
            for row in tab.rows:
                if row.operation & oracledb.OPCODE_INSERT:
                    print("INSERT of rowid:", row.rowid)
                if row.operation & oracledb.OPCODE_DELETE:
                    print("DELETE of rowid:", row.rowid)

subscr = connection.subscribe(callback=cqn_callback,
                              operations=oracledb.OPCODE_INSERT | oracledb.OPCODE_DELETE,
                              qos=oracledb.SUBSCR_QOS_QUERY | oracledb.SUBSCR_QOS_ROWIDS)
subscr.registerquery("select * from regions")
input("Hit enter to stop CQN demo\n")
I can see that the registration was created in the database after I run the script, but I just don't receive any message about an insert or delete after I perform either of those operations through SQL*Plus or SQL Developer.
I have been reading other questions and blog posts about this functionality, so far without success, so if anyone has any recommendations or has encountered a similar problem, please comment or answer here.
Oracle Database 12c from Docker
Python version is 3.10.7
I am running it in thick mode, and for the Oracle client libraries I am using this call:
oracledb.init_oracle_client(lib_dir=".../instantclient_21_3")
P.S. This is my first time posting a question here, so if I didn't correctly follow the structure or rules of asking a question, please correct me. Thanks in advance :)
Please take a look at the requirements for CQN in the documentation. Note in particular that the database needs to connect back to the application: if it cannot, no notifications will take place even though the registration succeeds in the database. Oracle Database 19.4 introduced a new mode that eliminates this requirement, but since you are still using 12c that won't work for you. You will need to ensure that the database can connect back to the application: open up any needed ports, and either specify an IP address directly in the subscription parameters or make sure one can be looked up from the name of the client machine connecting to the database.
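For illustration, connection.subscribe() accepts ip_address and port parameters that pin down exactly where the database should connect back to; a minimal sketch, where the address and port values are placeholders, not from the question:

# Sketch: fix the callback address so it can be opened in the firewall.
# 192.0.2.10 and 49152 are placeholder values for illustration.
subscr = connection.subscribe(
    callback=cqn_callback,
    operations=oracledb.OPCODE_INSERT | oracledb.OPCODE_DELETE,
    qos=oracledb.SUBSCR_QOS_QUERY | oracledb.SUBSCR_QOS_ROWIDS,
    ip_address="192.0.2.10",  # address the database connects back to
    port=49152,               # fixed port for the notification listener
)
subscr.registerquery("select * from regions")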

How to identify number of connections to a Postgres Database (heroku)?

I am trying to identify the number of connections to a Postgres database. This is in the context of the connection limit on heroku-postgres for the dev and hobby plans, which is 20. I have a Python Django application using the database, and I want to understand what constitutes a connection. Will each instance of a user using the application count as one connection, or is the connection from the application to the database counted as one?
To figure this out I tried the following.
Opened multiple instances of the application from different clients (3 separate machines).
Connected to the database using the online Adminer tool (https://adminer.cs50.net/).
Connected to the database using pgAdmin installed on my local system.
Created and ran dataclips (query reports) on the database from Heroku.
Ran the following query from Adminer and pgAdmin to observe the number of records:
select * from pg_stat_activity where datname = 'db_name';
Initially it seemed there was a new record for each instance of the application I opened, plus one record for the Adminer instance. After some time, the query from Adminer showed 6 records (2 connections for Adminer, 2 for pgAdmin and 2 for the web app).
Unfortunately, I am still not sure whether each instance of a user using my web application is counted as a connection, or whether all connections to the database from the web app are counted as one.
Thanks in advance.
Best Regards!
Using the PostgreSQL parameters that log connections and disconnections (with an appropriate log_line_prefix to include client information) should help:

log_connections (boolean)
Causes each attempted connection to the server to be logged, as well as successful completion of client authentication. Only superusers can change this parameter at session start, and it cannot be changed at all within a session. The default is off.

log_disconnections (boolean)
Causes session terminations to be logged. The log output provides information similar to log_connections, plus the duration of the session. Only superusers can change this parameter at session start, and it cannot be changed at all within a session. The default is off.
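As a complement to logging, you can also snapshot pg_stat_activity grouped by application, which shows how many connections each client (Django, Adminer, pgAdmin) is holding at that moment. A minimal sketch, assuming psycopg2 and the DATABASE_URL config var that Heroku provides:

# Sketch: group current connections by user and application name.
import os
import psycopg2

conn = psycopg2.connect(os.environ["DATABASE_URL"])
with conn.cursor() as cur:
    cur.execute(
        "SELECT usename, application_name, count(*) "
        "FROM pg_stat_activity "
        "WHERE datname = current_database() "
        "GROUP BY usename, application_name"
    )
    for user, app, n in cur.fetchall():
        print(user, app or "<no name>", n)
conn.close()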

How to drop table and recreate in Amazon RDS with Elastic Beanstalk?

My database on Amazon currently has only a little data in it (I am making a web app but it is still in development), and I am looking to delete it, make changes to the schema, and put it back up again. The past few times I have done this, I have completely recreated my Elastic Beanstalk app, but it seems like there should be a better way. On my local machine, I take the following steps:
"dropdb databasename" and then "createdb databasename"
python manage.py makemigrations
python manage.py migrate
Is there something like this that I can do on Amazon to delete my database and put it back online again without deleting the entire application? When I tried just deleting the RDS instance a while ago and making a new one, I had problems with Elastic Beanstalk.
The easiest way to accomplish this is to SSH into one of your EC2 instances that has access to the RDS DB, and then connect to the DB from there. Make sure that your Python scripts can read your app configuration to access the configured DB, or add arguments for the DB hostname. To drop and recreate your DB, you just need to pass the necessary connection arguments. For example:
$ createdb -h <RDS endpoint> -U <user> -W ebdb
You can also create an RDS snapshot while the DB is empty, and use the RDS instance actions Restore to Point in Time or Migrate Latest Snapshot.
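If you would rather do the reset from Python on that EC2 instance, a minimal sketch along the same lines, assuming psycopg2 and that the RDS_* values Elastic Beanstalk provides for an attached database are exported in the environment:

import os
import psycopg2

# Assumes the RDS_* values from the Elastic Beanstalk environment are exported.
conn = psycopg2.connect(
    host=os.environ["RDS_HOSTNAME"],
    port=os.environ.get("RDS_PORT", "5432"),
    dbname=os.environ["RDS_DB_NAME"],
    user=os.environ["RDS_USERNAME"],
    password=os.environ["RDS_PASSWORD"],
)
conn.autocommit = True  # apply each statement immediately
with conn.cursor() as cur:
    # Dropping and recreating the schema clears every table without
    # touching the RDS instance itself.
    cur.execute("DROP SCHEMA public CASCADE;")
    cur.execute("CREATE SCHEMA public;")
conn.close()

After that, python manage.py migrate recreates the tables.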
I had the same problem and came up with a workaround. In your Python code, add the following and run it the next time you deploy your app:
FOR SQLALCHEMY VERSION 2.0 AND LATER
from sqlalchemy import create_engine, text

tables = ["table1_name", "table2_name"]  # the names of the tables you want to delete
engine = create_engine("sqlite:///example.db")  # here you create your engine

def delete_tables(tables):
    for table in tables:
        # CASCADE drops the table even if other tables reference it
        sql = text(f"DROP TABLE IF EXISTS {table} CASCADE;")
        with engine.connect() as connection:
            with connection.begin():
                connection.execute(sql)

delete_tables(tables)  # Comment this line out after running it once.
FOR SQLALCHEMY BEFORE VERSION 2 (I guess)
def delete_tables(tables):
    for table in tables:
        engine.execute(f"DROP TABLE IF EXISTS {table} CASCADE;")

delete_tables(tables)  # Comment this line out after running it once.
After you deploy and run this code once, all your tables will be deleted.
IMPORTANT: Delete or comment out this code after that, otherwise you will delete all your tables every time you deploy your code to AWS.

Update and deploy PostgreSQL schema to Heroku

I have a PostgreSQL schema that resides in a schema.sql file that gets run each time a database connection is initiated in Python. It looks something like:
CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    facebook_id TEXT NOT NULL,
    name TEXT NOT NULL,
    access_token TEXT,
    created TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);
The app is deployed on Heroku, using their PostgreSQL and everything works as expected.
Now, what if I want to change the structure of my users table a bit? What is the easiest and best way to do this? I thought of writing an ALTER... line in schema.sql for each change I want to make to the database, but I don't think this is the best approach, since after some time the schema file would be full of ALTERs and would slow down my app.
What's the indicated way to deploy changes made to a database?
Running a hard-coded script on each connection is not a great way to handle schema management.
You need to either manage the schema manually, or use a full-fledged tool that keeps a schema version identifier in the database, checks that, and applies a script to upgrade to the next schema version if it's different to the latest one. Rails calls this "migrations" and it kind-of works. If you're using Django it has schema management too.
If you're not using a framework like that, I suggest just writing your own schema upgrade scripts. Add a "schema_version" table with a single row. SELECT it when the app first starts after a redeploy, and if it's lower than the current version the app knows about, apply the update scripts in order, e.g. schema_1_to_2, schema_2_to_3, etc.
I don't recommend doing this on connect; do it on app start, or better, as a special maintenance command. If you do it on every connection, you'll have multiple connections trying to make the same changes and you'll end up with duplicated columns and all sorts of other mess.
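A minimal sketch of that upgrade loop, assuming psycopg2 and upgrade scripts saved as files named schema_1_to_2.sql, schema_2_to_3.sql, and so on (the file names and driver choice are illustrative, not prescribed by the answer):

import psycopg2

CURRENT_VERSION = 3  # the schema version this build of the app expects

def upgrade_schema(conn):
    with conn.cursor() as cur:
        cur.execute(
            "CREATE TABLE IF NOT EXISTS schema_version (version integer NOT NULL)"
        )
        cur.execute("SELECT version FROM schema_version")
        row = cur.fetchone()
        if row is None:
            cur.execute("INSERT INTO schema_version VALUES (1)")
            version = 1
        else:
            version = row[0]
        # Apply each upgrade script in order until the schema is current.
        while version < CURRENT_VERSION:
            with open(f"schema_{version}_to_{version + 1}.sql") as f:
                cur.execute(f.read())
            version += 1
            cur.execute("UPDATE schema_version SET version = %s", (version,))
    conn.commit()  # run once at app start or as a maintenance command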
I support several Django apps on Heroku with Postgres. I just connect via pgAdmin and run my scripts when changes are required. I don't see any need to run a script every time a connection is made.

Problem with insertion from a Python script into a MySQL database with the InnoDB engine

I am facing a problem when trying to add data from a Python script to a MySQL database with the InnoDB engine; it works fine with the MyISAM engine. The problem with MyISAM, though, is that it doesn't support foreign keys, so I would have to add extra code every place I insert or delete records in the database.
Does anyone know why InnoDB doesn't work with Python scripts, and possible solutions for this problem?
InnoDB is transactional. You need to call connection.commit() after inserts/deletes/updates.
Edit: you can call connection.autocommit(True) to turn on autocommit.
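A minimal sketch of both options, assuming the MySQLdb (mysqlclient) driver; the connection details and table name are placeholders:

import MySQLdb

# Placeholder connection details for illustration.
conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()
cur.execute("INSERT INTO employees (name) VALUES (%s)", ("Alice",))
conn.commit()  # required with InnoDB: DB-API connections start with autocommit off

# Alternatively, turn autocommit on for the whole connection:
conn.autocommit(True)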
Python DB API disables autocommit by default
Pasted from google (first page, 2nd result)
MySQL :: MySQL 5.0 Reference Manual :: 13.2.8 The InnoDB ...
By default, MySQL starts the session for each new connection with autocommit ...
dev.mysql.com/.../innodb-transaction-model.html
However
Apparently Python starts MySQL in NON-autocommit mode, see:
http://www.kitebird.com/articles/pydbapi.html
From the article:
The connection object commit() method commits any outstanding changes in the current transaction to make them permanent in the database. In DB-API, connections begin with autocommit mode disabled, so you must call commit() before disconnecting or changes may be lost.
Bummer, dunno how to override that and I don't want to lead you astray by guessing.
I would suggest opening a new question titled:
How to enable autocommit mode in MySQL python DB-API?
Good luck.
