Connection commit function in Python

I was reading the documentation and saw that you only have to commit once per transaction.
Does the following count as one transaction, or does each function count as its own transaction?
def main():
    conn = pyodbc.connect(sqlconnectionstring)  # Assume this connects to the database
    cursor = conn.cursor()
    function1()
    function2()
    conn.commit()

def function1():
    # does inserting here

def function2():
    # does inserting here and calls function3
    function3()

def function3():
    # does more inserting here

main()
Is that conn.commit() enough to commit all insertions in all the functions or would I have to pass the "conn" variable as an argument and commit inside each function?
Thanks!

Yes, that single conn.commit() is enough, provided all the inserts in those functions go through the same connection. Every INSERT or DELETE executed on that connection becomes part of one open transaction; if a statement fails and the transaction is rolled back, you'll still see the old rows.
That one commit at the end then makes the database reflect all of the pending changes at once.
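Note that, as sketched in the question, function1() and function2() never actually see the connection, so in practice you would pass conn (or its cursor) down as an argument. A minimal runnable sketch of the same shape, using the stdlib sqlite3 module in place of pyodbc (the table and column names are invented for illustration):

```python
import os
import sqlite3
import tempfile

DB_PATH = os.path.join(tempfile.gettempdir(), "commit_demo.db")

def function1(cur):
    # insert on the shared cursor; nothing is committed yet
    cur.execute("INSERT INTO items (name) VALUES (?)", ("first",))

def function2(cur):
    cur.execute("INSERT INTO items (name) VALUES (?)", ("second",))
    function3(cur)

def function3(cur):
    cur.execute("INSERT INTO items (name) VALUES (?)", ("third",))

def main():
    conn = sqlite3.connect(DB_PATH)
    cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS items")
    cur.execute("CREATE TABLE items (name TEXT)")
    function1(cur)
    function2(cur)
    conn.commit()  # one commit covers every insert made on this connection
    conn.close()

main()
```

All three inserts land in the database with the single commit in main().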

Related

Good way to "wrap" the opening and closing of a database around functions in Python?

I've looked at a few related questions on Stack Overflow and at some documentation/guides regarding wrappers, all of which tell me "no," but this doesn't seem right. That said, I'm very new to programming, so 🤷‍♂️
Problem: Opening and closing a database (using Python/sqlite3) requires a tedious amount of repeated code (as I understand it):
conn = None
conn = sqlite3.connect(path)
conn.execute("bla bla bla")
conn.commit()
conn.close()
So, I tried to write a reusable wrapper for my functions that access the database. But it's not ideal and so far, mostly complicates the code (problems discussed afterwards):
import functools
import sqlite3
from sqlite3 import Error

conn = None  # a global that I hope to get rid of soon.

def connect_db(function):
    """
    wrapper that should open db, perform function, then close db
    """
    global conn
    try:
        conn = sqlite3.connect("journ.db")
        print("connected")
    except:
        print("problem")
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        return function(*args, **kwargs)
        # conn.close() # unreachable code!!
    return wrapper

@connect_db
def find_entry_info(user_id):
    """
    Queries database regarding users' entries and returns a dictionary (k=entry_url, v=[date, time]).
    """
    entry_info = {}
    # entry_tags = {}
    db_handler = conn.execute(
        "SELECT entry_url, date, time FROM entries WHERE user_id=? ORDER BY date DESC, time DESC",
        (user_id,)
    )
    for row in db_handler:
        entry_info[row[0]] = [row[1], row[2]]
    return entry_info

entry_info = find_entry_info(user_id)
The problems that I know of right now:
conn is a global variable, the classic "bad idea". I'm confident I can figure this one out eventually, but I'm focused more on the following:
there's no way, based on the documentation, to close the db within the wrapper after returning the needed values from the wrapped function.
I could close it within the wrapped function, of course, but that defeats the point of the wrapper, and isn't any better than calling a regular function to open the database.
So, is there a clever/simple way to write a wrapper that opens/closes the database? A popular library that I'm missing? Should I try looking at Python classes... or just accept the clutter surrounding my queries? Thanks to all in advance for your time and kindness.
Connections can be used as context managers, which automatically commit the transaction on success or roll it back if an exception is raised:
with sqlite3.connect(path) as conn:
    conn.execute(query)
Note that, unlike with file objects, the connection is not closed when the with block exits: the context manager only manages the transaction. You still need to call conn.close() yourself, or wrap the connection in contextlib.closing.
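Building on that, one way to get the open/run/commit/close behavior the question asks for is a decorator that creates the connection inside the wrapper and passes it to the wrapped function. A sketch under the question's assumptions (the "journ.db" path and the entries table are placeholders; here the path is redirected to a temp directory so the example is runnable):

```python
import functools
import os
import sqlite3
import tempfile
from contextlib import closing

DB_PATH = os.path.join(tempfile.gettempdir(), "journ_demo.db")

def with_db(function):
    """Open the database, pass the connection to `function`, then close it."""
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        with closing(sqlite3.connect(DB_PATH)) as conn:
            with conn:  # commits on success, rolls back on exception
                return function(conn, *args, **kwargs)
    return wrapper

@with_db
def setup(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS entries (entry_url TEXT, date TEXT)")

@with_db
def add_entry(conn, url, date):
    conn.execute("INSERT INTO entries (entry_url, date) VALUES (?, ?)", (url, date))

setup()
add_entry("http://example.com/1", "2024-01-01")
```

The key differences from the code in the question: the connection is created inside wrapper (so it happens at call time, not at decoration time, and no global is needed), and closing() guarantees conn.close() runs after the with conn: block has committed.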

Why has sqlite3 stopped working on "delete" calls in my class code?

I have the following code structure, which does everything required except removing the line item I ask it to remove. The code does not throw any error; the program simply ignores the "delete" action.
class MainPage():
    def __init__(self, master, *args, **kwargs):
        # Coding stuff to build the windows and manage an inventory of items, incl. a button
        # that allows the user to delete a line item if she/he considers necessary (calling to def below)
    def update_inventory(self, open_web=True, update_trigger=False, sold=False, delete=False):
        # Coding stuff to get what the user wants to manage, including an inventory item to remove
        if delete:
            value_to_remove = str(self.actual_pick[i][0])
            import sqlite3
            with sqlite3.connect("MTGdb.db") as db:
                cursor = db.cursor()
                cursor.execute("DELETE FROM db_inventory WHERE id = ?", (value_to_remove,))
                db.commit()
The funny thing is that if I run exactly the same statements from the Python console, supplying "value_to_remove" manually (it is an integer, e.g. the item with id 8224), it works. So I am starting to think there is a problem with querying from inside a class. Even if I call this action from a function outside the class, it still does not work.
What does work (meaning it effectively updates the database) is adding a db.close() call after db.commit(), but that throws a sqlite3.ProgrammingError because it cannot operate on a closed database, right on the line where I call db.close(). Because of that error, the tkinter windows do not pick up the changes in the db, so this workaround is not acceptable. Note that try/except approaches don't work either, e.g.:
try:
    db.close()
except sqlite3.ProgrammingError:
    print("I skip the db.close() statement")
This still throws a sqlite3.ProgrammingError on the print statement.
It is important to note that other sqlite3 actions DO work, like UPDATE statements... so my problem is limited to DELETE statements.
I'd appreciate it if someone could give me more visibility into the problem and how I can manage it. Thanks!!

PostgreSQL DROP TABLE query freezes

I am writing code to create a GUI in Python in the Spyder environment of Anaconda. Within this code I operate on a PostgreSQL database, so I use the psycopg2 database adapter to interact with it directly from the GUI.
The code is too long to post here, as it is over 3000 lines, but to summarize, I have no problem interacting with my database except when I try to drop a table.
When I do so, the GUI frames become unresponsive, the drop table query doesn't drop the intended table and no errors or anything else of that kind are thrown.
Within my code, all operations which result in a table being dropped are processed via a function (DeleteTable). When I call this function, there are no problems as I have inserted several print statements previously which confirmed that everything was in order. The problem occurs when I execute the statement with the cur.execute(sql) line of code.
Can anybody figure out why my tables won't drop?
def DeleteTable(table_name):
    conn = psycopg2.connect("host='localhost' dbname='trial2' user='postgres' password='postgres'")
    cur = conn.cursor()
    sql = """DROP TABLE """ + table_name + """;"""
    cur.execute(sql)
    conn.commit()
That must be because a concurrent transaction is holding a lock that blocks the DROP TABLE statement.
Examine the pg_stat_activity view and watch for sessions whose state is idle in transaction, or active sessions whose xact_start is more than a few seconds ago.
This is essentially an application bug: you must make sure that all transactions are closed promptly, otherwise Bad Things can happen.
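A diagnostic query along these lines (column names as documented for pg_stat_activity) surfaces the suspect sessions. Running it of course requires a live server, so the connection part is only sketched; the 5-second threshold is an arbitrary choice:

```python
# Diagnostic query for long-open transactions that can block DROP TABLE.
BLOCKER_QUERY = """
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE state IN ('idle in transaction', 'active')
  AND xact_start < now() - interval '5 seconds'
ORDER BY xact_start;
"""

def find_blockers(conn):
    """Return (pid, state, xact_start, query) rows for suspect sessions."""
    with conn.cursor() as cur:
        cur.execute(BLOCKER_QUERY)
        return cur.fetchall()

# Usage (requires a running PostgreSQL server):
# import psycopg2
# conn = psycopg2.connect("host='localhost' dbname='trial2' user='postgres' password='postgres'")
# for pid, state, started, query in find_blockers(conn):
#     print(pid, state, started, query)
```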
I was having the same issue when using psycopg2 within Airflow's Postgres hook, and I resolved it with a with statement. This probably fixes it because the connection becomes local to the with block.
def drop_table():
    with PostgresHook(postgres_conn_id="your_connection").get_conn() as conn:
        cur = conn.cursor()
        cur.execute("DROP TABLE IF EXISTS your_table")

task_drop_table = PythonOperator(
    task_id="drop_table",
    python_callable=drop_table
)
And a similar fix should be possible for the original code above (I didn't test this one):
def DeleteTable(table_name):
    with psycopg2.connect("host='localhost' dbname='trial2' user='postgres' password='postgres'") as conn:
        cur = conn.cursor()
        sql = """DROP TABLE """ + table_name + """;"""
        cur.execute(sql)
        conn.commit()
Please comment if anyone tries this.

Drop Temporary Table after execution of function

I am executing a self-written PostgreSQL function several times in a loop from Python, using the psycopg2 adapter.
The function I wrote has the following structure:
CREATE OR REPLACE FUNCTION my_func()
RETURNS void AS
$$
BEGIN
    -- create a temporary table that should be deleted after
    -- the function finishes
    -- normally a CREATE TABLE ... would be here
    CREATE TEMPORARY TABLE temp_t
    (
        seq integer,
        ...
    ) ON COMMIT DROP;

    -- now the insert
    INSERT INTO temp_t
    SELECT
        ...
END
$$
LANGUAGE 'plpgsql';
That's basically the Python part:
import time
import psycopg2

conn = psycopg2.connect(host="localhost", user="user", password="...", dbname="some_db")
cur = conn.cursor()

for i in range(1, 11):
    print i
    print time.clock()
    cur.callproc("my_func")
    print time.clock()

cur.close()
conn.close()
The error I get when I run the python script is:
---> relation "temp_t" already exists
Basically, I want to measure how long it takes to execute the function, which is why the loop runs several times. Storing the result of the SELECT in a temporary table is meant to replace the CREATE TABLE ... part, which would normally create the output table.
Why doesn't Postgres drop the temporary table after I execute the function from Python?
All the function calls in the loop are performed in a single transaction, so the temporary table is not dropped after each call. Enabling autocommit changes this behavior:
...
conn = psycopg2.connect(host="localhost", user="user", password="...", dbname="some_db")
conn.autocommit = True
cur = conn.cursor()
for i in range(1, 11):
...
Temporary tables are dropped when the session ends. Since your session does not end with the function call, the second function call tries to create the table again. You need to alter your stored function to check whether the temporary table already exists and create it only if it doesn't. This post can help you do so.
Another quick-and-dirty option is to connect and disconnect after each function call:
import time
import psycopg2

for i in range(1, 11):
    conn = psycopg2.connect(host="localhost", user="user", password="...", dbname="some_db")
    cur = conn.cursor()
    print i
    print time.clock()
    cur.callproc("my_func")
    print time.clock()
    cur.close()
    conn.close()
Not nice, but it does the trick.
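As an aside, the snippets above are Python 2 (print statements), and time.clock() was removed in Python 3.8; a Python 3 version of the timing loop would use time.perf_counter(). A runnable sketch, with a local stand-in for the database call since cur.callproc("my_func") needs a live server:

```python
import time

def my_func():
    # stand-in for cur.callproc("my_func"); simulates some work
    time.sleep(0.01)

for i in range(1, 11):
    start = time.perf_counter()
    my_func()
    elapsed = time.perf_counter() - start
    print("call %d took %.4f s" % (i, elapsed))
```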

Creating transactions with with statements in psycopg2

I am trying to use psycopg2 to add some new columns to a table. PostgreSQL lacks an ALTER TABLE table ADD COLUMN IF NOT EXISTS, so I am adding each column in its own transaction. If the column exists, there will be a Python/Postgres error; that's OK, I want my programme to just continue and try to add the next column. The goal is for this to be idempotent, so it can be run many times in a row.
It currently looks like this:
def main():
    # <snip>
    with psycopg2.connect("") as connection:
        create_columns(connection, args.table)

def create_columns(connection, table_name):
    def sql(sql):
        with connection.cursor() as cursor:
            cursor.execute(sql.format(table_name=table_name))
    sql("ALTER TABLE {table_name} ADD COLUMN my_new_col numeric(10,0);")
    sql("ALTER TABLE {table_name} ADD COLUMN another_new_col INTEGER NOT NULL;")
However, if my_new_col exists, there is an exception ProgrammingError('column "parent_osm_id" of relation "relations" already exists\n',), which is to be expected, but when it tries to add another_new_col, there is the exception InternalError('current transaction is aborted, commands ignored until end of transaction block\n',).
The psycopg2 documentation for the with statement implies that with connection.cursor() as cursor: will wrap that code in a transaction. This is clearly not happening. Experimentation has shown me that I need two levels of with statements, including the psycopg2.connect call, and then I get a transaction.
How can I pass a connection object around and have queries run in their own transaction to allow this sort of "graceful error handling"? I would like to keep the postgres connection code separate, in a "clean architecture" style. Is this possible?
The psycopg2 documentation for the with statement implies that the with connection.cursor() as cursor: will wrap that code in a transaction.
That's actually not true. The documentation says:
with psycopg2.connect(DSN) as conn:
    with conn.cursor() as curs:
        curs.execute(SQL)
When a connection exits the with block, if no exception has been raised by the block, the transaction is committed. In case of exception the transaction is rolled back. In no case the connection is closed: a connection can be used in more than a with statement and each with block is effectively wrapped in a transaction.
So it's not the cursor object but the connection object that with wraps in a transaction.
It's also worth noting that all resources held by the cursor are released when we leave the with block:
When a cursor exits the with block it is closed, releasing any resource eventually associated with it. The state of the transaction is not affected.
So, back to your code: you could probably rewrite it to be more like this:
def main():
    # <snip>
    with psycopg2.connect("") as connection:
        create_columns(connection, args.table)

def create_columns(con, table_name):
    def sql(connection, sql):
        with connection:
            with connection.cursor() as cursor:
                cursor.execute(sql.format(table_name=table_name))
    sql(con, "ALTER TABLE {table_name} ADD COLUMN my_new_col numeric(10,0);")
    sql(con, "ALTER TABLE {table_name} ADD COLUMN another_new_col INTEGER NOT NULL;")
This ensures the connection is wrapped in a with block for each query you execute, so if a query fails, the connection's context manager rolls back that transaction and the next query starts a fresh one.
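The per-statement transaction pattern can be demonstrated end-to-end with the stdlib sqlite3 module, whose connection context manager likewise commits on success and rolls back on exception (the table and column names here are invented; sqlite3.OperationalError plays the role of psycopg2's ProgrammingError):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (my_new_col NUMERIC)")

def add_column(connection, statement):
    """Run one DDL statement in its own transaction, ignoring 'already exists'."""
    try:
        with connection:  # commits on success, rolls back on exception
            connection.execute(statement)
        return True
    except sqlite3.OperationalError:
        return False  # column already exists; carry on with the next one

first = add_column(conn, "ALTER TABLE t ADD COLUMN my_new_col NUMERIC")    # duplicate
second = add_column(conn, "ALTER TABLE t ADD COLUMN another_new_col INTEGER")
print(first, second)  # → False True
```

The failed first statement is rolled back inside its own with block, so the second statement runs in a fresh transaction instead of hitting "current transaction is aborted".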
