Is it safe to open a transaction with raw SQL in Django - python

With default settings in Django (version 3.1), is it safe to do the following?
with connection.cursor() as cursor:
    cursor.execute("BEGIN")
    # Some SQL operations
commit_or_rollback = "COMMIT" if success else "ROLLBACK"
with connection.cursor() as cursor:
    cursor.execute(commit_or_rollback)
Or must I set autocommit to False with the set_autocommit method first, since Django's autocommit closes transactions? Or is autocommit isolated, so that there will be no problem with my code?
In case you're asking why I'm using raw SQL for transactions: I tried managing transactions manually as the docs indicate, but it had some issues in a multi-process environment, so I had to implement them with raw queries.

OK, I've been reading more, testing in my project, and watching the queries Django executes in the database logs. It seems safe to use transactions with raw SQL: autocommit begins a new transaction with any operation and doesn't interfere with transactions opened by other connections.
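For what it's worth, the raw BEGIN/COMMIT/ROLLBACK pattern described above can be exercised outside Django with the stdlib sqlite3 module in autocommit mode (isolation_level=None), which, like Django's default autocommit, leaves transaction control entirely to the SQL you send. This is a minimal sketch of the same control flow, not Django code:

```python
import sqlite3

# Autocommit mode (isolation_level=None): the driver never opens
# transactions itself, so BEGIN/COMMIT/ROLLBACK sent as raw SQL are in charge.
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")

cur.execute("BEGIN")
cur.execute("INSERT INTO t VALUES (1)")
success = False  # pretend the business logic failed
cur.execute("COMMIT" if success else "ROLLBACK")

count = cur.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 0: the insert was rolled back
```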

Related

Preventing writable modifications to Oracle database, using Python.

Currently using the cx_Oracle module in Python to connect to my Oracle database. I would like to only allow the user of the program to do read-only executions, like SELECT, and NOT INSERT/DELETE queries.
Is there something I can do to the connection/cursor variables once I establish the connection to prevent writable queries?
I am using the Python Language.
Appreciate any help.
Thanks.
One possibility is to issue the statement "set transaction read only" as in the following code:
import cx_Oracle
conn = cx_Oracle.connect("cx_Oracle/welcome")
cursor = conn.cursor()
cursor.execute("set transaction read only")
cursor.execute("insert into c values (1, 'test')")
That will result in the following error:
ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction
Of course, you'll have to make sure that you create a Connection class that issues this statement when the connection is first created and after each and every commit() and rollback() call. And it can still be circumvented by calling a PL/SQL block that performs a commit or rollback.
The only other possibility that I can think of right now is to create a restricted user or role which simply doesn't have the ability to insert, update, delete, etc., and make sure the application uses that user or role. This one at least is foolproof, but a lot more effort up front!
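The same session-level idea exists in other databases. As an illustration only (using the stdlib sqlite3 module and SQLite's PRAGMA query_only, rather than cx_Oracle and Oracle's SET TRANSACTION READ ONLY), a connection can be switched to read-only so that writes raise an error:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE c (a INTEGER, b TEXT)")

# Flip this session to read-only; writes now fail, much as Oracle's
# "set transaction read only" makes INSERT/UPDATE/DELETE raise ORA-01456.
conn.execute("PRAGMA query_only = 1")
try:
    conn.execute("INSERT INTO c VALUES (1, 'test')")
    blocked = False
except sqlite3.OperationalError:
    blocked = True
print(blocked)  # True: the write was rejected
```

As with the Oracle approach, this only guards one session; a restricted user or role remains the robust option.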

Why does having no autocommit facility mean that all queries execute within a transaction in PostgreSQL?

From https://wiki.postgresql.org/wiki/Psycopg2_Tutorial
PostgreSQL does not have an autocommit facility which means that all
queries will execute within a transaction.
Execution within a transaction is a very good thing, it ensures data
integrity and allows for appropriate error handling. However there are
queries that can not be run from within a transaction. Take the
following example.
#!/usr/bin/python2.4
import psycopg2
# Try to connect
try:
    conn = psycopg2.connect("dbname='template1' user='dbuser' password='mypass'")
except:
    print "I am unable to connect to the database."
cur = conn.cursor()
try:
    cur.execute("""DROP DATABASE foo_test""")
except:
    print "I can't drop our test database!"
This code would actually fail with the printed message of "I can't drop our test database!" PostgreSQL cannot drop databases within a transaction; it is an all-or-nothing command. If you want to drop the database, you need to change the isolation level of the connection; this is done using the following.
conn.set_isolation_level(0)
You would place the above immediately preceding the DROP DATABASE
cursor execution.
I was wondering why
"PostgreSQL does not have an autocommit facility which means that all queries will execute within a transaction."
"PostgreSQL can not drop databases within a transaction"
"If you want to drop the database you would need to change the isolation level of the database"
Thanks.
Update:
The question "What does autocommit mean in postgresql and psycopg2?" answers my question.
All three points relate to Python and its DB connector library, not to PostgreSQL itself:
PostgreSQL has autocommit, and it is active by default, which means that every SQL statement is immediately executed and committed. When you start a transaction block, this autocommit mode is disabled until you finish the transaction (either by COMMIT or ROLLBACK).
The operation of destroying a database is implemented in such a way that you cannot run it from inside a transaction block. Also keep in mind that, unlike most other databases, PostgreSQL allows almost all DDL statements (obviously not DROP DATABASE) to be executed inside a transaction.
Actually, you cannot drop a database if anyone (including you) is currently connected to it - so it does not matter what your isolation level is; you still have to connect to another database (e.g. postgres).
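The first point, that autocommit makes each statement take effect immediately while an explicit BEGIN defers changes until COMMIT, can be illustrated with the stdlib sqlite3 module (an analogy only; the mechanics in PostgreSQL/psycopg2 differ in detail). A second connection sees the autocommitted insert at once, but sees the in-transaction insert only after COMMIT:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
reader = sqlite3.connect(path, isolation_level=None)

def count(conn):
    cur = conn.execute("SELECT COUNT(*) FROM t")
    rows = cur.fetchall()  # exhaust the cursor so its read lock is released
    cur.close()
    return rows[0][0]

writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("INSERT INTO t VALUES (1)")  # autocommit: committed at once
seen_autocommit = count(reader)             # 1

writer.execute("BEGIN")                     # explicit transaction block
writer.execute("INSERT INTO t VALUES (2)")
seen_in_tx = count(reader)                  # still 1: not yet committed
writer.execute("COMMIT")
seen_after = count(reader)                  # 2
print(seen_autocommit, seen_in_tx, seen_after)
```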

Are the 2 autocommits the same thing: SQLAlchemy and MySQL?

autocommit in SQLAlchemy:
sessionmaker(bind=engine, autocommit=False) # autocommit default False
autocommit in MySQL:
SET AUTOCOMMIT=0 -- autocommit default 1
I am wondering, are the two autocommits the same thing? I.e., does SQLAlchemy pass the autocommit status to MySQL via something equivalent to SET AUTOCOMMIT?
According to the docs, these aren't identical (although they achieve the same result).
SQLAlchemy always uses transactions, so it always sets AUTOCOMMIT=0 on MySQL. However, if you set autocommit=True, it will automatically call .commit() whenever you make a change to the data.
This is done in that way because every database does autocommit differently (if at all), and SQLAlchemy tries to behave consistently between them.

psycopg2 OperationalError: cursor does not exist

I'm trying to implement a server-side cursor in order to "bypass" a Django ORM weakness when it comes to fetching a huge amount of data from the database.
But I don't understand how named cursors are supposed to be defined, since my current code doesn't seem to work properly. I define the cursor in this way:
id = 'cursor%s' % uuid4().hex
connection = psycopg2.connect('my connection string here')
cursor = connection.cursor(id, cursor_factory=psycopg2.extras.RealDictCursor)
The cursor seems to work in that it can be iterated and returns the expected records as Python dictionaries, but when I try to close it (cursor.close()) I get the exception:
psycopg2 OperationalError: cursor *the generated cursor id* does not exist
WTF?! So what is the object I'm using to retrieve stuff from the database?
Is psycopg2 using a fallback default (unnamed) cursor since the one I defined is not found in my database? (And if so, my big question: is it mandatory to define a cursor at the db level before using psycopg2?) I'm very confused; can you help me?
I made a really simple and silly mistake of forgetting to run ./manage.py makemigrations and ./manage.py migrate before running ./manage.py test which caused this error.
(I'm aware this doesn't answer the original question, but since this is the first result from Google I thought I would contribute. Hopefully that's okay)
I've had this problem when playing around with my models and launching the tests with pytest.
What resolved the problem for me was to reset the test database. I used --create-db like so:
pytest backend/test_projects/partners/test_actions.py --create-db
I had a similar problem and found the solution. Just disable server-side cursors, as described here: https://docs.djangoproject.com/en/2.2/ref/settings/#disable-server-side-cursors
'default': {
    ...
    'USER': DB_USER,
    'PASSWORD': DB_PASSWORD,
    'NAME': DB_NAME,
    'DISABLE_SERVER_SIDE_CURSORS': True,
    ...
},
From the psycopg2 documentation:
"Named cursors are usually created WITHOUT HOLD, meaning they live only as long as the current transaction. Trying to fetch from a named cursor after a commit() or to create a named cursor when the connection transaction isolation level is set to AUTOCOMMIT will result in an exception."
Which is to say that these cursors do not need to be explicitly closed.
http://initd.org/psycopg/docs/usage.html#server-side-cursors

Python MySQL - SELECTs work but not DELETEs?

I'm new to Python and Python's MySQL adapter. I'm not sure if I'm missing something obvious here:
db = MySQLdb.connect(...)  # db details omitted
cursor = db.cursor()
# WORKS
cursor.execute("SELECT site_id FROM users WHERE username=%s", (username,))
record = cursor.fetchone()
# DOES NOT SEEM TO WORK
cursor.execute("DELETE FROM users WHERE username=%s", (username,))
Any ideas?
I'd guess that you are using a storage engine that supports transactions (e.g. InnoDB) but you don't call db.commit() after the DELETE. The effect of the DELETE is discarded if you don't commit.
See http://mysql-python.sourceforge.net/FAQ.html#my-data-disappeared-or-won-t-go-away:
Starting with 1.2.0, MySQLdb disables autocommit by default, as required by the DB-API standard (PEP-249). If you are using InnoDB tables or some other type of transactional table type, you'll need to do connection.commit() before closing the connection, or else none of your changes will be written to the database.
See also this similar SO question: Python MySQLdb update query fails
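This behavior is easy to reproduce with the stdlib sqlite3 module, which likewise follows PEP 249 and disables autocommit by default: a DELETE executed without a following commit() is discarded when the connection is closed. (The table and column names below are made up for the demonstration.)

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "users.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE users (username TEXT, site_id INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1)")
conn.commit()

conn.execute("DELETE FROM users WHERE username = ?", ("alice",))
conn.close()  # closed WITHOUT commit(): the DELETE is rolled back

conn = sqlite3.connect(path)
remaining = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(remaining)  # 1: the row survived, just like the "disappearing" DELETE
```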
Perhaps you are violating a foreign key constraint.
To the code above, just add a call to commit() on your connection. The feature is far from an annoyance: it saves you from data corruption issues when there are errors in your queries.
The problem might be that you are not committing the changes. This can be done with:
conn.commit()
Read more on this here.
