I have a table, skills, which is presently empty despite my attempts to add rows to it. I have the following Python code in a CGI script:
open('/tmp/skills', 'a').write('Reached 1!\n')
if get_cgi('nous2dianoia'):
    open('/tmp/skills', 'a').write('Reached 2!\n')
    #if (get_cgi('previous') and get_cgi('name') and get_cgi('previous') !=
    #        get_cgi('name')):
    #    cursor.execute('DELETE FROM skills WHERE name = ?;',
    #        (get_cgi('previous'),))
    cursor.execute('''INSERT INTO skills (name, nous2dianoia,
        hereandnow2escapist, nf2nt, social2individual, ithou2iit,
        slow2quick) VALUES (?, ?, ?, ?, ?, ?, ?);''',
        (get_cgi('name'), get_cgi('nous2dianoia'),
         get_cgi('hereandnow2escapist'), get_cgi('nf2nt'),
         get_cgi('social2individual'), get_cgi('ithou2iit'),
         get_cgi('slow2quick'),))
    open('/tmp/skills', 'a').write('Reached 3!\n')
When I load a page, /tmp/skills has a freshly appended:
Reached 1!
Reached 2!
Reached 3!
However, the table remains empty. (The rest of the script runs without crashing, and displays what one would expect to be displayed if the script were called without any CGI variables passed.)
I haven't started a transaction; the SQL operations are not particularly advanced or intricate.
Any insight into how this can run without any reported error, yet leave the skills table in the database empty?
Thanks,
Your insert statement is not automatically committed. From the docs on sqlite3.Connection:
commit()
This method commits the current transaction. If you don’t
call this method, anything you did since the last call to commit() is
not visible from other database connections. If you wonder why you
don’t see the data you’ve written to the database, please check you
didn’t forget to call this method.
To automatically commit, use the connection as a context manager:
# connection.commit() is called automatically upon exit of the context manager,
# unless an exception is encountered, in which case connection.rollback() is called.
with connection:
    connection.execute(insert_statement)
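To make the effect of the context manager concrete, here is a minimal end-to-end sketch with the stdlib sqlite3 module (the table and column are made up for illustration): the INSERT is committed automatically when the with-block exits without an exception, so a later query sees the row.

```python
import sqlite3

connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE skills (name TEXT)")

with connection:
    # Committed automatically when the with-block exits cleanly.
    connection.execute("INSERT INTO skills (name) VALUES (?)", ("dialectic",))

rows = connection.execute("SELECT name FROM skills").fetchall()
print(rows)  # → [('dialectic',)]
```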
This is my first post on Stack Overflow.
This is the process I'm following:
Make a connection to the DB
Query the DB to check whether the record exists
If the record does NOT exist, iterate over a loop
Add the records to the DB
My code:
conn = sqlite3.connect('serps.db')
c = conn.cursor()

# 1) Make the query
c.execute("SELECT fecha FROM registros WHERE fecha=? AND keyword=?", (fecha, q))

# 2) Check if exists
exists = c.fetchone()
conn.commit()

if not exists:
    for data in json:
        ...
        c.execute("INSERT INTO registros VALUES (?, ?, ?, ?, ?, ?)", (fecha, hora, q, rank, url, title))
        conn.commit()
I get the following error:
---> conn.commit()
OperationalError: database is locked
I think that if I close the database after checking whether the record exists, I could open it again and it would work.
But should I really close and reopen the connection between a SELECT and a subsequent INSERT?
SQLite is meant to be a lightweight database, and thus can't support a high level of concurrency. OperationalError: database is locked errors indicate that your application is experiencing more concurrency than SQLite can handle in its default configuration. This error means that one thread or process holds an exclusive lock on the database and another thread timed out waiting for the lock to be released.
So try switching to another database backend.
You should learn what transactions are, what .commit() does, and how to use .executemany().
How come you have .commit after .fetchone!? Do NOT place .commit() inside a loop. In fact, you should avoid placing INSERTs in a loop as well. Prepare a list of tuples or dicts for insertions in the loop and call the db just once.
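A sketch of that approach with the stdlib sqlite3 module (the table and columns are invented for illustration): build the list of rows inside the loop, then call executemany() and commit() once, so the database is hit a single time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE registros (fecha TEXT, hora TEXT, keyword TEXT)")

# Build all the rows first, inside the loop...
rows = [("2020-01-01", "12:00", "python"),
        ("2020-01-01", "12:05", "sqlite")]

# ...then hit the database once, and commit once.
conn.executemany("INSERT INTO registros VALUES (?, ?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM registros").fetchone()[0]
print(count)  # → 2
```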
I have a 'throwaway' SQL statement that I would like to run. I don't care about the error status, and I don't need to know whether it completed successfully. It creates an index on a table that is very infrequently used. I currently have the connection and cursor objects, and here is how I would normally do it:
self.cursor.execute('ALTER TABLE mytable ADD INDEX (_id)')
Easy enough. However, this statement takes about five minutes, and like I mentioned, it's not important enough to block other items that are unrelated to it. Is it possible to execute a cursor statement in the background? Again, I don't need any status or anything from it, and I don't care about 'closing the cursor/connection' or anything -- it really is a throw-away statement on a table that is probably accessed one to five times in its lifetime before being dropped.
threading.Thread(target=lambda: cursor.execute('ALTER TABLE %s ADD INDEX (_id)' % tn)).start()
What would be the best approach to execute a statement in the background so it doesn't block future SQL statements?
I am writing code to create a GUI in Python in the Spyder environment of Anaconda. Within this code I operate on a PostgreSQL database, and I therefore use the psycopg2 database adapter so that I can interact with it directly from the GUI.
The code is too long to post here, as it is over 3000 lines, but to summarize, I have no problem interacting with my database except when I try to drop a table.
When I do so, the GUI frames become unresponsive, the drop table query doesn't drop the intended table and no errors or anything else of that kind are thrown.
Within my code, all operations which result in a table being dropped are processed via a function (DeleteTable). When I call this function there are no problems: several print statements I inserted confirm that everything is in order up to the cur.execute(sql) line, which is where the problem occurs.
Can anybody figure out why my tables won't drop?
def DeleteTable(table_name):
    conn = psycopg2.connect("host='localhost' dbname='trial2' user='postgres' password='postgres'")
    cur = conn.cursor()
    sql = """DROP TABLE """ + table_name + """;"""
    cur.execute(sql)
    conn.commit()
That must be because a concurrent transaction is holding a lock that blocks the DROP TABLE statement.
Examine the pg_stat_activity view and watch out for sessions with state equal to idle in transaction or active that have an xact_start of more than a few seconds ago.
This is essentially an application bug: you must make sure that all transactions are closed immediately, otherwise Bad Things can happen.
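A query along those lines (pid, state, xact_start and query are standard pg_stat_activity columns; the 5-second interval is an arbitrary threshold, adjust to taste):

```sql
-- Sessions that have held a transaction open for more than a few seconds;
-- an "idle in transaction" session here is the usual culprit for a hung DROP TABLE.
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE state IN ('idle in transaction', 'active')
  AND xact_start < now() - interval '5 seconds';
```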
I was having the same issue when using psycopg2 within Airflow's Postgres hook, and I resolved it with a with statement. This probably resolves the issue because the connection becomes local to the with block.
def drop_table():
    with PostgresHook(postgres_conn_id="your_connection").get_conn() as conn:
        cur = conn.cursor()
        cur.execute("DROP TABLE IF EXISTS your_table")

task_drop_table = PythonOperator(
    task_id="drop_table",
    python_callable=drop_table
)
And a solution is possible for the original code above like this (I didn't test this one):
def DeleteTable(table_name):
    with psycopg2.connect("host='localhost' dbname='trial2' user='postgres' password='postgres'") as conn:
        cur = conn.cursor()
        sql = """DROP TABLE """ + table_name + """;"""
        cur.execute(sql)
        conn.commit()
Please comment if anyone tries this.
I have the following code that is using MySQLdb for db inserts
self.cursor.execute('START TRANSACTION;')
for item in data:
    self.cursor.execute('INSERT INTO...')
self.cursor.execute('COMMIT;')
self.conn.commit()
Is the self.conn.commit() at the end redundant, or does that need to be there?
If you start a transaction you're responsible for calling COMMIT, or it will be rolled back when you close the connection.
As a note it's bad form to include ; in your queries unless you're using an interactive shell. They're not necessary and immediately raise a bunch of questions about how they came to be included there.
The ; delimiter is used by the shell to determine where one command stops and the next starts, something that's not necessary when using code where each statement is supplied as a separate string.
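The idiomatic shape, sketched here with the stdlib sqlite3 module for a runnable illustration (MySQLdb connections expose the same commit()/rollback() methods; the table and data are invented): let the driver manage the transaction and call conn.commit() once at the end, with no hand-written START TRANSACTION/COMMIT statements and no semicolons.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (val INTEGER)")
cursor = conn.cursor()

data = [1, 2, 3]

# The driver opens a transaction implicitly on the first INSERT;
# one commit() at the end makes the whole batch durable.
for item in data:
    cursor.execute("INSERT INTO items (val) VALUES (?)", (item,))
conn.commit()

total = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(total)  # → 3
```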
I'm using Psycopg2 in Python to access a PostgreSQL database. I'm curious if it's safe to use the with closing() pattern to create and use a cursor, or if I should use an explicit try/except wrapped around the query. My question is concerning inserting or updating, and transactions.
As I understand it, all Psycopg2 queries occur within a transaction, and it's up to calling code to commit or rollback the transaction. If within a with closing(... block an error occurs, is a rollback issued? In older versions of Psycopg2, a rollback was explicitly issued on close() but this is not the case anymore (see http://initd.org/psycopg/docs/connection.html#connection.close).
My question might make more sense with an example. Here's an example using with closing(...
with closing(db.cursor()) as cursor:
    cursor.execute("""UPDATE users
                      SET password = %s, salt = %s
                      WHERE user_id = %s""",
                   (pw_tuple[0], pw_tuple[1], user_id))
    module.raise_unexpected_error()
    db.commit()
What happens when module.raise_unexpected_error() raises its error? Is the transaction rolled back? As I understand transactions, I either need to commit them or roll them back. So in this case, what happens?
Alternately I could write my query like this:
cursor = None
try:
    cursor = db.cursor()
    cursor.execute("""UPDATE users
                      SET password = %s, salt = %s
                      WHERE user_id = %s""",
                   (pw_tuple[0], pw_tuple[1], user_id))
    module.raise_unexpected_error()
    db.commit()
except BaseException:
    if cursor is not None:
        db.rollback()
finally:
    if cursor is not None:
        cursor.close()
Also I should mention that I have no idea if Psycopg2's connection class cursor() method could raise an error or not (the documentation doesn't say) so better safe than sorry, no?
Which method of issuing a query and managing a transaction should I use?
Your link to the Psycopg2 docs kind of explains it itself, no?
... Note that closing a connection without committing the changes first will
cause any pending change to be discarded as if a ROLLBACK was
performed (unless a different isolation level has been selected: see
set_isolation_level()).
Changed in version 2.2: previously an explicit ROLLBACK was issued by
Psycopg on close(). The command could have been sent to the backend at
an inappropriate time, so Psycopg currently relies on the backend to
implicitly discard uncommitted changes. Some middleware are known to
behave incorrectly though when the connection is closed during a
transaction (when status is STATUS_IN_TRANSACTION), e.g. PgBouncer
reports an unclean server and discards the connection. To avoid this
problem you can ensure to terminate the transaction with a
commit()/rollback() before closing.
So, unless you're using a different isolation level, or using PgBouncer, your first example should work fine. However, if you desire some finer-grained control over exactly what happens during a transaction, then the try/except method might be best, since it parallels the database transaction state itself.
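For a runnable illustration of that finer-grained control, here is the try/except shape with the stdlib sqlite3 module (psycopg2 connections expose the same commit()/rollback() interface; the table and the deliberately raised error are made up): an exception before commit() triggers an explicit rollback, so the update never becomes visible.

```python
import sqlite3
from contextlib import closing

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (user_id INTEGER, password TEXT)")
db.execute("INSERT INTO users VALUES (1, 'old')")
db.commit()

try:
    with closing(db.cursor()) as cursor:
        cursor.execute("UPDATE users SET password = ? WHERE user_id = ?",
                       ("new", 1))
        # Stand-in for module.raise_unexpected_error() in the question.
        raise RuntimeError("simulated unexpected error")
    db.commit()  # never reached in this demo
except Exception:
    # Mirror the database transaction state: roll back the failed work.
    db.rollback()

password = db.execute("SELECT password FROM users WHERE user_id = 1").fetchone()[0]
print(password)  # → old
```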