"database is locked" in SQLite3 and Python after SELECT and Loop - python

This is my first post on Stack Overflow.
This is the process I'm following:
Make a connection to the DB
Query the DB to check whether the record exists
If the record does NOT exist, iterate over a loop
Add the records to the DB
My code:
import sqlite3

conn = sqlite3.connect('serps.db')
c = conn.cursor()
# 1) Make the query
c.execute("SELECT fecha FROM registros WHERE fecha=? AND keyword=?", (fecha, q))
# 2) Check if it exists
exists = c.fetchone()
conn.commit()
if not exists:
    for data in json:
        ...
        c.execute("INSERT INTO registros VALUES (?, ?, ?, ?, ?, ?)", (fecha, hora, q, rank, url, title))
        conn.commit()
I get the following error:
---> conn.commit()
OperationalError: database is locked
I think if I close the database after checking whether the record exists, I could open it again and it would work.
But should I really have to close and reopen the connection to INSERT after a SELECT?

SQLite is meant to be a lightweight database, and thus can't support a high level of concurrency. OperationalError: database is locked errors indicate that your application is experiencing more concurrency than SQLite can handle in its default configuration. The error means that one thread or process holds an exclusive lock on the database and another thread timed out waiting for the lock to be released.
So try switching to another database backend.
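Before switching backends, it may also be worth raising sqlite3's busy timeout (the default is 5 seconds), which controls how long a connection waits for a competing lock before raising this error. A minimal sketch, assuming the same serps.db file:

```python
import sqlite3

# Wait up to 30 seconds for a competing writer to release its lock
# before raising "database is locked" (the default timeout is 5 seconds).
conn = sqlite3.connect('serps.db', timeout=30.0)
```

This only helps when the lock is held briefly; a connection that never commits will still block everyone else.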

You should learn what transactions are, what .commit() does, and how to use .executemany().
Why do you have a .commit() right after .fetchone()? Do NOT place .commit() inside a loop. In fact, you should avoid placing INSERTs in a loop as well: prepare a list of tuples or dicts inside the loop and call the database just once.
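The batched pattern described above might look like this (a sketch using an in-memory database and a simplified three-column registros table, not the asker's actual schema):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("CREATE TABLE registros (fecha TEXT, keyword TEXT, rank INTEGER)")

# Build the list of tuples inside the loop...
rows = [("2023-01-01", "python", i) for i in range(1, 4)]

# ...then hit the database once, and commit once.
c.executemany("INSERT INTO registros VALUES (?, ?, ?)", rows)
conn.commit()
```

Because there is a single transaction, the write lock is held for one short burst instead of once per row.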

Related

PostgreSQL DROP TABLE query freezes

I am writing code to create a GUI in Python in the Spyder environment of Anaconda. Within this code I work with a PostgreSQL database, so I use the psycopg2 database adapter to interact with it directly from the GUI.
The code is too long to post here, as it is over 3000 lines, but to summarize, I have no problem interacting with my database except when I try to drop a table.
When I do so, the GUI frames become unresponsive, the drop table query doesn't drop the intended table and no errors or anything else of that kind are thrown.
Within my code, all operations which result in a table being dropped are processed via a function (DeleteTable). When I call this function, there are no problems as I have inserted several print statements previously which confirmed that everything was in order. The problem occurs when I execute the statement with the cur.execute(sql) line of code.
Can anybody figure out why my tables won't drop?
def DeleteTable(table_name):
    conn = psycopg2.connect("host='localhost' dbname='trial2' user='postgres' password='postgres'")
    cur = conn.cursor()
    sql = """DROP TABLE """ + table_name + """;"""
    cur.execute(sql)
    conn.commit()
That must be because a concurrent transaction is holding a lock that blocks the DROP TABLE statement.
Examine the pg_stat_activity view and watch out for sessions whose state is idle in transaction, or active sessions whose xact_start is more than a few seconds old.
This is essentially an application bug: you must make sure that all transactions are closed immediately, otherwise Bad Things can happen.
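A query along these lines (run in psql or any client) surfaces the offending sessions; the column names are as documented for pg_stat_activity:

```sql
-- Find sessions that may be holding the lock: either sitting
-- 'idle in transaction' or 'active' with an old transaction start.
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE state IN ('idle in transaction', 'active')
ORDER BY xact_start;
```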
I was having the same issue when using psycopg2 inside Airflow's Postgres hook, and I resolved it with a with statement. This probably fixes the issue because the connection becomes local to the with block.
def drop_table():
    with PostgresHook(postgres_conn_id="your_connection").get_conn() as conn:
        cur = conn.cursor()
        cur.execute("DROP TABLE IF EXISTS your_table")

task_drop_table = PythonOperator(
    task_id="drop_table",
    python_callable=drop_table
)
And a similar fix should work for the original code above (I haven't tested this one):
def DeleteTable(table_name):
    with psycopg2.connect("host='localhost' dbname='trial2' user='postgres' password='postgres'") as conn:
        cur = conn.cursor()
        sql = """DROP TABLE """ + table_name + """;"""
        cur.execute(sql)
        conn.commit()
Please comment if anyone tries this.

SQLite database gets locked by SELECT clause

I have a python script which creates a database and then enters an infinite loop which runs once per second querying the database with some selects.
At the same time I connect to the database with a sqlite cli and try to make an update but I get a database is locked error.
Here the (anonymized) code of the script:
import sqlite3
import time

con = sqlite3.connect(r'path\to\database.sqlite')
con.execute('DROP TABLE IF EXISTS blah;')
con.execute('CREATE TABLE blah;')
con.execute('INSERT INTO blah;')
con.commit()
while True:
    result = con.execute('SELECT blah')
    print(result.fetchone()[0])
    time.sleep(1)
Python's sqlite3 module tries to be clever and manages transactions for you.
To ensure that you can access the database from other threads/processes, disable that (set isolation_level to None) and use explicit transactions when needed.
Alternatively, call con.commit() whenever you are finished writing.
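A sketch of that advice, using an in-memory database: with isolation_level=None the connection runs in autocommit mode, so the polling SELECTs never leave a transaction (and its locks) open, while writes get an explicit BEGIN/COMMIT:

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so the
# long-running SELECT loop never holds a write lock open.
con = sqlite3.connect(':memory:', isolation_level=None)
con.execute('CREATE TABLE blah (x INTEGER)')

# Wrap writes in an explicit transaction when atomicity is needed.
con.execute('BEGIN')
con.execute('INSERT INTO blah VALUES (1)')
con.execute('COMMIT')

# Reads need no transaction at all in autocommit mode.
result = con.execute('SELECT x FROM blah').fetchone()
```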

Creating transactions with with statements in psycopg2

I am trying to use psycopg2 to add some new columns to a table. PostgreSQL lacks an ALTER TABLE table ADD COLUMN IF NOT EXISTS, so I am adding each column in its own transaction. If the column already exists there will be a Python and Postgres error; that's OK, I want my program to just continue and try to add the next column. The goal is for this to be idempotent, so it can be run many times in a row.
It currently looks like this:
def main():
    # <snip>
    with psycopg2.connect("") as connection:
        create_columns(connection, args.table)

def create_columns(connection, table_name):
    def sql(sql):
        with connection.cursor() as cursor:
            cursor.execute(sql.format(table_name=table_name))
    sql("ALTER TABLE {table_name} ADD COLUMN my_new_col numeric(10,0);")
    sql("ALTER TABLE {table_name} ADD COLUMN another_new_col INTEGER NOT NULL;")
However, if my_new_col already exists, there is an exception ProgrammingError('column "parent_osm_id" of relation "relations" already exists\n',), which is to be expected, but when it then tries to add another_new_col, there is the exception InternalError('current transaction is aborted, commands ignored until end of transaction block\n',).
The psycopg2 documentation for the with statement implies that with connection.cursor() as cursor: will wrap that code in a transaction. This is clearly not happening. Experimentation has shown me that I need two levels of with statements, including the psycopg2.connect call, and only then do I get a transaction.
How can I pass a connection object around and have queries run in their own transactions to allow this sort of graceful error handling? I would like to keep the Postgres connection code separate, in a "clean architecture" style. Is this possible?
The psycopg2 documentation for the with statement implies that with connection.cursor() as cursor: will wrap that code in a transaction.
That is actually not true; the documentation says:
with psycopg2.connect(DSN) as conn:
    with conn.cursor() as curs:
        curs.execute(SQL)
When a connection exits the with block, if no exception has been raised by the block, the transaction is committed. In case of exception the transaction is rolled back. In no case the connection is closed: a connection can be used in more than a with statement and each with block is effectively wrapped in a transaction.
So it's not the cursor object being handled by with, but the connection object.
Also worth noting that all resources held by the cursor are released when we leave the with block:
When a cursor exits the with block it is closed, releasing any resource eventually associated with it. The state of the transaction is not affected.
So back to your code you could probably rewrite it to be more like:
def main():
    # <snip>
    with psycopg2.connect("") as connection:
        create_columns(connection, args.table)

def create_columns(con, table_name):
    def sql(connection, sql):
        with connection:
            with connection.cursor() as cursor:
                cursor.execute(sql.format(table_name=table_name))
    sql(con, "ALTER TABLE {table_name} ADD COLUMN my_new_col numeric(10,0);")
    sql(con, "ALTER TABLE {table_name} ADD COLUMN another_new_col INTEGER NOT NULL;")
ensuring your connection is wrapped in a with for each query you execute, so that if a query fails the connection's context manager will roll back the transaction.

Why is this sqlite3 insert statement not adding a row?

I have a table, skills, which is presently empty despite my attempts to add rows to it. I have the following Python code in a CGI script:
open('/tmp/skills', 'a').write('Reached 1!\n')
if get_cgi('nous2dianoia'):
    open('/tmp/skills', 'a').write('Reached 2!\n')
    #if (get_cgi('previous') and get_cgi('name') and get_cgi('previous') !=
    #        get_cgi('name')):
    #    cursor.execute('DELETE FROM skills WHERE name = ?;',
    #        (get_cgi('previous'),))
    cursor.execute('''INSERT INTO skills (name, nous2dianoia,
        hereandnow2escapist, nf2nt, social2individual, ithou2iit,
        slow2quick) VALUES (?, ?, ?, ?, ?, ?, ?);''',
        (get_cgi('name'), get_cgi('nous2dianoia'),
         get_cgi('hereandnow2escapist'), get_cgi('nf2nt'),
         get_cgi('social2individual'), get_cgi('ithou2iit'),
         get_cgi('slow2quick'),))
    open('/tmp/skills', 'a').write('Reached 3!\n')
When I load a page, /tmp/skills has a freshly appended:
Reached 1!
Reached 2!
Reached 3!
However, the table remains empty. (The rest of the script runs without crashing, and displays what one would expect to be displayed if the script were called without any CGI variables passed.)
I haven't started a transaction; the SQL operations are not particularly advanced or intricate.
Any insight into why this runs without any reported error but still leaves the skills table empty?
Thanks,
Your insert statement is not automatically committed. From the docs on sqlite3.Connection:
commit()
This method commits the current transaction. If you don’t
call this method, anything you did since the last call to commit() is
not visible from other database connections. If you wonder why you
don’t see the data you’ve written to the database, please check you
didn’t forget to call this method.
To automatically commit, use the connection as a context manager:
# connection.commit() is called automatically upon exit of the context manager
# unless an exception is encountered, in which case connection.rollback() is called.
with connection:
    connection.execute(insert_statement)

New rows not showing up after SQL INSERT & "commit" with Python and SQL

I made a loop in Python that calls itself to repeatedly check for new entries in a database. On first execution, all affected rows are shown fine. Meanwhile, I add more rows into the database. On the next query in my loop, the new rows are not shown.
This is my query-loop:
def loop():
    global mysqlconfig  # username, passwd...
    tbd = []  # this is where I save the result
    conn = MySQLdb.connect(**mysqlconfig)
    conn.autocommit(True)
    c = conn.cursor()
    c.execute("SELECT id, message FROM tasks WHERE date <= '%s' AND done = 0;" % now.isoformat(' '))
    conn.commit()
    tbd = c.fetchall()
    print tbd
    c.close()
    conn.close()
    time.sleep(5)
    loop()

loop()
This is the SQL part of my Python insertion-script:
conn = MySQLdb.connect(**mysqlconfig)
conn.autocommit(1)
c = conn.cursor()
c.execute("INSERT INTO tasks (date, message) VALUES ('{0}', '{1}');".format("2012-10-28 23:50", "test"))
conn.commit()
id = c.lastrowid
c.close()
conn.close()
I tried SQLite, I tried Oracle's MySQL connector, I tried MySQLdb on Windows and Linux systems, and all had the same problem. I looked through many, many threads on Stack Overflow that recommended turning on autocommit or calling commit() after an SQL statement (e.g. one, two, three), which I tried, and it failed.
When I added data with HeidiSQL to my database it showed up in the loop query, but I don't really know why this is. Rows inserted with mysql-client on Linux and my Python insertion script never show up until I restart my loop script.
I don't know if it's the fact that I open 2 connections, each in their own script, but I close every connection and every cursor when I'm done with them.
The problem could be with your variable now; I don't see it being reset anywhere in the loop.
I'd probably use the mysql NOW() function:
c.execute("SELECT id, message FROM tasks WHERE date <= NOW() AND done = 0;")
It looks like the time you are inserting into the database is in the future. I don't think your issue is with the database connection; it's more likely something to do with the queries you are running.
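Either way, the core fix is to recompute the cutoff timestamp on every pass instead of once at startup. A sketch of that structure, with a hypothetical fetch_tasks(cutoff) standing in for the real MySQL query (and an iteration-based loop rather than recursion, which would eventually exhaust the stack):

```python
import datetime
import time

def loop(fetch_tasks, iterations=3):
    # fetch_tasks is a stand-in for the real SELECT; the key point is
    # that `now` is recomputed inside the loop on every iteration.
    results = []
    for _ in range(iterations):
        now = datetime.datetime.now()
        results.append(fetch_tasks(now.isoformat(' ')))
        time.sleep(0.01)
    return results
```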
