I get "database table is locked" error in my sqlite3 db. My script is single threaded, no other app is using the program (i did have it open once in "SQLite Database Browser.exe"). I copied the file, del the original (success) and renamed the copy so i know no process is locking it yet when i run my script everything in table B cannot be written to and it looks like table A is fine. Whats happening?
-edit-
I fixed it, but I'm not sure how. I noticed the code wasn't doing the correct thing (I had copied the wrong field), and after fixing it up and cleaning it, it magically started working again.
-edit2-
Someone else posted, so I might as well update. I think the problem was that I was trying to execute a statement while another command/cursor was still in use.
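A minimal sketch of the kind of pattern that can trigger this (reconstructed, not the original code; the table name b is a placeholder, and whether it actually raises depends on the SQLite build):

import sqlite3

conn = sqlite3.connect("example.db")
read_cur = conn.cursor()
read_cur.execute("SELECT id FROM b")

for (row_id,) in read_cur:
    # the SELECT cursor is still stepping through table B, so writing to the
    # same table on the same connection can raise
    # sqlite3.OperationalError: database table is locked
    conn.execute("UPDATE b SET flag = 1 WHERE id = ?", (row_id,))

# fetching all rows first finishes the read statement and avoids the clash
for (row_id,) in read_cur.execute("SELECT id FROM b").fetchall():
    conn.execute("UPDATE b SET flag = 1 WHERE id = ?", (row_id,))
conn.commit()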
I have run into this problem before too. It often occurs when you have a cursor and connection open and your program crashes before you can close them properly. In some cases the following function can be used to make sure the database is unlocked, even if it was not properly committed and closed beforehand:
from sqlite3 import dbapi2 as sqlite

def unlock_db(db_filename):
    """Replace db_filename with the name of the SQLite database."""
    connection = sqlite.connect(db_filename)
    connection.commit()
    connection.close()
Maybe your application terminated prematurely after a SQLite transaction began. Look for stale -journal files in the directory and delete them.
It might be worth skimming through the documentation as well.
Deleting -journal files sounds like bad advice. See this explanation.
I've also seen this error when the db file is on an NFS mounted file system.
I'm writing a python script to process some csv data and put it into a sqlite db which I'm accessing through sqlalchemy.
The calculations are currently implemented in two parts. The second part depends on the results of part one already existing in the database. Rewriting the script from scratch to resolve this dependency would be a pain and I'd like to avoid it.
def part_one():
    # does stuff
    session.commit()

def part_two():
    # does stuff, including querying part_one's results
    # sometimes this function fails and rolls back
    session.commit()
If part_two fails, I want to roll back part_two AND part_one.
Since part_two depends on data existing in the db, I think I'm forced to commit in part_one. Otherwise I could obviously just reuse the same session and roll everything back together.
I tried messing about with session.begin_nested but didn't get anywhere with that. Is there a way to achieve what I'm trying to do? I need to either be able to session.query against uncommitted changes (which doesn't seem possible) or roll back a previously committed transaction.
OK, I made this much more complicated than it needed to be. What I was looking for was apparently session.flush, which performs all the inserts/updates/deletes of part_one without committing anything.
def part_one():
    # does stuff
    session.flush()

def part_two():
    # does stuff, including querying part_one's results
    # sometimes this function fails and rolls back
    session.commit()
Works like a charm
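For completeness, a sketch of the full flow under this approach (the engine and session setup here are illustrative placeholders, not from the original script):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite:///data.db")  # placeholder URL
Session = sessionmaker(bind=engine)
session = Session()

def part_one():
    # ... inserts/updates ...
    session.flush()   # pending rows become visible to queries on this session

def part_two():
    # ... queries part_one's flushed-but-uncommitted rows ...
    session.commit()  # the one and only commit: both parts land together

try:
    part_one()
    part_two()
except Exception:
    session.rollback()  # undoes part_two AND part_one
    raise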
I've got a Python script talking to a MySQL database.
This script has been working fine for months.
All of a sudden it isn't actually adding anything to the tables it's supposed to modify.
The script has a lot of print statements and error handlers and it still runs exactly as if it was working, but nothing shows up in the database.
It even prints out "rows affected: 108" or whatever, but when I go look at the database in phpMyAdmin it says there are zero rows in the table.
The only thing it will do is truncate the tables. There's a section at the beginning that truncates the relevant tables so the script can start filling them up again. If I manually create a new row in a table through phpMyAdmin, that row will disappear when the script runs, like it's properly truncating the tables. But nothing after that does anything. It still runs without errors, but it doesn't actually modify the database.
Thanks, yeah, for some reason the script was no longer autocommitting by default.
I added "cnx.autocommit(True)" and it's working again. That would also explain why the TRUNCATEs still took effect: TRUNCATE is a DDL statement in MySQL, and DDL commits implicitly.
I have three programs running, one of which iterates over a table in my database non-stop (over and over again in a loop), just reading from it, using a SELECT statement.
The other programs each have a line where they insert a row into the table and a line where they delete it. The problem is that I often get the error sqlite3.OperationalError: database is locked.
I'm trying to find a solution, but I don't understand the exact source of the problem. Is reading and writing at the same time what makes this error occur? Or the writing and deleting? Maybe both aren't supposed to work together.
Either way, I'm looking for a solution. If it were a single program, I could coordinate the database I/O with mutexes and other multithreading tools, but it's not. How can I wait until the database is unlocked for reading/writing/deleting without using too much CPU?
You need to switch databases. I would use the following:
- PostgreSQL as the database
- psycopg2 as the driver
The syntax is fairly similar to SQLite and the migration shouldn't be too hard for you.
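The practical upside is that PostgreSQL's MVCC lets readers and writers work on the same table at the same time, so the non-stop SELECT loop no longer blocks the inserts and deletes. A minimal sketch of what the psycopg2 side looks like (the connection parameters and table name are placeholders, not from the question):

import psycopg2

conn = psycopg2.connect(host="localhost", dbname="mydb",
                        user="user", password="secret")
cur = conn.cursor()

cur.execute("SELECT * FROM my_table")  # readers don't block writers here
for row in cur.fetchall():
    print(row)

cur.execute("INSERT INTO my_table (value) VALUES (%s)", (42,))
conn.commit()
conn.close()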
As per the sqlite3 documentation http://www.sqlite.org/compile.html#threadsafe:
"When SQLite has been compiled with SQLITE_THREADSAFE=1 or
SQLITE_THREADSAFE=2 then the threading mode can be altered at run-time
using the sqlite3_config() interface together with one of these verbs:
SQLITE_CONFIG_SINGLETHREAD
SQLITE_CONFIG_MULTITHREAD
SQLITE_CONFIG_SERIALIZED "
Can you please help me with the proper Python syntax for configuring a database with SQLITE_THREADSAFE=1 and SQLITE_CONFIG_MULTITHREAD?
Thank you for reading, and apologies for filling up Stack Overflow with such a basic problem.
BTW, if it matters at all: I have multiple threads running, and in each I make several calls to different database connections. The Python script worked well on the Windows machine I originally wrote it on, but now that I have migrated it to an Ubuntu machine I get "ProgrammingError: SQLite objects created in a thread can only be used in that same thread..". I tried connecting with check_same_thread=False, but then I get an error that the database is locked. This is why I need to see if the configs above may help solve my problem; I just have trouble with their syntax.
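For what it's worth, the standard-library sqlite3 module doesn't expose sqlite3_config() at all (and SQLITE_THREADSAFE is a compile-time flag), so one common workaround - not something from this thread - is to give each thread its own connection, for example via threading.local. A sketch, with a placeholder filename and table:

import sqlite3
import threading

_local = threading.local()

def get_conn(db_filename="app.db"):
    # lazily create one connection per thread; sqlite3 connections must not
    # cross threads unless check_same_thread=False is used
    if not hasattr(_local, "conn"):
        _local.conn = sqlite3.connect(db_filename)
    return _local.conn

def worker(n):
    conn = get_conn()
    conn.execute("INSERT INTO log (msg) VALUES (?)", (f"hello from thread {n}",))
    conn.commit()

init = sqlite3.connect("app.db")
init.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
init.commit()
init.close()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()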
I used to be able to run and execute Python using simply an execute statement. This would insert the values 1, 2 into columns A, B accordingly. But starting last week, I got no error, yet nothing happened in my database. No flag, nothing... 1, 2 didn't get inserted or replaced into my table.
connect.execute("REPLACE INTO TABLE(A,B) VALUES(1,2)")
I finally found an article saying that I need commit() if I have lost the connection to the server. So I added:
connect.execute("REPLACE INTO TABLE(A,B) VALUES(1,2)")
connect.commit()
Now it works, but I just want to understand it a little: why do I need this if I know my connection did not get lost?
New to Python - thanks.
This isn't a Python or ODBC issue, it's a relational database issue.
Relational databases generally work in terms of transactions: any time you change something, a transaction is started and is not ended until you either commit or rollback. This allows you to make several changes serially that appear in the database simultaneously (when the commit is issued). It also allows you to abort the entire transaction as a unit if something goes awry (via rollback), rather than having to explicitly undo each of the changes you've made.
You can make this functionality transparent by turning auto-commit on, in which case a commit will be issued after each statement, but this is generally considered a poor practice.
Not committing puts all your queries into one transaction, which is safer (and possibly better performance-wise) when the queries are related to each other. What if the power goes out between two queries that don't make sense independently - for instance, transferring money from one account to another using two UPDATE queries?
You can set autocommit to true if you don't want this, but there aren't many reasons to do that.
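To make the money-transfer example concrete, a minimal sketch using sqlite3 (the table and column names are made up):

import sqlite3

conn = sqlite3.connect("bank.db")
try:
    cur = conn.cursor()
    # both updates belong to one transaction: either both land or neither does
    cur.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (100, 1))
    cur.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (100, 2))
    conn.commit()    # the transfer becomes visible atomically
except Exception:
    conn.rollback()  # on any error, undo both updates so no money goes missing
    raise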