I've got a Python script talking to a MySQL database.
This script has been working fine for months.
All of a sudden it isn't actually adding anything to the tables it's supposed to modify.
The script has a lot of print statements and error handlers, and it still runs exactly as if it were working, but nothing shows up in the database.
It even prints out "rows affected: 108" or whatever, but when I go look at the database in phpMyAdmin it says there are zero rows in the table.
The only thing it will do is truncate the tables. There's a section at the beginning that truncates the relevant tables so the script can start filling them up again. If I manually create a new row in a table through phpMyAdmin, that row will disappear when the script runs, like it's properly truncating the tables. But nothing after that does anything. It still runs without errors, but it doesn't actually modify the database.
Thanks, yeah: for some reason the script was no longer autocommitting by default.
I added "cnx.autocommit(True)" and it's working again.
My basic problem is that I am trying to have two Python programs run simultaneously with access to the same database table. I feel like this should have a simple solution, but it has passed me by so far.
All my attempts at this have ended with the database (SQLite) being locked and the program falling over.
I have tried being clever with the timing of how the programs run, so that as one program opens the connection the other closes it, copying data from one database to another, etc., but this just gets horrible and messy very quickly, and a big goal in my design is to keep latency to an absolute minimum.
The basic structure is as follows: program one is always running and adding to the database, and it works on a milliseconds timeframe.
Program two can be in the multiple-seconds range. Obviously none of my solutions have been able to come close to that.
Any help, steps in the right direction or links to further reading is greatly appreciated!
Cheers
Although your title mentions MySQL, in your question you are only using SQLite. Now, SQLite is a perfectly capable database if you only have a single process accessing it, but it is not good at multiple simultaneous accesses. This is exactly where you need a proper client-server database, like MySQL.
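For example, the same pattern with mysql-connector-python (host, credentials, and table are placeholders) lets both of your programs talk to the server at once, with the server arbitrating the concurrent access:

import mysql.connector  # assumption: the mysql-connector-python driver

cnx = mysql.connector.connect(host="localhost", user="user",
                              password="secret", database="mydb")
cur = cnx.cursor()

# Program one: fast inserts, committed immediately to keep latency low.
cur.execute("INSERT INTO readings (ts, value) VALUES (%s, %s)", (1683000000, 42.0))
cnx.commit()

# Program two (a separate process): reads concurrently without blocking the writer.
cur.execute("SELECT ts, value FROM readings ORDER BY ts DESC LIMIT 10")
rows = cur.fetchall()

cur.close()
cnx.close()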
I'm an Italian developer and this is my first message here, so sorry if I'm doing something wrong.
This is my problem with the pyhs2 module:
I successfully built a connection to my Hive database using the pyhs2.connect method, and everything works; the problem is that the same query sometimes freezes, depending on the width of the 'date' clause I use in my query.
Let me explain: if I run the cur.execute method with the same query, first with the clause
(date >= '2017-03-01' and date <= '2017-05-10')
then with the clause
(date >= '2017-03-01' and date <= '2017-05-11')
(or even without the 'date' clause)
the first occurrence works and returns the correct results, while the second (or third) stays frozen until I manually stop the script.
This behavior is very weird to me because I know there is data after 05-10. Also, when I check my database's running applications, the second query stays pending among the running applications for a while even after it successfully ends, and even once it's done, the Python script stays in a frozen state and never returns the results.
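For reference, my code is roughly like this (host, credentials, and table are anonymized):

import pyhs2  # HiveServer2 client

with pyhs2.connect(host='hive-host', port=10000,
                   authMechanism="PLAIN",
                   user='user', password='secret',
                   database='default') as conn:
    with conn.cursor() as cur:
        # Works with date <= '2017-05-10'; freezes with '2017-05-11' or no date clause.
        cur.execute("select * from events "
                    "where date >= '2017-03-01' and date <= '2017-05-10'")
        for row in cur.fetch():
            print(row)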
I think it might be a timeout problem or something like that, but I've searched both here on this useful website and on the web in general for solutions, and found nothing.
I don't know whether it could be a Hive problem (for example, with the Tez application type) or something in pyhs2 that, after a certain amount of time, can no longer retrieve the results of a query, so I'm asking for your help.
Thanks in advance,
Luca
I have three programs running, one of which iterates over a table in my database non-stop (over and over again in a loop), just reading from it with a SELECT statement.
The other programs each have a line where they insert a row into the table and a line where they delete it. The problem is that I often get the error sqlite3.OperationalError: database is locked.
I'm trying to find a solution, but I don't understand the exact source of the problem (is it reading and writing at the same time that makes this error occur? Or the writing and deleting? Or maybe neither combination is supposed to work?).
Either way, I'm looking for a solution. If it were a single program, I could coordinate the database I/O with mutexes and other multithreading tools, but it's not. How can I wait until the database is unlocked for reading/writing/deleting without using too much CPU?
You need to switch databases. I would use the following:
PostgreSQL as the database
psycopg2 as the driver
The syntax is fairly similar to SQLite's, and the migration shouldn't be too hard for you.
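If it helps, a rough sketch of the same setup on PostgreSQL with psycopg2 (credentials and table are made up); note that psycopg2 uses %s placeholders rather than SQLite's ?:

import psycopg2  # assumption: pip install psycopg2-binary

conn = psycopg2.connect(host="localhost", dbname="mydb",
                        user="user", password="secret")
cur = conn.cursor()

# The reader program: loops over the table with plain SELECTs.
cur.execute("SELECT id, payload FROM jobs")
rows = cur.fetchall()

# The writer programs (separate processes): insert and delete rows;
# PostgreSQL's row-level locking avoids SQLite's whole-database lock.
cur.execute("INSERT INTO jobs (payload) VALUES (%s)", ("work item",))
cur.execute("DELETE FROM jobs WHERE payload = %s", ("work item",))
conn.commit()

cur.close()
conn.close()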
I used to be able to run Python against my database with a simple execute statement. The statement below inserts the values 1 and 2 into columns A and B respectively. But starting last week, I get no error, yet nothing happens in my database. No flag, nothing... 1 and 2 don't get inserted or replaced into my table.
connect.execute("REPLACE INTO TABLE(A,B) VALUES(1,2)")
I finally found an article saying that I need commit(), or my changes are lost when the connection to the server goes away. So I added:
connect.execute("REPLACE INTO TABLE(A,B) VALUES(1,2)")
connect.commit()
Now it works, but I just want to understand it a little bit: why do I need this if I know my connection did not get lost?
New to Python - thanks.
This isn't a Python or ODBC issue, it's a relational database issue.
Relational databases generally work in terms of transactions: any time you change something, a transaction is started and is not ended until you either commit or rollback. This allows you to make several changes serially that appear in the database simultaneously (when the commit is issued). It also allows you to abort the entire transaction as a unit if something goes awry (via rollback), rather than having to explicitly undo each of the changes you've made.
You can make this functionality transparent by turning auto-commit on, in which case a commit will be issued after each statement, but this is generally considered a poor practice.
Not committing puts all your queries into one transaction, which is safer (and possibly better performance-wise) when the queries are related to each other. What if the power goes out between two queries that don't make sense independently? For instance, transferring money from one account to another using two UPDATE queries.
You can set autocommit to true if you don't want this behavior, but there aren't many reasons to do that.
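For example, here is that transfer sketched as one transaction with sqlite3 (the database, table, and account ids are made up); either both updates land together or neither does:

import sqlite3

conn = sqlite3.connect("bank.db")  # hypothetical database
try:
    conn.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
    conn.commit()    # both updates become visible at once
except Exception:
    conn.rollback()  # on any failure, neither update is applied
    raise
finally:
    conn.close()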
I get "database table is locked" error in my sqlite3 db. My script is single threaded, no other app is using the program (i did have it open once in "SQLite Database Browser.exe"). I copied the file, del the original (success) and renamed the copy so i know no process is locking it yet when i run my script everything in table B cannot be written to and it looks like table A is fine. Whats happening?
-edit-
I fixed it, but I'm unsure how. I noticed the code wasn't doing the correct things (I had copied the wrong field), and after fixing it up and cleaning it, it magically started working again.
-edit2-
Someone else posted, so I might as well update. I think the problem was that I was trying to execute a statement while another command/cursor was in use.
I have run into this problem before also. It occurs often when you have a cursor and connection open and then your program crashes before you can close it properly. In some cases the following function can be used to make sure that the database is unlocked, even after it was not properly committed and closed beforehand:
from sqlite3 import dbapi2 as sqlite

def unlock_db(db_filename):
    """Replace db_filename with the name of the SQLite database."""
    connection = sqlite.connect(db_filename)
    connection.commit()
    connection.close()
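For example, calling unlock_db("mydata.db") on the stuck file simply opens a fresh connection, commits, and closes it cleanly, which can release a leftover lock.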
Maybe your application terminated prematurely after a SQLite transaction began. Look for stale -journal files in the directory and delete them.
It might be worth skimming through the documentation as well.
Deleting -journal files sounds like bad advice. See this explanation.
I've also seen this error when the db file is on an NFS mounted file system.