I'm using Python and MySQLdb to add rows to my database. It seems that when my script exits, the rows get deleted. My last lines before the script exits do a "select *" on the table, which shows my one row. When I re-run the script, the first lines (after opening the connection) do the same "select *" and return zero results. I'm really at a loss here. I've been working on this for about two hours and can't understand what could be accessing my database.
Also, between running the scripts, I run the "select *" manually from a terminal with zero results.
If I manually add a row from the terminal, it seems to last.
The query to insert the row:
cursor.execute("INSERT INTO sessions(username, id, ip) VALUES (%s, %s, %s)", (username, SessionID, IP]))
The query I use to check the data:
cursor.execute("select * from sessions")
print cursor.fetchall()
This shows the row before the program exits, then shows nothing when the program is run again.
Thanks in advance for all the help.
Looks like you need to call connection.commit() on your changes after you execute the query (replace connection with your DB connection variable).
The sqlite3 documentation describes this behaviour, and it applies to MySQLdb as well: http://docs.python.org/library/sqlite3.html
Connection.commit():
This method commits the current transaction. If you don’t call this method, anything you did since the last call to commit() is not visible from other database connections. If you wonder why you don’t see the data you’ve written to the database, please check you didn’t forget to call this method.
Check this other question: Python MySQLdb update query fails
You can find some examples on how to commit, how to connect using autocommit, etc.
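A minimal sketch of the fix, assuming MySQLdb and placeholder connection details and values:

import MySQLdb

# placeholder connection parameters - adjust for your setup
conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cursor = conn.cursor()
cursor.execute("INSERT INTO sessions(username, id, ip) VALUES (%s, %s, %s)",
               ("alice", "abc123", "10.0.0.1"))
conn.commit()  # without this, the INSERT is rolled back when the connection closes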
Related
I am trying to update a SQL Server table through Python, but unfortunately it does not update.
I get a success message, but no data is actually updated.
If I call the same SQL script from within SQL Server, it updates correctly.
Here is my Python code:
import pymssql
import pandas as pd

PredString = '99'
conn = pymssql.connect(server="MyServer", database="MyDB", port="1433", user="****", password="******")
dfUpdate = pd.read_sql("EXEC UpdatePredictions '" + PredString + "'", conn)
conn.close()
print(dfUpdate)
This is the SQL Server stored procedure:
alter procedure UpdatePredictions
(@PredString varchar(max))
as
begin
update MyTable
set PredMths = @PredString
select 'Updated.'
end
When I run the Python code I get "Updated", but no record is actually updated.
But when I call it from SQL Server:
EXEC UpdatePredictions '99'
I get the message "Updated" and the records are actually updated.
What am I doing wrong here? How can I get Python to update the table?
Thanks to the guys who gave the answer in the comments.
Since no one has posted it as an answer, I will, so I can mark it and other people can find the answer easily in the future.
The problem was that the Python connection wasn't committing the update statement.
Therefore I have to add this line after sending the update:
conn.commit()
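Putting it together, the code from the question with the one missing line added:

import pymssql
import pandas as pd

PredString = '99'
conn = pymssql.connect(server="MyServer", database="MyDB", port="1433", user="****", password="******")
dfUpdate = pd.read_sql("EXEC UpdatePredictions '" + PredString + "'", conn)
conn.commit()  # persist the UPDATE performed inside the stored procedure
conn.close()
print(dfUpdate)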
I always have to keep the SQL INSERT statement in my Python code in order to read the data from the database. Doesn't keeping the INSERT statement in the code amount to inserting the same data multiple times? I think the insertion statement should be run once to insert the data, after which the data should be readable from the database. Whenever I omit the insertion statement from my code, I am not able to read the data from the database, as though the script had never been run before.
Can someone please help me understand why this happens?
Below is the code:
#!/usr/bin/python
import sqlite3
conn = sqlite3.connect('test.db')
print "Opened database successfully"
conn.execute("INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY) \ VALUES (1, 'Paul', 32, 'California', 20000.00 )")
conn.execute("INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY) \ VALUES (2, 'Allen', 25, 'Texas', 15000.00 )")
First point: you should use a cursor instead of calling connection.execute, which is not part of the DB-API 2 standard.
So you want:
conn = sqlite3.connect('test.db')
c = conn.cursor()
c.execute(<your ssql statement here>)
Second point: nothing is really written to your db until you commit your transaction, so after your inserts you need :
conn.commit()
Note that all this is very clearly explained in the FineManual with a complete example, so please have mercy and read the doc before anything else.
Third point: your "test.db" file will be looked up (and created if it does not exist) in whatever the current working directory is, so always use an absolute path, because you cannot rely on where / how your script is called to be sure you're using the expected database.
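Putting the three points together, a minimal sketch (the absolute path is a placeholder; sqlite3 uses ? placeholders for parameters):

#!/usr/bin/python
import sqlite3

# absolute path, so the script always opens the same database file
conn = sqlite3.connect('/home/me/data/test.db')
c = conn.cursor()
c.execute("INSERT INTO COMPANY (ID, NAME, AGE, ADDRESS, SALARY) VALUES (?, ?, ?, ?, ?)",
          (1, 'Paul', 32, 'California', 20000.00))
conn.commit()  # nothing is written to the file until this call
conn.close()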
So, I have the following code that inserts the data of an old database to a new one:
...
cur_old.execute("""SELECT DISTINCT module FROM all_students_users_log_course266""")
module_rows = cur_old.fetchall()
for row in module_rows:
    cur_new.execute("""INSERT INTO modules(label) SELECT %s WHERE NOT EXISTS (SELECT 1 FROM modules WHERE label=%s)""", (row[0], row[0]))
...
The last line executes a query where labels are inserted into the new database table. I tested this query on pgAdmin and it works as I want.
However, when I execute the script, nothing is inserted into the modules table. (Actually the sequences are updated, but no data is stored in the table.)
Do I need to do anything else after I call the execute method from the cursor?
(Ps. The script is running till the end without any errors)
You forgot to do connection.commit(). Any alteration in the database has to be followed by a commit on the connection. For example, the sqlite3 documentation states it clearly in the first example:
# Save (commit) the changes.
conn.commit()
And the first example in the psycopg2 documentation does the same:
# Make the changes to the database persistent
>>> conn.commit()
As Evert said, the commit() was missing. An alternative to always specifying it in your code is using the autocommit feature.
http://initd.org/psycopg/docs/connection.html#connection.autocommit
For example like this:
with psycopg2.connect("...") as dbconn:
    dbconn.autocommit = True
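A fuller sketch of the same idea (placeholder DSN and example value; autocommit is set before any statement opens a transaction):

import psycopg2

# placeholder DSN - adjust for your setup
with psycopg2.connect("dbname=newdb user=me") as dbconn:
    dbconn.autocommit = True  # every statement now commits as soon as it runs
    cur = dbconn.cursor()
    cur.execute("INSERT INTO modules(label) VALUES (%s)", ('module-1',))
    # no dbconn.commit() needed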
My set-up:
1. MySQL server.
2. A host running a Python script.
(1) and (2) are different machines on the network.
The python script generates data which must be stored in a MySQL-database.
I use this (example-)code to achieve that:
import MySQLdb as mdb

def sqldata(date, result):
    con = mdb.connect('sql.lan', 'demouser', 'demo', 'demo')
    with con:
        cur = con.cursor()
        cur.execute('INSERT INTO tabel(titel, nummer) VALUES(%s, %s)', (date, result))
The script generates one data point approximately every minute, so a new connection is opened and closed every minute. I'm wondering if it would be a better idea to open the connection at the start of the script and only close it when the script terminates, effectively leaving the connection open indefinitely.
This then obviously raises the question of how to handle/recover when the SQL server "leaves" the network (e.g. due to a reboot) for a while.
While typing my question this question appeared in the "Similar Questions" section. It is, however, from 2008 and possibly outdated, and the 4 answers it received seem to contradict each other.
What are the current insights in this matter?
Well, the referenced answer makes a valid point, but maybe doesn't answer all your questions. I cannot provide a full running Python script here, but let me explain how I would go about it:
Rule 1: Generally, most MySQL functions return values that you should always check, so that you can react to unwanted behavior.
Rule 2: Open a connection at the beginning of your script and use this one and only connection throughout your script.
Obviously you could check in your sqldata function whether there is an existing connection, and if not, open a new one and store it in the global con object.
if not con:
    con = mdb.connect('sql.lan', 'demouser', 'demo', 'demo')
And if there is a connection already, you can check its "up status" by performing a simple query with a fixed expected result, to see whether the SQL server is still responding.
if con:
    try:
        cur = con.cursor()
        cur.execute('SELECT COUNT(*) FROM tabel')
        cur.fetchone()  # the server answered, so the connection is alive
    except mdb.OperationalError:
        # the server is gone; open a fresh connection
        con = mdb.connect('sql.lan', 'demouser', 'demo', 'demo')
(Note that in MySQLdb a lost connection shows up as an exception rather than a return value, which is why the check above is a try/except.)
So CHECK, CHECK and CHECK. You should check everything you get back from a function to have good error handling. Using a connection or a cursor without checking it first can leave you calling methods on None and crash your script.
And the last BIG HINT I can give you is to use multi-row inserts. You can insert hundreds of rows with a single call; the safe way to do that from Python is cursor.executemany(), which applies one parameterized INSERT to a whole sequence of rows:
# consider the rows to insert would be collected like this
rows = [("First Song", 1), ("Second Song", 2), ("Third Song", 3)]
# then this will insert 3 rows with one call
affected = cur.executemany('INSERT INTO tabel (titel, nummer) VALUES (%s, %s)', rows)
con.commit()
# and now you can check the affected row count for any error
if affected == len(rows):
    ....
cursor = connection.cursor()
cursor.execute("UPDATE public.rsvp SET status=TRUE WHERE rsvp_id=%s", [rsvp_id])
cursor.execute("SELECT status, rsvp_id FROM public.rsvp WHERE rsvp_id=%s", [rsvp_id])
row = cursor.fetchall()
When I execute this in my Django project, I get the row returned as I expect to see it, but later, when I run a SELECT query for the same row, it appears as though the UPDATE was never really run. In my code, the column "status" defaults to NULL; after this runs, I still see NULL in my table.
You didn't specify what database you're dealing with, which may change the answer somewhat. However, with most database connections you need to finish with connection.commit() to really save changes on the database. This includes both update and insert operations. Failing to commit() usually results in a rollback of the actions.
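With the code from the question, that would look like this (assuming connection is a plain DB-API connection object, as it appears to be):

cursor = connection.cursor()
cursor.execute("UPDATE public.rsvp SET status=TRUE WHERE rsvp_id=%s", [rsvp_id])
connection.commit()  # make the UPDATE permanent before anything else queries the row
cursor.execute("SELECT status, rsvp_id FROM public.rsvp WHERE rsvp_id=%s", [rsvp_id])
row = cursor.fetchall()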