MariaDB FOR UPDATE doesn't work correctly - Python

I was testing MariaDB's FOR UPDATE with Python's MySQLdb and it doesn't work correctly for me. The same rows are fetched twice when two Python scripts run simultaneously as cron jobs. When I test on two MySQL consoles everything works fine. I am using InnoDB. My code:
cursor.execute("start transaction")
cursor.execute("SELECT id FROM t1 where tw1=2 AND tw2=1 AND tw3=3 order by id limit 1000 FOR UPDATE")
rows = [r for r in cursor.fetchall()]
ids = []
for row in rows:
ids.append(str(row['id']))
for i in range(2):
cursor.execute("SELECT id FROM t2 where tw1="sth" AND tw2 =0 AND tw3=0 LIMIT 1 FOR UPDATE")
d = cursor.fetchone()
if not d:
#here: insert to t2 and get last id as d_id
cursor.execute("update t2 SET tu1=1,tu2=500,tu3=0 WHERE id = %s" % d_id)
for row in rows:
#job with rows
cursor.execute( "Update t1 SET tw1=tw1+1 WHERE id IN (" + ','.join(ids) + ")")
cursor.execute("commit")
tw1 in t1 should be at most 3, but it is very often 4. When I write the ids to a file I see the same ids in two different program runs. What am I doing wrong?
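For reference, here is the same flow written with parameterized queries and explicit autocommit handling on the MySQLdb connection (a sketch only, not a diagnosis of the duplicate-row problem; the table and column names are taken from the question, and a DictCursor is assumed since the question indexes rows by 'id'):

import MySQLdb
import MySQLdb.cursors

db = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="db",
                     cursorclass=MySQLdb.cursors.DictCursor)
db.autocommit(False)          # make sure every statement runs inside the transaction
cursor = db.cursor()

cursor.execute(
    "SELECT id FROM t1 WHERE tw1=%s AND tw2=%s AND tw3=%s "
    "ORDER BY id LIMIT 1000 FOR UPDATE",
    (2, 1, 3))
ids = [row['id'] for row in cursor.fetchall()]

if ids:
    placeholders = ','.join(['%s'] * len(ids))
    cursor.execute("UPDATE t1 SET tw1 = tw1 + 1 WHERE id IN (%s)" % placeholders, ids)

db.commit()                   # releases the row locks taken by FOR UPDATE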


Python running SQL SELECT with var does not work

I have spent at least 2 hours on this
self.ac = 14786894668
myCode = str(self.ac)
query = """select * from myTable where AC_CD = '%s' """ % (myCode)
print(query)
res = self.mycursor.execute(query)
print(res)
for row in self.mycursor.fetchall():
    print(row)
This does not give me the DB result.
print(query) gives me:
select * from myTable where AC_CD = '14786894668'
print(res) gives me: <pyodbc.Cursor object at 0x04F3E6A0>
which is correct, and when I copy and paste the exact output of print(query):
select * from myTable where AC_CD = '14786894668'
into my DB UI, it works and I see the rows and all the data.
I should note that this code does not give me any rows:
for row in self.mycursor.fetchall():
    print(row)
OK, I figured out the problem, and of course I am kicking myself for wasting so much time.
The AC_CD was generated in the QA environment, but my Python code was executing the query against the Staging DB, so of course it couldn't find it.
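As an aside, with pyodbc the value is usually passed as a query parameter rather than interpolated into the SQL string (a small sketch using the names from the question):

query = "select * from myTable where AC_CD = ?"   # pyodbc uses ? placeholders
self.mycursor.execute(query, (myCode,))
for row in self.mycursor.fetchall():
    print(row)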

How to build a dynamic SQL query in Python and use executemany() to insert?

In my code, I am selecting 10,000 rows of data from a MySQL table and building a list of 100 rows to insert at a time. I am inserting the data into a table on another server. I want to be able to use this code for tables with different values and columns. How can I do this?
Here is the code I have so far:
while outerIndex<outerLoops:
ts1 = time.time()
ts2 = 0
sqlReadRows = 'SELECT * FROM `user_session_0805` WHERE record_time >=%s AND record_time < %s ORDER BY record_time ASC LIMIT %s'
readCur.execute(sqlReadRows,(startTime, maxTime, selectLimit))
dataResults = readCur.fetchall()
innerLoops = len(dataResults)
innerIndex = 0
batch = []
while innerIndex<innerLoops:
if len(dataResults) <100:
for row in dataResults:
if row:
batch.append(
row
)
else:
for i in range(innerIndex, innerIndex+100):
if dataResults[i]:
batch.append(
dataResults
)
else:
break
innerIndex+=100
sqlWrite = # Generate sql
if batch:
writeCur.executemany(sqlWrite, batch)
cnx2.commit()
startTime = batch[-1][1]
ts2 = time.time()
print 'Took %s secs to insert %s rows of data'%(int(ts2-ts1), len(batch))
outerIndex+=1
I'm fairly new to Python, so I would appreciate any helpful advice too!
You don't do any modifications to the data, right? Then you can insert the SELECT result directly into a new table:
INSERT INTO newTable
SELECT *
FROM `user_session_0805`
WHERE record_time >= %s
  AND record_time < %s
ORDER BY record_time ASC
LIMIT %s
Note that newTable has to exist already for a plain INSERT ... SELECT. If you want MySQL to create it for you, use CREATE TABLE newTable AS SELECT ... instead; in that case you should run this first:
DROP TABLE IF EXISTS newTable;
Update
You can export the values to a CSV file by using:
SELECT *
FROM `user_session_0805`
WHERE record_time >= %s
  AND record_time < %s
ORDER BY record_time ASC
LIMIT %s
INTO OUTFILE 'c:/myFile.csv'
I ended up writing the SQL using this:
if batch:
    key = ""
    key = '({})'.format(','.join(elem for elem in batch[0]))
    print key
    value = '({})'.format(','.join("'" + str(eleme) + "'" for eleme in batch[0].values()))
    print value
    sqlWrite = []
    sqlWrite.append("INSERT IGNORE INTO %s " % writeTable)
    sqlWrite.append("".join(key))
    sqlWrite.append(" VALUES ")
    sqlWrite.append("".join(value))
    sql = "".join(sqlWrite)
    print sql
if batch:
    writeCur.executemany(sql, batch)
    cnx2.commit()
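For comparison, here is a sketch of how the dynamic INSERT could be built with %s placeholders so that executemany() substitutes each row's values itself (it assumes the rows come back as dicts, e.g. from a DictCursor, and that writeTable holds the target table name, as in the code above):

if batch:
    columns = list(batch[0].keys())                  # column names from the first row
    placeholders = ', '.join(['%s'] * len(columns))  # one %s per column
    sql = "INSERT IGNORE INTO %s (%s) VALUES (%s)" % (
        writeTable, ', '.join(columns), placeholders)
    # executemany expects one parameter sequence per row
    params = [[row[col] for col in columns] for row in batch]
    writeCur.executemany(sql, params)
    cnx2.commit()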

sqlite3 python unexpected termination

I have set up a sqlite3 database and filled it with some data (~4 million records, ~1.2 GB of data).
Then I run some queries (selects/deletes/updates).
The problem is that sometimes, after the insertions, the script stops without an error. Sometimes it runs normally until the end.
These are the types of queries I run:
from __future__ import print_function
import sqlite3
import csv
import os
import glob
import sys
import time

db = 'test.db'
conn = sqlite3.connect(db)
conn.text_factory = str  # allows utf-8 data to be stored
c = conn.cursor()
i = 0

### traverse the directory and process each .csv file
##print("debug")
csvfile = '/home/Desktop/Untitled Folder/Crimes_-_2001_to_present.csv'
with open(csvfile, "rb") as f:
    reader = csv.reader(f)
    t = time.time()
    header = True
    for row in reader:
        if header:
            # gather column names from the first row of the csv
            header = False
            sql = "DROP TABLE IF EXISTS test_table"
            c.execute(sql)
            #print("debug 1")
            sql = "CREATE TABLE test_table (ID INTEGER, FBI_Code INTEGER, Updated_On TEXT, District TEXT, Beat INTEGER, Primary_Type TEXT, Location BLOB, Latitude REAL, Arrest INTEGER, Domestic INTEGER, Longitude REAL, Community_Area INTEGER, Case_Number INTEGER, Block TEXT, Location_Description TEXT, Ward INTEGER, IUCR INTEGER, Year INTEGER, Date TEXT, Y_Coordinate INTEGER, Description TEXT, X_Coordinate INTEGER);"
            c.execute(sql)
            #print("debug 2")
            insertsql = "INSERT INTO test_table VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)"
            rowlen = len(row)
            #print("debug 3")
        else:
            # skip lines that don't have the right number of columns
            #print("debug 4")
            #if len(row) == rowlen:
            #print("debug 5")
            try:
                c.execute(insertsql, row)
            except:
                print("problem in row %d" % i)
                print(row)
                continue
            # print("debug 6")
            i += 1
            if i == 1000:
                conn.commit()
            #### if i == 4000000:
            ####     break
            ## #print(row)

conn.commit()
print('\nTime for Insertions TOTAL~> \n')
print(float(time.time() - t))
print('\nTime for Insertions per Query~> \n')
print(float(time.time() - t) / i)

rows = list()
print('\nTime for Selections ~> Domestic\n')
t = time.time()
c.execute("SELECT * FROM test_table WHERE Domestic == 'false'")
rows = c.fetchall()
print(float(time.time() - t))
print(len(rows))

del rows
rows = list()
print('\nTime for Selections ~> Arrests\n')
t = time.time()
c.execute("SELECT * FROM test_table WHERE Arrest == 'false'")
rows = c.fetchall()
print(float(time.time() - t))
print(len(rows))

del rows
rows = list()
print('\nTime for Selections ID~> \n')
t = time.time()
c.execute("SELECT * FROM test_table WHERE ID < 9938614")
rows = c.fetchall()
print(float(time.time() - t))
print(len(rows))

del rows
rows = list()
print('\nTime for Selections ~> Primary_Type\n')
t = time.time()
c.execute("SELECT * FROM test_table WHERE Primary_Type == 'BATTERY'")
rows = c.fetchall()
print(float(time.time() - t))
print(len(rows))

del rows
rows = list()
print('\nTime for Selections Year~> \n')
t = time.time()
c.execute("SELECT * FROM test_table WHERE Year <= 2014")
rows = c.fetchall()
print(float(time.time() - t))
print(len(rows))

del rows
rows = []
print('\nTime for Updates ~> YEAR\n')
t = time.time()
c.execute("UPDATE test_table SET Year = '2016' WHERE Year == '2014'")
print(float(time.time() - t))

print('\nTime for Selections Year~> \n')
t = time.time()
c.execute("SELECT * FROM test_table WHERE Year <= 2014")
rows = c.fetchall()
print(float(time.time() - t))
print(len(rows))

print('\nTime for DELETIONS ~> Domestic\n')
t = time.time()
c.execute("DELETE FROM test_table WHERE Domestic == 'false'")
rows = c.fetchall()
print(float(time.time() - t))
print(len(rows))

del rows
c.close()
conn.close()
Every time I reassign the rows list, because after some queries I run out of memory. But I do not think that's the problem (just in case, I used del rows and reassigned it; it was slower that way). Anyway, after some of these queries the script stops without an error and I cannot figure out why, because sometimes it runs OK.
Edit
I have included the code above. The problem is that after the insertion part, when I run the queries, the script terminates without any error.
For example, it gets as far as:
...
Time for Selections ~> Arrests
123.231
3928182
and then it terminates. In the first approach I did not delete the list, and Cython produced core dump errors when I tried to re-declare the list. Now that I delete and then re-declare the list, Cython runs OK. My question is: why doesn't Python catch any exceptions?
After the reassignment of the list the garbage collector clears the old data (and it does, as I saw in the Linux system monitor), but the script still crashes without errors. And the most annoying thing is that sometimes it runs fine until the end.
I had exactly the same problem; my solution was to create a new cursor, then handle the SQL SELECTs with one cursor and the inserts/deletes with the other one:
conn = sqlite3.connect(db)
c = conn.cursor()
c2 = conn.cursor()
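As an aside, if memory is the real limit here (the question mentions running out of memory on multi-million-row results), the selects can be done in chunks with fetchmany() instead of fetchall(); a small sketch using the table from the question:

c.execute("SELECT * FROM test_table WHERE Domestic == 'false'")
count = 0
while True:
    chunk = c.fetchmany(10000)   # pull 10,000 rows at a time instead of all at once
    if not chunk:
        break
    count += len(chunk)          # process the chunk here instead of keeping every row
print(count)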

Python - Continue Code Execution After Popping Up MsgBox?

I'm fairly new to Python. Here's a script I have that gathers info from our MySQL server hosting our Helpdesk tickets, and will pop up a message box (using EasyGUI's "msgbox()" function) whenever a new ticket arrives.
The issue is that I want my program to continue processing after the popup, regardless of whether the user clicks "OK" or not, even if that means message boxes could keep popping up over each other and must be dismissed one by one; that would be fine with me.
I looked into threading, and either it doesn't work or I did something wrong and need a good guide. Here's my code:
import MySQLdb
import time
from easygui import *

# Connect
db = MySQLdb.connect(host="MySQL.MyDomain.com", user="user", passwd="pass", db="db")
cursor = db.cursor()

# Before-and-after arrays to compare; a change means a new ticket arrived
IDarray = [0, 0, 0]
IDarray_prev = [0, 0, 0]

# Compare the latest 3 tickets since more than 1 may arrive in my time interval
cursor.execute("SELECT id FROM Tickets ORDER BY id DESC limit 3;")
numrows = int(cursor.rowcount)
for x in range(0, numrows):
    row = cursor.fetchone()
    for num in row:
        IDarray_prev[x] = int(num)
cursor.close()
db.commit()

while 1:
    cursor = db.cursor()
    cursor.execute("SELECT id FROM Tickets ORDER BY id DESC limit 3;")
    numrows = int(cursor.rowcount)
    for x in range(0, numrows):
        row = cursor.fetchone()
        for num in row:
            IDarray[x] = int(num)
    if IDarray != IDarray_prev:
        cursor.execute("SELECT Subject FROM Tickets ORDER BY id DESC limit 1;")
        subject = cursor.fetchone()
        for line in subject:
            # -----------------------------------------
            # STACKOVERFLOW, HERE IS THE MSGBOX LINE!!!
            # -----------------------------------------
            msgbox("A new ticket has arrived:\n" + line)
    # My time interval -- checks the database every 8 seconds:
    time.sleep(8)
    IDarray_prev = IDarray[:]
    cursor.close()
    db.commit()
You can use Python GTK+. It offers non-modal dialogs using
set_modal(False)
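For illustration, a rough sketch of a non-modal message dialog with PyGObject/GTK 3 (this is an assumption about the setup, since the answer only names set_modal(False); the GTK event loop also has to be serviced from the polling loop for the dialog to stay responsive):

import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

def show_ticket_popup(text):
    # show() returns immediately, unlike run(), so the polling loop keeps going
    dialog = Gtk.MessageDialog(message_type=Gtk.MessageType.INFO,
                               buttons=Gtk.ButtonsType.OK,
                               text=text)
    dialog.set_modal(False)
    dialog.connect("response", lambda d, r: d.destroy())
    dialog.show()

# inside the while loop, after show_ticket_popup(...), pump pending GTK events:
while Gtk.events_pending():
    Gtk.main_iteration()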

Inserting rows while fetching (from another table) in SQLite

I'm getting this error no matter what with Python and SQLite.
File "addbooks.py", line 77, in saveBook
conn.commit()
sqlite3.OperationalError: cannot commit transaction - SQL statements in progress
The code looks like this:
conn = sqlite3.connect(fname)
cread = conn.cursor()
cread.execute('''select book_text from table''')
while True:
    row = cread.fetchone()
    if row is None:
        break
    ....
    for entry in getEntries(doc):
        saveBook(entry, conn)
I can't do a fetchall() because the table and column sizes are big and memory is scarce.
What can be done without resorting to dirty tricks (such as getting the rowids into memory, which would probably fit, and then selecting the rows one by one)?
The problem is that you've left the connection in auto-commit mode. Wrap a single transaction around the whole lot so that a commit only happens after you've done all the updates, and it should all work fine.
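A minimal sketch of that approach, assuming a table called books and that getEntries() works on the fetched text (both hypothetical here, matching the question's elided code), with saveBook() doing only its INSERTs and the single commit moved to after the read loop:

import sqlite3

conn = sqlite3.connect(fname)
cread = conn.cursor()
cread.execute("select book_text from books")
while True:
    row = cread.fetchone()
    if row is None:
        break
    for entry in getEntries(row[0]):
        saveBook(entry, conn)   # saveBook must not call conn.commit() itself
conn.commit()                    # one commit, after the read cursor is exhausted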
I don't know if this counts as a "dirty trick" too ;-)
My solution to this problem is to use a SELECT ... LIMIT clause, assuming you have an integer primary key field id:
current_id = 0
while True:
    cread.execute('select id, book_text from table where id > ? order by id limit 2', (current_id,))
    results = cread.fetchall()
    if not results:
        break
    for row in results:
        # ... (save book) ...
        current_id = row[0]
The problem is that there should be no more than a single active cursor for a connection.
The solution is to use a new connection for the updates.
Unfortunately I do not remember the exact place in the docs where I read it, so I cannot prove it.
Update:
The following code works on my Windows XP:
import sqlite3
import os

conn1 = sqlite3.connect('test.db')
cursor1 = conn1.cursor()
conn2 = sqlite3.connect('test.db')
cursor2 = conn2.cursor()

cursor1.execute("CREATE TABLE my_table (a INT, b TEXT)")
cursor1.executemany("INSERT INTO my_table (a, b) VALUES (?, NULL);", zip(range(5)))
conn1.commit()

cursor1.execute("SELECT * FROM my_table")
for a, b in cursor1:
    cursor2.execute("UPDATE my_table SET b='updated' WHERE a = ?", (a, ))
conn2.commit()

print "results:"
print 10 * '-'
cursor1.execute("SELECT * FROM my_table")
for a, b in cursor1:
    print a, b

cursor1.close()
conn1.close()
cursor2.close()
conn2.close()
os.unlink('test.db')
And returns the following as expected:
results:
----------
0 updated
1 updated
2 updated
3 updated
4 updated
If I move the conn2.commit() into the for loop, I get the same error as you mention:
Traceback (most recent call last):
  File "concurent.py", line 16, in <module>
    conn2.commit()
sqlite3.OperationalError: database is locked
Thus, the solution is to commit once at the end instead of committing after each line.
