OK, I've got this as simple as I can. Everything's working, and I need one last thing before I'm done with this issue.
I am using the sqlite3 module in Python.
I also have very limited SQL experience.
The problem:
I need to take the only value out of an SQL table; the table name is saves, the row id is 0, and the column is named lvl. I need to assign that value to the Python variable lvl. Then, on closure of the program, I need to update the SQL table with the current value of the Python variable lvl (it will take the place of the data I just retrieved; there will also be numerous operations in between).
My code for assigning the value of the Python variable:
conn = sql.connect('databaserm/database')
curs = conn.cursor()
curs.execute('SELECT 0 FROM saves')
lvl = curs.fetchone()
conn.commit
conn.close()
After running this I get the output:
None
And here is my code for adding data to the database on closure:
elif choice == q:
    if choice == q:
        cn = sqlite3.connect('/databaserm/database')
        curs = cn.cursor()
        curs.execute('INSERT INTO saves (lvl) VALUES (?)', lvl)
        cn.commit
        cn.close()
        loop1 = 0
        loop = 10000
        print "Goodbye!"
        sys.exit(0)
After running this with a preloaded database and the previous code omitted, I get a connection error.
I would be overjoyed at any help I'm offered and hope to work out a solution to this soon.
SELECT 0 FROM saves is not sensible SQL: it selects the literal value 0 from every row rather than a column. You probably want something like SELECT lvl FROM saves.
I suggest reading an SQL tutorial.
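For reference, here is a minimal sketch of the load/save pattern you describe, assuming a one-row saves table with a single lvl column (the names come from your question):

import sqlite3

conn = sqlite3.connect('databaserm/database')
curs = conn.cursor()

# Load: the table holds exactly one row, so fetch its lvl value.
curs.execute('SELECT lvl FROM saves')
row = curs.fetchone()
lvl = row[0] if row is not None else 0  # fall back to 0 if the table is empty

# ... the rest of the program runs and changes lvl ...

# Save on exit: UPDATE overwrites the stored value instead of INSERTing a
# new row. Also note that commit() must be called; "conn.commit" without
# parentheses does nothing.
curs.execute('UPDATE saves SET lvl = ?', (lvl,))
conn.commit()
conn.close()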
def overwriteFlat(top, curTable, rawEntrylist, columns):
    entryList = list()
    for value in rawEntrylist:
        entryList.append(value.get())
    conn = sqlite3.connect('data.db')
    c = conn.cursor()
    for i in range(len(columns)):
        if entryList[i] != '':
            c.execute("""UPDATE """+curTable+""" SET """+columns[i]+""" = :"""+columns[i]+""" WHERE """+columns[0]+""" = """ + str(entryList[0]), {columns[i]: entryList[i]})
            print(curTable, columns[i], entryList[i])
    conn.commit()
    c.close()
    conn.close()
    closeWin(top)
Output:
Flat ID 23
Flat Street Test
Flat Street_Number 100
I put in "Test" and "100" so that works. I provide a window for input, the input gets put into here and everything provided gets overwritten in provided ID. Because of print() I see it goes into the right table, it also selects the right column and doesn't throw any exception. But it doesn't update database.
Database not locked.
Variables all valid and work.
No exception is thrown.
Vulnerable to injection, soon as it works I'll change it.
Thanks to @JohnGordon I found the solution.
But in case someone else wants to use variables in sqlite, I will explain how, as this is hardly explained anywhere on the Internet (at least at my beginner-programmer level).
Usually SQL commands work like this and are pretty static:
"UPDATE Your_Table SET Your_Column = :Your_Column WHERE IndexColumn = Your_Index", {Your_Column: Your_Value}
But by using +variable+ string concatenation you can use variables in there,
so it's the same thing but with whatever variable you want:
"UPDATE "+curTable+" SET "+columns[i]+" = :"+columns[i]+" WHERE "+columns[0]+" = " + str(entryList[0]), {columns[i]: entryList[i]}
You can now set the variables curTable, columns, and entryList to whatever you want and don't need a static line for everything.
The same works with INSERT and the other statements too.
Edit (it's now 3 hours later, 1 AM, and I've got the safer way):
NOW THAT YOU'VE GOT THAT, READ THIS
You will still be vulnerable to SQL injection, so you need to change that code to this:
query = "UPDATE "+curTable+" SET "+columns[i]+" = ? WHERE "+columns[0]+" = ?"
c.execute(query, (entryList[i], entryList[0]))
This makes it safer, but as I am not a pro yet, maybe someone can confirm.
Edit: Removed the triple quotes, as they are only needed for multi-line SQL. Thanks for the hint, @Tim Roberts.
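For anyone copying this pattern: placeholders only work for values, never for table or column names, so the identifier part still needs guarding. A common approach (a sketch, not part of my original fix) is to check the names against a fixed whitelist before splicing them in:

import sqlite3

# Hypothetical whitelist of the identifiers this app itself created.
ALLOWED_COLUMNS = {'Flat': ('ID', 'Street', 'Street_Number')}

def safe_update(conn, table, column, value, row_id):
    # Identifiers cannot be bound with ?, so validate them instead.
    if table not in ALLOWED_COLUMNS or column not in ALLOWED_COLUMNS[table]:
        raise ValueError('unexpected table or column name')
    query = 'UPDATE "{}" SET "{}" = ? WHERE "ID" = ?'.format(table, column)
    conn.execute(query, (value, row_id))  # the values still go through placeholders
    conn.commit()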
I have located a piece of code that runs quite slowly (in my opinion) and would like to know what you think. The code in question is as follows and is supposed to:
Query a database and get 2 fields, a field and its value
Populate the object's dictionary with their values
The code is:
query = "SELECT Field, Value FROM metrics " \
"WHERE Status NOT LIKE '%ERROR%' AND Symbol LIKE '{0}'".format(self.symbol)
query = self.db.run(query, True)
if query is not None:
for each in query:
self.metrics[each[0].lower()] = each[1]
The query is run using a db class I created that is very simple:
def run(self, query, onerrorkeeprunning=False):
    # Run query provided and return result
    try:
        con = lite.connect(self.db)
        cur = con.cursor()
        cur.execute(query)
        con.commit()
        runsql = cur.fetchall()
        data = []
        for rows in runsql:
            line = []
            for element in rows:
                line.append(element)
            data.append(line)
        return data
    except lite.Error, e:
        if onerrorkeeprunning is True:
            if con:
                con.close()
            return
        else:
            print 'Error %s:' % e.args[0]
            sys.exit(1)
    finally:
        if con:
            con.close()
I know there are tons of ways of writing this code, and I was trying to keep things simple, but for 24 fields this takes 0.03s, so if I have 1,000 elements that is 30s, and I find that a little too long!
EDIT: On further review, runsql = cur.fetchall() is the line that takes the longest to run.
Any help will be much appreciated.
2nd EDIT: Looking further online, I have found that the issue lies with the fetchall() command and not with my query or the initialization of the DB. Has anybody been able to improve the performance of the result fetching? (Some people mentioned changing the SQL code, but that is not to blame; it runs pretty fast, and the slowness comes when you try to grab the results.)
fetchall() reads all results, and returns them in a temporary list.
Your run() function then just puts all the results into another list.
Your top-level code then copies these values into yet another dictionary.
You should fetch only the rows you need (which can be done directly on the cursor) and handle them directly:
cur.execute("SELECT Field, Value ...")
for row in cur:
self.metrics[row[0].lower()] = row[1]
Note: this distributes the cost of the SQL query over the for iteration; the overall time spent in the database does not change.
This code improves only on the time that would have been spent handling all the temporary variables.
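For completeness, here is a sketch of the whole lookup with the cursor iterated directly and the symbol bound as a parameter (the LIKE '{0}'.format(...) in the question is also an injection risk); the table and column names are taken from the question:

import sqlite3 as lite

def load_metrics(db_path, symbol):
    conn = lite.connect(db_path)
    try:
        cur = conn.cursor()
        # Bind the symbol instead of formatting it into the SQL string.
        cur.execute("SELECT Field, Value FROM metrics "
                    "WHERE Status NOT LIKE '%ERROR%' AND Symbol LIKE ?",
                    (symbol,))
        # Build the dict straight off the cursor; no intermediate lists.
        return dict((field.lower(), value) for field, value in cur)
    finally:
        conn.close()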
I have a strange problem that I'm having trouble both duplicating and solving.
I'm using the pyodbc library in Python to access an MS Access 2007 database. The script is basically just importing a csv file into Access, plus a few other tricks.
I am trying to first save a 'Gift Header', then get the auto-incremented id (GiftRef) that it is saved with, and use this value to save 1 or more associated 'Gift Details'.
Everything works exactly as it should, 90% of the time. The other 10% of the time Access seems to get stuck and repeatedly returns the same value for cur.execute("select last(GiftRef) from tblGiftHeader").
Once it gets stuck, it returns this value for the duration of the script. It does not happen while processing a specific entry or at any specific time in the execution; it seems to happen completely at random.
Also, I know that it is returning the wrong value; in other words, the Gift Headers are being saved and are being given new, unique IDs, but for whatever reason that value is not being returned correctly when called.
SQL = "insert into tblGiftHeader (PersonID, GiftDate, Initials, Total) VALUES "+ str(header_vals) + ""
cur.execute(SQL)
gift_ref = [s[0] for s in cur.execute("select last(GiftRef) from tblGiftHeader")][0]
cur.commit()
Any thoughts or insights would be appreciated.
In Access SQL the LAST() function does not necessarily return the most recently created AutoNumber value. (See here for details.)
What you want is to do a SELECT @@IDENTITY immediately after you commit your INSERT, like this:
import pyodbc
cnxn = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\\Users\\Public\\Database1.accdb;')
cursor = cnxn.cursor()
cursor.execute("INSERT INTO Clients (FirstName, LastName) VALUES (?, ?)", ['Mister', 'Gumby'])
cursor.commit()
cursor.execute("SELECT ##IDENTITY AS ID")
row = cursor.fetchone()
print row.ID
cnxn.close()
Yep! That seems to be a much more reliable way of getting the last id. I believe my initial code was based on the example here, http://www.w3schools.com/sql/sql_func_last.asp, which I suppose I took out of context.
Thanks for the assist! Here is the updated version of my original code (with connection string):
MDB = 'C:\\Users\\Public\\database.mdb'
DRV = '{Microsoft Access Driver (*.mdb)}'
conn = pyodbc.connect('DRIVER={};DBQ={}'.format(DRV,MDB))
curs = conn.cursor()
SQL = "insert into tblGiftHeader (PersonID, GiftDate, Initials, Total) VALUES "+ str(header_vals) + ""
curs.execute(SQL)
curs.commit()
curs.execute("SELECT ##IDENTITY AS ID")
row = curs.fetchone()
gift_ref = row.ID
I have to call an MS SQL Server stored procedure with a table variable parameter.
/* Declare a variable that references the type. */
DECLARE @TableVariable AS [AList];

/* Add data to the table variable. */
INSERT INTO @TableVariable (val) VALUES ('value-1');
INSERT INTO @TableVariable (val) VALUES ('value-2');

EXEC [dbo].[sp_MyProc]
    @param = @TableVariable
This works well in SQL Server Management Studio. I tried the following in Python using pyodbc:
cursor.execute("declare #TableVariable AS [AList]")
for a in mylist:
cursor.execute("INSERT INTO #TableVariable (val) VALUES (?)", a)
cursor.execute("{call dbo.sp_MyProc(#TableVariable)}")
This fails with the following error: error 42000: the table variable must be declared. The variable does not survive across the separate execute steps.
I also tried:
sql = "DECLARE #TableVariable AS [AList]; "
for a in mylist:
sql = sql + "INSERT INTO #TableVariable (val) VALUES ('{}'); ".format(a)
sql = sql + "EXEC [dbo].[sp_MyProc] #param = #TableVariable"
cursor.execute(sql)
With the following error: No results. Previous SQL was not a query.
I had no more luck with
sql = sql + "{call dbo.sp_MyProc(@TableVariable)}"
Does somebody know how to handle this using pyodbc?
Now the root of your problem is that a SQL Server variable has the scope of the batch it was defined in. Each call to cursor.execute is a separate batch, even if they are in the same transaction.
There are a couple of ways you can work around this. The most direct is to rewrite your Python code so that it sends everything as a single batch. (I tested this on my test server and it should work as long as you either add set nocount on or else step over the intermediate results with nextset.)
A more indirect way is to rewrite the procedure to look for a temp table instead of a table variable, and then create and populate that temp table before calling the procedure. A temp table that is not created inside a stored procedure has the scope of the session it was created in.
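Here is a sketch of that temp-table route; sp_MyProc_Temp is a hypothetical rewrite of your procedure that reads from the temp table. Both execute() calls run on the same connection, so the session-scoped #TempList survives between them:

import pyodbc

cnxn = pyodbc.connect(CONN_STRING, autocommit=True)  # CONN_STRING: your connection string
cursor = cnxn.cursor()

# Created outside any stored procedure, #TempList lives until the session ends.
cursor.execute("CREATE TABLE #TempList (val VARCHAR(50))")
cursor.executemany("INSERT INTO #TempList (val) VALUES (?)",
                   [(a,) for a in mylist])

# Call the (hypothetical) rewritten procedure that selects from #TempList.
cursor.execute("{CALL dbo.sp_MyProc_Temp}")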
I believe this error has nothing to do with SQL forgetting the table variable. I've experienced this recently, and the problem was that pyodbc doesn't know how to get a result set back from the stored procedure if the SP also returns counts of the rows affected.
In my case the fix was simply to put SET NOCOUNT ON at the start of the SP.
I hope this helps.
I am not sure if this works, and I can't test it because I don't have MS SQL Server, but have you tried executing everything in a single statement:
cursor.execute("""
    DECLARE @TableVariable AS [AList];
    INSERT INTO @TableVariable (val) VALUES ('value-1');
    INSERT INTO @TableVariable (val) VALUES ('value-2');
    EXEC [dbo].[sp_MyProc] @param = @TableVariable;
""")
I had this same problem, but none of the answers here fixed it. I was unable to get "SET NOCOUNT ON" to work, and I was also unable to get a single batch operation working with a table variable. What did work was to use a temporary table in two batches, but it took all day to find the right syntax. The code which follows creates and populates a temporary table in the first batch, then in the second executes a stored proc using the database name followed by two dots before the stored proc name. This syntax is important for avoiding the error "Could not find stored procedure 'x'. (2812) (SQLExecDirectW)".
def create_incidents(db_config, create_table, columns, tuples_list, upg_date):
    """Executes trackerdb-dev mssql stored proc.

    Args:
        db_config (dict): config .ini file with mssqldb conn.
        create_table (string): temporary table definition to be inserted into 'CREATE TABLE #TempTable ()'
        columns (tuple): columns of the table into which values will be inserted.
        tuples_list (list): list of tuples where each describes a row of data to insert into the table.
        upg_date (string): date on which the items in the list will be upgraded.

    Returns:
        None
    """
    sql_create = """IF OBJECT_ID('tempdb..#TempTable') IS NOT NULL
        DROP TABLE #TempTable;
    CREATE TABLE #TempTable ({});
    INSERT INTO #TempTable ({}) VALUES {};
    """
    columns = '"{}"'.format('", "'.join(item for item in columns))
    # this "params" variable is an egregious offense against security
    # professionals everywhere. Replace it with parameterized queries asap.
    params = ', '.join([str(tupl) for tupl in tuples_list])
    sql_create = sql_create.format(
        create_table
        , columns
        , params)
    msconn.autocommit = True
    cur = msconn.cursor()
    try:
        cur.execute(sql_create)
        cur.execute("DatabaseName..TempTable_StoredProcedure ?", upg_date)
    except pyodbc.DatabaseError as err:
        print(err)
    else:
        cur.close()
    return
create_table = """
int_column int
, name varchar(255)
, datacenter varchar(25)
"""
create_incidents(
db_config = db_config
, create_table = create_table
, columns = ('int_column', 'name', 'datacenter')
, cloud_list = tuples_list
, upg_date = '2017-09-08')
The stored proc uses IF OBJECT_ID('tempdb..#TempTable') IS NULL to check whether the temporary table has been created. If it has, the procedure selects data from it and continues. If the temporary table has not been created, the proc aborts. This forces the stored proc to use a copy of #TempTable created outside the stored procedure itself but in the same session. The pyodbc session lasts until the cursor or connection is closed, and a temporary table created by pyodbc has the scope of the entire session.
IF OBJECT_ID('tempdb..#TempTable') IS NULL
BEGIN
    -- #TempTable gets created here only because SQL Server Management Studio throws errors if it isn't.
    CREATE TABLE #TempTable (
        int_column int
        , name varchar(255)
        , datacenter varchar(25)
    );
    -- This error is thrown so that the stored procedure requires a temporary table created *outside* the stored proc
    THROW 50000, '#TempTable table not found in tempdb', 1;
END
ELSE
BEGIN
    -- the stored procedure has now validated that the temporary table being used is coming from outside the stored procedure
    SELECT * FROM #TempTable;
END;
Finally, note that "tempdb" is not a placeholder, as I thought when I first saw it; "tempdb" is an actual MS SQL Server system database.
Set connection.autocommit = True and use cursor.execute() only once instead of multiple times. The SQL string that you pass to cursor.execute() must contain all 3 steps:
Declaring the table variable
Filling the table variable with data
Executing the stored procedure that uses that table variable as an input
You don't need semicolons between the 3 steps.
Here's a fully functional demo. I didn't bother with parameter passing since it's irrelevant, but it also works fine with this, for the record.
SQL Setup (execute ahead of time)
CREATE TYPE dbo.type_MyTableType AS TABLE(
a INT,
b INT,
c INT
)
GO
CREATE PROCEDURE dbo.CopyTable
#MyTable type_MyTableType READONLY
AS
BEGIN
SET NOCOUNT ON;
SELECT * INTO MyResultTable FROM #MyTable
END
Python
import pyodbc

CONN_STRING = (
    'Driver={SQL Server Native Client 11.0};'
    'Server=...;Database=...;UID=...;PWD=...'
)

class DatabaseConnection(object):
    def __init__(self, connection_string):
        self.conn = pyodbc.connect(connection_string)
        self.conn.autocommit = True
        self.cursor = self.conn.cursor()

    def __enter__(self):
        return self.cursor

    def __exit__(self, *args):
        self.cursor.close()
        self.conn.close()

sql = (
    'DECLARE @MyTable type_MyTableType'
    '\nINSERT INTO @MyTable VALUES'
    '\n(11, 12, 13),'
    '\n(21, 22, 23)'
    '\nEXEC CopyTable @MyTable'
)

with DatabaseConnection(CONN_STRING) as cursor:
    cursor.execute(sql)
If you want to spread the SQL across multiple calls to cursor.execute(), then you need to use a temporary table instead. Note that in that case, you still need connection.autocommit = True.
As Timothy pointed out, the catch is to use nextset().
What I have found is that when you execute() a multi-statement batch, pyodbc checks it for syntax errors and exposes only the first statement's results, not the entire batch, unless you explicitly move ahead with nextset().
Say your query is:
cursor.execute('select 1 '
               'select 1/0')
print(cursor.fetchall())
your result is:
[(1, )]
But as soon as you instruct it to move further into the batch, to the syntactically erroneous part, via the command:
cursor.nextset()
there you have it:
pyodbc.DataError: ('22012', '[22012] [Microsoft][ODBC SQL Server Driver][SQL Server]Divide by zero error encountered. (8134) (SQLMoreResults)')
This solves the issue that I encountered when working with table variables in a multi-statement query.
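As a sketch, applying that to the table-variable batch from the question (names as in the question; if the procedure sets NOCOUNT ON, the skipping loop becomes unnecessary):

cursor.execute("DECLARE @TableVariable AS [AList] "
               "INSERT INTO @TableVariable (val) VALUES ('value-1') "
               "INSERT INTO @TableVariable (val) VALUES ('value-2') "
               "EXEC [dbo].[sp_MyProc] @param = @TableVariable")

# cursor.description is None for the INSERT row-count "results"; advance
# until the procedure's actual result set (if any) shows up.
while cursor.description is None and cursor.nextset():
    pass
rows = cursor.fetchall() if cursor.description else None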
In my Python application I have been using sqlite3.Row as the row factory to index results by name for a while, with no issues. Recently I moved my application to a new server (no code changes), and I discovered that this method of indexing now unexpectedly fails on the new server under one quite specific condition. I cannot see any explanation for it.
The problem seems to occur on the new server when I have the DISTINCT keyword in my select query:
import sqlite3
conn = sqlite3.connect(':memory:')
conn.row_factory = sqlite3.Row
c = conn.cursor()
c.execute('create table test ([name] text)')
c.execute("insert into test values ('testing')")
conn.commit()
c.execute('select [name] from test')
row = c.fetchone()
print row['name'] # works fine on both machines
c.execute('select distinct [name] from test') # add distinct keyword
row = c.fetchone()
print row['name'] # fails on new server (no item with that key)
As you can see, I am able to sandbox this problem using an in-memory database, so the problem has nothing to do with my existing data. Both machines are Debian-based (old: Ubuntu 8.10, new: Debian 5.0.3) and both are running Python 2.5.2. I believe the sqlite3 module is a core part of the Python install, so I do not know how this subtle breakage can occur when the Python versions are identical.
Has anyone got any ideas, or seen anything like this before?
Thanks,
Chris
Try adding the line
print row.keys()
instead of "print row['name']" to see what column 0's actual name is in the second case (it's probably altered by the "DISTINCT" keyword).
Alternatively you can use row[0] in this case, but that's most likely not what you want. :)
I had a different but similar problem, and googling "indexerror no item with that key" led me to this question. In my case the issue was that different sqlite versions appear to handle row key names differently in row_factory = sqlite3.Row mode. In sqlite 3.24.0, a query like:
select table.col
from table
...creates a key in the row dictionary named col. But older versions appear to use the qualified key, table.col. Providing an explicit alias, or not qualifying the column, is a workaround, e.g.:
select table.col as "col"
from table
Or:
select col
from table
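Pulling the workaround back into Python, here is a small sketch using the same in-memory setup as the question, with the explicit alias applied:

import sqlite3

conn = sqlite3.connect(':memory:')
conn.row_factory = sqlite3.Row
c = conn.cursor()
c.execute('create table test ([name] text)')
c.execute("insert into test values ('testing')")

# The alias keeps the key name stable no matter how the sqlite version
# qualifies column names or reacts to keywords like DISTINCT.
c.execute('select distinct [name] as "name" from test')
row = c.fetchone()
print(row.keys())   # ['name']
print(row['name'])  # 'testing'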