I have a Python script to retrieve a value (a table ID) from a PostgreSQL database. The value I'm matching in the name column contains a colon, though, and I believe this is stopping it from working. I've tested this on values without colons and it does get the ID correctly.
The line in question is
cur.execute("SELECT tID from titles where name like 'METEOROLOG:WINDSPEED_F' order by structure, comp1, comp2")
rowswind=cur.fetchall()
When I print rowswind, nothing is returned (just empty brackets).
I have also tried:
cur.execute('SELECT tID from titles where name like "METEOROLOGY:WINDSPEED_F" order by structure, comp1, comp2')
But that comes back with the error
psycopg2.ProgrammingError: column "METEOROLOGY:WINDSPEED_F" does not exist
(it definitely does).
I've also tried escaping the colon every way I can think of (e.g., with a backslash), but nothing works; I just get syntax errors.
Any advice would be welcome. Thanks.
ADDITION 20190429
I've now tried parameterizing the query but also with no success.
wind=('METEOROLOGY:WINDSPEED_F')
sql="SELECT tID from titles where name like '{0}' order by structure, comp1, comp2".format(wind)
I've tried many different combinations of double and single quotes to try and escape the colon with no success.
psycopg2.ProgrammingError: column "METEOROLOGY:WINDSPEED_F" does not exist
You're getting this error because you used double quotes around the target value in your query's WHERE clause; PostgreSQL treats double-quoted text as an identifier (a column or table name), not a string literal:
cur.execute('SELECT tID from titles where name like "METEOROLOGY:WINDSPEED_F" order by structure, comp1, comp2')
You're getting 0 results back here:
cur.execute("SELECT tID from titles where name like 'METEOROLOG:WINDSPEED_F' order by structure, comp1, comp2")
because 0 rows exist with the value 'METEOROLOG:WINDSPEED_F' in the name column. This might just be because METEOROLOGY is misspelled (the final Y is missing).
The way you're using LIKE, you might as well be using =. LIKE is useful when you add the % wildcard to match other values similar to that value (note that _ is itself a single-character wildcard inside a LIKE pattern).
Example:
SELECT *
FROM TABLE
WHERE UPPER(NAME) LIKE 'JOSH%'
This would return results for these values in name: JOSHUA, JoShUa, joshua, josh, JOSH. If I straight up did NAME LIKE 'JOSH' then I would only find results for the exact value of JOSH.
Since your target value is already all caps, try adding an UPPER() to your query (with the spelling corrected), like this:
cur.execute("SELECT tID from titles where UPPER(name) like 'METEOROLOGY:WINDSPEED_F' order by structure, comp1, comp2")
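For completeness, the colon itself never needs escaping; passing the value as a bound parameter, rather than formatting it into the SQL string, sidesteps the quoting problems entirely. A minimal runnable sketch of the idea, using sqlite3 in place of psycopg2 so it needs no server (psycopg2 would use %s rather than ?); the table contents are invented:

```python
import sqlite3

# Illustrative only: the real database is PostgreSQL; sqlite3 is used here
# so the sketch runs standalone. The row below is made up.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE titles (tID INTEGER, name TEXT,"
            " structure TEXT, comp1 TEXT, comp2 TEXT)")
cur.execute("INSERT INTO titles VALUES (7, 'METEOROLOGY:WINDSPEED_F', 'a', 'b', 'c')")

# The colon is just an ordinary character in the value; the driver handles
# all quoting when the value is passed as a parameter.
cur.execute(
    "SELECT tID FROM titles WHERE name = ? ORDER BY structure, comp1, comp2",
    ("METEOROLOGY:WINDSPEED_F",),
)
rowswind = cur.fetchall()
print(rowswind)  # [(7,)]
```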
Related
Although there are various similar questions around this topic, I can't find one that answers my problem.
I am using psycopg to build and insert data into a PostgreSQL database; the INSERT statement lives inside a for loop and is therefore different on each iteration.
insert_string = sql.SQL(
    "INSERT INTO {region}(id, price, house_type) VALUES ({id}, {price}, {house_type})"
).format(
    region=sql.Literal(region),
    id=sql.Literal(str(id)),
    price=sql.Literal(price),
    house_type=sql.Literal(house_type))
cur.execute(insert_string)
The variables region, id, price, house_type are all defined somewhere else inside said for loop.
The error I'm getting is as follows:
psycopg2.errors.SyntaxError: syntax error at or near "'Gorton'"
LINE 1: INSERT INTO 'Gorton'(id, price, house_typ...
^
where 'Gorton' is the value of the variable 'region' at that particular iteration.
From what I can see, psycopg seems to be struggling with the single quotes around Gorton, reporting them as a syntax error.
I have read the docs and can't figure out whether sql.Literal is the correct choice here, or whether I should use sql.Identifier instead.
Thanks in advance
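To illustrate the distinction being asked about (not the asker's code; the table and values here are invented): SQL separates identifiers, written with double quotes, from string literals, written with single quotes, and only the latter can be supplied as values. Table names are identifiers, which is what sql.Identifier produces in psycopg2; sql.Literal produces a quoted value. A runnable sketch of that split, using sqlite3 so it needs no server:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute('CREATE TABLE "Gorton" (id TEXT, price INTEGER, house_type TEXT)')

region = "Gorton"
# The table name is an identifier: double quotes, not single quotes.
# Single quotes make it a string literal, which is what triggered the
# syntax error in the question. Values go through placeholders.
cur.execute(
    'INSERT INTO "{}" (id, price, house_type) VALUES (?, ?, ?)'.format(region),
    ("1", 250000, "terraced"),
)
cur.execute('SELECT price FROM "Gorton"')
prices = cur.fetchall()
print(prices)  # [(250000,)]
```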
I'm building software to combine chemicals into compounds (each compound can have 1, 2, 3 or 4 chemicals), but some chemicals cannot be combined with certain other chemicals.
I have a table in my MySQL db that has the following columns:
chemical_id, chemicalName, and one column for each chemical in my list.
Each row holds one of the chemicals, and the value in each field tells me whether that pair of chemicals can go together in a compound (1) or not (0). So every chemical has both a row and a column, and they were created in the same order. Here (dummy data): https://imgur.com/a/e2Fbq1K
I have a Python list of chemicals_ids, which I'm going to combine with themselves to make compounds of 1, 2, 3 and 4 chems, but I need a function to determine whether any two of them aren't compatible.
I was trying to mess around with INFORMATION_SCHEMA COLUMN_NAME but I'm kinda lost.
A loop around something like this would work, but I can't get the syntax right.
list_of_chemicals = ['ChemName1','ChemName2','ChemName3'] #etc
def verify_comp(a, b):  # will be passed chem names
    mycursor.execute("SELECT chemicalName FROM chemical_compatibility WHERE chemical_id = 'ChemName1' AND 'ChemName2' = 0")
    # etc
I have tried to use %s placeholders, but they seem to work only in certain parts of a MySQL query. I'm a beginner at both Python and SQL, so any light will be much appreciated.
Thanks!
I followed @Akina's suggestion and made a new table containing pairs of chemicals and a compatibility value for each pair.
I also learned that the %s placeholder can only be used for values in cursor execute statements; for table or column names you can splice in a Python variable with string concatenation, like this:
mycursor.execute("SELECT * FROM "+variablename+" WHERE condition = 1")
I'm not worried about SQL injection for this project, nor do I know if everything I say here is 100% correct, but maybe it can help people who are lost nevertheless.
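The pairs-table approach can be sketched end to end. This is a hypothetical reconstruction (table name, column names, and data are invented), shown with sqlite3 so it runs standalone; with MySQL the placeholder would be %s instead of ?:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# One row per (chem_a, chem_b) pair with a 0/1 compatibility flag,
# as suggested by @Akina.
cur.execute("CREATE TABLE compatibility"
            " (chem_a TEXT, chem_b TEXT, compatible INTEGER)")
cur.executemany(
    "INSERT INTO compatibility VALUES (?, ?, ?)",
    [("ChemName1", "ChemName2", 1), ("ChemName1", "ChemName3", 0)],
)

def verify_comp(a, b):
    # Values can always go through placeholders; check both orderings
    # of the pair since only one row is stored per pair.
    cur.execute(
        "SELECT compatible FROM compatibility"
        " WHERE (chem_a = ? AND chem_b = ?) OR (chem_a = ? AND chem_b = ?)",
        (a, b, b, a),
    )
    row = cur.fetchone()
    return bool(row and row[0])

print(verify_comp("ChemName1", "ChemName2"))  # True
print(verify_comp("ChemName1", "ChemName3"))  # False
```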
I am trying to search my SQLite3 database using a Python variable as a search term. The term I'm searching for is part of the contents of a cell in the database (e.g. Smith in a cell containing [Harrison GB, Smith JH]) and is often in the middle of the string.
I have tried to code it as shown below:
def read_from_db():
    c.execute("SELECT authors, year, title, abstract FROM usertable WHERE authors LIKE (?)", (var1,))
    data = c.fetchall()
    print(data)
    for row in data:
        searchlist.append(row)
var1="Smith"
read_from_db()
This should show the results row after row. However, I get 0 results when var1 = "Smith". When I change its value to "Harrison GB, Smith JH", I get all the results.
When I try to solve it by changing the SQLite3 execute query, I get an error.
ERROR
c.execute("SELECT authors, year, title, abstract FROM usertable WHERE authors LIKE '%?%'",(var1,))
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 0, and there are 1 supplied.
I get syntax errors if I change the endings to $?$, (%?%), etc. I also tried this with:
...authors="%?%"
But this doesn't work either. There are a few similar questions on SO, but they don't exactly solve my issue...
I am running Python 3.4 and SQLite3 on Windows 10.
Consider concatenating the % wildcards to the bound value, var1:
c.execute("SELECT authors, year, title, abstract" +
" FROM usertable WHERE authors LIKE (?)", ('%'+var1+'%',))
The reason you need to do so is that the ? placeholder substitutes a string literal in parameterized queries, and in a LIKE expression the wildcards and the value together form one string literal, as denoted by the enclosing single quotes:
SELECT authors, year, title, abstract FROM usertable WHERE authors LIKE '%Smith%'
Your initial attempt failed because wrapping ? in single quotes prevents the cursor from binding the parameter to the prepared statement, so the symbol is taken as a literal question mark.
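A runnable version of the fix, with an invented one-row table standing in for usertable:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE usertable"
            " (authors TEXT, year INTEGER, title TEXT, abstract TEXT)")
cur.execute("INSERT INTO usertable VALUES"
            " ('Harrison GB, Smith JH', 1999, 'T', 'A')")

var1 = "Smith"
# The wildcards are concatenated onto the value; the ? stays bare.
cur.execute(
    "SELECT authors FROM usertable WHERE authors LIKE ?",
    ('%' + var1 + '%',),
)
data = cur.fetchall()
print(data)  # [('Harrison GB, Smith JH',)]
```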
RESOLVED - SEE FINAL EDIT AT BOTTOM.
I have a DataFrame that looks the following way:
df_transpose=
Time Date Morning (5AM-9AM) Day (9AM-6PM) \
Area
D1_NY_1 01_05_2012 0.000000 0.000000
D2_NY_2 01_05_2012 0.000000 0.000000
D3_NJ_1 01_05_2012 1.000000 0.966667
...
I want to write this row-by-row to different tables in a database using SQLite. I've set up the database Data.db, which contains a separate table for each Area, i.e. the table names contain the Area names listed in the DataFrame above (e.g. "Table_D1-NY-1" etc.). I want to test whether the Area (the index) in the DataFrame matches the name of a table in my database and, if there's a match, write the entire relevant row of the DataFrame to the table that contains that Area in its name. Here is what I've written so far, along with the error I get:
CODE:
ii=0
for ii in range(0, row_count):
    df_area = df_transpose.index[ii]
    export_data = df_transpose.iloc[ii]
    cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
    available_tables = cur.fetchall()
    for iii in range(0, row_count):
        if re.match('\w*' + df_area, available_tables[iii]):
            relevant_table = available_tables[iii]
            export_data.to_sql(name=relevant_table, con=con, if_exists='append')
        iii = iii + 1
ERROR: for the "if re.match..." line:
TypeError: expected string or buffer
I added the second (iii) loop, after searching for solutions, to avoid passing a list object (available_tables) instead of a string to re.match(). I still get the same error, though. Can anyone spot my error or help me fix the code?
EDIT:
For information, df_area and available_tables outputs the following:
df_area=
u'D1_NY_1'
available_tables=
[(u'US_D1_NY_1',), (u'US_D2_NY_2',), (u'US_D3_NJ_1',), ...]
EDIT:
Have not been able to figure this out yet and would appreciate input. I've tried to play around with my code but the error remains the same.
FINAL EDIT:
Thought I would post how I got past this. The problem was that available_tables was a list of tuples rather than a list of strings; for the re.match() test to work, it had to be a list of strings. I changed this using the following:
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
available_tables=[item[0] for item in cur.fetchall()]
Can't comment, but maybe try r'\w*' instead of '\w*', since you're using a backslash.
Also, it seems from the output you gave that available_tables is a list of tuples. So you'd probably want to use:
re.match(r'\w*'+df_area, available_tables[iii][0])
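Both points can be checked in isolation, using the output shown in the question's edit:

```python
import re

# fetchall() returns a list of 1-tuples, not strings, so index into each
# tuple before matching, and use a raw string for the \w pattern.
available_tables = [('US_D1_NY_1',), ('US_D2_NY_2',), ('US_D3_NJ_1',)]
df_area = 'D1_NY_1'

matches = [t[0] for t in available_tables
           if re.match(r'\w*' + df_area, t[0])]
print(matches)  # ['US_D1_NY_1']
```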
I am having troubles finding out if I can even do this. Basically, I have a csv file that looks like the following:
1111,804442232,1
1112,312908721,1
1113,A*2434,1
1114,A*512343128760987,1
1115,3512748,1
1116,1111,1
1117,1234,1
This is imported into an in-memory SQLite database for manipulation, and I will be importing multiple files into this database after some manipulation. SQLite lets me keep constraints on the tables and receive errors where needed, without writing extra functions just to check each constraint while using arrays in Python. I want to do a few things, the first of which is to prepend field2 wherever a field2 string matches an entry in field1.
For example, in the above data field2 in entry 6 matches entry 1. In this case I would like to prepend field2 in entry 6 with '555'
If this is not possible, I believe I could make do with a regex and just do this on every row with 4 digits in field2... though I have yet to successfully get REGEXP working from Python/sqlite, as it always throws me an error.
I am working within Python using Sqlite3 to connect/manipulate my sqlite database.
EDIT: I am looking for a method to manipulate the resultant tables which reside in a sqlite database rather than manipulating just the csv data. The data above is just a simple representation of what is contained in the files I am working with. Would it be better to work with arrays containing the data from the csv files? These files have 10,000+ entries and about 20-30 columns.
If you must do it in SQLite, how about this:
First, get the column names of the table by running the following and parsing the result
def get_columns(table_name, cursor):
    cursor.execute('pragma table_info(%s)' % table_name)
    return [row[1] for row in cursor]

conn = sqlite3.connect('test.db')
columns = get_columns('test_table', conn.cursor())
For each of those columns, run the following update, that does your prepending
def prepend(table, column, reference, prefix, cursor):
    query = '''
        UPDATE %s
        SET %s = '%s' || %s
        WHERE %s IN (SELECT %s FROM %s)
    ''' % (table, column, prefix, column, column, reference, table)
    cursor.execute(query)

reference = 'field1'
[prepend('test_table', column, reference, '555', conn.cursor())
 for column in columns
 if column != reference]
Note that this is expensive: O(n^2) for each column you want to do it for.
As per your edit and Nathan's answer, it might be better to simply work with Python's built-in data structures; you can always insert the result into SQLite afterwards.
10,000 entries is not really much, so it may not matter in the end. It all depends on your reason for requiring it to be done in SQLite (which we don't have much visibility of).
There is no need to use regex expressions to do this, just throw the contents from the first column into a set and then iterate through the rows and update the second field.
first_col_values = set(row[0] for row in rows)
for row in rows:
    if row[1] in first_col_values:
        row[1] = '555' + row[1]
So... I found the answer to my own question after a ridiculous amount of searching and trial and error. My unfamiliarity with SQL had me stumped, and I was trying all kinds of crazy things. In the end, this was the simple kind of solution I was looking for:
prefix = "555"
cur.execute("UPDATE table SET field2 = '%s' || field2 WHERE field2 IN (SELECT field1 FROM table)" % (prefix))
I kept the small amount of python in there but what I was looking for was the SQL statement. Not sure why nobody else came up with something that simple =/. Unsatisfied with the answers so far, I had been searching far and wide for this simple line >_<.
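That statement can be run end to end against an in-memory database with the sample rows from the question (table and column names here mirror the question's simplified example; the prefix is passed as a bound parameter, which avoids quoting issues):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE data (field1 TEXT, field2 TEXT, field3 TEXT)")
cur.executemany(
    "INSERT INTO data VALUES (?, ?, ?)",
    [("1111", "804442232", "1"), ("1116", "1111", "1"), ("1117", "1234", "1")],
)

prefix = "555"
# Prepend the prefix to field2 wherever field2 matches some field1 value.
cur.execute(
    "UPDATE data SET field2 = ? || field2"
    " WHERE field2 IN (SELECT field1 FROM data)",
    (prefix,),
)
cur.execute("SELECT field2 FROM data ORDER BY field1")
updated = [r[0] for r in cur.fetchall()]
print(updated)  # ['804442232', '5551111', '1234']
```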