I'm using psycopg2 to interact with a PostgreSQL database from Python 2.7.
psycopg2 saves a Python list into a varchar field, and then I simply need to get the same Python list back.
Insert:
data = ['value', 'second value']
with psycopg2.connect(**DATABASE_CONFIG) as connection:
    cursor = connection.cursor()
    cursor.execute("INSERT INTO table_name (varchar_field) VALUES (%s)", (data,))
    connection.commit()
In pgAdmin it looks like: {value, second_value}
Then I tried to do something like this:
with psycopg2.connect(**DATABASE_CONFIG) as connection:
    cursor = connection.cursor()
    cursor.execute("SELECT varchar_field FROM table_name")
    for row in cursor:
        # here I want to iterate through the saved list (['value', 'second_value']),
        # but it returns the string: '{value, second_value}'
        for data_item in row:
            print data_item
I have found a possible solution, but I have no idea how to implement it in my code.
So, how can I get a Python list back from a SQL ARRAY type?
Given:
CREATE TABLE pgarray ( x text[] );
INSERT INTO pgarray(x) VALUES (ARRAY['ab','cd','ef']);
Then psycopg2 will take care of array unpacking for you. Observe:
>>> import psycopg2
>>> conn = psycopg2.connect('dbname=regress')
>>> curs = conn.cursor()
>>> curs.execute('SELECT x FROM pgarray;')
>>> row = curs.fetchone()
>>> row
(['ab', 'cd', 'ef'],)
>>> row[0][0]
'ab'
>>> print( ', '.join(row[0]))
ab, cd, ef
psycopg2 already does that for you. If the PostgreSQL column type is a text array, i.e. text[], you get a Python list of strings. Just access the first item of each returned row instead of the whole row tuple:
for row in cursor:
    for data_item in row[0]:  # note the index 0 here
        print data_item
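If the column really is varchar, as in the question, rather than text[], one workaround (a sketch, assuming the stored strings are valid PostgreSQL array literals such as '{value,"second value"}') is to cast back to an array in SQL so psycopg2 can do the unpacking:
with psycopg2.connect(**DATABASE_CONFIG) as connection:
    cursor = connection.cursor()
    # the ::text[] cast turns the stored array literal back into a real array,
    # which psycopg2 then converts to a Python list
    cursor.execute("SELECT varchar_field::text[] FROM table_name")
    for row in cursor:
        for data_item in row[0]:
            print data_item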
Related
New to python, trying to use psycopg2 to read Postgres
I am reading from a database table called deployment, which has three fields: id, Key, and Value, and I'm trying to get the Value for a given Key.
import json
import psycopg2
conn = psycopg2.connect(host="localhost",database=database, user=user, password=password)
cur = conn.cursor()
cur.execute("SELECT \"Value\" FROM deployment WHERE (\"Key\" = 'DUMPLOCATION')")
records = cur.fetchall()
print(json.dumps(records))
[["newdrive"]]
I want this to be just "newdrive" so that I can do a string comparison in the next line to check whether it's "newdrive" or not.
I tried json.loads on the json.dumps output, but that didn't work:
>>> a=json.loads(json.dumps(records))
>>> print(a)
[['newdrive']]
I also tried printing the records without json.dumps:
>>> print(records)
[('newdrive',)]
The result of fetchall() is a sequence of tuples. You can loop over the sequence and print the first (index 0) element of each tuple:
cur.execute("SELECT \"Value\" FROM deployment WHERE (\"Key\" = 'DUMPLOCATION')")
records = cur.fetchall()
for record in records:
print(record[0])
Or, more simply, if you are sure the query returns at most one row, use fetchone(), which gives a single tuple representing the returned row (or None if there was no row), e.g.:
cur.execute("SELECT \"Value\" FROM deployment WHERE (\"Key\" = 'DUMPLOCATION')")
row = cur.fetchone()
if row: # check whether the query returned a row
print(row[0])
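For the string comparison the question asks about, you can then build on the fetchone() snippet (a minimal sketch; the printed message is made up here):
cur.execute("SELECT \"Value\" FROM deployment WHERE (\"Key\" = 'DUMPLOCATION')")
row = cur.fetchone()
if row is not None and row[0] == 'newdrive':
    print('dump location is newdrive')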
I'm trying to pull some XML from a URL, parse it, and store the entries in an sqlite3 database. I'm trying numerous things and all are failing. Code so far:
#!/usr/bin/env python
from urllib2 import urlopen
import gc
import xml.etree.ElementTree as ET
import sqlite3
rosetta_url = ("https://boinc.bakerlab.org/rosetta/team_email_list.php?teamid=12575&account_key=Y&xml=1")
root = ET.parse(urlopen(rosetta_url)).getroot()
cpids = [el.text for el in root.findall('.//user/cpid')]
print cpids
conn = sqlite3.connect("GridcoinTeam.db")
c = conn.cursor()
c.execute('''CREATE TABLE IF NOT EXISTS GRIDCOINTEAM (cpid TEXT)''')
c.executemany("INSERT INTO GRIDCOINTEAM VALUES (?);", cpids)
conn.commit()
conn.close()
conn = sqlite3.connect("GridcoinTeam.db")
c = conn.cursor()
cpids = c.execute('select cpid from GRIDCOINTEAM').fetchall()
conn.close()
print cpids
gc.collect()
I'm getting the error:
Incorrect number of bindings supplied. The current statement uses 1, and there are 32 supplied.
I tried making the insert parameters tuples by changing it to
c.executemany("INSERT INTO GRIDCOINTEAM VALUES (?);", (cpids, ))
but that just gives:
Incorrect number of bindings supplied. The current statement uses 1, and there are 3289 supplied.
The XML extract is in the form ['5da243d1f47b7852d372c51d6ee660d7', '5a6d18b942518aca60833401e70b75b1', '527ab53f75164864b74a89f3db6986b8'], but there are several thousand entries.
Thanks.
You need to insert these as multiple rows, which means executemany needs one parameter tuple per row:
cpids = [el.text for el in root.findall('.//user/cpid')]
# group the flat list into 1-tuples: ['a', 'b', ...] -> [('a',), ('b',), ...]
cpids = zip(*[iter(cpids)]*1)
print cpids
The problem lies in
c.executemany("INSERT INTO GRIDCOINTEAM VALUES (?);", cpids)
executemany expects a sequence of parameter tuples, but you pass a list of strings. A string is itself a sequence of its characters, so effectively the code does:
for entry in cpids:
    c.execute("INSERT INTO GRIDCOINTEAM VALUES (?);", entry)  # entry is a 32-character string
Each character counts as one parameter, which gives you 32 parameters where the statement only wants one.
To fix that, each value has to be wrapped so it is seen as a row with a single parameter rather than as a sequence of characters. Your GRIDCOINTEAM table has a single cpid column, so you could do this:
for entry in cpids:
    c.execute("INSERT INTO GRIDCOINTEAM VALUES (?)", (entry,))
Wrapping entry in a one-element tuple tells execute there is exactly one parameter.
Alternatively, you can stick with executemany, but then every one of your strings needs to be wrapped in a tuple, via a list comprehension or a generator:
c.executemany("INSERT INTO GRIDCOINTEAM VALUES (?);", [(i,) for i in cpids])
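Since a generator works too, here is the same insert with a generator expression instead of the list comprehension (equivalent, just without building the intermediate list):
c.executemany("INSERT INTO GRIDCOINTEAM VALUES (?);", ((i,) for i in cpids))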
I am having a hard time converting data. I select the data from my database, which is returned in tuple format. I try to convert it using list(), but all I get is a list of tuples. I am trying to compare the values to integers which I receive from parsing my JSON. What would be the easiest way to convert and compare these two?
from DBConnection import db
import pymssql
from data import JsonParse
db.execute('select id from party where partyid = 1')
parse = JsonParse.Parse()
for row in cursor:
curList = list(cursor)
i = 0
for testData in parse:
print curList[i], testData['data']
i += 1
Output:
(6042,) 6042
(6043,) 6043
(6044,) 6044
(6045,) 6045
SQL results always come as rows, which are sequences of columns; this is true even if there is just one column in each row.
Next, you are executing the query on the db object (whatever that may be), but are iterating over the cursor; whether this works at all is more down to luck. You'd normally execute a query on the cursor object.
If you expect just one row to be returned, you can use cursor.fetchone() to retrieve that one row. Also note that the list(cursor) call inside your for row in cursor loop only collects the remaining rows, so curList is missing the first row.
You could use:
cursor = connection.cursor()
cursor.execute('select id from party where partyid = 1')
result = cursor.fetchone()[0]
to retrieve the first column of the first row, or you could use tuple assignment:
cursor = connection.cursor()
cursor.execute('select id from party where partyid = 1')
result, = cursor.fetchone()
If you do need to match against multiple rows, you could use a list comprehension to extract all those id columns:
cursor = connection.cursor()
cursor.execute('select id from party where partyid = 1')
result = [row[0] for row in cursor]
Now you have a list of id values.
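If the goal is to compare those ids with the numbers parsed from the JSON, a rough sketch (assuming, as the question's output suggests, that each testData['data'] is an integer):
cursor.execute('select id from party where partyid = 1')
db_ids = [row[0] for row in cursor]
json_ids = [testData['data'] for testData in parse]
# ids present in both the database and the JSON
print set(db_ids) & set(json_ids)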
Quick and dirty:
print curList[i][0], testData['data']
Or how about:
for db_tuple, json_int in zip(cursor, parse):
print db_tuple[0], json_int
I have an SQL statement which contains a subquery embedded in an ARRAY() like so:
SELECT foo, ARRAY(SELECT x from y) AS bar ...
The query works fine; however, in the psycopg2 result cursor the array is returned as a string (e.g. "{1,2,3}"), not a list.
My question is, what would be the best way to convert strings like these into python lists?
It works for me without the need for parsing:
import psycopg2
query = """
select array(select * from (values (1), (2)) s);
"""
conn = psycopg2.connect('dbname=cpn user=cpn')
cursor = conn.cursor()
cursor.execute(query)
rs = cursor.fetchall()
for l in rs:
print l[0]
cursor.close()
conn.close()
Result when executed:
$ python stackoverflow_select_array.py
[1, 2]
Update
You need to register the uuid type:
import psycopg2, psycopg2.extras
query = """
select array(
select *
from (values
('A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11'::uuid),
('A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11'::uuid)
)s
);
"""
psycopg2.extras.register_uuid()
conn = psycopg2.connect('dbname=cpn user=cpn')
cursor = conn.cursor()
cursor.execute(query)
rs = cursor.fetchall()
for l in rs:
print l[0]
cursor.close()
conn.close()
Result:
$ python stackoverflow_select_array.py
[UUID('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'), UUID('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11')]
If every ARRAY in the result cursor is of the format '{x,y,z}', then you can strip the braces from the string and split it on the comma delimiter:
>>> s = '{1,2,3}'
>>> s
'{1,2,3}'
>>> l = s.rstrip('}').lstrip('{').split(',')
>>> l
['1', '2', '3']
>>>
>>> s = '{1,2,3,a,b,c}'
>>> s
'{1,2,3,a,b,c}'
>>> l = s.rstrip('}').lstrip('{').split(',')
>>> l
['1', '2', '3', 'a', 'b', 'c']
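Wrapped up as a small helper function (only a sketch: it assumes plain, unquoted elements, so it will mishandle values containing commas, quotes or braces):
def pg_array_to_list(s):
    # '{1,2,3}' -> ['1', '2', '3'], '{}' -> []
    inner = s.strip('{}')
    return inner.split(',') if inner else []

print pg_array_to_list('{1,2,3}')        # ['1', '2', '3']
print pg_array_to_list('{1,2,3,a,b,c}')  # ['1', '2', '3', 'a', 'b', 'c']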
Another way of handling this is to explicitly tell Postgres that you want text; psycopg2's default handling of text arrays then kicks in and you get a list:
db = psycopg2.connect('...')
curs = db.cursor()
curs.execute("""
SELECT s.id, array_agg(s.kind::text)
FROM (VALUES ('A', 'A0EEBC99-9C0B-AEF8-BB6D-6BB9BD380A11'::uuid),
('A', 'A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A12'::uuid)) AS s (id, kind)
GROUP BY s.id
""")
for row in curs:
print "row: {}".format(row)
Results in:
row: (u'A', [u'a0eebc99-9c0b-aef8-bb6d-6bb9bd380a11', u'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a12'])
and the query
curs.execute("""
SELECT array(
SELECT s.kind::text
FROM (VALUES ('A0EEBC99-9C0B-AEF8-BB6D-6BB9BD380A11'::uuid),
('A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A12'::uuid)) AS s (kind))
""")
for row in curs:
print "row: {}".format(row)
results in:
row: ([u'a0eebc99-9c0b-aef8-bb6d-6bb9bd380a11', u'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a12'],)
The trick is specifically attaching the ::text to the fields that you care about.
Solution 1
Convert the UUID array uuid[] to a text array text[]
select
p.name,
array(
select _i.item_id
from items _i
where _i.owner_id = p.person_id
)::text[] as item_ids
from persons p;
Then, from the Python code:
import psycopg2.extras

curs = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)  # to get rows as dictionaries
curs.execute(my_query)
rows = curs.fetchall()
print(dict(rows[0]))
Output:
{
"name": "Alex",
"item_ids": [
"db6c19a2-7627-4dff-a963-b90b6217cb11",
"db6c19a2-7627-4dff-a963-b90b6217cb11"
]
}
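If you later need real uuid.UUID objects instead of the text values, you can convert them back with the standard library (a small sketch based on the row shown above):
import uuid

row = dict(rows[0])
item_ids = [uuid.UUID(s) for s in row['item_ids']]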
Solution 2
Register the UUID type so that PostgreSQL uuid values are converted to Python uuid.UUID objects (see the Python uuid docs).
import psycopg2.extras
psycopg2.extras.register_uuid()
After this, you can use the query without needing to convert to text array using ::text[].
select
p.name,
array(
select _i.item_id
from items _i
where _i.owner_id = p.person_id
) as item_ids
from persons p;
The output as a DictRow will look like:
{
"name": "Alex",
"item_ids": [
UUID("db6c19a2-7627-4dff-a963-b90b6217cb11"),
UUID("db6c19a2-7627-4dff-a963-b90b6217cb11") # uuid.UUID data type
]
}
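Conversely, if you need plain strings at some point (json.dumps, for example, cannot serialize uuid.UUID objects), map str over the list; assuming row is a DictRow like the one shown:
item_ids_as_text = [str(u) for u in row['item_ids']]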
I'm trying to use this code to read all temperature values from a sqlite3 database column, but the output shows [(u'29',), (u'29',), (u'29',)], and I'm only storing the numeric value in the database. I would like the output to be [29, 29, 29].
import sqlite3
conn = sqlite3.connect("growll.db")
cursor = conn.cursor()
print "\nHere's a listing of all the records in the table:\n"
cursor.execute("select lchar from GrowLLDados")
print cursor.fetchall()
Try this:
import sqlite3
conn = sqlite3.connect("growll.db")
cursor = conn.cursor()
print "\nHere's a listing of all the records in the table:\n"
cursor.execute("select lchar from GrowLLDados")
print [int(record[0]) for record in cursor.fetchall()]
Let me know how you get on.
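Another option, sketched here rather than taken from the answer above, is to set a row_factory on the connection so that single-column rows come back unwrapped, converting to int at the same time:
import sqlite3

conn = sqlite3.connect("growll.db")
# unwrap the single column and cast the stored text to int
conn.row_factory = lambda cursor, row: int(row[0])
cursor = conn.cursor()
cursor.execute("select lchar from GrowLLDados")
print cursor.fetchall()   # e.g. [29, 29, 29]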