How to update row with blob data in sqlite3? - python

I'm trying to update an existing row in my database with blob data, and I cannot understand how to do this. Is only insert available? Insert works well:
import requests
import sqlite3

b = requests.get(url=url)
img = b.content  # raw bytes of the downloaded image

con = sqlite3.connect(db)
cur = con.cursor()
cur.execute('REPLACE INTO byte (b) VALUES (?)', [img])
con.commit()
con.close()
This gives me a new row with blob data, but I need to update an existing one. If I try an update like the following, it gives me errors:
cur.execute('update byte set b = {}'.format(img))  # fails: raw bytes cannot be pasted into the SQL text

Well, I found a way: first convert the bytes to a hex string and update the database with that, then select the hex string and convert it back to bytes. So the question may be closed.
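For reference, an UPDATE with a bound parameter should also accept the bytes directly, with no hex round-trip; a minimal sketch, assuming the table byte has a key column n as in the question:

import sqlite3

con = sqlite3.connect(db)
cur = con.cursor()
# Python bytes bound as a parameter are stored as a BLOB
cur.execute('UPDATE byte SET b = ? WHERE n = ?', (img, 1))
con.commit()
con.close()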

Related

sqlite3 - executemany alternatives to speed up insert from records of pandas?

Using pandas and sqlite3.
I have about 1000 CSV files with data that needs a little bit of pruning, and then I have to insert them into a database.
The code I am using:
def insert_underlying(underlying_df, conn):
    # convert the timestamp column to datetime dtype
    underlying_df['timestamp'] = pd.to_datetime(underlying_df['timestamp'])
    # underlying_df = underlying_df.tail(len(underlying_df)-1)
    # sort by timestamp
    underlying_df.sort_values('timestamp', inplace=True)
    # keep only the needed columns (drops the contractname column)
    underlying_df = underlying_df[['timestamp', 'close', 'bid', 'ask']]
    # create the tick table if it doesn't exist yet
    conn.execute("CREATE TABLE IF NOT EXISTS {}tick(timestamp timestamp, close REAL, bid REAL, ask REAL)".format(symbol))
    # insert the records into the external db
    query = 'INSERT OR REPLACE INTO ' + symbol + 'tick (timestamp, close, bid, ask) VALUES (?,?,?,?)'
    conn.executemany(query, underlying_df.to_records(index=False))
    conn.commit()
From what I have read on similar questions, executemany iterates over the rows and inserts the records one at a time. This is very slow. My experience with SQL is beginner at best.
What would be the best/fastest way to do this?
Thanks
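One common speedup is to let pandas write the frame in one bulk call with DataFrame.to_sql instead of a hand-rolled executemany. A minimal sketch under the same column layout, assuming a plain sqlite3 connection and that symbol is defined as above; the function name and database path are hypothetical:

import sqlite3
import pandas as pd

def insert_underlying_fast(underlying_df, conn, symbol):
    underlying_df = underlying_df.copy()
    underlying_df['timestamp'] = pd.to_datetime(underlying_df['timestamp'])
    underlying_df.sort_values('timestamp', inplace=True)
    underlying_df = underlying_df[['timestamp', 'close', 'bid', 'ask']]
    # to_sql creates the table if needed and batches the inserts itself
    underlying_df.to_sql('{}tick'.format(symbol), conn,
                         if_exists='append', index=False)
    conn.commit()  # harmless even if to_sql has already committed

conn = sqlite3.connect('ticks.db')  # hypothetical database path

Note that to_sql appends plainly; it does not replicate the INSERT OR REPLACE deduplication, so duplicate timestamps would have to be handled separately.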

How to use psycopg2 to retrieve a certain key's value from a postgres table which has key-value pairs

New to Python, trying to use psycopg2 to read Postgres.
I am reading from a database table called deployment and trying to get the Value for a given Key; the table has three fields: id, Key and Value.
import json
import psycopg2

conn = psycopg2.connect(host="localhost", database=database, user=user, password=password)
cur = conn.cursor()
cur.execute("SELECT \"Value\" FROM deployment WHERE (\"Key\" = 'DUMPLOCATION')")
records = cur.fetchall()
print(json.dumps(records))
# output: [["newdrive"]]
I want this to be just "newdrive" so that I can do a string comparison on the next line to check whether it is "newdrive" or not.
I tried json.loads on the json.dumps output, which didn't work:
>>> a=json.loads(json.dumps(records))
>>> print(a)
[['newdrive']]
I also tried printing just the records without json.dumps:
>>> print(records)
[('newdrive',)]
The result of fetchall() is a sequence of tuples. You can loop over the sequence and print the first (index 0) element of each tuple:
cur.execute("SELECT \"Value\" FROM deployment WHERE (\"Key\" = 'DUMPLOCATION')")
records = cur.fetchall()
for record in records:
print(record[0])
Or, simpler, if you are sure the query returns at most one row, use fetchone(), which gives a single tuple representing the returned row, e.g.:
cur.execute("SELECT \"Value\" FROM deployment WHERE (\"Key\" = 'DUMPLOCATION')")
row = cur.fetchone()
if row: # check whether the query returned a row
print(row[0])
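To get to the string comparison the question asks for, pull the single value out of the tuple and compare it directly; a small variation on the answer above (not part of the original), using psycopg2 parameter binding for the key:

cur.execute('SELECT "Value" FROM deployment WHERE "Key" = %s', ('DUMPLOCATION',))
row = cur.fetchone()
value = row[0] if row else None  # None when the key is absent
if value == "newdrive":
    print("dump location is newdrive")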

Fetching a long text column

I've got a problem with fetching a result from a MySQL database with Python 3.6.
The long text is a numpy array transformed into a string.
When I check the database and look into the column img_array everything is just fine. All the data is written.
Next I try to retrieve the text column like this:
con = ..  # SQL connection which is successful and working fine
cur = con.cursor()  # getting the cursor
cur.execute('SELECT img_array FROM table WHERE id = 1')
result = cur.fetchone()[0]  # result is a tuple with the array at index 0
print(result)
# output: [136 90 87 ... 66 96 125]
The problem here is that the ... is literally part of the string; I'm missing all the values in between.
When I try the following it works just fine:
cur.execute('SELECT img_array FROM table LIMIT 1')
result = cur.fetchone()[0] # this gives me the entire string in the DB
print(result)
# The entire array will be printed here without missing values
I really don't know how to fetch this column with a WHERE clause via Python. Any ideas?
EDIT: OK, the last edit was wrong... I've checked it again and the buffered cursor doesn't change it. I'm confused because it seemed to work.
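If the column was populated with something like str(arr), the ellipsis is already baked into the stored text: numpy truncates the printed form of large arrays, so no fetch can recover the missing values. A minimal sketch of storing the raw bytes instead, assuming a DB-API connection con, the question's table and column names, and a hypothetical dtype that must match the original array:

import numpy as np

arr = np.arange(100000, dtype=np.uint8)  # hypothetical example array

# store the raw bytes rather than str(arr)
cur = con.cursor()
cur.execute('UPDATE table SET img_array = %s WHERE id = 1', (arr.tobytes(),))
con.commit()

# fetch and rebuild; the dtype (and shape) must be known up front
cur.execute('SELECT img_array FROM table WHERE id = 1')
blob = cur.fetchone()[0]
restored = np.frombuffer(blob, dtype=np.uint8)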

python - SQL Select Conditional statements in python

This is my R code, but I want to do the same thing in Python. As I am new to it, I am having problems writing the correct code; can anybody guide me on how to write this in Python? I have already made the database connection and tried simple queries, but here I am struggling:
sql_command <- "SELECT COUNT(DISTINCT Id) FROM \"Bowlers\";"
total<-as.numeric(dbGetQuery(con, sql_command))
data<-setNames(data.frame(matrix(ncol=8,
nrow=total)),c("Name","Wkts","Ave","Econ","SR","WicketTaker","totalovers",
"Matches"))
for (i in 1:total){
sql_command <- paste("SELECT * FROM \"Bowlers\" where Id = ", i ,";",
sep="")
p<-dbGetQuery(con, sql_command)
p[is.na(p)] <- 0
data$Name[i] = p$bowler[1]
}
After this, which works fine, how should I proceed to write the loop code:
with engine.connect() as con:
    rs = con.execute('SELECT COUNT(DISTINCT id) FROM "Bowlers"')
    for row in rs:
        print(row)
Use the format method for strings in Python to achieve it.
I am using PostgreSQL, but your connection should be similar. Something like:
Connect to the test database:
import psycopg2
con = psycopg2.connect("dbname='test' user='your_user' host='your_host' password='your_password'")
cur = con.cursor() # cursor method may differ for your connection
Loop over your ids:
for i in range(1, total + 1):
    sql_command = 'SELECT * FROM "Bowlers" WHERE id = {}'.format(i)
    cur.execute(sql_command)  # execute and fetchall methods may differ for your connection
    rows = cur.fetchall()     # rows is a list of tuples; you can convert them into numpy arrays
    print("output first row for id = {}".format(i))
    print(rows[0])  # sanity check, printing the first row for each id
    print('\n')
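As an aside, psycopg2 can bind the value itself, which avoids building SQL by hand; a small variation on the loop body, not part of the original answer:

for i in range(1, total + 1):
    # let the driver substitute the parameter instead of str.format
    cur.execute('SELECT * FROM "Bowlers" WHERE id = %s', (i,))
    rows = cur.fetchall()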

How to convert sql varchar array to Python list?

I'm using psycopg2 to interact with a PostgreSQL database in Python 2.7.
psycopg2 saves the list into a varchar field, and then I simply need to get the same Python list back.
Insert:
data = ['value', 'second value']
with psycopg2.connect(**DATABASE_CONFIG) as connection:
    cursor = connection.cursor()
    cursor.execute("INSERT INTO table_name (varchar_field) VALUES (%s)", (data,))
    connection.commit()
In pgAdmin it looks like: {value, second_value}
Then I tried to do something like this:
with psycopg2.connect(**DATABASE_CONFIG) as connection:
cursor = connection.cursor()
cursor.execute("SELECT varchar_field FROM table_name")
for row in cursor:
for data_item in row: # here I want to iterate through the saved list (['value', 'second_value']), but it returns string: '{value, second_value}'
print data_item
I have found a possible solution, but I have no idea how to implement it in my code.
So, how can I get the Python list back from the SQL ARRAY type?
Given:
CREATE TABLE pgarray ( x text[] );
INSERT INTO pgarray(x) VALUES (ARRAY['ab','cd','ef']);
Then psycopg2 will take care of array unpacking for you. Observe:
>>> import psycopg2
>>> conn = psycopg2.connect('dbname=regress')
>>> curs = conn.cursor()
>>> curs.execute('SELECT x FROM pgarray;')
>>> row = curs.fetchone()
>>> row
(['ab', 'cd', 'ef'],)
>>> row[0][0]
'ab'
>>> print( ', '.join(row[0]))
ab, cd, ef
psycopg2 already does that for you. If the PostgreSQL column type is a text array, i.e. text[], you get a Python list of strings. Just access the first item of each result tuple instead of the whole tuple:
for row in cursor:
    for data_item in row[0]:  # note the index 0 here
        print data_item
