Update Oracle CLOB column with large string - Python

I have a table systesttab that contains a few columns. One of those columns is of type CLOB and is supposed to hold the base64-encoded string of an image.
CREATE TABLE systesttab(
...
f_picture CLOB DEFAULT ' ' NOT NULL,
...
)
However, when I try to update the table with a large base64 string (over 100k characters), it fails and my Python application crashes (even when wrapped in a try...except block).
UPDATE systesttab SET f_picture = 'data:image/png;base64,iVBORw0KGgoASU ...'
I have even tried to cast the value to clob:
UPDATE systesttab SET f_picture = TO_CLOB('data:image/png;base64,iVBORw0KGgoASU ...')
But all I get is this error:
Input string too long, limit 8192
Now, I guess that this is trying to tell me something about the chunk size, but it's not really helpful to me.
How can I update the table with a single statement?
Do I have to declare the table's create statement differently?
If there is a way to get this done in a single statement, it should also work when updating multiple columns on the same table.
Environment: python 3.4 & pyodbc

I have solved this by using bind parameters. It seems the character limit does not apply then.
In python the statement now looks like this:
pic = 'data:image/png;base64,iVBORw0KGgoASU ...'
sql = "UPDATE systesttab SET f_picture = ?"
cursor.execute( sql, [pic] )
This also works fine when updating multiple fields at the same time.
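For example, a multi-column update looks the same way (a sketch; f_name and its value are hypothetical, only f_picture comes from the question):
pic = 'data:image/png;base64,iVBORw0KGgoASU ...'
name = 'example'  # hypothetical value for a hypothetical second column
sql = "UPDATE systesttab SET f_picture = ?, f_name = ?"
cursor.execute(sql, [pic, name])
cursor.commit()  # pyodbc also exposes commit() on the cursor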

Related

Using Python - How can I parse CSV list of both integers and strings and add to SQL table through Insert Statement?

I am automating a task through Python that will run an SQL statement to insert into an existing table in a DB.
My CSV headers look as such:
ID,ACCOUNTID,CATEGORY,SUBCATEGORY,CREATION_DATE,CREATED_BY,REMARK,ISIMPORTANT,TYPE,ENTITY_TYPE
My values:
seq_addnoteid.nextval,123456,TEST,ADMN_TEST,sysdate,ME,This is a test,Y,1,A
NOTE: Currently, seq_addnote works from the DB, but in my code I added a small snippet to get the max ID, and the rows will increase this by one for each iteration.
sysdate could also be passed in the format '19-MAY-22'.
If I were to run this from the DB, this would work:
insert into notes values(seq_addnoteid.nextval,'123456','TEST','ADMN_TEST',sysdate,'ME','This is a test','Y',1,'A');
# Snippet to get the current max ID
cursor.execute("SELECT MAX(ID) from NOTES")
max = cursor.fetchone()
max = int(max[0])

with open('sample.csv', 'r') as f:
    reader = csv.reader(f)
    columns = next(reader)
    query = 'INSERT INTO NOTES({0}) values ({1})'
    query = query.format(','.join(columns), ','.join('?' * len(columns)))
    cursor = conn.cursor()
    for data in reader:
        cursor.execute(query, data)
conn.commit()
print("Records inserted successfully")
cursor.close()
conn.close()
Currently, I'm getting Oracle-Error-Message: ORA-01036: illegal variable name/number, and I think it's because of my query.format line. However, I'm looking for help to get this code to handle the data types properly.
Thanks!
Try printing your query before you execute it. I think you'll find that it's printing this:
INSERT INTO NOTES(ID,ACCOUNTID,CATEGORY,SUBCATEGORY,CREATION_DATE,CREATED_BY,REMARK,ISIMPORTANT,TYPE,ENTITY_TYPE)
values(seq_addnoteid.nextval,123456,TEST,ADMN_TEST,sysdate,ME,This is a test,Y,1,A);
Which will also give you an ORA-01036 if you try to run it manually.
The problem is that you want some of your column values to be literal values and some of them to be strings escaped in single quotes, and your code doesn't do that. I don't think there's an easy way to do it with ','.join(), so you'll either need to modify your CSVs to quote the strings, like:
seq_addnoteid.nextval,"'123456'","'TEST'","'ADMN_TEST'",sysdate,"'ME'","'This is a test'","'Y'",1,"'A'"
Or modify your query.format to add the quotes around the parameters that you want to treat as strings:
query.format(','.join(columns), "?,'?','?','?',?,'?','?','?',?,'?'")
As the commenters mentioned, pandas does handle this all very nicely.
EDIT: I see what you're saying. I'm not sure pandas will help with the literal functions you want to pass to the insert. But yes, you should be able to change your CSV and then do:
query.format(','.join(columns) + ',ID,CREATION_DATE', "'?','?','?','?','?','?',?,'?',seq_addnoteid.nextval,sysdate")
As a side note, a lot of people do this sort of thing on the database side in a BEFORE INSERT trigger, e.g.:
create or replace trigger NOTES_INS_TRG
before insert on NOTES
for each row
begin
    :NEW.ID := seq_addnoteid.nextval;
    :NEW.CREATION_DATE := sysdate;
end;
/
Then you could leave those columns out of your insert entirely.
Edit again:
I'm not sure you can use ? for bind/substitution variables in cx_Oracle (see the documentation). So where your raw query is currently:
INSERT INTO NOTES(ID,CREATION_DATE,ACCOUNTID,CATEGORY,SUBCATEGORY,CREATED_BY,REMARK,ISIMPORTANT,TYPE,ENTITY_TYPE)
values (seq_addnoteid.nextval,sysdate,'?','?','?','?','?','?',?,'?')
You'd need something like:
INSERT INTO NOTES(ID,CREATION_DATE,ACCOUNTID,CATEGORY,SUBCATEGORY,CREATED_BY,REMARK,ISIMPORTANT,TYPE,ENTITY_TYPE)
values (seq_addnoteid.nextval,sysdate,:1,:2,:3,:4,:5,:6,:7,:8)
We can probably do that by modifying the format string again to generate some bind variables:
query.format('ID,CREATION_DATE,' + ','.join(columns),
             'seq_addnoteid.nextval,sysdate,' + ','.join([':' + c for c in columns]))
Again, try printing the query before executing it to make sure the column names and values are lining up correctly.
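Putting it all together, a minimal sketch of the corrected loop might look like this (untested; it assumes cx_Oracle, a hypothetical connection string, and a CSV whose header no longer contains ID and CREATION_DATE):

import csv
import cx_Oracle  # assumption: cx_Oracle driver, as in the answer above

conn = cx_Oracle.connect("user/password@dsn")  # hypothetical credentials
cursor = conn.cursor()
with open('sample.csv', 'r') as f:
    reader = csv.reader(f)
    columns = next(reader)  # e.g. ACCOUNTID,CATEGORY,... (ID/CREATION_DATE removed)
    query = 'INSERT INTO NOTES({0}) values ({1})'
    query = query.format('ID,CREATION_DATE,' + ','.join(columns),
                         'seq_addnoteid.nextval,sysdate,' +
                         ','.join([':' + c for c in columns]))
    print(query)  # sanity-check the generated SQL before running it
    for data in reader:
        cursor.execute(query, data)  # a sequence binds positionally to the named placeholders
conn.commit()
cursor.close()
conn.close()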

MySQL provides wrong ID after Insert (datatype?)

I have a little Python (3.6.4) script that inserts records into a MySQL DB (10.1.26 MariaDB).
import MySQLdb as mdb
...
for id in allids:
    ...
    cursor.execute("INSERT IGNORE INTO entity(id,name) VALUES (%s, %s)", (identifier, name))
identifier is an integer like 2832718610241077149, always 19 digits. The MySQL field is int(25).
I loop the insert statement, but I only get 1 record selected (checked with MySQL Workbench). The id there has nothing in common with the identifier in Python; it's something like 2157483647.
I found out that it works well when I change the DB field to varchar(25). Then all the inserts of the loop can be selected and the IDs are correct too. But I guess that's not how to do it.
In MySQL, INT(25) only indicates the display width, as explained here. You don't get an integer-type field with the capacity to store 25-digit-wide numbers; an INT column is always 32 bits wide, which explains the results you are getting - the number just gets chopped to fit into 32 bits.
If you need to store such big numbers, have a look at BIGINT (whose signed range covers 19-digit values like yours) or the DECIMAL and NUMERIC types.
Also read this part of the manual to understand how MySQL handles this kind of situation.
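For example (a sketch using the table from the question; BIGINT's signed maximum of 9223372036854775807 comfortably covers a 19-digit id):

# One-time schema change: widen the id column (names from the question)
cursor.execute("ALTER TABLE entity MODIFY id BIGINT NOT NULL")

# The parameterized insert from the question then keeps the full 19-digit value
identifier = 2832718610241077149
name = 'example'  # hypothetical value
cursor.execute("INSERT IGNORE INTO entity(id,name) VALUES (%s, %s)", (identifier, name))
conn.commit()  # assuming the connection object is called conn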

Substituting column names in Python sqlite3 query [duplicate]

This question already has answers here:
How do you escape strings for SQLite table/column names in Python?
(8 answers)
Closed 7 years ago.
I have a wide table in a sqlite3 database, and I wish to dynamically query certain columns in a Python script. I know that it's bad to inject parameters by string concatenation, so I tried to use parameter substitution instead.
I find that, when I use parameter substitution to supply a column name, I get unexpected results. A minimal example:
import sqlite3 as lite
db = lite.connect("mre.sqlite")
c = db.cursor()
# Insert some dummy rows
c.execute("CREATE TABLE trouble (value real)")
c.execute("INSERT INTO trouble (value) VALUES (2)")
c.execute("INSERT INTO trouble (value) VALUES (4)")
db.commit()
for row in c.execute("SELECT AVG(value) FROM trouble"):
    print row  # Returns 3
for row in c.execute("SELECT AVG(:name) FROM trouble", {"name": "value"}):
    print row  # Returns 0
db.close()
Is there a better way to accomplish this than simply injecting a column name into a string and running it?
As Rob just indicated in his comment, there was a related SO post that contains my answer. These substitution constructions are called "placeholders," which is why I did not find the answer on SO initially. There is no placeholder pattern for column names, because dynamically specifying columns is not a code safety issue:
It comes down to what "safe" means. The conventional wisdom is that
using normal python string manipulation to put values into your
queries is not "safe". This is because there are all sorts of things
that can go wrong if you do that, and such data very often comes from
the user and is not in your control. You need a 100% reliable way of
escaping these values properly so that a user cannot inject SQL in a
data value and have the database execute it. So the library writers do
this job; you never should.
If, however, you're writing generic helper code to operate on things
in databases, then these considerations don't apply as much. You are
implicitly giving anyone who can call such code access to everything
in the database; that's the point of the helper code. So now the
safety concern is making sure that user-generated data can never be
used in such code. This is a general security issue in coding, and is
just the same problem as blindly execing a user-input string. It's a
distinct issue from inserting values into your queries, because there
you want to be able to safely handle user-input data.
So, the solution is that there is no problem in the first place: inject the values using string formatting, be happy, and move on with your life.
Why not use string formatting?
for row in c.execute("SELECT AVG({name}) FROM trouble".format(**{"name": "value"})):
    print row  # => (3.0,)
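If the column name could ever be influenced by user input, one common safeguard (my addition, not from the linked answers) is to validate it against the table's real columns before formatting it in; a minimal sketch:

def avg_of(c, column):
    # Whitelist check: accept only names that are actual columns of the table
    valid = {row[1] for row in c.execute("PRAGMA table_info(trouble)")}
    if column not in valid:
        raise ValueError("unknown column: %r" % column)
    return c.execute("SELECT AVG({0}) FROM trouble".format(column)).fetchone()[0]

avg = avg_of(c, "value")  # => 3.0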

How to handle and escape varbinary database entries correctly in python

I have a script where I first read a table, saving an int and a varbinary(512) in the form of some_set = set(); some_set.add((db_row['int_val'], db_row['varbin_val'])) - a set of tuples.
There may be multiple rows having the same int_val/varbin_val combination, but those are duplicates.
Saving them seems to work fine. But when I try to INSERT the saved rows into the destination table, I get b"Incorrect syntax near '\\'".
I assume this occurs because the varbinary(512) is not escaped correctly (I currently just have a str() wrapped around it).
How can I escape a varbinary(512) from an MSSQL database, saved in a string tuple, to use it in a SELECT/INSERT WHERE query?
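This question has no answer in the thread, but the usual fix is the same bind-parameter approach as in the CLOB question above: keep the value as bytes and pass it as a parameter instead of formatting it into the SQL text. A sketch, assuming pyodbc and hypothetical table/column names:

# Never str()-wrap binary data into the SQL string; bind it as bytes instead
for int_val, varbin_val in some_set:
    cursor.execute(
        "INSERT INTO dest_table (int_val, varbin_val) VALUES (?, ?)",  # dest_table is hypothetical
        (int_val, varbin_val),
    )
conn.commit()  # assuming the connection object is called conn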

Python: using pyodbc and replacing row field values

I'm trying to figure out if it's possible to replace record values in a Microsoft Access (either .accdb or .mdb) database using pyodbc. I've pored over the documentation and noted where it says that "Row Values Can Be Replaced", but I have not been able to make it work.
More specifically, I'm attempting to replace a row value from a Python variable. I've tried:
setting the connection autocommit to True
making sure that it's not a data type issue
Here is a snippet of the code where I'm executing a SQL query and using fetchone() to grab just one record (I know the query only returns one record with this script). I then grab the existing value for a field (the field position integer is stored in the z variable) and get the new value I want to write to the field from an existing Python dictionary created in the script.
pSQL = "SELECT * FROM %s WHERE %s = '%s'" % (reviewTBL, newID, basinID)
cursor.execute(pSQL)
record = cursor.fetchone()
if record:
    oldVal = record[z]
    val = codeCrosswalk[oldVal]
    record[z] = val
I've tried everything I can think of but cannot get it to work. Am I just misunderstanding the help documentation?
The script runs successfully, but the newly assigned value never seems to commit. I even tried putting print str(record[z]) right after the record[z] = val line to see if the field in the table had the new value, and the new value would print like it worked...but when I check the table after the script has finished, the old values are still in the field.
Much appreciate any insight into this... I was hoping this would work like VBA in MS Access databases, where you can use an ADO Recordset to loop through records in a table and assign values to a field from a variable.
thanks,
Tom
The "Row values can be replaced" from the pyodbc documentation refers to the fact that you can modify the values on the returned row objects, for example to perform some cleanup or conversion before you start using them. It does not mean that these changes will automatically be persisted in the database. You will have to use sql UPDATE statements for that.
