How to insert a NULL value into a SQL nvarchar column using Python

I am trying to insert None from Python into a SQL table, but it is being inserted as the string 'null'.
I am using the following code:
query_update = f'''update table set Name = '{name}',Key = '{Key}' where Id = {id} '''
stmt.execute(query_update)
conn.commit()
I am getting values in Python for the variable Key and I am trying to update them into the column "Key". Both columns are nvarchar columns.
Sometimes the value we get for Key is the string 'Null', as below:
Key = 'Null'
So when I run the update from Python, the column is set to that string instead of a real NULL, because I had to put quotes around the placeholder in the query.
Can anyone help me here? How can I avoid inserting a string when I want to insert NULL into SQL from Python?

The problem is that your Null values are actually a string and not a "real" Null.
If you want to insert Null, your Key should be equal to None.
You can convert it as follows:
Key = Key if Key != 'Null' else None
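For illustration, here is a minimal sketch that combines that conversion with a parameterized query (assuming a pyodbc-style cursor stmt and connection conn, as in the question; table_name is a placeholder), so the column receives a real NULL:
# convert the string 'Null' to a real None, then let the driver send NULL
Key = Key if Key != 'Null' else None
stmt.execute("update table_name set Name = ?, Key = ? where Id = ?", name, Key, id)
conn.commit()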

I guess the problem here is that you are placing quotes around the placeholders. Have a look at how I think the same can be done:
query_update = 'update table set Name = {name}, Key = {Key} where Id = {id}'
query_update = query_update.format(name='myname', Key='mykey', id=123)
stmt.execute(query_update)
conn.commit()

Don't use dynamic SQL. Use a proper parameterized query:
# test data
name = "Gord"
key = None # not a string
id = 1
query_update = "update table_name set Name = ?, Key = ? where Id = ?"
stmt.execute(query_update, name, key, id)
conn.commit()

Related

Psycopg2/PostgreSQL 11.9: Syntax error at or near "::" when performing string->date type cast

I am using psycopg2 to create a table partition and insert some rows into this newly created partition. The table is RANGE partitioned on a date type column.
Psycopg2 code:
from datetime import datetime
from psycopg2 import sql

conn = connect_db()
cursor = conn.cursor()
sysdate = datetime.now().date()
sysdate_str = sysdate.strftime('%Y%m%d')
schema_name = "schema_name"
table_name = "transaction_log"
# Add partition if not exists for current day
sql_add_partition = sql.SQL("""
CREATE TABLE IF NOT EXISTS {table_partition}
PARTITION of {table}
FOR VALUES FROM (%(sysdate)s) TO (maxvalue);
""").format(table=sql.Identifier(schema_name, table_name),
            table_partition=sql.Identifier(schema_name, f'{table_name}_{sysdate_str}'))
print(cursor.mogrify(sql_add_partition, {'sysdate': sysdate}))
cursor.execute(sql_add_partition, {'sysdate': sysdate})
Formatted output of cursor.mogrify():
CREATE TABLE IF NOT EXISTS "schema_name"."transaction_log_20211001"
PARTITION of "schema_name"."transaction_log"
FOR VALUES FROM ('2021-10-01'::date) TO (maxvalue);
Error received:
ERROR: syntax error at or near "::"
LINE 3: for values FROM ('2021-10-01'::date) TO (maxvalue);
Interestingly enough, psycopg2 appears to be casting the string '2021-10-01' to a date with the "::date" syntax, and according to the PostgreSQL documentation this appears to be valid (although there are no explicit examples given in the docs). However, executing the statement with psycopg2 or in a PostgreSQL query editor yields the syntax error above, whereas executing the following statement in a PostgreSQL SQL editor succeeds:
CREATE TABLE IF NOT EXISTS "schema_name"."transaction_log_20211001"
PARTITION of "schema_name"."transaction_log"
FOR VALUES FROM ('2021-10-01') TO (maxvalue);
Any ideas on how to get psycopg2 to format the query correctly?
To follow up on @LaurenzAlbe's comment:
sql_add_partition = sql.SQL("""
CREATE TABLE IF NOT EXISTS {table_partition}
PARTITION of {table}
FOR VALUES FROM (%(sysdate)s) TO (maxvalue);
""").format(table=sql.Identifier(schema_name, table_name),
            table_partition=sql.Identifier(schema_name, f'{table_name}_{sysdate_str}'))
print(cursor.mogrify(sql_add_partition, {'sysdate': '2021-10-01'}))

# OR

sql_add_partition = sql.SQL("""
CREATE TABLE IF NOT EXISTS {table_partition}
PARTITION of {table}
FOR VALUES FROM ({sysdate}) TO (maxvalue);
""").format(table=sql.Identifier(schema_name, table_name),
            table_partition=sql.Identifier(schema_name, f'{table_name}_{sysdate_str}'),
            sysdate=sql.Literal('2021-10-01'))
print(cursor.mogrify(sql_add_partition))

# Formatted as:
CREATE TABLE IF NOT EXISTS "schema_name"."transaction_log_20211001"
PARTITION of "schema_name"."transaction_log"
FOR VALUES FROM ('2021-10-01') TO (maxvalue);
Pass the date in as a literal value instead of a date object. psycopg2 does automatic adaptation of date(time) objects to Postgres date/timestamp types (Datetime adaptation), and that is what is biting you.
UPDATE
Per my comment, the reason why it needs to be a literal is explained in the CREATE TABLE documentation:
Each of the values specified in the partition_bound_spec is a literal, NULL, MINVALUE, or MAXVALUE. Each literal value must be either a numeric constant that is coercible to the corresponding partition key column's type, or a string literal that is valid input for that type.

On conflict, change insert values

Given the schema
CREATE TABLE `test` (
    `name` VARCHAR(255) NOT NULL,
    `text` TEXT NOT NULL,
    PRIMARY KEY (`name`)
)
I would like to insert new data in such a way that if a given name exists, the name I am trying to insert is changed. I've checked the SQLite docs, and all I could find is INSERT OR REPLACE, which would change the text of the existing name instead of creating a new element.
The only solution I can think of is
def merge_or_edit(curr, *data_tuples):
    SELECT = """SELECT COUNT(1) FROM `test` WHERE `name`=?"""
    INSERT = """INSERT INTO `test` (`name`, `text`) VALUES (?, ?)"""
    to_insert = []
    for t in data_tuples:
        while curr.execute(SELECT, (t[0],)).fetchone()[0] == 1:
            t = (t[0] + "_", t[1])
        to_insert.append(t)
    curr.executemany(INSERT, to_insert)
But this solution is extremely slow for large sets of data (and will fail if the renaming pushes a name past 255 characters).
What I would like to know is whether this functionality is even possible using raw SQLite code.
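For comparison, here is a sketch of the same rename-on-collision logic done with a single SELECT up front instead of one query per candidate name, which removes most of the round trips (it still does not address the 255-character limit):
def merge_or_edit_fast(curr, *data_tuples):
    # fetch all existing names once instead of querying per candidate
    existing = {row[0] for row in curr.execute("SELECT `name` FROM `test`")}
    to_insert = []
    for name, text in data_tuples:
        while name in existing:
            name = name + "_"
        existing.add(name)  # also guards against collisions inside this batch
        to_insert.append((name, text))
    curr.executemany("INSERT INTO `test` (`name`, `text`) VALUES (?, ?)", to_insert)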

How to set a date value or NULL in a SQL formatted query string for a date column

I have a task to copy data from one database table to another database table using the psycopg2 library in Python.
After fetching a row from the first database I have to insert it into the second table, but the problem I face is that I need to format the query and supply the value for a date column from a variable which may or may not hold a date, so my query looks like the following:
cur.execute("""update table_name set column1 = '%s', column2 = '%s',column_date = '%s'""" % (value1, value2, value_date))
Now value_date may be a date or a None value, so how do I convert this None value to SQL NULL (or something similar) so that it can be stored in the date column?
Note: value1, value2 and value_date are variables containing values.
Psycopg adapts a Python None to a PostgreSQL NULL. It is not necessary to do anything. If there is no processing on the Python side, skip that step and update directly between the tables:
cur.execute("""
    update t1
    set column1 = t2.c1, column2 = t2.c2, column_date = t2.c3
    from t2
    where t1.pk = t2.pk
""")
This is how to pass a date and None to Psycopg:
from datetime import date
query = '''
update t
set (date_1, date_2) = (%s, %s)
'''
# mogrify returns the query string
print (cursor.mogrify(query, (date.today(), None)).decode('utf8'))
cursor.execute(query, (date.today(), None))
query = 'select * from t'
cursor.execute(query)
print (cursor.fetchone())
Output:
update t
set (date_1, date_2) = ('2017-03-16'::date, NULL)
(datetime.date(2017, 3, 16), None)

Insert NULL value into a Sybase database using Python

Let's say we have the following SQL statement:
INSERT INTO myTable VALUES (1, "test")
In my Python code the first value is either an integer or NULL. How can I insert a NULL value using Python?
Code snippet:
var1 = getFirstValue()
var2 = getSecondValue()
qry = "INSERT INTO myTable VALUES (%d,%s)" % (var1,var2)
Whenever var1 is None it throws an error, but I want NULL to be inserted.
Since you marked this question with the Django tag, you should be aware that you don't just write queries and save them in the database yourself; Django handles this.
Just check the tutorial available here: https://docs.djangoproject.com/en/1.6/intro/tutorial01/
Since you mentioned Sybase, you should get the Django driver from https://github.com/sqlanywhere/sqlany-django and modify the DATABASES entry inside your project's settings.py (but first finish the tutorial).
You can use:
qry = "INSERT INTO myTable VALUES ({0}, {1})".format(var1, var2)
Here is one possible option:
var1 = getFirstValue()
var2 = getSecondValue()
var1 = var1 if var1 is not None else 'NULL'
var2 = "'{0}'".format(var2) if var2 is not None else 'NULL'
qry = "INSERT INTO myTable VALUES ({0},{1})".format(var1, var2)
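If the Sybase driver follows the Python DB-API, a parameterized query sidesteps the problem entirely, because a Python None is sent as SQL NULL. A sketch (cursor and conn are assumed to be a DB-API cursor and connection; the placeholder style, ? or %s, depends on the driver's paramstyle):
var1 = getFirstValue()   # may be an int or None
var2 = getSecondValue()
# the driver converts None to NULL, no quoting needed
cursor.execute("INSERT INTO myTable VALUES (?, ?)", (var1, var2))
conn.commit()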

How to use Python to execute MySQL REPLACE INTO with more than 255 variables?

Below is the code I am using:
con.execute("""
    REPLACE INTO T (var1, var2, ..., var300) VALUES (?, ?, ..., ?)""", (var1, var2, ..., var300))
This statement works just fine when I have var1 through var255; once I have more than that it gives me an error.
So far, I have been able to split it into two separate statements:
con.execute("""
    REPLACE INTO T (var1, var2, ..., var150) VALUES (?, ?, ..., ?)""", (var1, var2, ..., var150))
con.execute("""
    REPLACE INTO T (var151, var152, ..., var300) VALUES (?, ?, ..., ?)""", (var151, var152, ..., var300))
This gives me no error, but the final row in table "T" only holds the values from the second execute statement; all of var1, var2, ..., var150 get replaced with NULL.
Have you tried using update instead?
The MySQL documentation says the following:
"REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted"
There does not seem to be any inherent problem using more than 255 columns in MySQL, interfaced with MySQLdb:
import MySQLdb
import config

connection = MySQLdb.connect(
    host=config.HOST, user=config.USER,
    passwd=config.PASS, db='test')
cursor = connection.cursor()

cols = ['col{i:d}'.format(i=i) for i in range(300)]
types = ['int(11)'] * len(cols)
columns = ','.join('{f} {t}'.format(f=f, t=t) for f, t in zip(cols, types))

sql = '''CREATE TABLE IF NOT EXISTS test (
    id INT(11) NOT NULL AUTO_INCREMENT,
    {c},
    PRIMARY KEY (id)
)'''.format(c=columns)
cursor.execute(sql)

sql = '''REPLACE INTO test({c}) VALUES ({v})'''.format(
    c=','.join(cols),
    v=','.join(['%s'] * len(cols)))
cursor.execute(sql, range(300))
connection.commit()  # commit so the replaced row is persisted (MySQLdb autocommit is off by default)
This adds rows to test.test without a problem.
