On conflict, change insert values - python

Given the schema
CREATE TABLE `test` (
`name` VARCHAR(255) NOT NULL,
`text` TEXT NOT NULL,
PRIMARY KEY(`name`)
)
I would like to insert new data in such a way that if a given name exists, the name I am trying to insert is changed. I've checked the SQLite docs, and all I could find is INSERT OR REPLACE, which would change the text of the existing name instead of creating a new element.
The only solution I can think of is
def merge_or_edit(curr, *data_tuples):
    SELECT = """SELECT COUNT(1) FROM `test` WHERE `name`=?"""
    INSERT = """INSERT INTO `test` (`name`, `text`) VALUES (?, ?)"""
    to_insert = []
    for t in data_tuples:
        while curr.execute(SELECT, (t[0],)).fetchone()[0] == 1:
            t = (t[0] + "_", t[1])
        to_insert.append(t)
    curr.executemany(INSERT, to_insert)
But this solution is extremely slow for large sets of data (and will crash if the renaming pushes a name past 255 characters).
What I would like to know is if this functionality is even possible using raw SQLite code.
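As far as I can tell there is no single raw SQLite statement that renames the incoming row on a conflict: INSERT OR REPLACE and the ON CONFLICT clause only act on the existing row. The loop can, however, be made much faster on the Python side by loading the existing names once and checking against an in-memory set instead of issuing one SELECT per candidate. A minimal sketch, assuming the test table above and that the full list of names fits in memory:
def merge_or_edit(curr, *data_tuples):
    # Load every existing name once instead of one SELECT per row
    existing = {row[0] for row in curr.execute("SELECT `name` FROM `test`")}
    to_insert = []
    for name, text in data_tuples:
        # Append "_" until the name is free; track new names so the batch
        # cannot collide with itself either
        while name in existing:
            name += "_"
        existing.add(name)
        to_insert.append((name, text))
    curr.executemany("INSERT INTO `test` (`name`, `text`) VALUES (?, ?)", to_insert)
This does not address the 255-character limit, but it removes the per-row round trips.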

Related

How to insert NULL value in SQL nvarchar column using python

I am trying to insert None from Python into a SQL table, but it is being inserted as the string 'null'.
I am using the below set of code.
query_update = f'''update table set Name = '{name}',Key = '{Key}' where Id = {id} '''
stmt.execute(query_update)
conn.commit()
I am getting values in Python for the variable 'Key', and I am trying to update these values in the column "Key". These two columns are nvarchar columns.
Now sometimes the value we get in the variable 'Key' is Null, as below:
Key = 'Null'
So when I insert the above value into SQL it gets stored as a string instead of NULL, since I had to put quotes around the placeholder in the script while updating from Python.
Can anyone help me here? How can I avoid inserting a string when I mean to insert NULL into SQL from Python?
The problem is that your Null values are actually a string and not a "real" Null.
If you want to insert Null, your key should be equal to None.
You can convert it as follows:
Key = Key if Key != 'Null' else None
I guess the problem here is that you are placing quotes around the placeholders. Have a look at how I think the same can be done.
query_update = 'update table set Name = {name}, Key = {Key} where Id = {id}'
query_update = query_update.format(name='myname', Key='mykey', id=123)
stmt.execute(query_update)
conn.commit()
Don't use dynamic SQL. Use a proper parameterized query:
# test data
name = "Gord"
key = None # not a string
id = 1
query_update = "update table_name set Name = ?, Key = ? where Id = ?"
stmt.execute(query_update, name, key, id)
conn.commit()

Insert Blob Data into Sqlite3 (GeoPackage)

I am having issues writing WKB (Well-Known Binary) into a SQLite database/GeoPackage (they are the same type of database).
Here is the create statement to generate a polyline dataset.
CREATE TABLE line (
    OBJECTID INTEGER primary key autoincrement not null,
    Shape MULTILINESTRING
)
The WKB is as follows:
bytearray(b'\x01\x05\x00\x00\x00\x02\x00\x00\x00\x01\x02\x00\x00\x00\x04\x00\x00\x00\x96C\x8b\x9a\xd0#`\xc18\xd6\xc5=\xd2\xc5RA\x93\xa9\x825\x02=`\xc1\xb0Y\xf5\xd1\xed\xa6RAZd;W\x913`\xc1 Zd\xfb\x1c\xc0RA\xaa\x82Q%p/`\xc18\xcd;\x92\x19\xaeRA\x01\x02\x00\x00\x00\x03\x00\x00\x00z\xc7)TzD`\xc1\xb8\x8d\x06\xb8S\x9fRA\xbb\xb8\x8d:{"`\xc1X\xec/\xb7\xbb\x9eRA\x00\x91~Eo"`\xc1\x00\xb3{F\xff]RA')
When I go to insert the binary data into the column, I do not get an error, it just inserts NULL.
sql = """INSERT INTO line (Shape)
VALUES(?)"""
val = [binaryfromabove]
cur.execute(sql, val)
I've also tried using sqlite3.Binary() as well:
sql = """INSERT INTO line (Shape)
VALUES(?)"""
val = [sqlite3.Binary(binaryfromabove)]
cur.execute(sql, val)
The value that gets inserted is always NULL.
Any ideas what is going on?
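For comparison, here is a minimal standalone sketch (a plain table, not the GeoPackage itself) showing that a bytes/bytearray parameter round-trips as a BLOB through plain sqlite3; if this works but the GeoPackage insert still stores NULL, the problem is more likely in the GeoPackage layer (for example its triggers or the geometry header it expects) than in the Python insert. The wkb value here is a truncated placeholder standing in for the bytearray above:
import sqlite3
# Standalone check: does a parameterized blob insert store a BLOB at all?
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE line ("
    "OBJECTID INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, "
    "Shape MULTILINESTRING)"
)
wkb = bytearray(b"\x01\x05\x00\x00\x00")  # truncated placeholder, not real geometry
cur.execute("INSERT INTO line (Shape) VALUES (?)", (bytes(wkb),))
conn.commit()
print(cur.execute("SELECT typeof(Shape), length(Shape) FROM line").fetchone())
# expected output: ('blob', 5)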

Template for MySQL Table Creation - Python

I am creating one table per user in my database and later storing data specific to that user. Since I have 100+ users, I was looking to automate the table creation process in my Python code.
Much like how I can automate row insertion into a table, I tried to automate table creation.
Row insertion code:
PAYLOAD_TEMPLATE = (
    "INSERT INTO metadata "
    "(to_date, customer_name, subdomain, internal_users) "
    "VALUES (%s, %s, %s, %s)"
)
How I use it:
connection = mysql.connector.connect(**config)
cursor = connection.cursor()
# Opening csv table to feed data
with open('/csv-table-path', 'r') as weeklyInsight:
    reader = csv.DictReader(weeklyInsight)
    for dataDict in reader:
        # Changing date to %m/%d/%Y format
        to_date = dataDict['To'][:5] + "20" + dataDict['To'][5:]
        payload_data = (
            datetime.strptime(to_date, '%m/%d/%Y'),
            dataDict['CustomerName'],
            dataDict['Subdomain'],
            dataDict['InternalUsers']
        )
        cursor.execute(PAYLOAD_TEMPLATE, payload_data)
How can I create a 'TABLE_TEMPLATE' that can be executed in a similar way to create a table?
I wish to create it such that I can execute the template code from my cursor after replacing certain fields with others.
TABLE_TEMPLATE = (
    "CREATE TABLE '{customer_name}' ("  # Change customer_name for new table
    "'To' DATE NOT NULL,"
    "'Users' INT(11) NOT NULL,"
    "'Valid' VARCHAR(3) NOT NULL"
    ") ENGINE=InnoDB"
)
There is no technical¹ need to create a separate table for each client. It is simpler and cleaner to have a single table, e.g.
-- A simple users table; you probably already have something like this
create table users (
    id integer not null auto_increment,
    name varchar(50),
    primary key (id)
);
create table weekly_numbers (
    id integer not null auto_increment,
    -- By referring to the id column of our users table we link each
    -- row with a user
    user_id integer references users(id),
    `date` date not null,
    user_count integer(11) not null,
    primary key (id)
);
Let's add some sample data:
insert into users (id, name)
values (1, 'Kirk'),
(2, 'Picard');
insert into weekly_numbers (user_id, `date`, user_count)
values (1, '2017-06-13', 5),
(1, '2017-06-20', 7),
(2, '2017-06-13', 3),
(1, '2017-06-27', 10),
(2, '2017-06-27', 9),
(2, '2017-06-20', 12);
Now let's look at Captain Kirk's numbers:
select `date`, user_count
from weekly_numbers
-- By filtering on user_id we can see one user's numbers
where user_id = 1
order by `date` asc;
¹There may be business reasons to keep your users' data separate. A common use case would be isolating your clients' data, but in that case a separate database per client seems like a better fit.
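Applied to the CSV loop from the question, the single-table design might be used roughly like this. This is only a sketch, reusing the config object from the question: the get-or-create lookup on users and the mapping of CSV columns onto weekly_numbers are assumptions, not tested code.
import csv
from datetime import datetime
import mysql.connector
SELECT_USER = "SELECT id FROM users WHERE name = %s"
INSERT_USER = "INSERT INTO users (name) VALUES (%s)"
INSERT_NUMBERS = (
    "INSERT INTO weekly_numbers (user_id, `date`, user_count) "
    "VALUES (%s, %s, %s)"
)
connection = mysql.connector.connect(**config)
cursor = connection.cursor()
with open('/csv-table-path', 'r') as weeklyInsight:
    for dataDict in csv.DictReader(weeklyInsight):
        # Get or create the customer row instead of creating a new table
        cursor.execute(SELECT_USER, (dataDict['CustomerName'],))
        rows = cursor.fetchall()
        if rows:
            user_id = rows[0][0]
        else:
            cursor.execute(INSERT_USER, (dataDict['CustomerName'],))
            user_id = cursor.lastrowid
        # Same date fix-up as in the question
        to_date = dataDict['To'][:5] + "20" + dataDict['To'][5:]
        cursor.execute(INSERT_NUMBERS, (
            user_id,
            datetime.strptime(to_date, '%m/%d/%Y'),
            dataDict['InternalUsers'],
        ))
connection.commit()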

Python sqlite3 generate unique identifier

I am trying to add values to a 'pending application table'. This is what I have so far:
appdata = (ID, UNIQUE_IDENTIFIER, time.strftime("%d/%m/%Y"), self.amount_input.get(), self.why_input.get())
self.c.execute('INSERT INTO Pending VALUES (?,?,?,?,?)', appdata)
self.conn.commit()
I need to set a value for 'UNIQUE_IDENTIFIER', which is a primary key in a sqlite database.
How can I generate a unique number for this value?
CREATE TABLE Pending (
    ID STRING REFERENCES StaffTable (ID),
    PendindID STRING PRIMARY KEY,
    RequestDate STRING,
    Amount TEXT,
    Reason TEXT
);
There are two ways to do that:
1 - First
In Python you can use the uuid module, for example:
>>> import uuid
>>> str(uuid.uuid4()).replace('-','')
'5f202bf198e24242b6a11a569fd7f028'
Note: there is a (very small) chance of getting the same string, so check whether a row with the same primary key already exists in the table before saving.
Each call to uuid.uuid4() returns a new random UUID.
for example:
ID = str(uuid.uuid4()).replace('-', '')
cursor.execute("SELECT * FROM Pending WHERE PendindID = ?", (ID,))
data = cursor.fetchall()
if len(data) == 0:
    pass  # then save the new object, as there is no row with the same id
else:
    pass  # create a new ID and check again
2 - Second
In SQLite you can make a composite primary key, according to the SQLite docs:
CREATE TABLE Pending (
    column1,
    column2,
    column3,
    PRIMARY KEY (column1, column2)
);
Uniqueness is then enforced through the combination unique(column1, column2).
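Putting the first approach together with the insert from the question, a rough sketch (assuming the Pending schema above; the helper name and the retry loop are mine):
import time
import uuid
def insert_pending(conn, c, staff_id, amount, reason):
    # Generate a UUID-based PendindID, retrying on the (very unlikely) collision
    while True:
        pending_id = str(uuid.uuid4()).replace('-', '')
        c.execute("SELECT 1 FROM Pending WHERE PendindID = ?", (pending_id,))
        if c.fetchone() is None:
            break  # no existing row uses this id
    appdata = (staff_id, pending_id, time.strftime("%d/%m/%Y"), amount, reason)
    c.execute("INSERT INTO Pending VALUES (?,?,?,?,?)", appdata)
    conn.commit()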

Python Sqlite3 insert operation with a list of column names

Normally, if I want to insert values into a table, I will do something like this (assuming that I know which columns the values I want to insert belong to):
conn = sqlite3.connect('mydatabase.db')
conn.execute("INSERT INTO MYTABLE (ID,COLUMN1,COLUMN2)\
VALUES(?,?,?)",[myid,value1,value2])
But now I have a list of columns (the length of the list may vary) and a list of values for each column in the list.
For example, if I have a table with 10 columns (namely column1, column2, ..., column10) I might have a list of columns that I want to update, say [column3, column4], and a list of values for those columns, [value for column3, value for column4].
How do I insert the values in the list into the individual columns they each belong to?
As far as I know the parameter list in conn.execute works only for values, so we have to use string formatting like this:
import sqlite3
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (a integer, b integer, c integer)')
col_names = ['a', 'b', 'c']
values = [0, 1, 2]
conn.execute('INSERT INTO t (%s, %s, %s) values(?,?,?)'%tuple(col_names), values)
Please note this is a risky approach, since strings interpolated into the SQL text should always be checked for injection attacks. You could, however, pass the list of column names through some sanitizing function before building the insert.
EDITED:
For variables with various length you could try something like
exec_text = 'INSERT INTO t (' + ','.join(col_names) + ') values (' + ','.join(['?'] * len(values)) + ')'
conn.execute(exec_text, values)
# as long as len(col_names) == len(values)
Of course string formatting will work, you just need to be a bit cleverer about it.
col_names = ','.join(col_list)
col_spaces = ','.join(['?'] * len(col_list))
sql = 'INSERT INTO t (%s) values (%s)' % (col_names, col_spaces)
conn.execute(sql, values)
I was looking for a solution to create columns based on a list of unknown / variable length and found this question. However, I managed to find a nicer solution (for me anyway), that's also a bit more modern, so thought I'd include it in case it helps someone:
import sqlite3
def create_sql_db(my_list):
    file = 'my_sql.db'
    table_name = 'table_1'
    init_col = 'id'
    col_type = 'TEXT'
    conn = sqlite3.connect(file)
    c = conn.cursor()
    # CREATE TABLE (IF IT DOESN'T ALREADY EXIST)
    c.execute('CREATE TABLE IF NOT EXISTS {tn} ({nf} {ft})'.format(
        tn=table_name, nf=init_col, ft=col_type))
    # CREATE A COLUMN FOR EACH ITEM IN THE LIST
    for new_column in my_list:
        c.execute('ALTER TABLE {tn} ADD COLUMN "{cn}" {ct}'.format(
            tn=table_name, cn=new_column, ct=col_type))
    conn.close()
my_list = ["Col1", "Col2", "Col3"]
create_sql_db(my_list)
All my data is of the type text, so I just have a single variable "col_type" - but you could for example feed in a list of tuples (or a tuple of tuples, if that's what you're into):
my_other_list = [("ColA", "TEXT"), ("ColB", "INTEGER"), ("ColC", "BLOB")]
and change the CREATE A COLUMN step to:
for tupl in my_other_list:
    new_column = tupl[0]  # "ColA", "ColB", "ColC"
    col_type = tupl[1]    # "TEXT", "INTEGER", "BLOB"
    c.execute('ALTER TABLE {tn} ADD COLUMN "{cn}" {ct}'.format(
        tn=table_name, cn=new_column, ct=col_type))
As a noob, I can't comment on the very succinct, updated solution #ron_g offered. While testing, though, I had to frequently delete the sample database itself, so for any other noobs using this to test, I would advise adding:
c.execute('DROP TABLE IF EXISTS {tn}'.format(
    tn=table_name))
prior to the 'CREATE TABLE ...' portion.
It appears there are multiple instances of .format(tn=table_name, ...) in both 'CREATE TABLE ...' and 'ALTER TABLE ...', so I am trying to figure out whether it's possible to define the format arguments just once (similar to, or inside, the def section).
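One way to avoid repeating the format arguments (a sketch of my own, not part of the original answer) is to collect the shared ones in a dict once, inside the def, and unpack it at each call:
# Shared .format() arguments, defined once inside create_sql_db()
fmt = {'tn': table_name, 'nf': init_col, 'ft': col_type}
c.execute('DROP TABLE IF EXISTS {tn}'.format(**fmt))
c.execute('CREATE TABLE IF NOT EXISTS {tn} ({nf} {ft})'.format(**fmt))
for new_column in my_list:
    c.execute('ALTER TABLE {tn} ADD COLUMN "{cn}" {ct}'.format(cn=new_column, ct=col_type, **fmt))
Unused keys in the dict are simply ignored by str.format, so the same dict works for all three statements.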
