Insert Blob Data into Sqlite3 (GeoPackage) - python

I'm having issues writing WKB (Well-Known Binary) into a SQLite database/GeoPackage (a GeoPackage is a SQLite database).
Here is the create statement to generate a polyline dataset.
CREATE TABLE line
(OBJECTID INTEGER primary key autoincrement not null,
Shape MULTILINESTRING)
The WKB is as follows:
bytearray(b'\x01\x05\x00\x00\x00\x02\x00\x00\x00\x01\x02\x00\x00\x00\x04\x00\x00\x00\x96C\x8b\x9a\xd0#`\xc18\xd6\xc5=\xd2\xc5RA\x93\xa9\x825\x02=`\xc1\xb0Y\xf5\xd1\xed\xa6RAZd;W\x913`\xc1 Zd\xfb\x1c\xc0RA\xaa\x82Q%p/`\xc18\xcd;\x92\x19\xaeRA\x01\x02\x00\x00\x00\x03\x00\x00\x00z\xc7)TzD`\xc1\xb8\x8d\x06\xb8S\x9fRA\xbb\xb8\x8d:{"`\xc1X\xec/\xb7\xbb\x9eRA\x00\x91~Eo"`\xc1\x00\xb3{F\xff]RA')
When I go to insert the binary data into the column, I do not get an error, it just inserts NULL.
sql = """INSERT INTO line (Shape)
VALUES(?)"""
val = [binaryfromabove]
cur.execute(sql, val)
I've also tried wrapping the value in sqlite3.Binary():
sql = """INSERT INTO line (Shape)
VALUES(?)"""
val = [sqlite3.Binary(binaryfromabove)]
cur.execute(sql, val)
The row that gets inserted is always NULL.
Any ideas what is going on?
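No answer was recorded here, but for reference: the standard-library sqlite3 module does store bytes bound as a parameter, so an INSERT like the one above should not produce NULL on its own. A minimal self-contained sketch (an in-memory database and a truncated WKB fragment stand in for the real GeoPackage and geometry):

```python
import sqlite3

# In-memory stand-in for the GeoPackage file.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("""CREATE TABLE line
    (OBJECTID INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
     Shape MULTILINESTRING)""")

# Truncated WKB fragment, for illustration only.
wkb = bytearray(b'\x01\x05\x00\x00\x00\x02\x00\x00\x00')

# Binding bytes (or sqlite3.Binary) stores a BLOB, not NULL.
cur.execute("INSERT INTO line (Shape) VALUES (?)", [bytes(wkb)])
con.commit()

stored = cur.execute("SELECT Shape FROM line").fetchone()[0]
```

If this round trip works but the real file still shows NULL, check that the write is committed on the same connection the NULL is read from. Note also that a conforming GeoPackage expects geometry BLOBs to begin with a GeoPackage binary header (magic bytes 'GP') in front of the raw WKB, so GIS tools may refuse or misread bare WKB even when it was stored correctly.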


pyodbc: ORA-00947 not enough values

cursor.execute('''CREATE TABLE PEDIDO(
    CPEDIDO INT GENERATED AS IDENTITY PRIMARY KEY,
    CCLIENTE INT NOT NULL,
    FECHA DATE NOT NULL
)''')
valores = []
for i in range(10):
    print(i)
    x = datetime.date(year=2022, month=11, day=i + 1)
    valores.append((i, x))
cursor.executemany("INSERT INTO PEDIDO VALUES(?,?);", valores)  # doesn't work with [valores] instead of valores
That results in:
pyodbc.Error: ('HY000', '[HY000] [Devart][ODBC][Oracle]ORA-00947: not enough values\n (0) (SQLExecDirectW)') #when inserting the instances
I have also tried saving the data in two separate tuples, cclients = (...) and dates = (...), and then writing:
cursor.executemany("INSERT INTO PEDIDO VALUES(?,?);", [cclients, dates])
but that doesn't work either.
Name the columns you are inserting into (you also don't need the statement terminator ; inside the query text):
INSERT INTO PEDIDO (CCLIENTE, FECHA) VALUES (?,?)
If you do not then Oracle will expect you to provide a value for every column in the table (including CPEDIDO).
You created a table with three columns, but you only provide two values in your SQL: INSERT INTO PEDIDO VALUES(?,?). The column CPEDIDO is defined as GENERATED AS IDENTITY, which means you may provide a value for it but don't have to. If you leave the column out, though, your SQL statement has to be adjusted accordingly:
INSERT INTO PEDIDO (CCLIENTE, FECHA) VALUES(?,?)
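A minimal sketch of the corrected insert. It runs against an in-memory SQLite database standing in for Oracle (SQLite has no GENERATED AS IDENTITY, so INTEGER PRIMARY KEY plays that role here); the point is the explicit column list:

```python
import datetime
import sqlite3

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
# INTEGER PRIMARY KEY stands in for Oracle's GENERATED AS IDENTITY.
cursor.execute('''CREATE TABLE PEDIDO(
    CPEDIDO INTEGER PRIMARY KEY,
    CCLIENTE INT NOT NULL,
    FECHA DATE NOT NULL
)''')

valores = [(i, datetime.date(2022, 11, i + 1).isoformat()) for i in range(10)]

# Naming the two columns lets the database fill CPEDIDO by itself.
cursor.executemany("INSERT INTO PEDIDO (CCLIENTE, FECHA) VALUES (?, ?)", valores)
conn.commit()

count = cursor.execute("SELECT COUNT(*) FROM PEDIDO").fetchone()[0]
```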

Psycopg2/PostgreSQL 11.9: Syntax error at or near "::" when performing string->date type cast

I am using psycopg2 to create a table partition and insert some rows into this newly created partition. The table is RANGE partitioned on a date type column.
Psycopg2 code:
conn = connect_db()
cursor = conn.cursor()
sysdate = datetime.now().date()
sysdate_str = sysdate.strftime('%Y%m%d')
schema_name = "schema_name"
table_name = "transaction_log"
# Add partition if not exists for current day
sql_add_partition = sql.SQL("""
CREATE TABLE IF NOT EXISTS {table_partition}
PARTITION of {table}
FOR VALUES FROM (%(sysdate)s) TO (maxvalue);
""").format(table=sql.Identifier(schema_name, table_name),
            table_partition=sql.Identifier(schema_name, f'{table_name}_{sysdate_str}'))
print(cursor.mogrify(sql_add_partition, {'sysdate': sysdate}))
cursor.execute(sql_add_partition, {'sysdate': sysdate})
Formatted output of cursor.mogrify():
CREATE TABLE IF NOT EXISTS "schema_name"."transaction_log_20211001"
PARTITION of "schema_name"."transaction_log"
FOR VALUES FROM ('2021-10-01'::date) TO (maxvalue);
Error received:
ERROR: syntax error at or near "::"
LINE 3: for values FROM ('2021-10-01'::date) TO (maxvalue);
Interestingly enough, psycopg2 is casting the string '2021-10-01' to a date with the ::date syntax, and according to the PostgreSQL documentation this appears to be valid (although the docs give no explicit example of it). Yet executing the statement, both through psycopg2 and in a PostgreSQL query editor, yields the syntax error above. However, executing the following statement in a PostgreSQL SQL editor is successful:
CREATE TABLE IF NOT EXISTS "schema_name"."transaction_log_20211001"
PARTITION of "schema_name"."transaction_log"
FOR VALUES FROM ('2021-10-01') TO (maxvalue);
Any ideas on how to get psycopg2 to format the query correctly?
To follow up on LaurenzAlbe's comment:
sql_add_partition = sql.SQL("""
CREATE TABLE IF NOT EXISTS {table_partition}
PARTITION of {table}
FOR VALUES FROM (%(sysdate)s) TO (maxvalue);
""").format(table=sql.Identifier(schema_name, table_name),
            table_partition=sql.Identifier(schema_name, f'{table_name}_{sysdate_str}'))
print(cursor.mogrify(sql_add_partition, {'sysdate': '2021-10-01'}))
#OR
sql_add_partition = sql.SQL("""
CREATE TABLE IF NOT EXISTS {table_partition}
PARTITION of {table}
FOR VALUES FROM ({sysdate}) TO (maxvalue);
""").format(table = sql.Identifier(schema_name, table_name),
table_partition = sql.Identifier(schema_name, f'{table_name}_{sysdate_str}'),
sysdate=sql.Literal('2021-10-01'))
print(cursor.mogrify(sql_add_partition))
#Formatted as
CREATE TABLE IF NOT EXISTS "schema_name"."transaction_log_20211001"
PARTITION of "schema_name"."transaction_log"
FOR VALUES FROM ('2021-10-01') TO (maxvalue);
Pass the date in as a literal value instead of a date object. psycopg2 automatically adapts date(time) objects to PostgreSQL date/timestamp types (see "Datetime adaptation" in the psycopg2 docs), which is what is biting you.
UPDATE
Per my comment, the reason why it needs to be a literal is explained in the PostgreSQL CREATE TABLE documentation:
Each of the values specified in the partition_bound_spec is a literal, NULL, MINVALUE, or MAXVALUE. Each literal value must be either a numeric constant that is coercible to the corresponding partition key column's type, or a string literal that is valid input for that type.

Insert list as singular value in postgresql

I use a SQLAlchemy engine to insert data into a PostgreSQL table. I want to insert a list into a single row, as if the list were one string holding many values.
query = text('INSERT INTO table (list_id, list_name) VALUES ({}, {}) RETURNING'.format(my_list,'list_name'))
result_id = self.engine.execute(query)
When I tried to execute my code, I received this error:
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near "["
LINE 1: ...INTO table (list_id, list_name) VALUES (['str1... ^
[SQL: INSERT INTO table (list_id, list_name) VALUES (['str1', 'str1', 'str1'], 'list_name') RETURNING id]
I tried representing my list as str(my_list), but the result was the same. I also tried str(['str1', 'str1', 'str1']).replace('[', '{').replace(']', '}').
My table query:
CREATE TABLE api_services (
    id SERIAL PRIMARY KEY,
    list_id VARCHAR,
    list_name VARCHAR(255) NOT NULL
);
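No answer was recorded here, but one way out, sketched with the stdlib sqlite3 module standing in for SQLAlchemy/PostgreSQL: collapse the list to a single string in Python and bind it as a parameter, instead of interpolating it into the SQL text with .format() (the interpolation is what produces the syntax error at "["):

```python
import sqlite3

my_list = ['str1', 'str1', 'str1']

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE api_services (
    id INTEGER PRIMARY KEY,
    list_id VARCHAR,
    list_name VARCHAR(255) NOT NULL
)""")

# One comma-separated string; the driver quotes the bound parameter itself.
list_as_str = ",".join(my_list)
conn.execute(
    "INSERT INTO api_services (list_id, list_name) VALUES (?, ?)",
    (list_as_str, 'list_name'),
)
conn.commit()
row = conn.execute("SELECT list_id FROM api_services").fetchone()
```

With SQLAlchemy the same idea would be text("INSERT ... VALUES (:ids, :name) RETURNING id") executed with bound parameters; parameter binding also closes the SQL-injection hole that string formatting opens.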

On conflict, change insert values

Given the schema
CREATE TABLE `test` (
    `name` VARCHAR(255) NOT NULL,
    `text` TEXT NOT NULL,
    PRIMARY KEY(`name`)
)
I would like to insert new data in such a way that if a given name exists, the name I am trying to insert is changed. I've checked the SQLite docs, and all I could find is INSERT OR REPLACE, which would change the text of the existing name instead of creating a new element.
The only solution I can think of is
def merge_or_edit(curr, *data_tuples):
    SELECT = """SELECT COUNT(1) FROM `test` WHERE `name`=?"""
    INSERT = """INSERT INTO `test` (`name`, `text`) VALUES (?, ?)"""
    to_insert = []
    for t in data_tuples:
        while curr.execute(SELECT, (t[0],)).fetchone()[0] == 1:
            t = (t[0] + "_", t[1])
        to_insert.append(t)
    curr.executemany(INSERT, to_insert)
But this solution is extremely slow for large sets of data (and will crash if the rename takes its name to more than 255 chars.)
What I would like to know is if this functionality is even possible using raw SQLite code.
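As for raw SQLite: there is no built-in way to mutate the key on conflict; UPSERT (ON CONFLICT ... DO) can only update the existing row or do nothing. A possible speedup on the Python side is to fetch every existing name once, rename in memory, then issue a single executemany. A sketch, assuming the same test schema; note it also catches collisions between rows of the batch itself, which the per-row SELECT above misses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE `test` (
    `name` VARCHAR(255) NOT NULL,
    `text` TEXT NOT NULL,
    PRIMARY KEY(`name`)
)""")
conn.execute("INSERT INTO `test` VALUES ('a', 'old')")

def merge_or_edit(conn, *data_tuples):
    # One round trip: the set of names already taken.
    taken = {row[0] for row in conn.execute("SELECT `name` FROM `test`")}
    to_insert = []
    for name, text in data_tuples:
        while name in taken:   # also guards against duplicates inside this batch
            name += "_"
        taken.add(name)
        to_insert.append((name, text))
    conn.executemany("INSERT INTO `test` (`name`, `text`) VALUES (?, ?)", to_insert)

merge_or_edit(conn, ("a", "new"), ("a", "newer"))
names = sorted(row[0] for row in conn.execute("SELECT `name` FROM `test`"))
# names is now ['a', 'a_', 'a__']
```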

Update multiple postgresql records using unnest

I have a database table with nearly 1 million records.
I have added a new column, called concentration.
I then have a function which calculates 'concentration' for each record.
Now, I want to update the records in batch, so I have been looking at the following questions/answers: https://stackoverflow.com/a/33258295/596841, https://stackoverflow.com/a/23324727/596841 and https://stackoverflow.com/a/39626473/596841, but I am not sure how to do this using unnest...
This is my Python 3 function to do the updates:
def BatchUpdateWithConcentration(tablename, concentrations):
    connection = psycopg2.connect(dbname=database_name, host=host, port=port, user=username, password=password)
    cursor = connection.cursor()
    sql = """
        update #tablename# as t
        set t.concentration = s.con
        FROM unnest(%s) s(con, id)
        WHERE t.id = s.id;
    """
    cursor.execute(sql.replace('#tablename#', tablename.lower()), (concentrations,))
    connection.commit()
    cursor.close()
    connection.close()
concentrations is an array of tuples:
[(3.718244705238561e-16, 108264), (...)]
The first value is a double precision and the second is an integer, representing the concentration and rowid, respectively.
The error I'm getting is:
psycopg2.ProgrammingError: a column definition list is required for functions returning "record"
LINE 5: FROM unnest(ARRAY[(3.718244705238561e-16, 108264), (...
^
Since a Python tuple is adapted by psycopg2 to a PostgreSQL anonymous record, it is necessary to specify the column data types:
from unnest(%s) s(con numeric, id integer)
Note also that the SET clause must not qualify the target column with the table alias: write set concentration = s.con, not set t.concentration = s.con.
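unnest needs a live PostgreSQL server to demonstrate, so for reference here is the same batch update expressed portably with executemany, sketched against the stdlib sqlite3 module and a hypothetical readings table (with psycopg2 the unnest form above is typically faster, being a single statement):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, concentration DOUBLE)")
conn.executemany("INSERT INTO readings (id) VALUES (?)", [(108264,), (108265,)])

# Same (value, id) tuple shape as the concentrations list in the question.
concentrations = [(3.718244705238561e-16, 108264), (2.5e-16, 108265)]

conn.executemany(
    "UPDATE readings SET concentration = ? WHERE id = ?",
    concentrations,
)
conn.commit()
updated = conn.execute("SELECT concentration FROM readings WHERE id = 108264").fetchone()[0]
```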
