cursor.execute('''CREATE TABLE PEDIDO(
CPEDIDO INT GENERATED AS IDENTITY PRIMARY KEY,
CCLIENTE INT NOT NULL,
FECHA DATE NOT NULL
)''')
valores = []
for i in range(10):
    print(i)
    x = datetime.date(year=2022, month=11, day=i+1)
    valores.append((i, x))
cursor.executemany("INSERT INTO PEDIDO VALUES(?,?);", valores)  # also fails when passing [valores] instead of valores
That results in:
pyodbc.Error: ('HY000', '[HY000] [Devart][ODBC][Oracle]ORA-00947: not enough values\n (0) (SQLExecDirectW)') #when inserting the instances
I have also tried saving the data in two separate tuples, cclients = (...) and dates = (...), and then writing:
cursor.executemany("INSERT INTO PEDIDO VALUES(?,?);", [cclients, dates])
but that doesn't work either.
Name the columns you are inserting into (and note that you may not need to include the statement terminator ; in the query):
INSERT INTO PEDIDO (CCLIENTE, FECHA) VALUES (?,?)
If you do not, Oracle will expect you to provide a value for every column in the table (including CPEDIDO).
You created a table with three columns, but you only provide two values in your SQL: INSERT INTO PEDIDO VALUES(?,?). The column CPEDIDO is defined as GENERATED AS IDENTITY, which means you may provide a value for it, but you do not have to. If you leave this column out, however, your SQL statement has to be adjusted:
INSERT INTO PEDIDO (CCLIENTE, FECHA) VALUES(?,?)
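A minimal corrected sketch of the full insert, assuming the cursor and table from the question:
import datetime

valores = []
for i in range(10):
    # provide CCLIENTE and FECHA only; Oracle generates CPEDIDO
    valores.append((i, datetime.date(year=2022, month=11, day=i + 1)))

cursor.executemany("INSERT INTO PEDIDO (CCLIENTE, FECHA) VALUES (?, ?)", valores)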
I am using psycopg2 to create a table partition and insert some rows into this newly created partition. The table is RANGE partitioned on a date type column.
Psycopg2 code:
from datetime import datetime
from psycopg2 import sql

conn = connect_db()
cursor = conn.cursor()
sysdate = datetime.now().date()
sysdate_str = sysdate.strftime('%Y%m%d')
schema_name = "schema_name"
table_name = "transaction_log"
# Add partition if not exists for current day
sql_add_partition = sql.SQL("""
CREATE TABLE IF NOT EXISTS {table_partition}
PARTITION of {table}
FOR VALUES FROM (%(sysdate)s) TO (maxvalue);
""").format(table = sql.Identifier(schema_name, table_name), table_partition = sql.Identifier(schema_name, f'{table_name}_{sysdate_str}'))
print(cursor.mogrify(sql_add_partition, {'sysdate': sysdate}))
cursor.execute(sql_add_partition, {'sysdate': sysdate})
Formatted output of cursor.mogrify():
CREATE TABLE IF NOT EXISTS "schema_name"."transaction_log_20211001"
PARTITION of "schema_name"."transaction_log"
FOR VALUES FROM ('2021-10-01'::date) TO (maxvalue);
Error received:
ERROR: syntax error at or near "::"
LINE 3: for values FROM ('2021-10-01'::date) TO (maxvalue);
Interestingly enough, psycopg2 appears to be casting the string '2021-10-01' to a date with the "::date" syntax. According to the PostgreSQL documentation this appears to be valid (although there are no explicit examples given in the docs), yet executing the statement both with psycopg2 and in a PostgreSQL query editor yields this syntax error. However, executing the following statement in a PostgreSQL SQL editor is successful:
CREATE TABLE IF NOT EXISTS "schema_name"."transaction_log_20211001"
PARTITION of "schema_name"."transaction_log"
FOR VALUES FROM ('2021-10-01') TO (maxvalue);
Any ideas on how to get psycopg2 to format the query correctly?
To follow up on @LaurenzAlbe's comment:
sql_add_partition = sql.SQL("""
CREATE TABLE IF NOT EXISTS {table_partition}
PARTITION of {table}
FOR VALUES FROM (%(sysdate)s) TO (maxvalue);
""").format(table = sql.Identifier(schema_name, table_name), table_partition = sql.Identifier(schema_name, f'{table_name}_{sysdate_str}'))
print(cursor.mogrify(sql_add_partition, {'sysdate': '2021-10-01'}))
#OR
sql_add_partition = sql.SQL("""
CREATE TABLE IF NOT EXISTS {table_partition}
PARTITION of {table}
FOR VALUES FROM ({sysdate}) TO (maxvalue);
""").format(table = sql.Identifier(schema_name, table_name),
table_partition = sql.Identifier(schema_name, f'{table_name}_{sysdate_str}'),
sysdate=sql.Literal('2021-10-01'))
print(cursor.mogrify(sql_add_partition))
#Formatted as
CREATE TABLE IF NOT EXISTS "schema_name"."transaction_log_20211001"
PARTITION of "schema_name"."transaction_log"
FOR VALUES FROM ('2021-10-01') TO (maxvalue);
Pass the date in as a literal value instead of a date object. psycopg2 does automatic adaptation of date(time) objects to the Postgres date/timestamp types (see "Datetime adaptation" in the docs), which is what is biting you.
UPDATE
Per my comment, the reason why it needs to be a literal is explained in the CREATE TABLE documentation:
Each of the values specified in the partition_bound_spec is a literal, NULL, MINVALUE, or MAXVALUE. Each literal value must be either a numeric constant that is coercible to the corresponding partition key column's type, or a string literal that is valid input for that type.
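Applied to the original code, that means binding a string instead of a datetime.date (a minimal sketch):
# an ISO-format string is adapted to a plain string literal,
# avoiding the ::date cast that the partition bound spec rejects
cursor.execute(sql_add_partition, {'sysdate': sysdate.isoformat()})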
I want to update a column with a SQL Server query in Python. As you can see, I am updating the relevant column as below:
I have a CSV file with some A values of the relevant table, as below:
CSV file: (a.csv)
ART-B-C-ART0015-D-E01
ADC-B-C-ADC00112-V-E01
Python Code: (create Name Value)
import pandas as pd

ff = pd.read_csv("C:\\a.csv", encoding='cp1252')
ff["Name"] = ff["A"].str.extract(r'([a-zA-Z]{3}\d{4,5})') + "-A"
Result of python Code:
ART0015-A
ADC00112-A
Table :
A Name FamilyName
ART-B-C-ART0015-D-E01 NULL ART
ADC-B-C-ADC00112-V-E01 NULL ADC00112
A is also a column in my table, and based on its value (for some of the records, not all of them) I want to update the Name column.
My database is SQL Server, and I don't know how to update the Name column where the A value in the CSV file is equal to A in the table.
Code in Python:
import pyodbc

conn = pyodbc.connect('Driver={SQL Server}; Server=ipaddress; Database=dbname; UID=username; PWD={password};')
cursor = conn.cursor()
conn.commit()
for row in ff.itertuples():
    cursor.execute('''UPDATE database.dbo.tablename SET Name where ?''')
conn.commit()
Expected result in table
A Name FamilyName
ART-B-C-ART0015-D-E01 ART0015-A ART
ADC-B-C-ADC00112-V-E01 ADC00112-A ADC00112
I would use an SQL temp table and an inner join to update the values. This works when updating only a subset of records in your SQL table, and it can also be efficient when updating many records.
SQL Cursor
# reduce number of calls to server on inserts
cursor.fast_executemany = True
Create Temporary Table
statement = "CREATE TABLE #temp_tablename(A VARCHAR(200), Name VARCHAR(200))"
cursor.execute(statement)
Insert Values into a Temporary Table
# insert only the key and the updated values
subset = ff[['A','Name']]
# form SQL insert statement
columns = ", ".join(subset.columns)
values = '('+', '.join(['?']*len(subset.columns))+')'
# insert
statement = "INSERT INTO #temp_tablename ("+columns+") VALUES "+values
insert = [tuple(x) for x in subset.values]
cursor.executemany(statement, insert)
Update Values in Main Table from Temporary Table
statement = '''
UPDATE
    t
SET
    t.Name = u.Name
FROM
    tablename AS t
INNER JOIN
    #temp_tablename AS u
ON
    u.A = t.A;
'''
cursor.execute(statement)
Drop Temporary Table
cursor.execute("DROP TABLE #temp_tablename")
cHandler = myDB.cursor()
cHandler.execute('select UserId,C1,LogDate from DeviceLogs_12_2019')  # data from remote SQL Server database
curs = connection.cursor()
curs.execute("""select * from biometric""") //data from my database table
lst = []
result= cHandler.fetchall()
for row in result:
    lst.append(row)
lst2 = []
result2= curs.fetchall()
for row in result2:
    lst2.append(row)
t = []
r = [elem for elem in lst if not elem in lst2]
for i in r:
    print(i)
    t.append(i)
for i in t:
    frappe.db.sql("""Insert into biometric(UserId,C1,LogDate) select '%s','%s','%s' where not exists(select * from biometric where UserID='%s' and LogDate='%s')""", (i[0], i[1], i[2], i[0], i[2]), as_dict=1)
I am trying the above code to insert data into my table if the record does not exist, but I am getting this error:
pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '1111'',''in'',''2019-12-03 06:37:15'' where not exists(select * from biometric ' at line 1")
Is there anything I am doing wrong, or is there another way to achieve this?
It appears you have potentially four problems:
There is a from clause missing between select and where not exists.
When using a prepared statement you do not enclose your placeholder arguments, %s, within quotes. Your SQL should be:
Insert into biometric(UserId,C1,LogDate) select %s,%s,%s from DUAL where not exists(select * from biometric where UserID=%s and LogDate=%s)
Your loop:
t = []
r = [elem for elem in lst if not elem in lst2]
for i in r:
    print(i)
    t.append(i)
If you are trying to only include rows from the remote site that will not be duplicates, then you should explicitly check the two fields that matter, i.e. UserId and LogDate. But what is the point, since your SQL is taking care of excluding these duplicate rows? Also, what is the point of copying everything from r to t?
But note the behavior even with the above SQL:
If the not exists clause is false, the select %s,%s,%s from DUAL ... returns no rows, so nothing is inserted and the duplicate row is silently skipped.
If your concern is getting an error due to duplicate keys because (UserId, LogDate) is either a UNIQUE or PRIMARY KEY, then add the IGNORE keyword to the INSERT statement: if a row with the key already exists, the insertion will be ignored. But there is no way of knowing, since you have not provided this information:
for i in t:
    frappe.db.sql("Insert IGNORE into biometric(UserId,C1,LogDate) values(%s,%s,%s)", (i[0], i[1], i[2]))
If you do not want multiple rows with the same (UserId, LogDate) combination, then you should define a UNIQUE KEY on these two columns, and then the above SQL should be sufficient. There is also an ON DUPLICATE KEY UPDATE ... variation of the INSERT statement where, if the key exists, you can do an update instead (a sketch follows).
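For example, a sketch of that upsert variant (assuming (UserId, LogDate) is the unique key and C1 is the column to refresh on conflict):
for i in t:
    frappe.db.sql(
        "Insert into biometric(UserId,C1,LogDate) values(%s,%s,%s) "
        "ON DUPLICATE KEY UPDATE C1 = VALUES(C1)",
        (i[0], i[1], i[2]))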
If you don't have a UNIQUE KEY defined on these two columns or you need to print out those rows which are being updated, then you do need to test for the presence of the existing keys. But this would be the way to do it:
cHandler = myDB.cursor()
cHandler.execute('select UserId,C1,LogDate from DeviceLogs_12_2019')  # data from remote SQL Server database
rows = cHandler.fetchall()
curs = connection.cursor()
for row in rows:
    curs.execute("select UserId from biometric where UserId=%s and LogDate=%s", (row[0], row[2]))  # row already in biometric table?
    biometric_row = curs.fetchone()
    if biometric_row is None:  # no, it is not
        print(row)
        frappe.db.sql("Insert into biometric(UserId,C1,LogDate) values(%s, %s, %s)", (row[0], row[1], row[2]))
I am having issues writing WKB (Well Known Binary) into a SQLite database/geopackage (they are the same type of database).
Here is the create statement to generate a polyline dataset.
CREATE TABLE line
(OBJECTID INTEGER primary key autoincrement not null,
Shape MULTILINESTRING)
The WKB is as follows:
bytearray(b'\x01\x05\x00\x00\x00\x02\x00\x00\x00\x01\x02\x00\x00\x00\x04\x00\x00\x00\x96C\x8b\x9a\xd0#`\xc18\xd6\xc5=\xd2\xc5RA\x93\xa9\x825\x02=`\xc1\xb0Y\xf5\xd1\xed\xa6RAZd;W\x913`\xc1 Zd\xfb\x1c\xc0RA\xaa\x82Q%p/`\xc18\xcd;\x92\x19\xaeRA\x01\x02\x00\x00\x00\x03\x00\x00\x00z\xc7)TzD`\xc1\xb8\x8d\x06\xb8S\x9fRA\xbb\xb8\x8d:{"`\xc1X\xec/\xb7\xbb\x9eRA\x00\x91~Eo"`\xc1\x00\xb3{F\xff]RA')
When I go to insert the binary data into the column, I do not get an error; it just inserts NULL.
sql = """INSERT INTO line (Shape)
VALUES(?)"""
val = [binaryfromabove]
cur.execute(sql, val)
I've also tried using sqlite3.Binary():
sql = """INSERT INTO line (Shape)
VALUES(?)"""
val = [sqlite3.Binary(binaryfromabove)]
cur.execute(sql, val)
The row that is inserted always contains NULL.
Any ideas what is going on?
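For comparison, a minimal self-contained sqlite3 round trip with a plain BLOB column does store the bytes, which can help isolate whether the problem lies in the parameter binding or in how the geometry-typed column is handled by the tooling (the sample bytes below are illustrative):
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE line (OBJECTID INTEGER PRIMARY KEY AUTOINCREMENT, Shape BLOB)")

wkb = bytes.fromhex("0105000000")  # illustrative prefix of a WKB MultiLineString
cur.execute("INSERT INTO line (Shape) VALUES (?)", (wkb,))
conn.commit()

print(cur.execute("SELECT Shape FROM line").fetchone())  # (b'\x01\x05\x00\x00\x00',)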
I have a database table with nearly 1 million records.
I have added a new column, called concentration.
I then have a function which calculates 'concentration' for each record.
Now, I want to update the records in batch, so I have been looking at the following questions/answers: https://stackoverflow.com/a/33258295/596841, https://stackoverflow.com/a/23324727/596841 and https://stackoverflow.com/a/39626473/596841, but I am not sure how to do this using unnest...
This is my Python 3 function to do the updates:
import psycopg2

def BatchUpdateWithConcentration(tablename, concentrations):
    connection = psycopg2.connect(dbname=database_name, host=host, port=port, user=username, password=password)
    cursor = connection.cursor()
    sql = """
    update #tablename# as t
    set
        t.concentration = s.con
    FROM unnest(%s) s(con, id)
    WHERE t.id = s.id;
    """
    cursor.execute(sql.replace('#tablename#', tablename.lower()), (concentrations,))
    connection.commit()
    cursor.close()
    connection.close()
concentrations is an array of tuples:
[(3.718244705238561e-16, 108264), (...)]
The first value is a double precision and the second is an integer, representing the concentration and rowid, respectively.
The error I'm getting is:
psycopg2.ProgrammingError: a column definition list is required for functions returning "record"
LINE 5: FROM unnest(ARRAY[(3.718244705238561e-16, 108264), (...
^
Since a Python tuple is adapted by psycopg2 to a PostgreSQL anonymous record, it is necessary to specify the data types:
from unnest(%s) s(con numeric, id integer)
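Applied to the function above, the statement becomes (a sketch; note that Postgres also rejects a table alias on the left-hand side of SET, so the column must be referenced unqualified):
sql = """
update #tablename# as t
set
    concentration = s.con
from unnest(%s) s(con numeric, id integer)
where t.id = s.id;
"""
cursor.execute(sql.replace('#tablename#', tablename.lower()), (concentrations,))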