Python - Strip each element of a Tuple

I am retrieving data from a DBF file and I need to generate a SQL script to load the data into a database. I already have this working, but the values are stored in a tuple, and before I create the SQL script I want to strip each item of the tuple. For example, I am getting this:
INSERT INTO my_table (col1,col2,col3) VALUES('Value 1 ', 'TESTE123', ' ADAD ')
And I need to get this:
INSERT INTO my_table (col1,col2,col3) VALUES('Value 1', 'TESTE123', 'ADAD')
For that I am trying with this code:
with dbf.Table(filename) as table:
    for record in table:
        fields = dbf.field_names(table)
        fields = ','.join(fields)
        place_holders = ','.join(['?'] * len(fields))
        values = tuple(record.strip())
        sql = "insert into %s (%s) values(%s)" % ('my_table', fields, values)
And I am getting the following error:
dbf.FieldMissingError: 'STRIP' no such field in table
What do you propose?

dbf.Record is not a str, and doesn't have string methods.
If every field in the record is text (e.g. Character or Memo, not Numeric or Date) then you can:
values = [v.strip() for v in record]
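Building on that, here is a minimal sketch of how the stripped values could feed a parameterized insert (the cursor object is an assumption, not something from the question; the placeholder count is taken from the field list rather than the joined string so it matches the number of columns):
with dbf.Table(filename) as table:
    field_names = dbf.field_names(table)
    fields = ','.join(field_names)
    place_holders = ','.join(['?'] * len(field_names))
    sql = "insert into my_table (%s) values (%s)" % (fields, place_holders)
    for record in table:
        # assumes every field is text, as stated above
        values = [v.strip() for v in record]
        cursor.execute(sql, values)  # cursor: your DB cursor (assumed)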

Related

Change cx_Oracle.LOB result field to text in a result tuple

I am trying to create a simple replication mechanism that fetches records from one database and writes them to another with Python, using cx_Oracle to query the Oracle source and psycopg2 to insert the data into PostgreSQL.
# Fetch from Oracle
cur.execute("select * from INC where sys_updated_on >= to_timestamp(:max, 'YYYY-MM-DD HH24:MI:SS')", {"max": str(maxupd[0])})
# Get all records in chunks
while True:
    # Fetch a subset of records acc. to cur.arraysize
    records = cur.fetchmany(numRows=cur.arraysize)
    # End loop if no more records are available
    if not records:
        break
    # Get row index for unique SYS_ID
    index = cols.index("SYS_ID")
    # Check for each record if already exists or is new
    for rec in records:
        # Fetch the SYS_ID from the current record
        my_sql = sql.SQL("select 'SYS_ID' from inc where 'SYS_ID'= '%%%s%%' " % (rec[index]))
        cur1.execute(my_sql)
        myid = cur1.fetchone()
        # If record does not exist in target, insert record
        if myid is None:
            rec = str(list(rec))[1:-1]
            cur1.execute("""INSERT INTO inc VALUES(%s)""" % (rec))
The insert fails because of the following error:
psycopg2.errors.SyntaxError: syntax error at or near "<"
LINE 1: ...370f17e2c083d6ff7bc2050ea4', 'SAR', None, 0, '4', <cx_Oracle...
This means that the CLOB fields are not read correctly when the result tuple is transformed into the string used in the insert statement.
Printing an item of the tuple directly, like print(rec[28]), gives the content of the field, but the transformation only shows the cx_Oracle placeholder. I tried various approaches, but all failed for my purpose.
Is there a way to get the CLOB content into the string?
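A common fix (a sketch only, not an answer from the thread; it assumes the offending columns arrive as cx_Oracle.LOB objects) is to read each LOB into a plain string before building the row, and to let psycopg2 bind the values instead of interpolating them:
import cx_Oracle

for rec in records:
    # Sketch only: materialize LOB columns as plain Python values via LOB.read();
    # everything else passes through unchanged.
    row = [v.read() if isinstance(v, cx_Oracle.LOB) else v for v in rec]
    # Let psycopg2 bind the values instead of pasting them into the SQL string.
    placeholders = ','.join(['%s'] * len(row))
    cur1.execute("INSERT INTO inc VALUES (" + placeholders + ")", row)
Newer cx_Oracle versions also support an output type handler that fetches CLOBs directly as strings, which avoids the per-value read().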

How to insert a dictionary into a PostgreSQL table with Psycopg2

How do I insert a Python dictionary into a PostgreSQL table? I keep getting the following error, so my query is not formatted correctly:
Error syntax error at or near "To" LINE 1: INSERT INTO bill_summary VALUES(To designate the facility of...
import psycopg2
import json
import psycopg2.extras
import sys

with open('data.json', 'r') as f:
    data = json.load(f)

con = None
try:
    con = psycopg2.connect(database='sanctionsdb', user='dbuser')
    cur = con.cursor(cursor_factory=psycopg2.extras.DictCursor)
    cur.execute("CREATE TABLE bill_summary(title VARCHAR PRIMARY KEY, summary_text VARCHAR, action_date VARCHAR, action_desc VARCHAR)")
    for d in data:
        action_date = d['action-date']
        title = d['title']
        summary_text = d['summary-text']
        action_date = d['action-date']
        action_desc = d['action-desc']
        q = "INSERT INTO bill_summary VALUES(" + str(title) + str(summary_text) + str(action_date) + str(action_desc) + ")"
        cur.execute(q)
    con.commit()
except psycopg2.DatabaseError, e:
    if con:
        con.rollback()
    print 'Error %s' % e
    sys.exit(1)
finally:
    if con:
        con.close()
You should use the dictionary as the second parameter to cursor.execute(). See the example code after this statement in the documentation:
Named arguments are supported too using %(name)s placeholders in the query and specifying the values into a mapping.
So your code may be as simple as this:
with open('data.json', 'r') as f:
    data = json.load(f)

print(data)
""" above prints something like this:
{'title': 'the first action', 'summary-text': 'some summary', 'action-date': '2018-08-08', 'action-desc': 'action description'}
use the json keys as named parameters:
"""
cur = con.cursor()
q = "INSERT INTO bill_summary VALUES(%(title)s, %(summary-text)s, %(action-date)s, %(action-desc)s)"
cur.execute(q, data)
con.commit()
Note also this warning (from the same page of the documentation):
Warning: Never, never, NEVER use Python string concatenation (+) or string parameters interpolation (%) to pass variables to a SQL query string. Not even at gunpoint.
q = "INSERT INTO bill_summary VALUES(" +str(title)+str(summary_text)+str(action_date)+str(action_desc)+")"
You're writing your query the wrong way by concatenating the values; they should instead be comma-separated, like this:
q = "INSERT INTO bill_summary VALUES({0},{1},{2},{3})".format(str(title), str(summary_text), str(action_date), str(action_desc))
Since you're not specifying the column names, I assume the values are in the same order as the columns in the table. There are basically two ways of writing an insert query in PostgreSQL. One is by specifying the column names and their corresponding values, like this:
INSERT INTO TABLE_NAME (column1, column2, column3,...columnN)
VALUES (value1, value2, value3,...valueN);
The other way is to omit the column names, which you can do if you are adding values for all the columns of the table. However, make sure the values are in the same order as the columns. This is the form you have used in your query:
INSERT INTO TABLE_NAME VALUES (value1,value2,value3,...valueN);

Can't insert into SQLite3 using dictionary

I'm trying to make a function which inserts a row into an SQLite3 database using a dictionary.
I found a way to do that here on SO, but unfortunately it does not work, and there is some problem I can't figure out.
def insert_into_table(self, data):
    for key in data.keys():  # ADDING COLUMNS IF NECESSARY
        columns = self.get_column_names()
        column = key.replace(' ', '_')
        if column not in columns:
            self.cur.execute("""ALTER TABLE vsetkyfirmy ADD COLUMN {} TEXT""".format(column.encode('utf-8')))
            self.conn.commit()
    new_data = {}
    for v, k in data.iteritems():  # new dictionary with remade names (*column = key.replace(' ','_'))
        new_data[self.remake_name(v)] = k
    columns = ', '.join(new_data.keys())
    placeholders = ':' + ', :'.join(new_data.keys())
    query = 'INSERT INTO vsetkyfirmy (%s) VALUES (%s)' % (columns, placeholders)
    self.cur.execute(query, new_data)
    self.conn.commit()
EXCEPTION:
self.cur.execute(query, new_data)
sqlite3.ProgrammingError: You did not supply a value for binding 1.
When I print query and new_data everything seems correct:
INSERT INTO vsetkyfirmy (Obchodné_meno, IČ_DPH, Sídlo, PSČ, Spoločník, IČO, Základné_imanie, Konateľ, Ročný_obrat, Dátum_vzniku, Právna_forma) VALUES (:Obchodné_meno, :IČ_DPH, :Sídlo, :PSČ, :Spoločník, :IČO, :Základné_imanie, :Konateľ, :Ročný_obrat, :Dátum_vzniku, :Právna_forma)
{u'Obchodn\xe9_meno': 'PRspol. s r.o.', u'I\u010c_DPH': 'S9540', u'S\xeddlo': u'Bansk\xe1 Bystrica, Orembursk\xe1 2', u'PS\u010c': '97401', u'Spolo\u010dn\xedk': u'Dana Dzurianikov\xe1', u'I\u010cO': '3067', u'Z\xe1kladn\xe9_imanie': u'142899 \u20ac', u'Konate\u013e': 'Miroslav Dz', u'Ro\u010dn\xfd_obrat': '2014: 482 EUR', u'D\xe1tum_vzniku': '01.12.1991 ', u'Pr\xe1vna_forma': u'Spolo\u010dnos\u0165 s ru\u010den\xedm obmedzen\xfdm'}
EDIT: So I've tried to remove ':' from query so it looks like:
INSERT INTO vsetkyfirmy (Obchodné_meno, IČ_DPH, Sídlo, PSČ, Spoločník, IČO, Základné_imanie, Konateľ, Ročný_obrat, Dátum_vzniku, Právna_forma) VALUES (Obchodné_meno, IČ_DPH, Sídlo, PSČ, Spoločník, IČO, Základné_imanie, Konateľ, Ročný_obrat, Dátum_vzniku, Právna_forma)
And it returns that sqlite3.OperationalError: no such column: Obchodné_meno
I don't know where the problem is. Could it be in the encoding?
You are calling encode('utf-8') when creating the table, but not when inserting.
SQLite indeed uses UTF-8, but the sqlite3 module automatically handles conversion from/to Python's internal Unicode string encoding. Don't try to reencode manually.
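A minimal sketch of that fix for the ALTER TABLE step (assuming Python 2 and the same vsetkyfirmy table as in the question); the column name is passed through unchanged:
# Sketch only: use the unicode column name directly; sqlite3 handles the
# encoding, so no encode('utf-8') call is needed.
self.cur.execute('ALTER TABLE vsetkyfirmy ADD COLUMN "{}" TEXT'.format(column))
self.conn.commit()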

Match column value and values in a python list

I want to copy all rows whose ID is contained in a Python list into a new table in SQLite, like this:
cur.execute("INSERT INTO newtable SELECT * FROM oldtable where userid IN " + userids)
Userids is a list that comes from a file:
userids = [line.strip() for line in open('inputfile.txt')]
But I get the following error:
TypeError: cannot concatenate 'str' and 'list' objects
And I have the growing suspicion that this list with about 15000 elements would be too long for the query as well(?). How would I do this without querying once for each id in the list?
Try something like this. First, join the list into a string:
userIdsStr = ', '.join(userids)
Then do something like:
cur.execute('INSERT INTO newtable SELECT * FROM oldtable where userid IN (%s)' % userIdsStr)
I hope this helps
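As a side note (not part of the original answer), the same query can also be written with bound parameters, which avoids quoting issues; with roughly 15000 ids you may hit SQLite's host-parameter limit, in which case the ids can be processed in chunks:
# Sketch only: one "?" placeholder per id, so the values are bound by sqlite3
# rather than pasted into the SQL text.
placeholders = ','.join('?' for _ in userids)
cur.execute(
    "INSERT INTO newtable SELECT * FROM oldtable WHERE userid IN (%s)" % placeholders,
    userids,
)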

Python Sqlite3 insert operation with a list of column names

Normally, if I want to insert values into a table, I will do something like this (assuming that I know which columns the values belong to):
conn = sqlite3.connect('mydatabase.db')
conn.execute("INSERT INTO MYTABLE (ID,COLUMN1,COLUMN2)\
VALUES(?,?,?)",[myid,value1,value2])
But now I have a list of columns (the length of the list may vary) and a list of values, one for each column in the list.
For example, say I have a table with 10 columns (column1, column2, ..., column10). I have a list of columns I want to insert into, say [column3, column4], and a list of values for those columns, [value for column3, value for column4].
How do I insert each value in the list into the column it belongs to?
As far as I know the parameter list in conn.execute works only for values, so we have to use string formatting like this:
import sqlite3
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (a integer, b integer, c integer)')
col_names = ['a', 'b', 'c']
values = [0, 1, 2]
conn.execute('INSERT INTO t (%s, %s, %s) values(?,?,?)'%tuple(col_names), values)
Please note that this is bad practice, since strings passed to the database should always be checked for injection attacks. However, you could pass the list of column names through some validation function before insertion (see the sketch after the edited snippet below).
EDITED:
For a variable number of columns you could try something like:
exec_text = 'INSERT INTO t (' + ','.join(col_names) + ') values(' + ','.join(['?'] * len(values)) + ')'
conn.execute(exec_text, values)
# as long as len(col_names) == len(values)
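A minimal sketch of such a validation helper (this allow-list approach is an assumption, not something spelled out in the answer): reject anything that is not a plain identifier before interpolating it into the SQL text.
import re

def safe_identifier(name):
    # Sketch only: allow letters, digits and underscores, starting with a
    # letter or underscore; raise on anything else.
    if not re.match(r'^[A-Za-z_][A-Za-z0-9_]*$', name):
        raise ValueError('unsafe column name: %r' % name)
    return name

col_names = [safe_identifier(c) for c in col_names]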
Of course string formatting will work, you just need to be a bit cleverer about it.
col_names = ','.join(col_list)
col_spaces = ','.join(['?'] * len(col_list))
sql = 'INSERT INTO t (%s) values(%s)' % (col_names, col_spaces)
conn.execute(sql, values)
I was looking for a solution to create columns based on a list of unknown / variable length and found this question. However, I managed to find a nicer solution (for me anyway) that's also a bit more modern, so I thought I'd include it in case it helps someone:
import sqlite3

def create_sql_db(my_list):
    file = 'my_sql.db'
    table_name = 'table_1'
    init_col = 'id'
    col_type = 'TEXT'

    conn = sqlite3.connect(file)
    c = conn.cursor()

    # CREATE TABLE (IF IT DOESN'T ALREADY EXIST)
    c.execute('CREATE TABLE IF NOT EXISTS {tn} ({nf} {ft})'.format(
        tn=table_name, nf=init_col, ft=col_type))

    # CREATE A COLUMN FOR EACH ITEM IN THE LIST
    for new_column in my_list:
        c.execute('ALTER TABLE {tn} ADD COLUMN "{cn}" {ct}'.format(
            tn=table_name, cn=new_column, ct=col_type))

    conn.close()

my_list = ["Col1", "Col2", "Col3"]
create_sql_db(my_list)
All my data is of the type text, so I just have a single variable "col_type" - but you could for example feed in a list of tuples (or a tuple of tuples, if that's what you're into):
my_other_list = [("ColA", "TEXT"), ("ColB", "INTEGER"), ("ColC", "BLOB")]
and change the CREATE A COLUMN step to:
for tupl in my_other_list:
    new_column = tupl[0]  # "ColA", "ColB", "ColC"
    col_type = tupl[1]    # "TEXT", "INTEGER", "BLOB"
    c.execute('ALTER TABLE {tn} ADD COLUMN "{cn}" {ct}'.format(
        tn=table_name, cn=new_column, ct=col_type))
As a noob, I can't comment on the very succinct, updated solution @ron_g offered. While testing, though, I had to frequently delete the sample database itself, so for any other noobs using this to test, I would advise adding:
c.execute('DROP TABLE IF EXISTS {tn}'.format(
    tn=table_name))
prior to the 'CREATE TABLE ...' portion.
It appears there are multiple instances of .format(tn=table_name, ...) in both 'CREATE TABLE ...' and 'ALTER TABLE ...', so I'm trying to figure out whether it's possible to define that in a single place (similar to, or inside, the def section).
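One way to do that (a sketch only, reusing the pattern from the answer above rather than something from the thread) is to make the table name and column type parameters of the function, so every .format() call draws on the same variables:
import sqlite3

def create_sql_db(my_list, table_name='table_1', col_type='TEXT'):
    # Sketch only: table_name is defined once and reused by every statement.
    conn = sqlite3.connect('my_sql.db')
    c = conn.cursor()
    c.execute('DROP TABLE IF EXISTS {tn}'.format(tn=table_name))
    c.execute('CREATE TABLE IF NOT EXISTS {tn} (id {ct})'.format(
        tn=table_name, ct=col_type))
    for new_column in my_list:
        c.execute('ALTER TABLE {tn} ADD COLUMN "{cn}" {ct}'.format(
            tn=table_name, cn=new_column, ct=col_type))
    conn.commit()
    conn.close()

create_sql_db(["Col1", "Col2", "Col3"])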
