I've written a bit of Python code that takes data from one database (SQL Server 2008) and inserts it into another (MySQL). I am fairly new to Python, so I am struggling to find the errors in my code.
My code is:
import mysql.connector
import pyodbc

def insert_VPS(SageResult):
    query = """
    INSERT INTO SOPOrderReturn(SOPOrderReturnID,DocumentTypeID,DocumentNo,DocumentDate,CustomerID,CustomerTypeID,CurrencyID,SubtotalGoodsValue,TotalNetValue,TotalTaxValue,TotalGrossValue,SourceTypeID,SourceDocumentNo)
    VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)"""
    try:
        mydbVPS = mysql.connector.connect(
            host="serveraddress",
            user="username",
            passwd="password;",
            database="databse"
        )
        VPScursor = mydbVPS.cursor()
        print(SageResult)
        VPScursor.executemany(query, SageResult)
        mydbVPS.commit()
    except Exception as e:
        print('InsertError:', e)
    finally:
        VPScursor.close()
        mydbVPS.close()
def main():
    selectQuery = """
    SELECT TOP 1 [SOPOrderReturnID]
          ,[DocumentTypeID]
          ,[DocumentNo]
          ,[DocumentDate]
          ,[CustomerID]
          ,[CustomerTypeID]
          ,[CurrencyID]
          ,[SubtotalGoodsValue]
          ,[TotalNetValue]
          ,[TotalTaxValue]
          ,[TotalGrossValue]
          ,[SourceTypeID]
          ,[SourceDocumentNo]
    FROM [Live].[dbo].[SOPOrderReturn]
    """
    try:
        mydbSage = pyodbc.connect('Driver={SQL Server};'
                                  'Server=CRMTEST;'
                                  'Database=Live;'
                                  'UID=sa;'
                                  'PWD=password;')
        Sagecursor = mydbSage.cursor()
        Sagecursor.execute(selectQuery)
        SageResult = tuple(Sagecursor.fetchall())
        mydbSage.commit()
    except Exception as e:
        print('MainError:', e)
    finally:
        Sagecursor.close()
        mydbSage.close()
    insert_VPS(SageResult)

if __name__ == '__main__':
    main()
The error I get:
D:\xampp\htdocs\stripe\group\beta>sql-sync.py
((10447177, 0, '0000091897', datetime.datetime(2010, 8, 18, 0, 0), 186150, 1, 1, Decimal('18896.95'), Decimal('18896.95'), Decimal('3779.39'), Decimal('22676.34'), 0, ''),)
InsertError: Failed executing the operation; Could not process parameters
I have tested the select query (but not the INSERT one) and both connections in a more basic script and those all work fine. Can anyone see the issues?
First, a side note: that should have been except Exception as e: instead of except Error as e: (the code above already reflects that fix).
As for the main error: consider workarounds, since the special datetime and Decimal types may not translate cleanly between the pyodbc and MySQL Connector DB-APIs.
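Before the heavier workarounds below, one quick sketch worth trying (the normalize() helper here is hypothetical, not part of either DB-API) is to cast those two types to strings before handing the rows to executemany; MySQL parses the string forms back into DATETIME and DECIMAL on insert:

import datetime
import decimal

def normalize(rows):
    # Hypothetical helper: cast datetime and Decimal values to strings
    # that MySQL can parse; leave everything else untouched.
    return [
        tuple(str(col) if isinstance(col, (datetime.datetime, decimal.Decimal)) else col
              for col in row)
        for row in rows
    ]

SageResult = normalize(SageResult)  # then pass to insert_VPS() as before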
CSV
Export to a CSV file, a popular data-transfer format, and import it with MySQL's fast LOAD DATA method.
import csv
...

# SQL SERVER CSV EXPORT
mydbSage = pyodbc.connect('...')
Sagecursor = mydbSage.cursor()
Sagecursor.execute(selectQuery)
SageResult = Sagecursor.fetchall()

with open("/path/to/SageResult.csv", "w", newline='') as csv_file:
    cw = csv.writer(csv_file)
    cw.writerow([i[0] for i in Sagecursor.description])  # WRITE HEADERS
    cw.writerows(SageResult)                             # WRITE DATA ROWS

# MYSQL CSV IMPORT
mydbVPS = mysql.connector.connect(...)
query = """LOAD DATA LOCAL INFILE '/path/to/SageResult.csv'
           INTO TABLE SOPOrderReturn
           FIELDS TERMINATED BY ','
           ENCLOSED BY '"'
           LINES TERMINATED BY '\\r\\n'
           IGNORE 1 LINES
        """
VPScursor = mydbVPS.cursor()
VPScursor.execute(query)
mydbVPS.commit()
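Note that LOAD DATA LOCAL INFILE must be enabled on the client side; with MySQL Connector/Python that means passing the LOCAL_FILES client flag when creating the connection (the same point comes up in a related answer further down), roughly like this:

from mysql.connector.constants import ClientFlag

# enable client-side LOCAL INFILE support (the server must allow it too)
mydbVPS = mysql.connector.connect(client_flags=[ClientFlag.LOCAL_FILES], ...)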
PyODBC
Run both database connections through the same API. This requires downloading the MySQL ODBC driver for your OS and replacing mysql.connector with pyodbc, which may resolve the handling of those specific types.
# SQL SERVER SELECT QUERY
mydbSage = pyodbc.connect(driver="SQL Server", host="CRMTEST", database="LIVE",
                          uid="sa", pwd="password")
Sagecursor = mydbSage.cursor()
Sagecursor.execute(selectQuery)
SageResult = tuple(Sagecursor.fetchall())

# MYSQL APPEND QUERY
mydbVPS = pyodbc.connect(driver="ODBC Driver Name", host="hostname",
                         uid="username", pwd="password", database="database")
query = """INSERT INTO SOPOrderReturn (SOPOrderReturnID, DocumentTypeID, DocumentNo,
                                       DocumentDate, CustomerID, CustomerTypeID,
                                       CurrencyID, SubtotalGoodsValue, TotalNetValue,
                                       TotalTaxValue, TotalGrossValue, SourceTypeID,
                                       SourceDocumentNo)
           VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
        """
VPScursor = mydbVPS.cursor()
VPScursor.executemany(query, SageResult)   # one parameter tuple per row
mydbVPS.commit()
MS Access
Use the Office app as a medium between the two relational databases. Since you use SQL Server, you may have Microsoft Office available, possibly including its desktop DBMS: MS Access.
Technically, MS Access is like phpMyAdmin: a GUI console to a database (the app is often conflated with its engine). Unlike phpMyAdmin, though, Access is not tied to one database; it can supplement its default Jet/ACE SQL engine with practically any backend data source.
Create two linked tables (using ODBC connections) to the two separate RDBMSs.
Create and then run an INSERT...SELECT query on the linked tables. This query uses Access's SQL dialect, which supports the TOP clause and bracketed names. Results should propagate immediately to the MySQL table.
INSERT INTO SOPOrderReturn_mysql_linked
SELECT TOP 50 [SOPOrderReturnID]
,[DocumentTypeID]
,[DocumentNo]
,[DocumentDate]
,[CustomerID]
,[CustomerTypeID]
,[CurrencyID]
,[SubtotalGoodsValue]
,[TotalNetValue]
,[TotalTaxValue]
,[TotalGrossValue]
,[SourceTypeID]
,[SourceDocumentNo]
FROM SOPOrderReturn_mssql_linked
In fact, you can have Python run the above query after linking the tables in a saved database. The MS Access ODBC driver may already be installed with the Office app; alternatively, it can be installed from the downloadable redistributable.
# LIST OF INSTALLED DRIVERS
print(pyodbc.drivers())

# MS ACCESS APPEND QUERY
constr = r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\Path\To\Database\File.accdb;"
accdb = pyodbc.connect(constr)
cur = accdb.cursor()
cur.execute('<APPEND QUERY USING LINKED TABLES>')   # i.e., the INSERT...SELECT shown above
accdb.commit()
Related
I'm currently trying to query a deltadna database. Their Direct SQL Access guide states that any PostgreSQL-ODBC-compliant tool should be able to connect without issue. Using the guide, I set up an ODBC data source in Windows.
I have tried adding SET NOCOUNT ON, changing various formats for the connection string, and changing the table name to (account).(system).(tablename), all to no avail. The simple query works in Excel, and I have cross-referenced how Excel formats everything as well, so it is all the more strange that I get the no-query problem.
import pyodbc
conn_str = 'DSN=name'
query1 = 'select eventName from table_name limit 5'
conn = pyodbc.connect(conn_str)
conn.setdecoding(pyodbc.SQL_CHAR,encoding='utf-8')
query1_cursor = conn.cursor().execute(query1)
row = query1_cursor.fetchone()
print(row)
Result is ProgrammingError: No results. Previous SQL was not a query.
Try it like this:
import pyodbc
conn_str = 'DSN=name'
query1 = 'select eventName from table_name limit 5'
conn = pyodbc.connect(conn_str)
conn.setdecoding(pyodbc.SQL_CHAR,encoding='utf-8')
query1_cursor = conn.cursor()
query1_cursor.execute(query1)
row = query1_cursor.fetchone()
print(row)
You can't do the cursor declaration and the query execution on the same line; if you do, your query1_cursor variable will point to a cursor object which hasn't executed any query.
So, after coding with pyodbc for a couple of days now, I've run into a roadblock, it seems. My SQL UPDATE will not work, even after putting autocommit=True in the connection statement. Nothing changes in the database at all. All my code is provided below. Please help. (I am using the 2016 version of MS Access; the code runs with no errors; 32-bit Python and Access.)
import pyodbc

# Connect to the Microsoft Access Database
conn_str = (
    r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
    r'DBQ=C:\Users\User_Name\Desktop\Databse\CPLM.accdb'
)
cnxn = pyodbc.connect(conn_str, autocommit=True)
crsr = cnxn.cursor()
crsr2 = cnxn.cursor()

# SQL code used for the for statement
SQL = "SELECT NameProject, Type, Date, Amount, ID FROM InvoiceData WHERE Type=? OR Type=? OR Type IS NULL AND ID > ?"

# Defining variables
date = ""
projectNumber = 12.04
numberDate = []

# Main Code, for each row in the SQL query, update the table
for row in crsr.execute(SQL, "Invoice", "Deposit", "1"):
    print(projectNumber)
    if row.NameProject is not None:
        crsr2.execute("UPDATE Cimt SET LastInvoice='%s' WHERE Num='%s'" % (date, projectNumber))
        cnxn.commit()
        # Just used to find where to input certain data.
        # I also know all the code in this if statement completes due to outside testing
        projectNumber = row.NameProject[:5]
        numberDate.append([projectNumber, date])
    else:
        date = row.Date

print(numberDate)
crsr.commit()
cnxn.commit()
cnxn.close()
I'm trying to store the current time in my access database with the following script:
import pyodbc
import time

connStr = """
DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};
DBQ=C:/Users/QPCS Registration/Documents/DB Tests/PYODBC.accdb;
"""
cnxn = pyodbc.connect(connStr)
cursor = cnxn.cursor()

def TimeStamp():
    RFID = str(input("Please tap your pass on the reader:\n"))
    Current_Time = str(time.strftime("%H:%M"))
    cursor.execute('INSERT INTO Time_Of_Entry(RFID_Number,Time_Tapped) VALUES('+RFID+','+Current_Time+');')
    cnxn.commit()

def Close_DB_Cnxn():
    cnxn.close()

TimeStamp()
Close_DB_Cnxn()
When I run it I get the following error:
pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Microsoft Access Driver] Syntax error (missing operator) in query expression '19:44'. (-3100) (SQLExecDirectW)")
The problem is definitely with 'Current_Time', because when I try to store the variable 'RFID' with the script shown below, it inserts into the database just fine.
cursor.execute('INSERT INTO Time_Of_Entry(RFID_Number) VALUES('+RFID+');')
I have tried changing the data type of the field 'Time_Tapped' in the table 'Time_Of_Entry' from Short Text, to Date/Time;Short Time but that has had no effect.
My machine is running Windows 7 Home Premium 64-bit. I have Microsoft Office 2010 32-bit, and I'm running Python 3.3 32-bit.
Parameterized queries are useful for both INSERT queries and SELECT queries when Date/Time values are involved. Instead of messing with date/time formats and delimiters you just pass the Date/Time value as a parameter and let the data access layer (ODBC in this case) sort it out.
The following example works for me:
from datetime import datetime, time
import pypyodbc
rfid = "GORD123" ## for testing
now = datetime.now()
currentTime = datetime(1899, 12, 30, now.hour, now.minute)
connStr = """
Driver={Microsoft Access Driver (*.mdb, *.accdb)};
Dbq=C:/Users/Public/Database1.accdb;
"""
cnxn = pypyodbc.connect(connStr)
cursor = cnxn.cursor()
sql = """
INSERT INTO Time_Of_Entry (RFID_Number, Time_Tapped) VALUES (?, ?)
"""
parameters = (rfid, currentTime)
cursor.execute(sql, parameters)
cursor.close()
cnxn.commit()
cnxn.close()
Notes:
I used pypyodbc instead of pyodbc because I was using Python 3.4.3 and the latest pyodbc installer for Windows choked when it couldn't find Python 3.3. To get pypyodbc all I had to do was run pip install pypyodbc.
All date/time values in Access have both a date and time component. In order for a date/time value to appear by default as time-only in Access we need to assign it the "magic" date 1899-12-30. (That's the date corresponding to CDate(0) in Access.)
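To make that conversion explicit, here is a small standalone sketch (the access_time() helper is hypothetical, standard library only):

from datetime import datetime, time

def access_time(t):
    # Combine a plain time-of-day with Access's "zero" date, 1899-12-30,
    # so the stored value displays as time-only in Access.
    return datetime(1899, 12, 30, t.hour, t.minute, t.second)

print(access_time(time(19, 44)))  # 1899-12-30 19:44:00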
Writing a script to convert raw data for MySQL import, I have worked with a temporary text file so far, which I later imported manually using the LOAD DATA INFILE... command.
Now I have included the import command in the Python script:
db = mysql.connector.connect(user='root', password='root',
host='localhost',
database='myDB')
cursor = db.cursor()
query = """
LOAD DATA INFILE 'temp.txt' INTO TABLE myDB.values
FIELDS TERMINATED BY ',' LINES TERMINATED BY ';';
"""
cursor.execute(query)
cursor.close()
db.commit()
db.close()
This works, but temp.txt has to be in the database directory, which isn't suitable for my needs.
The next approach is to drop the file and commit directly:
db = mysql.connector.connect(user='root', password='root',
                             host='localhost',
                             database='myDB')

sql = "INSERT INTO values(`timestamp`,`id`,`value`,`status`) VALUES(%s,%s,%s,%s)"
cursor = db.cursor()
for line in lines:
    mode, year, julian, time, *values = line.split(",")
    del values[5]
    date = datetime.strptime(year+julian, "%Y%j").strftime("%Y-%m-%d")
    time = datetime.strptime(time.rjust(4, "0"), "%H%M").strftime("%H:%M:%S")
    timestamp = "%s %s" % (date, time)
    for i, value in enumerate(values[:20], 1):
        args = (timestamp, str(i+28), value, mode)
        cursor.execute(sql, args)
db.commit()
This works as well but takes around four times as long, which is too much. (The same for loop was used in the first version to generate temp.txt.)
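One possible middle ground, a sketch reusing the variables from the snippet above rather than anything benchmarked, is to collect all the parameter tuples and pass them to executemany(), which MySQL Connector/Python can rewrite into multi-row INSERT statements:

args_list = []
for line in lines:
    mode, year, julian, time, *values = line.split(",")
    del values[5]
    date = datetime.strptime(year+julian, "%Y%j").strftime("%Y-%m-%d")
    time = datetime.strptime(time.rjust(4, "0"), "%H%M").strftime("%H:%M:%S")
    timestamp = "%s %s" % (date, time)
    for i, value in enumerate(values[:20], 1):
        args_list.append((timestamp, str(i+28), value, mode))

cursor.executemany(sql, args_list)   # batched into multi-row INSERTs
db.commit()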
My conclusion is that I need a file and the LOAD DATA INFILE command to be faster. To be free to choose where the text file is placed, the LOCAL option seems useful. But with MySQL Connector (1.1.7) there is the known error:
mysql.connector.errors.ProgrammingError: 1148 (42000): The used command is not allowed with this MySQL version
So far I've seen that using MySQLdb instead of MySQL Connector can be a workaround. Activity on MySQLdb, however, seems low, and Python 3.3 support will probably never come.
Is LOAD DATA LOCAL INFILE the way to go, and if so, is there a working connector for Python 3.3 available?
EDIT: After development, the database will run on a server and the script on a client.
I may have missed something important, but can't you just specify the full filename in the first chunk of code?
LOAD DATA INFILE '/full/path/to/temp.txt'
Note the path must be a path on the server.
To use LOAD DATA INFILE with any accessible file, you have to set the LOCAL_FILES client flag when creating the connection:
import mysql.connector
from mysql.connector.constants import ClientFlag
db = mysql.connector.connect(client_flags=[ClientFlag.LOCAL_FILES], <other arguments>)
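On recent versions of MySQL Connector/Python, the same effect can be achieved with the allow_local_infile connection argument (the server must also permit local_infile):

db = mysql.connector.connect(allow_local_infile=True, <other arguments>)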
I am currently connecting to a Sybase 15.7 server using sybpydb. It seems to connect fine:
import sys
sys.path.append('/dba/sybase/ase/15.7/OCS-15_0/python/python26_64r/lib')
sys.path.append('/dba/sybase/ase/15.7/OCS-15_0/lib')
import sybpydb
conn = sybpydb.connect(user='usr', password='pass', servername='serv')
This is working fine: changing any of my connection details results in a connection error.
I then select a database:
curr = conn.cursor()
curr.execute('use db_1')
However, when I now try to run queries, it always returns None:
print curr.execute('select * from table_1')
I have tried running the use and select queries in the same execute, I have tried including go commands after each, I have tried using curr.connection.commit() after each, all with no success. I have confirmed, using dbartisan and isql, that the same queries I am using return entries.
Why am I not getting results from my queries in python?
EDIT:
Just some additional info. In order to get the sybpydb import to work, I had to change two environment variables. I added the lib paths (the same ones that I added to sys.path) to $LD_LIBRARY_PATH, i.e.:
setenv LD_LIBRARY_PATH "$LD_LIBRARY_PATH":dba/sybase/ase/15.7/OCS-15_0/python/python26_64r/lib:/dba/sybase/ase/15.7/OCS-15_0/lib
and I had to change the SYBASE path from 12.5 to 15.7. All this was done in csh.
If I print conn.error(), after every curr.execute(), I get:
("Server message: number(5701) severity(10) state(2) line(0)\n\tChanged database context to 'master'.\n\n", 5701)
I completely understand why you might be confused by the documentation. It doesn't seem to be on par with other DB extensions (e.g. psycopg2).
When connecting with most standard DB extensions, you can specify a database. Then, when you want to get the data back from a SELECT query, you either use fetch (an OK way to do it) or the iterator (the more Pythonic way to do it).
import sybpydb as sybase

conn = sybase.connect(user='usr', password='pass', servername='serv')
cur = conn.cursor()
cur.execute("use db_1")
cur.execute("SELECT * FROM table_1")
print "Query Returned %d row(s)" % cur.rowcount
for row in cur:
    print row

# Alternate less-pythonic way to read query results
# for row in cur.fetchall():
#     print row
Give that a try and let us know if it works.
Python 3.x working solution:
import sybpydb

try:
    conn = sybpydb.connect(dsn="Servername=serv;Username=usr;Password=pass")
    cur = conn.cursor()
    cur.execute('select * from db_1..table_1')
    # table header
    header = tuple(col[0] for col in cur.description)
    print('\t'.join(header))
    print('-' * 60)
    res = cur.fetchall()
    for row in res:
        line = '\t'.join(str(col) for col in row)
        print(line)
    cur.close()
    conn.close()
except sybpydb.Error:
    for err in cur.connection.messages:
        print(f'Error {err[0]}, Value {err[1]}')