Retrieving an Oracle timestamp using Python's Win32 ODBC module

Given an Oracle table created using the following:
CREATE TABLE Log(WhenAdded TIMESTAMP(6) WITH TIME ZONE);
Using the Python ODBC module from its Win32 extensions (from the win32all package), I tried the following:
import dbi, odbc
connection = odbc.odbc("Driver=Oracle in OraHome92;Dbq=SERVER;Uid=USER;Pwd=PASSWD")
cursor = connection.cursor()
cursor.execute("SELECT WhenAdded FROM Log")
results = cursor.fetchall()
When I run this, I get the following:
Traceback (most recent call last):
...
results = cursor.fetchall()
dbi.operation-error: [Oracle][ODBC][Ora]ORA-00932: inconsistent datatypes: expected %s got %s
in FETCH
The other data types I've tried (VARCHAR2, BLOB) do not cause this problem. Is there a way of retrieving timestamps?

I believe this is a bug in the Oracle ODBC driver. Basically, the Oracle ODBC driver does not support the TIMESTAMP WITH (LOCAL) TIME ZONE data types, only the plain TIMESTAMP data type. As you have discovered, one workaround is in fact to use the TO_CHAR function.
In your example you are not actually reading the time zone information. If you have control of the table you could convert it to a straight TIMESTAMP column. If you don't have control over the table, another solution may be to create a view that converts from TIMESTAMP WITH TIME ZONE to TIMESTAMP via a string - sorry, I don't know if there is a way to convert directly from TIMESTAMP WITH TIME ZONE to TIMESTAMP.
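A minimal sketch of that view idea (the view name LogView is my own invention; for what it's worth, CAST appears to go directly from TIMESTAMP WITH TIME ZONE to TIMESTAMP by discarding the zone, so the string round-trip may not be needed):

```python
# Hypothetical DDL for a view exposing a plain TIMESTAMP column.
# CAST(... AS TIMESTAMP) drops the time-zone component, so clients
# that only understand TIMESTAMP (like this ODBC driver) can read it.
ddl = """
CREATE VIEW LogView AS
SELECT CAST(WhenAdded AS TIMESTAMP) AS WhenAdded
FROM Log
"""
# cursor.execute(ddl)  # requires a live Oracle connection
```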

My solution to this, that I hope can be bettered, is to use Oracle to explicitly convert the TIMESTAMP into a string:
cursor.execute("SELECT TO_CHAR(WhenAdded, 'YYYY-MM-DD HH:MI:SSAM') FROM Log")
This works, but isn't portable. I'd like to use the same Python script against a SQL Server database, so an Oracle-specific solution (such as TO_CHAR) won't work.
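If the string round-trip is acceptable, the Python side can at least stay database-agnostic by parsing the fixed format back into a datetime. This is only a sketch (the helper name parse_log_timestamp is made up); the strptime mask below is the Python equivalent of the TO_CHAR mask above:

```python
from datetime import datetime

# Parse the 'YYYY-MM-DD HH:MI:SSAM' string produced by TO_CHAR back
# into a timezone-naive datetime; %I and %p are the strftime
# equivalents of Oracle's 12-hour HH and the AM/PM meridian.
def parse_log_timestamp(s):
    return datetime.strptime(s, "%Y-%m-%d %I:%M:%S%p")
```

Any database that can emit the same string format can then share this parsing code.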

Related

MySql Python Connector turns INT into int64, but not back again?

I am using the MySQL Python connector to manipulate a database, but running into issues when my queries involve the INT database type. When MySQL retrieves an INT column from the database, it seems to convert to a Python int64. This is fine, except it doesn't convert it back into a usable MySql type.
Here's a reduced example:
This is my MySql schema for table 'test', with Id as datatype INT:
My Python code is below. The 2nd execute (an UPDATE query) fails with this exception:
Exception Thrown: Failed processing format-parameters; Python 'int64' cannot be converted to a MySQL type
If I explicitly convert the 'firstId' parameter (which is reported as type <class 'numpy.int64'>), using int(firstId), the code runs successfully: as per another SO answer. I would have, perhaps naively, assumed that if MySql managed the conversion in one direction, it would manage it in the other. As it is, I don't necessarily know the types that I am getting from my actual query (I'm using Python ... I shouldn't have to know). Does this mean that I will have to type-check all my Python variables before running MySql queries?
I tried changing the table column datatype from INT to BIGINT (a 64-bit INT), but I got the same conversion error. Is there perhaps a 32-bit / 64-bit mismatch in the MySql connector package I am using (mysql-connector-python 8.0.23)?
import mysql.connector as msc
import pandas as pd

def main():
    dbConn = msc.connect(user='********', password='********',
                         host='127.0.0.1',
                         database='********')
    #Open a cursor
    cursor = dbConn.cursor()
    #Find Id of given name
    cursor.execute('SELECT * FROM test WHERE Name = %s', ['Hector'])
    headers = cursor.column_names
    queryVals = list()
    for row in cursor:
        queryVals.append(row)
    cursor.close()
    dfQueryResult = pd.DataFrame(queryVals, columns=headers)
    print(dfQueryResult)
    #Change name
    firstId = dfQueryResult['Id'].iloc[0]
    print('firstId is of type: ', type(firstId))
    cursor = dbConn.cursor()
    cursor.execute('UPDATE test SET Name = %s WHERE Id = %s', ['Graham', firstId]) #This line gives the error
    print(cursor.rowcount, ' rows updated')
    cursor.close()
    dbConn.commit()
    dbConn.close()

main()
First off, hat-tip to @NonoLondon for their comments and investigative work.
A pandas DataFrame stores numbers using NumPy types. In this case, the DataFrame constructor was taking a Python int from the MySQL results and converting it into a numpy.int64 object. When this variable was used again by MySQL, the connector could not convert the numpy.int64 back into a plain Python int.
From other SO articles, I discovered the item() method, available on all NumPy scalar types, which converts them into base Python types. Since all NumPy scalar types derive from the base class numpy.generic, I'm now using the following utility function whenever I extract variables from DataFrames:
import numpy as np

def pyTypeFromNp(val):
    #Unwrap NumPy scalar types into plain Python values
    if isinstance(val, np.generic):
        return val.item()
    return val
Hence the amended line is now:
firstId = pyTypeFromNp(dfQueryResult['Id'].iloc[0])
and the code runs as expected.
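For illustration, the conversion the helper performs can be seen without any database at all (NumPy only; the variable names are mine):

```python
import numpy as np

def pyTypeFromNp(val):
    # Unwrap NumPy scalar types into plain Python values; anything
    # that is not a NumPy scalar is passed through untouched.
    if isinstance(val, np.generic):
        return val.item()
    return val

converted = pyTypeFromNp(np.int64(42))  # a plain Python int the connector accepts
untouched = pyTypeFromNp('Hector')      # non-NumPy values come back unchanged
```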

Format Python timestamp for Teradata DB table

I am working with a Teradata table that has a timestamp column: TIMESTAMP(6) with data that looks like this:
2/14/2019 13:09:51.210000
Currently I have a Python time variable that I want to send into the Teradata table via SQL, that looks like below:
from datetime import datetime
time = datetime.now().strftime("%m/%d/%Y %H:%M:%S")
02/14/2019 13:23:24
How can I reformat that so it inserts correctly? It errors out with:
teradata.api.DatabaseError: (6760, '[22008] [Teradata][ODBC Teradata Driver][Teradata Database](-6760)Invalid timestamp.')
I tried using the same format the Teradata timestamp column uses:
time = datetime.now().strftime("%mm/%dd/%YYYY %HH24:%MI:%SS")
Same error message
Thanks
Figured it out. Turned out to be unrelated to the timestamp, and I had to reformat the DataFrame column it was being read from. Changing the Data Type fixed it:
final_result_set['RECORD_INSERTED'] = pd.to_datetime(final_result_set['RECORD_INSERTED'])
Now when looping through and inserting via SQL, the following worked fine for populating 'RECORD_INSERTED':
time = datetime.now().strftime("%m/%d/%Y %H:%M:%S")
Sorry for the confusion
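For anyone hitting the same thing, the dtype change can be reproduced without Teradata at all (a sketch using a throwaway DataFrame; the column name RECORD_INSERTED matches the question):

```python
import pandas as pd

# Strings in 'M/D/YYYY H:MM:SS.ffffff' form parse fine with to_datetime;
# the resulting datetime64 column is what the insert loop then formats.
df = pd.DataFrame({'RECORD_INSERTED': ['2/14/2019 13:09:51.210000']})
df['RECORD_INSERTED'] = pd.to_datetime(df['RECORD_INSERTED'])
stamp = df['RECORD_INSERTED'].iloc[0].strftime("%m/%d/%Y %H:%M:%S")
```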

Error "ODBC data type -150 is not supported" when connecting sqlalchemy to mssql

I keep running into an odd error when attempting to connect Python sqlalchemy to a mssql server/database. I need to use sqlalchemy as it is (from what I've been told) the only way to connect pandas dataframes to mssql.
I have tried connecting sqlalchemy two different ways:
using full connection string:
import sqlalchemy as sa
import urllib.parse as ulp
usrCnnStr = r'DRIVER={SQL Server};SERVER=myVoid\MYINSTANCE;Trusted_Connection=yes;'
usrCnnStr = ulp.quote_plus(usrCnnStr)
usrCnnStr = "mssql+pyodbc:///?odbc_connect=%s" % usrCnnStr
engine = sa.create_engine(usrCnnStr)
connection = engine.connect()
connection.execute("select getdate() as dt from mydb.dbo.dk_rcdtag")
connection.close()
using DSN:
import sqlalchemy as sa
import urllib.parse as ulp
usrDsn = 'myDb'
params = ulp.quote_plus(usrDsn)
engine = sa.create_engine("mssql+pyodbc://cryo:pass@myDb")
conn = engine.connect()
conn.execute('select getdate() as dt')
conn.close()
Both methods return the same error:
sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('ODBC data type -150 is not supported. Cannot read column .', 'HY000') [SQL: "SELECT SERVERPROPERTY('ProductVersion')"]
I am not sure how to get around this error; when I execute the "SELECT SERVERPROPERTY('ProductVersion')" in mssql, it works fine but comes back with a data type of "sql_variant".
Is there any way to get around this?
This is most certainly a bug introduced in Issue 3814, new in SQLAlchemy 1.1.0, where they introduce SELECT SERVERPROPERTY('ProductVersion') to fetch server version for the pyodbc MSSQL driver. Downgrading to 1.0.15 will make the code work again, but hopefully the SQLAlchemy devs will make the new version lookup scheme work better in a new patch release.
(There is an issue already reported in the SQLAlchemy issue tracker, I would add this comment there, but bitbucket can't log me in.)
I upgraded to sqlalchemy 1.1 today and ran into a similar issue with connections that were working before. Bumped back to 1.0.15 and no problems. Not the best answer, more of a workaround, but it may work if you are on 1.1 and need to get rolling.
If you are unsure of your version:
>>> import sqlalchemy
>>> sqlalchemy.__version__
IIRC, this is because you can't select non-cast functions directly, since they don't return a datatype pyodbc recognizes.
Try this:
SELECT CAST(GETDATE() AS DATETIME) AS dt
Also, you may want to use CURRENT_TIMESTAMP, which is ANSI-standard SQL, instead of GETDATE(): Retrieving date in sql server, CURRENT_TIMESTAMP vs GetDate()
I'm not sure where your product version select is coming from, but hopefully this gets you on the right path. I'll amend the answer if we figure out more.

return type of psycopg2 json field is dict on one machine but str on another

I'm using the python package psycopg2 to get some data from a Postgres db.
My python version is 2.7 and postgres 9.4.
The result of my SQL request has one row and one column, and the value is a string containing JSON-formatted data.
Now I execute the following code:
cur.execute(""" my SQL request""")
rows = cur.fetchall()
print type(rows[0][0])
On my PC, I get a dictionary, meaning that the SQL result is directly loaded as such.
I run the same code on a remote server and I get a string as a result. If I want a dictionary I have to additionally write:
myDict = json.loads(rows[0][0])
Both my PC and the server are running Python 2.7 (though not exactly the same 2.7 build), so I'm a bit confused by this difference in behavior.
Any insight?
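This usually comes down to the psycopg2 version: newer releases register a typecaster that decodes json columns to dicts automatically, while older ones hand back the raw string. Until both machines run the same psycopg2, one hedged workaround is to normalize the value yourself (as_dict is a made-up helper name):

```python
import json

def as_dict(value):
    # Accept either form: a dict (psycopg2 already decoded the json
    # column) or a raw JSON string (older psycopg2 without the caster).
    if isinstance(value, dict):
        return value
    return json.loads(value)
```

Then `myDict = as_dict(rows[0][0])` behaves the same on both machines.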

Handling MySQL BLOBs in Python

I am accessing a MySQL database (V5.5) via a Python program for generating bespoke reports. The standard fields (int, varchar, datetime, etc.) present no problem - in general I am putting these into Python lists for subsequent processing. However one table makes extensive use of BLOBs; these hold either binary, HTML, PDF, PNG, or CSV data. My problem is how to handle them once they are returned by a SELECT statement. The binary data needs further processing and the PNG may need to be inserted into a report.
Thanks in advance...
I have tried various things, including code based on: How to insert / retrieve a file stored as a BLOB in a MySQL db using python
The relevant snippets are:
TheData = open("/home/mjh/Documents/DemoData/sjp.bin", 'rb').read()
sql = "Insert into Results (idResults, idClient, TestDateTime, ResultBinary) \
Values (10, 7, '2014-11-05 14:09:11', %s"
cursor.execute(sql, (TheData,))
This comes back with:
_mysql_exceptions.ProgrammingError: (1064, "You have an error in your SQL
syntax; check the manual that corresponds to your MySQL server version for
the right syntax to use near '' at line 1")
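For what it's worth, the 1064 error here looks like a plain SQL syntax problem rather than BLOB handling: the VALUES list in the question never closes its parenthesis. A sketch of the corrected statement (table and column names as in the question):

```python
# The closing ')' after %s was missing in the original string, which is
# exactly the kind of truncation MySQL reports as "near '' at line 1".
sql = ("Insert into Results (idResults, idClient, TestDateTime, ResultBinary) "
       "Values (10, 7, '2014-11-05 14:09:11', %s)")
# cursor.execute(sql, (TheData,))  # parameter binding handles the raw bytes
```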
