Python to Pull Oracle Data in Unicode (Arabic) format - python

I am using cx_Oracle to fetch some data stored as Arabic characters from an Oracle database. Below is how I try to connect to the database. When I try to print the results, especially the columns stored in Arabic, I get something like "?????", which suggests the data was not decoded properly.
Printing a random Arabic string directly in Python works fine, which indicates the problem is in the manner in which I am pulling the data from the database.
connection = cx_Oracle.connect(username, password, instanceName)
wells = getWells(connection)

def getWells(conn):
    cursor = conn.cursor()
    wells = []
    cursor.execute(sql)
    clmns = len(cursor.description)
    for row in cursor.fetchall():
        print row
        well = {}
        for i in range(0, clmns):
            if type(row[i]) is not datetime.datetime:
                well[cursor.description[i][0]] = row[i]
            else:
                well[cursor.description[i][0]] = row[i].isoformat()
        wells.append(well)
    cursor.close()
    connection.close()
    return wells

To force a reset of the default encoding from the environment, you can call the setdefaultencoding function in the sys module.
Because this is not recommended, it is hidden by default, and a reload of sys is required to make it visible.
It is better to fix the encoding in the shell for the user on the host system rather than modifying it in a script.
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
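With cx_Oracle specifically, "?????" output usually points at the Oracle client's character set rather than Python's default encoding. A minimal sketch of that fix, assuming the data is stored correctly and only the client character set is wrong:

```python
import os

# NLS_LANG's character-set part (after the dot) tells the Oracle client
# how to decode strings; AL32UTF8 is Oracle's name for UTF-8. It must be
# set before cx_Oracle initializes the client, i.e. before connecting.
os.environ["NLS_LANG"] = ".AL32UTF8"

# Newer cx_Oracle versions also accept the encoding directly on connect:
# connection = cx_Oracle.connect(username, password, instanceName,
#                                encoding="UTF-8", nencoding="UTF-8")
```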

Related

Can't get full data from SQL Server with python

I'm trying to get a query result as XML data from an MS SQL Server with pyodbc. After the query, I write the data to a new XML file with a unique name. Everything works fine up to this point with small data. The problem is that when I try to read data longer than 2037 characters, I can't get all of it; it gives me just the first 2037 characters.
SQL Server version 15.0.2000.5(SQL Server 2019 Express)
Driver is ODBC Driver 17 for SQL Server
Python version 3.11.1
pyodbc version 4.0.35
Code is running on Windows Server 2016 Standard
SQL Query For XML Data
SELECT
    C.BLKODU AS "BLKODU",
    C.CARIKODU AS "CARIKODU",
    C.TICARI_UNVANI AS "TICARI_UNVANI",
    C.ADI_SOYADI AS "ADI_SOYADI",
    C.VERGI_NO AS "VERGI_NO",
    C.TC_KIMLIK_NO AS "TC_KIMLIK_NO",
    C.VERGI_DAIRESI AS "VERGI_DAIRESI",
    C.CEP_TEL AS "CEP_TEL",
    C.ILI AS "ILI",
    C.ILCESI AS "ILCESI",
    C.ADRESI_1 AS "ADRESI",
    (SELECT
        CHR.BLKODU AS "BLKODU",
        CHR.EVRAK_NO AS "EVRAK_NO",
        CHR.MAKBUZNO AS "MAKBUZ_NO",
        CAST(CHR.TARIHI AS DATE) AS "TARIHI",
        CAST(CHR.VADESI AS DATE) AS "VADESI",
        CHR.MUH_DURUM AS "MUH_DURUM",
        CAST(CHR.KPB_ATUT AS DECIMAL(10, 2)) AS "KPB_ATUT",
        CAST(CHR.KPB_BTUT AS DECIMAL(10, 2)) AS "KPB_BTUT"
    FROM CARIHR AS CHR
    WHERE CHR.BLCRKODU = C.BLKODU
    ORDER BY CHR.TARIHI
    FOR XML PATH('CARIHR'), TYPE)
FROM CARI AS C
WHERE C.CARIKODU = 'CR00001'
FOR XML PATH('CARI')
Python Code
import pyodbc
import uuid
import codecs
import query
import core
conn = pyodbc.connect(core.connection_string, commit=True)
cursor = conn.cursor()
cursor.execute(query.ctr)
row = cursor.fetchval()
id = uuid.uuid4()
xml_file = "./temp/"+str(id)+".xml"
xml = codecs.open(xml_file, "w", "utf-8")
xml.write(row)
xml.close()
I've tried using pymssql and it didn't change anything.
cursor.fetchval() and cursor.fetchone() give me the same result.
cursor.fetchall() gives me the full data, but it returns it as a list. To convert it to a string, I first need to select the first element of the list, so I came up with the idea below. But the result didn't change at all; it still gives only the first 2037 characters.
conn = pyodbc.connect(connect_string, commit=True)
cursor = conn.cursor()
cursor.execute(query.ctr)
row = cursor.fetchall()
data = ','.join(row[0])
id = uuid.uuid4()
xml_file = "./temp/"+str(id)+".xml"
xml = codecs.open(xml_file, "w", "utf-8")
xml.write(data)
xml.close()
FOR XML queries are automatically split across multiple rows by SQL Server when the result is long enough. Some clients, like Management Studio, merge these into a single row, but it is not actually one row.
So you need to concatenate the string yourself:
xml_string = ""
rows = cursor.fetchall()
for row in rows:
    xml_string = xml_string + row[0]
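Put together, the fix looks something like the sketch below; the sample rows are hypothetical stand-ins for what cursor.fetchall() returns on a long FOR XML result:

```python
# Sample chunks standing in for cursor.fetchall(); SQL Server streams
# long FOR XML results across several rows like this.
rows = [("<CARI><CARIKODU>CR00001</CARIKODU>",),
        ("<ADI_SOYADI>Name</ADI_SOYADI></CARI>",)]

# Join every row's first (and only) column, skipping NULL chunks.
xml_string = "".join(row[0] for row in rows if row[0] is not None)
print(xml_string)
```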

Python MySQL cursor fails to fetch rows

I am trying to fetch data from AWS MariaDB:
cursor = self._cnx.cursor()
stmt = ('SELECT * FROM flights')
cursor.execute(stmt)
print(cursor.rowcount)
# prints 2
for z in cursor:
    print(z)
# Does not iterate
row = cursor.fetchone()
# row is None
rows = cursor.fetchall()
# throws 'No result set to fetch from.'
I can verify that the table contains data using MySQL Workbench. Am I missing some step?
EDIT: re 2 answers:
res = cursor.execute(stmt)
# res is None
EDIT:
I created new Python project with a single file:
import mysql.connector
try:
    cnx = mysql.connector.connect(
        host='foobar.rds.amazonaws.com',
        user='devuser',
        password='devpasswd',
        database='devdb'
    )
    cursor = cnx.cursor()
    #cursor = cnx.cursor(buffered=True)
    cursor.execute('SELECT * FROM flights')
    print(cursor.rowcount)
    rows = cursor.fetchall()
except Exception as exc:
    print(exc)
If I run this code with simple cursor, fetchall raises "No result set to fetch from". If I run with buffered cursor, I can see that _rows property of cursor contains my data, but fetchall() returns empty array.
Your issue is that cursor.execute(stmt) returns an object with results and you're not storing that.
results = cursor.execute(stmt)
print(results.fetchone()) # Prints out and pops first row
For future googlers with the same problem, I found a workaround which may help in some cases:
I didn't find the source of the problem, but I found a solution which worked for me.
In my case .fetchone() also returned None whatever I did on my local database (on my own computer). I tried the exact same code with the database on our company's server and somehow it worked. So I copied the complete server database onto my local database (using database dumps) just to get the server settings, and afterwards I could also get data from my local SQL server with the code which hadn't worked before.
I am an SQL newbie, but maybe some odd setting on my local SQL server prevented me from fetching data. Maybe a more experienced SQL user knows this setting and can explain.
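Another thing worth trying, hinted at by the commented-out line in the question, is a buffered cursor, which materializes the whole result set at execute() time. A sketch, assuming mysql-connector-python is installed; the connection parameters are placeholders:

```python
def fetch_flights(host, user, password, database):
    # Imported inside the function so this sketch stays importable even
    # where mysql-connector-python is not installed.
    import mysql.connector

    cnx = mysql.connector.connect(host=host, user=user,
                                  password=password, database=database)
    try:
        # buffered=True fetches rows during execute(), which sidesteps
        # "No result set to fetch from" in some driver/server combinations.
        cur = cnx.cursor(buffered=True)
        cur.execute('SELECT * FROM flights')
        return cur.fetchall()
    finally:
        cnx.close()
```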

Print python database query to HTML document

I wrote some Python to query an Oracle database, and I would like it to print the results in a formatted HTML table when I view it in my browser. I am unsure how to do this.
The python I wrote is as below:
#!/usr/bin/python2.6
import imp,datetime
import cx_Oracle
def index():
    conn_str = u'$USERNAME/$PASSWORD#$HOSTNAME:$PORT/$SERVICENAME'
    conn = cx_Oracle.connect(conn_str)
    c = conn.cursor()
    query = c.execute(u'SELECT $FIELD1, $FIELD2, $FIELD3 FROM $TABLE')
    cur = c.fetchall()
    for row in cur:
        print(str(row))
    conn.close()
A co-worker of mine has a similar script that is immensely more complicated in terms of credentialing and cursor creation, and he uses "write" in Python for output. With his I can at least get output to a webpage, while I can't understand why mine won't show anything at all, let alone my query results. The problem is that his output comes out unformatted, and even if I used his code I don't know how to give it table structure.
For contrast, his:
#!/usr/bin/python2.6
import os
os.environ["ORACLE_BASE"]="/oracle"
os.environ["ORACLE_HOME"]="/oracle/product/11.2.0/client_1"
os.environ["LD_LIBRARY_PATH"]="/oracle/product/11.2.0/client_1/lib:/oracle/product/11.2.0/client_1/dbjava/lib"
os.environ["TNS_ADMIN"]="/oracle/product/11.2.0/client_1/network/admin"
import imp,datetime
import cx_Oracle
DBCONNECTED=""
CONNECT={}
def connect(tnsname):
    global DBCONNECTED
    DB={}
    DB['$DATABASE']=['$USER','$PASSWORD']
    #print str(DB[tnsname][0]+"/"+DB[tnsname][1]+"#"+tnsname)
    conn=cx_Oracle.connect(DB[tnsname][0]+"/"+DB[tnsname][1]+"#"+tnsname)
    DBCONNECTED+=tnsname+":"
    return conn

def getcredentials(env,user):
    env=env.lower()
    CRED={};CRED['$DATABASENAME']={};
    CRED['$DATABASENAME']['$USERNAME']='$PASSWORD'
    if env in CRED and user in CRED[env]:
        return CRED[env][user]
    else:
        return 'ERR'

def returnconnection(dbtns):
    global CONNECT
    if DBCONNECTED.find(dbtns+":")==-1: #connection hasn't been initialized, do that
        CONNECT[dbtns]=connect(dbtns.lower())
    cur= CONNECT[dbtns].cursor()
    return cur

def runq(dbtns,query,bindvar=''):
    query=query.replace("\n"," ")
    cur=returnconnection(dbtns)
    if bindvar=='':
        cur.execute(query)
    else:
        cur.execute(query,bindvar)
    rs=cur.fetchall() #this should be fine for up to several thousand rows
    return rs

def index (req,rssid=""):
    global R; R=req; R.content_type="text/html"
    R.write("""
<!DOCTYPE HTML>
<html><head><title>TABLES</title><META HTTP-EQUIV='Pragma' CONTENT='no-cache'>
</head>
<table>
""")
    dat=runq('$DATABASE','SELECT $FIELD1, $FIELD2, $FIELD3 FROM $TABLE')
    for row in dat:
        R.write(str(row))
    #write footer
    R.write("""
</table>
</body></html>
""")
I like the simplicity of what I wrote, but my colleague is obviously doing something right to get the output to a page. When I try to re-create his use of the global R to invoke R.write, I get a Unicode error regarding content_type, which seems odd.
Regardless, I feel this should be insanely simple. I'm more used to PHP, and this is my first attempt at using Python to create this sort of webpage.
Ideas?
Why do you use unicode strings? Your colleague doesn't. Try without the "u" prefix on your strings.
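As for the table structure itself, wrapping each row in tr/td tags is enough; a minimal sketch, where the sample rows are hypothetical stand-ins for what c.fetchall() returns:

```python
def rows_to_html_table(rows):
    # Build an HTML table from query rows. A real page should also
    # HTML-escape the cell values (cgi.escape on Python 2).
    parts = ["<table>"]
    for row in rows:
        cells = "".join("<td>%s</td>" % col for col in row)
        parts.append("<tr>%s</tr>" % cells)
    parts.append("</table>")
    return "\n".join(parts)

sample = [("FIELD1_VAL", "FIELD2_VAL", "FIELD3_VAL")]
print(rows_to_html_table(sample))
```

The resulting string can be printed after the Content-Type header (CGI style) or passed to R.write in the colleague's script.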

Fast MySQL Import

While writing a script to convert raw data for MySQL import, I have so far worked with a temporary text file, which I later imported manually using the LOAD DATA INFILE... command.
Now I included the import command into the python script:
db = mysql.connector.connect(user='root', password='root',
                             host='localhost',
                             database='myDB')
cursor = db.cursor()
query = """
LOAD DATA INFILE 'temp.txt' INTO TABLE myDB.values
FIELDS TERMINATED BY ',' LINES TERMINATED BY ';';
"""
cursor.execute(query)
cursor.close()
db.commit()
db.close()
This works, but temp.txt has to be in the database directory, which isn't suitable for my needs.
The next approach is skipping the file and inserting the rows directly:
db = mysql.connector.connect(user='root', password='root',
                             host='localhost',
                             database='myDB')
sql = "INSERT INTO values(`timestamp`,`id`,`value`,`status`) VALUES(%s,%s,%s,%s)"
cursor = db.cursor()
for line in lines:
    mode, year, julian, time, *values = line.split(",")
    del values[5]
    date = datetime.strptime(year+julian, "%Y%j").strftime("%Y-%m-%d")
    time = datetime.strptime(time.rjust(4, "0"), "%H%M").strftime("%H:%M:%S")
    timestamp = "%s %s" % (date, time)
    for i, value in enumerate(values[:20], 1):
        args = (timestamp, str(i+28), value, mode)
        cursor.execute(sql, args)
db.commit()
This works as well but takes around four times as long, which is too much. (The same for loop was used in the first version to generate temp.txt.)
My conclusion is that I need a file and the LOAD DATA INFILE command to be faster. To be free to place the text file anywhere, the LOCAL option seems useful. But with MySQL Connector (1.1.7) there is the known error:
mysql.connector.errors.ProgrammingError: 1148 (42000): The used command is not allowed with this MySQL version
So far I've seen that using MySQLdb instead of MySQL Connector can be a workaround. Activity on MySQLdb, however, seems low, and Python 3.3 support will probably never come.
Is LOAD DATA LOCAL INFILE the way to go, and if so, is there a working connector for Python 3.3 available?
EDIT: After development, the database will run on a server and the script on a client.
I may have missed something important, but can't you just specify the full filename in the first chunk of code?
LOAD DATA INFILE '/full/path/to/temp.txt'
Note the path must be a path on the server.
To use LOAD DATA INFILE with any accessible file, you have to set the LOCAL_FILES client flag when creating the connection:
import mysql.connector
from mysql.connector.constants import ClientFlag
db = mysql.connector.connect(client_flags=[ClientFlag.LOCAL_FILES], <other arguments>)
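If LOAD DATA LOCAL INFILE stays unavailable, another option is to keep the INSERT approach but batch it with cursor.executemany, which usually closes much of the speed gap. A sketch of the row-building half, mirroring the loop from the question; the connection and executemany calls are commented out since they need a live server:

```python
from datetime import datetime

def line_to_rows(line):
    # Turn one raw CSV line into the (timestamp, id, value, status)
    # tuples used by the INSERT, mirroring the loop in the question.
    # The rstrip defensively drops a trailing ';' line terminator.
    mode, year, julian, time, *values = line.rstrip(";\n").split(",")
    del values[5]
    date = datetime.strptime(year + julian, "%Y%j").strftime("%Y-%m-%d")
    time = datetime.strptime(time.rjust(4, "0"), "%H%M").strftime("%H:%M:%S")
    timestamp = "%s %s" % (date, time)
    return [(timestamp, str(i + 28), value, mode)
            for i, value in enumerate(values[:20], 1)]

# rows = [r for line in lines for r in line_to_rows(line)]
# cursor.executemany(sql, rows)  # one batched round trip instead of many
# db.commit()
```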

Sybase sybpydb queries not returning anything

I am currently connecting to a Sybase 15.7 server using sybpydb. It seems to connect fine:
import sys
sys.path.append('/dba/sybase/ase/15.7/OCS-15_0/python/python26_64r/lib')
sys.path.append('/dba/sybase/ase/15.7/OCS-15_0/lib')
import sybpydb
conn = sybpydb.connect(user='usr', password='pass', servername='serv')
This works fine; changing any of my connection details results in a connection error.
I then select a database:
curr = conn.cursor()
curr.execute('use db_1')
however, now when I try to run queries, it always returns None
print curr.execute('select * from table_1')
I have tried running the use and select queries in the same execute call, I have tried including go commands after each, and I have tried using curr.connection.commit() after each, all with no success. I have confirmed, using DBArtisan and isql, that the same queries return entries.
Why am I not getting results from my queries in python?
EDIT:
Just some additional info. In order to get the sybpydb import to work, I had to change two environment variables. I added the lib paths (the same ones that I added to sys.path) to $LD_LIBRARY_PATH, i.e.:
setenv LD_LIBRARY_PATH "$LD_LIBRARY_PATH":dba/sybase/ase/15.7/OCS-15_0/python/python26_64r/lib:/dba/sybase/ase/15.7/OCS-15_0/lib
and I had to change the SYBASE path from 12.5 to 15.7. All this was done in csh.
If I print conn.error(), after every curr.execute(), I get:
("Server message: number(5701) severity(10) state(2) line(0)\n\tChanged database context to 'master'.\n\n", 5701)
I completely understand why you might be confused by the documentation. It doesn't seem to be on par with other DB extensions (e.g. psycopg2).
When connecting with most standard DB extensions you can specify a database. Then, when you want to get the data back from a SELECT query, you either use the fetch calls (an OK way to do it) or the iterator (the more Pythonic way to do it).
import sybpydb as sybase

conn = sybase.connect(user='usr', password='pass', servername='serv')
cur = conn.cursor()
cur.execute("use db_1")
cur.execute("SELECT * FROM table_1")
print "Query Returned %d row(s)" % cur.rowcount
for row in cur:
    print row

# Alternate less-pythonic way to read query results
# for row in cur.fetchall():
#     print row
Give that a try and let us know if it works.
Python 3.x working solution:
import sybpydb

try:
    conn = sybpydb.connect(dsn="Servername=serv;Username=usr;Password=pass")
    cur = conn.cursor()
    cur.execute('select * from db_1..table_1')
    # table header
    header = tuple(col[0] for col in cur.description)
    print('\t'.join(header))
    print('-' * 60)
    res = cur.fetchall()
    for row in res:
        line = '\t'.join(str(col) for col in row)
        print(line)
    cur.close()
    conn.close()
except sybpydb.Error:
    for err in cur.connection.messages:
        print(f'Error {err[0]}, Value {err[1]}')
