I have tried to find a resolution, or at least a few troubleshooting steps, to understand why I see these errors when I execute a Python script. No luck so far. I SSH into a Rackspace server to run it.
The errors reference the Oracle database; specifically, cx_Oracle._Error and cx_Oracle.DatabaseError on separate occasions. Below are a few example lines from the server logs. They do not give any more information than that.
Running against Oracle server: prod.somecompanyname + prod123
Error connecting to databases: (<cx_Oracle._Error object at 0x7ffff7fe9af8>,)
Error verifying existing Oracle records
My colleague is able to execute the script successfully and does not encounter the error. I compared our .bash_profile and .bashrc files, and nothing stands out as different. The Oracle server credentials in the script are correct, as is the Oracle environment path. This may be isolated to something on my end, but I cannot figure out where.
Any suggestions on where to look to fix this are appreciated.
def oraclerecords(logger, env, db1Pass, db2Pass, db3Pass, verifyRecordset):
    import cx_Oracle
    retval = None
    try:
        db1UID = 'somedb1name'
        db2UID = 'somedb2name'
        if env == 'p':
            dbServer = 'prod.somecompanyname.com'
            dbSID = 'SIDPR'
        elif env == 's':
            dbServer = 'stage.somecompanyname.com'
            dbSID = 'SIDSTG'
        elif env == 'r':
            dbServer = 'stage.somecompanyname.com'
            dbSID = 'SIDDEV'
        db3UID = 'somedb3name'
        db3SID = 'db3HUB'
        logger.info('Running against Oracle server:' + dbServer + ' SID:' + dbSID)
        connString = (db1UID + '/' + db1Pass + '@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS='
                      '(PROTOCOL=TCP)(HOST=' + dbServer + ')(PORT=1234)))(CONNECT_DATA=(SID=' + dbSID + ')))')
        conndb1 = cx_Oracle.connect(connString)
        curdb1 = conndb1.cursor()
        curdb0 = conndb1.cursor()
        connString = (db2UID + '/' + db2Pass + '@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS='
                      '(PROTOCOL=TCP)(HOST=' + dbServer + ')(PORT=1234)))(CONNECT_DATA=(SID=' + dbSID + ')))')
        conndb2 = cx_Oracle.connect(connString)
        curdb2 = conndb2.cursor()
        connString = (db3UID + '/' + db3Pass + '@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS='
                      '(PROTOCOL=TCP)(HOST=prod.atsomecompany.com)(PORT=1234)))(CONNECT_DATA=(SID=' + db3SID + ')))')
        conndb3 = cx_Oracle.connect(connString)
        curdb3 = conndb3.cursor()
    except Exception as e:
        logger.error('Error connecting to databases: ' + str(e.args))
        return verifyRecordset, 2
The issue with your Python script is on this line:
logger.error('Error connecting to databases: ' + str(e.args))
Perhaps the simplest way to improve it is to replace it with the following:
logger.error('Error connecting to databases: ' + str(e))
I wrote the following short Python script that attempts to connect to an Oracle XE database:
import cx_Oracle

connect_string = "..."

try:
    conn = cx_Oracle.connect(connect_string)
    print("Got connection")
except Exception as e:
    print(str(e.args))
    print(str(e))
I knew this script would raise an exception because the database and the listener it was attempting to connect to were both down. When I ran it, I got the following output:
(<cx_Oracle._Error object at 0x02354308>,)
ORA-12541: TNS:no listener
The first line here doesn't tell me anything helpful, but the second line contains a more useful message.
Hopefully after making this change to your script you will see a more useful error message, which should help you track down what the real problem is here.
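For completeness, the difference between the two forms is easy to reproduce without an Oracle database at all. This is a minimal sketch using a stand-in exception class (FakeOracleError is an illustration, not cx_Oracle's real error type):

```python
class FakeOracleError(Exception):
    """Stand-in for cx_Oracle's error type (illustrative only)."""

try:
    raise FakeOracleError("ORA-12541: TNS:no listener")
except Exception as e:
    # str(e.args) shows the repr of a tuple of argument objects,
    # while str(e) renders the human-readable error message.
    args_text = str(e.args)
    msg_text = str(e)
    print(args_text)  # ('ORA-12541: TNS:no listener',)
    print(msg_text)   # ORA-12541: TNS:no listener
```

With a real cx_Oracle._Error inside the tuple, str(e.args) degrades further into the opaque `(<cx_Oracle._Error object at ...>,)` form seen in the question.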
Related
We have a script which talks to Scylla (a Cassandra drop-in replacement). It is supposed to run against a few thousand systems, and it issues a few thousand queries to get the data it needs. However, after some time the script crashes with this error:
2021-09-29 12:13:48 Could not execute query because of : errors={'x.x.x.x': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=x.x.x.x
2021-09-29 12:13:48 Trying for : 4th time
Traceback (most recent call last):
File ".../db_base.py", line 92, in db_base
ret_val = SESSION.execute(query)
File "cassandra/cluster.py", line 2171, in cassandra.cluster.Session.execute
File "cassandra/cluster.py", line 4062, in cassandra.cluster.ResponseFuture.result
cassandra.OperationTimedOut: errors={'x.x.x.x': 'Client request timeout. See Session.execute[_async](timeout)'}, last_host=x.x.x.x
The DB Connection code:
from cassandra.cluster import Cluster, NoHostAvailable

SESSION = None
# db_connection_error is a custom exception defined elsewhere in the script

def db_base(current_keyspace, query, try_for_times, current_IPs, port):
    global SESSION
    if SESSION is None:
        # This logic ensures the given number of retries on failure to connect to the cluster
        for i in range(try_for_times):
            try:
                cluster = Cluster(contact_points=current_IPs, port=port)
                session = cluster.connect()  # error can be encountered in this command
                break
            except NoHostAvailable:
                print("No Host Available! Trying for : " + str(i) + "th time")
                if i == try_for_times - 1:
                    # shutting down cluster
                    cluster.shutdown()
                    raise db_connection_error("Could not connect to the cluster even in " + str(try_for_times) + " tries! Exiting")
        SESSION = session
    # This logic ensures the given number of retries if the actual query fails
    for i in range(try_for_times):
        try:
            # setting keyspace
            SESSION.set_keyspace(current_keyspace)
            # execute actual query - error can be encountered in this
            ret_val = SESSION.execute(query)
            break
        except Exception as e:
            print("Could not execute query because of : " + str(e))
            print("Trying for : " + str(i) + "th time")
            if i == (try_for_times - 1):
                # shutting down session and cluster
                cluster.shutdown()
                session.shutdown()
                raise db_connection_error("Could not execute query even in " + str(try_for_times) + " tries! Exiting")
    return ret_val
How can this code be improved to sustain running this large number of queries? Or should we look into other tools or approaches to help us get this data? Thank you
The client request timeout indicates that the driver is timing out before the server does or, should the server be overloaded, that Scylla hasn't replied to the driver in time. There are a couple of ways to figure this out:
1 - Ensure that your default_timeout is higher than the Scylla-enforced timeouts in /etc/scylla/scylla.yaml
2 - Check the Scylla logs for any sign of overload. If there is, consider throttling your requests to find a balanced sweet spot where they no longer fail. If the overload continues, consider resizing your instances.
In addition, it is worth mentioning that your sample code does not use prepared statements, token awareness, or the other best practices mentioned under https://docs.datastax.com/en/developer/python-driver/3.19/api/cassandra/policies/ that will certainly improve your overall throughput down the road.
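Throttling can be as simple as retrying with exponential backoff instead of immediately re-issuing a failed query, so each retry arrives after the cluster has had a moment to recover. A minimal, driver-agnostic sketch (the function name and delay values are illustrative, not part of the cassandra driver):

```python
import random
import time

def execute_with_backoff(run_query, tries=4, base_delay=0.5):
    """Call run_query() up to `tries` times, sleeping longer after each failure."""
    for attempt in range(tries):
        try:
            return run_query()
        except Exception:  # in the real script: cassandra.OperationTimedOut
            if attempt == tries - 1:
                raise  # give up after the last attempt
            # exponential backoff plus a little jitter so that many
            # clients do not all retry at exactly the same instant
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

Wrapping `SESSION.execute(query)` in something like this replaces the tight retry loop in the question with one that backs off under load.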
You can find further information on Scylla docs:
https://docs.scylladb.com/using-scylla/drivers/cql-drivers/scylla-python-driver/
and Scylla University
https://university.scylladb.com/courses/using-scylla-drivers/lessons/coding-with-python/
I'm trying to create an alert with Python that connects to a SQL database, executes a SELECT, and based on the results, sends an alert via SMTP. I got it all down; it works when executing the Python script from Visual Studio. But if I compile it to an .exe with pyinstaller (which I need in order to run it from the Windows Task Scheduler on a server that does not have Python installed) and try to run it, it stops on the line that executes the query. Here is the code:
for BM in servers:
    try:
        server = BM[0]
        db = BM[1]
        conn = pyodbc.connect('driver={%s};server=%s;database=%s;uid=%s;pwd=%s' % (driver, server, db, user, password))
        log.write("Conectado" + '\n')
        cursor = conn.cursor()
        log.write("Cursor creado" + '\n')
        try:
            cursor.execute(query)
            log.write("Query ejecutada" + '\n')
        except pyodbc.Error as ex:
            log.write(str(ex) + '\n')
    except pyodbc.Error as ex:
        log.write(str(ex) + '\n')
The line that stops the script is cursor.execute(query). And "query" is a simple SELECT * FROM table, with no conditions.
How can I make this work?
I have mongodb running on azure portal. I can connect to it using nosql booster. I have created a DB TestDb and have added 3 collections to it. I am trying to connect to it using python as below:
mongo_url = 'mongodb://' + <username> + ':' + <password> + '@' + <url> + ':' + port + '/' + admin
client = MongoClient(mongo_url)
db = client.get_database('TestDb')
print(db)
print(db.list_collection_names()) # Error at this line
Below is the output of db:
Database(MongoClient(host=['<name>.documents.azure.com:10255'], document_class=dict, tz_aware=False, connect=True), 'TestDb')
but at db.list_collection_names() it shows error <name>.documents.azure.com:10255: timed out.
I have rechecked everything and it all looks good to me, but I am not sure why I am unable to do the above using Python. Please help. Thanks
It's worth mentioning that you are using Cosmos DB.
Although it's compatible with MongoDB at the wire protocol level, it has its own specifics.
Try to follow Quick Start snippets for Python from Azure Portal. It should have most accurate connection settings.
My best guess is that it requires SSL enabled on the client side:
mongo_url = 'mongodb://' + <username> + ':' + <password> + '@' + <url> + ':' + port + '/' + admin + '?ssl=true'
I was also using a test DB, but nothing worked. This development DB had some dummy configuration, so the solution was adding tlsAllowInvalidCertificates to my URL:
url = f"mongodb://{USERNAME}:{PASSWORD}@{HOST}:{PORT}/{DB_NAME}?authSource=admin&ssl=true&tlsAllowInvalidCertificates=true"
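When building the URL by hand like this, it is also worth percent-escaping the username and password, since characters such as `@` or `/` in credentials break a raw connection string; pymongo's documentation recommends urllib.parse.quote_plus for this. A sketch with placeholder credentials (all values below are made up):

```python
from urllib.parse import quote_plus

user = "myuser"         # placeholder
password = "p@ss/word"  # placeholder containing characters that break a raw URL
host = "example.documents.azure.com"  # placeholder
port = 10255
db_name = "TestDb"

# quote_plus escapes '@' -> '%40' and '/' -> '%2F' so the URI parses correctly
url = "mongodb://%s:%s@%s:%d/%s?authSource=admin&ssl=true" % (
    quote_plus(user), quote_plus(password), host, port, db_name)
```

This avoids a whole class of "looks right but times out or fails auth" URLs when credentials contain special characters.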
I have written a paramiko script to batch-transfer files with sftp. The script works fine on my development machine -- Linux Mint 13, using Python 2.7.
When I moved the script to the production system, I found I had to build Python from scratch, since the system Python was too old. So I built Python 2.7 on it -- CentOS -- and then attempted to run my script. It failed with:
paramiko.SSHException - Errno 110, connection timeout
I've googled for that exception but didn't find anything that seemed to fit. The script seems to 'hang', and the timeout occurs on the paramiko.Transport((host, port)) part.
I thought this strange, so I attempted an sftp using OpenSSH from that system, just to confirm the remote host was responsive. It was, and it worked.
So now I go back to my script and simplify everything down to a bare-bones connection. Still, I get a connection timeout. I don't know how to turn up debugging with paramiko. Any suggestions?
Here's the basic script:
import os.path
import sys
import traceback
import paramiko
host = 'sftp.host.com'
user = 'user'
pw = 'password'
storepath = '/home/ftpservice/download'
is_dir = lambda x: oct(x)[1:3] == '40'
is_file = lambda x: oct(x)[1:3] == '10'
tp = paramiko.Transport((host, 22))
print 'tp is made, connecting '
tp.connect(username=user, password=pw, hostkey=None)
sftp = tp.open_sftp_client()
print 'sftp client made, now listing files'
filelist = sftp.listdir('.')
print filelist
for i in filelist:
    fs = sftp.stat(i)
    print "file is %s " % i
    print "stmode %s" % sftp.stat(i).st_mode
    if is_dir(sftp.stat(i).st_mode):
        print "%s is a directory " % i
    elif is_file(sftp.stat(i).st_mode):
        print "%s is a file " % i
    else:
        print "no clue what %s is " % i
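On the "turn up debug" part of the question: paramiko logs through Python's standard logging module under the "paramiko" logger name, so enabling DEBUG output needs no paramiko-specific code. A minimal sketch:

```python
import logging

# Route log output, including paramiko's transport negotiation
# messages, to stderr at DEBUG level.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s")
logging.getLogger("paramiko").setLevel(logging.DEBUG)
```

paramiko also provides paramiko.util.log_to_file('ssh.log') if a log file is preferable. With DEBUG enabled, the transport prints each step of the connection, which should show exactly where the timeout occurs.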
How can I avoid getting the (undocumented) exception in the following code?
import ldap
import ldap.sasl
connection = ldap.initialize('ldaps://server:636', trace_level=0)
connection.set_option(ldap.OPT_REFERRALS, 0)
connection.protocol_version = 3
sasl_auth = ldap.sasl.external()
connection.sasl_interactive_bind_s('', sasl_auth)
baseDN = 'ou=org.com,ou=xx,dc=xxx,dc=com'
filter = 'objectclass=*'
try:
    result = connection.search_s(baseDN, ldap.SCOPE_SUBTREE, filter)
except ldap.REFERRAL, e:
    print "referral"
except ldap.LDAPError, e:
    print "Ldaperror"
It happens that the baseDN given in the example is a referral. When I run this code, I get "referral" as output.
What I would like is for python-ldap to simply skip or ignore it, instead of throwing this strange exception (I cannot find documentation about it).
(This may or may not help:) The problem appeared when I was searching from a baseDN higher in the tree. When I searched 'ou=xx,dc=xxx,dc=com', it started to freeze in my production environment, while in the development environment everything works fine. Looking into it, I found that it freezes on referral branches. How can I tell python-ldap to ignore referrals? The code above does not do what I want.
This is a working example, see if it helps.
import ldap

def ldap_initialize(remote, port, user, password, use_ssl=False, timeout=None):
    prefix = 'ldap'
    if use_ssl is True:
        prefix = 'ldaps'
    # ask ldap to ignore certificate errors
    ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
    if timeout:
        ldap.set_option(ldap.OPT_NETWORK_TIMEOUT, timeout)
    # do not chase referrals
    ldap.set_option(ldap.OPT_REFERRALS, ldap.OPT_OFF)
    server = prefix + '://' + remote + ':' + '%s' % port
    l = ldap.initialize(server)
    l.simple_bind_s(user, password)
    return l
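Even with referral chasing switched off, search_s can still return referral entries in its result list as tuples whose DN is None (with the referral URL in the second element). A small filtering sketch, runnable on plain Python (the function name is illustrative):

```python
def strip_referrals(results):
    # python-ldap returns referral continuations as (None, url_list)
    # tuples; keep only real entries, whose DN is a string.
    return [(dn, attrs) for dn, attrs in results if dn is not None]
```

Applying this to the output of search_s leaves just the concrete directory entries, so referral branches are silently skipped rather than raised or mishandled.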