I'm using this block to test authentication and would like to know if you have a trick to only display "Failed to establish a new connection: [Errno 61] Connection refused", for example.
try:
    client_login = module.client(url='https://blabla.com', token='imagineone', verify=False)
except Exception as ex:
    print("this is one of the errors. {}".format(ex))
instead of getting the whole traceback, when I know it's either a urllib3.exceptions error or a requests.exceptions.ConnectionError.
I have tried pretty_errors but would prefer doing it myself, if possible.
Thanks for reading; any suggestion is welcome.
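A minimal sketch of the kind of thing I am after, assuming the client is built on requests (the URL below is just a placeholder that refuses connections, not the real service, and stands in for the module.client call): catch the specific exception and print only the innermost message from the exception chain.
import requests

# Hedged sketch: the URL is a placeholder that simply refuses connections.
try:
    response = requests.get('http://localhost:9999', timeout=5)
except requests.exceptions.ConnectionError as ex:
    # Walk the exception chain down to the innermost error, e.g.
    # "[Errno 61] Connection refused", and print only that.
    root = ex
    while root.__cause__ is not None or root.__context__ is not None:
        root = root.__cause__ or root.__context__
    print("this is one of the errors. {}".format(root))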
Related
I am currently using python 3.8.8 with version 12.9.0 of azure.storage.blob and 1.14.0 of azure.core.
I am downloading multiple files using the azure.storage.blob package. My code looks something like the following
from azure.storage.blob import ContainerClient
from azure.core.exceptions import ResourceNotFoundError, AzureError
from time import sleep

max_attempts = 5
container_client = ContainerClient(DETAILS)

for file in multiple_files:
    attempts = 0
    while attempts < max_attempts:
        try:
            data = container_client.download_blob(file).readall()
            break
        except ResourceNotFoundError:
            # log missing data
            break
        except AzureError:
            # This is mainly here as connections seem to drop randomly.
            attempts += 1
            sleep(1)
    if attempts >= max_attempts:
        pass  # log connection error
    # do something with the data.
It seems to be running fine, and I don't see any loss of data. However, within my terminal I keep getting the message
Unable to stream download: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))
This appears to be a TCP 104 return message but isn't being handled by the azure module. My questions are as follows.
Where is this message coming from? I can't see it in any of the packages I am using.
How do I handle this error better? It doesn't appear to be caught as an exception as it isn't crashing my code.
Can I get this to print to a log?
Where is this message coming from? I can't see it in any of the packages I am using.
It looks like the client was connected to the server, but when it attempted to transfer data it received an Errno 104 "Connection reset by peer" error. This means the other side reset the connection; otherwise the client would have encountered an [Errno 32] Broken pipe exception instead.
How do I handle this error better? It doesn't appear to be caught as an exception as it isn't crashing my code.
One workaround you can try is to use a try/except block to handle that exception:
from socket import error as SocketError
import errno

try:
    response = urllib2.urlopen(request).read()
except SocketError as e:
    if e.errno != errno.ECONNRESET:
        raise  # Not the error we are looking for
    pass  # Handle the error here.
Also try referring to this similar issue, where sudo pip3 install urllib3 solved the problem.
Can I get this to print to a log?
One workaround is to pass the exception instance in the exc_info argument:
import logging

try:
    1/0
except Exception as e:
    logging.error('Error at %s', 'division', exc_info=e)
For more information you can refer to How to log python exception?
Here is a related issue that you can follow up on:
azure storage blob download: ConnectionResetError(104, 'Connection reset by peer')
REFERENCE:
Connection broken: ConnectionResetError(104, 'Connection reset by peer') error while streaming
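On the logging question specifically, if that warning is emitted through Python's standard logging module (the Azure SDK clients log under the 'azure' logger namespace) or through warnings.warn, something like the following should capture it in a file; the handler and file name are only illustrative assumptions.
import logging

# Assumption: the "Unable to stream download" message reaches either the
# 'azure' logger namespace or warnings.warn.
logging.captureWarnings(True)  # route warnings.warn output into logging
handler = logging.FileHandler('blob_downloads.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s'))
for name in ('azure', 'py.warnings'):
    log = logging.getLogger(name)
    log.setLevel(logging.WARNING)
    log.addHandler(handler)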
I was using psycopg2 in a Python script to connect to a Redshift database, and occasionally I receive the error below:
psycopg2.OperationalError: SSL SYSCALL error: EOF detected
This error only happens once in a while; 90% of the time the script works.
I tried to put it into a try/except block to catch the error, but the catching doesn't seem to work. For example, I try to capture the error so that it will automatically send me an email when it happens; however, the email was not sent when the error occurred. Below is my try/except code:
try:
    conn2 = psycopg2.connect(host="localhost", port='5439',
                             database="testing", user="admin", password="admin")
except psycopg2.Error as e:
    print("Unable to connect!")
    print(e.pgerror)
    print(e.diag.message_detail)
    # Call check_row_count function to check today's number of rows and send
    # mail to notify issue
    print("Trigger send mail now")
    import status_mail
    print(status_mail.redshift_failed(YtdDate))
    sys.exit(1)
else:
    print("RedShift Database Connected")
    cur2 = conn2.cursor()
    rowcount = cur2.rowcount
Errors I received in my log:
Traceback (most recent call last):
File "/home/ec2-user/dradis/dradisetl-daily.py", line 579, in
load_from_redshift_to_s3()
File "/home/ec2-user/dradis/dradisetl-daily.py", line 106, in load_from_redshift_to_s3
delimiter as ','; """.format(YtdDate, s3location))
psycopg2.OperationalError: SSL SYSCALL error: EOF detected
So the question is, what causes this error and why isn't my try except block catching it?
From the docs:
exception psycopg2.OperationalError
Exception raised for errors that are related to the database’s
operation and not necessarily under the control of the programmer,
e.g. an unexpected disconnect occurs, the data source name is not
found, a transaction could not be processed, a memory allocation error
occurred during processing, etc.
This is an error which can be a result of many different things.
slow query
the process is running out of memory
other queries running causing tables to be locked indefinitely
running out of disk space
firewall
(You should definitely provide more information about these factors and more code.)
You were connected successfully but the OperationalError happened later.
Try to handle these disconnects in your script:
Put the command you want to execute into a try/except block and try to reconnect if the connection is lost, as in the sketch below.
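A minimal sketch of that reconnect-and-retry pattern (the connection details and query are placeholders, not the poster's code):
import psycopg2

def run_with_reconnect(query, retries=3):
    # Reconnect and retry when the connection drops mid-query with an
    # OperationalError such as "SSL SYSCALL error: EOF detected".
    last_error = None
    for attempt in range(retries):
        try:
            conn = psycopg2.connect(host="localhost", port='5439',
                                    database="testing",
                                    user="admin", password="admin")
            try:
                with conn.cursor() as cur:
                    cur.execute(query)
                    return cur.fetchall()
            finally:
                conn.close()
        except psycopg2.OperationalError as e:
            last_error = e  # connection lost; loop around and reconnect
    raise last_error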
I recently encountered this error. The cause in my case was network instability while working with the database. If the network is down long enough for the socket to detect a timeout, you will see this error; if the downtime is shorter, you won't see any errors.
You can control the keepalive and RTO timeouts using this code sample:
import socket

# conn is the database connection
s = socket.fromfd(conn.fileno(), socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 6)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 2)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 2)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 10000)
You can find more information in this post.
It would be helpful if you attached the actual code you are trying to wrap in try/except. In your attached stack trace it is: File "/home/ec2-user/dradis/dradisetl-daily.py", line 106.
Similar except code works fine for me. Mind you, e.pgerror will be empty if the error occurred on the client side, such as the error in my example; the e.diag object will also be useless in this case.
import psycopg2

try:
    conn = psycopg2.connect('')
except psycopg2.Error as e:
    print('Unable to connect!\n{0}'.format(e))
else:
    print('Connected!')
Maybe it will be helpful for someone, but I had this error when I tried to restore a backup to a database that did not have sufficient space for it.
I'm trying to connect to a Magento API using XML-RPC.
When the URL is valid, I have no problem, but I'd like to catch errors if the URL is not valid. If I try with an invalid URL I get:
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
I'm trying to catch it but I can't find a way to do it.
I'm using Python 3.5:
from xmlrpc.client import ServerProxy
from socket import gaierror

params = {
    "encoding": "utf-8",
    "verbose": False,
    "transport": SpecialTransport()  # I use a SpecialTransport class
}

try:
    client = ServerProxy("https://ma.bad.url", **params)
except gaierror:
    print("Error")
The problem is that I never go through the except block.
I don't understand what I'm doing wrong.
Thanks!
I'm answering my own question.
ServerProxy does not actually open a connection when it is constructed; the error is only raised on the first real call, so the try/except has to wrap that call. I've finally been able to make it work like this:
import sys
from socket import gaierror
from xmlrpc.client import ServerProxy, Fault

# Connect to the url
client = ServerProxy('https://my.bad.url', **params)

# Try to login to Magento to get a session
try:
    session = client.login('username', 'password')
except gaierror:
    # Error resolving / connecting to the url
    print('Connection error')
    sys.exit(2)
except Fault:
    # Error with the login
    print('Login error')
    sys.exit(2)
else:
    print('Success')
I have this simple minimal 'working' example below that opens a connection to google every two seconds. When I run this script when I have a working internet connection, I get the Success message, and when I then disconnect, I get the Fail message and when I reconnect again I get the Success again. So far, so good.
However, when I start the script when the internet is disconnected, I get the Fail messages, and when I connect later, I never get the Success message. I keep getting the error:
urlopen error [Errno -2] Name or service not known
What is going on?
import urllib2, time

while True:
    try:
        print('Trying')
        response = urllib2.urlopen('http://www.google.com')
        print('Success')
        time.sleep(2)
    except Exception, e:
        print('Fail ' + str(e))
        time.sleep(2)
This happens because the DNS name "www.google.com" cannot be resolved. If there is no internet connection the DNS server is probably not reachable to resolve this entry.
It seems I misread your question the first time. The behaviour you describe is, on Linux, a peculiarity of glibc. It only reads "/etc/resolv.conf" once, when loading. glibc can be forced to re-read "/etc/resolv.conf" via the res_init() function.
One solution would be to wrap the res_init() function and call it before calling getaddrinfo() (which is indirectly used by urllib2.urlopen()).
You might try the following (still assuming you're using Linux):
import ctypes
libc = ctypes.cdll.LoadLibrary('libc.so.6')
res_init = libc.__res_init
# ...
res_init()
response = urllib2.urlopen('http://www.google.com')
This might of course be optimized by waiting until "/etc/resolv.conf" is modified before calling res_init().
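A sketch of that optimization (Linux/glibc only, using the same libc.__res_init symbol as above): only re-read the resolver configuration when /etc/resolv.conf has actually changed.
import ctypes
import os

libc = ctypes.cdll.LoadLibrary('libc.so.6')
_last_mtime = None

def refresh_resolver_if_needed():
    # Force glibc to re-read /etc/resolv.conf only when its mtime changes.
    global _last_mtime
    mtime = os.stat('/etc/resolv.conf').st_mtime
    if mtime != _last_mtime:
        libc.__res_init()
        _last_mtime = mtime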
Another solution would be to install e.g. nscd (name service cache daemon).
For me, it was a proxy problem.
Running the following before import urllib.request helped
import os
os.environ['http_proxy'] = ''

import urllib.request
response = urllib.request.urlopen('http://www.google.com')
In our code we catch IOError and log it before re-raising. I am getting a "connection reset by peer", but nothing in the logs. Is "connection reset by peer" a subclass of IOError in Python?
.....
File "/usr/lib/python2.5/httplib.py", line 1047, in readline
s = self._read()
File "/usr/lib/python2.5/httplib.py", line 1003, in _read
buf = self._ssl.read(self._bufsize)
error: (104, 'Connection reset by peer')
The stack trace you pasted looks like some exception of class error with arguments (104, 'Connection reset by peer').
So it looks like it's not an HTTPError exception at all. It looks to me like it's actually a socket.error. This class is indeed a subclass of IOError since Python 2.6.
But I guess that's not your question, since you are asking about HttpError exceptions. Can you rephrase your question to clarify your assumptions and expectations?
Comment from usawaretech:
How are you finding out it is a socket error? My code is something like: try: risky_code(); except IOError: logger.debug('...'); raise. As I am assuming that HttpError is a subclass of IOError, when I get that exception, I am assuming that it will be logged. There is nothing in my logs.
I guess it is a socket.error because I used the index of the standard library documentation, and because I encountered this error before.
What version of Python are you using? I guess it's Python 2.5 or earlier.
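A quick way to verify that hierarchy on your own interpreter (the result is what differs between Python 2.5 and 2.6+):
import socket

# On Python 2.6+ (and 3.x) this prints True, so `except IOError` does catch a
# connection reset raised as socket.error; on Python 2.5 it prints False,
# which would explain the missing log entries.
print(issubclass(socket.error, IOError))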
If your intent is to log and re-raise exceptions, it would be a better idea to use a bare except:
try:
    risky_code()
except:
    logger.debug(...)
    raise
Also, you can find the module where the exception class was defined using exception.__module__.
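For example, a small sketch combining that with the logging pattern above (risky_code() stands in for the snippet above; the logger name is illustrative):
import logging

logger = logging.getLogger(__name__)

try:
    risky_code()
except Exception as e:
    # Record which module defines the exception class before re-raising,
    # e.g. 'socket.error' on Python 2.
    logger.debug('%s.%s: %s', type(e).__module__, type(e).__name__, e)
    raise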