I have written a simple motion capture and recording script in Python 3. I am on cv2 version 4.4.0, installed on Ubuntu using "apt install python3-opencv". My script connects to an IP camera, fetching the URL from a config file. When a host at a URL isn't reachable, I raise an exception, output a message on the command line, and wait 30 seconds. However, even when catching this error, OpenCV spits out an error on the terminal. I really don't want to see this error, as I want the terminal to be a clean experience for the user. How can I successfully catch and suppress this error message?
My full code is here on Github.
while True:
    try:
        video = cv2.VideoCapture(camera_url)
        if video is None or not video.isOpened():
            raise ConnectionError
        size = (int(video.get(3)), int(video.get(4)))
        check, frame = video.read()
        ...
    except ConnectionError:
        print("Retrying connection to", camera_name, "in", reconnect_time, "seconds...")
        time.sleep(reconnect_time)
The output I would like:
Retrying connection to front_yard in 30 seconds...
The error message I get:
[tcp @ 0x18d4b40] Connection to tcp://192.168.1.102:8000 failed: No route to host
[ERROR:0] global /tmp/pip-req-build-99ib2vsi/opencv/modules/videoio/src/cap.cpp (140) open VIDEOIO(CV_IMAGES): raised OpenCV exception:
OpenCV(4.4.0) /tmp/pip-req-build-99ib2vsi/opencv/modules/videoio/src/cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): http://192.168.1.102:8000/stream.mjpg in function 'icvExtractPattern'
Retrying connection to front_yard in 30 seconds...
Related
I am currently using python 3.8.8 with version 12.9.0 of azure.storage.blob and 1.14.0 of azure.core.
I am downloading multiple files using the azure.storage.blob package. My code looks something like the following:
from azure.storage.blob import ContainerClient
from azure.core.exceptions import ResourceNotFoundError, AzureError
from time import sleep

max_attempts = 5
container_client = ContainerClient(DETAILS)

for file in multiple_files:
    attempts = 0
    while attempts < max_attempts:
        try:
            data = container_client.download_blob(file).readall()
            break
        except ResourceNotFoundError:
            # log missing data
            break
        except AzureError:
            # This is mainly here as connections seem to drop randomly.
            attempts += 1
            sleep(1)
    if attempts >= max_attempts:
        # log connection error
    # do something with the data.
It seems to be running fine, and I don't see any loss of data. However, within my terminal I keep getting the message:
Unable to stream download: ("Connection broken: ConnectionResetError(104, 'Connection reset by peer')", ConnectionResetError(104, 'Connection reset by peer'))
This appears to be errno 104 (connection reset by peer), but it isn't being handled by the azure module. My questions are as follows.
Where is this message coming from? I can't see it in any of the packages I am using.
How do I handle this error better? It doesn't appear to be caught as an exception as it isn't crashing my code.
Can I get this to print to a log?
Where is this message coming from? I can't see it in any of the packages I am using.
It looks like the clients were connected to the server, but when they attempted to transfer data, they received an Errno 104 Connection reset by peer error. This also means that the other side reset the connection; otherwise the client would encounter an [Errno 32] Broken pipe exception.
How do I handle this error better? It doesn't appear to be caught as an exception as it isn't crashing my code.
One workaround you can try is a try/except block to handle that exception:
from urllib.request import urlopen   # urllib2 in the original Python 2 snippet
from socket import error as SocketError
import errno

try:
    response = urlopen(request).read()
except SocketError as e:
    if e.errno != errno.ECONNRESET:
        raise  # not the error we are looking for
    pass  # handle the error here
Also try referring to this similar issue, where sudo pip3 install urllib3 solved the problem.
Can I get this to print to a log?
One workaround is to pass the exception instance in the exc_info argument:
import logging

try:
    1/0
except Exception as e:
    logging.error('Error at %s', 'division', exc_info=e)
For more information, you can refer to How to log python exception?
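Applied to the download loop from the question (a sketch on my part, with a hypothetical log file name; container_client and file are the objects from the question's loop), the same idea could look like this:
import logging
from azure.core.exceptions import AzureError

logging.basicConfig(filename='downloads.log', level=logging.INFO)  # hypothetical log file

try:
    # container_client and file come from the loop in the question
    data = container_client.download_blob(file).readall()
except AzureError as e:
    logging.error('Download of %s failed', file, exc_info=e)  # full traceback is written to the log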
Here is a related issue that you can follow up on:
azure storage blob download: ConnectionResetError(104, 'Connection reset by peer')
REFERENCE:
Connection broken: ConnectionResetError(104, 'Connection reset by peer') error while streaming
I'm trying to do some web automation using Selenium in Python, and if my script encounters any errors I want the whole process to restart again (an infinite loop).
So basically I tried to use a recursive function and call it again each time an error occurs. It looks like this:
PATH = "C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(PATH)
def my_func():
try :
driver.get("https://mywebsite.com")
.
.
.
except Exception as E :
print(str(E)) #printing the exception message
driver.quit() #quitting from the current tab
my_func() #recalling the function again
my_func()
The first time everything works fine: when an error occurs (because the website switches to another page) it prints the exception Message: element not interactable, which is totally normal. But on the second iteration I get this:
HTTPConnectionPool(host='127.0.0.1', port=59887): Max retries exceeded with url: /session/6d23ab3406dbef8f6581c4c7652d2633/url (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000019D2BA3CC10>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
So is there any way to fix this error, or is there a better solution for this script?
I think the message:
No connection could be made because the target machine actively refused it
may be a security issue caused by the target URL. Try connecting to the internet via another connection, such as a mobile hotspot.
I figured it out: I have to create a new driver on each iteration. I also added a time.sleep before recalling the function, just in case the requests were being sent too fast:
import time
from selenium import webdriver

def my_func():
    try:
        PATH = r"C:\Program Files (x86)\chromedriver.exe"
        driver = webdriver.Chrome(PATH)
        driver.get("https://mywebsite.com")
        ...
    except Exception as E:
        print(str(E))   # printing the exception message
        driver.quit()   # quitting the browser session
        time.sleep(5)
        my_func()       # recalling the function again

my_func()
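A side note from me, not part of the original answer: if errors keep occurring indefinitely, the recursive call will eventually hit Python's recursion limit. A plain loop gives the same restart behaviour without growing the call stack; a minimal sketch, reusing the same hypothetical driver path and URL from the answer above:
import time
from selenium import webdriver

PATH = r"C:\Program Files (x86)\chromedriver.exe"   # hypothetical driver path

while True:
    driver = webdriver.Chrome(PATH)
    try:
        driver.get("https://mywebsite.com")   # hypothetical target site
        ...                                   # rest of the automation
        break                                 # finished without errors, stop retrying
    except Exception as E:
        print(str(E))                         # print the exception message
        driver.quit()                         # quit the browser session
        time.sleep(5)                         # wait before restarting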
I was using psycopg2 in a Python script to connect to a Redshift database, and occasionally I receive the error below:
psycopg2.OperationalError: SSL SYSCALL error: EOF detected
This error only happened once in a while, and 90% of the time the script worked.
I tried to put it into a try/except block to catch the error, but the catching didn't seem to work. For example, I try to capture the error so that it will automatically send me an email when this happens. However, the email was not sent when the error occurred. Below is my code for the try/except:
import sys
import psycopg2

try:
    conn2 = psycopg2.connect(host="localhost", port='5439',
                             database="testing", user="admin", password="admin")
except psycopg2.Error as e:
    print("Unable to connect!")
    print(e.pgerror)
    print(e.diag.message_detail)
    # Call the check_row_count function to check today's number of rows
    # and send mail to notify about the issue
    print("Trigger send mail now")
    import status_mail
    print(status_mail.redshift_failed(YtdDate))
    sys.exit(1)
else:
    print("RedShift Database Connected")
    cur2 = conn2.cursor()
    rowcount = cur2.rowcount
Errors I received in my log:
Traceback (most recent call last):
File "/home/ec2-user/dradis/dradisetl-daily.py", line 579, in
load_from_redshift_to_s3()
File "/home/ec2-user/dradis/dradisetl-daily.py", line 106, in load_from_redshift_to_s3
delimiter as ','; """.format(YtdDate, s3location))
psycopg2.OperationalError: SSL SYSCALL error: EOF detected
So the question is: what causes this error, and why isn't my try/except block catching it?
From the docs:
exception psycopg2.OperationalError
Exception raised for errors that are related to the database’s
operation and not necessarily under the control of the programmer,
e.g. an unexpected disconnect occurs, the data source name is not
found, a transaction could not be processed, a memory allocation error
occurred during processing, etc.
This error can be the result of many different things:
slow query
the process is running out of memory
other queries running causing tables to be locked indefinitely
running out of disk space
firewall
(You should definitely provide more information about these factors and more code.)
You were connected successfully but the OperationalError happened later.
Try to handle these disconnects in your script: put the command you want to execute into a try/except block and try to reconnect if the connection is lost.
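A minimal sketch of that pattern (my illustration, not code from the original answer; run_query and its retry settings are made-up names, the connection parameters mirror the question, and the query is assumed to return rows):
import time
import psycopg2

def run_query(sql, retries=3, wait=5):
    # Open a fresh connection for each attempt and retry on OperationalError
    # (e.g. "SSL SYSCALL error: EOF detected").
    for attempt in range(retries):
        try:
            conn = psycopg2.connect(host="localhost", port='5439',
                                    database="testing", user="admin",
                                    password="admin")
            with conn, conn.cursor() as cur:
                cur.execute(sql)
                return cur.fetchall()
        except psycopg2.OperationalError as e:
            print("Connection lost ({}), retrying...".format(e))
            time.sleep(wait)
    raise RuntimeError("Query failed after {} attempts".format(retries))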
I recently encountered this error. The cause in my case was network instability while working with the database. If the network goes down for long enough that the socket detects the timeout, you will see this error; if the downtime is shorter, you won't see any errors.
You can control the timeouts of the keepalive and RTO (retransmission timeout) features using this code sample:
import socket

# conn is the psycopg2 connection; wrap its file descriptor in a socket object
s = socket.fromfd(conn.fileno(), socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)           # enable TCP keepalive
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 6)          # seconds of idle time before probes start
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 2)         # seconds between probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 2)           # failed probes before the connection is dropped
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 10000)  # max unacknowledged time, in milliseconds
You can find more information in this post.
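Alternatively (my suggestion, not part of the original answer), libpq exposes the same keepalive settings as connection parameters, so they can be passed straight to psycopg2.connect instead of touching the raw socket; the values below are examples only:
import psycopg2

conn = psycopg2.connect(host="localhost", port='5439',
                        database="testing", user="admin", password="admin",
                        keepalives=1,             # enable TCP keepalive
                        keepalives_idle=30,       # seconds of idle time before probes start
                        keepalives_interval=10,   # seconds between probes
                        keepalives_count=5)       # failed probes before the connection is dropped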
It would be helpful if you attached the actual code that you are trying to except; in your attached stack trace it is at: File "/home/ec2-user/dradis/dradisetl-daily.py", line 106.
Similar except code works fine for me. Mind you, e.pgerror will be empty if the error occurred on the client side, such as the error in my example, and the e.diag object will also be useless in this case.
import psycopg2

try:
    conn = psycopg2.connect('')
except psycopg2.Error as e:
    print('Unable to connect!\n{0}'.format(e))
else:
    print('Connected!')
Maybe it will be helpful for someone, but I got this error when I tried to restore a backup to a database that did not have sufficient space for it.
I have created a Python program that uses Autobahn to make WebSocket connections to a remote host and receive a data stream over these connections.
From time to time, some different exceptions occur during these connections, most often either an exception immediately when attempting to connect, stating that the initial WebSocket handshake failed (most likely due to an overloaded server), like this:
2017-05-03T20:31:10 dropping connection to peer tcp:1.2.3.4:443 with abort=True: WebSocket opening handshake timeout (peer did not finish the opening handshake in time)
Or a later exception during a successful and ongoing connection, saying that the connection timed out due to lack of pong response to a ping, as follows:
2017-05-04T13:33:40 dropping connection to peer tcp:1.2.3.4:443 with abort=True: WebSocket ping timeout (peer did not respond with pong in time)
2017-05-04T13:33:40 session closed with reason wamp.close.transport_lost [WAMP transport was lost without closing the session before]
2017-05-04T13:33:40 While firing onDisconnect: Traceback (most recent call last):
File "c:\Python36\lib\site-packages\txaio\aio.py", line 450, in done
f._result = x
AttributeError: attribute '_result' of '_asyncio.Future' objects is not writable
As can be seen above, this also triggers some other strange exception in the txaio module in this particular case.
No matter what kind of exception occurs, I would like to catch it and handle it gracefully, but for some reason none of the exceptions seem to bubble up to the code that initiated these connections (i.e. none get caught by my try ... except clause there), which looks like this:
from autobahn.asyncio.wamp import ApplicationSession
from autobahn.asyncio.wamp import ApplicationRunner
...

class MyComponent(ApplicationSession):
    ...

try:
    runner = ApplicationRunner("wss://my.websocket.server.com:443", "realm1")
    runner.run(MyComponent)
except Exception as e:
    print('Unexpected connection error')
    ...
Instead, all these exceptions just hang my program completely after the error messages have been dumped to the terminal as above. Why is this?
So, the question is: How and where in the code can I catch these exceptions that occur during the WebSocket connections in Autobahn, and react/handle them gracefully?
Here is the relevant code that's causing the error.
import os
import ftplib

ftp = ftplib.FTP('server')
ftp.login(r'user', r'pass')
# change directories to the "incoming" folder
ftp.cwd('incoming')

fileObj = open(fromDirectory + os.sep + f, 'rb')

# push the file
try:
    msg = ftp.storbinary('STOR %s' % f, fileObj)
except Exception as inst:
    msg = str(inst)   # keep msg a string so the '226' check below works
finally:
    fileObj.close()

if '226' not in msg:
    # handle error case
I've never seen this error before and any information about why I might be getting it would be useful and appreciated.
Complete error message:
[Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
It should be noted that when I push the file manually from the same machine the script is on (i.e. open a DOS prompt and push the files using ftp commands), I have no problems.
Maybe you should increase the "timeout" option to give the server more time to respond.
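For illustration (a sketch of mine, not from the original answer), ftplib.FTP accepts a timeout argument in seconds when the connection is opened; 'server', 'user' and 'pass' are the placeholders from the question, and 60 is only an example value:
import ftplib

# give the server up to 60 seconds to respond before socket operations give up
ftp = ftplib.FTP('server', timeout=60)
ftp.login('user', 'pass')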
In my case, changing to active (ACTV) mode, as Anders Lindahl suggested, got everything back into working order.
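For reference (my addition, not part of the original answer), ftplib switches between passive and active transfer mode with set_pasv; passive is the default:
import ftplib

ftp = ftplib.FTP('server')   # 'server' is the placeholder host from the question
ftp.login('user', 'pass')
ftp.set_pasv(False)          # use active (PORT) mode instead of passive (PASV)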