I'm using django-channels. When an exception is raised within a consumer, the error is logged, the websocket connection is disconnected, and then it reconnects again.
I would like to send a websocket message before it disconnects. I've tried to catch the error, send the message and then re-raise the exception. But the message still isn't sent.
What's the best way of achieving this?
When you raise an error, it seems that the actual raising of the error takes precedence over sending the message, which happens later.
So the solution I went with in the end was to catch the exception where it occurs, store it, send the message, and then check whether there was a stored exception to re-raise.
If there was an error to raise, raise it. That way errors are still raised server side, and the frontend gets to know about them as well.
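A rough sketch of that first approach, assuming an AsyncJsonWebsocketConsumer and a hypothetical handle_task method for the business logic:

from channels.generic.websocket import AsyncJsonWebsocketConsumer

class TaskConsumer(AsyncJsonWebsocketConsumer):
    async def receive_json(self, content):
        pending_exception = None
        try:
            await self.handle_task(content)  # hypothetical business logic
        except Exception as exc:
            pending_exception = exc
        # Send the message first, then re-raise any stored exception.
        await self.send_json({"type": "error" if pending_exception else "ok"})
        if pending_exception is not None:
            raise pending_exception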
Alternatively, and this might be a better solution: catch the error and log the exception, have a dedicated method that sends the error back to the frontend, and then return early.
That way the server never disconnects and there is no need for re-connection, which saves some time and processing.
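A sketch of that alternative, again with handle_task as a placeholder and send_error as an assumed helper:

import logging

from channels.generic.websocket import AsyncJsonWebsocketConsumer

logger = logging.getLogger(__name__)

class TaskConsumer(AsyncJsonWebsocketConsumer):
    async def receive_json(self, content):
        try:
            await self.handle_task(content)  # hypothetical business logic
        except Exception:
            logger.exception("Task failed")
            await self.send_error("Task failed")
            return  # return early; the connection stays open
        await self.send_json({"type": "ok"})

    async def send_error(self, detail):
        # Dedicated method that reports the error back to the frontend.
        await self.send_json({"type": "error", "detail": detail})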
So, first post here, and it is something I can't find any info about anywhere (could be I am not looking in the right places for the right things), but it has been plaguing my code for a while now.
I am using Python 3.9, as mentioned in the title.
In short, the issue is that when I call rasdial and the VPN connection is live (I can ping it) but there is some other network issue, I get:
Remote Access error 807 - The network connection between your computer and the VPN server was interrupted. This can be caused by a problem in the VPN transmission and is commonly the result of internet latency or simply that your VPN server has reached capacity. Please try to reconnect to the VPN server. If this problem persists, contact the VPN administrator and analyze quality of network connectivity.
For more help on this error:
Type 'hh netcfg.chm'
In help, click Troubleshooting, then Error Messages, then 807
I know what the error is and can do nothing about it. The annoying thing is that Python sees this as normal output and carries on with the code even though I have a try/except.
Here is the code in question:
try:
    vpn_disconnect()
    sleep(2)
    os.system(f"rasdial {cstring}")
    print(subprocess.CompletedProcess.check_returncode())
    sleep(5)
except Exception as e:
    errtyp = type(e).__name__
    return f"Some error occurred...\n{errtyp} was the main cause"
except:
    return ()
cstring is a formatting of {vpn name} {vpn username} {vpn password}, which changes based on the connection I am trying to connect to.
I tried using subprocess with the below code:
try:
    txtstring = cstring.split(" ")
    vpn_disconnect()
    sleep(2)
    txtstring.insert(0, "rasdial")
    print(txtstring)
    subprocess.run(txtstring, shell=True, check=True, capture_output=True)
    print(subprocess.CompletedProcess.check_returncode())
    sleep(5)
except Exception as e:
    errtyp = type(e).__name__
    return f"Some error occurred...\n{errtyp} was the main cause"
except:
    return ()
The intended action, when an error occurs while attempting to connect, is to skip the current loop iteration and start the next one.
What happens instead is that it doesn't catch these errors and continues to pointlessly query servers that don't exist, because rasdial failed to connect to the VPN server.
Any help is greatly appreciated, and thanks in advance.
Please let me know if there is anything I can check out or provide to aid in this, or if there is something I am obviously missing here (a method of catching this type of error).
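One direction that might be worth checking (a rough sketch, not something I can verify against rasdial itself): inspect the CompletedProcess instance that subprocess.run() returns, rather than calling check_returncode() on the class, and also look at the captured output, in case rasdial exits with 0 even though it printed error 807. The cstring variable is assumed as described above:

import subprocess
from time import sleep

def connect_vpn(cstring):
    args = ["rasdial", *cstring.split(" ")]
    result = subprocess.run(args, capture_output=True, text=True)
    output = result.stdout + result.stderr
    # check_returncode() has to be called on the returned instance, not the class.
    if result.returncode != 0 or "error" in output.lower():
        # Treat this attempt as failed so the caller can skip to the next loop.
        return f"Some error occurred...\nrasdial reported: {output.strip()}"
    sleep(5)
    return None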
I'm working on a Python script with a client/server socket. After searching for alternate solutions, I'm not sure if my solution is correct or the best.
I have read these posts:
Python handling socket.error: [Errno 104] Connection reset by peer
Asyncio detecting disconnect hangs
Their solutions do not work for me, but I have the following. I'm not sure if it's correct or clean.
try:
    # See if the client has disconnected.
    try:
        data = await asyncio.wait_for(client_reader.readline(), timeout=0.01)
    except (ConnectionResetError, ConnectionAbortedError):
        break  # Client disconnected
    except asyncio.TimeoutError:
        pass  # Client hasn't disconnected
If I don't use an except for ConnectionResetError, I get an error because the read raises ConnectionResetError when I kill the client.
Is this a good solution for detecting an irregular client disconnection?
PS: Thank you Prune for cleaning up the wording and grammar.
As long as you are not interacting with the socket, you don't know if it's still connected or not. The proper way to handle disconnections is not to check the state of the socket, but to verify whether a read or write operation failed because of such an error.
Usually you are always at least awaiting a read() on a socket, and that is where you should look for disconnections. When the exception happens, the stream will be detected as closed and the exception will propagate to any other task awaiting an operation on this socket. It means that if you have concurrent operations on one stream, you must expect this exception to be raised anywhere, and handle it everywhere in your code.
About the exceptions caught: checking for ConnectionError is the best solution, since it is the parent class of all exceptions related to a connection (ConnectionAbortedError, ConnectionResetError, etc.).
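For illustration, a minimal sketch of a per-client read loop that relies on ConnectionError, assuming an asyncio.start_server style setup (handle_client is a placeholder name):

import asyncio

async def handle_client(reader, writer):
    try:
        while True:
            line = await reader.readline()
            if not line:  # EOF: the client closed the connection cleanly
                break
            # ... process `line` here ...
    except ConnectionError:
        # Covers ConnectionResetError, ConnectionAbortedError, etc.
        pass  # irregular disconnect detected at the point of the read
    finally:
        writer.close()
        await writer.wait_closed()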
I have a RabbitMQ server set up where I fetch messages using Python-Pika. The problem is that if I have persistent delivery mode enabled and a worker fails to process a message, the message is not released; instead it is kept until the RabbitMQ connection has been reset.
Is there a way to make sure that the message that failed to process automatically gets picked up again within a reasonable time-frame from an available worker, including the same one?
This is my current code:
if success:
    ch.basic_ack(delivery_tag=method.delivery_tag)
else:
    syslog.syslog('Error (Callback) -- Failed to process payload: %s' % body)
The idea is that I never want to lose a message; instead I want it to get re-published, or rather picked up again, if it failed. This should always be the case until the message has been successfully processed by a worker. The failure normally happens when one of the workers is unable to open a connection to the HTTP server.
I finally figured out why this was happening. I did not realize that it isn't enough to simply acknowledge a message when you are done with it; you also have to reject any message you could not process, using channel.basic_reject. This may seem obvious, but it is not the default behavior for AMQP.
Basically we have to release the message using basic_reject with requeue set to True. The important factor here is the requeue keyword which prevents the message from being discarded, and instead queues it up again, so that one of our available workers can process it.
if success:
    # On success - mark the message as processed.
    ch.basic_ack(delivery_tag=method.delivery_tag)
else:
    # Else - mark the message as rejected and move it back to the queue.
    ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)
I found some really useful information in this article, and there are more technical details on the reject keyword in this blog post.
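For context, the surrounding consumer wiring (pika 1.x style) might look roughly like this; the queue name "tasks" and the process_payload function are assumptions:

import pika

def callback(ch, method, properties, body):
    success = process_payload(body)  # hypothetical worker logic
    if success:
        ch.basic_ack(delivery_tag=method.delivery_tag)
    else:
        # Put the message back on the queue for another (or the same) worker.
        ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="tasks", durable=True)
channel.basic_qos(prefetch_count=1)  # hand out one unacknowledged message at a time
channel.basic_consume(queue="tasks", on_message_callback=callback)
channel.start_consuming()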
So far my networking code works fine, but I'm a bit worried about something I hid under the carpet:
The man pages for accept, close, connect, recv and send mention that errno.EINTR can show up when a system call was interrupted by a signal.
I am quite clueless here.
What does Python do with that? Does it automatically retry the call, or does it raise a socket.error with that errno? What is the appropriate thing to do if that exception is raised? Can I generate these signals myself in my unit tests?
Python simply retries the call and hides the signal from the user (this helps with cross-platform consistency, since EINTR doesn't exist on every platform). You can safely ignore the EINTR issue, but if you'd like to test it anyway, it's easy to do. Just set up a blocking operation that will not return (such as a socket.accept() call with no incoming connection) and send the process a signal.
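If you want to see it with your own eyes, here is a rough sketch of such a test on a POSIX system (SIGALRM is not available on Windows); the explicit accept timeout is only there so the demo terminates:

import signal
import socket

def handler(signum, frame):
    print("SIGALRM delivered while accept() was blocking")

signal.signal(signal.SIGALRM, handler)
signal.alarm(1)  # interrupt the blocking call after one second

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server.settimeout(3)  # only so the demo eventually returns

try:
    server.accept()  # no client ever connects
except socket.timeout:
    # The interrupted accept() was retried silently, so we only get here
    # because of the explicit timeout, not because of EINTR.
    print("accept() timed out; the signal did not raise an exception")
finally:
    server.close()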
I have setup the logging module for my new python script. I have two handlers, one sending stuff to a file, and one for email alerts. The SMTPHandler is setup to mail anything at the ERROR level or above.
Everything works great, unless the SMTP connection fails. If the SMTP server does not respond or authentication fails (it requires SMTP auth), then the whole script dies.
I am fairly new to python, so I am trying to figure out how to capture the exception that the SMTPHandler is raising so that any problems sending the log message via email won't bring down my entire script. Since I am also writing errors to a log file, if the SMTP alert fails, I just want to keep going, not halt anything.
If I need a "try:" statement, would it go around the logging.handlers.SMTPHandler setup, or around the individual calls to my_logger.error()?
Exceptions which occur during logging should not stop your script, though they may cause a traceback to be printed to sys.stderr. In order to prevent this printout, do the following:
logging.raiseExceptions = 0
This is not the default (because in development you typically want to know about failures), but in production raiseExceptions should be set to a false value.
You should find that the SMTPHandler will attempt a re-connection the next time an ERROR (or higher) is logged.
logging does pass through SystemExit and KeyboardInterrupt exceptions, but all others should be handled so that they do not cause termination of an application which uses logging. If you find that this is not the case, please post specific details of the exceptions which are getting through and causing your script to terminate, and about your version of Python/operating system. Bear in mind that the script may appear to hang if there is a network timeout which is causing the handler to block (e.g. if a DNS lookup takes a long time, or if the SMTP connection takes a long time).
You probably need to do both. To figure this out, I suggest installing a local mail server and using that. This way, you can shut it down while your script runs and note down the error message.
To keep the code maintainable, you should extend SMTPHandler in such a way that you can handle the exceptions in a single place (instead of wrapping every logger call with try/except).
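A minimal sketch of that subclassing idea (the class name is just an example); it funnels any failure during emit() into handleError(), which logging treats as non-fatal:

import logging.handlers

class SafeSMTPHandler(logging.handlers.SMTPHandler):
    def emit(self, record):
        try:
            super().emit(record)
        except Exception:
            # Swallow SMTP failures (connection refused, auth errors, ...)
            # so a broken mail server never takes the whole script down.
            self.handleError(record)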