Python's logging handlers are great. Some of them, such as the SMTPHandler, may take a long while to execute (contacting an SMTP server and all). Are they executed on a separate thread so as not to block the main program?
SMTPHandler uses smtplib, and when sending an email with this library, your process is blocked until the message has been sent correctly; no thread is created.
If you do not want to block your process when sending an email, you'll have to implement your own SMTPHandler and override the emit(self, record) method.
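A minimal sketch of such an override, which simply hands the blocking send-off to a short-lived daemon thread (the class name is made up, and you would lose records still in flight if the process exits):

import logging.handlers
import threading

class ThreadedSMTPHandler(logging.handlers.SMTPHandler):
    """SMTPHandler variant that performs the blocking send in a thread."""

    def emit(self, record):
        # Hand the slow SMTP exchange to a daemon thread so the
        # logging call itself returns immediately.
        t = threading.Thread(
            target=logging.handlers.SMTPHandler.emit, args=(self, record)
        )
        t.daemon = True
        t.start()

On Python 3.2+ there is also the stdlib pair logging.handlers.QueueHandler / QueueListener, which moves any slow handler off the calling thread.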
The least blocking handler is the SysLogHandler, because it generally involves local communication, over UDP, so the system doesn't wait for any acknowledgement from the destination.
No, you should spawn a separate process, as far as I know.
I'm trying to implement TCP communication where the server sends a message every x seconds through a socket, and should stop sending those messages when the client hasn't sent any message for 5 seconds.
To be more detailed, the client also sends constant messages on the same socket, which are all ignored by the server, and it can stop sending them at any unknown time. For simplicity, the messages serve as keep-alive messages to inform the server that the communication is still relevant.
The problem is that if I want to send repeated messages from the server, I cannot allow it to "get busy" receiving messages instead, so I cannot detect when a new message arrives from the other side and act accordingly.
The problem is independent of the programming language, but to be specific I'm using Python, and I cannot access the client's code.
Is there any way to receive and send messages on a single socket simultaneously?
Thanks!
Option 1
Use two threads, one will write to the socket and the second will read from it.
This works since sockets are full-duplex (allow bi-directional simultaneous access).
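A rough sketch of this layout on the server side (the port, message, and intervals are placeholders; the reader thread refreshes a timestamp that the writer checks against the 5-second window):

import socket
import threading
import time

last_seen = [time.time()]

def reader(conn):
    # Blocks on recv() in its own thread, never delaying the writer.
    while True:
        data = conn.recv(4096)
        if not data:
            break
        last_seen[0] = time.time()   # client is still alive

server = socket.socket()
server.bind(("0.0.0.0", 9000))       # placeholder port
server.listen(1)
conn, _ = server.accept()
threading.Thread(target=reader, args=(conn,), daemon=True).start()

while time.time() - last_seen[0] < 5:   # stop after 5 silent seconds
    conn.sendall(b"ping\n")              # the periodic message
    time.sleep(1)                        # x seconds, placeholder
conn.close()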
Option 2
Use a single thread that manages all keep-alives using select.epoll. This way one thread can handle multiple clients. Remember, though, that if this isn't the only thread that uses the sockets, you might need to handle thread safety on your own.
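A rough single-threaded sketch of the epoll approach (Linux-only, since select.epoll; the 5-second window comes from the question, everything else is illustrative, and the periodic sends would be driven from this same loop):

import select
import socket
import time

server = socket.socket()
server.bind(("0.0.0.0", 9000))   # placeholder port
server.listen(5)
server.setblocking(False)

epoll = select.epoll()
epoll.register(server.fileno(), select.EPOLLIN)

clients = {}     # fileno -> socket
last_seen = {}   # fileno -> time of last keep-alive

while True:
    for fileno, event in epoll.poll(1):
        if fileno == server.fileno():
            conn, _ = server.accept()
            conn.setblocking(False)
            epoll.register(conn.fileno(), select.EPOLLIN)
            clients[conn.fileno()] = conn
            last_seen[conn.fileno()] = time.time()
        elif event & select.EPOLLIN:
            data = clients[fileno].recv(4096)
            if data:
                last_seen[fileno] = time.time()   # keep-alive refreshed
            else:                                  # client closed
                epoll.unregister(fileno)
                clients.pop(fileno).close()
                del last_seen[fileno]
    # Evict clients that have been silent for more than 5 seconds.
    now = time.time()
    for fileno in [f for f, t in last_seen.items() if now - t > 5]:
        epoll.unregister(fileno)
        clients.pop(fileno).close()
        del last_seen[fileno]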
As discussed in another answer, threads are one common approach. The other approach is to use an event loop and nonblocking I/O. Recent versions of Python (I think starting at 3.4) include a package called asyncio that supports this.
You can call the create_connection method on an event_loop to create an asyncio connection. See this example for a simple server that reads and writes over TCP.
In many cases an event loop can permit higher performance than threads, but it has the disadvantage of requiring most or all of your code to be aware of the event model.
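For illustration, a rough server-side sketch with asyncio (Python 3.5+ async/await syntax; the port and message are placeholders, and a real server would decouple the send interval from the read timeout):

import asyncio

async def handle(reader, writer):
    # Send a message each cycle; stop once the client has been
    # silent for 5 seconds.
    while True:
        writer.write(b"ping\n")
        await writer.drain()
        try:
            data = await asyncio.wait_for(reader.read(4096), timeout=5)
        except asyncio.TimeoutError:
            break          # no keep-alive for 5 seconds
        if not data:       # client closed the connection
            break
    writer.close()

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.start_server(handle, "0.0.0.0", 9000))
loop.run_forever()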
I have a Python program which spawns several other Python programs as subprocesses. One of these subprocesses is supposed to open and bind a ZMQ publisher socket, such that other subprocesses can subscribe to it.
I cannot give guarantees about which tcp ports will be available, so when I bind to a random port in the subprocess, my main program will not know what to tell the other subprocesses.
Is there a way to bind the socket in the main process and then somehow pass the socket to my subprocess? Or either some other way to preregister the socket or a standard way to pass the port information from the subprocess back to my main process (stdout and stderr are already used by other data)?
Just checking for a free port in the main process and passing that to the subprocess is not really optimal, because this could still fail if the port is grabbed by another process in the meantime. Also, since my program should work on Unix and Windows, I cannot really use ipc sockets, which would otherwise solve my problem.
The simplest approach is to create the logic for a pool-of-ports manager (rather than attempting to share or pass ZeroMQ sockets to or among other processes).
One may create a persistent, a-priori known access point, e.g. a tcp://A.B.C.D:8765 transport-class based .bind(), exposed to all client processes as a port-assignment service. Client processes .connect() to it, handshake in whatever manner is needed to prove identity/credentials/purpose, and then .recv(), in a coordinated manner, one actually free messaging/signalling-service port number that is system-wide guaranteed not to be in use at that moment, until it is returned to the port-manager. The rotating pool of ports is thus centrally managed, under your code's control, while all the sockets are created locally in the distributed processes/threads that .connect() / .bind() to the pool-manager-announced port number, and so the sockets themselves remain, consistently with ZeroMQ advice, never shared per se.
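A minimal sketch of such a port-assignment service, assuming pyzmq (the well-known port, the pool range, and the tiny LEASE/RETURN protocol are all illustrative):

import zmq

# Port-assignment service on an a-priori known access point.
context = zmq.Context()
rep = context.socket(zmq.REP)
rep.bind("tcp://*:8765")

pool = list(range(50000, 50100))   # rotating pool of managed ports

while True:
    msg = rep.recv_string()
    if msg == "LEASE":
        # Hand out one guaranteed-unused port, or NONE if exhausted.
        rep.send_string(str(pool.pop(0)) if pool else "NONE")
    elif msg.startswith("RETURN "):
        pool.append(int(msg.split()[1]))
        rep.send_string("OK")
    else:
        rep.send_string("ERR")

A client process would .connect() a REQ socket to the known address, send "LEASE", and then .bind() its own socket on the port number it receives back.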
I'm writing a server program in Python that uses the following workflow:
1) Start a daemon
2) Start a server socket and listen for incoming connections
3) When an incoming socket is accepted successfully, fork a new process to handle the connection, closing the client socket in the child and the server socket in the daemon.
When I register a signal handler for SIGCHLD in the daemon process to reap child processes (regardless of the content of the handler) and run the server, the daemon crashes when it receives SIGCHLD. I can't for the life of me figure out why, because for whatever reason logging to syslog won't work for me, and I have no way of debugging this. I'm using PyCharm, and it has no way to debug forked processes. How can I debug this problem? What could be causing the program to fail on invocation of the SIGCHLD handler?
I'm using Python 3.4 on Mac OS X 10.8.
As it turns out, I was using an incorrect signature for my signal handling function. I was using def my_handler() instead of def my_handler(signum, frame), as per this thread.
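For reference, the corrected signature, sketched with the usual non-blocking reap loop (the loop body is an assumption; the actual fix is just accepting the (signum, frame) arguments):

import os
import signal

def my_handler(signum, frame):
    # Reap every exited child without blocking the daemon.
    while True:
        try:
            pid, _status = os.waitpid(-1, os.WNOHANG)
        except OSError:      # no children left
            break
        if pid == 0:         # children exist but none have exited yet
            break

signal.signal(signal.SIGCHLD, my_handler)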
I have nginx in front of 8 instances of Tornado, and for some requests (a handler for comments), I need Tornado to push messages on ZeroMQ. I am doing this at the end of the handler (just before I send the response to the client):
# here is body of handler for comments
import zmq  # would normally live at the top of the module

context = zmq.Context()
port = "5252"
socket = context.socket(zmq.PUSH)
socket.bind("tcp://*:%s" % port)
print "Running server on port: ", port
socket.send("Commented")
# here I flush response to client
But this is hanging. Is this the right way to push to ZeroMQ whenever the handler is executed?
Is this the right way to push to ZeroMQ whenever the handler is executed?
No. Your code calls zmq.Context() every time the request handler is invoked. This is bad. It should be called exactly once - usually at the very beginning of your process, perhaps in some kind of init handler. You can safely share the context instance among any number of threads.
The same goes for socket creation and binding: this should be done once at startup. You must be more careful with the socket, though: if all your handlers (application, request, etc.) execute in the same thread each time a handler is called, then you can reuse the same socket.
Another problem is the way you are send()-ing to a PUSH socket. As described in http://api.zeromq.org/3-2:zmq-socket, a send to a PUSH socket may very well block in certain situations, and you probably want to avoid that. Use a zmq.Poller with the POLLOUT flag (and a 0 timeout) to determine if the send would block. If not, then send right away. If so, you have to decide whether you want to just drop the message or store it in your application to try again later.
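A sketch of how the handler code could be restructured along those lines (push_comment is a hypothetical helper; the port number comes from the question):

import zmq

# Created exactly once, at process startup.
context = zmq.Context()
push = context.socket(zmq.PUSH)
push.bind("tcp://*:5252")

poller = zmq.Poller()
poller.register(push, zmq.POLLOUT)

def push_comment(message):
    # Poll with a 0 timeout: would send() block right now?
    events = dict(poller.poll(0))
    if push in events:
        push.send(message)
    else:
        # Would block: drop the message, or queue it to retry later.
        pass

The request handler then just calls push_comment(b"Commented") and returns immediately.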
Is it possible to somehow cancel xmlrpc client request?
Let say that in one thread I have code like:
import xmlrpclib

svr = xmlrpclib.ServerProxy('http://localhost:9092')
svr.DoSomethingWhichNeedTime()
I don't mean some kind of timeout... Sometimes, from another thread, I can get an event to cancel my work, and then I need to cancel this request.
I know that I can do it with twisted but, is it possible to do it with standard xmlrpclib?
First of all, it must be implemented on the server side, not in the client (xmlrpclib). If you simply interrupt your HTTP request to the XML-RPC server, there is no guarantee that the long process running on the server will be interrupted at all. So xmlrpclib simply can't have this functionality.
If you want to implement this behaviour, you need to create two types of requests. A request of the first type will tell your server to start some long process. It must be executed in the background (in another thread or process), and your XML-RPC server must send the response ("Process started!") to the client immediately. When you want to stop the process, the client must send another request that tells your server to stop executing the process.
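A minimal sketch of that two-request design (do_one_step is a stand-in for one unit of the real job; Python 2 module names to match the question, the module is xmlrpc.server in Python 3):

import threading
import time
from SimpleXMLRPCServer import SimpleXMLRPCServer

stop_event = threading.Event()

def do_one_step():
    time.sleep(1)   # stand-in for one unit of the real work

def long_process():
    # Background worker: checks the stop flag between units of work.
    while not stop_event.is_set():
        do_one_step()

def start_work():
    stop_event.clear()
    threading.Thread(target=long_process).start()
    return "Process started!"      # responds immediately

def stop_work():
    stop_event.set()
    return "Stopping."

server = SimpleXMLRPCServer(("localhost", 9092))
server.register_function(start_work)
server.register_function(stop_work)
server.serve_forever()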
Yes, if you want to do really dirty hacks....
Basically, the ServerProxy object keeps a handle to the underlying socket/HTTP connection. If you reach into those internals and simply close() the socket, your client code will blow up with an exception. If you handle that exception properly, there's your cancel.
You can do it a little more sanely if you register your own transport class for the ServerProxy via the transport parameter and give it some cancel method that does what you want.
That won't stop the server from processing things, unless it reacts to closing the channel directly.
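For illustration, a very rough sketch of the transport trick (CancellableTransport and its cancel method are made up; this assumes Python 2.7's xmlrpclib, where make_connection returns an httplib connection object with a close() method):

import xmlrpclib

class CancellableTransport(xmlrpclib.Transport):
    """Keeps a reference to the HTTP connection so it can be torn down."""

    def make_connection(self, host):
        self._conn = xmlrpclib.Transport.make_connection(self, host)
        return self._conn

    def cancel(self):
        # Closing the connection makes the in-flight request raise an
        # exception in the thread that made the call; handling that
        # exception is the "cancel".
        conn = getattr(self, "_conn", None)
        if conn is not None:
            conn.close()

transport = CancellableTransport()
svr = xmlrpclib.ServerProxy("http://localhost:9092", transport=transport)
# ... from another thread, on the cancel event: transport.cancel()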