Twisted includes a reactor implemented on top of MsgWaitForMultipleObjects. Apparently the reactor has problems reliably noticing when a TCP connection ends, at least in the case where a peer sends some bytes and then quickly closes the connection. What seems to happen is:
The reactor calls MsgWaitForMultipleObjects with some socket handles and QS_ALLINPUT.
The call completes and indicates that the handle for a socket in this state (that is, one with bytes waiting to be read whose peer has closed the connection) is active.
The reactor dispatches this notification to the common TCP implementation.
The TCP implementation reads the available bytes from the socket. There are some, and they get delivered to application code.
Control is returned to the reactor, which eventually calls MsgWaitForMultipleObjects again.
MsgWaitForMultipleObjects never again indicates that the handle is active. The TCP implementation never gets to look at the socket again, so it can never detect that the connection is closed.
This makes it appear as though MsgWaitForMultipleObjects is an edge-triggered notification mechanism. The MSDN documentation says:
Waits until one or all of the specified objects are in the signaled state or the time-out interval elapses.
This doesn't sound like edge-triggering. It sounds like level-triggering.
Is MsgWaitForMultipleObjects actually edge-triggered? Or is it level-triggered and this misbehavior is caused by some other aspect of its behavior?
Addendum: The MSDN docs for WSAEventSelect explain what's going on here a bit more, including pointing out that FD_CLOSE is basically a one-off event. After it's signaled once, you'll never get it again. This goes some way towards explaining why Twisted has this problem. I'm still interested to hear how to use MsgWaitForMultipleObjects effectively given this limitation, though.
In order to use WSAEventSelect and differentiate activities, you need to call WSAEnumNetworkEvents. Make sure you're processing each event that was reported, not just the first.
WSAAsyncSelect makes it easy to determine the cause, and is often used together with MsgWaitForMultipleObjects.
So you might use WSAAsyncSelect instead of WSAEventSelect.
Also, I think you have a fundamental misunderstanding of the difference between edge-triggered and level-triggered notifications. Your reasoning seems more related to auto-reset vs. manual-reset events.
I'm trying to make a TCP communication where the server sends a message every x seconds through a socket, and should stop sending those messages on a certain condition, namely when the client hasn't sent any message for 5 seconds.
To be more detailed, the client also sends constant messages on the same socket, which are all ignored by the server, and can stop sending them at any unknown time. The messages are, for simplicity, used as alive messages to inform the server that the communication is still relevant.
The problem is that if I want to send repeated messages from the server, I cannot allow it to "get busy" and start receiving messages instead; thus I cannot detect when a new message arrives from the other side and act accordingly.
The problem is independent of the programming language, but to be more specific I'm using Python, and I cannot access the code of the client.
Is there any way to receive and send messages on a single socket simultaneously?
Thanks!
Option 1
Use two threads: one will write to the socket and the other will read from it.
This works since sockets are full-duplex (allow bi-directional simultaneous access).
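A minimal sketch of the two-thread approach, assuming a plain blocking socket; the port, the 1-second send interval, and the helper names are invented for the example, and only the 5-second silence rule comes from the question:

```python
import socket
import threading
import time

def receiver(conn, state):
    """Read the client's keep-alive messages and note when the last one arrived."""
    while True:
        data = conn.recv(4096)
        if not data:                      # peer closed the connection
            break
        state["last_seen"] = time.monotonic()

def sender(conn, state):
    """Send the periodic message until the client has been silent for 5 seconds."""
    while time.monotonic() - state["last_seen"] < 5.0:
        conn.sendall(b"periodic message\n")
        time.sleep(1.0)                   # "every x seconds" from the question

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9000))
srv.listen(1)
conn, _ = srv.accept()

state = {"last_seen": time.monotonic()}
threading.Thread(target=receiver, args=(conn, state), daemon=True).start()
sender(conn, state)                       # the main thread does the writing
conn.close()
```

The two threads don't interfere with each other because reading and writing use independent halves of the connection; a lock around `state` only becomes necessary if you store something more complex than a single timestamp.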
Option 2
Use a single thread that manages all keep-alives using select.epoll. This way one thread can handle multiple clients. Remember, though, that if this isn't the only thread that uses the sockets, you might need to handle thread safety on your own.
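A rough single-threaded sketch with select.epoll (Linux-only); the port and buffer size are placeholders, and the periodic-send bookkeeping is left as a comment:

```python
import select
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9000))
srv.listen()
srv.setblocking(False)

ep = select.epoll()
ep.register(srv.fileno(), select.EPOLLIN)
clients = {}                                  # fileno -> socket

while True:
    for fd, events in ep.poll(timeout=1.0):
        if fd == srv.fileno():                # new client connecting
            conn, _ = srv.accept()
            conn.setblocking(False)
            clients[conn.fileno()] = conn
            ep.register(conn.fileno(), select.EPOLLIN)
        elif events & select.EPOLLIN:         # keep-alive data (or a disconnect)
            data = clients[fd].recv(4096)
            if not data:                      # client went away
                ep.unregister(fd)
                clients.pop(fd).close()
    # ...after each poll, send whatever periodic messages are due and drop
    # clients that have been silent for more than 5 seconds...
```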
As discussed in another answer, threads are one common approach. The other approach is to use an event loop and nonblocking I/O. Recent versions of Python (I think starting at 3.4) include a package called asyncio that supports this.
You can call the create_connection method on an event_loop to create an asyncio connection. See this example for a simple server that reads and writes over TCP.
In many cases an event loop can permit higher performance than threads, but it has the disadvantage of requiring most or all of your code to be aware of the event model.
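For illustration, here is a rough server-side sketch using the asyncio stream API, which sits on top of create_connection/create_server. The port, the 1-second interval, and all names are invented, and the helpers used here (asyncio.run, start_server, create_task) need a somewhat newer Python than 3.4:

```python
import asyncio
import time

async def handle_client(reader, writer):
    last_seen = time.monotonic()

    async def read_keepalives():
        nonlocal last_seen
        while await reader.read(4096):        # b"" means the client closed
            last_seen = time.monotonic()

    async def send_periodic():
        while time.monotonic() - last_seen < 5.0:
            writer.write(b"periodic message\n")
            await writer.drain()
            await asyncio.sleep(1.0)

    reader_task = asyncio.create_task(read_keepalives())
    await send_periodic()                     # returns once the client goes silent
    reader_task.cancel()
    writer.close()

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```

Nothing here blocks the loop because every potentially blocking operation is awaited, which is what lets a single thread serve many clients.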
I am currently working on a server + client combo in Python and I'm using TCP sockets. From networking classes I know that a TCP connection should be closed step by step: first one side signals that it wants to close the connection and waits for confirmation, then the other side does the same. After that, the socket can be safely closed.
I've seen in the Python documentation the function socket.shutdown(flag), but I don't see how it fits into this standard, theoretical way of closing a TCP socket. As far as I know, it just blocks reading, writing, or both.
What is the best, most correct way to close a TCP socket in Python? Are there standard functions for the closing signals, or do I need to implement them myself?
shutdown is useful when you have to signal the remote end that no more data is being sent. You can specify with the shutdown() parameter which half-channel you want to close.
Most commonly, you want to close the TX half-channel by calling shutdown(1). At the TCP level this sends a FIN packet, and the remote end will receive 0 bytes if blocking on read(), but the remote end can still send data back, because the RX half-channel is still open.
Some application protocols use this to signal the end of the message. Other protocols detect the end of the message from the data itself. For example, in an interactive protocol (where messages are exchanged many times) there may be no opportunity, or need, to close a half-channel.
In HTTP, shutdown(1) is one method a client can use to signal that an HTTP request is complete. But the HTTP protocol itself embeds enough information to detect where a request ends, so multiple-request HTTP connections are still possible.
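A small Python illustration of closing the TX half-channel and then reading the reply until the peer finishes; the host, port, and payload are made up:

```python
import socket

with socket.create_connection(("example.com", 12345)) as s:
    s.sendall(b"complete request")
    s.shutdown(socket.SHUT_WR)   # same as shutdown(1): FIN is sent, RX half stays open
    chunks = []
    while True:
        chunk = s.recv(4096)     # b"" once the server closes its half too
        if not chunk:
            break
        chunks.append(chunk)
    reply = b"".join(chunks)
```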
I don't think that calling shutdown() before close() is always necessary, unless you need to explicitly close a half-channel. If you want to cease all communication, close() does that too. Calling shutdown() and forgetting to call close() is worse because the file descriptor resources are not freed.
From Wikipedia: "On SVR4 systems use of close() may discard data. The use of shutdown() or SO_LINGER may be required on these systems to guarantee delivery of all data." This means that, if you have outstanding data in the output buffer, a close() could discard this data immediately on an SVR4 system. Linux, BSD, and BSD-derived systems like Apple's are not SVR4 and will try to send the output buffer in full after close(). I am not sure whether any major commercial UNIX is still SVR4 these days.
Again using HTTP as an example, an HTTP client running on SVR4 would not lose data by using close(), because it keeps the connection open after the request in order to read the response. An HTTP server on SVR4 would have to be more careful, calling shutdown(2) before close() after sending the whole response, because the response could still be partly in the output buffer.
According to the Python documentation:
Strictly speaking, you’re supposed to use shutdown on a socket before you close it. The shutdown is an advisory to the socket at the other end. Depending on the argument you pass it, it can mean “I’m not going to send anymore, but I’ll still listen”, or “I’m not listening, good riddance!”. Most socket libraries, however, are so used to programmers neglecting to use this piece of etiquette that normally a close is the same as shutdown(); close(). So in most situations, an explicit shutdown is not needed.
I think the most correct way to close a TCP connection is to use shutdown before closing it, because close is not atomic and skipping shutdown can cause bugs. Suppose you call close without shutdown and the data hasn't actually reached the server correctly; Python tears down the connection at the same time, the server can't reply to the client, and the socket at the other end may hang indefinitely.
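As a sketch of the shutdown-then-close etiquette described above (assuming `s` is a connected socket and all application data has already been sent):

```python
import socket

def graceful_close(s):
    s.shutdown(socket.SHUT_WR)   # "I'm not going to send anymore, but I'll still listen"
    while s.recv(4096):          # drain until the peer closes its side (recv returns b"")
        pass
    s.close()                    # finally release the file descriptor
```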
I have implemented a server program using Twisted. I am using basic.lineReceiver with the method dataReceived to receive data from multiple clients. Also, I am using protocol.ServerFactory to keep track of connected clients. The server sends some commands to each connected client. Based on the response that the server gets from each client, it (the server) should perform some tasks. Thus, the best solution that came to my mind was to create a buffer for received messages as a python list, and each time that the functions at server side want to know the response from a client, they access the last element of the buffer list (of that client).
This approach has turned out to be unreliable. The first issue is that since TCP streaming is used, sometimes messages merge (I can use a delimiter for this). Second, the received messages are sometimes not in their appropriate sequence. Third, the networking communication seems to be too slow, as when the server initially tries to access the last element of the buffered list, the list is empty (this shows that the last messages on the buffer might not be the response to the last sent commands).
Could you tell me what is the best practice for using dataReceived or its equivalents in the above problem? Thank you in advance.
EDIT 1 (Answer): While I accept @Jean-Paul Calderone's answer, since I certainly learned from it, I would like to add that in my own research of Twisted's documentation I learned that, in order to avoid delays in the server's communications, one should use return at the end of the dataReceived() or lineReceived() functions, and this solved part of my problem. The rest is explained in the answer.
I have implemented a server program using Twisted. I am using basic.lineReceiver with the method dataReceived to receive data from multiple clients.
This is a mistake - an unfortunately common one brought on by the mistaken use of inheritance in many of Twisted's protocol implementations as the mechanism for building up more and more sophisticated behaviors. When you use twisted.protocols.basic.LineReceiver, the dataReceived callback is not for you. LineReceiver.dataReceived is an implementation detail of LineReceiver. The callback for you is LineReceiver.lineReceived. LineReceiver.dataReceived looks like it might be for you - it doesn't start with an underscore or anything - but it's not. dataReceived is how LineReceiver receives information from its transport. It is one of the public methods of IProtocol - the interface between a transport and the protocol interpreting the data received over that transport. Yes, I just said "public method" there. The trouble is it's public for the benefit of someone else. This is confusing and perhaps not communicated as well as it could be. No doubt this is why it is a Frequently Asked Question.
This approach has turned out to be unreliable. The first issue is that since TCP streaming is used, sometimes messages merge (I can use a delimiter for this).
Use of dataReceived is why this happens. LineReceiver already implements delimiter-based parsing for you. That's why it's called "line" receiver - it receives lines separated by a delimiter. If you override lineReceived instead of dataReceived then you'll be called with each line that is received, regardless of how TCP splits things up or smashes them together.
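For illustration, a minimal LineReceiver-based server (the class names, port, and command bytes are invented) that overrides lineReceived and lets LineReceiver do the delimiter handling:

```python
from twisted.internet import protocol, reactor
from twisted.protocols import basic

class CommandProtocol(basic.LineReceiver):
    delimiter = b"\r\n"                      # LineReceiver's default, shown for clarity

    def lineReceived(self, line):
        # Called once per complete line, however TCP split the bytes up.
        self.factory.handleResponse(self, line)

class CommandFactory(protocol.ServerFactory):
    protocol = CommandProtocol

    def handleResponse(self, proto, line):
        # React to the client's response here, e.g. send the next command.
        proto.sendLine(b"NEXT COMMAND")

reactor.listenTCP(8123, CommandFactory())
reactor.run()
```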
Second, the received messages are sometimes not in their appropriate sequence.
TCP is a reliable, ordered, stream-oriented transport. "Ordered" means that bytes arrive in the same order they are sent. Put another way, when you write("x"); write("y") it is guaranteed that the receiver will receive "x" before they receive "y" (they may receive "x" and "y" in the same call to recv() but if they do, the data will definitely be "xy" and not "yx"; or they may receive the two bytes in two calls to recv() and if they do, the first recv() will definitely be "x" and the second will definitely be "y", not the other way around).
If bytes appear to be arriving in a different order than you sent them, there's probably another bug somewhere that makes it look like this is happening - but it actually isn't. Your platform's TCP stack is very likely very close to bug free and in particular it probably doesn't have TCP data re-ordering bugs. Likewise, this area of Twisted is extremely well tested and probably works correctly. This leaves a bug in your application code or a misinterpretation of your observations. Perhaps your code doesn't always append data to a list or perhaps the data isn't being sent in the order you expected.
Another possibility is that you are talking about the order in which data arrives across multiple separate TCP connections. TCP is only ordered over a single connection. If you have two connections, there are very few (if any) guarantees about the order in which data will arrive over them.
Third, the networking communication seems to be too slow, as when the server initially tries to access the last element of the buffered list, the list is empty (this shows that the last messages on the buffer might not be the response to the last sent commands).
What defines "too slow"? The network is as fast as the network is. If that's not fast enough for you, find a fatter piece of copper. It sounds like what you really mean here is that your server sometimes expects data to have arrived before that data actually arrives. This doesn't mean the network is too slow, though, it means your server isn't properly event driven. If you're inspecting a buffer and not finding the information you expected, it's because you inspected it before the occurrence of the event which informs you of the arrival of that information. This is why Twisted has all these callback methods - dataReceived, lineReceived, connectionLost, etc. When lineReceived is called, this is an event notification telling you that right now something happened which resulted in a line being available (and, for convenience, lineReceived takes one argument - an object representing the line which is now available).
If you have some code that is meant to run when a line has arrived, consider putting that code inside an implementation of the lineReceived method. That way, when it runs (in response to a line being received), you can be 100% sure that you have a line to operate on. You can also be sure that it will run as soon as possible (as soon as the line arrives) but no sooner.
I have a Python test program for testing features of another software component, let's call the latter the component under test (COT).
The Python test program is connected to the COT via a persistent TCP connection.
The Python program is using the Python socket API for this.
Now in order to simulate a failure of the physical link, I'd like to have the Python program shut the socket down, but without disconnecting appropriately.
I.e. I don't want anything to be sent on the TCP channel any more, including any TCP SYN/ACK/FIN. I just want the socket to go silent. It must not respond to the remote packets any more.
This is not as easy as it seems, since calling close on a socket will send a TCP FIN packet to the remote end (a graceful disconnection).
So how can I kill the socket without sending any packets out?
I cannot shut down the Python program itself, because it needs to maintain other connections to other components.
For information, the socket runs in a separate thread. So I thought of abruptly killing the thread, but this is also not so easy. (Is there any way to kill a Thread?)
Any ideas?
You can't do that from a userland process, since the in-kernel network stack still holds resources and state related to the given TCP connection. Even if you kill your whole process, the kernel will send a FIN to the other side, because it knows which file descriptors your process had and will try to clean them up properly.
One way to get around this is to engage firewall software (on the local or an intermediate machine). Call a script that tells the firewall to drop all packets from/to the given IP and port (which, of course, needs appropriate administrative privileges).
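For example, on a Linux host with iptables (run as root; the peer address and port below are placeholders), the test program could shell out to something like:

```python
import subprocess

peer_ip, peer_port = "192.0.2.10", 5000

# Drop everything arriving from the peer and everything we would send to it,
# so the kernel can no longer ACK, FIN, or RST on this connection.
rules = [
    ["-A", "INPUT", "-s", peer_ip, "-p", "tcp", "--sport", str(peer_port)],
    ["-A", "OUTPUT", "-d", peer_ip, "-p", "tcp", "--dport", str(peer_port)],
]
for rule in rules:
    subprocess.run(["iptables", *rule, "-j", "DROP"], check=True)
```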
Contrary to Nikolai's answer, there is indeed a way to reset the connection from userland such that an RST is sent and pending data discarded, rather than a FIN after all the pending data. However, as it is more abused than used, I won't publish it here. And I don't know whether it can be done from Python. Setting one of the three possible SO_LINGER configurations and closing will do it. I won't say more than that, and I will say that this technique should only be used for the purpose outlined in the question.
Should sockets be set to non-blocking when used with select.select in Python?
What difference does it make if they are or aren't?
Occasionally I find that calling send on a socket that select() reports as writable will block. Furthermore I find that blocking sockets will generally send the whole buffer given (128 KiB). In non-blocking mode, sending will accept far fewer bytes (20-40 KiB compared with the 128 KiB just mentioned) and return more quickly. I'm using Python 3.1 on Lucid.
The answer might be OS dependent unfortunately. I'm replying only regarding Linux.
I'm not aware of differences regarding blocking/non-blocking sockets in select, but on Linux, the select system call's man page has this in its 'BUGS' section:
Under Linux, select() may report a socket file descriptor as "ready for reading", while nevertheless a subsequent read blocks. This could for example happen when data has arrived but upon examination has wrong checksum and is discarded. There may be other circumstances in which a file descriptor is spuriously reported as ready. Thus it may be safer to use O_NONBLOCK on sockets that should not block.
I doubt a Python abstraction on top of that could "hide" this issue without side effects.
As for the blocking write sending more data, that's expected. send will block until there is enough buffer space to pass your whole request down if the socket is blocking. If the socket is non-blocking, it only sends as much as can currently fit in the socket's send buffer.
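A sketch of the non-blocking pattern this implies (the address and buffer size are placeholders): a socket reported as writable may still accept only part of the buffer, so the remainder has to be carried over to the next iteration:

```python
import select
import socket

sock = socket.create_connection(("example.com", 12345))
sock.setblocking(False)

outbuf = b"x" * (128 * 1024)          # the 128 KiB buffer from the question

while outbuf:
    _, writable, _ = select.select([], [sock], [])
    if sock in writable:
        try:
            sent = sock.send(outbuf)  # often far less than len(outbuf)
        except BlockingIOError:       # spuriously reported as ready; just retry
            continue
        outbuf = outbuf[sent:]
```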