Detect the end of raw data - Python

Twisted has two data reception modes: Line Mode and Raw Mode, and we can switch between them using the setRawMode() and setLineMode() functions.
Line mode detects an end of line and then calls the lineReceived() function.
From the Twisted docs:
def rawDataReceived(self, data):
    """Override this for when raw data is received."""
How can Twisted detect the end of raw data and then call rawDataReceived()?
EDIT:
I'll add this to complete my question.
I'm using this Qt function to send data to the Twisted server
qint64 QIODevice::write(const QByteArray & byteArray)
I thought that calling write() twice would make the Twisted server trigger rawDataReceived() twice as well:
write( "raw1" );
write( "raw2" );
but the data is received all at once.

You asked:
How can Twisted detect the end of raw data and then call rawDataReceived()?
In short, when you turn on raw mode you're asking Twisted not to do any detection.
... but let me explain
When you talk about 'detecting the end of data' inside a connection (i.e. if you're not closing the connection at the end of the data), you're talking about an idea that is normally referred to as framing.
Framing is one of the primary issues you have to keep in mind when you're doing application-level network programming, because most (networking) protocols don't guarantee data framing to the application.
Confusingly, many networking protocols (of which TCP is one of the most notorious) will often, but not always, present data to the receiver in the same way it was transmitted (i.e. as though it had framing: each write causes exactly one read - but only under slow use and low load). Because of this maybe-it-will-work-maybe-it-won't behavior, the best practice is to always explicitly build in some sort of framing.
The most common method of adding application-level framing in TCP/serial/keyboard-style interfaces is to use line breaks as end-of-frame markers, which is what LineMode is about.
Turning on raw mode in Twisted is like saying 'I want to write my own framing', but I doubt that's really what you're after.
Instead you probably want to look at some of the other helper protocols (netstring, prefixed-message-length) that Twisted offers that will do binary framing for you (also see SO: Fragmented data in Twisted dataReceived by Twisted's author Glyph).
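For illustration, here is a minimal sketch (my own, not from the answer) of the prefixed-message-length approach using Twisted's Int32StringReceiver; the EchoFrames/EchoFactory names and port 8000 are placeholders:

from twisted.internet import protocol, reactor
from twisted.protocols.basic import Int32StringReceiver

class EchoFrames(Int32StringReceiver):
    # Each sendString() on the peer arrives as exactly one stringReceived()
    # call here, because a 4-byte length prefix frames every message.
    def stringReceived(self, string):
        self.sendString(string)    # 'string' is one complete message

class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return EchoFrames()

reactor.listenTCP(8000, EchoFactory())
reactor.run()

Note that the sending side (the Qt client in the question) would then have to prepend the same 4-byte, big-endian length before each payload it writes.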

Twisted does not detect the end of the raw data. It just calls rawDataReceived as it receives data.
The following is the relevant part of the Twisted code (protocols/basic.py):
def dataReceived(self, data):
    """
    Protocol.dataReceived.
    Translates bytes into lines, and calls lineReceived (or
    rawDataReceived, depending on mode.)
    """
    if self._busyReceiving:
        self._buffer += data
        return

    try:
        self._busyReceiving = True
        self._buffer += data
        while self._buffer and not self.paused:
            if self.line_mode:
                ....
            else:
                data = self._buffer
                self._buffer = b''
                why = self.rawDataReceived(data)  # <--------
                if why:
                    return why
    finally:
        self._busyReceiving = False
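In other words, if you stay in raw mode you have to reassemble messages yourself. Here is a rough sketch of what that looks like with a 4-byte length prefix (my own illustration, not part of the answer; frameReceived() is a made-up hook name):

import struct
from twisted.protocols.basic import LineReceiver

class RawFramer(LineReceiver):
    def connectionMade(self):
        self._raw_buffer = b''
        self.setRawMode()            # from now on rawDataReceived() is called

    def rawDataReceived(self, data):
        self._raw_buffer += data
        # Extract as many complete [length][payload] frames as are buffered.
        while len(self._raw_buffer) >= 4:
            (length,) = struct.unpack('!I', self._raw_buffer[:4])
            if len(self._raw_buffer) < 4 + length:
                break                # frame incomplete, wait for more data
            payload = self._raw_buffer[4:4 + length]
            self._raw_buffer = self._raw_buffer[4 + length:]
            self.frameReceived(payload)

    def frameReceived(self, payload):
        print "got one complete frame:", repr(payload)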

Related

What is the difference between socket.send() and socket.sendall()?

I'm confused about the socket.send() and socket.sendall() functions in Python. As I understand from the documentation, send() uses the TCP protocol and sendall() uses the UDP protocol for sending data. I know that TCP is more reliable for most web applications because we can check which packets were sent and which were not. That's why I think using send() could be more reliable than sendall().
At this point, I want to ask: what exactly is the difference between these two functions, and which one is more reliable for web applications?
Thank you.
socket.send is a low-level method and basically just the C/syscall method send(3) / send(2). It can send fewer bytes than you requested, but returns the number of bytes sent.
socket.sendall is a high-level Python-only method that sends the entire buffer you pass or throws an exception. It does that by calling socket.send until everything has been sent or an error occurs.
If you're using TCP with blocking sockets and don't want to be bothered by internals (this is the case for most simple network applications), use sendall.
And the Python docs:
Unlike send(), this method continues to send data from string until either all data has been sent or an error occurs. None is returned on success. On error, an exception is raised, and there is no way to determine how much data, if any, was successfully sent.
Credits to Philipp Hagemeister for brief description I got in the past.
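To make the practical difference concrete, here is a quick usage sketch of my own (the host, port, and payload are placeholders):

import socket

sock = socket.create_connection(("example.com", 80))
payload = b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n"

# send(): may transmit only part of the buffer; you must check the return value.
sent = sock.send(payload)
remaining = payload[sent:]           # it is up to you to send the rest

# sendall(): either everything goes out, or an exception is raised.
try:
    sock.sendall(remaining)
except socket.error:
    pass  # no way to know how much of 'remaining' was actually sent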
edit
sendall uses send under the hood - take a look at the CPython implementation. Here is a sample function acting (more or less) like sendall:
def sendall(sock, data, flags=0):
    ret = sock.send(data, flags)
    if ret > 0:
        return sendall(sock, data[ret:], flags)
    else:
        return None
or from RPython (the PyPy source):
def sendall(self, data, flags=0, signal_checker=None):
    """Send a data string to the socket. For the optional flags
    argument, see the Unix manual. This calls send() repeatedly
    until all data is sent. If an error occurs, it's impossible
    to tell how much data has been sent."""
    with rffi.scoped_nonmovingbuffer(data) as dataptr:
        remaining = len(data)
        p = dataptr
        while remaining > 0:
            try:
                res = self.send_raw(p, remaining, flags)
                p = rffi.ptradd(p, res)
                remaining -= res
            except CSocketError, e:
                if e.errno != _c.EINTR:
                    raise
            if signal_checker is not None:
                signal_checker()

Non-blocking socket in Python?

Is it me, or can I not find a good tutorial on non-blocking sockets in Python?
I'm not sure exactly how to work with .recv and .send. According to the Python docs (my understanding of them, at least), the received or sent data might be only partial. Does that mean I have to somehow concatenate the data while receiving, and make sure all the data goes through when sending? If so, how? An example would be much appreciated.
It doesn't really matter whether your socket is in non-blocking mode or not; recv/send work pretty much the same. The only difference is that a non-blocking socket raises a 'Resource temporarily unavailable' error instead of waiting for data or for the socket to become writable.
The recv method returns the number of bytes received, which is guaranteed to be less than or equal to the passed bufsize. If you want to receive exactly size bytes, you should do something similar to the following code:
def recvall(sock, size):
    data = ''
    while len(data) < size:
        d = sock.recv(size - len(data))
        if not d:
            # Connection closed by remote host, do what's best for you
            return None
        data += d
    return data
It is important to remember that in blocking mode you have to do exactly the same thing (the number of bytes delivered to the application layer is limited, for example, by the receive buffer size in the OS).
The send method returns the number of bytes sent, which is guaranteed to be less than or equal to the length of the passed string. If you want to ensure the whole message is sent, you should do something similar to the following code:
def sendall(sock, data):
    while data:
        sent = sock.send(data)
        data = data[sent:]
You can use sock.sendall directly, but (according to the documentation) on error, an exception is raised, and there is no way to determine how much data, if any, was successfully sent.
Sockets in Python follow the BSD socket API and behave in a similar way to C-style sockets (the difference being, for example, that they raise an exception instead of returning an error code). You should be fine with any socket tutorial on the web plus the man pages.
Keep bytes you want to send in a buffer. (A list of byte-strings would be best, since you don't have to concatenate them.) Use the fcntl.fcntl function to set the socket in non-blocking mode:
import fcntl, os
fcntl.fcntl(mysocket, fcntl.F_SETFL, os.O_NONBLOCK)
Then select.select will tell you when it is OK to read from and write to the socket. (Writing when it is not OK will give you the EAGAIN error in non-blocking mode.) When you write, check the return value to see how many bytes were actually written. Eliminate that many bytes from your buffer. If you use the list-of-strings approach, you only need to try writing the first string each time.
If you read the empty string, your socket has closed.
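Here is a rough sketch (mine, not from the answer) of the buffer-plus-select loop described above; 'mysocket' is assumed to be a connected, non-blocking socket:

import select

outgoing = [b"first message", b"second message"]   # list of byte-strings to send
received = b""

while True:
    # Only ask select about writability if we actually have something to send.
    want_write = [mysocket] if outgoing else []
    readable, writable, _ = select.select([mysocket], want_write, [], 1.0)

    if readable:
        chunk = mysocket.recv(4096)
        if not chunk:            # empty string: the peer closed the connection
            break
        received += chunk

    if writable and outgoing:
        sent = mysocket.send(outgoing[0])       # may send only part of the string
        if sent == len(outgoing[0]):
            outgoing.pop(0)                     # whole string written, drop it
        else:
            outgoing[0] = outgoing[0][sent:]    # keep the unsent tail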

How can I create a non-HTTP proxy with Twisted

How can I create a non-HTTP proxy with Twisted? Instead of HTTP, I would like to do it for the Terraria protocol, which is made entirely of binary data. I see that Twisted has a built-in proxy for HTTP connections, but this application needs to act more like an entry point which is forwarded to a set server (almost like a BNC on IRC).
I can't figure out how to read the data off of one connection and send it to the other connection.
I have already tried using a socket for this task, but the blocking recv and send methods do not work well as two connections need to be live at the same time.
There are several different ways to create proxies in Twisted. The basic technique is built on peering, by taking two different protocols, on two different ports, and somehow gluing them together so that they can exchange data with each other.
The simplest proxy is a port-forwarder. Twisted ships with a port-forwarder implementation, see http://twistedmatrix.com/documents/current/api/twisted.protocols.portforward.html for the (underdocumented) classes ProxyClient and ProxyServer, although the actual source at http://twistedmatrix.com/trac/browser/tags/releases/twisted-11.0.0/twisted/protocols/portforward.py might be more useful to read through. From there, we can see the basic technique of proxying in Twisted:
def dataReceived(self, data):
    self.peer.transport.write(data)
When a proxying protocol receives data, it puts it out to the peer on the other side. That's it! Quite simple. Of course, you'll usually need some extra setup... Let's look at a couple of proxies I've written before.
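Before moving on to those, here is a minimal usage sketch of my own for the stock port-forwarder mentioned above; the host and ports are placeholders:

from twisted.internet import reactor
from twisted.protocols import portforward

# Every connection to local port 7778 is transparently forwarded to
# 127.0.0.1:7777; bytes are copied in both directions by dataReceived().
reactor.listenTCP(7778, portforward.ProxyFactory("127.0.0.1", 7777))
reactor.run()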
This is a proxy for Darklight, a little peer-to-peer system I wrote. It is talking to a backend server, and it wants to only proxy data if the data doesn't match a predefined header. You can see that it uses ProxyClientFactory and endpoints (fancy ClientCreator, basically) to start proxying, and when it receives data, it has an opportunity to examine it before continuing, either to keep proxying or to switch protocols.
class DarkServerProtocol(Protocol):
    """
    Shim protocol for servers.
    """

    peer = None
    buf = ""

    def __init__(self, endpoint):
        self.endpoint = endpoint
        print "Protocol created..."

    def challenge(self, challenge):
        log.msg("Challenged: %s" % challenge)
        # ...omitted for brevity...
        return is_valid(challenge)

    def connectionMade(self):
        pcf = ProxyClientFactory()
        pcf.setServer(self)
        d = self.endpoint.connect(pcf)
        d.addErrback(lambda failure: self.transport.loseConnection())

        self.transport.pauseProducing()

    def setPeer(self, peer):
        # Our proxy passthrough has succeeded, so we will be seeing data
        # coming through shortly.
        log.msg("Established passthrough")
        self.peer = peer

    def dataReceived(self, data):
        self.buf += data

        # Examine whether we have received a challenge.
        if self.challenge(self.buf):
            # Excellent; change protocol.
            p = DarkAMP()
            p.factory = self.factory
            self.transport.protocol = p
            p.makeConnection(self.transport)
        elif self.peer:
            # Well, go ahead and send it through.
            self.peer.transport.write(data)
This is a rather complex chunk of code which takes two StatefulProtocols and glues them together rather forcefully. This is from a VNC proxy (https://code.osuosl.org/projects/twisted-vncauthproxy to be precise), which needs its protocols to do a lot of pre-authentication stuff before they are ready to be joined. This kind of proxy is the worst case; for speed, you don't want to interact with the data going over the proxy, but you need to do some setup beforehand.
def start_proxying(result):
    """
    Callback to start proxies.
    """

    log.msg("Starting proxy")

    client_result, server_result = result

    success = True
    client_success, client = client_result
    server_success, server = server_result

    if not client_success:
        success = False
        log.err("Had issues on client side...")
        log.err(client)

    if not server_success:
        success = False
        log.err("Had issues on server side...")
        log.err(server)

    if not success:
        log.err("Had issues connecting, disconnecting both sides")
        if not isinstance(client, Failure):
            client.transport.loseConnection()
        if not isinstance(server, Failure):
            server.transport.loseConnection()
        return

    server.dataReceived = client.transport.write
    client.dataReceived = server.transport.write

    # Replay last bits of stuff in the pipe, if there's anything left.
    data = server._sful_data[1].read()
    if data:
        client.transport.write(data)
    data = client._sful_data[1].read()
    if data:
        server.transport.write(data)

    server.transport.resumeProducing()
    client.transport.resumeProducing()

    log.msg("Proxying started!")
So, now that I've explained that...
I also wrote Bravo. As in, http://www.bravoserver.org/. So I know a bit about Minecraft, and thus about Terraria. You will probably want to parse the packets coming through your proxy on both sides, so your actual proxying might start out looking like this, but it will quickly evolve as you begin to understand the data you're proxying. Hopefully this is enough to get you started!
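As a concrete (and entirely speculative) starting point for that kind of packet inspection, one could subclass the portforward classes so each direction gets a hook before forwarding; every class name and port below is a placeholder of mine:

from twisted.internet import reactor
from twisted.protocols import portforward

class SnoopingProxyClient(portforward.ProxyClient):
    def dataReceived(self, data):
        print "server -> client: %d bytes" % len(data)   # parse/inspect here
        portforward.ProxyClient.dataReceived(self, data)

class SnoopingProxyClientFactory(portforward.ProxyClientFactory):
    protocol = SnoopingProxyClient

class SnoopingProxyServer(portforward.ProxyServer):
    clientProtocolFactory = SnoopingProxyClientFactory

    def dataReceived(self, data):
        print "client -> server: %d bytes" % len(data)   # parse/inspect here
        portforward.ProxyServer.dataReceived(self, data)

class SnoopingProxyFactory(portforward.ProxyFactory):
    protocol = SnoopingProxyServer

# Listen on 7778 and forward everything to a Terraria server on 7777.
reactor.listenTCP(7778, SnoopingProxyFactory("127.0.0.1", 7777))
reactor.run()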

Twisted transport.write

Is there any way to force self.transport.write(response) to write immediately to its connection, so that the next call to self.transport.write(response) does not get buffered into the same call?
We have a client with legacy software we cannot amend, which reads the first response and then starts reading again. The problem I have is that Twisted joins the two writes together, which breaks the client. Any ideas? I have tried looking into deferreds, but I don't think they will help in this case.
Example:
self.transport.write("|123|") # amount of messages to follow
a loop to generate next message
self.transport.write("|message 1 text here|")
Expected:
|123|
|message 1 text here|
Result:
|123||message 1 text here|
I was having a somewhat related problem using down-level Python 2.6. The host I was talking to was expecting a single ACK character and THEN a separate data buffer, and they all came at once. On top of this, it was a TLS connection. However, if you reference the socket DIRECTLY, you can invoke sendall() by changing:
self.transport.write(Global.ACK)
to:
self.transport.getHandle().sendall(Global.ACK)
... and that should work. This does not seem to be a problem on Python 2.7 with Twisted on x86, just Python 2.6 on a SheevaPlug ARM processor.
Can you tell us which transport you are using? For most implementations, this is the typical approach:
def write(self, data):
    if data:
        if self.writeInProgress:
            self.outQueue.append(data)
        else:
            ....
Based on these details, the behavior of the write function can be changed to do what you want.
Maybe you can register your protocol as a pull producer with the transport:
self.transport.registerProducer(self, False)
and then create a write method in your protocol whose job is to buffer the data until the transport calls your protocol's resumeProducing method to fetch the data one piece at a time:
def write(self, data):
    self._buffers.append(data)

def resumeProducing(self):
    data = self._buffers.pop(0)   # FIFO, so messages go out in order
    self.transport.write(data)

Read from socket: Is it guaranteed to at least get x bytes?

I have a rare bug that seems to occur when reading from a socket.
It seems that while reading data I sometimes get only 1-3 bytes of a data package that is bigger than that.
From pipe programming I learned that I always get at least 512 bytes, as long as the sender provides enough data.
Also, my sender transmits at least 4 bytes any time it transmits anything -- so I was thinking that at least 4 bytes would be received at once at the beginning (!!) of the transmission.
In 99.9% of all cases my assumption seems to hold ... but in very rare cases fewer than 4 bytes are received. It seems ridiculous to me: why would the networking system do this?
Does anybody know more?
Here is the reading-code I use:
mySock, addr = masterSock.accept()
mySock.settimeout(10.0)
result = mySock.recv(BUFSIZE)
# 4 bytes are needed here ...
...
# read remainder of datagram
...
The sender sends the complete datagram with one call of send.
Edit: the whole thing is running on localhost -- so no complicated network equipment (routers etc.) is involved. BUFSIZE is at least 512 and the sender sends at least 4 bytes.
I assume you're using TCP. TCP is a stream-based protocol with no notion of packets or message boundaries.
This means that when you do a read you may get fewer bytes than you requested. If your data is 128k, for example, you may only get 24k on your first read, requiring you to read again to get the rest of the data.
For an example in C:
int read_data(int sock, int size, unsigned char *buf) {
    int bytes_read = 0, len = 0;
    while (bytes_read < size &&
           ((len = recv(sock, buf + bytes_read, size - bytes_read, 0)) > 0)) {
        bytes_read += len;
    }
    if (len == 0 || len < 0) doerror();
    return bytes_read;
}
As far as I know, this behaviour is perfectly reasonable. Sockets may, and probably will, fragment your data as they transmit it. You should be prepared to handle such cases by applying appropriate buffering techniques.
On the other hand, if you are transmitting the data over localhost and you are indeed getting only 4 bytes, it probably means you have a bug somewhere else in your code.
EDIT: An idea - try to fire up a packet sniffer and see whether the transmitted packet is full or not; this might give you some insight into whether the bug is in your client or in your server.
The simple answer to your question, "Read from socket: Is it guaranteed to at least get x bytes?", is no. Look at the doc strings for these socket methods:
>>> import socket
>>> s = socket.socket()
>>> print s.recv.__doc__
recv(buffersize[, flags]) -> data
Receive up to buffersize bytes from the socket. For the optional flags
argument, see the Unix manual. When no data is available, block until
at least one byte is available or until the remote end is closed. When
the remote end is closed and all data is read, return the empty string.
>>>
>>> print s.settimeout.__doc__
settimeout(timeout)
Set a timeout on socket operations. 'timeout' can be a float,
giving in seconds, or None. Setting a timeout of None disables
the timeout feature and is equivalent to setblocking(1).
Setting a timeout of zero is the same as setblocking(0).
>>>
>>> print s.setblocking.__doc__
setblocking(flag)
Set the socket to blocking (flag is true) or non-blocking (false).
setblocking(True) is equivalent to settimeout(None);
setblocking(False) is equivalent to settimeout(0.0).
From this it is clear that recv() is not required to return as many bytes as you asked for. Also, because you are calling settimeout(10.0), it is possible that some, but not all, of the data is received near the expiration of the timeout for the recv(). In that case recv() will return what it has read so far, which will be less than you asked for (though consistently < 4 bytes does seem unlikely).
You mention 'datagram' in your question, which implies that you are using (connectionless) UDP sockets rather than TCP. The distinction is described here. The posted code does not show the socket creation, so we can only guess; however, this detail can be important. It may help if you could post a more complete sample of your code.
If the problem is reproducible, you could disable the timeout (which, incidentally, you do not seem to be handling) and see if that fixes the problem.
This is just the way TCP works. You aren't going to get all of your data at once. There are just too many timing issues between sender and receiver, including the sender's operating system, NIC, routers, switches, the wires themselves, the receiver's NIC, OS, and so on. There are buffers in the hardware and in the OS.
You can't assume that a TCP network is the same as an OS pipe. With the pipe it's all software, so there's no cost in delivering the whole message at once for most messages. With the network you have to assume there will be timing issues, even in a simple network.
That's why recv() can't give you all the data at once; it may just not be available yet, even if everything is working right. Normally, you will call recv() and capture the return value, which tells you how many bytes you've received. If it's less than you expect, you need to keep calling recv() (as has been suggested) until you get the correct number of bytes. Be aware that, at the C level, recv() returns -1 on error (Python raises an exception instead), so check for that and check your documentation for errno values. EAGAIN in particular seems to cause people problems; you can read about it on the internet for details, but if I recall, it means that no data is available at the moment and you should try again.
Also, it sounds from your post like you're sure the sender is sending the data you need, but just to be complete, check this:
http://beej.us/guide/bgnet/output/html/multipage/advanced.html#sendall
You should be doing something similar on the recv() end to handle partial receives. If you have a fixed packet size, you should read until you get the amount of data you expect. If you have a variable packet size, you should read until you have the header that tells you how much data follows, and then read that much more data.
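Here is a sketch of that header-then-payload pattern (my own code, assuming a 4-byte big-endian length header is sent before each message):

import struct

def recv_exactly(sock, n):
    # Keep calling recv() until exactly n bytes have been collected.
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise EOFError("connection closed mid-message")
        data += chunk
    return data

def recv_message(sock):
    header = recv_exactly(sock, 4)          # fixed-size length header
    (length,) = struct.unpack("!I", header)
    return recv_exactly(sock, length)       # variable-size payload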
From the Linux man page of recv http://linux.about.com/library/cmd/blcmdl2_recv.htm:
The receive calls normally return any data available, up to the requested amount, rather than waiting for receipt of the full amount requested.
So, if your sender is still transmitting bytes, the call will only give what has been transmitted so far.
If the sender sends 515 bytes, and your BUFSIZE is 512, then the first recv will return 512 bytes, and the next will return 3 bytes... Could this be what's happening?
(This is just one case amongst many which will result in a 3-byte recv from a larger send...)
If you are still interested, patterns like this:
# 4 bytes are needed here ...
# read remainder of datagram ...
may create the silly window syndrome.
Check this out
Use the recv_into(...) method from the socket module.
Robert S. Barnes wrote the example in C.
But you can do it in Python 2.x with the standard libraries:
import struct

def readReliably(s, n):
    buf = bytearray(n)
    view = memoryview(buf)
    sz = s.recv_into(view, n)
    return sz, buf

while True:
    sk, skfrom = s.accept()
    sz, buf = readReliably(sk, 4)
    a = struct.unpack("4B", buf)
    print repr(a)
    ...
Note that the sz returned by readReliably() may be less than n; a single recv_into() call does not guarantee that n bytes are read.
