I am trying to implement a simple UDP client and server. The server should receive a message and return a transformed one.
My approach on the server is to listen for UDP messages in a loop, spawn a multiprocessing.Process for each incoming message, and send the reply from within that Process instance:
class InputProcessor(Process):
    ...
    def run(self):
        output = self.process_input()
        self.sock.sendto(output, self.addr)  # send a reply

if __name__ == "__main__":
    print "serving at %s:%s" % (UDP_IP, UDP_PORT)
    sock = socket.socket(socket.AF_INET,     # Internet
                         socket.SOCK_DGRAM)  # UDP
    sock.bind((UDP_IP, UDP_PORT))
    while True:
        data, addr = sock.recvfrom(1024)  # buffer size is 1024 bytes
        print "received message: %s from %s:%s" % (data, addr[0], addr[1])
        p = InputProcessor(sock, data, addr)
        p.start()
In the test client, I do something like this:
def send_message(ip, port, data):
    sock = socket.socket(socket.AF_INET,     # Internet
                         socket.SOCK_DGRAM)  # UDP
    print "sending: %s" % data
    sock.sendto(data, (ip, port))
    sock.close()

for i in xrange(SECONDS * REQUESTS_PER_SECOND):
    data = generate_data()
    p = multiprocessing.Process(target=send_message, args=(UDP_IP,
                                                           UDP_PORT,
                                                           data))
    p.start()
    time.sleep(1.0 / REQUESTS_PER_SECOND)  # float division, so the pause isn't truncated to 0
The problem I am having with the code above is that when REQUESTS_PER_SECOND rises above a certain value (~50), some client processes seem to receive responses destined for other processes, i.e. process #1 receives the response meant for process #2, and vice versa.
Please criticize my code as much as possible, since I am new to network programming and may be missing something obvious. Maybe it would even be better for some reason to use Twisted; however, I am highly interested in understanding the internals. Thanks.
As per the previous answer, I think the main reason is a race condition on the UDP socket between the client processes. I don't see the receiving side in the client code, but presumably it is similar to the server's. What I think happens, in concrete terms, is that for values under 50 requests/second the request-response round trip completes and the client exits. When more requests arrive, there may be multiple processes blocked reading the same UDP socket, and it is then nondeterministic which client process receives an incoming message. If network latency is larger in the real setting, this limit will be hit sooner.
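A minimal sketch of one way to avoid that race (Python 3, with a hypothetical SERVER_ADDR and payload): every client process creates its own socket, sends from it, and reads the reply from that same socket, so the kernel routes each reply back to the ephemeral port that issued the matching request:

import multiprocessing
import socket

SERVER_ADDR = ("127.0.0.1", 9999)  # hypothetical server address, adjust as needed

def send_and_receive(data):
    # Each process owns its socket; the server replies to this socket's
    # ephemeral port, so processes cannot pick up each other's responses.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5.0)  # don't block forever if a reply gets lost
    try:
        sock.sendto(data, SERVER_ADDR)
        reply, addr = sock.recvfrom(1024)
        print("got reply %r from %s" % (reply, addr))
    except socket.timeout:
        print("no reply for %r" % data)
    finally:
        sock.close()

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=send_and_receive,
                                     args=(("ping %d" % i).encode(),))
             for i in range(10)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()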
Thanks a lot, guys! It seems I've found why my code failed before. I was using multiprocessing.Manager().dict() in the client to check whether the results from the server were correct. However, I didn't wrap the set of write operations on that dict() in any lock, and thus got a lot of errors even though the output from the server was correct.
In short: in the client, my checks of the server responses were incorrect, not the responses themselves.
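For reference, a rough sketch (Python 3, hypothetical names) of guarding compound updates to a Manager().dict() with a shared lock, which is what was missing in my client-side checks:

import multiprocessing

def record_result(shared, lock, key, value):
    # The whole check-and-set must happen under the lock, otherwise two
    # processes can interleave their reads and writes.
    with lock:
        previous = shared.get(key)
        shared[key] = value
        if previous is not None and previous != value:
            print("conflicting result for", key)

if __name__ == "__main__":
    manager = multiprocessing.Manager()
    results = manager.dict()
    lock = manager.Lock()
    procs = [multiprocessing.Process(target=record_result,
                                     args=(results, lock, "req-1", i))
             for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(dict(results))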
Related
I have the following problem: I want a server to send the contents of a text file when requested to do so. I have written a server script which sends the contents to the client, and a client script which receives all the contents with a recvall loop. The recvall loop works fine when I run the server and client on the same device for testing.
But when I run the server on a different device in the same wifi network and try to receive the text file contents from the server device, the recvall doesn't work and I only receive the first 1460 bytes of the text.
server script
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 5000))
server.listen(5)

def send_file(client):
    read_string = open("textfile", "rb").read()  # 6 kilobyte large textfile
    client.send(read_string)

while True:
    client, data = server.accept()
    connect_data = client.recv(1024)
    if connect_data == b"send_string":
        send_file(client)
    else:
        pass
client script
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("192.168.1.10", 5000))
connect_message = client.send(b"send_string")

receive_data = ""
while True:  # the recvall loop
    receive_data_part = client.recv(1024).decode()
    receive_data += receive_data_part
    if len(receive_data_part) < 1024:
        break

print(receive_data)
recv(1024) means to receive at least 1 and at most 1024 bytes. If the connection has closed, you receive 0 bytes, and if something goes wrong, you get an exception.
TCP is a stream of bytes. It doesn't try to keep the bytes from any given send together for the recv. When you make the call, if the TCP endpoint has some data, you get that data.
In the client, you assume that anything less than 1024 bytes must be the last bit of data. Not so. You can receive partial buffers at any time. It's a bit subtle on the server side, but you make the same mistake there by assuming you'll receive exactly the command b"send_string" in a single call.
You need some sort of protocol that tells receivers when they've gotten the right amount of data for an action. There are many ways to do this, so I can't really give you the answer. But this is why there are protocols out there like zeromq, xmlrpc, http, etc...
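As one illustration only (not the only way to do it), here is a rough Python 3 sketch of a length-prefixed framing scheme: every message is preceded by a 4-byte big-endian length, so the receiver knows exactly how many bytes to read before acting:

import socket
import struct

def send_msg(sock, payload):
    # Prefix the payload with its length, then hand everything to sendall().
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    # Keep calling recv() until exactly n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed after %d of %d bytes" % (len(buf), n))
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

With this in place the server would call recv_msg() to read the b"send_string" command and send_msg() to return the file, and the client would mirror that, instead of guessing based on how many bytes one recv() happened to return.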
I am writing a UDP server application that serves as a back end to Teltonika FMB630 car mounted devices.
I already took care of the protocol specifics and decoding, the problem I am facing relates to the UDP socket used.
My UDP server has to send an acknowledgement to the client device upon receiving a message (that is the protocol), however, if I send those ACKs, the server socket stops receiving data after a while.
The server's UDP socket object is passed to a concurrent.futures.ThreadPoolExecutor, which fires a function (send_ack) that sends the ACK; however, this is not the issue, because I tried calling send_ack in the main thread right after receiving data, and the same problem occurs.
I suspect the remote device somehow breaks the connection, or the ISP or MNO doesn't route the reply packet (this is a GPRS device), and then the socket.send() call used to send the acknowledgement somehow freezes other socket operations, specifically the recvfrom_into called in the main-thread loop.
I wrote two scripts to illustrate the situation:
udp_test_echo.py:
#!/usr/bin/env python
import socket
import concurrent.futures

def send_ack(sock, addr, ack):
    print("Sending ACK to {}".format(addr))
    sock.connect(addr)
    print("connected to {}".format(addr))
    sock.send(ack)
    print("ACK sent to {}".format(addr))

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 1337))
data = bytearray([0] * 10)
executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)

while True:
    print("listening")
    nbytes, address = s.recvfrom_into(data)
    print("Socket Data received {} bytes Address {}".format(nbytes, address))
    print("Data received: ", data, " Echoing back to client")
    executor.submit(send_ack, s, address, data[:nbytes])
udp_test_client.py:
#!/usr/bin/env python
import socket
import time
import random

def get_random_bytes():
    return bytearray([random.randint(0, 255) for b in range(10)])

ip = "127.0.0.1"
port = 1337

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect((ip, port))

while True:
    stuff_to_send = get_random_bytes()
    print("Sending stuff", stuff_to_send)
    s.sendall(stuff_to_send)
    print("reply: ", s.recvfrom(10))
    time.sleep(0.1)
Running udp_test_echo.py in one terminal and udp_test_client.py in another, we see normal operation, but if you Ctrl+C the test client and re-run it, you will see that the server doesn't respond until it is restarted.
Is there a way to time out a specific sending operation, i.e. a specific call to the socket.send() method, without affecting other calls? (I want my socket.recvfrom_into call to block on the main thread.)
If I settimeout() on the entire socket object, I will have to deal with many exceptions while waiting for data in the main thread, and I don't like having to rely on exceptions for proper program operation.
The culprit was the socket.connect() call in send_ack: once connect() is called on the server's socket object, the socket is tied to that single peer and will only receive datagrams from that address, so it effectively stops serving the port it was bound to at the start of the program for everybody else.
Instead the send_ack function was changed to be:
def send_ack(sock, addr, ack):
    print("Sending ACK to {}".format(addr))
    sock.sendto(ack, addr)
    print("ACK sent to {}".format(addr))
socket.sendto(data, address) sends the reply through the existing, unconnected socket without tying it to a single peer, so the server keeps receiving from all clients.
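To make the contrast concrete, a small Python 3 sketch (hypothetical address and port) of an unconnected UDP server loop that replies with sendto():

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 1337))  # hypothetical address/port

while True:
    data, addr = server.recvfrom(1024)
    # Reply to whoever sent this datagram; the socket stays unconnected and
    # keeps receiving from any client.
    server.sendto(data, addr)
    # Calling server.connect(addr) here instead would pin the socket to that
    # one peer, and datagrams from any other source address would be
    # silently dropped.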
I have just started learning python network programming. I was reading Foundations of Python Network Programming and could not understand the use of s.shutdown(socket.SHUT_WR) where s is a socket object.
Here is the code (where sys.argv[2] is the number of bytes the user wants to send, which gets rounded up to a multiple of 16) in which it is used:
import socket, sys

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
HOST = '127.0.0.1'
PORT = 1060

if sys.argv[1:] == ['server']:
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((HOST, PORT))
    s.listen(1)
    while True:
        print 'Listening at', s.getsockname()
        sc, sockname = s.accept()
        print 'Processing up to 1024 bytes at a time from', sockname
        n = 0
        while True:
            message = sc.recv(1024)
            if not message:
                break
            sc.sendall(message.upper())  # send it back uppercase
            n += len(message)
            print '\r%d bytes processed so far' % (n,),
            sys.stdout.flush()
        print
        sc.close()
        print 'Completed processing'

elif len(sys.argv) == 3 and sys.argv[1] == 'client' and sys.argv[2].isdigit():
    bytes = (int(sys.argv[2]) + 15) // 16 * 16  # round up to a multiple of 16
    message = 'capitalize this!'  # 16-byte message to repeat over and over
    print 'Sending', bytes, 'bytes of data, in chunks of 16 bytes'
    s.connect((HOST, PORT))
    sent = 0
    while sent < bytes:
        s.sendall(message)
        sent += len(message)
        print '\r%d bytes sent' % (sent,),
        sys.stdout.flush()
    print
    s.shutdown(socket.SHUT_WR)
    print 'Receiving all the data the server sends back'
    received = 0
    while True:
        data = s.recv(42)
        if not received:
            print 'The first data received says', repr(data)
        received += len(data)
        if not data:
            break
        print '\r%d bytes received' % (received,),
    s.close()

else:
    print >>sys.stderr, 'usage: tcp_deadlock.py server | client <bytes>'
And this is the explanation that the author provides which I am finding hard to understand:
Second, you will see that the client makes a shutdown() call on the socket after it finishes sending its transmission. This solves an important problem: if the server is going to read forever until it sees end-of-file, then how will the client avoid having to do a full close() on the socket and thus forbid itself from doing the many recv() calls that it still needs to make to receive the server’s response? The solution is to “half-close” the socket—that is, to permanently shut down communication in one direction but without destroying the socket itself—so that the server can no longer read any data, but can still send any remaining reply back in the other direction, which will still be open.
My understanding of what it will do is that it will prevent the client application from sending any further data, and thus will also prevent the server side from further attempting to read any data.
What I can't understand is why it is used in this program, and in what situations I should consider using it in my own programs.
My understanding of what it will do is that it will prevent the client application from sending any further data, and thus will also prevent the server side from further attempting to read any data.
Your understanding is correct.
What I can't understand is why it is used in this program …
As your own statement suggests, without the client's s.shutdown(socket.SHUT_WR) the server would never stop waiting for data; it would stick in its sc.recv(1024) forever, because no connection termination request would ever be sent to it.
Since the server would then never reach its sc.close(), the client for its part would also never stop waiting for data; it would stick in its s.recv(42) forever, because no connection termination request would be sent from the server.
Reading this answer to "close vs shutdown socket?" might also be enlightening.
The explanation is half-baked; it applies only to this specific code, and overall I would vote with all fours that this is bad practice.
Now, to understand why that is, you need to look at the server code. The server blocks in sc.recv(1024) waiting for data, processes whatever it gets (makes it upper-case), sends it back, and then loops around to read again. The problem is: how is the server supposed to know that the client has nothing more to send?
To resolve this you need to tell the server that there is no more data coming its way, so that message = sc.recv(1024) returns an empty string and the loop ends, and you do this by shutting down the socket in one direction.
You do not want to fully close the socket, because then the server would not be able to send you the reply.
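For completeness, a minimal Python 3 sketch (hypothetical function name, any host/port) of the half-close pattern the book's client uses: send everything, shut down the write side, then read until the peer closes its end:

import socket

def request(host, port, payload):
    with socket.create_connection((host, port)) as s:
        s.sendall(payload)
        s.shutdown(socket.SHUT_WR)   # "no more data from me"; EOF reaches the server
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:             # server closed its side: reply is complete
                break
            chunks.append(data)
    return b"".join(chunks)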
I was debugging a Python program; the application couldn't receive UDP packets as expected. Finally I found that it was the UdpSocket.connect call that caused the UdpSocket to lose these packets. See the code below:
def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.connect((server_ip, 9))   # This results in the issue.
                                   # Using sock.bind((local_ip, 12345))
                                   # instead would solve the problem.
    localip, localport = sock.getsockname()
    packet = GetRegisterData(localip, localport)
    sock.sendto(packet, (server_ip, 5000))  # the host (server_ip, 5000) will
                                            # always send back two udp packets
                                            # in response to this statement
    sleep(1)
    while True:
        response = sock.recv(1024)  # packets from (server_ip, 5000) reach the
                                    # machine but this statement never returns
        if not len(response):
            break
        print response
I am very new to Python and don't understand why this would happen. Can anybody help explain this?
[Update]
I used tcpdump to capture packets, only to find that the 'lost' packets do reach the machine, but for some unknown reason sock.recv just doesn't return. I'd like somebody to help explain why sock.recv doesn't return every time here.
You didn't mention where the packets that you expect to receive (but fail to) are coming from. I'm guessing they're not coming from the address you connected to, though. See the man page for connect(2) - which is what you're calling when you use this Python API - for information about why this matters. In particular:
If the socket sockfd is of type SOCK_DGRAM then addr is the address to which datagrams are sent by default, and the only address from which datagrams are received.
(emphasis mine).
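In other words, assuming (hypothetically) that the replies come from (server_ip, 5000) while your socket was connected to (server_ip, 9), a connected socket will drop them. A rough sketch of the two usual fixes, under that assumption:

import socket

server_ip = "192.168.1.100"  # hypothetical address

# Option 1: don't connect at all. An unconnected, bound UDP socket receives
# datagrams from any peer.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 12345))
sock.sendto(b"register", (server_ip, 5000))
reply, addr = sock.recvfrom(1024)

# Option 2: if you do want a connected socket, connect to the address that
# will actually send the replies, not to a different port.
sock2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock2.connect((server_ip, 5000))
sock2.send(b"register")
reply2 = sock2.recv(1024)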
I have a device that continually outputs data, and I would like to send that data to a client on the same network as it is produced, but I'm not finding a good solution. Here is what I'm trying.
Server:
import SocketServer
from subprocess import Popen, PIPE

class Handler(SocketServer.BaseRequestHandler):
    def handle(self):
        if not hasattr(self, 'Proc'):
            self.Proc = Popen('r.sh', stdout=PIPE)
        socket = self.request[1]
        socket.sendto(self.Proc.stdout.readline(), self.client_address)

if __name__ == "__main__":
    HOST, PORT = "192.168.1.1", 6001
    server = SocketServer.UDPServer((HOST, PORT), Handler)
    server.serve_forever()
Client:
import socket
import sys

data = " ".join(sys.argv[1:])
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(data + "\n", ("192.168.1.1", 6001))

try:
    received = sock.recv(1024)
    while True:
        print "Sent: {}".format(data)
        print "Received: {}".format(received)
        sock.sendto('more' + "\n", ("192.168.1.1", 6001))
        received = sock.recv(1024)
except:
    print "No more messages"
arg[1] for the client is a program that outputs lines of data for several minutes, which I need to process as it is created. The problem seems to be that every time the client sends another request, a new Handler object is created, so I lose Proc. How can I stream Proc.stdout?
Edit: The device is a Korebot2, so I have limited access to other python libraries due to space.
Using UDP you get a new "connection" each time you send a datagram, which is why you see a new handler instance created for every request and why Proc doesn't survive between them. You're probably using the wrong kind of protocol here... UDP is mostly used for sending distinct datagrams, or when a long-lived connection is not needed. TCP, by contrast, is a "streaming" protocol and is often used for data that has no fixed end, which fits your use case better; see the sketch below.
Also remember that UDP is not a reliable protocol; if used over a network, it is almost guaranteed that you will lose packets.
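If switching is an option, here is a rough Python 3 sketch of the TCP approach (on Python 2, as on the Korebot2, the module is called SocketServer instead of socketserver; 'r.sh' is taken from your question): a StreamRequestHandler can push subprocess output to the client line by line over a single connection:

import socketserver
from subprocess import Popen, PIPE

class StreamHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One subprocess per connection; each output line is written to the
        # client as soon as it is produced.
        proc = Popen(["./r.sh"], stdout=PIPE)
        try:
            for line in proc.stdout:
                self.wfile.write(line)
        finally:
            proc.terminate()

if __name__ == "__main__":
    with socketserver.TCPServer(("192.168.1.1", 6001), StreamHandler) as server:
        server.serve_forever()

The client side then becomes a plain loop over sock.recv() (or a makefile()'d file object) until the server closes the connection.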