Reliable UDP implementation using sequence numbers, deadlocking - python

I am trying to implement a reliable UDP messaging scheme between a single client and a server.
In my current code I can send numbers incrementing by 1 to the server as long as the server only uses the receive command. If the server tries to reply with the received data using the send command, it works for 1-3 messages back and forth, and then I end up in a deadlock. I do not understand where the deadlock comes from. Below is my implementation of send and receive.
Both the client and server start with their self.seqnumber set to 0, and both sockets are set to time out after 1 second. The client and server share these methods, which belong to a class that both of them import.
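For reference, the shared setup looks roughly like this (a simplified sketch; the class name here is made up, only the seqnumber and the 1-second timeout are as described):
import socket

class CommInterface:                    # hypothetical name for the shared class
    def __init__(self, address):
        self.seqnumber = 0              # both sides start at sequence number 0
        self.address = address
        self.s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.s.settimeout(1.0)          # both sockets time out after 1 second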
def sendCommand(self, command):
    self.s.sendto((self.seqnumber).to_bytes(8, "little") + command, self.address)
    try:
        data, self.address = self.s.recvfrom(1200)
        if int.from_bytes(data[:8], 'little') == self.seqnumber:
            self.seqnumber += 1
            return 0
        else:
            return self.sendCommand(command)
    except:
        return self.sendCommand(command)
def getCommand(self):
    while(1):
        try:
            data, self.address = self.s.recvfrom(1200)
            if int.from_bytes(data[:8], 'little') == self.seqnumber:
                self.s.sendto(data, self.address)
                self.seqnumber += 1
                break
            elif int.from_bytes(data[:8], 'little') < self.seqnumber:
                self.s.sendto(data, self.address)
            else:
                continue
        except:
            continue
    return data[8:]
The code running on the server (commInf is an instance of the class in which getCommand and sendCommand are defined):
while (1):
    command = self.commInf.getCommand()
    print(command.decode())
    self.commInf.sendCommand(command)
and the code running on the client:
for i in range(100):
    self.commInf.sendCommand(f"{i}".encode())
    command = self.commInf.getCommand()
    print(command.decode())
I expect this to let me reliably send messages and echo them back using sendCommand (with the received data), since the data returned by getCommand does not include its sequence number and is just the raw payload.
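To make the framing concrete: each datagram is an 8-byte little-endian sequence number followed by the raw command bytes, so the receiver recovers the payload with data[8:] (a small illustration, not part of the program):
seqnumber = 7
command = b"42"
datagram = seqnumber.to_bytes(8, "little") + command    # what sendCommand() transmits
assert int.from_bytes(datagram[:8], "little") == seqnumber
assert datagram[8:] == command                           # what getCommand() returns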

Related

Python slow at accepting connections via sockets

So first I'll describe what I am doing.
A game provides a web interface, but only over IPv4, and I would like people out on the internet to be able to reach it too. However, from my ISP I only get a public IPv6 address. Since I couldn't find anything on the internet to translate the requests and responses, I wrote a little app that does: IPv6 requests get forwarded to the web server, and the web server's responses get translated back to IPv6. That part is working fine.
The only troubling bit is that not all requests seem to get detected, or whatever is happening: when I first visit the webpage, sometimes it just hangs and says it is waiting for a style.css file, but when I look at the console output there is no reported connection. And generally there is a whole lot of delay when you try doing anything with the web interface.
Here is my code. (A word of warning: I don't really know exactly what everything in the networking code does. The stuff around the sending in particular I don't quite understand, or whether it's even needed; I just found it online.)
def handle_request(return_address):
    ipv4side = socket.create_connection(("127.0.0.1", 7245))
    request = return_address.recv(2048)
    print(request)
    request = str(request, 'utf-8')
    p = re.compile('\\[[^]]*]:7250')
    m = p.search(request)
    request = request.replace(request[m.start():m.end()], '127.0.0.1:7245')
    request = request.encode('utf-8')
    msg_len = len(request)
    totalsent = 0
    while totalsent < msg_len:
        sent = ipv4side.send(request[totalsent:])
        if sent == 0:
            raise RuntimeError("socket connection broken")
        totalsent += sent
    while True:
        response = ipv4side.recv(2048)
        if len(response) == 0:
            ipv4side.close()
            return
        msg_len = len(response)
        totalsent = 0
        while totalsent < msg_len:
            sent = return_address.send(response[totalsent:])
            if sent == 0:
                raise RuntimeError("socket connection broken")
            totalsent += sent

ipv6side = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
ipv6side.bind((IPV6, 7250))
ipv6side.listen(20)
ipv6side.settimeout(30)
while True:
    try:
        connected_socket = ipv6side.accept()[0]
        print("NEW CONNECTION!" + str(connected_socket))
        Thread(target=handle_request, args=(connected_socket,)).start()
    except socket.timeout:
        print("nothing new...")
I hope anyone can help me with this :D
I fixed the problem! (I think)
So, I followed what user253751 said about HTTP being allowed to reuse the same connection for more than one request. I made adjustments, and for that to work the code now has to detect the end of the response/request (eor) on both sides.
First thing I did was wrap the whole handle_request code in a while statement for the multiple requests on one connection thing.
For the end-of-response I am using regex
eor = re.compile('\\r\\n\\r\\n\\Z')
and then at the appropriate place:
eorm = eor.search(str(response, 'ISO-8859-1', 'ignore'))  # I replaced utf-8 with the
                                                          # ISO-8859-1 encoding used for
                                                          # HTTP, just to be safe
if eorm or len(response) == 0:
    safe_send(return_address, response)
    ipv4side.close()
    break
And for the end-of-request it's basically the same, except it only checks whether the request has zero length.
The code responsible for sending I put into the safe_send function, which takes a connection and a msg.
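A sketch of what that safe_send helper looks like, based on the sending loop from the original code (the exact body may differ slightly):
def safe_send(connection, msg):
    # Keep calling send() until the whole message has gone out;
    # send() may transmit fewer bytes than requested on a stream socket.
    msg_len = len(msg)
    totalsent = 0
    while totalsent < msg_len:
        sent = connection.send(msg[totalsent:])
        if sent == 0:
            raise RuntimeError("socket connection broken")
        totalsent += sent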
Aaand I coded it so that in case the server aborts a connection for some reason and thus throws an error when trying to receive, it resends the request on a new connection.
try:
    response = ipv4side.recv(2048)
except ConnectionAbortedError:
    ipv4side = socket.create_connection(("127.0.0.1", 7245))
    safe_send(ipv4side, request)
    continue
I hope this is a good explanation :]

How to receive data from a raw socket in Python?

I am trying to create a port scanner (using SYN packets) with the socket library (yes, I know Scapy would make this much easier, but I'm mostly doing this as a learning exercise). I have crafted the packet and successfully sent it; however, I'm having trouble receiving and parsing the subsequent response.
So far I've tried s.recv(1024) and s.recv(4096), as well as recvfrom().
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
s.sendto(packet, (dstip, 80))
r = s.recv(1024)
print(r)
However, I am having trouble receiving the response. I can see via Wireshark that the packet is being sent correctly and that the SYN-ACK is sent back to my machine, but I am unable to properly receive and print it. Is there a better way I can use the s.recv() function for this sort of input? Or am I using the wrong function?
Any help is appreciated; I'm new to the socket library. Thanks.
The book Black Hat Python has an example using the socket library to create a scanner, unfortunately not a port scanner: they check whether a host is up, and they use a raw socket to receive data. The code is available here.
They send SYN packets with one socket object in a new thread and sniff the replies using another socket object.
In the example they use socket.IPPROTO_IP or socket.IPPROTO_ICMP instead of socket.IPPROTO_RAW, depending on whether it is Windows or not.
For the sniffer they call setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1), where IPPROTO_IP is a dummy protocol for TCP, IP_HDRINCL makes the IP headers be included in the received packets, and 1 is mapped to the ICMP protocol in the code.
Good luck!
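As a rough illustration of that sniffing approach (this sketch assumes Linux and root privileges; on Windows you additionally need the setsockopt/ioctl steps from the book), a receive loop for the SYN-ACK could look like this:
import socket

# Raw socket that receives incoming TCP segments, IP header included.
sniffer = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
while True:
    packet, addr = sniffer.recvfrom(65535)
    ip_header_len = (packet[0] & 0x0F) * 4            # IHL field of the IP header
    tcp_header = packet[ip_header_len:ip_header_len + 20]
    src_port = int.from_bytes(tcp_header[0:2], "big")
    flags = tcp_header[13]
    if flags & 0x12 == 0x12:                          # SYN and ACK bits both set
        print(f"Port {src_port} answered with SYN-ACK (open)")
        break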
Below is a recent module I wrote, with help from various sources, for socket IO; take what you would like from it.
import socket
import threading
import time

import pygogo as gogo

from icentralsimulator.bridgeio.read_packets import PacketFactory
from icentralsimulator.bridgeio.write_packets import WritePacket
from icentralsimulator.configurations.interfaces import IServerInfoProvider

logger = gogo.Gogo(__name__).logger

send_lock = threading.Lock()


class BridgeConnection:

    def __init__(self, bridge_info_provider: IServerInfoProvider):
        info = bridge_info_provider.get_bridge_server_info()
        self.callback = None
        self.bridge_ip = info.IpAddress
        self.bridge_port = info.Port
        self._connection = None
        self._terminate_wait_for_incoming = False

    @property
    def is_connected(self):
        return self._connection is not None

    def connect(self, callback):
        """
        The purpose of this method is to create (and hold) a connection to the server. At the same time,
        it creates a new thread for the purpose of waiting on incoming packets.
        """
        if self._connection is not None: return
        self._connection = socket.create_connection((self.bridge_ip, self.bridge_port))
        self._connection.settimeout(0.5)
        self.callback = callback
        t = threading.Thread(target=self._wait_for_incoming)
        t.start()
        time.sleep(5)

    def disconnect(self):
        """
        Breaks existing connection to the server if one is currently made and cancels the thread that is waiting
        for incoming packets. If the connection is not currently open, simply returns silently -- thus it is safe
        to call this method repeatedly.
        """
        self._terminate_wait_for_incoming = True
        while self._terminate_wait_for_incoming:
            time.sleep(0.1)
        self._connection.close()
        self._connection = None

    def send_packet(self, packet: WritePacket):
        """
        Sends an arbitrary packet to the server.
        """
        with send_lock:
            logger.debug(f"Sending packet: {packet.payload_plain_text}")
            payload = packet.payload
            self._connection.sendall(payload)

    def _wait_for_incoming(self):
        """
        Continually runs a loop to wait for incoming data on the open socket. If data is received, it is converted
        to a receive packet and forwarded to the consumer as part of a callback.
        """
        self._terminate_wait_for_incoming = False
        buf_len = 4096
        try:
            while not self._terminate_wait_for_incoming:
                data = None
                try:
                    _cnx = self._connection
                    if _cnx is None: break
                    data = _cnx.recv(buf_len)
                    if data is not None and len(data) > 0:
                        while True:
                            new_data = _cnx.recv(buf_len)
                            if new_data is None or len(new_data) == 0:
                                break
                            data = data + new_data
                except socket.timeout:
                    if data is not None and self.callback is not None:
                        packet = PacketFactory.get_packet(data)
                        self.callback(packet)
                        logger.debug(f"Received packet: {data}")
                time.sleep(0.5)
        except OSError:  # Happens when stopping the application
            logger.info("Application aborted")
            return
        finally:
            self._terminate_wait_for_incoming = False
Note that I don't include IServerInfoProvider, or the PacketFactory here. Those are pretty custom to my application. You will need to interpret the packet according to the packet data that arrives in your specific use case.

How do I clear the buffer upon start/exit in ZMQ socket? (to prevent server from connecting with dead clients)

I am using a REQ/REP socket pair for ZMQ communication in Python. There are multiple clients that attempt to connect to one server. Timeouts have been added in the client script to prevent indefinite waits.
The problem is that when the server is not running and a client attempts to establish a connection, its message gets added to the queue buffer, which ideally should not even exist at this point. When the server starts running and a new client connects, the previous client's data is taken in first by the server. This should not happen.
When the server starts, it assumes a client is connected to it, since that client had tried to connect previously and could not exit cleanly (because the server was down).
In the code below, when the client tries the first time, it gets ERR 03: Server down, which is correct, followed by Error disconnecting. When the server is up, I get ERR 02: Server Busy for the first client that connects. This should not occur; the client should be able to seamlessly connect with the server now that it's up and running.
Server Code:
import zmq

def server_fn():
    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind("tcp://192.168.1.14:5555")
    one = 1
    while one == 1:
        message = socket.recv()
        # start process if valid new connection
        if message == 'hello':
            socket.send(message)  # ACK
            # keep session alive until application ends it.
            while one == 1:
                message = socket.recv()
                print("Received request: ", message)
                # exit connection
                if message == 'bye':
                    socket.send(message)
                    break
                # don't allow any client to connect if already busy
                if message == 'hello':
                    socket.send('ERR 00')
                    continue
                # do all data communication here
        else:
            socket.send('ERR 01: Connection Error')
    return

server_fn()
Client Code:
import zmq

class client:

    def clientInit(self):
        hello = 'hello'
        # zmq connection
        self.context = zmq.Context()
        print("Connecting to hello world server...")
        self.socket = self.context.socket(zmq.REQ)
        self.socket.connect("tcp://192.168.1.14:5555")
        # RCVTIMEO to prevent forever block
        self.socket.setsockopt(zmq.RCVTIMEO, 5000)
        # SNDTIMEO is needed since the server script may not be up yet
        self.socket.setsockopt(zmq.SNDTIMEO, 5000)
        try:
            self.socket.send(hello)
        except:
            print "Sending hello failed."
        try:
            echo = self.socket.recv()
            if hello == echo:
                # connection established.
                commStatus = 'SUCCESS'
            elif echo == 'ERR 00':
                # connection busy
                commStatus = "ERR 00. Server busy."
            else:
                # connection failed
                commStatus = "ERR 02"
        except:
            commStatus = "ERR 03. Server down."
        return commStatus

    def clientQuit(self):
        try:
            self.socket.send('bye')
            self.socket.recv()
        except:
            print "Error disconnecting."

cObj = client()
commStatus = cObj.clientInit()
print commStatus
cObj.clientQuit()
PS - I have a feeling the solution may lie in the correct usage of socket.bind and socket.connect.
Answering my own question:
The problem is that the first client sends a message which the server accepts when it starts running, regardless of the status of the client.
To prevent this, two things have to be done. The most important is to use socket.close() to close the client connection. Secondly, the LINGER option can be set to a low value or zero; this discards buffered messages that many milliseconds after the socket is closed.
class client:

    def clientInit(self):
        ...
        self.socket.setsockopt(zmq.LINGER, 100)
        ...

    def clientQuit(self):
        try:
            self.socket.send('bye')
            self.socket.recv()
        except:
            print "Error disconnecting."
        self.socket.close()
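Put together, a minimal self-contained version of the client lifecycle might look like the following (a sketch only; the endpoint and timeout values are just examples):
import zmq

def run_client(endpoint="tcp://192.168.1.14:5555"):
    context = zmq.Context()
    socket = context.socket(zmq.REQ)
    socket.setsockopt(zmq.RCVTIMEO, 5000)   # don't block forever on recv
    socket.setsockopt(zmq.SNDTIMEO, 5000)   # don't block forever on send
    socket.setsockopt(zmq.LINGER, 100)      # drop unsent messages ~100 ms after close
    socket.connect(endpoint)
    try:
        socket.send(b"hello")
        print(socket.recv())
    except zmq.ZMQError:
        print("Server down or busy.")
    finally:
        socket.close()    # without close(), queued messages can outlive this client
        context.term()

run_client()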

python - socket not receiving data

I am trying to implement RFC 1350 (TFTP) on top of UDP. So far everything was smooth: sending a file from server to client worked like a charm. I added a method for receiving data to the server and a method for sending data to the client, but this direction is a no-go.
Key Server code:
def listen(self):
    while True:
        packet, address = self.serverSocket.recvfrom(512)
        mode = str(packet)[2:5]
        self.file = str(str(packet)[6:]).replace("'", "")
        if (mode == "RRQ"):
            self.sendResponse(address)
        else:
            self.receiveData()

def receiveData(self):
    data = open("new1.jpg", "wb")
    while True:
        packet, server = self.serverSocket.recvfrom(512)
        if packet.__len__() == 512:
            data.write(packet)
        else:
            data.write(packet)
            break
Key Client code:
def sendWRQ(self):
    request = 'WRQ-' + self.file
    self.clientSocket.sendto(str(request).encode(), (self.serverAddress, self.serverPort))
    self.sendData()

def sendData(self):
    with open(self.file, "rb") as data:
        while True:
            packet = data.read(512)
            if packet != b"":
                self.clientSocket.sendto(packet, (self.serverAddress, self.serverPort))
            else:
                self.clientSocket.sendto(packet, (self.serverAddress, self.serverPort))
                break
            time.sleep(0.0005)
The client sends a WRQ packet with the name of a file that will be the key of the transfer.
The server recognizes the transfer type (RRQ or WRQ, WRQ in this instance) and starts listening for the transfer via receiveData().
The client terminates after sendWRQ() completes. Now a problem occurs on either the server or the client side, in sendData or receiveData: I get a file with 0 kB.
All of the code:
Server Class: http://www.copypastecode.com/181330/
Client Class: http://www.copypastecode.com/181326/
The way the client informs the server that the file is complete is not correct.
In your code, when the file is complete, the client calls sendto to send an empty string, but this will actually do nothing. On the server side, you use the condition packet.__len__() == 512 to judge whether the file is complete; however, during the transfer, if the server's CPU is running faster than the transfer speed, you will frequently get a zero packet length, and that does not indicate that the transfer is complete: maybe the next packet is just on the way.
My suggestion is to use a special command to indicate the end of the transfer, and have the server break out of the loop only when that command is received.
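For example, the sender could append an explicit end-of-transfer datagram and the receiver could stop only when it sees it (a sketch of the idea, not the questioner's exact classes; the marker value is arbitrary):
END_OF_TRANSFER = b"__EOT__"   # arbitrary marker; must never occur as real payload

# Sender side: after the last data block, send the marker.
def send_file(sock, path, addr):
    with open(path, "rb") as f:
        while True:
            block = f.read(512)
            if not block:
                break
            sock.sendto(block, addr)
    sock.sendto(END_OF_TRANSFER, addr)

# Receiver side: keep writing until the marker arrives.
def receive_file(sock, path):
    with open(path, "wb") as f:
        while True:
            packet, _ = sock.recvfrom(512)
            if packet == END_OF_TRANSFER:
                break
            f.write(packet)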

How to replace URLs from the packets that are transferred through a proxy?

I am writing code for my own proxy server in Python. The code that transfers packets between the client and the target server is as follows:
def _read_write(self):
    if self.target:
        pass
    else:
        domain, port = self.get_target_host()
        self._connect_target(domain, port)
    self.target.send(self.headers)
    maxtimeout = self.timeout / 3
    inputs = [self.client, self.target]
    count = 0
    try:
        while 1:
            count += 1
            (recv, send, err) = select.select(inputs, [], inputs)
            if err:
                break
            if recv:
                for in_ in recv:
                    data = in_.recv(BUFFLEN)
                    if in_ is self.client:
                        out = self.target
                    else:
                        out = self.client
                    if data:
                        out.send(data)
                        count = 0
            if count == maxtimeout:
                break
    except select.error:
        print >> sys.stderr, "Error : Internal queue error\n", "Reason : Unknown"
Now, since each packet is individually transferred from the client to the server, I want to intercept the data being transferred and replace the URLs of resources such as images, CSS, etc. with new URLs pointing to the server closest to the client. Is this the right way to do it? I think I will run into a problem if a URL is split across packets.
Right - it would be difficult to do this on the packet level. What might be easier would be to build the entire data in the proxy first, then do your processing, then send the data to the client.
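A sketch of that idea: accumulate the whole response in the proxy before rewriting, so a URL can never be split across recv() boundaries (the hostnames and the rewrite rule are only examples, and this assumes the server closes the connection after each response):
import re

def relay_response(server_sock, client_sock):
    # Read the complete response before touching it.
    chunks = []
    while True:
        data = server_sock.recv(4096)
        if not data:                       # server closed the connection
            break
        chunks.append(data)
    body = b"".join(chunks)

    # Example rewrite: point static resources at a closer mirror.
    body = re.sub(rb"http://origin\.example\.com/static/",
                  rb"http://mirror.example.com/static/", body)

    client_sock.sendall(body)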
