Python socket: my connection is breaking my data stream

I am trying to connect to a socket server and view/download a stream of polled temperature readings, i.e.
72.81
72.83
72.79
72.85
But what I get are float values split in half.
72
.35
72
.36
72
.36
72
.37
72
.38
72
.38
72
.38
72
.39
How do I output unbroken float values from a socket connection?
Client code:
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("192.168.1.249", 8080))
    s.sendall(b"GET / HTTP/1.1\r\nHost: webcode.me\r\nAccept: text/html\r\nConnection: close\r\n\r\n")
    while True:
        data = s.recv(1024)
        if not data:
            break
        print(data.decode())
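The readings arrive split because TCP is a byte stream: recv() returns whatever bytes are currently available, not one complete reading per call. A minimal sketch of a fix, assuming the server terminates each reading with a newline (which the expected output suggests), is to accumulate bytes in a buffer and only print complete lines; the GET request from the original client is left out here:

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("192.168.1.249", 8080))
    buffer = b""
    while True:
        chunk = s.recv(1024)
        if not chunk:
            break
        buffer += chunk
        # print only complete lines; keep any trailing partial reading in the buffer
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            text = line.decode().strip()
            if text:
                print(float(text))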

Related

How to receive and handle multiple TCP stream sockets?

I would like to send the location of a moving point to a server via TCP with the socket module. The point's location is updated at each iteration of a for loop and is sent as a tuple (x, y) that has been serialized with pickle's dumps method.
Problem:
On the server side, it seems that I only receive the location from the first iteration of that loop, as if all the following updated positions had been skipped or lost along the way.
I can't say for sure what the reason behind this behavior is, but my bet is that I am not setting things up correctly on the server side. I suspect the data is sent in its entirety but not processed adequately on reception, due to mistakes I am probably making with the socket module (I am completely new to the world of network programming).
Code:
--client side--
# Python 3.7
import socket
import pickle
import math

HOST = "127.0.0.1"
PORT = 12000

den = 20
rad = 100
theta = math.tau / den

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.connect((HOST, PORT))  # connect to server
    for step in range(1000):
        i = step % den
        x = math.cos(i * theta) * rad
        y = math.sin(i * theta) * rad
        data = pickle.dumps((x, y), protocol=0)
        sock.sendall(data)
--server side--
# Jython 2.7
import pickle
import socket

HOST = "127.0.0.1"
PORT = 12000

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)

while True:
    connection, address = s.accept()
    if connection:
        data = connection.recv(4096)
        print(pickle.loads(data))  # <-- only prints once (first location)
You need to put connection, address = s.accept() outside the while loop, otherwise your server will wait for a new connection on every iteration.
You also have an issue with the way you are receiving data: connection.recv(4096) returns anywhere between 0 and 4096 bytes, and not necessarily one complete "data" message per call. To handle this you could send a header before each payload indicating how many bytes should be received.
By adding a header, you make sure the messages you are sending are reassembled properly on the receiving side.
The header in this example is a four-byte int indicating the size of the data.
Server
import pickle
import socket
import struct

HEADER_SIZE = 4
HOST = "127.0.0.1"
PORT = 12000

def receive(nb_bytes, conn):
    # Ensure that exactly the desired amount of bytes is received
    received = bytearray()
    while len(received) < nb_bytes:
        received += conn.recv(nb_bytes - len(received))
    return received

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)

connection, address = s.accept()
while True:
    # receive header
    header = receive(HEADER_SIZE, connection)
    data_size = struct.unpack(">i", header)[0]
    # receive data
    data = receive(data_size, connection)
    print(pickle.loads(data))
Client
import socket
import pickle
import math

HEADER_SIZE = 4
HOST = "127.0.0.1"
PORT = 12000

den = 20
rad = 100
theta = math.tau / den

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.connect((HOST, PORT))  # connect to server
    for step in range(1000):
        i = step % den
        x = math.cos(i * theta) * rad
        y = math.sin(i * theta) * rad
        data = pickle.dumps((x, y), protocol=0)
        # compute header by taking the byte representation of the int
        header = len(data).to_bytes(HEADER_SIZE, byteorder='big')
        sock.sendall(header + data)
Hope it helps
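For what it's worth, the client-side header could equally be built with struct to mirror the server's unpack call; for non-negative sizes below 2**31 the two encodings are identical (a small sketch, not part of the original answer):

import struct

data_size = 123
header_a = struct.pack(">i", data_size)              # what the server's struct.unpack(">i", ...) expects
header_b = data_size.to_bytes(4, byteorder="big")    # what the client above actually sends
assert header_a == header_b                          # same 4-byte big-endian representation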

Can't connect to modbus RTU over TCP. What is wrong?

Proof that the device is working: ./modpoll -m enc -p4660 -t4:float -r60 -a3 192.168.1.1:
Protocol configuration: Encapsulated RTU over TCP
Slave configuration...: address = 3, start reference = 60, count = 1
Communication.........: 192.168.1.1, port 4660, t/o 1.00 s, poll rate 1000 ms
Data type.............: 32-bit float, output (holding) register table
TRACELOG: Set poll delay 0
TRACELOG: Set port 4660
TRACELOG: Open connection to 192.168.1.1
TRACELOG: Configuration: 1000, 1000, 0
-- Polling slave... (Ctrl-C to stop)
TRACELOG: Read multiple floats 3 60
TRACELOG: Send(6): 03 03 00 3B 00 02
TRACELOG: Recv(9): 03 03 04 6E 08 42 F7 35 FF
[60]: 123.714905
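(Side note, not from the original post: the Recv trace can be decoded by hand. The four data bytes 6E 08 42 F7 are two registers, 0x6E08 and 0x42F7, and the device appears to send the low word of the float first; swapping the words and decoding as a big-endian 32-bit float reproduces modpoll's value:)

import struct

# registers from the trace, reordered high word first (assumed word-swapped encoding)
value = struct.unpack(">f", struct.pack(">HH", 0x42F7, 0x6E08))[0]
print(value)  # ~123.714905, matching modpoll's "[60]: 123.714905"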
And here is how I try to make the connection with the pymodbus library:
from pymodbus.client.sync import ModbusTcpClient
from pymodbus.transaction import ModbusRtuFramer

client = ModbusTcpClient(host='192.168.1.1', port=4660, framer=ModbusRtuFramer, timeout=5)
client.connect()  # returns True
client.read_holding_registers(60, count=3, unit=0x03)
And I get this result:
pymodbus.exceptions.ConnectionException: Modbus Error: [Connection] 192.168.1.1:4660
Modbus Error: [Connection] 192.168.1.1:4660
What am I doing wrong?

Why is it timing out like this?

I'm trying to build a TCP proxy script that sends and receives data. I managed to get it to listen, but it doesn't seem to be connecting properly. My code looks right to me, and after checking the Python docs (I'm trying to run it in Python 2.7 and 3.6) I get this timeout message:
Output:
anon@kali:~/Desktop/python scripts$ sudo python TCP\ proxy.py 127.0.0.1 21 ftp.target.ca 21 True
[*] Listening on 127.0.0.1:21d
[==>] Received incoming connection from 127.0.0.1:44806d
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "TCP proxy.py", line 60, in proxy_handler
remote_socket.connect((remote_host,remote_port))
File "/usr/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 110] Connection timed out
I looked into the file "/usr/lib/python2.7/socket.py" but couldn't really understand what I was looking for, as it seemed right when I compared it to the Python docs and my script.
My code:
# import the modules
import sys
import socket
import threading

# define the server
def server_loop(local_host, local_port, remote_host, remote_port, receive_first):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        server.bind((local_host, local_port))
        server.listen(5)
        print ("[*] Listening on %s:%sd" % (local_host, local_port))
    except:
        print("[!!] Failed to listen on %s:%sd" % (local_host, local_port))
        print ("[!!] Check for other listening sockets or correct permissions")
        sys.exit(0)
    while True:
        client_socket, addr = server.accept()
        # print out the local connection information
        print ("[==>] Received incoming connection from %s:%sd" % (addr[0], addr[1]))
        # start a thread to talk to the remote host
        proxy_thread = threading.Thread(target=proxy_handler, args=(client_socket, remote_host, remote_port, receive_first))
        proxy_thread.start()
    else:
        print ("something went wrong")

def main():
    # no fancy command-line parsing here
    if len(sys.argv[1:]) != 5:
        print ("Usage: ./TCP proxy.py [localhost] [localport] [remotehost] [remoteport] [receive_first]")
        print("Example: ./TCP proxy.py 127.0.0.1 9000 10.12.132.1 9000 True")
    # set up local listening parameters
    local_host = sys.argv[1]
    local_port = int(sys.argv[2])
    # set up remote target
    remote_host = sys.argv[3]
    remote_port = int(sys.argv[4])
    # this tells the proxy to connect and receive data before sending to the remote host
    receive_first = sys.argv[5]
    if "True" in receive_first:
        receive_first = True
    else:
        receive_first = False
    # now spin up our listening socket
    server_loop(local_host, local_port, remote_host, remote_port, receive_first)

def proxy_handler(client_socket, remote_host, remote_port, receive_first):
    # connect to the remote host
    remote_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    remote_socket.connect((remote_host, remote_port))
    # receive data from the remote end if necessary
    if receive_first:
        remote_buffer = receive_from(remote_socket)
        hexdump(remote_buffer)
        # send it to the response handler
        remote_buffer = response_handler(remote_buffer)
        # if data is able to be sent to the local client, send it
        if len(remote_buffer):
            print ("[<==] Sending %d bytes to localhost." % len(remote_buffer))
            client_socket.send(remote_buffer)
    # now loop and read from local, send to remote, send to local, rinse/wash/repeat
    while True:
        # read from local host
        local_buffer = receive_from(client_socket)
        if len(local_buffer):
            print ("[==>] Received %d bytes from localhost." % len(local_buffer))
            # send it to the request handler
            local_buffer = request_handler(local_buffer)
            # send data to remote host
            remote_socket.send(local_buffer)
            print ("[==>] Sent to remote.")
        # receive back the response
        remote_buffer = receive_from(remote_socket)
        if len(remote_buffer):
            print ("[<==] Received %d bytes from remote." % len(remote_buffer))
            hexdump(remote_buffer)
            # send the response to the handler
            remote_buffer = response_handler(remote_buffer)
            # send the response to the local socket
            client_socket.send(remote_buffer)
            print ("[<==] Sent to localhost.")
        # if no data left on either side, close the connections
        if not len(local_buffer) or not len(remote_buffer):
            client_socket.close()
            remote_socket.close()
            print ("[*] No more data, closing connections.")
            break

# this is a pretty hex dumping function taken from the comments of http://code.activestate.com/recipes/142812-hex-dumper/
def hexdump(src, length=16):
    result = []
    digits = 4 if isinstance(src, unicode) else 2
    for i in xrange(0, len(src), length):
        s = src[i:i+length]
        hexa = b' '.join(["%0*X" % (digits, ord(x)) for x in s])
        text = b' '.join([x if 0x20 <= ord(x) < 0x7F else b'.' for x in s])
        result.append( b"%04X %-*s %s" % (i, length*(digits + 1), hexa, text) )
    print (b'/n'.join(result))

def receive_from(connection):
    buffer = ""
    # set a 2 second timeout; depending on your target this may need to be adjusted
    connection.settimeout(2)
    try:
        # keep reading from the socket until there is no more data or it times out
        while True:
            data = connection.recv(4096)
            if not data:
                break
            buffer += data
    except:
        pass
    return buffer

# modify any requests destined for the remote host
def request_handler(buffer):
    # perform packet modifications
    return buffer

# modify any responses destined for the local host
def response_handler(buffer):
    # perform packet modifications
    return buffer

main()
I have tried different FTP servers/sites, etc. but get the same result. Where am I going wrong with my code? Any input or direction would be greatly appreciated.
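(Not from the original thread, just a sketch: one way to make this kind of failure easier to diagnose is to give the remote connect an explicit timeout instead of waiting for the OS default, e.g. in proxy_handler:)

import socket

def connect_remote(remote_host, remote_port, timeout=10):
    # hypothetical helper: fail fast with a clear message instead of blocking
    # until the kernel gives up with errno 110 (Connection timed out)
    try:
        return socket.create_connection((remote_host, remote_port), timeout=timeout)
    except socket.timeout:
        print("[!!] Connect to %s:%d timed out after %d seconds" % (remote_host, remote_port, timeout))
        raise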
Okay, so it turns out my script is good, the FTP servers I was trying just weren't, haha.
This is the final output:
anon@kali:~/Desktop/python scripts$ sudo python TCP\ proxy.py 127.0.0.1 21 ftp.uconn.edu 21 True
[*] Listening on 127.0.0.1:21d
[==>] Received incoming connection from 127.0.0.1:51532d
0000 32 32 30 20 50 72 6F 46 54 50 44 20 31 2E 32 2E 2 2 0 P r o F T P D 1 . 2 ./n0010 31 30 20 53 65 72 76 65 72 20 28 66 74 70 2E 75 1 0 S e r v e r ( f t p . u/n0020 63 6F 6E 6E 2E 65 64 75 29 20 5B 31 33 37 2E 39 c o n n . e d u ) [ 1 3 7 . 9/n0030 39 2E 32 36 2E 35 32 5D 0D 0A 9 . 2 6 . 5 2 ] . .
[<==] Sending 58 bytes to localhost.
[==>] Received 353 bytes from localhost.
[==>] Sent to remote.
[<==] Received 337 bytes from remote.

Python socket performance drops considerably when used in parallel

I use pretty much standard code to transfer a file from one node to another using Python/socket.
Number of threads / performance (compared to sftp/scp):
1: 2x
2: 1.2x
3: 1x
4: 1x
5: 1x
It takes 25 sec to transfer a 2.7 GB file over a 10 Gb network using the Python/socket script.
If I use sftp/scp it takes 50 sec to transfer the same file.
2 threads complete the transfer in 47 sec using the Python/socket script.
If I use sftp/scp it takes 55 sec to transfer the same 2 files in parallel.
3 threads transfer in 112 sec using the Python/socket script.
sftp/scp does the same job in 112 sec (3 files in parallel).
Client code:
# client.py
import socket
import sys
import datetime as dt

e = sys.exit

n1 = dt.datetime.now()

#s = socket.socket()
s = socket.socket(socket.AF_INET, type=socket.SOCK_STREAM, proto=0)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
#s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)

host = 'testserver'
port = %s  # Reserve a port for your service.
s.connect((host, port))

f = open('/tmp/testfile.gz', 'rb')
print 'Sending..',
l = f.read(1024*1024)
while (l):
    print '.',
    s.send(l)
    l = f.read(1024*1024)
f.close()
print "Done Sending"
s.shutdown(socket.SHUT_WR)
s.close()
n2 = dt.datetime.now()
diff = (n2 - n1)
print diff.seconds
e(0)
Server code:
# server.py
import socket  # Import socket module
import sys, time

e = sys.exit

#s = socket.socket()  # Create a socket object
s = socket.socket(socket.AF_INET, type=socket.SOCK_STREAM, proto=0)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

host = socket.gethostname()  # Get local machine name
port = %s  # Reserve a port for your service.
print port
s.bind((host, port))  # Bind to the port

f = open('/tmp/testfile_%d.png', 'wb')
s.listen(5)  # Now wait for client connection.
i = 0
while True:
    c, addr = s.accept()  # Establish connection with client.
    c.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    #c.setNoDelay(True)
    #c.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)
    print 'Got connection from', addr
    print "Receiving..."
    if netcat:
        netcat.write('netcat from file writer')
    i = 0
    l = c.recv(1024*1024)
    while (l):
        f.write(l)
        l = c.recv(1024*1024)
        i += 1
        if 0 and i > 20:
            f.close()
            e(0)
    f.close()
    #c.send('Thank you for connecting')
    c.close()
    s.shutdown(socket.SHUT_WR)
    s.close()
    print "Done Receiving"
    e(0)
e(0)
When I run 2 jobs in parallel (different ports/shells), the performance drops by 50%.
By comparison, when I use sftp in parallel:
time sftp user@server://tmp/testfile.gz.gz test0.gz&
time sftp user@server://tmp/testfile.gz.gz test1.gz&
time sftp user@server://tmp/testfile.gz.gz test2.gz&
the elapsed time does not change for 2 or 3 parallel jobs.

TCP vs. UDP socket latency benchmark

I have implemented a small benchmark for socket communication via TCP and UDP in Python. Surprisingly, TCP is almost exactly twice as fast as UDP.
To avoid routing effects, server and client are running on the same Unix machine, but on different threads.
Maybe the code is useful. Here is the server code:
import socket
import sys

host = 'localhost'
port = 8888
buffersize = 8
server_address = (host, port)

def start_UDP_server():
    socket_UDP = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    socket_UDP.bind(server_address)
    print("UDP server is running...")
    while True:
        data, from_address = socket_UDP.recvfrom(buffersize)
        if not data:
            break
        socket_UDP.sendto(data, from_address)
    socket_UDP.close()

def start_TCP_server():
    socket_TCP = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    socket_TCP.bind(server_address)
    socket_TCP.listen(1)
    print("TCP server is running...")
    while True:
        client, client_address = socket_TCP.accept()
        while True:
            data = client.recv(buffersize)
            if not data:
                break
            client.sendall(data)
        client.close()
So you can run either start_TCP_server() or start_UDP_server().
On the client side the code is:
import socket
import sys
import time

host = 'localhost'
port = 8888
buffersize = 8
server_address = (host, port)
client_address = (host, port+1)
N = 1000000

def benchmark_UDP():
    socket_UDP = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    socket_UDP.bind(client_address)
    print("Benchmark UDP...")
    duration = 0.0
    for i in range(0, N):
        b = bytes("a"*buffersize, "utf-8")
        start = time.time()
        socket_UDP.sendto(b, server_address)
        data, from_address = socket_UDP.recvfrom(buffersize)
        duration += time.time() - start
        if data != b:
            print("Error: Sent and received data are not the same")
    print(duration*pow(10, 6)/N, "µs for UDP")

def benchmark_TCP():
    socket_TCP = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    socket_TCP.connect(server_address)
    print("Benchmark TCP...")
    duration = 0.0
    for i in range(0, N):
        b = bytes("a"*buffersize, "utf-8")
        start = time.time()
        socket_TCP.sendall(b)
        data = socket_TCP.recv(buffersize)
        duration += time.time() - start
        if data != b:
            print("Error: Sent and received data are not the same")
    print(duration*pow(10, 6)/N, "µs for TCP")
    socket_TCP.close()
As with the server, you can start the benchmark with benchmark_TCP() or benchmark_UDP().
The results are about 25 µs for TCP and about 54 µs for UDP on Unix, and even worse on Windows (about 30 µs for TCP and more than 200 µs for UDP). Why? I would expect at least a small advantage for UDP.
Your TCP socket is connected but your UDP socket is not. This means extra processing for every send/receive on the UDP socket. Call connect on each side for the UDP socket, just like you call connect/accept on the TCP socket.
Programs like iperf do this to measure accurately.
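A minimal sketch of that change, reusing the constants from the benchmark above (not from the original answer): the client connects its UDP socket and switches to send()/recv(); the server learns the peer from the first datagram and connects to it as well. The two halves below belong in separate processes or threads, as in the benchmark.

import socket

host = 'localhost'
port = 8888
buffersize = 8
server_address = (host, port)
client_address = (host, port + 1)

# client side: fix the peer once, then use send()/recv()
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(client_address)
sock.connect(server_address)
sock.send(b"a" * buffersize)
data = sock.recv(buffersize)

# server side: learn the peer from the first datagram, then connect to it
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(server_address)
first, peer = srv.recvfrom(buffersize)
srv.connect(peer)                   # subsequent send()/recv() only talk to this peer
srv.send(first)
while True:
    srv.send(srv.recv(buffersize))  # echo loop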
