I am running a simple closing socket listener to ingest a UDP stream containing json directed to port 5001 (in this instance) on localhost:
import socket
import json
from contextlib import closing

def monitor_stream():
    with closing(socket.socket(socket.AF_INET, socket.SOCK_DGRAM)) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(('', 5001))
        while True:
            data, addr = s.recvfrom(4096)
            try:
                data = data.decode('ascii')
                data = json.loads(data)
            except:
                print 'Not valid JSON: ', data
I need to re-broadcast the stream to another local port (arbitrarily 5002 in this example) so that a second program can access the data in as near to real time as possible. Low latency is crucial. Is the following (using socket.sendto() within the while loop) an acceptable method:
while True:
    data, addr = s.recvfrom(4096)
    s.sendto(data, ('localhost', 5002))
If not, how else might I achieve the same result?
I have assumed that there is no way for multiple programs to ingest the original stream simultaneously, since the packets arrive as unicast, so only the first bound socket receives them.
Secondly, how might I cast the same stream to multiple ports (local or not)?
I am unable to change the incoming stream port / settings.
This was a simple misuse of socket.sendto(), which should only be called on a non-connected socket.
I was trying to use the bound listener to send on the ingested stream.
It seems you need two socket objects to re-broadcast to a different address. The second socket is an unbound echo client, since the destination address is specified in .sendto(string, address).
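A minimal sketch of that two-socket relay, with made-up forward targets (5002 and 5003 on localhost); forwarding to several ports, local or not, is just a loop over sendto():

import socket

LISTEN_PORT = 5001
FORWARD_ADDRS = [('127.0.0.1', 5002), ('127.0.0.1', 5003)]  # hypothetical targets

listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(('', LISTEN_PORT))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # unbound; address given per sendto() call

while True:
    data, addr = listener.recvfrom(4096)
    for target in FORWARD_ADDRS:
        sender.sendto(data, target)  # forward the raw datagram unchanged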
I have the following problem: I want a server to send the contents of a text file when requested to do so. I have written a server script which sends the contents to the client, and a client script which receives all the contents with a recvall loop. The recvall loop works fine when I run the server and client from the same device for testing.
But when I run the server from a different device in the same wifi network to receive the text file contents from the server device, the recvall loop doesn't work and I only receive the first 1460 bytes of the text.
server script
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("", 5000))
server.listen(5)

def send_file(client):
    read_string = open("textfile", "rb").read()  # 6 kilobyte large textfile
    client.send(read_string)

while True:
    client, data = server.accept()
    connect_data = client.recv(1024)
    if connect_data == b"send_string":
        send_file(client)
    else:
        pass
client script
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("192.168.1.10", 5000))
connect_message = client.send(b"send_string")

receive_data = ""
while True:  # the recvall loop
    receive_data_part = client.recv(1024).decode()
    receive_data += receive_data_part
    if len(receive_data_part) < 1024:
        break

print(receive_data)
recv(1024) means to receive at least 1 and at most 1024 bytes. If the connection has closed, you receive 0 bytes, and if something goes wrong, you get an exception.
TCP is a stream of bytes. It doesn't try to keep the bytes from any given send together for the recv. When you make the call, if the TCP endpoint has some data, you get that data.
In the client, you assume that anything less than 1024 bytes must be the last bit of data. Not so. You can receive partial buffers at any time. It's a bit more subtle on the server side, but you make the same mistake there by assuming that you'll receive exactly the command b"send_string" in a single call.
You need some sort of protocol that tells receivers when they've gotten the right amount of data for an action. There are many ways to do this, so I can't really give you the answer. But this is why there are protocols out there like zeromq, xmlrpc, http, etc.
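One simple convention, assuming the server is free to signal the end of the file by closing (or shutting down) its side after sending, is for the client to keep reading until recv() returns an empty byte string, and for the server to use sendall() so the whole buffer actually goes out. A sketch of the client side under that assumption:

import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("192.168.1.10", 5000))
client.sendall(b"send_string")

chunks = []
while True:
    part = client.recv(1024)
    if not part:  # b"" means the peer has finished sending
        break
    chunks.append(part)
print(b"".join(chunks).decode())

# On the server, after reading the command:
#     client.sendall(read_string)      # send() may transmit only part of the buffer
#     client.shutdown(socket.SHUT_WR)  # tell the client there is no more data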
I am currently trying to learn networking with Python. I am really new to this topic, so I replicated some examples from somewhere like here.
I want to achieve a continuous data transfer with TCP. This means I want to send data as long as some condition is met. So I slightly modified the example to the code below:
My setup is Win10 with Python 3.8.
My client.py, copied and modified from the example above:
# Echo client program
import socket

HOST = '192.168.102.127'  # The remote host
PORT = 21
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
i = 0  # For counting how often the string was sent
while True:  # for testing this is forever
    s.sendall(b'Hello, world')
    data = s.recv(1024)
    print(i)
    i = i + 1
    print('Received', repr(data))
My server.py:
# Echo server program
import socket

HOST = ''  # Symbolic name meaning all available interfaces
PORT = 21
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    s.listen(1)
    conn, addr = s.accept()
    with conn:
        print('Connected by', addr)
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)
The error I am getting is
ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
on the client side after i=5460 (in multiple tries), and
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
on the server side.
The longer my text message is, the fewer messages get sent before the error.
This leads me to believe I am sending the data to some sort of buffer which is (over-)written to until the error is thrown.
When looking for possible solutions I only found different implementations which did not cover my problem or used other software.
As stated in some answers to similar questions, I disabled my firewall and stopped my antivirus, but with no noticeable difference.
When looking up the error, there is also the possibility of protocol errors, but I do not expect that to be a problem.
When reading the socket/TCP documentation, I found somewhere that TCP is not really designed for this kind of problem, but rather for:
client connects to server
|
V
client sends request to server
|
V
server sends request answer
|
V
server closes connection.
Is this really true?
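For reference, that one-shot pattern, with a made-up host and port, amounts to something like this:

import socket

# Classic request/response: connect, send one request, read one answer, done.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(('192.168.102.127', 50007))  # hypothetical server
    s.sendall(b'request')
    answer = s.recv(1024)
print('Received', repr(answer))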
But I cannot believe that a new socket must be connected for every piece of data that is sent, like in this question. That solution is also really slow.
But if this is the case, what could I use alternatively?
To illustrate the bigger picture:
I have some other code which gives me status data (text) at 500 Hz. In Python, I am processing this data and sending the processed data to an Arduino with an Ethernet shield. This data is "realtime" data, so I need it sent to the Arduino as fast as possible. Here the client is Python and the server is the Arduino with the Ethernet module. The connection and everything else is working fine; only the continuous sending of data is my problem.
I'm trying to access socket objects from their memory address ("socket._socketobject object at 0x7f4c39d78b40") and use them in another function at different times. The clients are connected to port 9999, and I want the server to interact with each one at a later stage while keeping the connection up.
def sock_con(host, port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((host, port))
    sock.listen(5)
    while True:
        client, address = sock.accept()
        print client
        print type(client)
        print "Server (%s, %s) connected" % address
        mongoconn = connectionx('IP_Clients')
        key = {'addresses': '192.168.11.1'}
        data = {'client': str(client), 'addresses': address}
        mongoconn.update(key, data)
        client.settimeout(60)
The next code is in a different module, which can be used at any time:
import os, sys
import socket

currentdir = os.path.dirname(os.path.realpath(__file__))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0, parentdir)
from mgodb import connectionx

mongoconn = connectionx('IP_Clients')
x = mongoconn.find_one({'addresses': '192.168.11.1'})
client = eval(x['client'])

def send_stuff(client, addresses, arg1):
    while True:
        try:
            #data = client.recv(size)
            print data
            client.send(arg1)
            return data
        except:
            #raise error('Client disconnected')
            client.close()
            return False

send_stuff(client, x['addresses'], 'test10')
To use sockets later in the same process, just store them at their arrival and find them later. Something like this:
...
clients = {}
while True:
    client, addr = server.accept()
    clients[addr[0]] = client
So, if you stop the listening loop, or run it in a thread, or run something else in a thread (it doesn't matter), you can get the opened socket object from the clients dictionary by the client's IP address:
client = clients.get("192.168.1.1")
But you should take the port into account as well, because there may be two different clients contacting you from the same IP address.
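A minimal variant keyed on the full (ip, port) pair instead, so two clients behind the same address don't overwrite each other:

clients = {}
while True:
    client, addr = server.accept()
    clients[addr] = client  # addr is the (ip, port) tuple, unique per connection

# later, from elsewhere in the same process:
# client = clients.get(("192.168.1.1", 52344))  # made-up peer port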
If you want to send an opened socket to another process, well, it is doable but not worth the trouble.
You would need to send the socket's file descriptor (socket.fileno()) to another process, and that can be done using the Python module sendfds, which can be found on pypi.python.org.
Then, in the receiving process, you would have to construct the socket wrapper object around it manually, or somehow trick the existing _socket.dll/.so and socket.py modules into doing it for you.
A lot of work, with dubious chances of success. What you should do instead is use the dictionary to store sockets and create an interface (over a socket, pipe or whatever IPC) to forward messages to and from the needed connected sockets.
Finally, you do not have to worry about this mess at all, because Python has the asyncore module.
It already does the socket storing into a dictionary and other useful stuff. The thing is, you need to know what you want to achieve to be able to tune the asyncore client handler adequately (set correct buffer sizes, etc.). But asyncore is elegant and you can easily mix it with an existing GUI event loop. asyncore and asynchat are often used when creating push servers or instant-messaging-like systems.
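For illustration, a minimal asyncore server along those lines that keeps the accepted handlers in a dictionary (note that asyncore has since been deprecated and was removed in Python 3.12 in favour of asyncio):

import asyncore
import socket

handlers = {}  # (ip, port) -> handler, so other code can reach a client later

class ClientHandler(asyncore.dispatcher_with_send):
    def handle_read(self):
        data = self.recv(4096)
        if data:
            self.send(data)  # echo back; replace with real processing

class Server(asyncore.dispatcher):
    def __init__(self, host, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind((host, port))
        self.listen(5)

    def handle_accept(self):
        pair = self.accept()
        if pair is not None:
            sock, addr = pair
            handlers[addr] = ClientHandler(sock)

Server('', 9999)
asyncore.loop()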
I'm trying to send 3 packets one after the other with a Python socket.
Python optimizes them into one or two packets.
I prevented it with a sleep command, but that takes too long.
I thought of turning on the TCP URG flag. Does someone know how to do that? Or do you have another solution?
client side:
import socket
from time import sleep
IP = '127.0.0.1'
PORT = 5081
BUFFER_SIZE = 1024
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((IP, PORT))
s.send('1'*5)
#sleep( 1)
s.send('2'*5)
#sleep( 1)
s.send('3'*5)
s.close()
server side:
import socket

IP = '0.0.0.0'
PORT = 5081
BUFFER_SIZE = 1024
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((IP, PORT))
s.listen(1)
connection, address = s.accept()
while 1:
    # Here I expected to get the 1st value
    data1 = connection.recv(BUFFER_SIZE)
    # end of communication
    if not data1:
        break
    print 'data1', data1
    # Here I expected to get the 2nd value, but both inputs arrived here: 22222 and 33333
    data2 = connection.recv(BUFFER_SIZE)
    print 'data2', data2
    # Here I expected to get the 3rd value
    data3 = connection.recv(BUFFER_SIZE)
    print 'data3', data3
connection.close()
thanks
Avinoam
You should not even try. TCP is a stream protocol and should be used as a stream protocol (meaning a single sequence of bytes). Even if you manage to maintain the separation of packets when you use localhost on your system, it could break when used between different hosts, or simply after an upgrade of the TCP/IP stack. And as soon as your packets pass through a proxy or a software filter, anything can happen.
The correct way to separate different objects on a stream is to use an upper-level protocol that encodes the objects on the sender side and decodes them on the receiver side. An example is one or two bytes (in network order if more than one byte) giving the size, followed by the relevant bytes. Or you could imagine a text protocol with commands, headers and data, or [put whatever you want here].
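A sketch of that size-prefixed variant (two bytes in network order here, so messages of up to 65535 bytes), assuming Python 3 on both ends:

import struct

def send_msg(sock, payload):
    # Two-byte big-endian length header, then the payload itself.
    sock.sendall(struct.pack('!H', len(payload)) + payload)

def recv_exact(sock, n):
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('peer closed mid-message')
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack('!H', recv_exact(sock, 2))
    return recv_exact(sock, length)

# sender:   send_msg(s, b'11111'); send_msg(s, b'22222'); send_msg(s, b'33333')
# receiver: each recv_msg(connection) call returns exactly one of those, in order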
I have a device that continually outputs data, and I would like to send that data to a client on the same network as it is produced, but I'm not finding a good solution. Here is what I'm trying.
Server:
import SocketServer
from subprocess import Popen, PIPE

class Handler(SocketServer.BaseRequestHandler):
    def handle(self):
        if not hasattr(self, 'Proc'):
            self.Proc = Popen('r.sh', stdout=PIPE)
        socket = self.request[1]
        socket.sendto(self.Proc.stdout.readline(), self.client_address)

if __name__ == "__main__":
    HOST, PORT = "192.168.1.1", 6001
    server = SocketServer.UDPServer((HOST, PORT), Handler)
    server.serve_forever()
Client:
import sys
import socket

data = " ".join(sys.argv[1:])
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(data + "\n", ("192.168.1.1", 6001))
try:
    received = sock.recv(1024)
    while True:
        print "Sent: {}".format(data)
        print "Received: {}".format(received)
        sock.sendto('more' + "\n", ("192.168.1.1", 6001))
        received = sock.recv(1024)
except:
    print "No more messages"
argv[1] for the client is a program that outputs lines of data for several minutes, which I need to process as it is created. The problem seems to be that every time the client sends another request, a new Handler object is created, so I lose Proc. How can I stream Proc.stdout?
Edit: The device is a Korebot2, so I have limited access to other python libraries due to space.
Using UDP you get a new "connection" each time you send a datagram, which is why you notice that a new handler instance is created each time you send something. You're probably using the wrong kind of protocol here: UDP is used mostly for sending distinct datagrams, or when a long-lived connection is not needed. TCP, by contrast, is a "streaming" protocol, and is often used for data that has no fixed end.
Also remember that UDP is not a reliable protocol; if used over a network, it is almost guaranteed that you will lose packets.
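A rough TCP sketch of the same idea, sticking with the question's Python 2 SocketServer (socketserver in Python 3): each client connects once and the handler streams the subprocess output line by line until it ends.

import SocketServer
from subprocess import Popen, PIPE

class StreamHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # One subprocess per connection; every line goes straight to this client.
        proc = Popen('r.sh', stdout=PIPE)
        for line in proc.stdout:
            self.wfile.write(line)

if __name__ == "__main__":
    server = SocketServer.TCPServer(("192.168.1.1", 6001), StreamHandler)
    server.serve_forever()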