Create required number of sockets during execution - python

The user passes the number of remote machines as an argument (python scriptName 3). During execution the script needs to connect and send data to these machines (in this case, 3 machines) in random order.
My code right now:
def createSocket(ip):
    s = socket.socket()
    s.connect((ip, 55555))

def sendData(data):
    s.send(data)

def closeSocket():
    s.close()

createSocket(ip)
sendData(data)
closeSocket()
So I'm using one socket and re-connecting each time I need to talk to another machine. Because of that, the script transmits data really slowly.
Can I somehow create the required number of sockets during execution and use them? Or is there a better way of keeping a connection open to all the machines?

Don't create a single global socket; instead, keep a list of sockets. Don't close your sockets after each use and then reopen them: just keep them open.
e.g.
def createSocket(ip):  # return the new socket object
    s = socket.socket()
    s.connect((ip, 55555))
    return s

addresses = [ip, ip2, ip3, ...]
sockets = [createSocket(addr) for addr in addresses]

sock = chooseSocket(sockets)  # pick one (somehow)
sock.send(data)  # use the selected socket
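For instance, a fuller sketch along the same lines, keeping a dict of open sockets and picking one at random to match the "random order" in the question. chooseSocket is left open above; random.choice, the closeAll helper, and the example IPs here are my own, hypothetical additions:

import random
import socket

def createSocket(ip):
    s = socket.socket()
    s.connect((ip, 55555))
    return s

# Example IPs only; in practice these would come from the script's arguments
addresses = ['10.0.0.1', '10.0.0.2', '10.0.0.3']
sockets = {addr: createSocket(addr) for addr in addresses}

def sendData(data):
    addr = random.choice(list(sockets))  # pick a machine at random
    sockets[addr].send(data)

def closeAll():
    for s in sockets.values():
        s.close()

sendData(b'hello')
closeAll()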

Bind Bluetooth device programmatically to rfcomm via python

I wrote a script in Python for serial communication between my M5Stack Stick C (Arduino-like) and the Raspberry Pi.
It all works fine: I can send "X", "Y" or "Z" from the Raspberry Pi to the stick and it will reply with the value (G-force) back to the Pi. So far so good.
Code:
Python on the Raspberry Pi:
import serial
import time
import threading

ser = serial.Serial('/dev/rfcomm5')  # init serial port
input_line = []  # init input char array

def process_data(_data):
    # called every time a stream is terminated by \n
    # and the command string is ready to use
    command = convert(_data)
    print(command)

def convert(s):  # convert the char list into a string
    new = ""  # init string to append all chars from the char array
    for x in s:  # traverse the list
        new += str(x)
    return new  # return the string

def processIncomingByte(inByte):  # add incoming chars to input_line
    global input_line
    if inByte == '\n':  # if \n arrives, terminate the char array and call process_data
        process_data(input_line)
        input_line = []  # reset input_line for the next incoming string
    elif inByte == '\r':
        pass
    else:  # append all other incoming chars to input_line
        input_line.append(inByte)

while True:
    while ser.in_waiting > 0:  # while some data is waiting to be read...
        processIncomingByte(ser.read())  # ...process the bytes with the method above
    ser.write(b'X\n')
    time.sleep(0.5)
Before the script works, I have to manually bind the M5Stack Stick-C to /dev/rfcomm5 via Blueman. That works just fine over the GUI or the console.
But now I would like to connect the stick to rfcomm5 via Python (just by knowing the MAC address, which will come from a config file later on...).
I started to investigate a bit, but the more I research, the more confused I am!
I read some stuff about sockets and server/client approaches, about a separate script, and so on...
I tested this code:
from bluetooth import *

target_name = "M5-Stick-C"
target_address = None

nearby_devices = discover_devices()
for address in nearby_devices:
    if target_name == lookup_name(address):
        target_address = address
        break

if target_address is not None:
    print("found target bluetooth device with address ", target_address)
else:
    print("could not find target bluetooth device nearby")
And indeed it found the device (just testing)!
But do I really need a second script/process to connect to from my script?
Is the M5Stack Stick-C the server? (I think so.)
I'm quite confused about all of this. I have coded a lot, but never with sockets or server/client stuff.
Basically, the communication (server/client?) works.
I just need to connect the device I found in the second script, via its MAC address, to rfcomm5 (or whatever rfcomm).
Do I need a Bluetooth socket, like in this example?
https://gist.github.com/kevindoran/5428612
Isn't the rfcomm the socket, or am I wrong?
There are a number of layers used in the communication process, and where you tap into that stack determines what coding you need to do. The other complication is that BlueZ (the Bluetooth stack on Linux) has changed how it works in recent times, leaving a lot of out-of-date information on the internet that makes it easy to get confused.
Two Bluetooth devices need to establish a pairing. This is typically a one-off provisioning step. It can be done with tools like Blueman or on the command line with bluetoothctl. Once you have a pairing established between your RPi and the M5Stack Stick, you shouldn't need to discover nearby devices again. Your script should just be able to connect if you tell it which device to connect to.
The M5Stack stick is advertising as having a Serial Port Profile (SPP). This is a layer on top of rfcomm.
There is a blog post about how this type of connection can be done with the standard Python3 installation: http://blog.kevindoran.co/bluetooth-programming-with-python-3/
My expectation is that you will only have to implement the client.py side on your RPi, as the M5Stack Stick is the server. You will need to know its address and which port to connect on. There might be some trial and error on the port number (1 and 3 seem to be common).
Another library that I find helpful for SPP, is bluedot as it abstracts away some of the boilerplate code: https://bluedot.readthedocs.io/en/latest/btcommapi.html#bluetoothclient
So in summary, my recommendation is to use the standard Python Socket library or Bluedot. This will allow you to specify the address of the device you wish to connect to in your code and the underlying libraries will take care of making the connection and setting up the serial port (as long as you have already paired the two devices).
Example of what the above might look like with Bluedot:
from bluedot.btcomm import BluetoothClient
from signal import pause
from time import sleep

# Callback to handle data
def data_received(data):
    print(data)
    sleep(0.5)
    c.send("X\n")

# Make connection and establish serial connection
c = BluetoothClient("M5-Stick-C", data_received)
# Send initial request
c.send("X\n")
# Cause the process to sleep until data is received
pause()
Example using the Python socket library:
import socket
from time import sleep

# Device specific information
m5stick_addr = 'xx:xx:xx:xx:xx:xx'
port = 5  # This needs to match the M5Stick setting

# Establish connection and set up serial communication
s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
s.connect((m5stick_addr, port))

# Send and receive data
while True:
    s.sendall(b'X\n')
    data = s.recv(1024)
    print(data)
    sleep(0.5)

s.close()

I have trouble understanding the code for socket programming in python

I'm a beginner in the field of sockets and have lately been trying to create a terminal chat app with them. I still have trouble understanding the setblocking and select functions.
This is code I have taken from a website I'm reading. In the code, if there is nothing in data, how does that mean the socket has been disconnected? Please also explain what effect setblocking has on the server or the client. I have read somewhere that setblocking allows the code to move on if the data has not been fully received, but I'm not quite satisfied with that explanation. Please explain in simple words.
import select
import socket
import sys
import Queue

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setblocking(0)
server_address = ('localhost', 10000)
server.bind(server_address)
server.listen(5)

inputs = [server]
outputs = []
message_queues = {}

while inputs:
    readable, writable, exceptional = select.select(inputs, outputs, inputs)
    for s in readable:
        if s is server:
            connection, client_address = s.accept()
            connection.setblocking(0)
            inputs.append(connection)
            message_queues[connection] = Queue.Queue()
        else:
            data = s.recv(1024)
            if data:
                message_queues[s].put(data)
                if s not in outputs:
                    outputs.append(s)
            else:
                if s in outputs:
                    outputs.remove(s)
                inputs.remove(s)
                s.close()
if there is nothing in data, how does it mean that the socket has been disconnected
The POSIX specification of recv() says:
Upon successful completion, recv() shall return the length of the message in bytes. If no messages are available to be received and the peer has performed an orderly shutdown, recv() shall return 0. …
In the Python interface, return value 0 corresponds to a returned buffer of length 0, i.e. nothing in data.
what affect the setblocking in the server or the client does.
setblocking(0) sets the socket to non-blocking, i.e. if, for example, accept() or recv() cannot be completed immediately, the operation fails rather than blocking until it completes. In the given code this can hardly happen, since the operations are not attempted before they are possible (thanks to the use of select()). However, the example is bad in that it includes the outputs list in the select() arguments, resulting in a busy loop, since output is writable most of the time.
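To see both points concretely, here is a minimal Python 3 sketch using socket.socketpair(), which is not part of the example above: a non-blocking recv() with no data fails immediately, and recv() returning an empty buffer signals that the peer has disconnected.

import select
import socket

a, b = socket.socketpair()  # two locally connected sockets, for illustration only
a.setblocking(0)

try:
    a.recv(1024)               # nothing has been sent yet: a blocking socket would wait here,
except BlockingIOError:        # a non-blocking one raises immediately instead
    print("no data available yet")

b.sendall(b"hello")
select.select([a], [], [])     # wait until a is readable, as the server loop above does
print(a.recv(1024))            # b'hello'

b.close()                      # the peer performs an orderly shutdown...
select.select([a], [], [])
print(a.recv(1024))            # ...so recv() returns b'' (length 0): "disconnected"
a.close()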

Python TCP Socket Data Sometimes Missing Parts. Socket Overflow?

Short description:
The client sends the server data via a TCP socket. The data varies in length and consists of strings broken up by the delimiter "~~~*~~~".
For the most part it works fine, for a while. After a few minutes, data winds up all over the place. So I started tracking the problem: data is ending up in the wrong place because the full message has not been passed through.
Everything comes into the server script, is split on a different delimiter, -NewData-*, and then placed into a queue. This is the code:
(Yes, I know the buffer size is huge. No, I don't send data anywhere near that size in one go, but I was toying around with it.)
class service(SocketServer.BaseRequestHandler):
    def handle(self):
        data = 'dummy'
        #print "Client connected with ", self.client_address
        while len(data):
            data = self.request.recv(163840000)
            #print data
            BigSocketParse = []
            BigSocketParse = data.split('*-New*Data-*')
            print "Putting data in queue"
            for eachmatch in BigSocketParse:
                #print eachmatch
                q.put(str(eachmatch))
            #print data
            #self.request.send(data)
        #print "Client exited"
        self.request.close()

class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    pass

t = ThreadedTCPServer(('', 500), service)
t.serve_forever()
I then have a thread running on while not q.empty(): which parses the data using the other delimiter "~~~*~~~".
This works for a while. An example of the kind of data I'm sending:
2016-02-23 18:01:24.140000~~~*~~~Snowboarding~~~*~~~Blue Hills~~~*~~~Powder 42
~~~*~~~Board Rental~~~*~~~15.0~~~*~~~1~~~*~~~http://bigshoes.com
~~~*~~~No Wax~~~*~~~50.00~~~*~~~No Ramps~~~*~~~2016-02-23 19:45:00.000000~~~*~~~-15
But things started to break. So I took some control data and sent it in a loop. It would work for a while, then results started winding up in the wrong place, and this turned up in my queue:
2016-02-23 18:01:24.140000~~~*~~~Snowboarding~~~*~~~Blue Hills~~~*~~~Powder 42
~~~*~~~Board Rental~~~*~~~15.0~~~*~~~1~~~*~~~http://bigshoes.com
~~~*~~~No Wax~~~*~~~50.00~~~*~~~No Ramps~~~*~~~2016-02-23 19:45:00.000000~~~*~
The last "~~-15" is cut off.
So the exact same data works at first and then later doesn't. That suggests some kind of overflow to me.
The client connects like this:
class Connect(object):
    def connect(self):
        host = socket.gethostname()  # Get local machine name
        #host = "127.0.0.1"
        port = 500  # Reserve a port for your service.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        #print('connecting to host')
        sock.connect((host, port))
        return sock

    def send(self, command):
        sock = self.connect()
        #recv_data = ""
        #data = True
        #print('sending: ' + command)
        sock.sendall(command)
        sock.close()
        return
It doesn't wait for a response because I don't want it hanging around waiting for one. It just closes the socket, and (as far as I understand it) I don't need to flush the socket buffer or anything; it should simply clear itself when the connection closes.
Would really appreciate any help on this one. It's driving me a little spare at this point.
Updates:
I'm running this on both my local machine and a pretty beefy server, and I'd be hard pushed to believe it's a hardware issue. The server and client both run locally and use sockets to communicate, so I don't believe latency is the cause.
I've been reading into the issues with TCP communication, an area where I'll quickly be out of my depth, but I'm starting to wonder whether it's not an overflow but just some kind of congestion.
If sendall on the client does not ensure everything is sent, maybe some kind of timer/check on the server side could make sure nothing more is coming.
The basic issue is that your:
data = self.request.recv(163840000)
line is not guaranteed to receive all the data at once (regardless of how big you make the buffer).
In order to function properly, you have to handle the case where you don't get all the data at once (you need to track where you are, and append to it). See the relevant example in the Python docs on using a socket:
Now we come to the major stumbling block of sockets - send and recv operate on the network buffers. They do not necessarily handle all the bytes you hand them (or expect from them), because their major focus is handling the network buffers. In general, they return when the associated network buffers have been filled (send) or emptied (recv). They then tell you how many bytes they handled. It is your responsibility to call them again until your message has been completely dealt with.
As mentioned, you are not receiving the full message even though you have a large buffer size. You need to keep receiving until you get zero bytes. You can write your own generator that takes the request object and yields the parts. The nice side effect is that you can start processing messages while some are still coming in:
def recvblocks(request):
    buf = ''
    while 1:
        newdata = request.recv(10000)
        if not newdata:
            if buf:
                yield buf
            return
        buf += newdata
        parts = buf.split('*-New*Data-*')
        buf = parts.pop()
        for part in parts:
            yield part
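For illustration, the handler from the question might then consume this generator as sketched below; the queue q, the SocketServer setup, and the delimiter are taken from the question, and the exact integration is an assumption.

# Sketch only: assumes the SocketServer imports, the queue q,
# and the recvblocks() generator shown above.
class service(SocketServer.BaseRequestHandler):
    def handle(self):
        for part in recvblocks(self.request):
            # each part is one complete '*-New*Data-*'-delimited chunk
            q.put(str(part))
        self.request.close()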
But you also need a fix on your client. You need to shut the socket down before closing it to really terminate the TCP connection:
sock.sendall(command)
sock.shutdown(socket.SHUT_RDWR)
sock.close()

Multicast works in IDLE but not stand-alone

I have a simple network of two machines connected directly by cable (no switches, routers, or anything else). One of the machines is a radar, which continuously multicasts image data. The other machine is a Windows PC, on which I want to receive that data.
For a first test, I have a simple Python script:
import socket

MULTICAST_GROUP = '239.0.17.8'
PORT = 6108
LOCAL_IF = '192.168.3.42'

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((LOCAL_IF, PORT))
sock.setsockopt(
    socket.IPPROTO_IP,
    socket.IP_ADD_MEMBERSHIP,
    socket.inet_aton(MULTICAST_GROUP) + socket.inet_aton(LOCAL_IF)
)

while True:
    try:
        data, address = sock.recvfrom(64*1024)
    except socket.timeout:
        print 'timeout'
    else:
        print address, len(data)
If I run this from IDLE, it works fine. But if I run it stand-alone (from the command prompt, or by double-clicking it in Explorer), it doesn't receive any data; it only prints 'timeout' once a second.
I've been looking at Wireshark output to try to find the difference, but I've found none. The same data arrives and the same membership request is sent (the membership request is actually sent twice; is that normal?).
The datagrams are quite large (29504 bytes); could that be a problem?
What could be the big difference between running the script inside or outside IDLE? How can I make it always work?
As Michele d'Amico suspected, the problem was a misconfigured firewall. Shame on me for not discovering that myself.

WinSock error #10055

I have a client-server architecture built in Python. Unfortunately, the original design was made such that each request to the server is represented by one TCP connection, and I have to send requests in large groups (20,000+); sometimes socket error #10055 occurs.
I've already found out how to identify it in Python:
>>> errno.errorcode[10055]
'WSAENOBUFS'
>>> errno.WSAENOBUFS
10055
And I built code that is able to handle that error and reconnect (of course with a small delay to give the server time to do whatever it has to do):
class MyConnect:
    # __init__ and send are not important here
    def __enter__(self):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Try several reconnects
        for i in range(0, 100):
            try:
                self.sock.connect((self.address, self.port))
                break
            except socket.error as e:
                if e.errno == errno.WSAENOBUFS:
                    time.sleep(1)
                else:
                    raise
        return self

    def __exit__(self, type, value, traceback):
        self.sock.close()

# Pseudocode
for i in range(0, 20000):
    with MyConnect(ip, port) as c:
        c.send(i)
My questions are:
is there any "good practice" way to do this?
is e.errno == errno.WSAENOBUFS multi-platform? If not, how can I make it multi-platform?
Note: I've only tested it on Windows so far; I need it to work on Linux too.
You are clogging your TCP stack with outgoing data and all the connection establishment and termination packets.
If you have to stick to this design, then force each connection to linger until its data has been successfully sent. That is to say: by default, close() on the socket returns immediately, and further delivery attempts and connection tear-down happen "in the background". You can see how doing that 20,000+ times in a tight loop can easily overwhelm the OS network stack.
The following will force your socket close() to hang on for up to 10 seconds trying to deliver the data:
import struct
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 10))
Note that this is not the same as Python's socket.sendall() - that one just passes all the bytes to the kernel.
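As a minimal sketch, here is a small helper that bakes the linger option in; lingering_socket is a hypothetical name, and in your code the setsockopt call would simply go into MyConnect.__enter__ right after the socket is created.

import socket
import struct

def lingering_socket(linger_seconds=10):
    """Return a TCP socket whose close() blocks for up to linger_seconds
    while the kernel keeps trying to deliver any unsent data."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # SO_LINGER takes an (onoff, seconds) pair packed as two C ints
    s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                 struct.pack('ii', 1, linger_seconds))
    return s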
Hope this helps.
