Python Socket Receive/Send Multi-threading

I am writing a Python program where the main thread continuously (in a loop) receives data through a TCP socket using the recv function. From a callback function, I am sending data through the same socket using the sendall function. What triggers the callback is irrelevant. I've set my socket to blocking.
My question is: is this safe to do? My understanding is that a callback function is called on a separate thread (not the main thread). Is the Python socket object thread-safe? My research has turned up conflicting answers.

Sockets in Python are not thread-safe.
You're trying to solve a few problems at once:
Sockets are not thread-safe.
recv is blocking and blocks the main thread.
sendall is being used from a different thread.
You may solve these by either using asyncio, or by solving it the way asyncio solves it internally: using select.select together with a socketpair, and a queue for the data to be sent.
import select
import socket
import queue

# Any data put on this queue will be sent over main_socket
send_queue = queue.Queue()
# Any data sent to ssock shows up on rsock
rsock, ssock = socket.socketpair()
main_socket = socket.socket()
# Create the connection with main_socket, fill this up with your code

# Your callback thread
def different_thread(data):
    # Put the data to send inside the queue
    send_queue.put(data)
    # Trigger the main thread by sending data to ssock, which goes to rsock
    ssock.send(b"\x00")

# Run the callback thread, then loop in the main thread:
while True:
    # When either main_socket or rsock has data, select.select will return
    rlist, _, _ = select.select([main_socket, rsock], [], [])
    for ready_socket in rlist:
        if ready_socket is main_socket:
            data = main_socket.recv(1024)
            # Do stuff with data, fill this up with your code
        else:
            # ready_socket is rsock
            rsock.recv(1)  # Consume the one-byte wake-up mark
            # Send the queued data
            main_socket.sendall(send_queue.get())
We use multiple constructs here; you will have to fill in the gaps with your own code. As for the explanation:
We first create send_queue, a queue of data to send. Then we create a pair of connected sockets (socketpair()). We need these later on in order to wake up the main thread, as we don't want recv() to block and prevent writing to the socket.
Then, we connect the main_socket and start the callback thread. Now here's the magic:
In the main thread, we use select.select to know if rsock or main_socket has any data. If either of them has data, the main thread wakes up.
Upon adding data to the queue, we wake up the main thread by signalling ssock, which makes rsock readable and thus returns from select.select.
In order to fully understand this, you'll have to read the documentation for select.select(), socketpair() and queue.Queue().
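As a minimal self-contained illustration of the wake-up trick (my own sketch, not part of the original answer), the following can be run as-is:

import select
import socket
import threading
import time

rsock, ssock = socket.socketpair()

def waker():
    time.sleep(1)
    ssock.send(b"\x00")  # Makes rsock readable, waking the main thread

threading.Thread(target=waker).start()

# Blocks here until rsock becomes readable
rlist, _, _ = select.select([rsock], [], [])
print("woken up by:", rsock.recv(1))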
@tobias.mcnulty asked a good question in the comments: Why should we use a Queue instead of sending all the data through the socket?
You can use the socketpair to send the data as well, which has its benefits, but sending over a queue might be preferable for multiple reasons:
Sending data over a socket is an expensive operation: it requires a syscall and passes the data back and forth through kernel buffers. Using a Queue guarantees we'll have only one syscall, for the single-byte signal, and no more (apart from the queue's internal lock, which is pretty cheap). Sending large data through the socketpair would instead cost multiple syscalls. As a tip, you may as well use a collections.deque, whose append and popleft CPython guarantees to be thread-safe because of the GIL; that way you won't need any syscall besides the socketpair signal (see the sketch after this list).
Architecture-wise, using a queue allows you to have finer-grained control later on. For example, the data can be sent in whichever type you wish and be decoded afterwards. This allows the main loop to be a little smarter and can help you create an easier interface.
You don't have size limits. This can be a bug or a feature. Changing the system's socket buffer size is not exactly encouraged, and the buffer creates a natural throttle on the amount of data you can send. That might be a benefit, but the application may wish to control throttling on its own; relying on the "natural" limit means the sending thread simply hangs once the buffer fills.
Just as with the socketpair recv syscalls, large data will also cost you multiple select calls. Stream sockets do not have message boundaries, so you'd either have to invent artificial ones, set the socket to non-blocking and deal with asynchronous reads, or treat it as a stream and keep passing through select calls, which can be expensive depending on your OS.
Support for multiple threads on the same socketpair. Sending a single signalling byte over a socket from multiple threads is fine, and is exactly how asyncio works. Sending more than one byte per message may cause data from different threads to be interleaved.
All in all, transferring the data back and forth between the kernel and userspace is possible and will work, but I personally do not recommend it.
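For reference, the deque variant mentioned in the tip above might look like this (my sketch, not the original answer's code; connection setup is elided as before):

import collections
import select
import socket

send_queue = collections.deque()  # append/popleft are thread-safe in CPython
rsock, ssock = socket.socketpair()
main_socket = socket.socket()
# Connect main_socket here, as in the example above

def different_thread(data):
    send_queue.append(data)  # No syscall, just the GIL-protected deque
    ssock.send(b"\x00")      # The only syscall: the one-byte wake-up signal

while True:
    rlist, _, _ = select.select([main_socket, rsock], [], [])
    for ready_socket in rlist:
        if ready_socket is main_socket:
            data = main_socket.recv(1024)
            # Handle incoming data here
        else:
            rsock.recv(1)  # Consume the wake-up mark
            main_socket.sendall(send_queue.popleft())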

Related

Twisted many inlineCallbacks at once

A brief summary of my situation:
I'm writing a server (Twisted-powered) which handles WebSocket connections with multiple clients (over 1000). Messages sent from server to client are handled via a Redis pub/sub interface (because messages can also be submitted via REST) in this flow:
REST appends a command for the client and publishes it,
Twisted gets poked because it subscribes to that Redis channel,
the message is added to the client's queue and waits for further processing.
Now, as a client connects and gets registered, I launch an inlineCallbacks loop for each client to sweep through the queue, like this:
@inlineCallbacks
def client_queue_handler(self, uuid):
    queue = self.send_queue[uuid]
    client = self.get_client(uuid)
    while True:
        uniqueID = yield queue.get()
        client_context = self.redis_handler.get_single(uuid)
        msg_context = next(iter([msg
                                 for msg in client_context
                                 if msg['unique'] == uniqueID]),
                           None)
        client.sendMessage(msg_context)
As I said previously, many clients may connect. Is it perfectly fine that each client has its own inlineCallbacks loop running forever? As far as I know, Twisted has a customizable thread-pool limit. What will happen if there are more clients (inlineCallbacks) than threads in the thread pool? Will queue.get() block/sleep that "virtual thread" and pass control to another one? Maybe one "global" loop which sweeps all clients is a better option?
inlineCallbacks doesn't start any OS threads. It's just a different interface to using Deferred. A Deferred is just an API for dealing with callbacks.
queue.get() returns a Deferred. When you yield it, inlineCallbacks internally adds a callback to it and your function remains suspended. When the callback fires, inlineCallbacks resumes your function with the value passed to the callback - which is the "result" of the Deferred you yielded.
All that's happening is some Deferred objects are being created and some callbacks are being added to them. Somewhere inside the implementation of your redis client, some event sources are "firing" the Deferred with a result which starts the process of calling its callbacks.
You can have as many of these:
* as you have system memory to hold
* as the redis client can keep track of at a time
I don't know the details of how your redis client is implemented. If it has to open a socket per queue then you'll probably be limited to the number of file descriptors you can open or the number of sockets your system can support. These numbers will be somewhere in the tens of thousands and when you bump into them, there are tricks you can deploy to raise the limit further.
If it doesn't have to open a socket per queue (for example, if it can multiplex notification for all the queues over a single socket) then it probably has a limit that's much, much higher (maybe imposed by algorithmic complexity of the slowest part of its implementation).
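To make the suspend/resume mechanics concrete, here is a minimal self-contained sketch (my own illustration, not from the question's code; nothing here starts an OS thread):

from twisted.internet import defer, reactor

@defer.inlineCallbacks
def waiter():
    d = defer.Deferred()
    # Fire the Deferred after one second, from the reactor itself
    reactor.callLater(1, d.callback, "result")
    value = yield d  # The generator suspends here until d fires
    print("resumed with:", value)
    reactor.stop()

waiter()
reactor.run()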

Python threading leaving a thread unattended for seconds

I am trying to develop a stable structure where there are basically three threads running in parallel:
One thread reading a serial port for incoming data.
Another thread continuously checking a file for new lines (basically the same as the previous thread).
A last one dedicated to other periodic function calls, such as a keep-alive command over the serial port and deleting old files.
The first two threads run an infinite while loop that always checks for new incoming data; the third is a scheduled function that calls other functions and sleeps until the next call.
When my third thread is doing its work, the other two threads are delayed in handling new data. I have read a bit about the GIL, and maybe this is the reason I am seeing these delays.
Should I use a different kind of structure to prioritize handling all incoming data as soon as possible over the other thread's work?
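For concreteness, the layout described above might look roughly like this (a hedged sketch of the question's structure, not a solution; the thread bodies are placeholders):

import threading
import time

def serial_reader():
    while True:
        # Blocking serial read goes here (e.g. pyserial's ser.read()).
        # Blocking I/O releases the GIL, so waiting here is cheap.
        ...

def file_watcher():
    while True:
        # Check the file for new lines, then back off briefly
        ...
        time.sleep(0.1)

def periodic_tasks():
    while True:
        # Keep-alive, old-file cleanup, etc. CPU-bound work here holds
        # the GIL and is what delays the other two threads.
        ...
        time.sleep(60)

for target in (serial_reader, file_watcher, periodic_tasks):
    threading.Thread(target=target, daemon=True).start()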

Is there a benefit in using a separate thread to handle serial data in Python?

I am building an application in Python and TkInter, which accepts a constant stream of data through the serial port of the PC, at about 10-100Hz (i.e. a packet of data will arrive every 10-100ms). This data is then processed and presented to the user.
A simple implementation would be to have a big loop, where the new values are received through the serial and then the GUI is updated.
Another implementation which I find more robust, would be to have a separate thread to handle the incoming data and then send the data to the main application to update the GUI. I am not sure how to implement this, though.
Would the separate thread give any benefit in this situation?
Yes, a separate thread would definitely be beneficial here, because the call you use to accept serial data will most likely block to wait for data to arrive. If you're using just a single thread, your entire GUI will freeze up while that call blocks to wait for data. Then it will just briefly unfreeze when data is received and you update the UI, before being frozen again until more data arrives.
By using a separate thread to read from the serial port, you can block all you want without ever making your GUI unresponsive. If you search around SO a bit you should be able to find several questions that cover implementing this sort of pattern.
Also note that since your thread will primarily be doing I/O, you should not be noticeably affected by the GIL, so you don't need to worry about that.
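A common way to wire this up (a sketch under assumptions: pyserial for the serial port; the port name, baud rate and packet size are made up) is a daemon reader thread feeding a queue.Queue, with the Tkinter main loop polling the queue via after():

import queue
import threading
import tkinter as tk

import serial  # pyserial, assumed to be installed

data_queue = queue.Queue()

def reader(port):
    ser = serial.Serial(port, 115200)
    while True:
        packet = ser.read(16)  # Blocks this thread, never the GUI
        data_queue.put(packet)

root = tk.Tk()
label = tk.Label(root, text="waiting...")
label.pack()

def poll_queue():
    # Drain everything that arrived since the last poll
    try:
        while True:
            label.config(text=repr(data_queue.get_nowait()))
    except queue.Empty:
        pass
    root.after(50, poll_queue)  # Poll again in 50 ms

threading.Thread(target=reader, args=("/dev/ttyUSB0",), daemon=True).start()
poll_queue()
root.mainloop()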

Listening for events on a network and handling callbacks robustly

I am developing a small Python program for the Raspberry Pi that listens for some events on a Zigbee network.
The way I've written this is rather simplistic: I have a while(True): loop checking for a Unique ID (UID) from the Zigbee. If a UID is received, it's looked up in a dictionary that maps UIDs to callback methods. So, for instance, in the dictionary the key 101 is tied to a method called PrintHello().
So if that key/UID is received, the method PrintHello will be executed. Pretty simple, like so:
if UID in self.expectedCallBacks:
    self.expectedCallBacks[UID]()
I know this approach is probably too simplistic. My main concern is: what if the system is busy handling a method when it receives another message?
On an embedded MCU I could handle this easily with a circular buffer plus interrupts, but I'm a bit lost when it comes to doing it on a RPi. Do I need to implement a new thread for the Zigbee module that basically fills a buffer the callback handler can then retrieve/read from?
I would appreciate any suggestions on how to implement this more robustly.
Threads can definitely help to some degree here. Here's a simple example using a ThreadPool:
from multiprocessing.pool import ThreadPool

pool = ThreadPool(2)  # Create a 2-thread pool

while True:
    uid = zigbee.get_uid()
    if uid in self.expectedCallbacks:
        pool.apply_async(self.expectedCallbacks[uid])
That will kick off the callback in a thread in the thread pool, and should help prevent events from getting backed up before you can send them to a callback handler. The ThreadPool will internally handle queuing up any tasks that can't be run when all the threads in the pool are already doing work.
However, remember that the Raspberry Pi (at least the original single-core models) can't execute more than one CPU-bound operation concurrently (and that's even ignoring the limitations threading in Python has because of the GIL, which is normally worked around by using multiple processes instead of threads). That means no matter how many threads/processes you have, only one can use the CPU at a time. For that reason, you probably don't want more than one thread actually running the callbacks: adding more will just slow things down, because the OS has to keep switching between them.

Python: How to combine a process poll and a non-blocking WebSocket server?

I have an idea: write a WebSocket-based RPC that would process messages according to the scenario below.
Client connects to a WS (web socket) server
Client sends a message to the WS server
WS server puts the message into the incoming queue (can be a multiprocessing.Queue or a RabbitMQ queue)
One of the workers in the process pool picks up the message for processing
Message is being processed (can be blazingly fast or extremely slow - it is irrelevant for the WS server)
After the message is processed, the results of the processing are pushed to the outgoing queue
WS server pops the result from the queue and sends it to the client
NOTE: the key point is that the WS server should be non-blocking and responsible only for:
connection acceptance
getting messages from the client and putting them into the incoming queue
popping messages from the outgoing queue and sending them back to the client
NOTE2: it might be a good idea to store the client identifier somehow and pass it around with the message from the client
NOTE3: it is completely fine that, because of queueing the messages back and forth, the speed of simple message processing (e.g. get a message as input and push it back as a result) will become lower. The target goal is to be able to run processor-expensive operations (rough impractical example: several nested “for” loops) in the pool with the same code style as handling fast messages, i.e. pop a message from the input queue together with some sort of client identifier, process it (might take a while), and push the processing results together with the client ID to the output queue.
Questions:
In TornadoWeb, if I have a queue (multiprocessing or Rabbit), how can I make Tornado's IOLoop trigger some callback whenever there is a new item in that queue? Can you point me to an existing implementation, if there is any?
Is there any ready implementation of such a design? (Not necessarily with Tornado)
Maybe I should use another language (not python) to implement such a design?
Acknowledgments:
Recommendations to use REST and WSGI for whatever goal I aim to achieve are not welcome
Comments like "Here is a link to the code that I found by googling for 2 seconds. It has some imports from tornado and multiprocessing. I am not sure what it does, however I am 99% certain that it is exactly what you need" are not welcome either
Recommendations to use asynchronous libraries instead of normal blocking ones are ... :)
Tornado's IOLoop allows you to handle events from any file object by its file descriptor, so you could try this:
connect with each of your worker processes through multiprocessing.Pipe
call add_handler for each pipe's parent end (using the connection's fileno())
make the workers write some random garbage each time they put something into the output queue, no matter whether that's a multiprocessing.Queue or any MQ
handle the answers from the workers in the event handlers
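Those steps might look roughly like this (my sketch, not a complete server; to keep it short there is a single worker, and the result itself travels over the Pipe rather than a separate garbage byte plus queue):

import multiprocessing
import time

from tornado.ioloop import IOLoop

def worker(conn):
    while True:
        time.sleep(1)  # Stand-in for picking a message off the input queue
        conn.send("processed")  # Wakes up the IOLoop on the parent's end

parent_conn, child_conn = multiprocessing.Pipe()
multiprocessing.Process(target=worker, args=(child_conn,), daemon=True).start()

def on_worker_result(fd, events):
    result = parent_conn.recv()
    # Look up the client by its stored ID and push the result
    # over its WebSocket here.
    print(result)

io_loop = IOLoop.current()
io_loop.add_handler(parent_conn.fileno(), on_worker_result, IOLoop.READ)
io_loop.start()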
