I am trying to develop a stable structure where three threads run in parallel:
One thread reads a serial port for incoming data.
Another thread continuously checks a file for new lines (basically the same as the previous thread).
The last one is dedicated to periodic function calls, like a keep-alive command through the serial port and deleting old files.
The first two threads each run an infinite while loop that constantly checks for new incoming data; the third is a scheduled function that calls other functions and sleeps until the next call.
When my third thread is doing its work, the other two threads are delayed in handling new data. I have read a bit about the GIL, and maybe this is the reason I am seeing these delays.
Should I use a different type of structure that prioritizes handling all incoming data as soon as possible over the third thread's work?
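For reference, a minimal sketch of the structure described above, where every read_*/handle_*/send_*/delete_* function is a hypothetical placeholder:

import threading
import time

def serial_reader():
    while True:
        line = read_serial_line()    # hypothetical blocking read from the serial port
        handle_serial_data(line)     # hypothetical handler

def file_watcher():
    while True:
        line = read_new_file_line()  # hypothetical check for a new line in the file
        if line:
            handle_file_data(line)   # hypothetical handler
        else:
            time.sleep(0.05)         # short sleep so the loop doesn't spin the CPU

def periodic_tasks():
    while True:
        send_keep_alive()            # hypothetical keep-alive over the serial port
        delete_old_files()           # hypothetical cleanup
        time.sleep(60)               # sleep until the next scheduled call

for target in (serial_reader, file_watcher, periodic_tasks):
    threading.Thread(target=target, daemon=True).start()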
I use a list of processes with a queue for each one. Another thread fills these queues one after the other, and the processes fetch the data from them. The problem is that after a while the queues raise an Empty exception from within the processes, while the feeder thread gets a Full exception. When I check the queue sizes, they are consistent with the exceptions.
To make it worse, this behavior can only be reproduced as part of a large code base; I can't create a small program that reproduces it.
Has anyone had similar issues with multiprocessing queues being inconsistent across processes?
Edit
To add more detail to the description of the pipeline: I have multiple worker objects, and each worker has an input queue (multiprocessing.Queue), a worker queue (multiprocessing.Queue), an output queue (queue.Queue), a worker process (multiprocessing.Process) and a manager thread (threading.Thread).
Feeding all these workers, I have a single feeder thread (threading.Thread) that adds sample identifiers to the input queues of all workers, one by one. The sample identifiers are very small (file paths), so the feeder thread can keep up with the processes.
The worker gets the sample identifiers from the input queue, reads the samples, processes them and puts them into the worker queue one by one. The manager thread reads the data from the worker queue and puts it into the output queue, because multiprocessing.Queue is slower to read from.
All .get() and .put() calls have timeouts, and I keep track of the time it takes to get new data out of this pipeline. I also have mechanisms for closing the pipeline and reopening it, by joining all processes and threads (even for the queues) and then recreating all of them from scratch. When everything is working, the main process goes over the workers and reads the data off their output queues one by one; most of the time reading new data takes only a few ms.
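As a rough sketch of one worker's structure (names are illustrative, load_and_process is a hypothetical stand-in for the read-and-process step, and a fork-based start method is assumed so the queues are inherited by the child process):

import multiprocessing
import queue
import threading

class Worker:
    def __init__(self):
        self.input_queue = multiprocessing.Queue()   # sample identifiers in
        self.worker_queue = multiprocessing.Queue()  # processed samples out of the process
        self.output_queue = queue.Queue()            # fast in-process queue for the main thread
        self.process = multiprocessing.Process(target=self._work)
        self.manager = threading.Thread(target=self._manage)

    def _work(self):
        # Runs in the child process
        while True:
            sample_id = self.input_queue.get(timeout=10)  # a file path; timeouts as described
            sample = load_and_process(sample_id)          # hypothetical read + process step
            self.worker_queue.put(sample, timeout=10)

    def _manage(self):
        # Runs as a thread in the main process: drains the slower
        # multiprocessing queue into the faster thread queue
        while True:
            self.output_queue.put(self.worker_queue.get(timeout=10))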
This whole pipeline exists twice in my code (used for machine learning with TensorFlow). One instance is used for training and is created near the beginning of the program; the other is used for testing. The second instance is created after a while of training; it goes over all of my dataset and then resets. When the second instance is run for the second time, it gets stuck after 1000 samples or so. When it is stuck and I break in debug mode in the main process, I see that the input queue is full and the worker and output queues are empty. When I then break inside one of the worker processes, I see that its input queue is empty. It seems like the worker process somehow sees a different input queue than it should. Note that this is not some race issue, because this result is stable.
Edit 2
I zeroed in on the point where the program hangs: it seems to be the json.loads() call on data read from a file. This means the problem is different from what I originally described. The processes hang; they don't actually see an empty queue.
Code for opening the file:

import json

with open(json_path, 'r') as f:
    data = f.read()

json_data = json.loads(data)  # <== program hangs at this line
I tried using signal.alarm to pinpoint where in json.loads() the program hangs, but the handler never raises the exception. The problem reproduces with a single multiprocessing.Process as well, but not when all processing is done in the main process.
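For reference, the kind of alarm-based watchdog I mean, as a minimal sketch:

import signal

def _timeout_handler(signum, frame):
    raise TimeoutError("json.loads() did not return in time")

signal.signal(signal.SIGALRM, _timeout_handler)
signal.alarm(5)        # deliver SIGALRM to this process after 5 seconds
try:
    json_data = json.loads(data)
finally:
    signal.alarm(0)    # cancel the pending alarm

Note that CPython only runs Python-level signal handlers between bytecode instructions (the C-level handler merely sets a flag), so if the process is stuck inside a C call or waiting on a never-released lock, the handler never gets a chance to run, which would explain the alarm not raising.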
Does this ring a bell for anyone?
I am writing a Python program where, in the main thread, I continuously (in a loop) receive data through a TCP socket using the recv function. In a callback function, I send data through the same socket using the sendall function. What triggers the callback is irrelevant. I've set my socket to blocking.
My question is, is this safe to do? My understanding is that a callback function is called on a separate thread (not the main thread). Is the Python socket object thread-safe? From my research, I've been getting conflicting answers.
Sockets in Python are not thread-safe.
You're trying to solve a few problems at once:
Sockets are not thread-safe.
recv is blocking and blocks the main thread.
sendall is being used from a different thread.
You may solve these by either using asyncio or doing what asyncio does internally: using select.select together with a socketpair, plus a queue for the data to be sent.
import select
import socket
import queue

# Any data put into this queue will be sent over main_socket
send_queue = queue.Queue()

# Any data sent to ssock shows up on rsock
rsock, ssock = socket.socketpair()

main_socket = socket.socket()
# Create the connection with main_socket, fill this up with your code

# Your callback thread
def different_thread(data):
    # Put the data to send inside the queue
    send_queue.put(data)
    # Trigger the main thread by sending data to ssock, which shows up on rsock
    ssock.send(b"\x00")

# Run the callback thread, e.g.:
# threading.Thread(target=different_thread, args=(b"data to send",)).start()

while True:
    # select.select returns when either main_socket or rsock has data to read
    rlist, _, _ = select.select([main_socket, rsock], [], [])
    for ready_socket in rlist:
        if ready_socket is main_socket:
            data = main_socket.recv(1024)
            # Do stuff with data, fill this up with your code
        else:
            # ready_socket is rsock
            rsock.recv(1)  # Dump the ready mark
            # Send the queued data
            main_socket.sendall(send_queue.get())
We use multiple constructs here; you will have to fill in the empty spaces with code of your own. As for the explanation:
We first create send_queue, a queue of data to send. Then we create a pair of connected sockets (socketpair()). We need these later on to wake up the main thread, since we don't want recv() to block and prevent writing to the socket.
Then we connect main_socket and start the callback thread. Now here's the magic:
In the main thread, we use select.select to know whether rsock or main_socket has any data. If either of them has data, the main thread wakes up.
Upon adding data to the queue, we wake up the main thread by signaling over ssock, which shows up on rsock and thus returns from select.select.
To fully understand this, you'll have to read up on select.select(), socketpair() and queue.Queue().
@tobias.mcnulty asked a good question in the comments: why should we use a Queue instead of sending all the data through the socket?
You can use the socketpair to send the data as well, which has its benefits, but sending over a queue may be preferable for several reasons:
Sending data over a socket is an expensive operation: it requires a syscall, passes the data back and forth through system buffers, and (for TCP sockets) exercises the full network stack. Using a Queue guarantees we make only one syscall, for the single-byte signal, and no more (apart from taking the queue's internal lock, which is pretty cheap), whereas sending large data through the socketpair would mean multiple syscalls. As a tip, you may as well use a collections.deque, whose appends and pops CPython guarantees to be thread-safe because of the GIL; that way you won't need any syscall besides the socketpair's signal byte (see the sketch after this list).
Architecture-wise, using a queue gives you finer-grained control later on. For example, the data can be enqueued in whatever type you wish and decoded afterwards. This lets the main loop be a little smarter and can help you build a cleaner interface.
You don't have size limits. This can be a bug or a feature. Changing the system's buffer size is not exactly encouraged, so the buffer creates a natural throttle on the amount of data you can send. That might be a benefit, but the application may wish to control throttling on its own; relying on the buffer's "natural" limit will cause the sending thread to block when it fills.
Just as with the socketpair's recv syscalls, large data will require passing through multiple select calls as well. TCP does not have message boundaries, so you'll either have to create artificial ones, set the socket to non-blocking and deal with asynchronous sockets, or treat it as a stream and keep going through select calls, which can be expensive depending on your OS.
Support for multiple threads on the same socketpair: sending a single signalling byte over a socket from multiple threads is fine, and is exactly how asyncio works. Sending more than that may cause the data to be interleaved out of order.
All in all, transferring the data back and forth between the kernel and userspace is possible and will work, but I personally do not recommend it.
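As a sketch of the collections.deque variant mentioned above (same one-byte wake-up signalling, no Queue lock):

import collections
import socket

send_buffer = collections.deque()  # append()/popleft() are atomic under the GIL
rsock, ssock = socket.socketpair()

def enqueue_send(data):
    send_buffer.append(data)  # no explicit lock needed for a single append
    ssock.send(b"\x00")       # one-byte wake-up signal, as before

# In the main select loop, replace send_queue.get() with:
#     main_socket.sendall(send_buffer.popleft())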
I'm using Python http.client.HTTPResponse.read() to read data from a stream. That is, the server keeps the connection open forever and sends data periodically as it becomes available. There is no expected length of response. In particular, I'm getting Tweets through the Twitter Streaming API.
To accomplish this, I repeatedly call http.client.HTTPResponse.read(1) to get the response one byte at a time. The problem is that the program hangs on that line whenever there is no data to read, which happens for long stretches of time (when no Tweets are coming in).
I'm looking for a method that will get a single byte of the HTTP response, if available, but that will fail instantly if there is no data to read.
I've read that you can set a timeout when the connection is created, but a timeout on the connection defeats the whole purpose of leaving it open for a long time waiting for data to come in. I don't want a timeout; I want to read data if there is data to be read, or fail immediately if there is not, without waiting at all.
I'd like to do this with what I have now (using http.client), but if it's absolutely necessary that I use a different library to do this, then so be it. I'm trying to write this entirely myself, so suggesting that I use someone else's already-written Twitter API for Python is not what I'm looking for.
This code gets the response; it runs in a separate thread from the main one:
while True:
    try:
        readByte = dc.request.read(1)
    except:
        readByte = []
    if len(readByte) != 0:
        dc.responseLock.acquire()
        dc.response = dc.response + chr(readByte[0])
        dc.responseLock.release()
Note that the request is stored in dc.request and the response in dc.response; these are created elsewhere. dc.responseLock is a Lock that prevents dc.response from being accessed by multiple threads at once.
With this running on a separate thread, the main thread can then get dc.response, which contains the entire response received so far. New data is added to dc.response as it comes in without blocking the main thread.
This works perfectly while it's running, but I run into a problem when I want it to stop. I changed my while statement to while not dc.twitterAbort, so that when I want to abort this thread I just set dc.twitterAbort to True and the thread stops.
But it doesn't. The thread lingers for a long time afterward, stuck on the dc.request.read(1) call. There must be some sort of timeout, because it does eventually get back to the while condition and stop, but it takes around 10 seconds for that to happen.
How can I get my thread to stop immediately when I want it to, if it's stuck on the call to read()?
Again, this method is working to get Tweets, the problem is only in getting it to stop. If I'm going about this entirely the wrong way, feel free to point me in the right direction. I'm new to Python, so I may be overlooking some easier way of going about this.
Your idea is not new; there are OS mechanisms(*) for making sure that an application only issues I/O-related system calls when they are guaranteed not to block. These mechanisms are usually used by async I/O frameworks, such as tornado or gevent. Use one of those, and you will find it very easy to run code "while" your application is waiting for an I/O event, such as incoming data on a socket.
If you use gevent's monkey-patching, you can keep using http.client, as requested. You just need to get used to the cooperative scheduling paradigm introduced by gevent/greenlets, in which your execution flow "jumps" between subroutines.
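As a minimal sketch of that approach (assuming gevent is installed; the host and path are placeholders):

from gevent import monkey
monkey.patch_all()  # patch blocking I/O before importing http.client

import gevent
import http.client

def read_stream():
    conn = http.client.HTTPSConnection("stream.example.com")  # placeholder host
    conn.request("GET", "/stream")                            # placeholder endpoint
    resp = conn.getresponse()
    while True:
        byte = resp.read(1)  # yields to other greenlets instead of blocking the process
        if not byte:
            break
        # append byte to your buffer here

reader = gevent.spawn(read_stream)
# other greenlets keep running while read_stream waits for data
gevent.joinall([reader])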
Of course you can also perform blocking I/O in another thread (like you did), so that it does not affect the responsiveness of your main thread. Regarding your "How can I get my thread to stop immediately" problem:
Forcing a thread that's blocked in a system call to stop is usually not a clean or even valid operation (see also Is there any way to kill a Thread in Python?). Either, once your application has finished its job, you take down the entire process, which also terminates all contained threads, or you just leave the thread be and give it as much time to terminate as it needs (the 10 seconds you were referring to are not really a problem, are they?).
If you do not want to have such long-blocking system calls anywhere in your application (be it in the main thread or not), then use the above-mentioned techniques to prevent blocking system calls.
(*) See, e.g., the O_NONBLOCK option in http://man7.org/linux/man-pages/man2/open.2.html
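If letting the thread die together with the process is acceptable, one pragmatic pattern is to mark the reader as a daemon thread, as a sketch (read_loop stands in for your blocking read loop above):

import threading

reader = threading.Thread(target=read_loop, daemon=True)
reader.start()

# When the program is done, just let the main thread return:
# daemon threads do not keep the interpreter alive, so the
# blocked read() no longer delays shutdown.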
I am writing super awesome software where I create a new thread every minute. This thread stores some data on a remote database server and then ends. When a new thread is created, resources (memory, ...) are assigned to it. If I don't correctly free those resources at some point, I will have a problem.
The thread that stores the data can sometimes end unexpectedly, e.g. with an error because the remote server is unreachable. This is not a problem: the thread ends, and its data will be stored the next minute together with the data of that next minute.
So my question is: do Python threads free all the resources they use when they end as expected? Do they free all resources when they end because of an error?
Python threads (as opposed to multiprocessing processes) share the same block of memory. If a thread adds something to a data structure that is directly or indirectly referenced from the main thread or other workers (for instance, a shared dictionary or list), that data won't be deleted when the thread dies. So, as long as the only data your threads write to memory is referenced by variables local to the thread's target function scope or below, the resources should be cleaned up the next time the garbage collector runs after the thread exits.
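A small illustration of that distinction (the names and sizes are arbitrary):

import threading

shared = []  # referenced from the main thread; outlives the worker

def worker():
    local_buf = bytearray(10_000_000)  # only referenced locally; freeable
                                       # once worker() returns
    shared.append("done")              # this reference survives the thread

t = threading.Thread(target=worker)
t.start()
t.join()
# local_buf is now unreachable and its memory can be reclaimed;
# the "done" entry stays alive through the shared list.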
I am developing a small Python program for the Raspberry Pi that listens for some events on a Zigbee network.
The way I've written this is rather simplistic: I have a while(True): loop checking for a unique ID (UID) from the Zigbee. If a UID is received, it's looked up in a dictionary containing callback methods. So, for instance, in the dictionary the key 101 is tied to a method called PrintHello().
So if that key/UID is received, the method PrintHello will be executed. Pretty simple, like so:
if UID in self.expectedCallBacks:
    self.expectedCallBacks[UID]()
I know this approach is probably too simplistic. My main concern is: what if the system is busy handling one method and receives another message?
On an embedded MCU I could handle this easily with a circular buffer plus interrupts, but I'm a bit lost when it comes to doing this on a RPi. Do I need to implement a new thread for the Zigbee module that fills a buffer the callback handler can then retrieve/read from?
I would appreciate any suggestions on how to implement this more robustly.
Threads can definitely help to some degree here. Here's a simple example using a ThreadPool:
from multiprocessing.pool import ThreadPool

pool = ThreadPool(2)  # Create a 2-thread pool

while True:
    uid = zigbee.get_uid()
    if uid in self.expectedCallbacks:
        pool.apply_async(self.expectedCallbacks[uid])
That will run the callback on a thread from the pool, and should help prevent events from backing up before you can dispatch them to a callback handler. The ThreadPool internally queues up any tasks that can't run while all the threads in the pool are busy.
However, remember that the Raspberry Pi has only one CPU core, so you can't execute more than one CPU-bound operation concurrently (and that's ignoring the limitations the GIL places on threading in Python, which is normally worked around by using multiple processes instead of threads). That means that no matter how many threads/processes you have, only one can use the CPU at a time. For that reason, you probably don't want more than one thread actually running the callbacks: as you add more, you just slow things down, because the OS has to keep switching between them.
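If you also want the software analogue of your circular-buffer-plus-interrupt setup, a dedicated reader thread can fill a queue that the dispatch loop drains, as a sketch (zigbee.get_uid() is assumed to be a blocking read, as in the snippet above):

import queue
import threading
from multiprocessing.pool import ThreadPool

uid_queue = queue.Queue()  # plays the role of the circular buffer

def zigbee_reader():
    while True:
        uid_queue.put(zigbee.get_uid())  # blocking read; enqueue every UID

threading.Thread(target=zigbee_reader, daemon=True).start()

pool = ThreadPool(1)  # a single worker running callbacks, per the advice above
while True:
    uid = uid_queue.get()  # blocks until the reader enqueues something
    if uid in self.expectedCallbacks:
        pool.apply_async(self.expectedCallbacks[uid])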