pyserial time based repeated data request - python

I am writing an application in Python to acquire data over a serial connection, using the pyserial library to establish the communication. What is the best approach to requesting data at an interval (e.g. every 2 seconds)? I always have to send a request, wait for the answer, and start the process again.

If this is a "slow" process that does not need accurate time precision, use a while loop and time.sleep(2) to pause the process for 2 seconds between requests.
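A minimal sketch of that approach (the send_request callable stands in for a pyserial write/readline pair, which is an assumption about your protocol):

```python
import time

def poll(send_request, interval=2.0, iterations=3):
    """Call send_request once per interval; the sleep is shortened by the
    time the blocking request/reply itself took, so the cycle stays close
    to `interval` seconds."""
    results = []
    for _ in range(iterations):
        started = time.monotonic()
        results.append(send_request())  # e.g. ser.write(b'R?\n'); ser.readline()
        elapsed = time.monotonic() - started
        time.sleep(max(0.0, interval - elapsed))
    return results
```

With pyserial, send_request would typically write the request bytes and then block on readline() until the instrument answers.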

I thought about using a separate thread to prevent the rest of the application from freezing. The thread takes a function which requests data from the instrument.
import time
from threading import Thread

class ReadingThread(Thread):
    '''Used to read out from the instrument, interrupted by an interval'''
    def __init__(self, controller, lock, function, queue_out, interval=3, *args, **kwargs):
        Thread.__init__(self)
        self.lock = lock
        self.function = function
        self.queue_out = queue_out
        self.interval = interval
        self.args = args
        self.kwargs = kwargs
        self.is_running = True

    def run(self):
        while self.is_running:
            with self.lock:
                try:
                    # Unpack the stored arguments when calling the function
                    result = self.function(*self.args, **self.kwargs)
                except Exception as e:
                    print(e)
                else:
                    self.queue_out.put(result)
            time.sleep(self.interval)

    def stop(self):
        self.is_running = False
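One refinement worth considering (my sketch, not from the question; it drops the unused controller argument): replacing time.sleep with threading.Event.wait makes stop() take effect immediately instead of after up to a full interval:

```python
import queue
import threading

class ReadingThread(threading.Thread):
    """Calls `function` every `interval` seconds and puts results on a queue."""
    def __init__(self, lock, function, queue_out, interval=3, *args, **kwargs):
        threading.Thread.__init__(self)
        self.lock = lock
        self.function = function
        self.queue_out = queue_out
        self.interval = interval
        self.args = args
        self.kwargs = kwargs
        self._stop_event = threading.Event()

    def run(self):
        while not self._stop_event.is_set():
            with self.lock:
                try:
                    result = self.function(*self.args, **self.kwargs)
                except Exception as e:
                    print(e)
                else:
                    self.queue_out.put(result)
            # Sleeps like time.sleep, but wakes immediately when stop() is called.
            self._stop_event.wait(self.interval)

    def stop(self):
        self._stop_event.set()
```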

Related

How to detect if a Python Thread is being killed?

I have my own Thread called TimeBasedLogThread. I would like to fire a function my_function when the TimeBasedLogThread is being killed because the main process is exiting. I would like to do it from within this object. Is it possible to do so?
Here is my current approach:
class TimeBasedBufferingHandler(MemoryHandler):
    # This is a logging-based handler that buffers logs to send
    # them as emails. The target of this handler is an SMTPHandler.
    def __init__(self, capacity=10, flushLevel=logging.ERROR, target=None,
                 flushOnClose=True, timeout=60):
        MemoryHandler.__init__(self, capacity=capacity, flushLevel=flushLevel,
                               target=target, flushOnClose=flushOnClose)
        self.timeout = timeout  # in seconds (as time.time())

    def flush(self):
        # Send the emails that are younger than timeout, all together
        # in the same email
        ...
class TimeBasedLogThread(Thread):
    def __init__(self, handler, timeout=60):
        Thread.__init__(self)
        self.handler = handler
        self.timeout = timeout

    def run(self):
        while True:
            self.handler.flush()
            time.sleep(self.timeout)

    def my_function(self):
        print("my_function is being called")
        self.handler.flush()

def setup_thread():
    smtp_handler = SMTPHandler()
    new_thread = TimeBasedLogThread(smtp_handler, timeout=10)
    new_thread.start()
In my main thread, I have:
setup_thread()
logging.error("DEBUG_0")
time.sleep(5)
logging.error("DEBUG_1")
time.sleep(5)
logging.error("DEBUG_2")
The time.sleep(5) calls release the main thread 5 seconds at a time, less than the timeout of my other thread. So I receive the first 2 emails with "DEBUG_0" and "DEBUG_1", but not the last one, "DEBUG_2", because the main process exits before the timeout has elapsed.
I would like to link the class TimeBasedLogThread and the function my_function so that it flushes (sends the emails) before exiting. How can I do that? I looked at the source code of threading but did not understand which method I could use.
Build your function as a Thread too (e.g. AfterDeadThread).
You have two strategies here:
TimeBasedLogThread calls AfterDeadThread before it dies
AfterDeadThread checks whether TimeBasedLogThread is alive; if not, it runs some methods
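The second strategy might look like this (my sketch; the names AfterDeadThread and on_dead are illustrative). Since join() already blocks until the watched thread finishes, no polling loop is needed:

```python
import threading

class AfterDeadThread(threading.Thread):
    """Waits for another thread to die, then runs a cleanup callback."""
    def __init__(self, watched, on_dead):
        threading.Thread.__init__(self)
        self.watched = watched
        self.on_dead = on_dead

    def run(self):
        self.watched.join()  # blocks until the watched thread has finished
        self.on_dead()       # e.g. flush the buffered log records
```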
Extend the run() method (representing the thread's activity) to fire the on_terminate handler passed to the custom thread's constructor as a keyword argument.
On a slightly changed custom thread class (for demonstration):
from threading import Thread
import time, random

class TimeBasedLogThread(Thread):
    def __init__(self, handler, timeout=2, on_terminate=None):
        Thread.__init__(self)
        self.handler = handler
        self.timeout = timeout
        self.terminate_handler = on_terminate

    def run(self):
        while True:
            num = self.handler()
            if num > 5:
                break
            time.sleep(self.timeout)
            print(num)
        if self.terminate_handler:
            self.terminate_handler()

def my_term_function():
    print("my_function is being called")

f = lambda: random.randint(3, 10)
tlog_thread = TimeBasedLogThread(f, on_terminate=my_term_function)
tlog_thread.start()
tlog_thread.join()
Sample output:
3
4
5
4
5
my_function is being called

How to run several functions with different time interval in Python

I am trying to use Python to request data from a flight control board over serial communication.
There are several fields of data, which I wish to update at different rates. For example, field A should be updated every 0.001 s, field B every 0.02 s, and field C every 1 s.
As there might be packet loss, reading data can sometimes take longer, and time.sleep() will cause inaccurate time intervals. Also, the class doing the serial communication is not thread-safe, so any method that may cause thread-safety problems cannot be used.
The following is the main structure of the process. Is there anything I can do to make it request data at the time intervals I wish?
class WorkerProcess(multiprocessing.Process):
    def __init__(self, port, addresslist, task_queue, result_queue):
        multiprocessing.Process.__init__(self)
        self.exit = multiprocessing.Event()
        self.serialPort = port
        self.addressList = addresslist
        self.sch = SerialCommunication(self.serialPort, self.addressList)
        self.task_queue = task_queue
        self.result_queue = result_queue
        self.modeSelection = 0

    def run(self):
        while not self.exit.is_set():
            ...
            self.sch.requestA()
            self.sch.requestB()
            self.sch.requestC()
            ...

    def shutdown(self):
        print("Shutdown initiated")
        try:
            self.sch.stopSerial()
        except Exception as e:
            print(e)
        print('Process stopped')
        self.exit.set()
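One way to keep the intervals accurate without extra threads (so the non-thread-safe serial class is only ever touched from one place) is to track a monotonic deadline per field and advance it by its interval, instead of sleeping a fixed amount. This is a sketch, with plain callables standing in for requestA/requestB/requestC:

```python
import time

def run_scheduler(tasks, duration):
    """tasks: list of (interval_seconds, callable) pairs.
    Each callable fires whenever its deadline passes; deadlines advance by
    the interval, so a slow read delays one cycle but does not drift."""
    start = time.monotonic()
    deadlines = [start + interval for interval, _ in tasks]
    while time.monotonic() - start < duration:
        now = time.monotonic()
        for i, (interval, func) in enumerate(tasks):
            if now >= deadlines[i]:
                func()
                deadlines[i] += interval
        # Sleep only until the earliest upcoming deadline.
        time.sleep(max(0.0, min(deadlines) - time.monotonic()))
```

The same loop would slot into WorkerProcess.run(), with the while condition replaced by `while not self.exit.is_set()`.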

How to implement a daemon stoppable polling thread in Python?

From my understanding, there is no such out-of-the-box solution in the Python stdlib.
The solution has to have the following characteristics:
Start the thread as daemon with a target function and optional arguments
Have polling: the thread should rerun the target every X seconds
Allow easy and graceful stopping: not breaking target execution midway
Expose the ability to stop from outside the program it belongs to; in particular, be able to stop the thread from testing code
Allow thread restarting after stop (or pseudo-restarting with a new thread)
I ran across a couple of suggestions on SO, but would like to aggregate any collected knowledge here (I will contribute to that in a follow-up answer), so that any new, alternative, or additional ideas can be heard.
My proposal uses the threading library, as it is advertised as more high level than thread.
A middle ground is this solution, found from other SO answer:
def main():
    t_stop = threading.Event()
    t = threading.Thread(target=thread, args=(1, t_stop))
    t.daemon = True
    t.start()
    time.sleep(duration)
    # stop the thread
    t_stop.set()

def thread(arg, stop_event):
    while not stop_event.is_set():
        # Code to execute here
        stop_event.wait(interval)  # sleeps, but wakes immediately on stop
This, unfortunately, requires us to have the t_stop object handy when testing (in order to stop the thread), and that handle is not designed to be exposed.
A solution would be to add the t and t_stop handles to a top-level or global dictionary somewhere, for the testing code to reach.
Another solution (copied and improved from elsewhere) uses the following:
def main():
    t = DaemonStoppableThread(sleep_time, target=target_function,
                              name='polling_thread',
                              args=(arg1, arg2))
    t.start()

# Stopping code from a test
def stop_polling_threads():
    threads = threading.enumerate()
    polling_threads = [thread for thread in threads
                       if 'polling_thread' in thread.getName()]
    for poll_thread in polling_threads:
        poll_thread.stop()

class DaemonStoppableThread(threading.Thread):
    def __init__(self, sleep_time, target=None, **kwargs):
        super(DaemonStoppableThread, self).__init__(target=target, **kwargs)
        self.setDaemon(True)
        self.stop_event = threading.Event()
        self.sleep_time = sleep_time
        self.target = target

    def stop(self):
        self.stop_event.set()

    def stopped(self):
        return self.stop_event.isSet()

    def run(self):
        while not self.stopped():
            if self.target:
                self.target()
            else:
                raise Exception('No target function given')
            self.stop_event.wait(self.sleep_time)
As good as these solutions may be, none of them addresses restarting the polling target function.
I avoided the expression "restarting a thread", as I understand that Python threads cannot be restarted, so a new thread has to be used to allow this "pseudo-restarting".
EDIT:
To improve on the above, a solution to start/stop the polling target multiple times:
class ThreadManager(object):
    def __init__(self):
        self.thread = None

    def start_thread(self):
        if not self.thread or not self.thread.is_alive():
            self.thread = DaemonStoppableThread(sleep_time=5, target=some_func, args=(1, 2))
            self.thread.start()
        return 'thread running'

    def stop_thread(self):
        if self.thread and self.thread.is_alive():
            self.thread.stop()
            return 'thread stopped'
        else:
            return 'dead thread'

    def check_thread(self):
        if self.thread and self.thread.is_alive():
            return 'thread alive'
        else:
            return 'dead_thread'

What is the correct way of creating a threaded zeromq socket?

I'm wondering how to correctly create a background thread that listens on some random port and pushes received objects to a Queue.
I want my socket wrapper to launch a new thread, select a random port, and start listening on it. I have to be able to get this port number from the socket wrapper.
I've come up with a simple class:
class SocketWrapper(Thread):
    def __init__(self, socket_type, *args, **kwargs):
        super(SocketWrapper, self).__init__(*args, **kwargs)
        self._ctx = zmq.Context()
        self._socket = self._ctx.socket(socket_type)
        self.port = self._socket.bind_to_random_port('tcp://*')
        self._queue = Queue()

    def run(self):
        while not self.stop_requested:
            try:
                item = self._socket.recv_pyobj(flags=zmq.NOBLOCK)
                self._queue.put(item)
            except zmq.ZMQError:
                time.sleep(0.01)  # Wait a little for the next item to arrive
However, zmq sockets can't be shared between threads; they are not thread-safe (http://api.zeromq.org/2-1:zmq). So socket creation and binding should be moved into the run() method:
class SocketWrapper2(Thread):
    def __init__(self, socket_type, *args, **kwargs):
        super(SocketWrapper2, self).__init__(*args, **kwargs)
        self._socket_type = socket_type
        self._ctx = zmq.Context()
        self._queue = Queue()
        self._event = Event()

    def run(self):
        socket = self._ctx.socket(self._socket_type)
        self.port = socket.bind_to_random_port('tcp://*')
        self._event.set()
        while not self.stop_requested:
            try:
                item = socket.recv_pyobj(flags=zmq.NOBLOCK)
                self._queue.put(item)
            except zmq.ZMQError:
                time.sleep(0.01)  # Wait a little for the next item to arrive

    def get_port(self):
        self._event.wait()
        return self.port
I had to add an event to be sure that the port is already bound before I read it, but this introduces a risk of deadlock when SocketWrapper2.get_port() is called before start(). That can be avoided by using the Thread's _started Event:
def get_port(self):
    if not self._started.is_set():
        raise RuntimeError("You can't call get_port before thread start.")
    self._event.wait()
    return self.port
Is this at last thread-safe? Is there anything else to take care of?
The problem I still see is that I want to get the port right after the SocketWrapper is created. Can I safely call the Thread's start() in __init__?
I ended up modifying this solution a little to avoid deadlocking main thread:
def get_port(self):
    if not self._started.is_set():
        raise RuntimeError("You can't call get_port before thread start.")
    if not self._event.wait(1):
        raise RuntimeError("Couldn't get port after a while.")
    return self.port
This is not perfect, since it delays get_port, but it's simple and does the job. Any suggestions on how to improve it?
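One alternative (my suggestion, not from the question) is to hand the port back through a Queue instead of an Event: the caller either receives the port or times out, so there is nothing to deadlock on. The sketch below uses a plain TCP socket from the stdlib as a stand-in for the zmq socket, but the pattern is the same — create and bind the socket inside run(), then publish the chosen port:

```python
import queue
import socket
import threading

class PortReportingThread(threading.Thread):
    """Binds a socket to a random free port inside run() and reports
    the port number back through a Queue."""
    def __init__(self):
        threading.Thread.__init__(self)
        self.daemon = True
        self._port_queue = queue.Queue(maxsize=1)
        self._stop_event = threading.Event()

    def run(self):
        sock = socket.socket()
        sock.bind(("127.0.0.1", 0))  # port 0 means "pick a free port"
        self._port_queue.put(sock.getsockname()[1])
        try:
            while not self._stop_event.is_set():
                self._stop_event.wait(0.01)  # the recv loop would go here
        finally:
            sock.close()

    def get_port(self, timeout=1.0):
        # Raises queue.Empty on timeout instead of blocking forever.
        return self._port_queue.get(timeout=timeout)

    def stop(self):
        self._stop_event.set()
```

Note that Queue.get() consumes the item, so cache the result if get_port() may be called more than once.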

How to detect if cherrypy is shutting down

I've made a server based on cherrypy, but I have a repetitive task which takes a long time (more than a minute) to run. This is all fine until I need to shut down the server; then I am waiting forever for the threads to finish.
I was wondering how you'd detect a cherrypy shutdown inside the client thread so that the thread could abort when the server is shutting down.
I'm after something like this:
class RootServer:
    @cherrypy.expose
    def index(self, **keywords):
        for i in range(0, 1000):
            lengthyprocess(i)
            if server_is_shutting_down():
                return
You can inspect the state directly:
if cherrypy.engine.state != cherrypy.engine.states.STARTED:
    return
Or, you can register a listener on the 'stop' channel:
class RootServer:
    def __init__(self):
        cherrypy.engine.subscribe('start', self.start)
        cherrypy.engine.subscribe('stop', self.stop)

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

    @cherrypy.expose
    def index(self, **keywords):
        for i in range(0, 1000):
            lengthyprocess(i)
            if not self.running:
                return
The latter is especially helpful if you also want to have the lengthyprocess start (or perform some preparation) when the server starts up, rather than upon a request.
