I am writing Python code that reads and writes to a serial device. The device is basically an Arduino Mega running the Marlin 3D printer firmware.
My Python code sends a series of G-code commands (ASCII strings terminated by newlines, including checksums and line numbers). Marlin responds to each successfully received line with an "ok\n". Marlin has only a limited line buffer, so when it is full Marlin holds off on sending the "ok\n" response until space is freed up.
If the checksum fails, Marlin requests the line to be resent with a "Resend: 143\n" response. Another possible response is "ok T:{temp value}\n" if the current temperature is requested.
My code uses three threads: the main thread, a read thread, and a write thread. Here is a stripped-down version of the code:
class Printer:
    def connect(self):
        self.s = serial.Serial(self.port, self.baudrate, timeout=3)
        self.ok_received.set()

    def _start_print_thread(self):
        self.print_thread = Thread(target=self._empty_buffer, name='Print')
        self.print_thread.setDaemon(True)
        self.print_thread.start()

    def _start_read_thread(self):
        self.read_thread = Thread(target=self._continous_read, name='Read')
        self.read_thread.setDaemon(True)
        self.read_thread.start()

    def _empty_buffer(self):
        while not self.stop_printing:
            if self.current_line_idx < len(self.buffer):
                while not self.ok_received.is_set() and not self.stop_printing:
                    logger.debug('waiting on ok_received')
                    self.ok_received.wait(2)
                line = self._next_line()
                self.s.write(line)
                self.current_line_idx += 1
                self.ok_received.clear()
            else:
                break

    def _continous_read(self):
        while not self.stop_reading:
            if self.s is not None:
                line = self.s.readline()
                if line == 'ok\n':
                    self.ok_received.set()
                    continue  # if we got an OK then we need to do nothing else
                if 'Resend:' in line:  # example line: "Resend: 143"
                    self.current_line_idx = int(line.split()[1]) - 1
                if line:  # if we received _anything_ then set the flag
                    self.ok_received.set()
            else:  # if no printer is attached, wait 10 ms before checking again
                sleep(0.01)
In the above code, self.ok_received is a threading.Event. This mostly works OK. Once every couple of hours, however, it gets stuck in the while not self.ok_received.is_set() and not self.stop_printing: loop inside _empty_buffer(). This kills the print by locking up the machine.
When stuck inside the loop, I can get the print to continue by sending any command manually. This allows the read thread to set the ok_received flag.
Since Marlin does not respond with checksums, I guess it is possible the "ok\n" gets garbled. The third if statement in the read thread is supposed to handle this by setting the flag if anything is received from Marlin.
So my question is: Do I have a possible race condition somewhere? Before I add locks all over the place or combine the two threads into one I would really like to understand how this is failing. Any advice would be greatly appreciated.
It looks like the read thread could get some data in the window where the write thread has broken out of the is_set loop but has not yet called self.ok_received.clear(). So the read thread ends up calling self.ok_received.set() while the write thread is still processing the previous line; the write thread then unknowingly calls clear() once it's done processing the previous message, and never learns that another line should be written.
def _empty_buffer(self):
    while not self.stop_printing:
        if self.current_line_idx < len(self.buffer):
            while not self.ok_received.is_set() and not self.stop_printing:
                logger.debug('waiting on ok_received')
                self.ok_received.wait(2)
            # START OF RACE WINDOW
            line = self._next_line()
            self.s.write(line)
            self.current_line_idx += 1
            # END OF RACE WINDOW
            self.ok_received.clear()
        else:
            break
A Queue might be a good way to handle this - you want to write one line in the write thread every time the read thread receives a line. If you replaced self.ok_received.set() with self.recv_queue.put("line"), then the write thread could just write one line every time it pulls something from the Queue:
def _empty_buffer(self):
    while not self.stop_printing:
        if self.current_line_idx < len(self.buffer):
            while not self.stop_printing:
                logger.debug('waiting on ok_received')
                try:
                    val = self.recv_queue.get(timeout=2)
                except Queue.Empty:
                    pass
                else:
                    break
            line = self._next_line()
            self.s.write(line)
            self.current_line_idx += 1
        else:
            break
You could also shrink the window to the point you probably won't hit it in practice by moving the call to self.ok_received.clear() up immediately after exiting the inner while loop, but technically there will still be a race.
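For what it's worth, here is a runnable sketch of that reordering. The Sender class, its list-based stand-in for the serial port, and the fake_reader acknowledger are all invented for illustration; only the event handling mirrors the original code:

```python
import threading

class Sender:
    """Sketch of the narrowed window: clear() happens immediately after the
    wait loop, before the line is processed, so an "ok" arriving mid-
    processing sets the event again instead of being wiped out."""

    def __init__(self, lines):
        self.buffer = lines
        self.current_line_idx = 0
        self.sent = []                      # stand-in for self.s.write()
        self.ok_received = threading.Event()
        self.ok_received.set()              # the first line may go out at once
        self.stop_printing = False

    def _empty_buffer(self):
        while not self.stop_printing:
            if self.current_line_idx < len(self.buffer):
                while not self.ok_received.is_set() and not self.stop_printing:
                    self.ok_received.wait(2)
                self.ok_received.clear()    # moved up: closes most of the window
                line = self.buffer[self.current_line_idx]
                self.sent.append(line)      # "write" the line
                self.current_line_idx += 1
            else:
                break

def fake_reader(sender):
    """Acknowledge each sent line, like the read thread seeing an "ok"."""
    acked = 0
    while acked < len(sender.buffer):
        if len(sender.sent) > acked:
            acked += 1
            sender.ok_received.set()
```

With the clear() moved above the write, an "ok" that lands while the line is still being processed re-sets the event instead of being lost; as noted, a Queue removes the race entirely.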
There's a little app named logview that I'm writing a script to monitor, along with some other tasks. In the main while loop (which will exit when the app I'm most concerned about closes), I check to see if logview needs restarting. The code I have presently is roughly as follows:
# a good old global
logview = "/usr/bin/logview"

# a function that starts logview:
port = 100
log_file = "/foo/bar"
logview_process = subprocess.Popen([logview, log_file, port],
                                   stdout=subprocess.DEVNULL,
                                   stderr=subprocess.STDOUT)

# a separate function that monitors in the background:
while True:
    time.sleep(1)
    logview_status = 0
    try:
        logview_status = psutil.Process(logview_process.pid).status()
    except psutil.NoSuchProcess:
        pass
    if (logview_status == psutil.STATUS_STOPPED or
            logview_status == psutil.STATUS_ZOMBIE or
            logview_status == psutil.STATUS_DEAD or
            logview_status == 0):
        print("Logview died; restarting")
        logview_cli_list = [logview]
        logview_cli_list.extend(logview_process.args)
        logview_process = subprocess.Popen(logview_cli_list,
                                           stdout=subprocess.DEVNULL,
                                           stderr=subprocess.STDOUT)
    if some_other_condition:
        break
However, if I test-kill logview, the condition triggers and I do see the printed message, but then I see it again, and again, and again. It seems that the condition triggers every single iteration of the loop if logview does die. And, it never does get restarted properly.
So clearly... I'm doing something wrong. =)
Any help (or better methods!) would be greatly appreciated.
I don't know your logview program but the problem is here:
logview_cli_list = [logview]
logview_cli_list.extend(logview_process.args)
When you're creating the argument list, you're putting logview twice in your command, because logview_process.args also contains the name of the launched command, so the program probably fails immediately because of bad args, and is run again and again...
The fix is then obvious:
logview_cli_list = logview_process.args
A better fix would be to create the process inside the loop whenever a given flag is set, and to set that flag once at the start so the first launch also goes through the loop. When the process dies, set the flag again to trigger the next launch. That would have avoided this copy/almost-paste mistake.
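That flag-driven version might look something like this (run_with_restarts, the restart cap, and the throwaway child process are all illustrative details, not from your code; it returns the total launch count, i.e. the initial launch plus the restarts):

```python
import subprocess
import time

def run_with_restarts(args, max_restarts=3, poll_interval=0.05):
    # The flag is set up front so the first launch also happens inside the
    # loop; the Popen call then exists in exactly one place, with no
    # argument-list copying to get wrong.
    needs_start = True
    launches = 0
    process = None
    while launches <= max_restarts:
        if needs_start:
            process = subprocess.Popen(args,
                                       stdout=subprocess.DEVNULL,
                                       stderr=subprocess.STDOUT)
            needs_start = False
            launches += 1
        if process.poll() is not None:  # the child has exited
            needs_start = True          # trigger a relaunch on the next pass
        time.sleep(poll_interval)
    process.wait()
    return launches  # initial launch + max_restarts restarts
```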
I have a Python script which parses an XML file for serial numbers and writes them to a text file. The problem with the code below is that it goes into an infinite loop. If I add a break statement somewhere after logging to a file, it writes only one serial number. How do I arrange the loop so that the program exits after writing all the serial numbers?
try:
    while True:
        data, addr = s.recvfrom(65507)
        mylist = data.split('\r')
        url = re.findall('http?://(?:[a-zA-Z]|[0-9]|[$-_#.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', data)
        print url[0]
        response = urllib2.urlopen(url[0])
        the_page = response.read()
        tree = ET.XML(the_page)
        with open("temp.xml", "w") as f:
            f.write(ET.tostring(tree))
        document = parse('temp.xml')
        actors = document.getElementsByTagName("ns0:serialNumber")
        for act in actors:
            for node in act.childNodes:
                if node.nodeType == node.TEXT_NODE:
                    r = "{}".format(node.data)
                    print r
                    logToFile(str(r))
        time.sleep(10)
        s.sendto(msg, ('239.255.255.250', 1900))
except socket.timeout:
    pass
I would normally create a flag, so that the while becomes:
while working == True:
Then reset the flag at the appropriate time.
This allows you to use the else clause on the while loop to close the text file and output the final results after the loop completes (see "Else clause on Python while statement").
Note that it is always better to explicitly close open files when finished rather than relying on garbage collection. You should also close the file and output a timeout message in the except logic.
For debugging, you can output a statement at each write to the text file.
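A sketch of that flag-plus-else shape (the socket and XML parsing from the question are replaced with a list of fake '\r'-separated datagrams so it runs standalone; appending to a list stands in for logToFile):

```python
def collect_serials(datagrams):
    """Flag-controlled loop: `working` is reset when the input is exhausted,
    and the while's else clause runs once the condition goes false."""
    logged = []
    finished_cleanly = False
    working = True
    while working:
        if not datagrams:
            working = False          # reset the flag: no more input
            continue
        data = datagrams.pop(0)
        for serial_no in data.split('\r'):
            if serial_no:
                logged.append(serial_no)  # stand-in for logToFile()
    else:
        # runs when the condition becomes false (not on break): the natural
        # place to close the output file and report final results
        finished_cleanly = True
    return logged, finished_cleanly
```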
If your s.recvfrom(65507) is working correctly, it should be an easy fix. Put this just below your data, addr = s.recvfrom(65507) line:
if not data:
    break
You open a UDP socket and use recvfrom to get data from it.
You set a high timeout, which makes the call blocking: when you start listening on the socket, your program blocks on that line until either the sender sends something or the timeout expires. On timeout with no data received, the function raises an exception.
I see two options:
Send something from the sender that indicates the end of stream (the serial numbers in your case).
Set a small timeout then catch the Exception and use it to break the loop.
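A sketch of the second option (receive_all and the quiet-window length are illustrative, not from your code):

```python
import socket

def receive_all(sock, quiet_timeout=1.0):
    """Use a short timeout and catch socket.timeout as the signal that the
    sender has gone quiet, breaking out of the receive loop."""
    sock.settimeout(quiet_timeout)
    datagrams = []
    while True:
        try:
            data, _ = sock.recvfrom(65507)
        except socket.timeout:
            break            # nothing arrived within quiet_timeout: done
        datagrams.append(data)
    return datagrams
```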
Also, take a look at this question: socket python : recvfrom
Hope it helps.
I'm new to Python (disclaimer: I'm new to programming and I've been reading python online for two weeks) and I've written a simple multi-processing script that should allow me to use four subprocesses at once. I was using a global variable (YES, I KNOW BETTER NOW) to keep track of how many processes were running at once. Start a new process, increment by one; end a process, decrement by one. This was messy but I was only focused on getting the multi-processes working, which it does.
So far I've been doing the equivalent of:
processes = 0

def function(value):
    global processes
    # do stuff to value
    processes -= 1

while read line:
    if processes < 4:
        processes += 1
        # create a new subprocess: function(line)
1: I need to keep track of processes in a better way than a global. I saw some use of a 'pool' in python to have 4 workers, but I failed hard at it. I like the idea of a pool but I don't know how to pass each line of a list to the next worker. Thoughts?
2: On general principles, why is my global var decrement not working? I know it's ugly, but I at least expected it to be ugly and successful.
3: I know I'm not locking the var before editing, I was going to add that once the decrementation was working properly.
Sorry if that's horrible pseudo-code, but I think you can see the gist. Here is the real code if you want to dive in:
MAX_THREADS = 4
CURRENT_THREADS = 0
MAX_LOAD = 8

# Iterate through all users in the userlist and call the funWork function on each user
def funReader(filename):
    # I defined the logger in detail above; I skipped about 200 lines of code to get it slimmed down
    logger.info("Starting 'move' function for file \"{0}\"...".format(filename))
    # Read in the entire user list file
    file = open(filename, 'r')
    lines = file.read()
    file.close()
    for line in lines:
        user = line.rstrip()
        funControl(user)

# Accept a username and query system load and current funWork thread count; decide when to start next thread
def funControl(user):
    # Global variables that control whether a new thread starts
    global MAX_THREADS
    global CURRENT_THREADS
    global MAX_LOAD
    # Decide whether to start a new subprocess of funWork for the current user
    print
    logger.info("Trying to start a new thread for user {0}".format(user))
    sysLoad = os.getloadavg()[1]
    logger.info("The current threads before starting a new loop are: {0}.".format(CURRENT_THREADS))
    if CURRENT_THREADS < MAX_THREADS:
        if sysLoad < MAX_LOAD:
            CURRENT_THREADS += 1
            logger.info("Starting a new thread for user {0}.".format(user))
            p = Process(target=funWork, args=(user,))
            p.start()
        else:
            print "Max Load is {0}".format(MAX_LOAD)
            logger.info("System load is too high ({0}), process delayed for four minutes.".format(sysLoad))
            time.sleep(240)
            funControl(user)
    else:
        logger.info("There are already {0} threads running, user {1} delayed for ten minutes.".format(CURRENT_THREADS, user))
        time.sleep(600)
        funControl(user)

# Actually do the work for one user
def funWork(user):
    global CURRENT_THREADS
    for x in range(0, 10):
        logger.info("Processing user {0}.".format(user))
        time.sleep(1)
    CURRENT_THREADS -= 1
Lastly: any errors you see are likely to be transcription mistakes because the code executes without bugs on a server at work. However, any horrible coding practices you see are completely mine.
Thanks in advance!
How about this (not tested):

import multiprocessing

MAX_PROCS = 4

# Actually do the work for one user
def funWork(user):
    for x in range(0, 10):
        logger.info("Processing user {0}.".format(user))
        time.sleep(1)
    return

# Iterate through all users in the userlist and call the funWork function on each user
def funReader(filename):
    logger.info("Starting 'move' function for file \"{0}\"...".format(filename))
    # Read the user list file line by line (file.read() would iterate characters)
    work = []
    with open(filename, 'r') as f:
        for line in f:
            work.append(line.rstrip())
    pool = multiprocessing.Pool(processes=MAX_PROCS)  # pool processes are different from threads...
    return pool.map(funWork, work)
I wrote a program that reads a text file and runs an .exe for every line in the file. This opens a new command-line window each time the .exe runs. The windows close on their own once the current task is finished, but the problem is as follows:
If I have 100 lines in the text file, I call the .exe file 100 times. My problem is that if I want to cancel the run after it has already started, I have to click the red "X" to close every window one after another.
What I am trying to do is have some sort of command interrupt the running program and either close all upcoming windows or just stop the for loop from running.
Is it possible to type a command into the console to interrupt the currently running code?
Would it be better to use some sort of key-event listener? If so, are there any built-in key listeners in Python? I can't seem to find any. Does that mean I have to install Pygame just to get a key-event listener?
Maybe I should try to listen to the command line and detect an exit code when I manually close one of the windows, and use that to end the for loop?
There are a few ways you could go about this, but in all of them you have one main issue: you need some sort of flag that can be switched so the code knows it must stop. For instance, if the code works in a while loop, it should check at the start of each iteration whether the flag is telling it to stop:
while flag:
    # do code
There are a few ways to implement this flag-like operation for your needs; I will discuss the threading option. First you need to understand how threading works. Then, instead of running an executable for each line of the text file, read the text file and put all the lines into a queue, and have a few threads read from that queue and perform the desired action (the example below mimics the work in Python rather than launching an external executable). Each worker should be a daemon thread, and its main loop should check a flag that lives in the parent thread...
Below is an example:
from threading import Thread
from Queue import Queue
import sys
import time

class Performer():
    def __init__(self):
        self.active = False
        self.queue = Queue()

    def action(self, line):
        pass  # your code should be here

    def operate(self, text_file, threads=5):
        with open(text_file) as f:
            for line in f:
                self.queue.put(line)
        self.active = True
        thread_pool = []
        for i in range(threads):
            t = Thread(target=self.__thread, name=('worker-%d' % i))
            t.daemon = True
            t.start()
            thread_pool.append(t)
        while self.active:
            try:
                if self.queue.empty():
                    break
            except KeyboardInterrupt:
                self.active = False
                sys.exit('user + keyboard = byebye')
            else:
                time.sleep(1)

    def __thread(self):
        while self.active:
            if not self.queue.empty():
                try:
                    self.action(self.queue.get())
                except Exception:
                    pass  # do something here
I've got a Python program which reads data from a serial port via the PySerial module. The two conditions I need to keep in mind are: I don't know how much data will arrive, and I don't know when to expect data.
Based on this I have come up with the following code snippets:
# Code from main loop, spawning thread and waiting for data
s = serial.Serial(5, timeout=5)  # Open COM5, 5 second timeout
s.baudrate = 19200

# Code from thread reading serial data
while 1:
    tdata = s.read(500)  # Read 500 characters or wait 5 seconds
    if len(tdata) > 0:  # If we got data
        if self.flag_got_data == 0:  # If it's the first data we received, store it
            self.data = tdata
        else:  # if it's not the first, append the data
            self.data += tdata
        self.flag_got_data = 1
So this code loops forever getting data off the serial port. We'll get up to 500 characters, store the data, then alert the main loop by setting a flag. If no data is present we'll just go back to sleep and wait.
The code is working, but I don't like the 5s timeout. I need it because I don't know how much data to expect, but I don't like that it's waking up every 5 seconds even when no data is present.
Is there any way to check when data becomes available before doing the read? I'm thinking something like the select command in Linux.
Note: I found the inWaiting() method, but that really just seems to change my "sleep" into a poll, so it's not what I want here. I just want to sleep until data comes in, then go get it.
OK, I actually put together something I like for this, using a combination of read() with no timeout and the inWaiting() method:
# Modified code from main loop:
s = serial.Serial(5)

# Modified code from thread reading the serial port
while 1:
    tdata = s.read()  # Wait forever for anything
    time.sleep(1)  # Sleep (or inWaiting() doesn't give the correct value)
    data_left = s.inWaiting()  # Get the number of characters ready to be read
    tdata += s.read(data_left)  # Do the read and combine it with the first character
    ...  # Rest of the code
This seems to give the results I wanted; I guess this type of functionality doesn't exist as a single method in Python.
You can set timeout = None; then the read call will block until the requested number of bytes arrive. If you want to wait until data arrives, just do a read(1) with timeout None. If you want to check for data without blocking, do a read(1) with timeout zero and check whether it returns any data.
(See the documentation: https://pyserial.readthedocs.io/en/latest/)
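A sketch of that pattern (wait_then_drain is a made-up name; s can be a pyserial serial.Serial or anything with the same read()/timeout interface):

```python
def wait_then_drain(s, chunk=4096):
    """Block until the first byte arrives (timeout=None), then switch to
    timeout=0 so a second read drains whatever else is already buffered
    without blocking."""
    s.timeout = None         # blocking: read() waits for the bytes requested
    data = s.read(1)         # sleep until the first byte shows up
    s.timeout = 0            # non-blocking: read() returns what's there now
    data += s.read(chunk)    # collect the rest of the burst, if any
    return data
```

pyserial lets you change the timeout attribute on an open port, which is what makes the switch from blocking to non-blocking mid-function possible.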
def cmd(cmd, serial):
    out = ''
    prev = '101001011'
    serial.flushInput()
    serial.flushOutput()
    serial.write(cmd + '\r')
    while True:
        out += str(serial.read(1))
        if prev == out:
            return out
        prev = out
call it like this:
cmd('ATZ',serial.Serial('/dev/ttyUSB0', timeout=1, baudrate=115000))