I am using pySerial to communicate with a microcontroller over USB. Most of the communication is initiated by the desktop Python script, which sends a command packet and waits for a reply.
But there is also an alert packet that may be sent by the microcontroller without a command from the Python script. In this case I need to monitor the read stream for any alerts.
For handling alerts, I dedicate a separate process to call readline() and loop around it, like so:
def serialMonitor(self):
    while not self.stopMonitor:
        self.lock.acquire()
        message = self.stream.readline()  # blocks until a full line arrives
        self.lock.release()
        self.callback(message)
inside a class. The function is then started in a separate process by

self.monitor = multiprocessing.Process(target=SerialManager.serialMonitor, args=[self])

Whenever a command packet is sent, the command function needs to take back control of the stream, for which it must interrupt the blocking readline() call. How do I interrupt the readline() call? Is there any way to terminate a process safely?
You can terminate a multiprocessing process with .terminate(). Is this safe? It probably is for a process that is just blocked in readline().
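For example, a minimal sketch reusing self.monitor from the question:

self.monitor.terminate()  # forcibly stops the child, interrupting the blocking readline()
self.monitor.join()       # wait for the process to actually exit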
However, this is not how I would handle things here. As I read your scenario, there are two possibilities:
- the MCU initiates an alert packet
- the computer sends data to the MCU (and the MCU perhaps responds)
I assume the MCU will not send an alert packet whilst an exchange initiated by the computer is going on.
So I would just initialise the serial object with a small timeout and leave it in a read loop when I'm not otherwise using it. My overall flow would go like this:
ser = Serial(PORT, timeout=1)

response = None
command_to_send = None
running = True

while running:  # event loop
    line = b''
    while running and not command_to_send and not line:
        # with a timeout set, readline() returns b'' instead of blocking forever
        # (pyserial's SerialTimeoutException is only raised on write timeouts)
        line = ser.readline()
    if line:
        process_mcu_alert(line)
    if command_to_send:
        send_command(command_to_send)
        command_to_send = None
        response = ser.readline()
This is only a sketch, as it would need to run in a thread or subprocess, since readline() is indeed blocking. So you need some thread-safe way of setting command_to_send and running (used to exit gracefully) and of getting response, and you likely want to wrap all this state up in a class. The precise implementation depends on what you are doing, but the principle is the same: have one loop which handles reading and writing to the serial port, have it time out so it responds relatively quickly (you can set a smaller timeout if you need to), and have it expose some interface you can work with.
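A minimal sketch of that wrapper, assuming Python 3 and queue.Queue for the thread-safe handoff (SerialWorker, process_mcu_alert, and the port name are placeholders of mine, not pyserial API):

import threading
import queue
from serial import Serial

class SerialWorker:
    def __init__(self, port):
        self.ser = Serial(port, timeout=1)
        self.commands = queue.Queue()    # commands produced by other threads
        self.responses = queue.Queue()   # replies handed back to them
        self.running = True

    def loop(self):
        while self.running:
            line = self.ser.readline()   # returns b'' once the 1 s timeout expires
            if line:
                process_mcu_alert(line)  # placeholder from the sketch above
            try:
                cmd = self.commands.get_nowait()
            except queue.Empty:
                continue
            self.ser.write(cmd)
            self.responses.put(self.ser.readline())

worker = SerialWorker("/dev/ttyUSB0")
threading.Thread(target=worker.loop, daemon=True).start()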
Sadly, to my knowledge Python has no asyncio-compatible serial library, otherwise that approach would seem neater.
Related
I am trying to read lines from a TCP server that I ran in the same script. I am able to send one command and read its output, but at the end (after reading all the output) the program hangs in readline. I have tried almost all the solutions here and here, but it still hangs.
Most of those solutions propose checking whether the output of readline is empty, but in my case the program never returns from the last read and just hangs there.
The TCP server is not in my control; I just have to test the server script, therefore I cannot modify it. Also, is it possible to send commands to the running server using Python without using subprocess? Any better alternative?
import subprocess

def subprocess_cmd(command):
    return subprocess.Popen(command, stdin=subprocess.PIPE,
                            stderr=subprocess.STDOUT, stdout=subprocess.PIPE,
                            shell=True, universal_newlines=True)

process = subprocess_cmd('python3 -u tcp_server.py 123 port1')

process.stdin.write('command like print_list\n')
process.stdin.flush()

while True:
    line = process.stdout.readline()
    if line == '':
        break
readline hangs because your TCP connection is still open and readline expects more data to come in. You must close the connection from the server side to notify readline that there is nothing more to read. Usually this is done by closing the socket on the client side, notifying the server that there will not be any more requests to it; when the server finishes processing all your commands it closes its socket too, and this is the signal that you have received all the data the server sent to you.
Or, alternatively, if you don't want to close the connection, you must invent a delimiter which marks the end of a response, so the client stops calling readline when that delimiter is read.
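For instance, reusing process from the question's snippet, and with END as a made-up delimiter that the server would have to agree to send, the reading loop could stop on it:

END_MARKER = 'END\n'  # hypothetical delimiter agreed with the server

while True:
    line = process.stdout.readline()
    if line == '' or line == END_MARKER:  # EOF or end-of-response marker
        break
    handle_line(line)                     # placeholder for your processing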
Quick question that I'm not even sure is possible :3
I have a Python script, a network script that connects to a server and remains connected until I either disconnect or it kicks me (which it normally shouldn't), and which is constantly receiving data and doing other tasks.
I was curious whether it's at all possible to trigger functions from within the script while it is running. Say, while the script was running, if I had the urge to send some sort of data to the server, could I type it up and send it to the function that handles this?
I wasn't quite sure if it was possible or not, as I've never attempted it or even seen it done. If it helps, I'm on Ubuntu Linux, running the script from the terminal.
The usual 'UNIX way' to solve such problems is to poll or select on both the socket and the standard-input file descriptors. You then handle network input on an 'IN' event on the socket, and terminal input on an 'IN' event on the stdin file descriptor.
This is not portable to Windows (which sucks), but it is the most natural way to do it on UNIX-like systems. And you don't get all the problems that come with threads (which often need polling in Python too, as they become 'unkillable' otherwise).
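A minimal sketch of such a loop, assuming sock is an already-connected socket and handle_network_data is your own handler:

import select
import sys

while True:
    readable, _, _ = select.select([sock, sys.stdin], [], [])
    for fd in readable:
        if fd is sock:
            data = sock.recv(4096)        # 'IN' event on the socket
            if not data:
                raise SystemExit          # server closed the connection
            handle_network_data(data)
        else:
            line = sys.stdin.readline()   # 'IN' event on stdin
            sock.sendall(line.encode())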
Take a look at gevent:
gevent is a coroutine-based Python networking library that uses
greenlet to provide a high-level synchronous API on top of the
libevent event loop.
and gevent.socket.
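A rough sketch of the same idea with gevent, assuming a line-based server on example.com:1234 (the address and receive_loop are placeholders of mine):

import gevent
from gevent import socket

def receive_loop(sock):
    while True:
        data = sock.recv(4096)
        if not data:
            break                     # server closed the connection
        print("received:", data)

sock = socket.create_connection(("example.com", 1234))
gevent.spawn(receive_loop, sock)      # runs concurrently as a greenlet
sock.sendall(b"hello\n")              # the main greenlet can still send
gevent.sleep(1)                       # yield so the receiver gets to run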
Jacek Konieczny's solution is good and simple. Should you want more flexible message passing, consider ZeroMQ. This gives you lots of power to easily create various messaging solutions around your main program. Using a single thread, your main program would look something like this:
#!/usr/bin/env python
import zmq
from time import sleep

CTX = zmq.Context()

incoming = CTX.socket(zmq.PULL)
incoming.bind("tcp://127.0.0.1:3000")
outgoing = CTX.socket(zmq.PUB)
outgoing.bind("tcp://127.0.0.1:3001")

# Poller for the incoming messages
poller = zmq.Poller()
poller.register(incoming, zmq.POLLIN)

def main():
    while True:
        # Do things on the network
        print("[Did things on the network]")
        # Send messages if you want
        outgoing.send("Important message")
        # Poll for incoming messages without blocking (0 ms timeout)
        socks = dict(poller.poll(0))
        if incoming in socks and socks[incoming] == zmq.POLLIN:
            message = incoming.recv()
            # Handle message
            print("[Handled message '%s']" % message)
        sleep(1)  # Only for this dummy program

if __name__ == "__main__":
    main()
You would then write a client (in any language that has ZeroMQ bindings) that pushes and subscribes to messages from the main program. Example pusher:
#!/usr/bin/env python
import zmq

CTX = zmq.Context()
pusher = CTX.socket(zmq.PUSH)
pusher.connect("tcp://127.0.0.1:3000")

def main():
    pusher.send("Message to main program")

if __name__ == "__main__":
    main()
Example subscriber:
#!/usr/bin/env python
import zmq

CTX = zmq.Context()
subscriber = CTX.socket(zmq.SUB)
subscriber.connect("tcp://127.0.0.1:3001")
subscriber.setsockopt(zmq.SUBSCRIBE, "")  # subscribe to everything

def main():
    while True:
        msg = subscriber.recv()
        print("[Received message] %s" % msg)

if __name__ == "__main__":
    main()
It sounds like you will want to combine the pusher and subscriber programs into one. If you decide to use ZeroMQ have a look at the excellent user guide.
You can of course also use ZeroMQ with multiple threads or processes (just be careful not to share individual ZeroMQ sockets between threads).
Without more details, I can only provide you with general ideas. In order to do two things at once (download from the server and wait for data to send) you will need to use either multiple threads or processes. There is a tutorial with some examples of multiple threads here. If you use multiple processes, you would be using the multiprocessing package.
With either solution, you would need a similar setup. I'll use the term thread for the rest, but you could easily replace it with process if you use multiple processes instead. You would probably have (at least) one thread to send and receive data (this might be two threads) and a separate thread to wait for something to send. This is a simplified instance of the producer/consumer problem: the thread that waits for commands/data is a simple input loop that produces data to send, while the thread that sends data consumes it as it sends it to the server.
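A bare-bones sketch of that producer/consumer split, assuming sock is a connected socket (the 0.5 s timeout and the prompt are arbitrary choices of mine):

import queue
import socket
import threading

to_send = queue.Queue()                  # producer/consumer handoff

def input_loop():
    while True:
        to_send.put(input('> ') + '\n')  # producer: wait for user commands

def network_loop(sock):
    sock.settimeout(0.5)
    while True:
        try:
            data = sock.recv(4096)       # receive side
            if data:
                print(data.decode(), end='')
        except socket.timeout:
            pass
        try:                             # consumer: send anything queued
            sock.sendall(to_send.get_nowait().encode())
        except queue.Empty:
            pass

threading.Thread(target=input_loop, daemon=True).start()
network_loop(sock)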
Stick your server stuff in another thread (investigate the threading module) and use the main thread for interaction with the user via raw_input/input.
I receive data from a device via the socket module.
But after some time the device stops sending packages.
Then I want to interrupt the for loop.
while True doesn't work, because it receives more than 100 packages.
How can I stop this process?
s stands for the socket.
...
for i in range(packages100):
    data = s.recv(4)
    f.write(data)
...
Edit:
I think socket.settimeout() is part of the solution. See also:
How to set timeout on python's socket recv method?
If your peer really just stops sending data, as opposed to closing the connection, this is tricky and you'll be forced to resort to asynchronous reading from the socket.
Put the socket in asynchronous mode (the docs and Google are your friends) and try to read it each time around the loop instead of doing a blocking read. You can then just stop "trying" whenever you wish. Note that by the nature of async IO your code will be a bit different: you will no longer be able to assume that once recv returns, it has actually read some data.
while True:
    data = conn.recv(4)
    if not data:
        break
    f.write(data)
Also, see the example in the Python docs.
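Alternatively, a simpler middle ground in line with the settimeout() idea from the question's edit, reusing s and f from the question (a sketch; the 2-second limit is an arbitrary choice):

import socket

s.settimeout(2.0)  # recv now raises socket.timeout after 2 s of silence
while True:
    try:
        data = s.recv(4)
    except socket.timeout:
        break          # the device went quiet; leave the loop
    if not data:
        break          # peer closed the connection
    f.write(data)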
So I have a motion sensor connected to an AVR micro that is communicating with my Python app via USB. I'm using pySerial to do the comms. During my script I have an infinite loop checking for data from the AVR micro. Before this loop I start a timer with signal.alarm() that will call a function to end a subprocess. When this alarm goes off it interrupts the pySerial comms and the program exits completely; I get the error that pySerial read() was interrupted. Is there any way around this issue? Any help would be awesome.
The problem is that your alarm will interrupt the read from the serial port, which isn't at all what you want.
It sounds like you probably want to break this into two threads that do work separately.
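One hedged sketch of that split, using a threading.Timer instead of signal.alarm so no signal ever interrupts the serial read (ser is assumed to be your open pyserial Serial object; handle_data and the 30-second limit are placeholders of mine):

import threading

stop_flag = threading.Event()

def read_loop(ser):
    while not stop_flag.is_set():
        line = ser.readline()   # safe: no SIGALRM will interrupt it
        if line:
            handle_data(line)   # placeholder for your processing

timer = threading.Timer(30.0, stop_flag.set)  # replaces signal.alarm(30)
timer.start()
reader = threading.Thread(target=read_loop, args=(ser,))
reader.start()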
You are using alarm(), which sends a signal, and pySerial, which reads and writes to a serial port. When you are reading or writing to a device like that and the SIGALRM signal is received, the read() or write() call is interrupted so the signal can be handled.
As signals are handled in userspace, while reading and writing are actually handled by the kernel, this makes things rather ugly. It is a known wart of the way signals are handled, and dates back to the very early UNIX days.
Code that handles signals correctly in Python may look like:

import errno

while True:
    try:
        data = read_operation()
    except OSError, e:
        if getattr(e, 'errno', None) == errno.EINTR:
            continue  # interrupted by a signal: retry the read
        raise
    else:
        break
This may or may not be a coding issue. It may also be an xinetd daemon issue; I do not know.
I have a Python script which is triggered from a Linux server running xinetd. xinetd has been set up to only allow one instance, as I only want one machine to be able to connect to the service, which is therefore also limited by IP.
Currently, when the client connects to xinetd the service works correctly and the script begins sending its output to the client machine. However, when the client disconnects (e.g. due to a reboot), the process is still alive on the server, and this blocks the client from connecting again once it has finished rebooting.
Q: How can I detect in Python that the client has disconnected? Perhaps I can test whether stdout is no longer being read by the client (and then exit the script), or is there a much easier way in xinetd to have the child process be killed when the client disconnects?
(I'm using Python 2.4.3 on RHEL 5 Linux; solutions for 2.4 are needed, but 3.1 solutions would be useful to know also.)
Add a signal handler for SIGHUP. (x)inetd sends this when the socket disconnects.
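A minimal sketch of that handler (works on Python 2.4):

import signal
import sys

def on_hup(signum, frame):
    sys.exit(0)  # exit once the client side goes away

signal.signal(signal.SIGHUP, on_hup)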
Monitor the signals sent to your process. Maybe your script isn't responding to the SIGHUP sent by xinetd; monitor the signal and let the script die.
You don't seem to get a SIGHUP, but you do get a SIGPIPE, at least as long as you are attempting any IO on the connection. If the application spends long periods of time not doing any IO, then you could start a thread reading stdin to ensure you get the SIGPIPE as soon as the disconnection occurs. This was good enough for my application, but then I didn't use any pipes other than the ones xinetd gave me.
I've seen several places on the net where people talk about SIGHUP getting sent on client disconnection, so I've written an inetd Python script to test out a couple of servers (one inetd and another xinetd); you could use it to check which signals actually get sent. It just logs what it finds to /var/log/test.log. Perhaps it will be useful.
#!/usr/bin/python
import os, signal, sys

# Signals we cannot or should not install handlers for
skip = ["SIGKILL", "SIG_DFL", "SIGSTOP", "SIG_IGN", "SIGCLD", "SIGCHLD"]

name_map = {}
identifiers = [i for i in dir(signal) if i.startswith("SIG") and i not in skip]
for i in identifiers:
    name_map[getattr(signal, i)] = i

def handler(num, frame):
    signame = name_map[num]
    os.system("echo handled %s >> /var/log/test.log" % signame)

if __name__ == "__main__":
    for id, name in name_map.iteritems():
        signal.signal(id, handler)
    while True:
        print sys.stdin.readline()
        sys.stdout.flush()