How to write a Python terminal application with a fixed input line?

I'm trying to write a terminal application to interact with an Arduino microcontroller via pyserial. The following features are important:
Print incoming messages to the command line.
Allow the user to enter outgoing messages for the serial port. Typing must remain possible while new incoming messages are being printed.
In principle, this should be possible with cmd. But I'm struggling with printing incoming messages once the user has started typing.
For simplicity, I wrote the following test script emulating incoming messages every second. Outgoing messages are just echoed back to the command line with the prefix ">":
#!/usr/bin/env python3
from cmd import Cmd
from threading import Thread
import time

class Prompt(Cmd):
    def default(self, inp):
        print('>', inp)

stop = False

def echo():
    while not stop:
        print(time.time())
        time.sleep(1)

thread = Thread(target=echo)
thread.daemon = True
thread.start()

try:
    Prompt().cmdloop()
except KeyboardInterrupt:
    stop = True
    thread.join()
In the Spyder IDE the result is just perfect. But in iTerm2 (macOS) the output is pretty messed up.
Since I want to use this application from within Visual Studio Code, it should work outside Spyder. Do you have any idea how to get the same behaviour in iTerm2 as in Spyder?
Things I already considered or tried out:
Use the curses library. This solves my problem of printing text to different regions, but I'm losing endless scrolling, since curses defines its own fullscreen window.
Move the cursor using ANSI escape sequences. This might be a possible solution, but I'm just not getting it to work: it always destroys the bottom line where the user is typing. I might need to adjust the scrolling region, which I still haven't managed to do (see the sketch after this list).
Use a different interpreter. I already tried Python vs. IPython, without success. It might be a more subtle setting in Spyder's interpreter.
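For reference, here is a rough sketch of the scrolling-region idea from the second bullet, using the DECSTBM escape sequence. This is only an illustration of the mechanism, not a working solution; keeping the bottom input line intact while output scrolls is exactly the hard part:

import sys
import shutil

rows = shutil.get_terminal_size().lines
# DECSTBM: restrict scrolling to all rows except the last one,
# reserving the bottom line for user input.
sys.stdout.write("\x1b[1;{}r".format(rows - 1))
# Move the cursor to the last line of the scroll region and print there.
sys.stdout.write("\x1b[{};1H".format(rows - 1))
print("incoming message")
# Jump to the reserved bottom line and redraw the prompt.
sys.stdout.write("\x1b[{};1H> ".format(rows))
sys.stdout.flush()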

Yes! I found a solution: The Prompt Toolkit 3.0 in combination with asyncio lets you handle this very problem using patch_stdout, "a context manager that ensures that print statements within it won’t destroy the user interface".
Here is a minimum working example:
#!/usr/bin/env python3
from prompt_toolkit import PromptSession
from prompt_toolkit.patch_stdout import patch_stdout
import asyncio
import time

async def echo():
    while True:
        print(time.time())
        await asyncio.sleep(1)

async def read():
    session = PromptSession()
    while True:
        with patch_stdout():
            line = await session.prompt_async("> ")
        print(line.upper())

loop = asyncio.get_event_loop()
loop.create_task(echo())
loop.create_task(read())
loop.run_forever()
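Note that asyncio.get_event_loop() is deprecated outside a running event loop on newer Python versions (3.10+). An equivalent way to drive the same two coroutines, replacing the last four lines above:

# Same echo() and read() coroutines as above, driven by asyncio.run(),
# the preferred entry point on modern Python.
async def main():
    await asyncio.gather(echo(), read())

asyncio.run(main())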

It's been a while since I last interacted with an Arduino from my Mac. I used pyserial and it was 100% reliable; the key is to use read_until(). I've included my wrapper class for illustration. (It also has an emulation mode for when I didn't have an Arduino handy.)
import serial  # pip install PySerial
from serial.tools import list_ports
import pty, os  # for creating a virtual serial interface
from serial import Serial
from typing import Optional

class SerialInterface:
    # constants which control how the class works
    FULLEMULATION = 0
    SERIALEMULATION = 1
    URLEMULATION = 2
    FULLSOLUTION = 3

    # private class-level variables
    __emulate: int = FULLEMULATION
    __ser: Serial
    __port: str = ""

    def __init__(self, emulate: int = FULLEMULATION, port: str = "") -> None:
        self.__buffer: list = []
        self.__emulate = emulate
        self.__port = port
        # self.listports()
        # set up the connection to the COM/serial port;
        # emulation sets up a virtual port, but this has not been working
        if emulate == self.FULLSOLUTION:
            self.__ser = serial.Serial(port, 9600)
        elif emulate == self.SERIALEMULATION:
            master, slave = pty.openpty()
            serialport = os.ttyname(slave)
            self.__ser = serial.Serial(port=serialport, baudrate=9600, timeout=1)
        elif emulate == self.URLEMULATION:
            self.__ser = serial.serial_for_url("loop://")

    # useful to show the COM/serial ports on a computer
    @staticmethod
    def listports() -> list:
        for p in list_ports.comports():
            print(p, p.device)
        return list_ports.comports()

    def read_until(self, expected: bytes = b'\n', size: Optional[int] = None) -> bytes:
        if self.__emulate == self.FULLEMULATION:
            return self.__buffer.pop()
        else:
            return self.__ser.read_until(expected, size)

    # note: it is important to end every write with \n so data can be read item by item
    def write(self, data: bytes = b'') -> None:
        if self.__emulate == self.FULLEMULATION:
            self.__buffer.append(data)
        else:
            self.__ser.write(data)

    def dataAvail(self) -> bool:
        if self.__emulate == self.FULLEMULATION:
            return len(self.__buffer) > 0
        else:
            return self.__ser.in_waiting > 0

    def close(self) -> None:
        self.__ser.close()

    def mode(self) -> int:
        return self.__emulate
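A minimal usage sketch for the wrapper above, assuming the buffer-backed FULLEMULATION mode (no hardware attached):

# Exercise the wrapper in pure-buffer emulation mode (no serial hardware).
iface = SerialInterface(emulate=SerialInterface.FULLEMULATION)
iface.write(b'hello\n')        # goes into the internal buffer
if iface.dataAvail():
    print(iface.read_until())  # pops b'hello\n' back out of the buffer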

Related

Emulate GPIO input on Raspberry Pi for testing

I have a Python script running on my RPi. It uses the gpiozero library (which is really great, by the way).
For testing purposes I was wondering whether it is possible to emulate GPIO states somehow (say, emulate pressing a button) and have that picked up by the gpiozero library.
Thanks!
TL;DR: Yes, it is possible.
I am not aware of any ready-made solution that achieves what you want, so I found it very interesting to check whether it is feasible at all.
I was looking for a seam that could be used to stub the GPIO features, and I found that gpiozero uses the GPIOZERO_PIN_FACTORY environment variable to pick a backend. The plan is to write our own pin factory that makes it possible to test other scripts.
NOTE: Please treat my solution as a proof of concept. It is far from being production ready.
The idea is to move the GPIO states out of the scope of the script under test. My solution uses the env variable RPI_STUB_URL to get the path of a Unix socket which is used to communicate with the stub controller.
I have introduced a very simple one-request/response-per-connection protocol:
"GF {pin}\n" - ask for the current function of the pin. The stub does not validate the response, but I would expect "input" or "output" to be used.
"SF {pin} {function}\n" - request a change of the pin's current function. The stub does not validate the function, but I would expect "input" or "output" to be used. The stub expects "OK" as a response.
"GS {pin}\n" - ask for the current state of the pin. The stub expects the values "0" or "1" as a response.
"SS {pin} {value}\n" - request a change of the pin's current state. The stub expects "OK" as a response.
My "stub package" contains following files:
- setup.py # This file is needed in every package, isn't it?
- rpi_stub/
- __init__.py # This file collects entry points
- stubPin.py # This file implements stub backend for gpiozero
- controller.py # This file implements server for my stub
- trigger.py # This file implements client side feature of my stub
Let's start with setup.py content:
from setuptools import setup, find_packages

setup(
    name="Raspberry PI GPIO stub",
    version="0.1",
    description="Package with stub plugin for gpiozero library",
    packages=find_packages(),
    install_requires=["gpiozero"],
    include_package_data=True,
    entry_points="""
        [console_scripts]
        stub_rpi_controller=rpi_stub:controller_main
        stub_rpi_trigger=rpi_stub:trigger_main
        [gpiozero_pin_factories]
        stub_rpi=rpi_stub:make_stub_pin
    """
)
It defines two console_scripts entry points, one for the controller and one for the trigger, and one pin factory for gpiozero.
Now rpi_stub/__init__.py:

import rpi_stub.stubPin
from rpi_stub.controller import controller_main
from rpi_stub.trigger import trigger_main

def make_stub_pin(number):
    return rpi_stub.stubPin.StubPin(number)

It is a rather simple file.
File rpi_stub/trigger.py:
import socket
import sys

def trigger_main():
    socket_addr = sys.argv[1]
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(socket_addr)
    request = "{0}\n".format(" ".join(sys.argv[2:]))
    sock.sendall(request.encode())
    data = sock.recv(1024)
    sock.close()
    print(data.decode("utf-8"))
trigger lets you make your own requests. You can use it to check the state of a GPIO pin or to change it.
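For example, querying the state of pin 2 (with the controller from the demo below listening on /tmp/socket.sock):

(rpi_stub_env)$ stub_rpi_trigger /tmp/socket.sock GS 2
0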
File rpi_stub/controller.py:
import socketserver
import sys

functions = {}
states = {}

class MyHandler(socketserver.StreamRequestHandler):
    def _respond(self, response):
        print("Sending response: {0}".format(response))
        self.wfile.write(response.encode())

    def _handle_get_function(self, data):
        print("Handling get_function: {0}".format(data))
        try:
            self._respond("{0}".format(functions[data[0]]))
        except KeyError:
            self._respond("input")

    def _handle_set_function(self, data):
        print("Handling set_function: {0}".format(data))
        functions[data[0]] = data[1]
        self._respond("OK")

    def _handle_get_state(self, data):
        print("Handling get_state: {0}".format(data))
        try:
            self._respond("{0}".format(states[data[0]]))
        except KeyError:
            self._respond("0")

    def _handle_set_state(self, data):
        print("Handling set_state: {0}".format(data))
        states[data[0]] = data[1]
        self._respond("OK")

    def handle(self):
        data = self.rfile.readline()
        print("Handle: {0}".format(data))
        data = data.decode("utf-8").strip().split(" ")
        if data[0] == "GF":
            self._handle_get_function(data[1:])
        elif data[0] == "SF":
            self._handle_set_function(data[1:])
        elif data[0] == "GS":
            self._handle_get_state(data[1:])
        elif data[0] == "SS":
            self._handle_set_state(data[1:])
        else:
            self._respond("Not understood")

def controller_main():
    socket_addr = sys.argv[1]
    server = socketserver.UnixStreamServer(socket_addr, MyHandler)
    server.serve_forever()
This file contains the simplest server I was able to write.
And the most complicated file rpi_stub/stubPin.py:
from gpiozero.pins import Pin
import os
import socket
from threading import Thread
from time import sleep

def dummy_func():
    pass

def edge_detector(pin):
    print("STUB: Edge detector for pin: {0} spawned".format(pin.number))
    while pin._edges != "none":
        new_state = pin._get_state()
        print("STUB: Edge detector for pin {0}: value {1} received".format(pin.number, new_state))
        if new_state != pin._last_known:
            print("STUB: Edge detector for pin {0}: calling callback".format(pin.number))
            pin._when_changed()
            pin._last_known = new_state
        sleep(1)
    print("STUB: Edge detector for pin: {0} ends".format(pin.number))

class StubPin(Pin):
    def __init__(self, number):
        super(StubPin, self).__init__()
        self.number = number
        self._when_changed = dummy_func
        self._edges = "none"
        self._last_known = 0

    def _make_request(self, request):
        server_address = os.getenv("RPI_STUB_URL", None)
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(server_address)
        sock.sendall(request.encode())
        data = sock.recv(1024)
        sock.close()
        return data.decode("utf-8")

    def _get_function(self):
        response = self._make_request("GF {pin}\n".format(pin=self.number))
        return response

    def _set_function(self, function):
        response = self._make_request("SF {pin} {function}\n".format(pin=self.number, function=function))
        if response != "OK":
            raise Exception("STUB Not understood", response)

    def _get_state(self):
        response = self._make_request("GS {pin}\n".format(pin=self.number))
        if response == "1":
            return 1
        else:
            return 0

    def _set_pull(self, value):
        pass

    def _set_edges(self, value):
        print("STUB: set edges called: {0}".format(value))
        if self._edges == "none" and value != "none":
            self._thread = Thread(target=edge_detector, args=(self,))
            self._thread.start()
        if self._edges != "none" and value == "none":
            self._edges = value
            self._thread.join()
        self._edges = value

    def _get_when_changed(self):
        return self._when_changed

    def _set_when_changed(self, value):
        print("STUB: set when changed: {0}".format(value))
        self._when_changed = value

    def _set_state(self, value):
        response = self._make_request("SS {pin} {value}\n".format(pin=self.number, value=value))
        if response != "OK":
            raise Exception("Not understood", response)
The file defines StubPin, which extends Pin from gpiozero. It overrides all the functions that were mandatory to override. It also contains a very naive edge detection, which was needed for gpiozero's Button to work.
Let's make a demo :). Let's create a virtualenv with gpiozero and my package installed:
$ virtualenv -p python3 rpi_stub_env
[...] // virtualenv successfully created
$ source ./rpi_stub_env/bin/activate
(rpi_stub_env)$ pip install gpiozero
[...] // gpiozero installed
(rpi_stub_env)$ python3 setup.py install
[...] // my package installed
Now let's create stub controller (open in other terminal etc.):
(rpi_stub_env)$ stub_rpi_controller /tmp/socket.sock
I will use the following script example.py:
from gpiozero import Button
from time import sleep

button = Button(2)

while True:
    if button.is_pressed:
        print("Button is pressed")
    else:
        print("Button is not pressed")
    sleep(1)
Let's execute it:
(rpi_stub_env)$ RPI_STUB_URL=/tmp/socket.sock GPIOZERO_PIN_FACTORY=stub_rpi python example.py
By default the script prints that the button is pressed. Now let's release the button:
(rpi_stub_env)$ stub_rpi_trigger /tmp/socket.sock SS 2 1
Now the script should print that the button is not pressed. If you execute the following command it will be pressed again:
(rpi_stub_env)$ stub_rpi_trigger /tmp/socket.sock SS 2 0
I hope it will help you.

Python Serial Port with threading - freezing computer

Okay, time for another question/post...
So currently I am trying to develop a simple Python program that has a WebKit/webpage view and a serial port interface. Not that it should matter, but this is also running on a Raspberry Pi.
The following code works fine, but it freezes the system as soon as I uncomment the serial port line that you can see commented out.
The day has been long and for some reason this one has my brain fried. Python is not my strongest point, but mind you this is just a quick test script for now... Yes, I have used Google and other resources...
#!/usr/bin/env python
import sys
import serial
import threading
import time
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4.QtWebKit import *

sURL = ""
sURL2 = ""
objSerial = serial.Serial(0)

def SerialLooper():
    global objSerial
    if objSerial.isOpen() == True:
        print("is_responding")
        #objSerial.write("is_responding")
    time.sleep(10)
    SerialLooper()

class TestCLASS(object):
    def __init__(self):
        global sURL
        global sURL2
        global objSerial
        objSerial = serial.Serial(0)
        sURL = "http://localhost/tester"
        app = QApplication(sys.argv)
        webMain = QWebView()
        webMain.loadFinished.connect(self.load_finished)
        webMain.load(QUrl(sURL))
        webMain.show()
        thread = threading.Thread(target=SerialLooper)
        thread.start()
        sys.exit(app.exec_())

    def load_finished(boolNoErrors):
        global sURL
        print("Url - " + sURL)
        #something here
        #something else here

newObjClass = TestCLASS()
EDIT
Further to this, it appears it's not the multithreading but the serial.write()
It has been a while since I used serial, but IIRC it is not thread-safe (on Windows, at least). You are opening the port in the main thread and performing a write in another thread. It's bad practice anyway. You might also consider writing a simple single-threaded program to check that the serial port is actually working; see the sketch after the loop example below.
PS Your program structure could use some work. You only need one of the global statements (global objSerial); the rest do nothing. It would be better to get rid of that one, too.
And the recursive call to SerialLooper() will eventually fail when the recursion depth is exceeded; why not just use a while loop...
def SerialLooper():
    while objSerial.isOpen():  # Drop the "== True"
        # print something
        # write to the port
        # sleep or do whatever
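As for the single-threaded sanity check suggested above, a minimal sketch could look like this (the port name and baud rate are assumptions; adjust them for your setup):

import time
import serial

# Open the port, send one probe, and print whatever comes back.
port = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)  # assumed port name
port.write(b'ping\n')
time.sleep(1)
print(port.read(port.in_waiting or 1))
port.close()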

libvlc and dbus interface

I'm trying to create a basic media player using libvlc which will be controlled through D-Bus. I'm using the gtk and libvlc bindings for Python. The code is based on the official example from the VLC website.
The only thing I modified is adding the D-Bus interface to the vlc instance:
# Create a single vlc.Instance() to be shared by (possible) multiple players.
instance = vlc.Instance()
print vlc.libvlc_add_intf(instance, "dbus")  # this is what I added; returns 0, which is OK
All is well, the demo works and plays any video file. But for some reason the D-Bus control module doesn't work (I can't believe I just said the dreaded "doesn't work" words).
I already have working client D-Bus code which binds to the MPRIS 2 interface. I can control a normal instance of the VLC media player; that works just fine, but with the above example nothing happens. The D-Bus control module is loaded properly, since libvlc_add_intf doesn't return an error and I can see the MPRIS 2 service in D-Feet (org.mpris.MediaPlayer2.vlc).
Even in D-Feet, calling any of the methods of the D-Bus vlc object returns no error, but nothing happens.
Do I need to configure something else to make the D-Bus module control the libvlc player?
Thanks
UPDATE
It seems that creating the vlc instance with a higher verbosity shows that the D-Bus calls are received, but they have no effect whatsoever on the player itself.
Also, adding the RC interface to the instance instead of D-Bus has some problems too: when I run the example from the command line, it drops me into the RC console where I can type control commands, but it behaves the same as D-Bus: nothing happens, no error, nada, absolutely nothing. It ignores the commands completely.
Any thoughts?
UPDATE 2
Here is the code that uses libvlc to create a basic player:
from dbus.mainloop.glib import DBusGMainLoop
import gtk
import gobject
import sys
import vlc
from gettext import gettext as _

# Create a single vlc.Instance() to be shared by (possible) multiple players.
instance = vlc.Instance("--one-instance --verbose 2")

class VLCWidget(gtk.DrawingArea):
    """Simple VLC widget.

    Its player can be controlled through the 'player' attribute, which
    is a vlc.MediaPlayer() instance.
    """
    def __init__(self, *p):
        gtk.DrawingArea.__init__(self)
        self.player = instance.media_player_new()
        def handle_embed(*args):
            if sys.platform == 'win32':
                self.player.set_hwnd(self.window.handle)
            else:
                self.player.set_xwindow(self.window.xid)
            return True
        self.connect("map", handle_embed)
        self.set_size_request(640, 480)

class VideoPlayer:
    """Example simple video player.
    """
    def __init__(self):
        self.vlc = VLCWidget()

    def main(self, fname):
        self.vlc.player.set_media(instance.media_new(fname))
        w = gtk.Window()
        w.add(self.vlc)
        w.show_all()
        w.connect("destroy", gtk.main_quit)
        self.vlc.player.play()
        DBusGMainLoop(set_as_default=True)
        gtk.gdk.threads_init()
        gobject.MainLoop().run()

if __name__ == '__main__':
    if not sys.argv[1:]:
        print "You must provide at least 1 movie filename"
        sys.exit(1)
    if len(sys.argv[1:]) == 1:
        # Only 1 file. Simple interface
        p = VideoPlayer()
        p.main(sys.argv[1])
The script can be run from the command line like:
python example_vlc.py file.avi
The client code which connects to the vlc D-Bus object is too long to post, so instead pretend that I'm using D-Feet to get the bus connection and post messages to it.
Once the example is running, I can see the player's D-Bus interface in D-Feet, but I am unable to control it. Is there anything else that I should add to the code above to make it work?
I can't see your implementation of your event loop, so it's hard to tell what might be causing commands not to be recognized or to be dropped. Is it possible your threads are losing the stack trace information and are actually throwing exceptions?
You might get more responses if you added either a pseudo-code version of your event loop and D-Bus command parsing, or a simplified version.
The working programs found on nullege.com use ctypes. One, which acted as a server, used rpyc; I'm ignoring that one.
The advantages of ctypes over dbus are a huge speed advantage (calling the C library code directly rather than interacting through Python) as well as not requiring the library to implement the dbus interface.
I didn't find any examples using gtk or dbus ;-(
Notable examples
PyNuvo vlc.py
Milonga Tango DJing program
Using dbus / gtk
dbus uses the gobject main loop, not the gtk main loop. Totally different beasts. Don't cross the streams! Some fixes:
You don't need this; threads are evil:
gtk.gdk.threads_init()
gtk.main_quit() won't work when using the gobject MainLoop. The gobject main loop can't live within your class.
if __name__ == '__main__':
    loop = gobject.MainLoop()
    loop.run()
Pass loop into your class. Then, to quit the app, call:
loop.quit()
dbus (notify) / gtk working example
I'm not going to write your vlc app for you, but here is a working example of using dbus / gtk. Just adapt it to vlc. This assumes you took my advice on gtk above. As you know, any instance of DesktopNotify must be used while gobject.MainLoop is running, but you can place it anywhere within your main class.
desktop_notify.py
from __future__ import print_function
import gobject
import time, dbus
from dbus.exceptions import DBusException
from dbus.mainloop.glib import DBusGMainLoop

class DesktopNotify(object):
    """ Notify-OSD ubuntu's implementation has a 20 message limit. You've been
    warned. When the queue is full, delete old messages before adding new ones."""

    # Static variables
    dbus_loop = None
    dbus_proxy = None
    dbus_interface = None
    loop = None

    @property
    def dbus_name(self):
        return "org.freedesktop.Notifications"

    @property
    def dbus_path(self):
        return "/org/freedesktop/Notifications"

    @property
    def dbus_interface(self):
        return self.dbus_name

    def __init__(self, strInit="initializing passive notification messaging"):
        """ Reinitializing dbus when making a 2nd class instance would be bad."""
        strProxyInterface = "<class 'dbus.proxies.Interface'>"
        if str(type(DesktopNotify.dbus_interface)) != strProxyInterface:
            DesktopNotify.dbus_loop = DBusGMainLoop(set_as_default=True)
            bus = dbus.SessionBus(mainloop=DesktopNotify.dbus_loop)
            DesktopNotify.dbus_proxy = bus.get_object(self.dbus_name, self.dbus_path)
            DesktopNotify.dbus_interface = dbus.Interface(DesktopNotify.dbus_proxy, self.dbus_interface)
            DesktopNotify.dbus_proxy.connect_to_signal("NotificationClosed", self.handle_closed)

    def handle_closed(self, *arg, **kwargs):
        """ Notification closed by user or by code. Print a message or not."""
        lngNotificationId = int(arg[0])
        lngReason = int(arg[1])

    def pop(self, lngID):
        """ The ID is stored in a database, but I'm going to skip this and keep it simple."""
        try:
            DesktopNotify.dbus_interface.CloseNotification(lngID)
        except DBusException as why:
            print(self.__class__.__name__ + ".pop probably no message with id", lngID, why)
        finally:
            pass

    def push(self, strMsgTitle, strMsg, dictField):
        """ Create a new passive notification (took out retrying and handling full queues)."""
        now = time.localtime(time.time())
        strMsgTime = strMsg + " " + time.asctime(now)
        del now
        strMsgTime = strMsgTime % dictField
        app_name = "[your app name]"
        app_icon = ''
        actions = ''
        hint = ''
        expire_timeout = 10000  # Use seconds * 1000
        summary = strMsgTitle
        body = strMsgTime
        lngNotificationID = None
        try:
            lngNotificationID = DesktopNotify.dbus_interface.Notify(
                app_name, 0, app_icon, summary, body, actions, hint, expire_timeout)
        except DBusException as why:
            # Excellent spot to delete the oldest notification and then retry
            print(self.__class__.__name__ + ".push Being lazy. Posting passive notification was unsuccessful.", why)
        finally:
            # Excellent spot to add to the database upon success
            pass

Sending ^C to Python subprocess objects on Windows

I have a test harness (written in Python) that needs to shut down the program under test (written in C) by sending it ^C. On Unix,
proc.send_signal(signal.SIGINT)
works perfectly. On Windows, that throws an error ("signal 2 is not supported" or something like that). I am using Python 2.7 for Windows, so I have the impression that I should be able to do instead
proc.send_signal(signal.CTRL_C_EVENT)
but this doesn't do anything at all. What do I have to do? This is the code that creates the subprocess:
# Windows needs an extra argument passed to subprocess.Popen,
# but the constant isn't defined on Unix.
try:
    kwargs['creationflags'] = subprocess.CREATE_NEW_PROCESS_GROUP
except AttributeError:
    pass
proc = subprocess.Popen(argv,
                        stdin=open(os.path.devnull, "r"),
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        **kwargs)
There is a solution by using a wrapper (as described in the link Vinay provided) which is started in a new console window with the Windows start command.
Code of the wrapper:
# wrapper.py
import subprocess, time, signal, sys, os

def signal_handler(signal, frame):
    time.sleep(1)
    print 'Ctrl+C received in wrapper.py'

signal.signal(signal.SIGINT, signal_handler)
print "wrapper.py started"
subprocess.Popen("python demo.py")
time.sleep(3)  # Replace with your IPC code here, which waits on a "fire CTRL-C" request
os.kill(signal.CTRL_C_EVENT, 0)
Code of the program catching CTRL-C:
# demo.py
import signal, sys, time

def signal_handler(signal, frame):
    print 'Ctrl+C received in demo.py'
    time.sleep(1)
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)
print 'demo.py started'
# signal.pause()  # does not work under Windows
while True:
    time.sleep(1)
Launch the wrapper like e.g.:
PythonPrompt> import subprocess
PythonPrompt> subprocess.Popen("start python wrapper.py", shell=True)
You need to add some IPC code which allows you to control the wrapper firing the os.kill(signal.CTRL_C_EVENT, 0) command. I used sockets for this purpose in my application.
Explanation:
Preinformation
send_signal(CTRL_C_EVENT) does not work because CTRL_C_EVENT is only for os.kill. [REF1]
os.kill(CTRL_C_EVENT) sends the signal to all processes running in the current cmd window [REF2]
Popen(..., creationflags=CREATE_NEW_PROCESS_GROUP) does not work because CTRL_C_EVENT is ignored for process groups. [REF2]
This is a bug in the python documentation [REF3]
Implemented solution
Let your program run in a different cmd window with the Windows shell command start.
Add a CTRL-C request wrapper between your control application and the application which should get the CTRL-C signal. The wrapper will run in the same cmd window as the application which should get the CTRL-C signal.
The wrapper will shutdown itself and the program which should get the CTRL-C signal by sending all processes in the cmd window the CTRL_C_EVENT.
The control program should be able to request the wrapper to fire the CTRL-C signal. This might be implemented through IPC means, e.g. sockets.
Helpful posts were:
I had to remove the http in front of the links because I'm a new user and am not allowed to post more than two links.
http://social.msdn.microsoft.com/Forums/en-US/windowsgeneraldevelopmentissues/thread/dc9586ab-1ee8-41aa-a775-cf4828ac1239/#6589714f-12a7-447e-b214-27372f31ca11
Can I send a ctrl-C (SIGINT) to an application on Windows?
Sending SIGINT to a subprocess of python
http://bugs.python.org/issue9524
http://ss64.com/nt/start.html
http://objectmix.com/python/387639-sending-cntrl-c.html#post1443948
Update: IPC based CTRL-C Wrapper
Here you can find a self-written Python module providing a CTRL-C wrapper, including socket-based IPC.
The syntax is quite similar to the subprocess module.
Usage:
>>> import winctrlc
>>> p1 = winctrlc.Popen("python demo.py")
>>> p2 = winctrlc.Popen("python demo.py")
>>> p3 = winctrlc.Popen("python demo.py")
>>> p2.send_ctrl_c()
>>> p1.send_ctrl_c()
>>> p3.send_ctrl_c()
Code
import socket
import subprocess
import time
import random
import signal, os, sys

class Popen:
    _port = random.randint(10000, 50000)
    _connection = ''

    def _start_ctrl_c_wrapper(self, cmd):
        cmd_str = "start \"\" python winctrlc.py " + "\"" + cmd + "\"" + " " + str(self._port)
        subprocess.Popen(cmd_str, shell=True)

    def _create_connection(self):
        self._connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._connection.connect(('localhost', self._port))

    def send_ctrl_c(self):
        self._connection.send(Wrapper.TERMINATION_REQ)
        self._connection.close()

    def __init__(self, cmd):
        self._start_ctrl_c_wrapper(cmd)
        self._create_connection()

class Wrapper:
    TERMINATION_REQ = "Terminate with CTRL-C"

    def _create_connection(self, port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(('localhost', port))
        s.listen(1)
        conn, addr = s.accept()
        return conn

    def _wait_on_ctrl_c_request(self, conn):
        while True:
            data = conn.recv(1024)
            if data == self.TERMINATION_REQ:
                ctrl_c_received = True
                break
            else:
                ctrl_c_received = False
        return ctrl_c_received

    def _cleanup_and_fire_ctrl_c(self, conn):
        conn.close()
        os.kill(signal.CTRL_C_EVENT, 0)

    def _signal_handler(self, signal, frame):
        time.sleep(1)
        sys.exit(0)

    def __init__(self, cmd, port):
        signal.signal(signal.SIGINT, self._signal_handler)
        subprocess.Popen(cmd)
        conn = self._create_connection(port)
        ctrl_c_req_received = self._wait_on_ctrl_c_request(conn)
        if ctrl_c_req_received:
            self._cleanup_and_fire_ctrl_c(conn)
        else:
            sys.exit(0)

if __name__ == "__main__":
    command_string = sys.argv[1]
    port_no = int(sys.argv[2])
    Wrapper(command_string, port_no)
New answer:
When you create the process, use the flag CREATE_NEW_PROCESS_GROUP. And then you can send CTRL_BREAK to the child process. The default behavior is the same as CTRL_C, except that it won't affect the calling process.
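A minimal sketch of this approach (Windows only; 'some_command' is a placeholder):

import signal
import subprocess

# Put the child in its own process group so CTRL_BREAK does not
# propagate back to the parent.
proc = subprocess.Popen(['some_command'],
                        creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
# ... later, interrupt the child without affecting this process:
proc.send_signal(signal.CTRL_BREAK_EVENT)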
Old answer:
My solution also involves a wrapper script, but it does not need IPC, so it is far simpler to use.
The wrapper script first detaches itself from any existing console, then attaches to the target console, then fires the Ctrl-C event.
import ctypes
import sys
kernel = ctypes.windll.kernel32
pid = int(sys.argv[1])
kernel.FreeConsole()
kernel.AttachConsole(pid)
kernel.SetConsoleCtrlHandler(None, 1)
kernel.GenerateConsoleCtrlEvent(0, 0)
sys.exit(0)
The initial process must be launched in a separate console so that the Ctrl-C event will not leak. Example
p = subprocess.Popen(['some_command'], creationflags=subprocess.CREATE_NEW_CONSOLE)
# Do something else
subprocess.check_call([sys.executable, 'ctrl_c.py', str(p.pid)]) # Send Ctrl-C
where I named the wrapper script as ctrl_c.py.
Try calling the GenerateConsoleCtrlEvent function using ctypes. As you are creating a new process group, the process group ID should be the same as the pid. So, something like
import ctypes
ctypes.windll.kernel32.GenerateConsoleCtrlEvent(0, proc.pid) # 0 => Ctrl-C
should work.
Update: You're right, I missed that part of the detail. Here's a post which suggests a possible solution, though it's a bit kludgy. More details are in this answer.
Here is a fully working example which doesn't need any modification in the target script.
This overrides the sitecustomize module, so it might not be suitable for every scenario. However, in this case you could use a *.pth file in site-packages to execute code at subprocess startup (see https://nedbatchelder.com/blog/201001/running_code_at_python_startup.html).
Edit: This only works out of the box for subprocesses in Python. Other processes have to call SetConsoleCtrlHandler(NULL, FALSE) manually.
main.py
import os
import signal
import subprocess
import sys
import time

def main():
    env = os.environ.copy()
    env['PYTHONPATH'] = '%s%s%s' % ('custom-site', os.pathsep,
                                    env.get('PYTHONPATH', ''))
    proc = subprocess.Popen(
        [sys.executable, 'sub.py'],
        env=env,
        creationflags=subprocess.CREATE_NEW_PROCESS_GROUP,
    )
    time.sleep(1)
    proc.send_signal(signal.CTRL_C_EVENT)
    proc.wait()

if __name__ == '__main__':
    main()
custom-site\sitecustomize.py
import ctypes
import sys

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)

if not kernel32.SetConsoleCtrlHandler(None, False):
    print('SetConsoleCtrlHandler Error: ', ctypes.get_last_error(),
          file=sys.stderr)
sub.py
import atexit
import time

def cleanup():
    print('cleanup')

atexit.register(cleanup)

while True:
    time.sleep(1)
I have a single-file solution with the following advantages:
- No external libraries (other than ctypes).
- Doesn't require the process to be opened in a specific way.
The solution is adapted from this Stack Overflow post, but I think it's much more elegant in Python.
import os
import signal
import subprocess
import sys
import time

# Terminates a Windows console app by sending Ctrl-C
def terminateConsole(processId: int, timeout: int = None) -> bool:
    currentFilePath = os.path.abspath(__file__)
    # Call the below code in a separate process. This is necessary due to the FreeConsole call.
    try:
        code = subprocess.call('{} {} {}'.format(sys.executable, currentFilePath, processId), timeout=timeout)
        if code == 0:
            return True
    except subprocess.TimeoutExpired:
        pass
    # Backup plan
    subprocess.call('taskkill /F /PID {}'.format(processId))

if __name__ == '__main__':
    pid = int(sys.argv[1])

    import ctypes
    kernel = ctypes.windll.kernel32
    r = kernel.FreeConsole()
    if r == 0: exit(-1)
    r = kernel.AttachConsole(pid)
    if r == 0: exit(-1)
    r = kernel.SetConsoleCtrlHandler(None, True)
    if r == 0: exit(-1)
    r = kernel.GenerateConsoleCtrlEvent(0, 0)
    if r == 0: exit(-1)
    r = kernel.FreeConsole()
    if r == 0: exit(-1)

    # Use tasklist to wait while the process is still alive.
    while True:
        time.sleep(1)
        # We pass in stdin as PIPE because there currently is no console, and stdin is currently invalid.
        searchOutput: bytes = subprocess.check_output('tasklist /FI "PID eq {}"'.format(pid), stdin=subprocess.PIPE)
        if str(pid) not in searchOutput.decode():
            break

    # The following two commands are not needed since we're about to close this script.
    # You can leave them here if you want to do more console operations.
    r = kernel.SetConsoleCtrlHandler(None, False)
    if r == 0: exit(-1)
    r = kernel.AllocConsole()
    if r == 0: exit(-1)
    exit(0)
For those interested in a "quick fix", I've made a console-ctrl package based on Siyuan Ren's answer to make it even easier to use.
Simply run pip install console-ctrl, and in your code:
import console_ctrl
import subprocess
# Start some command IN A SEPARATE CONSOLE
p = subprocess.Popen(['some_command'], creationflags=subprocess.CREATE_NEW_CONSOLE)
# ...
# Stop the target process
console_ctrl.send_ctrl_c(p.pid)
I have been trying this, but for some reason Ctrl+Break works and Ctrl+C does not. So using os.kill(signal.CTRL_C_EVENT, 0) fails, but doing os.kill(signal.CTRL_C_EVENT, 1) works. I am told this has something to do with the process creator being the only one that can pass a Ctrl-C. Does that make sense?
To clarify, while running fio manually in a command window, it appears to run as expected: using Ctrl+Break breaks without storing the log, as expected, and Ctrl+C finishes writing to the file, also as expected. The problem appears to be in the signal for the CTRL_C_EVENT.
It almost appears to be a bug in Python, but may rather be a bug in Windows. Also, one other thing: I had a cygwin version running, and sending the Ctrl+C in Python there worked as well, but then again we aren't really running native Windows there.
example:
import subprocess, time, signal, sys, os

command = '"C:\\Program Files\\fio\\fio.exe" --rw=randrw --bs=1M --numjobs=8 --iodepth=64 --direct=1 ' \
          '--sync=0 --ioengine=windowsaio --name=test --loops=10000 ' \
          '--size=99901800 --rwmixwrite=100 --do_verify=0 --filename=I\\:\\test ' \
          '--thread --output=C:\\output.txt'

def signal_handler(signal, frame):
    time.sleep(1)
    print 'Ctrl+C received in wrapper.py'

signal.signal(signal.SIGINT, signal_handler)
print 'command Starting'
subprocess.Popen(command)
print 'command started'
time.sleep(15)
print 'Timeout Completed'
os.kill(signal.CTRL_C_EVENT, 0)
(This was supposed to be a comment under Siyuan Ren's answer, but I don't have enough rep, so here's a slightly longer version.)
If you don't want to create any helper scripts you can use:
p = subprocess.Popen(['some_command'], creationflags=subprocess.CREATE_NEW_CONSOLE)
# Do something else
subprocess.run([
    sys.executable,
    "-c",
    "import ctypes, sys;"
    "kernel = ctypes.windll.kernel32;"
    "pid = int(sys.argv[1]);"
    "kernel.FreeConsole();"
    "kernel.AttachConsole(pid);"
    "kernel.SetConsoleCtrlHandler(None, 1);"
    "kernel.GenerateConsoleCtrlEvent(0, 0);"
    "sys.exit(0)",
    str(p.pid)
])  # Send Ctrl-C
But it won't work if you use PyInstaller - sys.executable points to your executable, not the Python interpreter. To solve that issue I've created a tiny utility for Windows: https://github.com/anadius/ctrlc
Now you can send the Ctrl+C event with:
subprocess.run(["ctrlc", str(p.pid)])

How to attach a debugger to a Python subprocess?

I need to debug a child process spawned by multiprocessing.Process(). The pdb debugger seems to be unaware of forking and unable to attach to already-running processes.
Are there any smarter Python debuggers which can be attached to a subprocess?
I've been searching for a simple to solution for this problem and came up with this:
import sys
import pdb

class ForkedPdb(pdb.Pdb):
    """A Pdb subclass that may be used
    from a forked multiprocessing child
    """
    def interaction(self, *args, **kwargs):
        _stdin = sys.stdin
        try:
            sys.stdin = open('/dev/stdin')
            pdb.Pdb.interaction(self, *args, **kwargs)
        finally:
            sys.stdin = _stdin
Use it the same way you might use the classic Pdb:
ForkedPdb().set_trace()
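For instance, a minimal sketch of dropping into the debugger from a multiprocessing worker (assuming the ForkedPdb class above is in scope):

import multiprocessing

def worker():
    ForkedPdb().set_trace()  # reads commands from /dev/stdin in the child
    print("resumed")

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()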
Winpdb is pretty much the definition of a smarter Python debugger. It explicitly supports going down a fork, not sure it works nicely with multiprocessing.Process() but it's worth a try.
For a list of candidates to check for support of your use case, see the list of Python Debuggers in the wiki.
This is an elaboration of Romuald's answer which restores the original stdin using its file descriptor. This keeps readline working inside the debugger. Besides, pdb's special management of KeyboardInterrupt is disabled so that it does not interfere with multiprocessing's SIGINT handler.
import os
import sys
import pdb

class ForkablePdb(pdb.Pdb):
    _original_stdin_fd = sys.stdin.fileno()
    _original_stdin = None

    def __init__(self):
        pdb.Pdb.__init__(self, nosigint=True)

    def _cmdloop(self):
        current_stdin = sys.stdin
        try:
            if not self._original_stdin:
                self._original_stdin = os.fdopen(self._original_stdin_fd)
            sys.stdin = self._original_stdin
            self.cmdloop()
        finally:
            sys.stdin = current_stdin
Building upon @memplex's idea, I had to modify it to get it to work with joblib by setting sys.stdin in the constructor as well as passing it along directly via joblib.
import os
import pdb
import signal
import sys
import joblib

_original_stdin_fd = None

class ForkablePdb(pdb.Pdb):
    _original_stdin = None
    _original_pid = os.getpid()

    def __init__(self):
        pdb.Pdb.__init__(self)
        if self._original_pid != os.getpid():
            if _original_stdin_fd is None:
                raise Exception("Must set ForkablePdb._original_stdin_fd to stdin fileno")
            self.current_stdin = sys.stdin
            if not self._original_stdin:
                self._original_stdin = os.fdopen(_original_stdin_fd)
            sys.stdin = self._original_stdin

    def _cmdloop(self):
        try:
            self.cmdloop()
        finally:
            sys.stdin = self.current_stdin

def handle_pdb(sig, frame):
    ForkablePdb().set_trace(frame)

def test(i, fileno):
    global _original_stdin_fd
    _original_stdin_fd = fileno
    while True:
        pass

if __name__ == '__main__':
    print "PID: %d" % os.getpid()
    signal.signal(signal.SIGUSR2, handle_pdb)
    ForkablePdb().set_trace()
    fileno = sys.stdin.fileno()
    joblib.Parallel(n_jobs=2)(joblib.delayed(test)(i, fileno) for i in range(10))
remote-pdb can be used to debug sub-processes. After installation, put the following lines in the code you need to debug:
import remote_pdb
remote_pdb.set_trace()
remote-pdb will print a port number which will accept a telnet connection for debugging that specific process. There are some caveats around worker launch order, where stdout goes when using various frontends, etc. To ensure a specific port is used (must be free and accessible to the current user), use the following instead:
from remote_pdb import RemotePdb
RemotePdb('127.0.0.1', 4444).set_trace()
remote-pdb may also be launched via the breakpoint() built-in in Python 3.7+.
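For example, a script can be started so that every breakpoint() call opens a remote session (the host/port environment variables here are as documented by remote-pdb; treat the exact names as an assumption to verify against its README):

$ PYTHONBREAKPOINT=remote_pdb.set_trace REMOTE_PDB_HOST=127.0.0.1 REMOTE_PDB_PORT=4444 python script.py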
Just use PuDB, which gives you an awesome TUI (a GUI on the terminal) and supports multiprocessing as follows:
from pudb import forked; forked.set_trace()
An idea I had was to create "dummy" classes to fake the implementation of the methods you are using from multiprocessing:
from multiprocessing import Pool

class DummyPool():
    @staticmethod
    def apply_async(func, args, kwds):
        return DummyApplyResult(func(*args, **kwds))

    def close(self): pass
    def join(self): pass

class DummyApplyResult():
    def __init__(self, result):
        self.result = result

    def get(self):
        return self.result

def foo(a, b, switch):
    # set trace when DummyPool is used
    # import ipdb; ipdb.set_trace()
    if switch:
        return b - a
    else:
        return a - b

if __name__ == '__main__':
    # xml = etree.parse('C:/Users/anmendoza/Downloads/jim - 8.1/running-config.xml')  # author's leftover; needs lxml's etree
    pool = DummyPool()  # switch between Pool() and DummyPool() here
    results = []
    results.append(pool.apply_async(foo, args=(1, 100), kwds={'switch': True}))
    pool.close()
    pool.join()
    results[0].get()
Here is a version of ForkedPdb (Romuald's solution) which will work on both Windows and *nix-based systems.
import sys
import pdb
import win32console

class MyHandle():
    def __init__(self):
        self.screenBuffer = win32console.GetStdHandle(win32console.STD_INPUT_HANDLE)

    def readline(self):
        return self.screenBuffer.ReadConsole(1000)

class ForkedPdb(pdb.Pdb):
    def interaction(self, *args, **kwargs):
        _stdin = sys.stdin
        try:
            if sys.platform == "win32":
                sys.stdin = MyHandle()
            else:
                sys.stdin = open('/dev/stdin')
            pdb.Pdb.interaction(self, *args, **kwargs)
        finally:
            sys.stdin = _stdin
The problem here is that Python always connects sys.stdin in the child process to os.devnull to avoid contention for the stream. But this means that when the debugger (or a simple input()) tries to connect to stdin to get input from the user, it immediately reaches end-of-file and reports an error.
One solution, at least if you don't expect multiple debuggers to run at the same time, is to reopen stdin in the child process. That can be done by setting sys.stdin to open(0), which always opens the active terminal. This in fact is what the ForkedPdb solution does, but it can be done more simply and in an os-independent manner like this:
import multiprocessing, sys

def main():
    process = multiprocessing.Process(target=worker)
    process.start()
    process.join()

def worker():
    # Python automatically closes sys.stdin for the subprocess, so we reopen
    # stdin. This enables pdb to connect to the terminal and accept commands.
    # See https://stackoverflow.com/a/30149635/3830997.
    sys.stdin = open(0)  # or os.fdopen(0)
    print("Hello from the subprocess.")
    breakpoint()  # or import pdb; pdb.set_trace()
    print("Exited from breakpoint in the subprocess.")

if __name__ == '__main__':
    main()
If you are on a supported platform, try DTrace. Most of the BSD / Solaris / OS X family support DTrace.
Here is an intro by the author. You can use DTrace to debug just about anything.
Here is a SO post on learning DTrace.
