Currently I am trying to create a unit test which opens a file (with its associated application) and then the test run should wait until that program is closed.
def test_HFG(self):
    # ...
    print "please edit this file"
    os.chdir(r'C:\test\a')
    os.startfile("myfile.vdx")
    # here I need a "stop until the program is closed" function
    # ...
Does anyone have an idea how to realize (as simply as possible) my plan?
os.startfile is, of course, completely non-blocking, with no option to wait.
I'd recommend using the subprocess module to call the Windows start command, which opens the file with its associated application (the same thing os.startfile does) but, with its /wait flag, blocks until the program exits. Note that start is a cmd.exe built-in, so it has to be run through the shell, and the empty "" argument supplies the window title that start expects when the file name is quoted.
e.g.:
subprocess.call('start "" /wait "%s"' % my_file, shell=True)
From the docs:
startfile() returns as soon as the associated application is
launched. There is no option to wait for the application to close, and
no way to retrieve the application’s exit status.
If you know the path of the application that should open the file, you could use subprocess.Popen(), which allows you to wait.
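For instance, a minimal sketch (the editor path here is a made-up placeholder; the application must be one that exits when you close the document):

import subprocess

# Hypothetical path to the application associated with .vdx files.
editor = r"C:\Program Files\MyEditor\editor.exe"

proc = subprocess.Popen([editor, r"C:\test\a\myfile.vdx"])
proc.wait()  # blocks until the editor process exits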
See:
http://docs.python.org/library/os.html#os.startfile
http://docs.python.org/library/subprocess.html#subprocess.Popen
The documentation of os.startfile explicitly says:
startfile() returns as soon as the associated application is launched.
There is no option to wait for the application to close, and no way to
retrieve the application’s exit status.
So, I recommend using an alternative method, such as launching it via subprocess.Popen, which does allow you to wait until the sub-process finishes.
answer = raw_input("Please edit this file (done): ")
if answer == "done":
    pass  # carry on with the test
else:
    pass  # stop the program, or whatever else you need
Just let the user tell you when they are done; that's the simplest way I can imagine.
You can call the underlying Windows APIs, as recommended in this C++ answer, either via pywin32:
def start_file_wait(fname):
    import win32con
    import win32api
    import win32event
    from win32com.shell import shellcon
    from win32com.shell.shell import ShellExecuteEx

    rc = ShellExecuteEx(
        fMask=shellcon.SEE_MASK_NOCLOSEPROCESS,
        nShow=win32con.SW_SHOW,
        lpFile=fname)
    hproc = rc['hProcess']
    win32event.WaitForSingleObject(hproc, win32event.INFINITE)
    win32api.CloseHandle(hproc)
or directly from ctypes:
def startfile_wait(fname):
    import ctypes
    from ctypes.wintypes import ULONG, DWORD, HANDLE, HKEY, HINSTANCE, HWND, LPCWSTR

    class SHELLEXECUTEINFOW(ctypes.Structure):
        _fields_ = [
            ("cbSize", DWORD),
            ("fMask", ULONG),
            ("hwnd", HWND),
            ("lpVerb", LPCWSTR),
            ("lpFile", LPCWSTR),
            ("lpParameters", LPCWSTR),
            ("lpDirectory", LPCWSTR),
            ("nShow", ctypes.c_int),
            ("hInstApp", HINSTANCE),
            ("lpIDList", ctypes.c_void_p),
            ("lpClass", LPCWSTR),
            ("hkeyClass", HKEY),
            ("dwHotKey", DWORD),
            ("DUMMYUNIONNAME", ctypes.c_void_p),
            ("hProcess", HANDLE)
        ]

    # note: the ctypes attribute is spelled "restype", not "res_type"
    shell_execute_ex = ctypes.windll.shell32.ShellExecuteExW
    shell_execute_ex.argtypes = [ctypes.POINTER(SHELLEXECUTEINFOW)]
    shell_execute_ex.restype = ctypes.c_bool

    wait_for_single_object = ctypes.windll.kernel32.WaitForSingleObject
    wait_for_single_object.argtypes = [HANDLE, DWORD]
    wait_for_single_object.restype = DWORD

    close_handle = ctypes.windll.kernel32.CloseHandle
    close_handle.argtypes = [HANDLE]
    close_handle.restype = ctypes.c_bool

    # https://stackoverflow.com/a/17638969/102441
    arg = SHELLEXECUTEINFOW()
    arg.cbSize = ctypes.sizeof(arg)
    arg.fMask = 0x00000040  # SEE_MASK_NOCLOSEPROCESS
    arg.hwnd = None
    arg.lpVerb = None
    arg.lpFile = fname
    arg.lpParameters = ""
    arg.lpDirectory = None
    arg.nShow = 10  # SW_SHOWDEFAULT
    arg.hInstApp = None
    ok = shell_execute_ex(arg)
    if not ok:
        raise ctypes.WinError()
    try:
        wait_for_single_object(arg.hProcess, -1)  # -1 wraps to INFINITE (0xFFFFFFFF)
    finally:
        close_handle(arg.hProcess)
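In the asker's scenario, usage would look something like this (file and directory names taken from the question; ShellExecuteEx resolves the relative name against the current directory):

import os

os.chdir(r'C:\test\a')
startfile_wait("myfile.vdx")  # returns once the associated application is closed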
Related
I'm trying to figure out how to get my program to open in a fullscreen console window.
Is there any command that you can type within the command prompt to toggle fullscreen?
If so, I'd imagine the code going something like:
from os import system
system("toggle.fullscreen")
{CODE HERE}
I understand mode con can be used, but that doesn't actually maximize the window, which would be much more useful for me. Thanks!
Here's a function to maximize the current console window. It uses ctypes to call WinAPI functions. First it calls GetLargestConsoleWindowSize in order to figure how big it can make the window, with the option to specify a number of lines that exceeds this in order to get a scrollback buffer. To do the work of resizing the screen buffer it simply calls mode.com via subprocess.check_call. Finally, it gets the console window handle via GetConsoleWindow and calls ShowWindow to maximize it.
import os
import ctypes
import msvcrt
import subprocess
from ctypes import wintypes

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
user32 = ctypes.WinDLL('user32', use_last_error=True)

SW_MAXIMIZE = 3

kernel32.GetConsoleWindow.restype = wintypes.HWND
kernel32.GetLargestConsoleWindowSize.restype = wintypes._COORD
kernel32.GetLargestConsoleWindowSize.argtypes = (wintypes.HANDLE,)
user32.ShowWindow.argtypes = (wintypes.HWND, ctypes.c_int)

def maximize_console(lines=None):
    fd = os.open('CONOUT$', os.O_RDWR)
    try:
        hCon = msvcrt.get_osfhandle(fd)
        max_size = kernel32.GetLargestConsoleWindowSize(hCon)
        if max_size.X == 0 and max_size.Y == 0:
            raise ctypes.WinError(ctypes.get_last_error())
    finally:
        os.close(fd)
    cols = max_size.X
    hWnd = kernel32.GetConsoleWindow()
    if cols and hWnd:
        if lines is None:
            lines = max_size.Y
        else:
            lines = max(min(lines, 9999), max_size.Y)
        subprocess.check_call('mode.com con cols={} lines={}'.format(
            cols, lines))
        user32.ShowWindow(hWnd, SW_MAXIMIZE)
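For example, a usage sketch (the line count of 5000 is an arbitrary value chosen to illustrate the scrollback option):

if __name__ == '__main__':
    # Maximize the window and keep a 5000-line scrollback buffer.
    maximize_console(lines=5000)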
You can use keyboard.press. Install it with pip3 install keyboard if it is not already installed.
Code:
import keyboard
keyboard.press('f11')
I found this a while back on a different post, and it works perfectly for console window maximization:
import win32gui, win32con
hwnd = win32gui.GetForegroundWindow()
win32gui.ShowWindow(hwnd, win32con.SW_MAXIMIZE)
I see there's win32process.GetWindowThreadProcessId(), which takes a window handle and returns its process id. Is there a way to do the opposite: get the window handle of a running process by its process id? Something like win32gui.GetWindowHandler(processId)?
Specifically, what I'm trying to do:
I have a Python script that runs an external program, let's say notepad.exe.
Notepad is launched when the runProgram() method is called. I want to prevent this method from running Notepad more than once. I accomplish this in the following way, using win32process:
import win32process as process
import sys

PORTABLE_APPLICATION_LOCATION = "C:\\Windows\\system32\\notepad.exe"
processHandler = -1

def runProgram():
    global processHandler
    # don't run a process more than once
    if isLiveProcess(processHandler):
        # Bring focus back to running window!
        return
    try:
        startObj = process.STARTUPINFO()
        myProcessTuple = process.CreateProcess(PORTABLE_APPLICATION_LOCATION,
                                               None, None, None, 8, 8, None, None, startObj)
        processHandler = myProcessTuple[2]  # dwProcessId
    except:
        print(sys.exc_info()[0])

def isLiveProcess(processHandler):  # processHandler is a dwProcessId
    processList = process.EnumProcesses()
    for aProcess in processList:
        if aProcess == processHandler:
            return True
    return False

runProgram()
This works as expected, but if the process is found to be already alive, I'd like to bring its window back to the front with win32gui.
I don't think the Windows API provides a method for this, but you can iterate over all open windows and find the one that belongs to your process.
I have modified your program so it looks like this:
import win32process as process
import win32gui
import sys

PORTABLE_APPLICATION_LOCATION = "C:\\Windows\\system32\\notepad.exe"
processHandler = -1

def callback(hwnd, procid):
    # GetWindowThreadProcessId returns (threadId, processId)
    if procid in process.GetWindowThreadProcessId(hwnd):
        win32gui.SetForegroundWindow(hwnd)

def show_window_by_process(procid):
    win32gui.EnumWindows(callback, procid)

def runProgram():
    global processHandler
    # don't run a process more than once
    if isLiveProcess(processHandler):
        # Bring focus back to running window!
        show_window_by_process(processHandler)
        return
    try:
        startObj = process.STARTUPINFO()
        myProcessTuple = process.CreateProcess(PORTABLE_APPLICATION_LOCATION,
                                               None, None, None, 8, 8, None, None, startObj)
        processHandler = myProcessTuple[2]  # dwProcessId
    except:
        print(sys.exc_info()[0])

def isLiveProcess(processHandler):  # processHandler is a dwProcessId
    processList = process.EnumProcesses()
    for aProcess in processList:
        if aProcess == processHandler:
            return True
    return False

runProgram()
I have a test harness (written in Python) that needs to shut down the program under test (written in C) by sending it ^C. On Unix,
proc.send_signal(signal.SIGINT)
works perfectly. On Windows, that throws an error ("signal 2 is not supported" or something like that). I am using Python 2.7 for Windows, so I have the impression that I should be able to do instead
proc.send_signal(signal.CTRL_C_EVENT)
but this doesn't do anything at all. What do I have to do? This is the code that creates the subprocess:
# Windows needs an extra argument passed to subprocess.Popen,
# but the constant isn't defined on Unix.
try: kwargs['creationflags'] = subprocess.CREATE_NEW_PROCESS_GROUP
except AttributeError: pass
proc = subprocess.Popen(argv,
                        stdin=open(os.path.devnull, "r"),
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        **kwargs)
There is a solution using a wrapper (as described in the link Vinay provided) which is started in a new console window with the Windows start command.
Code of the wrapper:
#wrapper.py
import subprocess, time, signal, sys, os
def signal_handler(signal, frame):
time.sleep(1)
print 'Ctrl+C received in wrapper.py'
signal.signal(signal.SIGINT, signal_handler)
print "wrapper.py started"
subprocess.Popen("python demo.py")
time.sleep(3) #Replace with your IPC code here, which waits on a fire CTRL-C request
os.kill(signal.CTRL_C_EVENT, 0)
Code of the program catching CTRL-C:
#demo.py
import signal, sys, time

def signal_handler(signal, frame):
    print 'Ctrl+C received in demo.py'
    time.sleep(1)
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)
print 'demo.py started'
#signal.pause()  # does not work under Windows
while True:
    time.sleep(1)
Launch the wrapper, e.g.:
PythonPrompt> import subprocess
PythonPrompt> subprocess.Popen("start python wrapper.py", shell=True)
You need to add some IPC code which lets you tell the wrapper to fire the os.kill(0, signal.CTRL_C_EVENT) call. I used sockets for this purpose in my application.
Explanation:
Background
send_signal(CTRL_C_EVENT) does not work because CTRL_C_EVENT is only for os.kill. [REF1]
os.kill(0, CTRL_C_EVENT) sends the signal to all processes running in the current cmd window. [REF2]
Popen(..., creationflags=CREATE_NEW_PROCESS_GROUP) does not work because CTRL_C_EVENT is ignored for process groups. [REF2]
This is a bug in the python documentation [REF3]
Implemented solution
Let your program run in a different cmd window with the Windows shell command start.
Add a CTRL-C request wrapper between your control application and the application which should get the CTRL-C signal. The wrapper will run in the same cmd window as the application which should get the CTRL-C signal.
The wrapper will shut down itself and the program which should get the CTRL-C signal by sending the CTRL_C_EVENT to all processes in the cmd window.
The control program should be able to request the wrapper to send the CTRL-C signal. This might be implemented through IPC means, e.g. sockets.
Helpful posts were:
http://social.msdn.microsoft.com/Forums/en-US/windowsgeneraldevelopmentissues/thread/dc9586ab-1ee8-41aa-a775-cf4828ac1239/#6589714f-12a7-447e-b214-27372f31ca11
Can I send a ctrl-C (SIGINT) to an application on Windows?
Sending SIGINT to a subprocess of python
http://bugs.python.org/issue9524
http://ss64.com/nt/start.html
http://objectmix.com/python/387639-sending-cntrl-c.html#post1443948
Update: IPC-based CTRL-C Wrapper
Here you can find a self-written Python module providing CTRL-C wrapping, including a socket-based IPC.
The syntax is quite similar to the subprocess module.
Usage:
>>> import winctrlc
>>> p1 = winctrlc.Popen("python demo.py")
>>> p2 = winctrlc.Popen("python demo.py")
>>> p3 = winctrlc.Popen("python demo.py")
>>> p2.send_ctrl_c()
>>> p1.send_ctrl_c()
>>> p3.send_ctrl_c()
Code
import socket
import subprocess
import time
import random
import signal, os, sys

class Popen:
    _port = random.randint(10000, 50000)
    _connection = ''

    def _start_ctrl_c_wrapper(self, cmd):
        cmd_str = "start \"\" python winctrlc.py " + "\"" + cmd + "\"" + " " + str(self._port)
        subprocess.Popen(cmd_str, shell=True)

    def _create_connection(self):
        self._connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._connection.connect(('localhost', self._port))

    def send_ctrl_c(self):
        self._connection.send(Wrapper.TERMINATION_REQ)
        self._connection.close()

    def __init__(self, cmd):
        self._start_ctrl_c_wrapper(cmd)
        self._create_connection()

class Wrapper:
    TERMINATION_REQ = "Terminate with CTRL-C"

    def _create_connection(self, port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(('localhost', port))
        s.listen(1)
        conn, addr = s.accept()
        return conn

    def _wait_on_ctrl_c_request(self, conn):
        while True:
            data = conn.recv(1024)
            if data == self.TERMINATION_REQ:
                ctrl_c_received = True
                break
            else:
                ctrl_c_received = False
        return ctrl_c_received

    def _cleanup_and_fire_ctrl_c(self, conn):
        conn.close()
        os.kill(0, signal.CTRL_C_EVENT)  # note the order: os.kill(pid, sig)

    def _signal_handler(self, signal, frame):
        time.sleep(1)
        sys.exit(0)

    def __init__(self, cmd, port):
        signal.signal(signal.SIGINT, self._signal_handler)
        subprocess.Popen(cmd)
        conn = self._create_connection(port)
        ctrl_c_req_received = self._wait_on_ctrl_c_request(conn)
        if ctrl_c_req_received:
            self._cleanup_and_fire_ctrl_c(conn)
        else:
            sys.exit(0)

if __name__ == "__main__":
    command_string = sys.argv[1]
    port_no = int(sys.argv[2])
    Wrapper(command_string, port_no)
New answer:
When you create the process, use the flag CREATE_NEW_PROCESS_GROUP. Then you can send CTRL_BREAK to the child process. The default behavior is the same as for CTRL_C, except that it won't affect the calling process.
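A minimal sketch of that approach (some_command is a placeholder, and it assumes the child treats CTRL_BREAK the way it treats CTRL_C):

import signal
import subprocess

# CREATE_NEW_PROCESS_GROUP makes the child the root of its own process group,
# so the CTRL_BREAK event reaches it without affecting this process.
p = subprocess.Popen(['some_command'],
                     creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
# ... later ...
p.send_signal(signal.CTRL_BREAK_EVENT)
p.wait()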
Old answer:
My solution also involves a wrapper script, but it does not need IPC, so it is far simpler to use.
The wrapper script first detaches itself from any existing console, then attaches to the target console, and then fires the Ctrl-C event.
import ctypes
import sys
kernel = ctypes.windll.kernel32
pid = int(sys.argv[1])
kernel.FreeConsole()
kernel.AttachConsole(pid)
kernel.SetConsoleCtrlHandler(None, 1)
kernel.GenerateConsoleCtrlEvent(0, 0)
sys.exit(0)
The initial process must be launched in a separate console so that the Ctrl-C event will not leak. For example:
p = subprocess.Popen(['some_command'], creationflags=subprocess.CREATE_NEW_CONSOLE)
# Do something else
subprocess.check_call([sys.executable, 'ctrl_c.py', str(p.pid)]) # Send Ctrl-C
where I named the wrapper script as ctrl_c.py.
Try calling the GenerateConsoleCtrlEvent function using ctypes. As you are creating a new process group, the process group ID should be the same as the pid. So, something like
import ctypes
ctypes.windll.kernel32.GenerateConsoleCtrlEvent(0, proc.pid) # 0 => Ctrl-C
should work.
Update: You're right, I missed that part of the detail. Here's a post which suggests a possible solution, though it's a bit kludgy. More details are in this answer.
Here is a fully working example which doesn't need any modification in the target script.
This overrides the sitecustomize module, so it might not be suitable for every scenario. However, you could instead use a *.pth file in site-packages to execute code at subprocess startup (see https://nedbatchelder.com/blog/201001/running_code_at_python_startup.html).
Edit: This only works out of the box for subprocesses started from Python. Other processes have to call SetConsoleCtrlHandler(NULL, FALSE) manually.
main.py
import os
import signal
import subprocess
import sys
import time

def main():
    env = os.environ.copy()
    env['PYTHONPATH'] = '%s%s%s' % ('custom-site', os.pathsep,
                                    env.get('PYTHONPATH', ''))
    proc = subprocess.Popen(
        [sys.executable, 'sub.py'],
        env=env,
        creationflags=subprocess.CREATE_NEW_PROCESS_GROUP,
    )
    time.sleep(1)
    proc.send_signal(signal.CTRL_C_EVENT)
    proc.wait()

if __name__ == '__main__':
    main()
custom-site\sitecustomize.py
import ctypes
import sys

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)

if not kernel32.SetConsoleCtrlHandler(None, False):
    print('SetConsoleCtrlHandler Error: ', ctypes.get_last_error(),
          file=sys.stderr)
sub.py
import atexit
import time

def cleanup():
    print('cleanup')

atexit.register(cleanup)

while True:
    time.sleep(1)
I have a single-file solution with the following advantages:
- No external libraries. (Other than ctypes)
- Doesn't require the process to be opened in a specific way.
The solution is adapted from this Stack Overflow post, but I think it's much more elegant in Python.
import os
import signal
import subprocess
import sys
import time

# Terminates a Windows console app by sending Ctrl-C
def terminateConsole(processId: int, timeout: int = None) -> bool:
    currentFilePath = os.path.abspath(__file__)
    # Call the below code in a separate process. This is necessary due to the FreeConsole call.
    try:
        code = subprocess.call('{} {} {}'.format(sys.executable, currentFilePath, processId),
                               timeout=timeout)
        if code == 0:
            return True
    except subprocess.TimeoutExpired:
        pass
    # Backup plan
    subprocess.call('taskkill /F /PID {}'.format(processId))
    return False

if __name__ == '__main__':
    pid = int(sys.argv[1])

    import ctypes
    kernel = ctypes.windll.kernel32

    r = kernel.FreeConsole()
    if r == 0: exit(-1)
    r = kernel.AttachConsole(pid)
    if r == 0: exit(-1)
    r = kernel.SetConsoleCtrlHandler(None, True)
    if r == 0: exit(-1)
    r = kernel.GenerateConsoleCtrlEvent(0, 0)
    if r == 0: exit(-1)
    r = kernel.FreeConsole()
    if r == 0: exit(-1)

    # Use tasklist to wait while the process is still alive.
    while True:
        time.sleep(1)
        # We pass in stdin as PIPE because there currently is no console, and stdin is currently invalid.
        searchOutput: bytes = subprocess.check_output('tasklist /FI "PID eq {}"'.format(pid),
                                                      stdin=subprocess.PIPE)
        if str(pid) not in searchOutput.decode():
            break

    # The following two commands are not needed since we're about to close this script.
    # You can leave them here if you want to do more console operations.
    r = kernel.SetConsoleCtrlHandler(None, False)
    if r == 0: exit(-1)
    r = kernel.AllocConsole()
    if r == 0: exit(-1)
    exit(0)
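Usage from another script might then look like this (a sketch; the pid value is a placeholder for a real console process id):

# Returns True if the Ctrl-C shutdown succeeded; falls back to taskkill otherwise.
success = terminateConsole(1234, timeout=10)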
For those interested in a "quick fix", I've made a console-ctrl package based on Siyuan Ren's answer to make it even easier to use.
Simply run pip install console-ctrl, and in your code:
import console_ctrl
import subprocess
# Start some command IN A SEPARATE CONSOLE
p = subprocess.Popen(['some_command'], creationflags=subprocess.CREATE_NEW_CONSOLE)
# ...
# Stop the target process
console_ctrl.send_ctrl_c(p.pid)
I have been trying this, but for some reason Ctrl+Break works and Ctrl+C does not. So using os.kill(signal.CTRL_C_EVENT, 0) fails, but doing os.kill(signal.CTRL_C_EVENT, 1) works. I am told this has something to do with the creating process being the only one that can pass a Ctrl+C? Does that make sense?
To clarify: while running fio manually in a command window, it appears to run as expected. Using CTRL + BREAK breaks without storing the log, as expected, and CTRL + C finishes writing to the file, also as expected. The problem appears to be in the signal for the CTRL_C_EVENT.
It almost appears to be a bug in Python, but it may rather be a bug in Windows. One other thing: I had a Cygwin version running, and sending the Ctrl+C in Python there worked as well, but then again we aren't really running native Windows there.
example:
import subprocess, time, signal, sys, os

command = '"C:\\Program Files\\fio\\fio.exe" --rw=randrw --bs=1M --numjobs=8 --iodepth=64 --direct=1 ' \
          '--sync=0 --ioengine=windowsaio --name=test --loops=10000 ' \
          '--size=99901800 --rwmixwrite=100 --do_verify=0 --filename=I\\:\\test ' \
          '--thread --output=C:\\output.txt'

def signal_handler(signal, frame):
    time.sleep(1)
    print 'Ctrl+C received in wrapper.py'

signal.signal(signal.SIGINT, signal_handler)
print 'command Starting'
subprocess.Popen(command)
print 'command started'
time.sleep(15)
print 'Timeout Completed'
os.kill(signal.CTRL_C_EVENT, 0)
(This was supposed to be a comment under Siyuan Ren's answer but I don't have enough rep so here's a slightly longer version.)
If you don't want to create any helper scripts you can use:
p = subprocess.Popen(['some_command'], creationflags=subprocess.CREATE_NEW_CONSOLE)
# Do something else
subprocess.run([
    sys.executable,
    "-c",
    "import ctypes, sys;"
    "kernel = ctypes.windll.kernel32;"
    "pid = int(sys.argv[1]);"
    "kernel.FreeConsole();"
    "kernel.AttachConsole(pid);"
    "kernel.SetConsoleCtrlHandler(None, 1);"
    "kernel.GenerateConsoleCtrlEvent(0, 0);"
    "sys.exit(0)",
    str(p.pid)
])  # Send Ctrl-C
But it won't work if you use PyInstaller - sys.executable points to your executable, not the Python interpreter. To solve that issue I've created a tiny utility for Windows: https://github.com/anadius/ctrlc
Now you can send the Ctrl+C event with:
subprocess.run(["ctrlc", str(p.pid)])
I need to debug a child process spawned by multiprocessing.Process(). The pdb debugger seems to be unaware of forking and unable to attach to already running processes.
Are there any smarter python debuggers which can be attached to a subprocess?
I've been searching for a simple solution to this problem and came up with this:
import sys
import pdb

class ForkedPdb(pdb.Pdb):
    """A Pdb subclass that may be used
    from a forked multiprocessing child
    """
    def interaction(self, *args, **kwargs):
        _stdin = sys.stdin
        try:
            sys.stdin = open('/dev/stdin')
            pdb.Pdb.interaction(self, *args, **kwargs)
        finally:
            sys.stdin = _stdin
Use it the same way you might use the classic Pdb:
ForkedPdb().set_trace()
Winpdb is pretty much the definition of a smarter Python debugger. It explicitly supports going down a fork; I'm not sure it works nicely with multiprocessing.Process(), but it's worth a try.
For a list of candidates to check for support of your use case, see the list of Python Debuggers in the wiki.
This is an elaboration of Romuald's answer which restores the original stdin using its file descriptor. This keeps readline working inside the debugger. Besides, pdb's special handling of KeyboardInterrupt is disabled, so that it does not interfere with the multiprocessing SIGINT handler.
import os
import sys
import pdb

class ForkablePdb(pdb.Pdb):

    _original_stdin_fd = sys.stdin.fileno()
    _original_stdin = None

    def __init__(self):
        pdb.Pdb.__init__(self, nosigint=True)

    def _cmdloop(self):
        current_stdin = sys.stdin
        try:
            if not self._original_stdin:
                self._original_stdin = os.fdopen(self._original_stdin_fd)
            sys.stdin = self._original_stdin
            self.cmdloop()
        finally:
            sys.stdin = current_stdin
Building upon @memplex's idea, I had to modify it to get it to work with joblib by setting sys.stdin in the constructor as well as passing it along directly via joblib.
import os
import pdb
import signal
import sys
import joblib

_original_stdin_fd = None

class ForkablePdb(pdb.Pdb):

    _original_stdin = None
    _original_pid = os.getpid()

    def __init__(self):
        pdb.Pdb.__init__(self)
        # set current_stdin unconditionally so _cmdloop can restore it
        # in the parent process as well
        self.current_stdin = sys.stdin
        if self._original_pid != os.getpid():
            if _original_stdin_fd is None:
                raise Exception("Must set ForkablePdb._original_stdin_fd to stdin fileno")
            if not self._original_stdin:
                self._original_stdin = os.fdopen(_original_stdin_fd)
            sys.stdin = self._original_stdin

    def _cmdloop(self):
        try:
            self.cmdloop()
        finally:
            sys.stdin = self.current_stdin

def handle_pdb(sig, frame):
    ForkablePdb().set_trace(frame)

def test(i, fileno):
    global _original_stdin_fd
    _original_stdin_fd = fileno
    while True:
        pass

if __name__ == '__main__':
    print "PID: %d" % os.getpid()
    signal.signal(signal.SIGUSR2, handle_pdb)
    ForkablePdb().set_trace()
    fileno = sys.stdin.fileno()
    joblib.Parallel(n_jobs=2)(joblib.delayed(test)(i, fileno) for i in range(10))
remote-pdb can be used to debug sub-processes. After installation, put the following lines in the code you need to debug:
import remote_pdb
remote_pdb.set_trace()
remote-pdb will print a port number which will accept a telnet connection for debugging that specific process. There are some caveats around worker launch order, where stdout goes when using various frontends, etc. To ensure a specific port is used (must be free and accessible to the current user), use the following instead:
from remote_pdb import RemotePdb
RemotePdb('127.0.0.1', 4444).set_trace()
remote-pdb may also be launched via the breakpoint() command in Python 3.7.
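For example, assuming remote-pdb's documented environment variables (REMOTE_PDB_HOST and REMOTE_PDB_PORT; your_script.py is a placeholder), a breakpoint() call can be routed to it like this:

PYTHONBREAKPOINT=remote_pdb.set_trace REMOTE_PDB_HOST=127.0.0.1 REMOTE_PDB_PORT=4444 python your_script.py

You can then attach with telnet 127.0.0.1 4444 as above.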
Just use PuDB, which gives you an awesome TUI (a GUI in the terminal) and supports multiprocessing, as follows:
from pudb import forked; forked.set_trace()
An idea I had was to create "dummy" classes to fake the implementation of the methods you are using from multiprocessing:
from multiprocessing import Pool

class DummyPool():
    @staticmethod
    def apply_async(func, args, kwds):
        return DummyApplyResult(func(*args, **kwds))

    def close(self): pass
    def join(self): pass

class DummyApplyResult():
    def __init__(self, result):
        self.result = result

    def get(self):
        return self.result

def foo(a, b, switch):
    # set trace when DummyPool is used
    # import ipdb; ipdb.set_trace()
    if switch:
        return b - a
    else:
        return a - b

if __name__ == '__main__':
    pool = DummyPool()  # switch between Pool() and DummyPool() here
    results = []
    results.append(pool.apply_async(foo, args=(1, 100), kwds={'switch': True}))
    pool.close()
    pool.join()
    results[0].get()
Here is a version of ForkedPdb (Romuald's solution) which will work on both Windows and *nix-based systems.
import sys
import pdb
import win32console

class MyHandle():
    def __init__(self):
        self.screenBuffer = win32console.GetStdHandle(win32console.STD_INPUT_HANDLE)

    def readline(self):
        return self.screenBuffer.ReadConsole(1000)

class ForkedPdb(pdb.Pdb):
    def interaction(self, *args, **kwargs):
        _stdin = sys.stdin
        try:
            if sys.platform == "win32":
                sys.stdin = MyHandle()
            else:
                sys.stdin = open('/dev/stdin')
            pdb.Pdb.interaction(self, *args, **kwargs)
        finally:
            sys.stdin = _stdin
The problem here is that Python always connects sys.stdin in the child process to os.devnull to avoid contention for the stream. But this means that when the debugger (or a simple input()) tries to connect to stdin to get input from the user, it immediately reaches end-of-file and reports an error.
One solution, at least if you don't expect multiple debuggers to run at the same time, is to reopen stdin in the child process. That can be done by setting sys.stdin to open(0), which always opens the active terminal. This is in fact what the ForkedPdb solution does, but it can be done more simply and in an OS-independent manner like this:
import multiprocessing, sys

def main():
    process = multiprocessing.Process(target=worker)
    process.start()
    process.join()

def worker():
    # Python automatically closes sys.stdin for the subprocess, so we reopen
    # stdin. This enables pdb to connect to the terminal and accept commands.
    # See https://stackoverflow.com/a/30149635/3830997.
    sys.stdin = open(0)  # or os.fdopen(0)
    print("Hello from the subprocess.")
    breakpoint()  # or import pdb; pdb.set_trace()
    print("Exited from breakpoint in the subprocess.")

if __name__ == '__main__':
    main()
If you are on a supported platform, try DTrace. Most of the BSD / Solaris / OS X family support DTrace.
Here is an intro by the author. You can use Dtrace to debug just about anything.
Here is a SO post on learning DTrace.
I have a little script which logs users that sign in to my Pidgin/MSN account:
#!/usr/bin/env python
def log_names(buddy):
    name = str(purple.PurpleBuddyGetName(buddy))
    account = purple.PurpleAccountGetUsername(purple.PurpleBuddyGetAccount(buddy))
    if account == u'dummy_account@hotmail.com':
        try: log[name] += 1
        except KeyError: log[name] = 1
        log.sync()

import dbus, gobject, shelve
from dbus.mainloop.glib import DBusGMainLoop
dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)

bus = dbus.SessionBus()
log = shelve.open('pidgin.log')
obj = bus.get_object('im.pidgin.purple.PurpleService',
                     '/im/pidgin/purple/PurpleObject')
purple = dbus.Interface(obj, 'im.pidgin.purple.PurpleInterface')

bus.add_signal_receiver(log_names,
                        dbus_interface='im.pidgin.purple.PurpleInterface',
                        signal_name='BuddySignedOn')

loop = gobject.MainLoop()
loop.run()
I want to add a simple interactive console to this which lets me query the data in the log object, but I'm stuck on how to implement it.
Do I use threads of some kind, or can I use some sort of callback within gobject.MainLoop()?
You should look in the direction of general GObject/GLib programming (this is where gobject.MainLoop() comes from). You could use threads, or you could use event callbacks, whatever fits. For example, here is a simple 'console' using event callbacks. Add this just before the loop.run():
import glib, sys, os, fcntl

class IODriver(object):
    def __init__(self, line_callback):
        self.buffer = ''
        self.line_callback = line_callback
        flags = fcntl.fcntl(sys.stdin.fileno(), fcntl.F_GETFL)
        flags |= os.O_NONBLOCK
        fcntl.fcntl(sys.stdin.fileno(), fcntl.F_SETFL, flags)
        glib.io_add_watch(sys.stdin, glib.IO_IN, self.io_callback)

    def io_callback(self, fd, condition):
        chunk = fd.read()
        for char in chunk:
            self.buffer += char
            if char == '\n':
                self.line_callback(self.buffer)
                self.buffer = ''
        return True

def line_entered(line):
    print "You have typed:", line.strip()

d = IODriver(line_entered)
If you are building a PyGTK application, you don't have to start a separate main loop for D-Bus, because it will use the main application's main loop. There are also main loops available for other libraries, for example dbus.mainloop.qt for PyQt4.
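For instance, a minimal sketch of the Qt variant (assuming dbus-python's Qt main loop support is installed):

from dbus.mainloop.qt import DBusQtMainLoop
import dbus

# Install the Qt main loop as the default for dbus-python, analogous to
# DBusGMainLoop(set_as_default=True) in the GLib example above.
DBusQtMainLoop(set_as_default=True)
bus = dbus.SessionBus()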