Python Process which is joined will not call atexit

I thought Python Processes call their atexit functions when they terminate. Note that I'm using Python 2.7. Here is a simple example:
from __future__ import print_function
import atexit
from multiprocessing import Process

def test():
    atexit.register(lambda: print("atexit function ran"))

process = Process(target=test)
process.start()
process.join()
I'd expect this to print "atexit function ran" but it does not.
Note that this question:
Python process won't call atexit
is similar, but it involves Processes that are terminated by a signal, and the answer involves intercepting that signal. The Processes in this question exit gracefully, so (as far as I can tell) that question and its answer do not apply (unless these Processes are somehow exiting due to a signal?).

I did some research by looking at how this is implemented in CPython. This assumes you are running on Unix; if you are running on Windows, the following might not be valid, as the implementation of processes in multiprocessing differs there.
It turns out that os._exit() is always called at the end of the child process. That, together with the following note from the documentation for atexit, should explain why your lambda isn't running.
Note: The functions registered via this module are not called when the
program is killed by a signal not handled by Python, when a Python
fatal internal error is detected, or when os._exit() is called.
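You can see the same effect without multiprocessing at all. In this minimal sketch the process exits via os._exit() directly, and the registered handler never runs:
from __future__ import print_function
import atexit
import os

atexit.register(lambda: print("atexit function ran"))
os._exit(0)  # bypasses atexit handlers; nothing is printed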
Here's an excerpt from the Popen class for CPython 2.7, used for forking processes. Note that the last statement of the forked process is a call to os._exit().
# Lib/multiprocessing/forking.py
class Popen(object):

    def __init__(self, process_obj):
        sys.stdout.flush()
        sys.stderr.flush()
        self.returncode = None
        self.pid = os.fork()
        if self.pid == 0:
            if 'random' in sys.modules:
                import random
                random.seed()
            code = process_obj._bootstrap()
            sys.stdout.flush()
            sys.stderr.flush()
            os._exit(code)
In Python 3.4, the os._exit() call is still there if you are starting processes by forking, which is the default. But it seems you can change that; see Contexts and start methods for more information. I haven't tried it, but perhaps using the spawn start method would work? It isn't available for Python 2.7, though.
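If the goal is just to run cleanup when the child finishes normally, a portable workaround on any Python version is to skip atexit in the child and put the cleanup in a try/finally inside the target function. A sketch, where cleanup stands in for whatever your handler does:
from __future__ import print_function
from multiprocessing import Process

def cleanup():
    print("cleanup function ran")

def test():
    try:
        pass  # the child's real work goes here
    finally:
        cleanup()  # runs on normal completion, before the child reaches os._exit()

process = Process(target=test)
process.start()
process.join()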

Related

gdb.execute blocks all the threads in python scripts

I am scripting GDB with Python 2.7.
I am simply stepping instructions with gdb.execute("stepi"). If the debugged program is idling and waiting for user interaction, gdb.execute("stepi") doesn't return. If there is such a situation, I want to stop the debugging session without terminating gdb.
To do so, I create a thread that will kill the debugged process if the current instruction ran for more than x seconds:
from ctypes import c_ulonglong, c_bool
from os import kill
from threading import Thread
from time import sleep
import signal

# Threaded function that will kill the loaded program if it's idling
def check_for_idle(pid, it, program_exited):
    delta_max = 0.1  # Max delay between 2 instructions, in seconds
    while not program_exited.value:
        it_prev = c_ulonglong(it.value)  # Previous value of the instruction counter
        sleep(delta_max)
        # If the previous instruction lasted more than 'delta_max', kill the debugged process
        if it_prev.value == it.value:
            # The process pid has been retrieved beforehand
            kill(pid, signal.SIGTERM)
            program_exited.value = True
            print("idle_process_end")

# We need mutable primitives in order to update them from the thread
it = c_ulonglong(0)  # Instruction counter
program_exited = c_bool(False)

t = Thread(target=check_for_idle, args=(pid, it, program_exited))
t.start()
while not program_exited.value:
    gdb.execute("si")  # Step one instruction
    it.value += 1
However, gdb.execute is pausing my thread... Is there another way to kill the debugged process if it is idling?
However, gdb.execute is pausing my thread
What is happening here is that gdb.execute does not release Python's global lock when calling into gdb. So, while the gdb command executes, other Python threads are stuck.
This is just an oversight in gdb. I've filed a bug for it.
Is there another way to kill the debugged process if it is idling?
There is one other technique you can try, though I am not certain it will work. Unfortunately, this part of gdb is not fully fleshed out at the moment, so feel free to file bug reports.
The main idea is to run gdb commands on the main thread -- but not from Python. So, try writing your stepping loop using the gdb CLI, maybe like:
(gdb) while 1
> stepi
> end
Then your thread should be able to kill the inferior. Another approach might be for your thread to inject a gdb command into the main loop using gdb.post_event.
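For the gdb.post_event route, the watchdog thread would hand the kill over to gdb's main event loop instead of calling os.kill itself. A sketch, where kill_inferior is a hypothetical helper and gdb.post_event is the documented API for scheduling a callable on gdb's main thread:
import gdb

def kill_inferior():
    # Executed on gdb's main thread, where gdb commands are safe to run
    gdb.execute("kill")

# Called from the watchdog thread instead of os.kill(pid, ...)
gdb.post_event(kill_inferior)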

Forking and exiting from child in python

I'm trying to fork a process, do something in the child and then exit from it (see code below). To exit I first tried sys.exit which turned out to be a problem because an intermediate function caught the SystemExit exception (as in the code below) and so the child didn't actually terminate. I figured out that I should use os._exit instead. Now the child terminates, but I still see defunct processes lying around (when I do ps -ef). Is there a way to avoid these?
import os, sys

def fctn():
    if os.fork() != 0:
        return 0
    # sys.exit(0)
    os._exit(0)

while True:
    str = raw_input()
    try:
        print(fctn())
    except SystemExit:
        print('Caught SystemExit.')
Edit: this was actually not really a Python question but more of a Unix question (so I guess results may vary depending on the system). Ivan's answer suggests that I should do something like
def handleSIGCHLD(sig, frame):
    os.wait()

signal.signal(signal.SIGCHLD, handleSIGCHLD)
while for me a simple
signal.signal(signal.SIGCHLD, signal.SIG_IGN)
also works.
And then it's probably true that I should use some library...
You should wait() for a child to remove its zombie process entry from the table.
Finally, to offload tasks to children, you may be better off with multiprocessing.
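A minimal sketch of the multiprocessing route: the parent starts the child and joins it, so the child is reaped automatically and no defunct entry is left behind (work here stands in for your real task):
from multiprocessing import Process

def work():
    pass  # stand-in for the task to offload to the child

if __name__ == '__main__':
    p = Process(target=work)
    p.start()
    p.join()  # reaps the child; no zombie is left behind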
You are better off using the subprocess module for this. It's the preferred way of forking off a process.
https://docs.python.org/2/library/subprocess.html#subprocess.check_call

Why does the billiard multiprocessing module require the "if __name__=='__main__'" line?

If I have the following code:
def f():
    print 'ok!'
    import sys
    sys.exit()

if __name__=='__main__':
    import billiard
    billiard.forking_enable(0)

    p = billiard.Process(target=f)
    p.start()
    while p.is_alive():
        pass
The script behaves as expected, printing "ok!" and ending. But if I omit the if __name__=='__main__': line and de-indent the following lines, my machine (OS X) goes crazy, continually spawning tons of Python processes until I killall Python. Any idea what's going on here?
(To those marking this as a duplicate, note that while the other question asks the purpose of if __name__=='__main__' generally, I'm specifically asking why failure to use it here causes dramatically unexpected behaviour)
You're disabling fork support with the line:
billiard.forking_enable(0)
That means that the library will need to spawn (instead of fork) your child process, and have it re-import the __main__ module to run f, just like Windows does. Without the if __name__ ... guard, re-importing the __main__ module in the children will also mean re-running your code that creates the billiard.Process, which creates an infinite loop.
If you leave fork enabled, the re-import in the child process isn't necessary, so everything works fine with or without the if __name__ ... guard.
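To make the spawn-style setup safe, everything that creates processes has to live under the guard, so a child's re-import of __main__ only defines f and then stops. A sketch of that structure (the same shape applies to the stdlib multiprocessing module with the spawn start method):
import billiard

def f():
    print 'ok!'

if __name__ == '__main__':
    # Only the original process runs this block. A spawned child
    # re-imports this module, sees __name__ != '__main__', defines f,
    # and stops, so no new processes are created recursively.
    billiard.forking_enable(0)
    p = billiard.Process(target=f)
    p.start()
    p.join()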

Python subprocess.call thread hang, subprocess.popen no hang

I am trying to automate the installation of a specific program using Sikuli and scripts on Windows 7. I needed to start the program installer and then use Sikuli to step through the rest of the installation. I did this using Python 2.7.
This code works as expected by creating a thread, calling the subprocess, and then continuing the main process:
import subprocess
from threading import Thread

class Installer(Thread):
    def __init__(self):
        Thread.__init__(self)

    def run(self):
        subprocess.Popen(["msiexec", "/i", r"c:\path\to\installer.msi"], shell=True)

i = Installer()
i.run()
print "Will show up while installer is running."
print "Other things happen"
i.join()
This code does not operate as desired. It will start the installer but then hang:
import subprocess
from threading import Thread

class Installer(Thread):
    def __init__(self):
        Thread.__init__(self)

    def run(self):
        subprocess.call(r"msiexec /i c:\path\to\installer.msi")

i = Installer()
i.run()
print "Will not show up while installer is running."
print "Other things happen"
i.join()
I understand that subprocess.call will wait for the process to terminate. Why does that prevent the main thread from continuing on? Shouldn't the main thread continue execution immediately after the process call?
Why is there such a difference in behavior?
I have only just recently started using threads.
You're calling i.run(), but what you should be calling is i.start(). start() invokes run() in a separate thread, but calling run() directly will execute it in the main thread.
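With that fix, the tail of the example becomes:
i = Installer()
i.start()  # runs Installer.run() on a new thread
print "Will show up while installer is running."
print "Other things happen"
i.join()  # wait for the installer thread to finish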
First, you need to add the command-line parameters to your install command that make it a silent install:
http://msdn.microsoft.com/en-us/library/aa372024%28v=vs.85%29.aspx
The subprocess is probably hung waiting on an install process that will never end, because the installer is waiting for user input.
Second, if that doesn't work, you should be using Popen and communicate:
How to use subprocess popen Python
Third, if that still doesn't work, your installer is hanging somewhere, and you should debug the underlying process there.
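Combining the first two suggestions, a sketch (the /qn switch is msiexec's standard quiet, no-UI option; the installer path is the placeholder from the question):
import subprocess

proc = subprocess.Popen(
    ["msiexec", "/i", r"c:\path\to\installer.msi", "/qn"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()  # blocks until msiexec exits
print proc.returncode  # 0 indicates a successful install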

Killing child process when parent crashes in python

I am trying to write a python program to test a server written in C. The python program launches the compiled server using the subprocess module:
pid = subprocess.Popen(args.server_file_path).pid
This works fine; however, if the Python program terminates unexpectedly due to an error, the spawned process is left running. I need a way to ensure that if the Python program exits unexpectedly, the server process is killed as well.
Some more details:
Linux or OSX operating systems only
Server code can not be modified in any way
I would atexit.register a function to terminate the process:
import atexit
process = subprocess.Popen(args.server_file_path)
atexit.register(process.terminate)
pid = process.pid
Or maybe:
import atexit
process = subprocess.Popen(args.server_file_path)

@atexit.register
def kill_process():
    try:
        process.terminate()
    except OSError:
        pass  # ignore the error. The OSError doesn't seem to be documented(?)
        # as such, it *might* be better to process.poll() and check for
        # `None` (meaning the process is still running), but that
        # introduces a race condition. I'm not sure which is better;
        # hopefully someone that knows more about this than I do can
        # comment.

pid = process.pid
Note that this doesn't help you if you do something nasty to cause Python to die in a non-graceful way (e.g. via os._exit, or if you cause a segmentation fault or bus error).
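On Linux (though not OS X), there is also a kernel-level safety net that covers those non-graceful deaths: have the child request a signal when its parent dies, via prctl(PR_SET_PDEATHSIG, ...). A sketch using ctypes; the constant 1 is PR_SET_PDEATHSIG from <sys/prctl.h>:
import ctypes
import signal
import subprocess

libc = ctypes.CDLL("libc.so.6", use_errno=True)
PR_SET_PDEATHSIG = 1  # from <sys/prctl.h>

def set_pdeathsig():
    # Runs in the child between fork() and exec(): asks the kernel to
    # send SIGTERM to the child when the parent dies for any reason.
    libc.prctl(PR_SET_PDEATHSIG, signal.SIGTERM)

process = subprocess.Popen(args.server_file_path, preexec_fn=set_pdeathsig)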
