subprocess.call 'occasionally' hangs in Powershell - python

I am trying to launch mayapy with subprocess and route all output to a text file. This works; however, about 30% of the time the initial script that launches the subprocess hangs in the window I run it in. The mayapy that was launched finishes fine and all output goes to the indicated stdout_file, but the main process that called it hangs. Pressing Enter will usually cause the rest of it to finish, and I'm unsure why. I've even added a sys.exit() call in the subprocess that is being run, yet it still hangs.
maya = subprocess.call(command, stdout=stdout_file, stderr=stdout_file, shell=True)
logger.info("Exitcode {}".format(maya))
if str(maya) != '0':
    logger.error("Something went wrong...")
UPDATE: The main python script was started from within a Powershell window in Windows. The answer as to why this was happening relates entirely to that.

I feel a bit stupid that this was the entire reason the process was hanging.
https://serverfault.com/questions/204150/sometimes-powershell-stops-sending-output-until-i-press-enter-why
"If the QuickEdit Mode and\or Insert options are checked within the console\window properties, and you click within the console, it will pause the output. If those options are not checked, the output can't be paused by clicking within the console."
Since I turned off these options in the PowerShell window, the process has yet to hang. This also explains why it was so sporadic: it only happened when I accidentally clicked into the window.
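If turning the options off by hand is not practical, the same thing can be done from inside the script. This is only a sketch, assuming the script runs in a real Windows console; the flag values are the documented Win32 console-mode constants.
import ctypes
STD_INPUT_HANDLE = -10
ENABLE_QUICK_EDIT_MODE = 0x0040  # click-to-select, which pauses output
ENABLE_INSERT_MODE = 0x0020
ENABLE_EXTENDED_FLAGS = 0x0080   # must be set when changing the two flags above
kernel32 = ctypes.windll.kernel32
handle = kernel32.GetStdHandle(STD_INPUT_HANDLE)
mode = ctypes.c_uint32()
kernel32.GetConsoleMode(handle, ctypes.byref(mode))
new_mode = (mode.value | ENABLE_EXTENDED_FLAGS) & ~(ENABLE_QUICK_EDIT_MODE | ENABLE_INSERT_MODE)
kernel32.SetConsoleMode(handle, new_mode)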

Related

Is there a way to catch script manual termination?

I have a python script that launches a subprocess which then opens a python terminal window that displays its running logs/status output.
If the user closes the window, the process terminates - this is desirable. However, I cannot figure out a way to "catch" and log if the user manually terminates the script (closes window).
I have tried putting my main() in a try/except block but it seems that no exceptions are being raised (like a KeyboardInterrupt).
I also tried the atexit module, but it only seems to take effect on normal script termination.
Is there a way to log/print to file when a user terminates a script mid run?
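For what it's worth, one common approach is to install signal handlers that write to a log and then exit cleanly; this is only a sketch, and it is not guaranteed to fire when a console window is closed on every platform. On most POSIX terminals closing the window sends SIGHUP, and an explicit kill sends SIGTERM.
import logging
import signal
import sys
logging.basicConfig(filename="run.log", level=logging.INFO)
def log_termination(signum, frame):
    logging.warning("Script terminated by signal %s", signum)
    sys.exit(1)  # exiting normally here also lets atexit handlers run
for name in ("SIGHUP", "SIGTERM", "SIGINT"):
    if hasattr(signal, name):  # SIGHUP does not exist on Windows
        signal.signal(getattr(signal, name), log_termination)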

Close Image with subprocess

I am trying to open an image with subprocess so it is visible to the user, then close the image so that it disappears.
This question has been asked before, but the answers I found have not worked for me. Here is what I have checked:
Killing a process created with Python's subprocess.Popen()
How to terminate a python subprocess launched with shell=True
How can I close an image shown to the user with the Python Imaging Library?
I need the code to open an image (with Preview, the default on my Mac, though that's optional), wait a second, then close the image.
openimg = subprocess.Popen(["open", mypath])
time.sleep(1)
openimg.terminate()
openimg.kill()
Everything is telling me to terminate() and kill(), but that still isn't closing the image.
This does not HAVE to use preview, I am open to hearing other options as well.
Edit: I have additionally tried the below code, still no success
print('openimg\'s pid = ',openimg.pid)
os.kill(openimg.pid, signal.SIGKILL)
OS X's open command is asynchronous - it spawns a Preview process (or whatever app it launches) and immediately exits. So by the time your Python code gets around to calling terminate() and kill(), the open process is done. It no longer exists.
You can force synchronous behavior, i.e. make open keep running until after Preview exits, by passing the -W option:
subprocess.Popen(["open", "-W", mypath])
This way, open will still be running when your code gets around to running terminate and kill. (I would suggest also passing the -n option to make sure Preview starts a new instance, in case you had another instance of Preview sitting around from before.) And when the open process ends, hopefully it will also end the underlying Preview process. You can check whether this actually happens using a process viewer such as ps or pgrep.
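Putting that together, a minimal sketch (the image path is a placeholder) looks like this; whether terminating open also closes the Preview window is something to verify with pgrep Preview, as discussed below.
import subprocess
import time
mypath = "/path/to/image.png"  # placeholder
viewer = subprocess.Popen(["open", "-W", "-n", mypath])  # -W: wait for the app, -n: fresh instance
time.sleep(1)
viewer.terminate()  # stop `open`; check with `pgrep Preview` whether the window went away
viewer.wait()       # reap the open process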
If terminating or killing open does not kill Preview, you'll need to do one of two things. Either change the configuration so that the signal is delivered to all subprocesses of open when you call terminate() or kill() (this question or this one may be helpful), or find a way to get the process ID of Preview and send signals to it directly, which will require you to go beyond Popen. I'm not sure of the best way to do that, but perhaps someone else can contribute an answer that shows you how.
I'm sure this is the hackiest way to do this, but hell, it works. If anyone stumbles on this and knows a better way, please let me know.
The Problem
As David Z described, I was not targeting the correct process. So I had to figure out what the correct process ID was in order to close it properly.
I started by opening my terminal and entering the command ps -A. This will show all of the current open processes and their ID. Then I just searched for Preview and found two open processes.
Previously, I was killing the first Preview pid in the list, let's call it 11329, but the second one in the list stayed open. The second Preview process, 11330, was always one higher than the first. That second pid was the one I needed to target to close Preview.
The Answer
import os, signal, subprocess  ## Imports needed for the snippet below
openimg = subprocess.Popen(["open", mypath])  ## Opens the image to view
newpid = openimg.pid + 1  ## Gets the pid one higher than the openimg pid
os.kill(newpid, signal.SIGKILL)  ## Kills that pid and closes Preview
This answer seems very fragile, but it works for me. I only just started learning about pids so if anyone could provide some knowledge, I would be grateful.
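A slightly less fragile variant of the same idea (still resting on the assumption that the newest Preview process is the one just spawned) is to ask pgrep for the newest matching process instead of guessing pid + 1:
import os
import signal
import subprocess
import time
mypath = "/path/to/image.png"  # placeholder
subprocess.Popen(["open", mypath])
time.sleep(1)
newest = int(subprocess.check_output(["pgrep", "-n", "Preview"]).decode())  # -n = newest match
os.kill(newest, signal.SIGKILL)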

Wait and complete processes when Python script is stopped from PyCharm console?

Basically I am writing a script that can be stopped and resumed at any time. So if the user uses, say, the PyCharm console to execute the program, he can just click on the stop button whenever he wants.
Now, I need to save some variables and let an ongoing function finish before terminating. What functions do I use for this?
I have already tried atexit.register() to no avail.
Also, how do I make sure that an ongoing function is completed before the program can exit?
Solved it using a really bad workaround. I tried every exit-related hook in Python, including the SIG* handlers, but could not find a way to catch the exit when the program is stopped by pressing the "Stop" button in PyCharm. I finally worked around it by using tkinter to open an empty window, with my program running in a background thread, and using that window to close/stop program execution. It works wonderfully, and catches the SIG* signal as well as executing atexit. Anyway, massive thanks to #scrineym, as the link really gave a lot of useful information that helped in development of the final version.
It looks like you might want to catch a signal.
When a program is told to stop, a signal is sent to the process from the OS; you can then catch it and do cleanup before exit. There are many different signals, for example when you press CTRL+C a SIGINT signal is sent by the OS to stop your process, but there are many others.
See here : How do I capture SIGINT in Python?
and here for the signal library: https://docs.python.org/2/library/signal.html
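As a rough sketch of that pattern: catch the signal, set a flag, let the current iteration finish, then save and exit. Here do_one_chunk_of_work and save_state are hypothetical placeholders, and whether PyCharm's Stop button delivers a catchable signal is exactly what the accepted workaround above calls into question.
import signal
import time
stop_requested = False
def request_stop(signum, frame):
    global stop_requested
    stop_requested = True
signal.signal(signal.SIGINT, request_stop)
signal.signal(signal.SIGTERM, request_stop)
while not stop_requested:
    do_one_chunk_of_work()  # hypothetical: one resumable unit of work
    time.sleep(0.1)
save_state()  # hypothetical: persist your variables before exiting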

Python: kill process and close window it opened in (windows)

I'm working on a tool for data entry at my job where it basically takes a report ID number, opens a PDF to that page of that report, allows you to input the information and then saves it.
I'm completely new to instantiating new processes in Python; this is the first time that I've really tried to do it. So, basically, I have the relevant function:
def get_report(id):
    path = report_path(id)
    if not path:
        raise NameError
    page = get_page(path, id)
    proc = subprocess.Popen([r"C:\Program Files (x86)\Adobe\Reader 11.0\Reader\AcroRd32.exe",
                             "/A", "page={}".format(page), path])
in order to open the report in Adobe Acrobat and be able to input information while the report is still open, I determined that I had to use multiprocessing. So, as a result, in the main loop of the program, where it iterates through data and gets the report ID, I have this:
for row in rows:
    print 'Opening report for {}'.format(ID)
    arg = ID
    proc = Process(target=get_report, args=(arg,))
    proc.start()
    row[1] = raw_input('Enter the desired value: ')
    rows.updateRow(row)
    while proc.is_alive():
        pass
This way, one can enter data without the program hanging on the subprocess.Popen() command. However, if it simply continues on to the next record without closing the Acrobat window that pops up, then it won't actually open the next report. Hence the while proc.is_alive(): loop, which gives one a chance to close the window manually. I'd like to kill the process immediately after 'enter' is hit and the value entered, so it will go on and just open the next report with even less work. I tried several different things: killing the process through its pid using os.kill(), killing the subprocess, killing the process itself, killing both of them, and also using subprocess.call() instead of Popen() to see if it made a difference.
It didn't.
What am I missing here? How do I kill the process and close the window that it opened in? Is this even possible? Like I said, I have just about 0 experience with processes in python. If I'm doing something horribly wrong, please let me know!
Thanks in advance.
To kill/terminate a subprocess, call proc.kill()/proc.terminate(). It may leave grandchild processes running; see subprocess: deleting child processes in Windows
This way, one can enter data without the program hanging on the subprocess.Popen() command.
Popen() starts the command; it does not wait for the command to finish. There is a .wait() method, and there are convenience functions such as call() that do wait.
Even if Popen(command).wait() returns, i.e., if the corresponding external process has exited, it does not necessarily mean that the document is closed in the general case (the launcher app is done, but the main application may persist).
So the first step is to drop the unnecessary multiprocessing.Process and call Popen() in the main process instead.
The second step is to make sure you start an executable that owns the opened document, i.e., one where killing it means the corresponding document won't stay open. AcroRd32.exe might already be such a program (test it: see whether call([r'..\AcroRd32.exe', ..]) waits for the document to be closed), or it might have a command-line switch that enables such behavior. See How do I launch a file in its default program, and then close it when the script finishes?
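A quick way to run that test (the Reader path is the one from the question, the pdf path is a placeholder): if the call blocks until you close the document, AcroRd32.exe owns the window, and killing a Popen of it should close the window too.
import subprocess
rc = subprocess.call([r"C:\Program Files (x86)\Adobe\Reader 11.0\Reader\AcroRd32.exe",
                      r"C:\path\to\dummy.pdf"])  # placeholder pdf
print("AcroRd32 exited with", rc)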
I tried killing the subprocess, killing the process itself, killing both of them, and also tried using subprocess.call() instead of Popen() to see if it made a difference.
It didn't.
If kill() and Popen() behave the same in your case, then either you've made a mistake (they don't behave the same; you should create a minimal standalone code example with a dummy pdf that demonstrates the problem, and describe in words what you expect to happen, step by step, and what happens instead), or AcroRd32.exe is just the kind of launcher app I've described above (it just opens the document and immediately exits without waiting for the document to be closed).
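To make that concrete, here is a sketch of the restructured loop (Python 3 syntax; rows, report_path, get_page and updateRow are the question's own names, and terminate() only closes the window if AcroRd32.exe owns the document, as discussed above):
import subprocess
READER = r"C:\Program Files (x86)\Adobe\Reader 11.0\Reader\AcroRd32.exe"
for row in rows:                      # `rows` comes from the question's code
    report_id = row[0]                # assumption: the ID is in the first column
    path = report_path(report_id)     # question's helper
    page = get_page(path, report_id)  # question's helper
    proc = subprocess.Popen([READER, "/A", "page={}".format(page), path])
    row[1] = input("Enter the desired value: ")
    rows.updateRow(row)
    proc.terminate()                  # closes the window only if Reader owns the document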

How can I inspect a linux process to determine how/when a process dies/terminates?

I have a python irc bot which I start up as root by doing /etc/init.d/phenny start. Sometimes it dies, though, and it seems to happen overnight.
What can I do to inspect it and see the status of the process in a text file?
If you know it's still running, you can pstack it to see its walkback. I'm not sure how useful that will be, because you will see the call stack of the interpreter. You could also try strace or ltrace as someone else mentioned.
I would also make sure that in whatever environment the script runs in, you have set ulimit -c unlimited so that a core is generated in case python is outright crashing.
Another thing I might try is to have this job executed by a parent that does not wait on its child. This should cause the proc table entry to stick around as a zombie even when the underlying job has exited.
If you're interested in really low level process activity, you can run the python interpreter under strace with standard error redirected to a file.
If you're only interested in inspecting the python code when your bot crashes, and you have the location in the source where the crash happens, you can wrap that location in try/except and break into the debugger in the except clause:
import pdb; pdb.set_trace()
You'll probably need to run your bot in non-daemon mode for that to work, though.
You might want to try Python's psutil; it is something that I have used, and it works.
A cheap way to get some extra clues about the problem would be to start phenny with
/etc/init.d/phenny start 2>/tmp/phenny.out 1>&2
When it crashes, check the tail of /tmp/phenny.out for the Python traceback.
If you only need to verify that the process is running you could just run a script that checks the output of command
ps ax | grep [p]henny
every few seconds. If it's empty, then obviously the process is dead.
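If you want that check automated, a minimal sketch of a watchdog could look like this (the log path, process name pattern, and interval are arbitrary placeholders):
import datetime
import subprocess
import time
while True:
    running = subprocess.call(["pgrep", "-f", "phenny"], stdout=subprocess.DEVNULL) == 0
    if not running:
        with open("/tmp/phenny_watchdog.log", "a") as log:
            log.write("{}: phenny is not running\n".format(datetime.datetime.now()))
    time.sleep(30)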
