How to avoid Python Subprocess stopping execution - python

I have a Python program that processes a lot of files, and one step is done by invoking a .jar file.
I currently have something like this:
import subprocess

for row in rows:
    try:
        subprocess.check_call(
            f'java -jar ffdec/ffdec.jar -export png "{out_dir}/" "{row[0]}.swf"',
            stdout=subprocess.DEVNULL)
    except (OSError, subprocess.SubprocessError, subprocess.CalledProcessError):
        print(f"Error on {row[0]}")
        continue
That works fine for executing the OS command (I'm on Windows 10) without stopping on errors.
However, there is one specific error that stops the execution of my Python program.
I think it is because the .jar file doesn't really stop, and keeps running in the background, thus preventing Python from continuing.
Is there a way to call a command in Python and run it asynchronously, or skip it after a timeout of 20 seconds?
I could also write a Java program to run that part of the process, but for convenience I'd prefer to keep everything in Python.
Just in case, here is the error that stops my program (all others get properly caught by the try/except):
févr. 25, 2021 8:05:00 AM com.jpexs.decompiler.flash.console.ConsoleAbortRetryIgnoreHandler handle
GRAVE: Error occured
java.util.EmptyStackException
at java.util.Stack.peek(Unknown Source)
at com.jpexs.decompiler.flash.exporters.commonshape.SVGExporter.addUse(SVGExporter.java:230)
at com.jpexs.decompiler.flash.timeline.Timeline.toSVG(Timeline.java:1043)
at com.jpexs.decompiler.flash.exporters.FrameExporter.lambda$exportFrames$0(FrameExporter.java:216)
at com.jpexs.decompiler.flash.RetryTask.run(RetryTask.java:41)
at com.jpexs.decompiler.flash.exporters.FrameExporter.exportFrames(FrameExporter.java:220)
at com.jpexs.decompiler.flash.console.CommandLineArgumentParser.parseExport(CommandLineArgumentParser.java:2298)
at com.jpexs.decompiler.flash.console.CommandLineArgumentParser.parseArguments(CommandLineArgumentParser.java:891)
at com.jpexs.decompiler.flash.gui.Main.main(Main.java:1972)

After checking the subprocess documentation in depth, I found a parameter called timeout:
subprocess.check_call('...', stdout=subprocess.DEVNULL, timeout=20)
That does the job for me.
Documentation for timeout
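Note that when the timeout elapses, check_call kills the child process and then raises subprocess.TimeoutExpired. Since TimeoutExpired is a subclass of SubprocessError, the existing except clause in the question would already catch it; handling it separately just makes the log clearer. A minimal sketch of the loop with the timeout applied:

import subprocess

for row in rows:
    try:
        subprocess.check_call(
            f'java -jar ffdec/ffdec.jar -export png "{out_dir}/" "{row[0]}.swf"',
            stdout=subprocess.DEVNULL,
            timeout=20)  # kill the hung java process after 20 seconds
    except subprocess.TimeoutExpired:
        print(f"Timeout on {row[0]}")  # this row is skipped; the loop continues
    except (OSError, subprocess.SubprocessError):
        print(f"Error on {row[0]}")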

Related

How to detect a failure with subprocess in python?

I have a "cmd_to_execute" command which distributes workload across 2 different machines (machine1, machine2) and does some processing over data.
The Python code is executed on machine1.
If by chance machine2 shuts down during this processing, then cmd_to_execute just aborts. I am trying to detect this by checking the return code "ret" of subprocess.call, but it is always 0. I also tried wrapping it in a try/except block, but I don't see any exception.
Is there a way to do this in Python?
cmd_to_execute: a shell script
Python code:
ret = subprocess.call(cmd_to_execute, shell=True)
if ret != 0:
    retry()  # pseudocode: re-run the command
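One likely explanation, offered as an assumption rather than a confirmed diagnosis: with shell=True, the code you get back is the shell's exit status, and a shell script exits with the status of its last command, so an intermediate failure (such as machine2 going down mid-run) can be masked unless the script propagates it, e.g. with set -e or an explicit exit. Assuming the script is fixed to return non-zero on failure, a minimal retry sketch looks like this:

import subprocess
import time

cmd_to_execute = "./distribute_work.sh"  # hypothetical stand-in for the real script

for attempt in range(3):
    ret = subprocess.call(cmd_to_execute, shell=True)
    if ret == 0:
        break  # success, stop retrying
    time.sleep(5)  # short pause before the next attempt
else:
    raise RuntimeError("command failed after 3 attempts")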

How to guarantee file removal after the script stops working?

I have a script run by crontab every hour that interacts with an API (database sync). It usually takes an hour or so, and on the next run I check whether this process is still in memory:
#!/usr/bin/env python
import os
import sys

pid = str(os.getpid())
pidfile = "/tmp/mydaemon.pid"
if os.path.isfile(pidfile):
    print "%s already exists, exiting" % pidfile
    sys.exit()
file(pidfile, 'w').write(pid)
try:
    pass  # Do some actual work here
finally:
    os.unlink(pidfile)
BUT after some time the script stopped working. When I look at "ps aux | grep python", I don't see the script running, but I do see the pid file still in place.
And when I run the script manually, I see information printed iteratively on the screen, but after some time I see the word "Terminated"; the script exits and the file is still in place.
How can I guarantee 100% that the file is removed after the script stops working?
Thanks!
It looks like your script is terminated unexpectedly, most probably due to excessive memory usage. It is not guaranteed that finally will be executed on unexpected program termination, so first of all I suggest you find the cause of the unexpected termination and fix it.
Actually, there is no way to guarantee 100% that the file will be removed. However, there are a few workarounds for handling dangling pid files.
Place your pid files on the /var/run volume, so they are removed on an unexpected system restart.
Check whether the process with that pid is still running on each script execution:
import os
import sys

def is_alive(pid):
    try:
        os.kill(pid, 0)  # signal 0 does nothing, but raises OSError if pid is not running
        return True
    except OSError:
        return False

# and add this to your code:
if os.path.isfile(pidfile):
    with open(pidfile) as f:
        if is_alive(int(f.read())):  # the file stores the pid as text
            sys.exit()
Again, the provided code is not 100% safe because of possible pid collisions. You can make the verification more sophisticated by parsing the output of the ps command: find the line with the desired pid and check whether it looks similar to your crontab entry, as in the sketch below.
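A rough sketch of that check, assuming a Linux-style ps and a hypothetical script name mydaemon.py:

import subprocess

def looks_like_my_script(pid):
    # ask ps for the command line of this pid only; a non-zero exit means no such process
    try:
        out = subprocess.check_output(['ps', '-p', str(pid), '-o', 'args='])
    except subprocess.CalledProcessError:
        return False
    return 'mydaemon.py' in out  # hypothetical name; match it against your crontab entry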
Normally you could use the atexit module functionality, but in your case (unexpected termination) it also may not work.
Maybe using tempfile.NamedTemporaryFile (specifying the required suffix/prefix) within a with statement would work: it creates a unique pid file in /tmp and removes it when the with block completes or raises. (mkstemp itself only returns a file descriptor and a path; NamedTemporaryFile is the context-manager form.)
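A minimal sketch of that idea, with hypothetical prefix/suffix values; note that a hard kill (SIGKILL) can still leave the file behind, so this is a mitigation, not a guarantee:

import os
import tempfile

# the file is deleted automatically when the with block exits, even on an exception
with tempfile.NamedTemporaryFile(prefix='mydaemon.', suffix='.pid', dir='/tmp') as f:
    f.write(str(os.getpid()))
    f.flush()
    # do the actual work here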

Why does the Windows command processor exit after opening the file in Python?

I opened a file in Python with the following code:
import os
import subprocess

s = subprocess.Popen(fileName, shell=True)
s.wait()
os.remove(fileName)
Whether the file is closed or not, the code jumps directly to os.remove(), and by that time the file is not closed, so I get the following exception:
WindowsError: [Error 32] The process cannot access the file because it is being used by another process
The above problem occurs on another machine; on my machine, the code does not reach os.remove() until the file is closed.
How can I make subprocess.Popen() wait until the file is closed?
Without knowing the value of fileName this is a bit of a guess, but I believe your problem is that you are waiting for the shell to exit, but the shell starts another process to open the file and doesn't wait for that process to exit.
For example, if you try to open an Excel spreadsheet this way, the shell will look up the registry and determine that the way to open a spreadsheet is to run Excel (if it is not already running) with a command-line flag that prevents it creating a blank worksheet, and then send it a DDE command telling it to open the worksheet. The shell then exits, your wait completes, but the file is still open (or may not even have been opened yet).
If you want to wait for completion, you need at the very least to run the actual command that opens the file, rather than depending on the shell's automated lookup. That may still not be enough: some applications detect an existing instance and hand the file off to it, so your wait would only work if the application is not already running.
There probably isn't any simple, clean way to do what you want. The best I can suggest is that if the os.remove() call fails, you use a polling loop: sleep for one second and then try again. You may still have problems if the file hasn't actually been opened by the time the wait() call completes.
You could maybe try something like this:
import os
import time
import subprocess

s = subprocess.Popen(fileName, shell=True)
while True:
    try:
        os.remove(fileName)
        break
    except WindowsError:
        time.sleep(1)  # wait 1 second, then try to remove again
This snippet tries repeatedly to remove the file; if it gets a WindowsError, it waits a second before trying again.
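Another option worth trying on Windows, offered as an assumption about your setup rather than a guaranteed fix: cmd's start command accepts a /wait flag that blocks until the launched program exits, which may track the opened application more closely than the shell's default file-association handling. It still won't help if the application hands the document off to an already-running instance, as described above.

import subprocess

# the empty "" is the window-title argument that start expects when the path is quoted
subprocess.check_call('start /wait "" "%s"' % fileName, shell=True)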
I tried it in a different way, as follows:
import os
import time

def removeTempFile(tempFileName):
    f = None
    while f is None:
        try:
            f = open(tempFileName, 'w')  # succeeds only once no other process holds the file
            f.close()
        except Exception as e:
            print "already open"
            f = None
            time.sleep(2)
    os.remove(tempFileName)

IOError Input/Output Error When Printing

I have inherited some code which is periodically (randomly) failing due to an Input/Output error being raised during a call to print. I am trying to determine the cause of the exception being raised (or at least, better understand it) and how to handle it correctly.
When executing the following line of Python (in a 2.6.6 interpreter, running on CentOS 5.5):
print >> sys.stderr, 'Unable to do something: %s' % command
The exception is raised (traceback omitted):
IOError: [Errno 5] Input/output error
For context, this is generally what the larger function is trying to do at the time:
from subprocess import Popen, PIPE
import sys

def run_commands(commands):
    for command in commands:
        try:
            out, err = Popen(command, shell=True, stdout=PIPE, stderr=PIPE).communicate()
            print >> sys.stdout, out
            if err:
                raise Exception('ERROR -- an error occurred when executing this command: %s --- err: %s' % (command, err))
        except:
            print >> sys.stderr, 'Unable to do something: %s' % command

run_commands(["ls", "echo foo"])
The >> syntax is not particularly familiar to me; it's not something I use often, and I understand that it is perhaps the least preferred way of writing to stderr. However, I don't believe the alternatives would fix the underlying problem.
From the documentation I have read, IOError 5 is often misused and somewhat loosely defined, with different operating systems using it to cover different problems. The best guess in my case is that the Python process is no longer attached to the terminal/pty.
As best I can tell, nothing is disconnecting the process from the stdout/stderr streams; the terminal is still open, for example, and everything 'appears' to be fine. Could it be caused by the child process terminating in an unclean fashion? What else might be a cause of this problem, and what other steps could I introduce to debug it further?
In terms of handling the exception, I can obviously catch it, but I'm assuming this means I won't be able to print to stdout/stderr for the remainder of execution. Can I reattach to these streams somehow, perhaps by resetting sys.stdout to sys.__stdout__? In this case, not being able to write to stdout/stderr is not considered fatal, but if it is an indication of something starting to go wrong I'd rather bail early.
I guess ultimately I'm at a bit of a loss as to where to start debugging this one...
I think it has to do with the terminal the process is attached to. I got this error when I ran a Python process in the background and closed the terminal in which I had started it:
$ myprogram.py
Ctrl-Z
$ bg
$ exit
The problem was that I had started a non-daemonized process on a remote server and logged out (closing the terminal session). The solution was to start a screen/tmux session on the remote server and start the process within that session. Detaching the session and then logging out keeps a terminal associated with the process. This works at least in the *nix world.
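For example, with screen on the remote server (the session name sync is arbitrary):
$ screen -S sync
$ python myprogram.py
Detach with Ctrl-A d, log out safely, and reattach later with screen -r sync.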
I had a very similar problem. I had a program that launched several other programs using the subprocess module. Those subprocesses would then print output to the terminal. What I found was that when I closed the main program, it did not terminate the subprocesses automatically (as I had assumed); rather, they kept running. So if I terminated first the main program and then the terminal it had been launched from*, the subprocesses no longer had a terminal attached to their stdout, and would throw an IOError. Hope this helps you.
*NB: it must be done in this order. If you just kill the terminal, (for some reason) that kills both the main program and the subprocesses.
I just got this error because the directory I was writing files to had run out of space. Not sure if this is at all applicable to your situation.
I'm new here, so please forgive if I slip up a bit when it comes to the code detail.
Recently I was able to figure out what causes the I/O error of the print statement when the terminal associated with the run of the Python script is closed.
It is because the string to be printed to stdout/stderr is too long. In this case, the "out" string is the culprit.
To fix this problem (without having to keep the terminal open while running the Python script), simply read the "out" string line by line and print it line by line, until the end of the string is reached. Something like:
while True:
    ln = out.readline()
    if not ln:
        break
    print ln.strip("\n")  # strip the newline so print doesn't add a second one
The same problem occurs if you print the entire list of strings out to the screen. Simply print the list one item by one item.
Hope that helps!
The problem is that you've closed the stdout pipe which Python is attempting to write to when print() is called.
This can be caused by running a script in the background using & and then closing the terminal session (i.e. closing stdout):
$ python myscript.py &
$ exit
One solution is to set stdout to a file when running in the background
Example
$ python myscript.py > /var/log/myscript.log 2>&1 &
$ exit
No errors on print()
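The same idea can be applied from inside the script, sketched here with a hypothetical log path: replace sys.stdout and sys.stderr with a file object early on, so later print calls never touch the terminal that may have gone away:

import sys

log = open('/var/log/myscript.log', 'a', 1)  # 1 = line-buffered
sys.stdout = log
sys.stderr = log

print 'this goes to the log file, not the (possibly closed) terminal'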
It can also happen when your shell crashes while print is trying to write data to it.
My issue was the same Input/output error, with Odoo. In my case, I just restarted the service and the issue disappeared; I don't know why.

Control a subprocess (specifically gdb) in multiple ways

I am developing a wrapper around gdb using python. Basically, I just want to be able to detect a few setup annoyances up-front and be able to run a single command to invoke gdb, rather than a huge string I have to remember each time.
That said, there are two cases that I am using. The first, which works fine, is invoking gdb by creating a new process and attaching to it. Here's the code that I have for this one:
def spawnNewProcessInGDB():
    global gObjDir, gGDBProcess
    from subprocess import Popen
    from os.path import join

    binLoc = join(gObjDir, 'dist', 'bin', 'mycommand')
    profileDir = join(gObjDir, '..', 'trash-profile')
    try:
        gGDBProcess = Popen(['gdb', '--args', binLoc, '-profile', profileDir], cwd=gObjDir)
        gGDBProcess.wait()
    except KeyboardInterrupt:
        # Send a termination signal to the GDB process, if it's running
        promptAndTerminate(gGDBProcess)
Now, if the user presses CTRL-C while this is running, it breaks (i.e. it forwards the CTRL-C to GDB). This is the behavior I want.
The second case is a bit more complicated. It might be the case that I already had this program running on my system and it crashed, but was caught. In this case, I want to be able to connect to it using gdb to get a stack trace (or perhaps I was already running it, and I simply now want to connect to the process that's already in memory).
As a convenience feature, I've created a mirror function, which will connect to a running process using gdb:
def connectToProcess(procNum):
    global gObjDir, gGDBProcess
    from subprocess import Popen

    print("Connecting to mycommand process number " + str(procNum) + "...")
    try:
        gGDBProcess = Popen(['gdb', '-p', str(procNum)], cwd=gObjDir)  # Popen arguments must be strings
        gGDBProcess.wait()
    except KeyboardInterrupt:
        promptAndTerminate(gGDBProcess)
Again, this seems to work as expected. It starts gdb, I can set breakpoints, run the program, etc. The only catch is that it doesn't forward CTRL-C to gdb if I press it while the program is running. Instead, it jumps immediately to promptAndTerminate().
I'm wondering if anyone can see why this is happening - the two calls to subprocess.Popen() seem identical to me, albeit that one is running gdb in a different mode.
I have also tried replacing the call to subprocess.Popen() with the following:
gGDBProcess = Popen(['gdb', '-p', procNum], cwd=gObjDir, stdin=subprocess.PIPE)
but this leads to undesirable results as well, because it doesn't actually communicate anything to the child gdb process (e.g. if I type c to resume the program after it breaks on connection, gdb does nothing). Again, it terminates the running Python process when I type CTRL-C.
Any help would be appreciated!
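One hedged experiment that may help narrow this down (an assumption about the signal handling, not a confirmed diagnosis): have the wrapper ignore SIGINT while gdb is in the foreground, so a CTRL-C typed at the terminal reaches gdb instead of raising KeyboardInterrupt in the Python parent:

import signal
from subprocess import Popen

gGDBProcess = Popen(['gdb', '-p', str(procNum)], cwd=gObjDir)
# the wrapper and gdb share the terminal's foreground process group, so CTRL-C
# signals both; ignoring it here leaves gdb free to handle it in its own way
oldHandler = signal.signal(signal.SIGINT, signal.SIG_IGN)
try:
    gGDBProcess.wait()
finally:
    signal.signal(signal.SIGINT, oldHandler)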
