Given a pexpect-spawned process that's started with sudo, like so:
#!/usr/bin/env python
import pexpect
cmd = ['sudo', 'bash', '-c', '"some long-running sudo command"']
cmd = ' '.join(cmd)
child = pexpect.spawn(cmd, timeout=60)
i = child.expect([
    'success',
    'error'])
if i == 0:
    print('ok')
else:
    print('fail')
# insert code here
How would I kill this process on fail (or success, for that matter)?
I've tried the following (replacing # insert code here):
child.kill(0)
child.close(force=True)
Both give the following error, which makes sense: the Python script is not running as root, and it's trying to kill a root-owned process.
Traceback (most recent call last):
  File "./myscript.py", line 85, in <module>
    requires_qemu()
  File "./myscript.py", line 82, in requires_qemu
    child.close(0)
  File "/usr/lib/python2.7/site-packages/pexpect/__init__.py", line 747, in close
    raise ExceptionPexpect('Could not terminate the child.')
pexpect.ExceptionPexpect: Could not terminate the child.
It is not possible to run the script as root, due to file permissions (it is run from a shared NFS drive where root access is blocked).
Use sudo to kill it as root:
subprocess.call(['sudo', 'kill', str(child.pid)])
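In context, a minimal sketch (assuming the same child as above, and that sudo can still be invoked non-interactively, e.g. because its credentials are cached):

import subprocess

# Ask root to terminate the root-owned child; kill sends SIGTERM by default.
subprocess.call(['sudo', 'kill', str(child.pid)])
# pexpect should now be able to clean up its side of the pty,
# since the process itself is already gone.
child.close()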
I'm facing a strange situation; I've searched on Google without any good results.
I'm running a Python script with nohup as a subprocess of a parent process, using the subprocess package:
cmd = list()
cmd.append("nohup")
cmd.append(sys.executable)
cmd.append(os.path.abspath(script))
cmd.append(os.path.abspath(conf_path))
_env = os.environ.copy()
if env:
    _env.update({k: str(v) for k, v in env.items()})
p = subprocess.Popen(cmd, env=_env, cwd=os.getcwd())
After some time the parent process exits and the subprocess (the one with nohup) continues to run.
After another minute or two the process with nohup exits and, for obvious reasons, becomes a zombie.
When running it on a local PC with Python 3.6 and Ubuntu 18.04, the following code works like a charm:
comp_process = psutil.Process(pid)
if comp_process.status() == "zombie":
    comp_status_code = comp_process.wait(timeout=10)
As I said, everything works like a charm: the zombie process is removed and I get the status code of the mentioned process.
But for some reason, when doing the SAME thing in a Docker container with the SAME Python and Ubuntu versions, it fails after the timeout (it doesn't matter whether it's 10 seconds or 10 minutes).
The error:
psutil.TimeoutExpired timeout after 60 seconds (pid=779)

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/psutil/_psposix.py", line 84, in wait_pid
    retpid, status = waitcall()
  File "/usr/local/lib/python3.6/dist-packages/psutil/_psposix.py", line 75, in waitcall
    return os.waitpid(pid, os.WNOHANG)
ChildProcessError: [Errno 10] No child processes

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ".py", line 41, in run
    comp_status_code = comp_process.wait(timeout=60)
  File "/usr/local/lib/python3.6/dist-packages/psutil/__init__.py", line 1383, in wait
    return self._proc.wait(timeout)
  File "/usr/local/lib/python3.6/dist-packages/psutil/_pslinux.py", line 1517, in wrapper
    return fun(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/psutil/_pslinux.py", line 1725, in wait
    return _psposix.wait_pid(self.pid, timeout, self._name)
  File "/usr/local/lib/python3.6/dist-packages/psutil/_psposix.py", line 96, in wait_pid
    delay = check_timeout(delay)
  File "/usr/local/lib/python3.6/dist-packages/psutil/_psposix.py", line 68, in check_timeout
    raise TimeoutExpired(timeout, pid=pid, name=proc_name)
psutil.TimeoutExpired: psutil.TimeoutExpired timeout after 60 seconds (pid=779)
One possibility may be the lack of an init process to reap zombies. You can fix this by running with docker run --init, or using e.g. tini. See https://hynek.me/articles/docker-signals/
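For example (the image and command names here are hypothetical):

# --init makes Docker run a minimal init (tini) as PID 1, which reaps zombies.
docker run --init myimage python3 parent.py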
In my python script I have:
os.spawnvpe(os.P_WAIT, cmd[0], cmd, os.environ)
where cmd is something like ['mail', '-b', emails,...] which allows me to run mail interactively and go back to the python script after mail finishes.
The only problem is when I press Ctrl-C. It seems that "both mail and the python script react to it" (*), whereas I'd prefer that while mail is running, only mail reacts, and no exception is raised by Python. Is it possible to achieve this?
(*) What happens exactly on the console is:
^C
(Interrupt -- one more to kill letter)
Traceback (most recent call last):
  File "./tutster.py", line 104, in <module>
    cmd(cmd_run)
  File "./tutster.py", line 85, in cmd
    code = os.spawnvpe(os.P_WAIT, cmd[0], cmd, os.environ)
  File "/usr/lib/python3.4/os.py", line 868, in spawnvpe
    return _spawnvef(mode, file, args, env, execvpe)
  File "/usr/lib/python3.4/os.py", line 819, in _spawnvef
    wpid, sts = waitpid(pid, 0)
KeyboardInterrupt
and then the mail is in fact sent (which is already bad, because the intention was to kill it), but the body is empty and the content is sent as an attachment with a .bin extension.
Wrap it in a try/except statement:
try:
    os.spawnvpe(os.P_WAIT, cmd[0], cmd, os.environ)
except KeyboardInterrupt:
    pass
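That stops the traceback, but the Python process still receives the SIGINT. If you want only mail to react to Ctrl-C, here is a sketch of an alternative; note it swaps os.spawnvpe for subprocess.call, and uses preexec_fn (POSIX only) so the child restores the default handler:

import signal
import subprocess

# Ignore Ctrl-C in the parent while mail runs; the child resets SIGINT
# to its default disposition via preexec_fn, so only mail reacts.
old_handler = signal.signal(signal.SIGINT, signal.SIG_IGN)
try:
    code = subprocess.call(
        cmd,  # e.g. ['mail', '-b', emails, ...] as in the question
        preexec_fn=lambda: signal.signal(signal.SIGINT, signal.SIG_DFL))
finally:
    signal.signal(signal.SIGINT, old_handler)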
I have a bash script, which is running perfectly:
gvim --servername "servername" $1
if [ -f ${1%.tex}.pdf ];
then
evince ${1%.tex}.pdf &
fi
evince_vim_dbus.py GVIM servername ${1%.tex}.pdf $1 &
I am trying to convert it to python as:
#!/usr/bin/env python3
from subprocess import call
import sys, os
inp_tex = sys.argv[1]
oup_pdf = os.path.splitext(sys.argv[1])[0]+".pdf"
print(oup_pdf)
call(["gvim", "--servername", "servername", sys.argv[1]])
if os.path.exists(oup_pdf):
call(["evince", oup_pdf])
call(["evince_vim_dbus.py", "GVIM", "servername", oup_pdf, inp_tex])
In the Python version, both the gvim and evince windows open, but the evince_vim_dbus.py line is not working. Not that it gives any error; it just doesn't produce the intended result the way the bash script does.
Trying with check_call (I have to kill it after a while; here's the traceback):
Traceback (most recent call last):
  File "/home/rudra/vims.py", line 28, in <module>
    check_call(["python","/home/rudra/bin/evince_vim_dbus.py", "GVIM", "servername", oup_pdf, inp_tex])
  File "/usr/lib64/python3.5/subprocess.py", line 576, in check_call
    retcode = call(*popenargs, **kwargs)
  File "/usr/lib64/python3.5/subprocess.py", line 559, in call
    return p.wait(timeout=timeout)
  File "/usr/lib64/python3.5/subprocess.py", line 1658, in wait
    (pid, sts) = self._try_wait(0)
  File "/usr/lib64/python3.5/subprocess.py", line 1608, in _try_wait
    (pid, sts) = os.waitpid(self.pid, wait_flags)
KeyboardInterrupt
I'm going to have a guess that your real problem isn't the evince_vim_dbus.py line itself, but rather the gvim line, because you pass it the server name 'servername' instead of simply servername, and so it doesn't match the name used on the line that runs evince_vim_dbus.py.
I'm not familiar with gvim or its server functionality, but I'm guessing the evince_vim_dbus.py program connects to gvim using the given name, in which case it's going to fail since a server of the right name isn't running.
If that's not it, then maybe the problem is that subprocess.call() runs the given program and waits for it to exit, whereas your original bash script runs evince with an ampersand, so bash doesn't wait for it. In that case evince_vim_dbus.py may never run at all until you exit Evince.
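If the latter is the issue, a minimal sketch of the script with the ampersands mirrored via Popen (untested; it assumes evince_vim_dbus.py is executable and on your PATH, as in the original):

import os
import sys
from subprocess import call, Popen

inp_tex = sys.argv[1]
oup_pdf = os.path.splitext(inp_tex)[0] + ".pdf"

# Foreground, as in the bash script: wait for gvim to exit.
call(["gvim", "--servername", "servername", inp_tex])
if os.path.exists(oup_pdf):
    # Backgrounded, like "evince ... &": don't wait for evince.
    Popen(["evince", oup_pdf])
# Also backgrounded in the bash script.
Popen(["evince_vim_dbus.py", "GVIM", "servername", oup_pdf, inp_tex])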
I would like to run an exe from this directory: /home/pi/pi_sensors-master/bin/Release/
This exe is run by typing mono i2c.exe, and it runs fine.
I would like to get this output in python which is in a completely different directory.
I know that I should use subprocess.check_output to take the output as a string.
I tried to implement this in python:
import subprocess
import os
cmd = "/home/pi/pi_sensors-master/bin/Release/"
os.chdir(cmd)
process=subprocess.check_output(['mono i2c.exe'])
print process
However, I received this error:
The output would usually be a data stream with a new number each time, is it possible to capture this output and store it as a constantly changing variable?
Any help would be greatly appreciated.
Your command syntax is incorrect, which is actually generating the exception. You want to call mono i2c.exe, so your command list should look like:
subprocess.check_output(['mono', 'i2c.exe']) # Notice the comma separation.
Try the following:
import subprocess
import os
executable = "/home/pi/pi_sensors-master/bin/Release/i2c.exe"
print subprocess.check_output(['mono', executable])
Using sudo is not a problem as long as you give the full path to the file and you are sure that running the mono command as sudo works.
I can generate the same error by running ls -l as a single string:
>>> subprocess.check_output(['ls -l'])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/subprocess.py", line 537, in check_output
    process = Popen(stdout=PIPE, *popenargs, **kwargs)
  File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
However, when you separate the command from its options:
>>> subprocess.check_output(['ls', '-l'])
# outputs my entire folder contents which are quite large.
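As a variant of the above, a sketch that keeps the working-directory idea but hands it to subprocess via the standard cwd argument instead of os.chdir (path as in the question):

import subprocess

release_dir = "/home/pi/pi_sensors-master/bin/Release"
# Run mono inside the Release directory so i2c.exe and any relative
# paths it uses resolve correctly.
output = subprocess.check_output(['mono', 'i2c.exe'], cwd=release_dir)
print output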
I strongly advise you to use a subprocess.Popen object to deal with external processes, and Popen.communicate() to get the data from both stdout and stderr. That way you should not run into blocking problems.
import subprocess

executable = "/home/pi/pi_sensors-master/bin/Release/i2c.exe"
# Pipe both streams so communicate() can return their contents.
proc = subprocess.Popen(['mono', executable],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    # Note: the timeout argument requires Python 3.3+.
    outs, errs = proc.communicate(timeout=15)  # Times out after 15 seconds.
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
Or you can read stdout in a loop if you want a data stream of sorts, adapted from an answer to a related question:
from subprocess import Popen, PIPE

executable = "/home/pi/pi_sensors-master/bin/Release/i2c.exe"
p = Popen(["mono", executable], stdout=PIPE, bufsize=1)
for line in iter(p.stdout.readline, b''):
    print line,
p.communicate()  # close p.stdout, wait for the subprocess to exit
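And if you want the newest reading in a variable rather than printed, a variant of the same loop (the variable name here is mine):

latest = None
for line in iter(p.stdout.readline, b''):
    latest = line.strip()  # always holds the most recent line from i2c.exe

In practice you would run this loop in a background thread so the rest of the program can read latest while it keeps updating.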
I am using python-daemon module to manage the daemon process of my Python script.
However, I am running into a headache when running the script that I simply can't figure out. Nor do I really know how to begin to debug it.
I have the code:
from daemon import runner

def run_application():
    # Do something here...
    pass

class App():
    def __init__(self):
        self.stdin_path = '/dev/null'
        self.stdout_path = 'stdout.txt'
        self.stderr_path = 'stdlog.log'
        self.pidfile_path = 'filelock.pid'
        self.pidfile_timeout = 5

    def run(self):
        run_application()

app = App()
daemon_runner = runner.DaemonRunner(app)
daemon_runner.do_action()
When run, it always writes the following to stdlog.log:
Traceback (most recent call last):
  File "MyApp.py", line 335, in <module>
    daemon_runner.do_action()
  File "/anaconda/lib/python2.7/site-packages/daemon/runner.py", line 189, in do_action
    func(self)
  File "/anaconda/lib/python2.7/site-packages/daemon/runner.py", line 124, in _start
    self.daemon_context.open()
  File "/anaconda/lib/python2.7/site-packages/daemon/daemon.py", line 346, in open
    self.pidfile.__enter__()
  File "/anaconda/lib/python2.7/site-packages/lockfile/__init__.py", line 229, in __enter__
    self.acquire()
  File "/anaconda/lib/python2.7/site-packages/daemon/pidfile.py", line 42, in acquire
    super(TimeoutPIDLockFile, self).acquire(timeout, *args, **kwargs)
  File "/anaconda/lib/python2.7/site-packages/lockfile/pidlockfile.py", line 88, in acquire
    self.path)
lockfile.LockTimeout: Timeout waiting to acquire lock for /MyApp/filelock.pid
So it appears to be timing out when attempting to lock filelock.pid. I have no idea why: I have deleted filelock.pid and changed its permissions, and I get the same error every time.
How can I begin to debug this? I'm at a loss.
I am using python-daemon version 1.6 (if it matters).
Update:
Following the advice here, I now see that there is already a process running. How can I determine the PID of the running daemon process?
I agree with @ExploWare as far as how he demonstrates you can capture those LockTimeout exceptions.
So as far as a way to debug and see what process is holding on to this lock, here is an external bit of code you can run...
import daemon.pidfile
import os
import lockfile
# We know the lockfile name.
pidfile = daemon.pidfile.PIDLockFile(
    os.path.join("/MyApp/", "filelock.pid"))
# This current process id...
os.getpid()
# 46337
So what process has acquired this lock if any?
pidfile.is_locked()
# True
pidfile.read_pid()
# 96856
Here is the state of our PIDLockFile instance, and what happens when it tries to acquire:
pidfile.__dict__
# {'unique_name': '/MyApp/filelock.pid', 'lock_file': '/MyApp/filelock.pid.lock', 'hostname':
# 'MyMachine.local', 'pid': 46337, 'timeout': None, 'tname': '', 'path': '/MyApp/filelock.pid'}
pidfile.acquire()
#
# (Had to Control-C quit because I didnt set a timeout on PIDLockFile )
#
# ^CTraceback (most recent call last):
# File "<stdin>", line 1, in <module>
# File "/Users/michal/venf/lib/python2.7/site-packages/lockfile/pidlockfile.py", line 92, in acquire
# time.sleep(timeout is not None and timeout/10 or 0.1)
# KeyboardInterrupt
So instead, use @ExploWare's exception catching.
# Wait only 5 seconds.
pidfile.timeout = 5
try:
    pidfile.acquire()
except lockfile.LockTimeout:
    print 'locked. Need to wait or move on.'
#
# locked. Need to wait or move on.
I found a nice way to handle this exception, so maybe it's helpful for you as well:
Add
from lockfile import LockTimeout
to the beginning of the script and surround daemon_runner.do_action() like this:
try:
    daemon_runner.do_action()
except LockTimeout:
    print "Error: couldn't acquire lock"
    # you can exit here or try something else