I'm seeing unexpected behaviour from the Linux cat command when it is called via subprocess.Popen().
The Python script is structured like this:
import os, subprocess

def _degrade_child_rights(user_uid, user_gid):
    def result():
        os.setgid(user_gid)
        os.setegid(user_gid)
        os.setuid(user_uid)
        os.seteuid(user_uid)
    return result

child = subprocess.Popen("cat /home/myuser/myfolder/screenlog.0",
                         preexec_fn=_degrade_child_rights(0, 0), shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
When I check the executed shell command with ps aux | grep cat, it shows that Python successfully ran the shell command:
> ps aux | grep cat
root 21236 0.0 0.0 6564 780 pts/1 S 20:49 0:00 /bin/sh -c cat /home/myuser/myfolder/screenlog.0
root 21237 0.0 0.0 11056 732 pts/1 S 20:49 0:00 cat /home/myuser/myfolder/screenlog.0
root 21476 0.0 0.0 15800 936 pts/1 S+ 20:52 0:00 grep --color=auto cat
However, the cat command never finishes.
I also moved the cat $file command out into a bash script. bash then executes my cat call, but it also blocks.
When I execute cat $file manually it runs as expected, so a missing EOF at the end of the file is ruled out as well.
I think the '/bin/sh -c' added by Popen somehow interferes with the correct execution of cat $file.
Can I somehow prevent this?
You might want to try the communicate method for the Popen object:
Popen.communicate(input=None)
Interact with process: Send data to
stdin. Read data from stdout and stderr, until end-of-file is reached.
Wait for process to terminate. The optional input argument should be a
string to be sent to the child process, or None, if no data should be
sent to the child.
communicate() returns a tuple (stdoutdata, stderrdata).
Note that if you want to send data to the process’s stdin, you need to
create the Popen object with stdin=PIPE. Similarly, to get anything
other than None in the result tuple, you need to give stdout=PIPE
and/or stderr=PIPE too.
Note The data read is buffered in memory, so do not use this method if
the data size is large or unlimited.
There is more info in the subprocess python doc
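Applied to the cat call from the question, a minimal sketch might look like this (same path and flags as in the question; preexec_fn omitted for brevity). The underlying problem is that with stdout=PIPE the child blocks as soon as the OS pipe buffer fills up unless the parent reads from it; communicate() does that reading and then reaps the child:
import subprocess

child = subprocess.Popen("cat /home/myuser/myfolder/screenlog.0",
                         shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# communicate() drains stdout until EOF and waits for the child,
# so cat cannot stall on a full pipe buffer
out, _ = child.communicate()
print(out)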
Related
I need some help with killing an application on Linux.
As a manual process I can use the command ps -ef | grep "app_name" | awk '{print $2}'.
It gives me the job IDs, and then I kill them using the command "kill -9 jobid".
I want a Python script which can do this task.
I have written this code:
import os
os.system("ps -ef | grep app_name | awk '{print $2}'")
This prints the job IDs, but os.system() only returns the exit status (an int), so I am not able to capture the IDs and kill the application.
Can you please help here?
Thank you
import os
import subprocess

# grep -v grep keeps the grep command itself out of the matches
temp = subprocess.run("ps -ef | grep 'app_name' | grep -v grep | awk '{print $2}'",
                      shell=True, stdout=subprocess.PIPE)
job_ids = temp.stdout.decode("utf-8").strip().split("\n")
# sample job_ids will be: ['59899', '68977', '68979']
# convert them to integers
job_ids = list(map(int, job_ids))
# job_ids = [59899, 68977, 68979]
Then iterate through the job ids and kill them. Use os.kill()
for job_id in job_ids:
    os.kill(job_id, 9)
Subprocess.run doc - https://docs.python.org/3/library/subprocess.html#subprocess.run
To kill a process in Python, call os.kill(pid, sig), with sig = 9 (signal number for SIGKILL) and pid = the process ID (PID) to kill.
To get the process ID, use os.popen instead of os.system above. Alternatively, use subprocess.Popen(..., stdout=subprocess.PIPE). In both cases, call the .readline() method, and convert the return value of that to an integer with int(...).
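A minimal sketch of that approach (the grep -v grep is an addition to keep the grep process itself out of the matches; it assumes exactly one matching process):
import os

# os.popen wraps the command's stdout in a file object,
# unlike os.system, which only returns the exit status
proc = os.popen("ps -ef | grep 'app_name' | grep -v grep | awk '{print $2}'")
pid = int(proc.readline())
os.kill(pid, 9)  # 9 is the signal number for SIGKILL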
I have an older Python 2.7.5 script which suddenly causes problems on Red Hat Enterprise Linux Server release 7.6 (Maipo). As far as I can see, it runs fine on Red Hat Enterprise Linux Server release 7.4 (Maipo).
The script basically implements something like
cat /proc/cpuinfo | grep -m 1 -i 'cpu MHz'
by creating two subprocesses and piping the output of the first into the second (see the code example below). On the newer OS version, the cat processes stay open until the script terminates.
It seems that the pipe to grep somehow holds the cat process open, and I can't find any documentation on how to explicitly close it.
The issue can be reproduced by pasting this code into the Python CLI and then checking the ps process list for a lingering 'cat /proc/cpuinfo' process.
The code is breaking down what's originally happening inside a loop, so please don't argue about its style. ;-)
import shlex
from subprocess import *
cmd1 = "cat /proc/cpuinfo"
cmd2 = "grep -m 1 -i 'cpu MHz'"
args1 = shlex.split(cmd1) # split into args
args2 = shlex.split(cmd2) # split into args
# first process uses default stdin
ps1 = Popen(args1, stdout=PIPE)
# then use the output of the previous process as stdin
ps2 = Popen(args2, stdin=ps1.stdout, stdout=PIPE)
out, err = ps2.communicate()
print(out)
Afterwards check the process list in a second session(!) with:
ps -eF |grep -v grep|grep /proc/cpuinfo
On RHEL 7.4 I find no leftover process in the process list, whereas on RHEL 7.6, after a few attempts, it looks like this:
[reinski#myhost ~]$ ps -eF |grep -v grep|grep /proc/cpuinfo
reinski 2422 89459 0 26993 356 142 18:46 pts/3 00:00:00 cat /proc/cpuinfo
reinski 2597 139605 0 26993 352 31 18:39 pts/3 00:00:00 cat /proc/cpuinfo
reinski 7809 139605 0 26993 352 86 18:03 pts/3 00:00:00 cat /proc/cpuinfo
These processes only disappear when I close the Python CLI, in which case I get errors like this (I left the formatting messed up as it was):
cat: write error: Broken pipe
cat: write errorcat: write error: Broken pipe
: Broken pipe
Why does cat apparently still want to write to the pipe, even though it should already have output the whole of /proc/cpuinfo and terminated?
Or more important: How can I prevent this from happening?
Thanks for any help!
Example 2:
Given the suggestion from VPfB, it turned out that my example was a little unlucky, since the expected result can be achieved with a single grep command.
So here is a modified example to show the problem with piping in another way:
import shlex
from subprocess import *
cmd1 = "grep -m 1 -i 'cpu MHz' /proc/cpuinfo"
cmd2 = "awk '{print $4}'"
args1 = shlex.split(cmd1) # split into args
args2 = shlex.split(cmd2) # split into args
# first process uses default stdin
ps1 = Popen(args1, stdout=PIPE)
# then use the output of the previous process as stdin
ps2 = Popen(args2, stdin=ps1.stdout, stdout=PIPE)
out, err = ps2.communicate()
print(out)
This time, the result is a single zombie process for the grep process (169731 is the pid of the python session):
[reinski#myhost ~]$ ps -eF|grep 169731
reinski 169731 189499 0 37847 6024 198 17:51 pts/2 00:00:00 python
reinski 193999 169731 0 0 0 142 17:53 pts/2 00:00:00 [grep] <defunct>
So, is this just another symptom of the same problem or am I doing something completely wrong here?
OK, it seems I just found a solution for the zombie processes left over by the examples:
Simply do a
ps1.communicate()
It seems this is required to close the pipe properly.
I'd have expected that to happen when the second process's communicate() is called and reads the pipe from the first process.
Can someone maybe point out what I am missing here?
I am always willing to learn... ;-)
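For what it's worth, the pattern the subprocess documentation recommends for replacing a shell pipeline is slightly different: the parent closes its own copy of the first process's stdout right after starting the second process (so the first process receives SIGPIPE if the second one exits early), and then reaps it explicitly. A sketch based on the docs, using the commands from the first example:
import shlex
from subprocess import Popen, PIPE

ps1 = Popen(shlex.split("cat /proc/cpuinfo"), stdout=PIPE)
ps2 = Popen(shlex.split("grep -m 1 -i 'cpu MHz'"), stdin=ps1.stdout, stdout=PIPE)
ps1.stdout.close()  # close the parent's copy; ps2 still holds the read end
out, err = ps2.communicate()
ps1.wait()          # reap cat so it cannot linger or turn into a zombie
print(out)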
This question already has answers here:
How do I use subprocess.Popen to connect multiple processes by pipes?
(9 answers)
Closed 7 years ago.
I want to run this command using subprocess.call:
ls -l folder | wc -l
My code in the Python file is:
subprocess.call(["ls","-l","folder","|","wc","-l"])
I got an error message like this:
ls: cannot access |: No such file or directory
ls: cannot access wc: No such file or directory
It's as if the | wc part can't be read by subprocess.call.
How can I fix it?
Try the shell option, using a string as the first parameter:
subprocess.call("ls -l folder | wc -l", shell=True)
Although this works, note that using shell=True is not recommended, since it can introduce a security issue through shell injection.
You can setup a command pipeline by connecting one process's stdout with another's stdin. In your example, errors and the final output are written to the screen, so I didn't try to redirect them. This is generally preferable to something like communicate because instead of waiting for one program to complete before starting another (and encouring the expense of moving the data into the parent) they run in parallel.
import subprocess
p1 = subprocess.Popen(["ls", "-l", "folder"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["wc", "-l"], stdin=p1.stdout)
# close the pipe in the parent; it's still open in the children
p1.stdout.close()
p2.wait()
p1.wait()
You'll need to implement the piping logic yourself to make it work properly.
def piped_call(prog1, prog2):
    # Use Popen, not call: subprocess.call() returns an exit code (an int),
    # which has no communicate() method.
    p1 = subprocess.Popen(prog1, stdout=subprocess.PIPE)
    out, err = p1.communicate()
    if err:
        print(err)
        return None
    else:
        p2 = subprocess.Popen(prog2, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        return p2.communicate(out)[0]
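Called, hypothetically, with the commands from the question:
print(piped_call(["ls", "-l", "folder"], ["wc", "-l"]))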
You could try using subprocess.PIPE, assuming you wanted to avoid using subprocess.call(..., shell=True).
import subprocess
# Run 'ls', sending output to a PIPE (shell equiv.: ls -l | ... )
ls = subprocess.Popen('ls -l folder'.split(),
stdout=subprocess.PIPE)
# Read output from 'ls' as input to 'wc' (shell equiv.: ... | wc -l)
wc = subprocess.Popen('wc -l'.split(),
stdin=ls.stdout,
stdout=subprocess.PIPE)
# Trap stdout and stderr from 'wc'
out, err = wc.communicate()
if err:
print(err.strip())
if out:
print(out.strip())
For Python 3, keep in mind that the communicate() method used here returns a bytes object instead of a string. In this case you will need to convert the output to a string using decode():
if err:
print(err.strip().decode())
if out:
print(out.strip().decode())
I have a Python script for testing:
test.py:
#coding=utf-8
import os
import time
print os.getpid()
Call it via subprocess.Popen:
import subprocess as sp
p = sp.Popen("python test.py", shell=True)
print p.pid
Different outputs from these two print statements are expected, as p.pid should be the PID of the spawned shell process, but the actual output is:
In [18]: p = sp.Popen("python test.py", shell=True)
In [19]: 19108
In [19]: p.pid
Out[19]: 19108
I believe you are on UNIX/Linux. If I may restate your question, I think you're asking, given
p = subprocess.Popen("python test.py", shell=True)
why is p.pid the same as that of the test.py process rather than that of the intervening shell, which shell you explicitly requested? That is, you expect the process genealogy to look like this:
python (calling subprocess.Popen) # pid 123
\_ /bin/sh -c 'python test.py' # pid 124
\_ python test.py # pid 125 # note: pids need not be sequential, that's just for demonstration
The answer is, your shell is making an optimization. The shell recognizes that it has been given a simple command and simply execves that command, replacing itself — but not its PID, of course — with the new process. So, the genealogy looks like this:
python (calling Popen) # pid 201
\_ /bin/sh -c ... --execve--> python test.py # pid 202
On Linux you can strace -fe trace=process ... to confirm this. You'll see the top-level python process fork (er, clone) and then the child will exec /bin/sh and then again python.
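A quick way to watch the intermediate shell survive is to hand it a command line it cannot simply exec, such as a compound command. A small sketch (the trailing ':' no-op builtin exists only to force the shell to fork for test.py):
import subprocess as sp

# With two commands the shell must stay alive to run the second one,
# so it forks for test.py instead of exec'ing over itself.
p = sp.Popen("python test.py; :", shell=True)
# p.pid is now the PID of /bin/sh itself, while test.py prints a
# different, child PID: the genealogy you originally expected.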
I have been writing some Python code and in my code I was using the "commands" module.
The code was working as I intended, but then I noticed in the Python docs that commands has been deprecated and will be removed in Python 3, and that I should use "subprocess" instead.
"OK," I think, "I don't want my code to go straight to legacy status, so I should change that right now."
The thing is that subprocess.Popen seems to prepend a nasty string to the start of any output, e.g.
<subprocess.Popen object at 0xb7394c8c>
All the examples I see have it there; it seems to be accepted as a given that it is always there.
This code;
#!/usr/bin/python
import subprocess
output = subprocess.Popen("ls -al", shell=True)
print output
produces this;
<subprocess.Popen object at 0xb734b26c>
brettg#underworld:~/dev$ total 52
drwxr-xr-x 3 brettg brettg 4096 2011-05-27 12:38 .
drwxr-xr-x 21 brettg brettg 4096 2011-05-24 17:40 ..
<trunc>
Is this normal? If I use it as part of a larger program that outputs various formatted details to the console it messes everything up.
I'm obtaining the IP address of an interface by running ifconfig along with various greps and awks to scrape the address.
Consider this code;
#!/usr/bin/python
import commands,subprocess
def new_get_ip(netif):
    address = subprocess.Popen("/sbin/ifconfig " + netif + " | grep inet | grep -v inet6 | awk '{print $2}' | sed 's/addr://'i", shell=True)
    return address

def old_get_ip(netif):
    address = commands.getoutput("/sbin/ifconfig " + netif + " | grep inet | grep -v inet6 | awk '{print $2}' | sed 's/addr://'i")
    return address
print "OLD IP is :",old_get_ip("eth0")
print ""
print "NEW IP is :",new_get_ip("eth0")
This returns;
brettg#underworld:~/dev$ ./IPAddress.py
OLD IP is : 10.48.16.60
NEW IP is : <subprocess.Popen object at 0xb744270c>
brettg#underworld:~/dev$ 10.48.16.60
Which is fugly to say the least.
Obviously I am missing something here. I am new to Python, of course, so I'm sure it is me doing something wrong, but various Google searches have been fruitless so far.
What if I want cleaner output? Do I have to manually trim the offending output or am I invoking subprocess.Popen incorrectly?
The "ugly string" is what it should be printing. Python is correctly printing out the repr(subprocess.Popen(...)), just like what it would print if you said print(open('myfile.txt')).
Furthermore, python has no knowledge of what is being output to stdout. The output you are seeing is not from python, but from the process's stdout and stderr being redirected to your terminal as spam, that is not even going through the python process. It's like you ran a program someprogram & without redirecting its stdout and stderr to /dev/null, and then tried to run another command, but you'd occasionally see spam from the program. To repeat and clarify:
<subprocess.Popen object at 0xb734b26c> <-- output of python program
brettg#underworld:~/dev$ total 52 <-- spam from the child process, not from python
drwxr-xr-x 3 brettg brettg 4096 2011-05-27 12:38 . <-- spam from the child process, not from python
drwxr-xr-x 21 brettg brettg 4096 2011-05-24 17:40 .. <-- spam from the child process, not from python
...
In order to capture stdout, you must use the .communicate() function, like so:
#!/usr/bin/python
import subprocess
output = subprocess.Popen(["ls", "-a", "-l"], stdout=subprocess.PIPE).communicate()[0]
print output
Furthermore, you never want to use shell=True, as it is a security hole (a major security hole with unsanitized inputs, a minor one with no input because it allows local attacks by modifying the shell environment). For security reasons and also to avoid bugs, you generally want to pass in a list rather than a string. If you're lazy you can do "ls -al".split(), which is frowned upon, but it would be a security hole to do something like ("ls -l %s"%unsanitizedInput).split().
See the subprocess module documentation for more information.
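Applied to new_get_ip from the question, a shell-free sketch would run only ifconfig as the child and do the grep/awk/sed text-munging in Python instead (a hypothetical rewrite; the exact ifconfig output format varies between versions):
import subprocess

def new_get_ip(netif):
    # run only ifconfig as a child process: no shell, no pipeline
    out = subprocess.Popen(["/sbin/ifconfig", netif],
                           stdout=subprocess.PIPE).communicate()[0]
    for line in out.decode().splitlines():
        fields = line.split()
        # mirrors: grep inet | grep -v inet6 | awk '{print $2}' | sed 's/addr://'
        if fields and fields[0] == "inet":
            return fields[1].replace("addr:", "")
    return None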
Here is how to get stdout and stderr from a program using the subprocess module:
from subprocess import Popen, PIPE, STDOUT
cmd = 'echo Hello World'
p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
output = p.stdout.read()
print output
results:
b'Hello\r\n'
You can also run commands with PowerShell and see the results:
from subprocess import Popen, PIPE, STDOUT
cmd = 'powershell.exe ls'
p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
output = p.stdout.read()
The variable output does not contain a string; it is a subprocess.Popen object (a handle to the running process). You don't need to print it. The code
import subprocess
output = subprocess.Popen("ls -al", shell=True)
works perfectly, but without the ugly <subprocess.Popen object at 0xb734b26c> being printed.