Why am I getting a list of files when executing this command?
subprocess.check_call("time ls &>/dev/null", shell=True)
If I paste
time ls &>/dev/null
into the console, I just get the timings.
The OS is Ubuntu Linux.
On Debian-like systems, /bin/sh (the shell that subprocess uses with shell=True) is dash, not bash. Dash does not support the &> shortcut: it parses ls &>/dev/null as "run ls in the background, then redirect an empty command to /dev/null", which is why the listing still appears. To get only the subprocess return code, try:
subprocess.check_call("time ls >/dev/null 2>&1", shell=True)
To get the subprocess return code and the timing information but not the directory listing, use:
subprocess.check_call("time ls >/dev/null", shell=True)
Apart from the subprocess return code, of course, this is the same behavior you would see at a dash prompt.
The Python version is running under sh, but the console version is running in whatever your default shell is, which is probably either bash or dash. (Your sh may actually be a different shell running in POSIX-compliant mode, but that doesn't make any difference.)
Both bash and dash have builtin time functions, but sh doesn't, so you get /usr/bin/time, which is a normal program. The most important difference this makes is that the time builtin is not running as a subprocess with its own independent stdout and stderr.
Also, sh, bash, and dash all have different redirection syntax.
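If you want to see which time each shell actually runs, here is a quick sketch (an assumption: Python 3.5+ and that these shells are installed):

import subprocess

# Ask each shell what "time" resolves to: a keyword/builtin answer means
# the shell handles it itself; a path means /usr/bin/time runs as a
# separate subprocess with its own stdout and stderr.
for shell in ('/bin/sh', '/bin/bash', '/bin/dash'):
    out = subprocess.run([shell, '-c', 'type time'],
                         stdout=subprocess.PIPE, universal_newlines=True)
    print(shell, '->', out.stdout.strip())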
But what you're trying to do seems wrong in the first place, and you're just getting lucky on the console because two mistakes are canceling out.
You want to get rid of the stdout of ls but keep the stderr of time, but that's not what you asked for. You're trying to redirect both stdout and stderr: that's what &> means in any shell that actually supports it.
So why are you still getting the time stderr? Either (a) your default shell doesn't support &>, or (b) you're using the builtin instead of the program, so the timing report goes to the stderr of the shell itself, which you're not redirecting, or maybe (c) both of the above.
If you really want to do exactly the same thing in Python, with the exact same bugs canceling out in the exact same way, you can run your default shell manually instead of using shell=True. Depending on which reason it was working, that would be either this:
import os
subprocess.check_call([os.environ['SHELL'], '-c', 'time ls &> /dev/null'])
or this:
subprocess.check_call('{} -c "time ls &> /dev/null"'.format(os.environ['SHELL']), shell=True)
But really, why are you doing this at all? If you want to redirect stdout and not stderr, write that:
subprocess.check_call('time ls > /dev/null', shell=True)
Or, better yet, why are you even using the shell in the first place?
subprocess.check_call(['time', 'ls'], stdout=subprocess.DEVNULL)
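And since /usr/bin/time writes its report to stderr, you can capture just the timing while discarding the listing. A sketch, assuming Python 3.5+ and that the standalone time program is installed:

import subprocess

# The ls output goes to /dev/null; the timing report arrives on stderr.
res = subprocess.run(['time', 'ls'], stdout=subprocess.DEVNULL,
                     stderr=subprocess.PIPE, universal_newlines=True)
print(res.stderr)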
Related
I have a Python script which runs bash scripts via the subprocess library. I need to collect stdout and stderr into files, so I have a wrapper like:
def execute_chell_script(stage_name, script):
    subprocess.check_output('{} &>logs/{}'.format(script, stage_name), shell=True)
And it works correctly when I launch my Python script on a Mac. But if I launch it in a Docker container (FROM ubuntu:18.04) I can't see any log files. I can fix it by using bash -c 'command &>log_file' instead of just command &>log_file inside subprocess.check_output(...). But that looks like too much magic.
I thought about the default shell of the user who launches the Python script (it's root), but cat /etc/passwd shows root ... /bin/bash.
It would be nice if someone could explain to me what happened. And maybe I can add some lines to the Dockerfile so that the same Python script works both inside and outside the Docker container?
As the OP reported in a comment that this fixed their problem, I'm posting it as an answer so they can accept it.
Using check_output when you don't expect any output is weird, and requiring shell=True here is misdirected. You want:
with open(os.path.join('logs', stage_name), 'w') as output:
    subprocess.run([script], stdout=output, stderr=output)
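Putting it together, a minimal self-contained sketch of the wrapper (the function name and the logs directory come from the question; the logs directory is assumed to already exist):

import os
import subprocess

def execute_chell_script(stage_name, script):
    # Both streams go to the same log file; no shell is involved,
    # so no shell-specific &> syntax to worry about.
    with open(os.path.join('logs', stage_name), 'w') as output:
        subprocess.run([script], stdout=output, stderr=output)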
I am running a long, time-consuming number-crunching process in the shell with a Python script. In the script, to indicate progress, I have inserted occasional print calls like
#!/usr/bin/env python3
#encoding:utf-8
print('Stage 1 completed')
I trigger the script in the shell by
user@hostname:~/WorkingDirectory$ chmod 744 myscript.py && nohup ./myscript.py &
It redirects the output to nohup.out, but I cannot see the output until the entire script is done, probably because of stdout buffering. So in this scenario, how do I adjust the buffering parameters so I can check the progress periodically? Basically, I want zero buffering, so that as soon as a print call is issued in the Python script, it appears in nohup.out. Is that possible?
I know it is a rookie question, and in addition to the exact solution, any easy-to-follow reference to the relevant material (which will help me master the buffering aspects of the shell without getting into deeper kernel or hardware levels) will be greatly appreciated too.
If it is important, I am using #54~16.04.1-Ubuntu on x86_64
Python is optimised for reading in and printing out lots of data.
So standard input and output of the Python interpreter are buffered by default.
We can override this behavior in a few ways:
use the python interpreter with the -u option.
From man python:
-u     Force stdin, stdout and stderr to be totally unbuffered. On systems where it matters, also put stdin, stdout and stderr in binary mode. Note that there is internal buffering in xreadlines(), readlines() and file-object iterators ("for line in sys.stdin") which is not influenced by this option. To work around this, you will want to use "sys.stdin.readline()" inside a "while 1:" loop.
Run the script in the shell:
nohup python -u ./myscript.py&
Or modify the shebang line of the script to #!/usr/bin/python -u and then run:
nohup ./myscript.py&
use the shell command stdbuf to turn off stream buffering.
See man stdbuf.
Set an unbuffered stream for output:
stdbuf --output=0 nohup ./myscript.py&
Set unbuffered streams for output and errors:
stdbuf -o0 -e0 nohup ./myscript.py&
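A third option, not mentioned in the answer above but worth noting: in Python 3, print accepts a flush argument, so the script itself can push each progress line out immediately:

#!/usr/bin/env python3
# encoding: utf-8

# flush=True forces the line through Python's buffer right away,
# so it shows up in nohup.out as soon as it is printed.
print('Stage 1 completed', flush=True)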
I am trying to use w3mimgdisplay to display images in the terminal, and was looking at the source code of the Ranger file manager. The file I was looking at can be found here.
Using this, I made a simple program.
import curses
from subprocess import Popen, PIPE
process = Popen("/usr/libexec/w3m/w3mimgdisplay",
                stdin=PIPE, stdout=PIPE, universal_newlines=True)
process.stdin.write("echo -e '0;1;100;100;400;320;;;;;picture.jpg\n4;\n3;'")
process.stdin.flush()
process.stdout.readline()
process.kill()
Whenever I enter
echo -e '0;1;100;100;400;320;;;;;picture.jpg\n4;\n3;' | /usr/libexec/w3m/w3mimgdisplay
into the terminal, it prints the image properly; however, running the Python script does nothing. How can I write the output of the program to the terminal?
The shell echo command adds a newline to the end of its output (unless you use the -n switch, which you didn't), so you need to mimic that by adding a newline at the end of your command too.
Also, you should write the string contents, not the echo command itself, because this is being sent directly to the w3mimgdisplay process, not to the shell.
I'm also unsure about the readline call. I suggest using .communicate() instead, because it makes sure you don't get into a rare but possible read/write buffer race condition. Or, best of all, use the simpler subprocess.run() directly:
import subprocess

subprocess.run(["/usr/libexec/w3m/w3mimgdisplay"],
               input=b'0;1;100;100;400;320;;;;;picture.jpg\n4;\n3;\n')
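For reference, the Popen-plus-communicate() variant mentioned above would look roughly like this (a sketch):

from subprocess import Popen, PIPE

process = Popen(["/usr/libexec/w3m/w3mimgdisplay"], stdin=PIPE, stdout=PIPE)
# communicate() writes the input, closes stdin, and reads until EOF,
# which avoids the read/write deadlock a manual readline can hit.
out, _ = process.communicate(b'0;1;100;100;400;320;;;;;picture.jpg\n4;\n3;\n')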
I am calling a bash script in python (3.4.3) using subprocess:
import subprocess as sp
res = sp.check_output("myscript", shell=True)
and myscript contains a line:
ps -ef | egrep somecommand
It was not giving the same result as when myscript is called directly in a bash shell window. After much tinkering, I realized that when myscript is called from Python, the stdout of "ps -ef" is truncated to the current $COLUMNS value of the shell window before being piped to "egrep". To me, this is crazy: simply by resizing the shell window, the command can give different results!
I managed to "solve" the problem by passing env argument to the subprocess call to specify a wide enough COLUMNS:
res = sp.check_output("myscript", shell=True, env={'COLUMNS':'100'})
However, this looks very dirty to me, and I don't understand why the truncation only happens in a Python subprocess but not in a bash shell. Frankly, I'm amazed that this behavior isn't documented in the official Python docs, unless it's in fact a bug (I am using Python 3.4.3). What is the proper way of avoiding this strange behavior?
You should use -ww. From man ps:
-w     Wide output. Use this option twice for unlimited width.
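So the fix is to change the line in myscript to ps -efww | egrep somecommand. Alternatively, here is a sketch that sidesteps the shell and the window-size question entirely by building the pipeline in Python:

import subprocess as sp

# With -ww, ps ignores the terminal width, so the output no longer
# depends on $COLUMNS. Note that egrep exits non-zero when nothing
# matches, which check_output reports as CalledProcessError.
ps = sp.Popen(['ps', '-efww'], stdout=sp.PIPE)
res = sp.check_output(['egrep', 'somecommand'], stdin=ps.stdout)
ps.stdout.close()
ps.wait()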
I have to make graphs from several data files. I already found a way to run a simple command
xmgrace -batch batch.bfile -nosafe -hardcopy
in which batch.bfile is a text file with Grace commands to print the graph I want. I already tried it manually and it works perfectly. To do this for several files, I just have to edit one parameter inside batch.bfile and run the same command after each change.
I have already written Python code which edits batch.bfile and goes through all the data files in a for loop. In each iteration I want to run the mentioned command directly on the command line.
After searching a bit I found two solutions, one with os.system() and another with subprocess.Popen(), and I could only make subprocess.Popen() work without errors by writing:
subprocess.Popen("xmgrace -batch batch.bfile -nosafe -hardcopy", shell=True)
The problem is, this doesn't do anything in practice, i.e., it just isn't the same as running the command directly on the command line. I already tried writing the full path to batch.bfile, but nothing changed.
I am using Python 2.7 and Mac OS 10.7
Have you checked running xmgrace from the command line using sh? (i.e. invoke /bin/sh, then run xmgrace..., which should be the same shell that Popen uses when you set shell=True).
Another solution would be to create a shell script (create a file like myscript.sh and make it executable with chmod +x from the terminal). In the script, call xmgrace:
#!/bin/bash
xmgrace -batch batch.bfile -nosafe -hardcopy
You could then test that myscript.sh works on its own; it ought to pick up any environment variables set in your profile that might differ from Python's. If it works, you could call the script from Python with subprocess.Popen('./myscript.sh'). You can check which environment variables are set for the subprocess by running:
import os
os.environ
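To compare that against what your login shell sees, a quick sketch (hypothetical check; assumes $SHELL is set, as it normally is in a terminal session):

import os
import subprocess

# The PATH a login shell sets up vs. the PATH this Python process
# (and therefore Popen) already has.
shell_path = subprocess.check_output(
    [os.environ['SHELL'], '-lc', 'echo $PATH'], universal_newlines=True)
print(shell_path.strip())
print(os.environ['PATH'])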
You may want to check out http://sourceforge.net/projects/graceplot/
When you use Popen, you can capture the application's stdout and stderr and print them within your application; this way you can see what is happening:
from subprocess import Popen, PIPE

# The command from the question, split into an argument list:
reportParameters = ['xmgrace', '-batch', 'batch.bfile', '-nosafe', '-hardcopy']

ps = Popen(reportParameters, bufsize=512, stdout=PIPE, stderr=PIPE)
if ps:
    while 1:
        stdout = ps.stdout.readline()
        stderr = ps.stderr.readline()
        exitcode = ps.poll()
        if (not stdout and not stderr) and (exitcode is not None):
            break
        if stdout:
            stdout = stdout[:-1]  # strip the trailing newline
            print stdout
        if stderr:
            stderr = stderr[:-1]
            print stderr