I am calling this piece of code, but it produces output in the console where I ran the Python script (due to the tee command):
os.system("echo 3 | sudo tee /proc/sys/vm/drop_caches")
This version does not produce console output, but is there another way?
os.system('sudo bash -c "echo 3 > /proc/sys/vm/drop_caches"')
To answer the question based on its title in the most generic form:
To suppress all output from os.system(), append >/dev/null 2>&1 to the shell command, which silences both stdout and stderr; e.g.:
import os
os.system('echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null 2>&1')
Note that os.system() by design passes the command's output straight through to the calling process's stdout and stderr (i.e., the terminal) - your Python code never sees it.
Also, os.system() does not raise an exception if the shell command fails; it returns an exit status instead. Note that it takes additional work to extract the command's true exit code: you need the high byte of the 16-bit value returned, obtained by applying >> 8 (although any return value other than 0 implies an error condition).
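For example, a minimal sketch of that extraction (assuming a POSIX system, where the high byte of the wait status holds the exit code):
import os

status = os.system('echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null 2>&1')
exit_code = status >> 8  # high byte is the command's exit code on POSIX systems
if exit_code != 0:
    print("command failed with exit code", exit_code)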
Given the above limitations of os.system(), it is generally worthwhile to use the functions in the subprocess module instead:
For instance, subprocess.check_output() could be used as follows:
import subprocess
subprocess.check_output('echo 3 | sudo tee /proc/sys/vm/drop_caches', shell=True)
The above will:
capture stdout output and return it (with the return value being ignored in the example above)
pass stderr output through; passing stderr=subprocess.STDOUT as an additional argument would also capture stderr.
raise an error if the shell command fails (see the sketch below).
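A minimal sketch of those points, capturing stderr along with stdout and handling a failing command (same drop_caches command as above, assumed to run where the sudo credentials are cached):
import subprocess

try:
    # stderr=subprocess.STDOUT folds stderr into the captured output
    out = subprocess.check_output('echo 3 | sudo tee /proc/sys/vm/drop_caches',
                                  shell=True, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    print("command failed with exit code", e.returncode)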
Note: Python 3.5 introduced subprocess.run(), a more flexible successor to both os.system() and subprocess.check_output() - see https://docs.python.org/3.5/library/subprocess.html#using-the-subprocess-module
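As a sketch of what that looks like for the command above (Python 3.5+), suppressing all output while still raising on failure:
import subprocess

# DEVNULL discards both streams; check=True raises CalledProcessError on a non-zero exit
subprocess.run('echo 3 | sudo tee /proc/sys/vm/drop_caches',
               shell=True, check=True,
               stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)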
Note:
The reason that the OP is employing tee in the first place - despite not being interested in stdout output - is that a naïve attempt to use > ... instead would be interpreted before sudo is invoked, and thus fail, because the required privileges to write to /proc/sys/... haven't been granted yet.
Whether you're using os.system() or a subprocess function, stdin is not affected by default, so if you're invoking your script from a terminal, you'll get an interactive password prompt when the sudo command is encountered (unless the credentials have been cached).
Write directly to the /proc pseudo-file via Python's I/O facilities instead.
This will require your script to run as root (via sudo), which means you should limit its scope to being an admin-only tool. It also allows the script to run on boxes where sudo requires a password.
Example:
with open("/proc/sys/vm/drop_caches", "w") as drop_caches:
    drop_caches.write("3")
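A small guard corresponding to the run-as-root requirement mentioned above (a sketch; it simply turns the otherwise-cryptic PermissionError from open() into a clear message):
import os

# refuse to continue without root rather than failing later on the open() call
if os.geteuid() != 0:
    raise SystemExit("this tool must be run as root (e.g. via sudo)")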
You also can use this simple method from the subprocess module:
import subprocess

command = 'echo 3 | sudo tee /proc/sys/vm/drop_caches'
# the pipe needs a shell; shlex.split() with a plain check_call() would hand "|" to echo as a literal argument
subprocess.check_call(command, shell=True, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL)
All output is sent to DEVNULL. Any issue with the command is reported via an exception; no issues means no output.
Comment: you forgot to add stderr:
subprocess.check_call(command, shell=True, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
I believe the easiest way to hide the console output when it cannot be done with os.system() is to use os.popen() (note that this captures only stdout; stderr still reaches the terminal):
os.popen("echo 3 | sudo tee /proc/sys/vm/drop_caches")
Related question:
I use Python 3.10.7 and I am trying to get the Python interpreter to run this command:
rg mysearchterm /home/user/stuff
When I run this command directly in bash, it successfully runs ripgrep and recursively searches the directory /home/user/stuff for the term mysearchterm. However, I'm trying to do this programmatically with Python's subprocess.Popen(), and I am running into issues:
from subprocess import Popen, PIPE
proc1 = Popen(["rg", "term", "/home/user/stuff", "--no-filename"],stdout=PIPE,shell=True)
proc2 = Popen(["wc","-l"],stdin=proc1.stdin,stdout=PIPE,shell=True)
#Note: I've also tried it like below:
proc1 = Popen(f"rg term /home/user/stuff --no-filename",stdout=PIPE,shell=True)
proc2 = Popen("wc -l",stdin=proc1.stdin,stdout=PIPE,shell=True)
result, _ = proc2.communicate()
print(result.decode())
What happened here was bizarre to me; I got an error (from rg itself) which says:
error: The following required arguments were not provided:
<PATTERN>
So, using my debugging/tracing skills, I looked at the process chain and saw that the Python interpreter itself was running:
python3 1921496 953810 0 /usr/bin/python3 ./debug_script.py
sh 1921497 1921496 0 /bin/sh -c rg term /home/user/stuff --no-filename
sh 1921498 1921496 0 /bin/sh -c wc -l
So my next thought was to try running that manually in bash, which led to the same error. However, when I run /bin/sh -c "rg term /home/user/stuff --no-filename" in bash with the double quotes, the command works, yet when I try to do the same programmatically with Popen() it again doesn't work, even when I try to escape the quotes with \. This time, I get errors about unexpected EOF.
As for the behavior when shell=True is specified, the Python documentation says:
If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional arguments to the shell itself. That is to say, Popen does the equivalent of:
Popen(['/bin/sh', '-c', args[0], args[1], ...])
Then your command invocation is equivalent to:
/bin/sh -c "rg" "term" "/home/tshiono/stackoverflow/221215" ...
where no arguments are fed to rg.
You need to pass the command as a string (not a list) or just drop shell=True.
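A sketch of both fixes, reusing the paths from the question; note that the pipe between the two processes also needs proc1.stdout (the question's code connects stdin=proc1.stdin):
from subprocess import Popen, PIPE

# Option 1: drop shell=True and wire the pipeline explicitly
proc1 = Popen(["rg", "term", "/home/user/stuff", "--no-filename"], stdout=PIPE)
proc2 = Popen(["wc", "-l"], stdin=proc1.stdout, stdout=PIPE)
proc1.stdout.close()  # allow rg to receive SIGPIPE if wc exits first
result, _ = proc2.communicate()
print(result.decode())

# Option 2: keep shell=True and pass the whole pipeline as one string
proc = Popen("rg term /home/user/stuff --no-filename | wc -l", shell=True, stdout=PIPE)
print(proc.communicate()[0].decode())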
I have a script where I need to start a command, then pass some additional commands as commands to that command. I tried
su
echo I should be root now:
who am I
exit
echo done.
... but it doesn't work: the su succeeds, but then the command prompt is just staring at me. If I type exit at the prompt, the echo and who am I etc. start executing! And the echo done. doesn't get executed at all.
Similarly, I need for this to work over ssh:
ssh remotehost
# this should run under my account on remotehost
su
## this should run as root on remotehost
whoami
exit
## back
exit
# back
How do I solve this?
I am looking for answers which solve this in a general fashion, and which are not specific to su or ssh in particular. The intent is for this question to become a canonical for this particular pattern.
Adding to tripleee's answer:
It is important to remember that the section of the script formatted as a here-document for another shell is executed in a different shell with its own environment (and maybe even on a different machine).
If that block of your script contains parameter expansion, command substitution, and/or arithmetic expansion, then you must use the here-document facility of the shell slightly differently, depending on where you want those expansions to be performed.
1. All expansions must be performed within the scope of the parent shell.
Then the delimiter of the here document must be unquoted.
command <<DELIMITER
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=leon
a=0
mylogin=leon
2. All expansions must be performed within the scope of the child shell.
Then the delimiter of the here document must be quoted.
command <<'DELIMITER'
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<'END'
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=1
mylogin=root
a=0
mylogin=leon
3. Some expansions must be performed in the child shell, some - in the parent.
Then the delimiter of the here document must be unquoted and you must escape those expansion expressions that must be performed in the child shell.
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=\$(whoami)
echo a=$a
echo mylogin=\$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=root
a=0
mylogin=leon
A shell script is a sequence of commands. The shell will read the script file, and execute those commands one after the other.
In the usual case, there are no surprises here; but a frequent beginner error is assuming that some commands will take over from the shell, and start executing the following commands in the script file instead of the shell which is currently running this script. But that's not how it works.
Basically, scripts work exactly like interactive commands, but how exactly they work needs to be properly understood. Interactively, the shell reads a command (from standard input), runs that command (with input from standard input), and when it's done, it reads another command (from standard input).
Now, when executing a script, standard input is still the terminal (unless you used a redirection) but the commands are read from the script file, not from standard input. (The opposite would be very cumbersome indeed - any read would consume the next line of the script, cat would slurp all the rest of the script, and there would be no way to interact with it!) The script file only contains commands for the shell instance which executes it (though you can of course still use a here document etc to embed inputs as command arguments).
In other words, these "misunderstood" commands (su, ssh, sh, sudo, bash etc) when run alone (without arguments) will start an interactive shell, and in an interactive session, that's obviously fine; but when run from a script, that's very often not what you want.
All of these commands have ways to accept commands by ways other than in an interactive terminal session. Typically, each command supports a way to pass it commands as options or arguments:
su root -c 'who am i'
ssh user@remote uname -a
sh -c 'who am i; echo success'
Many of these commands will also accept commands on standard input:
printf 'uname -a; who am i; uptime' | su
printf 'uname -a; who am i; uptime' | ssh user@remote
printf 'uname -a; who am i; uptime' | sh
which also conveniently allows you to use here documents:
ssh user@remote <<'____HERE'
uname -a
who am i
uptime
____HERE
sh <<'____HERE'
uname -a
who am i
uptime
____HERE
For commands which accept a single command argument, that command can be sh or bash with multiple commands:
sudo sh -c 'uname -a; who am i; uptime'
As an aside, you generally don't need an explicit exit because the command will terminate anyway when it has executed the script (sequence of commands) you passed in for execution.
If you want a generic solution which will work for any kind of program, you can use the expect command.
Extract from the manual page:
Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be. An interpreted language provides branching and high-level control structures to direct the dialogue. In addition, the user can take control and interact directly when desired, afterward returning control to the script.
Here is a working example using expect:
set timeout 60
spawn sudo su -
expect "*?assword" { send "*secretpassword*\r" }
send_user "I should be root now:"
expect "#" { send "whoami\r" }
expect "#" { send "exit\r" }
send_user "Done.\n"
exit
The script can then be launched with a simple command:
$ expect -f custom.script
You can view a full example in the following page: http://www.journaldev.com/1405/expect-script-example-for-ssh-and-su-login-and-running-commands
Note: The answer proposed by @tripleee would only work if standard input could be read once at the start of the command, or if a tty had been allocated, and won't work for any interactive program.
Example of errors if you use a pipe
echo "su whoami" |ssh remotehost
--> su: must be run from a terminal
echo "sudo whoami" |ssh remotehost
--> sudo: no tty present and no askpass program specified
In SSH, you might force a TTY allocation with multiple -t parameters, but when sudo asks for the password, it will fail.
Without a program like expect, any call to a function/program that reads from stdin will make the next command fail:
ssh user@host <<'____HERE'
echo "Enter your name:"
read name
echo "ok."
____HERE
--> the line `echo "ok."` will be consumed by the read command as its input
If I run this command in an Ubuntu shell:
debconf-set-selections <<< 'postfix postfix/mailname string server.exmaple.com'
It runs successfully, but if I run it via python:
>>> from subprocess import run
>>> run("debconf-set-selections <<< 'postfix postfix/mailname string server.exmaple.com'", shell=True)
/bin/sh: 1: Syntax error: redirection unexpected
CompletedProcess(args="debconf-set-selections <<< 'postfix postfix/mailname string server.exmaple.com'", returncode=2)
I don't understand why the shell that Python invokes is choking on the redirection. How does one make the command run successfully so one can script the installation of an application (e.g. postfix in this case) via Python rather than a normal bash script?
I have tried various forms with double and single quotes (as recommended in other posts), with no success.
subprocess uses /bin/sh as the shell, and presumably your system's /bin/sh does not support here-strings (<<<), hence the error.
From subprocess source:
if shell:
    # On Android the default shell is at '/system/bin/sh'.
    unix_shell = ('/system/bin/sh' if
                  hasattr(sys, 'getandroidapilevel') else '/bin/sh')
You can run the command as an argument to any shell that supports here-strings, e.g. bash:
run('bash -c "debconf-set-selections <<< \"postfix postfix/mailname string server.exmaple.com\""', shell=True)
Be careful with the quoting.
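One way to sidestep the nested quoting entirely (a sketch) is to pass bash and its -c argument as a list, so that no outer shell re-parses the string:
from subprocess import run

run(['bash', '-c',
     "debconf-set-selections <<< 'postfix postfix/mailname string server.exmaple.com'"],
    check=True)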
Or, better, you can stay POSIX and use echo and a pipe to pass the data via stdin:
run("echo 'postfix postfix/mailname string server.exmaple.com' | debconf-set-selections", shell=True)
I have the following function that is used to execute system commands in Python:
def engage_command(
    command = None
):
    #os.system(command)
    return os.popen(command).read()
I am using the os module instead of the subprocess module because I am dealing with a single environment in which I am interacting with many environment variables etc.
How can I use Bash with this type of function instead of the default sh shell?
output = subprocess.check_output(command, shell=True, executable='/bin/bash')
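For instance, the question's helper could be adapted along these lines (a sketch keeping the same interface):
import subprocess

def engage_command(command=None):
    # run the command under bash instead of the default /bin/sh
    return subprocess.check_output(command, shell=True,
                                   executable='/bin/bash').decode()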
os.popen() is implemented in terms of the subprocess module.
I am dealing with a single environment in which I am interacting with many environment variables etc.
Each os.popen(cmd) call creates a new /bin/sh process to run the cmd shell command.
Perhaps it is not obvious from the os.popen() documentation, which says:
Open a pipe to or from command cmd
"Open a pipe" does not clearly communicate "start a new shell process with a redirected standard input or output" -- you could report a documentation issue.
If there is any doubt, the source confirms that each successful os.popen() call creates a new child process,
and the child can't modify its parent process's environment (normally).
Consider:
import os
#XXX BROKEN: it won't work as you expect
print(os.popen("export VAR=value; echo ==$VAR==").read())
print(os.popen("echo ==$VAR==").read())
Output:
==value==
====
==== means that $VAR is empty in the second command because the second command runs in a different /bin/sh process from the first one.
To run several bash commands inside a single process, put them in a script or pass them as a single string:
output = check_output("\n".join(commands), shell=True, executable='/bin/bash')
Example:
#!/usr/bin/env python
from subprocess import check_output
output = check_output("""
export VAR=value; echo ==$VAR==
echo ==$VAR==
""", shell=True, executable='/bin/bash')
print(output.decode())
Output:
==value==
==value==
Note: $VAR is not empty here.
If you need to generate new commands dynamically (based on the output of previous commands), that creates several issues, some of which can be addressed with the pexpect module.
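A rough sketch of that pexpect approach, assuming a single long-lived bash process whose state (such as exported variables) survives between commands:
import pexpect

# one persistent interactive bash; feed it commands one at a time
child = pexpect.spawn('/bin/bash', encoding='utf-8')
child.sendline('export VAR=value')
child.sendline('echo ==$VAR==')
child.expect('==value==')   # wait until the previous command has produced its output
child.sendline('exit')
child.close()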