How can I pass a Python variable to os.system - python

I am trying to pass the output of one os.system() to another os.system().
However, I am getting no output.
import os

user = os.system("whoami")
print(user)
box = os.system("docker ps -a --format \"{{.Names}}\" | grep user")
print(box)
Output:
xvision
256

There are a couple of issues with the posted code:
The user in the grep command is a literal string, not the variable I believe you are intending to use.
The return value from os.system is simply the exit status of the command, not the output you are looking to retrieve.
If I'm not mistaken, docker will require elevated permissions to execute the ps command. Perhaps your visudo is set up differently than mine, which allows the command - but something to be aware of.
Additionally, the first system call to get the username is unneeded, as a shell substitution can be used instead: grep $( whoami ). However, if you are expecting a different username on the docker system, you can build the pipe with an f-string:
f'| grep {user}'
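For instance, a minimal sketch of building the whole command string; user is a hypothetical variable holding the name to match, and the doubled braces are needed because f-strings treat { and } specially:
user = "xvision"  # hypothetical: the name expected in the docker output
cmd = f'docker ps -a --format "{{{{.Names}}}}" | grep {user}'
# cmd is now: docker ps -a --format "{{.Names}}" | grep xvision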
Instead, the subprocess library should be used here as you can retrieve the values from the subprocess call.
For example:
import subprocess
# Note: sudo might be optional in your case, depending on setup.
rtn = subprocess.Popen('sudo docker ps -a --format {{.Names}} | grep $( whoami )',
                       shell=True,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE).communicate()
>>> rtn
(b'username\n', b'')  # <-- tuple of (stdout, stderr); stderr is empty bytes, not None, when piped
Getting the username:
value = rtn[0].decode().strip()
>>> value
'username'
A note on the shell call:
Some might argue that 'shell should be avoided'. However, in this case I'm choosing to use it for the following reasons:
To make the command string a bit easier to read for the OP.
To (more easily) facilitate the pipe into grep.
Without the pipe into grep, the command could be split into a list of arguments, alleviating the need for the shell call (see the sketch below).
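A minimal sketch of that shell-free variant, filtering in Python instead of grep; it assumes Python 3.7+ for subprocess.run's capture_output, and that user holds the name to match:
import subprocess

# 'user' is assumed to hold the username to match (see above)
out = subprocess.run(['docker', 'ps', '-a', '--format', '{{.Names}}'],
                     capture_output=True, text=True).stdout
names = [line for line in out.splitlines() if user in line]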

Related

Call python script as module with input from bash script

From a bash function, I want to call a python script which prompts for input, and I need to run that script as a module using python -m
Here is select_pod.py
# above this will be a print out of pods
pod = input('Pick pod')
print(pod)
Here is the bash function:
function foo() {
    POD=$(python3 -m select_pod)
    kubectl exec $POD --stdin --tty bash
}
I can't get the input to work, i.e. "Pick pod" is not printed to the terminal.
When you do POD=$(python3 -m select_pod), the POD=$(...) construct captures any output printed to stdout within the parentheses into the POD variable instead of letting it reach the screen. Simply echoing POD afterwards is no good, as that only happens once the Python script has finished.
What you need to do is to duplicate the output of the Python program. Assuming Linux/Posix, this can be done using e.g.
POD=$(python3 -m select_pod | tee /dev/stderr)
Because your terminal shows both stdout and stderr, duplicating the output from stdout to stderr makes the text show up.
Hijacking the error channel for this might not be ideal, e.g. if you want to later sort the error messages using something like 2> .... A different solution is to just duplicate it directly to the tty:
POD=$(python3 -m select_pod | tee /dev/tty)
You can change sys.stdout before calling input:
import sys
save_sys_stdout = sys.stdout
sys.stdout = sys.stderr
pod = input('Pick pod')
sys.stdout = save_sys_stdout
print(pod)
So that POD=$(python3 -m select_pod) will work and you don't need to split anything afterwards.
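An equivalent sketch using contextlib.redirect_stdout (Python 3.4+), which restores sys.stdout automatically even if input() raises:
import contextlib
import sys

# Route the prompt to stderr just for the duration of the input() call
with contextlib.redirect_stdout(sys.stderr):
    pod = input('Pick pod')
print(pod)  # only this line reaches stdout, so POD=$(...) captures just the pod name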

How does subprocess.call() work with shell=False?

I am using Python's subprocess module to call some Linux command line functions. The documentation explains the shell=True argument as
If shell is True, the specified command will be executed through the shell
There are two examples, which seem the same to me from a descriptive viewpoint (i.e. both of them call some command-line command), but one of them uses shell=True and the other does not
>>> subprocess.call(["ls", "-l"])
0
>>> subprocess.call("exit 1", shell=True)
1
My question is:
What does running the command with shell=False do, in contrast to shell=True?
I was under the impression that subprocess.call and check_call and check_output all must execute the argument through the shell. In other words, how can it possibly not execute the argument through the shell?
It would also be helpful to get some examples of:
Things that can be done with shell=True that can't be done with shell=False, and why they can't be done.
Vice versa (although it seems that there are no such examples).
Things for which it does not matter whether shell=True or False, and why it doesn't matter.
UNIX programs start each other with the following three calls, or derivatives/equivalents thereto:
fork() - Create a new copy of yourself.
exec() - Replace yourself with a different program (do this if you're the copy!).
wait() - Wait for another process to finish (optional, if not running in background).
Thus, with shell=False, you do just that (as Python-syntax pseudocode below -- exclude the wait() if not a blocking invocation such as subprocess.call()):
pid = fork()
if pid == 0:  # we're the child process, not the parent
    execlp("ls", "ls", "-l")
else:
    retval = wait(pid)  # we're the parent; wait for the child to exit & get its exit status
whereas with shell=True, you do this:
pid = fork()
if pid == 0:
    execlp("sh", "sh", "-c", "ls -l")
else:
    retval = wait(pid)
Note that with shell=False, the command we executed was ls, whereas with shell=True, the command we executed was sh.
That is to say:
subprocess.Popen(foo, shell=True)
is exactly the same as:
subprocess.Popen(
    ["sh", "-c"] + ([foo] if isinstance(foo, str) else foo),
    shell=False)
That is to say, you execute a copy of /bin/sh, and direct that copy of /bin/sh to parse the string into an argument list and execute ls -l itself.
So, why would you use shell=True?
You're invoking a shell builtin.
For instance, the exit command is actually part of the shell itself, rather than an external command. That said, this is a fairly small set of commands, and it's rare for them to be useful in the context of a shell instance that only exists for the duration of a single subprocess.call() invocation.
You have some code with shell constructs (e.g. redirections) that would be difficult to emulate without it.
If, for instance, your command is cat one two >three, the syntax >three is a redirection: It's not an argument to cat, but an instruction to the shell to set stdout=open('three', 'w') when running the command ['cat', 'one', 'two']. If you don't want to deal with redirections and pipelines yourself, you need a shell to do it.
A slightly trickier case is cat foo bar | baz. To do that without a shell, you need to start both sides of the pipeline yourself: p1 = Popen(['cat', 'foo', 'bar'], stdout=PIPE), p2=Popen(['baz'], stdin=p1.stdout).
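Spelled out as a runnable sketch (assuming files foo and bar and an executable baz exist):
from subprocess import Popen, PIPE

p1 = Popen(['cat', 'foo', 'bar'], stdout=PIPE)
p2 = Popen(['baz'], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits first
output = p2.communicate()[0]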
You don't give a damn about security bugs.
...okay, that's a little bit too strong, but not by much. Using shell=True is dangerous. You can't do this: Popen('cat -- %s' % (filename,), shell=True) without a shell injection vulnerability: If your code were ever invoked with a filename containing $(rm -rf ~), you'd have a very bad day. On the other hand, ['cat', '--', filename] is safe with all possible filenames: The filename is purely data, not parsed as source code by a shell or anything else.
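If a shell truly is unavoidable with untrusted data, quoting it first defuses the injection (shlex.quote, Python 3.3+), though the list form above remains the better fix:
import shlex
from subprocess import Popen

# quoting makes the filename inert shell data, even if it contains $(rm -rf ~)
Popen('cat -- %s' % shlex.quote(filename), shell=True)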
It is possible to write safe scripts in shell, but you need to be careful about it. Consider the following:
filenames = ['file1', 'file2'] # these can be user-provided
subprocess.Popen(['cat -- "$@" | baz', '_'] + filenames, shell=True)
That code is safe (well -- as safe as letting a user read any file they want ever is), because it's passing your filenames out-of-band from your script code -- but it's safe only because the string being passed to the shell is fixed and hardcoded, and the parameterized content is external variables (the filenames list). And even then, it's "safe" only to a point -- a bug like Shellshock that triggers on shell initialization would impact it as much as anything else.
I was under the impression that subprocess.call and check_call and check_output all must execute the argument through the shell.
No, subprocess is perfectly capable of starting a program directly (via an operating system call); it does not need a shell.
Things that can be done with shell=True that can't be done with shell=False
You can use shell=False for any command that simply runs some executable optionally with some specified arguments.
You must use shell=True if your command uses shell features. This includes pipelines (|), redirections (> or <), and compound statements combined with ;, &&, or ||, etc.
Thus, one can use shell=False for a command like grep string file. But a command like grep string file | xargs something will, because of the |, require shell=True.
Because the shell has powerful features that Python programmers do not always find intuitive, it is considered better practice to use shell=False unless you truly need a shell feature. As an example, pipelines are not truly needed, because they can also be built using subprocess's PIPE feature, as shown above.
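To make the contrast concrete (a sketch; grep string file and xargs something stand in for real commands):
import subprocess

subprocess.call(['grep', 'string', 'file'])                         # shell=False works
subprocess.call('grep string file | xargs something', shell=True)   # the | needs a shell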

Repo command is not running using subprocess

I'm trying to run repo command using subprocess.check_call. I don't see any error but it's not running.
Here is my code.
import subprocess

def repo(*args):
    return subprocess.check_call(['repo'] + list(args), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

repo('forall', '-pc', '"', 'git', 'merge', '--strategy=ours', '\${REPO_REMOTE}/branch_name', '"', '>log.log', '2>&1')
Am I missing something?
Please help.
Thanks.
I'm going on a hunch; I guess you don't see anything because the error messages are stuck in your stderr pipe. Try this:
import subprocess
def repo(command):
    subprocess.check_call('repo ' + command, shell=True)
repo('forall -pc "git merge --strategy=ours \${REPO_REMOTE}/branch_name" > log.log 2>&1')
Does that look more like what you imagined? Also:
when using shell=True (although I don't recommend that you do) you can just pass a str (as opposed to a list); and
there is no pressing need for the return, because check_call() either raises an exception or returns 0.
If you have shell=True and the first argument is a sequence, as you have, then the first element of the sequence is passed as the -c option's command to the shell, and the remaining elements are passed as additional arguments to the shell itself. Example:
subprocess.check_call(['ls', '-l'], shell=True)
means the following is run:
sh -c "ls" -l
Note that ls doesn't get the option -l, but the shell sh does.
So, you should not use shell=True. If you have to, use a string instead of a list as args.
Also, the fine manual warns not to use stdout=PIPE and stderr=PIPE with check_call().
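For completeness, a hedged sketch of the same call without the shell: the redirection to log.log is done by opening the file ourselves, and ${REPO_REMOTE} no longer needs the backslash escape because no outer shell expands it (this assumes repo forall expands the variable itself, per-project):
import subprocess

# A sketch, not a drop-in: redirect output to log.log ourselves instead of
# asking a shell to do it with > and 2>&1.
with open('log.log', 'w') as log:
    subprocess.check_call(
        ['repo', 'forall', '-pc', 'git merge --strategy=ours ${REPO_REMOTE}/branch_name'],
        stdout=log, stderr=subprocess.STDOUT)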

python: subprocess.Popen() behaviour

I am trying to use rsync with Python. I have read that the preferred way of passing arguments to Popen is using a list.
The code I tried:
p = Popen(["rsync",
"\"{source}\"".format(source=latestPath),
"\"{user}#{host}:{dir}\"".format(user=user, host=host, dir=dir)],
stdout=PIPE, stderr=PIPE)
The result is rsync asking for password, even though I have set up SSH keys to do the authentication.
I think this is a problem with the environment the new process gets executed in. What I tried next is:
p = Popen(["rsync",
"\"{source}\"".format(source=latestPath),
"\"{user}#{host}:{dir}\"".format(user=user, host=host, dir=dir)],
stdout=PIPE, stderr=PIPE, shell=True)
This results in rsync printing the "correct usage" message, so the arguments are passed to rsync incorrectly. I am not sure if this is even supposed to work (passing a list with shell=True).
If I remove the array altogether like this:
p = Popen("rsync \"{source}\" \"{user}#{host}:{dir}\"".format(
source=latestPath, user=user, host=host, dir=dir),
stdout=PIPE, stderr=PIPE, shell=True)
The program works fine. It really doesn't matter for the sake of this script, but I'd like to know: what's the difference? Why don't the other two (mainly the first one) work?
Is it just that the shell environment is required, and the second one is incorrect?
EDIT: Contents of the variables
latestPath='/home/tomcat/.jenkins/jobs/MC 4thworld/workspace/target/FourthWorld-0.1-SNAPSHOT.jar'
user='mc'
host='192.168.0.32'
dir='/mc/test/plugins/'
I'd like to know what's the difference?
When shell=True, the entire command is passed to the shell. The quotes are there so the shell can correctly pick the command apart again. In particular, passing
foo "bar baz"
to the shell causes it to parse the command as (Python syntax) ['foo', 'bar baz'] so that it can execute the foo command with the argument bar baz.
By contrast, when shell=False, Python will pass the arguments in the list to the program immediately. For example, try the following subprocess commands:
>>> import subprocess
>>> subprocess.call(["echo", '"Hello!"'])
"Hello!"
0
>>> subprocess.call('echo "Hello!"', shell=True)
Hello!
0
and note that in the first, the quotes are echoed back at you by the echo program, while in the second case, the shell has stripped them off prior to executing echo.
In your specific case, rsync gets the quotes but doesn't know how it's supposed to handle them; it's not itself a shell, after all.
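So the fix for the first attempt is simply to drop the embedded quotes; with shell=False, each list element already arrives at rsync as exactly one argument, spaces included. A sketch using the variables from the question:
from subprocess import Popen, PIPE

p = Popen(["rsync",
           latestPath,  # no embedded quotes needed; passed as a single argument
           "{user}@{host}:{dir}".format(user=user, host=host, dir=dir)],
          stdout=PIPE, stderr=PIPE)
out, err = p.communicate()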
Could it be to do with the cwd or env parameters? Maybe in the first syntax, it can't find the SSH keys...
Just a suggestion, it might be easier for you to use sh instead of subprocess:
import sh
sh.rsync(latestPath, user + "@" + host + ":" + dir)

how to call multiple bash functions using | in python

I am using scientific software (called vasp) that works only in bash, and I am using Python to create a script that will make multiple runs for me. When I use subprocess.check_call to call the program normally, it works fine, but when I add the '| tee tee_output' it doesn't work.
subprocess.check_call('vasp') #this works
subprocess.check_call('vasp | tee tee_output') #this doesn't
I am a newbie to Python and programming altogether.
Try this. It executes the command (passed as a string) via a shell, instead of executing the command directly. (It's the equivalent of calling the shell itself with the -c flag, i.e. Popen(['/bin/sh', '-c', args[0], args[1], ...])):
subprocess.check_call('vasp | tee tee_output', shell=True)
But attend to the warning in the docs about this method.
You could do this:
vasp = subprocess.Popen('vasp', stdout=subprocess.PIPE)
subprocess.check_call(('tee', 'tee_output'), stdin=vasp.stdout)
This is generally safer than using shell=True, especially if you can't trust the input.
Note that check_call will check the return code of tee, rather than vasp, to see whether it should raise a CalledProcessError. (The shell=True method does the same, as this matches the behavior of a shell pipeline.) If you want, you can check the return code of vasp yourself by calling vasp.poll(). (The shell=True method won't let you do this.)
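For example, a sketch continuing the snippet above; wait() is used instead of poll() so the exit status is reaped even if vasp finishes slightly after tee:
vasp.wait()  # reap vasp now that the pipeline is done
if vasp.returncode != 0:
    raise RuntimeError('vasp exited with status %d' % vasp.returncode)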
Don't use shell=True; it opens security holes. Instead, do something like this:
cmd1 = ['vasp']
cmd2 = ['tee', 'tee_output']
runcmd = subprocess.Popen(cmd1, stdout=subprocess.PIPE)
runcmd2 = subprocess.Popen(cmd2, stdin=runcmd.stdout, stdout=subprocess.PIPE)
runcmd2.communicate()
I know it's longer, but it's much safer.
You can find more info in the documentation:
http://docs.python.org/library/pipes.html
With the pipes module, you build a pipeline by appending more command strings to a Template object (the t in the documentation's examples).
