Is there a way to have a bash command contained in a Python print statement actually run in the terminal, directly from the Python script?
In the example below, the awk command is merely printed to the terminal.
#!/usr/bin/python
import sys
print "awk 'END{print NF}' file"
I can of course write the print statement's output to a separate file and run that file as a bash script, but is there a way to run the awk command directly from the Python script rather than just printing it?
Yes: you can use the subprocess module to run the command rather than just print it.
import subprocess
subprocess.call(["ls", "-l"])
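For the awk command from the question, a minimal sketch along the same lines (file is the question's placeholder filename):
import subprocess
# Run the question's awk command; its output goes straight to the terminal.
subprocess.call("awk 'END{print NF}' file", shell=True)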
You can pipe your Python output into a Bash process, for example,
python -c "print 'echo 5'" | bash
will output
5
You could even use the subprocess module to do that from inside Python, if you wanted to.
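A minimal sketch of that subprocess variant (reusing the toy echo command from above):
import subprocess
# Feed a generated shell command into a bash child process via its stdin.
p = subprocess.Popen(['bash'], stdin=subprocess.PIPE)
p.communicate(b'echo 5\n')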
But this is pretty bad design and not a good idea: if you get the quoting wrong, you risk allowing hostile users to execute arbitrary commands on the machine running your code.
One solution is to use subprocess to run a shell command and capture its output, for example:
import subprocess
command = "awk 'END{print NF}' file"
p = subprocess.Popen(command, shell=True, bufsize=2000,
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     close_fds=True)
# Read everything the command wrote to stdout, then close the pipe once.
print(''.join(line for line in p.stdout))
p.stdout.close()
Adjust bufsize accordingly based on the size of your file.
Edit:
The original intent of this question was to find a way to launch an interactive ssh session via a Python script. I'd tried subprocess.call() before and had gotten a Killed response before anything was output onto the terminal. I just assumed this was an issue/limitation with the subprocess module rather than an issue somewhere else. This was found not to be the case when I ran the script on a non-resource-limited machine, where it worked fine.
This then turned the question into: How can I run an interactive ssh session with whatever resource limitations were preventing it from running?
Shoutout to Charles Duffy, who was a huge help in diagnosing all of this.
Below is the original question:
Background:
So I have a script that is currently written in bash. It parses the output of a few console functions and then opens up an ssh session based on those parsed outputs.
It currently works fine, but I'd like to expand its capabilities a bit by adding some flag arguments. I've worked with argparse before and thoroughly enjoyed it. I tried to do some flag work in bash, and let's just say it leaves much to be desired.
The Actual Question:
Is it possible to have Python do some work in a console and then put the user in that same console?
Something like using subprocess to run a series of commands in the currently viewed console? This is in contrast to how subprocess normally runs, where it runs commands and then shuts the intermediate console down.
Specific Example because I'm not sure if what I'm describing makes sense:
So here's a basic rundown of the functionality I want:
1. Run a python script
2. Have that script run some console command and parse the output
3. Run the following command:
ssh -t $correctnode "cd /local_scratch/pbs.$jobid; bash -l"
This command will ssh to the $correctnode, change directory, and then leave a bash window in that node open.
I already know how to do parts 1 and 2. It's part three that I can't figure out. Any help would be appreciated.
Edit: Unlike this question, I am not simply trying to run a command. I'm trying to display a shell that is created by a command. Specifically, I want to display a bash shell created through an ssh command.
Context For Readers
The OP is operating on a very resource-constrained (particularly, it appears, process-constrained) jumphost box, where starting an ssh process as a subprocess of python goes over a relevant limit (on number of processes, perhaps?)
Approach A: Replacing The Python Interpreter With Your Interactive Process
Using the exec*() family of system calls causes your original process to no longer be in memory (unlike the fork()+exec*() combination used to start a subprocess while leaving the parent process running), so it doesn't count against the account's limits.
import argparse
import os

try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2

parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()

remote_cmd_str = 'cd /local_scratch/pbs.%s && exec bash -i' % (quote(args.jobid))
local_cmd = [
    '/usr/bin/env', 'ssh', '-tt', args.node, remote_cmd_str
]

# execv replaces the Python interpreter with ssh, so no extra process
# is left behind to count against the account's limits.
os.execv("/usr/bin/env", local_cmd)
Approach B: Generating Shell Commands From Python
If we use Python to generate a shell command, the shell can invoke that command only after the Python process has exited, so we stay under our externally enforced process limit.
First, a slightly more robust approach to generating eval-able output:
import argparse

try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2

parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()

remoteCmd = ['cd', '/local_scratch/pbs.%s' % (args.jobid)]
remoteCmdStr = ' '.join(quote(x) for x in remoteCmd) + ' && bash -l'
cmd = ['ssh', '-t', args.node, remoteCmdStr]
print(' '.join(quote(x) for x in cmd))
To run this from a shell, if the above is named as genSshCmd:
#!/bin/sh
eval "$(genSshCmd "$@")"
Note that there are two separate layers of quoting here: One for the local shell running eval, and the second for the remote shell started by SSH. This is critical -- you don't want a jobid of $(rm -rf ~) to actually invoke rm.
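To see the two quoting layers concretely, here is a toy sketch (the hostile jobid value and the somenode host name are hypothetical):
try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2

jobid = '$(rm -rf ~)'  # hostile input
# Inner layer: quoted so the remote shell treats it as literal text.
remote_cmd = 'cd /local_scratch/pbs.%s && bash -l' % quote(jobid)
# Outer layer: quoted so the local shell's eval keeps each argument intact.
print(' '.join(quote(x) for x in ['ssh', '-t', 'somenode', remote_cmd]))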
This is in no way a real answer, just an illustration of my comment.
Let's say you have a Python script, test.py:
import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('myarg', nargs="*")
    args = parser.parse_args()
    print("echo Hello world! My arguments are: " + " ".join(args.myarg))
So, you create a bash wrapper around it, test.sh
set -e
$(python test.py $*)
and this is what you get:
$ bash test.sh
Hello world! My arguments are:
$ bash test.sh one two
Hello world! My arguments are: one two
What is going on here:
- The python script does not execute commands; instead, it outputs the commands the bash script will run (echo in this example). In your case, the last command will be the ssh invocation.
- bash executes the output of the python script (the $(...) part), passing on all its arguments (the $* part).
- You can use argparse inside the python script; if anything is wrong with the arguments, the message is printed to stderr and is not executed by bash, and the bash script stops because of the set -e flag.
I'm executing a set of commands that first require me to call bash. I am trying to automate these commands by writing a Python script to do this. My first command obviously needs to be bash, so I run
p = subprocess.call(['bash'])
and it launches the bash shell no problem.
Where I then have problems is trying to execute the remaining code in the bash environment. I thought perhaps there was a need for process communication (i.e. redirecting stdout as in
p0 = subprocess.Popen(cmd, stdout=subprocess.PIPE)
p1 = subprocess.Popen(['bash'], stdin=p0.stdout)
p1.communicate()
) but the piping doesn't seem to solve my problem.
How can I write this script so that it mimics the following sequential Linux commands?
$ bash
$ cmd1
$ cmd2
...
I'm working with Ubuntu 14.04 and Python 2.7.6.
Thanks in advance for the guidance!
import subprocess

def bash_command(cmd):
    # Run the command through bash explicitly rather than the default sh.
    subprocess.Popen(cmd, shell=True, executable='/bin/bash')

bash_command('[your_command]')
You don't need to run bash separately. You can run something like:
p1 = subprocess.call(['cmd1'])
p2 = subprocess.call(['cmd2'])
If you must run bash for some reason (the commands contain bash statements, for example), you can run bash -c "cmd1; cmd2" from subprocess.call().
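For example, a minimal sketch with two placeholder commands (both run in the same bash process, so state like the working directory carries over between them):
import subprocess
# cd affects the ls that follows because bash runs both in one invocation.
subprocess.call(['bash', '-c', 'cd /tmp && ls -l'])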
Edit: As Busturdust pointed out, you can also try setting shell=True, but that uses sh, not bash. But that may be enough for you.
Is there a simple method for calling shell commands (like ls or pwd) from within the Python interpreter?
In plain Python, you need to use something along the lines of this:
from subprocess import check_output
check_output("ls", shell=True)
In IPython, you can run either of those commands or a general shell command by starting off with !. For example
! echo "Hello, world!" > /tmp/Hello.txt
If you're using python interactively, you would almost certainly be happier with IPython.
If you meant to use the Python shell interactively while being able to call commands (ls, pwd, ...), check out IPython.
I want to run commands in another directory using Python.
What are the various ways to do this, and which is the most efficient?
What I want to do is as follows,
cd dir1
execute some commands
return
cd dir2
execute some commands
Naturally, if you only want to run a (simple) shell command from Python, you can use the system function of the os module. For instance:
import os
os.system('touch myfile')
If you want something more sophisticated that allows greater control over the execution of the command, use the subprocess module that others here have suggested.
For further information, follow these links:
Python official documentation on os.system()
Python official documentation on the subprocess module
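If the goal is specifically to run each batch of commands from a different directory, as in the question, subprocess also accepts a cwd argument; a minimal sketch with the question's placeholder directories:
import subprocess
# Each call runs with the given working directory; the Python process's
# own working directory never changes.
subprocess.call(['ls', '-l'], cwd='dir1')
subprocess.call(['ls', '-l'], cwd='dir2')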
If you want more control over the called shell command (i.e. access to stdin and/or stdout pipes or starting it asynchronously), you can use the subprocess module:
import subprocess

# Pipe stderr as well, so communicate() returns it instead of None.
p = subprocess.Popen('ls -al', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
See also subprocess module documentation.
os.system("/dir/to/executeble/COMMAND")
for example
os.system("/usr/bin/ping www.google.com")
if the ping program is located in /usr/bin.
Naturally you need to import the os module.
os.system does not capture the command's output; if you want the output, you should use
subprocess.check_output or something like that.
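For example, a minimal sketch (the -c 1 flag, which limits ping to a single packet, assumes a Linux-style ping):
import subprocess
# check_output waits for the command to finish and returns what it printed.
output = subprocess.check_output(['ping', '-c', '1', 'www.google.com'])
print(output)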
You can use the Python subprocess module, which offers functions to execute commands, check their output, receive error messages, etc.
I have to make graphs from several files with data. I already found a way to run a simple command
xmgrace -batch batch.bfile -nosafe -hardcopy
in which batch.bfile is a text file with grace commands to print the graph I want. I already tried it manually and it works perfectly. To do this with several files I just have to edit one parameter inside batch.bfile and run the same command every time I make a change.
I have already written a Python script that edits batch.bfile and goes through all the data files in a for loop. On each iteration I want to run the above command directly on the command line.
After searching a bit I found two solutions, one with os.system() and another with subprocess.Popen() and I could only make subprocess.Popen() work without giving any errors by writing:
subprocess.Popen("xmgrace -batch batch.bfile -nosafe -hardcopy", shell=True)
The problem is that this doesn't do anything in practice, i.e., it isn't the same as running the command directly on the command line. I already tried giving the full path to batch.bfile, but nothing changed.
I am using Python 2.7 and Mac OS 10.7
Have you checked running xmgrace from the command line using sh? (i.e. invoke /bin/sh, then run xmgrace..., which should be the same shell that Popen uses when you set shell=True).
Another solution would be to create a shell script (create a file like myscript.sh and run chmod +x on it from the terminal). In the script, call xmgrace:
#!/bin/bash
xmgrace -batch batch.bfile -nosafe -hardcopy
You could then test that myscript.sh works on its own; it ought to pick up any environment variables set in your profile that might differ under Python. If it works, you could call the script from Python with subprocess.Popen('./myscript.sh'). You can check which environment variables a subprocess will see in Python by running:
import os
os.environ
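If the environments do differ, one option is to pass an explicit env to Popen; a sketch assuming a hypothetical extra PATH entry that xmgrace needs:
import os
import subprocess

env = dict(os.environ)
env['PATH'] = '/usr/local/grace/bin:' + env.get('PATH', '')  # hypothetical location
subprocess.Popen(['xmgrace', '-batch', 'batch.bfile', '-nosafe', '-hardcopy'], env=env)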
You may want to check out http://sourceforge.net/projects/graceplot/
When you use Popen, you can capture the application's output to stdout and stderr and print it within your application; this way you can see what is happening:
from subprocess import Popen, PIPE

# reportParameters is your command as a list, e.g.
# ['xmgrace', '-batch', 'batch.bfile', '-nosafe', '-hardcopy']
ps = Popen(reportParameters, bufsize=512, stdout=PIPE, stderr=PIPE)
if ps:
    while 1:
        stdout = ps.stdout.readline()
        stderr = ps.stderr.readline()
        exitcode = ps.poll()
        if (not stdout and not stderr) and (exitcode is not None):
            break
        if stdout:
            stdout = stdout[:-1]
            print stdout
        if stderr:
            stderr = stderr[:-1]
            print stderr