I'm executing a set of commands that first require me to call bash. I am trying to automate these commands by writing a Python script to do this. My first command obviously needs to be bash, so I run
p = subprocess.call(['bash'])
and it launches the bash shell no problem.
Where I then have problems is trying to execute the remaining code in the bash environment. I thought perhaps there was a need for inter-process communication (e.g. redirecting stdout, as in
p0 = subprocess.Popen(cmd, stdout=subprocess.PIPE)
p1 = subprocess.Popen(['bash'], stdin=p0.stdout)
p1.communicate()
) but the piping doesn't seem to solve my problem.
How can I write this script so that it mimics the following sequential Linux commands?
$ bash
$ cmd1
$ cmd2
...
I'm working with Ubuntu 14.04 and Python 2.7.6.
Thanks in advance for the guidance!
import subprocess
def bash_command(cmd):
    subprocess.Popen(cmd, shell=True, executable='/bin/bash')
bash_command('[your_command]')
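For example, a command that only works because bash (not sh) is interpreting it ($BASH_VERSION is only set inside bash; the example is purely illustrative):
bash_command('echo "running under bash $BASH_VERSION"')
Note that the function returns immediately; keep a reference to the Popen object and call .wait() on it if you need the command to finish before your script continues.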
You don't need to run bash separately. You can run something like:
p1 = subprocess.call(['cmd1'])
p2 = subprocess.call(['cmd2'])
If you must run bash for some reason (the commands contain bash statements, for example), you can run bash -c "cmd1; cmd2" from subprocess.call().
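As a rough sketch of that approach (cmd1 and cmd2 are placeholders for your actual commands):
import subprocess
# One bash instance runs both commands in order, so state like the
# working directory carries over from cmd1 to cmd2
subprocess.call(['bash', '-c', 'cmd1; cmd2'])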
Edit: As Busturdust pointed out, you can also try setting shell=True, but that uses sh, not bash. But that may be enough for you.
I have a shell script (test.sh, example shown below) which has an infinite while loop and prints some data to the screen.
I am calling all my .sh scripts from Python, and I need to stop test.sh before calling my other commands.
I am using Python 2.7, and the Linux system is on proprietary hardware where I cannot install any Python modules.
Here is my test.sh
#!/bin/sh
while :
do
echo "this code is in infinite while loop"
sleep 1
done
Here is my Python script:
import subprocess as SP
SP.call(['./test.sh']) # I need to stop the test.sh in order for python to
# go and execute more commands and call
# another_script.sh
# some code statements
SP.call(['./another_script.sh'])
Well, a quick Google search pointed me to subprocess.call and subprocess.Popen, and Popen has a terminate() method, but it doesn't work for me (or I'm doing something wrong here):
cmd=['test.sh']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
p.terminate()
Any other suggestions on how I can stop test.sh from Python are highly appreciated.
PS: I don't mind running test.sh for, say, T seconds and then stopping it.
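To illustrate the "T seconds" idea from the PS, here is a minimal sketch; it assumes a plain terminate() is enough to stop the script, and it avoids shell=True so the signal goes to test.sh itself rather than to an intermediate shell:
import subprocess
import time
p = subprocess.Popen(['./test.sh'])
time.sleep(5)    # let it run for T seconds (T = 5 here)
p.terminate()    # send SIGTERM to the script
p.wait()         # reap the process before moving on
subprocess.call(['./another_script.sh'])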
I use tmux for this type of process; Python has a good package, libtmux, which should solve your problem.
Basically you create a tmux session:
import libtmux
server = libtmux.Server()
session = server.new_session(session_name='my_session_name')
Then you create a window to run the command in:
window = session.new_window(attach=False, window_name='my_window_name')
command = './my_bash_file.sh'
window.select_pane('0').send_keys(command, enter=True)
You'll be able to run subsequent commands right after this one. To access the tmux session from your bash terminal, use tmux attach -t my_session_name; you'll then be in the tmux window which ran your bash script.
To kill the tmux window, use window.kill_window(). There are a lot of options; look at the libtmux docs.
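For the timed-stop case from the question, something like this should work once the window is running (T = 5 is an arbitrary choice):
import time
time.sleep(5)           # let test.sh run for T seconds
window.kill_window()    # then tear the window, and the script, down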
The project aileen has some useful tmux commands if you want to see some more implementations.
Edit:
The original intent of this question was to find a way to launch an interactive ssh session via a Python script. I'd tried subprocess.call() before and had gotten a Killed response before anything was output onto the terminal. I just assumed this was an issue/limitation with the subprocess module instead of an issue somewhere else. This was found not to be the case when I ran the script on a non-resource-limited machine, where it worked fine.
This then turned the question into: How can I run an interactive ssh session with whatever resource limitations were preventing it from running?
Shoutout to Charles Duffy, who was a huge help in trying to diagnose all of this.
Below is the original question:
Background:
So I have a script that is currently written in bash. It parses the output of a few console functions and then opens up an ssh session based on those parsed outputs.
It currently works fine, but I'd like to expand its capabilities a bit by adding some flag arguments. I've worked with argparse before and thoroughly enjoyed it. I tried to do some flag work in bash, and let's just say it leaves much to be desired.
The Actual Question:
Is it possible to have Python do stuff in a console and then put the user in that console?
Something like using subprocess to run a series of commands in the currently viewed console? This is in contrast to how subprocess normally runs, where it runs commands and then shuts the intermediate console down.
Specific Example because I'm not sure if what I'm describing makes sense:
So here's a basic run down of the functionality I was wanting:
Run a python script
Have that script run some console command and parse the output
Run the following command:
ssh -t $correctnode "cd /local_scratch/pbs.$jobid; bash -l"
This command will ssh to the $correctnode, change directory, and then leave a bash window in that node open.
I already know how to do parts 1 and 2. It's part three that I can't figure out. Any help would be appreciated.
Edit: Unlike this question, I am not simply trying to run a command. I'm trying to display a shell that is created by a command. Specifically, I want to display a bash shell created through an ssh command.
Context For Readers
The OP is operating on a very resource-constrained (particularly, it appears, process-constrained) jumphost box, where starting an ssh process as a subprocess of python goes over a relevant limit (on number of processes, perhaps?)
Approach A: Replacing The Python Interpreter With Your Interactive Process
Using the exec*() family of system calls causes your original process to no longer be in memory (unlike the fork()+exec*() combination used to start a subprocess while leaving the parent process running), so it doesn't count against the account's limits.
import argparse
import os
try:
    from shlex import quote
except ImportError:
    from pipes import quote
parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()
remote_cmd_str = 'cd /local_scratch/pbs.%s && exec bash -i' % (quote(args.jobid))
local_cmd = [
    '/usr/bin/env', 'ssh', '-tt', args.node, remote_cmd_str,
]
os.execv("/usr/bin/env", local_cmd)
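Assuming the script above is saved as, say, ssh_jump.py (the name is arbitrary), invoking it would look like:
$ python ssh_jump.py somenode 12345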
Approach B: Generating Shell Commands From Python
If we use Python to generate a shell command, the shell can invoke that command only after the Python process has exited, so we stay under our externally enforced process limit.
First, a slightly more robust approach to generating eval-able output:
import argparse
try:
    from shlex import quote
except ImportError:
    from pipes import quote
parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()
remoteCmd = ['cd', '/local_scratch/pbs.%s' % (args.jobid)]
remoteCmdStr = ' '.join(quote(x) for x in remoteCmd) + ' && bash -l'
cmd = ['ssh', '-t', args.node, remoteCmdStr]
print(' '.join(quote(x) for x in cmd))
To run this from a shell, if the above is named as genSshCmd:
#!/bin/sh
eval "$(genSshCmd "$#")"
Note that there are two separate layers of quoting here: One for the local shell running eval, and the second for the remote shell started by SSH. This is critical -- you don't want a jobid of $(rm -rf ~) to actually invoke rm.
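To see what quote() does with that hostile input (interactive session; pipes.quote on Python 2, shlex.quote on Python 3):
>>> from pipes import quote
>>> quote('$(rm -rf ~)')
"'$(rm -rf ~)'"
The single quotes mean the remote shell sees a literal string, not a command substitution.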
This is in no way a real answer, just an illustration of my comment.
Let's say you have a Python script, test.py:
import argparse
if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('myarg', nargs="*")
    args = parser.parse_args()
    print("echo Hello world! My arguments are: " + " ".join(args.myarg))
So, you create a bash wrapper around it, test.sh:
set -e
$(python test.py $*)
and this is what you get:
$ bash test.sh
Hello world! My arguments are:
$ bash test.sh one two
Hello world! My arguments are: one two
What is going on here:
The Python script does not execute commands. Instead, it outputs the commands the bash script will run (echo in this example). In your case, the last command will be ssh blabla.
Bash executes the output of the Python script (the $(...) part), passing on all its arguments (the $* part).
You can use argparse inside the Python script; if anything is wrong with the arguments, the message will be printed to stderr and will not be executed by bash; the bash script will stop because of the set -e flag.
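You can watch the two stages separately: running the Python script on its own only prints the command, it never executes it.
$ python test.py one two
echo Hello world! My arguments are: one two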
I am pulling my hair out here. I am spawning a process which I need the feedback from in Python.
When I run the command in the cmd window it runs fine, but when I try to run it via Python the terminal hangs.
p = subprocess.Popen(startcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = p.communicate()
Where startcmd is a string which, when printed in the Python console, looks like this:
"C:/Program Files/GRASS GIS 7.2.1/grass72.bat" --version
If I copy and paste this into a Windows cmd, it shows the version information and returns control to the command prompt about a second later, but in Python it freezes up.
I should point out, if I replace the startcmd string with something like "dir" or even "python --version", it works fine!
Additional: I have tried shell=True, this has the same result.
Additional: I have tried sending the cmd and arguments through as an array as suggested in an answer below given that shell=False, but this also hangs the same.
Additional: I have added the GRASS path to the system PATH, so that now I can simply call grass72 --version in the cmd window to get a result, however this also still freezes in Python but works fine in cmd.
Additional: I have created a basic .bat file to test if .bat files run OK via Python; here is what I created:
@echo off
title Test Batch Script
echo I should see this message
This runs fine both in cmd, and in Python.
Problem found but not solved!
So, I'm running the script which spawns the process using subprocess.Popen using Python 3.6. The .bat file which is spawned launches a Python script using a version of Python (based on 2.7) which comes shipped with GRASS:
%GRASS_PYTHON% "\BLAH\BLAH\grass72.py"
What is interesting is that if I launch the subprocess.Popen script with Python 2.7, it works fine. Aha, you may think: solved! But this doesn't solve my problem, because I really need Python 3.6 to be launching the process. Also, why does it matter what version of Python launches the batch file? The new Python script which is spawned is launched with Python 2.7 anyway.
Since I started re-directing stdout I can see that there is an error when I use Python 3.6 to launch the process:
File "C:\ProgramData\Anaconda3\lib\site.py", line 177
file=sys.stderr)
^
SyntaxError: invalid syntax
Notice it's reverting to Anaconda3! Even though it is launched using python.exe from 2.7!
I experienced the same issue with Python 3.6 and 3.7 on Windows hanging for subprocess calls:
p = subprocess.Popen(startcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = p.communicate()
Upon closer investigation I noticed this occurs only if the process writes more than about 4 KB (4096 bytes) of output which might explain why your short script does not reproduce this.
A workaround I found is using tempfile in the standard library:
import subprocess
import tempfile
import time
# Write to temporary files because pipe redirection seems broken
with tempfile.NamedTemporaryFile(mode="w+") as tmp_out, \
        tempfile.NamedTemporaryFile(mode="w+") as tmp_err:
    p = subprocess.Popen(startcmd, stdout=tmp_out, stderr=tmp_err,
                         universal_newlines=True)
    # `run` waits for the command to complete; `Popen` continues the
    # Python program, so poll until the process has finished
    while p.poll() is None:
        time.sleep(.1)
    # The cursor sits after the last write call; rewind to read the output
    tmp_out.seek(0)
    tmp_err.seek(0)
    out = tmp_out.read()
    err = tmp_err.read()
You don't specify shell=True in your arguments to Popen. The recommended usage in that case is to specify a sequence of arguments instead of a string. So you should set startcmd equal to ["C:/Program Files/GRASS GIS 7.2.1/grass72.bat", "--version"].
Try this:
p = subprocess.Popen(startcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
Why am I getting a list of files when executing this command?
subprocess.check_call("time ls &>/dev/null", shell=True)
If I paste
time ls &>/dev/null
into the console, I just get the timings.
OS is Linux Ubuntu.
On debian-like systems, the default shell is dash, not bash. Dash does not support the &> shortcut. To get only the subprocess return code, try:
subprocess.check_call("time ls >/dev/null 2>&1", shell=True)
To get subprocess return code and the timing information but not the directory listing, use:
subprocess.check_call("time ls >/dev/null", shell=True)
Minus, of course, the subprocess return code, this is the same behavior that you would see on the dash command prompt.
The Python version is running under sh, but the console version is running in whatever your default shell is, which is probably either bash or dash. (Your sh may actually be a different shell running in POSIX-compliant mode, but that doesn't make any difference.)
Both bash and dash have builtin time functions, but sh doesn't, so you get /usr/bin/time, which is a normal program. The most important difference this makes is that the time builtin is not running as a subprocess with its own independent stdout and stderr.
Also, sh, bash, and dash all have different redirection syntax.
But what you're trying to do seems wrong in the first place, and you're just getting lucky on the console because two mistakes are canceling out.
You want to get rid of the stdout of ls but keep the stderr of time, but that's not what you asked for. You're trying to redirect both stdout and stderr: that's what &> means in any shell that actually supports it.
So why are you still getting time's stderr? Either (a) your default shell doesn't support &>, or (b) you're using the builtin instead of the program, and you're not redirecting the stderr of the shell itself, or maybe (c) both of the above.
If you really want to do exactly the same thing in Python, with the exact same bugs canceling out in the exact same way, you can run your default shell manually instead of using shell=True. Depending on which reason it was working, that would be either this:
subprocess.check_call([os.environ['SHELL'], '-c', 'time ls &> /dev/null'])
or this:
subprocess.check_call('{} -c "time ls &> /dev/null"'.format(os.environ['SHELL']), shell=True)
But really, why are you doing this at all? If you want to redirect stdout and not stderr, write that:
subprocess.check_call('time ls > /dev/null', shell=True)
Or, better yet, why are you even using the shell in the first place?
subprocess.check_call(['time', 'ls'], stdout=subprocess.DEVNULL)
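One caveat: subprocess.DEVNULL only exists on Python 3.3+. On Python 2.7, an equivalent sketch (assuming /usr/bin/time is installed, since there is no shell builtin in play here) would be:
import os
import subprocess
# Python 2 has no subprocess.DEVNULL, so open os.devnull by hand
with open(os.devnull, 'w') as devnull:
    subprocess.check_call(['time', 'ls'], stdout=devnull)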
I want to run and control PSFTP from a Python script in order to get log files from a UNIX box onto my Windows machine.
I can start up PSFTP and log in, but when I try to run a command remotely, such as 'cd', it isn't recognised by PSFTP and is just run in the terminal when I close PSFTP.
The code which I am trying to run is as follows:
import os
os.system("<directory> -l <username> -pw <password>" )
os.system("cd <anotherDirectory>")
I was just wondering if this is actually possible, or if there is a better way to do this in Python.
Thanks.
You'll need to run PSFTP as a subprocess and speak directly with the process. os.system spawns a separate subshell each time it's invoked so it doesn't work like typing commands sequentially into a command prompt window. Take a look at the documentation for the standard Python subprocess module. You should be able to accomplish your goal from there. Alternatively, there are a few Python SSH packages available such as paramiko and Twisted. If you're already happy with PSFTP, I'd definitely stick with trying to make it work first though.
Subprocess module hint:
import subprocess
# The following two lines spawn the psftp process and bind its standard
# input to p.stdin and its standard output to p.stdout
p = subprocess.Popen('psftp -l testuser -pw testpass'.split(),
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# Send the 'cd some_directory' command to the process as if a user were
# typing it at the command line
p.stdin.write('cd some_directory\n')
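From there you would keep feeding commands the same way and finally collect whatever psftp printed; a rough continuation (the file name is a placeholder):
# Queue up the rest of the session, then let communicate() flush and
# close stdin and gather everything psftp wrote to stdout
p.stdin.write('get remote_log.txt\n')
p.stdin.write('quit\n')
out, _ = p.communicate()
print(out)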
This has sort of been answered in: SFTP in Python? (platform independent)
http://www.lag.net/paramiko/
The advantage of the pure-Python approach is that you don't always need psftp installed.