Exiting a bash shell that was started with Popen? - python

I can't figure out how to close a bash shell that was started via Popen. I'm on Windows, trying to automate some ssh tasks. This is much easier to do via the bash shell that comes with Git, so I'm invoking it via Popen in the following manner:
p = Popen('"my/windows/path/to/bash.exe" | git clone or other commands')
p.wait()
The problem is that after bash runs the commands I pipe into it, it doesn't close. It stays open, causing my wait() to block indefinitely.
I've tried appending an "exit" command at the end, but it doesn't work.
p = Popen('"my/windows/path/to/bash.exe" | git clone or other commands && exit')
p.wait()
But still, the wait() blocks indefinitely. After bash finishes its task, it just sits at a prompt doing nothing. How do I force it to close?

Try Popen.terminate(); this might help kill your process. If you only have synchronously executing commands, try running them directly with subprocess.call(). For example:
import subprocess
subprocess.call(["c:\\program files (x86)\\git\\bin\\git.exe",
                 "clone",
                 "repository",
                 "c:\\repository"])
# returns 0 on success
Following is an example that uses a pipe, but this is overcomplicated for most use cases and makes sense only if you are talking to a service that needs interaction (at least in my opinion):
p = subprocess.Popen(["c:\\program files (x86)\\git\\bin\\git.exe",
                      "clone",
                      "repository",
                      "c:\\repository"],
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
print p.stderr.read()
# fatal: destination path 'c:\repository' already exists and is not an empty directory.
print p.wait()
# 128
This can be applied to ssh as well.
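For instance, a minimal sketch of the same pattern with ssh (user@host and the remote command are placeholders; ssh must be able to authenticate non-interactively, e.g. with keys):
p = subprocess.Popen(["ssh", "user@host", "ls"],
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
print(p.stdout.read())  # output of the remote command
print(p.wait())         # exit status of ssh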

To kill the process tree, you could use the taskkill command on Windows:
Popen("TASKKILL /F /PID {pid} /T".format(pid=p.pid))
As @Charles Duffy said, your bash usage is incorrect. To run a command using bash, use the -c parameter:
p = Popen([r'c:\path\to\bash.exe', '-c', 'git clone repo'])
In simple cases, you could use subprocess.check_call instead of Popen().wait():
import subprocess
subprocess.check_call([r'c:\path\to\bash.exe', '-c', 'git clone repo'])
The latter raises an exception if the bash process returns a non-zero exit status (indicating an error).
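Putting the pieces together, a minimal sketch (assuming Python 3.3+ for the wait timeout; the bash path and repo are placeholders): run the command via bash -c, wait with a timeout, and fall back to killing the process tree:
import subprocess

p = subprocess.Popen([r'c:\path\to\bash.exe', '-c', 'git clone repo'])
try:
    p.wait(timeout=300)  # raises TimeoutExpired if bash hangs
except subprocess.TimeoutExpired:
    # /T kills the whole process tree, /F forces termination
    subprocess.call('TASKKILL /F /PID {} /T'.format(p.pid))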

Related

Send Multiple Terminal Commands in Gnome Terminals With Subprocess

I am currently trying to open two different gnome-terminal windows in Ubuntu that I can send individual commands to after they are initially open:
def ssh_command(cmd):
    ssh_terminal_1 = subprocess.Popen(['gnome-terminal', '--', 'bash', '-c', cmd], stderr=subprocess.STDOUT, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
    ssh_terminal_2 = subprocess.Popen(['gnome-terminal', '--', 'bash', '-c', cmd], stderr=subprocess.STDOUT, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
    # Activate the conda environment for our multilateration server
    spyder_activate('conda activate flyhound')
    time.sleep(10)
    ssh_terminal_1.stdin.flush()
    ssh_terminal_2.stdin.flush()
    ssh_terminal_1.stdin.write(b'cd srsRAN22.04/build')
    ssh_terminal_1.stdin.flush()
    ssh_terminal_2.stdin.write(b'cd srsRAN22.04/build')
    ssh_terminal_2.stdin.flush()
    ssh_terminal_1.stdin.write(b'sudo ./srsepc/src/srsepc ../srsepc/epc.conf.example --hss.db_file=../srsepc/user_db_unknown.csv.example\n')
    ssh_terminal_1.stdin.flush()
    ssh_terminal_2.stdin.write(b'bladeRF-cli -l /home/administrator/Downloads/hostedxA5-latest.rbf\n')
    ssh_terminal_2.stdin.flush()
    ssh_terminal_2.stdin.write(b'bladeRF-cli -f /home/administrator/Downloads/bladeRF_fw_v2.4.0.img\n')
    ssh_terminal_2.stdin.flush()
    ssh_terminal_2.stdin.write(b'sudo ./srsenb/src/srsenb ../srsenb/enb.conf.example --enb_files.sib_config=../srsenb/sib.conf.example --enb.n_prb=50 --enb_files.rr_config=../srsenb/rr.conf.example\n')
However, when I start the original subprocess command, the terminals open fine with the command given during the function call, but none of the following commands work and I get a broken pipe error (errno 32). While running these commands I also need to keep the previous terminal open, which looks like this:
def access_command(cmd):
    while True:
        process = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE)
        while True:
            output = process.stdout.readline()
            if output == b'' and process.poll() is not None:
                break
            if output:
                print(output.strip())
            if b"f0:9e:4a:5f:a4:5b" in output and b"handshake" in output:
                ssh_command("sshpass -p earth ssh -o StrictHostKeyChecking=no administrator@ipaddress; clear; screen")
I am really not sure how I can send multiple commands to the ssh terminals after they ssh into that IP address. I am very new to subprocess and to sending commands to terminals via Python, so any help would be amazing!
As I explained in the comments, your pipe goes to gnome-terminal, not to ssh or to bash. gnome-terminal is not listening to stdin; it is only listening to the user at the console. Here is what you do.
Make a FIFO (named pipe) with os.mkfifo for each terminal; give it a name that won't collide with any other file (for example, include your process ID in it).
Issue the command gnome-terminal -- bash -c 'ssh <options> < <fifo name>' for each terminal. Do not make this a Popen call; use os.system or something like that.
Do your spydy magic (anaconda).
Open the FIFO as a file
Write your commands to the open file; they will be executed by the bash process in the ssh connection. You will probably have to flush, unless there is a way to open it in line-buffered mode.
When you want to close the terminal, close the file.
What this accomplishes is that we move the pipe from gnome-terminal to ssh and hence across the connection to bash. We feed it on one end and it comes out and gets digested by the shell.
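A minimal sketch of this approach (assuming Linux with gnome-terminal; user@host and the commands are placeholders):
import os

fifo = '/tmp/ssh_fifo_%d' % os.getpid()  # name unlikely to collide
os.mkfifo(fifo)

# bash inside the new terminal runs ssh, which reads its stdin from the FIFO
os.system("gnome-terminal -- bash -c 'ssh user@host < %s' &" % fifo)

# Opening the FIFO for writing blocks until the reader (ssh) has it open
with open(fifo, 'w') as f:
    f.write('cd srsRAN22.04/build\n')
    f.flush()  # push each command through immediately
    f.write('exit\n')
    f.flush()

os.remove(fifo)  # clean up once the session is closed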

How to convert sh -c "$(curl -fsSL URL)" to native python

I am trying to convert the shell command:
sh -c "$(curl -fsSL https://raw.githubusercontent.com/foo/install.sh)"
to native Python. I cannot rely on curl being on the system. I started with this to replace curl:
import os
from urllib.request import urlretrieve
from urllib.error import URLError

try:
    urlretrieve("https://raw.githubusercontent.com/foo/install.sh",
                os.path.expanduser('~/' + 'install.sh'))
except URLError as e:
    ...
What is the best way to then replicate the sh -c install.sh portion of the command in native Python? I need an interactive shell to run install.sh, and then for the script to carry on in Python. I need an interactive subprocess with exception handling to execute install.sh.
Some examples using subprocess:
import subprocess

p = subprocess.Popen(['sh', 'install.sh'],
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
stdout, stderr = p.communicate()
print(stdout)
print(stderr)  # None here, since stderr was redirected to stdout
Another that does not wait for the command to complete before writing output:
import subprocess, sys

cmd = "sh install.sh"
p = subprocess.Popen(cmd, shell=True, stderr=subprocess.PIPE)
while True:
    out = p.stderr.read(1)
    if out == b'' and p.poll() is not None:
        break
    if out != b'':
        sys.stdout.write(out.decode())
        sys.stdout.flush()
You could try os.system(); that is, after you save the install script in the current directory:
os.system('sh install.sh')
The script is presumably written in sh, so you'll need sh (or compatible) to run it.
This uses stdin/stdout, so you can interact with it using the terminal if necessary. When the subprocess terminates, os.system() returns its exit code and Python resumes.
(If you're not saving to the working directory, you could use the absolute path to install.sh, or change the working directory using os.chdir().)
If you need to automate this interaction in Python, you should probably use subprocess instead, it's more powerful, but takes more work to configure.
Yes, I need to run sh, and os.system() is the easiest way. I would like the script to run unattended and only prompt the user if input is needed. I will expand my question with a subprocess example.
If the script runs and terminates on its own in the normal case with no further input, then os.system() is enough, even unattended, since Python resumes when the script completes. You can also raise an exception for a nonzero exit code, if desired:
if os.system('sh install.sh'): raise ...
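If you do switch to subprocess, a minimal sketch of the same unattended-but-interactive pattern (assuming Python 3.5+ for subprocess.run; stdin and stdout are inherited, so the script can still prompt the user if it needs input):
import subprocess

try:
    # check=True raises CalledProcessError on a non-zero exit status
    subprocess.run(['sh', 'install.sh'], check=True)
except subprocess.CalledProcessError as e:
    print('install.sh failed with exit code', e.returncode)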

Python Popen: pass environment variable to command running as another user with sudo

The subprocess.Popen() function has an "env" parameter, but it doesn't seem to have the desired effect with sudo. This is what I get when I do this in the interactive Python shell:
import subprocess
env={"CVS_RSH":"ssh"}
command = "sudo -u user cvs -d user#1.8.7.2:/usr/local/ncvs co file.py"
p = subprocess.Popen(command, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, env=env, shell=True)
(command_output, error_output) = p.communicate()
p.wait()  # returns 1
>>> error_output
b'cvs [checkout aborted]: cannot exec rsh: No such file or directory\ncvs [checkout aborted]: end of file from server (consult above messages if any)\n'
The message is distracting, so let me explain. I'm forced to use ancient CVS, and the environment variable tells it to use ssh to connect to the server rather than the default, which sadly is rsh. It also needs an environment variable called CVSROOT, but fortunately there's a "-d" option for that, though none for CVS_RSH that I know of.
Interestingly enough, if I do:
command = "sudo -u user echo $CVS_RSH"
env={"CVS_RSH":"something_else"}
p = subprocess.Popen(command, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, env=env, shell=True)
(command_output, error_output) = p.communicate()
p.wait()  # returns 0
>>> command_output
b'something_else\n'
Maybe this worked because echo wasn't actually started as a child process? Is it possible to pass an environment to a process executed as another user with sudo?
This doesn't seem to be possible using the env parameter. The solution seems to be to pass the environment variables on the command line, just as I was doing in the shell, for example:
command = "sudo -u user CVS_RSH=ssh
CVSROOT=:ext:user#2.8.7.2:/usr/local/ncvs cvs co dir/file.py"
p = subprocess.Popen(command, stdout=subprocess.PIPE,
stderr=subprocess.PIPE,env=env,shell=True)
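The same idea also works without shell=True by passing the assignments as list arguments; sudo treats leading VAR=value words as environment assignments for the target command, subject to the sudoers policy (setenv/env_keep). A sketch, with the host and paths copied from above:
import subprocess

command = ['sudo', '-u', 'user',
           'CVS_RSH=ssh', 'CVSROOT=:ext:user@2.8.7.2:/usr/local/ncvs',
           'cvs', 'co', 'dir/file.py']
p = subprocess.Popen(command, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
command_output, error_output = p.communicate()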
The weird thing is, if I do this in a Python CGI script, I can see:
cvs [checkout aborted]: cannot exec ssh: Permission denied
cvs [checkout aborted]: end of file from server (consult above messages if any)
But if I try it in the interactive Python shell, it gets past this, so it must be another weird issue (the user does have permission to ssh), unrelated to this question.

Syntax error when killing process tree through Python

I am trying to kill a process tree using this shell command:
kill -TERM -- -3333
so in Python I use subprocess:
subprocess.call(['kill', '-TERM', '--', '-3333'])
the process is terminated as expected but I get this message:
ERROR: garbage process ID "--".
Usage:
kill pid ... Send SIGTERM to every process listed.
kill signal pid ... Send a signal to every process listed.
kill -s signal pid ... Send a signal to every process listed.
kill -l List all signal names.
kill -L List all signal names in a nice table.
kill -l signal Convert between signal numbers and names.
Why do I get this message and what am I doing wrong?
I am using Python 2.6.5 on Ubuntu 10.04.
You are passing the kill command an argument it doesn't recognise. You could simply drop the --:
subprocess.call(['kill', '-TERM', '-3333'])
You probably should be passing in the PID without the dash as well; if -- is not supported, a negative PID probably isn't either, at which point you'd be signalling just the single process.
Note that you are not executing this through a shell. Your shell probably has its own kill command implementation built in, but Python instructs the OS to find the first kill binary executable on the PATH instead. The shell built-in may accept --, but that's not the command you are executing here.
If you must use the shell built-in, then you'll have to set shell=True and pass in a string command line:
subprocess.call('kill -TERM -- -3333', shell=True)
This uses /bin/sh; you can set a different shell to run the command through with the executable argument:
subprocess.call('kill -TERM -- -3333', shell=True, executable='/bin/bash')
Last but not least, you may not need the kill command at all. Python can send signals directly with the os.kill() function:
import os, signal
os.kill(3333, signal.SIGTERM)
and the os.killpg() function can send a signal to a process group:
import os, signal
os.killpg(3333, signal.SIGTERM)
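Tying these together, a short sketch (POSIX only; the sleep command is a placeholder): start the child in its own process group, then signal the whole group:
import os, signal, subprocess

# preexec_fn=os.setsid makes the child a new session/process-group leader
p = subprocess.Popen(['sleep', '1000'], preexec_fn=os.setsid)

# The group ID equals the leader's PID, so this signals every process in the tree
os.killpg(os.getpgid(p.pid), signal.SIGTERM)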

Killing process on Ubuntu with os.system() issue

I'm trying to send a command from the Python shell to Ubuntu to find the process listening on a particular port and kill it:
port = 8000
os.system("netstat -lpn | grep %s" % port)
Output:
tcp 0 0 127.0.0.1:8000 0.0.0.0:* LISTEN 22000/python
Then:
os.system("kill -SIGTERM 22000")
but got following trace
sh: 1: kill: Illegal option -S
For some reason the full -SIGTERM signal name is not accepted; only -S gets parsed. I can simply kill this process directly from the terminal, so it seems to be a Python or os issue... How can I run the kill command from Python?
Any help is appreciated
os.system uses sh to execute the command, not the bash you get in a terminal. The kill builtin in sh requires the signal name without the SIG prefix. Change your os.system command line to kill -TERM 22000.
[EDIT] As @DJanssens suggested, using os.kill is a better option than calling the shell for such a simple thing.
You could try using:
import os, signal
os.kill(process.pid, signal.SIGKILL)
Documentation can be found here.
You could also use signal.CTRL_C_EVENT (Windows only), which corresponds to the Ctrl+C keystroke event.
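For completeness, a sketch that combines the two steps without shelling out for the kill (assuming Linux, Python 3, and enough privileges for netstat -p to show the PID; the parsing of the PID/program column is illustrative):
import os, signal, subprocess

port = 8000
out = subprocess.check_output(['netstat', '-lpn'], universal_newlines=True)
for line in out.splitlines():
    # match the listening socket on our port; the last field looks like "22000/python"
    if 'LISTEN' in line and ':%d ' % port in line:
        pid = int(line.split()[-1].split('/')[0])
        os.kill(pid, signal.SIGTERM)
        break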
