So I am currently trying to run two different gnome-terminal windows in Ubuntu that I can send individual commands to after they are initially open.
def ssh_command(cmd):
    ssh_terminal_1 = subprocess.Popen(['gnome-terminal', '--', 'bash', '-c', cmd],
                                      stderr=subprocess.STDOUT, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
    ssh_terminal_2 = subprocess.Popen(['gnome-terminal', '--', 'bash', '-c', cmd],
                                      stderr=subprocess.STDOUT, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
    # Activate the conda environment for our multilateration server
    spyder_activate('conda activate flyhound')
    time.sleep(10)
    ssh_terminal_1.stdin.flush()
    ssh_terminal_2.stdin.flush()
    ssh_terminal_1.stdin.write(b'cd srsRAN22.04/build\n')
    ssh_terminal_1.stdin.flush()
    ssh_terminal_2.stdin.write(b'cd srsRAN22.04/build\n')
    ssh_terminal_2.stdin.flush()
    ssh_terminal_1.stdin.write(b'sudo ./srsepc/src/srsepc ../srsepc/epc.conf.example --hss.db_file=../srsepc/user_db_unknown.csv.example\n')
    ssh_terminal_1.stdin.flush()
    ssh_terminal_2.stdin.write(b'bladeRF-cli -l /home/administrator/Downloads/hostedxA5-latest.rbf\n')
    ssh_terminal_2.stdin.flush()
    ssh_terminal_2.stdin.write(b'bladeRF-cli -f /home/administrator/Downloads/bladeRF_fw_v2.4.0.img\n')
    ssh_terminal_2.stdin.flush()
    ssh_terminal_2.stdin.write(b'sudo ./srsenb/src/srsenb ../srsenb/enb.conf.example --enb_files.sib_config=../srsenb/sib.conf.example --enb.n_prb=50 --enb_files.rr_config=../srsenb/rr.conf.example\n')
However, when I start the original subprocess command, the terminals open fine with the command given in the function call, but none of the following writes work and I get a broken pipe error (errno 32). While running these commands I also need to keep a previous terminal open, which looks like this:
def access_command(cmd):
    while True:
        process = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE)
        while True:
            output = process.stdout.readline()
            if output == '' and process.poll() is not None:
                break
            if output:
                print(output.strip())
                if b"f0:9e:4a:5f:a4:5b" and b"handshake" in output:
                    ssh_command("sshpass -p earth ssh -o StrictHostKeyChecking=no administrator@ipaddress; clear; screen")
I am really not sure how I can send multiple commands to the ssh terminals after they ssh into that IP address. I am very new to subprocess and sending commands to terminals via Python, so any help would be amazing!
As I explained in the comments, your pipe goes to gnome-terminal, not to ssh or bash. gnome-terminal does not listen to stdin; it only listens to the user at the console. Here is what you do.
Make a FIFO (named pipe) -- os.mkfifo -- for each terminal, give it a name that won't collide with any other file (such as put your process ID in it).
Issue the command gnome-terminal -- bash -c 'ssh <options> < <fifo name>' for each terminal. Do not make this a Popen call; use os.system or something like that.
Do your spydy magic (anaconda).
Open the FIFO as a file
Write your commands to the open file; they will be executed by the bash process in the ssh connection. You will probably have to flush, unless there is a way to open it in line-buffered mode.
When you want to close the terminal, close the file.
What this accomplishes is that we move the pipe from gnome-terminal to ssh and hence across the connection to bash. We feed it on one end and it comes out and gets digested by the shell.
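A minimal sketch of those steps (hedged: fifo_name, open_ssh_terminal, and the ssh target are made-up names for illustration, and gnome-terminal must be installed):

```python
import os

def fifo_name(tag):
    # Embed the PID so the name does not collide with other runs
    return "/tmp/term_fifo_{}_{}".format(os.getpid(), tag)

def open_ssh_terminal(tag, ssh_target):
    fifo = fifo_name(tag)
    os.mkfifo(fifo)
    # The redirection is inside the bash -c string, so the FIFO is
    # attached to ssh's stdin, not to gnome-terminal itself
    os.system("gnome-terminal -- bash -c 'ssh {} < {}' &".format(ssh_target, fifo))
    # Opening the write end blocks until the reader (ssh) opens the FIFO
    return open(fifo, "w", buffering=1)  # line-buffered

# Usage (hypothetical host):
# term = open_ssh_terminal("1", "administrator@ipaddress")
# term.write("cd srsRAN22.04/build\n")
# term.close()  # closes the terminal's input when you are done
```

Each write then travels through the FIFO, across the ssh connection, and into the remote bash, which is exactly the plumbing the steps above describe.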
Related
I am writing a sample Python program on a Linux system. I am using tmux to create a session and execute another script within the tmux session. I would like to get the stdout and stderr out of the tmux session into the parent script, but that somehow does not work.
Code snippet:
cmd = "tmux new-session -d -s 'test' '/my_dir/fibonacci.py __name:=fibonacci_0'"
proc = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
(stdout, stderr) = proc.communicate()
print(stderr)
I have come across answers suggesting show-buffer and pipe-pane, but they did not help. Maybe I need to modify the tmux command itself.
Thank you for your support. After digging a bit, I came up with a workaround. I am just adding it here for someone with similar needs.
What I have done is create a named pipe, redirect the output of the tmux session to the named pipe, and finally read from it.
from subprocess import call, Popen, PIPE

# this attaches if a session exists or creates one, then detaches from the session
call("tmux new-session -A -s test \; detach", shell=True)
# to avoid conflicts, remove any existing named pipe, then create the named pipe
call("rm -f /tmp/mypipe && mkfifo /tmp/mypipe && tmux pipe-pane -t test -o 'cat > /tmp/mypipe'", shell=True)
# feed the pipe into stdout and stderr
proc = Popen(['cat', '/tmp/mypipe'], stdout=PIPE, stderr=PIPE)
# finally execute the command in the tmux session
Popen(['tmux', 'send-keys', '-t', 'test', '/my_dir/fibonacci.py', 'C-m'])
(stdout, stderr) = proc.communicate()
print(stderr)
Hope this is helpful.
TL;DR: tmux is like a sandboxed shell; you can't get into the session and reach its stdout or stderr.
tmux creates a detached process. The stdout and stderr of tmux itself are the text messages that tmux provides, not the output of the commands you run in a tmux session.
So you can't get your commands' output from a detached process.
In order to get the stdout and stderr, you have to change how fibonacci.py writes its text. The Python logging framework can help you in this situation: you can write everything meant for stdout to stdout.txt and read the content of that file.
The subprocess.Popen() function has a "env" parameter. But it doesn't seem to have the desired effect with sudo. This is what I get when I do this in the interactive Python shell:
import subprocess
env={"CVS_RSH":"ssh"}
command = "sudo -u user cvs -d user@1.8.7.2:/usr/local/ncvs co file.py"
p = subprocess.Popen(command, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, env=env, shell=True)
(command_output, error_output) = p.communicate()
p.wait()
1
>>> error_output
b'cvs [checkout aborted]: cannot exec rsh: No such file or
directory\ncvs [checkout aborted]: end of file from server (consult
above messages if any)\n'
The message is distracting, so let me explain. I'm forced to use ancient CVS and the environment variable tells it to use ssh to connect to the server, rather than the default which sadly is rsh. It also needs an environment variable called CVS_ROOT, but fortunately there's a "-d" option for that, but none for the CVS_RSH that I know of.
Interestingly enough, if I do:
command = "sudo -u user echo $CVS_RSH"
env = {"CVS_RSH": "something_else"}
p = subprocess.Popen(command, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, env=env, shell=True)
(command_output, error_output) = p.communicate()
p.wait()
0
>>> command_output
b'something_else\n'
Maybe this worked because echo wasn't actually started as a child process? Is it possible to pass an environment to a process executed as another user with sudo?
This doesn't seem possible using the env parameter. The solution seems to be to just pass the environment as I was doing on the shell, for example:
command = ("sudo -u user CVS_RSH=ssh "
           "CVSROOT=:ext:user@2.8.7.2:/usr/local/ncvs cvs co dir/file.py")
p = subprocess.Popen(command, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, env=env, shell=True)
The weird thing is, if I do this in a Python CGI script, I can see:
cvs [checkout aborted]: cannot exec ssh: Permission denied
cvs [checkout aborted]: end of file from server (consult above messages if
any)
But if I try it in the interactive Python shell, it goes past this, so it must be another weird issue (the user does have permission to ssh), unrelated to this question.
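The inline-assignment trick can be sanity-checked locally without sudo or cvs; here printenv stands in for the cvs command (a sketch, not the actual CVS setup):

```python
import subprocess

# A variable assigned on the command line is part of the command itself,
# so it reaches the child even when a wrapper such as sudo would
# otherwise scrub the environment passed via env=.
command = "CVS_RSH=ssh printenv CVS_RSH"
p = subprocess.Popen(command, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, shell=True)
out, err = p.communicate()
print(out.decode().strip())  # -> ssh
```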
I used subprocess to ssh to a host.
Here is the code snippet I used to ssh and run commands as my user. When I try to run a sudo command I get a tty-related error ("not a terminal", or something like that).
sshProcess = subprocess.Popen(['ssh', hostname], stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE, universal_newlines=True, bufsize=0)
sshProcess.stdin.write("hostname\n")
sshProcess.stdin.write("ls -ldr %s \n" % path)
sshProcess.stdin.write("pwd \n")
sshProcess.stdin.close()
for line in sshProcess.stdout:
    if line == "END\n":
        break
    print(line, end="")
But I am not able to run the command below:
sshProcess.stdin.write("sudo su - serviceuser \n")
You can force pseudo-tty allocation by specifying the -t option (as its own list element):
subprocess.Popen(['ssh', '-t', hostname], .....)
From man ssh
-t
Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
Here is my code:
import subprocess
HOST = 'host_name'
PORT = '111'
USER = 'user_name'
CMD = 'sudo su - ec2-user; ls'
process = subprocess.Popen(['ssh', '{}@{}'.format(USER, HOST),
                            '-p', PORT, CMD],
                           shell=False,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
result = process.stdout.readlines()
if not result:
    print "Im an error"
    err = process.stderr.readlines()
    print('ERROR: {}'.format(err))
else:
    print "I'm a success"
    print(result)
When I run this I receive the following output in my terminal:
dredbounds-computer: documents$ python terminal_test.py
Im an error
ERROR: ['sudo: sorry, you must have a tty to run sudo\n']
I've tried multiple things, but I keep getting the error "sudo: sorry, you must have a tty to run sudo". It works fine if I do it manually in the terminal, but I need to automate this. I read that a workaround might be to pass '-t' or '-tt' to ssh, but I haven't been able to implement this successfully with subprocess yet (the terminal just hangs for me). Does anyone know how I can fix my code or work around this issue? Ideally I'd like to ssh, then switch to the sudo user, and then run a file from there (I just put ls for testing purposes).
sudo is prompting you for a password, but it needs a terminal to do that. Passing -t or -tt provides a terminal for the remote command to run in, but now it is waiting for you to enter a password.
process = subprocess.Popen(['ssh', '-tt', '{}@{}'.format(USER, HOST),
                            '-p', PORT, CMD],
                           shell=False,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           stdin=subprocess.PIPE)
process.stdin.write("password\r\n")
Keep in mind, though, that the ls doesn't run until after the shell started by su exits. You should either log into the machine as ec2-user directly (if possible), or just use sudo to run whatever command you want without going through su first.
You can tell sudo to work without requiring a password. Just add this to /etc/sudoers on the remote server host_name.
user ALL = (ec2-user) NOPASSWD: ls
This allows the user named user to execute the command ls as ec2-user without entering a password.
This assumes you change your command to look like this, which seems more reasonable to me:
CMD = 'sudo -u ec2-user ls'
I can't figure out how to close a bash shell that was started via Popen. I'm on windows, and trying to automate some ssh stuff. This is much easier to do via the bash shell that comes with git, and so I'm invoking it via Popen in the following manner:
p = Popen('"my/windows/path/to/bash.exe" | git clone or other commands')
p.wait()
The problem is that after bash runs the commands I pipe into it, it doesn't close. It stays open causing my wait to block indefinitely.
I've tried stringing an "exit" command at the end, but it doesn't work.
p = Popen('"my/windows/path/to/bash.exe" | git clone or other commands && exit')
p.wait()
But still, infinite blocking on the wait. After it finishes its task, it just sits at a bash prompt doing nothing. How do I force it to close?
Try Popen.terminate(); this might help kill your process. If you have only synchronous commands, try using subprocess.call() directly.
for example
import subprocess
subprocess.call(["c:\\program files (x86)\\git\\bin\\git.exe",
"clone",
"repository",
"c:\\repository"])
0
The following is an example of using a pipe, but this is a little overcomplicated for most use cases and makes sense only if you talk to a service that needs interaction (at least in my opinion).
p = subprocess.Popen(["c:\\program files (x86)\\git\\bin\\git.exe",
                      "clone",
                      "repository",
                      "c:\\repository"],
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
print p.stderr.read()
fatal: destination path 'c:\repository' already exists and is not an empty directory.
print p.wait()
128
This can be applied to ssh as well
To kill the process tree, you could use taskkill command on Windows:
Popen("TASKKILL /F /PID {pid} /T".format(pid=p.pid))
As @Charles Duffy said, your bash usage is incorrect.
To run a command using bash, use the -c parameter:
p = Popen([r'c:\path\to\bash.exe', '-c', 'git clone repo'])
In simple cases, you could use subprocess.check_call instead of Popen().wait():
import subprocess
subprocess.check_call([r'c:\path\to\bash.exe', '-c', 'git clone repo'])
The latter command raises an exception if the bash process returns a non-zero status (which indicates an error).
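For example, a failing child process makes check_call raise subprocess.CalledProcessError; here a Python one-liner stands in for a failing git clone:

```python
import subprocess
import sys

# A child process that exits with status 3, standing in for a failed clone
failing = [sys.executable, '-c', 'import sys; sys.exit(3)']

try:
    subprocess.check_call(failing)
    status = 0
except subprocess.CalledProcessError as e:
    # e.returncode carries the child's exit status
    status = e.returncode

print('command failed with status', status)  # -> command failed with status 3
```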