How to get each dependent command execution output using Paramiko exec_command - python

I'm using Paramiko to execute a single command or multiple commands and get their output.
Since Paramiko doesn't allow executing multiple commands on the same channel session, I'm concatenating the commands from my command list and executing them in a single line. But the combined output can be very large depending on the commands, so it's difficult to tell which part of the output belongs to which command.
ssh.exec_command("pwd ; ls -l ; cd / ; ls -l")
I want to have something like:
command_output = [('pwd','output_for_pwd'),('ls -l','output_for_ls'), ... ]
to work easier with every command output.
Is there a way to do it without changing the Paramiko library?

The only solution is (as @Barmar already suggested) to insert a unique separator between the individual commands. Like:
pwd && echo "end-of-pwd" && cd /foo && echo "end-of-cd" && ls -l && echo "end-of-ls"
And then look for the unique string in the output.
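A minimal sketch of that approach with Paramiko (the host name, credentials and marker string below are placeholder assumptions, not from the question):
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("remotehost", username="user", password="secret")  # placeholder credentials
commands = ["pwd", "cd /", "ls -l"]
marker = "end-of-command"  # any string that cannot occur in the real output
# Chain the commands and echo the marker after each one.
joined = " && ".join("{} && echo {}".format(c, marker) for c in commands)
stdin, stdout, stderr = ssh.exec_command(joined)
# Split the combined output back into one chunk per command.
chunks = stdout.read().decode().split(marker + "\n")
command_output = list(zip(commands, chunks))
print(command_output)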
That said, in my opinion it is much better to simply run each command in its own exec_command call (see the sketch after the examples below). In practice you rarely need to execute many commands in a row anyway; usually it is just something like cd or set, and those commands do not produce any output.
Like:
pwd
ls -la /foo (or cd /foo && ls -la)
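A sketch of that per-command approach (again with placeholder connection details); each exec_command call gets its own channel, and recv_exit_status also tells you whether the command succeeded:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("remotehost", username="user", password="secret")  # placeholder credentials
command_output = []
for cmd in ["pwd", "ls -la /foo"]:
    stdin, stdout, stderr = ssh.exec_command(cmd)
    out = stdout.read().decode()                # full output of this command
    status = stdout.channel.recv_exit_status()  # wait for the command to finish
    command_output.append((cmd, out, status))
print(command_output)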
For similar questions, see:
Execute multiple dependent commands individually with Paramiko and find out when each command finishes (for "shell" channel)
Combining interactive shell and recv_exit_status method using Paramiko

I used to do this for sending commands over SSH and Telnet; you can capture the output for each command like this (a telnetlib sketch with a placeholder host, assuming the remote prompt ends in '#'):
import telnetlib
tn = telnetlib.Telnet("remotehost")  # placeholder host
commands = ['pwd', 'ls -lrt', 'exit']
cmd_output = []
for cmd in commands:
    tn.write(cmd.encode("ascii") + b"\r\n")
    out = tn.read_until(b"#")
    cmd_output.append((cmd, out))
    print(out)

executing unix command after sudo command using python [duplicate]

I have a script where I need to start a command, then pass some additional commands as commands to that command. I tried
su
echo I should be root now:
who am I
exit
echo done.
... but it doesn't work: The su succeeds, but then the command prompt is just staring at me. If I type exit at the prompt, the echo and who am i etc start executing! And the echo done. doesn't get executed at all.
Similarly, I need for this to work over ssh:
ssh remotehost
# this should run under my account on remotehost
su
## this should run as root on remotehost
whoami
exit
## back
exit
# back
How do I solve this?
I am looking for answers which solve this in a general fashion, and which are not specific to su or ssh in particular. The intent is for this question to become a canonical for this particular pattern.
Adding to tripleee's answer:
It is important to remember that the section of the script formatted as a here-document for another shell is executed in a different shell with its own environment (and maybe even on a different machine).
If that block of your script contains parameter expansion, command substitution, and/or arithmetic expansion, then you must use the here-document facility of the shell slightly differently, depending on where you want those expansions to be performed.
1. All expansions must be performed within the scope of the parent shell.
Then the delimiter of the here document must be unquoted.
command <<DELIMITER
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=leon
a=0
mylogin=leon
2. All expansions must be performed within the scope of the child shell.
Then the delimiter of the here document must be quoted.
command <<'DELIMITER'
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<'END'
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=1
mylogin=root
a=0
mylogin=leon
3. Some expansions must be performed in the child shell, and some in the parent.
Then the delimiter of the here document must be unquoted and you must escape those expansion expressions that must be performed in the child shell.
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=\$(whoami)
echo a=$a
echo mylogin=\$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=root
a=0
mylogin=leon
A shell script is a sequence of commands. The shell will read the script file, and execute those commands one after the other.
In the usual case, there are no surprises here; but a frequent beginner error is assuming that some commands will take over from the shell, and start executing the following commands in the script file instead of the shell which is currently running this script. But that's not how it works.
Basically, scripts work exactly like interactive commands, but how exactly they work needs to be properly understood. Interactively, the shell reads a command (from standard input), runs that command (with input from standard input), and when it's done, it reads another command (from standard input).
Now, when executing a script, standard input is still the terminal (unless you used a redirection) but the commands are read from the script file, not from standard input. (The opposite would be very cumbersome indeed - any read would consume the next line of the script, cat would slurp all the rest of the script, and there would be no way to interact with it!) The script file only contains commands for the shell instance which executes it (though you can of course still use a here document etc to embed inputs as command arguments).
In other words, these "misunderstood" commands (su, ssh, sh, sudo, bash etc) when run alone (without arguments) will start an interactive shell, and in an interactive session, that's obviously fine; but when run from a script, that's very often not what you want.
All of these commands have ways to accept commands by ways other than in an interactive terminal session. Typically, each command supports a way to pass it commands as options or arguments:
su root -c 'who am i'
ssh user@remote uname -a
sh -c 'who am i; echo success'
Many of these commands will also accept commands on standard input:
printf 'uname -a; who am i; uptime' | su
printf 'uname -a; who am i; uptime' | ssh user@remote
printf 'uname -a; who am i; uptime' | sh
which also conveniently allows you to use here documents:
ssh user@remote <<'____HERE'
uname -a
who am i
uptime
____HERE
sh <<'____HERE'
uname -a
who am i
uptime
____HERE
For commands which accept a single command argument, that command can be sh or bash with multiple commands:
sudo sh -c 'uname -a; who am i; uptime'
As an aside, you generally don't need an explicit exit because the command will terminate anyway when it has executed the script (sequence of commands) you passed in for execution.
If you want a generic solution which will work for any kind of program, you can use the expect command.
Extract from the manual page:
Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be. An interpreted language provides branching and high-level control structures to direct the dialogue. In addition, the user can take control and interact directly when desired, afterward returning control to the script.
Here is a working example using expect:
set timeout 60
spawn sudo su -
expect "*?assword" { send "*secretpassword*\r" }
send_user "I should be root now:"
expect "#" { send "whoami\r" }
expect "#" { send "exit\r" }
send_user "Done.\n"
exit
The script can then be launched with a simple command:
$ expect -f custom.script
You can view a full example in the following page: http://www.journaldev.com/1405/expect-script-example-for-ssh-and-su-login-and-running-commands
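Since these questions are about Python, here is a rough equivalent using the pexpect library (assumptions on my part: pexpect is installed, the sudo prompt ends in "assword" and the root prompt ends in "#", as in the expect script above):
import pexpect
child = pexpect.spawn("sudo su -", timeout=60)
child.expect("assword")             # sudo password prompt
child.sendline("*secretpassword*")  # placeholder password, as in the expect script
child.expect("#")                   # root shell prompt
child.sendline("whoami")
child.expect("#")
print(child.before.decode())        # everything printed since the last prompt
child.sendline("exit")
child.close()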
Note: The answer proposed by @tripleee only works if standard input can be read once at the start of the command, or if a tty has been allocated, and it won't work for any truly interactive program.
Examples of errors if you use a pipe:
echo "su whoami" |ssh remotehost
--> su: must be run from a terminal
echo "sudo whoami" |ssh remotehost
--> sudo: no tty present and no askpass program specified
With SSH you might force a TTY allocation by passing -t multiple times, but when sudo asks for the password, it will still fail.
Without a program like expect, any call to a function or program that reads from stdin will make the next command fail:
ssh user@host <<'____HERE'
echo "Enter your name:"
read name
echo "ok."
____HERE
--> The `echo "ok."` string will be passed to the "read" command

Python Script skips process of Shell Script file

I'm trying to automate the removal of a program with Python and shell scripts.
This is the code I use to execute my shell scripts:
import subprocess
self.shellscript = subprocess.Popen([self.shellScriptPath], shell=True, stdin=subprocess.PIPE )
self.shellscript.stdin.write('yes\n'.encode("utf-8"))
self.shellscript.stdin.close()
self.returncode = self.shellscript.wait()
This is the shell script that I want to run.
echo *MY PASSWORD* | sudo -S apt-get --purge remove *PROGRAM*
echo *MY PASSWORD* | sudo -S apt-get autoremove
echo *MY PASSWORD* | sudo -S apt-get clean
I know it's not secure to hard-code my password like this, but I will fix that later.
My problem is that the command line asks me to type y/n, but the program skips that prompt and nothing happens.
In this particular case, the absolutely simplest fix is to use
apt-get -y ...
and do away with passing input to the command entirely.
In the general case, you want to avoid Popen whenever you can. You are reimplementing subprocess.call(), but not completely. Your entire attempt can be reduced to (and fixed with):
self.returncode = subprocess.run(
    self.shellScriptPath, input='yes\n', text=True).returncode
Unless the commands in self.shellScriptPath require a shell, you should probably remove shell=True and, if necessary, shlex.split() the value into a list of tokens (though if it's already a single token, you can simply wrap it in a list yourself: [self.shellScriptPath]).
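A self-contained sketch along those lines (the function name is mine, and it assumes the script path may contain arguments but needs no shell features):
import shlex
import subprocess
def run_script(script_path):
    """Run the uninstall script without a shell, feeding 'yes' on stdin."""
    args = shlex.split(script_path)  # tokenize in case the value contains arguments
    result = subprocess.run(args, input="yes\n", text=True)
    return result.returncode
With apt-get -y inside the script itself, the input argument becomes unnecessary.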

Filter output of a process with `grep` while keeping the return value

I think this is not really a Python question, but to provide the context I'll explain exactly what I'm doing.
I run a command on a remote machine using ssh -t <host> <command> like this:
if os.system('ssh -t some_machine [ -d /some/directory ]') != 0:
    do_something()
(note: [ -d /some/directory ] is only an example. Could be replaced by any command which returns 0 in case everything went fine)
Unfortunately ssh prints "Connection to some_machine close." every time I run it.
Stupidly I tried to run ssh -t some_machine <command> | grep -v "Connection" but this returns the result of grep of course.
So in short: in Python I'd like to run a process via ssh and evaluate its return value while filtering away some unwanted output.
Edit: this question suggests something like
<command> | grep -v "bla"; return ${PIPESTATUS[0]}
Indeed this might be an approach, but it seems to work with bash only; at least in zsh, PIPESTATUS seems not to be defined.
Use subprocess, and connect the two commands in Python rather than a shell pipeline.
from subprocess import Popen, PIPE, call
p1 = Popen(["ssh", "-t", "some_machine", "test", "-d", "/some/directory"],
           stdout=PIPE)
# Let grep filter ssh's output; grep's own exit status is not the one we care about.
call(["grep", "-v", "Connection"], stdin=p1.stdout)
p1.stdout.close()
if p1.wait() != 0:  # exit status of ssh itself
    do_something()
Taking this a step further, try to avoid running external programs when unnecessary. You can examine the output of ssh directly in Python without using grep; for example, using the re library to examine the data read from p1.stdout yourself. You can also use a library like Paramiko to connect to the remote host instead of shelling out to run ssh.
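For completeness, a hedged sketch of the Paramiko route (the host name, the authentication setup and do_something() are placeholders carried over from the question):
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("some_machine")  # assumes key-based authentication is already set up
stdin, stdout, stderr = ssh.exec_command("test -d /some/directory")
if stdout.channel.recv_exit_status() != 0:  # exit status of the remote test command
    do_something()
This avoids both the extra grep process and the connection-closed message printed by the ssh client.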

How to do multiple arguments with Python Popen?

I am trying to make a PyGTK GUI that has a button. When the user presses this button, gnome-terminal prompts the user for their password.
Then it will clone this Git repository of jQuery snippets for gedit.
And then, it copies the js.xml file to /usr/share/gedit/plugins/snippets/js.xml
In the end, it forcefully removes the Git repository.
The command:
gnome-terminal -x sudo git clone git://github.com/pererinha/gedit-snippet-jquery.git && sudo cp -f gedit-snippet-jquery/js.xml /usr/share/gedit/plugins/snippets/js.xml && sudo rm -rf gedit-snippet-jquery
It works fine in my terminal.
But via the GUI, the terminal just opens, I enter my password, press Enter, and then it closes again.
I'd like to only run the command up to the first &&.
This is my Python function (with command):
def on_install_jquery_code_snippet_for_gedit_activate(self, widget):
    """ Install Jquery code snippet for Gedit. """
    cmd = "gnome-terminal -x sudo git clone git://github.com/pererinha/gedit-snippet-jquery.git && sudo cp -f gedit-snippet-jquery/js.xml /usr/share/gedit/plugins/snippets/js.xml && sudo rm -rf gedit-snippet-jquery"
    p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT,
              close_fds=False)
    self.status.set_text(p.stdout.read())  # show response in 'status'
To directly answer your question, read below. But there are a lot of problems with your program, some of which I cover in "Better practice."
By default, subprocess.Popen commands are supplied as a list of strings.
However, you can also use the shell argument to execute a command "formatted exactly as it would be when typed at the shell prompt."
No:
>>> p = Popen("cat -n file1 file2")
Yes:
>>> p = Popen("cat -n file1 file2", shell=True)
>>> p = Popen(["cat", "-n", "file1", "file2"])
There are a number of differences between these two options, and valid use cases for each. I won't attempt to summarize the differences; the Popen docs already do an excellent job of that.
So, in the case of your commands, you'd do something like this:
cmd = "gnome-terminal -x sudo git clone git://github.com/pererinha/gedit-snippet-jquery.git && sudo cp -f gedit-snippet-jquery/js.xml /usr/share/gedit/plugins/snippets/js.xml && sudo rm -rf gedit-snippet-jquery"
p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT,
          close_fds=False)
Better practice
However, using Python as a wrapper for many system commands is not really a good idea. At the very least, you should be breaking up your commands into separate Popens, so that non-zero exits can be handled adequately. In reality, this script seems like it'd be much better suited as a shell script. But if you insist on Python, there are better practices.
The os and shutil modules can take the place of the calls to rm and cp (see the sketch below). And while I have no experience with it, you might want to look at a tool like GitPython to interact with Git repositories.
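For illustration, a rough sketch of that workflow as separate, checked steps (it assumes the script already runs with sufficient privileges to write to /usr/share/gedit, so no sudo or terminal emulator is involved):
import shutil
import subprocess
repo = "git://github.com/pererinha/gedit-snippet-jquery.git"
subprocess.run(["git", "clone", repo], check=True)  # raises CalledProcessError on failure
shutil.copy("gedit-snippet-jquery/js.xml",
            "/usr/share/gedit/plugins/snippets/js.xml")  # replaces the cp call
shutil.rmtree("gedit-snippet-jquery")                    # replaces rm -rf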
Compatibility concerns
Lastly, you should be careful about making calls to gnome-terminal and sudo. Not all GNU/Linux users run Ubuntu, and not everyone has sudo, or the GNOME terminal emulator installed. In its current form, your script will crash, rather unhelpfully, if:
The sudo command is not installed
The user is not in the sudoers group
The user doesn't use GNOME, or its default terminal emulator
Git is not installed
If you're willing to assume your users are running Ubuntu, calling x-terminal-emulator is a much better option than calling gnome-terminal directly, as it will call whatever terminal emulator they've installed (e.g. xfce4-terminal for users of Xubuntu).

How to use Fabric with dtach or screen, is there some example?

I have googled a lot, and the Fabric FAQ also says to use screen or dtach with it, but I didn't find how to actually implement that.
Below is my code, which does not work: the sh script will not execute as expected (it is a nohup task).
from fabric.api import put, run

def dispatch():
    run("cd /export/workspace/build/ && if [ -f spider-fetcher.zip ]; then mv spider-fetcher.zip spider-fetcher.zip.bak; fi")
    put("/root/build/spider-fetcher.zip", "/export/workspace/build/")
    run("cd /export/script/ && sh ./restartCrawl.sh && echo 'finished'")
I've managed to do it in two steps:
Start a tmux session on the remote server in detached mode:
run("tmux new -d -s foo")
Send command to the detached tmux session:
run("tmux send -t foo.0 ls ENTER")
Here '-t' determines the target session ('foo'), and 'foo.0' specifies the pane in which the 'ls' command is to be executed.
You can just prepend screen to the command you want to run:
run("screen long running command")
Note, though, that Fabric doesn't keep state the way something like expect would: each run/sudo/etc. call is its own separate command run and knows nothing about the state left by the previous command. E.g. run("cd /var"); run("pwd") will not print /var but the home directory of the user who logged into the box.
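A small sketch of how to keep the directory change and the command together (assuming Fabric 1.x, which provides the cd() context manager):
from fabric.api import cd, run
def show_var():
    run("cd /var && pwd")  # one shell invocation, so the cd takes effect
    with cd("/var"):
        run("pwd")         # Fabric prefixes the command with "cd /var && "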
