Terminal messed up (not displaying new lines) after running Python script

I have a Python script I use to execute commands in parallel across multiple hosts using the Python subprocess module. It wraps SSH, and basically makes a call like this:
output = subprocess.Popen(["/bin/env", env, "/usr/bin/ssh", "-t", "%s@%s" % (user, host), "--", command], stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
The effective command gets executed like this:
/bin/env TERM=$TERM:password /usr/bin/ssh -t "%s#%s" % (user, host), "--", command
It works fine, except I get an intermittent error where my terminal gets messed up (loses newlines) after running the script. A "reset" from the command line fixes it, but I'm not sure how this is happening, exactly. I noticed that sometimes there's a "\r\n" at the end of the first item in the output tuple, and sometimes there isn't. See the following, specifically "Permission denied\r\n":
**** Okay output ****
[user@/home/user]# ./command.py hosts.lists "grep root /etc/shadow"
Running command "grep root /etc/shadow" on hosts in file "hosts.test"
('grep: /etc/shadow: Permission denied\r\n', 'Connection to server1.example.com closed.\r\n')
('grep: /etc/shadow: Permission denied\r\n', 'Connection to server2.example.com closed.\r\n')
[user@/home/user]#
**** Output causes terminal to not display newlines ****
[user@/home/user]# ./command.py hosts.list "grep root /etc/shadow"
('grep: /etc/shadow: Permission denied\r\n', 'Connection to server1.example.com closed.\r\n')
('grep: /etc/shadow: Permission denied\n', 'Connection to server2.example.com closed.\r\n')
[user@/home/user]# [user@/home/user]# [user@/home/user]
The second output has been slightly modified, but it shows the missing "\r" and how my prompt gets "whacked" after running the script.
I think this is related to using the "-t" option in my subprocess command; somehow I'm losing the \r. If I remove the "-t" option, this issue goes away, but long story short, I need it for passing environment variables through for use on the remote machine. (I'm hackishly using the TERM variable to pass through the user's password for sudo purposes, because I can't assume AcceptEnv is allowing arbitrary variable passing on the remote sshd server; I'm doing this to avoid passing the password on the command line, which would show up in the process list on the remote machine.)
Just wondering if anyone knows a way to get around this, without removing the "-t" option?
UPDATE:
It looks like my tty settings get altered after running the subprocess.Popen(...).communicate() command within my script, regardless of whether or not I actually print the output to screen. I find that really strange. Here are the before/after differences in my tty config (from stty -a):
Before:
-ignbrk brkint ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon -iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
After:
-ignbrk brkint ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff
-opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
-isig -icanon -iexten -echo -echoe -echok -echonl -noflsh -xcase -tostop -echoprt
I'm wondering how to stop communicate() from altering my terminal settings? Is it possible, or is this a bug?

I found that
stty sane
restores the console to how it was before.
I didn't really understand the other answer here, so I hope this helps someone.
Found the answer here.
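If you want to do the same restore from inside the Python script itself, a minimal sketch of one way to do it (my own suggestion, not from the original post; it assumes a POSIX system where stdin is the controlling terminal, and "user@host" and the command are placeholders) is to snapshot the terminal attributes with termios before calling communicate() and put them back afterwards:
import subprocess
import sys
import termios

fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)  # snapshot the current terminal settings
try:
    # placeholder host and command standing in for the question's values
    output = subprocess.Popen(
        ["/usr/bin/ssh", "-t", "user@host", "--", "grep root /etc/shadow"],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    ).communicate()
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, saved)  # restore them no matter what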

I had the same issue in a Perl script. To solve the problem I had to save the current settings of the local terminal (in order to restore them at the end of the script) and prepend "stty -raw" to the remote command.
So in Perl:
#Save current terminal settings (you may add the PID in the filename)
`stty -g > ~/tmp/.currentTtySettings`;
#Execute remote command prepending "stty -raw"
my @out=`ssh -t -q user@server1.example.com "stty -raw ; grep root /etc/shadow"`;
#Restore terminal settings
`stty \`cat ~/tmp/.currentTtySettings\``;
Hope it helps you!
Other very useful links:
- Detailed explanation of ssh and tty (-t option): https://unix.stackexchange.com/questions/151916/why-is-this-binary-file-being-changed
- Some Perl and ssh inspiration: http://search.cpan.org/~bnegrao/Net-SSH-Expect-1.09/lib/Net/SSH/Expect.pod
- How to avoid "-t" for sudo: https://unix.stackexchange.com/questions/122616/why-do-i-need-a-tty-to-run-sudo-if-i-can-sudo-without-a-password
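For comparison, here is a sketch of the same save/restore idea translated back to Python (untested; it assumes stdin is the terminal, and reuses the example host and command from above):
import subprocess

# Save the current terminal settings as a string stty can re-read later
saved = subprocess.check_output(["stty", "-g"]).strip().decode()

# Execute the remote command, prepending "stty -raw" as in the Perl version
out = subprocess.check_output(
    ["ssh", "-t", "-q", "user@server1.example.com",
     "stty -raw ; grep root /etc/shadow"])

# Restore the terminal settings saved above
subprocess.check_call(["stty", saved])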

Related

Attempting to run commands on an external server via python using paramiko ssh [duplicate]

I am slowly trying to make a Python script to SSH then FTP to do some manual file getting I have to do all the time. I am using Paramiko, and the session seems to connect and run commands, and it prints the directory, but my change directory command doesn't seem to work; it prints the directory I start in: /01/home/.
import paramiko

hostname = ''
port = 22
username = ''
password = ''

# selecting PROD instance, changing to data directory, checking directory
command = {
    1: 'ORACLE_SID=PROD',
    2: 'cd /01/application/dataload',
    3: 'pwd'
}

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname, port, username, password)
for key, value in command.items():
    stdin, stdout, stderr = ssh.exec_command(value)
    outlines = stdout.readlines()
    result = ''.join(outlines)
    print(result)
ssh.close()
When you run exec_command multiple times, each command is executed in its own "shell", so the previous commands have no effect on the environment of the following commands.
If you need the previous commands to affect the following commands, just use the appropriate syntax of your server's shell. Most *nix shells use a semicolon or a double ampersand (with different semantics) to specify a list of commands. In your case, the double ampersand is more appropriate, as it executes the following commands only if the previous commands succeeded:
command = "ORACLE_SID=PROD && cd /01/application/dataload && pwd"
stdin,stdout,stderr = ssh.exec_command(command)
In many cases, you do not even need to use multiple commands.
For example, instead of this sequence, which you might use when working with a shell interactively:
cd /path
ls
You can do:
ls /path
See also:
How to get each dependent command execution output using Paramiko exec_command
Obligatory warning: Do not use AutoAddPolicy on its own – you lose protection against MITM attacks by doing so. For a correct solution, see Paramiko "Unknown Server".
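A minimal sketch of the safer approach (loading your known_hosts instead of auto-adding; with no policy set, Paramiko rejects unknown host keys by default):
import paramiko

ssh = paramiko.SSHClient()
ssh.load_system_host_keys()  # trust the keys already in ~/.ssh/known_hosts
# no set_missing_host_key_policy(AutoAddPolicy()): an unknown host now raises SSHException
ssh.connect(hostname, port, username, password)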
Well, by accidentally trying something, I managed to figure this out, I believe. You need to do all the commands at one time and do not need to do them in a loop. For my instance it would be:
import paramiko

hostname = ''
port = 22
username = ''
password = ''

# selecting PROD instance, changing to data directory, checking directory
command = 'ORACLE_SID=PROD;cd /01/application/dataload;pwd'

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname, port, username, password)
stdin, stdout, stderr = ssh.exec_command(command)  # note: 'command', not 'value' (the loop is gone)
outlines = stdout.readlines()
result = ''.join(outlines)
print(result)
ssh.close()

How can I store the output of a command I ran using SSH from a Python script into a file?

I want to be able to store the output of a command I run (say top) on a remote host, from within a Python script using SSH, into a file.
I know how to use SSH (I am currently using Paramiko to connect to the remote device). I need to run the command, and then store the output in a text file.
If you use paramiko's exec_command method to execute the command, you could do something like this:
stdin, stdout, stderr = your_ssh_client_object.exec_command("top")
with open("out.txt", "w") as f:
    f.writelines(stdout.readlines())
If you care about error output too, you need to append it to the same file or store it in a separate one.
(NOTE: The above code is neither tested nor working on its own, since the OP didn't provide a minimal, workable example himself.)
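For example, to keep the error output as well, a sketch in the same style (same caveats as the note above):
stdin, stdout, stderr = your_ssh_client_object.exec_command("top")
with open("out.txt", "w") as f:
    f.writelines(stdout.readlines())   # regular output
with open("err.txt", "w") as f:
    f.writelines(stderr.readlines())   # error output, stored separately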
What have you tried?
Depending on your command, you can do something like this:
ssh user@machine command > log
The log will be saved to your machine.
A real example:
ssh root@192.168.x.x ls > log
If your command does not write its output to stdout, run it like this:
ssh root#192.168.x.x "command -o output; cat output" > log
With Python, you can use subprocess to run the respective command, something like:
import subprocess

def yourfunction():
    # shell=True is required here because the command string relies on shell redirection (> log)
    subprocess.call("ssh user@machine command > log", shell=True)

yourfunction()

execute which command over ssh in python script

I'm trying to run the command which solsql over SSH in a Python script.
I think the problem is in the ssh command and not the Python part, but maybe it's both.
I tried
subprocess.check_output("ssh root#IP which solsql",
stderr=subprocess.STDOUT, shell=True)
but I get an error.
I tried to run the command manually:
ssh root#{server_IP}" which solsql"
and I get a different output.
On the server I get the real path (/opt/solidDB/soliddb-6.5/bin/solsql)
but over SSH I get this:
which: no solsql in
(/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin)
I think what you're looking for is something like Paramiko. Here is an example of how to use the library and issue a command to the remote system:
import base64
import paramiko
key = paramiko.RSAKey(data=base64.b64decode(b'AAA...'))
client = paramiko.SSHClient()
client.get_host_keys().add('ssh.example.com', 'ssh-rsa', key)
client.connect('ssh.example.com', username='THE_USER', password='THE_PASSWORD')
stdin, stdout, stderr = client.exec_command('which solsql')
for line in stdout:
    print('... ' + line.strip('\n'))
client.close()
When you run a command over SSH, your shell executes a different set of startup files than when you connect interactively to the server. So the fundamental problem is really that the path where this tool is installed is not in your PATH when you connect via ssh from a script.
A common but crude workaround is to force the shell to read in the file with the PATH definition you want; but of course that basically requires you to know at least where the correct PATH is set, so you might as well just figure out where exactly the tool is installed in the first place anyway.
ssh server '. .bashrc; type -all solsql'
(Assuming that the PATH is set up in your .bashrc, and ignoring for the time being the difference between executing stuff as yourself and as root. The dot and space before .bashrc are quite significant. Notice also how we use the POSIX command type rather than the brittle which command, which should have died a natural but horrible death decades ago.)
If you have a good idea of where the tool might be installed, perhaps instead do
subprocess.check_output(['ssh', 'root@' + ip, '''
    for path in /opt/solidDB/*/bin /usr/local/bin /usr/bin; do
        test -x "$path/solsql" || continue
        echo "$path"
        exit 0
    done
    exit 1'''])
Notice how we also avoid the (here, useless) shell=True. Perhaps see also Actual meaning of 'shell=True' in subprocess
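To illustrate the difference, a small sketch (the host and command here are placeholders, not from the question):
import subprocess

# List form: no shell is involved, each element is passed to ssh verbatim,
# so there is only one level of quoting to think about (the remote shell's)
subprocess.check_output(['ssh', 'root@host', 'echo hello'])

# shell=True form: the string is first parsed by the local /bin/sh,
# so quoting has to survive two shells (local and remote)
subprocess.check_output('ssh root@host "echo hello"', shell=True)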
First, you need to debug your error.
Use code like this:
import subprocess

command = "ssh root@IP which solsql"
try:
    result = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    raise RuntimeError("command '{}' returned with error (code {}): {}".format(e.cmd, e.returncode, e.output))
print("Result:", result)
It will output the error message to you, and you'll know what to do; for example, ssh could have asked for a password, or didn't find your key, or something else.

python values to bash line on a remote server

So I have a Python script that connects to the client servers and then gets some data that I need.
Now, it works in this way: my bash script on the client side needs input like the one below, and it works this way:
client.exec_command('/apps./tempo.sh 2016 10 01 02 03')
Now I'm trying to get the user input from my Python script and then transfer it to my remotely called bash script, and that's where I get my problem. Below is the method I tried, with no luck:
import sys
client.exec_command('/apps./tempo.sh', str(sys.argv))
I believe you are using Paramiko - which you should tag or include that info in your question.
The basic problem I think you're having is that you need to include those arguments inside the string, i.e.
client.exec_command('/apps./tempo.sh %s' % str(sys.argv))
otherwise they get applied to the other arguments of exec_command. I think your original example is not quite accurate in how it works.
Just out of interest, have you looked at "fabric" (http://www.fabfile.org)? This has lots of very handy functions like "run", which will run a command on a remote server (or lots of remote servers!) and return you the response.
It also gives you lots of protection by wrapping around popen and paramiko for the ssh login etc., so it can be much more secure than trying to make web services or other things.
You should always be wary of injection attacks - I'm unclear how you are injecting your variables, but if a user calls your script with something like python runscript "; rm -rf /", that would cause very bad problems for you. It would instead be better to have 'options' on the command, which are programmed in, limiting the user's input drastically, or at least a lot of protection around the input variables (see the sketch below). Of course, if this is only for you (or trained people), then it's a little easier.
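One way to add that protection is to quote each user-supplied value before splicing it into the remote command line, for example with shlex.quote (a sketch; the script path is the one from the question, and client is the Paramiko client from above):
import shlex
import sys

# quote every argument so shell metacharacters like ; or " are neutralized
args = ' '.join(shlex.quote(a) for a in sys.argv[1:])
client.exec_command('/apps./tempo.sh ' + args)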
I recommend using paramiko for the ssh connection.
import paramiko
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server, username=user, password=password)
...
ssh_client.close()
And if you want to simulate a terminal, as if a user was typing:
import time

chan = ssh_client.invoke_shell()
chan.send('PS1="python-ssh:"\n')

def exec_command(cmd):
    """Gets ssh command(s), executes them, and returns the output"""
    prompt = 'python-ssh:'  # the command line prompt in the ssh terminal
    buff = ''
    chan.send(str(cmd) + '\n')
    while not chan.recv_ready():
        time.sleep(1)
    while not buff.endswith(prompt):
        buff += chan.recv(1024).decode()
    return buff[:-len(prompt)]  # strip the trailing prompt from the output
Example usage: exec_command('pwd')
And the result would even be returned to you via ssh
Assuming that you are using Paramiko, you need to send the command as a string. It seems that you want to pass the command line arguments given to your Python script as arguments for the remote command, so try this:
import sys
command = '/apps./tempo.sh'
args = ' '.join(sys.argv[1:]) # all args except the script's name!
client.exec_command('{} {}'.format(command, args))
This will collect all the command line arguments passed to the Python script, except the first argument, which is the script's file name, and build a space-separated string. This argument string is then concatenated with the bash script command and executed remotely.

Python rsync error in reading remote root-level files

I'm trying to set up a cron job to rsync remote files (containing root-level files) onto my local server. If I run the command in a shell, it works. But if I run it from Python, I get a strange "command not found" error:
This works if I run it in a shell:
rsync -ave ssh --rsync-path='sudo rsync' --delete root@192.168.1.100:/tmp/test2 ./test
But this Python script doesn't:
#!/usr/bin/python
from subprocess import call
....
for src_dir in backup_list:
    call(["rsync", "-ave", "ssh", "--rsync-path='sudo rsync'", "--delete", src_host + src_dir, dst_dir])
It fails with:
local server:$ backup.py
bash: sudo rsync: command not found
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: remote command not found (code 127) at io.c(226) [Receiver=3.1.0]
...
It is most likely a spacing error or something small; the way I debug commands is to print them out. os.system is a great alternative that's easier, although subprocess is better. I am not around my computer to test it, but you can either set up your subprocess call like that, or use this example. This is assuming you're on Linux or Mac.
import os

# build the command string; note the '@' and the space between source and destination
cmd = ('rsync -ave ssh --delete root@' + str(src_host) + str(src_dir) + ' ' + str(dst_dir))  # variable you can call anytime
print(cmd)      # test and make sure the command looks right
os.system(cmd)  # actually performs the command
Quotes around an argument with spaces, like you have in "--rsync-path='sudo rsync'", are needed when the shell splits up a long string into arguments, to avoid treating rsync as a separate argument. In your call(), you're providing the individual arguments yourself, so that splitting of a string into arguments is not performed. With your code as-is, the quotes end up as part of the argument passed to rsync. Just drop them. Here's a working example of the list passed to a call() for a very similar rsync invocation:
['rsync',
'-arvz',
'--delete',
'-e',
'ssh',
'--rsync-path=sudo rsync',
'192.168.0.17:/remote/directory/',
'/local/directory/']
I have been facing the same issue.
This piece of code worked for me:
join the command while passing it to call or Popen, and add shell=True.
from subprocess import call

for src_dir in backup_list:
    call(" ".join(["rsync", "-ave", "ssh", "--rsync-path='sudo rsync'", "--delete", src_host + src_dir, dst_dir]), shell=True)
