sudo rsync on target devices - python

Good morning,
I am currently having a problem that I cannot find an answer to on Stack Overflow or via Google searches, and that I have not yet solved.
I am trying to use rsync as a sudo user on a target device.
The issue:
I do not own the target device, so I cannot change ssh/sudo perms.
I do not have credentials to the root user
I do have credentials to a sudo user
The transaction must be completed programmatically (minimal user input)
What I've tried:
rsync -a --rsync-path "sudo rsync" USER@HOST:/root/FILE ./
Issue: "A terminal is required to read password"
ok, so let's try passing it through stdin
rsync -a --rsync-path "echo 'PASSWORD' | sudo -S rsync" USER@HOST:/root/FILE ./
Issue:
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.3]
rsync: connection unexpectedly closed (4 bytes received so far) [receiver]
rsync error: error in rsync protocol data stream (code 12) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/rsync/rsync-52.200.1/rsync/io.c(453) [receiver=2.6.9]
Do you have any other ideas about what else I could try?
I am aware that echoing the password is not best practice; however, I do not have many other options, given that the server I am connecting to has not done a key exchange with the root user and I cannot change the sudoers settings.
In the end this is all getting plugged into a Python script, so if there is a better pythonic means of using rsync as a sudoer, please inform me.

If your remote sudo is configured so that once you have given the password, you do not need to give it again for a while, then you can try this:
rsync -a --rsync-path "echo 'PASSWORD' | sudo -S date >&/dev/null; sudo rsync" \
USER@HOST:/root/FILE ./
To debug what command is being run on the remote, add --debug=CMD2.
If your remote does not understand the bash syntax >&/dev/null, use the longer >/dev/null 2>/dev/null.
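Since this all ends up in a Python script anyway, here is a minimal subprocess sketch of the same idea. The host, remote path, and password below are placeholders you would supply yourself (ideally read from the environment or a prompt rather than hard-coded):
import subprocess

HOST = "USER@HOST"          # placeholder
REMOTE_FILE = "/root/FILE"  # placeholder
PASSWORD = "PASSWORD"       # placeholder -- better to read from os.environ or getpass

# Prime sudo's credential cache with a throwaway command, then run rsync as root.
rsync_path = "echo '{}' | sudo -S date >/dev/null 2>/dev/null; sudo rsync".format(PASSWORD)

result = subprocess.run(
    ["rsync", "-a", "--rsync-path", rsync_path,
     "{}:{}".format(HOST, REMOTE_FILE), "./"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    print(result.stderr)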

Related

SSHing from within a python script and run a sudo command having to give sudo password

I am trying to SSH into another host from within a python script and run a command that requires sudo.
I'm able to ssh from the python script as follows:
import subprocess
import sys
import json
HOST="hostname"
# Ports are handled in ~/.ssh/config since we use OpenSSH
COMMAND="sudo command"
ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
if result == []:
    error = ssh.stderr.readlines()
    print(error)
else:
    print(result)
But I want to run a command like this after sshing:
extract_response = subprocess.check_output(['sudo -u username internal_cmd',
                                            '-m', 'POST',
                                            '-u', 'jobRun/-/%s/%s' % (job_id, dataset_date)])
return json.loads(extract_response.decode('utf-8'))[0]['id']
How do I do that?
Also, I don't want to provide the sudo password every time I run this sudo command; for that I have added this command (i.e., internal_cmd from above) at the end of visudo on the new host I'm trying to ssh into. But still, when typing this command directly in the terminal like this:
ssh -t hostname sudo -u username internal_cmd -m POST -u/-/1234/2019-01-03
I am being prompted to give the password. Why is this happening?
You can pipe the password in by using the -S flag, which tells sudo to read the password from standard input.
echo 'password' | sudo -S [command]
You may need to play around with how you put in the ssh command, but this should do what you need.
Warning: you may know this already... but never store your password directly in your code, especially if you plan to push the code somewhere like GitHub. If you are unaware of this, look into using environment variables or storing the password in a separate file.
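For example, a minimal sketch of that combination from Python, assuming the sudo password has been exported in a SUDO_PASS environment variable so it never lives in the code:
import os
import subprocess

HOST = "hostname"
sudo_pass = os.environ["SUDO_PASS"]   # export SUDO_PASS=... before running

# ssh forwards our stdin to the remote command, so sudo -S can read the
# password from it; -p '' suppresses sudo's prompt text.
proc = subprocess.run(
    ["ssh", HOST, "sudo -S -p '' command"],
    input=sudo_pass + "\n",
    capture_output=True,
    text=True,
)
print(proc.stdout if proc.returncode == 0 else proc.stderr)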
If you don't want to worry about where to store the sudo password, you might consider adding the script user to the sudoers list with sudo access to only the command you want to run, along with the NOPASSWD (no password required) option. See the sudoers(5) man page.
You can further restrict command access by prepending a "command" option to the beginning of your authorized_keys entry. See the sshd(8) man page.
If you can, disable ssh password authentication to require only ssh key authentication. See the sshd_config(5) man page.

How to Automate Login to Intermediate Host using ProxyCommand with Paramiko in Python [duplicate]

I need to create a script that automatically inputs a password to OpenSSH ssh client.
Let's say I need to SSH into myname@somehost with the password a1234b.
I've already tried...
#~/bin/myssh.sh
ssh myname@somehost
a1234b
...but this does not work.
How can I get this functionality into a script?
First you need to install sshpass.
Ubuntu/Debian: apt-get install sshpass
Fedora/CentOS: yum install sshpass
Arch: pacman -S sshpass
Example:
sshpass -p "YOUR_PASSWORD" ssh -o StrictHostKeyChecking=no YOUR_USERNAME#SOME_SITE.COM
Custom port example:
sshpass -p "YOUR_PASSWORD" ssh -o StrictHostKeyChecking=no YOUR_USERNAME#SOME_SITE.COM:2400
Notes:
sshpass can also read a password from a file when the -f flag is passed.
Using -f prevents the password from being visible if the ps command is executed.
The file that the password is stored in should have secure permissions.
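As a hedged sketch of how the -f approach might look from Python (the ~/.ssh_password path is just an example location, not a convention), tightening the file permissions before handing it to sshpass:
import os
import stat
import subprocess

PASS_FILE = os.path.expanduser("~/.ssh_password")   # example location

# Make sure the password file is readable only by its owner before using it.
os.chmod(PASS_FILE, stat.S_IRUSR | stat.S_IWUSR)    # i.e. chmod 600

subprocess.run(
    ["sshpass", "-f", PASS_FILE,
     "ssh", "-o", "StrictHostKeyChecking=no", "YOUR_USERNAME@SOME_SITE.COM"],
    check=True,
)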
After looking for an answer to the question for months, I finally found a better solution: writing a simple script.
#!/usr/bin/expect
set timeout 20
set cmd [lrange $argv 1 end]
set password [lindex $argv 0]
eval spawn $cmd
expect "password:"
send "$password\r";
interact
Put it in /usr/bin/exp, so you can use:
exp <password> ssh <anything>
exp <password> scp <anysrc> <anydst>
Done!
Use public key authentication: https://help.ubuntu.com/community/SSH/OpenSSH/Keys
In the source host run this only once:
ssh-keygen -t rsa # press ENTER for every prompt
ssh-copy-id myname@somehost
That's all, after that you'll be able to do ssh without password.
You could use an expect script. I have not written one in quite some time, but it should look like the example below. You will need to head the script with #!/usr/bin/expect.
#!/usr/bin/expect -f
spawn ssh HOSTNAME
expect "login:"
send "username\r"
expect "Password:"
send "password\r"
interact
Variant I
sshpass -p PASSWORD ssh USER@SERVER
Variant II
#!/usr/bin/expect -f
spawn ssh USERNAME@SERVER "touch /home/user/ssh_example"
expect "assword:"
send "PASSWORD\r"
interact
sshpass + autossh
One nice bonus of the already-mentioned sshpass is that you can use it with autossh, eliminating even more of the interactive inefficiency.
sshpass -p mypassword autossh -M0 -t myusername@myserver.mydomain.com
This will allow auto-reconnect if, e.g., your Wi-Fi is interrupted by closing your laptop.
With a jump host
sshpass -p `cat ~/.sshpass` autossh -M0 -Y -tt -J me@jumphost.mydomain.com:22223 -p 222 me@server.mydomain.com
sshpass with better security
I stumbled on this thread while looking for a way to ssh into a bogged-down server -- it took over a minute to process the SSH connection attempt, and timed out before I could enter a password. In this case, I wanted to be able to supply my password immediately when the prompt was available.
(And if it's not painfully clear: with a server in this state, it's far too late to set up a public key login.)
sshpass to the rescue. However, there are better ways to go about this than sshpass -p.
My implementation skips directly to the interactive password prompt (no time wasted seeing if public key exchange can happen), and never reveals the password as plain text.
#!/bin/sh
# preempt-ssh.sh
# usage: same arguments that you'd pass to ssh normally
echo "You're going to run (with our additions) ssh $#"
# Read password interactively and save it to the environment
read -s -p "Password to use: " SSHPASS
export SSHPASS
# have sshpass load the password from the environment, and skip public key auth
# all other args come directly from the input
sshpass -e ssh -o PreferredAuthentications=keyboard-interactive -o PubkeyAuthentication=no "$@"
# clear the exported variable containing the password
unset SSHPASS
I don't think I saw anyone suggest this and the OP just said "script" so...
I needed to solve the same problem and my most comfortable language is Python.
I used the paramiko library. Furthermore, I also needed to issue commands for which I would need escalated permissions using sudo. It turns out sudo can accept its password via stdin via the "-S" flag! See below:
import logging

import paramiko

logger = logging.getLogger(__name__)

ssh_client = paramiko.SSHClient()
# To avoid an "unknown hosts" error. Solve this differently if you must...
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# This mechanism uses a private key (PKEY_PATH is the path to your key file).
pkey = paramiko.RSAKey.from_private_key_file(PKEY_PATH)
# This mechanism uses a password.
# Get it from cli args or a file or hard code it, whatever works best for you
password = "password"
ssh_client.connect(hostname="my.host.name.com",
                   username="username",
                   # Uncomment one of the following...
                   # password=password,
                   # pkey=pkey,
                   )
# do something restricted
# If you don't need escalated permissions, omit everything before "mkdir"
command = "echo {} | sudo -S mkdir /var/log/test_dir 2>/dev/null".format(password)
# In order to inspect the exit code
# you need to go under paramiko's hood a bit
# rather than just using "ssh_client.exec_command()"
chan = ssh_client.get_transport().open_session()
chan.exec_command(command)
exit_status = chan.recv_exit_status()
if exit_status != 0:
    stderr = chan.recv_stderr(5000)
    # Note that sudo's "-S" flag will send the password prompt to stderr,
    # so you will see that string here too, as well as the actual error.
    # It was because of this behavior that we needed access to the exit code
    # to assert success.
    logger.error("Uh oh")
    logger.error(stderr)
else:
    logger.info("Successful!")
Hope this helps someone. My use case was creating directories, sending and untarring files, and starting programs on ~300 servers at a time. As such, automation was paramount. I tried sshpass, expect, and then came up with this.
# create a file that echoes out your password .. you may need to get crazy with
# escape chars, or for extra credit put ASCII in your password...
echo "echo YerPasswordhere" > /tmp/1
chmod 777 /tmp/1
# sets some vars ssh normally uses for GUI password prompts, but here we are using them to pass creds.
export SSH_ASKPASS="/tmp/1"
export DISPLAY=YOURDOINGITWRONG
setsid ssh root@owned.com -p 22
reference: https://www.linkedin.com/pulse/youre-doing-wrong-ssh-plain-text-credentials-robert-mccurdy?trk=mp-reader-card
This is how I login to my servers:
ssp <server_ip>
alias ssp='/home/myuser/Documents/ssh_script.sh'
cat /home/myuser/Documents/ssh_script.sh
ssp:
#!/bin/bash
sshpass -p mypassword ssh root@$1
And therefore:
ssp server_ip
This is basically an extension of abbotto's answer, with some additional steps (aimed at beginners) to make starting up your server, from your linux host, very easy:
Write a simple bash script, e.g.:
#!/bin/bash
sshpass -p "YOUR_PASSWORD" ssh -o StrictHostKeyChecking=no <YOUR_USERNAME>#<SEVER_IP>
Save the file, e.g. 'startMyServer', then make the file executable by running this in your terminal:
sudo chmod +x startMyServer
Move the file to a folder which is in your 'PATH' variable (run 'echo $PATH' in your terminal to see those folders). So for example move it to '/usr/bin/'.
And voila, now you are able to get into your server by typing 'startMyServer' into your terminal.
P.S. (1) this is not very secure, look into ssh keys for better security.
P.S. (2) SMshrimant's answer is quite similar and might be more elegant to some. But I personally prefer to work in bash scripts.
I am using the solution below, but for that you have to install sshpass. If it's not already installed, install it using sudo apt install sshpass.
Now you can do this,
sshpass -p *YourPassword* ssh root@IP
You can create a bash alias as well so that you don't have to run the whole command again and again.
Follow the steps below:
cd ~
sudo nano .bash_profile
At the end of the file, add the code below:
mymachine() { sshpass -p *YourPassword* ssh root@IP; }
source .bash_profile
Now just run the mymachine command from the terminal and you'll get into your machine without a password prompt.
Note:
mymachine can be any command name of your choice.
If security doesn't matter to you for this task and you just want to automate the work, you can use this method.
If you are doing this on a Windows system, you can use Plink (part of PuTTY).
plink your_username@yourhost -pw your_password
I have a better solution that includes logging in with your own account and then changing to the root user.
It is a bash script
http://felipeferreira.net/index.php/2011/09/ssh-automatic-login/
The answer from @abbotto did not work for me; I had to do some things differently:
yum install sshpass changed to - rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/sshpass-1.05-1.el6.x86_64.rpm
the command to use sshpass changed to - sshpass -p "pass" ssh user@mysite -p 2122
I managed to get it working with this:
SSH_ASKPASS="echo \"my-pass-here\""
ssh -tt remotehost -l myusername
This works:
#!/usr/bin/expect -f
spawn ssh USERNAME@SERVER "touch /home/user/ssh_example"
expect "assword:"
send "PASSWORD\r"
interact
BUT!!! If you get an error like the one below, start your script with expect rather than bash, as shown here: expect myssh.sh
instead of bash myssh.sh
/bin/myssh.sh: 2: spawn: not found /bin/myssh.sh: 3: expect: not found /bin/myssh.sh: 4: send: not found /bin/myssh.sh: 5: expect: not found /bin/myssh.sh: 6: send: not found
I got this working as follows
.ssh/config was modified to eliminate the yes/no prompt - I'm behind a firewall so I'm not worried about spoofed ssh keys
host *
StrictHostKeyChecking no
Create a response file for expect, e.g. answer.expect:
set timeout 20
set node [lindex $argv 0]
spawn ssh root@$node service hadoop-hdfs-datanode restart
expect "*?assword:" {
    send "password\r"  ;# your password here
}
interact
Create your bash script and just call expect in the file
#!/bin/bash
i=1
while [ $i -lt 129 ]   # a few nodes here
do
    expect answer.expect hadoopslave$i
    i=$((i + 1))
    sleep 5
done
Gets 128 hadoop datanodes refreshed with the new config - assuming you are using an NFS mount for the hadoop/conf files.
Hope this helps someone - I'm a Windows numpty and this took me about 5 hours to figure out!
In the example below I'll write the solution that I used:
The scenario: I want to copy a file from a server using an sh script:
#!/bin/bash
PASSWORD=password
my_script=$(expect -c "spawn scp userName@server-name:path/file.txt /home/Amine/Bureau/trash/test/
expect \"password:\"
send \"$PASSWORD\r\"
expect \"#\"
send \"exit \r\"
")
echo "$my_script"
Solution 1: use sshpass
#~/bin/myssh.sh
sshpass -p a1234b ssh myname@somehost
You can install it with:
# Ubuntu/Debian
$ sudo apt-get install sshpass
# Red Hat/Fedora/CentOS
$ sudo yum install sshpass
# Arch Linux
$ sudo pacman -S sshpass
#OS X
brew install https://raw.githubusercontent.com/kadwanev/bigboybrew/master/Library/Formula/sshpass.rb
or download the Source Code from here, then
tar xvzf sshpass-1.08.tar.gz
cd sshpass-1.08
./configure
sudo make install
Solution 2: set up SSH passwordless login
Let's say you need to SSH into bbb@2.2.2.2 (remote server B) with the password 2b2b2b from aaa@1.1.1.1 (client server A).
Generate the public key (.ssh/id_rsa.pub) and private key (.ssh/id_rsa) on A with the following commands:
ssh-keygen -t rsa
[Press enter key]
[Press enter key]
[Press enter key]
Use the following command to distribute the generated public key (.ssh/id_rsa.pub) to server B, under bbb's .ssh directory, as a file named authorized_keys:
ssh-copy-id bbb@2.2.2.2
You need to enter the password for the first ssh login; after that it will log in automatically, with no need to enter it again:
ssh bbb@2.2.2.2 [Enter]
2b2b2b
And then your script can be
#~/bin/myssh.sh
ssh myname@somehost
Use this tossh script within your script. The first argument is the hostname and the second is the password.
#!/usr/bin/expect
set pass [lindex $argv 1]
set host [lindex $argv 0]
spawn ssh -t root@$host echo Hello
expect "*assword: "
send "$pass\n";
interact"
To connect to a remote machine through shell scripts, use the command below:
sshpass -p PASSWORD ssh -o StrictHostKeyChecking=no USERNAME@IPADDRESS
where IPADDRESS, USERNAME and PASSWORD are values you need to provide in the script, or read at runtime with the "read" command.
This should help in most of the cases (you need to install sshpass first!):
#!/usr/bin/bash
read -p 'Enter Your Username: ' UserName;
read -p 'Enter Your Password: ' Password;
read -p 'Enter Your Domain Name: ' Domain;
sshpass -p "$Password" ssh -o StrictHostKeyChecking=no $UserName#$Domain
On Linux/Ubuntu:
ssh username@server_ip_address -p port_number
Press enter and then enter your server password.
If you are not a root user, add sudo at the start of the command.

pass the password as an argument in pssh

I am trying to write a script which will run commands on multiple machines with pssh.
Is there any way to pass the password in the same command line, like below?
$ pssh -h pssh-host.txt -l root -A "pswd" echo "hi"
Warning: do not enter your password if anyone else has superuser
privileges or access to your account.
Password:
I tried the following solution:
sshpass -pabc pssh -h pssh-host.txt -l root -A echo "hi"
Warning: do not enter your password if anyone else has superuser
privileges or access to your account.
[1] 13:55:56 [SUCCESS] x
[2] 13:55:56 [SUCCESS] y
I do not want this Password: prompt. Can someone suggest a way around this?
Crossposting an answer by user568109 on Unix SE as Community Wiki:
Found the solution on the net not long after posting the question.
The solution is:
Install and use sshpass
Use interactive mode to force the password which is just an empty string
Used command cat local | sshpass -ppassword parallel-ssh -I -h new_hosts -l root -A 'cat >> remote' (a Python variant is sketched below)
Original solution at:
http://www.getreu.net/public/downloads/doc/Secure_Computer_Cluster_Administration_with_SSH/
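For completeness, here is a hedged Python sketch of the same pipeline, using sshpass -e so the password is taken from the SSHPASS environment variable instead of appearing on the command line; the host file new_hosts, the local file, and the remote command are taken from the command above:
import os
import subprocess

env = dict(os.environ, SSHPASS="password")   # unlike -p, not visible in ps output

# -e tells sshpass to read the password from the SSHPASS environment variable.
with open("local", "rb") as src:
    subprocess.run(
        ["sshpass", "-e",
         "parallel-ssh", "-I", "-h", "new_hosts", "-l", "root", "-A", "cat >> remote"],
        stdin=src,
        env=env,
        check=True,
    )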

Ansible with Github: Permission denied (Publickey)

I'm trying to understand the GitHub ssh configuration with Ansible (I'm working on the Ansible: Up & Running book). I'm running into two issues.
Permission denied (publickey) -
When I first ran the ansible-playbook mezzanine.yml playbook, I got a permission denied:
failed: [web] => {"cmd": "/usr/bin/git ls-remote '' -h refs/heads/HEAD", "failed": true, "rc": 128}
stderr: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
msg: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
FATAL: all hosts have already failed -- aborting
Ok, fair enough, I see several people have had this problem. So I jumped to appendix A on running Git with SSH and it said to run the ssh-agent and add the id_rsa public key:
eval `ssh-agent -s`
ssh-add ~/.ssh/id_rsa
Output: Identity added. I ran ssh-add -l to check and got the long string: 2048 e3:fb:... But I got the same error. So I checked the GitHub docs on ssh key generation and troubleshooting, which recommended updating the ssh config file on my host machine:
Host github.com
User git
Port 22
Hostname github.com
IdentityFile ~/.ssh/id_rsa
TCPKeepAlive yes
IdentitiesOnly yes
But this still provides the same error. So at this point, I start thinking it's my rsa file, which leads me to my second problem.
Key Generation Issues - I tried to generate an additional cert to use, because the Github test threw another "Permission denied (publickey)" error.
Warning: Permanently added the RSA host key for IP address '192.30.252.131' to the list of known hosts.
Permission denied (publickey).
I followed the Github instructions from scratch and generated a new key with a different name.
ssh-keygen -t rsa -b 4096 -C "me@example.com"
I didn't enter a passphrase and saved it to the .ssh folder with the name git_rsa.pub. I ran the same test and got the following:
$ ssh -i ~/.ssh/git_rsa.pub -T git@github.com
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0644 for '/Users/antonioalaniz1/.ssh/git_rsa.pub' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: ~/.ssh/github_rsa.pub
Permission denied (publickey).
I checked on the permissions and did a chmod 700 on the file and I still get Permission denied (publickey). I even attempted to enter the key into my Github account, but first got a message that the key file needs to start with ssh-rsa. So I started researching and hacking. Started with just entering the long string in the file (it started with --BEGIN PRIVATE KEY--, but I omitted that part after it failed); however, Github's not accepting it, saying it's invalid.
This is my Ansible command in the YAML file:
- name: check out the repository on the host
  git: repo={{ repo_url }} dest={{ proj_path }} accept_hostkey=yes
  vars:
    repo_url: git@github.com:lorin/mezzanine-example.git
This is my ansible.cfg file with ForwardAgent configured:
[defaults]
hostfile = hosts
remote_user = vagrant
private_key_file = .vagrant/machines/default/virtualbox/private_key
host_key_checking = False
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes
The box is an Ubuntu Trusty64 VM running on Mac OS. If anyone could clue me in on the file permissions and/or GitHub key generation, I would appreciate it.
I suspect the key permissions issue is because you are passing the public key instead of the private key as the argument to ssh -i. Try this instead:
ssh -i ~/.ssh/git_rsa -T git@github.com
(Note that it's git_rsa and not git_rsa.pub).
If that works, then make sure it's in your ssh-agent. To add:
ssh-add ~/.ssh/git_rsa
To verify:
ssh-add -l
Then check that Ansible respects agent forwarding by doing:
ansible web -a "ssh-add -l"
Finally, check that you can reach GitHub via ssh by doing:
ansible web -a "ssh -T git#github.com"
You should see something like:
web | FAILED | rc=1 >>
Hi lorin! You've successfully authenticated, but GitHub does not provide shell access.
I had the same problem, it took me some time, but I have found the solution.
The problem is the URL is incorrect.
Just try to change it to:
repo_url: git://github.com/lorin/mezzanine-example.git
I ran into this issue and discovered it by turning verbosity up on the ansible commands (very very useful for debugging).
Unfortunately, ssh often throws error messages that don't quite lead you in the right direction (i.e. permission denied is very generic... though to be fair, it is often thrown when there is a file permission issue, so perhaps not quite so generic). Anyway, running the ansible test command with verbosity on helps recreate the issue as well as verify when it is solved.
ansible -vvv all -a "ssh -T git@github.com"
Again, the setup I use (and a typical one) is to load your ssh key into the agent on the control machine and enable forwarding.
The steps are found in GitHub's helpful ssh docs.
It also stuck out to me that when I ssh'd to the box itself via the vagrant command and ran the test, it succeeded. So I narrowed it down to how Ansible was forwarding the connection. For me, what eventually worked was setting
[paramiko_connection]
record_host_keys = False
In addition to the other config that controls host key verification:
host_key_checking = False
which essentially adds
-o StrictHostKeyChecking=no
to the ssh args for you; in addition,
-o UserKnownHostsFile=/dev/null
is added to the ssh args as well.
This was found here:
Ansible issue 9442
Again, this was on vagrant VMs, more careful consideration around host key verification should be taken on actual servers.
Hope this helps

Fabric not working when using ~/.ssh/config

I have the following configuration on my ~/.ssh/config
Host death-star
HostName deathstar.empire.com
User vader
IdentityFile ~/.ssh/death_id_rsa
And the following fabfile
from fabric.api import env, run, task

env.use_ssh_config = True

@task
def destroy_rebels():
    run("echo Alderaan has been destroyed")
I'm calling the task like this:
$ fab --host death-star destroy_rebels
This is the output I get:
[death-star] Executing task 'destroy_rebels'
[death-star] run: echo Alderaan has been destroyed
Fatal error: run() received nonzero return code -1 while executing!
Requested: echo Alderaan has been destroyed
Executed: /bin/bash -l -c "echo Alderaan has been destroyed"
Aborting.
Disconnecting from vader@deathstar.empire.com... done.
I'm pretty sure the ssh config is correct since I can ssh death-star with no problems.
Also, when I specify the hostname and use the default key for user root instead of using the ssh config file, it works:
$ fab --user root --host deathstar.empire.com destroy_rebels
Any ideas why this happens?
EDIT: This is my fabric version
$ fab --version
Fabric 1.4.1
ssh (library) 1.7.13
EDIT 2:
I've rewritten bits of the original post. I realized that the root user (using the default key id_rsa) always works, even using .ssh/config, if I add a new entry:
Host root-death-star
HostName deathstar.empire.com
User root
IdentityFile ~/.ssh/id_rsa
$ fab --host root-death-star destroy_rebels # this works
But using the non-root user vader, with its own key death_id_rsa, it doesn't. SSHing to the server still works, though, as root and as vader.
From that output, it's nothing to do with the ssh connection but with the return code from the echo command being run. As to what would make it -1: from your additional notes, it could be that something custom in your zsh or zshrc is throwing a bad return code and it's bubbling up.
