Fabric not working when using ~/.ssh/config - python

I have the following configuration in my ~/.ssh/config:
Host death-star
HostName deathstar.empire.com
User vader
IdentityFile ~/.ssh/death_id_rsa
And the following fabfile
from fabric.api import env, task, run
env.use_ssh_config = True

@task
def destroy_rebels():
    run("echo Alderaan has been destroyed")
I'm calling the task like this:
$ fab --host death-star destroy_rebels
This is the output I get:
[death-star] Executing task 'destroy_rebels'
[death-star] run: echo Alderaan has been destroyed
Fatal error: run() received nonzero return code -1 while executing!
Requested: echo Alderaan has been destroyed
Executed: /bin/bash -l -c "echo Alderaan has been destroyed"
Aborting.
Disconnecting from vader@deathstar.empire.com... done.
I'm pretty sure the ssh config is correct since I can ssh death-star with no problems.
Also, when I specify the hostname and use the default key for user root instead of using the ssh config file, it works:
$ fab --user root --host deathstar.empire.com destroy_rebels
Any ideas why this happens?
EDIT: This is my fabric version
$ fab --version
Fabric 1.4.1
ssh (library) 1.7.13
EDIT 2:
I've rewritten bits of the original post. I realized that the root user (using the default key id_rsa) always works, even using .ssh/config, if I add a new entry:
Host root-death-star
HostName deathstar.empire.com
User root
IdentityFile ~/.ssh/id_rsa
$ fab --host root-death-star destroy_rebels # this works
But using the non-root user vader, with its own key death_id_rsa, it doesn't. SSHing to the server still works, though, both as root and as vader.

From that output, it's nothing to do with the SSH connection; it's about the return code coming back from the echo being run. As for what would make it -1: from your additional notes, it could be that something custom in your zsh or zshrc is producing a bad return code and it's bubbling up.
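If you want to rule out the remote shell's startup files, one quick test (a minimal sketch, assuming Fabric 1.x, where env.shell controls the wrapper that produces the /bin/bash -l -c "..." line in your output) is to drop the login flag so the remote dotfiles are not sourced:
from fabric.api import env, task, run

env.use_ssh_config = True
# Default is "/bin/bash -l -c"; dropping -l skips the login-shell startup files,
# which helps isolate whether a dotfile is the source of the bad return code.
env.shell = "/bin/bash -c"

@task
def destroy_rebels():
    run("echo Alderaan has been destroyed")
If the task succeeds with the non-login shell, the problem is almost certainly in the remote startup files rather than in Fabric or the SSH config.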


How to control host from docker container?
For example, how do I execute a bash script that has been copied to the host?
This answer is just a more detailed version of Bradford Medeiros's solution, which for me as well turned out to be the best answer, so credit goes to him.
In his answer, he explains WHAT to do (named pipes) but not exactly HOW to do it.
I have to admit I didn't know what named pipes were when I read his solution. So I struggled to implement it (while it's actually very simple), but I did succeed.
So the point of my answer is just detailing the commands you need to run in order to get it working, but again, credit goes to him.
PART 1 - Testing the named pipe concept without docker
On the main host, choose the folder where you want to put your named pipe file, for instance /path/to/pipe/, and a pipe name, for instance mypipe, and then run:
mkfifo /path/to/pipe/mypipe
The pipe is created.
Type
ls -l /path/to/pipe/mypipe
and check that the access rights start with "p", such as
prw-r--r-- 1 root root 0 mypipe
Now run:
tail -f /path/to/pipe/mypipe
The terminal is now waiting for data to be sent into this pipe
Now open another terminal window.
And then run:
echo "hello world" > /path/to/pipe/mypipe
Check the first terminal (the one with tail -f), it should display "hello world"
PART 2 - Run commands through the pipe
On the host, instead of running tail -f (which just outputs whatever is sent as input), run this command, which will execute whatever is sent as commands:
eval "$(cat /path/to/pipe/mypipe)"
Then, from the other terminal, try running:
echo "ls -l" > /path/to/pipe/mypipe
Go back to the first terminal and you should see the result of the ls -l command.
PART 3 - Make it listen forever
You may have noticed that in the previous part, right after ls -l output is displayed, it stops listening for commands.
Instead of eval "$(cat /path/to/pipe/mypipe)", run:
while true; do eval "$(cat /path/to/pipe/mypipe)"; done
(you can nohup that)
Now you can send an unlimited number of commands one after the other; they will all be executed, not just the first one.
PART 4 - Make it work even when reboot happens
The only caveat is that if the host has to reboot, the "while" loop will stop working.
To handle reboots, here is what I've done:
Put the while true; do eval "$(cat /path/to/pipe/mypipe)"; done loop in a file called execpipe.sh with a #!/bin/bash header
Don't forget to chmod +x it
Add it to crontab by running
crontab -e
And then adding
@reboot /path/to/execpipe.sh
At this point, test it: reboot your server, and when it's back up, echo some commands into the pipe and check if they are executed.
Of course, you aren't able to see the output of commands, so ls -l won't help, but touch somefile will help.
Another option is to modify the script to put the output in a file, such as:
while true; do eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt; done
Now you can run ls -l and the output (both stdout and stderr using &> in bash) should be in output.txt.
PART 5 - Make it work with docker
If you are using both docker compose and dockerfile like I do, here is what I've done:
Let's assume you want to mount the mypipe's parent folder as /hostpipe in your container
Add this:
VOLUME /hostpipe
in your dockerfile in order to create a mount point
Then add this:
volumes:
- /path/to/pipe:/hostpipe
in your docker compose file in order to mount /path/to/pipe as /hostpipe
Restart your docker containers.
PART 6 - Testing
Exec into your docker container:
docker exec -it <container> bash
Go into the mount folder and check you can see the pipe:
cd /hostpipe && ls -l
Now try running a command from within the container:
echo "touch this_file_was_created_on_main_host_from_a_container.txt" > /hostpipe/mypipe
And it should work!
WARNING: If you have an OSX (Mac OS) host and a Linux container, it won't work (explanation here https://stackoverflow.com/a/43474708/10018801 and issue here https://github.com/docker/for-mac/issues/483) because the pipe implementation is not the same: what you write into the pipe from Linux can be read only by Linux, and what you write into the pipe from Mac OS can be read only by Mac OS (this sentence might not be very accurate, but just be aware that a cross-platform issue exists).
For instance, when I run my docker setup in DEV from my Mac OS computer, the named pipe as explained above does not work. But in staging and production, I have Linux host and Linux containers, and it works perfectly.
PART 7 - Example from Node.JS container
Here is how I send a command from my Node.JS container to the main host and retrieve the output:
const fs = require("fs") // the fs module is used throughout

const pipePath = "/hostpipe/mypipe"
const outputPath = "/hostpipe/output.txt"
const commandToRun = "pwd && ls -l"

console.log("delete previous output")
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath)

console.log("writing to pipe...")
const wstream = fs.createWriteStream(pipePath)
wstream.write(commandToRun)
wstream.close()

console.log("waiting for output.txt...") // there are better ways to do this than setInterval
let timeout = 10000 // stop waiting after 10 seconds (something might be wrong)
const timeoutStart = Date.now()
const myLoop = setInterval(function () {
    if (Date.now() - timeoutStart > timeout) {
        clearInterval(myLoop)
        console.log("timed out")
    } else {
        // if output.txt exists, read it
        if (fs.existsSync(outputPath)) {
            clearInterval(myLoop)
            const data = fs.readFileSync(outputPath).toString()
            fs.unlinkSync(outputPath) // delete the output file
            console.log(data) // log the output of the command
        }
    }
}, 300)
Use a named pipe.
On the host OS, create a script to loop and read commands, and then you call eval on that.
Have the docker container write to that named pipe.
To be able to access the pipe, you need to mount it via a volume.
This is similar to the SSH mechanism (or a similar socket-based method), but restricts you properly to the host device, which is probably better. Plus you don't have to be passing around authentication information.
My only warning is to be cautious about why you are doing this. It's totally something to do if you want to create a method to self-upgrade with user input or whatever, but you probably don't want to call a command to get some config data, as the proper way would be to pass that in as args/volume into docker. Also, be cautious about the fact that you are evaling, so just give the permission model a thought.
Some of the other answers, such as running a script under a volume, won't work generically since they won't have access to the full system resources, but they might be more appropriate depending on your usage.
The solution I use is to connect to the host over SSH and execute the command like this:
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
UPDATE
As this answer keeps getting upvotes, I would like to remind (and highly recommend) that the account being used to invoke the script should be an account with no permissions at all, other than executing that script as sudo (this can be done from the sudoers file).
UPDATE: Named Pipes
The solution I suggested above was only the one I used while I was relatively new to Docker. Now in 2021, take a look at the answers that talk about named pipes. This seems to be a better solution.
However, nobody there mentioned anything about security. The script that will evaluate the commands sent through the pipe (the script that calls eval) must actually not use eval on the whole pipe output, but handle specific cases and call the required commands according to the text sent; otherwise any command that can do anything could be sent through the pipe.
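A minimal sketch of such a restricted listener, in Python (the command names, pipe path, and argument lists here are hypothetical; adapt them to what you actually need to allow):
#!/usr/bin/env python3
# host_listener.py - allow-list dispatcher instead of a blanket eval
import subprocess

PIPE_PATH = "/path/to/pipe/mypipe"

# Map short command names sent through the pipe to fixed argument lists,
# so arbitrary shell text is never executed on the host.
ALLOWED = {
    "restart-nginx": ["systemctl", "restart", "nginx"],
    "disk-usage": ["df", "-h"],
}

while True:
    # Opening a FIFO for reading blocks until a writer connects.
    with open(PIPE_PATH) as pipe:
        for line in pipe:
            name = line.strip()
            if name in ALLOWED:
                subprocess.run(ALLOWED[name])
            # Anything not in the allow-list is silently ignored.
The container then writes only a keyword such as "disk-usage" into the pipe instead of a raw shell command.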
That REALLY depends on what you need that bash script to do!
For example, if the bash script just echoes some output, you could just do
docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
Another possibility is that you want the bash script to install some software, say a script to install docker-compose. You could do something like:
docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
But at this point you're really getting into having to know intimately what the script is doing to allow the specific permissions it needs on your host from inside the container.
My laziness led me to find the easiest solution that wasn't published as an answer here.
It is based on the great article by luc juggery.
All you need to do in order to gain a full shell to your linux host from within your docker container is:
docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh
Explanation:
--privileged: grants additional permissions to the container; it allows the container to gain access to the devices of the host (/dev)
--pid=host: allows the container to use the process tree of the Docker host (the VM in which the Docker daemon is running)
nsenter utility: allows running a process in existing namespaces (the building blocks that provide isolation to containers)
nsenter (-t 1 -m -u -n -i sh) runs the process sh in the same isolation context as the process with PID 1.
The whole command will then provide an interactive sh shell in the VM.
This setup has major security implications and should be used with caution (if at all).
Write a simple Python server listening on a port (say 8080), bind the port to the container with -p 8080:8080, and have the Python server run shell scripts with popen when it receives an HTTP request to localhost:8080; then run curl or write code to make the HTTP request, e.g. curl -d '{"foo":"bar"}' localhost:8080
#!/usr/bin/python
# Python 2 example
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import subprocess
import json

PORT_NUMBER = 8080

# This class handles any incoming request from the browser
class myHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        content_len = int(self.headers.getheader('content-length'))
        post_body = self.rfile.read(content_len)
        self.send_response(200)
        self.end_headers()
        data = json.loads(post_body)
        # Use the post data
        cmd = "your shell cmd"
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
        (output, err) = p.communicate()
        p_status = p.returncode
        print "Command output : ", output
        print "Command exit status/return code : ", p_status
        self.wfile.write(cmd + "\n")
        return

try:
    # Create a web server and define the handler to manage the incoming request
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER
    # Wait forever for incoming http requests
    server.serve_forever()
except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
If you are not worried about security and you're simply looking to start a docker container on the host from within another docker container like the OP, you can share the docker server running on the host with the docker container by sharing its listening socket.
Please see https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface and see if your personal risk tolerance allows this for this particular application.
You can do this by adding the following volume args to your start command
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
or by sharing /var/run/docker.sock within your docker compose file like this:
version: '3'
services:
ci:
command: ...
image: ...
volumes:
- /var/run/docker.sock:/var/run/docker.sock
When you run the docker start command within your docker container,
the docker server running on your host will see the request and provision the sibling container.
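For example, if the container itself runs Python, the same mounted socket can be used through the Docker SDK for Python (a sketch, assuming the docker package is installed in the image and /var/run/docker.sock is mounted as shown above):
import docker

client = docker.from_env()  # talks to the host daemon through the mounted socket

# Start a sibling container on the host and capture its output.
output = client.containers.run("alpine", "echo hello from a sibling container", remove=True)
print(output.decode())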
credit: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
As Marcus reminds us, docker is basically process isolation. Starting with docker 1.8, you can copy files both ways between the host and the container; see the docs for docker cp:
https://docs.docker.com/reference/commandline/cp/
Once a file is copied, you can run it locally
docker run --detach-keys="ctrl-p" -it -v /:/mnt/rootdir --name testing busybox
# chroot /mnt/rootdir
#
I have a simple approach.
Step 1: Mount /var/run/docker.sock:/var/run/docker.sock (So you will be able to execute docker commands inside your container)
Step 2: Execute the command below inside your container. The key part here is --network host, as this will execute from the host context.
docker run -i --rm --network host -v /opt/test.sh:/test.sh alpine:3.7
sh /test.sh
test.sh should contain some commands (ifconfig, netstat, etc.), whatever you need.
Now you will be able to get host context output.
You can use the pipe concept, but use a file on the host and fswatch to accomplish the goal of executing a script on the host machine from a docker container. Like so (use at your own risk):
#! /bin/bash
touch .command_pipe
chmod +x .command_pipe
# Use fswatch to execute a command on the host machine and log result
fswatch -o --event Updated .command_pipe | \
xargs -n1 -I "{}" .command_pipe >> .command_pipe_log &
docker run -it --rm \
--name alpine \
-w /home/test \
-v $PWD/.command_pipe:/dev/command_pipe \
alpine:3.7 sh
rm -rf .command_pipe
kill %1
In this example, inside the container send commands to /dev/command_pipe, like so:
/home/test # echo 'docker network create test2.network.com' > /dev/command_pipe
On the host, you can check if the network was created:
$ docker network ls | grep test2
8e029ec83afe test2.network.com bridge local
In my scenario I just SSH into the host (via the host IP) from within a container, and then I can do anything I want on the host machine.
I found the answers using named pipes awesome. But I was wondering if there is a way to get the output of the executed command.
The solution is to create two named pipes:
mkfifo /path/to/pipe/exec_in
mkfifo /path/to/pipe/exec_out
Then, the solution using a loop, as suggested by @Vincent, would become:
# on the host
while true; do eval "$(cat exec_in)" > exec_out; done
And then on the docker container, we can execute the command and get the output using:
# on the container
echo "ls -l" > /path/to/pipe/exec_in
cat /path/to/pipe/exec_out
If anyone is interested: my need was to use a failover IP on the host from the container, so I created this simple Ruby method:
def fifo_exec(cmd)
exec_in = '/path/to/pipe/exec_in'
exec_out = '/path/to/pipe/exec_out'
%x[ echo #{cmd} > #{exec_in} ]
%x[ cat #{exec_out} ]
end
# example
fifo_exec "curl https://ip4.seeip.org"
Depending on the situation, this could be a helpful resource.
It uses a job queue (Celery) that can be run on the host; commands/data could be passed to it through Redis (or RabbitMQ). In the example below, this is occurring in a Django application (which is commonly dockerized).
https://www.codingforentrepreneurs.com/blog/celery-redis-django/
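A minimal sketch of the idea (the script path is hypothetical; it assumes Celery and a Redis broker reachable from both the host and the container):
# tasks.py - run on the host with: celery -A tasks worker
import subprocess
from celery import Celery

app = Celery("host_tasks", broker="redis://redis:6379/0")

@app.task
def run_host_script():
    # The worker runs on the host, so this executes outside the container.
    result = subprocess.run(["/usr/local/bin/myscript.sh"], capture_output=True, text=True)
    return result.stdout
The dockerized Django app then only needs to call run_host_script.delay(); the command itself executes wherever the worker is running.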
To expand on user2915097's response:
The idea of isolation is to be able to restrict what an application/process/container (whatever your angle at this is) can do to the host system very clearly. Hence, being able to copy and execute a file would really break the whole concept.
Yes. But it's sometimes necessary.
No. That's not the case, or Docker is not the right thing to use. What you should do is declare a clear interface for what you want to do (e.g. updating a host config), and write a minimal client/server to do exactly that and nothing more. Generally, however, this doesn't seem to be very desirable. In many cases, you should simply rethink your approach and eradicate that need. Docker came into existence when basically everything was a service reachable using some protocol. I can't think of any proper use case of a Docker container getting the rights to execute arbitrary stuff on the host.

How to Automate Login to Intermediate Host using ProxyCommand with Paramiko in Python [duplicate]

I need to create a script that automatically inputs a password to OpenSSH ssh client.
Let's say I need to SSH into myname@somehost with the password a1234b.
I've already tried...
#~/bin/myssh.sh
ssh myname@somehost
a1234b
...but this does not work.
How can I get this functionality into a script?
First you need to install sshpass.
Ubuntu/Debian: apt-get install sshpass
Fedora/CentOS: yum install sshpass
Arch: pacman -S sshpass
Example:
sshpass -p "YOUR_PASSWORD" ssh -o StrictHostKeyChecking=no YOUR_USERNAME#SOME_SITE.COM
Custom port example:
sshpass -p "YOUR_PASSWORD" ssh -o StrictHostKeyChecking=no YOUR_USERNAME#SOME_SITE.COM:2400
Notes:
sshpass can also read a password from a file when the -f flag is passed.
Using -f prevents the password from being visible if the ps command is executed.
The file that the password is stored in should have secure permissions.
After looking for an answer to the question for months, I finally found a better solution: writing a simple script.
#!/usr/bin/expect
set timeout 20
set cmd [lrange $argv 1 end]
set password [lindex $argv 0]
eval spawn $cmd
expect "password:"
send "$password\r";
interact
Put it in /usr/bin/exp, so you can use:
exp <password> ssh <anything>
exp <password> scp <anysrc> <anydst>
Done!
Use public key authentication: https://help.ubuntu.com/community/SSH/OpenSSH/Keys
On the source host, run this only once:
ssh-keygen -t rsa # press ENTER for every field
ssh-copy-id myname@somehost
That's all, after that you'll be able to do ssh without password.
You could use an expect script. I have not written one in quite some time but it should look like below. You will need to head the script with #!/usr/bin/expect
#!/usr/bin/expect -f
spawn ssh HOSTNAME
expect "login:"
send "username\r"
expect "Password:"
send "password\r"
interact
Variant I
sshpass -p PASSWORD ssh USER@SERVER
Variant II
#!/usr/bin/expect -f
spawn ssh USERNAME@SERVER "touch /home/user/ssh_example"
expect "assword:"
send "PASSWORD\r"
interact
sshpass + autossh
One nice bonus of the already-mentioned sshpass is that you can use it with autossh, eliminating even more of the interactive inefficiency.
sshpass -p mypassword autossh -M0 -t myusername@myserver.mydomain.com
This will allow autoreconnect if, e.g. your wifi is interrupted by closing your laptop.
With a jump host
sshpass -p `cat ~/.sshpass` autossh -M0 -Y -tt -J me@jumphost.mydomain.com:22223 -p 222 me@server.mydomain.com
sshpass with better security
I stumbled on this thread while looking for a way to SSH into a bogged-down server; it took over a minute to process the SSH connection attempt and timed out before I could enter a password. In this case, I wanted to be able to supply my password immediately when the prompt was available.
(And if it's not painfully clear: with a server in this state, it's far too late to set up a public key login.)
sshpass to the rescue. However, there are better ways to go about this than sshpass -p.
My implementation skips directly to the interactive password prompt (no time wasted seeing if public key exchange can happen), and never reveals the password as plain text.
#!/bin/sh
# preempt-ssh.sh
# usage: same arguments that you'd pass to ssh normally
echo "You're going to run (with our additions) ssh $#"
# Read password interactively and save it to the environment
read -s -p "Password to use: " SSHPASS
export SSHPASS
# have sshpass load the password from the environment, and skip public key auth
# all other args come directly from the input
sshpass -e ssh -o PreferredAuthentications=keyboard-interactive -o PubkeyAuthentication=no "$@"
# clear the exported variable containing the password
unset SSHPASS
I don't think I saw anyone suggest this and the OP just said "script" so...
I needed to solve the same problem and my most comfortable language is Python.
I used the paramiko library. Furthermore, I also needed to issue commands for which I would need escalated permissions using sudo. It turns out sudo can accept its password via stdin via the "-S" flag! See below:
import logging
import paramiko

logger = logging.getLogger(__name__)

ssh_client = paramiko.SSHClient()

# To avoid an "unknown hosts" error. Solve this differently if you must...
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# This mechanism uses a private key.
pkey = paramiko.RSAKey.from_private_key_file(PKEY_PATH)

# This mechanism uses a password.
# Get it from cli args or a file or hard code it, whatever works best for you
password = "password"

ssh_client.connect(hostname="my.host.name.com",
                   username="username",
                   # Uncomment one of the following...
                   # password=password
                   # pkey=pkey
                   )

# do something restricted
# If you don't need escalated permissions, omit everything before "mkdir"
command = "echo {} | sudo -S mkdir /var/log/test_dir 2>/dev/null".format(password)

# In order to inspect the exit code
# you need to go under paramiko's hood a bit
# rather than just using "ssh_client.exec_command()"
chan = ssh_client.get_transport().open_session()
chan.exec_command(command)
exit_status = chan.recv_exit_status()

if exit_status != 0:
    stderr = chan.recv_stderr(5000)
    # Note that sudo's "-S" flag will send the password prompt to stderr
    # so you will see that string here too, as well as the actual error.
    # It was because of this behavior that we needed access to the exit code
    # to assert success.
    logger.error("Uh oh")
    logger.error(stderr)
else:
    logger.info("Successful!")
Hope this helps someone. My use case was creating directories, sending and untarring files, and starting programs on ~300 servers at a time. As such, automation was paramount. I tried sshpass, then expect, and then came up with this.
# create a file that echoes out your password .. you may need to get crazy with escape chars, or for extra credit put ASCII in your password...
echo "echo YerPasswordhere" > /tmp/1
chmod 777 /tmp/1
# sets some vars that ssh normally uses for GUI password prompts, but here we are using them to pass creds.
export SSH_ASKPASS="/tmp/1"
export DISPLAY=YOURDOINGITWRONG
setsid ssh root@owned.com -p 22
reference: https://www.linkedin.com/pulse/youre-doing-wrong-ssh-plain-text-credentials-robert-mccurdy?trk=mp-reader-card
This is how I log in to my servers:
ssp <server_ip>
where ssp is the alias:
alias ssp='/home/myuser/Documents/ssh_script.sh'
and the script (cat /home/myuser/Documents/ssh_script.sh) is:
#!/bin/bash
sshpass -p mypassword ssh root@$1
And therefore:
ssp server_ip
This is basically an extension of abbotto's answer, with some additional steps (aimed at beginners) to make starting up your server, from your linux host, very easy:
Write a simple bash script, e.g.:
#!/bin/bash
sshpass -p "YOUR_PASSWORD" ssh -o StrictHostKeyChecking=no <YOUR_USERNAME>#<SEVER_IP>
Save the file, e.g. 'startMyServer', then make the file executable by running this in your terminal:
sudo chmod +x startMyServer
Move the file to a folder which is in your 'PATH' variable (run 'echo $PATH' in your terminal to see those folders). So for example move it to '/usr/bin/'.
And voila, now you are able to get into your server by typing 'startMyServer' into your terminal.
P.S. (1) this is not very secure, look into ssh keys for better security.
P.S. (2) SMshrimant answer is quite similar and might be more elegant to some. But I personally prefer to work in bash scripts.
I am using the solution below, but for that you have to install sshpass. If it's not already installed, install it using sudo apt install sshpass.
Now you can do this,
sshpass -p *YourPassword* ssh root@IP
You can create a bash alias as well so that you don't have to run the whole command again and again.
Follow the steps below:
cd ~
sudo nano .bash_profile
At the end of the file, add the line below:
mymachine() { sshpass -p *YourPassword* ssh root@IP; }
source .bash_profile
Now just run the mymachine command from the terminal and you'll log in to your machine without a password prompt.
Note:
mymachine can be any command name of your choice.
If security doesn't matter to you for this task and you just want to automate the work, you can use this method.
If you are doing this on a Windows system, you can use Plink (part of PuTTY).
plink your_username@yourhost -pw your_password
I have a better solution that includes logging in with your own account and then changing to the root user.
It is a bash script
http://felipeferreira.net/index.php/2011/09/ssh-automatic-login/
The answer from @abbotto did not work for me; I had to do some things differently:
yum install sshpass changed to: rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/sshpass-1.05-1.el6.x86_64.rpm
the command to use sshpass changed to: sshpass -p "pass" ssh user@mysite -p 2122
I managed to get it working with that:
SSH_ASKPASS="echo \"my-pass-here\""
ssh -tt remotehost -l myusername
This works:
#!/usr/bin/expect -f
spawn ssh USERNAME@SERVER "touch /home/user/ssh_example"
expect "assword:"
send "PASSWORD\r"
interact
BUT!!! If you get an error like the one below, just start your script with expect, not bash, as shown here: expect myssh.sh
instead of bash myssh.sh
/bin/myssh.sh: 2: spawn: not found /bin/myssh.sh: 3: expect: not found /bin/myssh.sh: 4: send: not found /bin/myssh.sh: 5: expect: not found /bin/myssh.sh: 6: send: not found
I got this working as follows
.ssh/config was modified to eliminate the yes/no prompt - I'm behind a firewall so I'm not worried about spoofed ssh keys
host *
StrictHostKeyChecking no
Create a response file for expect i.e. answer.expect
set timeout 20
set node [lindex $argv 0]
spawn ssh root@$node service hadoop-hdfs-datanode restart
expect "*?assword:"
send "password\r" ;# <- your password here
interact
Create your bash script and just call expect in the file
#!/bin/bash
i=1
while [ $i -lt 129 ]   # a few nodes here
do
    expect answer.expect hadoopslave$i
    i=$((i + 1))
    sleep 5
done
Gets 128 hadoop datanodes refreshed with new config - assuming you are using a NFS mount for the hadoop/conf files
Hope this helps someone - I'm a Windows numpty and this took me about 5 hours to figure out!
In the example below I'll show the solution that I used.
The scenario: I want to copy a file from a server using an sh script:
#!/bin/bash
PASSWORD=password
my_script=$(expect -c "spawn scp userName@server-name:path/file.txt /home/Amine/Bureau/trash/test/
expect \"password:\"
send \"$PASSWORD\r\"
expect \"#\"
send \"exit \r\"
")
echo "$my_script"
Solution 1: use sshpass
#~/bin/myssh.sh
sshpass -p a1234b ssh myname@somehost
You can install by
# Ubuntu/Debian
$ sudo apt-get install sshpass
# Red Hat/Fedora/CentOS
$ sudo yum install sshpass
# Arch Linux
$ sudo pacman -S sshpass
#OS X
brew install https://raw.githubusercontent.com/kadwanev/bigboybrew/master/Library/Formula/sshpass.rb
or download the Source Code from here, then
tar xvzf sshpass-1.08.tar.gz
cd sshpass-1.08
./configure
sudo make install
Solution 2: set up SSH passwordless login
Let's say you need to SSH into bbb@2.2.2.2 (remote server B) with the password 2b2b2b from aaa@1.1.1.1 (client server A).
Generate the public key (.ssh/id_rsa.pub) and private key (.ssh/id_rsa) on A with the following commands:
ssh-keygen -t rsa
[Press enter key]
[Press enter key]
[Press enter key]
Use the following command to distribute the generated public key (.ssh/id_rsa.pub) to server B, under bbb's .ssh directory, as a file named authorized_keys:
ssh-copy-id bbb@2.2.2.2
You need to enter the password for the first SSH login; after that you will be logged in automatically, with no need to enter it again!
ssh bbb@2.2.2.2 [Enter]
2b2b2b
And then your script can be
#~/bin/myssh.sh
ssh myname@somehost
Use this script tossh within a script; the first argument is the hostname and the second is the password.
#!/usr/bin/expect
set pass [lindex $argv 1]
set host [lindex $argv 0]
spawn ssh -t root@$host echo Hello
expect "*assword: "
send "$pass\n"
interact
To connect to a remote machine through shell scripts, use the command below:
sshpass -p PASSWORD ssh -o StrictHostKeyChecking=no USERNAME#IPADDRESS
where IPADDRESS, USERNAME and PASSWORD are input values which need to be provided in the script, or, if you want to provide them at runtime, use the "read" command.
This should help in most of the cases (you need to install sshpass first!):
#!/usr/bin/bash
read -p 'Enter Your Username: ' UserName;
read -p 'Enter Your Password: ' Password;
read -p 'Enter Your Domain Name: ' Domain;
sshpass -p "$Password" ssh -o StrictHostKeyChecking=no $UserName#$Domain
On Linux/Ubuntu:
ssh username@server_ip_address -p port_number
Press Enter and then enter your server password.
If you are not a root user, add sudo at the start of the command.

How does fabric work with 'sudo su user'?

My request is simple:
ssh to a remote server with user0
switch user to user1 using: 'sudo su user1'
list all items in current folder
My expected code:
def startRedis():
    run('sudo su - user1')
    print(run('ls'))
However, it ends with out: user1@server:~$
And waiting for my interactive command forever, never executing the second line. It seems sudo su opened a new shell.
Can anyone help solving this simple task?
You can set the sudo_user property in env; this way fabric will switch to the desired user.
Official doc: http://docs.fabfile.org/
The password for switching user can be specified in env itself, to avoid getting a prompt when the method is invoked.
fabfile.py
from fabric.api import env, sudo

env.sudo_user = 'user1'
env.password = '***'

def list_items():
    sudo('ls')
Run the command below and specify the hosts after -H:
fab -H host1 list_items

Why does my remote host return an error code of -1 when I use fabric reboot()?

Local Host Environment: CentOS 7, Python 3.5.1, Fabric3 (1.11.1.post1)
Remote Host Environment: CentOS 7
fibfile:
def fuc():
    reboot()
bash:
fab -f fibfile.py -H host -u root -p password
The remote host did reboot, but Fabric returns a fatal error:
sudo() received nonzero return code -1 while executing 'reboot'!
Now I use warn_only to prevent failure:
fabfile:
def test():
    with settings(warn_only=True):
        reboot()
I started having this problem with some new virtual machines. I think they do shut down too fast, as Jon Stark said.
To fix it, I ignore the error and the warning, like this.
with settings(hide('warnings'),
              warn_only=True,
              ):
    sudo("shutdown -r now")
I found a similar question about Ansible: link
I think the top answer there is right:
reboot is shutting down the server so quickly that the server is tearing down the SSH connection.
shutdown -r now returns the same fatal error:
sudo() received nonzero return code -1 while executing 'shutdown -r now'!
shutdown -r +1 returns success:
out: Shutdown scheduled for Mon 2016-05-23 14:16:48 UTC, use 'shutdown -c' to cancel.
But shutdown can only delay by at least one minute.
So we can only choose to wait a minute or to ignore the error.
You can put a shell session into the background which sleeps for 1 second and then executes the reboot command. This must be done without the nohup command because of the nohup issue. I use tmux...
reboot(command='tmux new-session -d "sleep 1; reboot;"')

Python - Using Fabric with Sudo

I'm pretty new to Python and Fabric and I am trying to write some simple code that gets output on two hosts using sudo, but I keep getting an error. Can anyone help me out with what I might be missing?
My code:
from fabric.api import *
from getpass import getpass
from fabric.decorators import runs_once

env.hosts = ['host1', 'host2']
env.port = '22'
env.user = 'username'
env.password = "password"

def sudo_dsmc(cmd):
    sudo("-l")
When I run fab sudo_dsmc:"-l":
MacBookPRO:PYTHON username$ fab sudo_dsmc:"-l"
[host1] Executing task 'sudo_dsmc'
[host1] sudo: -l
[host1] out: sudo password:
[host1] out: Sorry, user username is not allowed to execute '/bin/bash -l -c - l' as root on host1.
[host1] out:
Fatal error: sudo() received nonzero return code 1 while executing!
Requested: -l
Executed: sudo -S -p 'sudo password:' /bin/bash -l -c "-l"
Aborting.
Disconnecting from host1... done.
Although I can run apt-get update fine, without any errors, with my function below:
def sudo_command(cmd):
    sudo("apt-get update")
    # run like: fab sudo_command:"apt-get-update"
It looks like your sudoers file is preventing you from running that command as sudo. Check your /etc/sudoers file and read the sudo documentation.
Also "-l" isn't a valid command. sudo takes -l as an optional flag (which lists commands allowed by the user). But Fabric's sudo appears to be taking unknown strings and routing them through /bin/bash instead of using them directly as sudo command parameters.
