sshpass not copying complete data between two linux machines - python

I am using Python and Paramiko to copy a 5 GB file from server A to server B. The script runs on server X: it opens an SSH session from server X to server B and runs a command there that pulls the file from server A using sshpass. The script works, but it never copies the complete 5 GB file; it copies only about half, sometimes less.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(serverb, username=user, password=password)
try:
    stdin, stdout, stderr = client.exec_command(
        "sshpass -p password scp -v -r root@serverA:/tmp/file_to_copy_name /tmp/",
        timeout=None)
except Exception as err:
    print("copy between server error")
    raise

You may want to use rsync over SSH instead of scp (secure remote file copy) with sshpass (non-interactive SSH password provider). It supports fast incremental file transfer (it can resume an unfinished upload), and using an SSH key is much more secure than passing the raw password via sshpass.
Something like:
rsync -az /root/bigfile.txt 198.211.117.129:/root/
-a for archive mode
-z to compress file data during the transfer
The manual: https://download.samba.org/pub/rsync/rsync.html
Moreover, it can resume the copy started with scp.
Here is the instruction on how to use it over SSH:
https://www.digitalocean.com/community/tutorials/how-to-copy-files-with-rsync-over-ssh
Also, as already pointed out by @pynexj, client.exec_command() will not wait for the command to finish. So you may want some alternative way to check that the file was copied successfully and has the same data as the source. One option is to check the MD5 hash: https://stackoverflow.com/search?q=Python+md5+hash
You may also want to check: What is the fastest hash algorithm to check if two files are equal?
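For instance, a rough sketch of waiting for the remote scp to finish and then comparing checksums could look like this (this is not the original poster's code; it reuses the client, hosts and paths from the question and assumes md5sum is available on both servers):
# reuse the paramiko client from the question (connected to serverB)
stdin, stdout, stderr = client.exec_command(
    "sshpass -p password scp -v -r root@serverA:/tmp/file_to_copy_name /tmp/"
)
# recv_exit_status() blocks until the remote command has finished
if stdout.channel.recv_exit_status() != 0:
    raise RuntimeError("scp failed: %s" % stderr.read().decode())

# compare the checksum of the copy on serverB with the source on serverA
_, out, _ = client.exec_command("md5sum /tmp/file_to_copy_name")
copy_md5 = out.read().split()[0]
_, out, _ = client.exec_command(
    "sshpass -p password ssh root@serverA md5sum /tmp/file_to_copy_name"
)
source_md5 = out.read().split()[0]
print("checksums match" if copy_md5 == source_md5 else "checksums differ")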

I guess you can use
rsync -avP --partial source target
where either source or target can be a remote server path or a local path, in whichever order you need.


How to control host from docker container?
For example, how do you execute a bash script that has been copied to the host?
This answer is just a more detailed version of Bradford Medeiros's solution, which for me as well turned out to be the best answer, so credit goes to him.
In his answer, he explains WHAT to do (named pipes) but not exactly HOW to do it.
I have to admit I didn't know what named pipes were when I read his solution. So I struggled to implement it (while it's actually very simple), but I did succeed.
So the point of my answer is just detailing the commands you need to run in order to get it working, but again, credit goes to him.
PART 1 - Testing the named pipe concept without docker
On the main host, choose the folder where you want to put your named pipe file, for instance /path/to/pipe/, and a pipe name, for instance mypipe, and then run:
mkfifo /path/to/pipe/mypipe
The pipe is created.
Type
ls -l /path/to/pipe/mypipe
and check that the access rights start with "p", such as
prw-r--r-- 1 root root 0 mypipe
Now run:
tail -f /path/to/pipe/mypipe
The terminal is now waiting for data to be sent into this pipe
Now open another terminal window.
And then run:
echo "hello world" > /path/to/pipe/mypipe
Check the first terminal (the one running tail -f): it should display "hello world".
PART 2 - Run commands through the pipe
On the host, instead of running tail -f (which just outputs whatever is sent as input), run this command, which will execute the input as commands:
eval "$(cat /path/to/pipe/mypipe)"
Then, from the other terminal, try running:
echo "ls -l" > /path/to/pipe/mypipe
Go back to the first terminal and you should see the result of the ls -l command.
PART 3 - Make it listen forever
You may have noticed that in the previous part, right after ls -l output is displayed, it stops listening for commands.
Instead of eval "$(cat /path/to/pipe/mypipe)", run:
while true; do eval "$(cat /path/to/pipe/mypipe)"; done
(you can nohup that)
Now you can send an unlimited number of commands one after the other; they will all be executed, not just the first one.
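If you prefer to keep the host-side listener in Python rather than bash, a rough equivalent of that loop could look like the sketch below (an untested sketch, with the same security caveat as eval, using the same /path/to/pipe/mypipe):
#!/usr/bin/env python3
# Rough Python equivalent of: while true; do eval "$(cat mypipe)"; done
import subprocess

PIPE_PATH = "/path/to/pipe/mypipe"  # the fifo created with mkfifo above

while True:
    # open() blocks until a writer opens the pipe; read() returns at EOF
    with open(PIPE_PATH) as pipe:
        command = pipe.read().strip()
    if command:
        subprocess.run(command, shell=True)  # executes whatever was sent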
PART 4 - Make it work even when reboot happens
The only caveat is if the host has to reboot, the "while" loop will stop working.
To handle reboots, here is what I've done:
Put the while true; do eval "$(cat /path/to/pipe/mypipe)"; done in a file called execpipe.sh with a #!/bin/bash header
Don't forget to chmod +x it
Add it to crontab by running
crontab -e
And then adding
@reboot /path/to/execpipe.sh
At this point, test it: reboot your server, and when it's back up, echo some commands into the pipe and check if they are executed.
Of course, you aren't able to see the output of commands, so ls -l won't help, but touch somefile will help.
Another option is to modify the script to put the output in a file, such as:
while true; do eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt; done
Now you can run ls -l and the output (both stdout and stderr using &> in bash) should be in output.txt.
PART 5 - Make it work with docker
If you are using both docker compose and dockerfile like I do, here is what I've done:
Let's assume you want to mount the mypipe's parent folder as /hostpipe in your container
Add this:
VOLUME /hostpipe
in your dockerfile in order to create a mount point
Then add this:
volumes:
- /path/to/pipe:/hostpipe
in your docker compose file in order to mount /path/to/pipe as /hostpipe
Restart your docker containers.
PART 6 - Testing
Exec into your docker container:
docker exec -it <container> bash
Go into the mount folder and check you can see the pipe:
cd /hostpipe && ls -l
Now try running a command from within the container:
echo "touch this_file_was_created_on_main_host_from_a_container.txt" > /hostpipe/mypipe
And it should work!
WARNING: If you have an OSX (Mac OS) host and a Linux container, it won't work (explanation here https://stackoverflow.com/a/43474708/10018801 and issue here https://github.com/docker/for-mac/issues/483 ), because the pipe implementation is not the same: what you write into the pipe from Linux can be read only by Linux, and what you write into the pipe from Mac OS can be read only by Mac OS (this sentence might not be very accurate, but just be aware that a cross-platform issue exists).
For instance, when I run my docker setup in DEV from my Mac OS computer, the named pipe as explained above does not work. But in staging and production, I have Linux host and Linux containers, and it works perfectly.
PART 7 - Example from Node.JS container
Here is how I send a command from my Node.JS container to the main host and retrieve the output:
const fs = require("fs")

const pipePath = "/hostpipe/mypipe"
const outputPath = "/hostpipe/output.txt"
const commandToRun = "pwd && ls -l"

console.log("delete previous output")
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath)

console.log("writing to pipe...")
const wstream = fs.createWriteStream(pipePath)
wstream.write(commandToRun)
wstream.close()

console.log("waiting for output.txt...") // there are better ways to do this than setInterval
let timeout = 10000 // stop waiting after 10 seconds (something might be wrong)
const timeoutStart = Date.now()
const myLoop = setInterval(function () {
    if (Date.now() - timeoutStart > timeout) {
        clearInterval(myLoop)
        console.log("timed out")
    } else {
        // if output.txt exists, read it
        if (fs.existsSync(outputPath)) {
            clearInterval(myLoop)
            const data = fs.readFileSync(outputPath).toString()
            if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath) // delete the output file
            console.log(data) // log the output of the command
        }
    }
}, 300)
Use a named pipe.
On the host OS, create a script that loops, reads commands from that named pipe, and calls eval on them.
Have the docker container write to that named pipe.
To be able to access the pipe, you need to mount it via a volume.
This is similar to the SSH mechanism (or a similar socket-based method), but restricts you properly to the host device, which is probably better. Plus you don't have to be passing around authentication information.
My only warning is to be cautious about why you are doing this. It's totally something to do if you want to create a method to self-upgrade with user input or whatever, but you probably don't want to call a command to get some config data, as the proper way would be to pass that in as args/volume into docker. Also, be cautious about the fact that you are evaling, so just give the permission model a thought.
Some of the other answers, such as running a script under a volume, won't work generically, since the container won't have access to the full system resources, but they might be more appropriate depending on your usage.
The solution I use is to connect to the host over SSH and execute the command like this:
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
UPDATE
As this answer keeps getting upvotes, I would like to remind (and highly recommend) that the account being used to invoke the script should be an account with no permissions at all, except for executing that script as sudo (that can be done from the sudoers file).
UPDATE: Named Pipes
The solution I suggested above was only the one I used while I was relatively new to Docker. Now in 2021 take a look on the answers that talk about Named Pipes. This seems to be a better solution.
However, nobody there mentioned anything about security. The script that evaluates the commands sent through the pipe (the script that calls eval) must actually not eval the whole pipe output; instead it should handle specific cases and call the required commands according to the text sent, otherwise any command at all can be sent through the pipe.
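As one possible shape of such a restricted listener, here is a sketch (the action names and commands are made up; adapt them to your case):
#!/usr/bin/env python3
# Sketch: only dispatch whitelisted actions instead of eval-ing raw pipe input.
import subprocess

PIPE_PATH = "/path/to/pipe/mypipe"

ALLOWED = {
    # action name sent through the pipe -> fixed argument list to run
    "restart-nginx": ["systemctl", "restart", "nginx"],
    "disk-usage": ["df", "-h"],
}

while True:
    with open(PIPE_PATH) as pipe:
        request = pipe.read().strip()
    action = ALLOWED.get(request)
    if action is None:
        print("rejected:", request)
        continue
    subprocess.run(action)  # no shell, fixed argument list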
That REALLY depends on what you need that bash script to do!
For example, if the bash script just echoes some output, you could just do
docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
Another possibility is that you want the bash script to install some software, say a script to install docker-compose. You could do something like
docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
But at this point you're really getting into having to know intimately what the script is doing to allow the specific permissions it needs on your host from inside the container.
My laziness led me to find the easiest solution that wasn't published as an answer here.
It is based on the great article by luc juggery.
All you need to do in order to gain a full shell to your linux host from within your docker container is:
docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh
Explanation:
--privileged : grants additional permissions to the container, it allows the container to gain access to the devices of the host (/dev)
--pid=host : allows the containers to use the processes tree of the Docker host (the VM in which the Docker daemon is running)
nsenter utility: allows running a process in existing namespaces (the building blocks that provide isolation to containers)
nsenter (-t 1 -m -u -n -i sh) runs the process sh in the same isolation context as the process with PID 1.
The whole command will then provide an interactive sh shell in the VM.
This setup has major security implications and should be used with caution (if at all).
Write a simple Python server listening on a port (say 8080), bind the port into the container with -p 8080:8080, and make an HTTP request to localhost:8080 to ask the Python server to run shell scripts with popen. Run curl, or write code to make the HTTP request, e.g. curl -d '{"foo":"bar"}' localhost:8080
#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import subprocess
import json

PORT_NUMBER = 8080

# This class handles any incoming request from the browser
class myHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        content_len = int(self.headers.getheader('content-length'))
        post_body = self.rfile.read(content_len)
        self.send_response(200)
        self.end_headers()
        data = json.loads(post_body)
        # Use the post data
        cmd = "your shell cmd"
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
        # communicate() waits for the process and avoids pipe-buffer deadlocks
        (output, err) = p.communicate()
        p_status = p.returncode
        print "Command output : ", output
        print "Command exit status/return code : ", p_status
        self.wfile.write(cmd + "\n")
        return

try:
    # Create a web server and define the handler to manage the
    # incoming requests
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER
    # Wait forever for incoming http requests
    server.serve_forever()
except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
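A possible client call (e.g. from inside the container, assuming the port is published and the requests library is installed) might look like:
# Sketch: POST a JSON body to the host-side HTTP bridge started above.
import requests

resp = requests.post("http://localhost:8080", json={"foo": "bar"})
print(resp.status_code, resp.text)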
If you are not worried about security and you're simply looking to start a docker container on the host from within another docker container like the OP, you can share the docker server running on the host with the docker container by sharing its listening socket.
Please see https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface and see if your personal risk tolerance allows this for this particular application.
You can do this by adding the following volume args to your start command
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
or by sharing /var/run/docker.sock within your docker compose file like this:
version: '3'
services:
ci:
command: ...
image: ...
volumes:
- /var/run/docker.sock:/var/run/docker.sock
When you run the docker start command within your docker container,
the docker server running on your host will see the request and provision the sibling container.
credit: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
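With the socket mounted, you can also talk to the host's Docker daemon directly from Python via the docker SDK (pip install docker); a minimal sketch (image and command are just examples):
# Sketch: inside a container that has /var/run/docker.sock mounted,
# ask the host daemon to start a sibling container.
import docker

client = docker.from_env()  # picks up /var/run/docker.sock
output = client.containers.run("alpine", "echo hello from a sibling", remove=True)
print(output.decode())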
As Marcus reminds us, docker is basically process isolation. Starting with docker 1.8, you can copy files both ways between the host and the container; see the docs for docker cp:
https://docs.docker.com/reference/commandline/cp/
Once a file is copied, you can run it locally
docker run --detach-keys="ctrl-p" -it -v /:/mnt/rootdir --name testing busybox
# chroot /mnt/rootdir
#
I have a simple approach.
Step 1: Mount /var/run/docker.sock:/var/run/docker.sock (So you will be able to execute docker commands inside your container)
Step 2: Execute the command below inside your container. The key part is --network host, as this will execute in the host's network context.
docker run -i --rm --network host -v /opt/test.sh:/test.sh alpine:3.7
sh /test.sh
test.sh should contain whatever commands you need (ifconfig, netstat, etc.).
Now you will be able to get host context output.
You can use the pipe concept, but use a file on the host and fswatch to accomplish the goal of executing a script on the host machine from a docker container. Like so (use at your own risk):
#! /bin/bash
touch .command_pipe
chmod +x .command_pipe
# Use fswatch to execute a command on the host machine and log result
fswatch -o --event Updated .command_pipe | \
xargs -n1 -I "{}" .command_pipe >> .command_pipe_log &
docker run -it --rm \
--name alpine \
-w /home/test \
-v $PWD/.command_pipe:/dev/command_pipe \
alpine:3.7 sh
rm -rf .command_pipe
kill %1
In this example, inside the container send commands to /dev/command_pipe, like so:
/home/test # echo 'docker network create test2.network.com' > /dev/command_pipe
On the host, you can check if the network was created:
$ docker network ls | grep test2
8e029ec83afe test2.network.com bridge local
In my scenario, I just SSH into the host (via the host IP) from within the container, and then I can do anything I want on the host machine.
I found answers using named pipes awesome. But I was wondering if there is a way to get the output of the executed command.
The solution is to create two named pipes:
mkfifo /path/to/pipe/exec_in
mkfifo /path/to/pipe/exec_out
Then, the solution using a loop, as suggested by @Vincent, would become:
# on the host
while true; do eval "$(cat exec_in)" > exec_out; done
And then on the docker container, we can execute the command and get the output using:
# on the container
echo "ls -l" > /path/to/pipe/exec_in
cat /path/to/pipe/exec_out
If anyone is interested: my need was to use a failover IP on the host from the container, so I created this simple Ruby method:
def fifo_exec(cmd)
exec_in = '/path/to/pipe/exec_in'
exec_out = '/path/to/pipe/exec_out'
%x[ echo #{cmd} > #{exec_in} ]
%x[ cat #{exec_out} ]
end
# example
fifo_exec "curl https://ip4.seeip.org"
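A rough Python equivalent of that helper, using the same exec_in/exec_out pipes, could be:
def fifo_exec(cmd):
    # blocks until the host-side loop reads exec_in and writes exec_out
    with open("/path/to/pipe/exec_in", "w") as exec_in:
        exec_in.write(cmd + "\n")
    with open("/path/to/pipe/exec_out") as exec_out:
        return exec_out.read()

# example
print(fifo_exec("curl https://ip4.seeip.org"))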
Depending on the situation, this could be a helpful resource.
This uses a job queue (Celery) that can be run on the host, commands/data could be passed to this through Redis (or rabbitmq). In the example below, this is occurring in a django application (which is commonly dockerized).
https://www.codingforentrepreneurs.com/blog/celery-redis-django/
To expand on user2915097's response:
The idea of isolation is to be able to restrict what an application/process/container (whatever your angle at this is) can do to the host system very clearly. Hence, being able to copy and execute a file would really break the whole concept.
Yes. But it's sometimes necessary.
No. That's not the case, or Docker is not the right tool to use. What you should do is declare a clear interface for what you want to do (e.g. updating a host config), and write a minimal client/server to do exactly that and nothing more. Generally, however, this doesn't seem to be very desirable. In many cases, you should simply rethink your approach and eliminate that need. Docker came into existence when basically everything was a service reachable through some protocol. I can't think of any proper use case of a Docker container getting the rights to execute arbitrary stuff on the host.

How to know whether a copying process(scp) is complete using python?

I have a python program for log analysis.
The log is in another server which has a port number and password.
I cannot store my python code in that server. So I need to scp the file to the server where my program is stored.
I did this:
popen('''sshpass -p "password" scp -r \
admin@192.158.11.109:/home/admin/DontDeleteMe/%s /home/admin/''' % fileName)
But if the file is big, the program continues before the copying process is complete.
popen() does not wait for the process to complete. You can use subprocess.call():
exitcode = subprocess.call('''sshpass -p "password" scp -r \
admin@192.158.11.109:/home/admin/DontDeleteMe/%s /home/admin/''' % fileName,
shell=True)
According to Python's doc:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several older modules and functions:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
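On Python 3 you could also avoid shell=True entirely by passing an argument list, which keeps fileName from being interpreted by the shell; a sketch using the same host and paths as the question:
import subprocess

# Sketch: same copy without shell=True; raises if scp exits non-zero.
result = subprocess.run(
    ["sshpass", "-p", "password", "scp", "-r",
     "admin@192.158.11.109:/home/admin/DontDeleteMe/%s" % fileName,
     "/home/admin/"],
)
if result.returncode != 0:
    raise RuntimeError("scp failed with exit code %d" % result.returncode)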

Read tar file on remote SSH server without using FTP in Python

I am creating some tar file on a remote server and I want to be able to get it to my machine. Due to security reasons I can't use FTP on that server.
So how I see it, I have two options:
get the file (as file) in some other way and then use tarfile library - if so, I need help with getting the file without FTP.
get the content of the file and then extract it.
If there is another way, I would like to hear it.
import spur

# creating the connection
shell = spur.SshShell(
    hostname=unix_host,
    username=unix_user,
    password=unix_password,
    missing_host_key=spur.ssh.MissingHostKey.accept
)

with shell:
    # running an ssh command that creates a tar file on the remote server
    command = "tar -czvf test.gz test"
    shell.run(
        ["sh", "-c", command],
        cwd=unix_path
    )

    # getting the content of the tar file into gz_file_content
    command = "cat test.gz"
    gz_file_content = shell.run(
        ["sh", "-c", command],
        cwd=unix_path
    )
More info:
My project is running on a virtualenv. I am using Python 3.4.
If you have SSH access, in 99% of cases you also have SFTP access.
So you can use the SFTP to download the file. See Download files over SSH using Python.
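For example, a minimal Paramiko SFTP sketch (host, credentials and paths are the placeholders from the question):
import paramiko

# Sketch: download the remote tar file over SFTP.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(unix_host, username=unix_user, password=unix_password)

sftp = client.open_sftp()
sftp.get("/path/on/server/test.gz", "test.gz")  # remote path -> local path
sftp.close()
client.close()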
Or once you are using spur, see its SshShell.open method:
For instance, to copy a binary file over SSH, assuming you already have an instance of SshShell:
with ssh_shell.open("/path/to/remote", "rb") as remote_file:
    with open("/path/to/local", "wb") as local_file:
        shutil.copyfileobj(remote_file, local_file)
The SshShell.open method uses SFTP under the hood (via Paramiko library).

Copy one tar.gz file without scp(using echo or cat)

Is it possible for us to copy contents of a .tar.gz file using echo command?
I am using telnet(through telnetlib in python) to execute commands in a server. I need to copy few files into the server. However, scp just hangs after authentication. The server is a busybox server. Another team is looking into the issue for now. The scp command I used is this:
scp -i /key/private.pem /home/tempuser/file.tar.gz tempuser@remote1:/tmp/
I side-stepped this by reading the contents of the file and putting them in an echo command on the remote side. However, when I try to read a tar.gz file, it fails. I cannot untar the file and copy the files within it one by one, as the tar file has nearly 500 files in it, including a few tar files.
So, is there any possible way to copy a tar file's contents (read through the open command in python) without scp?
Or is it possible to copy a file using telnetlib in python, using the Telnet function?
To be more clear, I need to upload a tar.gz file from the local machine to the remote machine, but without the help of scp. It will be more helpful if it is a python solution. If bash is the way to go, I could run os.system too. So a python/shell scripting solution is what I am looking for.
If you need any more information, please ask away in the comments.
You can cat and redirect, for example:
ssh user@server cat file.tar.gz > file.tar.gz
Note that cat will happen at the server side, but the redirection will happen locally, to a local file.
You could also directly gunzip + untar to the local filesystem:
ssh user@server cat file.tar.gz | tar zxv
To do it the other way around, copy from local to server:
ssh user@server 'cat > file.tar.gz' < file.tar.gz
And gzip + tar to the server:
tar zc . | ssh user@server 'cat > file.tar.gz'
If you try to run the command outside of the python script, it will ask you for a password:
scp -i /key/private.pem /home/tempuser/file.tar.gz tempuser@remote1:/tmp/
To pass the password to the Unix scp/ssh command, you need to redirect the password as input to the command, like:
myPass > scp -i /key/private.pem /home/tempuser/file.tar.gz tempuser@remote1:/tmp/
There is an alternative method using the base64 utility. By base64-encoding the file you wish to transfer, you'll avoid issues with any escape chars, etc. that may trip echo. For example:
some_var="$( base64 -w 0 path_to_file )"
ssh user@server "echo $some_var | base64 -d > path_to_remote_file"
Option -w 0 is important to prevent base64 from inserting line breaks (after 76 characters by default).
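The same idea works from Python by shelling out to ssh and feeding the encoded data on stdin, which also avoids command-line length limits; a sketch (host and paths are placeholders):
import base64
import subprocess

# Sketch: base64-encode the local file and decode it on the remote side.
with open("/home/tempuser/file.tar.gz", "rb") as f:
    encoded = base64.b64encode(f.read())

subprocess.run(
    ["ssh", "tempuser@remote1", "base64 -d > /tmp/file.tar.gz"],
    input=encoded,
    check=True,
)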

Python ssh tunneling over multiple machines with agent

A little context is in order for this question: I am making an application that copies files/folders from one machine to another in python. The connection must be able to go through multiple machines. I quite literally have the machines connected in serial so I have to hop through them until I get to the correct one.
Currently, I am using python's subprocess module (Popen). As a very simplistic example I have
import subprocess
# need to set strict host checking to no since we connect to different
# machines over localhost
tunnel_string = "ssh -oStrictHostKeyChecking=no -L9999:127.0.0.1:9999 -ACt machine1 ssh -L9999:127.0.0.1:22 -ACt -N machineN"
proc = subprocess.Popen(tunnel_string.split())
# Do work, copy files etc. over ssh on localhost with port 9999
proc.terminate()
My question:
When doing it like this, I cannot seem to get agent forwarding to work, which is essential in something like this. Is there a way to do this?
I tried using the shell=True keyword in Popen like so
tunnel_string = "eval `ssh-agent` && ssh-add && ssh -oStrictHostKeyChecking=no -L9999:127.0.0.1:9999 -ACt machine1 ssh -L9999:127.0.0.1:22 -ACt -N machineN"
proc = subprocess.Popen(tunnel_string, shell=True)
# etc
The problem with this is that the names of the machines are given by user input, meaning they could easily inject malicious shell code. A second problem is that I then have a new ssh-agent process running every time I make a connection.
I have a nice function in my bashrc which identifies already-running ssh-agents, sets the appropriate environment variables, and adds my ssh key, but of course subprocess cannot reference functions defined in my bashrc. I tried setting the executable="/bin/bash" argument with shell=True in Popen, to no avail.
You should give Fabric a try.
It provides a basic suite of operations for executing local or remote
shell commands (normally or via sudo) and uploading/downloading files,
as well as auxiliary functionality such as prompting the running user
for input, or aborting execution.
The program below will give you a test run.
First install fabric with pip install fabric then save the code below in fabfile.py
from fabric.api import *

env.hosts = ['server url/IP']  # change to your server
env.user = 'username'          # username for the server
env.password = 'password'      # password for the server

def run_interactive():
    with settings(warn_only=True):
        cmd = 'clear'
        while cmd != 'stop fabric':
            run(cmd)
            cmd = raw_input('Command to run on server: ')
Change to the directory containing your fabfile and run fab run_interactive; then each command you enter will be run on the server.
I tested your first simplistic example and agent forwarding worked. The only thing that I can see that might cause problems is that the environment variables SSH_AGENT_PID and SSH_AUTH_SOCK are not set correctly in the shell from which you execute your script. You might use ssh -v to get a better idea of where things are breaking down.
Try setting up a SSH config file: https://linuxize.com/post/using-the-ssh-config-file/
I am frequently required to tunnel through a bastion server and I use a configuration like the one below in my ~/.ssh/config file. Just change the host and user names. This also presumes that you have entries for these host names in your hosts file (/etc/hosts).
Host my-bastion-server
    Hostname my-bastion-server
    User user123
    AddKeysToAgent yes
    UseKeychain yes
    ForwardAgent yes

Host my-target-host
    HostName my-target-host
    User user123
    AddKeysToAgent yes
    UseKeychain yes
I then gain access with syntax like:
ssh my-bastion-server -At 'ssh my-target-host -At'
And I issue commands against my-target-host like:
ssh my-bastion-server -AT 'ssh my-target-host -AT "ls -la"'
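If you would rather stay in Python than shell out, Paramiko can make the hop through the bastion with a ProxyCommand; a rough sketch using the host names from the config above (untested, and it assumes your local ssh and agent handle the first hop):
import paramiko

# Sketch: reach my-target-host through my-bastion-server.
proxy = paramiko.ProxyCommand("ssh -W my-target-host:22 my-bastion-server")

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("my-target-host", username="user123", sock=proxy)

stdin, stdout, stderr = client.exec_command("ls -la")
print(stdout.read().decode())
client.close()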
