I am creating a tar file on a remote server and I want to get it to my machine. For security reasons I can't use FTP on that server.
As I see it, I have two options:
get the file (as a file) some other way and then use the tarfile library - if so, I need help with getting the file without FTP.
get the content of the file and then extract it.
If there is another way, I would like to hear it.
import spur

# creating the connection
shell = spur.SshShell(
    hostname=unix_host,
    username=unix_user,
    password=unix_password,
    missing_host_key=spur.ssh.MissingHostKey.accept
)

with shell:
    # running an ssh command that creates a tar file on the remote server
    command = "tar -czvf test.gz test"
    shell.run(
        ["sh", "-c", command],
        cwd=unix_path
    )

    # getting the content of the tar file into gz_file_content
    # (shell.run returns an ExecutionResult; its .output attribute holds the command's stdout)
    command = "cat test.gz"
    gz_file_content = shell.run(
        ["sh", "-c", command],
        cwd=unix_path
    ).output
More info:
My project runs in a virtualenv. I am using Python 3.4.
If you have SSH access, you have SFTP access too in 99% of cases.
So you can use SFTP to download the file. See Download files over SSH using Python.
Or, as you are already using spur, see its SshShell.open method:
For instance, to copy a binary file over SSH, assuming you already have an instance of SshShell:
import shutil

with ssh_shell.open("/path/to/remote", "rb") as remote_file:
    with open("/path/to/local", "wb") as local_file:
        shutil.copyfileobj(remote_file, local_file)
The SshShell.open method uses SFTP under the hood (via the Paramiko library).
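For reference, a minimal sketch of the question's example with the download step added through SshShell.open (the unix_* variables and file names are the placeholders from the question):

import shutil

import spur

shell = spur.SshShell(
    hostname=unix_host,
    username=unix_user,
    password=unix_password,
    missing_host_key=spur.ssh.MissingHostKey.accept
)

with shell:
    # create the archive on the remote server
    shell.run(["sh", "-c", "tar -czvf test.gz test"], cwd=unix_path)

    # download it over SFTP instead of cat-ing its content
    with shell.open(unix_path + "/test.gz", "rb") as remote_file:
        with open("test.gz", "wb") as local_file:
            shutil.copyfileobj(remote_file, local_file)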
Related
I have a Raspberry Pi running on my home network along with a Macintosh that acts as a backup server. I am attempting to write a Python 3 script that will run on the Pi to create a tar.gz backup of the Pi's home folder (along with all of its contents) and transfer it to the Macintosh using SSH. I have the SSH connection (using keys) running, but am stumped when I try to create the backup file and transfer it.
My code so far:
#!/usr/bin/python3
import paramiko
import tarfile
import os
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(hostname='192.168.1.151', username='usernamehere')
print("I'm connected!")
#The following statement is the problem as I see it
tar czvf - ./home/PiOne/ | ssh_client -w:gz "cat > ./Volumes/PiBackups/PiOne/PiOneClone.tar.gz"
print("File transferred!")
ssh_client.close()
I would appreciate any help in creating the script!
I think there is an easier way to transfer a file from your VM to your local machine, given that you are already using SSH:
scp user@ip:/path/to/your/targz/file /path/on/local/machine
Check the file permissions while doing that, using:
ls -lh
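If you would rather stay inside Python than shell out to scp, here is a rough sketch of the whole backup: it builds the archive with tarfile and uploads it over Paramiko's SFTP. The host, username and destination path are taken from the question; the temporary archive path is an assumption:

#!/usr/bin/python3
import os
import tarfile

import paramiko

archive_path = "/tmp/PiOneClone.tar.gz"  # hypothetical staging location

# build the archive locally instead of piping tar through the SSH client
with tarfile.open(archive_path, "w:gz") as tar:
    tar.add("/home/PiOne", arcname="PiOne")

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(hostname='192.168.1.151', username='usernamehere')

# upload the archive over SFTP, then clean up the local copy
sftp = ssh_client.open_sftp()
sftp.put(archive_path, "/Volumes/PiBackups/PiOne/PiOneClone.tar.gz")
sftp.close()
ssh_client.close()
os.remove(archive_path)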
I am using this code
tar zcf - somefolder/ | ssh user@server "cd /path/to/remote && tar zxf -"
to copy files between two systems.
I want to do it in Python.
I tried:
import subprocess
p = subprocess.Popen('tar zcf - somefolder/ | ssh user@server "cd /path/to/remote && tar zxf -"')
I also tried:
p = subprocess.Popen(["tar", "-zcf somefolder | ssh ubuntu@192.168.100.110 /path/to/remote && tar -zxf"])
but neither works.
I also tried run instead of Popen, but it still doesn't work.
But
stream = os.popen("cmd")
works fine; the problem is that I am not getting the status.
With the first methods I can use
os.waitpid(p.pid, 0)
to get the live status of the process.
What I want is to transfer files between remote and local machines without using external libraries,
and with a live status.
How can I achieve this?
I would keep it simple and use the os module, which is also less verbose than subprocess:
result = os.popen("command").read()
Update: I overlooked the "no external module" requirement, sorry. But maybe it's useful for others searching.
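Since the question also asks for the status, note that the file object returned by os.popen() reports it from close(): None on success, otherwise the exit status encoded in the os.wait() format. A small sketch built on the pipeline from the question:

import os

stream = os.popen('tar zcf - somefolder/ | ssh user@server "cd /path/to/remote && tar zxf -"')
output = stream.read()   # blocks until the pipeline finishes
status = stream.close()  # None on success, encoded exit status otherwise

if status is None:
    print("transfer finished successfully")
else:
    print("pipeline failed, raw wait status:", status)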
There is a module for that :)
from paramiko import SSHClient
from scp import SCPClient
ssh = SSHClient()
ssh.load_system_host_keys()
ssh.connect('example.com')
# SCPClient takes a paramiko transport as an argument
scp = SCPClient(ssh.get_transport())
scp.put('test.txt', 'test2.txt')
scp.get('test2.txt')
# Uploading the 'test' directory with its content in the
# '/home/user/dump' remote directory
scp.put('test', recursive=True, remote_path='/home/user/dump')
scp.close()
Or using a with block:
from paramiko import SSHClient
from scp import SCPClient
with SSHClient() as ssh:
    ssh.load_system_host_keys()
    ssh.connect('example.com')

    with SCPClient(ssh.get_transport()) as scp:
        scp.put('test.txt', 'test2.txt')
        scp.get('test2.txt')
See: https://pypi.org/project/scp/
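Since the question also asks for a live status: if I remember the scp package's documentation correctly, SCPClient accepts a progress callback that is invoked as the transfer proceeds (treat the exact signature as an assumption):

import sys

from paramiko import SSHClient
from scp import SCPClient

def progress(filename, size, sent):
    # called repeatedly by SCPClient while the transfer is running
    sys.stdout.write("%s: %.2f%%\r" % (filename, float(sent) / float(size) * 100))

with SSHClient() as ssh:
    ssh.load_system_host_keys()
    ssh.connect('example.com')
    with SCPClient(ssh.get_transport(), progress=progress) as scp:
        scp.put('test.txt', 'test2.txt')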
From my experience, when using subprocess.run() to run an external Ubuntu program/process, I've had to pass each command parameter as a separate list entry, like so:
subprocess.run(['pip3', 'install', 'someModule'])
So maybe try putting every single space-separated argument as its own list element.
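That alone will not reproduce the | pipeline from the question, though, because the pipe is a shell feature, not part of tar's arguments. Staying in the standard library, you can either pass the whole command line with shell=True, or wire two Popen objects together and wait on them for the status; a sketch using the host and paths from the question:

import subprocess

# tar writes the archive to stdout; ssh feeds it to the remote "tar zxf -"
tar = subprocess.Popen(
    ["tar", "zcf", "-", "somefolder/"],
    stdout=subprocess.PIPE,
)
ssh = subprocess.Popen(
    ["ssh", "user@server", "cd /path/to/remote && tar zxf -"],
    stdin=tar.stdout,
)
tar.stdout.close()       # let tar receive SIGPIPE if ssh exits early

ssh_status = ssh.wait()  # blocks until the remote untar finishes
tar_status = tar.wait()
print("tar exited with", tar_status, "- ssh exited with", ssh_status)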
I am using Python and Paramiko to copy a 5 GB file between server A and server B. The script is executed from serverX: it opens an SSH session from serverX to server B and runs a command there that copies the file from server A using sshpass. The script works, but it does not copy the complete 5 GB file; it copies only half, and sometimes less than half.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(serverb, username=user, password=password)
try:
    stdin, stdout, stderr = client.exec_command("sshpass -p password scp -v -r root@serverA:/tmp/file_to_copy_name /tmp/", timeout=None)
except Exception as err:
    print("copy between server error")
    raise
You may want to use rsync over SSH instead of scp (secure remote file copy) with sshpass (non-interactive SSH password provider). It supports fast incremental file transfer (it can resume an unfinished upload), and using an SSH key is much more secure than passing the raw password via sshpass.
Something like:
rsync -az /root/bigfile.txt 198.211.117.129:/root/
-a for archive mode
-z to compress file data during the transfer
The manual: https://download.samba.org/pub/rsync/rsync.html
Moreover, it can resume the copy started with scp.
Here is the instruction on how to use it over SSH:
https://www.digitalocean.com/community/tutorials/how-to-copy-files-with-rsync-over-ssh
Also, as already pointed out by @pynexj, client.exec_command() does not wait until the command execution is finished. So you may want to have some alternative way to check whether the file was successfully copied and has the same data as the source. One of the options could be checking the MD5 hash: https://stackoverflow.com/search?q=Python+md5+hash
And you may want to check the: What is the fastest hash algorithm to check if two files are equal?
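A rough sketch of such a check with Paramiko, comparing the source file on server A with the copy on server B by running md5sum on each over SSH (the command and paths assume GNU/Linux hosts; variable names follow the question's code):

import paramiko

def remote_md5(host, user, password, path):
    # run md5sum over SSH and return the hash
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        stdin, stdout, stderr = client.exec_command("md5sum '%s'" % path)
        return stdout.read().split()[0].decode()
    finally:
        client.close()

# compare source (server A) and destination (server B) after the copy
if remote_md5(serverA, user, password, "/tmp/file_to_copy_name") == \
        remote_md5(serverb, user, password, "/tmp/file_to_copy_name"):
    print("copy verified")
else:
    print("copy is incomplete or corrupted")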
I guess you can use
rsync -avP --partial source target
where either source or target can be a remote path (user@host:/path) or a local path, depending on the direction you need.
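If you want to drive that from your Python script, running rsync through subprocess keeps the resumable --partial behaviour and gives you an exit code to check (a sketch; the source and destination paths are illustrative and assume the copy is started from the machine running the script):

import subprocess

result = subprocess.run(
    ["rsync", "-avP", "--partial",
     "root@serverA:/tmp/file_to_copy_name", "/tmp/"],
)
print("rsync exited with", result.returncode)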
I have a python program for log analysis.
The log is in another server which has a port number and password.
I cannot store my Python code on that server, so I need to scp the file to the server where my program is stored.
I did this:
popen('''sshpass -p "password" scp -r \
admin#192.158.11.109:/home/admin/DontDeleteMe/%s /home/admin/''' % fileName)
But if the file is big, the rest of the program runs before the copy completes.
popen() does not wait for the process to complete. You can use subprocess.call():
import subprocess

exitcode = subprocess.call('''sshpass -p "password" scp -r \
    admin@192.158.11.109:/home/admin/DontDeleteMe/%s /home/admin/''' % fileName,
    shell=True)
According to Python's doc:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several older modules and functions:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
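On Python 3.5 and later the same thing is usually written with subprocess.run(), which also waits for the process and can raise if the copy fails; a sketch based on the command from the question:

import subprocess

completed = subprocess.run(
    'sshpass -p "password" scp -r '
    'admin@192.158.11.109:/home/admin/DontDeleteMe/%s /home/admin/' % fileName,
    shell=True,
    check=True,  # raises CalledProcessError if scp exits with a non-zero status
)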
Is it possible to copy the contents of a .tar.gz file using the echo command?
I am using telnet (through telnetlib in Python) to execute commands on a server. I need to copy a few files to the server. However, scp just hangs after authentication. The server is a BusyBox server. Another team is looking into that issue for now. The scp command I used is this:
scp -i /key/private.pem /home/tempuser/file.tar.gz tempuser@remote1:/tmp/
I sidestepped this by reading the contents of each file and writing them on the remote side with echo. However, when I try to read a tar.gz file, it fails. I could not untar the file and copy the files within it one by one, as the tar file has nearly 500 files in it, including a few tar files.
So, is there any way to copy a tar file's contents (read through open() in Python) without scp?
Or is it possible to copy a file using telnetlib in Python, using the Telnet class?
To be clear, I need to upload a tar.gz file from the local machine to the remote machine, but without the help of scp. A Python solution would be most helpful, but if bash is the way to go, I can run os.system too; so a Python or shell scripting solution is what I am looking for.
If you need any more information, please ask away in the comments.
You can cat and redirect, for example:
ssh user@server cat file.tar.gz > file.tar.gz
Note that cat will happen at the server side, but the redirection will happen locally, to a local file.
You could also directly gunzip + untar to the local filesystem:
ssh user@server cat file.tar.gz | tar zxv
To do it the other way around, copy from local to server:
ssh user@server 'cat > file.tar.gz' < file.tar.gz
And gzip + tar to the server:
tar zc . | ssh user@server 'cat > file.tar.gz'
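The same trick can be driven from Python with Paramiko instead of a command-line ssh, by streaming the local file into the stdin of a remote cat (a rough sketch; the host, key and paths are the question's placeholders):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("remote1", username="tempuser", key_filename="/key/private.pem")

# the remote cat reads from stdin and writes the archive to /tmp/
stdin, stdout, stderr = client.exec_command("cat > /tmp/file.tar.gz")
with open("/home/tempuser/file.tar.gz", "rb") as local_file:
    for chunk in iter(lambda: local_file.read(32768), b""):
        stdin.write(chunk)
stdin.channel.shutdown_write()  # signal EOF so the remote cat exits
print("exit status:", stdout.channel.recv_exit_status())
client.close()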
If you try to run the command outside of the Python script, it will ask you for a password:
scp -i /key/private.pem /home/tempuser/file.tar.gz tempuser@remote1:/tmp/
To pass the password to the Unix scp/ssh command non-interactively you need a helper such as sshpass that feeds it for you (scp will not read the password from a plain shell redirect), for example:
sshpass -p 'myPass' scp -i /key/private.pem /home/tempuser/file.tar.gz tempuser@remote1:/tmp/
There is an alternative method using the base64 utility. By base64-encoding the file you wish to transfer, you'll avoid issues with any escape chars, etc. that may trip echo. For example:
some_var="$( base64 -w 0 path_to_file )"
ssh user#server "echo $some_var | base64 -d > path_to_remote_file"
Option -w 0 is important to prevent base64 from inserting line breaks (after 76 characters by default).
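If you have to stay on telnet, the same idea can be driven from Python's telnetlib by appending the base64 text in chunks and decoding it on the remote side. This is only a sketch: it assumes the BusyBox server provides base64 and a shell prompt ending in "$ ", the login strings and credentials are illustrative, and note that telnetlib was removed from the standard library in Python 3.13:

import base64
import telnetlib

PROMPT = b"$ "

with open("/home/tempuser/file.tar.gz", "rb") as f:
    encoded = base64.b64encode(f.read()).decode()

tn = telnetlib.Telnet("remote1")
tn.read_until(b"login: ")
tn.write(b"tempuser\n")
tn.read_until(b"Password: ")
tn.write(b"secret\n")
tn.read_until(PROMPT)

# start with an empty file, then append the encoded text chunk by chunk
tn.write(b"> /tmp/file.b64\n")
tn.read_until(PROMPT)
for i in range(0, len(encoded), 1000):
    chunk = encoded[i:i + 1000]
    tn.write(("echo -n '%s' >> /tmp/file.b64\n" % chunk).encode())
    tn.read_until(PROMPT)

# decode back into the tar.gz on the remote machine
tn.write(b"base64 -d /tmp/file.b64 > /tmp/file.tar.gz\n")
tn.read_until(PROMPT)
tn.close()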