Is it possible to copy the contents of a .tar.gz file using the echo command?
I am using telnet (through telnetlib in Python) to execute commands on a server. I need to copy a few files onto the server. However, scp just hangs after authentication. The server is a BusyBox server; another team is looking into that issue for now. The scp command I used is this:
scp -i /key/private.pem /home/tempuser/file.tar.gz tempuser@remote1:/tmp/
I side-stepped this by reading the contents of the file and writing them on the remote side with the echo command. However, when I try that with a tar.gz file, it fails. I cannot untar the archive and copy the files within it one by one, as it contains nearly 500 files, including a few tar files.
So is there any way to copy the contents of a tar file (read with open() in Python) without scp?
Or is it possible to copy a file using telnetlib in Python, via the Telnet class?
To be clear, I need to upload a tar.gz file from my local machine to the remote machine, but without the help of scp. A Python solution would be ideal; if bash is the way to go, I can run it through os.system. So a Python or shell-scripting solution is what I am looking for.
If you need any more information, please ask away in the comments.
You can cat and redirect, for example:
ssh user@server cat file.tar.gz > file.tar.gz
Note that cat will happen at the server side, but the redirection will happen locally, to a local file.
You could also directly gunzip + untar to the local filesystem:
ssh user@server cat file.tar.gz | tar zxv
To do it the other way around, copy from local to server:
ssh user@server 'cat > file.tar.gz' < file.tar.gz
And gzip + tar to the server:
tar zc . | ssh user@server 'cat > file.tar.gz'
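Since the question asks for a Python solution, the same upload pipeline can be driven from a script. Here is a minimal sketch using only the standard library; "user@server" and both paths are placeholders, not values from the question:

import subprocess

# Stream the local archive to the remote machine over ssh.
with open("/home/tempuser/file.tar.gz", "rb") as local_file:
    subprocess.run(
        ["ssh", "user@server", "cat > /tmp/file.tar.gz"],
        stdin=local_file,
        check=True,  # raise CalledProcessError if the copy fails
    )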
If you try to run the command outside of the Python script, it will ask you for a password:
scp -i /key/private.pem /home/tempuser/file.tar.gz tempuser@remote1:/tmp/
To pass the password to the Unix scp/ssh command, you would have to feed it to the command's input, for example:
echo 'myPass' | scp -i /key/private.pem /home/tempuser/file.tar.gz tempuser@remote1:/tmp/
Note, however, that scp normally reads the password from the terminal rather than from stdin, so in practice this needs a helper such as sshpass, or better, key-based authentication.
There is an alternative method using the base64 utility. By base64-encoding the file you wish to transfer, you avoid issues with escape characters and other special bytes that may trip up echo. For example:
some_var="$( base64 -w 0 path_to_file )"
ssh user#server "echo $some_var | base64 -d > path_to_remote_file"
Option -w 0 is important to prevent base64 from inserting line breaks (after 76 characters by default).
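The same idea can be pushed through the existing telnet session from Python. Below is a hedged sketch, assuming an already-authenticated telnetlib.Telnet object, a "$ " shell prompt, and that the BusyBox target has a base64 applet; all of these are assumptions, and the chunk size is arbitrary:

import base64

def push_file(tn, local_path, remote_path, prompt=b"$ ", chunk=512):
    """Upload a local file over an existing telnetlib session by echoing
    base64 text into a temporary file and decoding it remotely."""
    with open(local_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    tmp = remote_path + ".b64"
    tn.write((": > %s\n" % tmp).encode("ascii"))  # truncate the temp file
    tn.read_until(prompt)
    for i in range(0, len(encoded), chunk):
        part = encoded[i:i + chunk]
        tn.write(("echo -n '%s' >> %s\n" % (part, tmp)).encode("ascii"))
        tn.read_until(prompt)
    tn.write(("base64 -d %s > %s && rm %s\n" % (tmp, remote_path, tmp)).encode("ascii"))
    tn.read_until(prompt)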
I am using this command:
tar zcf - somefolder/ | ssh user@server "cd /path/to/remote && tar zxf -"
to copy files between two systems.
I want to do it in Python.
I tried:
import subprocess
p=subprocess.Popen('tar zcf - somefolder/ | ssh user@server "cd /path/to/remote && tar zxf -')
I also tried:
p=subprocess.Popen(["tar","-zcf somefolder | ssh ubuntu#192.168.100.110 /path/to/remote && tar -zxf"])
but neither of them is working.
I also tried run instead of Popen, but it still does not work.
However,
stream = os.popen("cmd")
works fine, but the problem is that I am not getting the status.
With the first methods I can use
os.waitpid(p.pid, 0)
to get the live status of the process.
What I want is to transfer files between the remote and local machines without using external libraries,
and with a live status.
How can I achieve this?
I would keep it simple and use the os module, which is also simpler to call than subprocess:
result = os.popen("command").read()
Update: I overlooked the "no external module" requirement, sorry. But maybe it's useful for others searching.
There is a module for that :)
from paramiko import SSHClient
from scp import SCPClient
ssh = SSHClient()
ssh.load_system_host_keys()
ssh.connect('example.com')
# SCPClient takes a paramiko transport as an argument
scp = SCPClient(ssh.get_transport())
scp.put('test.txt', 'test2.txt')
scp.get('test2.txt')
# Uploading the 'test' directory with its content in the
# '/home/user/dump' remote directory
scp.put('test', recursive=True, remote_path='/home/user/dump')
scp.close()
Or using with statements:
from paramiko import SSHClient
from scp import SCPClient
with SSHClient() as ssh:
    ssh.load_system_host_keys()
    ssh.connect('example.com')
    with SCPClient(ssh.get_transport()) as scp:
        scp.put('test.txt', 'test2.txt')
        scp.get('test2.txt')
See: https://pypi.org/project/scp/
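If you also need live progress, the scp module accepts a progress callback. A sketch (still relying on the external paramiko and scp packages; host and paths are placeholders):

from paramiko import SSHClient
from scp import SCPClient

def progress(filename, size, sent):
    # Called by SCPClient as data goes over the wire; filename may be bytes.
    print("%s: %.1f%%" % (filename, 100.0 * sent / size))

with SSHClient() as ssh:
    ssh.load_system_host_keys()
    ssh.connect('example.com')
    with SCPClient(ssh.get_transport(), progress=progress) as scp:
        scp.put('somefolder', recursive=True, remote_path='/path/to/remote')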
From my experience, when using subprocess.run() to run an external Ubuntu program/process, I've had to pass each command argument as a separate list entry, like so:
subprocess.run(['pip3', 'install', 'someModule'])
So maybe try putting every single space-separated argument as its own list element.
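Note, however, that the command here contains a shell pipe, which a plain argument list cannot express. A hedged sketch that keeps the pipeline, runs it through the shell, and still exposes a live status via polling (host and paths are placeholders):

import subprocess
import time

cmd = 'tar zcf - somefolder/ | ssh user@server "cd /path/to/remote && tar zxf -"'
p = subprocess.Popen(cmd, shell=True)

# Poll for completion instead of blocking on p.wait().
while p.poll() is None:
    time.sleep(1)  # still running; report progress or do other work here

print("exit status:", p.returncode)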
I use gsutil for uploading a file to Google Cloud Storage. I would like to write the output to a file.
I have created a shortcut with this command in it:
%windir%\system32\cmd.exe /k python2 c:\gsutil\gsutil -m rsync -r -n -d "XX" gs://xx/XX > C:\myoutput.txt
I run the cmd as admin. The myoutput.txt file is created, but it is empty after the script exits.
Any idea how I solve this?
Old question:
I have tried adding /myoutput.txt (cf. here) after gs://xx/XX; it doesn't work: I get an "Access is denied." message.
I guess the output went to the error stream; to merge it with the normal output, append 2>&1.
To redirect to a file and also see the output on screen, you need a tee or T-pipe tool.
There is one in the GNU utilities for Win32, or one from Bill Stewart's site.
So your command could look like this (untested):
%windir%\system32\cmd.exe /k python2 c:\gsutil\gsutil -m rsync -r -n -d "XX" gs://xx/XX 2>&1|tee "%USERPROFILE%\Desktop\myoutput.txt"
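If the shortcut keeps producing an empty file, the same merge-and-redirect can also be done from Python itself. A sketch; the gsutil path, bucket names, and log location are placeholders taken from the question, not tested values:

import subprocess

# Run gsutil and send both stdout and stderr into the log file.
with open(r"C:\myoutput.txt", "w") as log:
    subprocess.call(
        ["python2", r"c:\gsutil\gsutil", "-m", "rsync", "-r", "-n", "-d",
         "XX", "gs://xx/XX"],
        stdout=log,
        stderr=subprocess.STDOUT,  # merge the error stream into the log
    )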
Try writing to a different place or running cmd as admin; that would be my guess. Hope this helps.
I am trying to pull files from my web servers and would like to do this with Python. I have the command below, which tars the files on the remote machine and pulls them all back to the local machine. When I do this manually with os.system it prompts me for a password; I enter it and it pulls the files. Is there any way to detect the password prompt from os.system? Alternatively, if I use pexpect I can detect the password prompt and enter the password, but the files do not get copied over. Any ideas?
ssh user1@myserver 'tar -cvf - -C /usr/home/user1 .' | tar -xvf -
username = "user1"
servername = "myserver"
mypath = "/usr/home/user1"
import os
os.system("ssh user1#myserver 'tar -cvf - -C /usr/home/user1 .' | tar -xvf -")
user1@myserver's password:
You should be able to configure your ssh to connect without asking for a password; see for example: http://web.archive.org/web/20160404025901/http://jaybyjayfresh.com/2009/02/04/logging-in-without-a-password-certificates-ssh/
Note, you can also use scp to copy the whole folder recursively:
scp -rp sourcedirectory user#dest:/path
-r means recursively
-p means preserve attributes
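Once key-based authentication is in place, the copy can be scripted without any password handling. A minimal sketch; the key path, host, and directories are assumptions:

import subprocess

# With public-key auth configured, no password prompt appears,
# so a plain subprocess call is enough.
subprocess.run(
    ["scp", "-rp", "-i", "/home/user1/.ssh/id_rsa",
     "user1@myserver:/usr/home/user1/", "."],
    check=True,
)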
I am creating a tar file on a remote server and I want to be able to get it to my machine. For security reasons I can't use FTP on that server.
So as I see it, I have two options:
Get the file (as a file) in some other way and then use the tarfile library; if so, I need help with getting the file without FTP.
Get the content of the file and then extract it.
If there is another way, I would like to hear it.
import spur

# creating the connection
shell = spur.SshShell(
    hostname=unix_host,
    username=unix_user,
    password=unix_password,
    missing_host_key=spur.ssh.MissingHostKey.accept
)

# running ssh command that is creating a tar file on the remote server
with shell:
    command = "tar -czvf test.gz test"
    shell.run(
        ["sh", "-c", command],
        cwd=unix_path
    )

    # getting the content of the tar file to gz_file_content
    command = "cat test.gz"
    gz_file_content = shell.run(
        ["sh", "-c", command],
        cwd=unix_path
    )
More info:
My project is running in a virtualenv. I am using Python 3.4.
If you have SSH access, then 99% of the time you have SFTP access as well.
So you can use the SFTP to download the file. See Download files over SSH using Python.
Or, since you are already using spur, see its SshShell.open method:
For instance, to copy a binary file over SSH, assuming you already have an instance of SshShell:
with ssh_shell.open("/path/to/remote", "rb") as remote_file:
with open("/path/to/local", "wb") as local_file:
shutil.copyfileobj(remote_file, local_file)
The SshShell.open method uses SFTP under the hood (via the Paramiko library).
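Applied to the question's code, a hedged sketch could look like this; it reuses the shell and unix_path variables from above, assumes the spur connection is still open, and the local file name is an assumption:

import shutil

# Download the remote archive created earlier in the question's code.
with shell.open(unix_path + "/test.gz", "rb") as remote_file:
    with open("test.gz", "wb") as local_file:
        shutil.copyfileobj(remote_file, local_file)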
I am running iperf between a set of hosts that are read from a txt file; here's how I am running it:
h1,h2 = net.getNodeByName(node_id_1, node_id_2)
net.iperf((h1, h2))
It runs well and displays the results, but I want to save the iperf output in a separate txt file. Does anyone know how I can do that with the above code?
In order to store the results of an iperf test in a file, add | tee followed by filename.txt to your command line, for example:
iperf -c ipaddress -u -t 10 -i 1 | tee result.txt
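When driving iperf from Mininet as in the question, the return value of net.iperf() (the parsed bandwidth results) can also be written out directly from the script. A sketch; the result file name is an assumption:

# net.iperf() returns the measured bandwidths (roughly [server_bw, client_bw]),
# so they can simply be appended to a file.
h1, h2 = net.getNodeByName(node_id_1, node_id_2)
result = net.iperf((h1, h2))
with open("iperf_results.txt", "a") as f:
    f.write("%s <-> %s: %s\n" % (h1.name, h2.name, result))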
Did you already try:
--output test.log
(in newer versions --logfile)
or using
youriperfexpr > test.log
I had this problem as well. Although the manpage specifies "-o" or "--output" to save your output to a file, this does not actually work.
It seems that this was marked as "WontFix":
https://code.google.com/p/iperf/issues/detail?id=24
Looks like -o/--output existed in a previous version but is not in the current version. The consensus in yesterday's meeting was that if --output existed then we should fix it, otherwise people should just use shell redirection and we'll mark this WontFix. So, WontFix.
So maybe just use typescript or ">test.log" as suggested by Paolo.
I think the answer is given by Chiara Contoli here: iperf result in output file.
In summary:
h1.cmd('iperf -s > server_output.txt &')
h2.cmd('iperf -t 5 -c ', h1.IP() + ' > client_output.txt &')
Since you are running it from Python, another method to save the result is to use popen:
popen( '<command> > <filename>', shell=True)
For example:
popen('iperf -s -u -i 1 > outtest.txt', shell=True)
You can check this for further information:
https://github.com/mininet/mininet/wiki/Introduction-to-Mininet#popen
If you need to save the output to a txt file:
On the client machine, run cmd (as admin) and then enter the following:
cd c:\iperf3
iperf3.exe -c "you server address" -p "port" -P 10 -w 32000 -t 0 >> c:\iperf3\text.txt
(-t 0 means run indefinitely)
On the client machine you will see a black screen in cmd; that is normal. You will see the whole process on the server machine. After your test, press Ctrl+C in cmd on the client machine and then confirm with (y).
The file c:\iperf3\text.txt will then contain all the information collected during that period.
If you just close the cmd window instead, text.txt will be empty.
It is recommended to open this file in Notepad or WordPad for the correct view.
Server
iperf3 -s -p <port> -B <server_ip> >> <server_output_file> &
Client
iperf3 -p <port> -c <server_ip> -B <client_ip> -t 5 >> <client_output_file>
Make sure you kill the iperf process on the server when done.