I want to connect to a remote server and execute a process on it from Python. I want to be able to get the return code and the stderr (if any) of the process. Has anyone done anything like this before? I have done it with ssh on the command line, but I want to do it from a Python script.
Cheers.
Use the SSH module paramiko, which was created for exactly this purpose, instead of shelling out with subprocess. Here's an example:
from paramiko import SSHClient

client = SSHClient()
client.load_system_host_keys()   # read known hosts from ~/.ssh/known_hosts
client.connect("hostname", username="user")
stdin, stdout, stderr = client.exec_command("program")
print("stderr:", stderr.readlines())
print("stdout:", stdout.readlines())
UPDATE: This example originally used the ssh module, but that is now deprecated; paramiko is the up-to-date module that provides SSH functionality in Python.
Well, you can call ssh from Python...
import subprocess

ret = subprocess.call(["ssh", "user@host", "program"])

# or, with stderr captured:
prog = subprocess.Popen(["ssh", "user@host", "program"], stderr=subprocess.PIPE)
errdata = prog.communicate()[1]
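The return code is also available here: ssh exits with the remote command's exit status (or 255 if the connection itself fails), and subprocess exposes it after communicate() has run:

print("stderr:", errdata)
print("exit status:", prog.returncode)   # the remote command's exit status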
If you want to wrap the nuts and bolts of the ssh calls, you could use Fabric. This library is geared towards deployment and server management, but it can also be useful for this kind of problem.
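As a hedged sketch of what that wrapping looks like with Fabric's current 2.x API (which differs from the 1.x API that existed when this was written), run() returns a result carrying stdout, stderr, and the exit code:

from fabric import Connection

result = Connection("user@host").run("program", warn=True, hide=True)  # warn=True: don't raise on nonzero exit
print("exit code:", result.exited)
print("stderr:", result.stderr)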
Also have a look at Celery. It implements a task queue for Python/Django on top of various brokers. It may be overkill for your problem, but if you are going to call more functions on multiple machines it will save you a lot of headache managing your connections.
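For completeness, a minimal sketch of a Celery task wrapping such a call; the broker URL and host are illustrative assumptions:

import subprocess
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # assumed broker

@app.task
def run_remote(command):
    # run the command over ssh on whatever worker picks the task up
    prog = subprocess.Popen(["ssh", "user@host", command], stderr=subprocess.PIPE)
    errdata = prog.communicate()[1]
    return prog.returncode, errdata.decode()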
I am using Paramiko in my Python and Django code to execute a command. Here is my code:
client = SSHClient()
client.set_missing_host_key_policy(AutoAddPolicy())
client.connect(<host>, username=<username>, password=<password>)
stdin, stdout, stderr = client.exec_command("curl -X POST http://127.0.0.1:8080/predictions -T image.jpg")
lines = stdout.readlines()
The execution time of stdout.readlines() is 0.59 s per command. That is not acceptable for my near-real-time system. Could anyone suggest how to make the reading faster?
SSHClient.exec_command only starts the command; it does not wait for it to complete. The waiting happens in readlines, so readlines takes as long as the command itself does.
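You can verify this by timing the two calls separately; a small demonstration, reusing the client from the question:

import time

start = time.time()
stdin, stdout, stderr = client.exec_command("sleep 1 && echo done")
print("exec_command returned after %.2fs" % (time.time() - start))  # near zero
lines = stdout.readlines()  # blocks until the remote command exits
print("readlines returned after %.2fs" % (time.time() - start))     # about one second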
Obligatory warning: Do not use AutoAddPolicy – you lose protection against MITM attacks by doing so. For the correct solution, see Paramiko "Unknown Server".
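A sketch of the safer pattern: load the known host keys up front and leave the default RejectPolicy in place, so an unknown or changed host key raises an error instead of being silently accepted:

from paramiko import SSHClient

client = SSHClient()
client.load_system_host_keys()   # read ~/.ssh/known_hosts
# no AutoAddPolicy: connecting to an unknown host now raises SSHException
client.connect("hostname", username="user", password="password")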
I have a requirement to telnet from one Windows PC to another. I would like to log in and issue commands (and see replies) using Python.
This is very easy to achieve in my local cmd window:
Call up cmd and type 'telnet REMOTECOMPUTERNAME'.
Reply in window is:
'Welcome to the ChyronHego telnet server on REMOTECOMPUTERNAME'
I can issue commands (e.g. 'V\6\1\\') by typing directly into prompt.
Remote system responds by carrying out task or issuing error message in prompt.
(I have tried using telnetlib and system.process and os without any result so far)
Does anyone know how I can achieve this programmatically using Python?
Many thanks in advance.
Ian
You can use the subprocess module to launch the telnet command on Windows. Additional parameters can be added to the list as separate elements, e.g. ["telnet", "HOST", 'V'].
import subprocess

p = subprocess.Popen(["telnet", "HOST"], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
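You can then feed commands through the pipe and collect the replies, though be warned that some versions of the Windows telnet client talk to the console directly and ignore redirected pipes, so this sketch may need testing on your system:

# send one command followed by a newline, close stdin, and read everything back
out, err = p.communicate(input=b"V\\6\\1\\\\\r\n")
print(out.decode("ascii", errors="replace"))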
I'm using the Python library Paramiko to run a command over ssh on another server. The problem I'm facing is that the SSHClient.exec_command() call returns immediately, sending me stdin, stdout, and stderr and giving me no other way I can see to tell if the process is still running or not. I thought that I might try monitoring to see if the streams it returns are still open, but I can't find any way to do this except by trying to read from stdout or stderr, or write to stdin and waiting to receive a ValueError. Can anyone tell me of something I've missed that should work instead?
Thanks to advice from @fixxxer I found what I needed to know. My test code now looks like this:
import paramiko
import time
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('localhost', username='user', password='password')
transport = ssh.get_transport()
channel = transport.open_session()
channel.exec_command('./exec_test.py')
status = channel.recv_exit_status()   # blocks here until the command exits
This works marvellously. It blocks until the command is finished, then allows me to continue.
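If you need to keep doing other work while the command runs, the channel also has a non-blocking check; a variant of the block above:

channel = transport.open_session()
channel.exec_command('./exec_test.py')
while not channel.exit_status_ready():   # True once the command has finished
    time.sleep(0.5)                      # room to do other work here
status = channel.recv_exit_status()      # returns immediately at this point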
I am using paramiko to open an sftp connection to access a remote file. All the code below (inside a function in my module) works only if I don't have logging enabled for paramiko:
paramiko.util.log_to_file( 'paramiko.log' )
So when I do NOT have the above line of code in my file the code below works:
client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname, username=user, password=password)
sftp = client.open_sftp()
file = sftp.open(fpath, mode='r', bufsize=1)
Otherwise Python hangs on the client.connect(hostname, username=user, password=password) line and writes to the stderr log like crazy, eventually killing the VM my code is running on.
Specifically paramiko hangs on this line:
t.start_client()
within the client.connect method. Nothing useful comes out in the paramiko log, and stderr fills with errors that carry no description or traceback.
Researching this problem I came across the claim that "there is a single import lock available, so when a child thread attempts another import it can block indefinitely". How do I make sure the code opening an sftp connection is never blocked?
This is a bit of a long shot, but I have had issues with logging's use of threads causing deadlock. I was never able to track the exact problem down (though I suspect it was exacerbated by the use of subprocess), but I did solve it by disabling the logging module's thread support.
Try this before you activate logging:
import logging

logging.thread = None   # makes Python 2's logging module behave as if threading were unavailable
I'd be interested to know if this solves your problem or not.
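Another thing worth trying, though this is an assumption on my part rather than something I have verified against your setup: raise paramiko's log level so the flood of records never happens in the first place:

import logging

logging.getLogger("paramiko").setLevel(logging.WARNING)   # drop the DEBUG/INFO chatter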
I'm using Python's subprocess.Popen to perform some FTP using the binary client of the host operating system. I can't use ftplib or any other library for various reasons.
The behavior of the binary seems to change if I attach a stdin handler to the Popen instance. For example, using XP's ftp client, which accepts a text file of commands to issue:
>>> from subprocess import Popen, PIPE
>>> p = Popen(['ftp', '-A', '-s:commands.txt', 'example.com'], stdout=PIPE)
>>> p.communicate()[0]
'Connected to example.com.
220 ProFTPD 1.3.1 Server (Debian) ...
331 Anonymous login ok, send your complete email address as your password
<snip>
ftp> binary
200 Type set to I
ftp> get /testfiles/100.KiB
200 PORT command successful
150 Opening BINARY mode data connection for /testfiles/100.KiB (102400 bytes)
226 Transfer complete
ftp: 102400 bytes received in 0.28Seconds 365.71Kbytes/sec.
ftp> quit
>>>
commands.txt:
binary
get /testfiles/100.KiB
quit
When also supplying stdin, all you get in stdout is:
>>> from subprocess import Popen, PIPE
>>> p = Popen(['ftp', '-A', '-s:commands.txt', 'example.com'], stdin=PIPE, stdout=PIPE)
>>> p.communicate()[0]
'binary
get /testfiles/100.KiB
quit'
>>>
Initially I thought this was a quirk of the XP ftp client, perhaps knowing it wasn't in interactive mode and therefore limiting its output. However, the same behaviour happens with OS X's ftp - all the server responses are missing from stdout if stdin is supplied - which leads me to think that this is normal behaviour.
In Windows I can use the -s switch to effectively script ftp without using stdin, but on other platforms one relies on the shell for that kind of interaction.
Python version is 2.6.x on both platforms. Why would supplying a handle for stdin change stdout, and where have the server responses gone to?
The program may be using isatty(3) to detect the presence of a tty on stdin.
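If that is the case, on POSIX systems you can hand the child a pseudo-terminal for stdin so that isatty() succeeds; a rough sketch, not tested against every ftp client:

import os
import pty
import subprocess

master, slave = pty.openpty()   # pseudo-terminal pair; the child sees a tty
p = subprocess.Popen(['ftp', 'example.com'], stdin=slave,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
os.close(slave)                 # the child keeps its own copy
os.write(master, b'binary\nget /testfiles/100.KiB\nquit\n')
out, err = p.communicate()
os.close(master)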
I think I read somewhere (but can't remember where) that the Windows ftp client came from one of the original BSD implementations. If so, it certainly shares some lineage with Mac OS X's ftp implementation.
For me, this is not related to Popen but to the ftp client's implementation, which checks the context in which it is launched (to see whether it's interacting with a human or a shell script) using isatty(3), as mentioned by Ignacio in his answer. This is common practice for programs that can be used in both contexts. A well-known example is GNU grep's --color=auto option: it colorizes output only if stdout is a tty, and not if the output of grep is piped into another command.
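You can observe the same check from Python itself; run this both directly and piped into another command:

import sys

print(sys.stdout.isatty())   # True in a terminal, False when piped or redirected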