I want to run and control PSFTP from a Python script in order to get log files from a UNIX box onto my Windows machine.
I can start up PSFTP and log in, but when I try to run a command remotely, such as 'cd', it isn't recognised by PSFTP and is instead run in the terminal once I close PSFTP.
The code which I am trying to run is as follows:
import os
os.system("<directory> -l <username> -pw <password>" )
os.system("cd <anotherDirectory>")
I was just wondering if this is actually possible, or if there is a better way to do this in Python.
Thanks.
You'll need to run PSFTP as a subprocess and speak directly with the process. os.system spawns a separate subshell each time it's invoked, so it doesn't work like typing commands sequentially into a command prompt window. Take a look at the documentation for the standard Python subprocess module; you should be able to accomplish your goal from there. Alternatively, there are a few Python SSH packages available, such as paramiko and Twisted. If you're already happy with PSFTP, though, I'd definitely stick with trying to make it work first.
Subprocess module hint:
import subprocess

# The following lines spawn the psftp process and bind its standard input
# to p.stdin and its standard output to p.stdout
p = subprocess.Popen('psftp -l testuser -pw testpass'.split(),
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# Send the 'cd some_directory' command to the process as if a user were
# typing it at the command line (the trailing newline acts as Enter)
p.stdin.write('cd some_directory\n')
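A slightly fuller sketch of the same idea, assuming psftp is on your PATH (the host name and credentials are placeholders): universal_newlines=True keeps the pipes in text mode, and communicate() feeds the whole batch of commands to psftp and collects its output in one go:

import subprocess

# Hypothetical host and credentials; substitute your own.
p = subprocess.Popen(
    ['psftp', 'unixbox.example.com', '-l', 'testuser', '-pw', 'testpass'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    universal_newlines=True)

# Queue a whole batch of sftp commands, then let communicate() send
# them and wait for psftp to exit.
out, err = p.communicate('cd some_directory\nget logfile.log\nquit\n')
print(out)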
This has sort of been answered in: SFTP in Python? (platform independent)
http://www.lag.net/paramiko/
The advantage of the pure-Python approach is that you don't need psftp installed at all.
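If you do go the pure-Python route, a minimal paramiko sketch looks roughly like this (host, credentials, and paths are hypothetical):

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('unixbox.example.com', username='testuser', password='testpass')

# Fetch the remote log file over SFTP; no psftp binary required.
sftp = ssh.open_sftp()
sftp.get('/var/log/myapp.log', r'C:\logs\myapp.log')
sftp.close()
ssh.close()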
I'm new at an IT company and very few people here know Python, so I can't ask them for help.
The problem: I need to create a Python script that connects via SSH from my VM to my client's server; once connected, I need to find a log file and search it for a few pieces of data.
I tested my script on my Windows machine with a copy of that file and it found everything I need. However, I don't know how to make that connection via SSH.
I tried the following, but I don't know where to start:
from subprocess import Popen, PIPE
import sys
ssh = subprocess.check_output(['ssh', 'my_server', 'password'], shell = True)
ssh.stdin.write("cd /path/")
ssh.stdin.write("cat file | grep err|error")
This generates the error "name 'subprocess' is not defined".
I don't understand how to use subprocess, nor how to begin developing the solution.
Note: I can't use Paramiko because I don't have permission to install packages via pip or download the package manually.
You didn't import subprocess itself (only Popen and PIPE from it), so you can't refer to it by that name.
check_output simply runs a process and waits for it to finish, so you can't use it for a process you want to interact with. But nothing here is actually interactive, so let's use it after all.
The first argument to subprocess.Popen() and friends is either a string for the shell to parse, with shell=True; or a list of tokens passed directly to exec with no shell involved. (On some platforms, passing a list of tokens with shell=True actually happens to work, but this is coincidental, and could change in a future version of Python.)
ssh myhost password will try to run the command password on myhost, so that's not what you want. Probably you should simply set things up for passwordless SSH in the first place.
... But you can use this syntax to run the commands in one go; just pass the shell commands to ssh as a string.
from subprocess import check_output

#import sys  # Remove unused import
result = check_output(['ssh', 'my_server',
                       # Fix quoting and Useless Use of Cat, and drop the pointless cd;
                       # grep -E enables alternation so 'err|error' matches either word
                       "grep -E 'err|error' /path/file"])
Edit:
The original intent of this question was to find a way to launch an interactive ssh session via a Python script. I'd tried subprocess.call() before and had gotten a Killed response before anything was output to the terminal. I just assumed this was an issue or limitation with the subprocess module rather than an issue somewhere else. This was found not to be the case when I ran the script on a non-resource-limited machine, where it worked fine.
This then turned the question into: how can I run an interactive ssh session despite whatever resource limitations were preventing it from running?
Shoutout to Charles Duffy, who was a huge help in diagnosing all of this.
Below is the original question:
Background:
So I have a script that is currently written in bash. It parses the output of a few console functions and then opens up an ssh session based on those parsed outputs.
It currently works fine, but I'd like to expand its capabilities a bit by adding some flag arguments. I've worked with argparse before and thoroughly enjoyed it. I tried to do some flag work in bash, and let's just say it leaves much to be desired.
The Actual Question:
Is it possible to have Python do stuff in a console and then put the user in that console?
Something like using subprocess to run a series of commands in the currently viewed console? This is in contrast to how subprocess normally runs, where it executes commands and then shuts the intermediate console down.
Specific example, because I'm not sure if what I'm describing makes sense:
So here's a basic rundown of the functionality I want:
1. Run a Python script
2. Have that script run some console command and parse the output
3. Run the following command:
ssh -t $correctnode "cd /local_scratch/pbs.$jobid; bash -l"
This command will ssh to $correctnode, change directory, and then leave a bash window open on that node.
I already know how to do parts 1 and 2. It's part 3 that I can't figure out. Any help would be appreciated.
Edit: Unlike this question, I am not simply trying to run a command. I'm trying to display a shell that is created by a command. Specifically, I want to display a bash shell created through an ssh command.
Context For Readers
The OP is operating on a very resource-constrained (particularly, it appears, process-constrained) jumphost box, where starting an ssh process as a subprocess of Python goes over a relevant limit (on the number of processes, perhaps).
Approach A: Replacing The Python Interpreter With Your Interactive Process
Using the exec*() family of system calls causes your original process to no longer be in memory (unlike the fork()+exec*() combination used to start a subprocess while leaving the parent process running), so it doesn't count against the account's limits.
import argparse
import os

try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2

parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()

remote_cmd_str = 'cd /local_scratch/pbs.%s && exec bash -i' % (quote(args.jobid),)
local_cmd = [
    '/usr/bin/env', 'ssh', '-tt', args.node, remote_cmd_str,
]

# Replace the Python interpreter with ssh in-place; no new process is created.
os.execv("/usr/bin/env", local_cmd)
Approach B: Generating Shell Commands From Python
If we use Python to generate a shell command, the shell can invoke that command only after the Python process has exited, so we stay under the externally enforced process limit.
First, a slightly more robust approach to generating eval-able output:
import argparse

try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2

parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()

remoteCmd = ['cd', '/local_scratch/pbs.%s' % (args.jobid,)]
remoteCmdStr = ' '.join(quote(x) for x in remoteCmd) + ' && bash -l'
cmd = ['ssh', '-t', args.node, remoteCmdStr]

# Emit a fully-quoted command line for the calling shell to eval.
print(' '.join(quote(x) for x in cmd))
To run this from a shell, assuming the above is saved as genSshCmd:
#!/bin/sh
eval "$(genSshCmd "$@")"
Note that there are two separate layers of quoting here: One for the local shell running eval, and the second for the remote shell started by SSH. This is critical -- you don't want a jobid of $(rm -rf ~) to actually invoke rm.
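As a quick illustration of how that quoting defuses a hostile jobid (using the same quote import as above):

try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2

# The command substitution comes out wrapped in single quotes, so the
# remote shell treats it as literal text instead of executing it.
print(quote('$(rm -rf ~)'))  # prints '$(rm -rf ~)'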
This is in no way a real answer, just an illustration of my comment.
Let's say you have a Python script, test.py:
import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('myarg', nargs="*")
    args = parser.parse_args()
    print("echo Hello world! My arguments are: " + " ".join(args.myarg))
So, you create a bash wrapper around it, test.sh
set -e
$(python test.py $*)
and this is what you get:
$ bash test.sh
Hello world! My arguments are:
$ bash test.sh one two
Hello world! My arguments are: one two
What is going on here:
the Python script does not execute commands. Instead, it outputs the commands the bash script will run (echo in this example). In your case, the last command will be ssh blabla
bash executes the output of the Python script (the $(...) part), passing on all its arguments (the $* part)
you can use argparse inside the Python script; if anything is wrong with the arguments, the message is written to stderr and so is not executed by bash, and the bash script stops because of the set -e flag
I have been trying, rather unsuccessfully, to open several terminals (though one would be enough to start with) from, say, an IPython terminal that executes my main Python script. I would like this main script to open as many cmd terminals as needed and execute a specific Python script in each of them. I need the terminal windows to remain open when the scripts finish.
I can manage to start one terminal using the command:
import os
os.startfile('cmd')
but I don't know how to pass arguments to it, like:
/K python myscript.py
Does anyone have any ideas on how this could be done?
Cheers
H.H.
Use the subprocess module. See the documentation for more info:
http://docs.python.org/2/library/subprocess.html
import subprocess

# Raw strings keep the backslashes from being treated as escape sequences.
subprocess.check_output(["python", r"c:\home\user\script.py"])
or
subprocess.call(["python", r"c:\home\user\script.py"])
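Note that both calls above run the script without opening a new window, and they block until it finishes. To get a separate cmd window that stays open afterwards, which is what the question asks for, a hedged sketch (the script path is hypothetical) goes through cmd's start command:

import subprocess

# 'start' opens a new console window and '/K' keeps it open after the
# script exits; 'start' is a cmd built-in, hence shell=True.
subprocess.Popen('start cmd /K python myscript.py', shell=True)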
I need to execute and send a command to an external app from Python:
.\Ext\PrintfPC /p "C:\Leica\DBX" /l ".\joblist.log"
It is a cmd app. Is it possible to hide its console and terminate it after it's done, also using only Python?
You are probably looking for the subprocess module. Example for executing the ls -l command on a Unix system:
subprocess.call(['ls', '-l'])
So, in your case it should probably look something like:
subprocess.call([r'.\Ext\PrintfPC', '/p', r'C:\Leica\DBX', '/l', r'.\joblist.log'])
Have a look at the linked documentation though, because you can also get the output back from the command line execution by using pipes / Popen objects.
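As for hiding the console: a hedged, Windows-only sketch using subprocess's STARTUPINFO support, which asks Windows to start the child with its window hidden:

import subprocess

# Tell Windows not to show the child's console window.
si = subprocess.STARTUPINFO()
si.dwFlags |= subprocess.STARTF_USESHOWWINDOW
si.wShowWindow = subprocess.SW_HIDE

proc = subprocess.Popen(
    [r'.\Ext\PrintfPC', '/p', r'C:\Leica\DBX', '/l', r'.\joblist.log'],
    startupinfo=si)
proc.wait()  # block until it finishes; use proc.terminate() to kill it early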
I have to make graphs from several files with data. I already found a way to run a simple command
xmgrace -batch batch.bfile -nosafe -hardcopy
in which batch.bfile is a text file with grace commands to print the graph I want. I already tried it manually and it works perfectly. To do this with several files I just have to edit one parameter inside batch.bfile and run the same command every time I make a change.
I have already written a Python script which edits batch.bfile and loops over all the data files. In each iteration I want to run the above command directly, as if typed on the command line.
After searching a bit I found two solutions, one with os.system() and another with subprocess.Popen(), and the only one I could make run without errors was:
subprocess.Popen("xmgrace -batch batch.bfile -nosafe -hardcopy", shell=True)
Problem is, this doesn't do anything in practice, i.e., it just isn't the same as running the command directly on the command line. I already tried giving the full path to batch.bfile but nothing changed.
I am using Python 2.7 and Mac OS 10.7
Have you checked running xmgrace from the command line using sh? (i.e. invoke /bin/sh, then run xmgrace... which should be the same shell that Popen uses when you set shell=True).
Another solution would be to create a shell script (create a file like myscript.sh and run chmod +x on it from the terminal). In the script, call xmgrace:
#!/bin/bash
xmgrace -batch batch.bfile -nosafe -hardcopy
You could then test that myscript.sh works; it ought to pick up any environment variables set in your profile that might differ from Python's environment. If it works, you can call the script from Python with subprocess.Popen('./myscript.sh'). You can check which environment variables are set for the subprocess by running:
import os
os.environ
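If the environments match and it still fails silently, one hedged alternative is to run xmgrace synchronously and let failures surface (the cwd value is hypothetical; point it at the directory containing batch.bfile):

import subprocess

# check_call raises CalledProcessError if xmgrace exits non-zero,
# so failures become visible instead of silently doing nothing.
subprocess.check_call(
    ['xmgrace', '-batch', 'batch.bfile', '-nosafe', '-hardcopy'],
    cwd='/path/to/your/data')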
You may want to check out http://sourceforge.net/projects/graceplot/
When you use Popen, you can capture the application's output to stdout and stderr and print it within your application - this way you can see what is happening:
from subprocess import Popen, PIPE

# reportParameters is the command to run, as a list, e.g.
# ['xmgrace', '-batch', 'batch.bfile', '-nosafe', '-hardcopy']
ps = Popen(reportParameters, bufsize=512, stdout=PIPE, stderr=PIPE)
while True:
    # Note: reading both pipes line-by-line can block if one stream
    # stays quiet while the other is still producing output.
    stdout = ps.stdout.readline()
    stderr = ps.stderr.readline()
    exitcode = ps.poll()
    if (not stdout and not stderr) and (exitcode is not None):
        break
    if stdout:
        print stdout[:-1]   # strip the trailing newline
    if stderr:
        print stderr[:-1]
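If you don't need the two streams interleaved live, communicate() is simpler and avoids that blocking risk entirely (a hedged sketch using the xmgrace command from the question):

from subprocess import Popen, PIPE

ps = Popen(['xmgrace', '-batch', 'batch.bfile', '-nosafe', '-hardcopy'],
           stdout=PIPE, stderr=PIPE)
out, err = ps.communicate()  # reads both pipes to EOF, then waits for exit
print out
print err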