So I have a Python script that connects to the client servers and gets some data that I need.
It works like this: my bash script on the client side needs input like the call below, and it works when called this way.
client.exec_command('/apps./tempo.sh' 2016 10 01 02 03))
Now I'm trying to take user input in my Python script and pass it to the remotely called bash script, and that's where I run into problems. Below is what I tried, with no luck.
import sys
client.exec_command('/apps./tempo.sh', str(sys.argv))
I believe you are using Paramiko; you should tag it or include that information in your question.
The basic problem I think you're having is that you need to include those arguments inside the string, i.e.
client.exec_command('/apps./tempo.sh %s' % str(sys.argv))
otherwise they get applied to the other arguments of exec_command. I suspect your original example is not quite accurate in how it works.
Just out of interest, have you looked at Fabric (http://www.fabfile.org)? It has lots of very handy functions such as run, which will run a command on a remote server (or lots of remote servers!) and return the response to you.
It also gives you lots of protection by wrapping popen and paramiko for the SSH login and so on, so it can be much more secure than trying to build web services or other workarounds.
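For illustration, a minimal sketch with Fabric 2.x; the host name and user below are placeholders, not values from your setup:
from fabric import Connection

# Hypothetical host/user; replace with your own connection details.
conn = Connection('client.example.com', user='deploy')
result = conn.run('/apps./tempo.sh 2016 10 01 02 03', hide=True)
print(result.stdout)  # output of the remote command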
You should always be wary of injection attacks. I'm unclear how you are passing your variables in, but if a user calls your script with something like python runscript "; rm -rf /", that could cause very bad problems for you. It would be better to have 'options' on the command, which are programmed in, limiting the user's input drastically, or at least a lot of protection around the input variables. Of course, if this is only for you (or trained people), then it's a little easier.
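As a rough sketch of that protection (assuming Python 3 and Paramiko's exec_command as above), quoting each user-supplied argument with shlex.quote before building the remote command string helps:
import shlex
import sys

# Quote every user-supplied argument so shell metacharacters such as ';'
# are passed literally instead of being interpreted by the remote shell.
args = ' '.join(shlex.quote(arg) for arg in sys.argv[1:])
client.exec_command('/apps./tempo.sh ' + args)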
I recommend using paramiko for the ssh connection.
import paramiko
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server, username=user, password=password)
...
ssh_client.close()
And if you want to simulate a terminal, as if a user were typing:
chan=ssh_client.invoke_shell()
chan.send('PS1="python-ssh:"\n')
import time  # needed for the polling sleep below

def exec_command(cmd):
    """Send an ssh command, wait for it to finish, and return its output."""
    prompt = 'python-ssh:'  # the command line prompt set in the ssh terminal above
    buff = ''
    chan.send(str(cmd) + '\n')
    while not chan.recv_ready():
        time.sleep(1)
    while not buff.endswith(prompt):
        buff += chan.recv(1024).decode()  # accumulate output until the prompt reappears
    return buff[:-len(prompt)]  # strip the trailing prompt from the returned output
Example usage: exec_command('pwd')
And the result would even be returned to you via SSH.
Assuming that you are using paramiko you need to send the command as a string. It seems that you want to pass the command line arguments passed to your Python script as arguments for the remote command, so try this:
import sys
command = '/apps./tempo.sh'
args = ' '.join(sys.argv[1:]) # all args except the script's name!
client.exec_command('{} {}'.format(command, args))
This will collect all the command line arguments passed to the Python script, except the first argument, which is the script's file name, and build a space-separated string. The argument string is then concatenated with the bash script command and executed remotely.
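For example, assuming the hypothetical invocation python myscript.py 2016 10 01 02 03, the pieces combine like this:
# sys.argv          -> ['myscript.py', '2016', '10', '01', '02', '03']
# ' '.join(...)     -> '2016 10 01 02 03'
# command executed  -> '/apps./tempo.sh 2016 10 01 02 03'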
Related
I've been using the following shell command to read the image off a scanner named scanner_name and save it in a file named file_name:
scanimage -d <scanner_name> --resolution=300 --format=tiff --mode=Color 2>&1 > <file_name>
This has worked fine for my purposes.
I'm now trying to embed this in a Python script. What I need is to save the scanned image, as before, into a file and also capture any standard output (say, error messages) into a string.
I've tried
scan_result = os.system('scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name))
But when I run this in a loop (with different scanners), there is an unreasonably long lag between scans, and the images aren't saved until the next scan starts (the file is created empty and is not filled until the next scanning command). All this happens with scan_result == 0, i.e. indicating no error.
The subprocess method run() has been suggested to me, and I have tried
with open(file_name, 'w') as scanfile:
    input_params = '-d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name)
    scan_result = subprocess.run(["scanimage", input_params], stdout=scanfile, shell=True)
but this saved the image in some kind of an unreadable file format
Any ideas as to what may be going wrong? Or what else I can try that will allow me to both save the file and check the success status?
subprocess.run() is definitely preferred over os.system() but neither of them as such provides support for running multiple jobs in parallel. You will need to use something like Python's multiprocessing library to run several tasks in parallel (or painfully reimplement it yourself on top of the basic subprocess.Popen() API).
You also have a basic misunderstanding about how to run subprocess.run(). You can pass in either a string and shell=True or a list of tokens and shell=False (or no shell keyword at all; False is the default).
with_shell = subprocess.run(
    "scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} ".format(
        scanner, file_name), shell=True)
with open(file_name, 'wb') as write_handle:
    no_shell = subprocess.run([
        "scanimage", "-d", scanner, "--resolution=300", "--format=tiff",
        "--mode=Color"], stdout=write_handle)
You'll notice that the latter does not support redirection (because that's a shell feature) but this is reasonably easy to implement in Python. (I took out the redirection of standard error -- you really want error messages to remain on stderr!)
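If you do want those error messages captured as a string while the image still goes to the file, a rough sketch without a shell might look like this (variable names are the ones from the question):
import subprocess

# Write the image data (stdout) to the file and capture any error
# messages (stderr) into a string, with no shell involved.
with open(file_name, "wb") as write_handle:
    proc = subprocess.run(
        ["scanimage", "-d", scanner, "--resolution=300",
         "--format=tiff", "--mode=Color"],
        stdout=write_handle, stderr=subprocess.PIPE)
error_text = proc.stderr.decode()
if proc.returncode != 0:
    print("scan failed:", error_text)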
If you have a larger working Python program this should not be awfully hard to integrate with a multiprocessing.Pool(). If this is a small isolated program, I would suggest you peel off the Python layer entirely and go with something like xargs or GNU parallel to run a capped number of parallel subprocesses.
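For the parallel part, a minimal sketch with multiprocessing.Pool, assuming you have a list of (scanner, file_name) pairs to work through:
import subprocess
from multiprocessing import Pool

def scan_one(job):
    scanner, file_name = job
    with open(file_name, "wb") as handle:
        return subprocess.run(
            ["scanimage", "-d", scanner, "--resolution=300",
             "--format=tiff", "--mode=Color"],
            stdout=handle).returncode

# Hypothetical job list; on Windows, wrap the Pool usage in
# an "if __name__ == '__main__':" guard.
jobs = [("scanner1", "scan1.tiff"), ("scanner2", "scan2.tiff")]
with Pool(len(jobs)) as pool:
    results = pool.map(scan_one, jobs)  # list of return codes, 0 means success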
I suspect the issue is you're opening the output file, and then running the subprocess.run() within it. This isn't necessary. The end result is, you're opening the file via Python, then having the command open the file again via the OS, and then closing the file via Python.
Just run the subprocess and let the scanimage ... 2>&1 > filename command create the file (just as it would if you ran scanimage at the command line directly).
I think subprocess.check_output() is now the preferred method of capturing the output.
I.e.
from subprocess import check_output

# Command must be a list, with all parameters as separate list items.
# Shell redirections such as 2>&1 > file are a shell feature and cannot be
# passed as list items; check_output captures stdout (the image data) instead,
# and we write it to the file ourselves.
command = ['scanimage',
           '-d{}'.format(scanner),
           '--resolution=300',
           '--format=tiff',
           '--mode=Color']
scan_result = check_output(command)  # raises CalledProcessError if the scan fails
with open(file_name, 'wb') as f:
    f.write(scan_result)
However (with both run and check_output), shell=True is a big security risk, especially if the input parameters come into the Python script from outside. People can pass in unwanted commands and have them run in the shell with the permissions of the script.
Sometimes shell=True is necessary for the OS command to run properly; in that case the best recommendation is to use an actual Python module to interface with the scanner, rather than having Python pass an OS command to the OS.
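As an illustration of that last point, a very rough sketch with the third-party python-sane module; whether the module and the option names (resolution, mode) work for you depends on your SANE backend, so treat all of this as an assumption to verify:
import sane  # third-party package "python-sane"; assumed installed

sane.init()
devices = sane.get_devices()    # list of (device_name, vendor, model, type) tuples
dev = sane.open(devices[0][0])  # open the first scanner found
dev.resolution = 300            # option names depend on the backend
dev.mode = 'color'
image = dev.scan()              # returns a PIL image
image.save(file_name)
dev.close()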
I'm trying to run the command which solsql over SSH in a Python script.
I think the problem is in the ssh command and not the Python part, but maybe it's both.
I tried
subprocess.check_output("ssh root#IP which solsql",
stderr=subprocess.STDOUT, shell=True)
but I get an error.
I tried to run the command manually:
ssh root@{server_IP} "which solsql"
and I get a different output.
On the server I get the real path (/opt/solidDB/soliddb-6.5/bin/solsql)
but over SSH I get this:
which: no solsql in
(/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin)
I think what you're looking for is something like paramiko. Here is an example of how to use the library and issue a command to the remote system:
import base64
import paramiko
key = paramiko.RSAKey(data=base64.b64decode(b'AAA...'))
client = paramiko.SSHClient()
client.get_host_keys().add('ssh.example.com', 'ssh-rsa', key)
client.connect('ssh.example.com', username='THE_USER', password='THE_PASSWORD')
stdin, stdout, stderr = client.exec_command('which solsql')
for line in stdout:
print('... ' + line.strip('\n'))
client.close()
When you run a command over SSH, your shell executes a different set of startup files than when you connect interactively to the server. So the fundamental problem is really that the path where this tool is installed is not in your PATH when you connect via ssh from a script.
A common but crude workaround is to force the shell to read in the file with the PATH definition you want; but of course that basically requires you to know at least where the correct PATH is set, so you might as well just figure out where exactly the tool is installed in the first place anyway.
ssh server '. .bashrc; type -all solsql'
(assuming that the PATH is set up in your .bashrc, and ignoring for the time being the difference between executing stuff as yourself and as root. The dot and space before .bashrc are quite significant. Notice also how we use the POSIX command type rather than the brittle which command, which should have died a natural but horrible death decades ago.)
If you have a good idea of where the tool might be installed, perhaps instead do
subprocess.check_output(['ssh', 'root@' + ip, '''
for path in /opt/solidDB/*/bin /usr/local/bin /usr/bin; do
test -x "$path/solsql" || continue
echo "$path"
exit 0
done
exit 1'''])
Notice how we also avoid the (here, useless) shell=True. Perhaps see also Actual meaning of 'shell=True' in subprocess
First, you need to debug your error.
Use the code like this:
command = "ssh root#IP which solsql"
try:
retult = subprocess.check_output(command,shell=True,stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
raise RuntimeError("command '{}' return with error (code {}): {}".format(e.cmd, e.returncode, e.output))
print ("Result:", result)
It will show you the error message, and then you'll know what to do: for example, ssh could have asked for a password, or could not find your key, or something else.
I'm running Flask on an Azure server and sending data from a form, via POST, as an argument to a Python script.
Here's how I pass the argument to the script and run it:
os.system("python3 script.py " + postArgument)
The output is displayed normally in the logs as it would on a terminal.
How do I get the output back onto the new web page?
You can use a pipe. Here is how it is done:
os.popen("python3 script.py " + postArgument).read()
From a security perspective, I would suggest you do some sanity checks on postArgument before using it.
EDIT: answering a comment asking why a sanity check is needed.
The code is vulnerable to command injection.
Command injection is an attack in which the goal is execution of arbitrary commands on the host operating system via a vulnerable application. Command injection attacks are possible when an application passes unsafe user-supplied data (forms, cookies, HTTP headers etc.) to a system shell. In this attack, the attacker-supplied operating system commands are usually executed with the privileges of the vulnerable application. Command injection attacks are possible largely due to insufficient input validation.
Let me try to demonstrate a possible attack in your case.
if
postArgument = "blah ; rm -rf /"
then
os.popen("python3 script.py " + postArgument).read()
will be equivalent to
os.popen("python3 script.py blah ; rm -rf /").read()
This will try to remove all the files on the system.
How to avoid this
Either use pipes.quote (shlex.quote in Python 3):
import pipes
p = os.popen("python3 script.py " + pipes.quote(postArgument)).read()
or use subprocess; this is recommended, since the subprocess module is preferred over os.popen:
import subprocess
p = subprocess.Popen(["python3", "script.py", postArguemnt])
Read here about command injection
I am trying to run a Gerrit cherry-pick query in Python:
query_to_run='git fetch https://gerritserver.com/projectname refs/changes/51/1151/1 ' + '&&' + ' git cherry-pick FETCH_HEAD'
I am getting this error:
fatal: Couldn't find remote ref &&
Unexpected end of command stream
My code works with other Gerrit queries but not this one. Is it the && that is causing the problem?
The && token has no meaning to Git or Gerrit but is interpreted by your shell. By default the subprocess module doesn't pass off commands to the shell but runs the process directly, so the string in query_to_run is sent as a single command. To force subprocess.Popen(), subprocess.check_call() or whatever you're using to pass the command to a shell, pass shell=True:
subprocess.check_call(query_to_run, shell=True)
However, the use of shell=True is discouraged and is unnecessary in this case. What && does is simply run one command and, if successful, run another command. It's basically equivalent to this sequence of Python statements:
subprocess.check_call(command1)
subprocess.check_call(command2)
Alternatively, if you prefer not to have exceptions thrown when either of the commands fails:
subprocess.call(command1) == 0 and subprocess.call(command2) == 0
In addition to this, I strongly recommend making a good habit out of passing lists of arguments to process execution functions instead of strings. Passing strings works fine a lot of the time, but when arguments contain spaces you suddenly need to think about quoting.
Putting everything together, this is what I think your code should look like:
try:
    subprocess.check_call(['git', 'fetch',
                           'https://gerritserver.com/projectname',
                           'refs/changes/51/1151/1'])
    subprocess.check_call(['git', 'cherry-pick', 'FETCH_HEAD'])
except (EnvironmentError, subprocess.CalledProcessError):
    # Suitable error handling here. I'm not sure about
    # the possibility of EnvironmentError exceptions.
    pass
Also, a note on terminology: You're talking about Gerrit queries, but using that language might confuse people. By Gerrit query one usually means the Lucene query string entered into the search box in the UI (or the equivalent REST API).
I'm using Python code to run a Hadoop program on a Linux (Cloudera) machine using SSH.
I'm having some trouble with compiling Java files to class files. When I execute the command
javac -cp /usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/* remote_hadoop/javasrc/*
from the Linux terminal, all the files get compiled successfully.
When I'm executing the same command through my Python SSH client I'm receiving an 'invalid flag' error:
spur.results.RunProcessError: return code: 2
output: b''
stderr output: b'javac: invalid flag: remote_hadoop/javasrc\nUsage: javac \nuse -help for a list of possible options\n'
The Python code:
list_of_commands = ["javac", "-cp", r"/usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/*", input_folder + r"/*"]
print ' '.join(list_of_commands)
self.shell.run(list_of_commands)
The command is getting rendered correctly, since what is getting printed is javac -cp /usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/* remote_hadoop/javasrc/*.
UPDATE: It's pretty weird. I can compile one file at a time over ssh, but not all of them. Seems like something happens to the "*" over ssh.
You're passing a list of arguments, not a list of commands. It's not even an accurate list of arguments.
If your underlying tool expects a list of arguments, then pass:
['sh', '-c', 'javac -cp /usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/* remote_hadoop/javasrc/*']
If it expects a list of commands:
['javac -cp /usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/* remote_hadoop/javasrc/*']
If it expects something else -- read the documentation and determine what that something is!
Note that SSH doesn't provide a way to pass a literal argv array when running an arbitrary command; rather, it expects -- at the protocol level -- a string ready for parsing by the remote shell. If your self.shell.run code is doing shell quoting before joining the argument list given, then it would be passing the last argument as the literal string remote_hadoop/javasrc/* -- not expanding it into a list of filenames as a shell would.
Using the sh -c form forces the remote shell to perform expansion on its end, assuming that contents are being given to it in a form which doesn't have remote expansion performed already.
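For example, assuming self.shell is the spur shell object from the question, something along these lines would let the remote shell do the glob expansion:
# Hand the remote shell a single command string via sh -c so that the
# * globs are expanded on the server instead of being treated as literal text.
self.shell.run([
    "sh", "-c",
    "javac -cp /usr/lib/hadoop/*:/usr/lib/hadoop/client-0.20/* remote_hadoop/javasrc/*",
])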
The problem is the way that spur builds the command list into a command string. It takes every command token and encloses it in single quotes (["ls", "*.txt"] becomes 'ls' '*.txt'). There is no shell expansion of * inside quotes, so the command doesn't work.
You can see the problem in spur's ssh.py on line 323:
def escape_sh(value):
return "'" + value.replace("'", "'\\''") + "'"
I don't use spur, but it looks like it just doesn't allow you to do such things. The problem with "simplifiers" like spur is that if they simplify in a way you don't want, you can't use them.