Display Python script output in a web page [server running Flask] - python

I'm running Flask on an Azure server and sending data from a form via POST as an argument to a Python script.
Here's how I pass the argument to the script and run it:
os.system("python3 script.py " + postArgument)
The output is displayed normally in the logs as it would on a terminal.
How do I get the output back onto the new web page?

You can use a pipe. Here is how it is done:
os.popen("python3 script.py " + postArgument).read()
From a security perspective, I would suggest you do some sanity checks on postArgument before using it.
EDIT: answering a comment asking why a sanity check is needed.
The code is vulnerable to command injection.
Command injection is an attack in which the goal is execution of
arbitrary commands on the host operating system via a vulnerable
application. Command injection attacks are possible when an
application passes unsafe user supplied data (forms, cookies, HTTP
headers etc.) to a system shell. In this attack, the attacker-supplied
operating system commands are usually executed with the privileges of
the vulnerable application. Command injection attacks are possible
largely due to insufficient input validation.
Let me try to demonstrate a possible attack in your case. If
postArgument = "blah ; rm -rf /"
then
os.popen("python3 script.py " + postArgument).read()
will be equivalent to
os.popen("python3 script.py blah ; rm -rf /").read()
This will try to remove all the files on the system.
How to avoid this
Either use pipes.quote:
import pipes
p = os.popen("python3 script.py " + pipes.quote(postArgument)).read()
or use subprocess; this is recommended since os.popen is deprecated:
import subprocess
p = subprocess.Popen(["python3", "script.py", postArgument])
Read here about command injection
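To tie this back to the original question of getting the output onto the new web page, here is a minimal sketch of a Flask route that captures the script's output safely. The route name and the form field "arg" are assumptions, not names from the original post.

import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route('/run', methods=['POST'])
def run_script():
    post_argument = request.form['arg']  # hypothetical form field name
    # Passing a list and avoiding the shell sidesteps command injection:
    # the argument arrives at script.py as a single literal value.
    result = subprocess.run(['python3', 'script.py', post_argument],
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    return result.stdout.decode()

With this shape, even a value like "blah ; rm -rf /" is delivered to script.py as one argument rather than being interpreted by a shell.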

Related

How do I embed my shell scanning-script into a Python script?

I've been using the following shell command to read the image off a scanner named scanner_name and save it in a file named file_name:
scanimage -d <scanner_name> --resolution=300 --format=tiff --mode=Color 2>&1 > <file_name>
This has worked fine for my purposes.
I'm now trying to embed this in a Python script. What I need is to save the scanned image, as before, into a file and also capture any standard output (say, error messages) to a string.
I've tried
scan_result = os.system('scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name))
But when I run this in a loop (with different scanners), there is an unreasonably long lag between scans and the images aren't saved until the next scan starts (the file is created as an empty file and is not filled until the next scanning command). All this with scan_result=0, i.e. indicating no error
The subprocess method run() has been suggested to me, and I have tried
with open(file_name, 'w') as scanfile:
    input_params = '-d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name)
    scan_result = subprocess.run(["scanimage", input_params], stdout=scanfile, shell=True)
but this saved the image in some kind of an unreadable file format
Any ideas as to what may be going wrong? Or what else I can try that will allow me to both save the file and check the success status?
subprocess.run() is definitely preferred over os.system() but neither of them as such provides support for running multiple jobs in parallel. You will need to use something like Python's multiprocessing library to run several tasks in parallel (or painfully reimplement it yourself on top of the basic subprocess.Popen() API).
You also have a basic misunderstanding about how to run subprocess.run(). You can pass in either a string and shell=True or a list of tokens and shell=False (or no shell keyword at all; False is the default).
with_shell = subprocess.run(
    "scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {}".format(
        scanner, file_name), shell=True)

with open(file_name, 'wb') as write_handle:
    no_shell = subprocess.run([
        "scanimage", "-d", scanner, "--resolution=300", "--format=tiff",
        "--mode=Color"], stdout=write_handle)
You'll notice that the latter does not support redirection (because that's a shell feature) but this is reasonably easy to implement in Python. (I took out the redirection of standard error -- you really want error messages to remain on stderr!)
If you have a larger working Python program this should not be awfully hard to integrate with a multiprocessing.Pool(). If this is a small isolated program, I would suggest you peel off the Python layer entirely and go with something like xargs or GNU parallel to run a capped number of parallel subprocesses.
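For illustration, here is a rough sketch of what such a multiprocessing.Pool() integration could look like; the scanner device names and the per-scanner file-naming scheme are invented for the example.

import subprocess
from multiprocessing import Pool

def scan_one(scanner):
    # hypothetical naming scheme: one TIFF file per scanner
    file_name = scanner.replace('/', '_') + '.tiff'
    with open(file_name, 'wb') as write_handle:
        result = subprocess.run(
            ['scanimage', '-d', scanner, '--resolution=300',
             '--format=tiff', '--mode=Color'],
            stdout=write_handle)
    return scanner, result.returncode

if __name__ == '__main__':
    scanners = ['scanner_one', 'scanner_two']  # placeholder device names
    with Pool(len(scanners)) as pool:
        for scanner, status in pool.map(scan_one, scanners):
            print(scanner, 'finished with exit status', status)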
I suspect the issue is you're opening the output file, and then running the subprocess.run() within it. This isn't necessary. The end result is, you're opening the file via Python, then having the command open the file again via the OS, and then closing the file via Python.
JUST run the subprocess, and let the scanimage 2>&1> filename command create the file (just as it would if you ran the scanimage at the command line directly.)
I think subprocess.check_output() is now the preferred method of capturing the output.
I.e.
from subprocess import check_output

# Command must be a list, with all parameters as separate list items.
# A shell redirection such as '2>&1>{}' is not a valid list item, so
# capture stdout here and write the image data to the file in Python.
command = ['scanimage',
           '-d{}'.format(scanner),
           '--resolution=300',
           '--format=tiff',
           '--mode=Color']
scan_result = check_output(command)
with open(file_name, 'wb') as f:
    f.write(scan_result)
However (with both run and check_output), shell=True is a big security risk, especially if the input_params come into the Python script externally. People can pass in unwanted commands and have them run in the shell with the permissions of the script.
Sometimes shell=True is necessary for the OS command to run properly, in which case the best recommendation is to use an actual Python module to interface with the scanner, rather than having Python pass an OS command to the OS.
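As one pointer in that direction, the python-sane module wraps the same SANE backend that scanimage uses. A rough sketch, assuming the module is installed and a scanner is attached (option names can vary by backend):

import sane

sane.init()
devices = sane.get_devices()   # list of (device_name, vendor, model, type)
dev = sane.open(devices[0][0])
dev.resolution = 300           # scanner options are exposed as attributes
dev.mode = 'color'
image = dev.scan()             # returns a PIL Image
image.save('scan.tiff')
dev.close()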

execute which command over ssh in python script

I'm trying to run the command which solsql over SSH in a Python script.
I think the problem is in the ssh command and not the Python part, but maybe it's both.
I tried
subprocess.check_output("ssh root#IP which solsql",
stderr=subprocess.STDOUT, shell=True)
but I get an error.
I tried to run the command manually:
ssh root@{server_IP} "which solsql"
and I get a different output.
On the server I get the real path (/opt/solidDB/soliddb-6.5/bin/solsql)
but over SSH I get this:
which: no solsql in
(/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin)
I think what you're looking for is something like paramiko. Here is an example of how to use the library and issue a command to the remote system:
import base64
import paramiko
key = paramiko.RSAKey(data=base64.b64decode(b'AAA...'))
client = paramiko.SSHClient()
client.get_host_keys().add('ssh.example.com', 'ssh-rsa', key)
client.connect('ssh.example.com', username='THE_USER', password='THE_PASSWORD')
stdin, stdout, stderr = client.exec_command('which solsql')
for line in stdout:
    print('... ' + line.strip('\n'))
client.close()
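If pinning the host key in code is inconvenient, one variant is to trust the local known_hosts file instead; the host name and credentials below are placeholders:

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('ssh.example.com', username='THE_USER', password='THE_PASSWORD')
stdin, stdout, stderr = client.exec_command('which solsql')
print(stdout.read().decode())
client.close()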
When you run a command over SSH, your shell executes a different set of startup files than when you connect interactively to the server. So the fundamental problem is really that the path where this tool is installed is not in your PATH when you connect via ssh from a script.
A common but crude workaround is to force the shell to read in the file with the PATH definition you want; but of course that basically requires you to know at least where the correct PATH is set, so you might as well just figure out where exactly the tool is installed in the first place anyway.
ssh server '. .bashrc; type -all solsql'
(assuming that the PATH is set up in your .bashrc; and ignoring for the time being the difference between executing stuff as yourself and as root. The dot and space before .bashrc are quite significant. Notice also how we use the POSIX command type rather than the brittle which command which should have died a natural but horrible death decades ago).
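The same probe can be run from Python without shell=True; a small sketch, assuming ip holds the server address:

import subprocess

out = subprocess.check_output(
    ['ssh', 'root@' + ip, '. .bashrc; type -all solsql'])
print(out.decode())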
If you have a good idea of where the tool might be installed, perhaps instead do
subprocess.check_output(['ssh', 'root@' + ip, '''
for path in /opt/solidDB/*/bin /usr/local/bin /usr/bin; do
test -x "$path/solsql" || continue
echo "$path"
exit 0
done
exit 1'''])
Notice how we also avoid the (here, useless) shell=True. Perhaps see also Actual meaning of 'shell=True' in subprocess
First, you need to debug your error.
Use code like this:
command = "ssh root@IP which solsql"
try:
    result = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    raise RuntimeError("command '{}' returned with error (code {}): {}".format(e.cmd, e.returncode, e.output))
print("Result:", result)
It will output the error message to you, and you'll know what to do: for example, ssh could have asked for a password, or didn't find your key, or something else.

Execute bash-command with "at" (<<<) via python: syntax error, last token seen

I'm using a radio sender on my RPi to control some light-devices at home. I'm trying to implement a time control and had successfully used the program "at" in the past.
#!/usr/bin/python
import subprocess as sp
##### some code #####
sp.call(['at', varTime, '<<<', '\"sudo', './codesend', '111111\"'])
When I execute the program, I receive this error message:
syntax error. Last token seen: <
Garbled time
This code snippet works fine with every command by itself (as long as every parameter is of type string).
It's necessary to call "at" in this way: at 18:25 <<< "sudo ./codesend 111111" to hold the command in the queue (viewable in "atq"),
because sudo ./codesend 111111 | at 18:25 just executes the command directly and records the execution in "/var/mail/user".
My question is: how can I avoid the syntax error?
I'm using a lot of other packages in this program, so I have to stay with Python
I hope someone has a solution for this problem or can help to find my mistake.
Many thanks in advance
Preface: Shared Code
Consider the following context to be part of both branches of this answer.
import subprocess as sp
try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2

# given the command you want to schedule, as an array...
cmd = ['sudo', './codesend', '111111']
# ...generate a safely shell-escaped string.
cmd_str = ' '.join(quote(x) for x in cmd)
Solution A: Feed Stdin In Python
<<< is shell syntax. It has no meaning to at, and it's completely normal and expected for at to reject it if given as a literal argument.
You don't need to invoke a shell, though -- you can do the same thing directly from native Python:
p = sp.Popen(['at', vartime], stdin=sp.PIPE)
p.communicate(cmd_str.encode())  # stdin expects bytes on Python 3
Solution B: Explicitly Invoke A Shell
Moreover, <<< isn't /bin/sh syntax -- it's an extension honored in bash, ksh, and others; so you can't reliably get it just by adding the shell=True flag (which uses /bin/sh and so guarantees only POSIX-baseline features). If you want it, you need to explicitly invoke a shell with the feature, like so:
bash_script = '''
at "$1" <<<"$2"
'''
sp.call(['bash', '-c', bash_script,
         '_',      # this is $0 for that script
         vartime,  # this is its $1
         cmd_str,  # this is its $2
         ])
In either case, note that we're using shlex.quote() or pipes.quote() (as appropriate for our Python release) when generating a shell command from an argument list; this is critical to avoid creating shell injection vulnerabilities in our software.
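Putting Solution A together with the shared code, an end-to-end sketch; the time string is assumed to be in a format at accepts, such as '18:25':

import subprocess as sp
from shlex import quote  # Python 3

vartime = '18:25'                      # assumed at-compatible time string
cmd = ['sudo', './codesend', '111111']
cmd_str = ' '.join(quote(x) for x in cmd)

p = sp.Popen(['at', vartime], stdin=sp.PIPE)
p.communicate(cmd_str.encode())        # at reads the job from stdin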

calling command line app from flask or django

I have a command line tool like as follow:
tool -o output -a authentication.txt -i input.txt
(it gets some files as input)
What I would like to do is provide a web interface for this command line tool using Flask or Django, since the tool is based on Python.
So can you guide me on how I can call the tool and get its results from stdout?
The tool also has a config file, and I want to open and edit it in the web interface too.
My initial idea was to use Python's subprocess module to call the app and get its stdout, but I don't know how wise that is.
There are many ways to do it!
I have no idea what you are trying to do, but a basic example would be something like this. You mention that it is a Python program/app; you can import your command line tool as a package. If it is not a package, then package it (https://packaging.python.org/). Parameters you pass as arguments on the command line, you can pass in your app. In Flask you would do something like this.
pseudo app.py
import tool

@app.route("/")
def index():
    return render_template("index.html")

@app.route('/tool')
def run_tool(input):  # renamed so it does not shadow the imported tool module
    auth = tool.auth()
    output = tool.input(input)
    return output
Assuming Python 3.5:
import subprocess
res = subprocess.run(["tool", "-o", "output", "-a", "authentication.txt", "-i", "input.txt"],
                     stdout=subprocess.PIPE)  # capture stdout; otherwise res.stdout is None
res.stdout
In an earlier version, you could potentially use the following, which would raise a CalledProcessError on non-zero return codes.
print(subprocess.check_output(
    "/usr/local/bin/spam bacon spam",
    stderr=subprocess.STDOUT,
    shell=True))
# => EGGS AND SPAM!
Do be careful when running commands with untrustworthy (e.g. user) input; make sure that arguments are properly sanitized. Remember to test for malicious input, such as attempting to escape the command to run something different.
Try to ensure that the system user (typically the application user / web server user in this case) that winds up executing the command can't muck up too much if/when something inevitably makes it through, by restricting its rights on the machine.
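As a concrete illustration of that sanitization advice (the tool name and flags are taken from the question; the malicious value is made up):

import shlex
import subprocess

user_file = 'input.txt; rm -rf /'  # hypothetical malicious form input
# List form, no shell: the whole string arrives as ONE literal argument,
# so the injected command never reaches a shell.
subprocess.run(['tool', '-o', 'output', '-a', 'authentication.txt',
                '-i', user_file])
# If a shell truly is required, quote the untrusted piece first:
subprocess.run('tool -i {}'.format(shlex.quote(user_file)), shell=True)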

python values to bash line on a remote server

So I have a Python script that connects to the client servers and then gets some data that I need.
My bash script on the client side needs input like the one below, and it works this way:
client.exec_command('/apps./tempo.sh 2016 10 01 02 03')
Now I'm trying to get the user input from my Python script and then pass it to my remotely called bash script, and that's where I run into my problem.
Below is the method I tried, with no luck:
import sys
client.exec_command('/apps./tempo.sh', str(sys.argv))
I believe you are using Paramiko - which you should tag or include that info in your question.
The basic problem I think you're having is that you need to include those arguments inside the string, i.e.
client.exec_command('/apps./tempo.sh %s' % str(sys.argv))
otherwise they get applied to the other arguments of exec_command. I think your original example is not quite accurate in how it works.
Just out of interest, have you looked at "fabric" (http://www.fabfile.org)? It has lots of very handy functions like "run", which will run a command on a remote server (or lots of remote servers!) and return the response.
It also gives you lots of protection by wrapping around popen and paramiko for the ssh login etc., so it can be much more secure than trying to make web services or other things.
You should always be wary of injection attacks. I'm unclear how you are injecting your variables, but if a user calls your script with something like python runscript "; rm -rf /", that would have very bad consequences for you. It would instead be better to have 'options' on the command, which are programmed in, limiting the user's input drastically, or at least to have a lot of protection around the input variables. Of course, if this is only for you (or trained people), then it's a little easier.
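For reference, a minimal sketch in the style of fabric 2.x, which the answer alludes to; the host, user, and remote command are placeholders:

from fabric import Connection

result = Connection('user@server').run('/apps./tempo.sh 2016 10 01 02 03',
                                       hide=True)
print(result.stdout)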
I recommend using paramiko for the ssh connection.
import paramiko
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server, username=user, password=password)
...
ssh_client.close()
And if you want to simulate a terminal, as if a user were typing:
import time

chan = ssh_client.invoke_shell()
chan.send('PS1="python-ssh:"\n')

def exec_command(cmd):
    """Gets ssh command(s), executes them, and returns the output"""
    prompt = 'python-ssh:'  # the command line prompt in the ssh terminal
    buff = ''
    chan.send(str(cmd) + '\n')
    while not chan.recv_ready():
        time.sleep(1)
    while not buff.endswith(prompt):
        buff += chan.recv(1024).decode()  # was: ssh_client.chan.recv(1024)
    return buff[:-len(prompt)]  # strip the trailing prompt from the output
Example usage: exec_command('pwd')
And the result would even be returned to you via ssh
Assuming that you are using paramiko you need to send the command as a string. It seems that you want to pass the command line arguments passed to your Python script as arguments for the remote command, so try this:
import sys
command = '/apps./tempo.sh'
args = ' '.join(sys.argv[1:]) # all args except the script's name!
client.exec_command('{} {}'.format(command, args))
This will collect all the command line arguments passed to the Python script, except the first argument, which is the script's file name, and build a space-separated string. This argument string is then concatenated with the bash script command and executed remotely.
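A hedged hardening of the same idea: quoting each argument ensures that spaces or shell metacharacters in argv cannot alter the remote command line.

import sys
from shlex import quote  # use pipes.quote on Python 2

args = ' '.join(quote(a) for a in sys.argv[1:])
client.exec_command('/apps./tempo.sh {}'.format(args))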
