Refresh a shell subprocess in Python

I have webpy code that sends "ps aux" data to a webpage using a subprocess:
import subprocess

ps = subprocess.Popen(('ps', 'aux'), stdout=subprocess.PIPE)
output = ps.communicate()[0]

# (bunch of webpy stuff)

class index:
    def GET(self):
        return output

# (more webpy to start the web server)
It sends the ps aux data across no problem, but it does not refresh the data: I only get one static snapshot rather than the changing data I need.
How do I refresh the subprocess so it sends new data every time I reload the webpage?

Put the Popen call into the def GET. By the way, if you’re using Python 2.7 or newer, you can use check_output to simplify the actual subprocess call:
def GET(self):
    return subprocess.check_output(['ps', 'aux'])
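For context, a minimal, untested web.py sketch of the whole fix might look like this (the urls mapping and application setup are standard web.py boilerplate assumed here, not taken from the question):

import subprocess
import web

urls = ('/', 'index')

class index:
    def GET(self):
        # the command runs on every request, so each page reload gets fresh ps data
        return subprocess.check_output(['ps', 'aux'])

if __name__ == '__main__':
    app = web.application(urls, globals())
    app.run()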

Related

Python streamlit - subprocess

I have created a streamlit app which runs fine on my local machine. However, I cannot get it running on Streamlit Cloud. In a nutshell, my app does the following:
take some user input
create a markdown file from the input ("deck.md")
convert the markdown file to an HTML file ("deck.html") using an npx command via subprocess
open a new tab showing the HTML file via subprocess
On my local machine I use the following command to do steps 3 and 4:
import subprocess

def markdown2marp(file):
    # Create HTML
    marp_it = f"npx @marp-team/marp-cli@latest --theme custom.css --html {file}"
    file = file.split(".")[0]  # remove .md
    proc = subprocess.run(marp_it, shell=True, stdout=subprocess.PIPE)
    # Open HTML in browser
    subprocess.Popen(['open', f'{file}.html'])
    return proc
On Streamlit Cloud this obviously does not work.
Question: is there a workaround to achieve the described functionality on Streamlit Cloud?
Any help is much appreciated!

Subprocess to open python file and return data

I am trying to use Python to open another file. This file is going to start up a socket and create threads for listening for additional connections, and threads for sending/receiving data. The main thread will not return.
However, if the socket setup fails, I want to return an error code to the other Python script that executed the subprocess.
main.py
py3output = subprocess.check_output(['python3', 'py3.py'])
print('py3 said:' + str(py3output))
py3.py
def returnme():
    return 10

returnme()
When I run this, it prints:
py3 said:b''
I am just trying to figure out how to get the return value back to the main calling program.
To return an exit code n back to the OS, you need sys.exit(n). But it seems you do not want to check the exit code but the stdout output. So your program might need to be rewritten to:
def returnme():
    return 10

print(returnme())
You should return the value as a string on standard output, using code like the following:
sample.py
import sys

def returnme():
    sys.stdout.write(str(10))
    sys.stdout.flush()

returnme()
main.py
from subprocess import check_output

output = check_output(['python', 'sample.py'])
print('sample.py says: ' + output.decode())
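If instead you do want the exit-code route mentioned above, here is a hedged sketch (setup_sockets is a hypothetical stand-in for the asker's socket setup):

py3.py
import sys

def setup_sockets():
    return False  # hypothetical: pretend the socket setup failed

if not setup_sockets():
    sys.exit(1)  # a nonzero exit code reports the error to the caller

main.py
import subprocess

# subprocess.run is Python 3.5+; check the child's exit status directly
proc = subprocess.run(['python3', 'py3.py'])
if proc.returncode != 0:
    print('py3.py failed with exit code', proc.returncode)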

How can Python wait for a batch SGE script to finish execution?

I have a problem I'd like you to help me solve.
I am working in Python and I want to do the following:
call an SGE batch script on a server
see if it works correctly
do something
What I do now is approximately the following:
import subprocess

try:
    tmp = subprocess.call(qsub ....)
    if tmp != 0:
        error_handler_1()
    else:
        correct_routine()
except:
    error_handler_2()
My problem is that once the script is sent to SGE, my Python script interprets it as a success and keeps working as if it had finished.
Do you have any suggestion about how I could make the Python code wait for the actual processing result of the SGE script?
Ah, btw, I tried using qrsh but I don't have permission to use it on the SGE.
Thanks!
From your code it sounds like you want the program to wait for the job to finish and return its code, right? If so, the qsub -sync y option is likely what you want:
http://gridscheduler.sourceforge.net/htmlman/htmlman1/qsub.html
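For example, a minimal sketch (job.sh is a placeholder script name): with -sync y, qsub blocks until the job completes and exits with the job's exit status, so the original error handling keeps working:

import subprocess

# -sync y makes qsub wait for the job and propagate its exit status
ret = subprocess.call(['qsub', '-sync', 'y', 'job.sh'])
if ret != 0:
    error_handler_1()
else:
    correct_routine()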
An additional answer for easier processing: the Python drmaa module allows more complete interaction with SGE.
Functioning code from its documentation is below (provided you put a sleeper.sh script in the same directory).
Please note that the -b n option is needed to execute a .sh script; otherwise qsub expects a binary by default.
import drmaa
import os

def main():
    """Submit a job.

    Note: needs a file called sleeper.sh in the current directory.
    """
    s = drmaa.Session()
    s.initialize()
    print('Creating job template')
    jt = s.createJobTemplate()
    jt.remoteCommand = os.getcwd() + '/sleeper.sh'
    jt.args = ['42', 'Simon says:']
    jt.joinFiles = False
    jt.nativeSpecification = "-m abe -M mymail -q so-el6 -b n"
    jobid = s.runJob(jt)
    print('Your job has been submitted with id ' + jobid)
    retval = s.wait(jobid, drmaa.Session.TIMEOUT_WAIT_FOREVER)
    print('Job: {0} finished with status {1}'.format(retval.jobId, retval.hasExited))
    print('Cleaning up')
    s.deleteJobTemplate(jt)
    s.exit()

if __name__ == '__main__':
    main()
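To tie this back to the question's error handling, the job info returned by s.wait can be inspected; a sketch, assuming the exitStatus attribute as documented in drmaa-python:

if retval.hasExited and retval.exitStatus == 0:
    correct_routine()
else:
    error_handler_1()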

Is it possible to stream output from a python subprocess to a webpage in real time?

Thanks in advance for any help. I am fairly new to Python and even newer to HTML.
I have been trying the last few days to create a web page with buttons to perform tasks on a home server.
At the moment I have a python script that generates a page with buttons:
(See the simplified example below; removed code to clean up the post.)
Then a python script which runs said command and outputs to an iframe on the page:
(See the simplified example below; removed code to clean up the post.)
This does output the entire output, but only after the command is finished. I have tried adding the -u option to the Python script to run it unbuffered, and I have tried using the Python subprocess module as well. If it helps, the types of commands I am running are apt-get update and other Python scripts for moving files and fixing folder permissions.
When run from a normal Ubuntu server terminal it runs fine and outputs in real time, and from my research it should be outputting as the command runs.
Can anyone tell me where I am going wrong? Should I be using a different language to perform this function?
EDIT: Simplified example:
initial page:
#runcmd.html
<head>
<title>Admin Tasks</title>
</head>
<center>
<iframe src="/scripts/python/test/createbutton.py" width="650" height="800" frameborder="0" ALLOWTRANSPARENCY="true"></iframe>
<iframe width="650" height="800" frameborder="0" ALLOWTRANSPARENCY="true" name="display"></iframe>
</center>
script that creates button:
# createbutton.py
cmd_page = ('<form action="/scripts/python/test/runcmd.py" method="post" target="display">'
            '<label for="run_update">run updates</label><br>'
            '<input align="Left" type="submit" value="runupdate" name="update" title="run_update">'
            '</form><br>\n')

print("Content-type: text/html")
print("")
print(cmd_page)
script that should run command:
# runcmd.py:
import os
import pexpect
import cgi
import cgitb
import sys

cgitb.enable()
fs = cgi.FieldStorage()
sc_command = fs.getvalue("update")

if sc_command == "runupdate":
    cmd = "/usr/bin/sudo apt-get update"
    pd = pexpect.spawn(cmd, timeout=None, logfile=sys.stdout)
    print("Content-type: text/html")
    print("")
    print("<pre>")
    # logfile=sys.stdout echoes everything pexpect reads,
    # so this loop just drains the output until EOF
    line = pd.readline()
    while line:
        line = pd.readline()
I haven't tested the above simplified example, so I'm unsure if it's functional.
EDIT:
Simplified example should work now.
EDIT:
With Imran's code below, if I open a browser to ip:8000 it displays the output just as if it were running in a terminal, which is exactly what I want. But I am using an Apache server for my website and an iframe to display the output. How do I do that with Apache?
EDIT:
I now have the output going to the iframe using Imran's example below, but it still seems to buffer. For example:
If I have the script (run through the web server with curl ip:8000) run apt-get update, it works fine in a terminal, but when outputting to the web page it seems to buffer a couple of lines => output => buffer => output until the command is done.
But running other Python scripts the same way buffers and then outputs everything at once, even with the -u flag, while running curl ip:8000 in a terminal outputs like normal.
Is that just how it is supposed to work?
EDIT 19-03-2014:
Any bash/shell command I run using Imran's approach seems to output to the iframe in near real time, but if I run any kind of Python script through it, the output is buffered and then sent to the iframe all at once.
Do I possibly need to PIPE the output of the Python script that is run by the script that runs the web server?
You need to use HTTP chunked transfer encoding to stream unbuffered command-line output. CherryPy's wsgiserver module has built-in support for chunked transfer encoding. WSGI applications can be either functions that return lists of strings, or generators that produce strings. If you use a generator as the WSGI application, CherryPy will use chunked transfer automatically.
Let's assume this is the program, of which the output will be streamed.
# slowprint.py
import sys
import time

for i in xrange(5):
    print i
    sys.stdout.flush()
    time.sleep(1)
This is our web server.
2014 Version (older CherryPy version)
# webserver.py
import subprocess
from cherrypy import wsgiserver

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    proc = subprocess.Popen(['python', 'slowprint.py'], stdout=subprocess.PIPE)
    line = proc.stdout.readline()
    while line:
        yield line
        line = proc.stdout.readline()

server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 8000), application)
server.start()
2018 Version
#!/usr/bin/env python2
# webserver.py
import subprocess
import cherrypy

class Root(object):
    def index(self):
        def content():
            proc = subprocess.Popen(['python', 'slowprint.py'], stdout=subprocess.PIPE)
            line = proc.stdout.readline()
            while line:
                yield line
                line = proc.stdout.readline()
        return content()
    index.exposed = True
    index._cp_config = {'response.stream': True}

cherrypy.quickstart(Root())
Start the server with python webserver.py, then in another terminal make a request with curl and watch the output being printed line by line:
curl 'http://localhost:8000'

Python telnetlib telnet.write() dropping messages

Given the following code:
import telnetlib
import sys

def func1(IP, user, passw):
    t = telnetlib.Telnet(IP)
    t.write(user.encode('ascii') + b'\n')
    t.write(passw.encode('ascii') + b'\n')
    return t

def func2(t, command):
    t.write(command.encode('ascii') + b'\n')
    print(command)

user = sys.argv[1]
passw = sys.argv[2]
IP = sys.argv[3]

t = func1(IP, user, passw)
for i in range(6):
    func2(t, "message " + str(i))
By looking at the server and also Wiresharking it, only messages 1 and 2 get through.
Now, if I change the func2(t, command) function as follows:
def func2(t, command):
    t.write(command.encode('ascii') + b'\n')
    t.read_eager()  # This is the new line.
    print(command)
It all works fine and all messages are transmitted.
Any idea why?
Python 3.3, Windows XP
You need to read the text coming back to prevent the socket from blocking; this is why your other method works.
In a real-world session you would always be reading back the results of logging in and of commands.
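A minimal sketch of that pattern (the host and prompt strings below are assumptions; adjust them to your server):

import telnetlib

t = telnetlib.Telnet('192.0.2.1')  # placeholder IP
t.read_until(b'login: ')           # consume the login prompt
t.write(b'user\n')
t.read_until(b'Password: ')
t.write(b'secret\n')
for i in range(6):
    t.write(('message %d\n' % i).encode('ascii'))
    t.read_very_eager()            # drain any echo/response so writes keep flowing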
