This may not specifically be an IronPython question, so a Python dev out there might be able to assist.
I want to run python scripts in my .Net desktop app using IronPython, and would like to give users the ability to forcibly terminate a script. Here's my test script (I'm new to Python so it might not be totally correct):-
import atexit
import time
import sys

@atexit.register
def cleanup():
    print 'doing cleanup/termination code'
    sys.exit()

for i in range(100):
    print 'doing something'
    time.sleep(1)
(Note that I might want to specify an "atexit" function in some scripts, allowing them to perform any cleanup during normal or forced termination).
In my .Net code I'm using the following code to terminate the script:
_engine.Runtime.Shutdown();
This results in the script's atexit function being called, but the script doesn't actually terminate - the for loop keeps going. A couple of other SO articles (here and here) say that sys.exit() should do the trick, so what am I missing?
It seems that it's not possible to terminate a running script - at least not in a "friendly" way. One approach I've seen is to run the IronPython engine in another thread, and abort the thread if you need to stop the script.
I wasn't keen on this brute-force approach, which would risk leaving any resources used by the script (e.g. files) open.
In the end, I created a C# helper class like this:-
public class HostFunctions
{
    public bool AbortScript { get; set; }

    // Other properties and functions that I want to expose to the script...
}
When the hosting application wants to terminate the script it sets AbortScript to true. This object is passed to the running script via the scope:-
_hostFunctions = new HostFunctions();
_scriptScope = _engine.CreateScope();
_scriptScope.SetVariable("HostFunctions", _hostFunctions);
In my scripts I just need to strategically place checks to see if an abort has been requested, and deal with it appropriately, e.g.:-
for i in range(100):
    print 'doing something'
    time.sleep(1)
    if HostFunctions.AbortScript:
        cleanup()
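If several scripts need the same pattern, the check can be factored into a small helper. A minimal sketch, assuming the HostFunctions variable injected through the scope above (raising SystemExit should also let an @atexit.register cleanup handler run):

def check_abort():
    # Poll the flag set by the host application; raise SystemExit so the
    # script unwinds and any registered atexit cleanup gets a chance to run.
    if HostFunctions.AbortScript:
        raise SystemExit

for i in range(100):
    print 'doing something'
    time.sleep(1)
    check_abort()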
It seems that if you are using .NET 5 or higher, aborting the thread might not work as you expect.
Thread.Abort() is not supported on .NET 5 or higher and throws PlatformNotSupportedException.
You may find a solution using Thread.Interrupt() instead, but it has slightly different behavior:
It only takes effect while the thread is blocked (e.g. in something like Thread.Sleep()); if your Python script never blocks, it won't stop the script.
It appears you can't Abort a thread twice, but you can Interrupt it twice. So if your Python script uses finally blocks or context managers, you will be able to interrupt it by calling Thread.Interrupt() twice (with some delay between those calls).
Related
I am making a python module to help manage some tasks in Linux (and BSD) - namely managing Linux Containers. I'm aware of a couple of the ways to run terminal commands from python, such as Popen(), call(), and check_call(). When should I use these specific functions? More specifically, when is it proper to use the blocking or non-blocking function?
I have functions which build the commands to be run, which then pass the command (a list) to another function to execute it, using Popen.
Passing a command such as:
['lxc-start', '-n', 'myContainer']
to
...
def executeCommand(self, command, blocking=False):
    try:
        if blocking:
            subprocess.check_call(command)
        else:
            # note: communicate() waits for the process to finish
            (stdout, stderr) = Popen(command, stdout=PIPE).communicate()
            self.logSelf(stdout)
    except:
        as_string = ' '.join(command)
        self.logSelf("Could not execute: " + as_string)  # logging function
        return
...
the code defaults to using Popen(), which is non-blocking. In which cases should I override blocking and let the function perform check_call() instead?
My initial thought was to use a blocking call when the process is a one-time, short-lived operation, such as creating the container, and a non-blocking call when the process runs continuously, such as starting a container.
Am I understanding the purpose of these functions correctly?
To answer the wider question, I would suggest the following (with a concrete sketch after the list):
Use a blocking call when you are doing something which either:
you know will always be quick - regardless of whether it works or fails, or
is critical to your application, where it makes no sense for the application to do anything else unless and until that task completes - for instance connecting to or creating critical resources.
Use non-blocking calls in all other cases if you can, and especially if:
the task could take a while, or
it would be useful to be doing something else while the task executes (even if that is just a GUI update to show progress).
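Applied to the container example, a minimal sketch (the lxc-create template name is illustrative). One caveat about the question's code: Popen() only makes starting the process non-blocking; communicate() waits for the process to exit, so call it only when you are ready to block, or after poll() reports the process has finished:

import subprocess

# Blocking: creating the container is quick and later steps depend on it,
# so wait for it (check_call raises CalledProcessError on failure).
subprocess.check_call(['lxc-create', '-n', 'myContainer', '-t', 'ubuntu'])

# Non-blocking: start the container and carry on with other work.
p = subprocess.Popen(['lxc-start', '-n', 'myContainer'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# ... other work ...

if p.poll() is None:  # still running
    print('lxc-start is still running')
else:
    stdout, stderr = p.communicate()  # safe now; the process has exited
    print('lxc-start exited with status %d' % p.returncode)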
I have a python program which operates an external program and starts a timeout thread. Timeout thread should countdown for 10 minutes and if the script, which operates the external program isn't finished in that time, it should kill the external program.
My thread seems to work fine at first glance; my main script and the thread run simultaneously with no issues. But if a pop-up window appears in the external program, it stops my scripts, so that even the countdown thread stops counting, therefore totally failing at its job.
I assume the issue is that the script calls a blocking function in the API for the external program, which is blocked by the pop-up window. I understand why it blocks my main program, but I don't understand why it blocks my countdown thread. So one possible solution might be to run a separate script for the countdown, but I would like to keep this as clean as possible, and it seems really messy to start a whole script just for that.
I have searched everywhere for a clue, but I didn't find much. There was a reference to the gevent library here:
background function in Python
, but this seems like such a basic task that I don't want to include an external library for it.
I also found a solution which uses a Windows multimedia timer here, but I've never worked with that before and am afraid the code won't be flexible with it. The script is Windows-only, but it should work on all Windows versions from XP on.
For Unix I found signal.alarm which seems to do exactly what I want, but it's not available for Windows. Any alternatives for this?
Any ideas on how to work with this in the most simplified manner?
This is the simplified thread I'm creating (run in IDLE to reproduce the issue):
import threading
import time

class timeToKill():
    def __init__(self, minutesBeforeTimeout):
        self.stop = threading.Event()
        self.countdownFrom = minutesBeforeTimeout * 60

    def startCountdown(self):
        self.countdownThread = threading.Thread(target=self.countdown, args=(self.countdownFrom,))
        self.countdownThread.start()

    def stopCountdown(self):
        self.stop.set()
        self.countdownThread.join()

    def countdown(self, seconds):
        for second in range(seconds):
            if self.stop.is_set():
                break
            else:
                print(second)
                time.sleep(1)

timeout = timeToKill(1)
timeout.startCountdown()
raw_input("Blocking call, waiting for input:\n")
One possible explanation for a function call blocking another Python thread is that CPython uses a global interpreter lock (GIL) and the blocking API call doesn't release it (note: CPython releases the GIL on blocking I/O calls, so your raw_input() example should work as is).
If you can't make the buggy API call release the GIL, then you could use a process instead of a thread, e.g., multiprocessing.Process instead of threading.Thread (the API is the same). Different processes are not limited by the GIL.
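For example, a minimal sketch of the countdown moved into a child process (same idea as the class above, trimmed down; terminate() is the process-level equivalent of the stop event):

import multiprocessing
import time

def countdown(seconds):
    # Runs in a separate process, so a blocking call in the main process
    # (even one that never releases the GIL) cannot stall it.
    for second in range(seconds):
        print(second)
        time.sleep(1)

if __name__ == '__main__':
    p = multiprocessing.Process(target=countdown, args=(60,))
    p.start()
    raw_input("Blocking call, waiting for input:\n")
    p.terminate()  # stop the countdown once the blocking call returns
    p.join()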
For quick and dirty threading, I usually resort to subprocess commands. It is quite robust and OS-independent. It does not give as fine-grained control as the thread and queue modules, but for external calls to programs it generally does nicely. Note that shell=True must be used with caution.
import os
import subprocess
import time

# this can be any command
p1 = subprocess.Popen(["python", "SUBSCRIPTS/TEST.py", "0"], shell=True)
# the process p1 will run in the background - asynchronously; if you want
# to kill it after some time, you need to check whether it is still running

# here do some other tasks/computations
time.sleep(10)

currentStatus = p1.poll()
if currentStatus is None:  # then it is still running
    try:
        p1.kill()  # maybe try os.kill(p1.pid, 2) if p1.kill does not work
    except:
        # do something else if process is done running - maybe do nothing?
        pass
I have two scripts: "autorun.py" and "main.py". I added "autorun.py" as a service to the autorun in my Linux system. Works perfectly!
Now my question is: when I launch "main.py" from my autorun script, and "main.py" runs forever, "autorun.py" never terminates either! So when I do
sudo service autorun-test start
the command also never finishes!
How can I run "main.py" and then exit? And, to finish it up, how can I then stop "main.py" when "autorun.py" is launched with the parameter "stop"? (This is how all other services work, I think.)
EDIT:
Solution:
import sys
import os
import daemon

if sys.argv[1] == "start":
    print "Starting..."
    with daemon.DaemonContext(working_directory="/home/pi/python"):
        execfile("main.py")
else:
    pid = int(open("/home/pi/python/main.pid").read())
    try:
        os.kill(pid, 9)
        print "Stopped!"
    except:
        print "No process with PID " + str(pid)
First, if you're trying to create a system daemon, you almost certainly want to follow PEP 3143, and you almost certainly want to use the daemon module to do that for you.
When I want to launch "main.py" from my autorun script, and "main.py" will run forever, "autorun.py" never terminates as well!
You didn't say how you're running it. If you're doing anything that launches main.py as a child and waits (or, worse, tries to import/execfile/etc. in the same process), you can't do that. Either autorun.py has to launch and detach main.py (or do so indirectly via some external tool), or main.py has to daemonize when launched.
how can I then stop "main.py" when "autorun.py" is launched with the parameter "stop" ?
You need some form of inter-process communication (IPC), and some way for autorun to find the right IPC channel to use.
If you're building a network server, the right answer might be to connect to it as a client. But otherwise, the simplest thing to do is kill the process with a signal.
If you're using the daemon module, it can easily map signals to callbacks. Or, if you don't need any cleanup, just use SIGTERM, which by default will abruptly terminate. If neither of those applies, you will have to set up a custom signal handler (and within that handler do something useful—e.g., set a flag that your main code checks periodically).
How do you know what process to send the signal to? The standard way to do this is to have main.py record its PID in a pidfile at startup. You read that pidfile, and signal whatever process is specified there. (If you get an error because there is no process with that PID, that just means the daemon already quit for some reason—possibly because of an unhandled exception, or even a segfault. You may want to log that, but treat the "stop" as successful otherwise.) Again, if you're using daemon, it does the pidfile stuff for you; if not, you have to do it yourself.
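A minimal sketch of the pidfile-and-signal half of this, with a hypothetical pidfile path (if you're using the daemon module, its pidfile support replaces the hand-written part):

import os
import signal

PIDFILE = '/var/run/mydaemon.pid'  # hypothetical path

# In main.py, at startup (the daemon module can do this for you):
def write_pidfile():
    with open(PIDFILE, 'w') as f:
        f.write(str(os.getpid()))

# In the service script, for the "stop" parameter:
def stop():
    try:
        pid = int(open(PIDFILE).read())
    except IOError:
        print "Not running (no pidfile)"
        return
    try:
        os.kill(pid, signal.SIGTERM)
        print "Stopped!"
    except OSError:
        # No process with that PID: the daemon already quit for some
        # reason, so log it but treat the stop as successful.
        print "Already stopped (stale pidfile)"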
You may want to take a look at the service scripts for daemons that came with your computer. They're probably all written in bash rather than Python, but it's not that hard to figure out what they're doing. Or… just use one of them as a skeleton, in which case you don't really need any bash knowledge; it's just search-and-replace on the name.
If your distro has LSB-style init functions, you can use something like this example. That one does a whole lot more than you need to, but it's a good example of all of the details. Or do it all from scratch with something like this example. This one is doing the pidfile management and the backgrounding from the service script (turning a non-daemon program into a daemon), which you don't need if you're using daemon properly, and it's using SIGHUP instead of SIGTERM. You can google yourself for other examples of init.d service scripts.
But again, if you're just trying to do this for your own system, the best thing to do is look inside the /etc/init.d on your distro. There will be dozens of examples there, and 90% of them will be exactly the same except for the name of the daemon.
I've seen a few of these questions, but haven't found a real answer yet.
I have an application that launches a gstreamer pipe, and then listens to the data it sends back.
In the example application I based mine on, it ends with this piece of code:
gtk.main()
There is no gtk window, but this piece of code does cause it to keep running; without it, the program exits.
Now, I have read about constructs using while True:, but they include the sleep command, and if I'm not mistaken that will cause my application to freeze for the duration of the sleep, so ...
Is there a better way, without using gtk.main()?
gtk.main() runs an event loop. It doesn't exit, and it doesn't just freeze up doing nothing, because inside it has code kind of like this:
while True:
    timeout = timers.earliest() - datetime.now()
    try:
        message = wait_for_next_gui_message(timeout)
    except TimeoutError:
        handle_any_expired_timers()
    else:
        handle_message(message)
That wait_for_next_gui_message function is a wrapper around different platform-specific functions that wait for X11, WindowServer, the unnamed thing in Windows, etc. to deliver messages like "user clicked your button" or "user hit ctrl-Q".
If you call serve_forever() on an HTTPServer, run a Twisted reactor, or similar, it's doing exactly the same thing, except that inside it's a wait_for_next_network_message(sources, timeout) function, which wraps something like select.select, where sources is a list of all of your sockets.
If you're listening on a gstreamer pipe, your sources can just be that pipe, and the wait_for_next function can just be select.select.
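A minimal sketch of that wait_for_next function (pipe, handle_data, and handle_any_expired_timers stand in for the placeholders from the pseudocode above; note that on Windows, select.select only works on sockets):

import os
import select

def wait_for_next(sources, timeout):
    # Block until at least one source is readable, or the timeout
    # (in seconds) expires; returns the readable subset.
    readable, _, _ = select.select(sources, [], [], timeout)
    return readable

while True:
    for source in wait_for_next([pipe], timeout=1.0):
        handle_data(os.read(source.fileno(), 4096))
    handle_any_expired_timers()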
Or, of course, you could use a networking framework like twisted.
However, you don't need to design your app this way. If you don't need to wait for multiple sources, you can just block:
while True:
    data = pipe.read()
    handle_data(data)
Just make sure the pipe is not set to nonblocking. If you're not sure, you can use setblocking on a socket, fcntl on a Unix pipe, or something I can't remember off the top of my head on a Windows pipe to make sure.
In fact, even if you need to wait for multiple sources, you can do this by putting a blocking loop for each source into a separate thread (or process). This won't work for thousands of sockets (although you can use greenlets instead of threads for that case), but it's fine for 3, or 30.
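A minimal sketch of that arrangement, assuming three hypothetical pipe-like sources (the Queue module is named queue in Python 3):

import threading
import Queue  # 'queue' in Python 3

events = Queue.Queue()

def reader(source):
    # One blocking loop per source; each thread just forwards the data.
    while True:
        events.put(source.read())

for source in (pipe1, pipe2, pipe3):  # hypothetical sources
    t = threading.Thread(target=reader, args=(source,))
    t.setDaemon(True)
    t.start()

# The main loop now blocks on a single queue instead of multiple sources.
while True:
    handle_data(events.get())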
I've become a fan of the Cmd class. It gives you a shell prompt for your programs and will stay in the loop while waiting for input. Here's the link to the docs. It might do what you want.
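For instance, a minimal sketch (the status command is just a placeholder):

import cmd

class MyShell(cmd.Cmd):
    prompt = '> '

    def do_status(self, line):
        """Report what the pipeline is doing."""
        print 'still listening...'

    def do_quit(self, line):
        """Exit the shell."""
        return True  # returning a true value stops cmdloop()

MyShell().cmdloop()  # blocks here, dispatching commands as they arrive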
Let's say I have this blob of code that's made to be one long-running thread of execution, to poll for events and fire off other events (in my case, using XMLRPC calls). It needs to be refactored into clean objects so it can be unit tested, but in the meantime I want to capture some of its current behavior in some integration tests, treating it like a black box. For example:
# long-lived code
import xmlrpclib
s = xmlrpclib.ServerProxy('http://XXX:yyyy')

def do_stuff():
    while True:
        ...
        if s.xyz():
            s.do_thing(...)
# test code
import threading, time

# stub out xmlrpclib

def run_do_stuff():
    other_code.do_stuff()

def setUp():
    global t
    t = threading.Thread(target=run_do_stuff)
    t.setDaemon(True)

def tearDown():
    # somehow kill t
    t.join()

def test1():
    t.start()
    time.sleep(5)
    assert some_XMLRPC_side_effects
The last big issue is that the code under test is designed to run forever, until a Ctrl-C, and I don't see any way to force it to raise an exception or otherwise kill the thread so I can start it up from scratch without changing the code I'm testing. I lose the ability to poll any flags from my thread as soon as I call the function under test.
I know this is really not how tests are designed to work, integration tests are of limited value, etc, etc, but I was hoping to show off the value of testing and good design to a friend by gently working up to it rather than totally redesigning his software in one go.
The last big issue is that the code under test is designed to run forever, until a Ctrl-C, and I don't see any way to force it to raise an exception or otherwise kill the thread
The point of Test-Driven Development is to rethink your design so that it is testable.
Looping forever -- while seemingly fine for production use -- is untestable.
So make the loop terminate. It won't hurt production. It will improve testability.
The "designed to run forever" is not designed for testability. So fix the design to be testable.
I think I found a solution that does what I was looking for: Instead of using a thread, use a separate process.
I can write a small python stub to do mocking and run the code in a controlled way. Then I can write the actual tests to run my stub in a subprocess for each test and kill it when each test is finished. The test process could interact with the stub over stdio or a socket.
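A minimal sketch of that arrangement (stub_runner.py is a hypothetical script that stubs out xmlrpclib and then calls other_code.do_stuff(); some_XMLRPC_side_effects is the placeholder from the test code above):

import subprocess
import time

def setUp():
    global proc
    proc = subprocess.Popen(['python', 'stub_runner.py'])

def tearDown():
    proc.kill()  # unlike a thread, a child process can always be killed
    proc.wait()

def test1():
    time.sleep(5)
    assert some_XMLRPC_side_effects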