Do not exit python program, but keep running - python

I've seen a few of these questions, but haven't found a real answer yet.
I have an application that launches a gstreamer pipe, and then listens to the data it sends back.
In the example application I based mine on, it ends with this piece of code:
gtk.main()
There is no gtk window, but this piece of code does cause it to keep running; without it, the program exits.
Now, I have read about constructs using while True:, but they rely on the sleep command, and if I'm not mistaken that will cause my application to freeze for the duration of the sleep.
Is there a better way, without using gtk.main()?

gtk.main() runs an event loop. It doesn't exit, and it doesn't just freeze up doing nothing, because inside it has code kind of like this:
while True:
    timeout = timers.earliest() - datetime.now()
    try:
        message = wait_for_next_gui_message(timeout)
    except TimeoutError:
        handle_any_expired_timers()
    else:
        handle_message(message)
That wait_for_next_gui_message function is a wrapper around different platform-specific functions that wait for X11, WindowServer, the unnamed thing in Windows, etc. to deliver messages like "user clicked your button" or "user hit ctrl-Q".
If you call serve_forever() on an HTTPServer, run a twisted reactor, etc., it's doing exactly the same thing, except that it's a wait_for_next_network_message(sources, timeout) function, which wraps something like select.select, where sources is a list of all of your sockets.
If you're listening on a gstreamer pipe, your sources can just be that pipe, and the wait_for_next function can just be select.select.
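A minimal sketch of that select-based loop, assuming pipe is a file-like object with a fileno() and handle_data is your own handler:

import select

def run(pipe, handle_data, timeout=1.0):
    while True:
        readable, _, _ = select.select([pipe], [], [], timeout)
        if pipe in readable:
            data = pipe.read()        # or os.read(pipe.fileno(), 4096)
            if not data:              # EOF: the other end closed the pipe
                break
            handle_data(data)
        # else: the timeout expired; a real loop could run timers here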
Or, of course, you could use a networking framework like twisted.
However, you don't need to design your app this way. If you don't need to wait for multiple sources, you can just block:
while True:
    data = pipe.read()
    handle_data(data)
Just make sure the pipe is not set to nonblocking. If you're not sure, you can use setblocking on a socket, fcntl on a Unix pipe, or something I can't remember off the top of my head on a Windows pipe to make sure.
In fact, even if you need to wait for multiple sources, you can do this, by putting a blocking loop for each source into a separate thread (or process). This won't work for thousands of sockets (although you can use greenlets instead of threads for that case), but it's fine for 3, or 30.
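For example, a rough sketch of that thread-per-source approach, where sources and handle_data are placeholders for your own objects:

import threading

def reader(source, handle_data):
    # block on one source forever; each source gets its own thread
    while True:
        data = source.read()
        if not data:
            break
        handle_data(data)

def start_readers(sources, handle_data):
    threads = []
    for source in sources:
        t = threading.Thread(target=reader, args=(source, handle_data))
        t.daemon = True    # don't keep the process alive just for the readers
        t.start()
        threads.append(t)
    return threads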

I've become a fan of the Cmd class. It gives you a shell prompt for your programs and stays in its loop while waiting for input. The cmd module docs cover the details. It might do what you want.
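For illustration, a minimal cmd.Cmd subclass; the command names are made up, but the pattern is the standard one: cmdloop() blocks and keeps the program alive until a do_* handler returns True.

import cmd

class MyShell(cmd.Cmd):
    prompt = '> '

    def do_status(self, arg):
        # placeholder: report whatever your pipeline is doing
        print('still running')

    def do_quit(self, arg):
        # returning True stops cmdloop() and lets the program exit
        return True

if __name__ == '__main__':
    MyShell().cmdloop()    # blocks here, keeping the program alive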

Related

python: Create a detached process and communicate with it via command line

I want to create a command-line tool which is a countdown timer with some custom features I need.
My idea is to use a python script to fire up a process which does the work in the background (e.g. play sound when close to the end). Once the timer process is running I would like to communicate with it via the command line (send inquiries like 'remaining' or commands like 'start XXmin' and 'stop'). There should be only a single instance of the timer process, of course.
Usage might look like
>>> timer start 25min
>>> timer remaining
17:34 min remaining
>>> timer stop
timer stopped.
>>> timer start 90sec
What would the timer process need to look like to do its work while waiting for messages to arrive? What, in turn, would the interface script need to do to fire up the process and to communicate with it later? Is using a separate process the best idea to achieve my goal?
I have no clue how to go about it. My idea sounds very simple yet almost all of what I found is concerned with the concurrency of child processes of a parent script, which is not what I want.
Thank you.
What you're looking for here is a basic client-server architecture. You'll need two programs - one which runs in the background and listens for messages (the server), and a second that sends messages to the server, and does something with the responses (the client).
There are a lot of ways to do this, and the area is legitimately complex, so don't expect it to be super easy. For just starting out, I'd recommend you just try to use a simple http.server server using the standard library (http server module). For the client side, I'd recommend the requests library. HTTP is definitely not the best possible choice for a local client-server setup, but with the existing libraries it's going to be by far the easiest to get up and running, and once you're comfortable with that, you can look into other approaches if you want to.
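As a rough sketch of that idea (the port, URL path, and response text are made up, and real code would need error handling and actual timer state):

from http.server import BaseHTTPRequestHandler, HTTPServer

class TimerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/remaining':
            body = b'17:34 min remaining'   # placeholder for real timer state
        else:
            body = b'unknown command'
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('127.0.0.1', 8765), TimerHandler).serve_forever()

The client script could then be little more than print(requests.get('http://127.0.0.1:8765/remaining').text) per command.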
The easy way would be to use & in the shell to execute your script in the background. And then communicate with the process with just USR1 and USR2 signals.
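A minimal sketch of that signal-based approach, assuming a Unix system; the handler bodies are placeholders, and you would poke the process with kill -USR1 <pid> or kill -USR2 <pid>:

import signal

def on_usr1(signum, frame):
    print('got SIGUSR1: report remaining time here')

def on_usr2(signum, frame):
    print('got SIGUSR2: stop the timer here')

signal.signal(signal.SIGUSR1, on_usr1)
signal.signal(signal.SIGUSR2, on_usr2)

while True:
    signal.pause()    # sleep until the next signal arrives (Unix only)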
In Python, I guess the easiest way would be to use the daemon module.
import daemon

def do_something():
    pass

if __name__ == "__main__":
    with daemon.DaemonContext():
        do_something()
Or you could fork() your own daemon process.
import os

def doSomething():
    pass

def createDaemon():
    try:
        # Store the Fork PID
        pid = os.fork()
        if pid > 0:
            print 'PID: %d' % pid
            os._exit(0)
        os.chdir("/")
        os.setsid()
        os.umask(0)
    except OSError, error:
        print 'Unable to fork. Error: %d (%s)' % (error.errno, error.strerror)
        os._exit(1)
    doSomething()
Then, for example, you could use os.pipe to communicate with that daemon process. Or in this simple case, on a *nix system, even just signals.
Another option is to use multiprocessing module to create the daemon process and also to communicate with it.
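For example, a rough sketch with multiprocessing, where the timer runs in its own process and receives made-up command strings over a Queue (an outline only, not a full daemon):

import multiprocessing

def timer_worker(commands):
    while True:
        cmd = commands.get()       # blocks until a command arrives
        if cmd == 'stop':
            break
        # handle 'start ...', 'remaining', etc. here

if __name__ == '__main__':
    commands = multiprocessing.Queue()
    p = multiprocessing.Process(target=timer_worker, args=(commands,))
    p.start()
    commands.put('start 25min')    # made-up command strings
    commands.put('stop')
    p.join()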

How to integrate killable processes/thread in Python GUI?

Kind all, I'm really new to python and I'm facing a task which I can't completely grasp.
I've created an interface with Tkinter which should accomplish a couple of apparently easy feats.
By clicking a "Start" button two threads/processes will be started (each calling multiple subfunctions) which mainly read data from a serial port (one port per process, of course) and write them to file.
The I/O actions are looped within a while loop with a very high counter to allow them to go onward almost indefinitely.
The "Stop" button should stop the acquisition and essentially it should:
Kill the read/write Thread
Close the file
Close the serial port
Unfortunately I still do not understand how to accomplish point 1, i.e.: how to create killable threads without killing the whole GUI. Is there any way of doing this?
Thank you all!
First, you have to choose whether you are going to use threads or processes.
I will not go too much into the differences, google it ;) Anyway, here are some things to consider: it is much easier to establish communication between threads than between processes; in Python, all threads will run on the same CPU core (see Python GIL), but subprocesses may use multiple cores.
Processes
If you are using subprocesses, there are two ways: subprocess.Popen and multiprocessing.Process. With Popen you can run anything, whereas Process gives a simpler, thread-like interface for running Python code that is part of your project in a subprocess.
Both can be killed using the terminate method.
See the documentation for multiprocessing and subprocess.
Of course, if you want a more graceful exit, you will want to send an "exit" message to the subprocess, rather than just terminate it, so that it gets a chance to do the clean-up. You could do that e.g. by writing to its stdin. The process should read from stdin and when it gets message "exit", it should do whatever you need before exiting.
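A sketch of that idea, assuming a hypothetical worker.py that reads commands from stdin; the 'exit' message and the 5-second grace period are arbitrary choices:

import subprocess

proc = subprocess.Popen(['python', 'worker.py'], stdin=subprocess.PIPE)
# ... later, when the user clicks Stop:
proc.stdin.write(b'exit\n')
proc.stdin.flush()
try:
    proc.wait(timeout=5)              # Python 3.3+: give it a chance to clean up
except subprocess.TimeoutExpired:
    proc.terminate()                  # fall back to the hard way

The worker side would then be a loop over sys.stdin that breaks (after doing its clean-up) when it reads the line "exit".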
Threads
For threads, you have to implement your own mechanism for stopping, rather than using something as violent as process.terminate().
Usually, a thread runs in a loop and in that loop you check for a flag which says stop. Then you break from the loop.
I usually have something like this:
import threading
from threading import Thread

SLEEP_TIME = 0.5  # how often the loop wakes up to check the stop flag

class MyThread(Thread):
    def __init__(self):
        super(MyThread, self).__init__()
        self._stop_event = threading.Event()

    def run(self):
        while not self._stop_event.is_set():
            # do something
            self._stop_event.wait(SLEEP_TIME)
        # clean-up before exit

    def stop(self, timeout):
        self._stop_event.set()
        self.join(timeout)
Of course, you need some exception handling etc, but this is the basic idea.
EDIT: Answers to questions in comment
thread.start_new_thread(your_function) starts a new thread, that is correct. On the other hand, the threading module gives you a higher-level API which is much nicer.
With threading module, you can do the same with:
t = threading.Thread(target=your_function)
t.start()
or you can make your own class which inherits from Thread and put your functionality in the run method, as in the example above. Then, when user clicks the start button, you do:
t = MyThread()
t.start()
You should store the t variable somewhere. Exactly where depends on how you designed the rest of your application. I would probably have some object which holds all active threads in a list.
When user clicks stop, you should:
t.stop(some_reasonable_time_in_which_the_thread_should_stop)
After that, you can remove t from your list; it is not usable any more.
First you can use subprocess.Popen() to spawn child processes, then later you can use Popen.terminate() to terminate them.
Note that you could also do everything in a single Python thread, without subprocesses, if you want to. It's perfectly possible to "multiplex" reading from multiple ports in a single event loop.

Is there any way to terminate a running function from a thread?

I've tried lately to write my own socket server in Python.
While I was writing a thread to handle server commands (a sort of command line in the server), I tried to implement code that will restart the server when raw_input() receives a specific command.
Basically, I want to restart the server as soon as the "Running" variable changes its state from True to False, and when it does, I would like to stop the function (the function that called the thread) from running (get back to the main function) and then run it again. Is there a way to do it?
Thank you very much, and I hope I was clear about my problem,
Idan :)
Communication between threads can be done with Events, Queues, Semaphores, etc. Check them out and choose the one that fits your problem best.
You can't abort a thread, or raise an exception into it asynchronously, in Python.
The standard Unix solution to this problem is to use a non-blocking socket, create a pipe with os.pipe, replace all your blocking sock.recv calls with a blocking r, _, _ = select.select([sock, pipe], [], []), and then the other thread can write to the pipe to wake the waiting thread up.
To make this portable to Windows you'll need to create a UDP localhost socket instead of a pipe, which makes things slightly more complicated, but it's still not hard.
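A rough sketch of that wakeup-pipe trick on Unix; the data handling is a placeholder, and on Windows you would use a localhost UDP socket pair instead of os.pipe():

import os
import select

wakeup_r, wakeup_w = os.pipe()

def serve(sock):
    sock.setblocking(False)
    while True:
        r, _, _ = select.select([sock, wakeup_r], [], [])
        if wakeup_r in r:
            os.read(wakeup_r, 1)      # drain the wakeup byte and shut down
            break
        if sock in r:
            data = sock.recv(4096)
            if not data:
                break
            # handle data here

def stop_from_other_thread():
    os.write(wakeup_w, b'x')          # wakes up the select() immediately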
Or, of course, you can use a higher-level framework, like asyncio in 3.4+, or twisted or another third-party lib, which will wrap this up for you. (Most of them are already running the equivalent of a loop around select to service lots of clients in one thread or a small thread pool, so it's trivial to toss in a stop pipe.)
Are there other alternatives? Yes, but all less portable and less good in a variety of other ways.
Most platforms have a way to asynchronously kill or signal another thread, which you can access via, e.g., ctypes. But this is a bad idea, because it will prevent Python from doing any normal cleanup. Even if you don't get a segfault, this could mean files never get flushed and end up with incomplete/garbage data, locks are left acquired to deadlock your program somewhere completely unrelated a short time later, memory gets leaked, etc.
If you're specifically trying to interrupt the main thread, and you only care about CPython on Unix, you can use a signal handler and the kill function. The signal will take effect on the next Python bytecode, and if the interpreter is blocked on any kind of I/O (or most other syscalls, e.g., inside a sleep), the system will return to the interpreter with an EINTR, allowing it to interrupt immediately. If the interpreter is blocked on something else, like a call to a C library that blocks signals or just does nothing but CPU work for 30 seconds, then you'll have to wait 30 seconds (although that doesn't come up that often, and you should know if it will in your case). Also, threads and signals don't play nice on some older *nix platforms. And signals don't work the same way on Windows, or in some other Python implementations like Jython.
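For illustration, a minimal sketch of that approach (CPython on Unix only; SIGUSR1 and the 5-second delay are arbitrary):

import os
import signal
import threading
import time

def handler(signum, frame):
    # runs in the main thread, so the exception is raised there
    raise KeyboardInterrupt

signal.signal(signal.SIGUSR1, handler)

def interrupt_later():
    time.sleep(5)
    os.kill(os.getpid(), signal.SIGUSR1)

threading.Thread(target=interrupt_later).start()

try:
    time.sleep(60)    # stands in for the blocking call in the main thread
except KeyboardInterrupt:
    print('main thread interrupted after about 5 seconds')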
On some platforms (including Windows--but not most modern *nix platforms), you can wake up a blocking socket call just by closing the socket out from under the waiting thread. On other platforms, this will not unblock the thread, or will do it sometimes but not other times (and theoretically it could even segfault your program or leave the socket library in an unusable state, although I don't think either of those will happen on any modern platform).
As far as I understand the documentation, and from some experiments I've done over the last weeks, there is no way to really force another thread to 'stop' or 'abort', unless the function is aware of the possibility of being stopped and has a foolproof method of avoiding getting stuck in one of the I/O functions. Then you can use some communication method such as semaphores. The only exception is the specialized Timer class, which has a cancel method.
So, if you really want to stop the server thread forcefully, you might want to think about running it in a separate process, not a thread.
EDIT: I'm not sure why you want to restart the server - I just thought it was in case of a failure. Normal procedure in a server is to loop waiting for connections on the socket, and when a connection appears, attend it and return to that loop.
A better way, is to use the GIO library (part of glib), and connect methods to the connection event, to attend the connection even asynchronously. This avoids the loop completely. I don't have any real code for this in Python, but here's an example of a client in Python (which uses GIO for reception events) and a server in C, which uses GIO for connections.
Use of GIO makes life so much easier...

Python: GUI for continuously running script

I am writing a script which will run continuously on a computer. As it has to run on a computer without a Python installation, I am planning to convert it to an executable. I also want to have a GUI to start and stop this application, but I don't want this GUI to be open all the time. I mean, if the GUI is closed, I don't want the executable to stop running. It should stop only if the user presses the stop button in the GUI. This GUI is just an interface for users to start and stop the executable.
How can I achieve this behavior?
The obvious solution is to have two separate programs: a backgrounder/daemon/agent/service that just chugs along in the background detached from user input and output, and a GUI program to control it. A nice advantage of this design is that you can also have a command-line program to control it, if you ever want to ssh in remotely, or control it from a script.
The traditional Unix way of handling this is to use a daemon designed like a system service (even if it's run like a normal user): it writes its pid to a file when it starts up, and the control program reads that file and sends a signal to the pid that it finds to kill it.
So, the control program has functions something like this:
import os
import signal
import subprocess

# PID_PATH is the pid file the daemon writes; DAEMON_PATH is its executable.

def is_running():
    try:
        with open(PID_PATH) as f:
            pid = int(f.read())
        os.kill(pid, 0)   # signal 0 just checks whether the pid exists
    except Exception:
        return False
    else:
        return True

def stop():
    with open(PID_PATH) as f:
        pid = int(f.read())
    os.kill(pid, signal.SIGUSR1)

def start():
    subprocess.check_call(DAEMON_PATH)
Of course in real life, you'll want some better error handling. Also, which signal you use depends on whether you want the daemon to die hard and instantly, or to gracefully shut down. And so on.
An alternative is to have the background process listen on a socket—whether TCP with a known port, or a Unix socket with a known filename—and communicate with it that way. This allows you to do fancier things than just start and stop.
On Windows, the details aren't quite the same, but you can do something similar.
Finally, Windows, OS X, and various linux distros also all have platform-specific ways of wrapping this kind of thing up at a higher level, so you might want to build a Windows Service, LaunchAgent, etc.
Thanks @abarnert. I used your method and converted your code for Windows. Please see my solution below, which works. It's starting and stopping helloworld.exe. I have removed error handling to keep it simple.
import subprocess
import time

def startprocess():
    # start helloworld.exe
    process = subprocess.Popen(['helloworld.exe'])
    # Write down the prog id into a file for later use
    f = open('progid.txt', 'w')
    f.writelines(str(int(process._handle)))
    f.close()

def endprocess():
    f = open('progid.txt', 'r')
    progid = int(f.read())
    f.close()
    # Kill the process using pywin32
    import win32api
    win32api.TerminateProcess(progid, -1)

startprocess()
time.sleep(60)  # wait for 60 seconds before kill
endprocess()

A thread is blocked by a blocking call - how do I make a timeout on the blocking call?

I have a python program which operates an external program and starts a timeout thread. Timeout thread should countdown for 10 minutes and if the script, which operates the external program isn't finished in that time, it should kill the external program.
My thread seems to work fine at first glance; my main script and the thread run simultaneously with no issues. But if a pop-up window appears in the external program, it stops my scripts, so that even the countdown thread stops counting, therefore totally failing at its job.
I assume the issue is that the script calls a blocking function in the API for the external program, which is blocked by the pop-up window. I understand why it blocks my main program, but I don't understand why it blocks my countdown thread. So, one possible solution might be to run a separate script for the countdown, but I would like to keep it as clean as possible and it seems really messy to start a script for this.
I have searched everywhere for a clue, but I didn't find much. There was a reference to the gevent library in 'background function in Python', but it seems like such a basic task that I don't want to include an external library for this.
I also found a solution which uses a windows multimedia timer here, but I've never worked with this before and am afraid the code won't be flexible with this. Script is Windows-only, but it should work on all Windows from XP on.
For Unix I found signal.alarm which seems to do exactly what I want, but it's not available for Windows. Any alternatives for this?
Any ideas on how to work with this in the most simplified manner?
This is the simplified thread I'm creating (run in IDLE to reproduce the issue):
import threading
import time

class timeToKill():
    def __init__(self, minutesBeforeTimeout):
        self.stop = threading.Event()
        self.countdownFrom = minutesBeforeTimeout * 60

    def startCountdown(self):
        self.countdownThread = threading.Thread(target=self.countdown, args=(self.countdownFrom,))
        self.countdownThread.start()

    def stopCountdown(self):
        self.stop.set()
        self.countdownThread.join()

    def countdown(self, seconds):
        for second in range(seconds):
            if self.stop.is_set():
                break
            else:
                print(second)
                time.sleep(1)

timeout = timeToKill(1)
timeout.startCountdown()

raw_input("Blocking call, waiting for input:\n")
One possible explanation for a function call blocking another Python thread is that CPython uses a global interpreter lock (GIL) and the blocking API call doesn't release it (NOTE: CPython releases the GIL on blocking I/O calls, so your raw_input() example should work as is).
If you can't make the buggy API call release the GIL, then you could use a process instead of a thread, e.g., multiprocessing.Process instead of threading.Thread (the API is the same). Different processes are not limited by the GIL.
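For example, the countdown from the question could be moved into a process instead of a thread, roughly like this (a sketch, not a drop-in replacement for the class above):

import multiprocessing
import time

def countdown(seconds):
    for second in range(seconds):
        print(second)
        time.sleep(1)

if __name__ == '__main__':
    p = multiprocessing.Process(target=countdown, args=(60,))
    p.start()
    # ... the main program does its blocking work here ...
    p.terminate()    # unlike a thread, a process can simply be killed
    p.join()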
For quick and dirty threading, I usually resort to subprocess commands. It is quite robust and OS-independent. It does not give as fine-grained control as the thread and queue modules, but for external calls to programs it generally does nicely. Note that shell=True must be used with caution.
import subprocess
import time

# this can be any command
p1 = subprocess.Popen(["python", "SUBSCRIPTS/TEST.py", "0"], shell=True)
# p1 now runs in the background, asynchronously

# here do some other tasks/computations
time.sleep(10)

# if you want to kill the process after some time, poll it and terminate it
currentStatus = p1.poll()
if currentStatus is None:  # then it is still running
    try:
        p1.kill()  # maybe try os.kill(p1.pid, 2) if p1.kill does not work
    except OSError:
        # do something else if the process is done running - maybe do nothing?
        pass
