I need to run my Python program forever in an infinite loop. Currently I am running it like this:
#!/usr/bin/python
import time
# some python code that I want
# to keep on running
# Is this the right way to run the python program forever?
# And do I even need this time.sleep call?
while True:
    time.sleep(5)
Is there any better way of doing it? Or do I even need the time.sleep call?
Any thoughts?
Yes, you can use a while True: loop that never breaks to run Python code continually.
However, you will need to put the code you want to run continually inside the loop:
#!/usr/bin/python
while True:
    # some python code that I want
    # to keep on running
Also, time.sleep is used to suspend the operation of a script for a period of time. So, since you want yours to run continually, I don't see why you would use it.
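That said, if the work only needs to happen every few seconds rather than continuously, a sleep-paced loop like the one in the question is reasonable; here is a minimal sketch (the work comment is just a placeholder):
#!/usr/bin/python
import time

while True:
    # do one unit of periodic work here (e.g. poll a file or a queue)
    time.sleep(5)  # pause 5 seconds so the loop doesn't spin at 100% CPU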
How about this one?
import signal
signal.pause()
This will let your program sleep until it receives a signal from some other process (or itself, in another thread), letting it know it is time to do something.
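A minimal sketch of that idea (Unix-only; the handler and the choice of SIGUSR1 here are just assumptions for illustration):
import signal

def handler(signum, frame):
    print("Received a signal, doing the work now...")

signal.signal(signal.SIGUSR1, handler)  # e.g. trigger with: kill -USR1 <pid>

while True:
    signal.pause()  # sleep with zero CPU use until any signal arrives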
I know this is an old thread, but why has no one mentioned this?
#!/usr/bin/python3
import asyncio
loop = asyncio.get_event_loop()
try:
    loop.run_forever()
finally:
    loop.close()
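On Python 3.7+ the same idea is usually written with asyncio.run and a coroutine; a sketch, assuming the periodic work can live inside the coroutine:
#!/usr/bin/python3
import asyncio

async def main():
    while True:
        # some python code that I want to keep on running
        await asyncio.sleep(5)  # yields to the event loop instead of blocking

asyncio.run(main())  # runs until main() returns or is cancelled (e.g. Ctrl+C)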
sleep is a good way to avoid overloading the CPU.
Not sure if it's really clever, but I usually use
from time import sleep

while not sleep(5):
    # code to execute
The sleep function always returns None, so not sleep(5) is always true and the loop runs forever, pausing 5 seconds before each iteration.
Here is the complete syntax,
#!/usr/bin/python3
import time
def your_function():
    print("Hello, World")

while True:
    your_function()
    time.sleep(10)  # make the function sleep for 10 seconds
For operating systems that support select:
import select
# your code
select.select([], [], [])
I have a small script interruptableLoop.py that runs the code at an interval (default 1 sec). It pumps out a message to the screen while it's running, and traps an interrupt signal that you can send with CTL-C:
#!/usr/bin/python3
from interruptableLoop import InterruptableLoop
loop = InterruptableLoop(intervalSecs=1)  # redundant argument
while loop.ShouldContinue():
    # some python code that I want
    # to keep on running
    pass
When you run the script and then interrupt it you see this output, (the periods pump out on every pass of the loop):
[py36]$ ./interruptexample.py
CTL-C to stop (or $kill -s SIGINT pid)
......^C
Exiting at 2018-07-28 14:58:40.359331
interruptableLoop.py:
"""
Use to create a permanent loop that can be stopped ...
... from same terminal where process was started and is running in foreground:
CTL-C
... from same user account but through a different terminal
$ kill -2 <pid>
or $ kill -s SIGINT <pid>
"""
import signal
import time
from datetime import datetime as dtt
__all__=["InterruptableLoop",]
class InterruptableLoop:
def __init__(self,intervalSecs=1,printStatus=True):
self.intervalSecs=intervalSecs
self.shouldContinue=True
self.printStatus=printStatus
self.interrupted=False
if self.printStatus:
print ("CTL-C to stop\t(or $kill -s SIGINT pid)")
signal.signal(signal.SIGINT, self._StopRunning)
signal.signal(signal.SIGQUIT, self._Abort)
signal.signal(signal.SIGTERM, self._Abort)
def _StopRunning(self, signal, frame):
self.shouldContinue = False
def _Abort(self, signal, frame):
raise
def ShouldContinue(self):
time.sleep(self.intervalSecs)
if self.shouldContinue and self.printStatus:
print( ".",end="",flush=True)
elif not self.shouldContinue and self.printStatus:
print ("Exiting at ",dtt.now())
return self.shouldContinue
If you mean running it as a service, then you can use any REST framework, e.g. Flask:
from flask import Flask
class A:
    def one(port):
        app = Flask(__name__)
        app.run(port=port)
Call it:
A.one(port=1001)
It will keep listening on port 1001:
* Running on http://127.0.0.1:1001/ (Press CTRL+C to quit)
Related
I'm pretty independent when using Python, since I wouldn't consider myself a beginner, but I've been coding up a program that I want to sell. The problem is that I want the program to have a timer on it, and when it runs out the program will no longer work, giving the user a specified amount of time to use the program.
You will want to run your program from another program using multithreading or asynchronous code. If you are looking for a single thing to send to your program (here, an interruption signal), then you should take a look at the built-in signal package (for CPython).
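For example, a rough sketch of the timer idea using only the standard library (illustrative only and not tamper-proof; TRIAL_SECONDS and the placeholder work loop are assumptions):
import threading
import _thread
import time

TRIAL_SECONDS = 60 * 60  # one hour, purely illustrative

# After TRIAL_SECONDS the timer thread raises KeyboardInterrupt in the main thread.
threading.Timer(TRIAL_SECONDS, _thread.interrupt_main).start()

try:
    while True:
        time.sleep(1)  # the program's real work would go here
except KeyboardInterrupt:
    print("Trial period is over, exiting.")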
(based on this answer from another post)
If you're calling external script using subprocess.Popen, you can just .kill() it after some time.
from subprocess import Popen
from time import sleep
with Popen(["python3", script_path]) as proc:
    sleep(1.0)
    proc.kill()
Reading documentation helps sometimes.
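A closely related variant, assuming script_path points at the program you want to limit: give the child a time budget with wait(timeout=...) and only kill it if it is still running when the budget runs out.
from subprocess import Popen, TimeoutExpired

with Popen(["python3", script_path]) as proc:
    try:
        proc.wait(timeout=60)  # let it run for up to 60 seconds
    except TimeoutExpired:
        proc.kill()            # still running after the budget, so stop it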
One way this can be done is by interrupting the main thread:
from time import sleep
from threading import Thread
from _thread import interrupt_main
import sys
TIME_TO_WAIT = 10
def kill_main(time):
    sleep(time)
    interrupt_main()

thread = Thread(target=kill_main, args=(TIME_TO_WAIT,))
thread.start()

try:
    while True:
        print('Main code is running')
        sleep(0.5)
except KeyboardInterrupt:
    print('Time is up!')
    sys.exit()
What is the most efficient way (in terms of polling overhead) to request a Python program to stop (in a controlled way) from a Bash script? On the Python side I want a function (which executes as fast as possible) that returns True when a stop is requested, or False if not. If True, we save our work, release resources and exit.
For some simple tools I implemented the following:
In bash I do a touch /tmp/stop
My Python program frequently polls whether /tmp/stop exists. If it exists, it quits in a controlled way.
My bash script waits (loop - sleep - ps) until the related process is stopped.
This solution works, but polling for this file is most likely not the most efficient way.
Are there other options with less overhead (in terms of Python polling time)?
You could send an interrupt signal (SIGINT) to the Python process. That's the same signal your shell would send when you hit Ctrl+C. It looks like this in Bash:
python my_script.py & # start the script in background
pyscript_pid=$!  # store the Python interpreter's PID
sleep 5          # pause 5 seconds
kill -s SIGINT $pyscript_pid # send the SIGINT signal to the process
And in Python you simply catch the KeyboardInterrupt exception that gets thrown when the interpreter receives the SIGINT signal:
try:
print ("I'm still running...")
# do something useful, but it must be interruptible at any time!
except KeyboardInterrupt:
print ("I'm going to quit now.")
# tidy up...
# ... and exit
You should not put anything inside the try block that would break if it were interrupted halfway through; only do work there that can be interrupted, or rolled back to the last valid state by the tidying-up code. Alternatively, you might use try: ... finally: ... to ensure the code in the finally block always runs, even if the code in the try block gets interrupted while it's running.
You may also look at How do I capture SIGINT in Python? or #Robᵩ's answer to find out how to capture all possible signals (not only SIGINT) and how to register event handlers for them instead of using try-except(-finally), but this here is the simplest approach.
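A minimal sketch of the try/finally variant mentioned above: the cleanup in the finally block runs whether the loop finishes normally or is interrupted by Ctrl+C (or a SIGINT from the Bash script).
import time

try:
    while True:
        print("I'm still running...")
        time.sleep(1)  # stand-in for interruptible work
except KeyboardInterrupt:
    print("I'm going to quit now.")
finally:
    print("Tidying up...")  # always runs: save work, close files, release resources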
The UNIX signal mechanism would be an excellent choice. You don't need any temporary files, and the polling overhead is essentially zero.
You may shut down the following Python program gracefully like so: kill -USR1 $pid.
import signal
import time
import sys
please_stop = False

def setup_signal():
    def handler(x, y):
        global please_stop
        please_stop = True
    signal.signal(signal.SIGUSR1, handler)

def main_task():
    for i in range(10):
        print "Working hard on iteration #%d" % i
        time.sleep(1)
        if please_stop:
            print "Stopping now"
            sys.exit(0)

setup_signal()
main_task()
I've got a script that runs an infinite loop and adds things to a database and does things that I can't just stop halfway through, so I can't just press Ctrl+C and stop it.
I want to be able to somehow stop a while loop, but let it finish its last iteration before it stops.
Let me clarify:
My code looks something like this:
while True:
    do something
    do more things
    do more things
I want to be able to interrupt the while loop at the end, or the beginning, but not between doing things because that would be bad.
And I don't want it to ask me after every iteration if I want to continue.
Thanks for the great answers, I'm super grateful but my implementation doesn't seem to be working:
def signal_handler(signal, frame):
    global interrupted
    interrupted = True

class Crawler():
    def __init__(self):
        pass  # not relevant

    def crawl(self):
        interrupted = False
        signal.signal(signal.SIGINT, signal_handler)
        while True:
            # doing things
            # more things
            if interrupted:
                print("Exiting..")
                break
When I press Ctrl+C the program just keeps going ignoring me.
What you need to do is catch the interrupt, set a flag saying you were interrupted but then continue working until it's time to check the flag (at the end of each loop). Because python's try-except construct will abandon the current run of the loop, you need to set up a proper signal handler; it'll handle the interrupt but then let python continue where it left off. Here's how:
import signal
import time # For the demo only
def signal_handler(signal, frame):
    global interrupted
    interrupted = True

signal.signal(signal.SIGINT, signal_handler)

interrupted = False
while True:
    print("Working hard...")
    time.sleep(3)
    print("All done!")

    if interrupted:
        print("Gotta go")
        break
Notes:
Use this from the command line. In the IDLE console, it'll trample on IDLE's own interrupt handling.
A better solution would be to "block" KeyboardInterrupt for the duration of the loop, and unblock it when it's time to poll for interrupts. This is a feature of some Unix flavors but not all; CPython only exposes it on Unix, via signal.pthread_sigmask since Python 3.3 (see the sketch after these notes).
The OP wants to do this inside a class. But the interrupt function is invoked by the signal handling system with two arguments, the signal number and a pointer to the stack frame, so there is no place for a self argument giving access to the class object. Hence the simplest way to set a flag is to use a global variable. You can rig a pointer to the local context by using closures (i.e., define the signal handler dynamically in __init__()), but frankly I wouldn't bother unless a global is out of the question due to multi-threading or whatever.
Caveat: If your process is in the middle of a system call, handling a signal may interrupt the system call. So this may not be safe for all applications. Safer alternatives would be (a) instead of relying on signals, use a non-blocking read at the end of each loop iteration (and type input instead of hitting ^C); (b) use threads or interprocess communication to isolate the worker from the signal handling; or (c) do the work of implementing real signal blocking, if you are on an OS that has it. All of them are OS-dependent to some extent, so I'll leave it at that.
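For completeness, here is a sketch of option (c) using signal.pthread_sigmask (Unix only, Python 3.3+), assuming the whole loop body is the critical section and it is acceptable to poll for a pending Ctrl+C only at the end of each iteration:
import signal
import time

# Keep SIGINT blocked while working; a Ctrl+C stays pending instead of
# interrupting the critical section, and we check for it at the safe point.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGINT})

while True:
    print("Working hard...")   # cannot be interrupted here
    time.sleep(3)
    print("All done!")

    if signal.SIGINT in signal.sigpending():  # was Ctrl+C pressed meanwhile?
        print("Gotta go")
        break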
The logic below will help you do this:
import signal
import sys
import time
run = True
def signal_handler(signal, frame):
    global run
    print("exiting")
    run = False

signal.signal(signal.SIGINT, signal_handler)

while run:
    print("hi")
    time.sleep(1)
    # do anything
print("bye")
While running this, try pressing Ctrl+C.
To clarify #praba230890's solution: the interrupted variable was not used in the correct scope. It was assigned as a local variable inside the crawl function, so the global variable set by the handler (which is defined at the root of the program) never reached the loop.
Here is an edited example of the principle above. It is an infinite Python loop in a separate thread with safe, signal-based ending. It also has a thread-blocking sleep step; it is up to you to keep it, replace it with an asyncio implementation, or remove it.
This function could be imported anywhere in an application and runs without blocking other code (e.g. good for a Redis pub/sub subscription). After the SIGINT is caught, the thread's job ends peacefully.
from typing import Callable
import time
import threading
import signal
end_job = False
def run_in_loop(job: Callable, interval_sec: float = 0.5):
    def interrupt_signal_handler(signal, frame):
        global end_job
        end_job = True
    signal.signal(signal.SIGINT, interrupt_signal_handler)

    def do_job():
        while True:
            job()
            time.sleep(interval_sec)
            if end_job:
                print("Parallel job ending...")
                break

    th = threading.Thread(target=do_job)
    th.start()
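For example, with a trivial stand-in job (the names below are just illustrative), it could be started like this; run_in_loop returns immediately and the job keeps running in its own thread until Ctrl+C:
def my_job():
    print("doing one unit of the parallel job")

run_in_loop(my_job, interval_sec=1)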
You forgot to add a global statement in the crawl function.
So the result will be:
import signal
def signal_handler(signal, frame):
    global interrupted
    interrupted = True

class Crawler():
    def __init__(self):
        ...  # or pass if you don't want this to do anything. ... is for unfinished code

    def crawl(self):
        global interrupted
        interrupted = False
        signal.signal(signal.SIGINT, signal_handler)
        while True:
            # doing things
            # more things
            if interrupted:
                print("Exiting..")
                break
I hope the code below helps you:
#!/bin/python
import sys
import time
import signal
def cb_sigint_handler(signum, stack):
    global is_interrupted
    print("SIGINT received")
    is_interrupted = True

if __name__ == "__main__":
    is_interrupted = False
    signal.signal(signal.SIGINT, cb_sigint_handler)
    while True:
        # do stuff here
        print("processing...")
        time.sleep(3)
        if is_interrupted:
            print("Exiting..")
            # do clean up
            sys.exit(0)
I have two functions, draw_ascii_spinner and findCluster(companyid).
I would like to:
Run findCluster(companyid) in the background, and while it's processing...
Run draw_ascii_spinner until findCluster(companyid) finishes
How do I begin to try to solve for this (Python 2.7)?
Use threads:
import threading, time
def wrapper(func, args, res):
    res.append(func(*args))

res = []
t = threading.Thread(target=wrapper, args=(findCluster, (companyid,), res))
t.start()
while t.is_alive():
    # print next iteration of ASCII spinner
    t.join(0.2)
print res[0]
You can use multiprocessing. Or, if findCluster(companyid) has sensible stopping points, you can turn it into a generator along with draw_ascii_spinner, to do something like this:
for tick in findCluster(companyid):
    ascii_spinner.next()
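A slightly fuller sketch of that generator idea, with a hypothetical stand-in body for findCluster (the real one isn't shown) and an itertools.cycle spinner:
import itertools
import sys
import time

def findCluster(companyid):
    for step in range(10):   # stand-in for the real work,
        time.sleep(0.3)      # yielding at its sensible stopping points
        yield step

ascii_spinner = itertools.cycle('|/-\\')

for tick in findCluster("ACME"):
    sys.stdout.write(next(ascii_spinner) + '\r')  # advance the spinner per chunk of work
    sys.stdout.flush()
print('done')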
Generally, you will use threads. Here is a simplistic approach which assumes that there are only two threads: 1) the main thread executing a task, 2) the spinner thread:
#!/usr/bin/env python
import time
import thread
def spinner():
    while True:
        print '.'
        time.sleep(1)

def task():
    time.sleep(5)

if __name__ == '__main__':
    thread.start_new_thread(spinner, ())
    # as soon as task finishes (and so the program)
    # spinner will be gone as well
    task()
This can be done with threads. FindCluster runs in a separate thread and when done, it can simply signal another thread that is polling for a reply.
You'll want to do some research on threading; the general form is going to be this:
Create a new thread for findCluster and create some way for the program to know the method is running - simplest in Python is just a global boolean
Run draw_ascii_spinner in a while loop conditioned on whether it is still running, you'll probably want to have this thread sleep for a short period of time between iterations
Here's a short tutorial in Python - http://linuxgazette.net/107/pai.html
Run findCluster() in a thread (the threading module makes this very easy), and then run draw_ascii_spinner until some condition is met.
Instead of using sleep() to set the pace of the spinner, you can wait on an Event's wait() (or the thread's join()) with a timeout, as in the sketch below.
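A minimal sketch of that idea, with a hypothetical stand-in for findCluster and a threading.Event pacing the spinner:
import sys
import threading
import time

done = threading.Event()

def findCluster(companyid):
    time.sleep(5)   # stand-in for the real work
    done.set()      # tell the spinner to stop

t = threading.Thread(target=findCluster, args=("ACME",))
t.start()
while not done.wait(timeout=0.2):  # returns False on timeout, True once set
    sys.stdout.write('.')          # one spinner tick every 0.2 seconds
    sys.stdout.flush()
t.join()
print('finished')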
Is it possible to have a working example? I am new to Python. I have 6 tasks to run in one Python program. These 6 tasks should work in coordination, meaning that one should start when another finishes. I saw the answers, but I couldn't adapt the code you shared to my program.
I used time.sleep, but I know that it is not good because I cannot know how much time each task takes.
# Sending commands
for i in range(0, len(cmdList)):  # port: sending commands
    cmd = cmdList[i]
    cmdFull = convert(cmd)
    port.write(cmd.encode('ascii'))
    # s = port.read(10)
    print(cmd)

# Terminate the command + close serial port
port.write(cmdFull.encode('ascii'))
print('Termination')
port.close()
# time.sleep(1*60)
How can I exit my entire Python application from one of its threads? sys.exit() only terminates the thread in which it is called, so that is no help.
I would not like to use an os.kill() solution, as this isn't very clean.
Short answer: use os._exit.
Long answer with example:
I yanked and slightly modified a simple threading example from a tutorial on DevShed:
import threading, sys, os
theVar = 1
class MyThread(threading.Thread):
    def run(self):
        global theVar
        print 'This is thread ' + str(theVar) + ' speaking.'
        print 'Hello and good bye.'
        theVar = theVar + 1
        if theVar == 4:
            # sys.exit(1)
            os._exit(1)
        print '(done)'

for x in xrange(7):
    MyThread().start()
If you keep sys.exit(1) commented out, the script will die after the third thread prints out. If you use sys.exit(1) and comment out os._exit(1), the third thread does not print (done), and the program runs through all seven threads.
os._exit "should normally only be used in the child process after a fork()" -- and a separate thread is close enough to that for your purpose. Also note that there are several enumerated values listed right after os._exit in that manual page, and you should prefer those as arguments to os._exit instead of simple numbers like I used in the example above.
If all your threads except the main ones are daemons, the best approach is generally thread.interrupt_main() -- any thread can use it to raise a KeyboardInterrupt in the main thread, which can normally lead to reasonably clean exit from the main thread (including finalizers in the main thread getting called, etc).
Of course, if this results in some non-daemon thread keeping the whole process alive, you need to followup with os._exit as Mark recommends -- but I'd see that as the last resort (kind of like a kill -9;-) because it terminates things quite brusquely (finalizers not run, including try/finally blocks, with blocks, atexit functions, etc).
Using thread.interrupt_main() may not help in some situation. KeyboardInterrupts are often used in command line applications to exit the current command or to clean the input line.
In addition, os._exit will kill the process immediately without running any finally blocks in your code, which may be dangerous (files and connections will not be closed for example).
The solution I've found is to register a signal handler in the main thread that raises a custom exception. Use the background thread to fire the signal.
import signal
import os
import threading
import time
class ExitCommand(Exception):
    pass

def signal_handler(signal, frame):
    raise ExitCommand()

def thread_job():
    time.sleep(5)
    os.kill(os.getpid(), signal.SIGUSR1)

signal.signal(signal.SIGUSR1, signal_handler)
threading.Thread(target=thread_job).start()  # thread will fire in 5 seconds

try:
    while True:
        user_input = raw_input('Blocked by raw_input loop ')
        # do something with 'user_input'
except ExitCommand:
    pass
finally:
    print('finally will still run')
Related questions:
Why does sys.exit() not exit when called inside a thread in Python?
Python: How to quit CLI when stuck in blocking raw_input?
The easiest way to exit the whole program is to terminate the process by using its process ID (pid).
import os
import psutil
current_system_pid = os.getpid()
ThisSystem = psutil.Process(current_system_pid)
ThisSystem.terminate()
To install psutil: pip install psutil
For Linux you can use os.kill() and pass the current process's ID and the SIGINT signal to start the steps to exit the app.
import os
import signal

os.kill(os.getpid(), signal.SIGINT)