I want to execute a function every 60 seconds in Python, but I don't want to be blocked in the meantime.
How can I do it asynchronously?
import threading
import time

def f():
    print("hello world")
    threading.Timer(3, f).start()

if __name__ == '__main__':
    f()
    time.sleep(20)
With this code, the function f is executed every 3 seconds within the 20 seconds of time.sleep(20).
At the end it gives an error, and I think that it is because the threading.Timer has not been cancelled.
How can I cancel it?
You could try the threading.Timer class: http://docs.python.org/library/threading.html#timer-objects.
import threading

def f(f_stop):
    # do something here ...
    if not f_stop.is_set():
        # call f() again in 60 seconds
        threading.Timer(60, f, [f_stop]).start()

f_stop = threading.Event()
# start calling f now and every 60 sec thereafter
f(f_stop)

# stop the thread when needed
# f_stop.set()
The simplest way is to create a background thread that runs something every 60 seconds. A trivial implementation is:
import time
from threading import Thread

class BackgroundTimer(Thread):
    def run(self):
        while True:
            time.sleep(60)
            # do something

# ... SNIP ...
# Inside your main thread
# ... SNIP ...

timer = BackgroundTimer()
timer.start()
Obviously, if the "do something" takes a long time, then you'll need to account for it in your sleep call. But 60 seconds serves as a good approximation.
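If you do need the interval to stay close to 60 seconds no matter how long the work takes, a rough sketch (my own variation, not part of the original answer; it assumes Python 3.3+ for time.monotonic()) could subtract the elapsed time from the sleep:
import time
from threading import Thread

class BackgroundTimer(Thread):
    def run(self):
        while True:
            started = time.monotonic()
            # do something ...
            elapsed = time.monotonic() - started
            # sleep only for whatever is left of the 60-second interval
            time.sleep(max(0.0, 60 - elapsed))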
I googled around and found the Python circuits Framework, which makes it possible to wait for a particular event.
The .callEvent(self, event, *channels) method of circuits contains a fire and suspend-until-response functionality, the documentation says:
Fire the given event to the specified channels and suspend execution
until it has been dispatched. This method may only be invoked as
argument to a yield on the top execution level of a handler (e.g.
"yield self.callEvent(event)"). It effectively creates and returns
a generator that will be invoked by the main loop until the event has
been dispatched (see :func:circuits.core.handlers.handler).
I hope you find it as useful as I do :)
It depends on what you actually want to do in the meantime. Threads are the most general and least preferred way of doing it; you should be aware of the issues with threading when you use it: not all (non-Python) code allows access from multiple threads simultaneously, communication between threads should be done using thread-safe data structures like Queue.Queue, you won't be able to interrupt the thread from outside it, and terminating the program while the thread is still running can lead to a hung interpreter or spurious tracebacks.
Often there's an easier way. If you're doing this in a GUI program, use the GUI library's timer or event functionality. All GUIs have this. Likewise, if you're using another event system, like Twisted or another server-process model, you should be able to hook into the main event loop to cause it to call your function regularly. The non-threading approaches do cause your program to be blocked while the function is pending, but not between function calls.
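For example, with tkinter (just one possible GUI toolkit, picked here for illustration; the answer doesn't name a specific one), the after() method schedules a callback on the main event loop without blocking it. A minimal sketch:
import tkinter as tk

def do_work():
    print("hello world")
    root.after(60000, do_work)  # reschedule in 60 seconds (60000 ms)

root = tk.Tk()
root.after(60000, do_work)  # first call after 60 seconds
root.mainloop()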
Why don't you create a dedicated thread, in which you put a simple sleeping loop:
#!/usr/bin/env python
import time

while True:
    # Your code here
    time.sleep(60)
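The snippet above is just the loop itself; to actually run it in a dedicated thread as the answer suggests, a minimal sketch could look like this (the daemon flag is my addition so the thread does not keep the process alive on exit):
import time
import threading

def loop():
    while True:
        # Your code here
        time.sleep(60)

worker = threading.Thread(target=loop, daemon=True)
worker.start()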
I think the right way to run a thread repeatedly is the following:
import threading
import time

def f():
    print("hello world")  # your code here
    myThread.run()

if __name__ == '__main__':
    myThread = threading.Timer(3, f)  # timer is set to 3 seconds
    myThread.start()
    time.sleep(10)  # it can be a loop or other time-consuming code here
    if myThread.is_alive():
        myThread.cancel()
With this code, the function f is executed every 3 seconds within the 10 seconds of time.sleep(10). At the end, the running thread is cancelled.
If you want to invoke the method "on the clock" (e.g. every hour on the hour), you can integrate the following idea with whichever threading mechanism you choose:
import time

def wait(n):
    '''Wait until the next increment of n seconds'''
    x = time.time()
    time.sleep(n - (x % n))
    print(time.asctime())
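As one possible way to integrate it (my own illustration; do_task() is a placeholder for your function), run the wait in a plain background thread:
import threading

def every_minute():
    while True:
        wait(60)    # sleep until the top of the next minute
        do_task()   # placeholder for your own function

threading.Thread(target=every_minute, daemon=True).start()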
[snip. removed non async version]
For async you could use trio. I recommend trio to everyone who asks about async Python. It is much easier to work with, especially with sockets. With sockets I have a nursery with one read and one write function; the write function sends data from a deque, where it is placed by the read function while waiting to be sent. The following app works by using trio.run(function, parameters) and then opening a nursery where the program's functions run in loops, with an await trio.sleep(60) between each loop to give the rest of the app a chance to run. This runs the program in a single process, but your machine can handle 1500 TCP connections instead of just 255 with the non-async method.
I have not yet mastered the cancellation statements, but I put in move_on_after(70), which means the code will wait up to 10 seconds longer than the 60-second sleep before moving on to the next loop.
import trio

async def execTimer():
    '''This function gets executed in a nursery simultaneously with the rest of the program'''
    while True:
        with trio.move_on_after(70):
            await trio.sleep(60)
            print('60 Second Loop')

async def OneTime_OneMinute():
    '''This function gets run by trio.run to start the entire program'''
    async with trio.open_nursery() as nursery:
        nursery.start_soon(execTimer)
        # the rest of the program can run here, concurrently with execTimer
        print('do the rest of the program simultaneously')

def start():
    '''You may have only one trio.run in the entire application'''
    trio.run(OneTime_OneMinute)

if __name__ == '__main__':
    start()
This will run any number of functions simultaneously in the nursery. You can use any of the cancellable statements as checkpoints where the rest of the program gets to continue running. All trio statements are checkpoints, so use them a lot. I did not test this app, so if there are any questions, just ask.
As you can see trio is the champion of easy-to-use functionality. It is based on using functions instead of objects but you can use objects if you wish.
Read more at:
[1]: https://trio.readthedocs.io/en/stable/reference-core.html
Related
I want to run this filemover function at a certain time, like every 60 seconds. But if the previous filemover run hasn't finished, then the new thread of this function after 60 seconds should not run. It should only run when there is no thread already running and the 60 seconds are up. How can I achieve this functionality? Thank you in advance for the help.
I have limited knowledge of threads.
def filemover():
    threading.Timer(60.0, filemover).start()
    oldp = "D:/LCT Work/python code/projectexcelupload/notprocessed"
    newp = "D:/LCT Work/python code/projectexcelupload/processed"
    onlyfiles = [f for f in listdir(oldp) if isfile(join(oldp, f))]
    # print(onlyfiles.index("hello"))
    global globalfilenamearra
    global globalpos
    for file in onlyfiles:
        if file in globalfilenamearra:
            txt = 1
        else:
            globalfilenamearra.append(file)

filemover()
Well, the principle I would suggest is a bit different. With each execution of a thread, you have to create a lock, so that other threads are aware that some other thread is executing at the same time. I would say the easiest way would be creating and deleting a lock file. An example off the top of my head would be something like this:
import os
import shutil
import threading

def moveFile(source, destination):
    print("waiting for others to finish\n")
    error_flag = 0
    while os.path.exists("thread.lck"):
        error_flag = 0

    print("creating the new lock\n")
    f = open("thread.lck", "w")
    f.write("You can even do the identification of threads if you want")
    f.close()

    print("starting the work\n")
    if os.path.exists(source) and os.path.exists(destination) == False:
        shutil.move(source, destination)
    else:
        error_flag = 1

    print("remove the lock")
    os.remove("thread.lck")
    return error_flag

for i in range(0, 5):
    threading.Timer(1.0 * i, moveFile, args=("some.txt", "some1.txt")).start()
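An alternative sketch of the same principle using an in-process threading.Lock instead of a lock file (this is my own variation, not the answer's code): acquire(blocking=False) returns False while a previous run still holds the lock, so the new run is simply skipped.
import threading

run_lock = threading.Lock()

def filemover():
    threading.Timer(60.0, filemover).start()   # schedule the next run
    if not run_lock.acquire(blocking=False):
        return                                 # previous run still in progress
    try:
        pass  # move the files here
    finally:
        run_lock.release()

filemover()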
The way threads work is that one thread does not begin doing its work while another thread is still working, so it makes sense that the next run won't begin after 60 seconds unless the previous one is done.
For example, this program ends up with two time.sleep() calls in sequence; say you had this code:
import time

def stuff():
    print('hello threads')
    time.sleep(1)
    print('done')

stuff()
stuff()
But if you are using threads, it would look something more like the following.
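A sketch of what that threaded version might look like (the original answer did not include this code, so this is a reconstruction):
import time
import threading

def stuff():
    print('hello threads')
    time.sleep(1)
    print('done')

# start both calls in their own threads instead of one after the other
t1 = threading.Thread(target=stuff)
t2 = threading.Thread(target=stuff)
t1.start()
t2.start()
t1.join()
t2.join()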
If you look closely, you will notice that the next function starts right when the other one begins to sleep. Threads do not run concurrently; only one of them can be running at a time, not two - that would be multiprocessing.
In your code you are using the threading.Timer() function. The issue with that is that two threads do not run at once. If you want this functionality, you will have to refactor your code. You can decide to set time limits within the function you want to run in a thread, using the time module, so that it sleeps after 60 seconds and the other thread or threads can start.
Although I would advise against this, because it might get out of hand very quickly in terms of maintainability if you are somewhat new to it.
I am somewhat new to Python. I have been trying to find the answer to this coding question for some time. I have a function set up to run on a threading timer. This allows it to execute every second while my other code is running. I would like this function to simply execute continuously, that is, every time it is done executing it starts over, rather than running on a timer. The reason for this is that, due to a changing delay in a stepper motor, the function takes different amounts of time to run.
Is this what you're looking for?
from threading import Thread

def f():
    print('hello')

while True:
    t = Thread(target=f)
    t.start()
    t.join()
Or maybe this, which shows the concurrent paths of execution (for production, remove sleep() calls, of course):
from threading import Thread
from time import sleep

def g():
    print('hello')
    sleep(1)

def f():
    while True:
        g()

Thread(target=f).start()
sleep(1)
print('other code here')
I have two functions, draw_ascii_spinner and findCluster(companyid).
I would like to:
Run findCluster(companyid) in the background, and while it's processing...
Run draw_ascii_spinner until findCluster(companyid) finishes.
How do I begin to solve this (Python 2.7)?
Use threads:
import threading, time

def wrapper(func, args, res):
    res.append(func(*args))

res = []
t = threading.Thread(target=wrapper, args=(findCluster, (companyid,), res))
t.start()
while t.is_alive():
    # print next iteration of ASCII spinner
    t.join(0.2)
print res[0]
You can use multiprocessing. Or, if findCluster(companyid) has sensible stopping points, you can turn it into a generator along with draw_ascii_spinner, to do something like this:
for tick in findCluster(companyid):
    ascii_spinner.next()
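If you go the multiprocessing route instead, a rough sketch (my own illustration; it assumes draw_ascii_spinner() draws one frame and returns quickly) could look like this:
import multiprocessing

def worker(companyid, result_queue):
    result_queue.put(findCluster(companyid))

result_queue = multiprocessing.Queue()
p = multiprocessing.Process(target=worker, args=(companyid, result_queue))
p.start()
while p.is_alive():
    draw_ascii_spinner()
result = result_queue.get()
p.join()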
Generally, you will use threads. Here is a simplistic approach which assumes that there are only two threads: 1) the main thread executing a task, and 2) the spinner thread:
#!/usr/bin/env python
import time
import thread

def spinner():
    while True:
        print '.'
        time.sleep(1)

def task():
    time.sleep(5)

if __name__ == '__main__':
    thread.start_new_thread(spinner, ())
    # as soon as task finishes (and so does the program),
    # the spinner will be gone as well
    task()
This can be done with threads. FindCluster runs in a separate thread and when done, it can simply signal another thread that is polling for a reply.
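A minimal sketch of that signalling idea using a threading.Event (it reuses the question's function names; the Event itself is my addition):
import threading

done = threading.Event()

def run_find_cluster(companyid):
    findCluster(companyid)
    done.set()                      # signal that we are finished

threading.Thread(target=run_find_cluster, args=(companyid,)).start()
while not done.wait(timeout=0.2):   # poll every 200 ms
    draw_ascii_spinner()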
You'll want to do some research on threading; the general form is going to be this:
Create a new thread for findCluster and create some way for the program to know the method is running - the simplest in Python is just a global boolean.
Run draw_ascii_spinner in a while loop conditioned on whether it is still running; you'll probably want to have this thread sleep for a short period of time between iterations (a sketch follows the tutorial link below).
Here's a short tutorial on threading in Python - http://linuxgazette.net/107/pai.html
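A rough sketch of that approach (the global boolean and the sleep interval are my own choices; the function names come from the question):
import time
import threading

running = True

def run_find_cluster(companyid):
    global running
    findCluster(companyid)
    running = False              # let the spinner loop know we are done

threading.Thread(target=run_find_cluster, args=(companyid,)).start()
while running:
    draw_ascii_spinner()         # assumed to draw one frame and return
    time.sleep(0.1)              # short pause between iterations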
Run findCluster() in a thread (the threading module makes this very easy), and then draw_ascii_spinner until some condition is met.
Instead of using sleep() to set the pace of the spinner, you can wait on the thread's join() (or an Event's wait()) with a timeout.
Is it possible to have a working example? I am new to Python. I have 6 tasks to run in one Python program. These 6 tasks should work in coordination, meaning that one should start when another finishes. I saw the answers, but I couldn't adapt the code you shared to my program.
I used "time.sleep", but I know that it is not good because I cannot know how much time each task takes.
# Sending commands
for i in range(0, len(cmdList)):  # port sending commands
    cmd = cmdList[i]
    cmdFull = convert(cmd)
    port.write(cmd.encode('ascii'))
    # s = port.read(10)
    print(cmd)

# Terminate the command + close serial port
port.write(cmdFull.encode('ascii'))
print('Termination')
port.close()
# time.sleep(1*60)
Is it possible to schedule an event in python without multithreading?
I am trying to obtain something like scheduling a function to execute every x seconds.
Maybe sched?
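For instance, a minimal sched sketch that re-schedules itself every x seconds without any threads (the function name and the 5-second interval are just placeholders of my own):
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def do_something(interval):
    print(time.asctime())
    # re-schedule ourselves to run again after `interval` seconds
    scheduler.enter(interval, 1, do_something, (interval,))

scheduler.enter(5, 1, do_something, (5,))
scheduler.run()   # blocks the main thread and runs the scheduled events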
You could use a combination of signal.alarm and a signal handler for SIGALRM like so to repeat the function every 5 seconds.
import signal

def handler(sig, frame):
    print("I am done this time")
    signal.alarm(5)  # schedule this to happen again

signal.signal(signal.SIGALRM, handler)
signal.alarm(5)
The other option is to use the sched module that comes along with Python but I don't know whether it uses threads or not.
sched is probably the way to go for this, as @eumiro points out. However, if you don't want to do that, then you could do this:
import time

while True:
    # call your event
    time.sleep(x)  # wait for x seconds before calling the script again
Without threading it seldom makes sense to periodically call a function, because your main thread is blocked by waiting - it simply does nothing in between.
However if you really want to do so:
import time
for x in range(3):
print('Loop start')
time.sleep(2)
print('Calling some function...')
Is this what you really want?
You could use celery:
Celery is an open source asynchronous
task queue/job queue based on
distributed message passing. It is
focused on real-time operation, but
supports scheduling as well.
The execution units, called tasks, are
executed concurrently on one or more
worker nodes. Tasks can execute
asynchronously (in the background) or
synchronously (wait until ready).
and a code example:
You probably want to see some code by
now, so here’s an example task adding
two numbers:
from celery.decorators import task

@task
def add(x, y):
    return x + y
You can execute the task in the
background, or wait for it to finish:
>>> result = add.delay(4, 4)
>>> result.wait()  # wait for and return the result
8
This is of more general use than the problem you describe requires, though.
I am writing a queue-processing application which uses threads for waiting on and responding to queue messages delivered to the app. For the main part of the application, it just needs to stay active. For a code example like:
while True:
    pass
or
while True:
    time.sleep(1)
Which one will have the least impact on a system? What is the preferred way to do nothing, but keep a python app running?
I would imagine time.sleep() will have less overhead on the system. Using pass will cause the loop to immediately re-evaluate and peg the CPU, whereas using time.sleep will allow the execution to be temporarily suspended.
EDIT: just to prove the point, if you launch the python interpreter and run this:
>>> while True:
...     pass
...
You can watch Python start eating up 90-100% CPU instantly, versus:
>>> import time
>>> while True:
...     time.sleep(1)
...
Which barely even registers on the Activity Monitor (using OS X here but it should be the same for every platform).
Why sleep? You don't want to sleep, you want to wait for the threads to finish.
So
# store the threads you start in a your_threads list, then
for a_thread in your_threads:
    a_thread.join()
See: thread.join
If you are looking for a short, zero-cpu way to loop forever until a KeyboardInterrupt, you can use:
from threading import Event
Event().wait()
Note: Due to a bug, this only works on Python 3.2+. In addition, it appears to not work on Windows. For this reason, while True: sleep(1) might be the better option.
For some background, Event objects are normally used for waiting for long running background tasks to complete:
from threading import Event, Thread
from time import sleep

def do_task():
    sleep(10)
    print('Task complete.')
    event.set()

event = Event()
Thread(target=do_task).start()
event.wait()
print('Continuing...')
Which prints:
Task complete.
Continuing...
signal.pause() is another solution, see https://docs.python.org/3/library/signal.html#signal.pause
Cause the process to sleep until a signal is received; the appropriate handler will then be called. Returns nothing. Not on Windows. (See the Unix man page signal(2).)
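A minimal sketch of how it could be combined with a handler (Unix only, as the docs note; SIGTERM is just an example signal):
import signal

def handler(signum, frame):
    print('received signal', signum)

signal.signal(signal.SIGTERM, handler)
signal.pause()   # sleep here until any signal is delivered
print('woke up')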
I've always seen/heard that using sleep is the better way to do it. Using sleep will keep your Python interpreter's CPU usage from going wild.
You don't give much context to what you are really doing, but maybe Queue could be used instead of an explicit busy-wait loop? If not, I would assume sleep would be preferable, as I believe it will consume less CPU (as others have already noted).
[Edited according to additional information in comment below.]
Maybe this is obvious, but anyway: in a case where you are reading information from blocking sockets, you could have one thread read from the socket and post suitably formatted messages into a Queue, and then have the rest of your "worker" threads read from that queue; the workers will then block on reading from the queue, with no need for either pass or sleep.
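A rough sketch of that reader/worker pattern (using the Python 3 queue module, which is Queue in Python 2; msg_source() and handle() are placeholders of my own, standing in for the real socket reading and processing):
import queue
import threading

work_queue = queue.Queue()

def reader():
    while True:
        msg = msg_source()        # e.g. read and parse data from the socket
        work_queue.put(msg)

def worker():
    while True:
        msg = work_queue.get()    # blocks until a message is available
        handle(msg)               # real processing goes here
        work_queue.task_done()

threading.Thread(target=reader, daemon=True).start()
for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()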
Running a method as a background thread with sleep in Python:
import threading
import time

class ThreadingExample(object):
    """ Threading example class

    The run() method will be started and it will run in the background
    until the application exits.
    """

    def __init__(self, interval=1):
        """ Constructor

        :type interval: int
        :param interval: Check interval, in seconds
        """
        self.interval = interval

        thread = threading.Thread(target=self.run, args=())
        thread.daemon = True  # Daemonize thread
        thread.start()        # Start the execution

    def run(self):
        """ Method that runs forever """
        while True:
            # Do something
            print('Doing something important in the background')
            time.sleep(self.interval)

example = ThreadingExample()
time.sleep(3)
print('Checkpoint')
time.sleep(2)
print('Bye')