Python core application

I'm new to Python and I'm writing a script that
includes some timed routines.
My current approach is to instantiate a class
that includes those timers (threading.Timer),
but I don't want the script to exit when it reaches the
end of the function:
import mytimer
timer = mytimer()
Suppose I have a simple script like that one. All it
does is instantiate a mytimer object which performs a series
of timed activities.
In order for the application not to exit, I could use Qt like this:
from PyQt4.QtCore import QCoreApplication
import mytimer
import sys

def main():
    app = QCoreApplication(sys.argv)
    timer = mytimer()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
This way, app.exec_() blocks rather than returning immediately, and the
timer just keeps doing its thing 'forever' in the background.
Although this is a solution I've used before, pulling in Qt just for this doesn't
feel right to me.
So my question is: is there any way to accomplish this using only standard Python?
Thanks

Create a function in your script which tests a select or poll object to terminate a loop. Check out serve_forever in SocketServer.py from the standard library as an example.
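A minimal sketch of that pattern, loosely modeled on serve_forever (the function name, shutdown flag, and poll interval here are illustrative, not from the stdlib source):

import select

def run_forever(poll_interval=0.5):
    shutdown_requested = False
    while not shutdown_requested:
        # select() with a timeout acts as an interruptible sleep; real code
        # would pass file descriptors to watch instead of empty lists.
        select.select([], [], [], poll_interval)
        # ... check your timers here, or set shutdown_requested to exit ...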

A Google search for "python timer" finds:
http://docs.python.org/library/sched.html
http://docs.python.org/release/2.5.2/lib/timer-objects.html
The sched module seems to be exactly what you need.
Example:
>>> import sched, time
>>> s = sched.scheduler(time.time, time.sleep)
>>> def print_time(): print "From print_time", time.time()
...
>>> def print_some_times():
... print time.time()
... s.enter(5, 1, print_time, ())
... s.enter(10, 1, print_time, ())
... s.run()
... print time.time()
...
>>> print_some_times()
930343690.257
From print_time 930343695.274
From print_time 930343700.273
930343700.276
Once you have built your queue of times for things to happen, you just call the .run() method on your sched instance; it blocks until the queue is emptied, then returns. So you can put s.run() as the last thing in your script, and it will exit only when all the timed tasks are done.
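Since the asker's routines are recurring, note that a sched event can re-schedule itself. A rough sketch (the repeat_every helper is hypothetical, not part of the sched API):

import sched, time

s = sched.scheduler(time.time, time.sleep)

def repeat_every(delay, action):
    # Wrap the action so each firing re-enters itself into the queue.
    def wrapper():
        action()
        s.enter(delay, 1, wrapper, ())
    s.enter(delay, 1, wrapper, ())

def tick():
    print(time.time())

repeat_every(5, tick)
# The queue never empties, so run() blocks for the lifetime of the script.
s.run()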

import mytimer
import sys
from threading import Lock

lock = Lock()
lock.acquire() # put lock into locked state

def main():
    timer = mytimer()
    lock.acquire() # blocks until someone calls lock.release()

if __name__ == '__main__':
    main()
If you want a clean exit, you can just make mytimer() call lock.release() at some point.
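For example, a sketch of how mytimer might signal completion (the internals of mytimer, the finish method, and the 5-second delay are made up for illustration):

from threading import Lock, Timer

lock = Lock()
lock.acquire() # put lock into locked state

class mytimer(object):
    def __init__(self):
        # Hypothetical timer that fires once after 5 seconds.
        Timer(5, self.finish).start()

    def finish(self):
        print('timer fired')
        lock.release() # lets the blocked main thread proceed and the script exit

if __name__ == '__main__':
    timer = mytimer()
    lock.acquire() # blocks until finish() releases the lock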

Related

Get current process id when using `concurrent.futures.ProcessPoolExecutor`

I am using concurrent.futures.ProcessPoolExecutor as a high-level API to multiprocessing.
I want to identify the current process in the worker functions.
With the low-level multiprocessing API I can do it like this:

import multiprocessing

def worker():
    print(multiprocessing.current_process())
Is there a current_process() equivalent when using workers with ProcessPoolExecutor()?
Since each execution happens in a separate process, you can simply do
import os

def worker():
    # Get the process ID of the current process
    pid = os.getpid()
    ..
    .. do something with pid
For example,
from concurrent.futures import ProcessPoolExecutor
import os
import time

def task():
    time.sleep(1)
    print("Executing on Process {}".format(os.getpid()))

def main():
    with ProcessPoolExecutor(max_workers=3) as executor:
        for i in range(3):
            executor.submit(task)

if __name__ == '__main__':
    main()
➜ python3.9 so.py
Executing on Process 71137
Executing on Process 71136
Executing on Process 71138
Note that if the task at hand is small and executes fast enough, the pool may reuse the same process, so the pid can repeat. Try it out by removing the time.sleep call from my example.
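As an aside, since ProcessPoolExecutor is built on multiprocessing, multiprocessing.current_process() itself also works inside a worker. A small sketch:

from concurrent.futures import ProcessPoolExecutor
import multiprocessing

def worker():
    # Returns the worker's process object, e.g. named 'SpawnProcess-1' or 'ForkProcess-1'
    # depending on the platform's start method.
    print(multiprocessing.current_process().name)

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=2) as executor:
        for _ in range(2):
            executor.submit(worker)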

Python threading.Timer() does not work in my process

import os
import sys
from multiprocessing import Process, Queue
import threading

class Test:
    def __init__(self):
        print '__init__ is called'

    def say_hello_again_and_again(self):
        print 'Hello :D'
        threading.Timer(1, self.say_hello_again_and_again).start()

test = Test()
#test.say_hello_again_and_again()
process = Process(target=test.say_hello_again_and_again)
process.start()
This is my test code. The result:
pi#raspberrypi:~/Plant2 $ python test2.py
__init__ is called
Hello :D
If I use test.say_hello_again_and_again() , "Hello :D" is printed repeatedly.
But the process is not working as I expected. Why is "Hello :D" not being printed in my process?
What's happening in my process?
There are two problems with your code:
First: You start a process with start(). This does a fork, which means you now have two processes, the parent and the child, running side by side. The parent process immediately exits, because after start() it has reached the end of the program. To wait until the child has finished (which in your case is never), you have to add process.join().
I did test your suggestion, but it does not work.
Indeed. There is a second issue: you start a new thread with threading.Timer(1, ...).start(), but then the process immediately ends. The thread never gets a chance to run, because the underlying process dies right away. You'd also need to wait until the thread has stopped, with join().
This is how your program would look:
from multiprocessing import Process
import threading

class Test:
    def __init__(self):
        print '__init__ is called'

    def say_hello_again_and_again(self):
        print 'Hello :D'
        timer = threading.Timer(1, self.say_hello_again_and_again)
        timer.start()
        timer.join()

test = Test()
process = Process(target=test.say_hello_again_and_again)
process.start()
process.join()
But this is suboptimal at best, because you mix multiprocessing (which uses fork to start independent processes) and threading (which starts a thread within a process). While this is not really a problem, it makes debugging a lot harder (one problem with the code above, for example, is that you can't stop it with Ctrl-C, because for some reason the spawned process is inherited by the OS and kept running). Why don't you just do this?
from multiprocessing import Process, Queue
import time

class Test:
    def __init__(self):
        print '__init__ is called'

    def say_hello_again_and_again(self):
        while True:
            print 'Hello :D'
            time.sleep(1)

test = Test()
process = Process(target=test.say_hello_again_and_again)
process.start()
process.join()

Python Multithreading (while and apscheduler)

I am trying to call two functions simultaneously in Python. One is an infinite loop and the other one is started using apscheduler. Like this:
Thread.py
from multiprocessing import Process
import _While
import _Scheduler

if __name__ == '__main__':
    p1 = Process(target=_While.main())
    p1.start()
    p2 = Process(target=_Scheduler.main())
    p2.start()
_While.py
import time

def main():
    while True:
        print "while"
        time.sleep(0.5)
_Scheduler.py
import logging
from apscheduler.scheduler import Scheduler

def _scheduler():
    print "scheduler"

if __name__ == '__main__':
    logging.basicConfig()
    scheduler = Scheduler(standalone=True)
    scheduler.add_interval_job(lambda: _scheduler(), seconds=2)
    scheduler.start()
Since only "while" is printed, it seems that _Scheduler isn't starting.
Can someone help me?
You've got at least a couple of problems here. First, the target keyword should be a function, not the result of a function call, e.g.:

p1 = Process(target=_While.main) # Note the lack of a function call

Second, I don't see any _Scheduler.main function. Maybe you meant to do something like:
import logging
from apscheduler.scheduler import Scheduler

def _scheduler():
    print "scheduler"

def main():
    logging.basicConfig()
    scheduler = Scheduler(standalone=True)
    scheduler.add_interval_job(_scheduler, seconds=2) # I doubt that `lambda` is necessary here ...
    scheduler.start()

if __name__ == "__main__":
    main()
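Putting both fixes together, Thread.py would look something like this (a sketch; the join() calls are an addition so the parent waits for both children):

from multiprocessing import Process
import _While
import _Scheduler

if __name__ == '__main__':
    p1 = Process(target=_While.main)      # function object, not _While.main()
    p1.start()
    p2 = Process(target=_Scheduler.main)  # likewise, no call here
    p2.start()
    p1.join()
    p2.join()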

Allow Greenlet to finish work at end of main module execution

I'm making a library that uses gevent to do some work asynchronously. I'd like to guarantee that the work is completed, even if the main module finishes execution.
class separate_library(object):
    def __init__(self):
        import gevent.monkey; gevent.monkey.patch_all()

    def do_work(self):
        from gevent import spawn
        spawn(self._do)

    def _do(self):
        from gevent import sleep
        sleep(1)
        print 'Done!'

if __name__ == '__main__':
    lib = separate_library()
    lib.do_work()
If you run this, you'll notice the program ends immediately, and Done! doesn't get printed.
Now, the main module doesn't know, or care, how separate_library actually accomplishes the work (or even that gevent is being used), so it's unreasonable to require joining there.
Is there any way separate_library can detect certain types of program exits, and stall until the work is done? Keyboard interrupts, SIGINTs, and sys.exit() should end the program immediately, as that is probably the expected behaviour.
Thanks!
Try spawning your gevent greenlets from a new thread that is not a daemon thread. Your program will not exit while this non-daemon thread is still running.
import gevent
import threading

class separate_library(object):
    def __init__(self):
        import gevent.monkey; gevent.monkey.patch_all()

    def do_work(self):
        t = threading.Thread(target=self.spawn_gthreads)
        t.setDaemon(False)
        t.start()

    def spawn_gthreads(self):
        from gevent import spawn
        gthreads = [spawn(self._do, x) for x in range(10)]
        gevent.joinall(gthreads)

    def _do(self, sec):
        from gevent import sleep
        sleep(sec)
        print 'Done!'

if __name__ == '__main__':
    lib = separate_library()
    lib.do_work()
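An alternative sketch using atexit (my own suggestion, not from the answer above): track the spawned greenlets and join them in an exit hook. One caveat: atexit handlers also run on sys.exit(), so this would stall sys.exit() too, which may not match the asker's requirements.

import atexit
import gevent

class separate_library(object):
    def __init__(self):
        import gevent.monkey; gevent.monkey.patch_all()
        self._greenlets = []
        atexit.register(self._drain)  # runs at normal interpreter exit

    def do_work(self):
        self._greenlets.append(gevent.spawn(self._do))

    def _do(self):
        gevent.sleep(1)
        print('Done!')

    def _drain(self):
        # Block at exit until all outstanding greenlets have finished.
        gevent.joinall(self._greenlets)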

Different way to implement this threading?

I'm trying out threads in Python. I want a spinning cursor to display while another method runs (for 5-10 minutes). I've worked out some code, but is this how you would do it? I don't like to use globals, so I assume there is a better way.
import itertools
import sys
import time
from threading import Thread

c = True

def b():
    for j in itertools.cycle('/-\|'):
        if (c == True):
            sys.stdout.write(j)
            sys.stdout.flush()
            time.sleep(0.1)
            sys.stdout.write('\b')
        else:
            return

def a():
    global c
    #code does stuff here for 5-10 minutes
    #simulate with sleep
    time.sleep(2)
    c = False

Thread(target = a).start()
Thread(target = b).start()
EDIT:
Another issue now is that when the processing ends, the last element of the spinning cursor is still on screen, so something like \ is left printed.
You could use events:
http://docs.python.org/2/library/threading.html
I tested this and it works. It also keeps everything in sync. You should avoid changing/reading the same variables in different threads without synchronizing them.
#!/usr/bin/python
from threading import Thread
from threading import Event
import time
import itertools
import sys

def b(event):
    for j in itertools.cycle('/-\|'):
        if not event.is_set():
            sys.stdout.write(j)
            sys.stdout.flush()
            time.sleep(0.1)
            sys.stdout.write('\b')
        else:
            return

def a(event):
    #code does stuff here for 5-10 minutes
    #simulate with sleep
    time.sleep(2)
    event.set()

def main():
    c = Event()
    Thread(target = a, kwargs = {'event': c}).start()
    Thread(target = b, kwargs = {'event': c}).start()

if __name__ == "__main__":
    main()
Related to 'kwargs', from the Python docs (URL at the beginning of this post):
class threading.Thread(group=None, target=None, name=None, args=(), kwargs={})
...
kwargs is a dictionary of keyword arguments for the target invocation. Defaults to {}.
You're mostly on the right track, except for the global variable. Normally you'd need to coordinate access to shared data like that with a lock or semaphore, but in this special case you can take a shortcut and just check whether the watched thread is still running. This is what I mean:
from threading import Thread
import time
import itertools
import sys

def monitor_thread(watched_thread):
    chars = itertools.cycle('/-\|')
    while watched_thread.is_alive():
        sys.stdout.write(chars.next())
        sys.stdout.flush()
        time.sleep(0.1)
        sys.stdout.write('\b')

def worker_thread():
    # code does stuff here - simulated with sleep
    time.sleep(2)

if __name__ == "__main__":
    watched_thread = Thread(target=worker_thread)
    watched_thread.start()
    Thread(target=monitor_thread, args=(watched_thread,)).start()
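To address the EDIT about the leftover spinner character: one way to handle it (a small tweak to the monitor_thread above, my own addition) is to overwrite the stale character once the loop exits:

def monitor_thread(watched_thread):
    chars = itertools.cycle('/-\|')
    while watched_thread.is_alive():
        sys.stdout.write(chars.next())
        sys.stdout.flush()
        time.sleep(0.1)
        sys.stdout.write('\b')
    # The cursor sits over the last spinner char; blank it out and back up.
    sys.stdout.write(' \b')
    sys.stdout.flush()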
This is not properly synchronized, but I won't try to explain all of that right now because it's a whole lot of material. Try reading this: http://effbot.org/zone/thread-synchronization.htm
In your case, though, the lack of synchronization isn't that bad. The only thing that can happen is that the spinning bar spins a few ms longer than the background task actually needs.
