I am adding a job to a Redis jobstore, and on completion of the job an event handler runs.
In the event handler I check the job's return value and, based on it, remove the job id from the jobstore. The job is removed successfully, but immediately afterwards an exception is thrown.
Code
from datetime import datetime
import logging
import time

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.events import EVENT_JOB_EXECUTED

logging.basicConfig()

scheduler = BackgroundScheduler()
scheduler.add_jobstore('redis')
scheduler.start()
def tick():
    print('Tick! The time is: %s' % datetime.now())
    return 'success'

def removing_jobs(event):
    if event.retval == 'success':
        scheduler.remove_job(event.job_id)

scheduler.add_listener(removing_jobs, EVENT_JOB_EXECUTED)
try:
    count = 0
    while True:
        count += 1
        time.sleep(10)
        job_ret = scheduler.add_job(tick, 'interval', id=str(count), seconds=10)
except (KeyboardInterrupt, SystemExit):
    scheduler.shutdown()
Exception
Exception in thread APScheduler:
Traceback (most recent call last):
  File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "/.virtualenvs/py3/lib/python3.5/site-packages/apscheduler/schedulers/blocking.py", line 30, in _main_loop
    wait_seconds = self._process_jobs()
  File "/.virtualenvs/py3/lib/python3.5/site-packages/apscheduler/schedulers/base.py", line 995, in _process_jobs
    jobstore.update_job(job)
  File "/.virtualenvs/py3/lib/python3.5/site-packages/apscheduler/jobstores/redis.py", line 91, in update_job
    raise JobLookupError(job.id)
apscheduler.jobstores.base.JobLookupError: 'No job by the id of 1 was found'
In short: you are removing the job while it is being processed, so you should remove it outside its execution.
The scheduler doesn't know what the job's execution will do: it launches tick and then writes the job object back to the Redis jobstore, assuming it will be executed again. Before that update happens, the EVENT_JOB_EXECUTED listener fires removing_jobs.
So by the time the Redis jobstore tries to update the job's status, the job has already been deleted, and it raises JobLookupError.
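If each job is really meant to run only once, one way to sidestep the race entirely is to let APScheduler do the removal itself: a job scheduled with the 'date' trigger has no next run time after it fires, so the scheduler drops it from the jobstore on its own and no listener is needed. A minimal sketch against the code above, assuming one-shot semantics fit your use case:

from datetime import datetime, timedelta

# one-shot job: APScheduler removes it from the jobstore
# automatically once it has run
scheduler.add_job(tick, 'date',
                  run_date=datetime.now() + timedelta(seconds=10),
                  id=str(count))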
Related
I am working on a Python service that spawns a Process to handle the workload. Since I don't know at the start of the service how many workers I need, I chose not to use Pool. The following is a simplified version:
import multiprocessing as mp
import time
from datetime import datetime
def _print(s):  # just my cheap logging utility
    print(f'{datetime.now()} - {s}')

def run_in_process(q, evt):
    _print(f'starting process job')
    while not evt.is_set():  # True
        try:
            x = q.get(timeout=2)
            _print(f'received {x}')
        except:
            _print(f'timed-out')
if __name__ == '__main__':
    with mp.Manager() as manager:
        q = manager.Queue()
        evt = manager.Event()
        p = mp.Process(target=run_in_process, args=(q, evt))
        p.start()
        time.sleep(2)

        data = 100
        while True:
            try:
                q.put(data)
                time.sleep(0.5)
                data += 1
                if data > 110:
                    break
            except KeyboardInterrupt:
                _print('finishing...')
                # p.terminate()
                break

        time.sleep(3)
        _print('setting event 0')
        evt.set()
        _print('joining process')
        p.join()
        _print('done')
The program works and exits gracefully, without any error messages. However, if I use Ctrl-C before I have all 10 events processed, I get the following error before it exits.
2022-04-01 12:41:06.866484 - received 101
2022-04-01 12:41:07.367628 - received 102
^C2022-04-01 12:41:07.507805 - timed-out
2022-04-01 12:41:07.507886 - finishing...
Process Process-2:
Traceback (most recent call last):
  File "/<path-omitted>/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/<path-omitted>/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "mp.py", line 10, in run_in_process
    while not evt.is_set(): # True
  File "/<path-omitted>/python3.7/multiprocessing/managers.py", line 1088, in is_set
    return self._callmethod('is_set')
  File "/<path-omitted>/python3.7/multiprocessing/managers.py", line 819, in _callmethod
    kind, result = conn.recv()
  File "/<path-omitted>/python3.7/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "/<path-omitted>/python3.7/multiprocessing/connection.py", line 407, in _recv_bytes
    buf = self._recv(4)
  File "/<path-omitted>/python3.7/multiprocessing/connection.py", line 379, in _recv
    chunk = read(handle, remaining)
ConnectionResetError: [Errno 104] Connection reset by peer
2022-04-01 12:41:10.511334 - setting event 0
Traceback (most recent call last):
  File "mp.py", line 42, in <module>
    evt.set()
  File "/<path-omitted>/python3.7/multiprocessing/managers.py", line 1090, in set
    return self._callmethod('set')
  File "/<path-omitted>/python3.7/multiprocessing/managers.py", line 818, in _callmethod
    conn.send((self._id, methodname, args, kwds))
  File "/<path-omitted>/python3.7/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/<path-omitted>/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/<path-omitted>/python3.7/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
A few observations:
The double error message looks exactly the same when I press Ctrl-C with my actual project. I think this is a good representation of my problem.
If I add p.terminate(), it doesn't change the behavior when the program is left to finish by itself. But if I press Ctrl-C halfway through, I encounter the error message only once; I guess it comes from the main thread/process.
If I change while not evt.is_set(): in run_in_process to an infinite loop (while True:) and let the program run its course, I continue to see the periodic time-out prints, which makes sense. What I don't understand is that if I press Ctrl-C, the terminal starts spewing time-out messages with no time gap between them. What happened?
My ultimate question is: what is the correct way to construct this program so that when Ctrl-C is pressed (or a termination signal is sent to the program, for that matter), it stops gracefully?
I found a solution to this problem myself, using the signal module.
The idea is to set up a signal catcher to catch specific signals, such as signal.SIGINT, signal.SIGTERM.
import multiprocessing as mp
from threading import Event
import signal

if __name__ == '__main__':
    main_evt = Event()

    def stop_main_handler(signum, frame):
        if not main_evt.is_set():
            main_evt.set()

    signal.signal(signal.SIGINT, stop_main_handler)

    with mp.Manager() as manager:
        # creating mp queue, event and process
        q = manager.Queue()
        evt = manager.Event()
        p = mp.Process(target=..., args=(q, evt))
        p.start()

        while not main_evt.is_set():
            # processing data
            ...

        # cleanup
        evt.set()
        p.join()
Or you can wrap it in an object-oriented fashion:
import time  # in addition to the imports above

class SignalCatcher(object):
    def __init__(self):
        self._main_evt = Event()
        # register the handler when the catcher is created
        signal.signal(signal.SIGINT, self._stop_handler)

    def _stop_handler(self, signum, frame):
        if not self._main_evt.is_set():
            self._main_evt.set()

    def block_until_signaled(self):
        while not self._main_evt.is_set():
            time.sleep(2)
Then you can use it as follows:
if __name__ == '__main__':
    sc = SignalCatcher()
    # This has to happen outside the with-block. It seems the
    # multiprocessing library creates another process; if the
    # SignalCatcher is created inside the with-context, it fails
    # to signal each process correctly.
    with mp.Manager() as manager:
        # creating process and starting it
        # ...
        sc.block_until_signaled()
        # cleanup
        # ...
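Since the idea above mentions signal.SIGTERM as well, the same handler can be registered for that signal too, e.g. with a second signal.signal(signal.SIGTERM, self._stop_handler) call in __init__, so that a plain kill is handled the same way as Ctrl-C.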
I am using multiprocessing.Pool to work with an HTTP server in Python. It works great, but when I terminate it, I get a slew of errors from all the SpawnPoolWorkers, and I'm just wondering how to avoid this.
My main code:
def run(self):
    global pool
    port = self.arguments.port
    pool = multiprocessing.Pool(processes=self.arguments.threads)
    with http.server.HTTPServer(("", port), Handler) as daemon:
        print(f"serving on port {port}")
        while True:
            try:
                daemon.handle_request()
            except KeyboardInterrupt:
                print("\nexiting")
                pool.terminate()
                pool.join()
                return 0
I've tried doing nothing to the pool, I've tried pool.close(), and I've tried not joining. But even if I just run this code, without ever accessing the port or submitting anything to the pool, I still get a random list of errors like the following when I press Ctrl-C:
Process SpawnPoolWorker-8:
Process SpawnPoolWorker-4:
Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.10/3.10.1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/opt/homebrew/Cellar/python@3.10/3.10.1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/homebrew/Cellar/python@3.10/3.10.1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/pool.py", line 114, in worker
    task = get()
  File "/opt/homebrew/Cellar/python@3.10/3.10.1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/queues.py", line 365, in get
    with self._rlock:
  File "/opt/homebrew/Cellar/python@3.10/3.10.1/F
how do I exit the pool cleanly, with no errors, and with no output?
ok - I'm stupid - the control-c was also interrupting all the child processes. This fixed it:
def ignore_control_c():
    signal.signal(signal.SIGINT, signal.SIG_IGN)

pool = multiprocessing.Pool(processes=self.arguments.threads, initializer=ignore_control_c)
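The initializer runs once in each worker process as it starts up, so every SpawnPoolWorker ignores SIGINT and only the parent process sees the Ctrl-C; the parent can then terminate and join the pool without the workers dying mid-read with their own tracebacks.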
I have a web site using Tornado, and I need to call a function (do_work() in the following example) periodically.
import asyncio
import datetime
from typing import Any, Callable, Coroutine
import tornado
import tornado.ioloop
def schedule_next_sync(loop, func, seconds):
    time = datetime.datetime.timestamp(
        datetime.datetime.now() + datetime.timedelta(seconds=seconds)
    )

    def wrapper():
        loop.run_until_complete(func())
        schedule_next_sync(loop, func, seconds)

    print("The next run is sheduled for %s", datetime.datetime.fromtimestamp(time))
    tornado.ioloop.IOLoop.current().add_timeout(time, wrapper)

async def do_work(x):
    """To be run periodically"""
    print(f"...{datetime.datetime.now()}: {x}")
    await asyncio.sleep(1)
    print('....')

event_loop = asyncio.get_event_loop()
_func = lambda: do_work("test")
schedule_next_sync(event_loop, _func, 3)  # every 3 seconds
tornado.ioloop.IOLoop.current().start()
However, the code produces the following error:
The next run is sheduled for %s 2022-03-30 01:48:23.141079
ERROR:tornado.application:Exception in callback functools.partial(<function schedule_next_sync.<locals>.wrapper at 0x000001A7D6B99700>)
Traceback (most recent call last):
  File "C:\users\xxx\anaconda3\envs\x\lib\site-packages\tornado\ioloop.py", line 741, in _run_callback
    ret = callback()
  File ".\test.py", line 23, in wrapper
    loop.run_until_complete(func())
  File "C:\users\xxx\anaconda3\envs\x\lib\asyncio\base_events.py", line 592, in run_until_complete
    self._check_running()
  File "C:\users\xxx\anaconda3\envs\x\lib\asyncio\base_events.py", line 552, in _check_running
    raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
C:\users\xxx\anaconda3\envs\x\lib\site-packages\tornado\ioloop.py:761: RuntimeWarning: coroutine 'do_work' was never awaited
  app_log.error("Exception in callback %r", callback, exc_info=True)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
You cannot use loop.run_until_complete() when the event loop is already running.
To schedule the execution of async code from sync code or to run async code concurrently, you need to use asyncio.create_task().
Your example works for me, if I change the wrapper to this:
def wrapper():
    asyncio.create_task(func())
    schedule_next_sync(loop, func, seconds)
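As an aside, Tornado also ships a built-in helper for this pattern: tornado.ioloop.PeriodicCallback runs a callback at a fixed interval on the IOLoop. A minimal sketch, assuming a reasonably recent Tornado (5.x+, where an awaitable returned by the callback is scheduled on the loop):

import asyncio
import datetime
import tornado.ioloop

async def do_work(x):
    print(f"...{datetime.datetime.now()}: {x}")
    await asyncio.sleep(1)

# the interval is given in milliseconds
periodic = tornado.ioloop.PeriodicCallback(lambda: do_work("test"), 3000)
periodic.start()
tornado.ioloop.IOLoop.current().start()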
views.py:
from time import sleep
from threading import Thread

from django.http import HttpResponse

class PoliceJobs:

    def call_police_defence_jobs(request):
        job = PoliceDefenceJobs.police_jobs(request)
        sleep(0.5)
        job_details = PoliceDefenceJobDetails.police_defence_job_details(request)
        message = call_all(job, job_details)
        return HttpResponse(message)

    def call_statewise_police_jobs(request):
        job = PoliceDefenceJobs.statewise_police_jobs(request)
        sleep(0.5)
        job_details = PoliceDefenceJobDetails.statewise_police_job_details(request)
        message = call_all(job, job_details)
        return HttpResponse(message)

def police_jobs(request):
    try:
        t1 = Thread(target=PoliceJobs.call_police_defence_jobs, args=[request])
        t2 = Thread(target=PoliceJobs.call_statewise_police_jobs, args=[request])
        t1.start()
        t2.start()
        t1.join()
        t2.join()
        return HttpResponse("success")
    except:
        return HttpResponse("error")
urls.py
from django.urls import path
from .views import police_jobs

urlpatterns = [
    path('finish_police_jobs/', police_jobs),
]
error in shell:
Traceback (most recent call last):
  File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "/home/soubhagya/Desktop/carrier-circle/backend/finalize/views.py", line 840, in call_police_defence_jobs
    job = PoliceDefenceJobs.police_jobs(request)
AttributeError: type object 'PoliceDefenceJobs' has no attribute 'police_jobs'
In the PoliceJobs class I deliberately changed the call to PoliceDefenceJobs.police_jobs, which does not exist, in order to trigger an error.
So I am causing an error and handling it with the except block, but the error still shows up in the console and not in the browser.
The browser shows "success" whether or not there is an exception.
Exceptions in threads don't propagate to the thread that created them. See Catch a thread's exception in the caller thread in Python for a workaround.
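For illustration, a minimal sketch of that workaround (PropagatingThread is a hypothetical name, not part of the question's code): run the target as usual, but remember any exception and re-raise it when the caller joins:

import threading

class PropagatingThread(threading.Thread):
    """Thread that re-raises the target's exception on join()."""
    def run(self):
        self.exc = None
        try:
            super().run()  # calls self._target(*self._args, **self._kwargs)
        except Exception as e:
            self.exc = e

    def join(self, timeout=None):
        super().join(timeout)
        if self.exc:
            raise self.exc

def boom():
    raise ValueError("failed inside the thread")

t = PropagatingThread(target=boom)
t.start()
try:
    t.join()  # the ValueError surfaces here, in the caller
except ValueError as e:
    print("caught:", e)

With this, police_jobs would actually reach its except block when one of the worker threads fails.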
I am trying to use the APScheduler and SendGrid on Heroku to create a cron job for sending an email.
Even though the add_job method call seems to be executing correctly, I am getting the following error.
Below are the logs from heroku
2016-09-11T22:33:37.776867+00:00 heroku[clock.1]: State changed from crashed to starting
2016-09-11T22:33:40.672563+00:00 heroku[clock.1]: Starting process with command `python clock.py`
2016-09-11T22:33:41.353373+00:00 heroku[clock.1]: State changed from starting to up
2016-09-11T22:33:43.527949+00:00 app[clock.1]: created background scheduler
2016-09-11T22:33:43.528848+00:00 app[clock.1]: started background scheduler
2016-09-11T22:33:43.572751+00:00 app[clock.1]: added cron job
2016-09-11T22:33:43.585801+00:00 app[clock.1]: Exception in thread APScheduler (most likely raised during interpreter shutdown):
2016-09-11T22:33:43.585807+00:00 app[clock.1]: Traceback (most recent call last):
2016-09-11T22:33:43.585808+00:00 app[clock.1]: File "/app/.heroku/python/lib/python2.7/threading.py", line 801, in __bootstrap_inner
2016-09-11T22:33:43.585810+00:00 app[clock.1]: File "/app/.heroku/python/lib/python2.7/threading.py", line 754, in run
2016-09-11T22:33:43.585827+00:00 app[clock.1]: File "/app/.heroku/python/lib/python2.7/site-packages/apscheduler/schedulers/blocking.py", line 29, in _main_loop
2016-09-11T22:33:43.585829+00:00 app[clock.1]: File "/app/.heroku/python/lib/python2.7/threading.py", line 614, in wait
2016-09-11T22:33:43.585848+00:00 app[clock.1]: File "/app/.heroku/python/lib/python2.7/threading.py", line 364, in wait
2016-09-11T22:33:43.585851+00:00 app[clock.1]: <type 'exceptions.ValueError'>: list.remove(x): x not in list
2016-09-11T22:33:43.695569+00:00 heroku[clock.1]: Process exited with status 0
2016-09-11T22:33:43.719265+00:00 heroku[clock.1]: State changed from up to crashed
I am running one clock process, which is in the clock.py file below.
from apscheduler.schedulers.background import BackgroundScheduler
import sendgrid
import os
from sendgrid.helpers.mail import *
def send_email():
    try:
        sg = sendgrid.SendGridAPIClient(apikey=os.environ.get('SENDGRID_API_KEY'))
        print("created send grid api client")
        from_email = Email("ohta.g@husky.neu.edu")
        print("created from email")
        subject = "Weekly update"
        to_email = Email("ohta.g@husky.neu.edu")
        print("created to email")
        content = Content("text/plain", "Hello, Email!")
        print("created content")
        mail = Mail(from_email, subject, to_email, content)
        print("created mail")
        response = sg.client.mail.send.post(request_body=mail.get())
    except Exception as e:
        return e
try:
    sched = BackgroundScheduler()
    print("created background scheduler")
    sched.start()
    print("started background scheduler")
    sched.add_job(send_email, 'cron', day_of_week=6, hour=22, minute=20)
    print("added cron job")
except Exception as e:
    print e.message
Here is my Procfile.
clock: python clock.py
Here is my requirements.txt file.
APScheduler==3.1.0
sendgrid==3.4.0
Could someone please tell me what I'm doing wrong?
You started a background scheduler but then allowed your main thread to exit, which also ends the clock process. This is the entire reason BlockingScheduler exists. See Heroku's APScheduler instructions.
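A minimal sketch of clock.py restructured that way, keeping the original send_email function, so the main thread blocks instead of exiting:

from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler()

# send_email defined exactly as in the original clock.py
sched.add_job(send_email, 'cron', day_of_week=6, hour=22, minute=20)

sched.start()  # blocks, keeping the clock dyno alive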