Apscheduler script file stops silently - no error - python

I have a scheduler_project.py script file.
code:
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()

def func_1():
    pass  # updating file-1

def func_2():
    pass  # updating file-2

scheduler.add_job(func_1, 'cron', hour='10-23', minute='0-58', second=20)
scheduler.add_job(func_2, 'cron', hour='1')
scheduler.start()
When I run it (on a Windows machine):
E:\> python scheduler_project.py
E:\> # there is no error
In the log (I have added debug-level logging to the code above), it says the job is added and will start in (some x seconds), but it never starts.
In Task Manager, the command prompt process appears for a second and then disappears.
And my files are not getting updated either, which is to be expected.
What's happening? What is the right way to execute this scheduler script?

BackgroundScheduler was created to run alongside other code, so after starting the scheduler you are expected to run something else. Here the script reaches its end and exits, taking the scheduler's background thread with it. If you don't have any other work to do, you have to use some loop to keep the process alive. The documentation links to examples on GitHub, and one of them uses:
while True:
    time.sleep(2)
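Put together, a minimal sketch of the fixed script might look like this (the job bodies are placeholders standing in for the file updates):

import time
from apscheduler.schedulers.background import BackgroundScheduler

def func_1():
    pass  # placeholder: update file-1

def func_2():
    pass  # placeholder: update file-2

scheduler = BackgroundScheduler()
scheduler.add_job(func_1, 'cron', hour='10-23', minute='0-58', second=20)
scheduler.add_job(func_2, 'cron', hour='1')
scheduler.start()

try:
    # Keep the main thread alive; the scheduler runs in a daemon thread
    # and dies when the main thread exits.
    while True:
        time.sleep(2)
except (KeyboardInterrupt, SystemExit):
    scheduler.shutdown()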

How can I refresh the current date inside a running Python process?

I'm using supervisord to keep a Telegram bot written in Python running on a server, and I'm using the datetime library to get the current date for an operation that depends on it.
from datetime import date
today = date.today()
The problem: I noticed that, while the process is running, Python always returns the same date, so my bot returns the same output every day instead of a new one.
To work around that, I had to log into the server, stop supervisord, kill the process, and manually execute the Python script so the bot would run with the current date.
I thought about using a crontab to run supervisorctl restart all once per day, but when I ran that command the Python process didn't stop; even after I killed the process and ran the command, the output still showed yesterday's date, and I had to manually run python3 myfile.py to refresh it. Is there a way to refresh Python's date.today() without killing the process, or a way to kill and restart the Python process so it picks up the current date?
The current code:
def get_today_meditation(update, context):
    chat_id = update.message.chat_id
    today = date.today()
    print(today)
    [ ... ]

def main():
    key_api = os.environ.get('PYTHON_API_BREVIARIO_KEY')
    locale.setlocale(locale.LC_ALL, "pt_BR.UTF-8")
    updater = Updater(key_api, use_context=True)
    dispatcher = updater.dispatcher
    dispatcher.add_handler(CommandHandler('meditacaodehoje', get_today_meditation))
    updater.start_polling()
    updater.idle()
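Worth noting: date.today() is only frozen if it is evaluated once, for example at module import time as in the two-line snippet above; a value computed inside the handler is recomputed on every call. A minimal sketch of the difference (names are illustrative):

from datetime import date

# Evaluated once, when the module is imported; stays the same for the
# whole lifetime of the process (the stale behaviour described above).
TODAY_AT_IMPORT = date.today()

def get_today():
    # Evaluated on every call, so it always reflects the current date.
    return date.today()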

Python Pool.map() is not terminating but only in debug mode

This is not the first time I have had problems with Pool() while trying to debug something.
Please note that I am not trying to debug the code that is parallelized. I want to debug code that gets executed much later. The problem is that this later code is never reached, because pool.map() is not terminating.
I have created a simple pipeline which does a few things. Among those things is a very simple preprocessing step for textual data.
To speed things up I am running:
print('Preprocess text')
with Pool() as pool:
    df_preprocessed['text'] = pool.map(preprocessor.preprocess, df.text)
But here is the thing:
For some reason this code runs only one time. The second time, I end up in an endless loop in _handle_workers() in the pool.py module:
@staticmethod
def _handle_workers(pool):
    thread = threading.current_thread()

    # Keep maintaining workers until the cache gets drained, unless the pool
    # is terminated.
    while thread._state == RUN or (pool._cache and thread._state != TERMINATE):
        pool._maintain_pool()
        time.sleep(0.1)

    # send sentinel to stop workers
    pool._taskqueue.put(None)
    util.debug('worker handler exiting')
Note the while loop. My script simply ends up there when my preprocessing function is called a second time.
Please note: this only happens during a debugging session! If the script is executed without a debugger, everything works fine.
What could be the reason for this?
Environment
$ python --version
Python 3.5.6 :: Anaconda, Inc.
Update
This could be a known bug
infinite waiting in multiprocessing.Pool
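No fix is recorded here, but one workaround sometimes tried in situations like this (not confirmed to address the linked bug) is to swap multiprocessing.Pool for concurrent.futures.ProcessPoolExecutor, which offers a similar map-style interface. A sketch, reusing the preprocessor and dataframes from the snippet above:

from concurrent.futures import ProcessPoolExecutor

print('Preprocess text')
with ProcessPoolExecutor() as executor:
    # executor.map returns a lazy iterator, so wrap it in list() to get
    # the same eager behaviour as pool.map.
    df_preprocessed['text'] = list(executor.map(preprocessor.preprocess, df.text))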

In Python, run a scheduled-event function and a while-loop function with multiprocessing Pool

I have two functions. One is calen(), which checks the calendar and schedules something to execute, and the other is an infinite while loop, tech(). I tried to run them as multiple processes, couldn't see anything printing on the shell, and ended up with the following code, which at least shows the first process's output.
But while the first process (the calendar event running with apscheduler) shows all the pending jobs, the second job/function, the infinite loop, never starts!
How can I run both with multiprocessing/subprocess/multithreading while still being able to see the output from both functions in the shell or anywhere else?
def trade():
    return (calen(), tech())

with Pool(cpu_count()) as p:
    results = p.map(trade())
    print(list(results))
Previously I also tried:
if __name__ == '__main__':
    with Pool(processes=2) as pool:
        r1 = pool.apply_async(calen, ())
        r2 = pool.apply_async(tech, ())
        print(r1.get(timeout=120))
        print(r2.get(timeout=120))
I would appreciate it if anyone could show how to run a while loop and a scheduled event together while keeping the output of both visible.
I guess I was making a mistake with apscheduler. APScheduler itself runs jobs in multiple processes, both on a schedule and in an interval/while loop.
The while loop should be executed from apscheduler, not as a separate function.
Instead I had tried to run them separately, one with apscheduler and the other as an ordinary while loop. While apscheduler was running, it blocked every other operation.
This helped me: https://devcenter.heroku.com/articles/clock-processes-python
It's actually a good solution for multiprocessing as well (as far as I have understood):
from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler()

@sched.scheduled_job('interval', minutes=3)
def timed_job():
    print('This job is run every three minutes.')

@sched.scheduled_job('cron', day_of_week='mon-fri', hour=17)
def scheduled_job():
    print('This job is run every weekday at 5pm.')

sched.start()
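Applied to the question, a sketch along the same lines might look like this (calen and tech are the asker's functions; the trigger values and job bodies are placeholders):

from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler()

@sched.scheduled_job('cron', hour=9)
def calen_job():
    print('Checking calendar...')  # placeholder for what calen() did

@sched.scheduled_job('interval', seconds=5)
def tech_job():
    print('One tech step...')  # placeholder for one iteration of tech()'s loop

sched.start()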

Trigger 'cron' not working in apscheduler 3 while 'interval' works fine

I am trying to use apscheduler 3.1.0 to run a Python job every day at the same time, but it does not seem to run the job correctly. I then found that even in the simplest case the 'interval' trigger works, but 'cron' does not. When I run the following code in Python 2.7.11, it seems to be running but never prints anything.
from apscheduler.schedulers.blocking import BlockingScheduler

def job_function():
    print "Hello World"

sched = BlockingScheduler()
sched.add_job(job_function, 'cron', second='*/2')
sched.start()
When I replace
sched.add_job(job_function, 'cron', second='*/2')
with
sched.add_job(job_function, 'interval', seconds=2)
it works fine.
I already updated setuptools to 20.6.7. Does anybody know what is wrong?
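No answer is recorded here, but a first diagnostic step (using only standard APScheduler and logging APIs) is to enable debug logging: APScheduler then reports when the job is added and when the next wakeup is due, which usually shows how the cron trigger is being interpreted. A sketch:

import logging
from apscheduler.schedulers.blocking import BlockingScheduler

# Show APScheduler's own debug output (job added, next wakeup time, etc.).
logging.basicConfig(level=logging.DEBUG)

def job_function():
    print("Hello World")  # Python 3 syntax; on 2.7 use: print "Hello World"

sched = BlockingScheduler()
sched.add_job(job_function, 'cron', second='*/2')
sched.start()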

APScheduler not executing the Python function

I am learning Python and was tinkering with Advanced Python Scheduler. I am not able to get it working, though.
import time
from datetime import datetime
from apscheduler.scheduler import Scheduler

sched = Scheduler(standalone=True)
sched.start()

#@sched.cron_schedule(second=5)
def s():
    print "hi"

sched.add_interval_job(s, seconds=10)

i = 0
while True:
    print i
    i = i + 1
    time.sleep(3)

sched.shutdown()
I am sure I am missing something basic. Could someone please point it out?
Also, would you recommend crontab over the advanced scheduler? I want my script to run every 24 hours.
Thanks
Standalone mode means that sched.start() will block, so the code below it will NOT be executed. So first create the scheduler, then add the interval job and finally start the scheduler.
As for the crontab, how about just sched.add_cron_job(s, hour=0)? That will execute it at midnight every day.
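Following that advice, a reordered sketch of the original script (same APScheduler 2.x API and Python 2 syntax as the question):

from apscheduler.scheduler import Scheduler

def s():
    print "hi"

sched = Scheduler(standalone=True)
sched.add_interval_job(s, seconds=10)  # add jobs before starting
sched.start()  # blocks here in standalone mode; Ctrl+C to stop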
