I am using BackgroundScheduler to schedule my jobs. When I execute the Python script in the console, the print statements are not executed. Is the scheduler being terminated? Below is my sample code:
from apscheduler.schedulers.background import BackgroundScheduler

def my_task1():
    print("Task 1")

def my_task2():
    print("Task 2")

if __name__ == '__main__':
    scheduler = BackgroundScheduler()
    # cron fields are singular: second='*/5' fires every 5 seconds
    scheduler.add_job(my_task1, 'cron', id='my_task1', second='*/5')
    scheduler.add_job(my_task2, 'cron', id='my_task2', second='*/10')
    scheduler.start()
When I run the above script from the command line, I am not able to see the print statements in the console. Am I missing something?
You have selected a scheduler that runs in a background thread, and then you let the script exit. That is why nothing happens: the jobs never get any time to execute. Use BlockingScheduler instead if you want to keep the script running.
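A minimal sketch of the same idea with BlockingScheduler (start() blocks the main thread, so the process stays alive and the jobs keep firing):

from apscheduler.schedulers.blocking import BlockingScheduler

def my_task1():
    print("Task 1")

if __name__ == '__main__':
    scheduler = BlockingScheduler()
    scheduler.add_job(my_task1, 'interval', seconds=5)
    scheduler.start()  # blocks here until Ctrl+C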
You can use a while loop to keep the main thread alive:
from apscheduler.schedulers.background import BackgroundScheduler
import time

def my_task1():
    print("Task 1")

def my_task2():
    print("Task 2")

if __name__ == '__main__':
    scheduler = BackgroundScheduler()
    scheduler.add_job(my_task1, 'cron', id='my_task1', second='*/5')
    scheduler.add_job(my_task2, 'cron', id='my_task2', second='*/10')
    scheduler.start()
    while True:
        time.sleep(1)
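As a follow-up, the APScheduler docs wrap this keep-alive loop in a try/except so the scheduler shuts down cleanly on Ctrl+C; a minimal sketch:

try:
    while True:
        time.sleep(1)
except (KeyboardInterrupt, SystemExit):
    # stop the scheduler and its worker threads cleanly
    scheduler.shutdown()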
EDIT: I found the issue. It was a problem with PyCharm: I ran the .py outside of PyCharm and it worked as expected. After enabling "Emulate terminal in output console" in PyCharm, it now works there as well.
Expectations:
APScheduler spawns a thread that checks a website for something.
If that something is found (possibly multiple instances of it), the thread spawns one process per hit to download it.
After five seconds the next check thread spawns, while earlier downloads may continue in the background.
Problem:
The spawned processes never cease to exist, which breaks other parts of the code (not included), because I need to check whether the processes are done, etc.
If I use a simple time.sleep(5) instead (see code), it works as expected.
No, I cannot set max_instances to 1, because that would stop the scheduled job from running whenever a download process is still active.
Code:
import datetime
import multiprocessing

from apscheduler.schedulers.background import BackgroundScheduler

class DownloadThread(multiprocessing.Process):
    def __init__(self):
        super().__init__()
        print("Process started")

def main():
    print(multiprocessing.active_children())
    # prints: [<DownloadThread name='DownloadThread-1' pid=3188 parent=7088 started daemon>,
    #          <DownloadThread name='DownloadThread-3' pid=12228 parent=7088 started daemon>,
    #          <DownloadThread name='DownloadThread-2' pid=13544 parent=7088 started daemon>,
    #          ...
    #         ]
    new_process = DownloadThread()
    new_process.daemon = True
    new_process.start()
    new_process.join()

if __name__ == '__main__':
    sched = BackgroundScheduler()
    sched.add_job(main, 'interval', args=(), seconds=5, max_instances=999,
                  next_run_time=datetime.datetime.now())
    sched.start()
    while True:
        # main()  # works. Processes despawn.
        # time.sleep(5)
        input()
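One detail that may be relevant here: on POSIX systems a finished multiprocessing.Process lingers as a zombie until it is joined, and the multiprocessing docs note that active_children() has the side effect of joining any processes that have already finished. A minimal sketch of reaping completed downloads without blocking, assuming a hypothetical downloads list of started DownloadThread instances:

import multiprocessing

def reap_finished(downloads):
    # side effect: joins any child processes that have already exited
    multiprocessing.active_children()
    # keep only the processes that are still running; is_alive() does not block
    return [p for p in downloads if p.is_alive()]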
I'm trying to stop the while loop from inside the foo() function. I've tried exit() / sys.exit() without success. How can I completely stop the execution of the program from inside the function?
from apscheduler.schedulers.background import BackgroundScheduler
from datetime import datetime, timedelta
import time
import sys

def foo(stop=False):
    print('Function foo executed')
    if stop:
        sys.exit()

scheduler = BackgroundScheduler()
dd = datetime.now() + timedelta(seconds=10)
scheduler.add_job(foo, 'date', run_date=dd, args=[True])
scheduler.start()

while True:
    print('Inside the loop')
    time.sleep(2)
Use psutil
import psutil
psutil.Process().terminate()
From the docs:
psutil.Process(pid=None): if pid is omitted, the current process PID (os.getpid()) is used.
Be aware that terminate() will make the process exit with code 0. If you want a different exit code, you can use send_signal() or even kill().
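Applied to the question's foo(), a minimal sketch (the job runs in one of the scheduler's worker threads, where a plain sys.exit() only raises SystemExit inside that thread instead of stopping the process):

import psutil

def foo(stop=False):
    print('Function foo executed')
    if stop:
        # terminate the whole process, not just the worker thread
        psutil.Process().terminate()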
The job_function isn't getting executed even once, even though I waited for more than 10 minutes.
from apscheduler.schedulers.background import BackgroundScheduler
from send_mail import send_mail  # assuming the send_mail module exposes a send_mail() function

def job_function():
    print("Hello World")
    send_mail('abc@test.com')

sched = BackgroundScheduler()
sched.add_job(job_function, 'interval', minutes=1)
sched.start()
Based on this code, and this code only, the problem looks like your program terminates before the interval is reached.
Try adding this infinite loop at the end of your program, which will prevent it from quitting:

import time

while True:
    time.sleep(1000)

Then terminate your program with Ctrl+C.
I am using Python APScheduler to schedule a specific task every 45 minutes. The problem is, when I add the job and start the scheduler, the first run happens 45 minutes from now.
from apscheduler.schedulers.blocking import BlockingScheduler

class myClass:
    def schedule(self):
        self.scheduler = BlockingScheduler()
        self.scheduler.add_job(self.myJob, 'interval', minutes=45)
        self.scheduler.start()

    def myJob(self):
        print('I finally started')
I tried setting start_date, but with no success. How can I make sure the job is executed immediately, instead of waiting for the first interval to elapse?
Try passing next_run_time=datetime.now() to add_job().
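In the context of the question's class, a minimal sketch (note that datetime must be imported):

from datetime import datetime
from apscheduler.schedulers.blocking import BlockingScheduler

class myClass:
    def schedule(self):
        self.scheduler = BlockingScheduler()
        # next_run_time forces an immediate first run; the 45-minute
        # interval then applies to every subsequent run
        self.scheduler.add_job(self.myJob, 'interval', minutes=45,
                               next_run_time=datetime.now())
        self.scheduler.start()

    def myJob(self):
        print('I finally started')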
Not a good solution, but it works for me:
from apscheduler.schedulers.blocking import BlockingScheduler

class myClass:
    def schedule(self):
        self.myJob()  # run the job once immediately, then start the scheduler
        self.scheduler = BlockingScheduler()
        self.scheduler.add_job(self.myJob, 'interval', minutes=45)
        self.scheduler.start()

    def myJob(self):
        print('I finally started')
The given answers are too complex for a simple task that is well documented:
https://apscheduler.readthedocs.io/en/3.x/modules/triggers/date.html#examples
To add a job to be run immediately:
# The 'date' trigger and datetime.now() as run_date are implicit:
sched.add_job(my_job)
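Spelled out explicitly, that one-liner is equivalent to:

from datetime import datetime

sched.add_job(my_job, 'date', run_date=datetime.now())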
I'm having trouble getting python-daemon 1.6 to play along with APScheduler to manage a list of tasks. (The scheduler needs to run them periodically at specific chosen times, with seconds resolution.)
Working (until Ctrl+C is pressed):
from apscheduler.scheduler import Scheduler
import logging
import signal

def job_function():
    print "Hello World"

def init_schedule():
    logging.basicConfig(level=logging.DEBUG)
    sched = Scheduler()
    # Start the scheduler
    sched.start()
    return sched

def schedule_job(sched, function, periodicity, start_time):
    sched.add_interval_job(function, seconds=periodicity, start_date=start_time)

if __name__ == "__main__":
    sched = init_schedule()
    schedule_job(sched, job_function, 120, '2011-10-06 12:30:09')
    schedule_job(sched, job_function, 120, '2011-10-06 12:31:03')
    # APScheduler's Scheduler only works until the main thread exits
    signal.pause()
    # Or:
    # time.sleep(300)
Sample Output:
INFO:apscheduler.threadpool:Started thread pool with 0 core threads and 20 maximum threads
INFO:apscheduler.scheduler:Scheduler started
DEBUG:apscheduler.scheduler:Looking for jobs to run
DEBUG:apscheduler.scheduler:No jobs; waiting until a job is added
INFO:apscheduler.scheduler:Added job "job_function (trigger: interval[0:00:30], next run at: 2011-10-06 18:30:39)" to job store "default"
INFO:apscheduler.scheduler:Added job "job_function (trigger: interval[0:00:30], next run at: 2011-10-06 18:30:33)" to job store "default"
DEBUG:apscheduler.scheduler:Looking for jobs to run
DEBUG:apscheduler.scheduler:Next wakeup is due at 2011-10-06 18:30:33 (in 10.441128 seconds)
With python-daemon, the output is blank. Why isn't the DaemonContext spawning the process correctly?
EDIT - Working: After reading the python-daemon source, I added stdout and stderr to the DaemonContext and was finally able to see what was going on.
import daemon
import logging

def job_function():
    print "Hello World"
    print >> test_log, "Hello World"

def init_schedule():
    logging.basicConfig(level=logging.DEBUG)
    sched = Scheduler()
    sched.start()
    return sched

def schedule_job(sched, function, periodicity, start_time):
    sched.add_interval_job(function, seconds=periodicity, start_date=start_time)

if __name__ == "__main__":
    test_log = open('daemon.log', 'w')
    daemon.DaemonContext.files_preserve = [test_log]
    try:
        with daemon.DaemonContext():
            from datetime import datetime
            from apscheduler.scheduler import Scheduler
            import signal
            logging.basicConfig(level=logging.DEBUG)
            sched = init_schedule()
            schedule_job(sched, job_function, 120, '2011-10-06 12:30:09')
            schedule_job(sched, job_function, 120, '2011-10-06 12:31:03')
            signal.pause()
    except Exception, e:
        print e
I do not know much about python-daemon, but test_log in job_function() is not defined. The same problem occurs in init_schedule(), where you reference Scheduler.
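A minimal sketch of one way to restructure it, assuming the goal is simply that both names are bound before the daemon forks (this is not the original poster's exact code): do all imports at module level and pass the log file to the job explicitly.

import daemon
import logging
import signal

from apscheduler.scheduler import Scheduler

def job_function(log_file):
    print >> log_file, "Hello World"
    log_file.flush()

if __name__ == "__main__":
    test_log = open('daemon.log', 'w')
    # pass files_preserve to the DaemonContext instance so the daemon
    # keeps this file descriptor open across the fork
    with daemon.DaemonContext(files_preserve=[test_log]):
        logging.basicConfig(level=logging.DEBUG)
        sched = Scheduler()
        sched.start()
        sched.add_interval_job(job_function, seconds=120,
                               start_date='2011-10-06 12:30:09',
                               args=[test_log])
        signal.pause()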