gevent + flask seems to block - python

I want to run a background worker in the same script that Flask is running in, and Flask seems to be blocking, which I guess is understandable. Essentially I want the script to check key system metrics every second, so I don't want to use something like Celery or a big queueing system to do it.
Simple code example
#!/usr/bin/env python
import gevent
from flask import Flask

class Monitor:
    def __init__(self, opts):
        self.opts = opts

    def run(self):
        print "do something: %i" % self.opts
        gevent.sleep(1)
        self.run()

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    threads = []
    for mon in [1, 2]:
        monitor = Monitor(mon)
        threads.append(gevent.spawn(monitor.run))
    threads.append(gevent.spawn(app.run))
    gevent.joinall(threads)
My output looks like
$ ./so.py
do something: 1
do something: 2
* Running on http://127.0.0.1:5000/
If I remove the threads.append for app.run it runs fine. Is this possible to do, or am I barking up the wrong tree?
Thanks

Add the following two lines to your script:
from gevent import monkey
monkey.patch_all()
before the line:
from flask import Flask
and everything will work.
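For reference, here is a minimal sketch of the original script with the patch applied (the Monitor class is collapsed into a plain function for brevity; this is my rewrite, not the original poster's final code):
#!/usr/bin/env python
# Patch the standard library before Flask (and its dev server) is imported,
# so its blocking socket calls become cooperative with gevent greenlets.
from gevent import monkey
monkey.patch_all()

import gevent
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

def monitor(opts):
    # Loop forever, yielding to other greenlets during the sleep.
    while True:
        print "do something: %i" % opts
        gevent.sleep(1)

if __name__ == '__main__':
    threads = [gevent.spawn(monitor, mon) for mon in [1, 2]]
    threads.append(gevent.spawn(app.run))
    gevent.joinall(threads)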

This is how I ended up handling the issue, using APScheduler v2:
#!/usr/bin/env python
import gevent
import time
from flask import Flask
from apscheduler.scheduler import Scheduler

sched = Scheduler()
sched.start()

class Monitor:
    def __init__(self, opts):
        self.opts = opts

    def run(self):
        @sched.interval_schedule(seconds=1)
        def handle_run():
            print "do something: %i" % self.opts

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    for mon in [1, 2]:
        monitor = Monitor(mon)
        monitor.run()
    app.run(threaded=True)

Try using the below:
class Monitor:
    def __init__(self, opts):
        self.opts = opts

    def run(self):
        while True:
            print "do something: %i" % self.opts
            gevent.sleep(1)
and then maybe don't joinall, since it doesn't seem like you really want to wait for them to complete before doing something else.
You may also need to put a try/except statement inside the while loop, and respawn if there is an error that kills the greenlet.
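A minimal sketch of that pattern (the opts values are just illustrative; the try/except is what keeps each greenlet from dying on an error):
import gevent

def monitor(opts):
    while True:
        try:
            print "do something: %i" % opts   # the real work goes here
        except Exception as err:
            # Log the error and keep looping instead of letting the
            # greenlet die and needing to be respawned.
            print "monitor error: %s" % err
        gevent.sleep(1)

# Spawn without joinall so the rest of the script can carry on.
for mon in [1, 2]:
    gevent.spawn(monitor, mon)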

Related

How to run a background function after returning the response from web.py

I would like to know: is it possible to run a function after the response is returned from a web.py service, when that function takes a long time to run?
Let's take the example below.
File name: code.py
import web
import time

urls = (
    '/', 'index'
)
app = web.application(urls, globals())

class index:
    def GET(self):
        try:
            with open('filename.txt', 'a') as file:
                for i in range(100):
                    time.sleep(1)
                    file.write("No of times: {}".format(i))
            return "some json response"
        except:
            return "Exception occurred"

if __name__ == "__main__":
    app.run()
When I run the above code, it obviously takes time, because we sleep one second per iteration and then write into the file, so I have to wait 100 seconds to get the response from the service.
I want to skip this 100-second wait.
Expected: first return the response to the client, and then run this part in the background.
Can somebody provide a solution? Thanks.
Have a look at the Python documentation for Thread.run().
Note:
With a background task you won't be able to return "Exception occurred" the way you do now. I believe you're OK with that.
Here's a small, easy solution. There are other ways too, but I feel you should explore more by yourself since you're a Python beginner. :)
import web
import time
from threading import Thread

urls = (
    '/', 'index'
)
app = web.application(urls, globals())

class index:
    def writeToFile(self):
        try:
            with open('filename.txt', 'a') as file:
                for i in range(100):
                    time.sleep(1)
                    file.write("No of times: {}".format(i))
            # Log completion
        except:
            pass  # Log error

    def GET(self):
        thread = Thread(target=self.writeToFile)
        thread.start()
        return {<myJSON>}  # placeholder for your real JSON response

if __name__ == "__main__":
    app.run()

Running a simple periodic task with Celery

I can't seem to figure out how to get this working. I want to run a function every ten seconds:
from __future__ import absolute_import, unicode_literals, print_function
from celery import Celery
import app as x  # the library which holds the func I want to task

app = Celery(
    'myapp',
    broker='amqp://guest@localhost//',
)
app.conf.timezone = 'UTC'

@app.task
def shed_task():
    x.run()

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Calls shed_task() every 10 seconds.
    sender.add_periodic_task(10.0, shed_task.s(), name='add every 10')

if __name__ == '__main__':
    app.start()
When I run the script, it just shows me a bunch of commands I can use with Celery. How can I get this running? Do I have to run it from the command line or something?
Additionally, when I get it running, will I be able to see a list of completed tasks along with any errors?
You can do it simply with the Python threading module, like below:
import time, threading

def foo():
    print(time.ctime())
    threading.Timer(10, foo).start()

foo()

Threading with Bottle.py Server

I'm having an issue with threading that I can't solve in any way I've tried. I searched on Stack Overflow too, but all I could find were cases that didn't apply to me, or explanations that I didn't understand.
I'm trying to build an app with BottlePy, and one of the features I want requires a function to run in the background. For this, I'm trying to make it run in a thread. However, when I start the thread, it runs twice.
I've read in some places that it should be possible to check whether the function is in the main script or in a module using if __name__ == '__main__':, but I'm not able to do this, since __name__ always returns the name of the module.
Below is an example of what I'm doing right now.
The main script:
# main.py
from MyClass import *
from bottle import *

arg = something
myObject = MyClass(arg)

app = Bottle()
app.run('''bottle args''')
The class:
# MyClass.py
import threading
import time

class MyClass:
    def check_list(self, theList, arg1):
        a_list = something()
        time.sleep(5)
        self.check_list(a_list, arg1)

    def __init__(self, arg1):
        if __name__ == '__main__':
            self.a_list = arg1.returnAList()
            t = threading.Thread(target=self.check_list, args=(self.a_list, arg1))
So what I intend here is to have check_list running in a thread all the time, doing something and waiting some seconds to run again. All this so I can have the list updated, and be able to read it with the main script.
Can you explain to me what I'm doing wrong, why the thread is running twice, and how can I avoid this?
This works fine:
import threading
import time

class MyClass:
    def check_list(self, theList, arg1):
        keep_going = True
        while keep_going:
            print("check list")
            # do stuff
            time.sleep(1)

    def __init__(self, arg1):
        self.a_list = ["1", "2"]
        t = threading.Thread(target=self.check_list, args=(self.a_list, arg1))
        t.start()

myObject = MyClass("something")
Figured out what was wrong thanks to the user Weeble's comment. When he said 'something is causing your main.py to run twice', I remembered that Bottle has an argument called 'reloader'. When set to True, it makes the application load twice, and thus the thread creation runs twice as well.
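In other words, either pass reloader=False to app.run, or create the thread only in the reloader's child process. Bottle sets the BOTTLE_CHILD environment variable in the child it spawns for the reloader, so a guard like this sketch (host, port, and the "something" argument are just placeholders) avoids the double start while keeping the reloader:
# main.py -- sketch only
import os
from bottle import Bottle
from MyClass import MyClass

app = Bottle()

if __name__ == '__main__':
    # With reloader=True, Bottle runs a parent watcher process plus a child
    # that actually serves requests; only the child sees BOTTLE_CHILD.
    if os.environ.get('BOTTLE_CHILD'):
        myObject = MyClass("something")   # start the background thread once
    app.run(host='localhost', port=8080, reloader=True)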

Best way to keep Python script running?

I'm using APScheduler to run some recurring tasks as follows:
from apscheduler.scheduler import Scheduler
from time import time, sleep

apsched = Scheduler()
apsched.start()

def doSomethingRecurring():
    pass  # Do something really interesting here..

apsched.add_interval_job(doSomethingRecurring, seconds=2)

while True:
    sleep(10)
Because the interval job ends when this script ends, I simply added the while True loop at the end. I don't really know if this is the best, let alone the most Pythonic, way to do this though. Is there a "better" way of doing this? All tips are welcome!
Try using the blocking scheduler. apsched.start() will just block. You have to set it up before starting.
EDIT: Some pseudocode in response to the comment.
# In APScheduler 3.x, BlockingScheduler comes from apscheduler.schedulers.blocking
from apscheduler.schedulers.blocking import BlockingScheduler

apsched = BlockingScheduler()

def doSomethingRecurring():
    pass  # Do something really interesting here..

apsched.add_job(doSomethingRecurring, trigger='interval', seconds=2)

apsched.start()  # will block
Try this code. It runs a Python script as a daemon:
import os
import time
from datetime import datetime
from daemon import runner

class App():
    def __init__(self):
        self.stdin_path = '/dev/null'
        self.stdout_path = '/dev/tty'
        self.stderr_path = '/dev/tty'
        self.pidfile_path = '/var/run/mydaemon.pid'
        self.pidfile_timeout = 5

    def run(self):
        filepath = '/tmp/mydaemon/currenttime.txt'
        dirpath = os.path.dirname(filepath)
        while True:
            if not os.path.exists(dirpath) or not os.path.isdir(dirpath):
                os.makedirs(dirpath)
            f = open(filepath, 'w')
            f.write(datetime.strftime(datetime.now(), '%Y-%m-%d %H:%M:%S'))
            f.close()
            time.sleep(10)

app = App()
daemon_runner = runner.DaemonRunner(app)
daemon_runner.do_action()
Usage:
> python mydaemon.py
usage: md.py start|stop|restart
> python mydaemon.py start
started with pid 8699
> python mydaemon.py stop
Terminating on signal 15

Python Multithreading (while and apscheduler)

I am trying to call two functions simultaneously in Python. One is an infinite loop and the other one is started using apscheduler. Like this:
Thread.py
from multiprocessing import Process
import _While
import _Scheduler

if __name__ == '__main__':
    p1 = Process(target=_While.main())
    p1.start()
    p2 = Process(target=_Scheduler.main())
    p2.start()
_While.py
import time

def main():
    while True:
        print "while"
        time.sleep(0.5)
_Scheduler.py
import logging
from apscheduler.scheduler import Scheduler

def _scheduler():
    print "scheduler"

if __name__ == '__main__':
    logging.basicConfig()
    scheduler = Scheduler(standalone=True)
    scheduler.add_interval_job(lambda: _scheduler(), seconds=2)
    scheduler.start()
Since only "while" is printed, it seems that _Scheduler isn't starting.
Can someone help me?
You've got at least a couple problems here. First, the target keyword should be a function, not the result of a function. e.g.:
p1 = Process(target=_While.main) # Note the lack of function call
Second, I don't see any _Scheduler.main function. Maybe you meant to do something like:
import logging
from apscheduler.scheduler import Scheduler

def _scheduler():
    print "scheduler"

def main():
    logging.basicConfig()
    scheduler = Scheduler(standalone=True)
    scheduler.add_interval_job(_scheduler, seconds=2)  # I doubt that `lambda` is necessary here ...
    scheduler.start()

if __name__ == "__main__":
    main()
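Putting both fixes together, and assuming _Scheduler.py now defines main() as shown above, Thread.py might look like the sketch below (joins added only so the parent process keeps running while the children work):
# Thread.py
from multiprocessing import Process
import _While
import _Scheduler

if __name__ == '__main__':
    # Pass the functions themselves; calling them (with parentheses) would
    # run them sequentially in this process before the children ever start.
    p1 = Process(target=_While.main)
    p2 = Process(target=_Scheduler.main)
    p1.start()
    p2.start()
    p1.join()
    p2.join()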
