Starting Redis worker from Flask as new thread - python

I would like to start new Redis workers (as new threads) from a Flask interface.
For this, I have the following function (utils.py):
import os
import redis
from rq import Connection, Queue, Worker

def start_worker():
    listen = ['default']
    redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
    conn = redis.from_url(redis_url)
    with Connection(conn):
        print('STARTING WORKER..')
        worker = Worker(Queue('default'), connection=conn, name='foo2')
        worker.work()
and the following call in routes.py:
# from threading import Thread  # for starting the worker as a plain thread
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)

@app.route('/')
@app.route('/index', methods=['GET', 'POST'])
@login_required
def index():
    form = WorkerForm()
    if form.validate_on_submit():
        # w = Thread(target=utils.start_worker)
        # w.daemon = True
        # w.start()
        executor.submit(utils.start_worker)
        return redirect(url_for('index'))
Now, when I run the function start_worker() manually from the console, I see the worker registering.
When calling the function through Flask, I see the message printed ("STARTING WORKER.."), but no worker registers.
Initially, I wanted to start it as a normal Thread (commented code), but this results in ValueError: signal only works in main thread.
What may I be missing here?
Thanks
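A likely culprit: RQ's Worker.work() installs SIGINT/SIGTERM handlers, and signal handlers can only be installed from the main thread, so the same ValueError is raised inside the executor (and silently swallowed by the returned future). A minimal sketch of one workaround, assuming your RQ version exposes the handler installation as an overridable _install_signal_handlers hook:

from rq import Worker

class ThreadableWorker(Worker):
    def _install_signal_handlers(self):
        # Deliberately skip signal handler installation: signal.signal()
        # raises ValueError outside the main thread, which is where this
        # worker runs when submitted to a ThreadPoolExecutor.
        pass

Constructing the worker in start_worker() as ThreadableWorker(...) instead of Worker(...) should let it run in a background thread, at the cost of losing the warm-shutdown handling those signals provide.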

Related

How to exit the previously opened function created from socketio.start_background_task() when new connection is being made

Every time I refresh the page from the client side, a new connection is made with the Flask server and it runs backgroundFunction() again without exiting the previously opened one, so the number of running copies increases as I refresh the page again and again.
from flask import Flask
from flask_socketio import SocketIO, send, emit
import socket
from time import sleep
import datetime

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret'
app.config['DEBUG'] = True
socketio = SocketIO(app, cors_allowed_origins="*", async_mode=None, logger=False, engineio_logger=False)

def backgroundFunction():
    while True:
        data = "I am Data"
        socketio.emit('data', data, broadcast=True)
        socketio.sleep(2)

@socketio.on('connect')
def socketcon():
    print('Client connected')
    socketio.start_background_task(backgroundFunction)

if __name__ == "__main__":
    socketio.run(app, port=5009)
Look at the example code in the Flask-SocketIO repository to learn one possible way to implement a background job that starts the first time an event is triggered.
Code is here. Here is the relevant excerpt:
from threading import Lock

thread = None
thread_lock = Lock()

def background_thread():
    """Example of how to send server generated events to clients."""
    count = 0
    while True:
        socketio.sleep(10)
        count += 1
        socketio.emit('my_response',
                      {'data': 'Server generated event', 'count': count})

@socketio.event
def connect():
    global thread
    with thread_lock:
        if thread is None:
            thread = socketio.start_background_task(background_thread)
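If you also need the task to exit (the original question), one hedged option is to guard the loop with a threading.Event instead of while True; the stop_event name below is illustrative and not part of the repository example:

from threading import Event

stop_event = Event()  # hypothetical shared flag for stopping the task

def backgroundFunction():
    # exit cleanly once stop_event is set, instead of looping forever
    while not stop_event.is_set():
        socketio.emit('data', "I am Data")
        socketio.sleep(2)

Calling stop_event.set() from any handler (for example a disconnect handler) lets the loop finish its current sleep and return, assuming your chosen async mode cooperates with threading primitives.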

Passing argument in thread without invoking - how to put Flask server in thread

How can I start the Flask server in a thread with a custom IP (listening on the network)?
This line doesn't block the main thread, but it doesn't listen for connections from the network:
threading.Thread(target=app.run).start()
When this is used, it waits for this thread to finish, and the main thread is blocked:
# threading.Thread(target=app.run(host='192.168.1.42')).start()
I have tried to make a game: Pygame is running in the main thread, and a Flask server is used to host the web page, which offers a joystick for players.
At the moment I can control it from the local machine, but not via mobile phone. And if I configure Flask with a custom IP, the main thread stops to wait for the server thread.
It's something to do with invoking versus referring, but I don't know how to set up the thread with an argument without invoking it.
The whole PyCharm project is on GitHub.
Here is the server.py
from flask import Flask, render_template, request
from flask_wtf import FlaskForm
from wtforms import StringField, SubmitField
import threading

def initServer(controlDataToPyGame):  # argument is a queue for transferring data to the main thread
    app = Flask(__name__)
    app.debug = False

    @app.route('/')
    def index():
        print("index")
        return render_template('index.html')

    @app.route('/play/')
    def play():
        print("play")
        return render_template('controller.html')

    @app.route("/Control/")
    def UP():
        x = request.args.get('joyX')
        y = request.args.get('joyY')
        controlDict = {"name": "ice", "x": x, "y": y}
        controlDataToPyGame.put(controlDict)
        return ("nothing")

    # this doesn't block the main thread, but it doesn't listen for connections from the network.
    threading.Thread(target=app.run).start()

    # when this is used, it waits for this thread to finish, and the main thread is blocked.
    # threading.Thread(target=app.run(host='192.168.1.42')).start()
If you read the documentation for Thread, you will see args= and kwargs=:
threading.Thread(target=app.run, kwargs={'host': '192.168.1.42'}).start()
Using
threading.Thread(target=app.run(host='192.168.1.42')).start()
you simply run app.run() first and then send its result to Thread, as if you had written
result = app.run(host='192.168.1.42')
Thread(target=result).start()
so app.run() runs in the main thread forever, and the Thread is never used.
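Since the server should also accept connections from other devices on the LAN (the mobile phone), a sketch that combines both fixes, assuming binding to all interfaces is acceptable on your network:

import threading

# listen on all interfaces so phones on the LAN can reach the joystick page,
# and use a daemon thread so the server exits together with the Pygame loop
threading.Thread(
    target=app.run,
    kwargs={'host': '0.0.0.0', 'port': 5000},
    daemon=True,
).start()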

Python - How to use FastAPI and uvicorn.run without blocking the thread?

I'm looking for a way to use uvicorn.run() with a FastAPI app, but without uvicorn.run() blocking the thread. I have already tried processes, subprocesses and threads, but nothing worked.
My problem is that I want to start the server from another process that should go on with other tasks after starting the server. Additionally, I have problems shutting the server down from another process.
Does anyone have an idea how to use uvicorn.run() without blocking, and how to stop it from another process?
The approach given by @HadiAlqattan will not work, because uvicorn.run expects to be run in the main thread. Errors such as signal only works in main thread will be raised.
Correct approach is:
import contextlib
import time
import threading
import uvicorn

class Server(uvicorn.Server):
    def install_signal_handlers(self):
        pass

    @contextlib.contextmanager
    def run_in_thread(self):
        thread = threading.Thread(target=self.run)
        thread.start()
        try:
            while not self.started:
                time.sleep(1e-3)
            yield
        finally:
            self.should_exit = True
            thread.join()

config = uvicorn.Config("example:app", host="127.0.0.1", port=5000, log_level="info")
server = Server(config=config)

with server.run_in_thread():
    # Server is started.
    ...
    # Server will be stopped once the code placed here has completed.
    ...

# Server stopped.
Very handy to run a live test server locally using a pytest fixture:
# conftest.py
import pytest

@pytest.fixture(scope="session")
def server():
    server = ...
    with server.run_in_thread():
        yield
Credits: uvicorn#742 by florimondmanca
This is an alternate version which works and was inspired by Aponace uvicorn#1103. The uvicorn maintainers want more community engagement with this issue, so if you are experiencing it, please join the conversation.
Example conftest.py file.
import pytest
from fastapi.testclient import TestClient
from app.main import app
import multiprocessing
from uvicorn import Config, Server

class UvicornServer(multiprocessing.Process):
    def __init__(self, config: Config):
        super().__init__()
        self.server = Server(config=config)
        self.config = config

    def stop(self):
        self.terminate()

    def run(self, *args, **kwargs):
        self.server.run()

@pytest.fixture(scope="session")
def server():
    config = Config("app.main:app", host="127.0.0.1", port=5000, log_level="debug")
    instance = UvicornServer(config=config)
    instance.start()
    yield instance
    instance.stop()

@pytest.fixture(scope="module")
def mock_app(server):
    client = TestClient(app)
    yield client
Example test_app.py file.
def test_root(mock_app):
    response = mock_app.get("")
    assert response.status_code == 200
When I set reload to False, uvicorn starts a multi-process web service (one process per worker). If it is True, the workers setting is ignored and there will be only one process for the web service:
import uvicorn
from fastapi import FastAPI, APIRouter
from multiprocessing import cpu_count
import os

router = APIRouter()
app = FastAPI()

@router.post("/test")
async def detect_img():
    print("pid:{}".format(os.getpid()))
    return os.getpid()

if __name__ == '__main__':
    app.include_router(router)
    print("CPU count: {}".format(cpu_count()))
    workers = 2 * cpu_count() + 1
    print("workers: {}".format(workers))
    reload = False
    # reload = True
    uvicorn.run("__main__:app", host="0.0.0.0", port=8082, reload=reload, workers=workers,
                timeout_keep_alive=5, limit_concurrency=100)
According to the Uvicorn documentation there is no programmatic way to stop the server; officially, you can stop it only by pressing Ctrl+C.
But I have a trick to solve this problem programmatically, using the multiprocessing standard lib with these three simple functions:
A run function to run the server.
A start function to start a new process (start the server).
A stop function to join the process (stop the server).
from multiprocessing import Process
import uvicorn

# global process variable
proc = None

def run():
    """
    This function runs the configured uvicorn server.
    """
    uvicorn.run(app=app, host=host, port=port)

def start():
    """
    This function starts a new process (starts the server).
    """
    global proc
    # create a process instance and set the target to the run function.
    # use daemon mode to stop the process whenever the program stops.
    proc = Process(target=run, args=(), daemon=True)
    proc.start()

def stop():
    """
    This function joins (stops) the process (stops the server).
    """
    global proc
    # check that the process is not None
    if proc:
        # join (stop) the process with a timeout set to 0.25 seconds.
        # using the timeout (the optional arg) is very important in order
        # to enforce the server to stop.
        proc.join(0.25)
With the same idea you can :
use threading standard lib instead of using multiprocessing standard lib.
refactor these functions into a class.
Example of usage:
from time import sleep

if __name__ == "__main__":
    # to start the server, call the start function.
    start()
    # run some code ....
    # to stop the server, call the stop function.
    stop()
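Note that join() by itself never kills a live process; the trick above relies on daemon=True so the child dies when the parent exits. For a more deterministic shutdown, a hedged variant of stop() could terminate the child explicitly before reaping it:

def stop():
    """Variant sketch: explicitly terminate the server process, then reap it."""
    global proc
    if proc and proc.is_alive():
        proc.terminate()  # send SIGTERM to the child process
        proc.join()       # wait for it to actually exit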
You can read more about :
Uvicorn server.
multiprocessing standard lib.
threading standard lib.
Concurrency, to learn more about multiprocessing and threading in Python.

Kill individual threads when timeout in a service

I am building a Python Flask service for which I am trying to set up a timeout for each individual POST request.
As I understand it, whenever someone sends a POST request to my RESTful service, a new thread (virtual or real) starts executing it.
Now, in order for my server to serve a lot of requests, I want it to return a TIME-OUT response if a handler runs for more than a constant time defined for it (TIMEOUT_TIME), set per POST method, and to stop the execution of that individual thread.
Can you propose an abstract scheme that I could implement using Flask methods?
One way to do it is to run the request processing in a separate process and terminate it if a timeout is exceeded:
#!/usr/bin/env python3
import time
from multiprocessing import Process
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api/sleep', methods=['POST'])
def sleep():
    duration = int(request.args.get('duration', 1))
    timeout = float(request.args.get('timeout', 2))
    proc = Process(target=process_request, args=(duration,))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()
        proc.join()
        return jsonify(success=False, message='timeout exceeded'), 408
    return jsonify(success=True, message='well done')

def process_request(t):
    time.sleep(t)

if __name__ == '__main__':
    app.run(host='localhost', port=8080, debug=True)
In this example, when the sleep duration is less than the given timeout, the user gets a successful response:
curl -X POST http://localhost:8080/api/sleep?duration=1\&timeout=2
{
  "message": "well done",
  "success": true
}
Otherwise, the user gets a 408 error:
curl -X POST http://localhost:8080/api/sleep?duration=2\&timeout=1
{
  "message": "timeout exceeded",
  "success": false
}
The problem with this approach is noted in the docs:
Note that exit handlers and finally clauses, etc., will not be executed.
It means that the running processes won't be able to clean up before exiting, which might cause problems. Another solution is to use a special Joiner thread that joins worker processes or threads later on, in case the timeout is exceeded:
#!/usr/bin/env python3
import time
from queue import Queue
from threading import Thread
from flask import Flask, request, jsonify

class Joiner(Thread):
    def __init__(self):
        super().__init__()
        self.workers = Queue()

    def run(self):
        while True:
            worker = self.workers.get()
            if worker is None:
                break
            worker.join()

app = Flask(__name__)

@app.route('/api/sleep', methods=['POST'])
def sleep():
    duration = int(request.args.get('duration', 1))
    timeout = int(request.args.get('timeout', 2))
    worker = Thread(target=process_request, args=(duration,))
    worker.start()
    worker.join(timeout)
    if worker.is_alive():
        joiner.workers.put(worker)
        return jsonify(success=False, message='timeout exceeded'), 408
    return jsonify(success=True, message='well done')

def process_request(t):
    time.sleep(t)

if __name__ == '__main__':
    joiner = Joiner()
    joiner.start()
    app.run(host='localhost', port=8080, debug=True)
    joiner.workers.put(None)
    joiner.join()
Here, a Joiner thread instance is created and started before the Flask server runs. Once the server is stopped, we put None into the joiner.workers queue to signal the joiner thread to finish.

How to send interrupt to Celery worker from Flask?

Question
I've read up a bit on accessing status from a Celery worker in a Flask application, like in this tutorial, but can you go the other way? Can you send an interrupt to, or get introspection into, a Celery worker after it's been started?
I've read a bit about signals, but either don't understand them yet or it's not what I'm looking for. Possibly both.
Background
I'm using Celery to kick off a long-running loop that subscribes to an MQTT topic, and I'd like to be able to shut down that process/subscription from another endpoint in my Flask app. What's the best way to do this? Or is there a way at all?
Example Code
from flask import Flask
from celery import Celery
import time

app = Flask(__name__)
app.config['CELERY_BROKER_URL'] = 'redis://localhost:6379/0'
app.config['CELERY_RESULT_BACKEND'] = 'redis://localhost:6379/0'
celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])
celery.conf.update(app.config)

@celery.task(bind=True)
def test_loop(self):
    i = 0
    running = True
    while running:
        i = i + 1
        print("loop running %d" % i)
        time.sleep(1)

@app.route('/')
def index():
    return 'index page'

@app.route('/start')
def start():
    global task
    task = test_loop.delay()
    return "started loop"

@app.route('/stop')
def stop():
    global task             ### What I'm having trouble with
    task.running = False    ### How can I interrupt/introspect into the task?
    return "stopped loop"
TL/DR
Is there a way to send an interrupt or get introspection into a Celery worker after it's been started? How can I stop a long-running loop started in a Celery Worker from Flask?
My personal recommendation would be to stay away from tasks that run forever.
If you absolutely must abort a task, you can use revoke:
http://docs.celeryproject.org/en/latest/userguide/workers.html#revoke-revoking-tasks
@app.route('/stop')
def stop():
    global task
    task.revoke(terminate=True, signal='SIGKILL')
    return "stopped loop"
Celery may be overkill for your use case but I'm not totally sure what your end goal is so I can't really offer any alternatives.
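If killing the worker child with SIGKILL is too blunt (for example, if you need to unsubscribe from the MQTT topic first), a gentler sketch uses Celery's celery.contrib.abortable, assuming your configured result backend supports storing the ABORTED status. It turns the question's running flag into a cooperative check:

from celery.contrib.abortable import AbortableTask

@celery.task(bind=True, base=AbortableTask)
def test_loop(self):
    i = 0
    # poll the abort flag instead of a local variable
    while not self.is_aborted():
        i = i + 1
        print("loop running %d" % i)
        time.sleep(1)

The /stop endpoint would then call task.abort(); the loop notices the flag on its next iteration and can clean up (for example, close the MQTT subscription) before returning.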
