I am running a Python 3.7 Flask application that uses flask_socketio to set up a Socket.IO server for browser clients, another Python process to connect to a separate remote Socket.IO server and exchange messages, and another Python process to read input from a PIR sensor.
Both Python processes communicate with the server over multiprocessing.Queue - but the socketio process always gets either [WinError 6] Invalid Handle or [WinError 5] Permission Denied. I have absolutely no idea what I'm doing wrong.
Here's the top-level (server) code; it does not appear to have issues:
from flask import Flask, request
from flask_socketio import SocketIO, join_room
from shotsocket import init as shotsocket_init
from shotsocket import util as matchmaking_util
import multiprocessing, os, config, uuid

match_queue = multiprocessing.Queue()
shot_queue = multiprocessing.Queue()

app = Flask(__name__, static_url_path='', static_folder='templates')
socketio = SocketIO(app)
_rooms = []  # I don't plan to keep this in memory, just doing it for debug / dev
...
The above works fine and dandy. The 2nd to last line in the following block is the issue.
# THIS IS THE FUNC WHERE WE ARE TRYING TO USE
# THE BROKEN QUEUE
@socketio.on('connect')
def listen():
    room_key = str(uuid.uuid4())
    join_room(room_key)
    _rooms.append((room_key, request.sid))
    possible_match = matchmaking_util.match_pending_clients(_rooms)
    if possible_match:
        shot_queue.put_nowait(possible_match)
        print('put it in there')
Here's how I start these processes:
if __name__ == '__main__':
    debug = os.environ.get('MOONSHOT_DEBUG', False)
    try:
        proc = multiprocessing.Process(target=start, args=(debug, match_queue))
        proc.start()
        shot_proc = multiprocessing.Process(target=shotsocket_init, args=(shot_queue,))
        shot_proc.start()
        socketio.run(app, host='0.0.0.0')
    except KeyboardInterrupt:
        socketio.stop()
        proc.join()
        shot_proc.join()
And here's the entirety of shotsocket (the code that cannot read the queue):
import socketio, multiprocessing  # multiprocessing only for the type hint

sio = socketio.Client(engineio_logger=True)
sio.connect('redacted woot', transports=['websocket'])

@sio.on('connect')
def connect():
    print("connected to shot server")

def init(queue: multiprocessing.Queue):
    while True:
        try:
            # WE NEVER GET PAST THIS LINE
            print(queue.get())
        except Exception as e:
            continue

        if not queue.empty():
            print('queue empty')
            shot = queue.get()
            print(shot)
            match_id, opponents = shot
            sio.emit('start', {'id': match_id, 'opponents': [opponents[0], opponents[1]]})
I'm pulling my hair out. What the heck am I doing wrong?
Solution
I have no idea why this fixes the problem, but switching from multiprocessing.Queue to queue.Queue and multiprocessing.Process to threading.Thread did it.
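For reference, a minimal sketch of that swap, reusing the names from the code above (start and shotsocket_init are the same targets as before):

import os
import queue
import threading

# Plain queues are enough here: threads share the parent's memory,
# so nothing has to cross a process boundary.
match_queue = queue.Queue()
shot_queue = queue.Queue()

if __name__ == '__main__':
    debug = os.environ.get('MOONSHOT_DEBUG', False)
    proc = threading.Thread(target=start, args=(debug, match_queue))
    proc.start()
    shot_proc = threading.Thread(target=shotsocket_init, args=(shot_queue,))
    shot_proc.start()
    socketio.run(app, host='0.0.0.0')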
Related
I am working on a project building an API that is able to send the live location of vehicles to a frontend.
I get this location data by subscribing to a ZMQ stream in a while loop. This all works: if I just run my stream as a script I can print all kinds of information to the terminal (I'll store it in a database later on).
I also have the FastAPI server up and running.
Now what I'd like to do is:
At startup start the server so I can make API calls
Start the while loop and start receiving data from the ZMQ stream
What happens instead:
It seems to be either/or:
I can import a function with the while loop but this blocks the server from starting up
Or I can run the server with no means to start the stream
Here is my code:
# General FastAPI Imports
from fastapi import Depends, FastAPI, Request
from data_collection.livestream import enable_data_stream
from client_service import client_api

app = FastAPI()
app.include_router(client_api.router, prefix="/API/V1")

@app.get('/')
def read_root(request: Request):
    return {"Hello": "World"}
The Stream:
from gzip import GzipFile
from io import BytesIO
import zmq
import xml.etree.ElementTree as ET

context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect("tcp://SERVER")
subscriber.setsockopt(zmq.SUBSCRIBE, b"")  # subscribe to all topics

while True:
    multipart = subscriber.recv_multipart()
    address = multipart[0]
    try:
        contents = GzipFile('', 'r', 0, BytesIO(multipart[1])).read()
        root = ET.fromstring(contents)
        print("Updates Received:")
        # Gets the timestamp
        print('time', root[3].text)
        print('X Coord: ', root[4][0][12].text)
        print('Y Coord: ', root[4][0][13].text)
    except Exception as e:
        print(e)  # skip malformed payloads
I tried looking into the multiprocessing and threading implementations for Python, but I'm unsure how those tie in with starting the FastAPI process (since that's handled by Uvicorn).
In the example below, the server and worker are started in separate processes because the while loop never returns. It seems that you were on the right track. In my example I have these functions in one file, but there is nothing stopping you from breaking them out into their own files:
import uvicorn
import multiprocessing
import time
import zmq
import xml.etree.ElementTree as ET
from gzip import GzipFile
from io import BytesIO
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"message": "Hello World"}

def server():
    uvicorn.run(app, host="localhost", port=8000)

def worker():
    context = zmq.Context()
    subscriber = context.socket(zmq.SUB)
    subscriber.connect("tcp://SERVER")
    subscriber.setsockopt(zmq.SUBSCRIBE, b"")  # subscribe to all topics

    while True:
        multipart = subscriber.recv_multipart()
        address = multipart[0]
        try:
            contents = GzipFile('', 'r', 0, BytesIO(multipart[1])).read()
            root = ET.fromstring(contents)
            print("Updates Received:")
            # Gets the timestamp
            print('time', root[3].text)
            print('X Coord: ', root[4][0][12].text)
            print('Y Coord: ', root[4][0][13].text)
        except Exception as e:
            print(e)
            print("Error: %s" % multipart[1])
            break

if __name__ == '__main__':
    # Runs server and worker in separate processes
    p1 = multiprocessing.Process(target=server)
    p1.start()
    time.sleep(1)  # Wait for server to start
    p2 = multiprocessing.Process(target=worker)
    p2.start()
    p1.join()
    p2.join()
I'm looking for a way to use uvicorn.run() with a FastAPI app, but without uvicorn.run() blocking the thread. I have already tried using processes, subprocesses and threads, but nothing worked.
My problem is that I want to start the server from another process that should go on with other tasks after starting it. Additionally I have problems closing the server like this from another process.
Does anyone have an idea how to use uvicorn.run() without blocking, and how to stop it from another process?
The approach given by @HadiAlqattan will not work because uvicorn.run expects to be run in the main thread. Errors such as signal only works in main thread will be raised.
The correct approach is:
import contextlib
import time
import threading
import uvicorn

class Server(uvicorn.Server):
    def install_signal_handlers(self):
        pass

    @contextlib.contextmanager
    def run_in_thread(self):
        thread = threading.Thread(target=self.run)
        thread.start()
        try:
            while not self.started:
                time.sleep(1e-3)
            yield
        finally:
            self.should_exit = True
            thread.join()

config = uvicorn.Config("example:app", host="127.0.0.1", port=5000, log_level="info")
server = Server(config=config)

with server.run_in_thread():
    # Server is started.
    ...
    # Server will be stopped once code put here is completed
    ...

# Server stopped.
Very handy to run a live test server locally using a pytest fixture:
# conftest.py
import pytest

@pytest.fixture(scope="session")
def server():
    server = ...
    with server.run_in_thread():
        yield
Credits: uvicorn#742 by florimondmanca
This is an alternate version which works and was inspired by Aponace uvicorn#1103. The uvicorn maintainers want more community engagement with this issue, so if you are experiencing it, please join the conversation.
Example conftest.py file.
import pytest
from fastapi.testclient import TestClient
from app.main import app
import multiprocessing
from uvicorn import Config, Server

class UvicornServer(multiprocessing.Process):
    def __init__(self, config: Config):
        super().__init__()
        self.server = Server(config=config)
        self.config = config

    def stop(self):
        self.terminate()

    def run(self, *args, **kwargs):
        self.server.run()

@pytest.fixture(scope="session")
def server():
    config = Config("app.main:app", host="127.0.0.1", port=5000, log_level="debug")
    instance = UvicornServer(config=config)
    instance.start()
    yield instance
    instance.stop()

@pytest.fixture(scope="module")
def mock_app(server):
    client = TestClient(app)
    yield client
Example test_app.py file.
def test_root(mock_app):
    response = mock_app.get("")
    assert response.status_code == 200
When I set reload to False, FastAPI will start a multi-process web service; if it is True, there will only be one process for the web service (uvicorn ignores the workers option when reload is enabled, so the reloader supervises a single server process).
import uvicorn
from fastapi import FastAPI, APIRouter
from multiprocessing import cpu_count
import os

router = APIRouter()
app = FastAPI()

@router.post("/test")
async def detect_img():
    print("pid:{}".format(os.getpid()))
    return os.getpid()

if __name__ == '__main__':
    app.include_router(router)
    print("CPU count: {}".format(cpu_count()))
    workers = 2 * cpu_count() + 1
    print("workers: {}".format(workers))
    reload = False
    # reload = True
    uvicorn.run("__main__:app", host="0.0.0.0", port=8082, reload=reload, workers=workers,
                timeout_keep_alive=5, limit_concurrency=100)
According to the Uvicorn documentation there is no programmatic way to stop the server;
officially, you can stop the server only by pressing Ctrl+C.
But I have a trick to solve this problem programmatically, using the multiprocessing standard lib with these three simple functions:
A run function to run the server.
A start function to start a new process (start the server).
A stop function to join the process (stop the server).
from multiprocessing import Process
import uvicorn

# global process variable
proc = None

def run():
    """
    This function runs the configured uvicorn server.
    """
    uvicorn.run(app=app, host=host, port=port)

def start():
    """
    This function starts a new process (starts the server).
    """
    global proc
    # create a process instance and set the target to the run function.
    # use daemon mode to stop the process whenever the program stops.
    proc = Process(target=run, args=(), daemon=True)
    proc.start()

def stop():
    """
    This function joins (stops) the process (stops the server).
    """
    global proc
    # check if the process is not None
    if proc:
        # join (stop) the process with a timeout set to 0.25 seconds.
        # using the timeout (the optional arg) is important in order to
        # force the server to stop.
        proc.join(0.25)
With the same idea you can:
use the threading standard lib instead of the multiprocessing standard lib (a class-based, thread-based sketch appears at the end of this answer).
refactor these functions into a class.
Example of usage:
from time import sleep

if __name__ == "__main__":
    # to start the server, call the start function.
    start()
    # run some code ....
    # to stop the server, call the stop function.
    stop()
You can read more about:
Uvicorn server.
multiprocessing standard lib.
threading standard lib.
Concurrency to know more about multi processing and threading in python.
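Following those suggestions, here is a rough sketch that combines both ideas: the run/start/stop trio refactored into a class that drives uvicorn from a background thread. This is only a sketch, not a drop-in implementation; the install_signal_handlers override is needed for the reason given in the run_in_thread answer above, and app, host and port are placeholders:

import threading
import uvicorn

class ThreadedUvicorn:
    """Start and stop a uvicorn server from a background thread."""

    def __init__(self, app, host="127.0.0.1", port=8000):
        config = uvicorn.Config(app=app, host=host, port=port)
        self.server = uvicorn.Server(config=config)
        # Signal handlers can only be installed in the main thread
        # ("signal only works in main thread"), so skip installing them.
        self.server.install_signal_handlers = lambda: None
        self.thread = threading.Thread(target=self.server.run, daemon=True)

    def start(self):
        self.thread.start()

    def stop(self):
        # Ask the server's loop to exit, then wait for the thread to finish.
        self.server.should_exit = True
        self.thread.join()

Usage mirrors the functions above: create ThreadedUvicorn(app), call start(), do other work, then call stop().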
I start a server and use some data from my function. But I want this function to keep updating the data and the server to display the new values. However, when I start the web server it only serves the first data computed by the function.
I use schedule, an imported library that runs my function at an interval I choose, and the bottle web framework to start the server and do the routing.
def read_file():
    f = open("345.txt", "r")
    hi.contents = f.read()
    print(hi.contents)

def server_start():
    @route('/as', method='GET')
    def display_status():
        try:
            return hi.contents
        except Exception:
            logging.exception("")
            return "Service unavailable. Check logs"

    run(host='0.0.0.0', port=8033)
    print("sadq")

schedule.every(3).seconds.do(read_file)
server_start()

while True:
    schedule.run_pending()
    time.sleep(1)
I expect to get updated results on my web server. I would be very glad if you could help me or give some good advice. Thank you all.
First I would run bottle with an async process, specifically gevent.
import gevent
from gevent import monkey
monkey.patch_all()

import logging
import signal
from bottle import Bottle
from gevent.pywsgi import WSGIServer
import scheduler

app = Bottle()

@app.route('/as', method='GET')
def display_status():
    try:
        return scheduler.contents
    except Exception:
        logging.exception("")
        return "Service unavailable. Check logs"

server = WSGIServer(("0.0.0.0", 8083), app)

def shutdown():
    print('Shutting down ...')
    server.stop(timeout=60)
    exit(signal.SIGTERM)

gevent.signal(signal.SIGTERM, shutdown)
gevent.signal(signal.SIGINT, shutdown)  # Ctrl+C
server.serve_forever()
Then I would launch your scheduler as such in a separate file scheduler.py:
from gevent import spawn, sleep
import schedule

contents = ''

def read_file():
    global contents
    f = open("345.txt", "r")
    contents = f.read()
    print(contents)

def start_thread():
    while 1:
        schedule.run_pending()
        sleep(1)

schedule.every(3).seconds.do(read_file)
spawn(start_thread)
My code (Python 3.5 on Raspbian 9 - Stretch) is divided into a number of separate processes, which are run from main.py. A simplified example of my code is below, which I believe is plain vanilla use of Flask, SocketIO and eventlet with multiprocessing.Process. The problem is that it hangs when I try to access the pipe that connects the different processes.
My understanding (and it wouldn't surprise me if I were completely wrong) is that this is a long-standing issue related to eventlet and multiprocessing.Process that as of January 2018 had not been resolved: How to combine multiprocessing and eventlet
https://github.com/eventlet/eventlet/issues/147
My question is this. This seems like a common use case, but doesn’t work. So, what work around or different approach would you recommend?
--- in webprocess.py ---
#!/usr/bin/python3
from flask import Flask, render_template
from flask_socketio import SocketIO
from threading import Lock

def WebFunc(outfrompipe, intopipe):
    global thread
    app = Flask(__name__)
    app.config['SECRET_KEY'] = 'secret!'
    socketio = SocketIO(app, async_mode="eventlet")
    thread = None
    thread_lock = Lock()

    @app.route('/')
    def index():
        return render_template('index.html', async_mode=socketio.async_mode)

    @socketio.on('my_event', namespace='/test')
    def test_msg(msg):
        # Receive a message from a web app
        print("Received message", msg)
        # Send this message to another process
        # THIS IS WHERE IT HANGS!!
        intopipe.send(msg)

    socketio.run(app, debug=False, host='0.0.0.0')
--- in main.py ---
#!/usr/bin/python3
import webprocess as webproc
import multiprocessing
import time

if __name__ == '__main__':
    multiprocessing.set_start_method('spawn')
    outfrompipe, intopipe = multiprocessing.Pipe()
    wf = multiprocessing.Process(name="WebProc", target=webproc.WebFunc,
                                 args=(outfrompipe, intopipe))
    wf.start()
    while True:
        message = outfrompipe.recv()
        print(message)
        time.sleep(1)
    wf.join()
I have a multiprocessing tornado web server and I want to create another process that will do some things in the background.
I have a server with the following code:
def start_background_process():
    process = multiprocessing.Process(target=somefunc)
    process.start()

start_background_process()

app = Application([<someurls>])
server = HTTPServer(app)
server.bind(8888)
server.start(4)  # Forks multiple sub-processes
IOLoop.current().start()
and everything is working great.
However, when I try to close the server (by Ctrl+C or by sending a signal) I get AssertionError: can only join a child process.
I understand the cause of this problem:
when I create a process with multiprocessing, a call to that process's join method is registered in atexit. Because tornado does a simple fork, all its children also try to call the join method of the process I created, and they can't, since that process is their sibling and not their child.
So how can I open a process normally in tornado?
"HTTPTserver start" uses os.fork to fork the 4 sub-processes as it can be seen in its source code.
If you want your method to be executed by all the 4 sub-processes, you have to call it after the processes have been forked.
Having that in mind your code can be changed to look as below:
import multiprocessing
import tornado.web
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop

# A simple external handler as an example for completeness
from handlers.index import IndexHandler

def method_on_sub_process():
    print("Executing in sub-process")

def start_background_process():
    process = multiprocessing.Process(target=method_on_sub_process)
    process.start()

def main():
    app = tornado.web.Application([(r"/", IndexHandler)])
    server = HTTPServer(app)
    server.bind(8888)
    server.start(4)
    start_background_process()
    IOLoop.current().start()

if __name__ == "__main__":
    main()
Furthermore, to keep the behavior of your program clean during any keyboard interruption, surround the instantiation of the server with a try...except clause as below:
def main():
    try:
        app = tornado.web.Application([(r"/", IndexHandler)])
        server = HTTPServer(app)
        server.bind(8888)
        server.start(4)
        start_background_process()
        IOLoop.current().start()
    except KeyboardInterrupt:
        IOLoop.instance().stop()