Why doesn't uWSGI work with boto3? Respawned uWSGI worker - python

When I import boto3, uWSGI does not work. Without boto3, it works smoothly.
...
from flask import Flask
import boto3
app = Flask(__name__)
...
uwsgi.ini
[uwsgi]
socket = :8010
module = main
callable = app
protocol = http
master = true
error: DAMN ! worker 1 (pid: x) died, killed by signal 6 :( trying respawn ...
Respawned uWSGI worker 1 (new pid: x)
How can I solve the problem?

Related

Constantly dying workers Flask-SocketIO + uWSGI

I'm trying to implement sockets in my flask project. Following the flask-socketio documentation I got it running perfectly with gunicorn, but I found myself unable to make it work using uwsgi. It seems like it should work even with the simplest app and settings, but it just keeps killing the workers whatever configuration I try to use. Here's an example of the environment and code I have:
app.py:
from flask import Flask
from flask_socketio import SocketIO
app = Flask(__name__)
app.config['SECRET_KEY'] = 'extremelysecret'
socketio = SocketIO(app)
if __name__ == '__main__':
    app.run()
pip freeze:
click==7.1.2
Flask==1.1.2
Flask-SocketIO==4.3.0
gevent==20.5.0
greenlet==0.4.15
gunicorn==20.0.4
itsdangerous==1.1.0
Jinja2==2.11.2
MarkupSafe==1.1.1
python-engineio==3.12.1
python-socketio==4.5.1
six==1.15.0
uWSGI==2.0.18
Werkzeug==1.0.1
Running with an example from documentation:
uwsgi --http :5000 --gevent 1000 --http-websockets --master --wsgi-file app.py --callable app
Output:
your processes number limit is 63299
your memory page size is 4096 bytes
detected max file descriptor number: 1024
- async cores set to 10 - fd table size: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uWSGI http bound on :5008 fd 4
uwsgi socket 0 bound to TCP address 127.0.0.1:39875 (port auto-assigned) fd 3
Python version: 3.8.0 (default, Jan 9 2020, 23:03:43) [GCC 7.4.0]
Python main interpreter initialized at 0x55899af6a860
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 501072 bytes (489 KB) for 20 cores
*** Operational MODE: preforking+async ***
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x55899af6a860 pid: 22479
(default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 22479)
spawned uWSGI worker 1 (pid: 22481, cores: 10)
spawned uWSGI worker 2 (pid: 22482, cores: 10)
*** running gevent loop engine [addr:0x558998a7f860] ***
spawned uWSGI http 1 (pid: 22483)
DAMN ! worker 1 (pid: 22481) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 22484)
DAMN ! worker 2 (pid: 22482) died :( trying respawn ...
Respawned uWSGI worker 2 (new pid: 22485)
*** running gevent loop engine [addr:0x558998a7f860] ***
DAMN ! worker 1 (pid: 22484) died :( trying respawn ...
So far I've tried:
- playing with uwsgi parameters;
- using different app configurations in app.py;
- using a couple of different Python versions;
- checking whether everything is installed properly.
Still, my guess is that I'm missing something obvious here. Thanks in advance for pointing it out.

gunicorn + flask (connexion/swagger_server) time out/not responding to API requests

The swagger_server (connexion/flask) runs fine when I do:
python3 -m swagger_server
It's running on port 8080.
When I try to put it on gunicorn (reference: how to use gunicorn with swagger_server on flask), gunicorn runs fine but requests to port 8080 fail.
First, when I use the same port 8080, it complains of bind/already in use (expected, I believe, as they're both on port 8080):
gunicorn "swagger_server.__main__:main" -b 0.0.0.0:8080 -w 4
...
OSError: [Errno 48] Address already in use
But when I move to port 4000, for example, requests time out:
gunicorn "swagger_server.__main__:main" -b 0.0.0.0:4000 -w 4
...
* Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
[2020-02-16 17:21:18 -0500] [34798] [CRITICAL] WORKER TIMEOUT (pid:34802)
When I enable debug, I see it's trying to connect to port 4000, instead of 8080.
[2020-02-16 17:28:13 -0500] [34866] [INFO] Starting gunicorn 20.0.4
[2020-02-16 17:28:13 -0500] [34866] [ERROR] Connection in use: ('0.0.0.0', 4000)
[2020-02-16 17:28:13 -0500] [34866] [ERROR] Retrying in 1 second.
[2020-02-16 17:28:14 -0500] [34866] [ERROR] Connection in use: ('0.0.0.0', 4000)
...
Here's my main.py
def main(arg1, arg2):
    app = connexion.App(__name__, specification_dir='./swagger_server/', debug=False)
    app.app.json_encoder = encoder.JSONEncoder
    app.add_api('api-v2.yaml', arguments={'title': 'API'})
    app.run(host='0.0.0.0', port=8080)

if __name__ == '__main__':
    main(None, None)
Please advise what I'm missing here. Thank you.
Swagger generates __main__.py the wrong way; you need to make a modification based on it.
#!/usr/bin/env python3
import connexion
from swagger_server import encoder
app = connexion.App(__name__, specification_dir='./swagger/')
app.app.json_encoder = encoder.JSONEncoder
app.add_api('swagger.yaml', arguments={'title': 'My API'}, pythonic_params=True)
Then try it again
gunicorn swagger_server.__main__:app
Gunicorn wraps the network request in WSGI and forwards it to Flask (Werkzeug); the variable app in __main__.py is the Werkzeug WSGI entry point.
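For completeness, the full invocation with the asker's original bind address and worker count would presumably look like this (a sketch, not verified against this particular project):
gunicorn swagger_server.__main__:app -b 0.0.0.0:4000 -w 4
Since gunicorn serves the app object itself, __main__.py should not also call app.run(), which is why the modified file above drops that block.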

When I enable uwsgi thread support and start the scheduler the api stops working

I'm new to python, flask, nginx and all that stuff.
I have a flask app that acts as an API for a frontend. When the flask app is started I would also like to start a scheduled task with APScheduler.
The problem is that when I enable uwsgi thread support and start the scheduler, the API stops working (504 Gateway Time-out). But the scheduler works, as seen in the logfile. When I remove the scheduler / thread support, the API works, but then I obviously don't have the scheduler anymore.
Somehow I suspect the scheduler prevents the flask app from being run properly?
As I am new to these technologies I'll post my setup below. If you need any more file info, please tell me. (The whole thing runs on a Raspberry Pi and the API is accessed from my PC over LAN.)
app.service
[Unit]
Description=uWSGI instance to serve app
After=network.target
[Service]
User=pi
Group=www-data
WorkingDirectory=/home/pi/flask
Environment="PATH=/home/pi/flask/appenv/bin"
ExecStart=/home/pi/flask/appenv/bin/uwsgi --ini app.ini --enable-threads
[Install]
WantedBy=multi-user.target
app.ini
[uwsgi]
module = wsgi:app
master = true
processes = 5
socket = /home/pi/flask/app.sock
chmod-socket = 660
vacuum = true
die-on-term = true
app.py
#!/usr/bin/env python3
from flask import Flask, request
from apscheduler.schedulers.background import BackgroundScheduler
import logging
logging.basicConfig(filename='logfile.log',level=logging.DEBUG)
from api.Controller import Controller
from Handler.Handler import Handler
from apscheduler.executors.pool import ThreadPoolExecutor, ProcessPoolExecutor
api_controller = Controller()
handler = Handler()
def startHandlerJob():
    handler.ExecuteAllSensors()

app = Flask(__name__)

@app.route('/app')
def apiDefinition():
    return 'API Definition: GetHumidityValues, TODO'

@app.route("/app/GetHumidityValues", methods=["GET"])
def GetHumidityValues():
    logging.info("app.py: API-call GetHumidityValues")
    return api_controller.GetHumidityValues()

if __name__ == "__main__":
    app.run(host='0.0.0.0')

executors = {
    'default': ThreadPoolExecutor(20),
    'processpool': ProcessPoolExecutor(5)
}
job_defaults = {
    'coalesce': False,
    'max_instances': 1
}
scheduler = BackgroundScheduler(daemon=True, executors=executors, job_defaults=job_defaults)
scheduler.start()
scheduler.add_job(startHandlerJob, 'cron', minute='*')
logfile.log
WARNING:apscheduler.scheduler:Execution of job "startHandlerJob (trigger: cron[minute='*'], next run at: 2019-12-18 19:01:00 CET)" skipped: maximum number of running instances reached (1)
DEBUG:apscheduler.scheduler:Next wakeup is due at 2019-12-18 19:02:00+01:00 (in 59.980780 seconds)
DEBUG:apscheduler.scheduler:Looking for jobs to run
WARNING:apscheduler.scheduler:Execution of job "startHandlerJob (trigger: cron[minute='*'], next run at: 2019-12-18 19:02:00 CET)" skipped: maximum number of running instances reached (1)
DEBUG:apscheduler.scheduler:Next wakeup is due at 2019-12-18 19:03:00+01:00 (in 59.979407 seconds)
systemctl status app
* app.service - uWSGI instance to serve app
Loaded: loaded (/etc/systemd/system/app.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-12-18 18:40:57 CET; 23min ago
Main PID: 21129 (uwsgi)
Tasks: 8 (limit: 2200)
Memory: 22.9M
CGroup: /system.slice/app.service
|-21129 /home/pi/flask/appenv/bin/uwsgi --ini app.ini --enable-threads
|-21148 /home/pi/flask/appenv/bin/uwsgi --ini app.ini --enable-threads
|-21149 /home/pi/flask/appenv/bin/uwsgi --ini app.ini --enable-threads
|-21150 /home/pi/flask/appenv/bin/uwsgi --ini app.ini --enable-threads
|-21151 /home/pi/flask/appenv/bin/uwsgi --ini app.ini --enable-threads
`-21152 /home/pi/flask/appenv/bin/uwsgi --ini app.ini --enable-threads
Dec 18 18:40:57 raspberrypi uwsgi[21129]: mapped 386400 bytes (377 KB) for 5 cores
Dec 18 18:40:57 raspberrypi uwsgi[21129]: *** Operational MODE: preforking ***
Dec 18 18:40:59 raspberrypi uwsgi[21129]: WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0xa6f900 pid: 21129 (default app)
Dec 18 18:40:59 raspberrypi uwsgi[21129]: *** uWSGI is running in multiple interpreter mode ***
Dec 18 18:40:59 raspberrypi uwsgi[21129]: spawned uWSGI master process (pid: 21129)
Dec 18 18:40:59 raspberrypi uwsgi[21129]: spawned uWSGI worker 1 (pid: 21148, cores: 1)
Dec 18 18:40:59 raspberrypi uwsgi[21129]: spawned uWSGI worker 2 (pid: 21149, cores: 1)
Dec 18 18:40:59 raspberrypi uwsgi[21129]: spawned uWSGI worker 3 (pid: 21150, cores: 1)
Dec 18 18:40:59 raspberrypi uwsgi[21129]: spawned uWSGI worker 4 (pid: 21151, cores: 1)
Dec 18 18:40:59 raspberrypi uwsgi[21129]: spawned uWSGI worker 5 (pid: 21152, cores: 1)
The logfile shows that the scheduler is active. But when I try to call http://raspberryipaddress/app, the answer is a 504 Gateway Time-out response. When I remove the scheduler and disable thread support, this call works as desired.
Any help is appreciated. I'm probably missing something obvious, because I am new to all of these things.
Thanks!
I hesitate a bit to put this as an answer but here goes...
In the uwsgi app.ini, after setting processes (5 in your case, I use 4), I also set threads = 2. I don't know whether this has any direct relation to the --enable-threads option, because that seems to be for your app to start its own threads, but it might help for uwsgi to have its own threads per process. Also, the uWSGI documentation points out (somewhere) that more processes are not necessarily better.
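Applied to your app.ini, that suggestion would look roughly like this (just a sketch; I've also folded the --enable-threads flag from your ExecStart into the ini, which should be equivalent):
[uwsgi]
module = wsgi:app
master = true
processes = 5
threads = 2
enable-threads = true
socket = /home/pi/flask/app.sock
chmod-socket = 660
vacuum = true
die-on-term = true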
Also on that: your logfile shows warnings from the scheduler that the maximum number of running instances is being reached. I take it this means the job you've given the scheduler is not finishing before the next scheduled run (1 minute later)? If that's the case, is it stuck somewhere, blocking everything else?
Finally, if all else fails, something else from the docs (UWSGI Security and availability)
A common problem with webapp deployment is “stuck requests”. All of your threads/workers are stuck (blocked on request) and your app cannot accept more requests. To avoid that problem you can set a harakiri timer. It is a monitor (managed by the master process) that will destroy processes stuck for more than the specified number of seconds (choose harakiri value carefully).
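In ini form that is one extra line in the [uwsgi] section, e.g. (the number of seconds here is an arbitrary example, not a recommendation):
harakiri = 30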
And finally, finally :) there's a top-like monitoring tool for uwsgi which might be handy to see what's going on, just
pip install uwsgitop
uwsgitop 127.0.0.1:9191
plus telnetting into that address:port apparently shows you a lot more info (haven't tried it myself).
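For uwsgitop to have something to connect to, the uWSGI stats server needs to be enabled; assuming the address above, that is one more line in app.ini:
stats = 127.0.0.1:9191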
Okay, thanks for the answers. I did not get it to work with the suggested solutions, so I simply decided to run the scheduler as a separate systemd service. That way the scheduling does not stop the flask API from working.
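A minimal sketch of what such a separate unit could look like, assuming the scheduler code is pulled out of app.py into a hypothetical scheduler.py (using a BlockingScheduler there, so the process stays alive on its own):
[Unit]
Description=Standalone APScheduler instance
After=network.target
[Service]
User=pi
Group=www-data
WorkingDirectory=/home/pi/flask
Environment="PATH=/home/pi/flask/appenv/bin"
ExecStart=/home/pi/flask/appenv/bin/python scheduler.py
[Install]
WantedBy=multi-user.target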
I had exactly the same problem. The fix for me was to use ProcessPoolExecutor instead of ThreadPoolExecutor.
For some unknown reason (at least unknown to me), when there were several threads in the ThreadPoolExecutor, that damn problem showed itself.
After I moved the most CPU-bound work over to processes (i.e. using ProcessPoolExecutor for those jobs), that damn problem disappeared ...
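Applied to the app.py from the question, that change amounts to routing the cron job to the process-pool executor instead of the default thread pool (a sketch reusing the asker's startHandlerJob; not verified against this exact setup):
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.executors.pool import ProcessPoolExecutor

executors = {
    # jobs sent to 'processpool' run in separate worker processes,
    # so a long-running job cannot block the web worker's threads
    'processpool': ProcessPoolExecutor(5),
}
job_defaults = {'coalesce': False, 'max_instances': 1}
scheduler = BackgroundScheduler(daemon=True, executors=executors, job_defaults=job_defaults)
scheduler.start()
# startHandlerJob must remain a module-level function so it can be pickled
scheduler.add_job(startHandlerJob, 'cron', minute='*', executor='processpool')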

uWSGI unable to load Flask app [duplicate]

This question already has answers here:
Flask and uWSGI - unable to load app 0 (mountpoint='') (callable not found or import error)
(3 answers)
Gunicorn can't find app when name changed from "application"
(2 answers)
Closed 5 years ago.
I have been poking around on this for a couple of days and I am still having issues. My uWSGI instance is apparently not loading my Flask app. I am running a CentOS 7 Vagrant box and using Ansible for config management. I will post the final templated files.
My file setup is as follows:
#/etc/tickets/tickets.ini
[uwsgi]
module = wsgi
master = true
processes = 4
socket = tickets.sock
chmod-socket = 664
uid = uwsgi
gid = nginx
vacuum = true
die-on-term = true
logto2 = /var/log/uwsgi/tickets.log
Module
#/opt/tickets/wsgi.py
import os
from create import create_app
app = create_app(os.getenv('APP_CONFIG') or 'default')
if __name__ == '__main__':
    app.run(host='0.0.0.0')
Systemd service
#/etc/systemd/system/tickets.service
[Unit]
Description=uWSGI instance to serve tickets
After=network.target
[Service]
User=uwsgi
Group=nginx
WorkingDirectory=/opt/tickets
ExecStart=/usr/bin/uwsgi --ini /etc/tickets/tickets.ini
[Install]
WantedBy=multi-user.target
Here is my apps uwsgi log
[vagrant#tickets uwsgi]$ cat tickets.log
*** Operational MODE: preforking ***
/usr/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py:839: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
unable to load app 0 (mountpoint='') (callable not found or import error)
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 14820)
spawned uWSGI worker 1 (pid: 14886, cores: 1)
spawned uWSGI worker 2 (pid: 14887, cores: 1)
spawned uWSGI worker 3 (pid: 14888, cores: 1)
spawned uWSGI worker 4 (pid: 14889, cores: 1)
I'm at a loss right now. I've tried moving the uwsgi .ini into the directory with wsgi.py, but that didn't work either. tickets.sock does get created in the /opt/tickets directory. The service is running, but again it appears it is not loading the app.
EDIT
Modeled after this tutorial.
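The duplicate links above point at the usual cause of "callable not found or import error": with module = wsgi and no callable set, uWSGI looks for a callable named application, while this wsgi.py exposes app. A sketch of the likely fix, based on those duplicates rather than on testing this exact setup, is to name the callable explicitly in tickets.ini:
module = wsgi
callable = app
(or equivalently module = wsgi:app, as used in the app.ini of the previous question)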

uWSGI "timeout waiting for header" Error

I'm using uwsgi-0.9.8.4 under Ubuntu 10.04 (32-bit). Here's the uwsgi section in my Pyramid application's .ini file (the app works fine with paster) --
[uwsgi]
socket = 127.0.0.1:6543
master = true
processes = 1
pythonpath = /home/jerry/virtualenv/lib/python2.6/site-packages/*.egg
pythonpath = /home/jerry/myapp
uwsgi runs and binds to port 6543 --
$ uwsgi --ini-paste development.ini -b 32768
...
2011-08-23 16:43:11,128 INFO sqlalchemy.engine.base.Engine {}
WSGI application 0 (SCRIPT_NAME=) ready on interpreter 0x9472fa8 pid: 14161 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 14161)
spawned uWSGI worker 1 (pid: 14170, cores: 1)
timeout waiting for header. skip request.
timeout waiting for header. skip request.
But requests to http://localhost:6543/ in the browser just time out, while uWSGI only infrequently reports the "timeout waiting for header" lines above.
What could be wrong and is there any way to debug this situation?
Any pointer will be much appreciated.
uWSGI by default speaks the uwsgi (all lowercase) protocol, not the HTTP one, so you cannot connect to it directly from a browser. Add --protocol=http to make it speak HTTP (slower, obviously).
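Applied to the command from the question, that would be something like (a sketch, keeping the original options):
uwsgi --ini-paste development.ini -b 32768 --protocol=http
or, equivalently, a protocol = http line inside the [uwsgi] section of development.ini.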
