uWSGI + nginx repeated logging in Django - python

I am having some weird issues when I run my application on a dev server using uWSGI + nginx. It works fine when a request completes within 5-6 minutes. For long deployments and requests taking longer than that, uWSGI starts repeating the log entries after around 5 minutes, as if it spawned another process: I get two sets of logs, one for the current process and one for the repeated one. I am not sure why this is happening and did not find anything online. I am sure this is not related to my code, because the same thing works perfectly fine in the lab environment, where I use the Django runserver. Any insight would be appreciated.
The uwsgi.ini:
# netadc_uwsgi.ini file
#uid = nginx
#gid = nginx
# Django-related settings
env = HTTPS=on
# the base directory (full path)
chdir = /home/netadc/apps/netadc
# Django's wsgi file
module = netadc.wsgi
# the virtualenv (full path)
home = /home/netadc/.venvs/netadc
# process-related settings
# master
master = true
# maximum number of worker processes
processes = 10
buffer-size = 65536
# the socket (use the full path to be safe)
socket = /home/netadc/apps/netadc/netadc/uwsgi/tmp/netadc.sock
# ... with appropriate permissions - may be needed
#chmod-socket = 666
# daemonize
daemonize = true
# logging
logger = file:/home/netadc/apps/netadc/netadc/uwsgi/tmp/netadc_uwsgi.log
# clear environment on exit
vacuum = true
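For context, the nginx side is not shown in the question; a typical location block talking to the socket above would look roughly like this (a sketch, with the socket path taken from the ini; the timeout comment only matters because these requests run for several minutes):
location / {
    include uwsgi_params;
    uwsgi_pass unix:/home/netadc/apps/netadc/netadc/uwsgi/tmp/netadc.sock;
    # requests here run for many minutes, so the proxy read timeout has to be
    # raised well above the nginx default of 60s, e.g.:
    uwsgi_read_timeout 600s;
}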

Related

Reloading vs. restarting uWSGI to activate code changes

After reading the uWSGI's documentation on reloading, my understanding was that, for an app that uses lazy-apps, writing w to uWSGI's master FIFO should trigger a restart of all workers (and hence activate changes in the Python code).
However, that doesn't seem to work for me. I need to restart the systemd service (systemctl restart myservice) for code changes to take effect. Am I misunderstanding the documentation, or is there an issue with my setup?
My myservice.service file looks like this:
...
ExecStart=/usr/lib/myservice/virtualenv/bin/uwsgi --ini /etc/myservice/uwsgi.ini
ExecReload=/bin/echo 'w' > /run/myservice/masterfifo
ExecStop=/bin/kill -INT $MAINPID
...
In particular, systemctl reload myservice should write w to the master FIFO. I can see from the logs in systemctl status myservice that the reload was executed, but the responses to HTTP requests tell me that the old code is still active.
My /etc/myservice/uwsgi.ini looks like this:
[uwsgi]
processes = 16
procname-master = myservice
master-fifo = /run/myservice/masterfifo
touch-chain-reload
listen = 128
thunder-lock
reload-on-as = 4096
limit-as = 8192
max-requests = 2000
; termination options
vacuum
die-on-term
; application
chdir = /usr/lib/myservice
virtualenv = /usr/lib/myservice/virtualenv
module = myservice.uwsgi
callable = app
master
need-app
enable-threads
lazy = True
lazy-apps = True
; logging
logto = /var/log/myservice/uwsgi.log
log-maxsize = 5242880
logdate = [%%Y/%%m/%%d %%H:%%M:%%S]
disable-logging
; stats server
stats-server = :8201
memory-report
; unix socket config (nginx->uwsgi)
socket = /run/myservice/myservice.sock
chown-socket = api
chmod-socket = 660
I'm running version 2.0.19.1 of uWSGI.
All I know about uWSGI is that it exists, but I noticed a mistake here:
ExecReload=/bin/echo 'w' > /run/myservice/masterfifo
The man page explains the problem:
This syntax is inspired by shell syntax, but only the meta-characters
and expansions described in the following paragraphs are understood,
and the expansion of variables is different. Specifically, redirection
using "<", "<<", ">", and ">>", pipes using "|", running programs in
the background using "&", and other elements of shell syntax are not
supported.
In other words, no redirection is taking place; echo simply receives three arguments to print: the character w, the character >, and the string /run/myservice/masterfifo.
Try this instead:
ExecReload=/bin/sh -c '/bin/echo w > /run/myservice/masterfifo'
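With the command wrapped in a shell, the redirection actually happens. To verify (using the unit and FIFO paths from the question):
# reload systemd's view of the edited unit, then exercise the reload path
sudo systemctl daemon-reload
sudo systemctl reload myservice

# or write to the master FIFO by hand, bypassing systemd
# (run as a user that can write to the FIFO)
echo w > /run/myservice/masterfifo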

Gunicorn throws error 403 when accessing static files

python==2.7.5, django==1.11.10, gunicorn==19.7.1, RHEL 7.4
I have a Django project at my job that was not written by me.
It was in the eventcat user's home directory, and over time we ran out of available space on the disk, so I had to move the project to /data/.
After I moved the project directory and set up a new environment, I faced the problem that static files are not loaded and throw a 403 Forbidden error.
Well, I know that gunicorn is not supposed to serve static files in production, but this is an internal project with low load, and I have to deal with it as is.
The server is started with a self-written script (I changed the environment line to the new path):
#!/bin/sh
. ~/.bash_profile
. /data/eventcat/env/bin/activate
exec gunicorn -c gunicorn.conf.py eventcat.wsgi:application
The gunicorn.conf.py consists of:
bind = '127.0.0.1:8000'
backlog = 2048
workers = 1
worker_class = 'sync'
worker_connections = 1000
timeout = 120
keepalive = 2
spew = False
daemon = True
pidfile = 'eventcat.pid'
umask = 0
user = None
group = None
tmp_upload_dir = None
errorlog = 'er.log'
loglevel = 'debug'
accesslog = 'ac.log'
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"'
proc_name = None
def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)

def pre_fork(server, worker):
    pass

def pre_exec(server):
    server.log.info("Forked child, re-executing.")

def when_ready(server):
    server.log.info("Server is ready. Spawning workers")

def worker_int(worker):
    worker.log.info("worker received INT or QUIT signal")
    # dump a traceback for every running thread
    import threading, sys, traceback
    id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
    code = []
    for threadId, stack in sys._current_frames().items():
        code.append("\n# Thread: %s(%d)" % (id2name.get(threadId, ""), threadId))
        for filename, lineno, name, line in traceback.extract_stack(stack):
            code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
            if line:
                code.append("  %s" % (line.strip()))
    worker.log.debug("\n".join(code))

def worker_abort(worker):
    worker.log.info("worker received SIGABRT signal")
All the files in the static directory are owned by the eventcat user, just like the directory itself.
I couldn't find any useful information in er.log or ac.log.
The server runs over HTTPS and there is an ssl.conf in the project directory. It has aliases for static and media pointing to the previous project location, and I changed all these entries to the new ones, though I couldn't find where this config file is used.
Please advise how I can find out the cause of the issue. Which config files (or anything else) should I look into?
UPDATE:
Thanks to @ruddra I realized gunicorn wasn't serving static files at all; it was httpd. After making changes to the httpd config, everything is working.
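For reference, the kind of httpd change involved is usually an Alias pointing at the new static location plus a Directory block that grants access; the paths below are only assumptions, since the actual ssl.conf isn't shown:
Alias /static/ /data/eventcat/eventcat/static/
<Directory "/data/eventcat/eventcat/static">
    Require all granted
</Directory>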
As far as I know, Gunicorn does not serve static content. To serve static content, it's best to either use WhiteNoise or put NGINX, Apache, or any other reverse proxy in front. You can check Gunicorn's documentation on deployment with NGINX.
If you want to use whitenoise, then please install it using:
pip install whitenoise
Then add WhiteNoise to MIDDLEWARE like this (inside settings.py):
MIDDLEWARE = [
    # 'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',
    # ...
]
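WhiteNoise serves whatever collectstatic gathers into STATIC_ROOT, so the usual companion steps look roughly like this (a sketch; the 'staticfiles' location is an example, not taken from the post):
# settings.py
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')  # example target directory

# run once per deploy so WhiteNoise has files to serve
python manage.py collectstatic --noinput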

How to tell a systemd initiated uWSGI daemon to use a specific python?

I cobbled together the below uWSGI configuration file from examples online.
(◠﹏◠)
Given this configuration -- which resides in /etc/uwsgi.d/myapp.ini and which is used to start the uwsgi daemon and, in turn, myapp via systemd/systemctl -- which configuration directive do I use to tell it to use a specific virtual-environment PYTHON for myapp?
Is it home =?
In other words, when it invokes the django.wsgi application, how can I tell it (or how does it know) to use: /home/myapp_unixHome/.virtualenvs/myapp/bin/python?
[uwsgi]
# =======================================================
# Directories ...
# =======================================================
home = /home/myapp_unixHome/.virtualenvs/myapp/ <--- Python virtualenv dir.
chdir = /home/myapp_unixHome/myapp/ <--- Django App here.
wsgi-file = /home/myapp_unixHome/myapp/django.wsgi <--- Including this django.wsgi file.
static-map = /m=/home/myapp_unixHome/myapp/static/ <--- Static files.
# =======================================================
# =======================================================
# TO BE NAMED ...
# =======================================================
master = true
processes = 5
# =======================================================
# =======================================================
# myapp communicates w/ nginx via a UNIX domain socket.
# =======================================================
socket = /run/uwsgi/myapp.sock
chmod-socket = 664
uid = nginx
gid = nginx
vacuum = true
# =======================================================
# =======================================================
# uWSGI Log file.
# =======================================================
logto = /var/log/uwsgi.log
# =======================================================
Thank you.
Yes, check this link: http://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html
home = /path/to/virtualenv
I hope it helps.
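For what it's worth, home, virtualenv, venv and pyhome are aliases for the same uWSGI option, so the home = line already in the ini above is what points uWSGI at that virtualenv; for example:
; any of these spellings works; only one is needed
home = /home/myapp_unixHome/.virtualenvs/myapp
; virtualenv = /home/myapp_unixHome/.virtualenvs/myapp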

Gunicorn Flask Caching

I have a Flask application that runs under gunicorn and nginx, but if I change a value in the database, the application fails to show the update in the browser under some conditions.
I have a Flask management script (Flask-Script Manager) with the following commands:
import os
from flask_script import Manager  # assumed import; Manager/@manager.command come from Flask-Script
from msldata import app, db, models

path = os.path.dirname(os.path.abspath(__file__))

manager = Manager(app)

@manager.command
def run_dev():
    app.debug = True
    if os.environ.get('PROFILE'):
        from werkzeug.contrib.profiler import ProfilerMiddleware
        app.config['PROFILE'] = True
        app.wsgi_app = ProfilerMiddleware(app.wsgi_app, restrictions=[30])
    if 'LISTEN_PORT' in app.config:
        port = app.config['LISTEN_PORT']
    else:
        port = 5000
    print app.config
    app.run('0.0.0.0', port=port)
    print app.config

@manager.command
def run_server():
    from gunicorn.app.base import Application
    from gunicorn.six import iteritems
    # workers = multiprocessing.cpu_count() * 2 + 1
    workers = 1
    options = {
        'bind': '0.0.0.0:5000',
    }

    class GunicornRunner(Application):
        def __init__(self, app, options=None):
            self.options = options or {}
            self.application = app
            super(GunicornRunner, self).__init__()

        def load_config(self):
            config = dict([(key, value) for key, value in iteritems(self.options)
                           if key in self.cfg.settings and value is not None])
            for key, value in iteritems(config):
                self.cfg.set(key.lower(), value)

        def load(self):
            return self.application

    GunicornRunner(app, options).run()
Now, if I run the server with run_dev in debug mode, DB modifications are picked up.
If run_server is used, the modifications are not seen unless the app is restarted.
However, if I run it like gunicorn -c a.py app:app, the DB updates are visible.
a.py contents
import multiprocessing
bind = "0.0.0.0:5000"
workers = multiprocessing.cpu_count() * 2 + 1
Any suggestions on what I am missing?
I also ran into this situation: running Flask in Gunicorn with several workers, Flask-Cache won't work anymore.
Since you are already using
app.config.from_object('default_config') (or a similar filename)
just add this to your config:
CACHE_TYPE = "filesystem"
CACHE_THRESHOLD = 1000000   # some number your hard drive can manage
CACHE_DIR = "/full/path/to/dedicated/cache/directory/"
I bet you used "simplecache" before...
I was/am seeing the same thing, only when running gunicorn with Flask. One workaround is to set Gunicorn's max-requests to 1. However, that's not a real solution if you have any kind of load, due to the resource overhead of restarting the workers after each request. I got around this by having nginx serve the static content, and changing my Flask app to render the template, write it to static, and then return a redirect to the static file.
Flask-Caching's SimpleCache doesn't work with workers > 1 in Gunicorn.
I had a similar issue using Flask 2.0.2 and Flask-Caching 1.10.1.
Everything works fine in development mode until you put it on gunicorn with more than one worker. One probable reason is that in development there is only one process/worker, so under those restricted circumstances SimpleCache works; each worker keeps its cache in its own process memory, so with several workers the cached values are no longer shared.
My code was:
app.config['CACHE_TYPE'] = 'SimpleCache' # a simple Python dictionary
cache = Cache(app)
The solution that works with Flask-Caching is to use FileSystemCache; my code is now:
app.config['CACHE_TYPE'] = 'FileSystemCache'
app.config['CACHE_DIR'] = 'cache'          # path to your server cache folder
app.config['CACHE_THRESHOLD'] = 100000     # number of cached 'files' before auto-delete kicks in
cache = Cache(app)
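With the cache backed by the filesystem, all gunicorn workers read and write the same store. A minimal usage sketch (the endpoint and timeout are illustrative, not from the original post):
from flask import Flask
from flask_caching import Cache

app = Flask(__name__)
app.config['CACHE_TYPE'] = 'FileSystemCache'
app.config['CACHE_DIR'] = 'cache'
app.config['CACHE_THRESHOLD'] = 100000
cache = Cache(app)

@app.route('/expensive')        # hypothetical endpoint, for illustration only
@cache.cached(timeout=60)       # cached result is shared across workers via CACHE_DIR
def expensive_view():
    # imagine a slow DB query here; its result is reused for 60 seconds
    return "expensive result"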

Performance issue with uwsgi + flask + nginx with ab

I have a question regarding the performance of my Flask app after I incorporated uWSGI and nginx.
My app.view file looks like this:
import app.lib.test_case as test_case
from app import app
import time
@app.route('/<int:test_number>')
def test_case_match(test_number):
    rubbish = test_case.test(test_number)
    return "rubbish World!"
My app.lib.test_case file looks like this:
import time

def test_case(test_number):
    time.sleep(30)
    return None
And my config.ini for my uwsgi looks like this:
[uwsgi]
socket = 127.0.0.1:8080
chdir = /home/ubuntu/test
module = app:app
master = true
processes = 2
daemonize = /tmp/uwsgi_daemonize.log
pidfile = /tmp/process_pid.pid
Now, if I run this test case purely through the Flask framework, without uWSGI + nginx, the ab benchmark gives a response in 31 seconds, which is expected owing to the sleep call. What I don't get is that when I run the app through uWSGI + nginx, the response time is 38 seconds, an overhead of around 25%. Can anyone enlighten me?
time.sleep() is not time-safe.
From the documentation of time.sleep(secs):
[…] Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.
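A quick way to see how much of the 38 seconds is the sleep itself versus the uWSGI/nginx path is to time the call inside the view and compare it with what ab reports (an illustrative sketch, not part of the original code):
import time

def test_case(test_number):
    start = time.time()
    time.sleep(30)
    elapsed = time.time() - start
    # anything noticeably above 30s here is scheduler delay inside the worker;
    # any remaining gap in ab's figures is spent outside the application
    print("slept for %.2f seconds (requested 30)" % elapsed)
    return None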
