I have a service:
[Unit]
Description=tweetsift
After=network.target
[Service]
User=root
Group=root
WorkingDirectory=/var/www/html
ExecStart=sudo /usr/bin/nice -n -20 sudo -u root sudo gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
Restart=on-failure
[Install]
WantedBy=multi-user.target
When I run sudo systemctl status tweet I can see that /usr/bin/nice is being used for the main PID. However, it is not being applied to the workers:
tweetsift.service - tweet
Loaded: loaded (/etc/systemd/system/tweet.service; enabled; preset: enabled)
Active: active (running) since Mon 2023-01-09 04:36:08 UTC; 5min ago
Main PID: 3124 (sudo)
Tasks: 12 (limit: 4661)
Memory: 702.8M
CPU: 7.580s
CGroup: /system.slice/tweet.service
├─3124 sudo /usr/bin/nice -n -20 sudo -u root sudo gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
├─3125 sudo -u root sudo gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
├─3126 sudo gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
├─3127 /usr/bin/python3 /usr/local/bin/gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
├─3128 /usr/bin/python3 /usr/local/bin/gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
├─3129 /usr/bin/python3 /usr/local/bin/gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
├─3130 /usr/bin/python3 /usr/local/bin/gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
└─3131 /usr/bin/python3 /usr/local/bin/gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
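As an aside, whether the workers actually inherited the master's nice value can be checked directly with ps (a child normally inherits its parent's niceness across fork); a quick check using the worker PIDs from the status output above:
ps -o pid,ppid,ni,cmd -p 3127,3128,3129,3130,3131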
I am running a machine learning script that eats up the CPU. I tried running it with nice python3 tweet.py and that works without the process getting killed.
However, when I call the API endpoint I built, the service spins up a worker and then gets killed for OOM (out of memory).
I am using Ubuntu 20.04 & Apache2
Any ideas? I was able to get nice applied to the main PID by updating /etc/sudoers and adding a line to allow sudo to use it.
But I still can't get the service to apply nice to the worker PIDs when they start up in response to an API call to the Flask app I have running.
I am using gunicorn (version 20.1.0)
Thanks!
I've tried everything at this point. I want nice to be applied to the gunicorn workers when my Flask app receives an API call, without the process getting killed for OOM.
I'm using a Premium Intel droplet on DigitalOcean with 4 GB of RAM and an 80 GB disk.
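For what it's worth, systemd has a native Nice= directive in [Service] that applies to every process the unit spawns (the gunicorn master and, by inheritance, the workers it forks), so the sudo/nice wrapper chain isn't strictly needed when the unit already runs as root. A sketch of that shape, with the gunicorn path taken from the status output above:
[Unit]
Description=tweetsift
After=network.target
[Service]
User=root
Group=root
Nice=-20
WorkingDirectory=/var/www/html
ExecStart=/usr/local/bin/gunicorn -w 4 -b 0.0.0.0:5000 endpoint:app
Restart=on-failure
[Install]
WantedBy=multi-user.target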
Related
Can you please check if I am missing anything in the config?
I am running a python flask app using Gunicorn.
Our current flow of events is:
JMeter triggers 8-10 jobs in parallel and sends them to the AWS load balancer.
A request then goes through an Nginx proxy and is forwarded to a Gunicorn/Flask app running on an EC2 instance.
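For reference, the Nginx leg of that flow is typically a plain reverse proxy to the address Gunicorn binds to; a minimal sketch, with the upstream address and timeout assumed from the commands below:
location / {
    proxy_pass http://localhost:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_read_timeout 90000s;
}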
I am using the following configurations to enable multi-processing on Gunicorn/Flask, but these commands are not having any effect, as I see the jobs are being executed serially and not in parallel.
Please help me understand what I need to change in order to have all of these jobs execute in parallel.
Here is the list of commands I have tried, but none of them has worked.
These are the sync commands I tried:
gunicorn app1:application -b localhost:8000 --timeout 90000 -w 17
gunicorn app1:application -b localhost:8000 --timeout 90000 -w 17 --threads 2
gunicorn app1:application -b localhost:8000 --timeout 90000 -w 17 --threads 2 --max-requests-jitter 4
gunicorn app1:application -b localhost:8000 --timeout 90000 -w 17 --max-requests 4
These are the async commands I tried:
gunicorn app1:application -b localhost:8000 --timeout 90000 -w 17 --worker-class tornado
gunicorn app1:application -b localhost:8000 --timeout 90000 -w 17 --worker-class gevent
gunicorn app1:application -b localhost:8000 --timeout 90000 -w 17 --worker-class gthread
gunicorn app1:application -b localhost:8000 --timeout 90000 -w 17 --worker-class eventlet
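The same settings can also be collected in a Gunicorn config file instead of being passed on the command line; a minimal sketch, assuming a hypothetical gunicorn.conf.py loaded with gunicorn -c gunicorn.conf.py app1:application:
# gunicorn.conf.py (hypothetical file name)
bind = "localhost:8000"
timeout = 90000
workers = 17            # number of worker processes
threads = 2             # threads per worker
worker_class = "gthread"  # threaded worker; with threads > 1 gunicorn uses gthread anyway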
I am running a virtualenv with Python3.6 on Ubuntu 16.04 for my Django project using uwsgi and NGINX.
I have uWSGI installed globally and also in the virtualenv.
I can run my project from the command line using uWSGI within the env with
/home/user/Env/myproject/bin/uwsgi --http :8080 --home /home/user/Env/myproject --chdir /home/user/myproject/src/myproject -w myproject.wsgi
and go to my domain and it loads fine.
However, in practice I am running uWSGI in "Emperor mode", and when I set up the service file (along with NGINX), the domain displays an internal server error.
The uWSGI logs trace to --- no python application found ---
I was having this problem when running
uwsgi --http :8080 --home /home/user/Env/myproject --chdir /home/user/myproject/src/myproject -w myproject.wsgi
because it was using the globally installed uwsgi instead of the virtualenv one.
I changed my ExecStart to the virtualenv uwsgi path, but no luck.
I can't figure out what I'm doing wrong, path error? Syntax error?
my /etc/systemd/system/uwsgi.service file
[Unit]
Description=uWSGI Emperor service
[Service]
ExecStartPre=/bin/bash -c 'mkdir -p /run/uwsgi; chown user:www-data /run/uwsgi'
ExecStart=/home/user/Env/myproject/bin/uwsgi --emperor /etc/uwsgi/sites
Restart=always
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
[Install]
WantedBy=multi-user.target
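For context, in Emperor mode the virtualenv is normally selected per vassal: each ini file under /etc/uwsgi/sites points at its own home/virtualenv, rather than relying on which uwsgi binary launches the emperor. A sketch of such a vassal file, with the file name and socket path as assumptions and the project paths taken from the command line above:
; /etc/uwsgi/sites/myproject.ini (hypothetical name)
[uwsgi]
chdir = /home/user/myproject/src/myproject
home = /home/user/Env/myproject
module = myproject.wsgi
master = true
processes = 4
socket = /run/uwsgi/myproject.sock
chown-socket = user:www-data
chmod-socket = 660
vacuum = true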
Okay, a bit silly, but it seems all I had to do was run sudo systemctl stop uwsgi and then sudo systemctl start uwsgi, and it works now.
I want to deploy a dev server, but I have a problem starting Celery and Gunicorn. I'm using scripts for this purpose.
celery.sh
#!/bin/bash
cd /home/dev/app
pipenv run celery -A config worker -B -l info
and start.sh for gunicorn
#!/bin/bash
cd /home/dev/app
pipenv run gunicorn config.wsgi:application -b 127.0.0.1:8005 -w 2 -t 60 \
--env DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE \
--env DSN=$SENTRY_DSN \
--env DATABASE_URL=$DATABASE_URL \
--log-file - \
--error-logfile /home/dev/app/errors.log
Also here is my config for supervisor
[program:back]
directory=/home/dev/app/
command=/home/dev/bin/start
user=dev
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
[program:celery]
directory=/home/dev/app/
command=/home/dev/bin/celery
user=dev
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
When I run sudo supervisorctl start celery I get the following error:
/home/dev/bin/celery: line 3: pipenv: command not found
I also added the following lines, as the pipenv documentation suggests (https://pipenv.readthedocs.io/en/latest/diagnose/):
[supervisord]
environment=LC_ALL='en_US.UTF-8',LANG='en_US.UTF-8'
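Since that error comes from the wrapper script's shell not finding pipenv on its PATH, one option sometimes used is to extend PATH for the program in the supervisor config; a sketch only, with the directories below as an assumption (check the real location with which pipenv):
[program:celery]
directory=/home/dev/app/
command=/home/dev/bin/celery
user=dev
environment=PATH="/usr/local/bin:/usr/bin:/bin"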
UPDATE
Changed my supervisor config:
[program:back]
directory=/home/dev/app/
command=pipenv run gunicorn config.wsgi:application --bind 127.0.0.1:8005
user=dev
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
[program:celery]
directory=/home/dev/app/
command=pipenv run celery -A config:celery_app worker -B -l info
user=dev
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
And now I'm getting an error:
back: ERROR (no such file)
You need to give the explicit path to gunicorn. I'm not sure about pipenv, but the error you are getting is because supervisor tries to find gunicorn in the directory. You should change your config file to something like this:
[program:back]
directory=/home/dev/app/
command=/path/to/pipenv run /path/to/gunicorn config.wsgi:application --bind 127.0.0.1:8005
Then you must restart your supervisord in order to load the settings.
sudo service supervisord reload
In your config file, change command= to bash -c followed by the full path and file to execute.
This should do the trick.
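A sketch of that bash -c form, assuming pipenv is installed at /usr/local/bin/pipenv (adjust to the output of which pipenv):
[program:back]
directory=/home/dev/app/
command=/bin/bash -c "/usr/local/bin/pipenv run gunicorn config.wsgi:application --bind 127.0.0.1:8005"
user=dev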
My project is based on Python Flask and Celery with RabbitMQ.
So I have to run two long-running services in one container.
The two services:
1. gunicorn -w 64 -b 127.0.0.1:8888 manage:app
2. celery worker -A celery_worker.celery --loglevel=info
Both of those services run as long-lived processes.
I don't know how to write the Dockerfile to achieve my purpose.
I tried this:
CMD ["gunicorn -w 64 -b 127.0.0.1:8888 manage:app", "celery worker -A celery_worker.celery --loglevel=info"]
But it does not work.
Before I decided to use Docker for my project, I used supervisor to run those two commands simultaneously. But supervisor has some problems inside a Docker container that I couldn't solve (DETAIL).
So I want to know how to run two long-running services in a Docker container and how to write the Dockerfile for that. I want docker stop to stop both services and docker start to start them both again.
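One common pattern, sketched under the assumption that a small wrapper script (called start.sh here, a hypothetical name) is copied into the image, is to start both services in the background, relay the stop signal to them, and wait for both so docker stop shuts them down cleanly:
#!/bin/sh
# start.sh (hypothetical wrapper): run both services and forward docker stop's
# SIGTERM to each of them before the container exits.
gunicorn -w 64 -b 127.0.0.1:8888 manage:app &
GUNICORN_PID=$!
celery worker -A celery_worker.celery --loglevel=info &
CELERY_PID=$!
# docker stop signals PID 1 (this script); pass the signal on and wait for both children.
trap 'kill -TERM "$GUNICORN_PID" "$CELERY_PID"; wait "$GUNICORN_PID" "$CELERY_PID"' TERM INT
wait "$GUNICORN_PID" "$CELERY_PID"
and in the Dockerfile:
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]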
I find that when I use docker-compose to shut down my gunicorn (19.7.1) python application, it always takes 10s to shut down. This is the default maximum time docker-compose waits before forcefully killing the process (adjusted with the -t / --timeout parameter). I assume this means that gunicorn isn't being gracefully shut down. I can reproduce this with:
docker-compose.yml:
version: "3"
services:
test:
build: ./
ports:
- 8000:8000
Dockerfile:
FROM python
RUN pip install gunicorn
COPY test.py .
EXPOSE 8000
CMD gunicorn -b :8000 test:app
test.py
def app(_, start_response):
    """Simplest possible application object"""
    data = b'Hello, World!\n'
    status = '200 OK'
    response_headers = [
        ('Content-type', 'text/plain'),
        ('Content-Length', str(len(data)))
    ]
    start_response(status, response_headers)
    return iter([data])
Then running the app with:
docker-compose up -d
and gracefully stopping it with:
docker-compose stop
version:
docker-compose version 1.12.0, build b31ff33
I would prefer to allow gunicorn to stop gracefully. I think it should be able to based on the signal handlers in base.py.
All of the above is also true for updating images using docker-compose up -d twice, the second time with a new image to replace the old one.
Am I misunderstanding / misusing something? What signal does docker-compose send to stop processes? Shouldn't gunicorn be using it? Should I be able to restart my application faster than 10s?
TL;DR
Add exec after CMD in your Dockerfile: CMD exec gunicorn -b :8000 test:app.
Details
I had the same issue, when I ran docker exec my_running_gunicorn ps aux, I saw something like:
gunicorn 1 0.0 0.0 4336 732 ? Ss 10:38 0:00 /bin/sh -c gunicorn -c gunicorn.conf.py vision:app
gunicorn 5 0.1 1.1 91600 22636 ? S 10:38 0:00 /usr/local/bin/python /usr/local/bin/gunicorn -c gunicorn.conf.py vision:app
gunicorn 8 0.2 2.5 186328 52540 ? S 10:38 0:00 /usr/local/bin/python /usr/local/bin/gunicorn -c gunicorn.conf.py vision:app
PID 1 is not the gunicorn master, hence it didn't receive the SIGTERM signal.
With the exec in the Dockerfile, I now have
gunicorn 1 32.0 1.1 91472 22624 ? Ss 10:43 0:00 /usr/local/bin/python /usr/local/bin/gunicorn -c gunicorn.conf.py vision:app
gunicorn 7 45.0 1.9 131664 39116 ? R 10:43 0:00 /usr/local/bin/python /usr/local/bin/gunicorn -c gunicorn.conf.py vision:app
and it works.
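An equivalent fix is the exec (JSON array) form of CMD, which skips the /bin/sh -c wrapper entirely so gunicorn itself becomes PID 1; a sketch of the Dockerfile above with that change:
FROM python
RUN pip install gunicorn
COPY test.py .
EXPOSE 8000
# exec form: no shell wrapper, gunicorn receives SIGTERM directly
CMD ["gunicorn", "-b", ":8000", "test:app"]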