APScheduler doesn't work with UWSGI - python

I'm using Django 1.8 and APScheduler to run workers at certain intervals. It works perfectly with Django's development server (e.g. ./manage.py runserver), but when I set up the project with uWSGI and master=true, the uWSGI worker can't get any requests from Nginx and the browser shows a 504 Gateway Time-out error after 1-2 minutes of loading.
When I change it to master=false everything is fine.
Here is my UWSGI config:
[uwsgi]
chdir = /var/www/projectname/backend/projectname
module = projectname.wsgi:application
wsgi-file = /var/www/projectname/backend/projectname/projectname/wsgi.py
uid = root
gid = root
virtualenv = /var/www/venv/
master = false
processes = 4
socket = :8080
logto = /var/www/projectname/log/uwsgi.log
env = DJANGO_SETTINGS_MODULE=projectname.settings
enable-threads = true
Please note that I'm using Django's AppConfig to run the scheduler once. Is there any problem with my uWSGI config, or is it caused by Django?

Consider uWSGI mules for your background tasks. The workers handle requests, while the mules handle the longer-running work.
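A minimal sketch of the mule approach, assuming a hypothetical run_scheduler.py registered in the ini file with mule = run_scheduler.py and a hypothetical myapp.jobs module; the scheduler then lives in the mule process instead of the request workers, so master = true no longer interferes with it:
# run_scheduler.py -- executed by uWSGI as a mule (ini: mule = run_scheduler.py)
import os
import time

import django
from apscheduler.schedulers.background import BackgroundScheduler

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'projectname.settings')
django.setup()

from myapp.jobs import my_periodic_job  # hypothetical job function

scheduler = BackgroundScheduler()
scheduler.add_job(my_periodic_job, 'interval', minutes=5)
scheduler.start()

# Keep the mule process alive; BackgroundScheduler runs jobs in its own threads.
while True:
    time.sleep(60)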

Related

chown(): Operation not permitted caused by uwsgi's chown-socket (nginx)

I tried to run a Django project using nginx and uwsgi, but the following error occurs in nginx:
connect() to unix:/var/www/html/view/dbproject/dbproject.sock failed (13: Permission denied) while connecting to upstream
I thought this error was probably because the owner of dbproject.sock, which is created when uwsgi is started, is username:username and not username:www-data.
Therefore, I added chown-socket = %(username):www-data to the uwsgi initialization file uwsgi.ini, but when I restart uwsgi, chown(): Operation not permitted is written to the uwsgi log.
How can I make the socket owner %(username):www-data?
Thank you.
uwsgi.ini
[uwsgi]
# path to your project
chdir = /var/www/html/view/dbproject
username = myusername
# path to wsgi.py in project
module = dbproject.wsgi:application
master = true
pidfile = /var/www/html/view/dbproject/django.uwsgi.pid
socket = /var/www/html/view/dbproject/dbproject.sock
# http = 127.0.0.1:8000
# path to python virtualenv
home = /var/www/html/view/myvenv
chown-socket = %(username):www-data
chmod-socket = 660
enable-threads = true
processes = 5
thunder-lock = true
max-requests = 5000
# clear environment on exit
vacuum = true
daemonize = /var/www/html/view/dbproject/django.uwsgi.log
# Django settings
env = DJANGO_SETTINGS_MODULE=dbproject.settings
The problem has been resolved.
I used root permission to start uwsgi.
I was using a Python virtual environment, but the user who activated it was not root, so every Python command run in that environment ran with uid=username and gid=username. In other words, the uwsgi command below, and the chown call that uwsgi performs, were also executed as username.
uwsgi --ini uwsgi.ini
However, as far as I know, chown fails with a permission error when used by anyone other than root. Therefore, I think the chown-socket option in the uwsgi initialization file, which was executed as username, was causing the permission error.
So what I did is the following:
sudo -s # become root
source myvenv/bin/activate # activate the python venv as root
uwsgi --ini uwsgi.ini # execute uwsgi as root
I am concerned that having to run uwsgi as root is a problem on a public system, so if you have a better solution than mine, please point it out.

How to run Python RQ jobs on a DigitalOcean application server?

I have a FastAPI application deployed on DigitalOcean. It has multiple API endpoints, and in some of them I have to run a scraping function as a background job using the RQ package, so that the user is not kept waiting for a server response.
I've already managed to create a Redis database on DigitalOcean and successfully connect the application to it, but I'm facing issues with running the RQ worker.
Here's the code, inspired by RQ's official documentation:
import os

import redis
from rq import Worker, Queue, Connection

listen = ['high', 'default', 'low']

# Connect to DigitalOcean's Redis db
REDIS_URL = os.getenv('REDIS_URL')
conn = redis.Redis.from_url(url=REDIS_URL)

# Create an RQ queue using the Redis connection
q = Queue(connection=conn)

with Connection(conn):
    worker = Worker([q], connection=conn)  # This instruction works fine
    worker.work()  # The deployment fails here; the DigitalOcean server crashes at this instruction
The worker/job execution runs just fine locally but fails on DigitalOcean's server.
What could this be due to? Is there anything I'm missing, or any kind of configuration that needs to be done on DigitalOcean's side?
Thank you in advance!
I also tried using FastAPI's BackgroundTasks class. At first it ran smoothly, but the job stops halfway through with no feedback from the class itself on what was happening in the background. I'm guessing it's due to a timeout that doesn't seem to be configurable in FastAPI (perhaps because its background tasks are meant to be low-cost and fast).
I'm also thinking of trying Celery out, but I'm afraid I would run into the same issues as RQ.
Create a systemd unit for the web application:
sudo nano /etc/systemd/system/myproject.service
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target
[Service]
User=user
Group=www-data
WorkingDirectory=/home/user/myproject
Environment="PATH=/home/user/myproject/myprojectvenv/bin"
ExecStart=/home/user/myproject/myprojectvenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi:app
[Install]
WantedBy=multi-user.target
Then run the RQ worker as its own process, e.g. with a supervisor program entry (a [program:...] section belongs in a supervisor config such as /etc/supervisor/conf.d/rq_worker.conf, not in the systemd unit):
[program:rq_worker]
command=/home/user/myproject/myprojectvenv/bin/rq worker high default low
directory=/home/user/myproject
autostart=true
autorestart=true
stderr_logfile=/var/log/rq_worker.err.log
stdout_logfile=/var/log/rq_worker.out.log
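With the worker running as a separate process like that, the FastAPI app should only enqueue jobs rather than call worker.work() itself. A minimal sketch of the enqueue side, assuming a hypothetical scrape_site function in a tasks module:
# main.py -- enqueue the scraping job instead of running the worker in-process
import os

import redis
from fastapi import FastAPI
from rq import Queue

from tasks import scrape_site  # hypothetical module holding the scraping function

redis_conn = redis.Redis.from_url(os.getenv('REDIS_URL'))
q = Queue(connection=redis_conn)

app = FastAPI()

@app.post('/scrape')
def start_scrape(url: str):
    # enqueue() returns immediately; the separate rq worker process executes the job
    job = q.enqueue(scrape_site, url)
    return {'job_id': job.get_id()}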

Unable to run background tasks with Celery + Django + Gunicorn

Stack:
Python v2.7
Django v1.11
Celery v4.3.0
Gunicorn v19.7.1
Nginx v1.10
When I run the Django server and Celery manually, the async tasks execute as expected.
The problem comes when I deploy the Django project using Gunicorn plus Nginx.
I tried running Celery using supervisor but it didn't help.
views.py
def _functionA():
    _functionB.delay()  # _functionB is a registered async task
settings.py
# Celery settings
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
celery_init.py
from __future__ import absolute_import
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'cpi_server.settings')
app = Celery('myproject')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
__init__.py
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from myproject.celery_init import app as celery_app
__all__ = ['celery_app']
gunicorn.service
[Unit]
Description=Gunicorn application server....
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=<myprojectdir>
Environment=PYTHONPATH=<ENV>
ExecStart=<myprojectdir>/env/bin/gunicorn --workers 3 --access-logfile access_gunicorn.log --error-logfile error_gunicorn.log --capture-output --log-level debug --bind unix:<myprojectdir>/myproject.sock <myproject>.wsgi:application
[Install]
WantedBy=multi-user.target
myproject_nginx.conf
server {
    listen 8001;

    location / {
        include proxy_params;
        proxy_pass http://unix:<myprojectdir>/myproject.sock;
    }
}
celery worker
celery worker -B -l info -A myproject -Q celery,queue1,queue2,queue3 -n beat.%h -c 1
Can anyone help me with the question below:
Why is it that when Django is deployed with Gunicorn and Nginx the Celery worker doesn't execute tasks, whereas when everything is run manually (i.e. with python manage.py runserver ...) it is able to execute them?
You have a concurrency level of 1 (the -c 1 in your worker command line). This basically means the worker is configured to run a single task at any point in time. If your tasks are long-running, you may be under the impression that Celery is not running anything...
You can easily test this - when you start some task, run the following:
celery -A myproject inspect active
That will list you running tasks (if any).
Another thing to fix is your configuration variables. Celery 4 now expects configuration variables to be lowercase. Read the What's new in Celery 4.0 (latentcall) document for more information, especially the Lowercase setting names section.
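For illustration, the lowercase equivalents of the settings above could be applied directly on the app object in celery_init.py (a sketch; with config_from_object(..., namespace='CELERY') the uppercase CELERY_* names in settings.py are mapped to these same lowercase keys):
# celery_init.py -- Celery 4 style lowercase settings applied to the app object
app.conf.update(
    broker_url='redis://localhost:6379',
    result_backend='redis://localhost:6379',
    accept_content=['application/json'],
    result_serializer='json',
    task_serializer='json',
)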

django celery daemon doesn't work

Here's my celery app config:
from __future__ import absolute_import
from celery import Celery
import os
from django.conf import settings
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'tshirtmafia.settings')
app = Celery('tshirtmafia')
app.conf.update(
    CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend',
)
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
settings.py:
INSTALLED_APPS:
'kombu.transport.django',
'djcelery',
also:
BROKER_URL = 'django://'
Here's my task:
from celery import shared_task
from django.core.mail import send_mail

@shared_task
def test():
    send_mail('nesamone bus', 'Files have been successfully generated.', 'marijus.merkevicius@gmail.com',
              ['marijus.merkevicius@gmail.com'], fail_silently=False)
When I run python manage.py celeryd locally and then call test.delay() from a shell, it works.
Now I'm trying to deploy my app. With the exact same configuration I run python manage.py celeryd, open a shell in another window and run the test task, and it doesn't work.
I've also tried to setup background daemon like this:
/etc/default/celeryd configuration:
# Name of nodes to start, here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Where to chdir at start. (CATMAID Django project dir.)
CELERYD_CHDIR="/home/tshirtnation/"
# Python interpreter from environment. (in CATMAID Django dir)
ENV_PYTHON="/usr/bin/python"
# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"
# How to call "manage.py celeryctl"
CELERYCTL="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryctl"
# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=1"
# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
And I use default celery /etc/init.d/celeryd script.
So basically it seems like celeryd starts but doesn't do any work. I have no idea how to debug this or what might be wrong.
Let me know if you need anything else
Celery has turned out to be a rather capricious child in an otherwise robust Django system, in my experience.
There is too little information here to pin down the reason for your problem.
The most common reason for a Celery daemon failure is file system permissions.
But to narrow down the cause, I'd try the following:
Start celery from the command line as the user who owns the django project:
celery -A proj worker -l info
If that works, go further.
Start celery in verbose mode as root, just as the daemon would run:
sudo sh -x /etc/init.d/celeryd start
This will show most of the problems with the daemon script (the celery user and group being used, and so on), but not all of them, unfortunately: permission failures are not visible.
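If both of those start cleanly but the task still never runs, a quick check from a Django shell on the server (a sketch, assuming the test task from the question lives in a hypothetical myapp.tasks module) shows whether the daemonized worker actually consumes anything:
# python manage.py shell, run on the server
from myapp.tasks import test  # hypothetical path to the task shown above

result = test.delay()
print(result.status)  # stays 'PENDING' if no worker ever picks the task up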
A small remark of my own: usually Celery is started by its own celery user, and the django project runs as another user. After a long fight with celery and the system, I gave up on the dedicated celery user and ran the celery process as the django project user.
And do not forget to run, once:
update-rc.d celerybeat defaults
update-rc.d celeryd defaults
(that is for the Ubuntu-style init daemon, of course).
Good luck

Problems deploying Pyramid with uWSGI on Nginx

I seem to be having some slight problems deploying a Pyramid web application. The problem seems to lie in the init script that I am using to start my web application on boot. For some reason, uWSGI will not work unless my socket is owned by "nobody.nobody" OR Nginx is started after my uwsgi init script. I've changed my init script to reflect these changes, but it does not seem to be working. The init script (or the part that starts uwsgi) looks like this:
#!/sbin/runscript
args="--ini-paste /var/www/pyramid/app1/development.ini"
command="/var/www/pyramid/bin/uwsgi"
pidfile="/var/run/uwsgi.pid"
sock="/var/tmp/proxy/uwsgi.sock"
nobody="nobody.nobody"
start() {
    ebegin "Starting app1"
    chown $nobody $sock
    start-stop-daemon --start --exec $command -- $args \
        --pidfile $pidfile
    chown $nobody $sock
    einfo "app1 started"
    eend $?
}
My Nginx configuration looks like so:
location / {
    include uwsgi_params;
    uwsgi_pass unix:///var/tmp/proxy/uwsgi.sock;
    uwsgi_param SCRIPT_NAME "";
}
My ini file includes the following:
[uwsgi]
socket = /var/tmp/proxy/uwsgi.sock
pidfile = /var/run/uwsgi.pid
master = true
processes = 1
home = /var/www/pyramid
daemonize = /var/log/uwsgi.log
virtualenv = /var/www/pyramid/
pythonpath = /var/www/pyramid/bin
What happens is that Nginx starts, and then uwsgi starts. Performing an "ls -la" in /var/tmp/proxy reveals that the ownership of uwsgi.sock is "root root" instead of "nobody nobody". However, restarting Nginx fixes the problem, regardless of what the socket's permissions are (but Nginx has to be started first).
Thus, the ways I can get this to work is:
start uwsgi
start nginx
restart nginx
or
start nginx
start uwsgi
restart nginx
I'm at a complete loss as to why this isn't working. If anyone has any advice I'd greatly appreciate it!
You can use the following settings in your ini file to change the ownership and permissions of the socket:
chmod-socket = 777 # socket permission
gid = www-data # socket group
uid = www-data # socket user
Another thing to consider is whether you actually want uWSGI to run as root. If you pass --uid and --gid arguments to uwsgi, it will drop privileges and run as a different (preferably non-root) user.
For example, nginx usually runs as the www-data user and www-data group. So if you set up your wsgi app to run with "--gid www-data" and then add at least group-write permissions to your socket file with "--chmod-socket 020", then nginx will be able to write to the socket and you'll be in business.
See my blog post on the subject: http://blog.jackdesert.com/common-hurdles-to-deploying-uwsgi-apps-part-1
