Cannot read environment variable in Python on Ubuntu

I have set up an environment variable which I export locally using a .sh file:
.sh file:
#!/bin/sh
echo "environment variables"
export BROKER="amqp://admin:password#11.11.11.11:4672//"
Locally inside a virtual environment I can now read this in Python using:
BROKER = os.environ['BROKER']
However, on my production server (Ubuntu) I run the same file (chmod +x name_of_file.sh, then source settings.sh) and can see the variable using printenv, but Python gives the error KeyError: 'BROKER'. Why?
This only happens on my production machine, despite the fact that I can see the variable using printenv. Note that my production machine does not use virtualenv.
If I run the Python shell on Ubuntu and do os.environ['BROKER'], it prints out the correct value, so I have no idea why the app cannot find it.
This is the task that gets run and cannot find the variable (a supervisor task):
[program:celery]
directory = /srv/app_test/
command=celery -A tasks worker -l info
stdout_logfile = /var/log/celeryd_.log
autostart=true
autorestart=true
startsecs=5
stopwaitsecs = 600
killasgroup=true
priority=998
user=ubuntu
Celery config (which does not find the variable when executed under supervisor):
from kombu import Exchange, Queue
import os
# Celery Settings
BROKER = os.environ['BROKER']
When I restart supervisor, it gives the KeyError.

The environment variables from your shell will not be visible within supervisor tasks.
You need to use the environment setting in your supervisor config:
[program:celery]
...
environment=BROKER="amqp://admin:password#11.11.11.11:4672//"
This requires supervisor 3.0+.
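If it helps while debugging, here is a minimal sketch (my addition, not part of the original answer) of how the Celery config above could fail with a more descriptive message than a bare KeyError:
# Hedged sketch: fail loudly, with a hint about where the variable must be set.
import os

BROKER = os.environ.get('BROKER')
if BROKER is None:
    raise RuntimeError(
        "BROKER is not set. Variables exported in an interactive shell are "
        "not inherited by supervisor-managed processes; set them with "
        "'environment=' in the [program:...] section."
    )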

How to change default path of Celery beat service?

I installed Celery as a Windows service. My code moves the *.pid and Celery log files into another directory, but there are three files (celerybeat-schedule.bak, celerybeat-schedule.dir, celerybeat-schedule.dat) that I am not able to move.
I used the code below to change the default path of the other files:
command = '"{celery_path}" -A {proj_dir} beat -f "{log_path}" -l info --pidfile="{pid_path}" '.format(
celery_path=os.path.join(PYTHONSCRIPTPATH, 'celery.exe'),
proj_dir=PROJECTDIR,
# log_path_1=os.path.join(INSTDIR,'celery_2.log')),
log_path=os.path.join(tmpdir,'celery_'+cur_date_time+'.log'),
pid_path = os.path.join(tmpdir,'celerybeat_'+cur_date_time+'.pid'))
How to change default path of Celery beat service?
If you run celery -A your.project.app beat --help, it prints very useful CLI help where you will find the solution to your problem: the -s <path to the scheduler database file> flag.
-s SCHEDULE, --schedule SCHEDULE
Path to the schedule database. Defaults to celerybeat-
schedule. The extension '.db' may be appended to the
filename. Default is celerybeat-schedule.
All you have to do is pass a full path to the schedule database file to your Celery beat process. Example: -s C:/services/celery/celerybeat-schedule.db
Finally, I was able to change the paths used by the Celery service with the code below.
command = '"{celery_path}" -A {proj_dir} beat -f "{log_path}" -l info --pidfile="{pid_path}" '.format(
celery_path=os.path.join(PYTHONSCRIPTPATH, 'celery.exe'),
proj_dir=PROJECTDIR,
# log_path_1=os.path.join(INSTDIR,'celery_2.log')),
log_path=os.path.join(CELERYDIR,'celery_'+cur_date_time+'.log'),
# bak_path=os.path.join(CELERYDIR,'celerybeat-schedule'),
pid_path = os.path.join(CELERYDIR,'celerybeat_'+cur_date_time+'.pid'))
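For completeness, here is a hedged sketch of how the -s flag from the answer could be folded into the same command string, so the celerybeat-schedule files are also written under CELERYDIR. PYTHONSCRIPTPATH, PROJECTDIR, CELERYDIR and cur_date_time are assumed to be defined elsewhere in the service script, as in the snippets above.
# Hedged sketch: same command as above, plus -s pointing the schedule database
# at CELERYDIR instead of the current working directory.
import os

command = ('"{celery_path}" -A {proj_dir} beat -f "{log_path}" -l info '
           '--pidfile="{pid_path}" -s "{schedule_path}"').format(
    celery_path=os.path.join(PYTHONSCRIPTPATH, 'celery.exe'),
    proj_dir=PROJECTDIR,
    log_path=os.path.join(CELERYDIR, 'celery_' + cur_date_time + '.log'),
    pid_path=os.path.join(CELERYDIR, 'celerybeat_' + cur_date_time + '.pid'),
    schedule_path=os.path.join(CELERYDIR, 'celerybeat-schedule.db'))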

ERROR: CANT_REREAD: The directory named as part of the path /home/app/logs/celery.log does not exist

I'm following a tutorial on how to use Celery on my Django production server.
When I get to the bit where it says:
Now reread the configuration and add the new process:
sudo supervisorctl reread
sudo supervisorctl update
When I run sudo supervisorctl reread on my server (Ubuntu 16.04), it returns this:
ERROR: CANT_REREAD:
The directory named as part of the path /home/app/logs/celery.log does not exist.
in section 'app-celery' (file: '/etc/supervisor/conf.d/app-celery.conf')
I've followed all of the instructions prior to this, including installing supervisor and creating a file named mysite-celery.conf (app-celery.conf in my case) in the folder /etc/supervisor/conf.d.
If you're curious my app-celery.conf file looks like this:
[program:app-celery]
command=/home/app/bin/celery worker -A draft1 --loglevel=INFO
directory=/home/app/draft1
user=zorgan
numprocs=1
stdout_logfile=/home/app/logs/celery.log
stderr_logfile=/home/app/logs/celery.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
stopasgroup=true
; Set Celery priority higher than default (999)
; so, if rabbitmq is supervised, it will start first.
priority=1000
Any idea what the problem is?
Supervisor is not able to create the missing folder /home/app/logs/ for you.
You can create it manually using mkdir and then restart the supervisor service:
mkdir /home/app/logs
sudo service supervisor restart
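If you create the directory from a deploy or provisioning script rather than by hand, here is a minimal Python sketch (my addition, not part of the original answer) with the same effect as the mkdir above:
# Hedged sketch: ensure the log directory exists before (re)starting supervisor.
# The path matches the stdout_logfile/stderr_logfile entries in app-celery.conf.
import os

os.makedirs('/home/app/logs', exist_ok=True)  # Python 3.2+; no error if it already exists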
I added my username to the supervisord.conf file under the [unix_http_server] section, like so:
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0770 ; socket file mode (default 0700)
chown=appuser:supervisor ;(username:group)
This seemed to work; time will tell if it keeps working after I manage to solve the rest of the supervisor issues.

Cannot setup Celery as daemon on server

I cannot set up Celery as a daemon on my server (Django 1.6.11, Celery 3.1, Ubuntu 14.04).
I have tried lots of options; can anyone post the full settings of a working configuration to run Celery as a daemon?
I am very disappointed with the official docs http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#generic-init-scripts: none of this works, and there is no full step-by-step tutorial. There are zero (!!!) videos on YouTube on how to set up the daemon.
For now I am able to run Celery simply with celery worker -A engine -l info -E, and tasks from Django are executed successfully.
I have created these configs:
/etc/defaults/celery
# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"
# Absolute path to "manage.py"
CELERY_BIN="/var/www/engine/manage.py"
# How to call manage.py
CELERYD_MULTI="celery multi"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=2"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="root"
CELERYD_GROUP="root"
/etc/init.d/celeryd
got from https://github.com/celery/celery/blob/3.1/extra/generic-init.d/celeryd without changes
Now, when I go to the console and run:
cd /etc/init.d
celery multi start w1
i see output:
celery multi v3.1.11 (Cipater)
> Starting nodes...
> w1#engine: OK
So, no errors! But tasks are not invoked and I cannot figure out what's wrong.
I would suggest using Supervisor. It's a better approach than init scripts, because you can run multiple Celery instances for different projects on one server. You can find an example Supervisor config in the Celery repo, or use this fully working example from my project:
# /etc/supervisor/conf.d/celery.conf
[program:celery]
command=/home/newspos/.virtualenvs/newspos/bin/celery worker -A newspos --loglevel=INFO
user=newspos
environment=DJANGO_SETTINGS_MODULE="newspos.settings"
directory=/home/newspos/projects/newspos/
autostart=true
autorestart=true
stopwaitsecs = 600
killasgroup=true
startsecs=10
stdout_logfile=/var/log/celery/newspos-celeryd.log
stderr_logfile=/var/log/celery/newspos-celeryd.log
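For reference, here is a hedged sketch of the Celery app module that a command like celery worker -A engine (the question's project) typically assumes, following the standard Celery 3.1 + Django layout; the module path and project name are assumptions on my part, not taken from the answer:
# engine/celery.py: hedged sketch of a Celery 3.1 app module for a Django
# project named "engine"; adjust names to your own project layout.
from __future__ import absolute_import

import os

from celery import Celery

# Set the default Django settings module before importing django.conf.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'engine.settings')

from django.conf import settings  # noqa: E402

app = Celery('engine')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)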

Run supervisor in virtual environment

I installed supervisor and gunicorn in my virtual environment (venv).
I am using this tutorial: https://realpython.com/blog/python/kickstarting-flask-on-ubuntu-setup-and-deployment/
I'm confused as to where I should be creating the config file for supervisor, as the default /etc/supervisor won't apply to me.
The supervisorctl file is in the directory:
/home/giri/venv/py2.7/lib/python2.7/site-packages/supervisor
I noticed this line in the supervisorctl file:
Options:
-c/--configuration -- configuration file path (default /etc/supervisord.conf)
Do I need to manually set this flag each time I run the supervisorctl script or is there another way?
Thanks
As found in the docs (http://supervisord.org/configuration.html):
The Supervisor configuration file is conventionally named
supervisord.conf. It is used by both supervisord and supervisorctl. If
either application is started without the -c option (the option which
is used to tell the application the configuration filename
explicitly), the application will look for a file named
supervisord.conf within the following locations, in the specified
order. It will use the first file it finds.
$CWD/supervisord.conf
$CWD/etc/supervisord.conf
/etc/supervisord.conf
So put the supervisord.conf in your current working directory and you're fine.
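If you want to double-check which file will be picked up, here is a small sketch (my addition) that walks the three locations quoted above in the same order:
# Hedged sketch: report which of the documented locations holds a supervisord.conf.
import os

candidates = [
    os.path.join(os.getcwd(), 'supervisord.conf'),
    os.path.join(os.getcwd(), 'etc', 'supervisord.conf'),
    '/etc/supervisord.conf',
]
for path in candidates:
    if os.path.isfile(path):
        print('supervisord/supervisorctl would use:', path)
        break
else:
    print('no supervisord.conf found in the documented locations')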

Supervisor not starting gunicorn

I have a Django application on an Ubuntu server that is run with gunicorn and started/stopped with supervisor. I am migrating it to a new server that is running the same Ubuntu server OS. It had been working fine up until now, when I tried starting it with supervisor. When I try to start it, it exits with ERROR (abnormal termination). I'm using the exact same configuration files on my new server as I was on the old one, and it's really bothering me why it's not working on the new server... I'll put some of my configs below, as well as part of the supervisor log.
/etc/supervisor/supervisor.conf
; supervisor config file
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0766 ; socket file mode (default 0700)
[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
childlogdir=/var/log/supervisor ; ('AUTO' child log dir, default $TEMP)
loglevel=debug
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL for a unix socket
[include]
files = /etc/supervisor/conf.d/*.conf
/etc/supervisor/conf.d/beta.conf
[program:beta]
command = /var/www/beta/myapp/bin/gunicorn_start
user = eli
stdout_logfile = /var/web-data/logs/beta/gunicorn_supervisor.log
redirect_stderr = true
/var/www/beta/myapp/bin/gunicorn_start
#!/bin/bash
source /var/www/beta/env/bin/activate
exec gunicorn -c /var/www/beta/myapp/bin/gunicorn_config.py myapp.wsgi
/var/www/beta/myapp/bin/gunicorn_config.py
command = '/var/www/beta/env/bin/gunicorn'
pythonpath = '/var/www/beta/myapp'
bind = '127.0.0.1:9000'
workers = 1
user = 'eli'
/var/log/supervisor/supervisord.log
http://pastebin.com/fAGdJMKg
/var/web-data/logs/beta/gunicorn_supervisor.log
empty
My next step would be to just wipe the new server clean and start from scratch to see if that might solve my problem. It is really bothering me that with two servers with the exact same configuration, one works and the other doesn't.
I have also tried changing the location of the socket file as well as adding user=eli to the [supervisord] block to no avail.
Lastly, if I run /var/www/beta/myapp/bin/gunicorn_start from the command line as eli it will run and I'm able to access my website. However when I sudo su and then run it, it will pause like it's running (~1sec) and then exit. It doesn't print anything to the console nor add anything to the log file.
The problem was solved by running /var/www/beta/myapp/bin/gunicorn_start line by line as root. The first line (the source command) worked fine, but it was the second line that was having trouble.
I then tried running gunicorn -c /var/www/beta/myapp/bin/gunicorn_config.py myapp.wsgi and this was doing the same thing but still not showing any errors. So then I ran gunicorn -c /var/www/beta/myapp/bin/gunicorn_config.py myapp.wsgi --preload and found that it was raising a KeyError for an environment variable that I had in my settings. Then it all made sense: when I was running this as root, it didn't have the same environment variables as my normal user did.
And granted, this was the only difference between my two servers: I had decided to try out environment variables for my Django secret key and database password. So I switched these to their actual values, then ran sudo supervisorctl start beta and it started perfectly fine.
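As a side note, here is a hedged debugging sketch (the variable names are hypothetical, not the ones from this project) for comparing what root and your normal user actually see, which is essentially what --preload surfaced here:
# check_env.py: hedged helper. Run it as your normal user and again under
# `sudo su` to see which environment variables differ between the two shells.
import getpass
import os

print('running as:', getpass.getuser())
for name in ('DJANGO_SECRET_KEY', 'DATABASE_PASSWORD'):  # hypothetical names
    print(name, '=', os.environ.get(name, '<missing>'))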
