I have a Django application on an Ubuntu server that is run with gunicorn and started/stopped with supervisor. I am migrating it to a new server running the same Ubuntu server OS. It had been working fine up until now, when I've tried starting it with supervisor: it exits with ERROR (abnormal termination). I'm using the exact same configuration files on my new server as I was on the old one, and it's really bothering me why it's not working on the new server. I'll put some of my configs below, as well as part of the supervisor log.
/etc/supervisor/supervisor.conf
; supervisor config file
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0766 ; socket file mode (default 0700)
[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
childlogdir=/var/log/supervisor ; ('AUTO' child log dir, default $TEMP)
loglevel=debug
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL for a unix socket
[include]
files = /etc/supervisor/conf.d/*.conf
/etc/supervisor/conf.d/beta.conf
[program:beta]
command = /var/www/beta/myapp/bin/gunicorn_start
user = eli
stdout_logfile = /var/web-data/logs/beta/gunicorn_supervisor.log
redirect_stderr = true
/var/www/beta/myapp/bin/gunicorn_start
#!/bin/bash
source /var/www/beta/env/bin/activate
exec gunicorn -c /var/www/beta/myapp/bin/gunicorn_config.py myapp.wsgi
/var/www/beta/myapp/bin/gunicorn_config.py
command = '/var/www/beta/env/bin/gunicorn'
pythonpath = '/var/www/beta/myapp'
bind = '127.0.0.1:9000'
workers = 1
user = 'eli'
/var/log/supervisor/supervisord.log
http://pastebin.com/fAGdJMKg
/var/web-data/logs/beta/gunicorn_supervisor.log
empty
My next step would be to just wipe the new server clean and start from scratch to see if that solves my problem. It's really bothering me that, with two servers using the exact same configuration, one works and the other doesn't.
I have also tried changing the location of the socket file as well as adding user=eli to the [supervisord] block to no avail.
Lastly, if I run /var/www/beta/myapp/bin/gunicorn_start from the command line as eli, it runs and I'm able to access my website. However, when I sudo su and then run it, it pauses as if it were running (~1 sec) and then exits. It doesn't print anything to the console, nor add anything to the log file.
The problem was solved by running /var/www/beta/myapp/bin/gunicorn_start line by line as root. The first line (the source command) worked fine; it was the second line that was having trouble.
I then tried running gunicorn -c /var/www/beta/myapp/bin/gunicorn_config.py myapp.wsgi and this was doing the same thing, but still not showing any errors. So then I ran gunicorn -c /var/www/beta/myapp/bin/gunicorn_config.py myapp.wsgi --preload and found that it was raising a KeyError for an environment variable that I had in my settings. Then it all made sense: when I was running this as root, it didn't have the same environment variables as my normal user did.
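(gunicorn's --preload flag loads the application in the master process before forking workers, which is why the traceback finally surfaced.) A quick way to see the difference between the two environments, where SOME_VAR stands in for whatever variable the settings read:
python -c "import os; print(os.environ.get('SOME_VAR'))"        # prints the value when run as eli
sudo python -c "import os; print(os.environ.get('SOME_VAR'))"   # typically prints None: sudo resets the environment by default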
And indeed this was the only difference between my two servers: I had decided to try out environment variables for my Django secret key and database password. So I switched these back to their actual values, then ran sudo supervisorctl start beta and it started perfectly fine.
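For reference, an alternative to hardcoding the values is to hand them to the process through supervisor itself; a minimal sketch of the program block, where DJANGO_SECRET_KEY and DB_PASSWORD are placeholder names:
[program:beta]
command = /var/www/beta/myapp/bin/gunicorn_start
user = eli
; placeholder variable names and values, for illustration only
environment = DJANGO_SECRET_KEY="changeme",DB_PASSWORD="changeme"
stdout_logfile = /var/web-data/logs/beta/gunicorn_supervisor.log
redirect_stderr = true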
Related
I'm following a tutorial on how to use Celery on my Django production server.
When I get to the bit where it says:
Now reread the configuration and add the new process:
sudo supervisorctl reread
sudo supervisorctl update
When I perform sudo supervisorctl reread in my server (Ubuntu 16.04) terminal, it returns this:
ERROR: CANT_REREAD:
The directory named as part of the path /home/app/logs/celery.log does not exist.
in section 'app-celery' (file: '/etc/supervisor/conf.d/app-celery.conf')
I've done all of the instructions prior to this, including installing supervisor and creating a file named mysite-celery.conf (app-celery.conf) in the folder /etc/supervisor/conf.d.
If you're curious, my app-celery.conf file looks like this:
[program:app-celery]
command=/home/app/bin/celery worker -A draft1 --loglevel=INFO
directory=/home/app/draft1
user=zorgan
numprocs=1
stdout_logfile=/home/app/logs/celery.log
stderr_logfile=/home/app/logs/celery.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
stopasgroup=true
; Set Celery priority higher than default (999)
; so, if rabbitmq is supervised, it will start first.
priority=1000
Any idea what the problem is?
Supervisor does not create missing log directories, so the folder /home/app/logs/ has to exist first.
You can create it manually using mkdir and then restart the supervisor service:
mkdir /home/app/logs
sudo service supervisor restart
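If the log path is nested, mkdir -p creates any missing parent directories in one go:
sudo mkdir -p /home/app/logs
sudo service supervisor restart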
I added my username to the supervisord.conf file under the [unix_http_server] section, like so:
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0770 ; socket file mode (default 0700)
chown=appuser:supervisor ;(username:group)
This seemed to work; time will tell if it continues working after I manage to solve the rest of the supervisor issues.
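To confirm the socket picked up the new owner and mode, restart supervisor and inspect the file:
sudo service supervisor restart
ls -l /var/run/supervisor.sock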
So I have some code that I'm trying to run, and when it finishes it should display a message to the user saying what changes have happened. This works fine if I run it in a terminal, but when I add it to a cronjob nothing happens and no error is shown.
It seems to me that it is losing the display session, but I can't figure out how to solve it. Here is the show-message code:
import subprocess

def sendmessage(message):
    subprocess.Popen(['notify-send', message])
    return
I also tried this version:
import gi
gi.require_version('Notify', '0.7')
from gi.repository import Notify

def sendmessage(message):
    Notify.init("Changes")
    n = Notify.Notification.new(message)
    n.show()
    return
Which gives the following error (again only when run in the background):
cannot open display:
Activated service 'org.freedesktop.Notifications' failed: Process org.freedesktop.Notifications exited with status
I would be very thankful for any help or alternative suggestions.
I had a similar issue running a system daemon, where I wanted the user to be notified when a network bandwidth upload limit was exceeded.
The program was designed to run under systemd but at a push also under upstart.
You may get some mileage out of my configuration files:
systemd - bandwidth.service file
[Unit]
Description=Bandwidth traffic monitor
Documentation=man:bandwidth(7)
After=graphical.target
[Service]
Type=simple
Environment="DISPLAY=:0" "XAUTHORITY=/home/#USER#/.Xauthority"
PIDFile=/var/run/bandwidth.pid
ExecStart=/usr/bin/dbus-launch /usr/sbin/bandwidth_logd
ExecReload=/bin/kill -HUP $MAINPID
User=root
[Install]
WantedBy=graphical.target
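Assuming the unit is installed as /etc/systemd/system/bandwidth.service, the usual sequence registers and starts it:
sudo systemctl daemon-reload
sudo systemctl enable bandwidth.service
sudo systemctl start bandwidth.service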
upstart - bandwidth.conf file
#
# These are the scripts that run when a network appears.
# Use this on an upstart system in /etc/init
# test for systemd or upstart system with
# ps -p1 | grep systemd && echo systemd || echo upstart
# better
# ps -p1 | grep systemd >/dev/null && echo systemd || echo upstart
# Using upstart the script will need to daemonise which is a bugger
# so test for it and import Daemon_server.py
description "Bandwidth upstart events"
start on net-device-up # Start a daemon or run a script
stop on net-device-down # (Optional) Stop a daemon, scripts already self-terminate.
# Automatically restart process if crashed
respawn
# Essentially lets upstart know the process will detach itself to the background
expect fork
#Set environment variables
env DISPLAY=":0"
export DISPLAY
env XAUTHORITY="/home/#USER#/.Xauthority"
export XAUTHORITY
script
# You can put shell script in here, including if/then and tests.
# replace #USER# with your name and ensure that you have .Xauthority in $HOME
/usr/sbin/bandwidth_logd
end script
You will note that in both configuration files the environment is the key part, and #USER# is replaced by a real user name with a valid .Xauthority file in their $HOME directory.
In the Python code I use the following to emit the message (note the notify2 import):
import notify2

def warning(msg):
    result = True
    try:
        notify2.init("Bandwidth")
        mess = notify2.Notification("Bandwidth", msg, '/usr/share/bandwidth/bandwidth.png')
        mess.set_urgency(2)    # 2 = critical: notification stays on screen
        mess.set_timeout(0)    # 0 = never expire
        mess.show()
    except Exception:
        result = False
    return result
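A call from the daemon might then look like this (a hypothetical check; uploaded_bytes and limit_bytes would come from the monitoring code):
if uploaded_bytes > limit_bytes and not warning("Upload limit exceeded"):
    # notification failed; most likely DISPLAY/XAUTHORITY are not usable
    print("could not notify the user")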
I'm trying to follow the tutorial at http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html. I am working with an Ubuntu 14.04 instance on Amazon EC2, and I'm trying to deploy a Django app I've developed locally with Python 3. So far I've got the app working by following the tutorial, as long as I manually ssh in, turn on the virtualenv, and then turn on uwsgi, using:
workon env1
uwsgi --ini /home/ubuntu/mysite_uwsgi.ini
I noticed, however, that when I tried to send a request to the app this morning, I was getting:
errno 5 input/output error
This was solved by manually sshing in and executing the two lines above. I don't understand exactly how this works, but somehow my virtualenv and uwsgi were deactivated after I logged off. I need to keep them active so that all requests can be funneled to my app, and I don't know how to do this. Following the tutorial above, I've modified /etc/rc.local to:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
workon myenv
uwsgi --ini /home/ubuntu/mysite_uwsgi.ini
exit 0
Will this solve my problem? If not, what should I do?
You can daemonize the process:
[uwsgi]
env = DJANGO_SETTINGS_MODULE=mysite.settings # set an environment variable
pidfile = /tmp/project-master.pid # create a pidfile
harakiri = 20 # respawn processes taking more than 20 seconds
limit-as = 128 # limit the project to 128 MB
max-requests = 5000 # respawn processes after serving 5000 requests

uwsgi --ini uwsgi.ini --daemonize=/var/log/yourproject.log # background the process
If you've followed the rest of the doc and are ready to run in "emperor" mode, add this to /etc/rc.local before exit 0:
/usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data
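In emperor mode each site is a "vassal"; the tutorial linked in the question symlinks each site's ini into the watched directory, roughly:
sudo mkdir -p /etc/uwsgi/vassals
sudo ln -s /home/ubuntu/mysite_uwsgi.ini /etc/uwsgi/vassals/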
I have set up an environment variable which I export locally using a .sh file:
.sh file:
#!/bin/sh
echo "environment variables"
export BROKER="amqp://admin:password@11.11.11.11:4672//"
Locally inside a virtual environment I can now read this in Python using:
BROKER = os.environ['BROKER']
However, on my production server (Ubuntu) I run the same file (chmod +x name_of_file.sh, then source settings.sh) and can see the variable using printenv, but Python gives the error KeyError: 'BROKER'. Why?
This only happens on my production machine, despite the fact that I can see the variable using printenv. Note that my production machine does not use virtualenv.
If I run the Python shell on Ubuntu and do os.environ['BROKER'], it prints out the correct value. So I have no idea why the app does not find it.
This is the task that gets run which cannot find the variable (a supervisor task):
[program:celery]
directory = /srv/app_test/
command=celery -A tasks worker -l info
stdout_logfile = /var/log/celeryd_.log
autostart=true
autorestart=true
startsecs=5
stopwaitsecs = 600
killasgroup=true
priority=998
user=ubuntu
Celery config (which does not find the variable when executed under supervisor):
from kombu import Exchange, Queue
import os
# Celery Settings
BROKER = os.environ['BROKER']
When I restart supervisor it gives the key error.
The environment variables from your shell will not be visible within supervisor tasks.
You need to use the environment setting in your supervisor config:
[program:celery]
...
environment=BROKER="amqp://admin:password@11.11.11.11:4672//"
This requires supervisor 3.0+.
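If the task needs more than one variable, they go in the same setting, comma-separated; a sketch (the second variable is only an example):
[program:celery]
...
environment=BROKER="amqp://admin:password@11.11.11.11:4672//",DJANGO_SETTINGS_MODULE="settings"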
When trying to run python bottle on port 80 I get the following:
socket.error: [Errno 10013] An attempt was made to access a socket in a way forbidden by its access permissions
My goal is to run the web server on port 80 so the URLs will be nice and tidy, without any need to specify the port,
for example:
http://localhost/doSomething
instead of
http://localhost:8080/doSomething
Any ideas?
Thanks
Exactly as the error says: you need permissions to run something on port 80, and a normal user cannot do it. You can execute the bottle webapp as root (or maybe as www-data) and it should be fine, as long as the port is free.
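For illustration, a minimal bottle app bound to port 80 (the route is taken from the question's example URL; binding below port 1024 normally requires elevated privileges):
from bottle import Bottle, run

app = Bottle()

@app.route('/doSomething')
def do_something():
    return 'did something'

# run as root (or with the CAP_NET_BIND_SERVICE capability on Linux) for port 80 to work
run(app, host='0.0.0.0', port=80)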
But taking security (and stability) into consideration you should look at different ways of deployment, for example nginx together with gunicorn.
Gunicorn Docs
Nginx Docs
Check your system's firewall settings.
Check whether another application is already using port 80, with the following commands:
On unix: netstat -an | grep :80
On Windows: netstat -an | findstr :80
According to Windows Sockets Error Codes:
WSAEACCES 10013
Permission denied.
An attempt was made to access a socket in a way forbidden by its
access permissions. An example is using a broadcast address for sendto
without broadcast permission being set using setsockopt(SO_BROADCAST).
Another possible reason for the WSAEACCES error is that when the bind
function is called (on Windows NT 4.0 with SP4 and later), another
application, service, or kernel mode driver is bound to the same
address with exclusive access. Such exclusive access is a new feature
of Windows NT 4.0 with SP4 and later, and is implemented by using the
SO_EXCLUSIVEADDRUSE option.
Sometimes you don't want to install nginx; Python with gunicorn under supervisor is a viable alternative, but you need a few tricks to get it working.
I assume you already know how to install supervisor; then install the requirements:
pip3 install virtualenv
mkdir /home/user/.envpython
virtualenv /home/user/.envpython
source /home/user/.envpython/bin/activate
cd /home/user/python-path/
pip3 install -r requirements
Create a supervisor file like this:
nano /etc/supervisord.d/python-file.conf
and edit it following this example; adapt the program block you need, and remember that an unprivileged python3 process can only bind ports above 1024:
;example: a python script supervised in the background
[program:python]
environment=HOME="/home/user",USER="user"
user=user
directory = /home/user/python-path
command = python3 /home/user/python-path/main.py
priority = 900
autostart = true
autorestart = true
stopsignal = TERM
;redirect_stderr = true
stdout_logfile = /home/user/.main-python-stdout.log
stderr_logfile = /home/user/.main-python-stderr.log
;example: gunicorn under supervisor on port 80
[program:gunicorn]
;environment=HOME="/home/user",USER="user"
;user=user
directory = /home/user/python-path
command = bash /home/user/.scripts/supervisor-initiate.sh
priority = 900
autostart = true
autorestart = true
stopsignal = TERM
;redirect_stderr = true
stdout_logfile = /home/user/.main-python-stdout.log
stderr_logfile = /home/user/.main-python-stderr.log
and create the file
nano /home/user/.scripts/supervisor-initiate.sh
with the following content
source /home/user/.envpython/bin/activate
cd /home/user/python-path
gunicorn -w 1 -t 120 -b 0.0.0.0:80 main:app
I assume your Python file is called main.py and that it exposes a WSGI application object called "app" (created with Flask or Django); that is what gunicorn's main:app target refers to.
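For completeness, a minimal sketch of such a main.py, assuming Flask (any WSGI application object named app would work the same way):
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # gunicorn's "main:app" target resolves to this module-level app object
    return 'hello from gunicorn on port 80'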
Then just restart the supervisord process:
systemctl restart supervisord
and you have the app running with gunicorn on port 80. I'm posting this because it took me a very long time to find this solution.
Hoping it works for someone.