I'm attempting to set up a Django project with Nginx and Gunicorn. I think I am encountering some issues with paths that I can't seem to figure out.
my root virtualenv dir: /var/www/webapps/testapp/
If I run gunicorn mysite.wsgi:application --bind 0.0.0.0:8001 from /var/www/webapps/testapp/testapp/ (apologies for the naming conventions...), it works!
However, if I attempt to run Gunicorn from the bash script I am using to start it, the project seems to run, but when I attempt to load a page I get these errors:
ImportError at /home/
cannot import name views
Request Method: GET
Request URL: https://URL/home/
Django Version: 1.6.2
Exception Type: ImportError
Exception Value:
cannot import name views
Exception Location: /var/www/webapps/testapp/testapp/testapp/urls.py in <module>, line 2
Python Executable: /var/www/webapps/testapp/bin/python2.7
Python Version: 2.7.6
Python Path:
['/var/www/webapps/testapp/testapp',
'/var/www/webapps/testapp/bin',
'/var/www/webapps/testapp/testapp',
'/var/www/webapps/testapp/bin',
'/var/www/webapps/testapp/lib/python27.zip',
'/var/www/webapps/testapp/lib/python2.7',
'/var/www/webapps/testapp/lib/python2.7/plat-linux2',
'/var/www/webapps/testapp/lib/python2.7/lib-tk',
'/var/www/webapps/testapp/lib/python2.7/lib-old',
'/var/www/webapps/testapp/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7',
'/usr/local/lib/python2.7/plat-linux2',
'/usr/local/lib/python2.7/lib-tk',
'/var/www/webapps/testapp/lib/python2.7/site-packages']
Server time: Wed, 21 May 2014 15:26:45 +0000
The bash script I am using is as follows:
#!/bin/bash
NAME="testapp2" # Name of the application
DJANGODIR=/var/www/webapps/testapp/testapp # Django project directory
SOCKFILE=/var/www/webapps/testapp/run/gunicorn.sock # we will communicte using this unix socket
USER=testappuser # the user to run as
GROUP=webapps # the group to run as
NUM_WORKERS=16 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=mysite.settings # which settings file should Django use
DJANGO_WSGI_MODULE=mysite.wsgi # WSGI module name
PYTHONPATH=/var/www/webapps/testapp/bin
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--log-level=debug \
--bind=unix:$SOCKFILE
Using the bash script does not work and I can't seem to figure out why. Can anyone help?
Thanks
Turns out the issue lay with directory permissions and the user running the gunicorn process.
All sorted now, thanks!
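For anyone who lands on the same problem, a minimal sketch of the kind of ownership fix involved (user and group names taken from the start script above; adjust to your own setup):
# hand the project tree to the user/group gunicorn runs as (testappuser:webapps in the script)
sudo chown -R testappuser:webapps /var/www/webapps/testapp
# make sure that user can read and traverse the tree (capital X only adds execute on directories and already-executable files)
sudo chmod -R u+rwX,g+rX /var/www/webapps/testapp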
I cannot get Python files served by Apache 2.2 or 2.4 without a 500 error. I have WebStation, Python, Perl, PHP, and both Apache 2.2 and 2.4 installed.
I can serve static files just fine with Apache. When I try to serve the most basic "hello world" CGI, I get a 500 error. The error is
[cgid:error] [pid 10076:tid 140542621480832] (2)No such file or directory: AH01241: exec of ['/volume2/Development/WebRepo/cgi-bin/test.py' failed.
I tried to execute both a Perl script and a Python script. Both run successfully from the command line, but not when served by Apache (same "no such file" errors). Also note this is a 500 error, not a 404, so Apache is seeing the file. I can serve static HTML files just fine.
The python script couldn't be simpler:
#!/usr/bin/python
print "Content-type: text/html\n\n";
print "Hello, World.";
All files have 755 permissions. The path to python is correct. I'm at a loss as to what to do next.
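(One quick way to double-check those two claims, since an ENOENT on exec of a script usually means the interpreter named in the shebang can't be found, often because of CRLF line endings; the paths are taken from the error message, and od should be available on the NAS:)
# print the shebang line with every byte visible; a trailing \r before \n means Windows line endings
head -n 1 /volume2/Development/WebRepo/cgi-bin/test.py | od -c
# confirm the interpreter the shebang points at actually exists
ls -l /usr/bin/python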
Python can serve CGI scripts out of the box, using http.server.CGIHTTPRequestHandler.
On my Synology NAS I have official Python3 package installed (version 3.8.2-0150).
I can SSH into the NAS as admin and add a script:
mkdir -p app/cgi-bin
cat << EOF > app/cgi-bin/hello.py
#!/usr/bin/env python3
print('Content-Type: text/html')
print()
print('<html><body><h2>Hello World!</h2></body></html>')
EOF
After that I can run it like this (note that --directory has no effect with --cgi, so I cd there first):
cd app && python3 -m http.server --cgi
Then on my machine, I can curl http://nas:8000/cgi-bin/hello.py.
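A quick sketch of what that exchange looks like (the hostname nas and http.server's default port 8000 are assumed):
$ curl http://nas:8000/cgi-bin/hello.py
<html><body><h2>Hello World!</h2></body></html>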
Running on boot
You can run this automatically on boot via the task scheduler.
Control Panel → Task Scheduler → Create → Triggered Task → User-defined script. Fill these on General tab:
Task: Python CGI
User: admin
Enabled: [x]
And User-defined script on Task Settings tab:
cd /var/services/homes/admin/app
python3 -m http.server --cgi
Then you can run it manually. It should also run on reboot.
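If you want the server's output kept somewhere for debugging, one variation of that user-defined script (the log file path is my assumption) is:
cd /var/services/homes/admin/app
python3 -m http.server --cgi >> /var/services/homes/admin/app/http-server.log 2>&1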
Permissions
If you want to run the task as root, make sure the file permissions are correct from root's point of view. In my case there was a discrepancy for some reason:
$ ls -l app/cgi-bin/hello.py
-rwxrwxrwx+ 1 admin users 122 Nov 29 14:50 app/cgi-bin/hello.py
$ sudo ls -l app/cgi-bin/hello.py
Password:
-rwx--x--x+ 1 admin users 122 Nov 29 14:50 app/cgi-bin/hello.py
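If the permissions do turn out to be the blocker, one possible fix is a plain chmod (a sketch only; the trailing + means Synology ACLs are also involved and may still need adjusting separately):
# re-grant read+execute for group/others so the script is usable whichever user the task runs as
chmod 755 /var/services/homes/admin/app/cgi-bin/hello.py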
On a CentOS 7 server, I have installed Python 3.6 via SCL. ( https://www.softwarecollections.org/en/scls/rhscl/rh-python36/)
I have this line in .bashrc to enable SCL's Python 3.6
source scl_source enable rh-python36
I have installed pipenv:
pip install --user pipenv
I run Python programs via the command line:
pipenv run python myprogram.py
All these work great. I have a Flask application that uses the user's pipenv. I am trying to create a systemd unit file to start/stop/reload the Flask web application. How can I get the systemd unit file to use the user's pipenv, installed via SCL's Python and pip?
I tried to execute the command as root and I get this error:
[root@localhost ~]# source scl_source enable rh-python36
[root@localhost ~]# /home/user/.local/bin/pipenv run python /home/user/hello.py
Traceback (most recent call last):
File "/home/user/.local/bin/pipenv", line 7, in <module>
from pipenv import cli
ModuleNotFoundError: No module named 'pipenv'
However, I am able to execute the command via su -c by loading the user's bash shell:
su -c 'bash -lc /home/user/.local/bin/pipenv run python hello.py' user
But this line seems awkward. What is the correct line I could use in the systemd unit file's ExecStart? What environment variables should be included in order to use the user's pipenv?
Here's my working systemd unit file:
[Unit]
Description=Python app
# Requirements
Requires=network.target
# Dependency ordering
After=network.target
[Service]
# Let processes take awhile to start up
TimeoutStartSec=0
RestartSec=10
Restart=always
Environment="APP_SITE_SETTINGS=/home/app/.config/settings.cfg"
Environment="PYTHONPATH=/home/app/.local/lib/python3.6/site-packages"
WorkingDirectory=/home/app/app-site
User=app
Group=app
PermissionsStartOnly=true
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
# Main process
ExecStartPre=/bin/mkdir -p /run/app
ExecStartPre=/bin/chown app:app /run/app
#ExecStartPre=source scl_source enable rh-python36
ExecStart=/usr/bin/scl enable rh-python36 -- /home/app/.local/bin/pipenv run uwsgi \
--socket 127.0.0.1:6003 \
--buffer-size 65535 \
--enable-threads \
--single-interpreter \
--threads 1 \
-L \
--stats /run/app/uwsgi_stats.socket \
--lazy-apps \
--master-fifo /run/stocks/uwsgimasterfifo \
--processes 1 \
--harakiri 960 \
--max-worker-lifetime=21600 \
--ignore-sigpipe \
--ignore-write-errors \
--disable-write-exception \
--mount /=run:app \
--manage-script-name
[Install]
WantedBy=multi-user.target
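After saving the unit (I'm assuming a name like app.service under /etc/systemd/system/; use whatever you called yours), it gets picked up and enabled with the usual commands:
sudo systemctl daemon-reload
sudo systemctl enable app.service
sudo systemctl start app.service
sudo systemctl status app.service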
I followed multiple tutorials available on Stack Overflow about starting a Python script at startup, but none of them works.
I need to activate a virtualenv and then start a Flask server.
I tried
init.d method
I made a start.sh in /etc/init.d/
#!/bin/sh
### BEGIN INIT INFO
# Provides: skeleton
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Should-Start: $portmap
# Should-Stop: $portmap
# X-Start-Before: nis
# X-Stop-After: nis
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# X-Interactive: true
# Short-Description: Example initscript
# Description: This file should be used to construct scripts to be
# placed in /etc/init.d.
### END INIT INFO
cd /home/ion/
source /home/ion/py35/bin/activate
cd /home/ion/Desktop/flask/
nohup python main.py &
echo "Done"
Its permissions are set to +x:
ion@aurora:/etc/init.d$ ll start.sh
-rwxr-xr-x 1 root root 625 Jun 25 19:10 start.sh*
Went to /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
/etc/init.d/start.sh
exit 0
Didn't work
cronjob method
sudo crontab -e
and appended
@reboot sh '/etc/init.d/start.sh'
Didn't work either. Where am I going wrong?
Manual triggered logs
(py35) ion@aurora:~/Desktop/flask$ python main.py
WARNING:tensorflow:From /home/ion/Desktop/flask/encoder.py:57: calling l2_normalize (from tensorflow.python.ops.nn_impl) with dim is deprecated and will be removed in a future version.
Instructions for updating:
dim is deprecated, use axis instead
2018-06-25 19:34:05.511943: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
* Running on http://localhost:5505/ (Press CTRL+C to quit)
* Restarting with stat
1) Don't use the old "init.d" method. Use something modern. If you have Ubuntu 15.04 or higher, you can use systemd to create a daemon that will be started automatically at startup. If you have, for example, Ubuntu older than 15.04, use Upstart.
For Systemd:
Create a unit file in /lib/systemd/system/your_service_name.service with the following content (as far as I can see, your Python script doesn't spawn new processes while running, so Type should be simple; more info here):
[Unit]
Description=<your_service_name>
After=network.target network-online.target
[Service]
Type=simple
User=<required_user_name>
Group=<required_group_name>
Restart=always
ExecStartPre=/bin/mkdir -p /var/run/<your_service_name>
PIDFile=/var/run/<your_service_name>/service.pid
ExecStart=/path/to/python_executable /path/to/your/script.py
[Install]
WantedBy=multi-user.target
Save this file and reload systemd:
sudo systemctl daemon-reload
Then add your service to autostart:
sudo systemctl enable your_service_name.service
You should see that systemd created the required symlinks after the enable action.
Reboot and see if it's up and running (ps aux | grep python or sudo systemctl status your_service_name.service). If something looks weird, check the systemd journal:
sudo journalctl -xe
UPD:
To launch your Python script in the desired virtualenv, just use this expression in your service unit file:
ExecStart=/venv_home/path/to/python /venv_home/path/to/your/script.py
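With the paths from the question (the py35 virtualenv and main.py under ~/Desktop/flask), that would look something like:
ExecStart=/home/ion/py35/bin/python /home/ion/Desktop/flask/main.py
WorkingDirectory=/home/ion/Desktop/flask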
2) You can also use crontab, but you need to specify the full path to the desired shell there, for example:
@reboot /bin/bash /path/to/script.sh
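With the script path from the question, that would be, for instance:
@reboot /bin/bash /etc/init.d/start.sh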
If you need additional help - just let me know here.
I use gunicorn to start my Django app. For that I usually go into the directory where the manage.py file is located and then use this command:
gunicorn --env DJANGO_SETTINGS_MODULE=app.my_settings app.wsgi --workers=2
which I got from the official documentation (it's using a different settings file).
Now, I want to write a script to do that, which I found here:
#!/bin/sh
GUNICORN=/usr/local/bin/gunicorn
ROOT=/path/to/folder/with/manage.py
PID=/var/run/gunicorn.pid
#APP=main:application
if [ -f $PID ]; then rm $PID; fi
cd $ROOT
exec $GUNICORN -c $ROOT/ gunicorn --env DJANGO_SETTINGS_MODULE=app.my_settings app.wsgi --pid=$PID #$APP
But I get this
usage: gunicorn [OPTIONS] [APP_MODULE]
gunicorn: error: unrecognized arguments: app.wsgi
when I execute it. Any idea on how to write it so it will work?
And also, what is that PID?
Thanks!
Ok, it's pretty simple: just create a file (sudo nano gunicorn.sh) containing
cd /path/to/folder/with/manage.py/
exec gunicorn --env DJANGO_SETTINGS_MODULE=app.my_settings app.wsgi
and then execute it
./gunicorn.sh
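For the file to be runnable as ./gunicorn.sh it also needs a shebang line and the execute bit, so a slightly fuller sketch would be:
#!/bin/sh
# change to the directory that holds manage.py, then replace this shell with the gunicorn process
cd /path/to/folder/with/manage.py/
exec gunicorn --env DJANGO_SETTINGS_MODULE=app.my_settings app.wsgi
followed by chmod +x gunicorn.sh. As for the PID question: --pid=$PID just tells gunicorn to write its master process id into that file, so other scripts can later find, stop, or reload the right process.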
I use a bash script to run gunicorn. It is named run_gunicorn.sh:
#!/bin/bash
NAME=new_project
DJANGODIR=/home/flame/Projects/$NAME
SOCKFILE=/home/flame/launch/web.sock
USER=flame
GROUP=flame
DJANGO_SETTINGS_MODULE=$NAME.settings
DJANGO_WSGI_MODULE=$NAME.wsgi
# export PWD=$DJANGODIR # still does not work if I uncomment THIS LINE
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers 7 \
--user=$USER --group=$GROUP \
--log-level=debug \
--bind=unix:$SOCKFILE
If I run from the project dir:
[/home/flame/Projects/new_project]$ bash run_gunicorn.sh
It works well. But if
[~]$ bash Projects/new_project/run_gunicorn.sh
it raises errors:
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>
I guess it is about the current working directory, so I added export PWD=$DJANGODIR before the gunicorn call. But the error remains.
Is it about some Python-related environment variables? Or what's the problem?
Using
export PWD=$DJANGODIR
you do NOT actually change your current working directory. You can easily check this in a shell by running pwd after setting it. You will have to include something like
cd $DJANGODIR
into your script.
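If you'd rather not hard-code the path at all, a small sketch that derives it from the script's own location (the dirname "$0" trick is my addition, not part of the original script) is:
# resolve the directory this script lives in, then cd there before launching gunicorn
DJANGODIR="$(cd "$(dirname "$0")" && pwd)"
cd "$DJANGODIR"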