How to set up a gunicorn script (django) - python

I use gunicorn to start my Django app. For that I usually go into the directory where the manage.py file is located and then use this command:
gunicorn --env DJANGO_SETTINGS_MODULE=app.my_settings app.wsgi --workers=2
which I got from the official documentation (it uses a different settings file there).
Now I want to write a script that does the same, based on one I found here:
#!/bin/sh
GUNICORN=/usr/local/bin/gunicorn
ROOT=/path/to/folder/with/manage.py
PID=/var/run/gunicorn.pid
#APP=main:application
if [ -f $PID ]; then rm $PID; fi
cd $ROOT
exec $GUNICORN -c $ROOT/ gunicorn --env DJANGO_SETTINGS_MODULE=app.my_settings app.wsgi --pid=$PID #$APP
But I get this
usage: gunicorn [OPTIONS] [APP_MODULE]
gunicorn: error: unrecognized arguments: app.wsgi
when I execute it. Any idea how to write it so it will work?
Also, what is that PID?
Thanks!

OK, it's pretty simple. The original script fails because of the stray space in -c $ROOT/ gunicorn: the word gunicorn is parsed as the application module, so app.wsgi becomes the unrecognized extra argument in the error. Just create a file with (sudo nano gunicorn.sh)
cd /path/to/folder/with/manage.py/
exec gunicorn --env DJANGO_SETTINGS_MODULE=app.my_settings app.wsgi
and then execute it
./gunicorn.sh
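As for the PID question: --pid tells gunicorn to write the id of its master process to a file, so that another script can find and signal the server later (for example to stop or reload it). That is also why the script removes a stale PID file before starting. The mechanism in miniature, with sleep as a stand-in for the gunicorn master and an illustrative path:

```shell
#!/bin/sh
# A PID file is just a file holding a process id, so another
# script can locate and signal the process later.
sleep 60 &                      # stand-in for the gunicorn master process
echo $! > /tmp/demo.pid         # record its pid, as gunicorn --pid would
# ... later, possibly from a different shell:
kill "$(cat /tmp/demo.pid)"     # stop the recorded process
```

With gunicorn itself, the equivalent stop command would be kill "$(cat $PID)".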

Related

Passing multiple parameters to docker container

I'm trying to pass 2 parameters to a docker container for a dash app (via a shell script). Passing one parameter works, but two doesn't. Here's what happens when I pass two parameters:
command:
sudo sh create_dashboard.sh 6 4
Error:
creating docker
Running for parameter_1: 6
Running for parameter_2: 4
usage: app.py [-h] [-g parameter_1] [-v parameter_2]
app.py: error: argument -g/--parameter_1: expected one argument
The shell script:
echo "creating docker"
docker build -t dash-example .
echo "Running for parameter_1: $1 "
echo "Running for parameter_2: $2 "
docker run --rm -it -p 8080:8080 --memory=10g dash-example $1 $2
Dockerfile:
FROM python:3.8
WORKDIR /app
COPY src/requirements.txt ./
RUN pip install -r requirements.txt
COPY src /app
EXPOSE 8080
ENTRYPOINT [ "python", "app.py", "-g", "-v"]
When I use this command:
sudo sh create_dashboard.sh 6
the docker container runs perfectly, with parameter_2 being None.
You can override the entrypoint and pass a full command into the container's shell like this:
docker run --rm -it -p 8080:8080 --memory=10g --entrypoint sh dash-example -c "python app.py -g $1 -v $2"
That way the container accepts the arguments like any other command.
When you docker run ... dash-example $1 $2, the additional parameters are interpreted as the "command" the container should run. Since your image has an ENTRYPOINT, the words of the command are just tacked on to the end of the words of the entrypoint (see Understand how CMD and ENTRYPOINT interact in the Dockerfile documentation). There's no way to cause the words of one command to be interspersed with the words of another; you are effectively getting a command line of
python app.py -g -v 6 4
The approach I'd recommend here is to not use an ENTRYPOINT at all. Make sure you can directly run the application script (its first line should be #!/usr/bin/env python3, it should be executable) and make the image's default CMD be to run the script:
FROM python:3.9
...
# RUN chmod +x app.py # if needed
# no ENTRYPOINT at all
# the shebang line finds "python" (note: a trailing # on an exec-form
# CMD line is not a comment in a Dockerfile, so it goes here instead)
CMD ["./app.py"]
Then your wrapper can supply a complete command line, including the options you need to run:
#!/bin/sh
docker run --rm -it -p 8080:8080 --memory=10g dash-example \
./app.py -g "$1" -v "$2"
(There is an alternate "container as command" pattern, where the ENTRYPOINT contains the command to run and the CMD its options. This can lead to awkward docker run --entrypoint command lines for routine debugging tasks, and if the command itself is short it doesn't really save you a lot. You'd still need to repeat the -g and -v options in the wrapper.)
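For completeness, that alternate pattern would look something like this sketch of the question's Dockerfile (the default -g/-v values in CMD are made-up placeholders):

```dockerfile
FROM python:3.8
WORKDIR /app
COPY src/requirements.txt ./
RUN pip install -r requirements.txt
COPY src /app
EXPOSE 8080
# Fixed command in ENTRYPOINT, default options in CMD;
# `docker run ... dash-example -g 6 -v 4` replaces only the CMD part.
ENTRYPOINT ["python", "app.py"]
CMD ["-g", "1", "-v", "1"]
```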

Run multiple commands while starting docker services

I am trying to run two different commands when doing docker-compose up.
My command: parameter looks like:
gunicorn --reload analytics_api:api --workers=3 --timeout 10000 -b :8083 && exec python /analytics/model_download.py
But when I run this the container fails with the error:
gunicorn: error: unrecognized arguments: && exec python
/analytics/model_download.py
The second part of the command python /analytics/model_download.py, is used to download some dependencies from a sharedpath to a directory inside the container.
I want to run it while the service is up, not during the build.
What is going wrong here?
One way is to use a startup shell script as the command, containing both entries:
startup.sh
#!/bin/sh
# start the download in the background if gunicorn need not wait for it
python /analytics/model_download.py &
# run gunicorn in the foreground so it keeps the container alive
exec gunicorn --reload analytics_api:api --workers=3 --timeout 10000 -b :8083
Adding bash -c before the command solved the problem.
The command value will look like:
bash -c "python model_download.py && gunicorn --reload analytics_api:api --workers=3 --timeout 10000 -b :8083"
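The && matters here: the gunicorn step only runs if the download step exits successfully. That ordering is easy to check in isolation:

```shell
#!/bin/sh
# Minimal illustration of the `&&` ordering used in the compose command.
sh -c "echo download && echo serve"                    # both run, in order
sh -c "false && echo serve" || echo "download failed"  # serve never runs
```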

Django, Gunicorn: Cannot import name views

I'm attempting to set up a Django project with Nginx and Gunicorn. I think I am encountering some issues with paths that I can't seem to figure out.
my root virtualenv dir: /var/www/webapps/testapp/
If I run gunicorn mysite.wsgi:application --bind 0.0.0.0:8001 from /var/www/webapps/testapp/testapp/ (apologies for the naming conventions...), it works!
However... if I attempt to run from the bash script I am using to start gunicorn, the project seems to run, but when I attempt to load a page I get these errors:
ImportError at /home/
cannot import name views
Request Method: GET
Request URL: https://URL/home/
Django Version: 1.6.2
Exception Type: ImportError
Exception Value:
cannot import name views
Exception Location: /var/www/webapps/testapp/testapp/testapp/urls.py in <module>, line 2
Python Executable: /var/www/webapps/testapp/bin/python2.7
Python Version: 2.7.6
Python Path:
['/var/www/webapps/testapp/testapp',
'/var/www/webapps/testapp/bin',
'/var/www/webapps/testapp/testapp',
'/var/www/webapps/testapp/bin',
'/var/www/webapps/testapp/lib/python27.zip',
'/var/www/webapps/testapp/lib/python2.7',
'/var/www/webapps/testapp/lib/python2.7/plat-linux2',
'/var/www/webapps/testapp/lib/python2.7/lib-tk',
'/var/www/webapps/testapp/lib/python2.7/lib-old',
'/var/www/webapps/testapp/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7',
'/usr/local/lib/python2.7/plat-linux2',
'/usr/local/lib/python2.7/lib-tk',
'/var/www/webapps/testapp/lib/python2.7/site-packages']
Server time: Wed, 21 May 2014 15:26:45 +0000
The bash script I am using is as follows:
#!/bin/bash
NAME="testapp2" # Name of the application
DJANGODIR=/var/www/webapps/testapp/testapp # Django project directory
SOCKFILE=/var/www/webapps/testapp/run/gunicorn.sock # we will communicate using this unix socket
USER=testappuser # the user to run as
GROUP=webapps # the group to run as
NUM_WORKERS=16 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=mysite.settings # which settings file should Django use
DJANGO_WSGI_MODULE=mysite.wsgi # WSGI module name
PYTHONPATH=/var/www/webapps/testapp/bin
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--log-level=debug \
--bind=unix:$SOCKFILE
Using the bash script does not work and I can't seem to figure out why. Can anyone help?
Thanks
Turns out the issue lay with directory permissions and the user running the gunicorn process.
All sorted now, thanks!

Gunicorn does not work outside the project directory, even with export PWD=$DJANGODIR

I use a bash script to run gunicorn. It is named run_gunicorn.sh:
#!/bin/bash
NAME=new_project
DJANGODIR=/home/flame/Projects/$NAME
SOCKFILE=/home/flame/launch/web.sock
USER=flame
GROUP=flame
DJANGO_SETTINGS_MODULE=$NAME.settings
DJANGO_WSGI_MODULE=$NAME.wsgi
# export PWD=$DJANGODIR # still does not work if I uncomment THIS LINE
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers 7 \
--user=$USER --group=$GROUP \
--log-level=debug \
--bind=unix:$SOCKFILE
If I run from the project dir:
[/home/flame/Projects/new_project]$ bash run_gunicorn.sh
It works well. But if
[~]$ bash Projects/new_project/run_gunicorn.sh
it raises errors:
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>
I guess it is about the current working directory, so I added export PWD=$DJANGODIR before the gunicorn command. But the error remains.
Is it about some Python-related environment variable? Or what is the problem?
Using
export PWD=$DJANGODIR
you do NOT actually change your current working directory. You can easily check this in a shell by running pwd after the assignment. You will have to include something like
cd $DJANGODIR
into your script.
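The point is easy to verify in a throwaway script: assigning PWD changes only the variable, while the process keeps its real working directory (shown here with the physical pwd -P):

```shell
#!/bin/sh
before=$(pwd -P)      # physical working directory
export PWD=/tmp       # overwrite the PWD variable only
after=$(pwd -P)       # ... the real directory has not moved
[ "$before" = "$after" ] && echo "PWD assignment did not change directory"
cd /tmp               # cd is what actually changes directory
```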

gunicorn - not taking config file if executed from shell script

This is the code for my shell script:
#!/bin/bash
source /path/to/active
gunicorn_django -c /path/to/conf.py -D
The above sh file, when executed, starts the gunicorn process, but it is not using the config file.
But if I execute the command directly from the command line, like
gunicorn_django -c path/to/conf.py -D
then it does use the config file.
Also, in the sh file, if I give options directly, like -w 3 --error-logfile etc., then it takes those options.
Use this script; it worked for me:
#!/bin/bash
source /path/to/active
gunicorn_django -c $(pwd)/path/to/conffilefrom/presentworkingdirectory -D
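The underlying issue is that a relative -c path is resolved against the caller's working directory, not the script's location, so the $(pwd) trick only works when you invoke the script from the right directory. A common way to make the script location-independent (assuming conf.py sits next to the script):

```shell
#!/bin/sh
# Resolve the directory this script lives in, so relative paths keep
# working no matter which directory the script is invoked from.
SCRIPT_DIR=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)
echo "config would be read from: $SCRIPT_DIR/conf.py"
# In the real script this would be:
#   source /path/to/active
#   gunicorn_django -c "$SCRIPT_DIR/conf.py" -D
```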
