My project is based on Python Flask and Celery with RabbitMQ.
So I have to run two long-running services in one container:
1. gunicorn -w 64 -b 127.0.0.1:8888 manage:app
2. celery worker -A celery_worker.celery --loglevel=info
Both of these services run as long-running commands.
I don't know how to write the Dockerfile to achieve this.
I tried this:
CMD ["gunicorn -w 64 -b 127.0.0.1:8888 manage:app", "celery worker -A celery_worker.celery --loglevel=info"]
But it does not work.
Before I decided to use Docker in my project, I used supervisor to execute those two commands simultaneously, but supervisor has some problems inside a Docker container that I couldn't solve (DETAIL).
So I want to know how to write the Dockerfile so that these two long-running services run in one Docker container, such that docker stop stops both services and docker start starts them both.
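For illustration only, a minimal sketch of one common approach (an assumption on my part, not something from the question): a small start.sh that launches the Celery worker in the background and keeps Gunicorn in the foreground, which the Dockerfile then copies in, marks executable, and sets as CMD ["./start.sh"]:
#!/bin/sh
# start.sh (sketch): run the Celery worker in the background, then exec Gunicorn
# so it becomes the container's foreground process and receives the signals
# sent by "docker stop" / "docker start".
celery worker -A celery_worker.celery --loglevel=info &
exec gunicorn -w 64 -b 127.0.0.1:8888 manage:app
Note that this simple form does not forward stop signals to the backgrounded Celery process (it is killed rather than gracefully stopped when the container exits); a small supervisor process, or one container per service, handles that more robustly.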
Related
I have some problems running Django on an ECS task.
I want to have a Django webapp running on an ECS task and accessible to the world.
Here are the symptoms:
When I run the ECS task using Django's python manage.py runserver 0.0.0.0:8000 as the entry point for my container, I get a connection-refused response.
When I run the task using Gunicorn, with gunicorn --bind 0.0.0.0:8000 my-project.wsgi, I get an empty response (no data).
I don't see any logs on CloudWatch, and I can't find any server logs when I SSH into the ECS instance.
Here are some of my settings related to that kind of issue:
I have set my ECS instance security group's inbound rules to All TCP | TCP | 0 - 65535 | 0.0.0.0/0 to be sure it's not a firewall problem, and I can assert that it isn't because I can run a Ruby on Rails server on the same ECS instance perfectly.
In my container task definition I set one port mapping to 80:8000 and another to 8000:8000.
In my settings.py, I have set ALLOWED_HOSTS = ["*"] and DEBUG = False.
Locally, my server runs perfectly on the same Docker image when doing docker run -it -p 8000:8000 my-image gunicorn --bind=0.0.0.0:8000 wsgi, or the same with manage.py runserver.
Here is my Dockerfile for a Gunicorn web server.
FROM python:3.6
WORKDIR /usr/src/my-django-project
COPY my-django-project .
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["gunicorn","--bind","0.0.0.0:8000","wsgi"]
# CMD ["python","manage.py", "runserver", "0.0.0.0:8000"]
Any help would be appreciated!
To help you debug:
What is the status of the task when you try to access your web app?
Figure out which instance the task is running on and run docker ps on that ECS instance to look for the running container.
If you can see the container running on the instance, try accessing your web app directly on the server with a command like curl http://localhost:8000 or wget.
If your container is not running, try docker ps -a to see which one has just stopped, and check it with docker logs -f.
With this approach you cut out all the AWS firewall settings, so you can see whether your container is configured correctly. I think it will make tracking down the issue easier.
Once you have confirmed that the container is running fine and you can reach it via localhost, you can work on the security group inbound/outbound rules.
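For instance, the checks above boil down to something like this on the ECS instance (the container ID is a placeholder):
docker ps                      # is the task's container actually running?
curl http://localhost:8000     # hit the app directly, bypassing AWS networking
docker ps -a                   # if it is not running, find the container that just exited
docker logs -f <container-id>  # follow that container's logs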
Currently, I have a Django WSGI server handling web traffic (with nginx as a front end forwarding traffic to the WSGI server).
Now, I would like to start a crontab service to run periodic background jobs.
This is what I have done:
django\Dockerfile
FROM python:3.6.4-alpine3.4
...
RUN chmod +x entrypoint.sh
ENTRYPOINT ["sh", "entrypoint.sh"]
CMD /usr/local/bin/gunicorn web.wsgi:application -b django:5000 --log-level=info --error-logfile=/var/log/gunicorn3.err.log
django\entrypoint.sh
#!/bin/sh
...
python manage.py crontab add
# https://stackoverflow.com/questions/37015624/how-to-run-a-cron-job-inside-a-docker-container
#
# "crond -help" yields:
#
# -f Foreground
# -b Background (default)
# -S Log to syslog (default)
# -l N Set log level. Most verbose:0, default:8
# -d N Set log level, log to stderr
# -L FILE Log to FILE
# -c DIR Cron dir. Default:/var/spool/cron/crontabs
# start cron
/usr/sbin/crond -f -l 8
exec "$@"
docker-compose.yml
django:
  build:
    context: ./django
    dockerfile: Dockerfile
  restart: always
  depends_on:
    - pgbouncer
  expose:
    - "5000"
  volumes:
    - static_data:/app/static
If I use the above setup, I notice that:
My scheduled cron job runs periodically.
But the WSGI server is unable to serve web traffic.
First, I tried changing this in django\entrypoint.sh:
# start cron
/usr/sbin/crond -f -l 8
to
# start cron
/usr/sbin/crond -b -l 8
After making the above change:
My scheduled cron job no longer runs (I'm not sure why).
The WSGI server can serve web traffic.
Why is that? How can I make my Django container serve web traffic and run the cron job at the same time?
Or is this not the right way to do this kind of thing in Docker? Should I use two containers?
I'd run a full-on second container, naming the service in the compose file cron or something of the like (maybe more specific to the actual job, in the event you have multiple crons). One process per container is general practice. In the "cron" container I wouldn't even run it via crond; I'd have whatever you use on the host machine handle the scheduling of the container. I'd change your job from a django-cron job to a custom django-admin command, since you won't be managing its execution from the Django app anymore. You could still build the second container off the Django image and just change the CMD via the docker-compose.yml command: ["django-admin", "mycommand"]. It would likely be unnecessary to expose the ports on the second container. Invoke it as a normal service: docker-compose run mycronservice.
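A rough sketch of what that could look like in docker-compose.yml, reusing the Django build (the service name matches the docker-compose run invocation above; "mycommand" is a placeholder):
# sketch: a sibling service next to "django", built from the same image,
# with only the command overridden
mycronservice:
  build:
    context: ./django
    dockerfile: Dockerfile
  command: ["django-admin", "mycommand"]
Depending on what entrypoint.sh does, you may also want to override entrypoint: for this service so the web-only setup steps don't run, and there is no need for expose: here.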
celery -A app worker -Q priority_high -B -l debug --purge -n priority_high_worker
celery -A app worker -Q default -B -l debug --purge -n default_worker
celery -A app beat -l info
As of now, we are running the three commands in screen sessions. What is a more production-ready way of running these commands?
The easiest way to create daemons is with supervisord. Sentry, which also uses Django and Celery, recommends using supervisord to run its workers; you can adjust the configuration to suit your setup:
[program:celery-priority-high]
directory=/www/my_app/
command=/path/to/celery -A app worker -Q priority_high -B -l debug --purge -n priority_high_worker
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=syslog
stderr_logfile=syslog
You can, of course, also run the Django web process itself using this method.
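For instance (a sketch only: it assumes the site is served with gunicorn, and app.wsgi plus the bind address are placeholders):
[program:django-web]
; sketch: assumes gunicorn; "app.wsgi" and the bind address are placeholders
directory=/www/my_app/
command=/path/to/gunicorn app.wsgi:application --bind 127.0.0.1:8000 --workers 3
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=syslog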
If supervisord is too much bloat for your needs, you can also create init scripts for your init system of choice (e.g. systemd).
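For example, a minimal systemd unit for the first worker might look roughly like this (a sketch; the paths and User are assumptions):
# /etc/systemd/system/celery-priority-high.service (sketch; paths and User are assumptions)
[Unit]
Description=celery priority_high worker
After=network.target

[Service]
WorkingDirectory=/www/my_app/
ExecStart=/path/to/celery -A app worker -Q priority_high -B -l debug --purge -n priority_high_worker
Restart=always
User=www-data

[Install]
WantedBy=multi-user.target
Enable it with systemctl enable --now celery-priority-high, and create similar units for the other two commands.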
The scenario:
Two unrelated web apps with celery background tasks running on the same server.
One RabbitMQ instance
Each web app has its own virtualenv (including celery). Same celery version in both virtualenvs.
I use the following command lines to start a worker and a beat process for each application.
celery -A firstapp.tasks worker
celery -A firstapp.tasks beat
celery -A secondapp.tasks worker --hostname foobar
celery -A secondapp.tasks beat
Now everything seems to work OK, but in the worker process of secondapp I get the following error:
Received unregistered task of type 'firstapp.tasks.do_something'
Is there a way to isolate the two Celery instances from each other?
I'm using Celery version 3.1.16, BTW.
I believe I fixed the problem by creating a RabbitMQ vhost and configuring the second app to use that one.
Create vhost (and set permissions):
sudo rabbitmqctl add_vhost /secondapp
sudo rabbitmqctl set_permissions -p /secondapp guest ".*" ".*" ".*"
And then change the command lines for the second app:
celery -A secondapp.tasks -b amqp://localhost//secondapp worker
celery -A secondapp.tasks -b amqp://localhost//secondapp beat
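If you want to double-check the isolation, the broker side can be inspected like this (optional; run on the RabbitMQ host):
sudo rabbitmqctl list_vhosts                     # should now list /secondapp alongside /
sudo rabbitmqctl list_permissions -p /secondapp  # guest should have ".*" ".*" ".*"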
I want to make a Flask+Nginx+Gunicorn deployment. I have Nginx setup and running and I run gunicorn as described in the docs:
gunicorn app:app
But when I log out of the server, the gunicorn process exits. What is the correct way to make sure it stays running for Nginx to connect to, and restarts if it crashes?
Use the --daemon option when running gunicorn.
Example:
gunicorn grand56.wsgi:application --name grand56 --workers 3 --user=root --group=root --bind=127.0.0.1:1001 --daemon
Add --daemon to the gunicorn bind command.
Example:
gunicorn --bind 0.0.0.0:8001 your_project.wsgi --daemon
I'd look into something like Supervisor.
A very useful tutorial can be found here: https://www.codingforentrepreneurs.com/blog/hello-linux-setup-gunicorn-and-supervisor/
Try this:
nohup gunicorn app:app &
The key thing to note is that when you start the process from the command line, it is a child of your terminal process (i.e. a child of bash). When you log out of the server, your bash process is terminated, as are all of its children.
You'll want to have whatever system you use to manage nginx also manage gunicorn (anything from init.d or Upstart scripts to specialized application process monitors like Monit, Supervisor, Bluepill, Foreman, etc.).
Pay attention to Sean's answer.
However, you can run it on the fly like this:
nohup gunicorn -c config.py </dev/null >/dev/null 2>&1
and it will no longer be dependent on the terminal connection. You could replace >/dev/null with something like >somelogfile if you want to save any output.
But for production use it is best to get it integrated into whatever tool you use for managing processes.
Supervisor is a great cross-platform solution for process management. It is very feature-rich and (in my opinion) requires a lot more configuration than some of the vanilla Linux alternatives (upstart, sysv, systemd). You should definitely use something like this to start, monitor and (if need be) restart your process.
No matter what process manager you end up using, you can still very easily leave gunicorn "running improperly" (i.e. as the root user). I think some of the important details left out by other answers are that you should probably have one (non-root) user own the gunicorn process, which binds to a unix socket owned by that user and the nginx group, with permissions 770. With gunicorn you specify a mask instead, so invert 770 into 007 and use the -m flag. You can specify the user and group of the gunicorn process with the -u and -g flags, and it will create the socket with those owners. This way, only gunicorn and nginx can read/write/execute the socket, and no port is needed. Whatever you end up using for process management, for nginx/gunicorn you probably want something like this in your startup script:
exec gunicorn wsgi:app -u gunicorn -g nginx -m 007 -b gunicorn.sock >> /var/log/$<service_name>.sys.log 2>&1
Make sure the gunicorn user has write permission on the log file. Then, in nginx, where you formerly had the IP/port (i.e. 0.0.0.0:5000), you put the path to the socket (i.e. /usr/share/nginx/html/gunicorn.sock). Notice I did not use the --daemon flag here but used exec; this assumes a process manager, which will run gunicorn as a child process.
You can find all the different flags here.
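On the nginx side, that swap looks roughly like this (a sketch; the socket path simply mirrors the -b argument above):
# nginx location block (sketch): point proxy_pass at the unix socket instead of an ip:port
location / {
    proxy_pass http://unix:/usr/share/nginx/html/gunicorn.sock;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}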
I tried the systemd option and it worked fine. The link below has my full answer, with all the steps to invoke your app as a gunicorn service.
https://askubuntu.com/questions/930589/running-upstart-script-on-17-04/1010398#1010398
I'm running a hug API like this:
--daemon keeps the process in the background.
--access-logfile keeps the request log.
--bind=<ip>:<port>: giving an IP allows access from other systems (if a proxy is not needed).
gunicorn <pyscirpt_name>:__hug_wsgi__ --name caassist -w 4 --access-logfile /var/logs/gunicorn/gunicorn_access.log --daemon --bind=<ip>:<port>