GitLab CI Flask timeouts - Python

I am trying to deploy a Flask webapp using GitLab CI.
In my script I launch the following command:
- if [[ "$STATUS" == "NOTRUN" ]] ; then eval "nohup flask run &" ; fi
The problem is that the webapp deploys fine, but my GitLab CI job times out after 1 hour because it thinks the command is still running.
What do I have to add for the job to succeed instead of failing?
Thank you very much

Unfortunately it won't be so easy. There is a similar issue on GitLab.
A process started by the Runner, even if you add nohup and & at the end, is marked with the job's process group ID. When the job finishes, the Runner sends a kill signal to the whole process group, so any process started directly from the CI job will be terminated at the end of the job.
The only solution I know is to create a .service unit and run it with systemctl. Using a service manager, you're not starting the process in the context of the Runner's job; you're only notifying the manager to start a process using a prepared configuration.
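A minimal sketch of that approach (the unit name, paths, and FLASK_APP value below are placeholders, not taken from the question): put something like this in /etc/systemd/system/flask-webapp.service on the target host:

[Unit]
Description=Flask webapp
After=network.target

[Service]
WorkingDirectory=/path/to/app
# placeholder app module
Environment=FLASK_APP=app.py
ExecStart=/usr/bin/env flask run --host=0.0.0.0
Restart=on-failure

[Install]
WantedBy=multi-user.target

The CI script then only asks the service manager to (re)start it, which returns immediately instead of blocking until the timeout, e.g.:
- if [[ "$STATUS" == "NOTRUN" ]] ; then sudo systemctl restart flask-webapp.service ; fi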

Related

celery with Django

I am building an app and I am trying to run some tasks every day. I saw some answers, blogs and tutorials about using Celery, and I liked the idea of using Celery for background jobs.
But I have some questions about Celery:
As mentioned in the Celery documentation, after setting up a Celery task I have to run a command like celery -A proj worker -l INFO, which will process all the tasks. So my question is: do I have to stop the running server to execute this command?
And what if I deploy the Django project with Celery on Heroku or PythonAnywhere?
Do I have to run the command every time, or can I execute this command first and then start the server?
If I have to run this command every time to perform background tasks, how is this possible when deploying to Heroku?
Will Celery's background tasks keep running after executing python manage.py runserver in only one terminal?
Why am I in doubt?
What I think is: when running celery -A proj worker -l INFO it will process (or run) the tasks, and I cannot execute runserver in the same terminal.
Any help would be much appreciated. Thank you.
Do I have to run the command every time, or can I execute this command first and then start the server?
Dockerize your Celery worker and write your own script for auto-run.
You can't run the Celery worker and the Django application in one terminal simultaneously, because both of them are programs that need to keep running in parallel. So you should use two terminals, one for Django and another for the Celery worker.
I highly recommend reading this Heroku development article on using Celery and Django on Heroku.
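For local development that simply means two commands in two terminals (assuming the project is named proj, as in the question): python manage.py runserver in the first, and celery -A proj worker -l INFO in the second. On Heroku the same split is declared in the Procfile, roughly like this (the gunicorn entry point is a placeholder, not from the question):
web: gunicorn proj.wsgi
worker: celery -A proj worker -l INFO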

How can I run luigid and luigi task within docker? [duplicate]

I have built a base image from a Dockerfile, named centos+ssh. In centos+ssh's Dockerfile, I use CMD to run the ssh service.
Then I want to build an image that runs another service, rabbitmq. The Dockerfile:
FROM centos+ssh
EXPOSE 22
EXPOSE 4149
CMD /opt/mq/sbin/rabbitmq-server start
To start the rabbitmq container, run:
docker run -d -p 222:22 -p 4149:4149 rabbitmq
but the ssh service doesn't work; it seems rabbitmq's Dockerfile CMD overrides centos+ssh's CMD.
How does CMD work inside a docker image?
If I want to run multiple services, how do I do that? Using supervisor?
You are right, the second Dockerfile will overwrite the CMD command of the first one. Docker will always run a single command, not more. So at the end of your Dockerfile, you can specify exactly one command to run.
But you can execute both commands in one line:
FROM centos+ssh
EXPOSE 22
EXPOSE 4149
CMD service sshd start && /opt/mq/sbin/rabbitmq-server start
What you could also do, to make your Dockerfile a little bit cleaner, is to put your CMD commands into an extra file:
FROM centos+ssh
EXPOSE 22
EXPOSE 4149
CMD sh /home/centos/all_your_commands.sh
And a file /home/centos/all_your_commands.sh like this:
service sshd start &
/opt/mq/sbin/rabbitmq-server start
Even though CMD is written down in the Dockerfile, it really is runtime information. Just like EXPOSE, but contrary to e.g. RUN and ADD. By this, I mean that you can override it later, in an extending Dockerfile, or simply in your run command, which is what you are experiencing. At all times, there can be only one CMD.
If you want to run multiple services, I indeed would use supervisor. You can make a supervisor configuration file for each service, ADD these in a directory, and run supervisor with supervisord -c /etc/supervisor/supervisord.conf, pointing to a supervisor configuration file which loads all your services and looks like this:
[supervisord]
nodaemon=true
[include]
files = /etc/supervisor/conf.d/*.conf
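For illustration, one of the per-service files ADDed under /etc/supervisor/conf.d/ could look roughly like this (the program names are placeholders; supervisord expects commands that stay in the foreground, hence sshd -D):
[program:sshd]
command=/usr/sbin/sshd -D
autorestart=true

[program:rabbitmq]
command=/opt/mq/sbin/rabbitmq-server start
autorestart=true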
If you would like more details, I wrote a blog on this subject here: http://blog.trifork.com/2014/03/11/using-supervisor-with-docker-to-manage-processes-supporting-image-inheritance/
While I respect the answer from qkrijger explaining how you can work around this issue, I think there is a lot more we can learn about what's going on here ...
To actually answer your question of "why" ... I think it would be helpful for you to understand how the docker stop command works, and that all processes should be shut down cleanly to prevent problems when you try to restart them (file corruption etc).
Problem: What if docker did start SSH from its CMD and started RabbitMQ from your Dockerfile? "The docker stop command attempts to stop a running container first by sending a SIGTERM signal to the root process (PID 1) in the container." Which process is docker tracking as PID 1 that will get the SIGTERM? Will it be SSH or Rabbit? "According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them. Most Docker containers do not have an init process that does this correctly, and as a result their containers become filled with zombie processes over time."
Answer: Docker simply takes the last CMD as the one that will be launched as the root process with PID 1 and will get the SIGTERM from docker stop.
Suggested solution: You should use (or create) a base image specifically made for running more than one service, such as phusion/baseimage
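As a rough sketch of that approach (the layout follows the baseimage-docker README; the script name is a placeholder), each service becomes a runit entry instead of a CMD:
FROM phusion/baseimage
# baseimage's own init (/sbin/my_init) stays PID 1 and reaps children;
# anything placed under /etc/service/<name>/run is started and supervised by runit
COPY rabbitmq-run.sh /etc/service/rabbitmq/run
RUN chmod +x /etc/service/rabbitmq/run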
It should be important to note that tini exists exactly for this reason, and as of Docker 1.13 and up, tini is officially part of Docker, which tells us that running more than one process in Docker IS VALID .. so even if someone claims to be more skilled regarding Docker, and insists that you absurd for thinking of doing this, know that you are not. There are perfectly valid situations for doing so.
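Concretely, since Docker 1.13 you can inject tini in front of an existing image's CMD with the --init flag, so that PID 1 reaps zombies and forwards signals properly, e.g. for the container from the question:
docker run --init -d -p 222:22 -p 4149:4149 rabbitmq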
Good to know:
https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/
http://www.techbar.me/stopping-docker-containers-gracefully/
https://www.ctl.io/developers/blog/post/gracefully-stopping-docker-containers/
https://github.com/phusion/baseimage-docker#docker_single_process
The official docker answer to Run multiple services in a container.
It explains how you can do it with an init system (systemd, sysvinit, upstart), a script (CMD ./my_wrapper_script.sh) or a supervisor like supervisord.
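A sketch of such a wrapper script, adapted to the two services from this question (sshd is kept in the foreground with -D so the shell owns both children; wait -n needs bash 4.3+):
#!/bin/bash
# start both services in the background
/usr/sbin/sshd -D &
/opt/mq/sbin/rabbitmq-server start &
# block until the first one exits, then propagate its exit status
# so the container stops when either service dies
wait -n
exit $?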
The && workaround only works for services that start in the background (daemons) or that execute quickly without interaction and release the prompt. If you do this with a foreground service (one that keeps the prompt), only the first service will start.
To address why CMD is designed to run only one service per container, let's just realize what would happen if the secondary servers running in the same container were not trivial / auxiliary but "major" (e.g. storage bundled with the frontend app). For starters, it would break down several important containerization features such as horizontal (auto-)scaling and rescheduling between nodes, both of which assume there is only one application (source of CPU load) per container. Then there is the issue of vulnerabilities - more servers exposed in a container means more frequent patching of CVEs...
So let's admit that it is a "nudge" from the Docker (and Kubernetes/OpenShift) designers towards good practices, and we should not reinvent workarounds (SSH is not necessary - we have docker exec / kubectl exec / oc rsh, which are designed to replace it).
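For example, instead of baking an sshd into the image you can open a shell in a running container with (the container ID is a placeholder):
docker exec -it <container-id> /bin/bash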
More info
https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container

Execute a flask run in Jenkins shell without timeout

I'm using Jenkins to run a Flask app automatically from a Git branch.
The build works well, and it starts the Flask app on my server, except that when you run flask run, the command line stays active as long as the flask app runs.
Thus, the build never ends, and it ends up as an unstable build.
How can I get the Flask app to run and have the Jenkins build succeed once it prints the * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit) message?
If you're running flask run in a bash script, adding & to the end (flask run &) will run the task in the background, allowing the bash script to continue. I think this will let your job finish and Jenkins can scan stdout for the message indicating success.
Edit: Apparently overriding the BUILD_ID environment variable with export BUILD_ID=<whatever> is enough to stop Jenkins from killing the background process. Be wary of what you choose as <whatever>: if you pick an existing BUILD_ID, there could be side effects.
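Putting the two together, the "Execute shell" step could look roughly like this (the BUILD_ID value and the log file name are arbitrary placeholders):
export BUILD_ID=dontKillMe   # keep Jenkins' process tree killer away from the background process
nohup flask run --host=0.0.0.0 > flask.log 2>&1 &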

Why does an app run with Fabric terminate?

I have been trying to start an app on remote hosts using Fabric.
I was running a daemonized Java app, which runs perfectly fine when I log in and start it manually. The app keeps running even after I exit the session.
But when I use Fabric's run(), my app terminates once the session ends.
Although run(command, pty=False) solved my problem (it is roughly documented here), I still cannot see why this is relevant. I am a Python newbie; can anyone explain the difference between:
starting the daemon manually after logging in over ssh
starting the daemon using Fabric with pty=True
starting the daemon using Fabric with pty=False
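For reference, the call in question would sit in a Fabric 1.x fabfile roughly like this (the host and command are placeholders, and the comments summarize the usual explanation of the pty behavior rather than anything from the question):
from fabric.api import env, run   # Fabric 1.x API, as implied by run() in the question

env.hosts = ["user@remote-host"]   # placeholder host

def start_daemon():
    # pty=True (the default) attaches the remote command to a pseudo-terminal;
    # when Fabric disconnects, the pty is closed and processes still attached to it
    # are typically sent SIGHUP, which is why the app dies with the session.
    # pty=False allocates no terminal, so the backgrounded process has nothing to
    # be hung up on and survives the disconnect.
    run("nohup /path/to/start-app.sh > /dev/null 2>&1 &", pty=False)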

Running celeryd_multi as a daemon

I'm using celery 3.0.11 and djcelery 3.0.11 with python 2.7 and django 1.3.4.
I'm trying to run celeryd as a daemon and I've followed instructions from http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html
When I run the workers using celeryd as described in the link with a python (non-django) configuration, the daemon comes up.
When I run the workers using python manage.py celery worker --loglevel=info to test the workers, they come up fine and start to consume messages.
But when I run celeryd with a django configuration, i.e. using manage.py celeryd_multi, I just get a message that says
> Starting nodes...
> <node_name>.<user_name>: OK
But I don't see any daemon running and my messages obviously don't get consumed. There is an empty log file (the one that's configured in the celeryd config file).
I've tried this with a very basic django project as well and I get the same result.
I'm wondering if I'm missing any basic configuration piece. Since I don't get any errors and I don't have any logs, I'm stuck. Running it with sh -x doesn't show anything special either.
Has anyone experienced this before or does anyone have any suggestions on what I can try?
Thanks,
For now I've switched to using supervisord instead of celeryd and I have no issues running multiple workers.
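For reference, the supervisord entry for one worker can be as small as this (paths and user are placeholders; the command mirrors the manage.py celery worker call from the question):
[program:celery-worker]
command=/path/to/virtualenv/bin/python manage.py celery worker --loglevel=info
directory=/path/to/project
user=celeryuser
autostart=true
autorestart=true
stdout_logfile=/var/log/celery/worker.log
redirect_stderr=true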
