Service inside docker container stops after some time - python

I have deployed a rest service inside a docker container using uwsgi and nginx.
When I run this Python Flask REST service inside the Docker container, the service works fine for about the first hour, but after some time both nginx and the REST service stop for some reason.
Has anyone faced similar issue?
Is there any known fix for this issue?

Consider doing a docker ps -a to get the stopped container's identifier.
The -a flag simply means listing all of the containers on your machine, including stopped ones.
Then run docker inspect on that container and look for the LogPath attribute.
Open up the container's log file and see if you can identify the root cause of why the process died inside the container. (You might need root permission to do this.)
Note: a process can die because of anything, e.g. a code fault.
If nothing suspicious shows up in the log file, then you might want to check the State attribute. Also check the ExitCode attribute to see if you can work backwards to figure out which part of your application exited with that code.
Also check the OOMKilled flag; if it is true, it means your container was killed due to an out-of-memory error.
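A quick way to pull those fields out with docker inspect's --format option (the container ID below is a placeholder for whatever docker ps -a showed you):

docker ps -a
docker inspect --format '{{.LogPath}}' <container-id>
docker inspect --format 'ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}}' <container-id>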
If you still can't figure out why, then you might need to add more logging to your application to give you more insight into why it died.

Related

Pycharm Couldn't connect to console process when using remote docker interpreter

I am trying to run my code within a docker container hosted on an AWS EC2 machine.
It seems that PyCharm can connect to the interpreter because it can show the list of installed packages when looking at the interpreter configuration.
However, when I try to open a Python console, or when I try to run a Python script, I have the error:
3987f6fc2476:/usr/bin/python3 /opt/.pycharm_helpers/pydev/pydevconsole.py --mode=server --port=55516
Couldn't connect to console process.
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
Happy to provide more information. What is possibly going wrong here? The error seems pretty generic.
EDIT: PyCharm can start the docker container but still the Python console won't work. On the server, docker ps returns:
ecd6a7220b55 9e1ad5b17633 "/usr/bin/python3 /o…" 1 second ago Up Less than a second 22/tcp, 0.0.0.0:50219->50219/tcp dreamy_matsumoto
It turns out that the issue was that PyCharm uses a random port every time it starts a Python console when connecting to a remote Docker container. If we could open all the inbound ports on the EC2 instance, this feature would work. Of course, there is nothing worse from a security perspective. Do NOT do this. (But if you really want to do it, you'll need to set up Docker over TCP.)
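If you do go the Docker-over-TCP route, at least protect the daemon with TLS instead of opening it to the world. A rough sketch of the daemon side (the certificate paths are placeholders):

dockerd \
  --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H unix:///var/run/docker.sock \
  -H tcp://0.0.0.0:2376

PyCharm's Docker connection can then be pointed at tcp://<ec2-host>:2376 with the matching client certificates, so only that one port needs to be reachable from outside.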

How do I move a container from one docker to another?

I know similar questions have been asked, but I couldn't get them working, or they were not specific enough for me, since I am fairly new to Docker. The question is similar to the one in this thread, How to move Docker containers between different hosts?, but I don't fully understand the answer or I can't get it working.
My problem: I am using Docker Desktop to run a Python script locally in a container, but I want this Python script to be able to run on a Windows Server 2016 machine. The script is a short web scraper which creates a csv file.
I am aware that I need to install some sort of Docker on the web server, export my container, and be able to load the container on the web server.
The thread referred to above says that I need to use docker commit psscrape, but when I try to use it
I get: "Error response from daemon: No such container: psscraper." This is probably because the container ran and then stopped, since the program runs for only a few seconds. psscraper is in the 'docker ps -a' list but not in the 'docker ps' list. I guess it has something to do with that.
psscraper is the name of the python file.
Is there anyone who could enlighten me on how to proceed?
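Not an authoritative recipe, but the usual export/import workflow looks roughly like this (the image name psscraper-image is only an illustration; use the container name or ID that docker ps -a actually shows):

docker ps -a
docker commit <container-id> psscraper-image
docker save -o psscraper-image.tar psscraper-image
# copy psscraper-image.tar to the Windows server, then on that machine:
docker load -i psscraper-image.tar
docker run psscraper-image

Note that docker commit also works on stopped containers, so a container showing up only in docker ps -a is not a problem by itself.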

How can I run luigid and luigi task within docker? [duplicate]

I have built a base image from Dockerfile named centos+ssh. In centos+ssh's Dockerfile, I use CMD to run ssh service.
Then I want to build an image to run another service, named rabbitmq; the Dockerfile:
FROM centos+ssh
EXPOSE 22
EXPOSE 4149
CMD /opt/mq/sbin/rabbitmq-server start
To start the rabbitmq container, run:
docker run -d -p 222:22 -p 4149:4149 rabbitmq
but the ssh service doesn't work; it seems that rabbitmq's Dockerfile CMD overrides centos's CMD.
How does CMD work inside a Docker image?
If I want to run multiple services, how do I do it? Using supervisor?
You are right, the second Dockerfile will overwrite the CMD command of the first one. Docker will always run a single command, not more. So at the end of your Dockerfile, you can specify one command to run. Not more.
But you can execute both commands in one line:
FROM centos+ssh
EXPOSE 22
EXPOSE 4149
CMD service sshd start && /opt/mq/sbin/rabbitmq-server start
To make your Dockerfile a little bit cleaner, you could also put your CMD commands into an extra file:
FROM centos+ssh
EXPOSE 22
EXPOSE 4149
CMD sh /home/centos/all_your_commands.sh
And a file like this:
service sshd start &
/opt/mq/sbin/rabbitmq-server start
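Note that the script also has to be copied into the image for that CMD to find it. A minimal sketch, assuming all_your_commands.sh sits next to the Dockerfile (the /home/centos/ path just mirrors the example above):

FROM centos+ssh
EXPOSE 22
EXPOSE 4149
COPY all_your_commands.sh /home/centos/all_your_commands.sh
CMD sh /home/centos/all_your_commands.sh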
Even though CMD is written down in the Dockerfile, it really is runtime information. Just like EXPOSE, but contrary to e.g. RUN and ADD. By this, I mean that you can override it later, in an extending Dockerfile, or simply in your run command, which is what you are experiencing. At all times, there can be only one CMD.
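For example, anything you pass after the image name to docker run replaces the image's CMD (rabbitmq here is just the image built above):

docker run -it rabbitmq /bin/bash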
If you want to run multiple services, I would indeed use supervisor. You can make a supervisor configuration file for each service, ADD these into a directory, and run supervisor with supervisord -c /etc/supervisor/supervisord.conf to point it at a supervisor configuration file which loads all your services and looks like:
[supervisord]
nodaemon=true
[include]
files = /etc/supervisor/conf.d/*.conf
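A per-service file dropped into that conf.d directory could then look roughly like this (the program names and paths below only mirror the question; both commands must stay in the foreground so supervisor can track them):

[program:sshd]
command=/usr/sbin/sshd -D

[program:rabbitmq]
command=/opt/mq/sbin/rabbitmq-server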
If you would like more details, I wrote a blog on this subject here: http://blog.trifork.com/2014/03/11/using-supervisor-with-docker-to-manage-processes-supporting-image-inheritance/
While I respect the answer from qkrijger explaining how you can work around this issue, I think there is a lot more we can learn about what's going on here ...
To actually answer your question of "why" ... I think it would be helpful for you to understand how the docker stop command works and that all processes should be shut down cleanly to prevent problems when you try to restart them (file corruption, etc.).
Problem: What if Docker did start SSH from its command and started RabbitMQ from your Dockerfile? "The docker stop command attempts to stop a running container first by sending a SIGTERM signal to the root process (PID 1) in the container." Which process is Docker tracking as PID 1, the one that will get the SIGTERM? Will it be SSH or Rabbit? "According to the Unix process model, the init process -- PID 1 -- inherits all orphaned child processes and must reap them. Most Docker containers do not have an init process that does this correctly, and as a result their containers become filled with zombie processes over time."
Answer: Docker simply takes that last CMD as the one that will get launched as the root process with PID 1 and get the SIGTERM from docker stop.
Suggested solution: You should use (or create) a base image specifically made for running more than one service, such as phusion/baseimage
It is important to note that tini exists exactly for this reason, and as of Docker 1.13 and up, tini is officially part of Docker, which tells us that running more than one process in Docker IS valid. So even if someone claims to be more skilled regarding Docker and insists that you are absurd for thinking of doing this, know that you are not. There are perfectly valid situations for doing so.
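Since Docker 1.13 you can ask for that built-in tini with a single flag; for example, reusing the rabbitmq image from this question:

docker run --init -d -p 222:22 -p 4149:4149 rabbitmq

The --init flag runs tini as PID 1, so it reaps zombie processes and forwards the SIGTERM from docker stop to your command.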
Good to know:
https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/
http://www.techbar.me/stopping-docker-containers-gracefully/
https://www.ctl.io/developers/blog/post/gracefully-stopping-docker-containers/
https://github.com/phusion/baseimage-docker#docker_single_process
The official docker answer to Run multiple services in a container.
It explains how you can do it with an init system (systemd, sysvinit, upstart), a script (CMD ./my_wrapper_script.sh), or a process manager like supervisord.
The && workaround can work only for services that start in the background (daemons) or that execute quickly without interaction and release the prompt. If you try it with a foreground service (one that keeps the prompt), only the first service will start.
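A minimal wrapper-script sketch along the lines of that official page (the two start commands just mirror this question; adjust them to your image):

#!/bin/bash
# start both services in the background
/usr/sbin/sshd -D &
/opt/mq/sbin/rabbitmq-server &
# exit as soon as either process exits, so the container stops visibly and can be restarted
wait -n
exit $?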
To address why CMD is designed to run only one service per container, consider what would happen if the secondary servers run in the same container were not trivial / auxiliary but "major" (e.g. storage bundled with the frontend app). For starters, it would break several important containerization features such as horizontal (auto-)scaling and rescheduling between nodes, both of which assume there is only one application (source of CPU load) per container. Then there is the issue of vulnerabilities - more servers exposed in a container means more frequent patching of CVEs.
So let's admit that it is a 'nudge' from Docker (and Kubernetes/Openshift) designers towards good practices and we should not reinvent workarounds (SSH is not necessary - we have docker exec / kubectl exec / oc rsh designed to replace it).
More info
https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container

How to Make uWSGI die when it encounters an error?

I have my Python app running through uWSGI. Rarely, the app will encounter an error which prevents it from loading. At that point, if I send requests to uWSGI, I get the error "no python application found, check your startup logs for errors". What I would like to happen in this situation is for uWSGI to just die so that the program managing it (Supervisor, in my case) can restart it. Is there a setting or something I can use to force this?
More info about my setup:
Python 2.7 app being run through uWSGI in a docker container. The docker container is managed by Supervisor, and if it dies, Supervisor will restart it, which is what I want to happen.
After an hour of searching, I finally found a way to do this. Just pass the --need-app argument when starting uWSGI, or add need-app = true in your .ini file, if you run things that way. No idea why this is off by default (in what situation would you ever want uWSGI to keep running when your app has died?) but so it goes.
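For the .ini route, a minimal sketch (the module and socket values are placeholders for your own app):

[uwsgi]
; placeholders for your own WSGI callable and socket
module = myapp:app
socket = /tmp/uwsgi.sock
master = true
; exit instead of answering every request with "no python application found"
need-app = true
; shut down cleanly on SIGTERM from Supervisor or docker stop
die-on-term = true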

Python process suspends on SSH logout after nohup/screen

I have a remote server through Blue Host that's intended to run a server based on Twisted for Python. The only access I have to it is over SSH, so to keep Python running after I log out I tried using nohup python server.py & and screen -dm python server.py, getting the same results for each. Everything works fine until I log out of SSH - even though Python is running in the background as expected, once I've logged out, my client can no longer communicate with the server. The strange part is that if I log back in over SSH and check the running processes with ps aux, I see Python running and my client can successfully communicate with the server again. Even if I don't type anything at all once I log back in, everything works as expected. But, of course, as soon as I log back out, it's as if the server is gone.
I've contacted support for the hosting service in case this is some oddity on their end, but hopefully this is something that can be resolved on my end instead.
Edit: Looks like Blue Host doesn't want me doing server-y stuff without buying the VPS upgrade so it looks like that's the big problem.
Edit 2: Okay, so in case anybody ends up having a similar problem, here's what the main issue turned out to be. I was mistaken in my original description; I was able to connect to the server, but I was getting kicked off immediately because of what turned out to be a MySQL error. I guess trying to connect to a localhost database with no active connection somehow causes problems, so I changed the MySQL connection command to connect to my site's IP address instead, even though it is the same IP as the server. That seemed to do the trick for my main issue.
Don't use this method to keep the server process running. Instead, try using supervisor (apt-get install supervisor). It allows you to daemonize your process and gives you the ability to stop/restart it, etc.
Here's a sample config entry (/etc/supervisor/supervisord.conf):
[program:my_server]
command=python /path/to/server/server.py
directory=/path/to/server/
autostart=true
autorestart=true
stdout_logfile=/var/log/server.log
stderr_logfile=/var/log/server_error.log
user=your_linux_user_name
After you edit your config, do
sudo service supervisor stop
sudo service supervisor start #need to do this - doing a `restart` doesn't reload the config file!
Your server should now be running properly. You can manage its lifecycle via sudo supervisorctl.
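Typical supervisorctl commands for the program defined above:

sudo supervisorctl status              # list programs and their current states
sudo supervisorctl restart my_server   # restart just this program
sudo supervisorctl tail my_server      # show its recent stdout log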
