Logging system kill signals for debugging - Python

I have a script run by Python 3.7.9 in an Ubuntu 18.04 Docker container.
At some point the Python interpreter is killed, most likely because some resource limit is exceeded.
Using docker logs ${container_id} I only get the stderr from inside the container, but I am also interested in which resource was exceeded, so that I can give useful feedback to the developers.
Is this automatically logged at the system level (Linux, Docker)?
If not, how can I log it?
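For the part that can be logged from inside the process, a minimal sketch (standard library only, nothing Docker-specific) is to register handlers for the catchable termination signals. Note that SIGKILL, which the kernel OOM killer uses, cannot be caught, so for memory kills you still need to check docker inspect (the State.OOMKilled field) or the host's dmesg output.

import logging
import signal
import sys

logging.basicConfig(level=logging.INFO)

def log_signal(signum, frame):
    # Record which signal ended the process before exiting.
    logging.warning("Received signal %d (%s), exiting",
                    signum, signal.Signals(signum).name)
    sys.exit(128 + signum)

# SIGKILL (and SIGSTOP) cannot be handled; these can.
for sig in (signal.SIGTERM, signal.SIGINT, signal.SIGHUP):
    signal.signal(sig, log_signal)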

Related

Is it possible to get container OS logs from Google Cloud Run

I'm using Google Cloud Run. I run a container with a simple Flask + gunicorn app that starts a heavy computation.
Sometimes it fails with:
Application exec likely failed
terminated: Application failed to start: not available
I'm 100% confident it's not related to Cloud Run timeouts or Flask + gunicorn timeouts.
I've added gunicorn hooks: worker_exit, worker_abort, worker_int, on_exit. None of these hooks is invoked.
Exactly the same operation works fine locally; I can only reproduce the failure on Cloud Run.
It seems like something on Cloud Run just kills my Python process completely.
Is there any way to debug this?
Maybe I can somehow stream tail -f /var/log/{messages,kernel,dmesg,syslog} in parallel with the application logs?
The idea is to understand what kills the app.
Update:
I've managed to get a bit more from the logs:
[INFO] Handling signal: term
Caught SIGTERM signal.Caught SIGTERM signal.
What is the right way to find out what (and why) sends SIGTERM to my Python process?
I would suggest setting up Cloud Logging with your Cloud Run instance. You can easily do so by following this documentation which shows how to attach Cloud Logging to the Python root logger. This will allow you to have more control over the logs that appear for your Cloud Run application.
Setting Up Cloud Logging for Python
Setting up Cloud Logging should also allow Cloud Run to automatically pick up any logs under the /var/log directory as well as syslog (/dev/log).
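For reference, the setup from that documentation boils down to a few lines (a sketch, assuming the google-cloud-logging package is installed and the service account is allowed to write logs):

import logging

import google.cloud.logging

client = google.cloud.logging.Client()
client.setup_logging()  # attaches a Cloud Logging handler to the root logger

logging.getLogger().setLevel(logging.INFO)
logging.info("This record goes to Cloud Logging")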
Hope this helps! Let me know if you need further assistance.

How to handle the STDOUT of a docker container with dockerpy?

I'm planning to write an app that can remotely control and interact with a container. The only remaining problem is handling the output.
With the help of docker-py I can control Docker, but I also want to capture the container's STDOUT. For example, I run a logging system in the container and want to redirect its log output to Python's STDOUT and/or to a remote client in real time.
How should I do this with docker-py, or is there another way to implement it?
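For context, a minimal sketch of what this usually looks like with docker-py, streaming the container's combined STDOUT/STDERR as it is produced ("my-container" is a placeholder name):

import docker

client = docker.from_env()
container = client.containers.get("my-container")

# stream=True + follow=True behaves like `docker logs -f`:
# the generator yields raw output chunks as the container produces them.
for chunk in container.logs(stream=True, follow=True, stdout=True, stderr=True):
    print(chunk.decode("utf-8", errors="replace"), end="")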

Trying to remotely start a process with visible window with Python on Windows machines

I tried to do this with WMI, but interactive processes cannot be started with it (as stated in the Microsoft documentation). I see the processes in Task Manager, but no windows appear.
I tried with Paramiko: same thing. The process is visible in Task Manager, but no window appears (Notepad, for example).
I tried with PsExec, but the only case where a window appears on the remote machine is when you specify -i, and even then it does not show normally, only through a message box saying something like "a message arrived, do you want to see it".
Do you know a way to start a program remotely and have its interface behave as it would if you had started it manually?
Thanks.
Normally, SSH servers run as a Windows service.
Windows services run in a separate Windows session (search for "Session 0 isolation"). They cannot access interactive (user) Windows sessions.
Also note that there can be multiple user sessions (multiple logged-in users) on Windows. How would the SSH server know which user session to display the GUI on (even if it could)?
The message you are getting comes from the "Interactive Services Detection" service, which detects that a service is trying to show a GUI in the invisible Session 0 and allows you to replicate that GUI in the user session.
You can run the SSH server in an interactive Windows session instead of as a service, though that has its limitations.
In general, all of this (running a GUI application on Windows remotely through SSH) does not look like a good idea to me.
Also, this question is more about the specific SSH server than about the SSH client you are using, so if you include details about your SSH server, you can get better answers.
OK, I found a way: using subprocess with schtasks (the Windows Task Scheduler). For whatever reason, when I launch a remote process with it, it starts as if I had clicked the exe myself. To have it start with no delay, create the task with an old date like 2012 using schtasks /Create /F and then run the named task with schtasks /Run, as sketched below.
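A hedged sketch of that trick driven from Python's subprocess module (task name and program path are placeholders; schtasks can also target a remote host with /S, /U and /P):

import subprocess

task_name = "RemoteNotepad"          # placeholder task name
program = r"C:\Windows\notepad.exe"  # placeholder program to launch

# Create (or overwrite, /F) a one-shot task scheduled in the past,
# so it never fires on its own.
subprocess.run([
    "schtasks", "/Create", "/F",
    "/TN", task_name,
    "/TR", program,
    "/SC", "ONCE",
    "/ST", "00:00",
    "/SD", "01/01/2012",   # date format depends on the system locale
], check=True)

# Trigger it immediately; the program starts in the logged-in user's
# interactive session, so its window is visible.
subprocess.run(["schtasks", "/Run", "/TN", task_name], check=True)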

Service inside docker container stops after some time

I have deployed a REST service inside a Docker container using uWSGI and nginx.
When I run this Python Flask REST service inside the Docker container, it works fine for about the first hour, but after some time nginx and the REST service stop for some reason.
Has anyone faced a similar issue?
Is there any known fix for this issue?
Consider doing a docker ps -a to get the stopped container's identifier.
-a here just means listing all of the containers on your machine.
Then do docker inspect and look for the LogPath attribute.
Open the container's log file and see if you can identify the root cause of why the process died inside the container. (You might need root permissions to do this.)
Note: a process can die for any number of reasons, e.g. a code fault.
If nothing suspicious shows up in the log file, check the State attribute. Also check the ExitCode attribute and work backwards to see where your application could have exited with that code.
Also check the OOMKilled flag; if it is true, your container was killed due to an out-of-memory error.
If you still can't figure out why, you might need to add more logging to your application to get more insight into why it died.
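The same fields can also be read programmatically, e.g. with docker-py (a sketch; "my-container" is a placeholder for the stopped container's name or ID):

import docker

client = docker.from_env()
state = client.containers.get("my-container").attrs["State"]

print("Status:   ", state["Status"])
print("ExitCode: ", state["ExitCode"])
print("OOMKilled:", state["OOMKilled"])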

python log manager

I have several Python programs that run in parallel.
I want to write a Python program that manages the other programs' logs, meaning that the other programs send their log messages to this program and it writes them to the log file.
Another important feature is that if one of the programs crashes, the 'log manager program' will know about it and can write that to the log file.
I tried to use this sample: http://docs.python.org/library/logging.html#sending-and-receiving-logging-events-across-a-network
but I failed.
Can anyone please help me?
I wrote a Python logger that does just this (even with MPI support).
It is available at https://github.com/JensTimmerman/VSC-tools/blob/master/vsc/fancylogger.py
This logger can log to a UDP port on a remote machine.
There I run a daemon that collects the logs and writes them to a file:
https://github.com/JensTimmerman/VSC-tools/blob/master/bin/logdaemon.py
This script will start the daemon for you:
https://github.com/JensTimmerman/VSC-tools/blob/master/bin/startlogdaemon.sh
If you then start your Python processes and run them in parallel (with MPI, for example), you only need to call fancylogger.getLogger() and use it as a normal Python logger.
It will pick up the environment variables set by the script, log to that server, and include some extra MPI info in the log records (like the MPI thread number).
If you do not use MPI, you have two options:
set the 'FANCYLOG_SERVER' and 'FANCYLOG_SERVER_PORT' environment variables manually in each shell where you start the remote Python process,
or just start the daemon and, in the Python scripts, get your logger like this:
import fancylogger
fancylogger.logToUDP(hostname, port=5005)
logger = fancylogger.getLogger()
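For comparison, the sender side of the standard-library approach from the cookbook entry linked in the question is roughly this (hostname and port are placeholders; a receiving socket server that unpickles the LogRecords, as shown in the cookbook, has to be running):

import logging
import logging.handlers

root = logging.getLogger()
root.setLevel(logging.DEBUG)

# Each worker process sends pickled LogRecords over TCP to the collector.
root.addHandler(logging.handlers.SocketHandler(
    "localhost", logging.handlers.DEFAULT_TCP_LOGGING_PORT))

logging.info("worker started")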
