Python logging from multiple Docker containers back to localhost - python

I have a scenario where I have created an emulator in Python3 of a test node that can be launched in a Docker container.
So basically, on one server running Ubuntu 18.04, I have 50 ~ 100 containers, each one emulating a node and performing a basic file transfer task.
Each container is running a Python3 application that emulates a node. For logging purposes, I have the following:
import logging
logging.basicConfig(format='%(asctime)s : %(message)s', filename='test.log', datefmt='%Y-%m-%d %H:%M:%S', level=logging.DEBUG)
So basically by executing:
logging.error("File transfer failed")
I get a log file test.log with the proper formatted time stamp and error message.
The issue is this is occurring inside the container, and for that matter, inside 50 ~ 100 containers.
Is there a way to have all the containers log to a single log file on the localhost where the containers run? I have looked at log handlers in Python but cannot seem to wrap my head around getting out of the container and writing to a file on the local host.

How about using a Docker volume? Docker volumes can be used to persist data to an external file system. By doing so, your containers will be able to read from and write to your local hard drive instead of creating log files inside the containers themselves.
But you may have to find a way to avoid race conditions when writing to the shared location.
Read about Docker volumes in the official docs. It's pretty easy.
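A minimal sketch of what that could look like from the Python side, assuming a host directory is mounted into each container at /logs (for example with docker run -v /var/log/emulator:/logs ...; both paths are made up here). Giving each container its own file also sidesteps the race-condition concern:

import logging
import socket

# Assumption: a host directory is mounted at /logs inside the container.
# Using the container's hostname in the filename gives every container its
# own log file, so writers never collide.
logging.basicConfig(
    format='%(asctime)s : %(message)s',
    filename='/logs/test-%s.log' % socket.gethostname(),
    datefmt='%Y-%m-%d %H:%M:%S',
    level=logging.DEBUG,
)

logging.error("File transfer failed")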

The default idiom for Docker logging is to log to stdout. So just don't specify a file when you do basicConfig() and logs will go there by default.
You can then access those logs with the docker logs command.
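A minimal sketch of that approach: leave out filename and basicConfig() attaches a StreamHandler (stderr by default), which Docker captures together with stdout:

import logging

# No filename: records go to a stream handler (stderr by default), which
# Docker collects and exposes via `docker logs <container>`.
logging.basicConfig(
    format='%(asctime)s : %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
    level=logging.DEBUG,
)

logging.error("File transfer failed")

If you want the output of all 50 ~ 100 containers in one place on the host, Docker's logging drivers (for example syslog or journald) can forward each container's stdout/stderr to a single destination.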

Related

Is it possible to get a Docker container to read a file from host file system not using Volumes?

I have a Python script inside a container that needs to continuously read changing values inside a file located on the host file system. Using a volume to mount the file won't work because that only captures a snapshot of the values in the file at that moment. I know it's possible since the node_exporter container is able to read files on the host filesystem using custom methods in Golang. Does anyone know a general method to accomplish this?
I have a Python script [...] that needs to continuously read changing values inside a file located on the host file system.
Just run it. Most Linux systems have Python preinstalled. You don't need Docker here. You can use tools like Python virtual environments if your application has Python library dependencies that need to be installed.
Is it possible to get a Docker container to read a file from host file system not using Volumes?
You need some kind of mount, perhaps a bind mount; docker run -v /host/path:/container/path image-name. Make sure to not overwrite the application's code in the image when you do this, since the mount will completely hide anything in the underlying image.
Without a bind mount, you can't access the host filesystem at all. This filesystem isolation is a key feature of Docker.
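Once the file is visible through a bind mount, a sketch of the continuous read could be as simple as re-reading it on an interval. The paths below are assumptions; mounting the containing directory rather than the single file keeps in-place replacements on the host visible inside the container:

import time

# Assumption: the host directory holding the file is mounted at /data,
# e.g. docker run -v /host/dir:/data ..., so /data/values.txt tracks the
# host file as it changes.
def watch(path, interval=1.0):
    last = None
    while True:
        with open(path) as f:
            current = f.read()
        if current != last:
            print("values changed:", current.strip())
            last = current
        time.sleep(interval)

watch("/data/values.txt")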
...the [Prometheus] node_exporter container...
Reading from the GitHub link in the question, "It's not recommended to deploy it as a Docker container because it requires access to the host system." The docker run example there uses a bind mount to access the entire host filesystem, circumventing Docker's filesystem isolation.

Pass argument to python script running in a docker container

Suppose the following setup:
Website written in php / laravel
User uploads a file (either text / doc / pdf)
We have a docker container which contains a python script for converting text into a numpy array.
I want to take this uploaded data and pass it to the python script.
I can't find anything which explains how to pass dynamically generated inputs into a container.
Can this be done by executing a shell script from inside the Laravel app, with the uploaded file passed as a variable to the Dockerfile's ENTRYPOINT?
Are there any other ways of doing this?
I would strongly recommend using TCP/IP for this purpose. In this case you benefit from the following:
You can detect whether your python service is online
You can move python container to another machine
Implementation is really simple. You can choose any framework, but Twisted suits me, so you could implement your Python script as follows:
from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class DataProcessor(LineReceiver):
    def lineReceived(self, line):
        # line contains your data (bytes), one line at a time
        pass

factory = Factory()
factory.protocol = DataProcessor

reactor.listenTCP(8080, factory)
reactor.run()
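For the sending side (whatever the Laravel app shells out to, or any other client), delivering data to this service is just a TCP write ending in CRLF, LineReceiver's default delimiter. A rough sketch, with the hostname made up:

import socket

# Hypothetical client: connect to the converter service and send one line.
# LineReceiver splits incoming data on b"\r\n" by default.
with socket.create_connection(("converter-host", 8080)) as conn:
    conn.sendall(b"contents of the uploaded file\r\n")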
... a python script for ...
Just run it; don't package it into a Docker container. That's doubly true if its inputs and outputs are both local files, and it expects to do its thing and exit promptly: the filesystem isolation Docker provides works against you here.
This is, of course, technically possible. Depending on how exactly the support program container is set up, the "command" at the end of docker run will be visible to the Python script in sys.argv, like any other command-line options. You can use a docker run -v option to publish parts of the host's filesystem into the container. So you might be able to run something like
docker run --rm -v $PWD/files:/data \
  converter_image \
  python convert.py /data/in.txt /data/out.pkl
where all of the /data paths are in the container's private filesystem space.
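On the container side, the convert.py named in that command (a hypothetical script) would simply pick its input and output paths out of sys.argv, e.g.:

import pickle
import sys

import numpy as np

# Hypothetical convert.py: the paths arrive through the docker run
# "command", exactly as they would for a locally invoked script.
def main():
    in_path, out_path = sys.argv[1], sys.argv[2]
    with open(in_path) as f:
        data = np.array([float(token) for token in f.read().split()])
    with open(out_path, "wb") as f:
        pickle.dump(data, f)

if __name__ == "__main__":
    main()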
There are two big caveats:
The host paths in the docker run -v option are paths specifically on the physical host. If your HTTP service is also running in a container you need to know some host-system path you can write to that's also visible in your container filesystem.
Running any docker command at all effectively requires root privileges. If any of the filenames or paths involved are dynamic, shell injection attacks can compromise your system. Be very careful with how you run this from a network-accessible script.
One way to do this would be to upload the files to a directory to which the Docker container has access, and then poll that directory for new files using the Python script. You can access local directories from Docker containers using "bind mounts". Google something like "How to share data between a Docker container and host system" to read more about bind mounts and shared volumes.
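A rough sketch of such a polling loop, assuming the shared directory is mounted into the container at /uploads and with handle_file() standing in for the real text-to-numpy conversion:

import os
import time

UPLOAD_DIR = "/uploads"  # assumed bind-mount target inside the container

def handle_file(path):
    # placeholder for the actual text -> numpy conversion
    print("processing", path)

seen = set()
while True:
    for name in os.listdir(UPLOAD_DIR):
        path = os.path.join(UPLOAD_DIR, name)
        if os.path.isfile(path) and path not in seen:
            handle_file(path)
            seen.add(path)
    time.sleep(2)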

How to handle the STDOUT of a docker container with dockerpy?

I'm planning to write an app that can remotely control as well as interact with a container. Now the only problem is to handle the output.
With the help of docker-py, I can control docker. However, I want to control the STDOUT of the container. For example, I run a logging system in the container and want to redirect the output log to python STDOUT simultaneously or to the remote client.
How should I do it with docker-py, or is there any other way to implement it?
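For what it's worth, one common way to approach this with the Docker SDK for Python (docker-py) is to stream the container's log output; a minimal sketch, with the container name made up:

import docker

# Assumption: a container named "logger-demo" is already running.
# logs(stream=True, follow=True) yields its combined stdout/stderr
# as a stream of bytes, which we relay to this script's own stdout.
client = docker.from_env()
container = client.containers.get("logger-demo")

for chunk in container.logs(stream=True, follow=True):
    print(chunk.decode("utf-8"), end="")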

Recover previous Python output to terminal

I have a script, written in Python, that runs on an Ubuntu server. Today, I mistakenly closed the PuTTY window after checking that the script was running correctly.
There is some useful information that was printed while the script was running, and I would like to recover it.
Is there a directory, like /var/log/syslog for system logs, for Python logs?
This script takes 24 hours to run on a very costly AWS EC2 instance, and running it again is not an option.
Yes, I should have printed the useful information to a log file myself from the Python script, but no, I did not do that.
Unless the script has an internal logging mechanism, e.g. using logging as mentioned in the comments, the output will have been written to /dev/stdout or /dev/stderr respectively. In that case, if you did not redirect those streams to a file for persistent storage, e.g. with tee, your output is lost.
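For next time, a small sketch of the kind of setup the answer alludes to: log to both the terminal and a file, so a closed PuTTY/SSH session costs nothing (the filename here is arbitrary):

import logging

# Every record goes both to the terminal and to a persistent file on disk.
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
    handlers=[
        logging.StreamHandler(),          # terminal
        logging.FileHandler('run.log'),   # persistent copy
    ],
)

logging.info("checkpoint reached")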

Service inside docker container stops after some time

I have deployed a REST service inside a Docker container using uWSGI and nginx.
When I run this Python Flask REST service inside the Docker container, it works fine for the first hour, but after some time nginx and the REST service stop for some reason.
Has anyone faced similar issue?
Is there any know fix for this issue?
Consider doing a docker ps -a to get the stopped container's identifier.
-a here just means listing all of the containers on your machine.
Then do docker inspect and look for the LogPath attribute.
Open up the container's log file and see if you can identify the root cause of why the process died inside the container. (You might need root permission to do this.)
Note: a process can die for any number of reasons, e.g. a fault in the code.
If nothing suspicious is presented in the log file, then you might want to check the State attribute. Also check the ExitCode attribute to see if you can work backwards to where your application could have exited with that code.
Also check the OOMKilled flag; if this is true, it means your container could have been killed due to an out-of-memory error.
If you still can't figure out why, then you might need to add more logging to your application to give you more insight into why it died.
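If you prefer to script those checks, the same fields can be read programmatically through the Docker SDK for Python; a rough sketch, with the container name as a placeholder:

import docker

# Pull the inspect data for a (possibly stopped) container and print the
# fields discussed above: State/Status, ExitCode, OOMKilled and LogPath.
client = docker.from_env()
container = client.containers.get("my-flask-service")  # placeholder name

state = container.attrs["State"]
print("Status:   ", state["Status"])
print("ExitCode: ", state["ExitCode"])
print("OOMKilled:", state["OOMKilled"])
print("LogPath:  ", container.attrs["LogPath"])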
