How to handle the STDOUT of a Docker container with docker-py?

I'm planning to write an app that can remotely control and interact with a container. The only remaining problem is handling the output.
With docker-py I can control Docker, but I also want to capture the container's STDOUT. For example, I run a logging system in the container and want to redirect its log output to my Python process's STDOUT in real time, or forward it to a remote client.
How should I do this with docker-py, or is there another way to implement it?
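With docker-py this can be done by following the container's log stream. A minimal sketch, assuming a hypothetical image name and that the container writes its log to its console:

import docker

client = docker.from_env()

# "my-logging-image" is a placeholder -- substitute the container you actually run
container = client.containers.run("my-logging-image", detach=True)

# logs(stream=True, follow=True) yields output chunks as the container produces them;
# printing them mirrors the container's STDOUT/STDERR onto this process's STDOUT
for chunk in container.logs(stream=True, follow=True, stdout=True, stderr=True):
    print(chunk.decode(), end="", flush=True)

Since the log stream is a generator, the same loop could just as well forward each chunk to a remote client (for example over a socket) instead of printing it.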

Related

logging system kill signals for debugging

I have a script run by Python 3.7.9 in my Ubuntu 18.04 Docker container.
At some point the Python interpreter is killed, most likely because a resource limit was exceeded.
Using docker logs ${container_id} I only get the stderr from inside the container, but I am also interested in which resource was exceeded, so that I can give useful feedback to the developers.
Is this automatically logged at the system level (Linux, Docker)?
If not, how can I log it?
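For the memory case specifically, Docker records whether the kernel OOM killer terminated the container, and docker-py exposes that through the container's inspect data. A minimal sketch, assuming a hypothetical container name:

import docker

client = docker.from_env()

# "my-test-container" is a placeholder for the container whose interpreter was killed
container = client.containers.get("my-test-container")
state = container.attrs["State"]

# OOMKilled is set to True when the container hit its memory limit and the kernel killed it;
# an exit code of 137 (128 + SIGKILL) usually points the same way
print("OOMKilled:", state.get("OOMKilled"))
print("ExitCode:", state.get("ExitCode"))

Kernel-level OOM kills are also recorded in the host's kernel log, so that is another place to look for which process was killed and why.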

Python logging from multiple Docker containers back to localhost

I have a scenario where I have created, in Python 3, an emulator of a test node that can be launched in a Docker container.
So basically, on one server running Ubuntu 18.04, I have 50 ~ 100 containers, each one emulating a node and performing a basic file transfer task.
Each container runs a Python 3 application that emulates a node. For logging purposes, I have the following:
import logging
logging.basicConfig(format='%(asctime)s : %(message)s', filename='test.log', datefmt='%Y-%m-%d %H:%M:%S', level=logging.DEBUG)
So basically by executing:
logging.error ("File transfer failed")
I get a log file test.log with a properly formatted timestamp and error message.
The issue is that this happens inside the container, and for that matter, inside 50 ~ 100 containers.
Is there a way to have all the containers log to a single log file on the localhost where the containers run? I have looked at log handlers in Python but cannot seem to wrap my head around getting out of the container and writing to a file on the local host.
How about using a Docker volume? Docker volumes can be used to persist data to an external file system. That way, your containers can read from and write to your local hard drive instead of creating log files inside the containers themselves.
You may have to find a way to avoid race conditions when writing to the shared location, though. A sketch of one way to do that follows below.
Read about Docker volumes in the official docs; it's pretty straightforward.
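A minimal sketch of that approach, assuming each container is started with something like docker run -v /var/log/nodes:/logs my-node-image (the host directory /var/log/nodes and the image name are placeholders), and using one log file per container to sidestep the race condition:

import logging
import os
import socket

# one file per container: inside a container the hostname defaults to the container ID,
# so no two containers write to the same file and no cross-container locking is needed
logging.basicConfig(
    format='%(asctime)s : %(message)s',
    filename=os.path.join('/logs', socket.gethostname() + '.log'),
    datefmt='%Y-%m-%d %H:%M:%S',
    level=logging.DEBUG,
)

logging.error("File transfer failed")

The per-container files then sit side by side on the host, where they can be tailed or merged.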
The default idiom for Docker logging is to log to stdout/stderr. So just don't specify a filename when you call basicConfig(), and log output will go to the process's stderr by default, which Docker captures just like stdout.
You can then access those logs with the docker logs command.
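A minimal sketch of that idiom, reusing the basicConfig() call from the question with the filename dropped:

import logging

# no filename: the root logger's default StreamHandler writes to the process's stderr,
# which Docker captures alongside stdout and exposes via the docker logs command
logging.basicConfig(
    format='%(asctime)s : %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
    level=logging.DEBUG,
)

logging.error("File transfer failed")

On the host, docker logs (optionally with -f to follow) then shows each container's messages, and a Docker logging driver can forward them further if a single aggregated stream is needed.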

Run a script located on remote server using python

I have a Python script located on a remote server with SSH enabled. The script prints a lot of debug messages while executing. I want to trigger this script from another Python script on my local system and, depending on the output of the remote script, proceed further. While doing all this, I want the messages printed on the remote server to be displayed on my local system as well. Basically, I want to see whatever output the remote script produces during its run, on my local system. I am able to trigger the script using Paramiko, but I can neither check whether the script on the remote server is running nor view its output. Is there any way to do this? I already tried conn.recv(65535) but to no avail.
In my experience, the Python Fabric module is easier to use than Paramiko. If you want to execute a local script on a remote machine using Fabric, you just need to upload it using put() and then call the run() API.
http://docs.fabfile.org/en/1.14/api/core/operations.html#fabric.operations.put
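A minimal sketch with Fabric 1.x (the API the link above documents), assuming hypothetical host details and script paths; run() echoes the remote script's output to your local terminal as it is produced:

from fabric.api import env, put, run

# placeholder connection details -- adjust to your environment
env.hosts = ['user@remote-host']
env.password = 'secret'

def deploy_and_run():
    # copy the local script to the remote machine, then execute it;
    # run() streams the remote stdout/stderr back and aborts on a non-zero exit status by default
    put('local_script.py', '/tmp/remote_script.py')
    run('python3 /tmp/remote_script.py')

Invoked with fab deploy_and_run, the local side can then branch on whether the remote task succeeded.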

Trying to remotely start a process with visible window with Python on Windows machines

I tried to do this with WMI, but interactive processes cannot be started with it (as stated in the Microsoft documentation). I see the processes in Task Manager, but their windows do not appear.
I tried with Paramiko: same thing. The process is visible in Task Manager, but no window appears (Notepad, for example).
I tried with PsExec, but the only case where a window appears on the remote machine is when you specify -i, and even then it does not show normally, only through a message box saying something like "a message arrived, do you want to see it".
Do you know a way to start a program remotely, and have its interface behave like it would if you manually started it?
Thanks.
Normally, SSH servers run as a Windows service.
Windows services run in a separate Windows session (google "Session 0 isolation"). They cannot access the interactive (user) Windows sessions.
Also note that there can be multiple user sessions (multiple logged-in users) in Windows. How would the SSH server know which user session to display the GUI on (even if it could)?
The message you are getting comes from the "Interactive Services Detection" service, which detects that a service is trying to show a GUI on the invisible Session 0 and lets you replicate that GUI in the user session.
You can run the SSH server in an interactive Windows session instead of as a service. That has its limitations, though.
In general, all this (running a GUI application on Windows remotely through SSH) does not look like a good idea to me.
Also, this question is more about the specific SSH server than about the SSH client you are using. If you include details about your SSH server, you may get better answers.
OK, I found a way: calling schtasks (the Windows Task Scheduler) through subprocess. For whatever reason, when I launch a remote process with it, it starts as if I had double-clicked the exe myself. To make it start with no delay, create the task with an old start date like 2012 using schtasks /Create /F, then run the named task with schtasks /Run.
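A minimal sketch of that trick from Python's subprocess, with hypothetical host name, credentials, and task details (note that the /SD date format depends on the remote machine's locale):

import subprocess

HOST = "REMOTE-PC"          # placeholder remote machine
USER = r"REMOTE-PC\admin"   # placeholder account
PASSWORD = "secret"

# create (or overwrite, /F) a task whose start date lies in the past, so it only runs on demand
subprocess.run([
    "schtasks", "/Create", "/F",
    "/S", HOST, "/U", USER, "/P", PASSWORD,
    "/TN", "LaunchNotepad",
    "/TR", r"C:\Windows\System32\notepad.exe",
    "/SC", "ONCE", "/ST", "00:00", "/SD", "01/01/2012",
], check=True)

# trigger the task immediately; per the trick above, the process then starts
# as if the exe had been launched by hand on the remote machine
subprocess.run([
    "schtasks", "/Run",
    "/S", HOST, "/U", USER, "/P", PASSWORD,
    "/TN", "LaunchNotepad",
], check=True)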

Docker - Run Container from Inside Container

I have two applications:
a Python console script that does a short(ish) task and exits
a Flask "frontend" for starting the console app by passing it command line arguments
Currently, the Flask project carries a copy of the console script and runs it using subprocess when necessary. This works great in a Docker container but they are too tightly coupled. There are situations where I'd like to run the console script from the command line.
I'd like to separate the two applications into separate containers. To make this work, the Flask application needs to be able to start the console script in a separate container (which could be on a different machine). Ideally, I'd like to not have to run the console script container inside the Flask container, so that only one process runs per container. Plus I'll need to be able to pass the console script command line arguments.
Q: How can I spawn a container with a short lived task from inside a container?
You can just give the container access to execute Docker commands. It will either need direct access to the Docker socket, or it will need the various TCP environment variables and files (client certs, etc.). Obviously, it will also need a Docker client installed in the container.
A simple example of a container that can execute docker commands on the host:
docker run -v /var/run/docker.sock:/var/run/docker.sock your_image
It's important to note that this is not the same as running a docker daemon in a container. For that you need a solution like jpetazzo/dind.
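With the socket mounted as above, the Flask container can use docker-py instead of shelling out to the CLI. A minimal sketch, assuming a hypothetical image name for the console script and hypothetical command-line arguments:

import docker

# talks to the host daemon through the mounted /var/run/docker.sock
client = docker.from_env()

# run the short-lived task in its own container, pass its CLI arguments,
# and remove the container once it exits; the return value is the task's output
output = client.containers.run(
    "myorg/console-script:latest",
    command=["--job", "42"],
    remove=True,
)
print(output.decode())

client.containers.run() blocks until the container exits; for longer tasks, detach=True returns a Container object that can be polled instead.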
