Execute Host OS Command from Flask container [duplicate] - python

How can I control the host from inside a docker container?
For example, how can I execute a bash script that has been copied to the host?

This answer is just a more detailed version of Bradford Medeiros's solution, which for me as well turned out to be the best answer, so credit goes to him.
In his answer, he explains WHAT to do (named pipes) but not exactly HOW to do it.
I have to admit I didn't know what named pipes were when I read his solution. So I struggled to implement it (while it's actually very simple), but I did succeed.
So the point of my answer is just detailing the commands you need to run in order to get it working, but again, credit goes to him.
PART 1 - Testing the named pipe concept without docker
On the main host, choose the folder where you want to put your named pipe file, for instance /path/to/pipe/, and a pipe name, for instance mypipe, and then run:
mkfifo /path/to/pipe/mypipe
The pipe is created.
Type
ls -l /path/to/pipe/mypipe
And check the access rights start with "p", such as
prw-r--r-- 1 root root 0 mypipe
Now run:
tail -f /path/to/pipe/mypipe
The terminal is now waiting for data to be sent into this pipe.
Now open another terminal window.
And then run:
echo "hello world" > /path/to/pipe/mypipe
Check the first terminal (the one running tail -f); it should display "hello world".
PART 2 - Run commands through the pipe
On the host, instead of running tail -f (which just outputs whatever is sent as input), run this command, which will evaluate the input as commands:
eval "$(cat /path/to/pipe/mypipe)"
Then, from the other terminal, try running:
echo "ls -l" > /path/to/pipe/mypipe
Go back to the first terminal and you should see the result of the ls -l command.
PART 3 - Make it listen forever
You may have noticed that in the previous part, right after the ls -l output is displayed, the host stops listening for commands.
Instead of eval "$(cat /path/to/pipe/mypipe)", run:
while true; do eval "$(cat /path/to/pipe/mypipe)"; done
(you can nohup that)
Now you can send an unlimited number of commands one after the other; they will all be executed, not just the first one.
PART 4 - Make it work even when reboot happens
The only caveat is if the host has to reboot, the "while" loop will stop working.
To handle reboots, here is what I've done:
Put the while true; do eval "$(cat /path/to/pipe/mypipe)"; done loop in a file called execpipe.sh with a #!/bin/bash header.
Don't forget to chmod +x it.
Add it to crontab by running
crontab -e
And then adding
@reboot /path/to/execpipe.sh
At this point, test it: reboot your server, and when it's back up, echo some commands into the pipe and check if they are executed.
Of course, you aren't able to see the output of commands, so ls -l won't help, but touch somefile will help.
Another option is to modify the script to put the output in a file, such as:
while true; do eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt; done
Now you can run ls -l and the output (both stdout and stderr using &> in bash) should be in output.txt.
PART 5 - Make it work with docker
If you are using both docker compose and a dockerfile, like I do, here is what I've done:
Let's assume you want to mount mypipe's parent folder as /hostpipe in your container.
Add this:
VOLUME /hostpipe
in your dockerfile in order to create a mount point
Then add this:
volumes:
  - /path/to/pipe:/hostpipe
in your docker compose file in order to mount /path/to/pipe as /hostpipe
Restart your docker containers.
PART 6 - Testing
Exec into your docker container:
docker exec -it <container> bash
Go into the mount folder and check you can see the pipe:
cd /hostpipe && ls -l
Now try running a command from within the container:
echo "touch this_file_was_created_on_main_host_from_a_container.txt" > /hostpipe/mypipe
And it should work!
WARNING: If you have a macOS host and a Linux container, this won't work (explanation here: https://stackoverflow.com/a/43474708/10018801 and issue here: https://github.com/docker/for-mac/issues/483), because the pipe implementation is not the same: what you write into the pipe from Linux can only be read by Linux, and what you write into the pipe from macOS can only be read by macOS (this sentence might not be very accurate, but just be aware that a cross-platform issue exists).
For instance, when I run my docker setup in DEV from my macOS computer, the named pipe as explained above does not work. But in staging and production, where I have a Linux host and Linux containers, it works perfectly.
PART 7 - Example from Node.JS container
Here is how I send a command from my Node.JS container to the main host and retrieve the output:
const fs = require("fs")

const pipePath = "/hostpipe/mypipe"
const outputPath = "/hostpipe/output.txt"
const commandToRun = "pwd && ls -l"

console.log("delete previous output")
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath)

console.log("writing to pipe...")
const wstream = fs.createWriteStream(pipePath)
wstream.write(commandToRun)
wstream.close()

console.log("waiting for output.txt...") // there are better ways to do this than setInterval
const timeout = 10000 // stop waiting after 10 seconds (something might be wrong)
const timeoutStart = Date.now()
const myLoop = setInterval(function () {
    if (Date.now() - timeoutStart > timeout) {
        clearInterval(myLoop)
        console.log("timed out")
    } else {
        // if output.txt exists, read it
        if (fs.existsSync(outputPath)) {
            clearInterval(myLoop)
            const data = fs.readFileSync(outputPath).toString()
            fs.unlinkSync(outputPath) // delete the output file
            console.log(data) // log the output of the command
        }
    }
}, 300)
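Since the original question is about a Flask (Python) container, here is a rough Python equivalent of the above. This is only a sketch, assuming the same /hostpipe/mypipe and /hostpipe/output.txt paths are mounted in the container and the host-side loop redirects output into that mounted folder as in PART 4:
import os
import time

PIPE_PATH = "/hostpipe/mypipe"        # named pipe mounted from the host
OUTPUT_PATH = "/hostpipe/output.txt"  # file the host-side loop writes into
COMMAND = "pwd && ls -l"

# delete any previous output
if os.path.exists(OUTPUT_PATH):
    os.remove(OUTPUT_PATH)

# writing to a FIFO blocks until the host-side reader opens it
with open(PIPE_PATH, "w") as pipe:
    pipe.write(COMMAND)

# poll for the output file, give up after ~10 seconds
deadline = time.time() + 10
while time.time() < deadline:
    if os.path.exists(OUTPUT_PATH):
        with open(OUTPUT_PATH) as f:
            print(f.read())  # log the output of the command
        os.remove(OUTPUT_PATH)
        break
    time.sleep(0.3)
else:
    print("timed out waiting for output.txt")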

Use a named pipe.
On the host OS, create a script that loops, reads commands from the pipe, and calls eval on them.
Have the docker container write to that named pipe.
To be able to access the pipe, you need to mount it via a volume.
This is similar to the SSH mechanism (or a similar socket-based method), but it restricts you properly to the host device, which is probably better. Plus you don't have to pass around authentication information.
My only warning is to be cautious about why you are doing this. It's totally something to do if you want to create a method to self-upgrade with user input or whatever, but you probably don't want to call a command to get some config data, as the proper way would be to pass that in as args/volume into docker. Also, be cautious about the fact that you are eval-ing, so give the permission model a thought.
Some of the other answers, such as running a script under a volume, won't work generically since they won't have access to the full system resources, but they might be more appropriate depending on your usage.

The solution I use is to connect to the host over SSH and execute the command like this:
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
UPDATE
As this answer keeps getting upvotes, I would like to remind you (and highly recommend) that the account being used to invoke the script should be an account with no permissions at all, except for running that script as sudo (which can be configured in the sudoers file).
UPDATE: Named Pipes
The solution I suggested above was only what I used while I was relatively new to Docker. Now, in 2021, take a look at the answers that talk about named pipes. They seem to be a better solution.
However, nobody there mentions anything about security. The script that evaluates the commands sent through the pipe (the script that calls eval) must not actually eval the whole pipe output, but should handle specific cases and call the required commands according to the text sent; otherwise any command that can do anything can be sent through the pipe.
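To illustrate that last point, here is one possible shape for such a host-side listener, sketched in Python rather than bash; the pipe path and the allowed commands are placeholders you would adapt, and this is not from the original answer:
import subprocess

PIPE_PATH = "/path/to/pipe/mypipe"

# map short, known request names to the exact commands they may trigger
ALLOWED = {
    "disk-usage": ["df", "-h"],
    "restart-nginx": ["systemctl", "restart", "nginx"],  # example placeholder
}

while True:
    # opening the FIFO blocks until the container writes something
    with open(PIPE_PATH) as pipe:
        request = pipe.read().strip()
    cmd = ALLOWED.get(request)
    if cmd is None:
        print("ignoring unknown request: %r" % request)
        continue
    subprocess.run(cmd)  # never eval arbitrary text from the pipe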

That REALLY depends on what you need that bash script to do!
For example, if the bash script just echoes some output, you could just do
docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
Another possibility is that you want the bash script to install some software - say, a script to install docker-compose. You could do something like:
docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
But at this point you're really getting into having to know intimately what the script is doing to allow the specific permissions it needs on your host from inside the container.

My laziness led me to find the easiest solution that wasn't published as an answer here.
It is based on the great article by luc juggery.
All you need to do in order to gain a full shell to your linux host from within your docker container is:
docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh
Explanation:
--privileged: grants additional permissions to the container; it allows the container to gain access to the devices of the host (/dev).
--pid=host: allows the container to use the process tree of the Docker host (the VM in which the Docker daemon is running).
nsenter utility: allows running a process in existing namespaces (the building blocks that provide isolation to containers).
nsenter (-t 1 -m -u -n -i sh) runs the process sh in the same isolation context as the process with PID 1.
The whole command then provides an interactive sh shell in the VM.
This setup has major security implications and should be used with caution (if at all).

Write a simple Python server listening on a port (say 8080), bind the port to the container with -p 8080:8080, and make an HTTP request to localhost:8080 to ask the Python server to run shell scripts with popen. You can run curl or write code to make the HTTP request, e.g. curl -d '{"foo":"bar"}' localhost:8080.
#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import subprocess
import json

PORT_NUMBER = 8080

# This class handles any incoming request from the browser
class myHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        content_len = int(self.headers.getheader('content-length'))
        post_body = self.rfile.read(content_len)
        self.send_response(200)
        self.end_headers()
        data = json.loads(post_body)
        # Use the post data
        cmd = "your shell cmd"
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
        # read the output first, then collect the exit status
        (output, err) = p.communicate()
        p_status = p.wait()
        print "Command output : ", output
        print "Command exit status/return code : ", p_status
        self.wfile.write(cmd + "\n")
        return

try:
    # Create a web server and define the handler to manage
    # incoming requests
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER
    # Wait forever for incoming http requests
    server.serve_forever()
except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
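From a Python/Flask container you could then trigger it either with curl as shown above, or with a few lines of Python. This is only a sketch, assuming the host is reachable from the container (e.g. via host.docker.internal on Docker Desktop, or the docker0 gateway IP on Linux):
import json
import urllib.request

# adjust the hostname/IP to however your container reaches the host
req = urllib.request.Request(
    "http://host.docker.internal:8080",
    data=json.dumps({"foo": "bar"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req, timeout=10) as resp:
    print(resp.read().decode())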

If you are not worried about security and you're simply looking to start a docker container on the host from within another docker container like the OP, you can share the docker server running on the host with the docker container by sharing its listening socket.
Please see https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface and see if your personal risk tolerance allows this for this particular application.
You can do this by adding the following volume args to your start command
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
or by sharing /var/run/docker.sock within your docker compose file like this:
version: '3'
services:
  ci:
    command: ...
    image: ...
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
When you run the docker start command within your docker container,
the docker server running on your host will see the request and provision the sibling container.
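If the container runs Python, the same thing can be done without installing the docker CLI in the image by using the Docker SDK for Python (pip install docker), which talks to the mounted socket; a minimal sketch:
import docker

# connects through the mounted /var/run/docker.sock
client = docker.from_env()

# start a sibling container on the host's Docker daemon and capture its output
output = client.containers.run("alpine", "echo hello from a sibling", remove=True)
print(output.decode())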
credit: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/

As Marcus reminds us, docker is basically process isolation. Starting with docker 1.8, you can copy files both ways between the host and the container; see the documentation for docker cp:
https://docs.docker.com/reference/commandline/cp/
Once a file is copied, you can run it locally

docker run --detach-keys="ctrl-p" -it -v /:/mnt/rootdir --name testing busybox
# chroot /mnt/rootdir
#

I have a simple approach.
Step 1: Mount /var/run/docker.sock:/var/run/docker.sock (So you will be able to execute docker commands inside your container)
Step 2: Execute the command below inside your container. The key part here is --network host, as this will execute in the host's network context.
docker run -i --rm --network host -v /opt/test.sh:/test.sh alpine:3.7
sh /test.sh
test.sh should contain whatever commands you need (ifconfig, netstat, etc...).
Now you will be able to get host-context output.

You can use the pipe concept, but use a file on the host and fswatch to accomplish the goal of executing a script on the host machine from a docker container. Like so (use at your own risk):
#! /bin/bash
touch .command_pipe
chmod +x .command_pipe
# Use fswatch to execute a command on the host machine and log result
fswatch -o --event Updated .command_pipe | \
xargs -n1 -I "{}" .command_pipe >> .command_pipe_log &
docker run -it --rm \
--name alpine \
-w /home/test \
-v $PWD/.command_pipe:/dev/command_pipe \
alpine:3.7 sh
rm -rf .command_pipe
kill %1
In this example, inside the container send commands to /dev/command_pipe, like so:
/home/test # echo 'docker network create test2.network.com' > /dev/command_pipe
On the host, you can check if the network was created:
$ docker network ls | grep test2
8e029ec83afe test2.network.com bridge local

In my scenario, I just SSH into the host (using the host IP) from within the container, and then I can do anything I want on the host machine.

I found the answers using named pipes awesome. But I was wondering if there is a way to get the output of the executed command.
The solution is to create two named pipes:
mkfifo /path/to/pipe/exec_in
mkfifo /path/to/pipe/exec_out
Then, the solution using a loop, as suggested by @Vincent, becomes:
# on the host
while true; do eval "$(cat exec_in)" > exec_out; done
And then on the docker container, we can execute the command and get the output using:
# on the container
echo "ls -l" > /path/to/pipe/exec_in
cat /path/to/pipe/exec_out
If anyone is interested, my need was to use a failover IP on the host from the container; I created this simple Ruby method:
def fifo_exec(cmd)
  exec_in = '/path/to/pipe/exec_in'
  exec_out = '/path/to/pipe/exec_out'
  %x[ echo #{cmd} > #{exec_in} ]
  %x[ cat #{exec_out} ]
end
# example
fifo_exec "curl https://ip4.seeip.org"
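For a Python container, a rough equivalent of that helper (a sketch, assuming the same two pipes are mounted at the same paths) could be:
def fifo_exec(cmd):
    exec_in = "/path/to/pipe/exec_in"
    exec_out = "/path/to/pipe/exec_out"
    # the write blocks until the host-side loop reads the pipe
    with open(exec_in, "w") as f:
        f.write(cmd)
    # the host writes the command's output into exec_out
    with open(exec_out) as f:
        return f.read()

# example
print(fifo_exec("curl https://ip4.seeip.org"))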

Depending on the situation, this could be a helpful resource.
This uses a job queue (Celery) that can be run on the host; commands/data can be passed to it through Redis (or RabbitMQ). In the example below, this occurs in a Django application (which is commonly dockerized).
https://www.codingforentrepreneurs.com/blog/celery-redis-django/
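As an illustration only (not taken from the linked article), the general shape is a Celery worker started on the host that exposes one narrow task; the broker URL, module name and script path below are placeholders:
import subprocess
from celery import Celery

# host_tasks.py -- start the worker on the host with: celery -A host_tasks worker
app = Celery("host_tasks", broker="redis://localhost:6379/0")

@app.task
def run_host_script():
    # runs on the host, so it can touch host resources directly
    result = subprocess.run(["/path/to/script.sh"], capture_output=True, text=True)
    return result.stdout

The dockerized app would then call run_host_script.delay() and let the broker carry the request to the host.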

To expand on user2915097's response:
The idea of isolation is to be able to restrict what an application/process/container (whatever your angle at this is) can do to the host system very clearly. Hence, being able to copy and execute a file would really break the whole concept.
Yes. But it's sometimes necessary.
No. That's not the case, or Docker is not the right tool to use. What you should do is declare a clear interface for what you want to do (e.g. updating a host config), and write a minimal client/server to do exactly that and nothing more. Generally, however, this doesn't seem to be very desirable. In many cases, you should simply rethink your approach and eliminate that need. Docker came into existence when basically everything was a service reachable through some protocol. I can't think of any proper use case of a Docker container getting the rights to execute arbitrary stuff on the host.

Related

Multi-threading not working w/ Flask and Docker [duplicate]

I have a Python (2.7) app which is started in my dockerfile:
CMD ["python","main.py"]
main.py prints some strings when it is started and then goes into a loop:
import time

print "App started"
while True:
    time.sleep(1)
As long as I start the container with the -it flag, everything works as expected:
$ docker run --name=myapp -it myappimage
> App started
And I can see the same output via logs later:
$ docker logs myapp
> App started
If I try to run the same container with the -d flag, the container seems to start normally, but I can't see any output:
$ docker run --name=myapp -d myappimage
> b82db1120fee5f92c80000f30f6bdc84e068bafa32738ab7adb47e641b19b4d1
$ docker logs myapp
$ (empty)
But the container still seems to run;
$ docker ps
Container Status ...
myapp up 4 minutes ...
Attach does not display anything either:
$ docker attach --sig-proxy=false myapp
(working, no output)
Any ideas what's going wrong? Does print behave differently when run in the background?
Docker version:
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.2
Git commit (client): a8a31ef
OS/Arch (client): linux/arm
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.2
Git commit (server): a8a31ef
Finally I found a solution to see Python output when running daemonized in Docker, thanks to @ahmetalpbalkan over at GitHub. Answering it here myself for further reference:
Using unbuffered output with
CMD ["python","-u","main.py"]
instead of
CMD ["python","main.py"]
solves the problem; you can see the output now (both, stderr and stdout) via
docker logs myapp
why -u works (ref):
- print is indeed buffered, and docker logs will eventually give you that output, just after enough of it has piled up
- executing the same script with python -u gives instant output, as said above
- import logging + logging.warning("text") gives the expected result even without -u
what python -u means (ref):
> python --help | grep -- -u
-u : force the stdout and stderr streams to be unbuffered;
In my case, running Python with -u didn't change anything. What did the trick, however, was to set PYTHONUNBUFFERED=1 as environment variable:
docker run --name=myapp -e PYTHONUNBUFFERED=1 -d myappimage
[Edit]: Updated PYTHONUNBUFFERED=0 to PYTHONUNBUFFERED=1 after Lars's comment. This doesn't change the behavior and adds clarity.
If you want to see your print output in your Flask output when running docker-compose up, add the following to your docker compose file:
web:
  environment:
    - PYTHONUNBUFFERED=1
https://docs.docker.com/compose/environment-variables/
See this article, which explains the reason for the behavior in detail:
There are typically three modes for buffering:
If a file descriptor is unbuffered then no buffering occurs whatsoever, and function calls that read or write data occur immediately (and will block).
If a file descriptor is fully-buffered then a fixed-size buffer is used, and read or write calls simply read or write from the buffer. The buffer isn’t flushed until it fills up.
If a file descriptor is line-buffered then the buffering waits until it sees a newline character. So data will buffer and buffer until a \n is seen, and then all of the data that buffered is flushed at that point in time. In reality there’s typically a maximum size on the buffer (just as in the fully-buffered case), so the rule is actually more like “buffer until a newline character is seen or 4096 bytes of data are encountered, whichever occurs first”.
And GNU libc (glibc) uses the following rules for buffering:
Stream              Type     Behavior
stdin               input    line-buffered
stdout (TTY)        output   line-buffered
stdout (not a TTY)  output   fully-buffered
stderr              output   unbuffered
So, if you use -t, per the Docker documentation, it allocates a pseudo-tty; stdout then becomes line-buffered, so docker run --name=myapp -it myappimage can show the one-line output.
And if you just use -d, no tty is allocated; stdout is then fully-buffered, and the single line App started is not enough to flush the buffer.
So, use -dt to make stdout line-buffered, or add -u in python to flush the buffer; either fixes it.
Since I haven't seen this answer yet:
You can also flush stdout after you print to it:
import time

if __name__ == '__main__':
    while True:
        print('cleaner is up', flush=True)
        time.sleep(5)
Try to add these two environment variables to your solution PYTHONUNBUFFERED=1 and PYTHONIOENCODING=UTF-8
You can see logs from a detached container if you change print to logging.
main.py:
import time
import logging

print "App started"
logging.warning("Log app started")

while True:
    time.sleep(1)
Dockerfile:
FROM python:2.7-stretch
ADD . /app
WORKDIR /app
CMD ["python","main.py"]
If anybody is running the Python application with conda, you should add --no-capture-output to the command, since conda buffers stdout by default.
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "my-app", "python", "main.py"]
As a quick fix, try this:
from __future__ import print_function
import sys

# some code
print("App started", file=sys.stderr)
This works for me when I encounter the same problem. But, to be honest, I don't know why this happens.
I had to use PYTHONUNBUFFERED=1 in my docker-compose.yml file to see the output from django runserver.
If you aren't using docker-compose and just plain docker, you can add this to the Dockerfile that hosts your flask app:
ARG FLASK_ENV="production"
ENV FLASK_ENV="${FLASK_ENV}" \
    PYTHONUNBUFFERED="true"
CMD [ "flask", "run" ]
When using python manage.py runserver for a Django application, adding environment variable PYTHONUNBUFFERED=1 solve my problem. print('helloworld', flush=True) also works for me.
However, python -u doesn't work for me.
Usually, we redirect the output to a specific file (by mounting a volume from the host and writing to that file).
Adding a tty using -t is also fine; you then need to pick the output up with docker logs.
Even with large log outputs, I did not have any issue with the buffer storing everything without it ending up in the docker log.

Trying to connect to a python socket inside a docker container from host

I have to implement to my Distributed systems class the Berkeley Algorithm and I chose to do it in python with sockets. The master is supposed to run in the host and the slaves in docker containers.
The closest I got to connecting from the host (as the master) to the container (as the slave) was exposing the ports with the -p 9000:9000 flag when running the container. The host connects successfully to the container but doesn't receive or send anything (and the same goes for the container). From that I have come to the conclusion that the python socket inside the process simply is not receiving packets from the port. I have already tried using the --net=host flag, but then the host simply can't find the container. One bit of progress I had was to instantiate two docker containers and ping one from the other using the hostname provided in /etc/hosts, but this is not what I really want.
I have the whole code in github if you need the source. The code is commented in English, but the documentation is in Portuguese
Summarising: all I want to do is open a socket with Python inside a docker container and be able to reach it from the host machine. What kind of network configuration do I need to be able to do that?
EDIT: More info
The following bash script is used to instantiate three docker containers, then execute a command in each one of them to clone my repo, cd into it and into a test folder containing a bash script that starts a slave, and then start the master on the host:
docker run -it -d -p 127.0.0.1:9000:9000/tcp --name slave1 python bash
docker run -it -d -p 127.0.0.1:9001:9001/tcp --name slave2 python bash
docker run -it -d -p 127.0.0.1:9002:9002/tcp --name slave3 python bash
docker exec -t -d slave1 bash -c 'git clone https://github.com/guilhermePaciulli/BerkeleyAlgorithm.git;cd BerkeleyAlgorithm;git pull;cd test;bash slave_1.sh'
sleep 1
docker exec -t -d slave2 bash -c 'git clone https://github.com/guilhermePaciulli/BerkeleyAlgorithm.git;cd BerkeleyAlgorithm;git pull;cd test;bash slave_2.sh'
sleep 1
docker exec -t -d slave3 bash -c 'git clone https://github.com/guilhermePaciulli/BerkeleyAlgorithm.git;cd BerkeleyAlgorithm;git pull;cd test;bash slave_3.sh'
sleep 1
bash test/master.sh
To start each instance I use another bash command
To instantiate the slave I use:
python ../main.py -s 127.0.0.1:9000 175 logs/slave_log_1.txt
The -s is a flag to tell the main.py class that this is a slave, 127.0.0.1:9000 is the IP and port that this slave is going to listen on (and that the master is going to connect to), and the rest are just configuration (this example is for the first slave).
And to instantiate the master I use:
python ./main.py -m 127.0.0.1:8080 185 15 test/slaves.txt test/logs/master_log.txt
Just like the slave the -m tells main that this is a master, 127.0.0.1:8080 are the ip and port that the master is going to connect to the slave and the rest are just configurations.
When you run a server-type process inside a Docker container, it needs to be configured to listen on the special "all interfaces" address 0.0.0.0. Each container has its own notion of localhost or 127.0.0.1, and if you set a process to listen or bind to 127.0.0.1, it can only be reached from its own localhost which is different from all other containers' localhost and the host's localhost.
In the server command you show, you'd run something like
python ../main.py -s 0.0.0.0:9000 175 logs/slave_log_1.txt
(Strongly consider building a Dockerfile to describe how to build and start your image. Starting a bunch of empty containers, running git clone in each, and then manually launching processes is a lot of manual work that will be lost as soon as you docker rm the container.)
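For reference, the server side of that in plain Python looks like the following minimal sketch (not from the original post), the key line being the bind to 0.0.0.0:
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# bind to all interfaces so the published port (-p 9000:9000) actually reaches us
server.bind(("0.0.0.0", 9000))
server.listen()
conn, addr = server.accept()
print("connection from", addr)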
I looked through your code and I see you creating the server socket, binding it to a port and listening, but I could not find where you call the socket.accept() method.

How to check the status of docker-compose up -d command

When we run the docker-compose up -d command to start containers using a docker-compose.yml file, it starts building images or pulling images from the registry. We can see each and every step of this command on the terminal.
I am trying to run this command from a python script. The command starts successfully, but after that I have no idea how much of the process has been completed. Is there any way I can monitor the status of the docker-compose up -d command, so that the script can let the user (who is using the script) know how much of the process has completed, or whether the docker-compose command has failed for some reason?
Thanks
CODE:
from pexpect import pxssh

session = pxssh.pxssh()
if not session.login(ip_address, <USERNAME>, <PASSWORD>):
    print("SSH session failed on login")
    print(str(session))
else:
    print("SSH session login successful")
    session.sendline("sudo docker-compose up -d")
    session.prompt()
    resp = session.before
    print(resp)
You can view docker compose logs in the following ways:
Use docker compose up -d to start all services in detached mode (-d) (you won't see any logs in detached mode).
Use docker compose logs -f -t to attach yourself to the logs of all running services, where -f means you follow the log output and the -t option gives you nice timestamps (Docs).
credit
EDIT: Docker Compose is now available as part of the core Docker CLI. docker-compose is still supported for now but most documentation I have seen now refers to docker compose as standard. See https://docs.docker.com/compose/#compose-v2-and-the-new-docker-compose-command for more.
I think you should use the command docker-compose top and check the result. It should not be empty when the containers are running.
If the containers are stopped, exited, or only created, it should return empty output.
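Since the question asks about doing this from a Python script, you can wrap that check with subprocess; a sketch, assuming it is run from the directory containing docker-compose.yml:
import subprocess

# non-empty output from `docker-compose top` means at least one container is running
result = subprocess.run(["docker-compose", "top"], capture_output=True, text=True)
if result.returncode == 0 and result.stdout.strip():
    print("containers are running")
else:
    print("containers are stopped, exited, or only created")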
What I do to debug small issues is to run:
docker-compose up {service_name}
This way I get to see the output for an individual service. If the service has a dependency you can always start multiple services like so:
docker-compose up {service_name1} {service_name2}
Additionally I use:
docker-compose logs -f -t {service_name1}
To see the logs of an already running service or alternatively:
docker logs -t -f {container_name}
Notice that the command above needs the container name and not the service name
This way you can make sure service by service that everything works as expected and then you can launch them all in detached mode as suggested in the other answers
If you need a programmatic way with bash, this is the fastest implementation:
sleep 2 seconds
check that the container was up a few seconds ago => that means you've just successfully deployed it
docker ps will look like:
a6f088b1567e lc_fe_isr-app "docker-entrypoint.s…" 2 seconds ago Up 2 seconds 0.0.0.0:10001->3000/tcp lc_fe_isr-app-1
#!/bin/bash
#
# Check if a single container was started successfully
#
CONTAINER_NAME="lc_fe_isr-app-1"

sleep 2
docker ps | grep $CONTAINER_NAME

UP_SECONDS_AGO=`docker ps | grep $CONTAINER_NAME | grep ' seconds'`
echo $UP_SECONDS_AGO

if [ -n "$UP_SECONDS_AGO" ]
then
    echo "Deploy successful"
else
    echo "Deploy FAILED"
    exit 1
fi

Best way to manage docker containers with supervisord

I have to set up "dockerized" environments (integration, QA and production) on the same server (client's requirement). Each environment will be composed as follows:
rabbitmq
celery
flower
python 3 based application called "A" (specific branch per environment)
On top of them, Jenkins will handle the deployment based on CI.
Using a set of containers per environment sounds like the best approach.
But now I need a process manager to run and supervise all of them:
3 rabbit containers,
3 celery/flower containers,
3 "A" containers,
1 jenkins containers.
Supervisord seems to be the best choice, but during my tests, I'm not able to "properly" restart a container. Here is a snippet of the supervisord.conf:
[program:docker-rabbit]
command=/usr/bin/docker run -p 5672:5672 -p 15672:15672 tutum/rabbitmq
startsecs=20
autorestart=unexpected
exitcodes=0,1
stopsignal=KILL
So I wonder what is the best way to separate each environment and be able to manage and supervise each service (a container).
[EDIT: My solution, inspired by Thomas' response]
Each container is run by a .sh script that looks like the following.
rabbit-integration.sh:
#!/bin/bash
# set -x
SERVICE="rabbitmq"
SH_S="/path/to_shs"
export MY_ENV="integration"

. $SH_S/env_.sh
. $SH_S/utils.sh

SERVICE_ENV=$SERVICE-$MY_ENV
ID_FILE=/tmp/$SERVICE_ENV.name  # file holding the container name
trap stop SIGHUP SIGINT SIGTERM # trap signals to call the stop function

run_rabbitmq
$SH_S/env_.sh looks like:
# set env variable
...
case $MONARCH_ENV in
    $INTEGRATION)
        AMQP_PORT="5672"
        AMQP_IP="172.17.42.1"
        ...
        ;;
    $PREPRODUCTION)
        AMQP_PORT="5673"
        AMQP_IP="172.17.42.1"
        ...
        ;;
    $PRODUCTION)
        AMQP_PORT="5674"
        REDIS_IP="172.17.42.1"
        ...
esac
$SH_S/utils.sh looks like:
#!/bin/bash

function random_name(){
    echo "$SERVICE_ENV-$(cat /proc/sys/kernel/random/uuid)"
}

function stop (){
    echo "stopping docker container..."
    /usr/bin/docker stop `cat $ID_FILE`
}

function run_rabbitmq (){
    # do not daemonize and use stdout
    NAME="$(random_name)"
    echo $NAME > $ID_FILE
    /usr/bin/docker run -i --name "$NAME" -p $AMQP_IP:$AMQP_PORT:5672 -p $AMQP_ADMIN_PORT:15672 -e RABBITMQ_PASS="$AMQP_PASSWORD" myimage-rabbitmq &
    PID=$!
    wait $PID
}
Finally, myconfig.integration.conf looks like:
[program:rabbit-integration]
command=/path/sh_s/rabbit-integration.sh
startsecs=20
priority=90
autorestart=unexpected
exitcodes=0,1
stopsignal=TERM
In case I want to reuse the same container, the startup function looks like:
function _start_my_container () {
    NAME="my_container"
    /usr/bin/docker start -i $NAME &
    PID=$!
    wait $PID
    rc=$?
    if [[ $rc != 0 ]]; then
        _run_my_container
    fi
}
where
function _run_my_container (){
    /usr/bin/docker run -p{} -v{} --name "$NAME" myimage &
    PID=$!
    wait $PID
}
Supervisor requires that the processes it manages do not daemonize, as per its documentation:
Programs meant to be run under supervisor should not daemonize
themselves. Instead, they should run in the foreground. They should
not detach from the terminal from which they are started.
This is largely incompatible with Docker, where the containers are subprocesses of the Docker daemon itself (i.e. they are not subprocesses of Supervisor).
To be able to use Docker with Supervisor, you could write an equivalent of the pidproxy program that works with Docker.
But really, the two tools aren't really architected to work together, so you should consider changing one or the other:
Consider replacing Supervisor with Docker Compose (which is designed to work with Docker)
Consider replacing Docker with Rocket (which doesn't have a "master" process)
You need to make sure you use stopsignal=INT in your supervisor config, then exec docker run normally.
[program:foo]
stopsignal=INT
command=docker run --rm whatever
At least this seems to work for me with docker version 1.9.1.
If you run docker from inside a shell script, it is very important that you have exec in front of the docker run command, so that docker run replaces the shell process and thus receives the SIGINT directly from supervisord.
You can have Docker just not detach and then things work fine. We manage our Docker containers in this way through supervisor. Docker compose is great, but if you're already using Supervisor to manage non-docker things as well, it's nice to keep using it to have all your management in one place. We'll wrap our docker run in a bash script like the following and have supervisor track that, and everything works fine:
#!/bin/bash
TO_STOP=$(docker ps | grep $SERVICE_NAME | awk '{ print $1 }')
if [ "$TO_STOP" != '' ]; then
    docker stop $SERVICE_NAME
fi
TO_REMOVE=$(docker ps -a | grep $SERVICE_NAME | awk '{ print $1 }')
if [ "$TO_REMOVE" != '' ]; then
    docker rm $SERVICE_NAME
fi

docker run -a stdout -a stderr --name="$SERVICE_NAME" \
    --rm $DOCKER_IMAGE:$DOCKER_TAG
I found that executing docker run via supervisor actually works just fine, with a few precautions. The main thing one needs to avoid is allowing supervisord to send a SIGKILL to the docker run process, which will kill off that process but not the container itself.
For the most part, this can be handled by following the instructions in Why Your Dockerized Application Isn’t Receiving Signals. In short, one needs to:
Use the CMD ["/path/to/myapp"] form (same for ENTRYPOINT) instead of the shell form (CMD /path/to/myapp).
Pass --init to docker run.
If using an ENTRYPOINT, ensure its last line calls exec, so as to avoid spawning a new process.
If the above still isn't working, add a STOPSIGNAL to your Dockerfile.
Additionally, you'll want to make sure that your stopwaitsecs setting in supervisor is greater than the time your process might take to shutdown gracefully when it receives a SIGTERM (e.g., graceful_timeout if using gunicorn).
Here's a sample config to run a gunicorn container:
[program:gunicorn]
command=/usr/bin/docker run --init --rm -i -p 8000:8000 gunicorn
redirect_stderr=true
stopwaitsecs=31

Python ssh tunneling over multiple machines with agent

A little context is in order for this question: I am making an application that copies files/folders from one machine to another in Python. The connection must be able to go through multiple machines. I quite literally have the machines connected in serial, so I have to hop through them until I get to the correct one.
Currently, I am using python's subprocess module (Popen). As a very simplistic example I have
import subprocess
# need to set strict host checking to no since we connect to different
# machines over localhost
tunnel_string = "ssh -oStrictHostKeyChecking=no -L9999:127.0.0.1:9999 -ACt machine1 ssh -L9999:127.0.0.1:22 -ACt -N machineN"
proc = subprocess.Popen(tunnel_string.split())
# Do work, copy files etc. over ssh on localhost with port 9999
proc.terminate()
My question:
When doing it like this, I cannot seem to get agent forwarding to work, which is essential in something like this. Is there a way to do this?
I tried using the shell=True keyword in Popen like so
tunnel_string = "eval `ssh-agent` && ssh-add && ssh -oStrictHostKeyChecking=no -L9999:127.0.0.1:9999 -ACt machine1 ssh -L9999:127.0.0.1:22 -ACt -N machineN"
proc = subprocess.Popen(tunnel_string, shell=True)
# etc
The problem with this is that the names of the machines are given by user input, meaning the user could easily inject malicious shell code. A second problem is that I then have a new ssh-agent process running every time I make a connection.
I have a nice function in my bashrc which identifies already-running ssh-agents, sets the appropriate environment variables and adds my ssh key, but of course subprocess cannot reference functions defined in my bashrc. I tried setting executable="/bin/bash" together with shell=True in Popen, to no avail.
You should give Fabric a try.
It provides a basic suite of operations for executing local or remote
shell commands (normally or via sudo) and uploading/downloading files,
as well as auxiliary functionality such as prompting the running user
for input, or aborting execution.
The program below will give you a test run.
First install fabric with pip install fabric, then save the code below in fabfile.py:
from fabric.api import *

env.hosts = ['server url/IP']  # change to your server
env.user = 'username'          # username for the server
env.password = 'password'      # password for the server

def run_interactive():
    with settings(warn_only=True):
        cmd = 'clear'
        while cmd != 'stop fabric':
            run(cmd)
            cmd = raw_input('Command to run on server: ')
Change to the directory containing your fabfile and run fab run_interactive; each command you enter will then be run on the server.
I tested your first simplistic example and agent forwarding worked. The only thing I can see that might cause problems is that the environment variables SSH_AGENT_PID and SSH_AUTH_SOCK are not set correctly in the shell from which you execute your script. You might use ssh -v to get a better idea of where things are breaking down.
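If the agent variables are the problem, one way to make that explicit in the script (a sketch, not from the original question) is to refuse to start the tunnel unless SSH_AUTH_SOCK is present, since Popen inherits the environment by default:
import os
import subprocess

if "SSH_AUTH_SOCK" not in os.environ:
    raise RuntimeError("no ssh-agent detected: run ssh-agent and ssh-add first")

tunnel_cmd = [
    "ssh", "-oStrictHostKeyChecking=no", "-L9999:127.0.0.1:9999", "-ACt", "machine1",
    "ssh", "-L9999:127.0.0.1:22", "-ACt", "-N", "machineN",
]
# the child ssh inherits SSH_AUTH_SOCK/SSH_AGENT_PID from os.environ
proc = subprocess.Popen(tunnel_cmd)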
Try setting up a SSH config file: https://linuxize.com/post/using-the-ssh-config-file/
I frequently am required to tunnel through a bastion server and I use a configuration like so in my ~/.ssh/config file. Just change the host and user names. This also presumes that you have entries for these host names in your hosts (/etc/hosts) file.
Host my-bastion-server
Hostname my-bastion-server
User user123
AddKeysToAgent yes
UseKeychain yes
ForwardAgent yes
Host my-target-host
HostName my-target-host
User user123
AddKeysToAgent yes
UseKeychain yes
I then gain access with syntax like:
ssh my-bastion-server -At 'ssh my-target-host -At'
And I issue commands against my-target-host like:
ssh my-bastion-server -AT 'ssh my-target-host -AT "ls -la"'
