Running Python script over SSH from a Jenkins agent doesn't start the script - python

Update:
The problem seems to be quite different from what I initially thought. Information about this is at the bottom of the post.
I've been having quite the struggle with starting my Python script on a remote Raspberry Pi. I can't quite get it working: either the script runs but blocks the terminal in Jenkins, preventing the deployment from ever finishing, or it doesn't start at all. The weird thing, in my opinion, is that when I run the exact same commands from my Windows machine, or even directly from the Jenkins node (Ubuntu Server), the script starts absolutely fine and without blocking the terminal; it behaves like the good background process I want it to be. But not through the Jenkins pipeline itself.
I'll summarize the setup:
Jenkins controller: Docker container in my Unraid Server with SWAG reverse proxy for GitHub hook.
Jenkins node: VM in Unraid Server.
Target machine: Raspberry Pi, currently available on my local network (will set up VPN for this later).
Target script: A Python script that updates a small screen over GPIO with information fetched via an HTTP GET request; it has a while True main loop and some time.sleep() calls.
The following is my pipeline:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sh 'ls'
                sshagent(credentials: ['jenkinsvm-to-pi']) {
                    // Clear any existing instances
                    sh """
                    ssh ${target} pkill -f ${filename} &
                    """
                    // Copy downloaded files to target machine.
                    sh """
                    scp lcd1602.py ${target}:home/pi/Documents/${filename}
                    """
                    // Run script on target machine.
                    sh """
                    ssh ${target} "nohup python3 home/pi/Documents/${filename}" &
                    """
                }
            }
        }
    }
}
The pkill -f command does kill the process if I had started it through other means than the pipeline, but the script is not started again afterwards. I have tried lots of variations, with & and without.
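For reference, the general shape of the fully detached invocation I'm aiming for (a sketch only; the explicit redirections are my guess at what might be needed, not what the pipeline currently does):
# run the script detached on the Pi so the ssh command can return immediately
ssh pi@<pi-address> 'nohup python3 -u /home/pi/Documents/lcd1602.py > /home/pi/lcd1602.log 2>&1 < /dev/null &'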
The entire pipeline seems quite simple to me, and I can't for the life of me figure out what's causing this issue.
Output from Jenkins
I would greatly appreciate some assistance with this.
Thanks!
EDIT1:
Also tried this setup:
steps {
    sshagent(credentials: ['conn-to-mmrasp']) {
        sh "ssh ${target} pkill -f ${filename} || echo 'No process was running'"
        sh "scp lcd1602.py ${target}:~/Documents"
        sh "ssh ${target} nohup python3 Documents/${filename} &"
    }
}
A pretty weird thing is that it seemed to work the first time I updated the pipeline. At least so it seemed, but subsequent attempts didn't work. The console log isn't very helpful either, but I'll include the output.
[ssh-agent] Using credentials pi
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-SSrqVoXbnr0v/agent.40947
SSH_AGENT_PID=40949
Running ssh-add (command line suppressed)
Identity added: /home/jenkins/workspace/AlbinTracker_main@tmp/private_key_4492026892118281470.key (/home/jenkins/workspace/AlbinTracker_main@tmp/private_key_4492026892118281470.key)
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
+ ssh pi@XXX.XXX.XXX.XXX pkill -f lcd1602.py
+ echo No process was running
No process was running
[Pipeline] sh
+ scp lcd1602.py pi@XXX.XXX.XXX.XXX:~/Documents
[Pipeline] sh
+ ssh pi@XXX.XXX.XXX.XXX nohup python3 Documents/lcd1602.py
[Pipeline] }
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 40949 killed;
[ssh-agent] Stopped.
[Pipeline] // sshagent
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
I've done some more testing. Something seems to be wrong even when running the script from an interactive SSH session. When I run nohup python3 lcd1602.py & inside an SSH session, nothing is written to the nohup.out file, while without nohup (python3 lcd1602.py &) I get the desired output directly in the terminal. If I run the command without the &, the output is directed to nohup.out like it should be, but this blocks the terminal, which clogs the deploy stage. I'm starting to think this is a problem with nohup (or my use of it) and not with Jenkins.
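To summarise the behaviour I'm comparing in an interactive SSH session (the last line is only a sketch of what I'd expect a working variant to look like; the log path is arbitrary):
# blocks the terminal; output ends up in nohup.out
nohup python3 lcd1602.py

# returns immediately, but nohup.out stays empty (stdout is block-buffered when it isn't a tty)
nohup python3 lcd1602.py &

# a well-behaved background process: unbuffered output, explicit redirection, detached from the session
nohup python3 -u lcd1602.py > lcd1602.log 2>&1 < /dev/null &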
EDIT2:
I seem to have fixed a part of the problem, but another part remains.
I added the unbuffered flag -u to the python3 command, to make sure the logging is actually written out instead of sitting in the stdout buffer.
I added a bash script to the repository that does the executing, instead of running the Python script directly.
Now I have some logging so that I know when the script has been running, etc. I do think the added bash script is actually what made the script start running. This works on my Raspberry Pi 3 B. When I try to run the same pipeline targeting my Raspberry Pi 1 B, it doesn't work: I'm faced with the same issue as before and the script isn't started at all (the RPi 1B is my original target machine; I just did some testing with the RPi 3B).
The current Jenkinsfile:
String target = 'pi@xxx.xxx.xxx.xxx'
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(credentials: ['conn-to-mmrasp']) {
                    sh "scp -r ./* ${target}:~"
                    sh "ssh ${target} bash startScript.sh &"
                }
            }
        }
    }
}
The bash script:
filename=lcd1602.py
pkill -f $filename || echo No process running
nohup python3 -u $filename >> nohup.out
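For the record, a sketch of a more fully detached variant of the same script (the redirections and setsid are my additions, not something from the setup that currently works):
#!/bin/bash
# startScript.sh (sketch): same steps, but fully detached from the ssh session
filename=lcd1602.py

pkill -f "$filename" || echo "No process running"

# start in a new session with all stdio redirected, so the ssh command can return immediately
setsid nohup python3 -u "$filename" >> nohup.out 2>&1 < /dev/null &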
Update:
I just want to clarify the current state of the problem, so that you don't have to read the whole wall of text just to get to this. The same pipeline, with only the target IP changed, works on my RPi 3B: it starts the script and drives the GPIO screen. So it seems to be an issue with the RPi 1B; something is not entirely right with it, even though I've reflashed the OS. Any ideas on what the difference might be? The SSH configuration should not be a problem, as that would have been clear from the logs in Jenkins.

Related

Interact with docker container in the middle of a bash script execution [in that container]

I want to start a bunch of docker containers with the help of a Python script. I am using the subprocess library for that. Essentially, I am trying to run this docker command
docker = f"docker run -it --rm {env_vars} {hashes} {results} {script} {pipeline} --name {project} {CONTAINER_NAME}"
in a new terminal window.
Popen(f'xterm -T {project} -geometry 150x30+100+350 -e {docker}', shell=True)
# or
Popen(f'xfce4-terminal -T {project} --minimize {hold} -e="{docker}"', shell=True)
The container's CMD looks like this; it's a bash script that runs other scripts and the functions in them.
CMD ["bash", "/run_pipeline.sh"]
What I am trying to do is to run an interactive shell (bash) from one of these nested scripts at a specific place in case of a failure (i.e. when some condition is met), to be able to investigate the problem in the script, do something to fix it and continue execution (or just exit if I can't fix it).
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
I want to do this fully automatically so I don't have to manually keep track of what is going on with each script and execute docker attach... when needed, because I will run multiple such containers simultaneously.
The problem is that this "rescue" bash process exits immediately and I don't know why. I think it has something to do with TTYs and such, but I've tried a bunch of fiddling around with it and had no success.
I tried different combinations of -i, -t and -d on the docker command, tried to use docker attach... right after starting the container with -d, and also tried starting the Python script directly from bash in a terminal (I am using PyCharm by default). Besides that, I tried to use the socat, screen, script and getty commands (in the nested bash script), but I don't know how to use them properly, so that didn't end well either. At this point I'm too confused to understand why it isn't working.
EDIT:
Adding a minimal example of how I am starting a container (it does NOT reproduce the failing behaviour).
# ./Dockerfile
FROM debian:bookworm-slim
SHELL ["bash", "-c"]
CMD ["bash", "/run_pipeline.sh"]
# run 'docker build -t test .'
# ./small_example.py
from subprocess import Popen

if __name__ == '__main__':
    env_vars = f"-e REPO_NAME=test -e PROJECT=test_test"
    script = f'-v "$(pwd)"/run_pipeline.sh:/run_pipeline.sh:ro'
    docker = f"docker run -it --rm {env_vars} {script} --name test_name test"
    # Popen(f'xterm -T test -geometry 150x30+100+350 +hold -e "{docker}"', shell=True).wait()
    Popen(f'xfce4-terminal -T test --hold -e="{docker}"', shell=True).wait()
# ./run_pipeline.sh
# do some hard work
ls non/existent/path
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
It seems like the problem might be in the run_pipeline.sh script, but I don't want to upload it here; it's a bigger mess than what I described earlier. I will say, though, that I am trying to run this thing - https://github.com/IBM/D2A.
So I just wanted some advice on the tty stuff that I am probably missing.
Run the initial container detached, with input and a tty.
docker run -dit --rm {env_vars} {script} --name test_name test
Monitor the container logs for the output, then attach to it.
Here is a quick script example (without a tty in this case, only because the demo uses echo for the input):
#!/bin/bash
docker run --name test_name -id debian \
bash -c 'echo start; sleep 10; echo "reading"; read var; echo "var=$var"'
while ! docker logs test_name | grep reading; do
sleep 3
done
echo "attach input" | docker attach test_name
The complete output after it finishes:
$ docker logs test_name
start
reading
var=attach input
The whole process would be easier to control via the Docker Python SDK rather than having a layer of shell between the python and Docker.
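For illustration, a rough sketch of the same flow using the Docker SDK for Python (docker-py): run detached, poll the logs for the marker, then write to stdin over the attach socket. The attach_socket part is the fiddliest bit and may need adjusting for your setup.
import os
import time
import docker

client = docker.from_env()
container = client.containers.run(
    "debian",
    "bash -c 'echo start; sleep 10; echo \"reading\"; read var; echo \"var=$var\"'",
    name="test_name",
    detach=True,
    stdin_open=True,  # keep STDIN open, like -i
)

# poll the logs until the "reading" marker shows up
while b"reading" not in container.logs():
    time.sleep(3)

# feed the waiting `read` through the attach socket (stdin stream only)
sock = container.attach_socket(params={"stdin": 1, "stream": 1})
os.write(sock.fileno(), b"attach input\n")
sock.close()

container.wait()
print(container.logs().decode())  # should end with: var=attach input
# clean up afterwards with container.remove()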
As I said in a comment on Matt's answer, his solution does not work in my situation either. I think it's a problem with the script that I'm running - I suspect some of the many shell processes (https://imgur.com/a/JiPYGWd) are taking up the allocated tty, but I don't know for sure.
So I came up with my own workaround. I simply block execution of the script by creating a named pipe and then reading from it.
if [ $? -ne 0 ]; then
    echo Investigate _make_ manually: "$REPO_NAME"
    mkfifo "/tmp/mypipe_$githash" && echo "/tmp/mypipe_$githash" && read -r res < "/tmp/mypipe_$githash"
    if [ $res -ne 0 ]; then exit 33; fi
fi
Then I just launch a terminal emulator and run docker exec in it to start a new bash process. I do this with the help of the Docker Python SDK, by monitoring the output of the container so that I know when to launch the terminal.
import docker
from subprocess import Popen

def monitor_container_output(container):
    line = b''
    for log in container.logs(stream=True):
        if log == b'\n':
            print(line.decode())
            if b'mypipe_' in line:
                Popen(f'xfce4-terminal -T {container.name} -e="docker exec -it {container.name} bash"', shell=True).wait()
            line = b''
            continue
        line += log

client = docker.from_env()
container = client.containers.run(IMAGE_NAME, name=project, detach=True, stdin_open=True, tty=True,
                                  auto_remove=True, environment=env_vars, volumes=volumes)
monitor_container_output(container)
After I finish my investigation of the problem in that new bash process, I send a "status code of the investigation" to tell the script to continue running or to exit.
echo 0 > "/tmp/mypipe_$githash"

Execute Host OS Command from Flask container [duplicate]

How do I control the host from a docker container?
For example, how do I execute a bash script that has been copied to the host?
This answer is just a more detailed version of Bradford Medeiros's solution, which for me as well turned out to be the best answer, so credit goes to him.
In his answer, he explains WHAT to do (named pipes) but not exactly HOW to do it.
I have to admit I didn't know what named pipes were when I read his solution. So I struggled to implement it (while it's actually very simple), but I did succeed.
So the point of my answer is just detailing the commands you need to run in order to get it working, but again, credit goes to him.
PART 1 - Testing the named pipe concept without docker
On the main host, choose the folder where you want to put your named pipe file, for instance /path/to/pipe/, and a pipe name, for instance mypipe, then run:
mkfifo /path/to/pipe/mypipe
The pipe is created.
Type
ls -l /path/to/pipe/mypipe
and check that the access rights start with "p", such as
prw-r--r-- 1 root root 0 mypipe
Now run:
tail -f /path/to/pipe/mypipe
The terminal is now waiting for data to be sent into this pipe
Now open another terminal window.
And then run:
echo "hello world" > /path/to/pipe/mypipe
Check the first terminal (the one with tail -f), it should display "hello world"
PART 2 - Run commands through the pipe
On the host, instead of running tail -f, which just outputs whatever is sent as input, run this command which will execute the input as commands:
eval "$(cat /path/to/pipe/mypipe)"
Then, from the other terminal, try running:
echo "ls -l" > /path/to/pipe/mypipe
Go back to the first terminal and you should see the result of the ls -l command.
PART 3 - Make it listen forever
You may have noticed that in the previous part, right after ls -l output is displayed, it stops listening for commands.
Instead of eval "$(cat /path/to/pipe/mypipe)", run:
while true; do eval "$(cat /path/to/pipe/mypipe)"; done
(you can nohup that)
Now you can send an unlimited number of commands one after the other; they will all be executed, not just the first one.
PART 4 - Make it work even when reboot happens
The only caveat is that if the host has to reboot, the "while" loop will stop working.
To handle reboots, here is what I've done:
Put the while true; do eval "$(cat /path/to/pipe/mypipe)"; done in a file called execpipe.sh with #!/bin/bash header
Don't forget to chmod +x it
Add it to crontab by running
crontab -e
And then adding
@reboot /path/to/execpipe.sh
At this point, test it: reboot your server, and when it's back up, echo some commands into the pipe and check if they are executed.
Of course, you aren't able to see the output of commands, so ls -l won't help, but touch somefile will help.
Another option is to modify the script to put the output in a file, such as:
while true; do eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt; done
Now you can run ls -l and the output (both stdout and stderr using &> in bash) should be in output.txt.
PART 5 - Make it work with docker
If you are using both docker compose and dockerfile like I do, here is what I've done:
Let's assume you want to mount the mypipe's parent folder as /hostpipe in your container
Add this:
VOLUME /hostpipe
in your dockerfile in order to create a mount point
Then add this:
volumes:
- /path/to/pipe:/hostpipe
in your docker compose file in order to mount /path/to/pipe as /hostpipe
Restart your docker containers.
PART 6 - Testing
Exec into your docker container:
docker exec -it <container> bash
Go into the mount folder and check you can see the pipe:
cd /hostpipe && ls -l
Now try running a command from within the container:
echo "touch this_file_was_created_on_main_host_from_a_container.txt" > /hostpipe/mypipe
And it should work!
WARNING: If you have an OSX (Mac OS) host and a Linux container, it won't work (explanation here https://stackoverflow.com/a/43474708/10018801 and issue here https://github.com/docker/for-mac/issues/483), because the pipe implementation is not the same: what you write into the pipe from Linux can be read only by Linux, and what you write into the pipe from Mac OS can be read only by Mac OS (this sentence might not be very accurate, but just be aware that a cross-platform issue exists).
For instance, when I run my docker setup in DEV from my Mac OS computer, the named pipe as explained above does not work. But in staging and production, I have Linux host and Linux containers, and it works perfectly.
PART 7 - Example from Node.JS container
Here is how I send a command from my Node.JS container to the main host and retrieve the output:
const fs = require("fs")

const pipePath = "/hostpipe/mypipe"
const outputPath = "/hostpipe/output.txt"
const commandToRun = "pwd && ls -l"

console.log("delete previous output")
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath)

console.log("writing to pipe...")
const wstream = fs.createWriteStream(pipePath)
wstream.write(commandToRun)
wstream.close()

console.log("waiting for output.txt...") // there are better ways to do that than setInterval
let timeout = 10000 // stop waiting after 10 seconds (something might be wrong)
const timeoutStart = Date.now()
const myLoop = setInterval(function () {
    if (Date.now() - timeoutStart > timeout) {
        clearInterval(myLoop);
        console.log("timed out")
    } else {
        // if output.txt exists, read it
        if (fs.existsSync(outputPath)) {
            clearInterval(myLoop);
            const data = fs.readFileSync(outputPath).toString()
            if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath) // delete the output file
            console.log(data) // log the output of the command
        }
    }
}, 300);
Use a named pipe.
On the host OS, create a script to loop and read commands, and then you call eval on that.
Have the docker container read to that named pipe.
To be able to access the pipe, you need to mount it via a volume.
This is similar to the SSH mechanism (or a similar socket-based method), but restricts you properly to the host device, which is probably better. Plus you don't have to be passing around authentication information.
My only warning is to be cautious about why you are doing this. It's totally something to do if you want to create a method to self-upgrade with user input or whatever, but you probably don't want to call a command to get some config data, as the proper way would be to pass that in as args/volume into docker. Also, be cautious about the fact that you are evaling, so just give the permission model a thought.
Some of the other answers, such as running a script under a volume, won't work generically, since they won't have access to the full system resources, but they might be more appropriate depending on your usage.
The solution I use is to connect to the host over SSH and execute the command like this:
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
UPDATE
As this answer keeps getting upvotes, I would like to remind (and highly recommend) that the account used to invoke the script should be an account with no permissions at all, other than executing that script as sudo (which can be configured in the sudoers file).
UPDATE: Named Pipes
The solution I suggested above was only what I used while I was relatively new to Docker. Now, in 2021, take a look at the answers that talk about named pipes; that seems to be a better solution.
However, nobody there mentions anything about security. The script that evaluates the commands sent through the pipe (the script that calls eval) must actually not eval the whole pipe output, but handle specific cases and call the required commands according to the text sent; otherwise any command that can do anything can be sent through the pipe.
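For example, a minimal sketch (my own illustration, not from any answer here) of a listener that dispatches only whitelisted commands instead of eval-ing arbitrary pipe input:
#!/bin/bash
# pipe-listener.sh: read requests from the pipe and only run known actions
PIPE=/path/to/pipe/mypipe
while true; do
    if read -r request < "$PIPE"; then
        case "$request" in
            disk-usage)   df -h > /path/to/pipe/output.txt ;;
            restart-app)  systemctl restart myapp.service ;;                  # hypothetical service name
            *)            echo "rejected: $request" >> /var/log/pipe-listener.log ;;
        esac
    fi
done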
That REALLY depends on what you need that bash script to do!
For example, if the bash script just echoes some output, you could just do
docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
Another possibility is that you want the bash script to install some software - say, a script to install docker-compose. You could do something like:
docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
But at this point you're really getting into having to know intimately what the script is doing to allow the specific permissions it needs on your host from inside the container.
My laziness led me to find the easiest solution that wasn't published as an answer here.
It is based on the great article by luc juggery.
All you need to do in order to gain a full shell to your linux host from within your docker container is:
docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh
Explanation:
--privileged: grants additional permissions to the container; it allows the container to gain access to the devices of the host (/dev).
--pid=host: allows the container to use the process tree of the Docker host (the VM in which the Docker daemon is running).
nsenter utility: allows running a process in existing namespaces (the building blocks that provide isolation to containers).
nsenter (-t 1 -m -u -n -i sh) runs the process sh in the same isolation context as the process with PID 1.
The whole command then provides an interactive sh shell in the VM.
This setup has major security implications and should be used with caution (if at all).
Write a simple Python server listening on a port (say 8080), bind the port to the container with -p 8080:8080, and make an HTTP request to localhost:8080 to ask the Python server to run shell scripts with popen. Run a curl, or write code to make the HTTP request: curl -d '{"foo":"bar"}' localhost:8080
#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import subprocess
import json

PORT_NUMBER = 8080

# This class will handle any incoming request from
# the browser
class myHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        content_len = int(self.headers.getheader('content-length'))
        post_body = self.rfile.read(content_len)
        self.send_response(200)
        self.end_headers()
        data = json.loads(post_body)
        # Use the post data
        cmd = "your shell cmd"
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
        p_status = p.wait()
        (output, err) = p.communicate()
        print "Command output : ", output
        print "Command exit status/return code : ", p_status
        self.wfile.write(cmd + "\n")
        return

try:
    # Create a web server and define the handler to manage the
    # incoming request
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER
    # Wait forever for incoming http requests
    server.serve_forever()
except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
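The snippet above is Python 2 (BaseHTTPServer); a rough Python 3 equivalent of the same idea, using http.server, might look like this (a sketch, untested against the author's setup):
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

PORT_NUMBER = 8080

class MyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        content_len = int(self.headers.get('Content-Length', 0))
        post_body = self.rfile.read(content_len)
        data = json.loads(post_body)  # the posted JSON, e.g. {"foo": "bar"}

        cmd = "your shell cmd"  # placeholder, as in the original
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
        output, err = p.communicate()

        self.send_response(200)
        self.end_headers()
        self.wfile.write(output)

server = HTTPServer(('', PORT_NUMBER), MyHandler)
try:
    server.serve_forever()
except KeyboardInterrupt:
    server.socket.close()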
If you are not worried about security and you're simply looking to start a docker container on the host from within another docker container like the OP, you can share the docker server running on the host with the docker container by sharing its listen socket.
Please see https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface and see if your personal risk tolerance allows this for this particular application.
You can do this by adding the following volume args to your start command
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
or by sharing /var/run/docker.sock within your docker compose file like this:
version: '3'
services:
ci:
command: ...
image: ...
volumes:
- /var/run/docker.sock:/var/run/docker.sock
When you run the docker start command within your docker container,
the docker server running on your host will see the request and provision the sibling container.
credit: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
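As a quick illustration of the sibling-container flow described above (assuming the docker CLI is installed inside the container and the socket is mounted as shown; image and command are placeholders):
# run from INSIDE the container; the host daemon creates the sibling container
docker run --rm alpine echo "hello from a sibling container started via the host daemon"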
As Marcus reminds us, docker is basically process isolation. Starting with docker 1.8, you can copy files both ways between the host and the container; see the documentation for docker cp:
https://docs.docker.com/reference/commandline/cp/
Once a file is copied, you can run it locally
docker run --detach-keys="ctrl-p" -it -v /:/mnt/rootdir --name testing busybox
# chroot /mnt/rootdir
#
I have a simple approach.
Step 1: Mount /var/run/docker.sock:/var/run/docker.sock (So you will be able to execute docker commands inside your container)
Step 2: Execute the command below inside your container. The key part is --network host, as it makes the command execute in the host's network context.
docker run -i --rm --network host -v /opt/test.sh:/test.sh alpine:3.7
sh /test.sh
test.sh should contain whatever commands you need (ifconfig, netstat, etc.).
Now you will be able to get host context output.
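For reference, a hypothetical /opt/test.sh using the commands mentioned above; with --network host their output reflects the host's network state:
#!/bin/sh
ifconfig
netstat -tln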
You can use the pipe concept, but use a file on the host and fswatch to accomplish the goal of executing a script on the host machine from a docker container. Like so (use at your own risk):
#! /bin/bash
touch .command_pipe
chmod +x .command_pipe
# Use fswatch to execute a command on the host machine and log result
fswatch -o --event Updated .command_pipe | \
xargs -n1 -I "{}" .command_pipe >> .command_pipe_log &
docker run -it --rm \
--name alpine \
-w /home/test \
-v $PWD/.command_pipe:/dev/command_pipe \
alpine:3.7 sh
rm -rf .command_pipe
kill %1
In this example, inside the container send commands to /dev/command_pipe, like so:
/home/test # echo 'docker network create test2.network.com' > /dev/command_pipe
On the host, you can check if the network was created:
$ docker network ls | grep test2
8e029ec83afe test2.network.com bridge local
In my scenario, I just SSH into the host (using the host IP) from within a container, and then I can do anything I want with the host machine.
I found the answers using named pipes awesome. But I was wondering if there is a way to get the output of the executed command back.
The solution is to create two named pipes:
mkfifo /path/to/pipe/exec_in
mkfifo /path/to/pipe/exec_out
Then, the solution using a loop, as suggested by @Vincent, would become:
# on the host
while true; do eval "$(cat exec_in)" > exec_out; done
And then on the docker container, we can execute the command and get the output using:
# on the container
echo "ls -l" > /path/to/pipe/exec_in
cat /path/to/pipe/exec_out
If anyone is interested: my need was to use a failover IP on the host from the container, so I created this simple Ruby method:
def fifo_exec(cmd)
  exec_in = '/path/to/pipe/exec_in'
  exec_out = '/path/to/pipe/exec_out'
  %x[ echo #{cmd} > #{exec_in} ]
  %x[ cat #{exec_out} ]
end

# example
fifo_exec "curl https://ip4.seeip.org"
Depending on the situation, this could be a helpful resource.
It uses a job queue (Celery) that can be run on the host; commands/data can be passed to it through Redis (or RabbitMQ). In the example below, this happens in a Django application (which is commonly dockerized).
https://www.codingforentrepreneurs.com/blog/celery-redis-django/
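As a rough sketch of that approach (my own illustration; the broker URL, task name and script whitelist are placeholders): a Celery worker running on the host executes whitelisted scripts, and the dockerized app only enqueues task names.
import subprocess
from celery import Celery

# the worker runs ON THE HOST; the containerized app only sends task messages via Redis
app = Celery('host_tasks', broker='redis://localhost:6379/0')

@app.task
def run_host_script(name):
    allowed = {'backup': '/usr/local/bin/backup.sh'}  # hypothetical whitelist
    script = allowed.get(name)
    if script is None:
        raise ValueError('script %r is not whitelisted' % name)
    result = subprocess.run([script], capture_output=True, text=True)
    return result.stdout

# from the container: run_host_script.delay('backup')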
To expand on user2915097's response:
The idea of isolation is to be able to restrict what an application/process/container (whatever your angle at this is) can do to the host system very clearly. Hence, being able to copy and execute a file would really break the whole concept.
Yes. But it's sometimes necessary.
No. That's not the case, or Docker is not the right tool to use. What you should do is declare a clear interface for what you want to do (e.g. updating a host config), and write a minimal client/server to do exactly that and nothing more. Generally, however, this doesn't seem to be very desirable. In many cases, you should simply rethink your approach and eliminate that need altogether. Docker came into existence when basically everything was a service reachable through some protocol. I can't think of any proper use case for a Docker container getting the rights to execute arbitrary stuff on the host.

Best way to manage docker containers with supervisord

I have to set up "dockerized" environments (integration, QA and production) on the same server (client's requirement). Each environment will be composed as follows:
rabbitmq
celery
flower
python 3 based application called "A" (specific branch per
environment)
Over them, jenkins will handle the deployment based on CI.
Using set of containers per environment sounds like the best approach.
But now I need, process manager to run and supervise all of them:
3 rabbit containers,
3 celery/flower containers,
3 "A" containers,
1 jenkins container.
Supervisord seems to be the best choice, but during my tests I'm not able to "properly" restart a container. Here is a snippet of the supervisord.conf:
[program:docker-rabbit]
command=/usr/bin/docker run -p 5672:5672 -p 15672:15672 tutum/rabbitmq
startsecs=20
autorestart=unexpected
exitcodes=0,1
stopsignal=KILL
So I wonder what is the best way to separate each environment and be able to manage and supervise each service (a container).
[EDIT: my solution, inspired by Thomas's response]
Each container is run by a .sh script that looks like the following.
rabbit-integration.sh
#!/bin/bash
#set -x
SERVICE="rabbitmq"
SH_S="/path/to_shs"
export MY_ENV="integration"
. $SH_S/env_.sh
. $SH_S/utils.sh
SERVICE_ENV=$SERVICE-$MY_ENV
ID_FILE=/tmp/$SERVICE_ENV.name # pid file
trap stop SIGHUP SIGINT SIGTERM # trap signal for calling the stop function
run_rabbitmq
$SH_S/env_.sh looks like:
# set env variable
...
case $MONARCH_ENV in
    $INTEGRATION)
        AMQP_PORT="5672"
        AMQP_IP="172.17.42.1"
        ...
        ;;
    $PREPRODUCTION)
        AMQP_PORT="5673"
        AMQP_IP="172.17.42.1"
        ...
        ;;
    $PRODUCTION)
        AMQP_PORT="5674"
        REDIS_IP="172.17.42.1"
        ...
esac
$SH_S/utils.sh looks like:
#!/bin/bash

function random_name(){
    echo "$SERVICE_ENV-$(cat /proc/sys/kernel/random/uuid)"
}

function stop (){
    echo "stopping docker container..."
    /usr/bin/docker stop `cat $ID_FILE`
}

function run_rabbitmq (){
    # do not daemonize and use stdout
    NAME="$(random_name)"
    echo $NAME > $ID_FILE
    /usr/bin/docker run -i --name "$NAME" -p $AMQP_IP:$AMQP_PORT:5672 -p $AMQP_ADMIN_PORT:15672 -e RABBITMQ_PASS="$AMQP_PASSWORD" myimage-rabbitmq &
    PID=$!
    wait $PID
}
Finally, myconfig.integration.conf looks like:
[program:rabbit-integration]
command=/path/sh_s/rabbit-integration.sh
startsecs=20
priority=90
autorestart=unexpected
exitcodes=0,1
stopsignal=TERM
In case I want to reuse the same container, the startup function looks like:
function _run_my_container () {
    NAME="my_container"
    /usr/bin/docker start -i $NAME &
    PID=$!
    wait $PID
    rc=$?
    if [[ $rc != 0 ]]; then
        _run_my_container
    fi
}
where
function _run_my_container (){
    /usr/bin/docker run -p{} -v{} --name "$NAME" myimage &
    PID=$!
    wait $PID
}
Supervisor requires that the processes it manages do not daemonize, as per its documentation:
Programs meant to be run under supervisor should not daemonize
themselves. Instead, they should run in the foreground. They should
not detach from the terminal from which they are started.
This is largely incompatible with Docker, where the containers are subprocesses of the Docker process itself (and hence are not subprocesses of Supervisor).
To be able to use Docker with Supervisor, you could write an equivalent of the pidproxy program that works with Docker.
But really, the two tools aren't really architected to work together, so you should consider changing one or the other:
Consider replacing Supervisor with Docker Compose (which is designed to work with Docker)
Consider replacing Docker with Rocket (which doesn't have a "master" process)
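For the Compose suggestion, a rough sketch of what one environment from the question could look like as a docker-compose.yml (image names, build contexts and the Celery command are placeholders):
version: "2"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"
  app_a:
    build: ./app_a            # hypothetical build context for application "A"
    environment:
      - MY_ENV=integration
    depends_on:
      - rabbitmq
  celery:
    build: ./app_a
    command: celery -A app worker --loglevel=info   # hypothetical entry point
    depends_on:
      - rabbitmq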
You need to make sure you use stopsignal=INT in your supervisor config, then exec docker run normally.
[program:foo]
stopsignal=INT
command=docker run --rm whatever
At least this seems to work for me with docker version 1.9.1.
If you run docker from inside a shell script, it is very important that you have exec in front of the docker run command, so that docker run replaces the shell process and thus receives the SIGINT directly from supervisord.
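For instance, a wrapper script for supervisord could look like this (the image name is a placeholder):
#!/bin/bash
# exec replaces the shell, so the docker client receives SIGINT directly from supervisord
exec docker run --rm whatever-image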
You can have Docker just not detach and then things work fine. We manage our Docker containers in this way through supervisor. Docker compose is great, but if you're already using Supervisor to manage non-docker things as well, it's nice to keep using it to have all your management in one place. We'll wrap our docker run in a bash script like the following and have supervisor track that, and everything works fine:
#!/bin/bash
TO_STOP=$(docker ps | grep $SERVICE_NAME | awk '{ print $1 }')
if [ "$TO_STOP" != '' ]; then
    docker stop $SERVICE_NAME
fi
TO_REMOVE=$(docker ps -a | grep $SERVICE_NAME | awk '{ print $1 }')
if [ "$TO_REMOVE" != '' ]; then
    docker rm $SERVICE_NAME
fi

docker run -a stdout -a stderr --name="$SERVICE_NAME" \
    --rm $DOCKER_IMAGE:$DOCKER_TAG
I found that executing docker run via supervisor actually works just fine, with a few precautions. The main thing one needs to avoid is allowing supervisord to send a SIGKILL to the docker run process, which will kill off that process but not the container itself.
For the most part, this can be handled by following the instructions in Why Your Dockerized Application Isn’t Receiving Signals. In short, one needs to:
Use the CMD ["/path/to/myapp"] form (same for ENTRYPOINT) instead of the shell form (CMD /path/to/myapp).
Pass --init to docker run.
If using an ENTRYPOINT, ensure its last line calls exec, so as to avoid spawning a new process.
If the above still isn't working, add a STOPSIGNAL to your Dockerfile.
Additionally, you'll want to make sure that your stopwaitsecs setting in supervisor is greater than the time your process might take to shutdown gracefully when it receives a SIGTERM (e.g., graceful_timeout if using gunicorn).
Here's a sample config to run a gunicorn container:
[program:gunicorn]
command=/usr/bin/docker run --init --rm -i -p 8000:8000 gunicorn
redirect_stderr=true
stopwaitsecs=31

Running Docker as a syslog-ng destination fails

I have a Vagrant-created VM running stock Ubuntu Trusty 64, with one host CPU allocated to it.
Within that VM, I have a Docker image running stock Python 3.4.3:
FROM python:3.4.3-slim
ENTRYPOINT ["/usr/local/bin/python"]
When I execute an arbitrary Python script:
import time
while True:
time.sleep(1)
Like this:
sudo docker run -i -v /etc/alloy_listener/scripts:/scripts:ro alloy_listener /scripts/test.py
Everything is fine; the container runs and just sits there doing very little. If I add print statements to the Python script, the output is sent to stdout as expected.
I also have syslog-ng installed in that VM, and my intention is to use my containerized Python to act as a syslog-ng destination:
source s_foo {
unix-stream("/dev/log");
};
destination d_foo {
program("'docker run -i -v /etc/alloy_listener/scripts:/scripts:ro alloy_listener /scripts/test.py'");
};
log {
source(s_foo);
destination(d_foo);
};
But when I reload the config, syslog-ng consumes about 20% of the VM's CPU, and 100% of the host's CPU, and the container never gets created (running sudo docker ps -a yields no containers). Running sudo syslog-ng-ctl stats tells me that it is trying to execute the program:
dst.program;d_foo#0;'docker run -i -v /etc/alloy_listener/scripts:/scripts:ro alloy_listener /scripts/test.py';a;dropped;0
dst.program;d_foo#0;'docker run -i -v /etc/alloy_listener/scripts:/scripts:ro alloy_listener /scripts/test.py';a;processed;2
dst.program;d_foo#0;'docker run -i -v /etc/alloy_listener/scripts:/scripts:ro alloy_listener /scripts/test.py';a;stored;0
My feeling is that because syslog-ng is using 20% of the VM's CPU but 100% of the host's, it is I/O bound and the VM is working extra hard to keep up. To that end I tried consuming and flushing stdin and stdout in the Python script, but as far as I can tell, since the container is never even created, it isn't getting as far as the script.
So my next thought was there must be some combination of docker's -a, -d, -i, and -t flags that I've not tried, but I'm sure I have tried every permissible combination to no avail.
What have I missed?
If you start syslog-ng in the foreground (syslog-ng-binary -Fedv), you will see that syslog-ng starts and stops the program destination in a loop; this causes the 100% CPU spinning.
After investigating the problem locally: you should use the program destination like this (without the single quotes):
program("sudo docker run -i -v /scripts:/scripts python-test /scripts/test.py");
Br,
Micek

process is not going to the background

I am running the Python script shown below. The script does an SSH to the remote machine and runs a C program in the background. On running the Python script, the output indicates that a.out was started in the background, with the job/PID shown as [1] 2115.
However, when I log in to the remote machine and check for a.out via the 'ps' command, I don't see it.
Another observation is that when I add a delay statement like thread.sleep(20) to the Python script, a.out is active on the remote machine for as long as the script is still running.
#!/usr/bin/python
import HostMod  # where the ssh function is written

COMMAND_PROMPT1 = '[#$] '
p = HostMod.HostModule()
obj1 = p.HostLogin('10.200.2.197', 'root', 'newnet')  # ssh connection to remote system
obj1.sendline('./a.out > test.txt &')  # send the program to the remote machine to execute there
obj1.expect(COMMAND_PROMPT1)
print obj1.before
# a.out program
int main()
{
    while(1)
    {
        sleep(10);
    }
    return 0;
}
Please try giving the absolute path of ./a.out.
Try using nohup
...
obj1.sendline('nohup ./a.out > test.txt &')  # send the program to the remote machine to execute there
But you should really not use a shell to invoke commands over SSH; the SSH protocol has built-in support for running commands. I am not sure how your HostMod module works, but you could try this from your shell (it would be easy to port to use subprocess):
ssh somehost nohup sleep 20
<Ctrl+C>
ssh somehost
ps ax | grep sleep
And you should see your sleep process still running. This method does not instantiate a shell, which is much more reliable, since you may or may not have control over which shell is run, what is in your ~/.(...)rc files, etc. All in all, it's much more portable.
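Since the answer mentions porting this to subprocess, here is a rough sketch of what that could look like (the host name, paths and log file are placeholders; it assumes key-based SSH):
import subprocess

# run a.out detached on the remote host; redirecting its output lets ssh return immediately
subprocess.check_call([
    'ssh', 'somehost',
    'nohup /path/to/a.out > /tmp/a.out.log 2>&1 &',
])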
