Docker and Python

I want to write a program in Python which starts a docker container, executes some commands inside it, and returns the output. This is something I would do in a shell like:
docker run --rm -i -it name_docker_image
echo "Input for program" | script.sh
Output of the script
I am trying to do it with a code like:
process = subprocess.Popen("sudo docker run --rm -i -it syntaxnet", stdout=subprocess.PIPE, shell=True)
process.communicate("echo \"" + sentence + "\" | work/demo.sh")
out = process.stdout.read()
returned_value = out.splitlines()
But it gives me an error message: "the input device is not a TTY". How can I fix it?
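A hedged sketch of one possible fix (not from the original thread): the error comes from the -t flag, which asks for a pseudo-terminal that a piped stdin cannot provide. Dropping -t, keeping -i, and feeding the sentence through communicate() may be enough, assuming work/demo.sh can be given directly as the container command and reads its input from stdin:
import subprocess

sentence = "Input for program"
# -i keeps stdin open; -t is omitted because stdin is a pipe, not a terminal.
process = subprocess.Popen(
    ["sudo", "docker", "run", "--rm", "-i", "syntaxnet", "work/demo.sh"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = process.communicate(sentence.encode())
returned_value = out.splitlines()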

Related

Interact with docker container in the middle of a bash script execution [in that container]

I want to start a bunch of docker containers with the help of a Python script. I am using the subprocess library for that. Essentially, I am trying to run this docker command
docker = f"docker run -it --rm {env_vars} {hashes} {results} {script} {pipeline} --name {project} {CONTAINER_NAME}"
in a new terminal window.
Popen(f'xterm -T {project} -geometry 150x30+100+350 -e {docker}', shell=True)
# or
Popen(f'xfce4-terminal -T {project} --minimize {hold} -e="{docker}"', shell=True)
The container's CMD looks like this. It's a bash script that runs other scripts and functions in them.
CMD ["bash", "/run_pipeline.sh"]
What I am trying to do is to run an interactive shell (bash) from one of these nested scripts at a specific place in case of a failure (i.e. when some condition is met), to be able to investigate the problem in the script, do something to fix it and continue execution (or just exit if I cannot fix it).
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
I want to do this fully automatically so I don't have to manually keep track of what is going on with a script and execute docker attach... when needed, because I will run multiple such containers simultaneously.
The problem is that this "rescue" bash process exits immediately and I don't know why. I think it should be something about ttys and stuff, but I've tried a bunch of fiddling around with it and had no success.
I tried different combinations of -i, -t and -d on the docker command, tried to use docker attach... right after starting the container with -d, and also tried starting the Python script directly from bash in a terminal (I use PyCharm by default). Besides that, I tried the socat, screen, script and getty commands (in the nested bash script), but I don't know how to use them properly, so that didn't end well either. At this point I'm too confused to understand why it isn't working.
EDIT:
Adding a minimal example of how I am starting a container (note that it does NOT reproduce the failing behaviour).
# ./Dockerfile
FROM debian:bookworm-slim
SHELL ["bash", "-c"]
CMD ["bash", "/run_pipeline.sh"]
# run 'docker build -t test .'
# ./small_example.py
from subprocess import Popen
if __name__ == '__main__':
    env_vars = f"-e REPO_NAME=test -e PROJECT=test_test"
    script = f'-v "$(pwd)"/run_pipeline.sh:/run_pipeline.sh:ro'
    docker = f"docker run -it --rm {env_vars} {script} --name test_name test"
    # Popen(f'xterm -T test -geometry 150x30+100+350 +hold -e "{docker}"', shell=True).wait()
    Popen(f'xfce4-terminal -T test --hold -e="{docker}"', shell=True).wait()
# ./run_pipeline.sh
# do some hard work
ls non/existent/path
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
It seems like the problem may be in the run_pipeline.sh script, but I don't want to upload it here; it's a bigger mess than what I described earlier. I will say, though, that I am trying to run this project: https://github.com/IBM/D2A.
So I just wanted some advice on the tty details that I am probably missing.
Run the initial container detached, with input and a tty.
docker run -dit --rm {env_vars} {script} --name test_name test
Monitor the container logs for the output, then attach to it.
Here is a quick script example (without a tty in this case, only because the demo uses echo for input):
#!/bin/bash
docker run --name test_name -id debian \
    bash -c 'echo start; sleep 10; echo "reading"; read var; echo "var=$var"'
while ! docker logs test_name | grep reading; do
    sleep 3
done
echo "attach input" | docker attach test_name
The complete output after it finishes:
$ docker logs test_name
start
reading
var=attach input
The whole process would be easier to control via the Docker Python SDK rather than having a layer of shell between Python and Docker.
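For instance, a rough sketch of the same demo with the Docker SDK for Python (docker-py, installed with pip install docker); note that writing to the attach socket via its private _sock attribute is a commonly used workaround, not official API:
import docker

client = docker.from_env()
container = client.containers.run(
    "debian",
    ["bash", "-c", 'echo start; sleep 10; echo "reading"; read var; echo "var=$var"'],
    name="test_name", detach=True, stdin_open=True)

# Stream logs until the container reports that it is waiting for input.
for chunk in container.logs(stream=True):
    if b"reading" in chunk:
        break

# Roughly what `echo "attach input" | docker attach test_name` does in the shell.
sock = container.attach_socket(params={"stdin": 1, "stream": 1})
sock._sock.sendall(b"attach input\n")
sock.close()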
As I said in a comment on Matt's answer, his solution does not work in my situation either. I think it's a problem with the script that I'm running. I think it's because some of the many shell processes (https://imgur.com/a/JiPYGWd) are taking up the allocated tty, but I don't know for sure.
So I came up with my own workaround. I simply block execution of the script by creating a named pipe and then reading from it.
if [ $? -ne 0 ]; then
    echo Investigate _make_ manually: "$REPO_NAME"
    mkfifo "/tmp/mypipe_$githash" && echo "/tmp/mypipe_$githash" && read -r res < "/tmp/mypipe_$githash"
    if [ $res -ne 0 ]; then exit 33; fi
fi
Then I just launch a terminal emulator and execute docker exec in it to start a new bash process. I do it with the help of the Docker Python SDK, by monitoring the output of the container so I know when to launch the terminal.
import docker
from subprocess import Popen

def monitor_container_output(container):
    line = b''
    for log in container.logs(stream=True):
        if log == b'\n':
            print(line.decode())
            if b'mypipe_' in line:
                Popen(f'xfce4-terminal -T {container.name} -e="docker exec -it {container.name} bash"', shell=True).wait()
            line = b''
            continue
        line += log

client = docker.from_env()
container = client.containers.run(IMAGE_NAME, name=project, detach=True, stdin_open=True, tty=True,
                                  auto_remove=True, environment=env_vars, volumes=volumes)
monitor_container_output(container)
After I finish my investigation of the problem in that new bash process, I send a "status code of investigation" to tell the script to continue running or to exit.
echo 0 > "/tmp/mypipe_$githash"

run a docker and a command using python script

I have to run a docker container and then a command inside its workdir, using a Python script.
I'm trying to do it as follows:
command = ['gnome-terminal', '-e', "bash -c 'sudo /home/mpark/Escriptori/SRTConverter/shell_docker.sh; echo b; exec $SHELL'"]
p = subprocess.Popen(command)
where 'sudo /home/mpark/Escriptori/SRTConverter/shell_docker.sh' is a shell script that runs docker run with root privileges.
The first command, 'sudo /home/mpark/Escriptori/SRTConverter/shell_docker.sh', works fine, but the second one, 'echo b', which has to run inside the container, doesn't work.
Thank you!
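A hedged sketch of one way this could be approached: commands chained after the script in the outer bash run on the host, so the second command has to be sent into the container explicitly, for example with docker exec. The container name srtconverter is hypothetical and would have to match whatever name shell_docker.sh gives the container:
import subprocess

# Start the container; the script is assumed to leave it running in the background
# (e.g. `docker run -d --name srtconverter ...`).
subprocess.run(["sudo", "/home/mpark/Escriptori/SRTConverter/shell_docker.sh"], check=True)

# Run the command inside the container's workdir instead of on the host.
subprocess.run(["sudo", "docker", "exec", "srtconverter", "bash", "-c", "echo b"], check=True)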

Assign variable read from file in Makefile recipe

I am trying to do the following in a Makefile recipe: get the server container IP using a Python script, build the command to run within the docker container, and run the command in the docker container.
test:
	SIP=$(shell python ./scripts/script.py get-server-ip)
	CMD="iperf3 -c ${SIP} -p 33445"
	docker exec server ${CMD}
I get this
$ make test
SIP=172.17.0.6
CMD="iperf3 -c -p 33445"
docker exec server
"docker exec" requires at least 2 arguments.
See 'docker exec --help'.
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
make: *** [test] Error 1
I ended up with something like this.
SERVER_IP=$(shell python ./scripts/script.py get-server-ip); \
SERVER_CMD="iperf3 -s -p ${PORT} -4 --logfile s.out"; \
CLIENT_CMD="iperf3 -c $${SERVER_IP} -p ${PORT} -t 1000 -4 --logfile c.out"; \
echo "Server Command: " $${SERVER_CMD}; \
echo "Client Command: " $${CLIENT_CMD}; \
docker exec -d server $${SERVER_CMD}; \
docker exec -d client $${CLIENT_CMD};
This seems to work ok. Would love to hear if there are other ways of doing this.
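Since the rest of these questions drive Docker from Python anyway, another option (only a sketch, assuming python ./scripts/script.py get-server-ip prints the IP on stdout) is to drop the Makefile shell layer and do the same steps in a small script:
import subprocess

PORT = 33445  # assumed; substitute the real port

# Step 1: get the server container IP from the helper script.
server_ip = subprocess.check_output(
    ["python", "./scripts/script.py", "get-server-ip"], text=True).strip()

# Step 2: build the commands.
server_cmd = f"iperf3 -s -p {PORT} -4 --logfile s.out"
client_cmd = f"iperf3 -c {server_ip} -p {PORT} -t 1000 -4 --logfile c.out"

# Step 3: run them inside the already-running containers.
subprocess.run(["docker", "exec", "-d", "server"] + server_cmd.split(), check=True)
subprocess.run(["docker", "exec", "-d", "client"] + client_cmd.split(), check=True)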
You could write something like this. Here I used a target-specific variable, assuming the IP address is required only in this rule. iperf_command is defined as a variable, as the format looks rather fixed except for the IP address, which is injected by the call function.
Also, as the rule doesn't seem to be supposed to produce the target as a file, I added a .PHONY target as well.
iperf_command = iperf3 -c $1 -p 33445
.PHONY: test
test: iperf_server_ip = $(shell python ./scripts/script.py get-server-ip)
test:
	docker exec server $(call iperf_command,$(iperf_server_ip))

Execute command on docker container from remote machine

I have docker running on a host. There are two docker containers on this host, i.e. container_1 and container_2. Now I want to execute some commands on container_1 from my remote dev machine.
The commands are pipe-separated, i.e.:
sudo docker exec -it container_1 sudo find <dir> -type f -iname *_abc_* -print0 | du --files0-from - -b | awk 'BEGIN{sum=0} {sum+=$1} END{print sum}'
With the above command, only the first command up to the first pipe executes on the docker container; the next set of commands executes on the host.
I am using python fabric api to execute this from remote machine.
Is there any way to execute this full command on the container from the remote machine?
That's because the pipe actually gets executed on your host. Try this; it may work for you:
sudo docker exec -it container_1 bash -c "sudo find <dir> -type f -iname '*_abc_*' -print0 | du --files0-from - -b | awk 'BEGIN{sum=0} {sum+=\$1} END{print sum}'"
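To send the same thing from the remote machine with Fabric, a rough sketch (assuming Fabric 2's Connection API; the host name and directory are placeholders) wraps the whole pipeline in bash -c and escapes $1 so the remote shell does not expand it before awk sees it:
from fabric import Connection

# The entire pipeline is quoted so it runs inside the container,
# and \$1 survives the remote shell for awk to use.
cmd = (
    'sudo docker exec container_1 bash -c '
    '"find /some/dir -type f -iname \'*_abc_*\' -print0 '
    '| du --files0-from - -b '
    "| awk 'BEGIN{sum=0} {sum+=\\$1} END{print sum}'\""
)

result = Connection("user@docker-host").run(cmd, pty=True)
print(result.stdout)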

Running interactive commands in docker in Python subprocess

When I use docker run in interactive mode I am able to run the commands I want, to test some Python stuff.
root@pydock:~# docker run -i -t dockerfile/python /bin/bash
[ root@197306c1b256:/data ]$ python -c "print 'hi there'"
hi there
[ root@197306c1b256:/data ]$ exit
exit
root@pydock:~#
I want to automate this from python using the subprocess module so I wrote this:
import random
import string
import subprocess

run_this = "print('hi')"
random_name = ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(20))
command = 'docker run -i -t --name="%s" dockerfile/python /bin/bash' % random_name
subprocess.call([command],shell=True,stderr=subprocess.STDOUT)
command = 'cat <<\'PYSTUFF\' | timeout 0.5 python | head -n 500000 \n%s\nPYSTUFF' % run_this
output = subprocess.check_output([command],shell=True,stderr=subprocess.STDOUT)
command = 'exit'
subprocess.call([command],shell=True,stderr=subprocess.STDOUT)
command = 'docker ps -a | grep "%s" | awk "{print $1}" | xargs --no-run-if-empty docker rm -f' % random_name
subprocess.call([command],shell=True,stderr=subprocess.STDOUT)
This is supposed to create the container, run the python command in the container, then exit and remove the container. It does all this, except the command is run on the host machine and not in the docker container. I guess docker is switching shells or something like that. How do I run a python subprocess from a new shell?
It looks like you are expecting the second command cat <<... to send input to the first command. But the two subprocess commands have nothing to do with each other, so this doesn't work.
Python's subprocess library, and the popen command that underlies it, offer a way to get a pipe to stdin of the process. This way, you can send in the commands you want directly from Python and don't have to attempt to get another subprocess to talk to it.
So, something like:
from subprocess import Popen, PIPE
p = Popen('docker run -i --name="%s" dockerfile/python /bin/bash' % random_name,
          shell=True, stdin=PIPE)  # -t dropped: stdin is a pipe, not a terminal
p.communicate("cat <<'PYSTUFF' | timeout 0.5 python | head -n 500000\n%s\nPYSTUFF\n" % run_this)
(I'm not a Python expert; apologies for errors in string-forming. Adapted from this answer)
You actually need to spawn a new child process in the new shell you are opening. So after creating the container, run docker enter, or try the same operation with pexpect instead of subprocess. pexpect spawns a new child, and that way you can send commands to it.
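A minimal pexpect sketch of that idea (not the original poster's code; the prompt pattern and image name follow the transcript above): one interactive bash session in the container is kept alive, so the Python one-liner really runs inside the container rather than on the host:
import pexpect

child = pexpect.spawn("docker run -i -t --rm dockerfile/python /bin/bash")
child.expect(r"\$")                       # wait for the container's shell prompt
child.sendline('python -c "print \'hi there\'"')
child.expect(r"\$")                       # wait for the command to finish
print(child.before.decode())              # everything printed between the prompts
child.sendline("exit")
child.expect(pexpect.EOF)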
