Run a docker container and a command using a Python script

I have to run a docker container and then a command inside its workdir using a Python script.
I'm trying to do it as follows:
command = ['gnome-terminal', '-e', "bash -c 'sudo /home/mpark/Escriptori/SRTConverter/shell_docker.sh; echo b; exec $SHELL'"]
p = subprocess.Popen(command)
where 'sudo /home/mpark/Escriptori/SRTConverter/shell_docker.sh' is a shell script containing the docker run, executed with root privileges.
The first command, 'sudo /home/mpark/Escriptori/SRTConverter/shell_docker.sh', works fine, but the second one, 'echo b', which has to run inside the container, doesn't work.
Thank you!
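A common pattern here is to start the container and then run the second command with docker exec, rather than chaining it inside the outer bash string. A minimal sketch, assuming shell_docker.sh leaves a running container behind (the name my_container is hypothetical):

import subprocess

# start the container (the asker's shell script wraps the actual docker run)
subprocess.run(["sudo", "/home/mpark/Escriptori/SRTConverter/shell_docker.sh"], check=True)
# run the follow-up command inside the container instead of in the host shell
subprocess.run(["sudo", "docker", "exec", "my_container", "echo", "b"], check=True)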

Related

Interact with docker container in the middle of a bash script execution [in that container]

I want to start a bunch of docker containers with the help of a Python script. I am using the subprocess library for that. Essentially, I am trying to run this docker command
docker = f"docker run -it --rm {env_vars} {hashes} {results} {script} {pipeline} --name {project} {CONTAINER_NAME}"
in a new terminal window.
Popen(f'xterm -T {project} -geometry 150x30+100+350 -e {docker}', shell=True)
# or
Popen(f'xfce4-terminal -T {project} --minimize {hold} -e="{docker}"', shell=True)
The container's CMD looks like this. It's a bash script that runs other scripts and functions in them.
CMD ["bash", "/run_pipeline.sh"]
What I am trying to do is run an interactive shell (bash) from one of these nested scripts at a specific place in case of a failure (i.e. when some condition is met), so that I can investigate the problem in the script, do something to fix it and continue execution (or just exit if I can't fix it).
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
I want to do this fully automatically so I don't have to manually keep track of what is going on with a script and run docker attach... when needed, because I will run multiple such containers simultaneously.
The problem is that this "rescue" bash process exits immediately and I don't know why. I suspect it is something about ttys and such, but I've tried a bunch of fiddling around with it and had no success.
I tried different combinations of -i, -t and -d on the docker command, tried to use docker attach... right after starting the container with -d, and also tried starting the Python script directly from bash in a terminal (I am using PyCharm by default). I also tried the socat, screen, script and getty commands (in the nested bash script), but I don't know how to use them properly, so that didn't end well either. At this point I'm too confused to understand why it isn't working.
EDIT:
Adding a minimal NOT reproducible example (of what is not working) of how I am starting a container.
# ./Dockerfile
FROM debian:bookworm-slim
SHELL ["bash", "-c"]
CMD ["bash", "/run_pipeline.sh"]
# run 'docker build -t test .'
# ./small_example.py
from subprocess import Popen

if __name__ == '__main__':
    env_vars = "-e REPO_NAME=test -e PROJECT=test_test"
    script = '-v "$(pwd)"/run_pipeline.sh:/run_pipeline.sh:ro'
    docker = f"docker run -it --rm {env_vars} {script} --name test_name test"
    # Popen(f'xterm -T test -geometry 150x30+100+350 +hold -e "{docker}"', shell=True).wait()
    Popen(f'xfce4-terminal -T test --hold -e="{docker}"', shell=True).wait()
# ./run_pipeline.sh
# do some hard work
ls non/existent/path
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
It seems like the problem may be in the run_pipeline.sh script, but I don't want to upload it here; it's an even bigger mess than what I described earlier. I will say anyway that I am trying to run this thing - https://github.com/IBM/D2A.
So I just wanted some advice on the tty stuff that I am probably missing.
Run the initial container detached, with input and a tty.
docker run -dit --rm {env_vars} {script} --name test_name test
Monitor the container logs for the output, then attach to it.
Here is a quick script example (without a tty in this case, only because the demo uses echo for input):
#!/bin/bash
docker run --name test_name -id debian \
    bash -c 'echo start; sleep 10; echo "reading"; read var; echo "var=$var"'
while ! docker logs test_name | grep reading; do
    sleep 3
done
echo "attach input" | docker attach test_name
The complete output after it finishes:
$ docker logs test_name
start
reading
var=attach input
The whole process would be easier to control via the Docker Python SDK rather than having a layer of shell between Python and Docker.
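For illustration, here is a rough sketch of the same demo driven through the SDK. It assumes the docker Python package is installed; the final write to stdin goes through attach_socket, and the ._sock attribute used there is an implementation detail rather than a documented API:

import time
import docker

client = docker.from_env()
container = client.containers.run(
    "debian",
    ["bash", "-c", 'echo start; sleep 10; echo "reading"; read var; echo "var=$var"'],
    name="test_name", detach=True, stdin_open=True)

# poll the logs until the script reports it is blocked on read
while b"reading" not in container.logs():
    time.sleep(3)

# attach to the container's stdin and send the input
sock = container.attach_socket(params={"stdin": 1, "stream": 1})
sock._sock.sendall(b"attach input\n")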
As I said in a comment to Matt's answer, his solution does not work in my situation either. I think it's a problem with the script that I'm running. I think it's because some of the many shell processes (https://imgur.com/a/JiPYGWd) are taking up the allocated tty, but I don't know for sure.
So I came up with my own workaround. I simply block the execution of the script by creating a named pipe and then reading from it.
if [ $? -ne 0 ]; then
    echo Investigate _make_ manually: "$REPO_NAME"
    mkfifo "/tmp/mypipe_$githash" && echo "/tmp/mypipe_$githash" && read -r res < "/tmp/mypipe_$githash"
    if [ $res -ne 0 ]; then exit 33; fi
fi
Then I just launch a terminal emulator and run docker exec in it to start a new bash process. I do this with the help of the Docker Python SDK, monitoring the container's output so I know when to launch the terminal.
import docker
from subprocess import Popen

def monitor_container_output(container):
    line = b''
    for log in container.logs(stream=True):
        if log == b'\n':
            print(line.decode())
            if b'mypipe_' in line:
                Popen(f'xfce4-terminal -T {container.name} -e="docker exec -it {container.name} bash"', shell=True).wait()
            line = b''
            continue
        line += log

client = docker.from_env()
container = client.containers.run(IMAGE_NAME, name=project, detach=True, stdin_open=True, tty=True,
                                  auto_remove=True, environment=env_vars, volumes=volumes)
monitor_container_output(container)
After I finish my investigation of the problem in that new bash process, I send a "status code of the investigation" to tell the script to continue running or to exit.
echo 0 > "/tmp/mypipe_$githash"

Why does a dockerized script behave differently when I docker run it versus when I docker exec it?

I'm using a Python script to send a websocket notification, as suggested here.
The script is _wsdump.py and I have a script script.sh that is:
#!/bin/sh
set -o allexport
. /root/.env set
env
python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message" &
If I try dockerizing this script with this Dockerfile:
FROM python:3.8-slim-buster
RUN set -xe && \
    pip install --upgrade pip wheel && \
    pip3 install websocket-client
ENV TZ="Europe/Rome"
ADD utils/_wsdump.py /utils/_wsdump.py
ADD .env /root/.env
ADD script.sh /
ENTRYPOINT ["./script.sh"]
CMD []
I have a strange behaviour:
If I execute docker run -it --entrypoint=/bin/bash mycontainer and after that I run script.sh, everything works fine and I receive the notification.
If I run it with docker run mycontainer, I see no errors but the notification doesn't arrive.
What could be the cause?
Your script doesn't launch a long-running process; it tries to start something in the background and then completes. Since the script completes, and it's the container's ENTRYPOINT, the container exits as well.
The easy fix is to remove the & from the end of the last line of the script to cause the Python process to run in the foreground, and the container will stay alive until the process completes.
There's a more general pattern of an entrypoint wrapper script that I'd recommend adopting here. If you look at your script, it does two things: (1) set up the environment, then (2) run the actual main container command. I'd suggest using the Docker CMD for that actual command:
# end of Dockerfile
ENTRYPOINT ["./script.sh"]
CMD python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message"
You can end the entrypoint script with the magic line exec "$@" to run the CMD as the actual main container process. (Technically, it replaces the current shell script with a command constructed by replaying the command-line arguments; in a Docker context the CMD is passed as arguments to the ENTRYPOINT.)
#!/bin/sh
# script.sh
# set up the environment
. /root/.env set
# run the main container command
exec "$@"
With this in place you can debug the container setup by replacing only the command part, for example
docker run --rm your-image env
to print out its environment. The alternate command env will replace the Dockerfile CMD but the ENTRYPOINT will remain in place.
You install script.sh to the root dir /, but your ENTRYPOINT is defined to run the relative path ./script.sh.
Try changing ENTRYPOINT to reference the absolute path /script.sh instead.

Docker exec: python command doesn't work after changing directory

I want to execute some code inside a docker container. To do this, I execute this script:
#!/bin/bash
docker start mycontainer
docker exec mycontainer python hello.py
docker exec mycontainer cd modifiedDiffusion
docker exec mycontainer python hello.py
docker exec mycontainer sh executeModifiedDiffusion.sh
docker stop mycontainer
I created a simple print('hello world') script in the first directory the container puts you in, and a second script in the directory modifiedDiffusion.
The command cd modifiedDiffusion seems to work, because I did some tests with the ls command.
The first script runs, but the problem is that the second python script doesn't run. How do I solve this?
Each docker exec starts a new process in the container, so a cd in one exec does not carry over to the next. Run everything in a single shell instead:
#!/bin/bash
docker exec mycontainer bash -c "python hello.py; cd modifiedDiffusion; python hello.py; sh executeModifiedDiffusion.sh"
Ref: How to run 2 commands with docker exec
The above solution is recommended only for a few commands. As the number of commands grows, it is better to collect them in a separate bash script and run that script inside the Docker container, as sketched below. This provides more flexibility.
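A minimal sketch of that approach from Python, assuming the commands are collected in a local file named run_all.sh (a hypothetical name) and the container from the question is still running:

import subprocess

# copy the script into the container, then run it there in a single shell
subprocess.run(["docker", "cp", "run_all.sh", "mycontainer:/run_all.sh"], check=True)
subprocess.run(["docker", "exec", "mycontainer", "bash", "/run_all.sh"], check=True)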

Executing a bash command in a docker container from python script on host fails

I am trying to execute a bash command from a Python script; the command is wrapped in a docker exec call, as it needs to be executed inside a container.
This script is being executed on the host machine:
command_line_string = f"java -cp {omnisci_utility_path}:{driver_path} com.mapd.utility.SQLImporter" \
                      f" -u {omni_user} -p {omni_pass} -db {database_name} --port {omni_port}" \
                      f" -t {self.table_name} -su {denodo_user} -sp {denodo_pass}" \
                      f" -c {self.reader.connection_string}" \
                      f" -ss \"{read_data_query}\""
# in prod we have docker so we wrap it in docker exec:
if args.env_type == "prod":
    command_line_string = f"docker exec -t {args.container_id} /bin/bash -c \"{command_line_string}\""

command_line_args = shlex.split(command_line_string)
command_line_process = subprocess.Popen(
    command_line_args,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
process_output, _ = command_line_process.communicate()
However, when I execute the command with these arguments, I get a "Java usage" response suggesting that the java command I am invoking did not receive the correct parameters:
2021-09-01:09:19:09 [default_omnisci_ingestion.py:64] INFO - docker exec -t 5d874bffcdf8 /bin/bash -c "java -cp /omnisci/bin/omnisci-utility-5.6.5.jar
:/root/denodo-8-vdp-jdbcdriver.jar com.mapd.utility.SQLImporter -u admin -p mypass -db omnisci --port 6274 -t MyTable -su sourceDBuser -sp sourceDBpass -c jdbc:vdb://sourceDBURL -ss "SELECT
basin as Basin,
reservoir as Reservoir, cast(case when wkt like '%M%' Then wkt Else replace(wkt, 'POLYGON ', 'MULTIPOLYGON (') || ')' End as varchar(999999)) as wkt
FROM
schema.myTable;""
2021-09-01:09:19:10 [command_executor.py:10] INFO - Usage: java [options] <mainclass> [args...]
2021-09-01:09:19:10 [command_executor.py:10] INFO - (to execute a class)2021-09-01:09:19:10 [command_executor.py:10] INFO - or java [options] -jar <jarfile> [args...]
2021-09-01:09:19:10 [command_executor.py:10] INFO - (to execute a jar file)
...
I know that the problem is due to the use of quotes, but I just don't understand how to handle them.
For example, the java command I am nesting inside /bin/bash -c needs to be wrapped in quotes, like so:
/bin/bash -c "java -cp ..."
Note: the command works fine if I execute it in our dev env, where we do not have the "docker setup" and we execute the command as it is. On Stage the system runs in a container, which is why I need to use docker exec to invoke the same command in the container.
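One way this quoting problem is often avoided (a sketch, not the poster's code) is to build the argument list directly and let shlex.quote add the quoting only when the whole string has to be handed to bash -c; the variable names below are taken from the question:

import shlex
import subprocess

java_command = [
    "java", "-cp", f"{omnisci_utility_path}:{driver_path}",
    "com.mapd.utility.SQLImporter",
    "-u", omni_user, "-p", omni_pass, "-db", database_name, "--port", str(omni_port),
    "-t", self.table_name, "-su", denodo_user, "-sp", denodo_pass,
    "-c", self.reader.connection_string,
    "-ss", read_data_query,  # passed as one argument, no manual quoting needed
]

if args.env_type == "prod":
    # re-quote each argument so the single string survives the extra bash -c layer
    java_command = ["docker", "exec", "-t", args.container_id, "/bin/bash", "-c",
                    " ".join(shlex.quote(arg) for arg in java_command)]

process = subprocess.Popen(java_command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output, _ = process.communicate()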

How to run docker commands from Python?

I want to run a set of docker commands from Python.
I tried creating a script like the one below and running it from Python, using the paramiko SSHClient to connect to the machine where docker is running:
#!/bin/bash
# Get container ID
container_id="$(docker ps | grep hello | awk '{print $1}')"
docker exec -it $container_id sh -c "cd /var/opt/bin/ && echo $1 && echo $PWD && ./test.sh -q $1"
But docker exec ... never gets executed.
So I tried to run the Python script below directly on the machine where docker is running:
import subprocess
docker_run = "docker exec 7f34a9c1b78f /bin/bash -c \"cd /var/opt/bin/ && ls -a\"".split()
subprocess.call(docker_run, shell=True)
I get a message: "Usage: docker COMMAND..."
But I get the expected results if I run the command
docker exec 7f34a9c1b78f /bin/bash -c "cd /var/opt/bin/ && ls -a"
directly on the machine.
How do I run multiple docker commands from the Python script? Thanks!
You have a mistake in your call to subprocess.call. subprocess.call expects a command with a series of parameters. You've given it a list of parameter pieces.
This code:
docker_run = "docker exec 7f34a9c1b78f /bin/bash -c \"cd /var/opt/bin/ && ls -a\"".split()
subprocess.call(docker_run, shell=True)
Runs this:
subprocess.call([
    'docker', 'exec', '7f34a9c1b78f', '/bin/bash', '-c',
    '"cd', '/var/opt/bin/', '&&', 'ls', '-a"'
], shell=True)
Instead, I believe you want:
subprocess.call([
    'docker', 'exec', '7f34a9c1b78f', '/bin/bash', '-c',
    '"cd /var/opt/bin/ && ls -a"'  # Notice how this is only one argument.
], shell=True)
You might need to tweak that second call. I suspect you don't need the quotes ('cd /var/opt/bin/ && ls -a' might work instead of '"cd /var/opt/bin/ && ls -a"'), but I haven't tested it.
The following are a few methods that worked:
Remove double quotes:
subprocess.call([
    'docker', 'exec', '7f34a9c1b78f', '/bin/bash', '-c',
    'cd /opt/teradata/tdqgm/bin/ && ./support-archive.sh -q 6b171e7a-7071-4975-a3ac-000000000241'
])
If you are not sure how a command string should be split up to pass it to a subprocess function, use the shlex module:
https://docs.python.org/2.7/library/shlex.html#shlex.split
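For instance, a quick illustration of shlex.split handling the quoting (the container ID is the one from the question):

import shlex
import subprocess

cmd = shlex.split('docker exec 7f34a9c1b78f /bin/bash -c "cd /var/opt/bin/ && ls -a"')
# cmd == ['docker', 'exec', '7f34a9c1b78f', '/bin/bash', '-c', 'cd /var/opt/bin/ && ls -a']
subprocess.call(cmd)  # note: no shell=True when passing an argument list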
