Assign variable read from file in Makefile recipe - python

I am trying to do the following in a Makefile recipe: get the server container IP using a Python script, build the command to run within the docker container, and then run that command in the container.
test:
SIP=$(shell python ./scripts/script.py get-server-ip)
CMD="iperf3 -c ${SIP} -p 33445"
docker exec server ${CMD}
I get this
$ make test
SIP=172.17.0.6
CMD="iperf3 -c -p 33445"
docker exec server
"docker exec" requires at least 2 arguments.
See 'docker exec --help'.
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
make: *** [test] Error 1

I ended up with something like this, joining everything into a single shell command since each recipe line otherwise runs in its own shell.
SERVER_IP=$(shell python ./scripts/script.py get-server-ip); \
SERVER_CMD="iperf3 -s -p ${PORT} -4 --logfile s.out"; \
CLIENT_CMD="iperf3 -c $${SERVER_IP} -p ${PORT} -t 1000 -4 --logfile c.out"; \
echo "Server Command: " $${SERVER_CMD}; \
echo "Client Command: " $${CLIENT_CMD}; \
docker exec -d server $${SERVER_CMD}; \
docker exec -d client $${CLIENT_CMD};
This seems to work ok. Would love to hear if there are other ways of doing this.
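One other option, a minimal sketch assuming GNU make 3.82 or later: the .ONESHELL special target makes all lines of a recipe run in a single shell, so a plain shell variable survives from one line to the next without chaining everything with ; \ (recipe lines still need to be tab-indented, and $$ is needed so make passes a literal $ through to the shell):
.ONESHELL:
test:
	SIP=$$(python ./scripts/script.py get-server-ip)
	docker exec server iperf3 -c $$SIP -p 33445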

You could write something like this. Here I used a target-specific variable, assuming the IP address is required only in this rule. iperf_command is defined as a variable since its format looks fixed apart from the IP address, which is injected via the call function. Also, as the rule doesn't seem to be meant to produce the target as a file, I added a .PHONY declaration as well.
iperf_command = iperf3 -c $1 -p 33445
.PHONY: test
test: iperf_server_ip = $(shell python ./scripts/script.py get-server-ip)
test:
docker exec server $(call iperf_command,$(iperf_server_ip))
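With the IP address the script printed in the question's output (172.17.0.6), the recipe line expands to:
docker exec server iperf3 -c 172.17.0.6 -p 33445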

Related

unexpected EOF while looking for matching `"' in bash while I'm trying to execute a command from outside a Docker container

I don't know what is happening here. I'm executing a script that contains the following lines:
var="${comand} bash -c \"export PATH=/local/Miniconda3/bin:$PATH >> ~/.bashrc; /local/Miniconda3/bin/python3 scripts/DNAscan.py ${var}\""
echo "${var}"
$var
The output of that line is:
sudo docker exec -it image bash -c "PATH=/local//Miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin >> ~/.bashrc; /local/Miniconda3/bin/python3 scripts/DNAscan.py -format fastq -in input_output/input/test_data.1.fq.gz -in2 input_output/input/test_data.2.fq.gz -reference hg38 -alignment -variantcalling -annotation -iobio -out input_output/output/ -BED"
But when I try to execute it from the script, it gives me the following error:
>>: -c: line 1: unexpected EOF while looking for matching `"'
>>: -c: line 2: syntax error: unexpected end of file
What can I do to solve this?
You don't really want docker exec here at all. This is a debugging tool that you can use to inspect a running container; I'd use it the same way I'd use a language-specific debugger like Python's pdb.
If you want to run a one-off command like this, you can use docker run. For example,
# --rm: clean up this container when done
# -v: make local files available in the container
# my-image is the image name to run; scripts/DNAscan.py is the command to run inside it
docker run \
  --rm \
  -v "$PWD/data:/app/input_output" \
  my-image \
  scripts/DNAscan.py \
  -format fastq \
  -in input_output/input/test_data.1.fq.gz \
  -in2 input_output/input/test_data.2.fq.gz \
  -reference hg38 \
  -alignment \
  -variantcalling \
  -annotation \
  -iobio \
  -out input_output/output/ \
  -BED
The command you show tries to write a .bashrc file inside the container to set up $PATH. However, most paths that launch Docker containers don't actually read .bashrc at all. In your image's Dockerfile, you can use the ENV directive to set $PATH instead.
ENV PATH=/local/Miniconda3/bin:$PATH
Now we've gotten rid of the bash -c wrapper entirely, which will resolve your quoting problem. If you run this often, you can put the docker run command in a small wrapper script:
#!/bin/sh
docker run --rm -v "$PWD/data:/app/input_output" my-image "$@"
The "$@" syntax passes the wrapper script's arguments through as the docker run command's arguments, preserving any spaces or other punctuation in the argument list.

Executing a bash command in a docker container from python script on host fails

I am trying to execute a bash command from a Python script; the command is wrapped in a docker exec call because it needs to run inside a container.
This script is executed on the host machine:
command_line_string = f"java -cp {omnisci_utility_path}:{driver_path} com.mapd.utility.SQLImporter" \
f" -u {omni_user} -p {omni_pass} -db {database_name} --port {omni_port}" \
f" -t {self.table_name} -su {denodo_user} -sp {denodo_pass}" \
f" -c {self.reader.connection_string}"\
f" -ss \"{read_data_query}\""
# in prod we have docker so we wrap it in docker exec:
if(args.env_type == "prod"):
    command_line_string = f"docker exec -t {args.container_id} /bin/bash -c \"{command_line_string}\""

command_line_args = shlex.split(command_line_string)
command_line_process = subprocess.Popen(
    command_line_args,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
process_output, _ = command_line_process.communicate()
However, when I execute the command with the arguments supplied, I get a Java "Usage" response, suggesting that the java command I am invoking did not receive the correct parameters:
2021-09-01:09:19:09 [default_omnisci_ingestion.py:64] INFO - docker exec -t 5d874bffcdf8 /bin/bash -c "java -cp /omnisci/bin/omnisci-utility-5.6.5.jar
:/root/denodo-8-vdp-jdbcdriver.jar com.mapd.utility.SQLImporter -u admin -p mypass -db omnisci --port 6274 -t MyTable -su sourceDBuser -sp sourceDBpass -c jdbc:vdb://sourceDBURL -ss "SELECT
basin as Basin,
reservoir as Reservoir, cast(case when wkt like '%M%' Then wkt Else replace(wkt, 'POLYGON ', 'MULTIPOLYGON (') || ')' End as varchar(999999)) as wkt
FROM
schema.myTable;""
2021-09-01:09:19:10 [command_executor.py:10] INFO - Usage: java [options] <mainclass> [args...]
2021-09-01:09:19:10 [command_executor.py:10] INFO - (to execute a class)2021-09-01:09:19:10 [command_executor.py:10] INFO - or java [options] -jar <jarfile> [args...]
2021-09-01:09:19:10 [command_executor.py:10] INFO - (to execute a jar file)
...
I know that the problem is due to the use of quotes, but I just don't understand how to handle them.
For example, the java command I am nesting inside /bin/bash -c needs to be wrapped in quotes, like so:
/bin/bash -c "java -cp ..."
Note: the command works fine when I execute it in our dev environment, where we don't have the Docker setup and run the command as is, but on stage the system runs in a container, which is why I need to use docker exec to invoke the same command in the container.
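For reference, a minimal sketch of the quoting this needs when typed directly into a shell (container ID and classpath taken from the log above, query elided): the inner quotes around the -ss argument have to be escaped so they survive the outer pair:
docker exec -t 5d874bffcdf8 /bin/bash -c \
"java -cp /omnisci/bin/omnisci-utility-5.6.5.jar:/root/denodo-8-vdp-jdbcdriver.jar com.mapd.utility.SQLImporter -u admin -p mypass -db omnisci --port 6274 -t MyTable -su sourceDBuser -sp sourceDBpass -c jdbc:vdb://sourceDBURL -ss \"SELECT ...\""
The same escaped quotes would have to end up in the string Python builds; an alternative is to skip the /bin/bash -c wrapper and pass the java command and its arguments to docker exec as a list, so no nested quoting is needed at all.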

Unable to run python file in Docker run - docker: invalid reference format and docker: error response from daemon

I am currently following this tutorial. I want to run preprocessing.py with 3 arguments in a docker run command on Windows, and I have the image pulled on my local machine. This is my file structure:
-FYP
    -data
        -lfw
        -etc
    -medium_facenet_tutorial
        -preprocessing.py
        -__init__.py
        -align_dlib.py
        -shape_predictor_68_face_landmarks.dat
    -Dockerfile
    -requirements.txt
Here are the docker run commands I ran, on Git Bash and Windows PowerShell. I prefer Git Bash, but at this point any advice for either will be appreciated.
Git Bash
winpty docker run -w "$PWD" \
-v "$PWD":/FYP \
-e PYTHONPATH="$PYTHONPATH":/FYP \
-it colemurray/medium-facenet-tutorial python3 /FYP/medium_facenet_tutorial/preprocess.py \
--input-dir /FYP/data \
--output-dir /FYP/output/intermediate \
--crop-dim 180
Error: Error response from daemon: the working directory 'C:/Users/JIA SHENG/Documents/My Projects/FYP' is invalid, it needs to be an absolute path.
This is the error I got before changing -e PYTHONPATH="$PYTHONPATH:/medium-facenet-tutorial" \ to -e PYTHONPATH="$PYTHONPATH":/FYP \:
docker: invalid reference format: repository name must be lowercase.
Windows PowerShell
docker run -w "$PWD" `
-v "$PWD":/FYP `
-e PYTHONPATH="$PYTHONPATH":/FYP `
-it colemurray/medium-facenet-tutorial python3 /FYP/medium_facenet_tutorial/preprocess.py `
--input-dir /FYP/data `
--output-dir /FYP/output/intermediate `
--crop-dim 180
Error: docker: invalid reference format: repository name must be lowercase.
I understand that the invalid reference format error suggests that Docker cannot convert the string I've provided into an image name, but I am too new to Docker to pinpoint what went wrong while adapting the docker run command from the original source. The main changes I made include adding a -w flag to set the working directory and switching the folder name to 'FYP'.
Original Docker run command:
docker run -v $PWD:/medium-facenet-tutorial \
-e PYTHONPATH=$PYTHONPATH:/medium-facenet-tutorial \
-it colemurray/medium-facenet-tutorial python3 /medium-facenet-tutorial/medium_facenet_tutorial/preprocess.py \
--input-dir /medium-facenet-tutorial/data \
--output-dir /medium-facenet-tutorial/output/intermediate \
--crop-dim 180
Any advice on why the Docker run command is invalid will be greatly appreciated, thank you.
Edit: The whole file structure, including preprocessing.py, is provided in the tutorial, in this GitHub repository.
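For comparison, a minimal PowerShell sketch along these lines, assuming the goal is to mount the project at /FYP and use that as the working directory inside the container: -w has to name a path inside the container rather than the Windows host path, and quoting the whole volume spec keeps the space in "My Projects" from splitting it into separate words (which is what makes Docker read part of the path as an image name). The host $PYTHONPATH is dropped here since a Windows path has no meaning inside the Linux container:
# -w points at the in-container path; the -v argument is quoted as one word
docker run -w /FYP `
    -v "${PWD}:/FYP" `
    -e PYTHONPATH=/FYP `
    -it colemurray/medium-facenet-tutorial python3 /FYP/medium_facenet_tutorial/preprocess.py `
    --input-dir /FYP/data `
    --output-dir /FYP/output/intermediate `
    --crop-dim 180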

Passing multiple parameters to docker container

I'm trying to pass 2 parameters to a docker container for a dash app (via a shell script). Passing one parameter works, but two doesn't. Here's what happens when I pass two parameters:
command:
sudo sh create_dashboard.sh 6 4
Error:
creating docker
Running for parameter_1: 6
Running for parameter_2: 4
usage: app.py [-h] [-g parameter_1] [-v parameter_2]
app.py: error: argument -g/--parameter_1: expected one argument
The shell script:
echo "creating docker"
docker build -t dash-example .
echo "Running for parameter_1: $1 "
echo "Running for parameter_2: $2 "
docker run --rm -it -p 8080:8080 --memory=10g dash-example $1 $2
Dockerfile:
FROM python:3.8
WORKDIR /app
COPY src/requirements.txt ./
RUN pip install -r requirements.txt
COPY src /app
EXPOSE 8080
ENTRYPOINT [ "python", "app.py", "-g", "-v"]
When I use this command:
sudo sh create_dashboard.sh 6
the docker container runs perfectly, with parameter_2 being None.
You can pass a command into the shell of a container like this:
docker run --rm -it -p 8080:8080 dash-example sh -c "--memory=10g dash-example $1 $2"
So it allows arguments and any other command.
When you docker run ... dash-example $1 $2, the additional parameters are interpreted as the "command" the container should run. Since your image has an ENTRYPOINT, the words of the command are just tacked on to the end of the words of the entrypoint (see Understand how CMD and ENTRYPOINT interact in the Dockerfile documentation). There's no way to cause the words of one command to be interspersed with the words of another; you are effectively getting a command line of
python app.py -g -v 6 4
The approach I'd recommend here is to not use an ENTRYPOINT at all. Make sure you can directly run the application script (its first line should be #!/usr/bin/env python3, it should be executable) and make the image's default CMD be to run the script:
FROM python:3.9
...
# RUN chmod +x app.py # if needed
# no ENTRYPOINT at all
# finds "python" via the shebang line
CMD ["./app.py"]
Then your wrapper can supply a complete command line, including the options you need to run:
#!/bin/sh
docker run --rm -it -p 8080:8080 --memory=10g dash-example \
./app.py -g "$1" -v "$2"
(There is an alternate "container as command" pattern, where the ENTRYPOINT contains the command to run and the CMD its options. This can lead to awkward docker run --entrypoint command lines for routine debugging tasks, and if the command itself is short it doesn't really save you a lot. You'd still need to repeat the -g and -v options in the wrapper.)
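A rough sketch of that alternative, assuming the Dockerfile sets ENTRYPOINT ["./app.py"] and leaves the -g/-v options out of the image: the wrapper then supplies only the options, which Docker appends after the entrypoint:
# with ENTRYPOINT ["./app.py"], these arguments become "./app.py -g $1 -v $2" inside the container
docker run --rm -it -p 8080:8080 --memory=10g dash-example -g "$1" -v "$2"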

Execute command on docker container from remote machine

I have docker running on Host. There are two docker containers on this host i.e container_1 and container_2. Now I want to execute some commands on container_1 from my remote dev machine.
The commands are pipe-separated, i.e.:
sudo docker exec -it container_1 sudo find <dir> -type f -iname '*_abc_*' -print0 | du --files0-from - -b | awk 'BEGIN{sum=0} {sum+=$1} END{print sum}'
With the above command, only the part up to the first pipe executes in the docker container; the rest of the pipeline executes on the host.
I am using the Python Fabric API to execute this from the remote machine.
Is there any way to execute the full command in the container from the remote machine?
That's because the pipe is processed by your host shell, so only the first segment reaches the container and the rest of the pipeline runs on the host. Wrap the whole pipeline in a single bash -c string so it is parsed inside the container; note the \$1 so the host shell doesn't expand the awk field reference before it gets there:
sudo docker exec -it container_1 bash -c "sudo find <dir> -type f -iname '*_abc_*' -print0 | du --files0-from - -b | awk 'BEGIN{sum=0} {sum+=\$1} END{print sum}'"
