I'm trying to pass 2 parameters to a docker container for a dash app (via a shell script). Passing one parameter works, but two doesn't. Here's what happens when I pass two parameters:
command:
sudo sh create_dashboard.sh 6 4
Error:
creating docker
Running for parameter_1: 6
Running for parameter_2: 4
usage: app.py [-h] [-g parameter_1] [-v parameter_2]
app.py: error: argument -g/--parameter_1: expected one argument
The shell script:
echo "creating docker"
docker build -t dash-example .
echo "Running for parameter_1: $1 "
echo "Running for parameter_2: $2 "
docker run --rm -it -p 8080:8080 --memory=10g dash-example $1 $2
Dockerfile:
FROM python:3.8
WORKDIR /app
COPY src/requirements.txt ./
RUN pip install -r requirements.txt
COPY src /app
EXPOSE 8080
ENTRYPOINT [ "python", "app.py", "-g", "-v"]
When I use this command:
sudo sh create_dashboard.sh 6
the docker container runs perfectly, with parameter_2 being None.
You can pass a command into the shell of a container like this (note that docker options such as --memory=10g must come before the image name, and overriding an ENTRYPOINT requires --entrypoint):
docker run --rm -it -p 8080:8080 --memory=10g --entrypoint sh dash-example -c "python app.py -g $1 -v $2"
So it allows arguments and any other command.
When you docker run ... dash-example $1 $2, the additional parameters are interpreted as the "command" the container should run. Since your image has an ENTRYPOINT, the words of the command are just tacked on to the end of the words of the entrypoint (see Understand how CMD and ENTRYPOINT interact in the Dockerfile documentation). There's no way to cause the words of one command to be interspersed with the words of another; you are effectively getting a command line of
python app.py -g -v 6 4
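Run outside Docker, that same flattened line reproduces the error from the question: argparse sees the next flag where -g's value should be.
python app.py -g -v 6 4
app.py: error: argument -g/--parameter_1: expected one argument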
The approach I'd recommend here is to not use an ENTRYPOINT at all. Make sure you can directly run the application script (its first line should be #!/usr/bin/env python3, it should be executable) and make the image's default CMD be to run the script:
FROM python:3.9
...
# RUN chmod +x app.py # if needed
# no ENTRYPOINT at all
CMD ["./app.py"] # finds "python" via the shebang line
Then your wrapper can supply a complete command line, including the options you need to run:
#!/bin/sh
docker run --rm -it -p 8080:8080 --memory=10g dash-example \
./app.py -g "$1" -v "$2"
(There is an alternate "container as command" pattern, where the ENTRYPOINT contains the command to run and the CMD its options. This can lead to awkward docker run --entrypoint command lines for routine debugging tasks, and if the command itself is short it doesn't really save you a lot. You'd still need to repeat the -g and -v options in the wrapper.)
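For example, with the command baked into the ENTRYPOINT, even getting an interactive shell for routine debugging needs an explicit override (a sketch against the same dash-example image):
docker run --rm -it --entrypoint /bin/sh dash-example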
My folder structure looked like this:
.
├── Dockerfile
└── src
    ├── main.py
    └── requirements.txt
My Dockerfile looked like this:
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
CMD [ "python", "main.py"]
When I ran these commands:
docker build --tag free .
docker run free
my main.py file ran and gave the correct print statements as well. Now, I have added another file tests.py in the src folder. I want to run tests.py first and then main.py.
I tried modifying the CMD within my Dockerfile like this:
CMD [ "python", "tests.py"] && [ "python", "main.py"]
but then it gives me the print statements from only the first tests.py file.
I read about docker-compose and added this docker-compose.yml file to the root folder:
version: '3'
services:
  tests:
    image: free
    command: >
      /bin/sh -c 'python tests.py'
  main:
    image: free
    command: >
      /bin/sh -c 'python main.py'
then I changed my Dockerfile by removing the CMD:
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
Then I ran the following commands:
docker compose build
docker compose run tests
docker compose run main
When I run these commands separately, I get the correct print statements for both tests and main. However, I am not sure if I am using docker-compose correctly.
Am I supposed to run both scripts separately? Or is there a way to run one after another using a single docker command?
What should my Dockerfile look like if I am running the Python scripts from the docker-compose.yml instead?
Edit:
Ideally looking for solutions based on docker-compose
In the Bourne shell, in general, you can run two commands in sequence by putting && between them. It sounds like you're already aware of this.
# without Docker, at a normal shell prompt
python tests.py && python main.py
The Dockerfile CMD has two syntactic forms. The JSON-array form does not run a shell, and so it is slightly more efficient and has slightly more consistent escaping rules. If it's not a JSON array then Docker automatically runs it via a shell. So for your use you can use the shell form:
CMD python tests.py && python main.py
In comments to other answers you ask about providing this as an override in the docker-compose.yml file. Compose will not normally run a shell for you, so you need to explicitly specify it as part of the command: override.
command: /bin/sh -c 'python tests.py && python main.py'
Your Dockerfile should generally specify a CMD and the docker-compose.yml often will not include a command:. This makes it easier to run the image in other contexts (via docker run without Compose; in Kubernetes) since you won't have to retype the command every different way you want to run the container. The entrypoint wrapper pattern highlighted in #sytech's answer is very useful in general and it's easy to add to a container that uses a CMD without an ENTRYPOINT; but it requires the Dockerfile to use CMD as a normal well-formed shell command.
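With the CMD baked into the image, one-off overrides stay easy on the command line without touching the compose file (a sketch, assuming the service is named main as above):
docker compose run --rm main                   # runs the image's default CMD
docker compose run --rm main python tests.py   # one-off: run only the tests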
You have to change CMD to ENTRYPOINT, and run the first script in the background using &.
ENTRYPOINT ["/docker_entrypoint.sh"]
docker_entrypoint.sh
#!/bin/bash
set -e
exec python tests.py &
exec python main.py
In general, it is a good rule of thumb that a container should run only a single process, and that essential process should be PID 1.
Using an entrypoint can help you do multiple things at runtime and optionally run user-defined commands using exec, as recommended by the best practices guide.
For example, if you always want the tests to run whenever the container starts, the entrypoint can run them first and then execute the command defined in CMD.
First, create an entrypoint script (be sure to make it executable with chmod +x):
#!/usr/bin/env bash
# always run tests first
python /src/tests.py
# then run user-defined command
exec "$#"
Then configure the Dockerfile to copy the script and set it as the entrypoint:
#...
COPY entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["python", "main.py"]
Then when you build an image from this Dockerfile and run it, the entrypoint will first execute the tests, then run the CMD, which runs main.py.
The command can also still be overridden by the user when running the image like docker run ... myimage <new command> which will still result in the entrypoint tests being executed, but the user can change the command being run.
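For illustration (the image name myimage and the override script here are hypothetical):
docker run myimage                   # entrypoint runs tests.py, then python main.py
docker run myimage python other.py   # entrypoint runs tests.py, then the override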
You can achieve this by creating a bash script (let's name it entrypoint.sh) which contains the python commands. If you want, you can run those as background processes.
#!/usr/bin/env bash
set -e
python tests.py
python main.py
Edit your Dockerfile as follows:
FROM python:3.8-slim-buster

# Create the workdir
RUN mkdir /code
WORKDIR /code
ENV PYTHONPATH=/code

# upgrade pip here if you like

COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the code
COPY . .
RUN chmod +x entrypoint.sh

ENTRYPOINT ["./entrypoint.sh"]
In the docker compose file, add the following line to the service.
entrypoint: [ "./entrypoint.sh" ]
Have you tried this in your docker-compose.yaml?
version: '3'
services:
  main:
    image: free
    command: >
      /bin/sh -c 'python3 tests.py & python3 main.py &'
both will run in the background
then run in terminal
docker-compose up --build
I have created a Docker image with a Dockerfile where the ENTRYPOINT is as follows:
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "python", "./myprojectmain.py", "--config", "./config.py"]
When I run I use the command:
docker run myproject
all is fine it seems.
However I have a secondary .py file in the root of the project called setup.py. The purpose of this file is to update some of the config and json files after getting some input from the user.
Is there a way to run this secondary file (setup.py) or do I need to create a whole new image (which seems ridiculous).
Thanks
Well... if you got an image, you don't have to use the entrypoint... just run your script like this:
docker run image python /some/path/myscript.py
or
docker run image /bin/bash -c "cd /some/path && python myscript.py"
or, keeping an entrypoint that holds only the interpreter and passing the script as the command:
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "python"]
docker run image ./myprojectmain.py --config ./config.py
docker run image ./setup.py
You can straightforwardly provide an alternate command after the image name in the docker run command. It's harder to override the entrypoint, though. If you have both a command and an entrypoint then they are combined together into a single command.
This workflow is easiest if your Dockerfile has a CMD, and that's a complete runnable shell command. If you have an ENTRYPOINT at all, it is some kind of wrapper that does some initial setup and then runs the command it's given as additional arguments. In this particular setup, conda run with its arguments seems to meet that need and have the correct form, so you could say
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "--"]
CMD ["python", "./myprojectmain.py", "--config", "./config.py"]
(Note that conda run seems to have some issues; you could probably simulate it using a custom entrypoint wrapper script or use a pip-based non-virtual-environment workflow instead.)
If you split the ENTRYPOINT and CMD like this, then you can run
docker run myproject \
python setup.py
The alternate python setup.py command will be appended to the conda run entrypoint command.
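That is, the container effectively executes the combined words of the ENTRYPOINT and the new command:
conda run --no-capture-output -n myproject -- python setup.py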
... update some of the config and json files ...
It's often a good idea to inject these into your container using a bind mount. Depending on how exactly the files get set up, you may be able to initialize them from the host environment, without Docker
./setup.py
docker run -d -v $PWD/config:/app/config myproject
but if they are sensitive to the Docker environment in some way, you could do it in Docker too; make sure to mount the same configuration storage into both containers.
docker network create mynet
docker volume create config
docker run --rm --net mynet -v config:/app/config myproject ./setup.py
docker run -d -p 8000:8000 --net mynet -v config:/app/config myproject
We have a very small Python script which we need to run in a container. The following two commands are working fine.
I want to merge them into one. The ultimate goal is that the container should come up (based on the image), run the Python script, and exit by itself.
docker run --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d pythonimage:latest
docker exec -it mycontainer python3 /usr/src/app/subfolder/createfile.py
I tried -c "/bin/bash python3 /usr/src/app/subfolder/createfile.py", but that didn't work.
You can add a file called Dockerfile (exactly that name, with no extension) with the contents below:
FROM ubuntu:20.04
# ubuntu:20.04 does not ship python3, so install it first
RUN apt-get update && apt-get install -y python3
ENV run=/usr/src/app/subfolder/createfile.py
CMD ["/bin/bash", "-c", "python3 ${run}"]
Then run:
docker build -t YOUR_IMAGE_NAME .   # the tag should be a new name, like pythonimage2
Now run it:
docker run --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d pythonimage2:latest
So now the command the container runs at startup is python3 /usr/src/app/subfolder/createfile.py.
Since the path is kept in a variable, you only need to modify that one line each time.
You can even do this in your Dockerfile:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3
ENV path=/usr/src/app/subfolder
ENV file=$path/createfile.py
CMD ["/bin/bash", "-c", "python3 ${file}"]
Now you just need to modify the file variable. If any of your sub-directories change, you can remove that part from the path variable and add it to the file variable.
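Because bash expands ${file} when the container starts, you can also point the same image at a different script at run time, without rebuilding (a sketch; the alternate path here is hypothetical):
docker run -e file=/usr/src/app/otherfolder/otherfile.py -v /opt/testuser/pythoncode/:/usr/src/app/ pythonimage2:latest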
I am trying to do the following in a Makefile recipe: get the server container IP using a Python script, build the command to run within the docker container, and run that command in the container.
test:
	SIP=$(shell python ./scripts/script.py get-server-ip)
	CMD="iperf3 -c ${SIP} -p 33445"
	docker exec server ${CMD}
I get this
$ make test
SIP=172.17.0.6
CMD="iperf3 -c -p 33445"
docker exec server
"docker exec" requires at least 2 arguments.
See 'docker exec --help'.
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
make: *** [test] Error 1
I ended up with something like this: the lines are joined with ; \ so everything runs in a single shell, and the shell variables are escaped as $${...} so make doesn't expand them first (in the original attempt, make expanded ${SIP} as an undefined make variable, and each recipe line ran in its own shell).
test:
	SERVER_IP=$(shell python ./scripts/script.py get-server-ip); \
	SERVER_CMD="iperf3 -s -p ${PORT} -4 --logfile s.out"; \
	CLIENT_CMD="iperf3 -c $${SERVER_IP} -p ${PORT} -t 1000 -4 --logfile c.out"; \
	echo "Server Command: " $${SERVER_CMD}; \
	echo "Client Command: " $${CLIENT_CMD}; \
	docker exec -d server $${SERVER_CMD}; \
	docker exec -d client $${CLIENT_CMD};
This seems to work ok. Would love to hear if there are other ways of doing this.
You could write something like this. Here I used a target-specific variable, assuming the IP address is required only in this rule. iperf_command is defined as a variable, since its format looks fixed except for the IP address, which is injected via the call function. Also, as the rule doesn't seem to produce the target as a file, I added a .PHONY declaration as well.
iperf_command = iperf3 -c $1 -p 33445

.PHONY: test
test: iperf_server_ip = $(shell python ./scripts/script.py get-server-ip)
test:
	docker exec server $(call iperf_command,$(iperf_server_ip))
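Running make test then executes the whole thing in one shell; with the container IP from the question's output it expands to:
docker exec server iperf3 -c 172.17.0.6 -p 33445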
I tried to run a docker image but have this problem. All the files are saved in the directory "my_new_docker_build". This is the Dockerfile:
FROM python:2.7.14
RUN mkdir /my_new_docker_build
WORKDIR /my_new_docker_build
ADD . /my_new_docker_build/
EXPOSE 5984
CMD ["python", "/my_new_docker_build/py_couchdb.py"]
And this is my Python code, called "py_couchdb.py":
from os import system
# Launch the couchdb container with a custom config volume from the command line
system("docker run --name my-couchdb -v /my/custom-config-dir:/opt/couchdb/etc/local.d -d couchdb")
# Make CouchDB visible to the outside world
system("docker run -p 5984:5984 -d couchdb")