import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
    '--strat',
    type=str,
)
args = parser.parse_args()
strat = args.strat
I would like to write my docker-compose.yml file so that I can just pass my argument from there.
I did
version: "3.3"
services:
mm:
container_name: mm
stdin_open: true
build: .
context: .
dockerfile: Dockerfile
args:
strat: 1
and my Dockerfile:
FROM python:3.10.7
COPY . .
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
CMD python3 main.py
But it does not work.
Any idea what I should change, please?
You need to update the Dockerfile to process the build arguments and remap them to environment variables so they can be used by CMD. Try something like the following:
FROM python:3.10.7
ARG strat
# ...
ENV strat=${strat}
CMD python3 main.py --strat=$strat
But personally I would consider switching completely to environment variables instead of build arguments (so there is no need to rebuild the image for every argument change).
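A minimal sketch of that environment-variable approach (the variable name strat comes from the question; the default and the override value are just examples):
FROM python:3.10.7
COPY . .
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
ENV strat=1
CMD python3 main.py --strat=$strat
and in docker-compose.yml:
version: "3.3"
services:
  mm:
    build: .
    environment:
      strat: 2  # changed at run time, no image rebuild needed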
The args thingy is for "build time" only.
If you want to run your already built image with different arguments, use environment variables or just pass them as you would with a regular binary.
like: docker compose run backend ./manage.py makemigrations
Here you see that ./manage.py and makemigrations are two arguments passed to the backend service defined in docker-compose.yml.
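Applied to the compose file from the question (a sketch; the service name mm and the --strat flag come from there), that would look like:
docker compose run mm python3 main.py --strat=1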
My folder structure looked like this:
My Dockerfile looked like this:
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
CMD [ "python", "main.py"]
When I ran these commands:
docker build --tag FinTechExplained_Python_Docker .
docker run free
my main.py file ran and gave the correct print statements as well. Now, I have added another file tests.py in the src folder. I want to run tests.py first and then main.py.
I tried modifying the CMD within my Dockerfile like this:
CMD [ "python", "test.py"] && [ "python", "main.py"]
but then it gives me the print statements from only the first test.py file.
I read about docker-compose and added this docker-compose.yml file to the root folder:
version: '3'
services:
  tests:
    image: free
    command: >
      /bin/sh -c 'python tests.py'
  main:
    image: free
    command: >
      /bin/sh -c 'python main.py'
Then I changed my Dockerfile by removing the CMD:
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
Then I ran the following commands:
docker compose build
docker compose run tests
docker compose run main
When I run these commands separately, I get the correct print statements for both tests and main. However, I am not sure if I am using docker-compose correctly or not.
Am I supposed to run both scripts separately? Or is there a way to run one after another using a single docker command?
How is my Dockerfile supposed to look if I am running the python scripts from the docker-compose.yml instead?
Edit:
Ideally looking for solutions based on docker-compose
In the Bourne shell, in general, you can run two commands in sequence by putting && between them. It sounds like you're already aware of this.
# without Docker, at a normal shell prompt
python test.py && python main.py
The Dockerfile CMD has two syntactic forms. The JSON-array form does not run a shell, and so it is slightly more efficient and has slightly more consistent escaping rules. If it's not a JSON array then Docker automatically runs it via a shell. So for your use case you can use the shell form:
CMD python test.py && python main.py
In comments to other answers you ask about providing this as an override in the docker-compose.yml file. Compose will not normally run a shell for you, so you need to explicitly specify it as part of the command: override.
command: /bin/sh -c 'python test.py && python main.py'
Your Dockerfile should generally specify a CMD and the docker-compose.yml often will not include a command:. This makes it easier to run the image in other contexts (via docker run without Compose; in Kubernetes) since you won't have to retype the command every different way you want to run the container. The entrypoint wrapper pattern highlighted in #sytech's answer is very useful in general and it's easy to add to a container that uses a CMD without an ENTRYPOINT; but it requires the Dockerfile to use CMD as a normal well-formed shell command.
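Putting that advice together, a sketch of what the two files could look like (the layout and image name come from the question; the script names follow the answer above):
# Dockerfile
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
CMD python test.py && python main.py

# docker-compose.yml
version: '3'
services:
  main:
    build: .
    image: free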
You have to change CMD to ENTRYPOINT, and run the first script in the background using &.
ENTRYPOINT ["/docker_entrypoint.sh"]
docker_entrypoint.sh
#!/bin/bash
set -e
python tests.py &
exec python main.py
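For this to work, the script also needs to be copied into the image and made executable; a sketch based on the question's Dockerfile (it assumes docker_entrypoint.sh sits next to the Dockerfile in the build context):
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
COPY docker_entrypoint.sh /docker_entrypoint.sh
RUN chmod +x /docker_entrypoint.sh
ENTRYPOINT ["/docker_entrypoint.sh"]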
In general, it is a good rule of thumb that a container should run only a single process, and that essential process should be PID 1.
Using an entrypoint can help you do multiple things at runtime and optionally run user-defined commands using exec, as according to the best practices guide.
For example, say you always want the tests to run whenever the container starts, and then execute whatever command is defined in CMD.
First, create an entrypoint script (be sure to make it executable with chmod +x):
#!/usr/bin/env bash
# always run tests first
python /src/tests.py
# then run user-defined command
exec "$#"
Then configure the dockerfile to copy the script and set it as the entrypoint:
#...
COPY entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["python", "main.py"]
Then when you build an image from this Dockerfile and run it, the entrypoint will first execute the tests and then run the command to run main.py.
The command can also still be overridden by the user when running the image like docker run ... myimage <new command> which will still result in the entrypoint tests being executed, but the user can change the command being run.
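Since the question ideally wants a docker-compose based setup, the same kind of override also works from the Compose file (a sketch; the image name free comes from the question and the alternative script name is purely illustrative):
version: '3'
services:
  main:
    image: free
    # the entrypoint tests still run first; only the command is replaced
    command: ["python", "some_other_script.py"]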
You can achieve this by creating a bash script (let's name it entrypoint.sh) containing the python commands. If you want, you can run them as background processes.
#!/usr/bin/env bash
set -e
python tests.py
python main.py
Edit your Dockerfile as follows:
FROM python:3.8-slim-buster
# Create workDir
RUN mkdir code
WORKDIR code
ENV PYTHONPATH=/code
#upgrade pip if you like here
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy Code
COPY . .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
In the docker compose file, add the following line to the service.
entrypoint: [ "./entrypoint.sh" ]
Have you tried this in your docker-compose.yaml?
version: '3'
services:
  main:
    image: free
    command: >
      /bin/sh -c 'python3 tests.py & python3 main.py'
tests.py will run in the background while main.py runs in the foreground
then run in terminal
docker-compose up --build
I have a Makefile that runs docker-compose, which has a container that executes a python script. I want to be able to pass a variable on the command line to the Makefile and print it within the python script (testing.py).
My directory looks like:
main_folder:
  - docker-compose.yaml
  - Makefile
  - testing.py
I have tried with the following configuration. The Makefile is:
.PHONY: run run-prod stop stop-prod rm

run:
	WORKING_DAG=$(working_dag) docker-compose -f docker-compose.yml up -d --remove-orphans --build --force-recreate
The docker-compose is:
version: "3.7"
services:
prepare_files:
image: apache/airflow:1.10.14
environment:
WORKING_DAG: ${working_dag}
PYTHONUNBUFFERED: 1
entrypoint: /bin/bash
command: -c "python3 testing.py $$WORKING_DAG"
And the file testing.py is:
import sys
print(sys.argv[0], flush=True)
When I run in the command line:
make working_dag=testing run
It doesn't fail but it does not print anything either. How could I make it work? Thanks.
I believe that the variable WORKING_DAG is getting assigned correctly through the command line and that the Makefile passes it correctly to docker-compose. I verified it by keeping the container alive after the run finished and then, after logging into the container, checking the value of WORKING_DAG:
To keep the container from being destroyed once the docker execution completed, I modified the docker-compose.yml as follows:
version: "3.7"
services:
prepare_files:
image: apache/airflow:1.10.14
environment:
WORKING_DAG: ${working_dag}
PYTHONUNBUFFERED: 1
entrypoint: /bin/bash
command: -c "python3 testing.py $$WORKING_DAG"
command: -c "tail -f /dev/null"
airflow#d8dcb07c926a:/opt/airflow$ echo $WORKING_DAG
testing
The issue that docker does not display Python's stdout when deploying with docker-compose was already commented on in GitHub, here, and it is still not resolved. Making it work when using docker-compose is only possible if we transfer/mount the file into the container, or if we use a Dockerfile instead.
When using a Dockerfile, you only have to run the corresponding script as follows (the shell form is needed so that $WORKING_DAG is expanded):
CMD python -u testing.py $WORKING_DAG
To mount the script into the container, please look at #DazWilkin's answer, here.
You'll need to mount your testing.py into the container (using volumes). In the following, your current working directory (${PWD}) is used and testing.py is mounted in the container's root directory:
version: "3.7"
services:
prepare_files:
image: apache/airflow:1.10.14
volumes:
- ${PWD}/testing.py:/testing.py
environment:
PYTHONUNBUFFERED: 1
entrypoint: /bin/bash
command: -c "python3 /testing.py ${WORKING_DAG}"
NOTE There's no need to include WORKING_DAG in the service definition as it's exposed to the Docker Compose environment by your Makefile. Setting it as you did overwrites it with "" (empty string), because ${working_dag} was your original environment variable but you remapped this to WORKING_DAG in your Makefile run step.
And
import sys
print(sys.argv[0:], flush=True)
Then:
make --always-make working_dag=Freddie run
WORKING_DAG=Freddie docker-compose --file=./docker-compose.yaml up
Recreating 66014039_prepare_files_1 ... done
Attaching to 66014039_prepare_files_1
prepare_files_1 | ['/testing.py', 'Freddie']
66014039_prepare_files_1 exited with code 0
I have the following Dockerfile:
FROM python:3.6
ADD . /
RUN pip install -r requirements.txt
ENTRYPOINT ["python3.6", "./main.py"]
Where main.py takes in runtime arguments that are parsed using argparse like so:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('city')
args = parser.parse_args()
print(args.city)
I have successfully built a docker image from my Dockerfile and run a container based on this image using:
docker run -it my-docker-image "los angeles"
This works great and argparse in python receives the param "los angeles" as the city in my python code snippet.
I am now trying to accomplish the equivalent of
docker run -it my-docker-image "los angeles"
But with a docker-compose.yml file. Right now my docker-compose.yml file looks like:
version: '3'
services:
  data-worker:
    image: url-to-my-docker-container-on-ecr-that-i-have-pulled-onto-my-local-machine
    container_name: run-for-city
    volumes:
      - ~/.aws/:/root/.aws
I then try running:
docker-compose up
and it gets started but fails saying:
run-for-city | usage: main.py [-h] city
run-for-city | main.py: error: the following arguments are required: city
run-for-city exited with code 2
which makes perfect sense, as I need to pass the city param when running docker-compose up but don't know exactly how to.
I tried:
docker-compose up "los angeles"
but that did not work.
Do I need to add something to my docker-compose.yml and/or an argument to the docker-compose up command, or what?
In Docker terminology, the extra parameter you're passing is a "command". (Anything after the image name in docker run is the command; conventionally it's an actual command to run, but if you have an ENTRYPOINT in your Dockerfile, it's passed as arguments to the entrypoint.) You can replicate this in your docker-compose.yml file:
version: '3'
services:
  data-worker:
    image: 123456789012.dkr...
    volumes:
      - ~/.aws/:/root/.aws
    command: ["los angeles"] # <--
If you need to pass this at the command line, the only real way to do it is via an environment variable. Compose will substitute environment variables (note, somewhat limited syntax) and I believe it will work to say
command: ["$CITY"]
and you'd launch this with
CITY="los angeles" docker-compose up
Suppose I have
ENTRYPOINT ["python", "myscript.py"]
and myscript.py has an argument --envvar
That is, if I were to run it locally, I would run python myscript.py --envvar $envvar
Is there any way to provide this argument in Docker, given that I've already chosen to make Python my entrypoint?
If it’s really an environment variable, use the docker run -e option.
docker run -e VAR=value myimage
Alternately, anything you specify as a “command”, either things after the image name in the docker run command or a Dockerfile CMD directive, get passed as command-line arguments to the entrypoint.
# note: your local shell expands $envvar
docker run myimage --envvar "$envvar"
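If you want to support both styles at once (a sketch on my part, not something the question requires), the script itself can fall back to an environment variable when the flag is not given:
import argparse
import os

parser = argparse.ArgumentParser()
# use the ENVVAR environment variable as the default when --envvar is omitted (name is illustrative)
parser.add_argument('--envvar', default=os.environ.get('ENVVAR'))
args = parser.parse_args()
print(args.envvar)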
I have made a little python script to create a DB and some tables inside RethinkDB.
But now I'm trying to launch this python script inside my rethink container launched with docker-compose.
This is my docker-compose.yml rethink container config
# Rethink DB
rethink:
  image: rethinkdb:latest
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
I'm trying to execute the script after launching my container with
docker exec -it rethink python src/app/db-install.py
But I get this error
rpc error: code = 2 desc = oci runtime error: exec failed: exec: "python": executable file not found in $PATH
Python is not found in my container. Is it possible to execute a python script inside a given container with docker-compose or with docker exec?
First find out if you have python executable in the container:
docker exec -it rethink which python
If it exists, use the absolute path provided by the which command in the previous step:
docker exec -it rethink /absolute/path/to/python src/app/db-install.py
If not, you can convert your python script to a bash script, so you can run it without extra executables and libraries.
Or you can create a dockerfile, use base image, and install python.
dockerfile:
FROM rethinkdb:latest
RUN apt-get update && apt-get install -y python
Docker Compose file:
rethink:
  build: .
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
Docker-compose
Assuming that python is installed, try:
docker-compose run --rm MY_DOCKER_COMPOSE_SERVICE MY_PYTHON_COMMAND
To start, you might also just go into the shell first and run a python script from the command prompt.
docker-compose run --rm MY_DOCKER_COMPOSE_SERVICE bash
In your case, MY_DOCKER_COMPOSE_SERVICE is 'rethink', and that is not the container name here, but the name of the service (first line rethink:), and only the service is run with docker-compose run, not the container.
The MY_PYTHON_COMMAND is, in your case, python src/app/db-install.py (or python3 src/app/db-install.py if both Python 2 and Python 3 are installed in the container).
Dockerfile
To be able to run this python command, the Python file needs to be in the container. Therefore, in the Dockerfile that you call with build: ., you need to copy your build directory to a directory of your choice in the container:
COPY $PROJECT_PATH /tmp
This /tmp directory will be created inside the container. If you just write "." as the destination, there is no subfolder and the files end up directly in the container's working directory.
When using /tmp as the subfolder, you might write at the end of your Dockerfile:
WORKDIR /tmp
Docker-compose
Or, if you do not change the WORKDIR from the build (".") context to /tmp and you still want to reach /tmp, run your Python file as /tmp/db-install.py.
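Putting it together, a concrete invocation might look like this (a sketch, assuming python is installed in the image and the script was copied to /tmp as above):
docker-compose run --rm rethink python /tmp/db-install.py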
The rethinkdb image is based on the debian:jessie image :
https://github.com/rethinkdb/rethinkdb-dockerfiles/blob/da98484fc73485fe7780546903d01dcbcd931673/jessie/2.3.5/Dockerfile
The debian:jessie image does not come with python installed.
So you will need to create your own Dockerfile, something like :
FROM rethinkdb:latest
RUN apt-get update && apt-get install -y python
Then change your docker-compose:
# Rethink DB
rethink:
  build: .
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
build: . is the path to the directory containing your Dockerfile.