I have the following Dockerfile:
FROM python:3.6
ADD . /
RUN pip install -r requirements.txt
ENTRYPOINT ["python3.6", "./main.py"]
Where main.py takes in runtime arguments that are parsed using argparse, like so:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('city')
args = parser.parse_args()
print(args.city)
I have successfully built a docker image from my Dockerfile and run a container based on this image using:
docker run -it my-docker-image "los angeles"
This works great and argparse in python receives the param "los angeles" as the city in my python code snippet.
I am now trying to accomplish the equivalent of
docker run -it my-docker-image "los angeles"
But with a docker-compose.yml file. Right now my docker-compose.yml file looks like:
version: '3'
services:
  data-worker:
    image: url-to-my-docker-container-on-ecr-that-i-have-pulled-onto-my-local-machine
    container_name: run-for-city
    volumes:
      - ~/.aws/:/root/.aws
I then try running:
docker-compose up
and it gets started but fails saying:
run-for-city | usage: main.py [-h] city
run-for-city | main.py: error: the following arguments are required: city
run-for-city exited with code 2
which makes perfect sense, as I need to pass the city param when running docker-compose up but don't know exactly how to.
I tried:
docker-compose up "los angeles"
but that did not work.
Do I need to add something to my docker-compose.yml and/or an argument to the docker-compose up command, or what?
In Docker terminology, the extra parameter you're passing is a "command". (Anything after the image name in docker run is the command; conventionally it's an actual command to run, but if you have an ENTRYPOINT in your Dockerfile, it's passed as arguments to the entrypoint.) You can replicate this in your docker-compose.yml file:
version: '3'
services:
  data-worker:
    image: 123456789012.dkr...
    volumes:
      - ~/.aws/:/root/.aws
    command: ["los angeles"] # <--
If you need to pass this at the command line, the only real way to do it is via an environment variable. Compose will substitute environment variables (note, somewhat limited syntax) and I believe it will work to say
command: ["$CITY"]
and you'd launch this with
CITY="los angeles" docker-compose up
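If you want a fallback when CITY isn't set, Compose's variable substitution also supports default values; a minimal sketch (the image name is the same placeholder as above):

version: '3'
services:
  data-worker:
    image: 123456789012.dkr...
    volumes:
      - ~/.aws/:/root/.aws
    command: ["${CITY:-los angeles}"]   # uses "los angeles" if CITY is unset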
import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
'--strat',
type=str,
)
args = parser.parse_args()
strat = args.strat
I would like to write my docker-compose.yml file such that I can just pass my argument from there.
I did
version: "3.3"
services:
mm:
container_name: mm
stdin_open: true
build: .
context: .
dockerfile: Dockerfile
args:
strat: 1
and my docker file
FROM python:3.10.7
COPY . .
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
CMD python3 main.py
But it does not work.
Any idea what I should change, please?
You need to update the Dockerfile to process the build arguments and remap them to environment variables so that they can be used by CMD. Try something like the following:
FROM python:3.10.7
ARG strat
# ...
ENV strat=${strat}
CMD python3 main.py --strat=$strat
But personally I would consider switching entirely to environment variables instead of build arguments (so there is no need to rebuild the image for every argument change).
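For illustration, a minimal sketch of that environment-variable approach, reusing the names from the question (treat it as a sketch, not a drop-in config):

Dockerfile:
FROM python:3.10.7
COPY . .
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
# strat is read from the container environment at run time
CMD python3 main.py --strat=$strat

docker-compose.yml:
version: "3.3"
services:
  mm:
    container_name: mm
    stdin_open: true
    build: .
    environment:
      strat: 1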
The args section applies at build time only.
If you want to run your already built image with different arguments, use environment variables or just pass them as you would with a regular binary.
like: docker compose run backend ./manage.py makemigrations
Here you can see that ./manage.py and makemigrations are two arguments passed to the backend service defined in docker-compose.yml.
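Applied to the question above, that would look something like this (assuming the mm service and the --strat option from the question; the run command simply replaces the image's default CMD):

docker compose run --rm mm python3 main.py --strat 1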
I need to run the same python code, but with different initiation arguments with docker.
So under the main directory I've set up a folder called docker that contains different folders, each with the same Dockerfile but with different arguments configured. Below are examples of test_1 and test_2, where test_x changes between the folders (test_1 becomes test_2, and so on):
Dockerfile found under docker/test_1 folder
FROM python:3.7
RUN mkdir /app/test_1
WORKDIR /app/test_1
COPY ./env/requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY ../ .
CMD ["python", "main.py","-t","test_1"]
Dockerfile found under docker/test_2 folder
FROM python:3.7
RUN mkdir /app/test_2
WORKDIR /app/test_2
COPY ./env/requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY ../ .
CMD ["python", "main.py","-t","test_2"]
Under the main directory I've set up a docker compose file that starts the different containers (all running the same code) and that share a txt file in shared_folder:
services:
  test_1:
    container_name: test_1
    build: ./docker/test_1
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped
  test_2:
    container_name: test_2
    build: ./docker/test_2
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped
So my question with docker: is this the right way to go about setting up multiple executions of the same code with different parameters, or is there another recommended approach? I do want to mention that they need to share the file in shared_folder; that is a requirement, and all the instances must have read/write access to the same file in shared_folder (this is a must-have).
It is very easy to override the Dockerfile CMD with a docker run command-line argument or Compose command:. So, I would build only one image, and I would give it a useful default CMD.
FROM python:3.7
WORKDIR /app
COPY ./env/requirements.txt ./
RUN pip install -r requirements.txt
COPY ./ ./
CMD ["./main.py"]
(Make sure your script is executable – maybe run chmod +x main.py on the host – and begins with a "shebang" line #!/usr/bin/env python3, so you don't have to explicitly name the interpreter.)
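For reference, the top of main.py might then look something like this (a sketch; only the shebang and the -t option from the question are assumed, the rest is a placeholder):

#!/usr/bin/env python3
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-t')    # e.g. "test_1" or "test_2"
args = parser.parse_args()
print(args.t)                # stand-in for the real work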
Now in your docker-compose.yml file, have both services build: the same image. You'll technically get two images out in the docker images output but they will have the same image ID and the second image build will run extremely quickly (it will come entirely from the layer cache). Use Compose command: to override the entire CMD as required.
version: '3.8'
services:
  test_1:
    build: .
    command: ./main.py -t test_1
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped
  test_2:
    build: .
    command: ./main.py -t test_2
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped
You could also manually run this outside of Compose if you just wanted to validate things, with the same approach
docker build -t myapp .
docker run --rm myapp \
./main.py --help
With this approach you do not need to rebuild the image for each different command you want to run or wrangle with the syntactic complexities of docker run --entrypoint.
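And if you need a one-off run with different arguments, you can override command: at run time without editing anything, e.g. (the test name here is made up):

docker-compose run --rm test_1 ./main.py -t some-other-test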
First, delete CMD ["python", "main.py","-t","test_2"] from the Dockerfile and instead add an entrypoint in docker-compose.yaml; that is a better way to build the image, since the code is all the same. If you have more containers to start, it will save you a lot of time.
About the question you asked: if the file you want to share in shared_folder is read-only, that is OK. If not (for instance, log files that the instances write out to the host), you should be careful that the log file names are not the same in the two containers.
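A rough sketch of that idea, building one image and overriding the entrypoint per service (names and paths follow the question; this is illustrative, not a tested config):

services:
  test_1:
    build: .
    entrypoint: ["python", "main.py", "-t", "test_1"]
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped
  test_2:
    build: .
    entrypoint: ["python", "main.py", "-t", "test_2"]
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped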
I would definitely DRY it: use a single Dockerfile and use an ARG to build them.
Here is what you could do:
In docker/Dockerfile:
FROM python:3.7
ARG FOLDER
## We need to duplicate the value of the ARG in an ENV
## because the arguments are only visible through the build
## so, it won't be accessible to our command
ENV FOLDER=$FOLDER
RUN mkdir -p /app/$FOLDER
WORKDIR /app/$FOLDER
COPY ./$FOLDER/env/requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["sh", "-c", "python main.py -t $FOLDER"]
And in your docker-compose.yml define those build arguments:
version: "3.9"
services:
  test1:
    container_name: test_1
    build:
      context: ./docker
      args:
        FOLDER: test1
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped
  test2:
    container_name: test_2
    build:
      context: ./docker
      args:
        FOLDER: test2
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped
I have a Makefile, that runs a docker-compose, which has a container that executes a python script. I want to be able to pass a variable in the command-line to the Makefile and print it within the python script (testing.py).
My directory looks like:
main_folder:
-docker-compose.yaml
-Makefile
-testing.py
I have tried with the following configuration. The Makefile is:
.PHONY: run run-prod stop stop-prod rm
run:
	WORKING_DAG=$(working_dag) docker-compose -f docker-compose.yml up -d --remove-orphans --build --force-recreate
The docker-compose is:
version: "3.7"
services:
  prepare_files:
    image: apache/airflow:1.10.14
    environment:
      WORKING_DAG: ${working_dag}
      PYTHONUNBUFFERED: 1
    entrypoint: /bin/bash
    command: -c "python3 testing.py $$WORKING_DAG"
And the file testing.py is:
import sys
print(sys.argv[0], flush=True)
When I run in the command line:
make working_dag=testing run
It doesn't fail, but it does not print anything either. How could I make it work? Thanks
I believe that the variable WORKING_DAG is getting assigned correctly through the command line and that the Makefile is passing it correctly to docker-compose. I verified this by keeping the container from being destroyed and then, after logging into the container, checking the value of WORKING_DAG:
To keep the container from being destroyed once the docker execution is completed, I modified the docker-compose.yml as follows:
version: "3.7"
services:
  prepare_files:
    image: apache/airflow:1.10.14
    environment:
      WORKING_DAG: ${working_dag}
      PYTHONUNBUFFERED: 1
    entrypoint: /bin/bash
    # command: -c "python3 testing.py $$WORKING_DAG"
    command: -c "tail -f /dev/null"
airflow#d8dcb07c926a:/opt/airflow$ echo $WORKING_DAG
testing
The issue that Docker does not display Python's stdout when deploying with docker-compose was already reported on GitHub, here, and it is still not resolved. Making it work when using docker-compose is only possible if we transfer/mount the file into the container, or if we use a Dockerfile instead.
When using a Dockerfile, you only have to run the corresponding script as follows,
CMD python -u testing.py $WORKING_DAG
To mount the script into the container, please look at #DazWilkin's answer, here.
You'll need to mount your testing.py into the container (using volumes). In the following, your current working directory (${PWD}) is used and testing.py is mounted in the container's root directory:
version: "3.7"
services:
  prepare_files:
    image: apache/airflow:1.10.14
    volumes:
      - ${PWD}/testing.py:/testing.py
    environment:
      PYTHONUNBUFFERED: 1
    entrypoint: /bin/bash
    command: -c "python3 /testing.py ${WORKING_DAG}"
NOTE There's no need to include WORKING_DAG in the service definition as it's exposed to the Docker Compose environment by your Makefile. Setting it as you did overwrites it with "" (an empty string), because ${working_dag} was your original environment variable but you remapped it to WORKING_DAG in your Makefile's run step.
And
import sys
print(sys.argv[0:], flush=True)
Then:
make --always-make working_dag=Freddie run
WORKING_DAG=Freddie docker-compose --file=./docker-compose.yaml up
Recreating 66014039_prepare_files_1 ... done
Attaching to 66014039_prepare_files_1
prepare_files_1 | ['/testing.py', 'Freddie']
66014039_prepare_files_1 exited with code 0
I am trying to do something very simple (I think): I want to build a docker image and launch two different scripts from the same image in parallel running containers.
Something as simple as
Container 1 -> print("Hello")
Container 2 -> print("World")
I did some research but some techniques seem a little over engineered and others do something like
CMD ["python", "script.py"] && ["python", "script2.py"]
which, isn't what I'm looking for.
I would love to see something like
$ docker ps
CONTAINER ID IMAGE CREATED STATUS NAMES
a7e789711e62 67759a80360c 12 hours ago Up 2 minutes MyContainer1
87ae9c5c3f84 67759a80360c 12 hours ago Up About a minute MyContainer2
But running two different scripts.
I'm still fairly new to all of this, so if this is a foolish question, I apologize in advance and thank you all for working with me.
You can do this easily using Docker Compose. Here is a simple docker-compose.yml file just to show the idea:
version: '3'
services:
  app1:
    image: alpine
    command: >
      /bin/sh -c 'while true; do echo "pre-processing"; sleep 1; done'
  app2:
    image: alpine
    command: >
      /bin/sh -c 'while true; do echo "post-processing"; sleep 1; done'
As you can see, both services use the same image, alpine in this example, but differ in the commands they run. Execute docker-compose up and see the output:
app1_1 | pre-processing
app2_1 | post-processing
app2_1 | post-processing
app2_1 | post-processing
app1_1 | pre-processing
app2_1 | post-processing
app1_1 | pre-processing
app2_1 | post-processing
...
In your case, you would just change the image to your image, say myimage:v1, and change the service commands so that the first service definition has command: python script1.py and the second one command: python script2.py.
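In other words, something along these lines (myimage:v1 and the script names are placeholders):

version: '3'
services:
  app1:
    image: myimage:v1
    command: python script1.py
  app2:
    image: myimage:v1
    command: python script2.py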
If you are set on using the same image, then I would suggest you set the ENTRYPOINT to python and then use the docker run command to start the containers, providing the scripts as the CMD, like so:
Dockerfile:
FROM python
...
ENTRYPOINT ["python"]
And then use the docker run command like so:
docker run -d my_image script.py && docker run -d my_image script2.py
Which would start two containers, each running a separate script.
BUT - I like to keep the images clean in terms of not having any additional scripts or packages that are not necessary for my service to work, so in this case I would simply create two separate images each having one of the scripts and then run them similarly.
Example:
FROM python
COPY script.py script.py
ENTRYPOINT ["python"]
CMD ["script.py"]
And the second image:
FROM python
COPY script2.py script2.py
ENTRYPOINT ["python"]
CMD ["script2.py"]
And then just build them as separate images and run them the same way as before.
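For example, assuming the two Dockerfiles are saved under hypothetical names like Dockerfile.script1 and Dockerfile.script2:

docker build -t myimage-script1 -f Dockerfile.script1 .
docker build -t myimage-script2 -f Dockerfile.script2 .
docker run -d --name MyContainer1 myimage-script1
docker run -d --name MyContainer2 myimage-script2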
Any command you put at the end of the docker run command (or the Docker Compose command: field) replaces the CMD in the Dockerfile. I would suggest still putting in some useful default CMD, but you can always just
docker run --name hello myimage python script.py
docker run --name world myimage python script2.py
Try these steps, it should work.
Create a Dockerfile with contents:
FROM python:3.7-alpine
COPY script1.py /script1.py
COPY script2.py /script2.py
CMD ["/bin/sh"]
In script1.py
print("Hello")
In script2.py
print("World")
Build the docker image: docker build -t myimage:v1 .
Run the containers
$ docker run -it --rm --entrypoint python myimage:v1 /script1.py
Hello
$
$ docker run -it --rm --entrypoint python myimage:v1 /script2.py
World
$
NOTE: Here we're using the same docker image myimage:v1 and just changing the entrypoint in every docker run command.
More info here.
Hope this helps.
I have made a little python script to create a DB and some tables inside a RethinkDB instance.
But now I'm trying to launch this python script inside my rethink container launched with docker-compose.
This is my docker-compose.yml rethink container config
# Rethink DB
rethink:
  image: rethinkdb:latest
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
I'm trying to execute the script, after launching my container, with:
docker exec -it rethink python src/app/db-install.py
But I get this error
rpc error: code = 2 desc = oci runtime error: exec failed: exec: "python": executable file not found in $PATH
Python is not found in my container. Is it possible to execute a python script inside a given container with docker-compose or with docker exec?
First find out if you have python executable in the container:
docker exec -it rethink which python
If it exists, Use the absolute path provided by which command in previous step:
docker exec -it rethink /absolute/path/to/python src/app/db-install.py
If not, you can convert your python script to a bash script, so you can run it without extra executables and libraries.
Or you can create a Dockerfile, use the rethinkdb image as a base, and install python.
Dockerfile:
FROM rethinkdb:latest
RUN apt-get update && apt-get install -y python
Docker Compose file:
rethink:
  build: .
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
Docker-compose
Assuming that python is installed, try:
docker-compose run --rm MY_DOCKER_COMPOSE_SERVICE MY_PYTHON_COMMAND
To start, you might also just go into the shell first and run a Python script from the command prompt.
docker-compose run --rm MY_DOCKER_COMPOSE_SERVICE bash
In your case, MY_DOCKER_COMPOSE_SERVICE is 'rethink', and that is not the container name here, but the name of the service (first line rethink:), and only the service is run with docker-compose run, not the container.
MY_PYTHON_COMMAND is, in your case, python src/app/db-install.py, or python3 src/app/db-install.py if you have both Python 2 and Python 3 installed and want Python 3 explicitly.
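Concretely, with the service from the question, that might look like this (assuming python is installed in the image and the script exists at that path inside the container):

docker-compose run --rm rethink python src/app/db-install.py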
Dockerfile
To be able to run this Python command, the Python file needs to be in the container. Therefore, in the Dockerfile that you call with build: ., you need to copy your build directory to a directory of your choice in the container:
COPY $PROJECT_PATH /tmp
This copies the contents of your build context into /tmp in the container. If you just write "." as the destination, there is no subfolder and the files land directly in the image's working directory.
When using /tmp as the subfolder, you might write at the end of your Dockerfile:
WORKDIR /tmp
Docker-compose
Or, if you do not change the WORKDIR from the build context to /tmp and you still want to reach the file there, run your Python file with its full path, e.g. /tmp/db-install.py.
The rethinkdb image is based on the debian:jessie image:
https://github.com/rethinkdb/rethinkdb-dockerfiles/blob/da98484fc73485fe7780546903d01dcbcd931673/jessie/2.3.5/Dockerfile
The debian:jessie image does not come with python installed.
So you will need to create your own Dockerfile, something like :
FROM rethinkdb:latest
RUN apt-get update && apt-get install -y python
Then change your docker-compose :
# Rethink DB
rethink:
  build: .
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
build: . is the path to the directory containing your Dockerfile.
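After that, a typical sequence would be to rebuild, start the service, and exec the script (assuming src/app/db-install.py exists inside the container, as in your original command):

docker-compose build rethink
docker-compose up -d rethink
docker exec -it rethink python src/app/db-install.py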