I need to run the same Python code, but with different initialization arguments, with Docker.
So under the main directory I've set up a folder called docker that contains different folders, each holding the same Dockerfile but with different arguments set up. Below are examples for test_1 and test_2, where only the test_x value changes between the folders (test_1 becomes test_2, and so on):
Dockerfile found under docker/test_1 folder
FROM python:3.7
RUN mkdir /app/test_1
WORKDIR /app/test_1
COPY ./env/requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY ../ .
CMD ["python", "main.py","-t","test_1"]
Dockerfile found under docker/test_2 folder
FROM python:3.7
RUN mkdir /app/test_2
WORKDIR /app/test_2
COPY ./env/requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY ../ .
CMD ["python", "main.py","-t","test_2"]
Under the main directory I've set up a Docker Compose file that starts the different containers (all running the same code) and lets them share a txt file in shared_folder:
services:
  test_1:
    container_name: test_1
    build: ./docker/test_1
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped
  test_2:
    container_name: test_2
    build: ./docker/test_2
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped
So my question here with Docker: is this the right way to go about setting up multiple Python executions of the same code with different parameters, or is there another recommended approach? I do want to mention that they need to share the file in shared_folder; that's a requirement, and all the instances must have read/write access to the same file in shared_folder (this is a must-have).
It is very easy to override the Dockerfile CMD, either with a docker run command-line argument or with the Compose command: option. So I would build only one image, and I would give it a useful default CMD.
FROM python:3.7
WORKDIR /app
COPY ./env/requirements.txt ./
RUN pip install -r requirements.txt
COPY ./ ./
CMD ["./main.py"]
(Make sure your script is executable – maybe run chmod +x main.py on the host – and begins with a "shebang" line #!/usr/bin/env python3, so you don't have to explicitly name the interpreter.)
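For reference, since main.py itself isn't shown in the question, here is a minimal sketch of what its top might look like with that shebang plus the -t flag from the original CMD (all other names are invented):
#!/usr/bin/env python3
"""Hypothetical main.py; only the -t flag is taken from the question's CMD."""
import argparse

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-t", "--test-name", required=True)
    args = parser.parse_args()
    print(f"running variant {args.test_name}")

if __name__ == "__main__":
    main()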
Now in your docker-compose.yml file, have both services build: the same image. You'll technically get two images out in the docker images output but they will have the same image ID and the second image build will run extremely quickly (it will come entirely from the layer cache). Use Compose command: to override the entire CMD as required.
version: '3.8'
services:
  test_1:
    build: .
    command: ./main.py -t test_1
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped
  test_2:
    build: .
    command: ./main.py -t test_2
    volumes:
      - output:/app/shared_folder
    restart: unless-stopped
You could also run this manually outside of Compose, with the same approach, if you just want to validate things:
docker build -t myapp .
docker run --rm myapp \
./main.py --help
With this approach you do not need to rebuild the image for each different command you want to run or wrangle with the syntactic complexities of docker run --entrypoint.
A better approach would be to delete CMD ["python", "main.py","-t","test_2"] from the Dockerfile and instead set the command (or entrypoint) in docker-compose.yaml, since the code is all the same and only one image needs to be built. If you have more containers to start, it will save you a lot of time.
About the shared_folder question: if the file you want to share is read-only, that is fine. If it is not, for instance log files you want written from each instance out to the host, you should be careful about the file names; they should not be the same in the two containers, or the two instances will clobber each other's writes.
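If both containers really must write to the very same file (as the question requires), the usual trick is to serialize access yourself. Below is a minimal sketch using an advisory lock from the Python side; the path and helper name are assumptions for illustration, and it only helps if every writer uses the same convention:
import fcntl

SHARED_FILE = "/app/shared_folder/state.txt"  # assumed location inside the named volume

def append_line(line):
    # flock is an advisory lock: both containers must wrap their writes like this.
    with open(SHARED_FILE, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            f.write(line + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

append_line("hello from test_1")
Since both services mount the same named volume on a single host, they see the same underlying filesystem, so the lock should hold across the two containers.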
I would definitely DRY it: use a single Dockerfile and an ARG to build the variants.
Here is what you could do:
In docker/Dockerfile:
FROM python:3.7
ARG FOLDER
## We need to duplicate the value of the ARG into an ENV,
## because build arguments are only visible during the build,
## so the ARG alone would not be accessible to our command at runtime.
ENV FOLDER=$FOLDER
RUN mkdir -p /app/$FOLDER
WORKDIR /app/$FOLDER
COPY ./$FOLDER/env/requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["sh", "-c", "python main.py -t $FOLDER"]
And in your docker-compose.yml define those build arguments:
version: "3.9"
services:
test1:
container_name: test_1
build:
context: ./docker
args:
FOLDER: test1
volumes:
- output:/app/shared_folder
restart: unless-stopped
test2:
container_name: test_2
build:
context: ./docker
args:
FOLDER: test2
volumes:
- output:/app/shared_folder
restart: unless-stopped
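Since FOLDER is also exported into the running container by the ENV instruction, main.py could even read it directly instead of (or as a fallback for) the -t flag. A small sketch; the default value is only a guess for running outside Docker:
import os

# FOLDER comes from the ENV instruction in the Dockerfile above.
folder = os.environ.get("FOLDER", "test1")
print(f"starting run for {folder}")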
Related
I have a python script which changes the first line of a file, and it works locally but not when running the docker container.
Directory structure
my_workdir
├─── Dockerfile
├─── docker-compose.yml
├─── script.py
└─── data
└─── r2.txt
this is my python code
a_file = open("data/r2.txt", "r")
list_of_lines = a_file.readlines()
list_of_lines[0] = "1\n"
a_file = open("data/r2.txt", "a")
a_file.writelines(list_of_lines)
a_file.close()
It basically changes the first line inside "r2.txt" to "1", but when I check, the txt file is not changing. I read that you need to mount a volume, so I added the following to my docker-compose.yml file:
version: "3"
services:
app:
image: imagen-c1
container_name: c1
privileged: true
deploy:
replicas: 1
update_config:
parallelism: 1
delay: 5s
restart_policy:
condition: on-failure
build: .
environment:
- DISPLAY=${DISPLAY}
- LEGACY_IPTABLES=true
volumes:
- /tmp/.X11-unix:/tmp/.X11-unix
- ./:/data #HERE IS THE VOLUME
this is my dockerfile
FROM python:3.9-alpine
RUN pip install selenium
RUN apk add chromium
RUN apk add chromium-chromedriver
ENV CHROME_BIN=/usr/bin/chromium-browser \
CHROME_PATH=/usr/lib/chromium/
ENV HOME /home/user-machine
RUN pip install webdriver_manager
WORKDIR ./
COPY ["script.py", "data/", "./"]
CMD python3 script.py
maybe I'm not doing it correctly because the file remains the same.
I think there are 2 problems in your current code.
COPY ["script.py", "data/", "./"]
inside_container_root
├─── script.py
└─── r2.txt
Above is what the outcome will be, according to this SO answer.
Try ls -l to see if this is really the case:
docker build -t my_image .
docker run my_image ls -l
This might be what you wanted:
COPY script.py ./
ADD data data
ADD can copy directories (a plain COPY data data would work here too).
I think the file data/r2.txt will effectively be read-only because it is baked into the image. Once you resolve (1), your script.py should hit an error saying you are trying to write a read-only file; I think the whole data/ directory is not writable after the image is created. The updated file needs to go to a directory that does not exist in the image, like data_out. If you do not need the contents of your new file outside the container, you do not need to mount a volume; mounting a volume is what enables retrieval of the file from outside the container. To get a writable directory, just make sure that directory does not already exist in the image.
a_file = open("data_out/r2.txt", "a")
You might want something like this.
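Fleshed out, a sketch of the whole script under those assumptions (data_out is created first, and the output is opened with "w" so the modified lines replace the file contents instead of being appended, as the original "a" mode would do):
import os

# Read the file that was baked into the image.
with open("data/r2.txt", "r") as src:
    lines = src.readlines()

lines[0] = "1\n"

# Write the modified copy to a separate, writable location.
os.makedirs("data_out", exist_ok=True)
with open("data_out/r2.txt", "w") as dst:
    dst.writelines(lines)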
I have a Makefile that runs docker-compose, which has a container that executes a Python script. I want to be able to pass a variable on the command line to the Makefile and print it within the Python script (testing.py).
My directory looks like:
main_folder:
-docker-compose.yaml
-Makefile
-testing.py
I have tried with the following configuration. The Makefile is:
.PHONY: run run-prod stop stop-prod rm

run:
	WORKING_DAG=$(working_dag) docker-compose -f docker-compose.yml up -d --remove-orphans --build --force-recreate
The docker-compose is:
version: "3.7"
services:
prepare_files:
image: apache/airflow:1.10.14
environment:
WORKING_DAG: ${working_dag}
PYTHONUNBUFFERED: 1
entrypoint: /bin/bash
command: -c "python3 testing.py $$WORKING_DAG"
And the file testing.py is:
import sys
print(sys.argv[0], flush=True)
When I run in the command line:
make working_dag=testing run
It doesn't fail, but it doesn't print anything either. How can I make it work? Thanks
I believe that the variable WORKING_DAG is getting assigned correctly through the command line, and that the Makefile passes it correctly to docker-compose. I verified this by keeping the container alive after the Docker execution completed, logging into it, and checking the value of WORKING_DAG.
To keep the container from being destroyed once the execution completes, I modified docker-compose.yml as follows:
version: "3.7"
services:
prepare_files:
image: apache/airflow:1.10.14
environment:
WORKING_DAG: ${working_dag}
PYTHONUNBUFFERED: 1
entrypoint: /bin/bash
command: -c "python3 testing.py $$WORKING_DAG"
command: -c "tail -f /dev/null"
airflow#d8dcb07c926a:/opt/airflow$ echo $WORKING_DAG
testing
The issue that Docker does not display Python's stdout when deploying with docker-compose has already been discussed on GitHub, here, and is still not resolved. Making it work with docker-compose is only possible if we transfer/mount the file into the container, or if we use a Dockerfile instead.
When using a Dockerfile, you only have to run the corresponding script as follows (shell form, so that $WORKING_DAG is actually expanded; the JSON exec form would pass the literal string "$WORKING_DAG"):
CMD python -u testing.py $WORKING_DAG
To mount the script into the container, please look at @DazWilkin's answer, here.
You'll need to mount your testing.py into the container (using volumes). In the following, your current working directory (${PWD}) is used and testing.py is mounted in the container's root directory:
version: "3.7"
services:
prepare_files:
image: apache/airflow:1.10.14
volumes:
- ${PWD}/testing.py:/testing.py
environment:
PYTHONUNBUFFERED: 1
entrypoint: /bin/bash
command: -c "python3 /testing.py ${WORKING_DAG}"
NOTE: There's no need to include WORKING_DAG in the service definition, as it's exposed to the Docker Compose environment by your Makefile. Setting it as you did overwrites it with "" (empty string), because ${working_dag} was your original variable name but you remapped it to WORKING_DAG in your Makefile's run step.
And
import sys
print(sys.argv[0:], flush=True)
Then:
make --always-make working_dag=Freddie run
WORKING_DAG=Freddie docker-compose --file=./docker-compose.yaml up
Recreating 66014039_prepare_files_1 ... done
Attaching to 66014039_prepare_files_1
prepare_files_1 | ['/testing.py', 'Freddie']
66014039_prepare_files_1 exited with code 0
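As an aside on the slice in the updated testing.py: sys.argv[0] is only the script path, which is why the original print seemed to show nothing useful; the actual arguments live in sys.argv[1:]. A tiny illustration:
import sys

# Invoked as: python3 /testing.py Freddie
print(sys.argv[0])   # '/testing.py'  (what the original code printed)
print(sys.argv[1:])  # ['Freddie']    (the arguments you actually care about)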
I have the following Dockerfile:
FROM python:3.6
ADD . /
RUN pip install -r requirements.txt
ENTRYPOINT ["python3.6", "./main.py"]
Where main.py takes in run time arguments that are parsed using argparse like so:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('city')
args = parser.parse_args()
print(args.city)
I have successfully built a Docker image from my Dockerfile and run a container based on this image using:
docker run -it my-docker-image "los angeles"
This works great, and argparse in Python receives the parameter "los angeles" as the city in my Python code snippet.
I am now trying to accomplish the equivalent of
docker run -it my-docker-image "los angeles"
But with a docker-compose.yml file. Right now my docker-compose.yml file looks like:
version: '3'
services:
  data-worker:
    image: url-to-my-docker-container-on-ecr-that-i-have-pulled-onto-my-local-machine
    container_name: run-for-city
    volumes:
      - ~/.aws/:/root/.aws
I then try running:
docker-compose up
and it gets started but fails saying:
run-for-city | usage: main.py [-h] city
run-for-city | main.py: error: the following arguments are required: city
run-for-city exited with code 2
which makes perfect sense, as I need to pass the city parameter when running docker-compose up, but I don't know exactly how to.
I tried:
docker-compose up "los angeles"
but that did not work.
Do I need to add something to my docker-compose.yml, or an argument to the docker-compose up command, or what?
In Docker terminology, the extra parameter you're passing is a "command". (Anything after the image name in docker run is the command; conventionally it's an actual command to run, but if you have an ENTRYPOINT in your Dockerfile, it's passed as arguments to the entrypoint.) You can replicate this in your docker-compose.yml file:
version: '3'
services:
  data-worker:
    image: 123456789012.dkr...
    volumes:
      - ~/.aws/:/root/.aws
    command: ["los angeles"] # <--
If you need to pass this at the command line, the only real way to do it is via an environment variable. Compose will substitute environment variables (note, somewhat limited syntax) and I believe it will work to say
command: ["$CITY"]
and you'd launch this with
CITY="los angeles" docker-compose up
I'd like to be able to configure env variables for my Docker containers and use them in the build process with an .env file.
I currently have the following .env file:
SSH_PRIVATE_KEY=TEST
APP_PORT=8040
my docker-compose:
version: '3'
services:
  companies:
    image: companies8
    environment:
      - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
    ports:
      - ${APP_PORT}:${APP_PORT}
    env_file: .env
    build:
      context: .
      args:
        - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
my Dockerfile:
FROM python:3.7
# set a directory for the app
COPY . .
#Accept input argument from docker-compose.yml
ARG SSH_PRIVATE_KEY=abcdef
ENV SSH_PRIVATE_KEY $SSH_PRIVATE_KEY
RUN echo $SSH_PRIVATE_KEY
# Pass the content of the private key into the container
RUN mkdir -p /root/.ssh
RUN chmod 400 /root/.ssh
RUN echo "$SSH_PRIVATE_KEY" > /root/.ssh/id_rsa
RUN echo "$SSH_PUBLIC_KEY" > /root/.ssh/id_rsa.pub
RUN chmod 400 /root/.ssh/id_rsa
RUN chmod 400 /root/.ssh/id_rsa.pub
RUN eval $(ssh-agent -s) && ssh-add /root/.ssh/id_rsa && ssh-keyscan bitbucket.org > /root/.ssh/known_hosts
RUN ssh -T git@bitbucket.org
#Install the packages
RUN pip install -r v1/requirements.txt
# Tell the port number the container should expose
EXPOSE 8040
# run the command
CMD ["python", "v1/__main__.py"]
And I have the same SSH_PRIVATE_KEY environment variable set on my Windows machine with the value "test1", and the build log gives me the result 'test1' from
ENV SSH_PRIVATE_KEY $SSH_PRIVATE_KEY
RUN echo $SSH_PRIVATE_KEY
not the value that's in the .env file.
I need this because some of the libraries listed in my requirements.txt are in an internal repository and I need SSH to access them, hence the SSH private key. There might be a more proper way to do this, but the general scenario I want to achieve is to pass env variable values from the .env file to my docker build.
There's a certain overlap between ENV and ARG: ARG values exist only while the image is being built, while ENV values persist into the running container.
Since the variable is already exported in your operating system, Compose picks up that shell value in preference to the one in the .env file, and it ends up in the image via the ENV instruction, which is why the build log shows 'test1'.
But if you do not really need the variable in the image and only in the build step (as far as I can see from the docker-compose file), then the ARG instruction is enough.
Being new to python & docker, I created a small flask app (test.py) which has two hardcoded values:
username = "test"
password = "12345"
I'm able to create a Docker image and run a container from the following Dockerfile:
FROM python:3.6
RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "/code/test.py"]`
How can I create an ENV variable for username & password and pass dynamic values while running containers?
Within your python code you can read env variables like:
import os
username = os.environ['MY_USER']
password = os.environ['MY_PASS']
print("Running with user: %s" % username)
Then when you run your container you can set these variables:
docker run -e MY_USER=test -e MY_PASS=12345 ... <image-name> ...
This will set the env variables within the container, and they will later be read by the Python script (test.py).
More info on os.environ and docker env
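Putting it together with the Flask app from the question, test.py might look roughly like this (the route and response are invented; only the env var names match the docker run example above):
import os

from flask import Flask

app = Flask(__name__)

# Supplied by the container environment (-e flags or a compose environment: block);
# the defaults are only fallbacks for running outside Docker.
USERNAME = os.environ.get("MY_USER", "test")
PASSWORD = os.environ.get("MY_PASS", "12345")  # used wherever the hardcoded value was

@app.route("/")
def index():
    return f"Running with user: {USERNAME}"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)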
In your Python code you can do something like this:
# USERNAME = os.getenv('NAME_OF_ENV_VARIABLE','default_value_if_no_env_var_is_set')
USERNAME = os.getenv('USERNAME', 'test')
Then you can create a docker-compose.yml file to run your dockerfile with:
version: '2'
services:
  python-container:
    image: python-image:latest
    environment:
      - USERNAME=test
      - PASSWORD=12345
You will run the compose file with:
$ docker-compose up
All you need to remember is to build your dockerfile that you mentioned in your question with:
$ docker build -t python-image .
Let me know if that helps. I hope that answers your question.
FROM python:3
MAINTAINER <abc@test.com>
ENV username=test \
    password=12345
RUN mkdir /dir/name
COPY . /dir/name
RUN cd /dir/name && pip3 install -r requirements.txt
WORKDIR /dir/name
ENTRYPOINT ["/usr/local/bin/python", "./test.py"]
I split my docker-compose into docker-compose.yml (base), docker-compose.dev.yml, etc., then I had this issue.
I solved it by specifying the .env file explicitly in the base:
web:
  env_file:
    - .env
Not sure why; according to the docs it should just work if there's a .env file.