How to run different python scripts in parallel containers from same image - python

I am trying to do something very simple (I think): I want to build a Docker image and launch two different scripts from the same image in parallel running containers.
Something as simple as
Container 1 -> print("Hello")
Container 2 -> print("World")
I did some research but some techniques seem a little over engineered and others do something like
CMD ["python", "script.py"] && ["python", "script2.py"]
which, isn't what I'm looking for.
I would love to see something like
$ docker ps
CONTAINER ID IMAGE CREATED STATUS NAMES
a7e789711e62 67759a80360c 12 hours ago Up 2 minutes MyContainer1
87ae9c5c3f84 67759a80360c 12 hours ago Up About a minute MyContainer2
But running two different scripts.
I'm still fairly new to all of this, so if this is a foolish question, I apologize in advance and thank you all for working with me.

You can do this easily using Docker Compose. Here is a simple docker-compose.yml file just to show the idea:
version: '3'
services:
  app1:
    image: alpine
    command: >
      /bin/sh -c 'while true; do echo "pre-processing"; sleep 1; done'
  app2:
    image: alpine
    command: >
      /bin/sh -c 'while true; do echo "post-processing"; sleep 1; done'
As you can see, both services use the same image (alpine in this example) but differ in the commands they run. Execute docker-compose up and watch the output:
app1_1 | pre-processing
app2_1 | post-processing
app2_1 | post-processing
app2_1 | post-processing
app1_1 | pre-processing
app2_1 | post-processing
app1_1 | pre-processing
app2_1 | post-processing
...
In your case, you just change the image to your own image, say myimage:v1, and change the service commands so that the first service definition has command: python script1.py and the second one command: python script2.py.
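For example, a minimal sketch (assuming your scripts are baked into the image as script1.py and script2.py):
version: '3'
services:
  app1:
    image: myimage:v1
    command: python script1.py
  app2:
    image: myimage:v1
    command: python script2.py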

If you are set on using the same image, then I would suggest you set the ENTRYPOINT to python and then use the docker run command to start the containers, providing the scripts as the CMD, like so:
Dockerfile:
FROM python
...
ENTRYPOINT ["python"]
Then use the docker run command like so:
docker run -d my_image script.py && docker run -d my_image script2.py
which would start two containers, each running a separate script.
BUT - I like to keep my images clean, in the sense of not including any additional scripts or packages that are not necessary for my service to work, so in this case I would simply create two separate images, each containing one of the scripts, and then run them similarly.
Example:
FROM python
COPY script.py script.py
ENTRYPOINT ["python"]
CMD ["script.py"]
And the second image:
FROM python
COPY script2.py script2.py
ENTRYPOINT ["python"]
CMD ["script2.py"]
Then just build them as separate images and run them the same way as before.
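For example, a sketch assuming the two Dockerfiles are saved as Dockerfile1 and Dockerfile2 (hypothetical names) in the current directory:
docker build -t my_image1 -f Dockerfile1 .
docker build -t my_image2 -f Dockerfile2 .
docker run -d --name MyContainer1 my_image1
docker run -d --name MyContainer2 my_image2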

Any command you put at the end of the docker run command (or the Docker Compose command: field) replaces the CMD in the Dockerfile. I would suggest still putting in some useful default CMD, but you can always just
docker run --name hello myimage python script.py
docker run --name world myimage python script2.py
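The Docker Compose equivalent would be roughly this (a sketch; the service names hello and world mirror the container names above):
services:
  hello:
    image: myimage
    command: python script.py
  world:
    image: myimage
    command: python script2.py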

Try these steps, it should work.
Create a Dockerfile with contents:
FROM python:3.7-alpine
COPY script1.py /script1.py
COPY script2.py /script2.py
CMD ["/bin/sh"]
In script1.py
print("Hello")
In script2.py
print("World")
Build docker image docker build -t myimage:v1 .
Run the containers
$ docker run -it --rm --entrypoint python myimage:v1 /script1.py
Hello
$
$ docker run -it --rm --entrypoint python myimage:v1 /script2.py
World
$
NOTE: Here we're using the same Docker image myimage:v1 and just changing the entrypoint in each docker run command.
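If you would rather keep the containers running in the background with names, as in the docker ps listing you sketched, something like this should also work (a sketch; note that these toy scripts exit immediately, so the containers will show as Exited rather than Up shortly afterwards):
docker run -d --name MyContainer1 --entrypoint python myimage:v1 /script1.py
docker run -d --name MyContainer2 --entrypoint python myimage:v1 /script2.py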
Hope this helps.

Related

run two python scripts with docker compose

My folder structure looked like this:
My Dockerfile looked like this:
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
CMD [ "python", "main.py"]
When I ran these commands:
docker build --tag FinTechExplained_Python_Docker .
docker run free
my main.py file ran and gave the correct print statements as well. Now, I have added another file tests.py in the src folder. I want to run tests.py first and then main.py.
I tried modifying the CMD within my Dockerfile like this:
CMD [ "python", "test.py"] && [ "python", "main.py"]
but then it gives me the print statements from only the first test.py file.
I read about docker-compose and added this docker-compose.yml file to the root folder:
version: '3'
services:
  tests:
    image: free
    command: >
      /bin/sh -c 'python tests.py'
  main:
    image: free
    command: >
      /bin/sh -c 'python main.py'
Then I changed my Dockerfile by removing the CMD:
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
Then I ran the following commands:
docker compose build
docker compose run tests
docker compose run main
When I run these commands separately, I get the correct print statements for both tests and main. However, I am not sure if I am using docker-compose correctly or not.
Am I supposed to run both scripts separately? Or is there a way to run one after another using a single docker command?
What is my Dockerfile supposed to look like if I am running the python scripts from the docker-compose.yml instead?
Edit:
Ideally looking for solutions based on docker-compose
In the Bourne shell, in general, you can run two commands in sequence by putting && between them. It sounds like you're already aware of this.
# without Docker, at a normal shell prompt
python test.py && python main.py
The Dockerfile CMD has two syntactic forms. The JSON-array form does not run a shell, and so it is slightly more efficient and has slightly more consistent escaping rules. If it's not a JSON array then Docker automatically runs it via a shell. So for your use you can use the shell form:
CMD python test.py && python main.py
In comments to other answers you ask about providing this as an override in the docker-compose.yml file. Compose will not normally run a shell for you, so you need to explicitly specify it as part of the command: override.
command: /bin/sh -c 'python test.py && python main.py'
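In a full service definition that could look like this (a sketch, reusing the image name free from your build):
services:
  main:
    image: free
    command: /bin/sh -c 'python test.py && python main.py'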
Your Dockerfile should generally specify a CMD and the docker-compose.yml often will not include a command:. This makes it easier to run the image in other contexts (via docker run without Compose; in Kubernetes) since you won't have to retype the command every different way you want to run the container. The entrypoint wrapper pattern highlighted in #sytech's answer is very useful in general and it's easy to add to a container that uses a CMD without an ENTRYPOINT; but it requires the Dockerfile to use CMD as a normal well-formed shell command.
You have to change CMD to ENTRYPOINT, and run the first script in the background using &.
ENTRYPOINT ["/docker_entrypoint.sh"]
docker_entrypoint.sh
#!/bin/bash
set -e
# run the tests in the background, then hand PID 1 over to the main script
python tests.py &
exec python main.py
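For the entrypoint to work, the script also has to be copied into the image and made executable, e.g. (a sketch):
COPY docker_entrypoint.sh /docker_entrypoint.sh
RUN chmod +x /docker_entrypoint.sh
ENTRYPOINT ["/docker_entrypoint.sh"]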
In general, it is a good rule of thumb that a container should run only a single process, and that essential process should be PID 1.
Using an entrypoint can help you do multiple things at runtime and optionally run user-defined commands using exec, as according to the best practices guide.
For example, if you always want the tests to run whenever the container starts, the entrypoint can run them first and then execute the command defined in CMD.
First, create an entrypoint script (be sure to make it executable with chmod +x):
#!/usr/bin/env bash
# always run tests first
python /src/tests.py
# then run the user-defined command
exec "$@"
Then configure the Dockerfile to copy the script and set it as the entrypoint:
#...
COPY entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["python", "main.py"]
Then when you build an image from this Dockerfile and run it, the entrypoint will first execute the tests and then run the command that runs main.py.
The command can also still be overridden by the user when running the image, like docker run ... myimage <new command>, which will still result in the entrypoint tests being executed, but lets the user change the command being run.
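For example (a sketch; myimage is whatever tag you build this Dockerfile as, and other_task.py is a hypothetical script present in the image):
docker run --rm myimage                      # runs tests.py, then python main.py
docker run --rm myimage python other_task.py # runs tests.py, then the override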
You can achieve this by creating a bash script (let's name it entrypoint.sh) which contains the python commands. If you want, you can run those as background processes.
#!/usr/bin/env bash
set -e
python tests.py
python main.py
Edit your docker file as follows:
FROM python:3.8-slim-buster
# Create the working directory
RUN mkdir /code
WORKDIR /code
ENV PYTHONPATH=/code
# upgrade pip here if you like
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the code
COPY . .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
In the docker compose file, add the following line to the service.
entrypoint: [ "./entrypoint.sh" ]
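So the service definition would look roughly like this (a sketch, keeping the image name free from the question):
services:
  main:
    image: free
    entrypoint: [ "./entrypoint.sh" ]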
Have you tried this in your docker-compose.yaml?
version: '3'
services:
  main:
    image: free
    command: >
      /bin/sh -c 'python3 tests.py & python3 main.py & wait'
Both will run in the background; the wait keeps the container alive until they finish.
then run in terminal
docker-compose up --build

Running multiple python main files off Docker image

I have created a Docker image with dockerfile where the Entrypoint is as follows:
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "python", "./myprojectmain.py", "--config", "./config.py"]
When I run I use the command:
docker run myproject
all is fine it seems.
However I have a secondary .py file in the root of the project called setup.py. The purpose of this file is to update some of the config and json files after getting some input from the user.
Is there a way to run this secondary file (setup.py), or do I need to create a whole new image (which seems ridiculous)?
Thanks
Well... if you have an image, you don't have to use the entrypoint... just run your script like this:
docker run image python /some/path/myscript.py
or
docker run image /bin/bash -c "cd /some/path && python myscript.py"
or with entry point
RUN ./myprojectmain.py --config ./config.py
RUN ./myproject2main.py --config ./config.py
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "python"]
You can straightforwardly provide an alternate command after the image name in the docker run command. It's harder to override the entrypoint, though. If you have both a command and an entrypoint then they are combined together into a single command.
This workflow is easiest if your Dockerfile has a CMD, and that's a complete runnable shell command. If you have an ENTRYPOINT at all, it is some kind of wrapper that does some initial setup and then runs the command it's given as additional arguments. In this particular setup, conda run with its arguments seems to meet that need and have the correct form, so you could say
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "--"]
CMD ["python", "./myprojectmain.py", "--config", "./config.py"]
(Note that conda run seems to have some issues; you could probably simulate it using a custom entrypoint wrapper script or use a pip-based non-virtual-environment workflow instead.)
If you split the ENTRYPOINT and CMD like this, then you can run
docker run myproject \
python setup.py
The alternate python setup.py command will be appended to the conda run entrypoint command.
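In other words, the combined command the container executes is effectively:
conda run --no-capture-output -n myproject -- python setup.py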
... update some of the config and json files ...
It's often a good idea to inject these into your container using a bind mount. Depending on how exactly the files get set up, you may be able to initialize them from the host environment, without Docker:
./setup.py
docker run -d -v $PWD/config:/app/config myproject
but if they are sensitive to the Docker environment in some way, you could do it in Docker too; make sure to mount the same configuration storage into both containers.
docker network create mynet
docker volume create config
docker run --rm --net mynet -v config:/app/config myproject ./setup.py
docker run -d -p 8000:8000 --net mynet -v config:/app/config myproject

How to run container and execute python on docker in one command?

We have a very small piece of Python code which we need to run in a container. The following two commands are working fine.
I want to merge them into one; the ultimate goal is that the container should come up (based on the image), run the Python script, and exit by itself.
docker run --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d pythonimage:latest
docker exec -it mycontainer python3 /usr/src/app/subfolder/createfile.py
I tried -c "/bin/bash python3 /usr/src/app/subfolder/createfile.py" but this didn't work, it just…
You can add a file called Dockerfile (with no dot before or after file name, exactly this one) with the below contents:
FROM ubuntu:20.04
ENV run=/usr/src/app/subfolder/createfile.py
CMD ["/bin/bash", "-c", "python3 ${run}"]
Then run:
docker build -t YOUR_IMAGE_NAME . # this should be a new name, like pythonimage2
Now run it:
docker run --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d pythonimage2:latest
So now the command the container executes on startup is python3 /usr/src/app/subfolder/createfile.py.
Regarding your answer, you can add a variable and each time just modify that line.
Even you can do this to your Dockerfile:
FROM ubuntu:20.04
ENV path=/usr/src/app/subfolder
ENV file=$path/createfile.py
CMD ["/bin/bash", "-c", "python3 ${file}"]
Now you just need to modify the file variable. If any of your sub-directories change, you can remove that part from the path variable and add it to the file variable.
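Because the value of file is expanded by bash at run time, you could also point the same image at another script without rebuilding, by overriding the environment variable (a sketch; otherscript.py is a hypothetical file that must already exist in the container or in a mounted volume):
docker run -e file=/usr/src/app/subfolder/otherscript.py -v /opt/testuser/pythoncode/:/usr/src/app/ pythonimage2:latest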

How to pass variable from Makefile to Python script through docker-compose

I have a Makefile, that runs a docker-compose, which has a container that executes a python script. I want to be able to pass a variable in the command-line to the Makefile and print it within the python script (testing.py).
My directory looks like:
main_folder:
-docker-compose.yaml
-Makefile
-testing.py
I have tried with the following configuration. The Makefile is:
.PHONY: run run-prod stop stop-prod rm

run:
	WORKING_DAG=$(working_dag) docker-compose -f docker-compose.yml up -d --remove-orphans --build --force-recreate
The docker-compose is:
version: "3.7"
services:
prepare_files:
image: apache/airflow:1.10.14
environment:
WORKING_DAG: ${working_dag}
PYTHONUNBUFFERED: 1
entrypoint: /bin/bash
command: -c "python3 testing.py $$WORKING_DAG"
And the file testing.py is:
import sys
print(sys.argv[0], flush=True)
When I run in the command line:
make working_dag=testing run
It doesn't fail but it doesn't print anything either. How can I make it work? Thanks
I believe that the variable WORKING_DAG is getting assigned correctly through the command line and that the Makefile is passing it correctly to docker-compose. I verified it by keeping the container from being destroyed and then, after logging into the container, checking the value of WORKING_DAG.
To keep the container alive once the docker execution is completed, I modified the docker-compose.yml as follows:
version: "3.7"
services:
prepare_files:
image: apache/airflow:1.10.14
environment:
WORKING_DAG: ${working_dag}
PYTHONUNBUFFERED: 1
entrypoint: /bin/bash
command: -c "python3 testing.py $$WORKING_DAG"
command: -c "tail -f /dev/null"
airflow@d8dcb07c926a:/opt/airflow$ echo $WORKING_DAG
testing
The issue that Docker does not display Python's stdout when deploying with docker-compose was already reported on GitHub, here, and is still not resolved. Making it work when using docker-compose is only possible if we transfer/mount the file into the container, or if we use a Dockerfile instead.
When using a Dockerfile, you only have to run the corresponding script as follows (shell form, so that $WORKING_DAG is expanded at run time):
CMD python -u testing.py $WORKING_DAG
To mount the script into the container, please look at #DazWilkin's answer, here.
You'll need to mount your testing.py into the container (using volumes). In the following, your current working directory (${PWD}) is used and testing.py is mounted in the container's root directory:
version: "3.7"
services:
prepare_files:
image: apache/airflow:1.10.14
volumes:
- ${PWD}/testing.py:/testing.py
environment:
PYTHONUNBUFFERED: 1
entrypoint: /bin/bash
command: -c "python3 /testing.py ${WORKING_DAG}"
NOTE There's no need to include WORKING_DAG in the service definition as it's exposed to the Docker Compose environment by your Makefile. Setting it as you did, overwrites it with "" (empty string) because ${working_dag} was your original environment variable but you remapped this to WORKING_DAG in your Makefile run step.
And
import sys
print(sys.argv[0:], flush=True)
Then:
make --always-make working_dag=Freddie run
WORKING_DAG=Freddie docker-compose --file=./docker-compose.yaml up
Recreating 66014039_prepare_files_1 ... done
Attaching to 66014039_prepare_files_1
prepare_files_1 | ['/testing.py', 'Freddie']
66014039_prepare_files_1 exited with code 0

Docker interactive mode and executing script

I have a Python script in my docker container that needs to be executed, but I also need to have interactive access to the container once it has been created ( with /bin/bash ).
I would like to be able to create my container, have my script executed and be inside the container to see the changes/results that have occurred (no need to manually execute my python script).
The current issue I am facing is that if I use the CMD or ENTRYPOINT commands in the Dockerfile, I am unable to get back into the container once it has been created. I tried using docker start and docker attach but I'm getting the error:
sudo docker start containerID
sudo docker attach containerID
"You cannot attach to a stepped container, start it first"
Ideally, something close to this:
sudo docker run -i -t image /bin/bash python myscript.py
Assume my python script contains something like (It's irrelevant what it does, in this case it just creates a new file with text):
open('newfile.txt','w').write('Created new file with text\n')
When I create my container I want my script to execute and I would like to be able to see the content of the file. So something like:
root@66bddaa892ed# sudo docker run -i -t image /bin/bash
bash4.1# ls
newfile.txt
bash4.1# cat newfile.txt
Created new file with text
bash4.1# exit
root@66bddaa892ed#
In the example above my python script would have executed upon creation of the container to generate the new file newfile.txt. This is what I need.
My way of doing it is slightly different, with some advantages.
It is actually a multi-session server rather than a script, but it could be even more useful in some scenarios:
# Just create interactive container. No start but named for future reference.
# Use your own image.
docker create -it --name new-container <image>
# Now start it.
docker start new-container
# Now attach bash session.
docker exec -it new-container bash
The main advantage is that you can attach several bash sessions to a single container. For example, I can exec one session with bash for tailing logs and in another session run actual commands.
BTW, when you detach the last 'exec' session your container is still running, so it can perform operations in the background.
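For example (a sketch; the log path is hypothetical):
# terminal 1: follow a log file
docker exec -it new-container bash -c 'tail -f /var/log/app.log'
# terminal 2: run actual commands
docker exec -it new-container bash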
You can run a docker image, perform a script and have an interactive session with a single command:
sudo docker run -it <image-name> bash -c "<your-script-full-path>; bash"
The second bash will keep the interactive terminal session open, irrespective of the CMD command in the Dockerfile the image has been created with, since the CMD command is overwritten by the bash -c command above.
There is also no need to append a command like local("/bin/bash") to your Python script (or bash in the case of a shell script).
Assuming that the script has not yet been transferred from the Docker host to the docker image by an ADD Dockerfile command, we can map the volumes and run the script from there:
sudo docker run -it -v <host-location-of-your-script>:/scripts <image-name> bash -c "/scripts/<your-script-name>; bash"
Example: assuming that the python script in the original question is already on the docker image, we can omit the -v option and the command is as simple as follows:
sudo docker run -it image bash -c "python myscript.py; bash"
Why not this?
docker run --name="scriptPy" -i -t image /bin/bash python myscript.py
docker cp scriptPy:/path/to/newfile.txt /path/to/host
vim /path/to/host
Or if you want it to stay on the container
docker run --name="scriptPy" -i -t image /bin/bash python myscript.py
docker start scriptPy
docker attach scriptPy
Hope it was helpful.
I think this is what you mean.
Note: This uses Fabric (because I'm too lazy and/or don't have the time to work out how to wire up stdin/stdout/stderr to the terminal properly, but you could spend the time and use straight subprocess.Popen):
Output:
$ docker run -i -t test
Entering bash...
[localhost] local: /bin/bash
root@66bddaa892ed:/usr/src/python# cat hello.txt
Hello World!root@66bddaa892ed:/usr/src/python# exit
Goodbye!
Dockerfile:
# Test Docker Image
FROM python:2
ADD myscript.py /usr/bin/myscript
RUN pip install fabric
CMD ["/usr/bin/myscript"]
myscript.py:
#!/usr/bin/env python
from __future__ import print_function
from fabric.api import local
with open("hello.txt", "w") as f:
f.write("Hello World!")
print("Entering bash...")
local("/bin/bash")
print("Goodbye!")
Sometimes your python script may need other files in your folder, like other python scripts, CSV files, JSON files, etc.
I think the best approach is to share the directory with your container, which makes it easier to create one environment that has access to all the required files.
Create a script:
sudo nano /usr/local/bin/dock-folder
Add this script as its content
#!/bin/bash
echo "IMAGE = $1"
## image name is the first param
IMAGE="$1"
## container name is created combining the image and the folder address hash
CONTAINER="${IMAGE}-$(pwd | md5sum | cut -d ' ' -f 1)"
echo "${IMAGE} ${CONTAINER}"
# remove the image from this dir, if exists
## rm remove container command
## pwd | md5 get the unique code for the current folder
## "${IMAGE}-$(pwd | md5sum)" create a unique name for the container based in the folder and image
## --force force the container be stopped and removed
if [[ "$2" == "--reset" || "$3" == "--reset" ]]; then
echo "## removing previous container ${CONTAINER}"
docker rm "${CONTAINER}" --force
fi
# create one special container for this folder based in the python image and let this folder mapped
## -it interactive mode
## pwd | md5 get the unique code for the current folder
## --name="${CONTAINER}" create one container with unique name based in the current folder and image
## -v "$(pwd)":/data create ad shared volume mapping the current folder to the /data inside your container
## -w /data define the /data as the working dir of your container
## -p 80:80 some port mapping between the container and host ( not required )
## python: name of the image used as the starting point
echo "## creating container ${CONTAINER} as ${IMAGE} image"
docker create -it --name="${CONTAINER}" -v "$(pwd)":/data -w /data -p 80:80 "${IMAGE}"
# start the container
docker start "${CONTAINER}"
# enter in the container, interactive mode, with the shared folder and running python
docker exec -it "${CONTAINER}" bash
# remove the container after exit
if [[ "$2" == "--remove" || "$3" == "--remove" ]]; then
echo "## removing container ${CONTAINER}"
docker rm "${CONTAINER}" --force
fi
Add execution permission
sudo chmod +x /usr/local/bin/dock-folder
Then you can call the script into your project folder calling:
# creates, if it does not exist, a unique container for this folder and image, and opens an interactive bash session in it
dock-folder python
# destroy the container if it already exists and recreate it
dock-folder python --reset
# destroy the container after closing the interactive mode
dock-folder python --remove
This call will create a new python container sharing your folder. This makes all the files in the folder, such as CSV or binary files, accessible inside the container.
Using this strategy, you can quickly test your project in a container and interact with the container to debug it.
One issue with this approach is reproducibility. That is, you may install something from the shell inside the container that is required for your application to run. But that change happened only inside your container, so anyone who tries to run your code will have to figure out what you did and do the same.
So, if you can run your project without installing anything special, this approach may suit you well. But if you had to install or change some things in your container to be able to run your project, you probably need to create a Dockerfile to save those commands. That will make all the steps, from loading the container to making the required changes and loading the files, easy to replicate.
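A minimal sketch of what such a Dockerfile could look like for this setup (main.py and requirements.txt are hypothetical names for your entry script and dependency list):
FROM python
WORKDIR /data
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]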
