My folder structure looked like this:
My Dockerfile looked like this:
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
CMD [ "python", "main.py"]
When I ran these commands:
docker build --tag FinTechExplained_Python_Docker .
docker run free
my main.py file ran and gave the correct print statements as well. Now I have added another file, tests.py, in the src folder. I want to run tests.py first and then main.py.
I tried modifying the CMD within my Dockerfile like this:
CMD [ "python", "test.py"] && [ "python", "main.py"]
but then it gives me the print statements from only the first test.py file.
I read about docker-compose and added this docker-compose.yml file to the root folder:
version: '3'
services:
  tests:
    image: free
    command: >
      /bin/sh -c 'python tests.py'
  main:
    image: free
    command: >
      /bin/sh -c 'python main.py'
Then I changed my Dockerfile by removing the CMD:
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
Then I ran the following commands:
docker compose build
docker compose run tests
docker compose run main
When I run these commands separately, I get the correct print statements for both tests and main. However, I am not sure whether I am using docker-compose correctly.
Am I supposed to run both scripts separately? Or is there a way to run one after another using a single docker command?
What is my Dockerfile supposed to look like if I am running the Python scripts from docker-compose.yml instead?
Edit:
Ideally, I'm looking for solutions based on docker-compose.
In the Bourne shell, in general, you can run two commands in sequence by putting && between them. It sounds like you're already aware of this.
# without Docker, at a normal shell prompt
python test.py && python main.py
The Dockerfile CMD has two syntactic forms. The JSON-array form does not run a shell, and so it is slightly more efficient and has slightly more consistent escaping rules. If it's not a JSON array then Docker automatically runs it via a shell. So for your use you can use the shell form:
CMD python test.py && python main.py
In comments to other answers you ask about providing this as an override in the docker-compose.yml file. Compose will not normally run a shell for you, so you need to explicitly specify it as part of the command: override.
command: /bin/sh -c 'python test.py && python main.py'
Your Dockerfile should generally specify a CMD and the docker-compose.yml often will not include a command:. This makes it easier to run the image in other contexts (via docker run without Compose; in Kubernetes) since you won't have to retype the command every different way you want to run the container. The entrypoint wrapper pattern highlighted in #sytech's answer is very useful in general and it's easy to add to a container that uses a CMD without an ENTRYPOINT; but it requires the Dockerfile to use CMD as a normal well-formed shell command.
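As a rough sketch of that split (the image tag and file names follow the question; treat this as an illustration, not your exact files), the Dockerfile owns the command and the Compose file stays minimal:

FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
# shell form, so && runs both scripts in sequence
CMD python tests.py && python main.py

# docker-compose.yml
version: '3'
services:
  main:
    build: .
    image: free
    # no command: override; the CMD from the Dockerfile is used

With that in place, docker compose up --build runs the tests followed by main.py in a single container.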
You have to change CMD to ENTRYPOINT, and run the first script in the background as a daemon using &.
ENTRYPOINT ["/docker_entrypoint.sh"]
docker_entrypoint.sh
#!/bin/bash
set -e
python tests.py &
exec python main.py
In general, it is a good rule of thumb that a container should run only a single process, and that essential process should be PID 1.
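For completeness, a minimal sketch of the Dockerfile lines that would wire this up (assuming docker_entrypoint.sh sits next to the Dockerfile):

COPY docker_entrypoint.sh /docker_entrypoint.sh
# make the wrapper executable inside the image
RUN chmod +x /docker_entrypoint.sh
ENTRYPOINT ["/docker_entrypoint.sh"]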
Using an entrypoint can help you do multiple things at runtime and optionally run user-defined commands using exec, as described in the best practices guide.
For example, suppose you always want the tests to run whenever the container starts, followed by whatever command is defined in CMD.
First, create an entrypoint script (be sure to make it executable with chmod +x):
#!/usr/bin/env bash
# always run tests first
python /src/tests.py
# then run user-defined command
exec "$@"
Then configure the Dockerfile to copy the script and set it as the entrypoint:
#...
COPY entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["python", "main.py"]
Then, when you build an image from this Dockerfile and run it, the entrypoint will first execute the tests and then run the command, which runs main.py.
The command can also still be overridden by the user when running the image, like docker run ... myimage <new command>; the entrypoint tests will still be executed, but the user can change the command being run.
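For example (the image and script names here are hypothetical), overriding the command still runs the tests first:

# the entrypoint runs /src/tests.py, then the overridden command
docker run --rm myimage python other_script.py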
You can achieve this by creating a bash script (let's name it entrypoint.sh) containing the Python commands. If you want, you can run those as background processes.
#!/usr/bin/env bash
set -e
python tests.py
python main.py
Edit your Dockerfile as follows:
FROM python:3.8-slim-buster
# Create workDir
RUN mkdir code
WORKDIR code
ENV PYTHONPATH=/code
#upgrade pip if you like here
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy Code
COPY . .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
In the docker compose file, add the following line to the service.
entrypoint: [ "./entrypoint.sh" ]
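In context, the service definition might look roughly like this (service and image names follow the question; adjust to your setup):

services:
  main:
    image: free
    entrypoint: [ "./entrypoint.sh" ]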
Have you tried this in your docker-compose.yaml?
version: '3'
services:
  main:
    image: free
    command: >
      /bin/sh -c 'python3 tests.py & python3 main.py & wait'
both will run in the background
then run in terminal
docker-compose up --build
Related
I'm using a Python script to send a websocket notification,
as suggested here.
The script is _wsdump.py and I have a script script.sh that is:
#!/bin/sh
set -o allexport
. /root/.env set
env
python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message" &
If I try to dockerizing this script with this Dockerfile:
FROM python:3.8-slim-buster
RUN set -xe && \
    pip install --upgrade pip wheel && \
    pip3 install websocket-client
ENV TZ="Europe/Rome"
ADD utils/_wsdump.py /utils/_wsdump.py
ADD .env /root/.env
ADD script.sh /
ENTRYPOINT ["./script.sh"]
CMD []
I have a strange behaviour:
if I execute docker run -it --entrypoint=/bin/bash mycontainer and then call script.sh, everything works fine and I receive the notification.
if I run it with docker run mycontainer, I see no errors but the notification doesn't arrive.
What could be the cause?
Your script doesn't launch a long-running process; it tries to start something in the background and then completes. Since the script completes, and it's the container's ENTRYPOINT, the container exits as well.
The easy fix is to remove the & from the end of the last line of the script to cause the Python process to run in the foreground, and the container will stay alive until the process completes.
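For reference, the quick-fix version of script.sh is just the same script without the trailing &:

#!/bin/sh
set -o allexport
. /root/.env set
env
python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message"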
There's a more general pattern of an entrypoint wrapper script that I'd recommend adopting here. If you look at your script, it does two things: (1) set up the environment, then (2) run the actual main container command. I'd suggest using the Docker CMD for that actual command:
# end of Dockerfile
ENTRYPOINT ["./script.sh"]
CMD python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message"
You can end the entrypoint script with the magic line exec "$@" to run the CMD as the actual main container process. (Technically, it replaces the current shell script with a command constructed by replaying the command-line arguments; in a Docker context the CMD is passed as arguments to the ENTRYPOINT.)
#!/bin/sh
# script.sh
# set up the environment
. /root/.env set
# run the main container command
exec "$@"
With this setup you can debug the container by replacing the command part (only), like:
docker run --rm your-image env
to print out its environment. The alternate command env will replace the Dockerfile CMD but the ENTRYPOINT will remain in place.
You install script.sh to the root dir /, but your ENTRYPOINT is defined to run the relative path ./script.sh.
Try changing ENTRYPOINT to reference the absolute path /script.sh instead.
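That is, in the Dockerfile:

ENTRYPOINT ["/script.sh"]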
I have created a Docker image with a Dockerfile where the ENTRYPOINT is as follows:
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "python", "./myprojectmain.py", "--config", "./config.py"]
When I run I use the command:
docker run myproject
all is fine it seems.
However I have a secondary .py file in the root of the project called setup.py. The purpose of this file is to update some of the config and json files after getting some input from the user.
Is there a way to run this secondary file (setup.py), or do I need to create a whole new image (which seems ridiculous)?
Thanks
Well... if you have an image, you don't have to use the entrypoint... just run your scripts like this:
docker run image python /some/path/myscript.py
or
docker run image /bin/bash -c "cd /some/path && python myscript.py"
or with an entrypoint:
RUN ./myprojectmain.py --config ./config.py
RUN ./myproject2main.py --config ./config.py
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "python"]
You can straightforwardly provide an alternate command after the image name in the docker run command. It's harder to override the entrypoint, though. If you have both a command and an entrypoint then they are combined together into a single command.
This workflow is easiest if your Dockerfile has a CMD, and that's a complete runnable shell command. If you have an ENTRYPOINT at all, it is some kind of wrapper that does some initial setup and then runs the command it's given as additional arguments. In this particular setup, conda run with its arguments seems to meet that need and have the correct form, so you could say
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "--"]
CMD ["python", "./myprojectmain.py", "--config", "./config.py"]
(Note that conda run seems to have some issues; you could probably simulate it using a custom entrypoint wrapper script or use a pip-based non-virtual-environment workflow instead.)
If you split the ENTRYPOINT and CMD like this, then you can run
docker run myproject \
python setup.py
The alternate python setup.py command will be appended to the conda run entrypoint command.
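In other words, the container effectively runs something like:

conda run --no-capture-output -n myproject -- python setup.py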
... update some of the config and json files ...
It's often a good idea to inject these into your container using a bind mount. Depending on how exactly the files get set up, you may be able to initialize them from the host environment, without Docker:
./setup.py
docker run -d -v $PWD/config:/app/config myproject
but if they are sensitive to the Docker environment in some way, you could do it in Docker too; make sure to mount the same configuration storage into both containers.
docker network create mynet
docker volume create config
docker run --rm --net mynet -v config:/app/config myproject ./setup.py
docker run -d -p 8000:8000 --net mynet -v config:/app/config myproject
I am trying to do something very simple (I think): I want to build a Docker image and launch two different scripts from the same image in parallel running containers.
Something as simple as
Container 1 -> print("Hello")
Container 2 -> print("World")
I did some research, but some techniques seem a little over-engineered and others do something like
CMD ["python", "script.py"] && ["python", "script2.py"]
which isn't what I'm looking for.
I would love to see something like
$ docker ps
CONTAINER ID   IMAGE          CREATED        STATUS              NAMES
a7e789711e62   67759a80360c   12 hours ago   Up 2 minutes        MyContainer1
87ae9c5c3f84   67759a80360c   12 hours ago   Up About a minute   MyContainer2
But running two different scripts.
I'm still fairly new to all of this, so if this is a foolish question, I apologize in advance and thank you all for working with me.
You can do this easily using Docker Compose. Here is a simple docker-compose.yml file just to show the idea:
version: '3'
services:
  app1:
    image: alpine
    command: >
      /bin/sh -c 'while true; do echo "pre-processing"; sleep 1; done'
  app2:
    image: alpine
    command: >
      /bin/sh -c 'while true; do echo "post-processing"; sleep 1; done'
As you see, both services use the same image, alpine in this example, but differ in the commands they run. Execute docker-compose up and see the output:
app1_1 | pre-processing
app2_1 | post-processing
app2_1 | post-processing
app2_1 | post-processing
app1_1 | pre-processing
app2_1 | post-processing
app1_1 | pre-processing
app2_1 | post-processing
...
In your case, you just change the image to your own, say myimage:v1, and change the service commands so that the first service definition has the line command: python script1.py and the second one command: python script2.py, as sketched below.
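Adapted to your case, it might look roughly like this (assuming the image is tagged myimage:v1, as above):

version: '3'
services:
  app1:
    image: myimage:v1
    command: python script1.py
  app2:
    image: myimage:v1
    command: python script2.py

docker-compose up then starts both containers from the same image.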
If you are set on using the same image, then I would suggest you set the ENTRYPOINT to python and then use the docker run command to start the containers, providing the scripts as the CMD, like so:
Dockerfile:
FROM python
...
ENTRYPOINT ["python"]
And then use the docker run command like so:
docker run -d my_image script.py && docker run -d my_image script2.py
This would start two containers, each running a separate script.
BUT - I like to keep the images clean in terms of not having any additional scripts or packages that are not necessary for my service to work, so in this case I would simply create two separate images each having one of the scripts and then run them similarly.
Example:
FROM python
COPY script.py script.py
ENTRYPOINT ["python"]
CMD ["script.py"]
And the second image:
FROM python
COPY script2.py script2.py
ENTRYPOINT ["python"]
CMD ["script2.py"]
And then just build them as separate images and run them the same way as before.
Any command you put at the end of the docker run command (or the Docker Compose command: field) replaces the CMD in the Dockerfile. I would suggest still putting in some useful default CMD, but you can always just run:
docker run --name hello myimage python script.py
docker run --name world myimage python script2.py
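A Compose equivalent would be along these lines (image and script names as in the docker run commands above):

services:
  hello:
    image: myimage
    command: python script.py
  world:
    image: myimage
    command: python script2.py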
Try these steps, it should work.
Create a Dockerfile with contents:
FROM python:3.7-alpine
COPY script1.py /script1.py
COPY script2.py /script2.py
CMD ["/bin/sh"]
In script1.py
print("Hello")
In script2.py
print("World")
Build docker image docker build -t myimage:v1 .
Run the containers
$ docker run -it --rm --entrypoint python myimage:v1 /script1.py
Hello
$
$ docker run -it --rm --entrypoint python myimage:v1 /script2.py
World
$
NOTE: Here we're using the same Docker image myimage:v1 and just changing the entrypoint in every docker run command.
More info here.
Hope this helps.
I have a simple Python program that I want to run in IBM Cloud Functions. Alas, it needs two libraries (O365 and PySnow), so I have to Dockerize it, and it needs to be able to accept a JSON feed from STDIN. I succeeded in doing this:
FROM python:3
ADD requirements.txt ./
RUN pip install -r requirements.txt
ADD ./main ./main
WORKDIR /main
CMD ["python", "main.py"]
This runs with: cat env_var.json | docker run -i f9bf70b8fc89
I've added the Docker container to IBM Cloud Functions like this:
ibmcloud fn action create e2t-bridge --docker [username]/e2t-bridge
However when I run it, it times out.
Now I did see a possible solution route, where I Dockerize it as an OpenWhisk application. But for that I would need to create a binary from my Python application and then load it into a rather complicated OpenWhisk skeleton, I think?
But having a file you can simply run is the whole point of my Docker image, so creating a binary of an interpreted language and then adding it to an OpenWhisk Docker image just feels awfully clunky.
What would be the best way to approach this?
It turns out you don't need to create a binary, you just need to edit the OpenWhisk skeleton like so:
# Dockerfile for example whisk docker action
FROM openwhisk/dockerskeleton
ENV FLASK_PROXY_PORT 8080
### Add source file(s)
ADD requirements.txt /action/requirements.txt
RUN cd /action; pip install -r requirements.txt
# Move the file(s) to /action
ADD ./main /action
# Rename our executable Python action
ADD /main/main.py /action/exec
CMD ["/bin/bash", "-c", "cd actionProxy && python -u actionproxy.py"]
And make sure that your Python code accepts a JSON feed from stdin:
json_input = json.loads(sys.argv[1])
The whole explanation is here: https://github.com/iainhouston/dockerPython
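For illustration, a minimal sketch of what the action code might look like, assuming the OpenWhisk action proxy passes the invocation parameters as a single JSON argument and expects a JSON object printed to stdout (field names here are hypothetical):

#!/usr/bin/env python
import json
import sys

def main():
    # the action proxy passes the invocation parameters as one JSON string in argv[1]
    params = json.loads(sys.argv[1]) if len(sys.argv) > 1 else {}
    # ... do the O365 / PySnow work here ...
    # the last line printed to stdout must be a JSON object (the action result)
    print(json.dumps({"received": params}))

if __name__ == "__main__":
    main()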
I have made a little Python script to create a DB and some tables inside RethinkDB.
But now I'm trying to launch this Python script inside my rethink container launched with docker-compose.
This is my docker-compose.yml rethink container config:
# Rethink DB
rethink:
  image: rethinkdb:latest
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
I'm trying to execute the script after launching my container with:
docker exec -it rethink python src/app/db-install.py
But I get this error
rpc error: code = 2 desc = oci runtime error: exec failed: exec: "python": executable file not found in $PATH
Python is not found in my container. Is it possible to execute a Python script inside a given container with docker-compose or with docker exec?
First find out if you have python executable in the container:
docker exec -it rethink which python
If it exists, use the absolute path provided by the which command in the previous step:
docker exec -it rethink /absolute/path/to/python src/app/db-install.py
If not, you can convert your Python script to a bash script, so you can run it without extra executables and libraries.
Or you can create a Dockerfile, use the base image, and install Python.
Dockerfile:
FROM rethinkdb:latest
RUN apt-get update && apt-get install -y python
Docker Compose file:
rethink:
  build: .
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
Docker-compose
Assuming that python is installed, try:
docker-compose run --rm MY_DOCKER_COMPOSE_SERVICE MY_PYTHON_COMMAND
To start with, you might also just go into the shell and run a Python script from the command prompt:
docker-compose run --rm MY_DOCKER_COMPOSE_SERVICE bash
In your case, MY_DOCKER_COMPOSE_SERVICE is 'rethink', and that is not the container name here, but the name of the service (first line rethink:), and only the service is run with docker-compose run, not the container.
MY_PYTHON_COMMAND is, in your case, python src/app/db-install.py; if you have both Python 3 and Python 2 installed, use python3 src/app/db-install.py.
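For this question, that would be something like (assuming python is available in the image and the script is at the same path inside the container):

docker-compose run --rm rethink python src/app/db-install.py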
Dockerfile
To be able to run this Python command, the Python file needs to be in the container. Therefore, in the Dockerfile that you call with build: ., you need to copy your build directory to a directory of your choice in the container:
COPY $PROJECT_PATH /tmp
This /tmp folder will be created in the container and your build directory copied into it. If you just write "." as the destination, there is no subfolder and the files go directly into the working directory.
When using /tmp as the subfolder, you might write at the end of your Dockerfile:
WORKDIR /tmp
Docker-compose
Or, if you do not change the WORKDIR from the build (".") context to /tmp and you still want to reach /tmp, run your Python file with its full path, like /tmp/db-install.py.
The rethinkdb image is based on the debian:jessie image:
https://github.com/rethinkdb/rethinkdb-dockerfiles/blob/da98484fc73485fe7780546903d01dcbcd931673/jessie/2.3.5/Dockerfile
The debian:jessie image does not come with python installed.
So you will need to create your own Dockerfile, something like:
FROM rethinkdb:latest
RUN apt-get update && apt-get install -y python
Then change your docker-compose file:
# Rethink DB
rethink:
  build: .
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
build: . is the path to the directory containing your Dockerfile.
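With that in place, a typical sequence would be (the container name comes from the Compose file above):

docker-compose up -d --build
docker exec -it rethink python src/app/db-install.py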