I am trying to compile and run python code in Docker.
Dockerfile
FROM python:3
WORKDIR /app
USER root
ADD . .
RUN chmod a+x ./main.py
RUN chmod a+x ./run.sh
ENTRYPOINT ["sh","./run.sh"]
run.sh
#!/usr/bin/env bash
timeout --signal=SIGTERM --foreground 500 python3 main.py
exit $?
Python code (using the Docker SDK for Python):
import docker
import time

client = docker.from_env()
client.images.build(path="./Dockerimagefolder/", tag="sample322")
container = client.containers.create(image="sample322", stdin_open=True)
container.start()
time.sleep(5)
s = container.attach_socket(params={'stdin': 1, 'stream': 1})
s.send('test'.encode())
container log:
Begin script
Enter your nameyou entered
test
As you can see, "you entered" is not shown on the next line; instead it is placed on the same line as the input. It also doesn't show the input that was sent, i.e. test.
I am also unable to get the output; the log shown above is from the Docker app, not from
print(container.logs())
I'm using a Python script to send a websocket notification, as suggested here.
The script is _wsdump.py and I have a script script.sh that is:
#!/bin/sh
set -o allexport
. /root/.env set
env
python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message" &
If I try to dockerize this script with this Dockerfile:
FROM python:3.8-slim-buster
RUN set -xe && \
    pip install --upgrade pip wheel && \
    pip3 install websocket-client
ENV TZ="Europe/Rome"
ADD utils/_wsdump.py /utils/_wsdump.py
ADD .env /root/.env
ADD script.sh /
ENTRYPOINT ["./script.sh"]
CMD []
I have a strange behaviour:
if I execute docker run -it --entrypoint=/bin/bash mycontainer and then run script.sh, everything works fine and I receive the notification.
if I run mycontainer with docker run mycontainer, I see no errors but the notification doesn't arrive.
What could be the cause?
Your script doesn't launch a long-running process; it tries to start something in the background and then completes. Since the script completes, and it's the container's ENTRYPOINT, the container exits as well.
The easy fix is to remove the & from the end of the last line of the script to cause the Python process to run in the foreground, and the container will stay alive until the process completes.
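With that single change, the script (everything else exactly as in the question) becomes:
#!/bin/sh
set -o allexport
. /root/.env set
env
# no trailing &: python3 runs in the foreground and keeps the container alive
python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message"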
There's a more general pattern of an entrypoint wrapper script that I'd recommend adopting here. If you look at your script, it does two things: (1) set up the environment, then (2) run the actual main container command. I'd suggest using the Docker CMD for that actual command:
# end of Dockerfile
ENTRYPOINT ["./script.sh"]
CMD python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message"
You can end the entrypoint script with the magic line exec "$@" to run the CMD as the actual main container process. (Technically, it replaces the current shell script with a command built from the script's positional arguments; in a Docker context the CMD is passed as arguments to the ENTRYPOINT.)
#!/bin/sh
# script.sh
# set up the environment
. /root/.env set
# run the main container command
exec "$#"
With this in place you can debug the container setup by replacing only the command part, for example
docker run --rm your-image env
to print out its environment. The alternate command env will replace the Dockerfile CMD but the ENTRYPOINT will remain in place.
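The same trick works for any one-off command; for example, to poke around interactively with the same environment set up:
docker run --rm -it your-image /bin/sh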
You install script.sh to the root dir /, but your ENTRYPOINT is defined to run the relative path ./script.sh.
Try changing ENTRYPOINT to reference the absolute path /script.sh instead.
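That is, a sketch of the suggested change, keeping the rest of the Dockerfile from the question as-is:
ADD script.sh /
ENTRYPOINT ["/script.sh"]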
My folder structure had a Dockerfile at the project root and a src folder containing main.py and requirements.txt.
My Dockerfile looked like this:
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
CMD [ "python", "main.py"]
When I ran these commands:
docker build --tag FinTechExplained_Python_Docker .
docker run free
my main.py file ran and gave the correct print statements as well. Now I have added another file, tests.py, in the src folder. I want to run tests.py first and then main.py.
I tried modifying the CMD within my Dockerfile like this:
CMD [ "python", "test.py"] && [ "python", "main.py"]
but then it gives me the print statements from only the first file, test.py.
I read about docker-compose and added this docker-compose.yml file to the root folder:
version: '3'
services:
  tests:
    image: free
    command: >
      /bin/sh -c 'python tests.py'
  main:
    image: free
    command: >
      /bin/sh -c 'python main.py'
Then I changed my Dockerfile by removing the CMD:
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
Then I ran the following commands:
docker compose build
docker compose run tests
docker compose run main
When I run these commands separately, I get the correct print statements for both tests and main. However, I am not sure if I am using docker-compose correctly or not.
Am I supposed to run both scripts separately? Or is there a way to run one after another using a single docker command?
What is my Dockerfile supposed to look like if I am running the python scripts from the docker-compose.yml instead?
Edit:
Ideally looking for solutions based on docker-compose
In the Bourne shell, in general, you can run two commands in sequence by putting && between them. It sounds like you're already aware of this.
# without Docker, at a normal shell prompt
python test.py && python main.py
The Dockerfile CMD has two syntactic forms. The JSON-array form does not run a shell, and so it is slightly more efficient and has slightly more consistent escaping rules. If it's not a JSON array then Docker automatically runs it via a shell. So for your use you can use the shell form:
CMD python test.py && python main.py
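If you did want to keep the JSON-array form, you would have to spell out the shell yourself; a sketch with equivalent behavior:
CMD ["/bin/sh", "-c", "python test.py && python main.py"]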
In comments to other answers you ask about providing this as an override in the docker-compose.yml file. Compose will not normally run a shell for you, so you need to explicitly specify it as part of the command: override.
command: /bin/sh -c 'python test.py && python main.py'
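In context, the override might look like this in the docker-compose.yml (a sketch, reusing the free image tag from the question):
services:
  main:
    image: free
    command: /bin/sh -c 'python test.py && python main.py'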
Your Dockerfile should generally specify a CMD and the docker-compose.yml often will not include a command:. This makes it easier to run the image in other contexts (via docker run without Compose; in Kubernetes) since you won't have to retype the command every different way you want to run the container. The entrypoint wrapper pattern highlighted in #sytech's answer is very useful in general and it's easy to add to a container that uses a CMD without an ENTRYPOINT; but it requires the Dockerfile to use CMD as a normal well-formed shell command.
You have to change CMD to ENTRYPOINT, and run the first script in the background using &.
ENTRYPOINT ["/docker_entrypoint.sh"]
docker_entrypoint.sh
#!/bin/bash
set -e
python tests.py &
exec python main.py
In general, it is a good rule of thumb that a container should run only a single process, and that essential process should be PID 1.
Using an entrypoint can help you do multiple things at runtime and optionally run user-defined commands using exec, in line with the best practices guide.
For example, say you always want the tests to run whenever the container starts, and then to execute whatever command is defined in CMD.
First, create an entrypoint script (be sure to make it executable with chmod +x):
#!/usr/bin/env bash
# always run tests first
python /src/tests.py
# then run user-defined command
exec "$#"
Then configure the Dockerfile to copy the script and set it as the entrypoint:
#...
COPY entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["python", "main.py"]
Then when you build an image from this Dockerfile and run it, the entrypoint will first execute the tests and then run the command, which runs main.py.
The command can still be overridden by the user when running the image, like docker run ... myimage <new command>, which will still result in the entrypoint tests being executed, but lets the user change the command being run.
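For example, assuming the image is built and tagged myimage (a hypothetical name):
docker run --rm myimage                   # runs tests.py, then the default CMD: python main.py
docker run --rm myimage python other.py   # runs tests.py, then the override (other.py is a hypothetical script)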
You can achieve this by creating a bash script (let's name it entrypoint.sh) which contains the python commands. If you want, you can run those as background processes.
#!/usr/bin/env bash
set -e
python tests.py
python main.py
Edit your docker file as follows:
FROM python:3.8-slim-buster
# Create workDir
RUN mkdir code
WORKDIR code
ENV PYTHONPATH=/code
# upgrade pip here if you like
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy Code
COPY . .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
In the docker compose file, add the following line to the service.
entrypoint: [ "./entrypoint.sh" ]
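In context, reusing the service and image names from the question's docker-compose.yml, that might look like (a sketch):
services:
  main:
    image: free
    entrypoint: ["./entrypoint.sh"]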
Have you tried this in your docker-compose.yaml?
version: '3'
services:
  main:
    image: free
    command: >
      /bin/sh -c 'python3 tests.py & python3 main.py &'
Both will run in the background. Then run in the terminal:
docker-compose up --build
I am trying to execute a Python script in Docker, and to allow input from the terminal to the Docker container.
Dockerfile:
FROM python:3
WORKDIR /app
USER root
ADD . .
RUN chmod a+x ./main.py
RUN chmod a+x ./run.sh
ENTRYPOINT ["sh","./run.sh"]
main.py:
print("Begin script")
x = input("Enter your name")
print("you entered")
print(x)
run.sh:
#!/usr/bin/env bash
timeout --signal=SIGTERM 500 python3 main.py
exit $?
Docker build and run commands:
docker image build . -t testimg --rm
docker run -ti --name testimg testimg
terminal logs:
Begin script
Enter your namejohn
It gets stuck; it does not register what I typed into my terminal ("john").
The problem is in your run.sh script. I changed the script as follows:
#!/usr/bin/env bash
timeout --signal=SIGTERM --foreground 500 python3 main.py
exit $?
And now it works. I added the --foreground option to the timeout command. You can find more on timeout and its options in the following links:
https://linuxize.com/post/timeout-command-in-linux/
https://man7.org/linux/man-pages/man1/timeout.1.html
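For what it's worth, the reason this helps, per the man page above: by default timeout runs the command in its own process group so it can signal the whole group when the time limit expires, and when timeout is not started directly from an interactive shell prompt (here it is started by run.sh) that detaches the command from the TTY, so input() never receives what you type. With --foreground the command stays attached to the TTY. Roughly:
timeout 500 python3 main.py               # own process group; cannot read from the TTY here
timeout --foreground 500 python3 main.py  # stays attached to the TTY; input() works with docker run -ti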
I really just want to pass an argument via docker run
My Dockerfile:
FROM python:3
# set a directory for the app
WORKDIR /usr/src/app
# copy all the files to the container
COPY . .
# install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# tell the port number the container should expose
EXPOSE 5000
# run the command
CMD ["python", "./app.py"]
My python file:
import sys
print(sys.argv)
I tried:
docker run myimage foo
I got an error:
flask-app git:(master) ✗ docker run myimage foo
docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"foo\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
When you write foo at the end of your docker run command, you override the whole command. Therefore, instead of
python app.py
you call
foo
The proper way to call your script with arguments is:
docker run myimage python app.py foo
Alternatively, you may use ENTRYPOINT instead of CMD, and then your docker run command may contain just foo after the image name.
Dockerfile:
FROM python:3
# set a directory for the app
WORKDIR /usr/src/app
# copy all the files to the container
COPY app.py .
# run the command
ENTRYPOINT ["python", "./app.py"]
calling it:
docker run myimage foo
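As a further note, ENTRYPOINT and CMD can be combined so that foo merely overrides a default argument; a sketch, with default-arg as a hypothetical placeholder:
ENTRYPOINT ["python", "./app.py"]
CMD ["default-arg"]
Since app.py prints sys.argv, docker run myimage then runs python ./app.py default-arg, while docker run myimage foo runs python ./app.py foo.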
I have a dockerized Python script. When I run it locally with docker run, I can see the output of the Python file in the command line as you would expect. But when I run it on Jenkins, it doesn't seem to output anything in the console log.
Am I missing something? So far everything I've tried hasn't remedied the situation.
This is my Dockerfile
FROM python:3
ADD test_script.py /
CMD [ "python", "-u", "./test_script.py" ]
And I'm doing
docker build --force-rm --no-cache -t test_script -f Dockerfile .
docker run test_script
in my command line and on Jenkins. Jenkins runs fine; it just does not output any of my print statements. I had this issue locally, but it was remedied when I added the -u flag to the Dockerfile.
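For reference, the -u flag disables Python's stdout/stderr buffering; an equivalent approach (standard Python behavior via the documented PYTHONUNBUFFERED environment variable, sketched here against the Dockerfile above) is:
FROM python:3
# any non-empty value forces unbuffered stdout/stderr, same as python -u
ENV PYTHONUNBUFFERED=1
ADD test_script.py /
CMD ["python", "./test_script.py"]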