Docker map an external config file - python

I want to mount an external config file via a volume and pass it like:
docker run MyImage -v /home/path/my_config.conf:folder2/ (is that right, btw?)
But I have no idea how to link this volume to the argument for main.py...
My Dockerfile:
FROM python:3.6-jessie
MAINTAINER Vladislav Ladenkov
WORKDIR folder1/folder2
COPY folder2/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY folder2/*.py ./
?? how to link volume ??
ENTRYPOINT ["python3", "main.py", "#??volume??"]

You want to map the volume using a folder name:
docker run -v /home/path/:/folder1/folder2/ MyImage
Now the /home/path folder on the host machine is mounted to /folder1/folder2 inside the container.
Then just pass the path of the conf file, as seen from within the container, to the command:
ENTRYPOINT ["python3", "main.py", "/folder1/folder2/myconf.conf"]

Related

Run docker image with json file as variable

I have the following docker image
FROM python:3.8-slim
WORKDIR /app
# copy the dependencies file to the working directory
COPY requirements.txt .
COPY model-segmentation-512.h5 .
COPY run.py .
# TODO add python dependencies
# install pip deps
RUN apt update
RUN pip install --no-cache-dir -r requirements.txt
RUN mkdir /app/input
RUN mkdir /app/output
# copy the content of the local src directory to the working directory
#COPY src/ .
# command to run on container start
ENTRYPOINT [ "python", "run.py"]
and then I would like to run my image using the following command, where json_file.json is a file on my machine that I can update whenever I want, and that run.py reads to import all the required parameters for the Python script:
docker run -v /local/input:/app/input -v /local/output:/app/output/ -t docker_image python3 run.py model-segmentation-512.h5 json_file.json
However, when I do this I get FileNotFoundError: [Errno 2] No such file or directory: 'path/json_file.json', so I think I'm not passing my JSON file in properly. What should I change so that my Docker image reads an updated JSON file (just like a variable) every time I run it?
I think you are using ENTRYPOINT in the wrong way. Please see this question and read more about ENTRYPOINT and CMD. In short, whatever you specify after the image name when you run docker is passed as CMD, which means it is appended to the ENTRYPOINT as a list of arguments. See the next example:
Dockerfile:
FROM python:3.8-slim
WORKDIR /app
COPY run.py .
ENTRYPOINT [ "python", "run.py"]
run.py:
import sys
print(sys.argv[1:])
When you run it:
> docker run -it --rm run-docker-image-with-json-file-as-variable arg1 arg2 arg3
['arg1', 'arg2', 'arg3']
> docker run -it --rm run-docker-image-with-json-file-as-variable python3 run.py arg1 arg2 arg3
['python3', 'run.py', 'arg1', 'arg2', 'arg3']
Map the JSON file into the container using something like -v $(pwd)/json_file.json:/mapped_file.json and pass the mapped filename to your program, so you get:
docker run -v $(pwd)/json_file.json:/mapped_file.json -v /local/input:/app/input -v /local/output:/app/output/ -t docker_image python3 run.py model-segmentation-512.h5 /mapped_file.json
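For reference, a minimal sketch of how run.py might consume those two positional arguments (the variable names are hypothetical):
import json
import sys

# Positional arguments as passed on the docker run command line:
# sys.argv[1] -> model weights path, sys.argv[2] -> JSON parameters path
model_path = sys.argv[1]
params_path = sys.argv[2]

with open(params_path) as f:
    params = json.load(f)

print("Model:", model_path)
print("Parameters:", params)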

Python winamd64 image and Windows container

I am new to Docker and want to know if there is a slim or lightweight Python winamd64 image?
My Dockerfile is as follows:
FROM python:3.8-slim-buster
# Create project directory (workdir)
WORKDIR /SMART
# Add requirements.txt to WORKDIR and install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# Add source code files to WORKDIR
ADD . .
# Application port (optional)
EXPOSE 8000
# Container start command
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
It works well as a Linux container, but I need my container to be a Windows container. I run into a problem when I switch to Windows containers, even if I set "experimental" to true. Therefore, I decided to use a Python winamd64 image, but all of the official images on the website are around 1 GB. I need a small Python image like python:3.8-slim-buster, so how can I solve this problem? Or how can I change the first line of the Dockerfile (FROM python:3.8-slim-buster) to use a Python image that I have pushed to my own hub?
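Regarding the last part of the question: pointing the FROM line at an image pushed to your own hub only requires referencing its full repository name and tag. A sketch with a hypothetical repository name:
# Hypothetical image name; replace with the repository/tag you actually pushed
FROM your-dockerhub-user/python-winamd64-slim:3.8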

Trying to supply PGPASS to Docker Image

New to Docker here. I'm trying to create a basic Dockerfile where I run a Python script that runs some queries in Postgres through psycopg2. I currently have a pgpass file set up in my environment variables so that I can run these tools without supplying a password in the code. I'm trying to replicate this in Docker. I have Windows on my local machine.
FROM datascienceschool/rpython as base
RUN mkdir /test
WORKDIR /test
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY main_airflow.py /test
RUN cp C:\Users\user.name\AppData\Roaming\postgresql\pgpass.conf /test
ENV PGPASSFILE="test/pgpass.conf"
ENV PGUSER="user"
ENTRYPOINT ["python", "/test/main_airflow.py"]
This is what I've tried in my Dockerfile. I've tried to copy over my pgpass file and set it as my environment variable. Apologies if I have any forward/backslashes or syntax wrong. I'm very new to Docker, Linux, etc.
Any help or alternatives would be appreciated.
It's better to pass your secrets into the container at runtime than it is to include the secret in the image at build-time. This means that the Dockerfile doesn't need to know anything about this value.
For example
$ export PGPASSWORD=<postgres password literal>
$ docker run -e PGPASSWORD <image ref>
Now in that example, I've used PGPASSWORD, which is an alternative to PGPASSFILE. It's a little more complicated to do the same thing if you're using a file, but it would look something like this:
The plan will be to mount the credentials as a volume at runtime.
FROM datascienceschool/rpython as base
RUN mkdir /test
WORKDIR /test
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY main_airflow.py /test
ENV PGPASSFILE="/credentials/pgpass.conf"
ENV PGUSER="user"
ENTRYPOINT ["python", "/test/main_airflow.py"]
As I said above, we don't want to include the secrets in the image. We are going to indicate where the file will be in the image, but we don't actually include it yet.
Now, when we start the image, we'll mount a volume containing the file at the location specified in the image, /credentials
$ docker run --mount src="<host path to directory>",target="/credentials",type=bind <image ref>
I haven't tested this, so you may need to adjust the exact paths and such, but this is the idea of how to set sensitive values in a Docker container.
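For completeness, the Python side then needs no password handling at all: psycopg2 goes through libpq, which reads PGUSER and PGPASSFILE from the environment. A minimal sketch, with a placeholder host and database name:
import psycopg2

# libpq picks up PGUSER and PGPASSFILE from the environment set in the
# Dockerfile / at runtime, so no credentials appear in the code.
conn = psycopg2.connect(host="my-postgres-host", dbname="my_database")

with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())

conn.close()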

How to allow docker to read files with root permissions

So I'm trying to run my FastAPI Python app in a Docker container. I chose python:3.9 as the base image and everything seemed to work until I decided to integrate my SSL cert files into the container.
Dockerfile:
FROM python:3.9
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
RUN mkdir -p /app/SSL
VOLUME /etc/letsencrypt/live/soulforger.net/:/app/SSL/
COPY . .
CMD [ "uvicorn", "core:app", "--host", "0.0.0.0", "--port", "8000", "--ssl-keyfile", "/app/SSL/privkey.pem", "--ssl-certfile", "/app/SSL/cert.pem" ]
EXPOSE 8000
Docker run command: sudo docker run -p 33665:8000 -v /etc/letsencrypt/live/soulforger.net/:/app/SSL --name soulforger_api -d 24aea28ce756
Now the problem is that the directory I'm mapping is only accessible as the root user. When I exec into the container, the files are there but I can't cat /app/SSL/cert.pem. Since I can cat everything else without a problem, I assume it's some sort of permissions problem when mapping the dir into the container. Does anybody have an idea of what can cause this issue?
Solution:
After a lot of digging I found out what the problem is. For anyone that happens upon this post and also uses Let's Encrypt: the files within /etc/letsencrypt/live/some.domain/ are only symlinks to files in another directory. If you want to mount your server's SSL certificates into your containers, you have to mount the entire /etc/letsencrypt/ dir in order to have access to the files referenced by the links. All props go to this answer.
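In practice that means mounting the parent directory and adjusting the certificate paths to follow the live/ layout inside the mount; a sketch of the adjusted run command under that assumption:
sudo docker run -p 33665:8000 -v /etc/letsencrypt/:/app/SSL:ro --name soulforger_api -d 24aea28ce756
With that mount, the key and cert arguments in the CMD become /app/SSL/live/soulforger.net/privkey.pem and /app/SSL/live/soulforger.net/cert.pem.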
You can change the user in the Dockerfile. Try adding USER root to your Dockerfile.
Hopefully it will be helpful.
FROM python:3.9
USER root
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
RUN mkdir -p /app/SSL
VOLUME /etc/letsencrypt/live/soulforger.net/:/app/SSL/
COPY . .
CMD [ "uvicorn", "core:app", "--host", "0.0.0.0", "--port", "8000", "--ssl-keyfile", "/app/SSL/privkey.pem", "--ssl-certfile", "/app/SSL/cert.pem" ]
EXPOSE 8000

Docker pass command-line arguments

I really just want to pass an argument via docker run.
My Dockerfile:
FROM python:3
# set a directory for the app
WORKDIR /usr/src/app
# copy all the files to the container
COPY . .
# install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# tell the port number the container should expose
EXPOSE 5000
# run the command
CMD ["python", "./app.py"]
My python file:
import sys
print(sys.argv)
I tried:
docker run myimage foo
I got an error:
flask-app git:(master) ✗ docker run myimage foo
docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"foo\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
When you write foo at the end of your docker run command, you overwrite the whole command. Therefore, instead of
python app.py
you call
foo
The proper way of calling your script with arguments is:
docker run myimage python app.py foo
Alternatively, you may use ENTRYPOINT instead of CMD; then your docker run command may contain just foo after the image name.
Dockerfile:
FROM python:3
# set a directory for the app
WORKDIR /usr/src/app
# copy all the files to the container
COPY app.py .
# run the command
ENTRYPOINT ["python", "./app.py"]
calling it:
docker run myimage foo
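With the sys.argv example above, that run should print something like:
['foo']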
