How to allow Docker to read files with root permissions - Python

So I'm trying to run my FastAPI Python app in a Docker container. I chose python:3.9 as the base image and everything seemed to work until I decided to integrate my SSL certificate files into the container.
Dockerfile:
FROM python:3.9
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
RUN mkdir -p /app/SSL
VOLUME /etc/letsencrypt/live/soulforger.net/:/app/SSL/
COPY . .
CMD [ "uvicorn", "core:app", "--host", "0.0.0.0", "--port", "8000", "--ssl-keyfile", "/app/SSL/privkey.pem", "--ssl-certfile", "/app/SSL/cert.pem" ]
EXPOSE 8000
Docker run command:
sudo docker run -p 33665:8000 -v /etc/letsencrypt/live/soulforger.net/:/app/SSL --name soulforger_api -d 24aea28ce756
Now the problem is that the directory I'm mapping is only accessible by the root user. When I exec into the container, the files are there, but I can't cat /app/SSL/cert.pem. Since I can cat everything else without a problem, I assume it's some sort of permissions problem when mapping the directory into the container. Does anybody have an idea what could cause this issue?
Solution:
After a lot of digging I found out what the problem is. For anyone who happens upon this post and also uses Let's Encrypt: the files within /etc/letsencrypt/live/some.domain/ are only symlinks to files in another directory. If you want to mount your server's SSL certificates into your containers, you have to mount the entire /etc/letsencrypt/ directory so that the files referenced by those links are also available inside the container. All props go to this answer.
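For illustration, a minimal sketch of the adjusted run command, using the paths and image ID from the question (and assuming certbot's usual relative symlinks, which resolve as long as the whole tree is mounted):
# Mount the entire letsencrypt tree read-only, not just live/soulforger.net/
sudo docker run -p 33665:8000 -v /etc/letsencrypt/:/app/SSL/:ro --name soulforger_api -d 24aea28ce756
The --ssl-keyfile/--ssl-certfile arguments in the CMD would then point at /app/SSL/live/soulforger.net/privkey.pem and /app/SSL/live/soulforger.net/cert.pem. Also worth noting: the VOLUME instruction in a Dockerfile only declares a mount point and does not take a host:container mapping, so the actual binding always happens at docker run time.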

You can change the user in the Dockerfile. Try adding USER root to your Dockerfile.
Hopefully it will be helpful.
FROM python:3.9
USER root
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
RUN mkdir -p /app/SSL
VOLUME /etc/letsencrypt/live/soulforger.net/:/app/SSL/
COPY . .
CMD [ "uvicorn", "core:app", "--host", "0.0.0.0", "--port", "8000", "--ssl-keyfile", "/app/SSL/privkey.pem", "--ssl-certfile", "/app/SSL/cert.pem" ]
EXPOSE 8000
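As a quick sanity check (not part of the original answer), you can confirm which user the container's processes actually run as:
# Should print uid=0(root) if the container runs as root
sudo docker exec soulforger_api id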

Related

Kafka producer not working in Docker container

I've created a simple Kafka app that sends a message to a topic. It works perfectly when I run it in my local environment, but when I move it to a Docker container it cannot connect to the broker. I think the problem is in the container network settings, but I cannot figure it out.
App code:
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='xxxxxxxxxx.mdb.yandexcloud.net:9091',
    security_protocol="SASL_SSL",
    sasl_mechanism="SCRAM-SHA-512",
    sasl_plain_password='xxxxxxxxxx',
    sasl_plain_username='xxxxxxxxxx',
    ssl_cafile="YandexCA.crt",
    api_version=(0, 11, 5))
producer.send('test_topic', b'test message')
producer.flush()
producer.close()
Dockerfile:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.10
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "app.py"]
So it runs perfectly in the terminal but fails in Docker. What could cause this?
So the problem was in the password. It contained characters that needed escaping, like:
ENV PASS=xxxxx\6xxxxx
When set via environment variables it worked correctly, but when set in the Dockerfile the backslash was interpreted as an escape. So in the Dockerfile I quoted the value:
ENV PASS="xxxxx\6xxxxx"
And everything started working.
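As an aside (an untested alternative, not part of the original fix), the secret can also be kept out of the image entirely and supplied at run time, where the host shell's single quotes keep the backslash literal:
# my-kafka-app is a placeholder image name
docker run -e PASS='xxxxx\6xxxxx' my-kafka-app
Printing os.environ['PASS'] once inside the container is a quick way to confirm the value arrived unmangled.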

Trying to supply PGPASS to Docker Image

New to Docker here. I'm trying to create a basic Dockerfile where I run a Python script that runs some queries in Postgres through psycopg2. I currently have a pgpass file set up in my environment variables so that I can run these tools without supplying a password in the code. I'm trying to replicate this in Docker. I have Windows on my local machine.
FROM datascienceschool/rpython as base
RUN mkdir /test
WORKDIR /test
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY main_airflow.py /test
RUN cp C:\Users\user.name\AppData\Roaming\postgresql\pgpass.conf /test
ENV PGPASSFILE="test/pgpass.conf"
ENV PGUSER="user"
ENTRYPOINT ["python", "/test/main_airflow.py"]
This is what I've tried in my Dockerfile: I've tried to copy over my pgpass file and set it as my environment variable. Apologies if I have the forward/backslashes or syntax wrong. I'm very new to Docker, Linux, etc.
Any help or alternatives would be appreciated
It's better to pass your secrets into the container at runtime than it is to include the secret in the image at build-time. This means that the Dockerfile doesn't need to know anything about this value.
For example
$ export PGPASSWORD=<postgres password literal>
$ docker run -e PGPASSWORD <image ref>
Now in that example, I've used PGPASSWORD, which is an alternative to PGPASSFILE. It's a little more complicated to do the same if you're using a file, but that would be something like this:
The plan will be to mount the credentials as a volume at runtime.
FROM datascienceschool/rpython as base
RUN mkdir /test
WORKDIR /test
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY main_airflow.py /test
ENV PGPASSFILE="/credentials/pgpass.conf"
ENV PGUSER="user"
ENTRYPOINT ["python", "/test/main_airflow.py"]
As I said above, we don't want to include the secrets in the image. We are going to indicate where the file will be in the image, but we don't actually include it yet.
Now, when we start the image, we'll mount a volume containing the file at the location specified in the image, /credentials
$ docker run --mount src="<host path to directory>",target="/credentials",type=bind <image ref>
I haven't tested this, so you may need to adjust the exact paths and such, but this is the idea of how to pass sensitive values into a Docker container.
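As a concrete (still untested) sketch using the Windows path from the question, the directory holding pgpass.conf would be bind-mounted read-only over /credentials:
docker run --mount type=bind,src="C:\Users\user.name\AppData\Roaming\postgresql",target=/credentials,readonly <image ref>
One caveat worth knowing: on Linux, libpq ignores a pgpass file whose permissions are looser than 0600 and prints a warning, so the mounted file's permissions may need adjusting inside the container.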

Passing AWS credentials to Python container

I am building a Python container for the first time using VS Code and WSL2. Here is my sample Python code. It runs fine in VS Code interactive mode because it picks up my default AWS credentials.
import boto3

s3BucketName = 'test-log-files'
s3 = boto3.resource('s3')

def s3move():
    try:
        s3.Object(s3BucketName, "destination/Changes.xlsx").copy_from(
            CopySource=(s3BucketName + "/source/Changes.xlsx"))
        s3.Object(s3BucketName, "source/Changes.xlsx").delete()
        print("Transfer Complete")
    except:
        print("Transfer failed")

if __name__ == "__main__":
    s3move()
The Dockerfile built by VS Code:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
RUN pip install boto3
WORKDIR /app
COPY . /app
# Switching to a non-root user, please refer to https://aka.ms/vscode-docker-python-user-rights
RUN useradd appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "S3MoveFiles/S3MoveFiles.py"]
I would like to test this using a Docker container, and it seems I have to pass the AWS credentials to the container. While there are other and probably more secure ways, I wanted to test the method of mounting the credentials as a volume via an argument to the docker run command.
docker run -v ~/.aws/credentials:/appuser/home/.aws/credentials:ro image_id
I get the "Transfer failed" message in the terminal window in VS Code. What am I doing wrong here? I checked several articles but couldn't find any hints. I am not logged in as root.
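Two things seem worth checking here (both educated guesses rather than a confirmed fix). First, boto3 looks for the shared credentials file under the home directory of the user the process runs as; appuser created with useradd gets /home/appuser as its home, so the mount target /appuser/home/.aws/credentials is most likely the wrong path. A sketch of the corrected mount:
# Mount the credentials where boto3 is likely to look for them
docker run -v ~/.aws/credentials:/home/appuser/.aws/credentials:ro image_id
Second, the bare except: in s3move() swallows the real error; changing it to except Exception as e: print("Transfer failed:", e) would show whether the failure is actually a credentials problem.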

Docker ENV for Python variables

Being new to Python and Docker, I created a small Flask app (test.py) which has two hardcoded values:
username = "test"
password = "12345"
I'm able to create a Docker image and run a container from the following Dockerfile:
FROM python:3.6
RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "/code/test.py"]
How can I create ENV variables for username and password and pass dynamic values when running containers?
Within your python code you can read env variables like:
import os
username = os.environ['MY_USER']
password = os.environ['MY_PASS']
print("Running with user: %s" % username)
Then when you run your container you can set these variables:
docker run -e MY_USER=test -e MY_PASS=12345 ... <image-name> ...
This will set the env variables within the container, and they will later be read by the Python script (test.py).
More info on os.environ and docker env
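Putting the two pieces together (python-image is just a placeholder for whatever tag you built):
docker run -e MY_USER=test -e MY_PASS=12345 python-image
# the script then prints: Running with user: test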
In your Python code you can do something like this:
import os

# USERNAME = os.getenv('NAME_OF_ENV_VARIABLE', 'default_value_if_no_env_var_is_set')
USERNAME = os.getenv('USERNAME', 'test')
Then you can create a docker-compose.yml file to run your dockerfile with:
version: '2'
services:
  python-container:
    image: python-image:latest
    environment:
      - USERNAME=test
      - PASSWORD=12345
You will run the compose file with:
$ docker-compose up
All you need to remember is to build the Dockerfile that you mentioned in your question with:
$ docker build -t python-image .
Let me know if that helps. I hope that answers your question.
Alternatively, you can bake default values into the image with ENV in the Dockerfile (they can still be overridden at run time with docker run -e):
FROM python:3
MAINTAINER <abc@test.com>
ENV username=test password=12345
RUN mkdir /dir/name
WORKDIR /dir/name
COPY . .
RUN pip3 install -r requirements.txt
ENTRYPOINT ["/usr/local/bin/python", "./test.py"]
I split my docker-compose into docker-compose.yml (base), docker-compose.dev.yml, etc., then I had this issue.
I solved it by specifying the .env file explicitly in the base:
web:
  env_file:
    - .env
Not sure why, according to the docs it should just work if there's an .env file.
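For reference, a sketch of how a split setup like that is usually invoked (file names assumed to match the ones mentioned above); the env_file entry in the base file then applies to the merged configuration:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up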

Docker map an external config file

I want to supply an external config file via a volume and pass it to the app, like:
docker run MyImage -v /home/path/my_config.conf:folder2/ (is that right, btw?)
But I have no idea how to link this volume to the argument for main.py...
My Dockerfile:
FROM python:3.6-jessie
MAINTAINER Vladislav Ladenkov
WORKDIR folder1/folder2
COPY folder2/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY folder2/*.py ./
?? how to link volume ??
ENTRYPOINT ["python3", "main.py", "#??volume??"]
You want to use a folder name to map the volume (note that with docker run, the options have to come before the image name):
docker run -v /home/path/:/folder1/folder2/ MyImage
So now the /home/path folder on the host machine is mounted to /folder1/folder2 inside the container.
Then just pass the path of the conf file as seen within the container to the cmd.
ENTRYPOINT ["python3", "main.py", "/folder1/folder2/myconf.conf"]
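To close the loop, a minimal sketch (an assumption about main.py, which isn't shown in the question) of how the passed path could be read; myconf.conf is just the example name used above:
import sys

# The config path is the argument appended by the ENTRYPOINT
config_path = sys.argv[1] if len(sys.argv) > 1 else "/folder1/folder2/myconf.conf"
with open(config_path) as f:
    config_text = f.read()
print("Loaded config from", config_path)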
