Trying to bind volume to a docker container - python

I'm trying to mount a volume to a Docker container, but I'm running into problems. I have a simple Python script in a Docker container that creates a file "links.json", and I would like to access this file from the host filesystem.
This is my Dockerfile:
FROM python:3.6-slim
COPY . /srv/app
WORKDIR /srv/app
RUN pip install -r requirements.txt --src /usr/local/src
CMD [ "python", "main.py" ]
I created the volume with:
docker volume create my-data
and I'm running the container with:
docker run --mount source=my-data,target=/srv/app 3743b8d3b043
I'm on macOS. When I run docker volume inspect my-data, I get this result:
[
    {
        "CreatedAt": "2019-08-15T08:30:48Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-data/_data",
        "Name": "my-data",
        "Options": {},
        "Scope": "local"
    }
]
But directories like /var/lib/docker/volumes are empty, and the file my code creates is nowhere to be found. Any idea where the problem is?
Thanks a lot!

You are overwriting all the data in /srv/app that you added during the build process. Change your mount to use a target other than /srv/app.
Update
Start your container with:
docker run -v /full/path/folder:/srv/app/data IMAGE
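For example, a minimal sketch (assuming the script is changed to write links.json into /srv/app/data; the host path is illustrative):
# Bind-mount a host folder onto a subdirectory so the code baked
# into /srv/app is not hidden by the mount:
mkdir -p data
docker build -t myapp .
docker run -v "$(pwd)/data":/srv/app/data myapp
# After the container writes /srv/app/data/links.json, it appears here:
cat ./data/links.json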

Related

Defining IP for VScode debugging Docker with Python HTTPserver

I have made a simple Python file called index.py with this relevant piece of code:
from http.server import HTTPServer  # import shown for completeness

def main():
    server = HTTPServer(('127.0.0.1', PORT), HomeHandler)
    print("Server online, listening for requests.\n")
    server.serve_forever()

if __name__ == "__main__":
    main()
I then installed the Docker extension and chose to add Docker files. I now have these files:
Dockerfile
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim
EXPOSE 8000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 8000 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "index.py"]
And docker-compose.debug.yml:
version: '3.4'

services:
  serverlessproject:
    image: serverlessproject
    build:
      context: .
      dockerfile: ./Dockerfile
    command: ["sh", "-c", "pip install debugpy -t /tmp && python /tmp/debugpy --wait-for-client --listen localhost:8000 index.py"]
    ports:
      - 8000:8000
As well as two files in the .vscode folder:
launch.json:
{
    "configurations": [
        {
            "name": "Docker: Python - General",
            "type": "docker",
            "request": "launch",
            "preLaunchTask": "docker-run: debug",
            "python": {
                "pathMappings": [
                    {
                        "localRoot": "${workspaceFolder}",
                        "remoteRoot": "/app"
                    }
                ],
                "projectType": "general"
            }
        }
    ]
}
And tasks.json:
{
    "version": "2.0.0",
    "tasks": [
        {
            "type": "docker-build",
            "label": "docker-build",
            "platform": "python",
            "dockerBuild": {
                "tag": "serverlessproject:latest",
                "dockerfile": "${workspaceFolder}/Dockerfile",
                "context": "${workspaceFolder}",
                "pull": true
            }
        },
        {
            "type": "docker-run",
            "label": "docker-run: debug",
            "dependsOn": [
                "docker-build"
            ],
            "python": {
                "file": "index.py"
            }
        }
    ]
}
When debugging index.py it will print and start serving as expected. However, the Docker container is not reachable at 127.0.0.1:8000 - and inspecting the terminal shows the last command run looks like:
docker exec -d projectname-dev python3 /debugpy/launcher 172.17.0.1:43423 -- index.py
I can't find where this IP and port are actually defined - it's definitely not the port I exposed in the Dockerfile or the port ranges defined in docker-compose.yml or docker-compose.debug.yml. I'm certain this is just a simple line of configuration I can add to one of the above files, but I'm having trouble finding documentation for this specific case - most docs assume I'm using Django, for example.

How to allow docker to read files with root permissions

So I'm trying to run my FastAPI Python app in a Docker container. I chose python:3.9 as a base image and everything seemed to work until I decided to integrate my SSL cert files into the container.
Dockerfile:
FROM python:3.9
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
RUN mkdir -p /app/SSL
VOLUME /etc/letsencrypt/live/soulforger.net/:/app/SSL/
COPY . .
CMD [ "uvicorn", "core:app", "--host", "0.0.0.0", "--port", "8000", "--ssl-keyfile", "/app/SSL/privkey.pem", "--ssl-certfile", "/app/SSL/cert.pem" ]
EXPOSE 8000
Docker run command:
sudo docker run -p 33665:8000 -v /etc/letsencrypt/live/soulforger.net/:/app/SSL --name soulforger_api -d 24aea28ce756
Now the problem is that the directory I'm mapping is only accessible as the root user. When I exec into the container, the files are there but I can't cat /app/SSL/cert.pem. Since I can cat everything else without a problem, I assume it's some sort of permissions problem when mapping the dir into the container. Does anybody have an idea of what can cause this issue?
Solution:
After a lot of digging I found out what the problem is. For anyone who happens upon this post and also uses Let's Encrypt: the files within /etc/letsencrypt/live/some.domain/ are only links to files in another directory. If you want to mount the SSL certificates of your server into your containers, you have to mount the entire /etc/letsencrypt/ dir in order to have access to the files referenced by the links. All props go to this answer.
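For example, a minimal sketch of the adjusted run command (assuming the CMD's --ssl-keyfile and --ssl-certfile paths are also updated to point at /etc/letsencrypt/live/soulforger.net/ inside the container):
# Mount the whole letsencrypt tree (read-only) so the symlinks in
# live/ can resolve to the real files under archive/:
sudo docker run -p 33665:8000 -v /etc/letsencrypt/:/etc/letsencrypt/:ro --name soulforger_api -d 24aea28ce756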
You can also change the user in the Dockerfile. Try adding USER root to your Dockerfile.
Hopefully it will be helpful.
FROM python:3.9
USER root
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
RUN mkdir -p /app/SSL
VOLUME /etc/letsencrypt/live/soulforger.net/:/app/SSL/
COPY . .
CMD [ "uvicorn", "core:app", "--host", "0.0.0.0", "--port", "8000", "--ssl-keyfile", "/app/SSL/privkey.pem", "--ssl-certfile", "/app/SSL/cert.pem" ]
EXPOSE 8000

How can I get the error message of Python/vscode debugging inside a Docker container

I'm using this workflow to debug a generic Python application inside a Docker container with VSCode.
If I introduce an error into my main.py, such as importing a package that doesn't exist, and press F5 to debug in VSCode, it fails silently after building the Docker image and doesn't provide any useful error message, e.g.:
Executing task: docker-run: debug <
docker run -dt -P --name "projectname-dev" --label "com.microsoft.created-by=visual-studio-code" -v "c:\Users\tim\.vscode\extensions\ms-python.python-2021.10.1317843341\pythonFiles\lib\python\debugpy:/debugpy:ro" --entrypoint "python3" "projectname:latest" <
add88efdff111ae904a38f3d52a52d8191a95f1d53c931a032326ed2958218b3
Terminal will be reused by tasks, press any key to close it.
If I remove the error, I have working code, and by running it manually
docker run projectname
I can see the code works.
However, it still fails to debug in VSCode, failing silently: a breakpoint set on the first line is never reached, and there is no message produced at all from VSCode.
How can I see the error message in this case?
docker-compose.debug.yml:
version: '3.4'

services:
  projectname:
    image: projectname
    build:
      context: .
      dockerfile: ./Dockerfile
    command: ["sh", "-c", "pip install debugpy -t /tmp && python /tmp/debugpy --wait-for-client --listen 0.0.0.0:5678 main.py"]
    ports:
      - 5678:5678
main.py (Minimum viable):
import time
import requests
import json
#import package_that_doesnt_exist
time_look_back = 60*15 # last 15min
time_to_fetch=int(time.time()-time_look_back)
print(time_to_fetch)
Dockerfile:
FROM python:3.9.7-alpine3.14
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
RUN apk add postgresql-dev
RUN apk add musl-dev
RUN apk add gcc
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "main.py"]
I tried this approach and hope it's suitable for you; if it's not, I will be happy to fix it.
launch.json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach (Remote Debug)",
            "type": "python",
            "request": "attach",
            "port": 5678,
            "host": "localhost",
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}",
                    "remoteRoot": "/app"
                }
            ],
            "justMyCode": false,
            "subProcess": true,
            "redirectOutput": true
        }
    ]
}
docker-compose.yml
(I usually use ptvsd for debugging inside containers)
version: '3.4'

services:
  projectname:
    environment:
      - DEBUG=1
    image: projectname
    build:
      context: .
      dockerfile: ./Dockerfile
    command: ["sh", "-c", "pip install ptvsd && python main.py"]
    ports:
      - 5678:5678
In the Dockerfile, I exposed port 5678 before the CMD (only the relevant part is shown):
USER appuser
EXPOSE 5678
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "main.py"]
main.py
import time
import requests
import json
import os

if bool(int(os.environ.get('DEBUG', 0))):
    import ptvsd
    ptvsd.enable_attach(address=('0.0.0.0', 5678))
    print("--- PTVS ATTACHED, press F5 on VSCode to start Debugging")
    ptvsd.wait_for_attach()

#import package_that_doesnt_exist
time_look_back = 60*15  # last 15min
time_to_fetch = int(time.time() - time_look_back)
print(time_to_fetch)
Now, launch the compose up in three scenarios:
- with the error package uncommented and env DEBUG=0;
- with the error package commented and env DEBUG=0;
- with the error package uncommented and env DEBUG=1, then use F5 to start debugging.

How to pass AWS credential when building Docker image in Jenkins?

Hi, I am working in Jenkins to build my AWS CDK project. I have created my Dockerfile as below.
FROM python:3.7.4-alpine3.10
ENV CDK_VERSION='1.14.0'
RUN mkdir /cdk
COPY ./requirements.txt /cdk/
COPY ./entrypoint.sh /usr/local/bin/
COPY ./aws /cdk/
WORKDIR /cdk
RUN apk -uv add --no-cache groff jq less
RUN apk add --update nodejs npm
RUN apk add --update bash && rm -rf /var/cache/apk/*
RUN npm install -g aws-cdk
RUN pip3 install -r requirements.txt
RUN ls -la
ENTRYPOINT ["entrypoint.sh"]
RUN cdk synth
RUN cdk deploy
In Jenkins I am building this Docker image as below:
stages {
    stage('Dev Code Deploy') {
        when {
            expression {
                return BRANCH_NAME == 'Develop'
            }
        }
        agent {
            dockerfile {
                additionalBuildArgs "--build-arg 'http_proxy=${env.http_proxy}' --build-arg 'https_proxy=${env.https_proxy}'"
                filename 'Dockerfile'
                args '-u root:root'
            }
        }
In the above code I am not supplying AWS credentials, so when cdk synth is executed I get the error: Need to perform AWS calls for account 1234567 but no credentials found. Tried: default credentials.
In Jenkins I have my AWS credentials and I can access them like this:
steps {
    withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "${env.PROJECT_ID}-aws-${env.ENVIRONMENT}"]]) {
        sh 'ls -la'
        sh "bash ./scripts/build.sh"
    }
}
But how can I pass this credentialsId when building the Docker image? Can someone help me figure it out? Any help would be appreciated. Thanks.
I am able to pass the credentials as below:
steps {
    script {
        node {
            checkout scm
            withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "${env.PROJECT_ID}-aws-${CFN_ENVIRONMENT}"]]) {
                abc = docker.build('cdkimage', "--build-arg http_proxy=${env.http_proxy} --build-arg https_proxy=${env.https_proxy} .")
                abc.inside {
                    sh 'ls -la'
                    sh "bash ./scripts/build.sh"
                }
            }
        }
    }
}
I have added the below code in build.sh:
cdk synth
cdk deploy
You should install the "Amazon ECR" plugin and restart Jenkins.
Configure the plugin with your credential, and specify it in the pipeline.
You can find all the documentation here: https://wiki.jenkins.io/display/JENKINS/Amazon+ECR
If you're using a Jenkins pipeline, you can try the withAWS step.
It provides a way to access a Jenkins AWS credential and expose it to the environment while running the Docker container; see the sketch after the refs below.
ref:
https://github.com/jenkinsci/pipeline-aws-plugin
https://jenkins.io/doc/book/pipeline/docker/
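For illustration, a minimal sketch of the withAWS step from the pipeline-aws-plugin (the region is a placeholder, and the credential ID follows the pattern from the question):

// Hypothetical Jenkinsfile fragment: withAWS exports the bound AWS
// credentials as environment variables for the shell steps inside it.
withAWS(credentials: "${env.PROJECT_ID}-aws-${env.ENVIRONMENT}", region: 'us-east-1') {
    sh 'cdk synth'
    sh 'cdk deploy'
}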

Using Docker and Python, how to access csv file created in volume?

Edit
Added suggestions from daudnadeem.
Created a folder called temp_folder in the directory with my Dockerfile.
Updated the last line of the Python file to be df.to_csv('/temp_folder/temp.csv').
Ran the docker build and then the new run command docker run -v temp_folder:/temp_folder alexd/myapp.
I have a very simple Python example using Docker. The code runs fine, but I can't figure out how to access the CSV file created by the Python code. I have created a volume in Docker and used docker inspect to try to access the CSV file, but I'm unsure of the syntax and can't find an example online that makes sense to me.
Python Code
import pandas as pd
import numpy as np
import os
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=['A', 'B', 'C', 'D'])
df.to_csv('temp.csv')
Dockerfile
FROM python:3.7.1
RUN mkdir -p /var/docker-example
WORKDIR /var/docker-example
COPY ./ /var/docker-example
RUN pip install -r requirements.txt
ENTRYPOINT python /var/docker-example/main.py
Docker commands
$ docker build -t alexf/myapp -f ./Dockerfile .
$ docker volume create temp-vol
$ docker run -v temp-vol alexf/myapp .
$ docker inspect -f temp.csv temp-vol
temp.csv
Your "temp.csv" lives on the ephemeral docker image. So in order for you to access it outside of the docker image, the best thing for you to do is expose a volume.
In the directory where you have your Dockerfile, make a folder called this_folder.
Then when you run your image, mount that folder to a folder within your container: docker run -v "$(pwd)/this_folder":/this_folder <image-name> (the host path must be absolute; a bare relative name would be treated as a named volume instead).
Then change this code to:
import pandas as pd
import numpy as np
import os
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=['A', 'B', 'C', 'D'])
df.to_csv('/this_folder/temp.csv')
"this_folder" is now a "volume" which is mutually accessible by your docker container and the host machine. So outside of your docker container, if you ls /this_folder you should see temp.csv lives there now.
If you don't want to mount a volume, you could upload the file somewhere, and download it later. But in a local env, just mount the folder and use it to share files between your container and your local machine.
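For example, a minimal end-to-end sketch (the image name pandas_example is illustrative, matching the one used below):
# Create the shared folder, run the container with the bind mount,
# then check the host folder once the script has written the file:
mkdir -p this_folder
docker run -v "$(pwd)/this_folder":/this_folder pandas_example
ls ./this_folder    # -> temp.csv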
Edit
When stuff is not going as planned with Docker, you may want to access the container interactively, i.e. 'ssh'-ing into it.
You do that with: docker run -it pandas_example /bin/bash
When I logged in, I saw the file temp.csv being made in the same folder as main.py.
Solving the issue further is now up to you: you need to move temp.csv into the directory which is shared with your local machine.
FROM python:3.7.1
RUN pip3 install pandas
COPY test.py /my_working_folder/
CMD python3 /my_working_folder/test.py
For a quick fix add
import subprocess
subprocess.call("mv temp.csv /temp_folder/", shell=True)
to the end of main.py. But this is not recommended.
Let's make things simple, if your only goal is to understand how volumes work and where to find, on the host, the file created by the Python code inside the container.
Dockerfile:
FROM python:3.7.1
RUN mkdir -p /var/docker-example
WORKDIR /var/docker-example
COPY . /var/docker-example
ENTRYPOINT python /var/docker-example/main.py
main.py - will create /tmp/temp.txt inside the container, containing hi:
with open('/tmp/temp.txt', 'w') as f:
    f.write('hi')
Docker commands (run inside the project folder):
Build image:
docker build -t alexf/myapp .
Use the named volume vol, which is mapped to the /tmp folder inside the container:
Run the container: docker run -d -v vol:/tmp alexf/myapp
Inspect the volume: docker inspect vol
[
    {
        "CreatedAt": "2019-11-05T22:07:02+02:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/vol/_data",
        "Name": "vol",
        "Options": null,
        "Scope": "local"
    }
]
Bash commands run on the Docker host:
sudo ls /var/lib/docker/volumes/vol/_data
temp.txt
sudo cat /var/lib/docker/volumes/vol/_data/temp.txt
hi
You can also use bind mounts and anonymous volumes to achieve the same result, as sketched below.
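For instance, a minimal sketch of the bind-mount alternative (the host folder name is illustrative):
# Map a host folder directly over /tmp instead of a named volume;
# run in the foreground so the file exists once the command returns:
mkdir -p out
docker run --rm -v "$(pwd)/out":/tmp alexf/myapp
cat ./out/temp.txt    # -> hi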
