mount host directory as write-only in the docker container - python

I am using the Python image for my application, with a Dockerfile and docker-compose.
I want to mount the Python path inside the container, /usr/local/bin, to a directory on the host. For this, my docker-compose file looks like
version: '3.7'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./.virtualenv:/usr/local/bin
And the .virtualenv directory is empty on the host.
After running docker-compose and executing the python command
python myfile.py
It gives the error
python: command not found
This is probably because Docker is syncing the empty .virtualenv directory into the container as the source.
How can I make the .virtualenv directory effectively write-only, so that the contents of /usr/local/bin map out to this directory and nothing is copied from this directory into the container?

I don't think I am right, but try python3.
If that does not work, I really don't know.

Bind mounts always take a directory from the host and inject it into the container, hiding what was originally in the image; they do not work the other way, and you can't use them to expose an image's contents to the host system.
The approach you describe has the further problem that Python virtual environments aren't really that portable. They are tied to a specific filesystem location and a specific Python interpreter. If the virtual environment is in /home/me/.virtualenv in one context and /usr/local/bin in another, it won't work, and if the host and container Python aren't the exact same version and build, it won't work either.
In Docker, generally an image should be self-contained: it includes the language interpreter, any libraries you might happen to need, and the application code. You don't need to inject these parts into a container with a bind mount, and the copy in the image will be separate from the development environment you have on your host. Further, since a Docker image is already isolated from the host, you don't need a virtual environment in a Docker context. A very typical Python-based Dockerfile would look like
FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ./myfile.py
and you can docker run this image, or include it in a docker-compose.yml file, without any volumes: at all.
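As a sketch, assuming the Dockerfile above, a matching docker-compose.yml could be as small as the following (the port mapping is hypothetical; use whatever port your application actually listens on):
version: '3.7'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    # no volumes: needed; the image already contains the interpreter and code
    ports:
      - "8000:8000"   # hypothetical port mapping for illustration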

Related

Docker: Bind mount not reflecting unless container is restarted

TLDR: Flask application. When I make changes to the app.py file inside the source folder of the bind mount, the change is reflected in the target folder. But when I hit the API from Postman, the change is not seen unless the container is restarted.
Long Version:
I am just starting with docker and was trying bind mounts. I started my container with the following command:
docker run -p 9000:5000 --mount type=bind,source="$(pwd)"/src/,target=/src/ voting-app:latest
My Dockerfile is as below:
FROM python:3.8.10-alpine
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY ./src ./src
WORKDIR ./src
ENTRYPOINT ["python"]
CMD ["app.py"]
This post mentioned that if the inode of the file changes, Docker cannot handle it, especially if it is a single-file bind mount. Mine is not a single-file mount, and the inode was not changing either.
When I run docker exec <container_id> cat app.py, I can see that the changes are carried over to the container file. It is just the API that is not reflecting the change.
Docker version 20.10.17, build 100c701.
Can someone please tell me what I am missing here? Also, please feel free to ask for more information.
The full code is available here.
When the Python file is running, it is not automatically restarted when the file changes. So you need a process that watches the file and restarts the application when it changes.
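Since this is a Flask application, one hedged option is to rely on Flask's built-in development reloader. A minimal sketch, assuming app.py creates a Flask object named app (the route and names are illustrative):
# app.py (sketch)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # debug=True enables the auto-reloader, so editing the bind-mounted
    # app.py restarts the server inside the container without restarting
    # the container; host 0.0.0.0 keeps the -p 9000:5000 mapping working
    app.run(host="0.0.0.0", port=5000, debug=True)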

Dockerizing an API built with python on local machine

I have cloned a repository of an API built with Python to my local machine, and my goal is to be able to send requests and receive responses locally.
I'm not familiar with Python, but the code is very readable and easy to understand. However, the repository contains some dependencies and configuration files to Dockerise it (and I'm not familiar with Docker and containers either).
So what are the steps to follow in order to be able to interact with the API locally?
Here are some files in the repository for config and requirements:
requirements.txt file:
fastapi==0.70.0
pytest==7.0.1
requests==2.27.1
uvicorn==0.15.0
Dockerfile:
FROM tiangolo/uvicorn-gunicorn:python3.9
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./app /app
I already installed Python 3 and Docker, so what's next?
Adjust Dockerfile
Assuming all code is in the /app directory, you have already copied over all your code and installed all the dependencies required for the application.
But you are missing - at least (see disclaimer) - one essential line in the Dockerfile, arguably the most important one: the CMD instruction, which tells Docker which command/process should be executed when the container starts.
I am not familiar with the particular base image you are using (defined by the FROM instruction), but after googling I found this repo, which suggests the following lines; they make a lot of sense to me as they start a web server:
# open port 80 on the container to make it accessible from the outside
EXPOSE 80
# line as described in repo to start the web server
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
This should start the web server on port 80 using the application stored in a variable app in your main.py when the container starts.
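For reference, a minimal sketch of what an app/main.py matching that app.main:app path might look like (the endpoint is purely illustrative; the cloned repository's real module may differ):
# app/main.py (sketch)
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    # placeholder route; the cloned API defines its own endpoints
    return {"status": "ok"}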
Build and run container
When you have added that, you need to build your image using the docker build command.
docker build -t asmoun/my-container .
This builds a container image asmoun/my-container using the Dockerfile in the current directory, hence the .. Make sure you execute it from the directory that contains the Dockerfile. This will take some time, as the base image has to be downloaded and the dependencies need to be installed.
You now have an image that you can run using the docker run command:
docker run --name my-fastapi-container -d -p 80:80 asmoun/my-container
This will start a container called my-fastapi-container using the image asmoun/my-container in detached mode (the -d option makes sure your TTY is not attached to the container) and defines a port mapping, which maps port 80 on the host to port 80 in the container, which we previously exposed in the Dockerfile (EXPOSE 80).
You should now see some ID getting printed to your console. This means the container has started. You can check its state using docker ps -a, and you should see it marked as running. If it is, you should be able to connect to localhost:80 now. If it is not, use docker logs my-fastapi-container to view the container's logs, and you'll hopefully learn more.
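As a quick smoke test (assuming the API serves something at the root path, which may not be the case for your particular repository):
curl http://localhost:80/
# any HTTP response here means the container is up and serving requests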
Disclaimer
Please be aware that this is only a minimal guide on how to get a simple FastAPI container up and running. Some parameters could well be different depending on the application (e.g. main.py could be named server.py or similar), in which case you will need to adjust them, but the overall process (1. adjust Dockerfile, 2. build container, 3. run container) should work. It's also possible that your application expects other things to be present in the container, which would need to be defined in the Dockerfile, but neither I nor (presumably) you know this, as the Dockerfile provided seems to be incomplete. This is just a best-effort answer.
I have tried to link all relevant resources and commands so you can have a look at what some of them do and which options/parameters might be of interest to you.

How to test my Dockerfile for my python project using GitHub actions?

Sometimes we make errors in writing the docker file. If there is an error in the Dockerfile, the docker build will fail.
Sometimes we may forget to specify dependencies in our Dockerfile. Let's take an example.
Suppose I have a Python script that can take a screenshot of any web page (whose URL is supplied).
Now, in my code I am using pyppeteer (a headless Chrome/Chromium automation library, an unofficial port of Puppeteer).
pyppeteer uses Chromium and the Chrome driver. These are already installed on my machine, so running pytest passes in my local dev environment.
Suppose I forget to specify the RUN commands in the Dockerfile that install Chromium and the Chrome driver. Then running tests inside the container will fail (although the docker build will succeed).
I want to automate the task of building docker images and running tests in the container.
On the local machine, I can run docker build -t myproj . to build,
and for testing, I can run docker run -it myproj pytest (if I forget to add the RUN that installs Chromium and the Chrome driver, then pytest will fail inside the container).
I hope I am able to explain my purpose.
Normally, in GitHub Actions, the Python source code can be run on Ubuntu, macOS, Windows, etc.
Besides the different OSes, I also want to build and test my Dockerfile.
After a little more experimenting and research I found out that the solution to my problem is simple.
Inside the .github/workflows folder create a docker-build-test.yml file.
name: Docker build and test
on:
  workflow_dispatch
  # you can trigger on anything you want
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build Docker image
        run: docker build -t samplepy .
      - name: Run tests inside the container
        run: docker run samplepy poetry run pytest
It's very simple, because GitHub's ubuntu-latest VMs already have Docker installed and configured. You just run the docker commands and everything just works.
I tested the above workflow with a dummy Python project as well.

Is there a way to modify files inside docker via PyCharm?

I want to modify files inside a Docker container with PyCharm. Is there a way to do such a thing?
What you want is called a bind mount, and it can be obtained by adding the -v parameter to your run command; here's an example with an nginx image:
docker run --name=nginx -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 nginx
The specific parameter that produces this result is -v.
-v ~/nginxlogs:/var/log/nginx sets up a bindmount volume that links the /var/log/nginx directory from inside the Nginx container to the ~/nginxlogs directory on the host machine.
Docker uses a : to split the host’s path from the container path, and the host path always comes first.
In other words the files that you edit on your local filesystem will be synced to the Docker folder immediately.
Source
Yes. There are multiple ways to do this, and you will need to have PyCharm installed inside the container.
The following set of instructions should work:
docker ps - This will show you details of running containers
docker exec -it *<name of container>* /bin/bash
At this point you will have a shell inside the container. If PyCharm is not installed, you will need to install it. The following should work:
sudo apt-get install pycharm-community
Good to go!
Note: The installation is not persistent across Docker image builds. You should add the PyCharm installation step to the Dockerfile if you need to access it regularly.

Docker development workflow

What's the proper development workflow for code that runs in a Docker container?
Solomon Hykes said that the "official" workflow involves building and running a new Docker image for each Git commit. That makes sense, but what if I want to test a change before committing it to the Git repo?
I can think of two ways to do it:
Run the code on a local development server (e.g., the Django development server). Edit a file; test on the dev server; make a Git commit; rebuild the Docker image with the new code; test again on the local Docker container.
Don't run a local dev server. Instead, build and run a new Docker image each time I edit a file, and then test the change on local Docker container.
Both approaches are pretty inefficient. Is there a better way?
A more efficient way is to run a new container from the latest image that was built (which then has the latest code).
You could start that container with a bash shell so that you will be able to edit files from inside the container:
docker run -it <some image> bash -l
You would then run the application in that container to test the new code.
Another way to alter files in that container is to start it with a volume. The idea is to alter files in a directory on the Docker host instead of messing with files from the command line inside the container itself:
docker run -it -v /home/joe/tmp:/data <some image>
Any file that you put in /home/joe/tmp on your Docker host will be available under /data/ in the container. Change /data to whatever path is suitable for your case and hack away.
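Combining the two ideas for the Django example mentioned in the question, a rough sketch of an edit-test loop might look like this (the paths, the image tag, and the presence of requirements.txt and manage.py are assumptions about your project):
docker run -it --rm \
  -v "$(pwd)":/app \
  -w /app \
  -p 8000:8000 \
  python:3.9 \
  bash -c "pip install -r requirements.txt && python manage.py runserver 0.0.0.0:8000"
Edits on the host show up immediately under /app in the container, and the Django development server reloads on change, so you only rebuild the image when you are ready to commit.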
