TLDR: Flask application. When I make changes to the app.py file inside the source folder of the bind mount, the change is reflected in the target folder. But when I hit the API from Postman, the change is not seen unless the container is restarted.
Long Version:
I am just starting with docker and was trying bind mounts. I started my container with the following command:
docker run -p 9000:5000 --mount type=bind,source="$(pwd)"/src/,target=/src/ voting-app:latest
My Dockerfile is as below:
FROM python:3.8.10-alpine
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY ./src ./src
WORKDIR ./src
ENTRYPOINT ["python"]
CMD ["app.py"]
This post mentioned that if the inode of the file changes, Docker cannot handle it, especially if it is a single-file bind mount. Mine is not, and the inode was not changing either.
When I run docker exec <container_id> cat app.py, I can see that the changes are carried over to the container file. It is just the API that is not reflecting the change.
Docker version 20.10.17, build 100c701.
Can someone please tell me what I am missing here? Also, please feel free to ask for more information.
The full code is available here.
When the Python file is running, it is not automatically restarted when the file changes. So you need a process that watches the file and restarts the application when it changes.
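For Flask specifically, the development server's debug mode (app.run(debug=True), or the flask run --debug flag in recent Flask versions) enables a reloader that does exactly this. As a minimal sketch of the underlying idea, assuming a simple polling approach and a hypothetical restart hook, a watcher only needs the file's modification time:

```python
import os

def mtime(path):
    """Return the last-modification time of a file, in seconds."""
    return os.stat(path).st_mtime

def has_changed(path, last_seen):
    """True if the file was modified after the last_seen timestamp."""
    return mtime(path) > last_seen

# A reloader would loop roughly like this, restarting the server process
# instead of printing (restart_server is a hypothetical hook):
#
#   last = mtime("app.py")
#   while True:
#       time.sleep(1)
#       if has_changed("app.py", last):
#           last = mtime("app.py")
#           restart_server()
```

In practice, prefer Flask's built-in reloader over rolling your own; tools like watchdog take the same approach with filesystem events instead of polling.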
I have cloned a repository of an API built with python on my local machine and my goal is to be able to send requests and receive responses locally.
I'm not familiar with Python, but the code is very readable and easy to understand. However, the repository contains some dependencies and configuration files to Dockerise it (and I'm not familiar with Docker and containers either).
So what are the steps to follow in order to interact with the API locally?
Here are some files in the repository for config and requirements:
requirements.txt file:
fastapi==0.70.0
pytest==7.0.1
requests==2.27.1
uvicorn==0.15.0
Dockerfile:
FROM tiangolo/uvicorn-gunicorn:python3.9
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./app /app
I already installed Python 3 and Docker, so what's next?
Adjust Dockerfile
Assuming all code is in the /app directory, you have already copied over all your code and installed all the dependencies the application requires.
But you are missing, at least (see disclaimer), one essential line in the Dockerfile, which is actually the most important one: the CMD instruction that tells Docker which command/process should be executed when the container starts.
I am not familiar with the particular base image you are using (defined via the FROM instruction), but after googling I found this repo, which suggests the following lines. They make a lot of sense to me, as they start a web server:
# open port 80 on the container to make it accessible from the outside
EXPOSE 80
# line as described in repo to start the web server
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
This should start the web server on port 80 using the application stored in a variable app in your main.py when the container starts.
Build and run container
When you have added that, you need to build your image using the docker build command.
docker build -t asmoun/my-container .
This builds a container image asmoun/my-container using the Dockerfile in the current directory, hence the trailing .. So make sure you execute it from the directory containing the Dockerfile. This will take some time, as the base image has to be downloaded and the dependencies need to be installed.
You now have an image that you can run using the docker run command:
docker run --name my-fastapi-container -d -p 80:80 asmoun/my-container
This will start a container called my-fastapi-container using the image asmoun/my-container in detached mode (the -d option makes sure your TTY is not attached to the container) and define a port mapping from port 80 on the host to port 80 in the container, which we previously exposed in the Dockerfile (EXPOSE 80).
You should now see an ID printed to your console, which means the container has started. You can check its state using docker ps -a; it should be marked as running. If it is, you should be able to connect to localhost:80 now. If it is not, use docker logs my-fastapi-container to view the container's logs and you'll hopefully learn more.
Disclaimer
Please be aware that this is only a minimal guide on how to get a simple FastAPI container up and running. Some parameters could well be different depending on the application (e.g. main.py could be named server.py or similar), in which case you will need to adjust them, but the overall process (1. adjust the Dockerfile, 2. build the image, 3. run the container) should work. It is also possible that your application expects other things to be present in the container, which would need to be defined in the Dockerfile, but neither I nor (presumably) you know this, as the Dockerfile provided seems to be incomplete. This is just a best-effort answer.
I have tried to link all relevant resources and commands so you can have a look at what some of them do and which options/parameters might be of interest to you.
I am new to containers, so the questions below might sound stupid.
There are two questions actually.
I have a non-web Python application, fully tested in VS Code without any errors. I then use the Dockerfile below to build it locally.
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "./mycode.py"]
An image was built successfully, but running it ended with a TypeError. I have made sure requirements.txt has the same dependencies as my project environment. The error message is "wrong tuple index", which gives me no clue where the problem in fully tested code could come from. I am stuck here with a weird feeling.
I then tried a buildpack with this Procfile: worker: python mycode.py
An image was built successfully, but docker run could not launch the application, giving the error below. I have no idea what else besides "worker:" could launch a non-web Python application from a Procfile. Stuck again!
ERROR: failed to launch: determine start command: when there is no
default process a command is required
I searched, but the results are all about web applications with "web:" in the Procfile. Any help on either question will be appreciated.
When you start the container, you'll need to pass it the worker process type like this:
$ docker run -it myapp worker
Then it should run the command you added to the Procfile.
A few other things:
Make sure you're using the heroku/python buildpack or another buildpack that includes Procfile detection.
Confirm in the build output that the worker process type was created.
You can put your command under the web: process type if you do not want to add worker to your start command. There's nothing wrong with a web: process that doesn't actually run a web app.
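For reference, and assuming mycode.py is the entry point as above, a Procfile declaring both process types could look like this (either line alone also works):

```
web: python mycode.py
worker: python mycode.py
```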
Thanks @codefinger for the reminder. After several trials, I finally got my application launched with the command below:
docker run -it --name container_name image_name python mycode.py
In fact, the docker run command has the format:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
I suspect that even if a buildpack image has no worker process, it is still possible to use the [COMMAND] argument to launch your application.
However, successfully running the image built by the buildpack without any error leaves me an increasingly weird feeling.
Same code and same requirements.txt file, nothing changed! Why does the image built by docker build give me a TypeError? So weird.
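One hedged guess about the remaining TypeError: the python:3.8-slim image may ship a different interpreter version or dependency build than the environment where the code was tested, and such mismatches can surface as odd runtime errors. Logging the version at startup makes the comparison easy:

```python
import sys

# Print the interpreter version at startup so `docker logs` shows exactly
# which Python the container runs; compare it with `python --version` in
# the environment where the code was originally tested.
version = "%d.%d.%d" % sys.version_info[:3]
print("Running on Python", version)
```

If the versions differ, pinning the base image to the exact local version is a cheap first experiment.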
I am using a Python image for my application with a Dockerfile and docker-compose.
I want to mount the Python path inside the container, /usr/local/bin, to a directory on the host. For this, my docker-compose file looks like:
version: '3.7'
services:
web:
build:
context: .
dockerfile: Dockerfile
volumes:
- ./.virtualenv:/usr/local/bin
And the .virtualenv directory is empty in the host.
After running docker-compose and executing the python command
python myfile.py
It gives error
python: command not found
It is probably because it is mounting the empty .virtualenv directory as the source over /usr/local/bin.
How can I use the .virtualenv directory as write-only so that the contents of /usr/local/bin maps to this directory and does not copy from this directory to the container?
I don't think I am right, but try python3.
If that does not work, I really don't know.
Bind mounts always take a directory from the host and inject it into the container, hiding what was originally in the image; they do not work the other way, and you can't use them to expose an image's contents to the host system.
The approach you describe has the further problem that Python virtual environments aren't really that portable. They are tied to a specific filesystem location and a specific Python interpreter. If the virtual environment is in /home/me/.virtualenv in one context and /usr/local/bin in another, it won't work, and if the host and container Python aren't the exact same version and build, it won't work either.
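The location-dependence is easy to demonstrate with the standard library's venv module: every environment it creates records the absolute path of the interpreter that created it, so the same directory mounted at a different path, or used with a different Python build, will not work. A small sketch:

```python
import os
import tempfile
import venv

# Create a throwaway virtual environment (without pip, to keep it fast).
env_dir = os.path.join(tempfile.mkdtemp(), "demo-venv")
venv.create(env_dir)

# pyvenv.cfg hard-codes where the creating interpreter lives on *this*
# machine; inside a container that path generally does not exist.
with open(os.path.join(env_dir, "pyvenv.cfg")) as f:
    cfg = f.read()
print(cfg)
```

The launcher scripts in the environment's bin/ directory are pinned to absolute paths in the same way, which is why copying or mounting a virtualenv into a container breaks it.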
In Docker, generally an image should be self-contained: it includes the language interpreter, any libraries you might happen to need, and the application code. You don't need to inject these parts into a container with a bind mount, and the copy in the image will be separate from the development environment you have on your host. Further, since a Docker image is already isolated from the host, you don't need a virtual environment in a Docker context. A very typical Python-based Dockerfile would look like
FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ./myfile.py
and you can docker run this image, or include it in a docker-compose.yml file, without any volumes: at all.
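One detail about the CMD ./myfile.py form: it executes the script directly, so the file needs the executable bit (chmod +x myfile.py before building) and a shebang line; with CMD ["python", "./myfile.py"] neither is required. A minimal illustrative myfile.py:

```python
#!/usr/bin/env python3
# The shebang above is what lets `CMD ./myfile.py` run this file directly;
# the file must also be executable (chmod +x myfile.py).
message = "hello from the container"
print(message)
```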
I want to modify files inside a Docker container with PyCharm. Is there a possibility of doing such a thing?
What you want to obtain is called bind mounting, and it can be achieved by adding the -v parameter to your run command. Here's an example with an nginx image:
docker run --name=nginx -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 nginx
The specific parameter achieving this result is -v.
-v ~/nginxlogs:/var/log/nginx sets up a bind mount volume that links the /var/log/nginx directory from inside the Nginx container to the ~/nginxlogs directory on the host machine.
Docker uses a : to split the host’s path from the container path, and the host path always comes first.
In other words, the files that you edit on your local filesystem will be synced to the Docker folder immediately.
Source
Yes. There are multiple ways to do this, and you will need to have PyCharm installed inside the container.
The following set of instructions should work:
docker ps - this will show you details of the running containers
docker exec -it <name of container> /bin/bash
At this point you will have a shell inside the container. If PyCharm is not installed, you will need to install it. The following should work:
sudo apt-get install pycharm-community
Good to go!
Note: The installation is not persistent across Docker image builds. You should add the PyCharm installation step to the Dockerfile if you need to access it regularly.
What's the proper development workflow for code that runs in a Docker container?
Solomon Hykes said that the "official" workflow involves building and running a new Docker image for each Git commit. That makes sense, but what if I want to test a change before committing it to the Git repo?
I can think of two ways to do it:
Run the code on a local development server (e.g., the Django development server). Edit a file; test on the dev server; make a Git commit; rebuild the Docker image with the new code; test again on the local Docker container.
Don't run a local dev server. Instead, build and run a new Docker image each time I edit a file, and then test the change on local Docker container.
Both approaches are pretty inefficient. Is there a better way?
A more efficient way is to run a new container from the latest image that was built (which then has the latest code).
You could start that container with a bash shell so that you will be able to edit files from inside the container:
docker run -it <some image> bash -l
You would then run the application in that container to test the new code.
Another way to alter files in that container is to start it with a volume. The idea is to alter files in a directory on the Docker host instead of messing with files from the command line inside the container itself:
docker run -it -v /home/joe/tmp:/data <some image>
Any file that you put in /home/joe/tmp on your Docker host will be available under /data/ in the container. Change /data to whatever path is suitable for your case and hack away.