How to build non-web python application image with buildpack? - python

I am new to containers, so the questions below might sound stupid.
There are two questions, actually.
I have a non-web Python application, fully tested in VS Code without any errors. I then used the Dockerfile below to build it locally:
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "./mycode.py"]
The image was built successfully, but running it ended with a TypeError. I have made sure that requirements.txt has the same dependencies as my project environment. The error message is "wrong tuple index", which gives me no clue where the problem in fully tested code could come from. I am stuck here with a weird feeling.
I then tried a buildpack with this Procfile: worker: python mycode.py
An image was built successfully, but docker run could not launch the application, giving the error below. I have no idea what else besides "worker:" could launch a non-web Python application from a Procfile. Stuck again!
ERROR: failed to launch: determine start command: when there is no default process a command is required
I searched, but everything I found is about web applications with "web:" in the Procfile. Any help on either question will be appreciated.

When you start the container, you'll need to pass it the worker process type like this:
$ docker run -it myapp worker
Then it should run the command you added to the Procfile.
A few other things:
Make sure you're using the heroku/python buildpack or another buildpack that includes Procfile detection.
Confirm in the build output that the worker process type was created.
You can put your command as the web: process type if you do not want to add worker to your start command. There's nothing wrong with a web: process that doesn't actually run a web app (a minimal Procfile sketch follows below).
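For reference, a minimal sketch (reusing the mycode.py and myapp names from this question) of a Procfile that defines both process types, and the matching launch commands:
web: python mycode.py
worker: python mycode.py
$ docker run -it myapp           # runs the default process (usually web), no extra argument needed
$ docker run -it myapp worker    # runs the worker process explicitly
Whether web is set as the default process depends on the buildpack and builder you use, so check the build output to confirm.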

Thanks @codefinger for the reminder. After several trials, I finally got my application launched with the command below:
docker run -it --name container_name image_name python mycode.py
In fact, the docker run command has the format:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
I suspect that even if the buildpack image defines no worker process, it is still possible to use the [COMMAND] part to launch your application.
However, the image built by the buildpack running successfully without any error leaves me with an increasingly weird feeling.
Same code and same requirements.txt file, nothing changed! Why does the image built by docker build give me a TypeError? So weird.


Docker: Bind mount not reflecting unless container is restarted

TL;DR: Flask application. When I make changes to the app.py file inside the source folder of the bind mount, the change is reflected in the target folder. But when I hit the API from Postman, the change is not seen unless the container is restarted.
Long Version:
I am just starting with docker and was trying bind mounts. I started my container with the following command:
docker run -p 9000:5000 --mount type=bind,source="$(pwd)"/src/,target=/src/ voting-app:latest
My Dockerfile is as below:
FROM python:3.8.10-alpine
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY ./src ./src
WORKDIR ./src
ENTRYPOINT ["python"]
CMD ["app.py"]
This post mentioned that if the inode of the file changes, Docker cannot handle it, especially for a single-file bind. Mine is not a single-file bind, and the inode was not changing either.
When I run docker exec <container_id> cat app.py, I can see that the changes are carried over to the container file. It is just the API that is not reflecting the change.
Docker version 20.10.17, build 100c701.
Can someone please tell me what I am missing here? Also, please feel free to ask for more information.
The full code is available here.
Once the Python process is running, it is not automatically restarted when the file changes. So you need a process to watch the file and restart the server when it changes.
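For example, a minimal development-only sketch (assuming app.py creates a Flask instance named app; the route is just a placeholder) that enables Flask's built-in reloader so the server restarts itself whenever the bind-mounted file changes:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "voting app"

if __name__ == "__main__":
    # debug=True enables the werkzeug reloader, which restarts the server
    # process whenever app.py changes inside the bind mount
    app.run(host="0.0.0.0", port=5000, debug=True)
If the reloader does not pick up changes across the bind mount (file-change notifications can be unreliable on some hosts), an external watcher that restarts the process is the fallback.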

Dockerizing an API built with python on local machine

I have cloned a repository of an API built with python on my local machine and my goal is to be able to send requests and receive responses locally.
I'm not familiar with Python, but the code is very readable and easy to understand. However, the repository contains some dependencies and configuration files to Dockerise it (and I'm not familiar with Docker and containers either).
So what are the steps to follow in order to be able to interact with the API locally?
Here are some files in the repository for config and requirements :
requirements.txt file :
fastapi==0.70.0
pytest==7.0.1
requests==2.27.1
uvicorn==0.15.0
Dockerfile file :
FROM tiangolo/uvicorn-gunicorn:python3.9
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./app /app
I already installed Python 3 and Docker, so what's next?
Adjust Dockerfile
Assuming all code is in the /app directory, you have already copied over all your code and installed all the dependencies required for the application.
But you are missing - at least (see the disclaimer) - one essential line in the Dockerfile, and it is actually the most important one: the CMD instruction that tells Docker which command/process should be executed when the container starts.
I am not familiar with the particular base image you are using (which is defined using the FROM command), but after googling I found this repo, which suggests the following lines; they make a lot of sense to me as they start a web server:
# open port 80 on the container to make it accessible from the outside
EXPOSE 80
# line as described in repo to start the web server
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
This should start the web server on port 80 when the container starts, serving the application stored in the variable app in your main.py.
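Putting it together, the adjusted Dockerfile could look like this (a sketch based on the files shown above; app.main:app is an assumption taken from the linked repo and may need to match the actual module and variable names in the project):
FROM tiangolo/uvicorn-gunicorn:python3.9
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
COPY ./app /app
# open port 80 on the container to make it accessible from the outside
EXPOSE 80
# start uvicorn serving the FastAPI instance named "app" in app/main.py
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]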
Build and run container
When you have added that, you need to build your image using the docker build command.
docker build -t asmoun/my-container .
This builds a container image asmoun/my-container using the Dockerfile in the current directory, hence the trailing .. So make sure you execute it while in the directory with the Dockerfile. This will take some time, as the base image has to be downloaded and the dependencies need to be installed.
You now have an image that you can run using the docker run command:
docker run --name my-fastapi-container -d -p 80:80 asmoun/my-container
This will start a container called my-fastapi-container using the image asmoun/my-container in detached mode (the -d option makes sure your TTY is not attached to the container) and define a port mapping, which maps port 80 on the host to port 80 in the container, which we previously exposed in the Dockerfile (EXPOSE 80).
You should now see some ID getting printed to your console. This means the container has started. You can check its state using docker ps -a and you should see it marked as running. If it is, you should be able to connect to localhost:80 now. If it is not, use docker logs my-fastapi-container to view the container's logs and you'll hopefully learn more.
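For a quick check (assuming the API defines a route at the root path, which depends on the repository), something like this should work; FastAPI also serves interactive documentation at /docs by default:
curl http://localhost:80/
curl http://localhost:80/docs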
Disclaimer
Please be aware that this is only a minimal guide to getting a simple FastAPI container up and running. Some parameters could well be different depending on the application (e.g. main.py could be named server.py or similar), in which case you will need to adjust them, but the overall process (1. adjust Dockerfile, 2. build image, 3. run container) should work. It's also possible that your application expects some other things to be present in the container which would need to be defined in the Dockerfile, but neither I nor (presumably) you know this, as the Dockerfile provided seems to be incomplete. This is just a best-effort answer.
I have tried to link all relevant resources and commands so you can have a look at what some of them do and which options/parameters might be of interest to you.

How to efficiently input files with docker

I am starting to get the hang of Docker and am trying to containerize some of the applications I use. Thanks to the tutorials I was able to create Docker images and containers, but now I am trying to think about the most efficient and practical ways to do things.
To present my use case: I have a Python script (let's call it process.py) that takes a single .jpg image as input, does some operations on this image, and then outputs the processed .jpg image.
Normally I would run it with:
python process.py -i path_of_the_input_image -o path_of_the_output_image
Then, the way I connect input/output with my Docker container is the following. First I create the Dockerfile:
FROM python:3.6.8
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
CMD python ./process.py -i ./input_output/input.jpg -o ./input_output/output.jpg
And then, after building the image, I run docker run, mapping a local folder to the container's input_output folder:
docker run -v C:/local_folder/:/app/input_output my_docker_image
This seems to work, but it is not really practical, as I have to create a specific local folder just to mount it into the container. So here are the questions I am asking myself:
Is there a more practical way of doing things? Can I directly send a single input file to, and directly receive a single output file from, a Docker container?
When I run the Docker image, what happens (if I understand correctly) is that it creates a container that runs process.py once and then just sits there doing nothing. Even after process.py finishes, the container is still listed by "docker ps -a". Is this behaviour expected? Is there a way to automatically delete finished containers? Am I using docker run the right way?
Is there a more practical way of keeping a container running continuously that I can query to run process.py on demand with a given input?
I have a python code (let's call it process.py) that takes as an input a single .jpg image, does some operations on this image, and then output the processed .jpg image.
That's most efficiently done without Docker; just run the python command you already have. If your application has interesting Python library dependencies, you can install them in a virtual environment to avoid conflicts with the system Python installation.
When I run the Docker image...
...the container runs its main command (docker run command arguments, Dockerfile CMD, possibly combined with an entrypoint from the same sources), and when that command exits, the container exits. It will be listed in docker ps -a output, but as "Exited" (probably with status 0 for a successful completion). You can use docker run --rm to have the container automatically delete itself when it exits.
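For example, reusing the image name and volume mount from the question:
docker run --rm -v C:/local_folder/:/app/input_output my_docker_image
The container runs process.py once and is removed as soon as it exits, so nothing lingers in docker ps -a.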
Is there a more practical way of having a container running continuously and on which I can query to run the program process.py on demand with a given input ?
Wrap it in a network service, like a Flask application. As long as this is running, you can use a tool like curl to do an HTTP POST with the input JPEG file as the body, and get the output JPEG file as the response. Avoid using local files and Docker together whenever that's an option (prefer network I/O for process inputs and outputs; prefer a database to local-file storage).
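A minimal sketch of such a wrapper, assuming a hypothetical process_image() helper that holds the logic currently in process.py:
from flask import Flask, Response, request

app = Flask(__name__)

def process_image(data: bytes) -> bytes:
    # placeholder for whatever process.py actually does to the JPEG bytes
    return data

@app.route("/process", methods=["POST"])
def process():
    # request body is the input JPEG, response body is the processed JPEG
    result = process_image(request.get_data())
    return Response(result, mimetype="image/jpeg")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
With this running in the container (and the port published), a call like curl -X POST --data-binary @input.jpg http://localhost:8000/process -o output.jpg sends one input file and receives one output file, with no shared folder involved.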
Why are volume mounts not practical?
I would argue that Dockerising your application is not practical, but you've chosen to do so for presumably very good reasons. Volume mounts are simply an extension of this. If you want to get data in and out of your container, the 'normal' way to do this is by using volume mounts as you have done. Sure, you could use docker cp to copy the files manually, but that's not really practical either.
As far as the process exiting goes: normally, once the main process exits, the container exits. docker ps -a shows stopped containers as well as running ones. You should see that it says Exited n minutes (hours, days, etc.) ago. This means that your container has run and exited correctly. You can remove it with docker rm <containerid>.
docker ps (no -a) will only show the running ones, btw.
If you use the --rm flag in your Docker run command, it will be removed when it exits, so you won't see it in the ps -a output. Stopped containers can be started again, but that's rather unusual.
Another solution might be to change your script to wait for incoming files and process them as they are received. Then you can leave the container running, and it will just process them as needed. If doing this, make sure that your idle loop has a sleep or something in it to ensure that you don't consume too many resources.
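A rough sketch of that idea, assuming a hypothetical process_file() function containing the logic from process.py and the folder mounted with -v in the docker run command above:
import os
import time

WATCH_DIR = "/app/input_output"  # the folder mounted into the container

def process_file(path):
    # placeholder for the processing currently done by process.py
    pass

seen = set()
while True:
    for name in os.listdir(WATCH_DIR):
        if name.endswith(".jpg") and name not in seen:
            process_file(os.path.join(WATCH_DIR, name))
            seen.add(name)
    # sleep so the idle loop does not burn CPU while waiting for new files
    time.sleep(1)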

Run .sh script from python as sudo

I'm working on a project in Python where I want to automate Docker container creation. I already have the project folder, which includes all the files required to create the image.
One of these is create_image.sh
docker build -t my_container:latest .
Currently I do:
sudo bash create_image.sh
But now I need to automate this process from python.
I have tried:
import os
import subprocess
subprocess.check_call("bash -c '. create_image.sh'", shell=True)
But I get this error:
CalledProcessError: Command 'bash -c '. create_image.sh'' returned non-zero exit status 1.
EDIT:
The use case is to automate container creation through an API. I have the code in Flask and Python up to this point, where I got stuck on building the image from the Dockerfile. The rest is automated from templates.
You can try:
subprocess.call(['sudo', 'bash', 'create_image.sh' ])
which is the equivalent of
sudo bash create_image.sh
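On Python 3.7+, a sketch using subprocess.run with check=True also captures the script's output, which makes a non-zero exit status easier to debug:
import subprocess

result = subprocess.run(
    ["sudo", "bash", "create_image.sh"],
    capture_output=True,  # collect stdout/stderr (Python 3.7+)
    text=True,
    check=True,           # raises CalledProcessError on a non-zero exit status
)
print(result.stdout)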
Note: Let me say that there are better ways of automating Docker container creation - please check docker-compose, which can build and start containers easily. If you can elaborate more on the use case, we could help you with an elegant solution for Docker. It might not be a Python problem.
EDIT:
Following the comments, it would be better to create a docker-compose file and use a Makefile to issue the docker commands. Inspiration: https://medium.com/@daniel.carlier/how-to-build-a-simple-flask-restful-api-with-docker-compose-2d849d738137
In case the failure is because your user can't run docker without sudo, it's probably better to grant the user Docker API access by adding them to the docker group: https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo
Simply add the user to the docker group:
sudo gpasswd -a $USER docker
Also, if you want to automate Docker operations from Python, I'd recommend using the Python library for Docker: How to build an Image using Docker API Python Client?
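A minimal sketch with the Docker SDK for Python (pip install docker), assuming the Dockerfile is in the current working directory:
import docker

client = docker.from_env()
# equivalent of: docker build -t my_container:latest .
image, build_logs = client.images.build(path=".", tag="my_container:latest")
for chunk in build_logs:
    print(chunk)
This still requires the user running the Flask app to have access to the Docker daemon (root, or membership in the docker group as described above).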

Running Disco in a Docker container

I need to run a Python script in a Docker container (I currently have execution of "disco_test.py" as my ENTRYPOINT command) that will utilize Disco (which of course needs to be running in that container). The problem is that I cannot seem to get Disco running, either with CMD or RUN in the Dockerfile, or from within the Python script itself (using the subprocess module).
If, however, I create an otherwise identical image with no ENTRYPOINT command, run it with docker run -i -t disco_test /bin/bash, and then open a Python shell, I can successfully get Disco running using the subprocess module (simply calling call(["disco", "start"]) works). Upon exiting the Python shell I can verify that Disco is still running properly (disco status reports "Master 0cfddb8fb0e4:8989 running"). When I attempt to start Disco in the same way (using call(["disco", "start"])) from within "disco_test.py", which I execute as the ENTRYPOINT command, it doesn't work! It will print "Master 0cfddb8fb0e4:8989 started"; however, checking disco status afterwards ALWAYS shows "Master 0cfddb8fb0e4:8989 stopped".
Is there something about how the ENTRYPOINT command is run that is preventing me from being able to get Disco running from within the corresponding Python script? Running "disco_test.py" on my machine (not in a Docker container) does indeed get Disco up and running successfully.
Any insights or suggestions would be greatly appreciated!
I would guess that it runs daemonized and exits immediately, stopping the container. You could try these containers: dockerized disco. It uses supervisor to run Disco.
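One way to test that theory without supervisor is to keep the ENTRYPOINT process alive after starting Disco; a rough sketch of disco_test.py (the subprocess call comes from the question, the keep-alive loop is an assumption):
import subprocess
import time

# start the Disco master, as in the interactive test
subprocess.call(["disco", "start"])

# ... the actual work of disco_test.py goes here ...

# keep the entrypoint process in the foreground so the container is not
# torn down, which would also stop the daemonized Disco processes
while True:
    time.sleep(60)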
