I'm using Docker for the first time. I have a web application in Angular and a backend application in Python/Flask. After some struggle, I finally managed to get everything to work, but Docker can't find my API program in my backend:
My server file is at /my_backend/src/server.py
My Dockerfile is at /my_backend/Dockerfile.dockerfile
Content of Dockerfile.dockerfile:
FROM python:3.7-slim
COPY /src .
WORKDIR /src/app
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /tmp/requirements.txt
RUN python3 -m pip install -r /tmp/requirements.txt
CMD ["python","server.py"]
The error message in the command prompt is
Attaching to backend, frontend
backend | python: can't open file 'server.py': [Errno 2] No such file or directory
backend exited with code 2
Feel free to ask for more information.
I used this tutorial: https://medium.com/analytics-vidhya/dockerizing-angular-application-and-python-backend-bottle-flask-framework-21dcbf8715e5
If your script is in the /src directory, don't use WORKDIR /src/app; use WORKDIR /src instead, or keep the workdir and run CMD ["python","../server.py"]. Note that COPY /src . executes before any WORKDIR is set, so the contents of src land in the image root; the COPY destination needs fixing as well:
Solution 1:
FROM python:3.7-slim
# copy the source to /src explicitly; a relative destination here would resolve to /
COPY /src /src
# <- HERE: make /src the working directory
WORKDIR /src
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /tmp/requirements.txt
RUN python3 -m pip install -r /tmp/requirements.txt
CMD ["python","server.py"]
Solution 2:
FROM python:3.7-slim
COPY /src /src
WORKDIR /src/app
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /tmp/requirements.txt
RUN python3 -m pip install -r /tmp/requirements.txt
# <- HERE: step up out of /src/app, or use the absolute path "/src/server.py"
CMD ["python","../server.py"]
Suggested by @TheFool:
FROM python:3.7-slim
# Swap WORKDIR and COPY
WORKDIR /src/app
COPY /src .
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /tmp/requirements.txt
RUN python3 -m pip install -r /tmp/requirements.txt
CMD ["python","server.py"]
Turn your WORKDIR and COPY instructions around to make it work:
FROM python:3.7-slim
# workdir first
WORKDIR /src/app
COPY /src .
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /tmp/requirements.txt
RUN python3 -m pip install -r /tmp/requirements.txt
CMD ["python","server.py"]
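Why the order matters: a relative COPY destination such as . resolves against whatever WORKDIR is in effect at that instruction, and WORKDIR defaults to / in the python base images. With COPY /src . before any WORKDIR, the contents of src land in the image root instead of /src/app. If in doubt, you can list the filesystem of the built image to see where files actually ended up (the image name here is illustrative):
docker run --rm -it my_backend ls -la / /src/app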
I am trying to copy a folder with its contents, but when I try to access a file from inside the container I get an error:
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements/development.txt'
My Dockerfile contains:
FROM python
ENV PYTHONUNBUFFERED=1
RUN apt-get update \
&& pip install gunicorn
WORKDIR /usr/src/app
COPY requirements/* /usr/src/app/
RUN pip install -r requirements/development.txt
COPY . .
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "3", "cryb.wsgi:application"]
COPY requirements/* /usr/src/app/
You're copying stuff from requirements to /usr/src/app. This does NOT preserve directory structure. Make that
RUN mkdir /usr/src/app/requirements
COPY requirements/* /usr/src/app/requirements/
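Alternatively, since COPY creates missing destination directories for you, copying the directory itself (rather than a glob of its contents) preserves the layout without the mkdir:
COPY requirements/ /usr/src/app/requirements/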
I'm learning how to write Dockerfiles and I can't understand why this doesn't work.
Python Django project on GitHub:
https://github.com/BrianRuizy/covid19-dashboard
At the moment I have the following Dockerfile. If you know how to write Dockerfiles, please help me figure out what my mistake is.
FROM python:3.8-slim
RUN apt-get update && \
apt-get install -y --no-install-recommends build-essential
ENV PYTHONUNBUFFERED=1
ADD requirements.txt /
RUN pip install -r /requirements.txt
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "127.0.0.1:8000"]
You never copied your files into the container. An example would look something like this; it also avoids running as the superuser:
FROM python:3.8-slim
#INSTALL REQUIREMENTS
RUN apt-get update
RUN apt-get -y install default-libmysqlclient-dev gcc wget
# create the app user
RUN useradd -u 1000 -ms /bin/bash app && mkdir /app && chown -R app:app /app
# copy the requirements
COPY --chown=app:app ./requirements.txt /app/requirements.txt
COPY --chown=app:app ./deploy-requirements.txt /app/deploy-requirements.txt
WORKDIR /app
# and install them
RUN pip install -r requirements.txt && pip install -r deploy-requirements.txt
#####
#
# everything below runs as the 'app' user for security reasons
#
#####
USER app
#COPY APP
COPY --chown=app:app . /app
WORKDIR /app
#RUN
ENTRYPOINT gunicorn -u app -g app --bind 0.0.0.0:8000 PROJECT.asgi:application -k uvicorn.workers.UvicornWorker
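Separately, the original CMD binds Django's development server to 127.0.0.1, which is unreachable from outside the container. Bind to 0.0.0.0 instead (as the gunicorn ENTRYPOINT above already does), for example:
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]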
My Dockerfile
FROM python:3.7 AS builder
RUN python3 -m venv /venv
COPY requirements.txt .
RUN /venv/bin/pip3 install -r requirements.txt
FROM python:3.7
WORKDIR /home/sokov_admin/www/bot-telegram
COPY . .
CMD ["/venv/bin/python", "./bot.py"]
When I run the Docker image I get this error:
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:380: starting container process caused: exec:
"/venv/bin/python": stat /venv/bin/python: no such file or directory:
unknown.
What should I change in my code?
The example you show doesn't need any OS-level dependencies to build its Python dependencies. That simplifies things significantly: you can do everything in a single Docker build stage, without a virtual environment, and there wouldn't be any particular benefit from splitting the build up.
FROM python:3.7
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["./bot.py"]
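(Note: the exec-form CMD ["./bot.py"] assumes bot.py starts with a shebang line such as #!/usr/bin/env python3 and is marked executable; if it isn't, use CMD ["python", "./bot.py"] instead.)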
The place where a multi-stage build with a virtual environment helps is when you need a full C toolchain to build Python libraries. In that case, the first stage installs the C toolchain and sets up the virtual environment, and the second stage uses COPY --from=... to bring the entire virtual environment into the final image.
# Builder stage:
FROM python:3.7 AS builder
# Install OS-level dependencies
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
build-essential
# libmysql-client-dev, for example
# Create the virtual environment
RUN python3 -m venv /venv
ENV PATH=/venv/bin:$PATH
# Install Python dependencies
WORKDIR /app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
# If your setup.py/setup.cfg has a console script entry point,
# install the application too
# COPY . .
# RUN pip3 install .
# Final stage:
# must be _exactly_ the same image as the builder
FROM python:3.7
# Install OS-level dependencies if needed (libmysqlclient, not ...-dev)
# RUN apt-get update && apt-get install ...
# Copy the virtual environment; must be _exactly_ the same path
COPY --from=builder /venv /venv
ENV PATH=/venv/bin:$PATH
# Copy in the application (if it wasn't `pip install`ed into the venv)
WORKDIR /app
COPY . .
# Say how to run it
EXPOSE 8000
CMD ["./bot.py"]
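To confirm the virtual environment actually made it into the final image, you can override the command with a quick check (the image tag here is illustrative):
docker run --rm my-bot-image which python
# should print /venv/bin/python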
I rewrote some sync python lib to async. How do I integrate it into my project?
I did the following:
cloned it from GitHub and rewrote it
built the lib using python3 setup.py bdist_wheel --universal and got a .whl file
How can I integrate it into my project?
Currently I have the following docker file:
FROM python:3.6
MAINTAINER ...
COPY requirements.txt requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . $APP_DIR
EXPOSE 8080
CMD python app.py
How do I copy the .whl file into the container and install it using pip3 install {...}.whl?
First, add WORKDIR /app before COPY requirements.txt to set your app's working directory inside the container. Then, if xxx.whl sits next to requirements.txt in the build context, copy it in with COPY xxx.whl /app and install it with RUN pip install xxx.whl, like this:
FROM python:3.6
MAINTAINER ...
# specify workdir
WORKDIR /app
COPY requirements.txt /app
# copy xxx.whl from host to container
COPY xxx.whl /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# install xxx.whl
RUN pip install xxx.whl
COPY . /app
EXPOSE 8080
CMD python app.py
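Alternatively, pip accepts a local wheel path inside requirements.txt. Since the Dockerfile above copies the wheel into /app before installing the requirements, you could drop the separate RUN pip install xxx.whl and list the wheel in the requirements file instead (the file name is illustrative):
# in requirements.txt, alongside the other dependencies
./xxx.whl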
I have a requirements.txt file containing, amongst others:
Flask-RQ==0.2
-e git+https://token:x-oauth-basic@github.com/user/repo.git#egg=repo
When I try to build a Docker container using Docker Compose, it downloads both packages and installs them both, but when I do a pip freeze there is no sign of the -e package, and when I try to run the app it looks as if that package hasn't been installed. Here's the relevant output from the build:
Collecting Flask-RQ==0.2 (from -r requirements.txt (line 3))
Downloading Flask-RQ-0.2.tar.gz
Obtaining repo from git+https://token:x-oauth-basic@github.com/user/repo.git#egg=repo (from -r requirements.txt (line 4))
Cloning https://token:x-oauth-basic@github.com/user/repo.git to ./src/repo
And here's my Dockerfile:
FROM python:2.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install -r requirements.txt
COPY . /usr/src/app
I find this situation very strange and would appreciate any help.
I ran into a similar issue, and one possible way that the problem can appear is from:
WORKDIR /usr/src/app
being set before pip install. pip will create the src/ directory (where the package is installed) inside of the WORKDIR. Now all of this shouldn't be an issue since your app files, when copied over, should not overwrite the src/ directory.
However, you might be mounting a volume to /usr/src/app. When you do that, you'll overwrite the /usr/src/app/src directory and then your package will not be found.
One fix, then, is to move WORKDIR after the pip install, so your Dockerfile will look like:
FROM python:2.7
RUN mkdir -p /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install -r /usr/src/app/requirements.txt
COPY . /usr/src/app
WORKDIR /usr/src/app
This fixed it for me. Hopefully it'll work for you.
@mikexstudios is correct: this happens because pip stores the package source in /usr/src/app/src, but you're mounting a local directory over top of it, so Python can't find the package source.
Rather than changing the position of WORKDIR, I solved it by changing the pip command to:
pip install -r requirements.txt --src /usr/local/src
Either approach should work.
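For reference, the shadowing described above comes from a bind mount over the app directory, e.g. (paths and image name are illustrative):
docker run -v $(pwd):/usr/src/app myapp
The mount replaces everything the image put under /usr/src/app, including pip's src/ checkout, which is why relocating it with --src (or moving WORKDIR) fixes the problem.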
If you are receiving a similar error when installing a git repo from a requirements file under a Dockerized container, you may have forgotten to install git.
Here is the error I received:
Downloading/unpacking CMRESHandler from
git+git://github.com/zigius/python-elasticsearch-logger.git (from -r
/home/ubuntu/requirements.txt (line 5))
Cloning git://github.com/zigius/python-elasticsearch-logger.git to
/tmp/pip_build_root/CMRESHandler
Cleaning up...
Cannot find command 'git'
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c useradd ubuntu -b /home && echo
"ubuntu ALL = NOPASSWD: ALL" >> /etc/sudoers &&
chown -R ubuntu:ubuntu /home/ubuntu && pip install -r /home/ubuntu/requirements.txt returned a non-zero code: 1
Here is an example Dockerfile that installs git and then installs all requirements:
FROM python:3.5-slim
RUN apt-get update && apt-get install -y --no-install-recommends git
ADD . /code
WORKDIR /code
RUN pip install --upgrade pip setuptools && pip install -r requirements.txt
Now you can use git packages in your requirements file in a Dockerized environment.