I wrote a simple Flask web service that uses fastText to do predictions, and I want to put it into Docker. My Dockerfile is like this:
FROM python:3
WORKDIR /app
COPY . .
RUN pip3 install -r requirements.txt
RUN git clone https://github.com/facebookresearch/fastText.git /tmp/fastText && \
    rm -rf /tmp/fastText/.git* && \
    mv /tmp/fastText/* / && \
    cd / && \
    make
CMD ["python", "app.py"]
requirements.txt
Flask==0.10.0
docker-compose.yml
version: "3.7"
services:
helloworld:
build:
context: ./
ports:
- 5000:5000
When I run docker-compose up, it fails with this error:
ModuleNotFoundError: No module named 'fasttext'
How do I fix that?
Try running these commands instead of the make step:
$ git clone https://github.com/facebookresearch/fastText.git
$ cd fastText
$ pip install .
You have to replace your Dockerfile with the following:
FROM python:3
WORKDIR /app
COPY . .
RUN pip3 install -r requirements.txt
RUN git clone https://github.com/facebookresearch/fastText.git && \
    cd fastText && \
    pip install .
CMD ["python", "app.py"]
This way you build the fastText Python bindings (as shown in the official documentation).
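For reference, here is a minimal sketch of what an app.py using the module could look like once the install succeeds. The model filename (model.bin), the /predict route, and the JSON shape are my own assumptions, not taken from the question:
# Hypothetical app.py: a minimal Flask wrapper around a fastText model.
# "model.bin" and the /predict route are assumptions for illustration only.
import fasttext
from flask import Flask, jsonify, request

app = Flask(__name__)
model = fasttext.load_model("model.bin")  # expects a trained model copied into /app

@app.route("/predict", methods=["POST"])
def predict():
    text = request.get_json().get("text", "")
    labels, probs = model.predict(text)
    return jsonify({"labels": list(labels), "probabilities": [float(p) for p in probs]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)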
Related
I created a Docker Python image on top of Alpine. The problem is that when I try to start a Django app, it cannot find Django, and that makes sense because when I run pip list, Django and the other packages are not there.
PS: while building the image, the output shows that it is collecting Django and the other packages.
This is the requirements.txt file:
Django>=3.2.4,<3.3
djangorestframework>=3.12.4,<3.13
This is my Dockerfile:
FROM python:3.9-alpine3.13
LABEL maintainer="siavash"
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /tmp/requirements.txt
COPY ./requirements.dev.txt /tmp/requirements.dev.txt
COPY ./app /app
WORKDIR /app
EXPOSE 8000
ARG DEV=false
RUN python -m venv /py && \
    /py/bin/pip install --upgrade pip && \
    /py/bin/pip install -r /tmp/requirements.txt && \
    if [ $DEV = "true" ]; \
        then /py/bin/pip install -r /tmp/requirements.dev.txt ; \
    fi && \
    rm -rf /tmp && \
    adduser \
        --disabled-password \
        --no-create-home \
        django-user
ENV PATH = "/py/bin:$PATH"
USER django-user
And this is docker-compose.yml:
version: "3.9"
services:
app:
build:
context: .
args:
- DEV=true
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
And this is the command that I use:
docker-compose run --rm app sh -c "django-admin startproject app . "
BTW, the image builds successfully.
I believe the reason this is happening is a very simple error that's hard to see:
ENV PATH = "/py/bin:$PATH"
should be
ENV PATH="/py/bin:$PATH"
You might also run into an issue with django-user; the USER django-user line is fine, so you can use it as pasted.
Everything else looks correct.
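A quick way to confirm the fix, assuming the compose file from the question (service name app), is to rebuild and check what the container resolves python to:
# After correcting the ENV line, /py/bin should be first on PATH
docker-compose build
docker-compose run --rm app sh -c "command -v python && python -m pip list"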
In normal cases, you should not use a virtualenv inside a Docker container.
See https://stackoverflow.com/a/48562835/19886776
Inside the container there is no need to create an additional "django-user" user because the container is an isolated environment.
Below is code that creates a new Django project through a Docker container.
requirements.txt
Django>=3.2.4,<3.3
Dockerfile
FROM python:3.9-alpine3.13
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt && \
    rm /tmp/requirements.txt
WORKDIR /app
docker-compose.yml
version: "3.9"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
The commands to create the new project:
docker-compose build
docker-compose run --rm app sh -c "django-admin startproject app ."
docker-compose up -d
To edit the files created by the Docker container, we need to fix the ownership of the new files:
sudo chown -R $USER:$USER app
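An alternative to chown (my own suggestion, not part of the answer above) is to create the project as the host user in the first place, so the generated files are owned correctly from the start:
# Run startproject with the host UID/GID so the new files belong to you
docker-compose run --rm --user "$(id -u):$(id -g)" app sh -c "django-admin startproject app ."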
Try pip3 install instead of pip install.
If that doesn't work, try installing Django in a separate step and check.
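For example, a hypothetical debugging layer (assuming the /py virtualenv from the question) makes it obvious in the build log where Django actually lands:
# Debug-only step: install Django by itself and list what ended up in /py
RUN /py/bin/pip install "Django>=3.2.4,<3.3" && \
    /py/bin/pip list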
I have a private repo that can be installed via Python's pip:
requirements.txt
git+https://${GITHUB_TOKEN}@github.com/MY_ACCOUNT/MY_REPO.git
And a Dockerfile:
Dockerfile
FROM python:3.8.11
RUN apt-get update && \
    apt-get -y install gcc curl && \
    rm -rf /var/lib/apt/lists/*
ARG GITHUB_TOKEN
COPY ./requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt
It worked perfectly when I built the image:
$ docker build . --build-arg GITHUB_TOKEN=THIS_IS_MY_GITHUB_TOKEN -t wow/my_app:latest
But when I inspect the image, it shows GITHUB_TOKEN in the Cmd section:
$ docker image inspect wow/my_app:latest
...
"ContainerConfig": {
...
"Cmd": [
"|1",
"GITHUB_TOKEN=THIS_IS_MY_GITHUB_TOKEN", # Here!
"/bin/sh",
"-c",
"pip install -r /tmp/requirements.txt"
],
...
},
...
I think this could lead to a security problem. How can I solve this so that no credential info appears in docker inspect?
If you build your image using BuildKit, you can take advantage of Docker build secrets.
You would structure your Dockerfile something like this:
FROM python:3.8.11
RUN apt-get update && \
    apt-get -y install gcc curl && \
    rm -rf /var/lib/apt/lists/*
COPY ./requirements.txt /tmp/requirements.txt
RUN --mount=type=secret,id=GITHUB_TOKEN \
    GITHUB_TOKEN=$(cat /run/secrets/GITHUB_TOKEN) \
    pip install -r /tmp/requirements.txt
And then if you have a GITHUB_TOKEN environment variable in your local environment, you could run:
docker buildx build --secret id=GITHUB_TOKEN -t myimage .
Or if you have the value in a file, you could run:
docker buildx build \
--secret id=GITHUB_TOKEN,src=github_token.txt \
-t myimage .
In either case, the setting will not be baked into the resulting image. See the linked documentation for more information.
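As a sanity check (my own addition, not from the answer above), you can grep the image metadata and layer history to confirm the token is really absent:
# Neither command should print the token when build secrets are used
docker image inspect myimage | grep GITHUB_TOKEN || echo "not in image config"
docker history --no-trunc myimage | grep GITHUB_TOKEN || echo "not in layer history"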
I'm practicing writing Dockerfiles, and I can't understand why this one doesn't work.
The Python/Django project is on GitHub:
https://github.com/BrianRuizy/covid19-dashboard
At the moment I have the following Dockerfile. If you know how to write Dockerfiles, please help me figure out what my mistake is.
FROM python:3.8-slim
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential
ENV PYTHONUNBUFFERED=1
ADD requirements.txt /
RUN pip install -r /requirements.txt
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "127.0.0.1:8000"]
You never copied your files into the container. An example would look something like this; it also avoids running as the superuser.
FROM python:3.8-slim
#INSTALL REQUIREMENTS
RUN apt-get update
RUN apt-get -y install default-libmysqlclient-dev gcc wget
# create the app user
RUN useradd -u 1000 -ms /bin/bash app && mkdir /app && chown -R app:app /app
# copy the requirements
COPY --chown=app:app ./requirements.txt /app/requirements.txt
COPY --chown=app:app ./deploy-requirements.txt /app/deploy-requirements.txt
WORKDIR /app
# and install them
RUN pip install -r requirements.txt && pip install -r deploy-requirements.txt
#####
#
# everything below runs as the 'app' user for security reasons
#
#####
USER app
#COPY APP
COPY --chown=app:app . /app
WORKDIR /app
#RUN
ENTRYPOINT gunicorn -u app -g app --bind 0.0.0.0:8000 PROJECT.asgi:application -k uvicorn.workers.UvicornWorker
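Note that this Dockerfile expects a deploy-requirements.txt that is not shown in the question; a minimal guess at its contents, based on the gunicorn/uvicorn entrypoint above, might be:
# deploy-requirements.txt (hypothetical contents)
gunicorn
uvicorn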
I am trying to run a Docker base image but am encountering the error /bin/sh: 1: python: not found. I first build a parent image and then modify it using the bash script below:
#!/usr/bin/env bash
docker build -t <image_name>:latest .
docker run <image_name>:latest
docker push <image_name>:latest
and the Dockerfile:
FROM ubuntu:18.04
# Installing Python
RUN apt-get update \
    && apt-get install -y python3-pip python3-dev \
    && cd /usr/local/bin \
    && ln -s /usr/bin/python3 python \
    && pip3 install Pillow boto3
WORKDIR /app
After that, I run the following script to create and run the base image:
#!/usr/bin/env bash
docker build -t <base_image_name>:latest .
docker run -it <base_image_name>:latest
with the following Dockerfile:
FROM <image_name>:latest
COPY app.py /app
# Run app.py when the container launches
CMD python /app/app.py
I have also tried installing python through the Dockerfile of the base image, but I still get the same error.
IMHO a better solution would be to use one of the official python images.
FROM python:3.9-slim
RUN pip install --no-cache-dir Pillow boto3
WORKDIR /app
To fix the issue of python not being found: instead of
cd /usr/local/bin \
    && ln -s /usr/bin/python3 python
the OP should symlink to /usr/bin/python, not /usr/local/bin/python as they did in the original post. Another way to do this is with an absolute symlink, as below:
ln -s /usr/bin/python3 /usr/bin/python
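Either way, a quick check (my own sketch, reusing the placeholder image name from the question) is to rebuild the parent image and ask it for the interpreter directly:
# Should print a Python 3 version instead of "python: not found"
docker build -t <image_name>:latest .
docker run --rm <image_name>:latest python --version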
I have two similar Dockerfiles; they differ only in the entrypoint.
This is Dockerfile-cron:
FROM python:3.8
WORKDIR /src
COPY requirements.txt ./
COPY requirements-dev.txt ./
COPY . ./
RUN apt-get update -y
RUN apt-get install libgl1-mesa-glx -y
RUN pip install --no-cache-dir -r requirements.txt
RUN pip install --no-cache-dir -r requirements-dev.txt
RUN chmod +x ./cron.sh
CMD ./cron.sh
This is cron.sh:
#!/usr/bin/env bash
* * * * * python manage.py check_subscriptions > /proc/1/fd/1 2>/proc/1/fd/2
I have a cron service in docker-compose:
cron:
  build:
    dockerfile: deploy/Dockerfile-cron
    context: .
  networks:
    - sortif-network
When I run docker-compose up, I get this error:
cron_1 | ./cron.sh: line 2: Invoice # 1 (7).pdf: command not found