Docker-compose with extra_hosts fails, but docker build add-host succeeds - python

Why can't I map this docker command, which pip-installs packages from a local network repository hosted at nexus.corp.com:
$> docker build -t demo --no-cache --add-host nexus.corp.com:1.2.3.4 .
which succeeds, into this docker-compose configuration:
version: "3"
services:
app:
build:
context: .
extra_hosts: ['nexus.corp.com:1.2.3.4']
command: >
sh -c "ping -c 4 nexus.corp.com"
which fails during the build step, when pip installs packages from the local repository?
Dockerfile
FROM python:3.8-slim
ENV PYTHONUNBUFFERED 1
# Install postgres client
RUN apt-get update
RUN apt-get install -y python3.8-dev
# For testing/debugging
RUN apt-get install -y iputils-ping
RUN pip install -U pip setuptools
WORKDIR /work
# use the custom pip config (see below)
COPY ./pip.conf /etc/pip.conf
# Install a package hosted at the custom location
RUN pip3 install custom_package
pip.conf
[global]
index = https://nexus.corp.com/repository/corp-pypi-group/pypi
index-url = https://nexus.corp.com/repository/corp-pypi-group/simple
All of this networking goes over a VPN, so the nexus.corp.com name isn't resolvable through DNS.
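For context on the difference: extra_hosts declared at the service level only applies to the running container, while --add-host applies to the image build. A minimal sketch of also passing the mapping at build time, assuming a Compose version new enough to accept extra_hosts under the build key (the IP is the placeholder from the question):

services:
  app:
    build:
      context: .
      extra_hosts:
        - "nexus.corp.com:1.2.3.4"   # available while the image is built (pip install)
    extra_hosts:
      - "nexus.corp.com:1.2.3.4"     # available in the running container (ping)
    command: >
      sh -c "ping -c 4 nexus.corp.com"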

Related

Getting error "SCRAM authentication requires libpq version 10 or above" (testdriven.io tutorial)

I am working through a tutorial (Test-Driven Development with Python, Flask and Docker) from testdriven.io, and when running the command:
docker-compose exec api python manage.py recreate_db
I am getting the following error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) SCRAM authentication requires libpq version 10 or above
From the research I have done, this is due to libpq not being the correct version for psycopg2-binary. I have tried quite a few of the suggestions, like having the following in my Dockerfile before installing the requirements:
RUN apt-get -qq update && apt-get -qq install curl libpq-dev gcc 1> /dev/null
I have also tried using psycopg2 instead of psycopg2-binary.
I tried various docker images, but still cannot proceed past this point. Any help would be greatly appreciated.
My system:
Macbook Pro
Monterey Version 12.1
Apple M1 Pro chip
Requirements.txt file:
flask==2.1.1
flask-restx==0.5.1
Flask-SQLAlchemy==2.5.1
psycopg2-binary==2.9.3
pytest==7.1.1
Dockerfile:
# pull official base image
FROM python:3.10.3-slim-buster
# set working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
#RUN apt-get -qq update && apt-get -qq install curl libpq-dev gcc 1> /dev/null
# install system dependencies
RUN apt-get update \
&& apt-get -y install netcat gcc postgresql \
&& apt-get clean
# add and install requirements
COPY ./requirements.txt .
RUN pip install -U --trusted-host files.pythonhosted.org --trusted-host pypi.org -r requirements.txt
# add app
COPY . .
# add entrypoint.sh
COPY ./entrypoint.sh .
RUN chmod +x /usr/src/app/entrypoint.sh
I was able to get this working with the help of @Michael Herman: I downgraded the postgres image from version 14 to 11, and this resolved the issue.
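For illustration, the pin amounts to one line in docker-compose.yml; a minimal sketch, where the service name and the alpine tag are my assumptions rather than the tutorial's exact values:

services:
  db:
    image: postgres:11-alpine   # pin to Postgres 11 instead of 14
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres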

Docker Ubuntu 20 unable to install msodbcsql17 or msodbcsql13 (SQL Server ODBC Driver 13 or 17)

Unfortunately I'm a bit desperate.
I have created a Docker image and am running everything with docker-compose.
If I run docker-compose up I get this error:
| django.core.exceptions.ImproperlyConfigured: 'mssql' isn't an available database backend or couldn't be imported. Check the above exception. To use one of the built-in backends, use 'django.db.backends.XXX', where XXX is one of:
web_1 | 'mysql', 'oracle', 'postgresql', 'sqlite3'
If I look at the output of pip list, there are too few packages:
docker-compose run web pip list
Package Version
---------- -------
asgiref 3.5.2
Django 4.0.4
pip 22.0.4
psycopg2 2.9.3
setuptools 58.1.0
sqlparse 0.4.2
wheel 0.37.1
Dockerfile
FROM ubuntu:20.04
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN apt update -y && apt upgrade -y && apt-get update
RUN apt-get install -y pip curl git python3-pip openjdk-8-jdk unixodbc-dev
#RUN pip install --upgrade pip
RUN pip install -r requirements.txt
#ADD SQL SERVER ODBC Driver 17 for Ubuntu 20.04
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN ACCEPT_EULA=Y apt-get install -y --allow-unauthenticated msodbcsql17
RUN ACCEPT_EULA=Y apt-get install -y --allow-unauthenticated mssql-tools
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
COPY . /code/
requirements.txt
Django>=4.0
psycopg2>=2.8
django-mssql==1.8
djangorestframework==3.13.1
pymssql==2.2.3
pyodbc==4.0.32
pyparsing==3.0.4
setuptools==61.2.0
sqlparse==0.4.1
docker-compose.yml
version: "3"
services:
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
docker-compose up --build -d
This rebuilds the compose project from scratch. After that I only got easy-to-fix errors: just change a few lines in the Dockerfile, and change django-mssql to mssql-django in requirements.txt.
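As a rough sketch of that last change (the comment wording is mine, not from the answer), requirements.txt swaps the unmaintained package for the one that actually provides the 'mssql' backend mentioned in the error:

# django-mssql==1.8   # old backend, incompatible with Django 4
mssql-django          # provides the 'mssql' database ENGINE for Django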

Installing python package from private gitlab repo in Dockerfile

I'm currently trying to install Python packages from a private GitLab repo. Unfortunately, I'm running into problems with the credentials. Is there any way to install this package without writing my credentials into the Dockerfile or adding my personal SSH key to it?
Dockerfile:
FROM python:3.9.12-buster AS production
RUN apt-get update && apt-get install -y git
COPY ./requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
requirements.txt:
fastapi
uvicorn
cycler~=0.10.0
networkx
python-multipart
git+https://gitlab.private.net/group/private-repo.git@commit_hash#egg=foo
Error message:
#10 3.760 Cloning https://gitlab.private.net/group/private-repo.git (to revision commit_hash) to /tmp/pip-install-q9wtmf_q/foo_commit_hash
#10 3.769 Running command git clone --filter=blob:none --quiet https://gitlab.private.net/group/private-repo.git /tmp/pip-install-q9wtmf_q/foo_commit_hash
#10 4.039 fatal: could not read Username for 'https://gitlab.private.net/group/private-repo.git': No such device or address
#10 4.060 error: subprocess-exited-with-error
Generally speaking, you can use multi-stage docker builds to make sure your credentials don't stay in the image.
In your case, you might do something like this:
FROM python:3.9.12-buster as download
RUN apt-get update && apt-get install -y git
RUN pip install --upgrade pip wheel
ARG GIT_USERNAME
ARG GIT_PASSWORD
WORKDIR /build
COPY requirements.txt .
# add password to requirements file
RUN sed -i -E "s|gitlab.private.net|$GIT_USERNAME:$GIT_PASSWORD@gitlab.private.net|" requirements.txt
# download dependencies and build wheels to /build/dist
RUN python -m pip wheel -w /build/dist -r requirements.txt
FROM python:3.9.12-buster as production
WORKDIR /app
COPY --from=download /build/dist /wheelhouse
# install dependencies from the wheels created in previous build stage
RUN pip install --no-index /wheelhouse/*.whl
COPY . .
# ... the rest of your dockerfile
In GitLab CI, you might use the build command like this:
script:
  # ...
  - docker build --build-arg GIT_USERNAME=gitlab-ci-token --build-arg GIT_PASSWORD=$CI_JOB_TOKEN -t $CI_REGISTRY_IMAGE .
Then your image will be built and the final image won't contain your credentials. It will also be smaller since you don't have to install git :)
As a side note, you can simplify this somewhat by using the GitLab PyPI package registry.
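For completeness, a hedged example of what pulling from the GitLab PyPI registry looks like; the project ID and token are placeholders, and the index URL follows GitLab's per-project pattern /api/v4/projects/<project_id>/packages/pypi/simple:

pip install foo --extra-index-url \
  "https://__token__:<personal_access_token>@gitlab.private.net/api/v4/projects/<project_id>/packages/pypi/simple"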
I also had to install dependencies from a private package repository for my Python project. This is the Dockerfile I used to build it:
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN apt-get update &&\
apt-get install -y binutils libproj-dev gettext gcc libpq-dev python3-dev build-essential python3-pip python3-setuptools python3-wheel python3-cffi libcairo2 libpango-1.0-0 libpangocairo-1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info
# You need to configure pip to pull packages from the remote private repository.
# For GitLab, you need a personal access token with read permissions to access them.
RUN pip config set global.extra-index-url https://<personal_access_token_name>:<personal_access_token>@gitlab.com/simple/
COPY . /code/
RUN --mount=type=cache,target=/root/.cache pip install -r requirements.txt
RUN --mount=type=cache,target=/root/.cache pip install -r /code/webapi/requirements.txt
WORKDIR /code/webapi
ENTRYPOINT /code/webapi/entrypoint.sh

Prevent docker-compose from reinstalling requirements.txt while using a built image

I have an app ABC that I want to run in a Docker environment. I wrote a Dockerfile, built the image abcd1234, and used that image in docker-compose.yml.
But whenever I build with docker-compose, everything in requirements.txt gets reinstalled. Can't it use the already existing image and save the time spent reinstalling?
I'm new to Docker and trying to understand all the parameters. Also, is the 'context' in docker-compose.yml correct, or should it contain a path inside the image?
PS: my docker-compose.yml is not in the same directory as the project because I'll be using multiple images to expose more ports.
docker-compose.yml:
services:
  app:
    build:
      context: /Users/user/Desktop/ABC/
    ports:
      - "8000:8000"
    image: abcd1234
    command: >
      sh -c "python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    environment:
      - PROJECT_ENV=development
Dockerfile:
FROM python:3.6-slim-buster AS build
MAINTAINER ABC
ENV PYTHONUNBUFFERED 1
RUN python3 -m venv /venv
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y git && \
apt-get install -y build-essential && \
apt-get install -y awscli && \
apt-get install -y unzip && \
apt-get install -y nano
RUN apt-get install -y libsm6 libxext6 libxrender-dev
COPY . /ABC/
RUN apt-cache search mysql-server
RUN apt-cache search libmysqlclient-dev
RUN apt-get install -y libpq-dev
RUN apt-get install -y postgresql
RUN apt-cache search postgresql-server-dev-9.5
RUN pip install --upgrade awscli==1.14.5 s3cmd==2.0.1 python-magic
RUN pip install -r /ABC/requirements.txt
WORKDIR .
Please guide me on how to tackle these 2 scenarios. Thanks!
The context: directory is the directory on your host system that includes the Dockerfile. It's the same directory you would pass to docker build, and it frequently is just the current directory ..
Within the Dockerfile, Docker can cache individual build steps so that it doesn't repeat them, but only up to the point where something changes. That "something" can be a changed RUN line; but at your COPY step, if any file at all changes in your local source tree, that also invalidates the cache for everything after it.
For this reason, a typical Dockerfile has a couple of "phases"; you can repeat this pattern in other languages too. You can restructure your Dockerfile in this order:
# 1. Base information; this almost never changes
FROM python:3.6-slim-buster AS build
MAINTAINER ABC
ENV PYTHONUNBUFFERED 1
WORKDIR /ABC
# 2. Install OS packages. Doesn't depend on your source tree.
# Frequently just one RUN line (but could be more if you need
# packages that aren't in the default OS package repository).
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get upgrade -y && \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
build-essential unzip libxrender-dev libpq-dev
# 3. Copy _only_ the file that declares language-level dependencies.
# Repeat starting from here only if this file changes.
COPY requirements.txt .
RUN pip install -r requirements.txt
# 4. Copy the rest of the application in. In a compiled language
# (Javascript/Webpack, Typescript, Java, Go, ...) build it.
COPY . .
# 5. Explain how to run the application.
EXPOSE 8000
CMD python manage.py migrate && \
python manage.py runserver 0.0.0.0:8000

Flask application on Docker with Let's Encrypt

I want to create a Flask application in a Docker instance that has HTTPS enabled using the Let's Encrypt method of obtaining an SSL cert. The cert also needs to be auto-renewed every so often (every 3 months, I think); that renewal is already handled on my server, but the Flask app needs access to the resulting files as well!
What would I need to modify on this Docker file to enable Let's encrypt?
FROM ubuntu:latest
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["web/app.py"]
You can use the docker volume feature:
A volume is a directory that is mounted between the host machine and the container.
There are two ways to create a volume with Docker:
You can declare a VOLUME instruction in the Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
VOLUME /certdir
ENTRYPOINT ["python"]
CMD ["web/app.py"]
This will create an anonymous volume with a generated ID inside /var/lib/docker/volumes.
This solution is more useful when you want to share something from the container to the host, but it is not very practical the other way around.
You can use the -v flag on docker create or docker run to add a volume to the container:
docker run -v ./certdir:/certdir web/app.py
Where /certdir is the directory inside the container and ./certdir is the one on the host, inside your project directory.
This solution will work since the host directory will be mounted inside the container at the defined location. But unless you state it clearly in some documentation or provide an easy-to-use alias for your docker run/create command, other users will not know how to define it.
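As a concrete sketch for the Let's Encrypt case, assuming certbot's default /etc/letsencrypt layout on the host and a placeholder image name, mounting the certificate directory read-only keeps renewed certs visible to the container without rebuilding the image:

docker run -d -p 443:443 \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  flask-app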
PS: a quick tip:
Put your RUN commands inside one single statement:
```
FROM ubuntu:latest
RUN apt-get update -y && apt-get upgrade -y \
&& apt-get install -y python-pip python-dev build-essential \
&& pip install --upgrade pip
COPY . /app
```
The advantage is that Docker will create only one layer for the installation of dependencies instead of three (see the documentation).
