Read the value in the YAML file from the Python script

I have a Python script named app.py that contains the project ID,
project_id = "p007-999"
and I hard-code the same value inside the .gitlab-ci.yml file provided below,
# list of enabled stages, the default should be built, test, publish
stages:
  - build
  - publish
before_script:
  - export WE_PROJECT_ID="p007-999"
  - docker login -u "$WELANCE_REGISTRY_USER" -p "$WELANCE_REGISTRY_TOKEN" registry.welance.com
build:
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_HOST: docker:2375
  script:
    - echo $WE_PROJECT_ID
    - cd templates && pwd && yarn install && yarn prod && cd ..
    - docker build -t registry.welance.com/$WE_PROJECT_ID:$CI_COMMIT_REF_SLUG.$CI_COMMIT_SHA -f ./build/ci/Dockerfile .
I would like to automate this. I think the steps for that would be:
a. write the project_id value from the Python script to a shell
script variables.sh.
b. in the before_script: of the YAML file,
execute variables.sh and read the value from there.
How do I achieve this correctly?
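For reference, a minimal shell sketch of steps (a) and (b) could look like the following (the sed extraction and the exact sourcing line are illustrative assumptions, not something from the question):

#!/bin/sh
# (a) extract project_id from app.py and write it to variables.sh
#     (assumes the assignment looks exactly like: project_id = "p007-999")
pid=$(sed -n 's/^project_id *= *"\(.*\)"/\1/p' app.py)
printf 'export WE_PROJECT_ID="%s"\n' "$pid" > variables.sh

# (b) in .gitlab-ci.yml, the first before_script entry would then become:
#       - . ./variables.sh
. ./variables.sh        # local sanity check
echo "$WE_PROJECT_ID"   # -> p007-999

The answer below takes a different route and rewrites the value inside the YAML file itself.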

You can do this with ruamel.yaml, which was specifically developed to do this
kind of round-trip update (disclaimer: I am the author of that package).
Assuming your input is:
# list of enabled stages, the default should be built, test, publish
stages:
  - build
  - publish
before_script:
  - PID_TO_REPLACE
  - docker login -u "$WELANCE_REGISTRY_USER" -p "$WELANCE_REGISTRY_TOKEN" registry.welance.com
build:
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_HOST: docker:2375
  script:
    - echo $WE_PROJECT_ID
    - cd templates && pwd && yarn install && yarn prod && cd ..
    - docker build -t registry.welance.com/$WE_PROJECT_ID:$CI_COMMIT_REF_SLUG.$CI_COMMIT_SHA -f ./build/ci/Dockerfile .
And your code is something like:
import sys
from pathlib import Path

import ruamel.yaml

def update_project_id(path, pid):
    yaml = ruamel.yaml.YAML()
    yaml.indent(sequence=4, offset=2)  # non-standard indent of 4 for sequences
    yaml.preserve_quotes = True
    data = yaml.load(path)
    data['before_script'][0] = 'export WE_PROJECT_ID="' + pid + '"'
    yaml.dump(data, path)

file_name = Path('.gitlab-ci.yml')
project_id = "p007-999"
update_project_id(file_name, project_id)
which gives as output:
# list of enabled stages, the default should be built, test, publish
stages:
  - build
  - publish
before_script:
  - export WE_PROJECT_ID="p007-999"
  - docker login -u "$WELANCE_REGISTRY_USER" -p "$WELANCE_REGISTRY_TOKEN" registry.welance.com
build:
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_HOST: docker:2375
  script:
    - echo $WE_PROJECT_ID
    - cd templates && pwd && yarn install && yarn prod && cd ..
    - docker build -t registry.welance.com/$WE_PROJECT_ID:$CI_COMMIT_REF_SLUG.$CI_COMMIT_SHA -f ./build/ci/Dockerfile .
(including the comment, which you would lose with most other YAML loaders/dumpers)

This is almost definitely inappropriate, but I really can't help myself.
WARNING: This is destructive, and will overwrite .gitlab-ci.yml.
awk '
NR==FNR && $1=="project_id" {pid=$NF}
/WE_PROJECT_ID=/ {sub(/\".*\"/, pid)}
NR!=FNR {print > FILENAME}
' app.py .gitlab-ci.yml
In the first file only, assign the last column to pid only if the first column is exactly "project_id".
On any line in any file that assigns the variable WE_PROJECT_ID, replace the first quoted string with pid.
In any files other than the first, print all records to the current file. This is possible due to awk's nifty buffers. If you have to be told to make a back-up, don't run this.
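If you want to preview the result before overwriting anything, the same awk program can be run non-destructively (a small sketch derived from the one-liner above: back the file up first, then print the rewritten pipeline to stdout instead of writing it back):

cp .gitlab-ci.yml .gitlab-ci.yml.bak
awk '
NR==FNR && $1=="project_id" {pid=$NF}
/WE_PROJECT_ID=/ {sub(/\".*\"/, pid)}
NR!=FNR
' app.py .gitlab-ci.yml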

Related

Dockerfile - chown command usage based on different env's [duplicate]

I have a Dockerfile
FROM centos:7
ENV foo=42
then I build it
docker build -t my_docker .
and run it.
docker run -it -d my_docker
Is it possible to pass arguments from the command line and use them with if/else in the Dockerfile? I mean something like
FROM centos:7
if (my_arg==42)
{ENV=TRUE}
else:
{ENV=FALSE}
and build with this argument.
docker build -t my_docker . --my_arg=42
It might not look that clean, but you can have your Dockerfile (conditional) as follows:
FROM centos:7
ARG arg
RUN if [[ -z "$arg" ]] ; then echo Argument not provided ; else echo Argument is $arg ; fi
and then build the image as:
docker build -t my_docker . --build-arg arg=45
or
docker build -t my_docker .
There is an interesting alternative to the proposed solutions that works with a single Dockerfile, requires only a single call to docker build per conditional build and avoids bash.
Solution:
The following Dockerfile solves that problem. Copy-paste it and try it yourself.
ARG my_arg
FROM centos:7 AS base
RUN echo "do stuff with the centos image"
FROM base AS branch-version-1
RUN echo "this is the stage that sets VAR=TRUE"
ENV VAR=TRUE
FROM base AS branch-version-2
RUN echo "this is the stage that sets VAR=FALSE"
ENV VAR=FALSE
FROM branch-version-${my_arg} AS final
RUN echo "VAR is equal to ${VAR}"
Explanation of Dockerfile:
We first get a base image (centos:7 in your case) and put it into its own stage. The base stage should contain things that you want to do before the condition. After that, we have two more stages, representing the branches of our condition: branch-version-1 and branch-version-2. We build both of them. The final stage then chooses one of these stages, based on my_arg. A conditional Dockerfile. There you go.
Output when running:
(I abbreviated this a little...)
my_arg==2
docker build --build-arg my_arg=2 .
Step 1/12 : ARG my_arg
Step 2/12 : ARG ENV
Step 3/12 : FROM centos:7 AS base
Step 4/12 : RUN echo "do stuff with the centos image"
do stuff with the centos image
Step 5/12 : FROM base AS branch-version-1
Step 6/12 : RUN echo "this is the stage that sets VAR=TRUE"
this is the stage that sets VAR=TRUE
Step 7/12 : ENV VAR=TRUE
Step 8/12 : FROM base AS branch-version-2
Step 9/12 : RUN echo "this is the stage that sets VAR=FALSE"
this is the stage that sets VAR=FALSE
Step 10/12 : ENV VAR=FALSE
Step 11/12 : FROM branch-version-${my_arg}
Step 12/12 : RUN echo "VAR is equal to ${VAR}"
VAR is equal to FALSE
my_arg==1
docker build --build-arg my_arg=1 .
...
Step 11/12 : FROM branch-version-${my_arg}
Step 12/12 : RUN echo "VAR is equal to ${VAR}"
VAR is equal to TRUE
Thanks to Tõnis for this amazing idea!
Avoid the build args described in other answers wherever possible. They are an old, messy solution. Docker's target property solves this issue.
Target Example
Dockerfile
FROM foo as base
RUN ...
# Build dev image
FROM base as image-dev
RUN ...
COPY ...
# Build prod image
FROM base as image-prod
RUN ...
COPY ...
docker build --target image-dev -t foo .
version: '3.4'
services:
  dev:
    build:
      context: .
      dockerfile: Dockerfile
      target: image-dev
Real World
Dockerfiles get complex in the real world. Use buildkit & COPY --from for faster, more maintainable Dockerfiles:
Docker builds every stage above the target, regardless of whether it is inherited or not. Use buildkit to build only inherited stages. Docker must be v19+. Hopefully this will be a default feature soon.
Targets may share build stages. Use COPY --from to simplify inheritance.
FROM foo as base
RUN ...
WORKDIR /opt/my-proj
FROM base as npm-ci-dev
# invalidate cache
COPY --chown=www-data:www-data ./package.json /opt/my-proj/package.json
COPY --chown=www-data:www-data ./package-lock.json /opt/my-proj/package-lock.json
RUN npm ci
FROM base as npm-ci-prod
# invalidate cache
COPY --chown=www-data:www-data ./package.json /opt/my-proj/package.json
COPY --chown=www-data:www-data ./package-lock.json /opt/my-proj/package-lock.json
RUN npm ci --only=prod
FROM base as proj-files
COPY --chown=www-data:www-data ./ /opt/my-proj
FROM base as image-dev
# Will mount, not copy in dev environment
RUN ...
FROM base as image-ci
COPY --from=npm-ci-dev /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
FROM base as image-stage
COPY --from=npm-ci-prod /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
FROM base as image-prod
COPY --from=npm-ci-prod /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
Enable experimental mode.
sudo echo '{"experimental": true}' | sudo tee /etc/docker/daemon.json
Build with BuildKit enabled. BuildKit does not embed inline cache metadata by default; enable it with --build-arg BUILDKIT_INLINE_CACHE=1 so the image can later be used with --cache-from.
CI build job.
DOCKER_BUILDKIT=1 \
docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  --target image-ci \
  -t foo:ci \
  .
Use cache from a pulled image with --cache-from
Prod build job
docker pull foo:ci
docker pull foo:stage
DOCKER_BUILDKIT=1 \
docker build \
  --cache-from foo:ci,foo:stage \
  --target image-prod \
  -t prod \
  .
For some reason most of the answers here didn't help me (maybe it's related to my FROM image in the Dockerfile),
so I preferred to create a bash script in my workspace, combined with --build-arg, to handle the if statement during the Docker build by checking whether the argument is empty or not.
Bash script:
#!/bin/bash -x
if test -z "$1" ; then
echo "The arg is empty"
....do something....
else
echo "The arg is not empty: $1"
....do something else....
fi
Dockerfile:
FROM ...
....
ARG arg
COPY bash.sh /tmp/
RUN chmod u+x /tmp/bash.sh && /tmp/bash.sh $arg
....
Docker Build:
docker build --pull -f "Dockerfile" -t $SERVICE_NAME --build-arg arg="yes" .
Remark: This will go to the else (false) in the bash script
docker build --pull -f "Dockerfile" -t $SERVICE_NAME .
Remark: This will go to the if (true)
Edit 1:
After several tries I found the following article and this one,
which helped me to understand two things:
1) an ARG before FROM is outside of the build
2) the default shell is /bin/sh, which means the if/else works a little differently in the docker build. For example, you need only one "=" instead of "==" to compare strings.
So you can do this inside the Dockerfile
# default argument when not provided in the --build-arg
ARG argname=false
RUN if [ "$argname" = "false" ] ; then echo 'false'; else echo 'true'; fi
and in the docker build:
docker build --pull -f "Dockerfile" --label "service_name=${SERVICE_NAME}" -t $SERVICE_NAME --build-arg argname=true .
Just use the "test" binary directly to do this. You also should use the noop command ":" if you don't want to specify an "else" condition, so docker does not stop with a non zero return value error.
RUN test -z "$YOURVAR" || echo "var is set" && echo "var is not set"
RUN test -z "$YOURVAR" && echo "var is not set" || :
RUN test -z "$YOURVAR" || echo "var is set" && :
The accepted answer may solve the question, but if you want multiline if conditions in the Dockerfile, you can do that by placing \ at the end of each line (similar to how you would do in a shell script) and ending each command with ;. You can even define something like set -eux as the first command.
Example:
RUN set -eux; \
if [ -f /path/to/file ]; then \
mv /path/to/file /dest; \
fi; \
if [ -d /path/to/dir ]; then \
mv /path/to/dir /dest; \
fi
In your case:
FROM centos:7
ARG arg
RUN if [ -z "$arg" ] ; then \
echo Argument not provided; \
else \
echo Argument is $arg; \
fi
Then build with:
docker build -t my_docker . --build-arg arg=42
According to the doc for the docker build command, there is a parameter called --build-arg.
Example usage:
docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
IMO it's what you need :)
Exactly as others have said, a shell script would help.
Just an additional case that IMHO is worth mentioning (for someone else who stumbles upon this, looking for an easier case): Environment replacement.
Environment variables (declared with the ENV statement) can also be used in certain instructions as variables to be interpreted by the Dockerfile.
The ${variable_name} syntax also supports a few of the standard bash modifiers as specified below:
${variable:-word} indicates that if variable is set then the result will be that value. If variable is not set then word will be the result.
${variable:+word} indicates that if variable is set then word will be the result, otherwise the result is the empty string.
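A quick shell illustration of those two modifiers (plain bash here; MY_ARG is just an illustrative name, and the same syntax works in the Dockerfile substitution described above):

#!/bin/bash
unset MY_ARG
echo "${MY_ARG:-fallback}"   # variable unset -> prints "fallback"
echo "${MY_ARG:+enabled}"    # variable unset -> prints an empty string

MY_ARG=42
echo "${MY_ARG:-fallback}"   # variable set -> prints "42"
echo "${MY_ARG:+enabled}"    # variable set -> prints "enabled"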
Using Bash script and Alpine/Centos
Dockerfile
# just change this to centos
FROM alpine
ARG MYARG=""
ENV E_MYARG=$MYARG
ADD . /tmp
RUN chmod +x /tmp/script.sh && /tmp/script.sh
script.sh
#!/usr/bin/env sh
if [ -z "$E_MYARG" ]; then
echo "NO PARAM PASSED"
else
echo $E_MYARG
fi
Passing arg:
docker build -t test --build-arg MYARG="this is a test" .
....
Step 5/5 : RUN chmod +x /tmp/script.sh && /tmp/script.sh
---> Running in 10b0e07e33fc
this is a test
Removing intermediate container 10b0e07e33fc
---> f6f085ffb284
Successfully built f6f085ffb284
Without arg:
docker build -t test .
....
Step 5/5 : RUN chmod +x /tmp/script.sh && /tmp/script.sh
---> Running in b89210b0cac0
NO PARAM PASSED
Removing intermediate container b89210b0cac0
....
I had a similar issue with setting a proxy server on a container.
The solution I'm using is an entrypoint script plus another script for environment variable configuration. Using RUN, you ensure the configuration script runs at build time, and ENTRYPOINT ensures it runs when you start the container.
--build-arg is used on the command line to set the proxy user and password.
As I need the same environment variables on container startup, I used a file to "persist" them from build to run.
The entrypoint script looks like:
#!/bin/bash
# Load the script of environment variables
. /root/configproxy.sh
# Run the main container command
exec "$#"
configproxy.sh
#!/bin/bash
function start_config {
read u p < /root/proxy_credentials
export HTTP_PROXY=http://$u:$p@proxy.com:8080
export HTTPS_PROXY=https://$u:$p@proxy.com:8080
/bin/cat <<EOF > /etc/apt/apt.conf
Acquire::http::proxy "http://$u:$p@proxy.com:8080";
Acquire::https::proxy "https://$u:$p@proxy.com:8080";
EOF
}
if [ -s "/root/proxy_credentials" ]
then
start_config
fi
And in the Dockerfile, configure:
# Base Image
FROM ubuntu:18.04
ARG user
ARG pass
USER root
# -z the length of STRING is zero
# [] are an alias for test command
# if $user is not empty, write credentials file
RUN if [ ! -z "$user" ]; then echo "${user} ${pass}">/root/proxy_credentials ; fi
#copy bash scripts
COPY configproxy.sh /root
COPY startup.sh .
RUN ["/bin/bash", "-c", ". /root/configproxy.sh"]
# Install dependencies and tools
#RUN apt-get update -y && \
# apt-get install -yqq --no-install-recommends \
# vim iputils-ping
ENTRYPOINT ["./startup.sh"]
CMD ["sh", "-c", "bash"]
Build without proxy settings
docker build -t img01 -f Dockerfile .
Build with proxy settings
docker build -t img01 --build-arg user=<USER> --build-arg pass=<PASS> -f Dockerfile .
Take a look here.
You can just add a simple check:
RUN [ -z "$ARG" ] \
&& echo "ARG argument not provided." \
&& exit 1 || exit 0
I saw a lot of possible solutions, but none of them fit the problem I faced today. So I'm taking the time to answer the question with one more possible solution that worked for me.
In my case I took advantage of the well-known if [ "$VAR" == "this" ]; then echo "do that"; fi. The caveat is that Docker doesn't like the double equals in this case (the RUN shell is /bin/sh, not bash), so we need to write it as if [ "$VAR" = "this" ]; then echo "do that"; fi.
Here is the full example that worked in my case:
FROM node:16
# Let's set args and envs
ARG APP_ENV="dev"
ARG NPM_CMD="install"
ARG USER="nodeuser"
ARG PORT=8080
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
ENV NODE_ENV=${APP_ENV}
# Let's set the starting point
WORKDIR /app
# Let's build a cache
COPY package*.json .
RUN date \
# If the environment is production or staging, omit dev packages
# If any other environment, install dev packages
&& if [ "$APP_ENV" = "production" ]; then NPM_CMD="ci --omit=dev"; fi \
&& if [ "$APP_ENV" = "staging" ]; then NPM_CMD="ci --omit=dev"; fi \
&& npm ${NPM_CMD} \
&& usermod -d /app -l ${USER} node
# Let's add the App
COPY . .
# Let's expose the App port
EXPOSE ${PORT}
# Let's set the user
USER ${USER}
# Let's set the start App command
CMD [ "node", "server.js" ]
So if the user passes the proper build argument, the docker build command will create an image of the app for production. If not, it will create an image of the app with dev Node.js packages.
To make it work, you can call it like this:
# docker build --build-arg APP_ENV=production -t app-node .
For anyone trying to build a Windows-based image, you need to access the argument with %...% for cmd.
# Dockerfile Windows
# ...
ARG SAMPLE_ARG
RUN if %SAMPLE_ARG% == hello_world ( `
echo hehe %SAMPLE_ARG% `
) else ( `
echo haha %SAMPLE_ARG% `
)
# ...
BTW, the ARG declaration must be placed after FROM if you want to use it inside the build; an ARG declared before FROM is only available to the FROM lines themselves.
# The ARGs in front of FROM are for the image
ARG IMLABEL=xxxx \
    IMVERS=x.x
FROM ${IMLABEL}:${IMVERS}
# The ARGs after FROM are for parameters to be used in the script
ARG condition_x
RUN if [ "$condition_x" = "condition-1" ]; then \
        echo "condition-1"; \
    elif [ "$condition_x" = "condition-2" ]; then \
        echo "condition-2"; \
    else \
        echo "condition-others"; \
    fi
docker build --build-arg IMLABEL --build-arg IMVERS --build-arg condition_x -f Dockerfile -t image:version .

How to install python and run a python file in a gitlab job

image: mcr.microsoft.com/dotnet/core/sdk:3.1

.deploy: &deploy
  before_script:
    - apt-get update -y
  script:
    - cd source/
    - pip install -r requirements.txt
    - python build_file.py > swagger.yml
I want to run the build_file.py file and write the output to swagger.yml. So to run the file I need to install python. How can I do that?
You can use a different Docker image for each job, so you can split your deployment stage into multiple jobs. In one of them, use for example the python:3 image to run pip and generate the swagger.yml, then define it as an artifact that will be used by the next jobs.
Example (untested!) snippet:
deploy-swagger:
  image: python:3
  stage: deploy
  script:
    - cd source/
    - pip install -r requirements.txt
    - python build_file.py > swagger.yml
  artifacts:
    paths:
      - source/swagger.yml

deploy-dotnet:
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  stage: deploy
  dependencies:
    - deploy-swagger
  script:
    - ls -l source/swagger.yml
    - ...
You could (probably should) also make the swagger generation part of a previous stage and set an expiration for the artifact. See this blog post for an example.

Gitlab with specific runner at DigitalOcean returns error during connect dial tcp: lookup docker on 67.207.67.3:53: no such host

I'm trying to implement CD for my dockerized Django app using Gitlab with a specific runner at a DigitalOcean droplet.
Here's my .gitlab-ci.yml
image:
  name: docker/compose:1.29.2
  entrypoint: [""]

services:
  - docker:dind
  # - docker:18-dind

stages:
  - build
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

before_script:
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export WEB_IMAGE=$IMAGE/web:web
  - export NGINX_IMAGE=$IMAGE/nginx:nginx
  - apk add --no-cache openssh-client bash
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
  - mkdir -p ~/.ssh
  - echo "${SSH_PRIVATE_KEY}" | tr -d '\r' > ~/.ssh/py_key
  - cat ~/.ssh/py_key
  - chmod 700 ~/.ssh
  - chmod 600 ~/.ssh/py_key
  - eval "$(ssh-agent -s)"
  - ssh-add ~/.ssh/py_key
  - ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts

build:
  stage: build
  tags:
    - pythonist
  script:
    - docker pull $IMAGE/web:web || true
    - docker pull $IMAGE/nginx:nginx || true
    - docker-compose -f docker-compose.ci.yml build
    - docker push $IMAGE/web:web
    - docker push $IMAGE/nginx:nginx

deploy:
  stage: deploy
  script:
    - chmod +x ./deploy.sh
    - scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DO_PUBLIC_IP_ADDRESS:/root/Pythonist.org/
    - bash ./deploy.sh
  only:
    - master
And here's the gitlab-runner/config.toml:
[[runners]]
name = "pythonist-on-gitlab"
url = "https://gitlab.com/ci"
token = "token"
executor = "docker"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.docker]
tls_verify = false
image = "docker/compose:1.29.2"
privileged = flase
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
# volumes = ["/cache"]
shm_size = 0
#privileged = true
volumes = ["/cache","/var/run/docker.sock:/var/run/docker.sock"]
When I push to the repository to trigger a build, here's the error I get:
$ docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
WARNING! Using --password via the CLI is insecure. Use
--password-stdin.
error during connect: Post http://docker:2375/v1.40/auth: dial tcp: lookup docker on
67.207.67.3:53: no such host
What might be wrong here?

Docker: unable to test a postgresql connection

I'm following a Python/TDD/Docker tutorial by TestDriven.io.
I built a custom image and I want to test it, but I can't manage to (I think; I'm a noob with Docker and Python, so please be patient).
This is the image: registry.gitlab.com/sineverba/warehouse:latest. It works, because I deployed it to Heroku with success.
I don't want to use docker-compose for testing the final image, so I tried:
docker network create -d bridge flask-tdd-net
export DATABASE_TEST_URL=postgres://postgres:postgres@flask-tdd-net:5432/users_dev
docker run -d --name app -e "PORT=8765" -p 5002:8765 --network=flask-tdd-net registry.gitlab.com/sineverba/warehouse:latest
docker run -d --name db -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=users_dev -p 5432:5432 --network=flask-tdd-net postgres:12-alpine
I can launch a simple
docker exec app python -V and get version, for example.
But when I launch
docker exec app python -m pytest "project/tests"
I get (trimmed down, full log here: https://pastebin.com/tYjn65ys)
self = <[AttributeError("'NoneType' object has no attribute 'drivername'") raised in repr()] SQLAlchemy object at 0x7fc74676e7f0>
app = <Flask 'project'>, sa_url = None, options = {}
def apply_driver_hacks(self, app, sa_url, options):
"""This method is called before engine creation and used to inject
driver specific hacks into the options. The `options` parameter is
a dictionary of keyword arguments that will then be used to call
the :func:`sqlalchemy.create_engine` function.
The default implementation provides some saner defaults for things
like pool sizes for MySQL and sqlite. Also it injects the setting of
`SQLALCHEMY_NATIVE_UNICODE`.
"""
> if sa_url.drivername.startswith('mysql'):
E AttributeError: 'NoneType' object has no attribute 'drivername'
I did try also (after stopping and removing containers and recreating DBs)
export DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/users
So, moving the database name from users_dev to users.
Full repo link: https://github.com/sineverba/flask-tdd-docker/tree/add-gitlab-warehouse
Thank you in advance!
Edit
I changed the env because the db link was wrong. These are the new commands, but I got the same error. I also tried exporting both env variables, without success.
docker network create -d bridge flask-tdd-net
export DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/users
export DATABASE_URL=postgres://postgres:postgres@db:5432/users
docker run -d --name app -e "PORT=8765" -p 5002:8765 --network=flask-tdd-net registry.gitlab.com/sineverba/warehouse:latest
docker run -d --name db -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=users -p 5432:5432 --network=flask-tdd-net postgres:12-alpine
docker exec app python -m pytest "project/tests"
docker container stop app && docker container rm app && docker container stop db && docker container rm db
Starting example
This is the testdriven.io example, from the GitLab integration (which I don't want to use). The only env exported for the app is DATABASE_TEST_URL.
image: docker:stable

stages:
  - build
  - test

variables:
  IMAGE: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}

build:
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:latest || true
    - docker build
      --cache-from $IMAGE:latest
      --tag $IMAGE:latest
      --file ./Dockerfile.prod
      "."
    - docker push $IMAGE:latest

test:
  stage: test
  image: $IMAGE:latest
  services:
    - postgres:latest
  variables:
    POSTGRES_DB: users
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: runner
    DATABASE_TEST_URL: postgres://runner:runner@postgres:5432/users
  script:
    - pytest "project/tests" -p no:warnings
    - flake8 project
    - black project --check
    - isort project/**/*.py --check-only
Solved
The solution is to pass the variable directly in the docker run command:
docker run -d --name app -e "PORT=8765" -p 5002:8765 -e "DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/users" --network=flask-tdd-net registry.gitlab.com/sineverba/warehouse:latest

Docker compose obtain env variables from .env file and pass to docker build

I'd like to be able to configure env variables for my Docker containers and use them in the build process via a .env file.
I currently have the following .env file:
SSH_PRIVATE_KEY=TEST
APP_PORT=8040
my docker-compose:
version: '3'
services:
  companies:
    image: companies8
    environment:
      - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
    ports:
      - ${APP_PORT}:${APP_PORT}
    env_file: .env
    build:
      context: .
      args:
        - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
my Dockerfile:
FROM python:3.7
# set a directory for the app
COPY . .
#Accept input argument from docker-compose.yml
ARG SSH_PRIVATE_KEY=abcdef
ENV SSH_PRIVATE_KEY $SSH_PRIVATE_KEY
RUN echo $SSH_PRIVATE_KEY
# Pass the content of the private key into the container
RUN mkdir -p /root/.ssh
RUN chmod 400 /root/.ssh
RUN echo "$SSH_PRIVATE_KEY" > /root/.ssh/id_rsa
RUN echo "$SSH_PUBLIC_KEY" > /root/.ssh/id_rsa.pub
RUN chmod 400 /root/.ssh/id_rsa
RUN chmod 400 /root/.ssh/id_rsa.pub
RUN eval $(ssh-agent -s) && ssh-add /root/.ssh/id_rsa && ssh-keyscan bitbucket.org > /root/.ssh/known_hosts
RUN ssh -T git@bitbucket.org
#Install the packages
RUN pip install -r v1/requirements.txt
# Tell the port number the container should expose
EXPOSE 8040
# run the command
CMD ["python", "v1/__main__.py"]
And I have the same SSH_PRIVATE_KEY environment variable set on my Windows machine with the value "test1", and the build log gives me the result 'test1' from
ENV SSH_PRIVATE_KEY $SSH_PRIVATE_KEY
RUN echo $SSH_PRIVATE_KEY
not the value that's in the .env file.
I need this because some of the libraries listed in my requirements.txt are in an internal repository and I need SSH to access them, hence the SSH private key. There might be a more proper way to do this, but this is the general scenario I want to achieve: to pass env variable values from the .env file to my docker build.
There's a certain overlap between ENV and ARG: an ARG is only available while the image is being built, while an ENV is baked into the image and is still there when the container runs.
Since you already have the variable exported in the operating system, that value is what ends up in the image via the ENV instruction.
But if you do not really need the variable in the image, only in the build step (as far as I can see from the docker-compose file), then the ARG instruction is enough.
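A quick way to confirm that precedence from the shell (a sketch, run in the project directory; docker-compose resolves ${SSH_PRIVATE_KEY} from the exported shell variable first and only falls back to .env):

export SSH_PRIVATE_KEY=test1
docker-compose config | grep SSH_PRIVATE_KEY   # resolves to test1 (shell value wins)

unset SSH_PRIVATE_KEY
docker-compose config | grep SSH_PRIVATE_KEY   # now resolves to TEST from the .env file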
