I have a Dockerfile:
FROM centos:7
ENV foo=42
then I build it
docker build -t my_docker .
and run it.
docker run -it -d my_docker
Is it possible to pass arguments from the command line and use them with if/else in the Dockerfile? I mean something like:
FROM centos:7
if (my_arg==42)
{ENV=TRUE}
else:
{ENV=FALSE}
and build with this argument.
docker build -t my_docker . --my_arg=42
It might not look that clean, but you can have your Dockerfile (conditional) as follows:
FROM centos:7
ARG arg
RUN if [[ -z "$arg" ]] ; then echo Argument not provided ; else echo Argument is $arg ; fi
and then build the image as:
docker build -t my_docker . --build-arg arg=45
or
docker build -t my_docker .
There is an interesting alternative to the proposed solutions that works with a single Dockerfile, requires only a single call to docker build per conditional build, and avoids bash.
Solution:
The following Dockerfile solves that problem. Copy-paste it and try it yourself.
ARG my_arg
FROM centos:7 AS base
RUN echo "do stuff with the centos image"
FROM base AS branch-version-1
RUN echo "this is the stage that sets VAR=TRUE"
ENV VAR=TRUE
FROM base AS branch-version-2
RUN echo "this is the stage that sets VAR=FALSE"
ENV VAR=FALSE
FROM branch-version-${my_arg} AS final
RUN echo "VAR is equal to ${VAR}"
Explanation of Dockerfile:
We first get a base image (centos:7 in your case) and put it into its own stage. The base stage should contain things that you want to do before the condition. After that, we have two more stages, representing the branches of our condition: branch-version-1 and branch-version-2. We build both of them. The final stage then chooses one of these stages, based on my_arg. A conditional Dockerfile. There you go.
Output when running:
(I abbreviated this a little...)
my_arg==2
docker build --build-arg my_arg=2 .
Step 1/12 : ARG my_arg
Step 2/12 : ARG ENV
Step 3/12 : FROM centos:7 AS base
Step 4/12 : RUN echo "do stuff with the centos image"
do stuff with the centos image
Step 5/12 : FROM base AS branch-version-1
Step 6/12 : RUN echo "this is the stage that sets VAR=TRUE"
this is the stage that sets VAR=TRUE
Step 7/12 : ENV VAR=TRUE
Step 8/12 : FROM base AS branch-version-2
Step 9/12 : RUN echo "this is the stage that sets VAR=FALSE"
this is the stage that sets VAR=FALSE
Step 10/12 : ENV VAR=FALSE
Step 11/12 : FROM branch-version-${my_arg}
Step 12/12 : RUN echo "VAR is equal to ${VAR}"
VAR is equal to FALSE
my_arg==1
docker build --build-arg my_arg=1 .
...
Step 11/12 : FROM branch-version-${my_arg}
Step 12/12 : RUN echo "VAR is equal to ${VAR}"
VAR is equal to TRUE
Thanks to Tõnis for this amazing idea!
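Worth noting: if you build with BuildKit, stages that the selected final stage does not depend on are skipped entirely, so only one branch is actually executed. A sketch of the same build:
# BuildKit resolves FROM branch-version-1 and never builds branch-version-2
DOCKER_BUILDKIT=1 docker build --build-arg my_arg=1 -t my_docker .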
Do not use the build args described in other answers where at all possible. They are an old, messy solution. Docker's target property solves this problem.
Target Example
Dockerfile
FROM foo as base
RUN ...
# Build dev image
FROM base as image-dev
RUN ...
COPY ...
# Build prod image
FROM base as image-prod
RUN ...
COPY ...
docker build --target image-dev -t foo .
The same stage can be targeted from docker-compose:
version: '3.4'
services:
  dev:
    build:
      context: .
      dockerfile: Dockerfile
      target: image-dev
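Assuming a service named dev as above, the target stage is then built with the usual compose command:
# builds only up to the image-dev stage named in target:
docker-compose build dev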
Real World
Dockerfiles get complex in the real world. Use buildkit & COPY --from for faster, more maintainable Dockerfiles:
Docker builds every stage above the target, regardless of whether it is inherited or not. Use buildkit to build only inherited stages. Docker must be v19+. Hopefully this will be a default feature soon.
Targets may share build stages. Use COPY --from to simplify inheritance.
FROM foo as base
RUN ...
WORKDIR /opt/my-proj
FROM base as npm-ci-dev
# invalidate cache
COPY --chown=www-data:www-data ./package.json /opt/my-proj/package.json
COPY --chown=www-data:www-data ./package-lock.json /opt/my-proj/package-lock.json
RUN npm ci
FROM base as npm-ci-prod
# invalidate cache
COPY --chown=www-data:www-data ./package.json /opt/my-proj/package.json
COPY --chown=www-data:www-data ./package-lock.json /opt/my-proj/package-lock.json
RUN npm ci --only=prod
FROM base as proj-files
COPY --chown=www-data:www-data ./ /opt/my-proj
FROM base as image-dev
# Will mount, not copy in dev environment
RUN ...
FROM base as image-ci
COPY --from=npm-ci-dev /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
FROM base as image-stage
COPY --from=npm-ci-prod /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
FROM base as image-prod
COPY --from=npm-ci-prod /opt/my-proj .
COPY --from=proj-files /opt/my-proj .
RUN ...
Enable experimental mode.
echo '{"experimental": true}' | sudo tee /etc/docker/daemon.json
Build with buildkit enabled. BuildKit does not write inline cache metadata by default; enable it with --build-arg BUILDKIT_INLINE_CACHE=1.
CI build job.
DOCKER_BUILDKIT=1 \
docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  --target image-ci \
  -t foo:ci \
  .
Use cache from a pulled image with --cache-from
Prod build job
docker pull foo:ci
docker pull foo:stage
DOCKER_BUILDKIT=1 \
docker build \
  --cache-from foo:ci,foo:stage \
  --target image-prod \
  -t prod \
  .
For some reason most of the answers here didn't help me (maybe it's related to my FROM image in the Dockerfile),
so I preferred to create a bash script in my workspace, combined with --build-arg, to handle the if statement during the Docker build by checking whether the argument is empty or not.
Bash script:
#!/bin/bash -x
if test -z "$1" ; then
echo "The arg is empty"
....do something....
else
echo "The arg is not empty: $1"
....do something else....
fi
Dockerfile:
FROM ...
....
ARG arg
COPY bash.sh /tmp/
RUN chmod u+x /tmp/bash.sh && /tmp/bash.sh $arg
....
Docker Build:
docker build --pull -f "Dockerfile" -t $SERVICE_NAME --build-arg arg="yes" .
Remark: This will go to the else (false) in the bash script
docker build --pull -f "Dockerfile" -t $SERVICE_NAME .
Remark: This will go to the if (true)
Edit 1:
After several tries, I found the following article and this one,
which helped me to understand 2 things:
1) ARG before FROM is outside of the build
2) The default shell is /bin/sh, which means that if/else works a little differently in the docker build. For example, you need only one "=" instead of "==" to compare strings.
So you can do this inside the Dockerfile
# default argument when not provided in the --build-arg
ARG argname=false
RUN if [ "$argname" = "false" ] ; then echo 'false'; else echo 'true'; fi
and in the docker build:
docker build --pull -f "Dockerfile" --label "service_name=${SERVICE_NAME}" -t $SERVICE_NAME --build-arg argname=true .
Just use the "test" binary directly to do this. You should also use the no-op command ":" if you don't want to specify an "else" condition, so docker does not stop with a non-zero return value error.
RUN test -z "$YOURVAR" && echo "var is not set" || echo "var is set"
RUN test -z "$YOURVAR" && echo "var is not set" || :
RUN test -z "$YOURVAR" || echo "var is set" && :
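For completeness, a minimal sketch of how YOURVAR could be wired up as a build arg (the empty default is an assumption):
FROM centos:7
ARG YOURVAR=""
RUN test -z "$YOURVAR" && echo "var is not set" || echo "var is set"
and then:
docker build --build-arg YOURVAR=something .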
The accepted answer may solve the question, but if you want multiline if conditions in the Dockerfile, you can do that by placing \ at the end of each line (similar to how you would do it in a shell script) and ending each command with ;. You can even define something like set -eux as the first command.
Example:
RUN set -eux; \
if [ -f /path/to/file ]; then \
mv /path/to/file /dest; \
fi; \
if [ -d /path/to/dir ]; then \
mv /path/to/dir /dest; \
fi
In your case:
FROM centos:7
ARG arg
RUN if [ -z "$arg" ] ; then \
echo Argument not provided; \
else \
echo Argument is $arg; \
fi
Then build with:
docker build -t my_docker . --build-arg arg=42
According to the documentation for the docker build command, there is a parameter called --build-arg.
Example usage:
docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
IMO it's what you need :)
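Note that the value only becomes visible to instructions after a matching ARG declaration in the Dockerfile. A minimal sketch using the question's my_arg:
FROM centos:7
# must be declared before first use; --build-arg overrides the default
ARG my_arg=0
RUN echo "my_arg is $my_arg"
and build with:
docker build --build-arg my_arg=42 .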
Exactly as others have said, a shell script would help.
Just an additional case that IMHO is worth mentioning (for anyone else who stumbles upon this looking for an easier case): environment replacement.
Environment variables (declared with the ENV statement) can also be used in certain instructions as variables to be interpreted by the Dockerfile.
The ${variable_name} syntax also supports a few of the standard bash modifiers as specified below:
${variable:-word} indicates that if variable is set then the result will be that value. If variable is not set then word will be the result.
${variable:+word} indicates that if variable is set then word will be the result, otherwise the result is the empty string.
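A short sketch of both modifiers in a Dockerfile (the variable names are illustrative):
FROM centos:7
ENV GREETING="hello"
# prints "hello fallback": GREETING is set, MISSING is not
RUN echo "${GREETING:-fallback} ${MISSING:-fallback}"
# prints "word" followed by nothing: GREETING is set, MISSING is not
RUN echo "${GREETING:+word} ${MISSING:+word}"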
Using Bash script and Alpine/Centos
Dockerfile
# just change this to centos
FROM alpine
ARG MYARG=""
ENV E_MYARG=$MYARG
ADD . /tmp
RUN chmod +x /tmp/script.sh && /tmp/script.sh
script.sh
#!/usr/bin/env sh
if [ -z "$E_MYARG" ]; then
echo "NO PARAM PASSED"
else
echo "$E_MYARG"
fi
Passing arg:
docker build -t test --build-arg MYARG="this is a test" .
....
Step 5/5 : RUN chmod +x /tmp/script.sh && /tmp/script.sh
---> Running in 10b0e07e33fc
this is a test
Removing intermediate container 10b0e07e33fc
---> f6f085ffb284
Successfully built f6f085ffb284
Without arg:
docker build -t test .
....
Step 5/5 : RUN chmod +x /tmp/script.sh && /tmp/script.sh
---> Running in b89210b0cac0
NO PARAM PASSED
Removing intermediate container b89210b0cac0
....
I had a similar issue when setting a proxy server on a container.
The solution I'm using is an entrypoint script, and another script for environment variable configuration. Using RUN, you ensure the configuration script runs at build time, and ENTRYPOINT runs it when you start the container.
--build-arg is used on command line to set proxy user and password.
As I need the same environment variables on container startup, I used a file to "persist" it from build to run.
The entrypoint script looks like:
#!/bin/bash
# Load the script of environment variables
. /root/configproxy.sh
# Run the main container command
exec "$#"
configproxy.sh
#!/bin/bash
function start_config {
read u p < /root/proxy_credentials
export HTTP_PROXY=http://$u:$p@proxy.com:8080
export HTTPS_PROXY=https://$u:$p@proxy.com:8080
/bin/cat <<EOF > /etc/apt/apt.conf
Acquire::http::proxy "http://$u:$p@proxy.com:8080";
Acquire::https::proxy "https://$u:$p@proxy.com:8080";
EOF
}
if [ -s "/root/proxy_credentials" ]
then
start_config
fi
And in the Dockerfile, configure:
# Base Image
FROM ubuntu:18.04
ARG user
ARG pass
USER root
# -z the length of STRING is zero
# [] are an alias for test command
# if $user is not empty, write credentials file
RUN if [ ! -z "$user" ]; then echo "${user} ${pass}">/root/proxy_credentials ; fi
#copy bash scripts
COPY configproxy.sh /root
COPY startup.sh .
RUN ["/bin/bash", "-c", ". /root/configproxy.sh"]
# Install dependencies and tools
#RUN apt-get update -y && \
# apt-get install -yqq --no-install-recommends \
# vim iputils-ping
ENTRYPOINT ["./startup.sh"]
CMD ["sh", "-c", "bash"]
Build without proxy settings
docker build -t img01 -f Dockerfile .
Build with proxy settings
docker build -t img01 --build-arg user=<USER> --build-arg pass=<PASS> -f Dockerfile .
Take a look here.
You can just add a simple check:
RUN [ -z "$ARG" ] \
&& echo "ARG argument not provided." \
&& exit 1 || exit 0
I saw a lot of possible solutions, but none fit the problem I faced today. So I'm taking the time to answer the question with another possible solution that worked for me.
In my case I took advantage of the well-known if [ "$VAR" == "this" ]; then echo "do that"; fi. The caveat is that Docker doesn't like the double equals here, because RUN uses /bin/sh, where == is not valid. So we need to write it as if [ "$VAR" = "this" ]; then echo "do that"; fi.
Here is the full example that worked in my case:
FROM node:16
# Let's set args and envs
ARG APP_ENV="dev"
ARG NPM_CMD="install"
ARG USER="nodeuser"
ARG PORT=8080
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
ENV NODE_ENV=${APP_ENV}
# Let's set the starting point
WORKDIR /app
# Let's build a cache
COPY package*.json .
RUN date \
# If the environment is production or staging, omit dev packages
# If any other environment, install dev packages
&& if [ "$APP_ENV" = "production" ]; then NPM_CMD="ci --omit=dev"; fi \
&& if [ "$APP_ENV" = "staging" ]; then NPM_CMD="ci --omit=dev"; fi \
&& npm ${NPM_CMD} \
&& usermod -d /app -l ${USER} node
# Let's add the App
COPY . .
# Let's expose the App port
EXPOSE ${PORT}
# Let's set the user
USER ${USER}
# Let's set the start App command
CMD [ "node", "server.js" ]
So if the user passes the proper build argument, the docker build command will create an image of the app for production. If not, it will create an image of the app with the dev Node.js packages.
To make it work, you can call it like this:
# docker build --build-arg APP_ENV=production -t app-node .
For anyone trying to build a Windows-based image: you need to access arguments with %...% syntax for cmd.
# escape=`
# (the escape directive above makes the backtick the line-continuation character)
# Dockerfile Windows
# ...
ARG SAMPLE_ARG
RUN if %SAMPLE_ARG% == hello_world ( `
echo hehe %SAMPLE_ARG% `
) else ( `
echo haha %SAMPLE_ARG% `
)
# ...
BTW, the ARG declaration must be placed after FROM for the argument to be available in the build steps.
# The ARGs in front of FROM are for the image
ARG IMLABEL=xxxx
ARG IMVERS=x.x
FROM ${IMLABEL}:${IMVERS}
# The ARGs after FROM are for parameters to be used in the script
ARG condition_x
RUN if [ "$condition_x" = "condition-1" ]; then \
      echo "condition-1"; \
    elif [ "$condition_x" = "condition-2" ]; then \
      echo "condition-2"; \
    else \
      echo "condition-others"; \
    fi
docker build --build-arg IMLABEL=xxxx --build-arg IMVERS=x.x --build-arg condition_x=condition-1 -f Dockerfile -t image:version .
I'm trying to pass 2 parameters to a docker container for a dash app (via a shell script). Passing one parameter works, but two doesn't. Here's what happens when I pass two parameters:
command:
sudo sh create_dashboard.sh 6 4
Error:
creating docker
Running for parameter_1: 6
Running for parameter_2: 4
usage: app.py [-h] [-g parameter_1] [-v parameter_2]
app.py: error: argument -g/--parameter_1: expected one argument
The shell script:
echo "creating docker"
docker build -t dash-example .
echo "Running for parameter_1: $1 "
echo "Running for parameter_2: $2 "
docker run --rm -it -p 8080:8080 --memory=10g dash-example $1 $2
Dockerfile:
FROM python:3.8
WORKDIR /app
COPY src/requirements.txt ./
RUN pip install -r requirements.txt
COPY src /app
EXPOSE 8080
ENTRYPOINT [ "python", "app.py", "-g", "-v"]
When I use this command:
sudo sh create_dashboard.sh 6
the docker container runs perfectly, with parameter_2 being None.
You can pass a command into the shell of a container like this:
docker run --rm -it -p 8080:8080 --memory=10g dash-example sh -c "python app.py -g $1 -v $2"
So it allows arguments and any other command.
When you docker run ... dash-example $1 $2, the additional parameters are interpreted as the "command" the container should run. Since your image has an ENTRYPOINT, the words of the command are just tacked on to the end of the words of the entrypoint (see Understand how CMD and ENTRYPOINT interact in the Dockerfile documentation). There's no way to cause the words of one command to be interspersed with the words of another; you are effectively getting a command line of
python app.py -g -v 6 4
The approach I'd recommend here is to not use an ENTRYPOINT at all. Make sure you can directly run the application script (its first line should be #!/usr/bin/env python3, it should be executable) and make the image's default CMD be to run the script:
FROM python:3.9
...
# RUN chmod +x app.py # if needed
# no ENTRYPOINT at all
CMD ["./app.py"] # finds "python" via the shebang line
Then your wrapper can supply a complete command line, including the options you need to run:
#!/bin/sh
docker run --rm -it -p 8080:8080 --memory=10g dash-example \
./app.py -g "$1" -v "$2"
(There is an alternate "container as command" pattern, where the ENTRYPOINT contains the command to run and the CMD its options. This can lead to awkward docker run --entrypoint command lines for routine debugging tasks, and if the command itself is short it doesn't really save you a lot. You'd still need to repeat the -g and -v options in the wrapper.)
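For reference, a rough sketch of that alternate pattern (the default option values are made up):
# "container as command": ENTRYPOINT holds the fixed command,
# CMD supplies default options that docker run arguments replace
ENTRYPOINT ["./app.py"]
CMD ["-g", "6", "-v", "4"]
Running docker run --rm -it -p 8080:8080 dash-example -g 1 -v 2 would then override the CMD while keeping the ENTRYPOINT.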
I'm running a python script inside a docker container using crontab. Also, I set some environment variables (such as the database host, password, etc.) in a .env file in the project's directory. If I run the script manually inside the container (python3 main.py), everything works properly. But when the script is run by crontab, the environment variables are not found (None).
I have the following setup:
Dockerfile
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get -y install cron
RUN apt-get install -y python3-pip python-dev
WORKDIR /home/me/theservice
COPY . .
RUN chmod 0644 theservice-cron
RUN touch /var/log/theservice-cron.log
RUN chmod +x run.sh
ENTRYPOINT ./run.sh
run.sh
#!/bin/bash
crontab theservice-cron
cron -f
docker-compose.yml
version: '3.7'
services:
  theservice:
    build: .
    env_file:
      - ./.env
theservice-cron
HOME=/home/me/theservice
* * * * * python3 /home/me/theservice/main.py >> /var/log/theservice-cron.log 2>&1
#* * * * * cd /home/me/theservice && python3 main.py >> /var/log/theservice-cron.log 2>&1
I assumed that the cron job runs in another directory, where the environment variables set in /home/me/theservice/.env are not accessible. So I tried to add the HOME=/home/me/theservice line to the theservice-cron file, or to cd into /home/me/theservice before running the script, but it didn't help.
In the python script, I use os to access environment variables
import os
print(os.environ['db_host'])
How can I fix this problem?
I had a similar problem.
I fixed it using the following:
CMD printenv > /etc/environment && cron && tail -f /var/log/theservice-cron.log
According to
https://askubuntu.com/questions/700107/why-do-variables-set-in-my-etc-environment-show-up-in-my-cron-environment, cron reads env vars from /etc/environment.
For those fighting to get ENV variables from docker-compose into docker: simply have a shell script run at ENTRYPOINT in your Dockerfile, containing
printenv > /etc/environment
Again, the name "/etc/environment" is CRUCIAL!
And then in your crontab, have it call a shell script:
* * * * * bash -c "sh /var/www/html/cron_php.sh"
The script simply does:
#!/bin/bash
cd /var/www/html
php whatever.php
You will now have the docker-compose environment variables in your PHP cron application. It took me a full day to figure this out. Hope I save someone the trouble now!
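A minimal sketch of such an entrypoint script (the filename and the foreground cron start are assumptions based on the setups above):
#!/bin/bash
# expose the docker-compose environment to cron jobs
printenv > /etc/environment
# keep the container alive by running cron in the foreground
cron -f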
UPDATE:
In Azure Docker (Web App) the mechanism doesn't seem to work. A small tweak is needed:
In the Dockerfile's ENTRYPOINT sh script, write a file /etc/environments.sh (and chmod it to execution rights, chmod 770) using this command:
eval $(printenv | awk -F= '{print "export "$1"=\""$2"\""}' >> /etc/environments.sh)
Then, in your crontab shell where you execute php, do this:
#!/bin/bash
. /etc/environments.sh
php whatever.php
Notice the "." instead of source. Even though the Docker container is Linux using bash, source did not do the trick, the . did work.
Note: In my local Windows Docker the first solution, using /etc/envrionment worked fine. I was baffled to find out that on Azure the second fix was needed.
I'd like to be able to configure env variables for my docker containers and use them in the build process with a .env file.
I currently have the following .env file:
SSH_PRIVATE_KEY=TEST
APP_PORT=8040
my docker-compose:
version: '3'
services:
  companies:
    image: companies8
    environment:
      - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
    ports:
      - ${APP_PORT}:${APP_PORT}
    env_file: .env
    build:
      context: .
      args:
        - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
my Dockerfile:
FROM python:3.7
# set a directory for the app
COPY . .
#Accept input argument from docker-compose.yml
ARG SSH_PRIVATE_KEY=abcdef
ENV SSH_PRIVATE_KEY $SSH_PRIVATE_KEY
RUN echo $SSH_PRIVATE_KEY
# Pass the content of the private key into the container
RUN mkdir -p /root/.ssh
RUN chmod 400 /root/.ssh
RUN echo "$SSH_PRIVATE_KEY" > /root/.ssh/id_rsa
RUN echo "$SSH_PUBLIC_KEY" > /root/.ssh/id_rsa.pub
RUN chmod 400 /root/.ssh/id_rsa
RUN chmod 400 /root/.ssh/id_rsa.pub
RUN eval $(ssh-agent -s) && ssh-add /root/.ssh/id_rsa && ssh-keyscan bitbucket.org > /root/.ssh/known_hosts
RUN ssh -T git@bitbucket.org
#Install the packages
RUN pip install -r v1/requirements.txt
# Tell the port number the container should expose
EXPOSE 8040
# run the command
CMD ["python", "v1/__main__.py"]
and I have the same SSH_PRIVATE_KEY environment variable set on my Windows machine with the value "test1", and the build log gives me the result 'test1' from
ENV SSH_PRIVATE_KEY $SSH_PRIVATE_KEY
RUN echo $SSH_PRIVATE_KEY
not the value that's in the .env file.
I need this because some of the libraries listed in my requirements.txt are in an internal repository and I need SSH to access them, hence the SSH private key. There might be another proper way to do this, but it's the general scenario I want to achieve: to pass env variable values from the .env file to my docker build.
There's a certain overlap between ENV and ARG: an ARG exists only during the build, while an ENV persists into the running container, and an ENV can be initialized from an ARG.
Since you already have the variable exported in your operating system, docker-compose's variable substitution picks up that value in preference to the .env file, and it ends up in the image through the ENV instruction.
But if you do not really need the variable in the image and only in the build step (as far as I see from the docker-compose file), then the ARG instruction is enough.
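A small sketch of that difference (a hypothetical minimal Dockerfile):
FROM python:3.7
# available only while the image is being built
ARG SSH_PRIVATE_KEY
# copies the build-time value into the image, so it also exists at runtime
ENV SSH_PRIVATE_KEY=$SSH_PRIVATE_KEY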
I have a Python script named app.py that has the value of the project ID,
project_id = "p007-999"
I hard code it inside the .gitlab-ci.yml file provided below,
# list of enabled stages, the default should be built, test, publish
stages:
  - build
  - publish

before_script:
  - export WE_PROJECT_ID="p007-999"
  - docker login -u "$WELANCE_REGISTRY_USER" -p "$WELANCE_REGISTRY_TOKEN" registry.welance.com

build:
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_HOST: docker:2375
  script:
    - echo $WE_PROJECT_ID
    - cd templates && pwd && yarn install && yarn prod && cd ..
    - docker build -t registry.welance.com/$WE_PROJECT_ID:$CI_COMMIT_REF_SLUG.$CI_COMMIT_SHA -f ./build/ci/Dockerfile .
I would like to automate this. I think the steps for that will be:
a. write the project_id value from the Python script to a shell
script variables.sh.
b. In the before_script: of the YML file,
execute the variables.sh and read the value from there.
How do I achieve it correctly?
You can do this with ruamel.yaml, which was specifically developed to do these kinds of round-trip updates (disclaimer: I am the author of that package).
Assuming your input is:
# list of enabled stages, the default should be built, test, publish
stages:
  - build
  - publish

before_script:
  - PID_TO_REPLACE
  - docker login -u "$WELANCE_REGISTRY_USER" -p "$WELANCE_REGISTRY_TOKEN" registry.welance.com

build:
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_HOST: docker:2375
  script:
    - echo $WE_PROJECT_ID
    - cd templates && pwd && yarn install && yarn prod && cd ..
    - docker build -t registry.welance.com/$WE_PROJECT_ID:$CI_COMMIT_REF_SLUG.$CI_COMMIT_SHA -f ./build/ci/Dockerfile .
And your code is something like:
import sys
from pathlib import Path
import ruamel.yaml

def update_project_id(path, pid):
    yaml = ruamel.yaml.YAML()
    yaml.indent(sequence=4, offset=2)  # non-standard indent of 4 for sequences
    yaml.preserve_quotes = True
    data = yaml.load(path)
    data['before_script'][0] = 'export WE_PROJECT_ID="' + pid + '"'
    yaml.dump(data, path)

file_name = Path('.gitlab-ci.yml')
project_id = "p007-999"
update_project_id(file_name, project_id)
which gives as output:
# list of enabled stages, the default should be built, test, publish
stages:
  - build
  - publish

before_script:
  - export WE_PROJECT_ID="p007-999"
  - docker login -u "$WELANCE_REGISTRY_USER" -p "$WELANCE_REGISTRY_TOKEN" registry.welance.com

build:
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_HOST: docker:2375
  script:
    - echo $WE_PROJECT_ID
    - cd templates && pwd && yarn install && yarn prod && cd ..
    - docker build -t registry.welance.com/$WE_PROJECT_ID:$CI_COMMIT_REF_SLUG.$CI_COMMIT_SHA -f ./build/ci/Dockerfile .
(including the comment, which you would lose with most other YAML loaders/dumpers)
This is almost definitely inappropriate, but I really can't help myself.
WARNING: This is destructive, and will overwrite .gitlab-ci.yml.
awk '
NR==FNR && $1=="project_id" {pid=$NF}
/WE_PROJECT_ID=/ {sub(/\".*\"/, pid)}
NR!=FNR {print > FILENAME}
' app.py .gitlab-ci.yml
In the first file only, assign the last column to pid only if the first column is exactly "project_id".
On any line in any file that assigns the variable WE_PROJECT_ID, replace the first quoted string with pid.
In any files other than the first, print all records to the current file. This is possible due to awk's nifty buffers. If you have to be told to make a back-up, don't run this.
I have a Dockerfile that calls a bash shell, which executes a command to fetch the IP address and store it in a variable. How do I return the variable's value from the bash shell to the Dockerfile? After the variable is returned in the Dockerfile, how do I access that variable in a Python file?
Dockerfile:
FROM mongo:3.0-wheezy
RUN apt-get update && apt-get install -y
WORKDIR /usr/src/mongodb
COPY . /usr/src/mongodb
## What I've tried ##
#CMD ['./script.sh']
#RUN bash 'export ip=`awk 'END{print $1}' /etc/hosts`'
ENV ip="bash 'export ip=`awk 'END{print $1}' /etc/hosts`'"
EXPOSE 27017
hello.py (boilerplate code):
mongoIP = os.environ.get('ip')
print(mongoIP)
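As a hedged sketch rather than a definitive answer: a value captured with RUN or ENV is frozen at build time, so the usual approach is to compute the IP in an entrypoint script when the container starts, where /etc/hosts describes the running container (the script name and wiring are assumptions):
#!/bin/sh
# entrypoint.sh: compute the container IP at startup and expose it as $ip
export ip=$(awk 'END{print $1}' /etc/hosts)
exec "$@"
With ENTRYPOINT ["./entrypoint.sh"] and CMD ["python", "hello.py"] in the Dockerfile, os.environ.get('ip') in hello.py would then see the value.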