I am running a Flask container, and when I try to read environment variables, they are not set by Docker Compose. I am using docker-compose file format version 2.
compose file:
services:
  test:
    build: ./test
    image: test:1.0
    container_name: test_flask
    ports:
      - "80:80"
    env_file: .env
    environment:
      - COUCHDB=http://192.168.99.100:5984
    depends_on:
      - couchdb
I have tried both the env_file and environment directives.
I have also tried putting the values in double quotes, single quotes, and no quotes; none of it worked.
The .env file contains:
COUCHDB="http://192.168.99.100:5984" (also tried without quotes)
Then I read the variables from Python code like this:
COUCH_SERVER = os.environ["COUCHDB"]
I also tried
os.environ.get('COUCHDB')
Neither worked.
The server is started from the Dockerfile like this:
CMD service apache2 restart
I start the container with the command:
docker-compose up test
I am using Docker Toolbox on Windows with version:
Client:
Version: 1.13.1
API version: 1.26
Go version: go1.7.5
Git commit: 092cba3
Built: Wed Feb 8 08:47:51 2017
OS/Arch: windows/amd64
Server:
Version: 17.04.0-ce
API version: 1.28 (minimum version 1.12)
Go version: go1.7.5
Git commit: 4845c56
Built: Wed Apr 5 18:45:47 2017
OS/Arch: linux/amd64
Experimental: false
Thank you for your help
Here is an example of how to get the environment variables from the application.
docker-compose.yml
version: '2'
services:
  app:
    image: python:2.7
    environment:
      - BAR=FOO
    volumes:
      - ./app.py:/app.py
    command: python app.py
app.py
import os
print(os.environ["BAR"])
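With this example, running docker-compose up app should print FOO in the service output, confirming that the environment entry reaches the Python process.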
I personally feel that the env variables must be set as name/value pairs under the environment section of your file. At least this is what I have worked with in a Kubernetes deployment YAML:
env:
  - name: CouchDB
    value: "Couch DB URL"
  - name: "ENV Variable 1"
    value: 'Some value for the variable'
Then you could exec into the pod and echo the environment variable, just to be sure.
As you mentioned, accessing it through os.environ in your Python script should get you going.
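As a minimal sketch of that read (the fallback URL here is just a placeholder, not part of the original answer), you can fall back to a default when the variable was not injected:
import os

# Read the variable, falling back to a placeholder default if it was not injected
couch_url = os.environ.get("COUCHDB", "http://localhost:5984")
print("Using CouchDB at", couch_url)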
I also have another example:
You can also set the env variable in the Dockerfile itself, like this: ENV SECRET="abcdefg".
So the complete Dockerfile will look like this:
FROM python:3.7.9
LABEL maintainer_name="Omar Magdy"
COPY requirements.txt requirements.txt
ENV SECRET="abcdefg"
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
ENV FLASK_DEBUG=0
CMD ["flask", "run"]
Now in the docker-compose.yml:
version: "3.9"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - "the_data_base:/db"
volumes:
  the_data_base:
    external: true
This assumes that you have already created an external volume called "the_data_base" to store the data in.
So my solution is to create the environment variable inside the Dockerfile instead of inside the docker-compose file.
That way the variable is baked into the image and is already set when the container is created.
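For completeness, a minimal app.py that reads the SECRET variable set above could look like this. This file is not part of the original answer; it is just an illustrative sketch assuming a basic Flask app (the Dockerfile sets FLASK_APP=app.py):
import os
from flask import Flask

app = Flask(__name__)

# SECRET comes from the ENV instruction baked into the image
SECRET = os.environ.get("SECRET", "")

@app.route("/")
def index():
    return "secret is set" if SECRET else "secret is missing"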
:) :)
Related
I have a Django application where my settings are placed in a folder named settings. Inside this folder I have __init__.py, base.py, deployment.py and production.py.
My wsgi.py looks like this:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp_settings.settings.production")
application = get_wsgi_application()
My Dockerfile:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN mkdir /code
COPY . /code/
WORKDIR /code
RUN pip install --no-cache-dir git+https://github.com/ByteInternet/pip-install-privates.git@master#egg=pip-install-privates
RUN pip install --upgrade pip
RUN pip_install_privates --token {GITHUB-TOKEN} /code/requirements.txt
RUN playwright install --with-deps chromium
RUN playwright install-deps
RUN touch /code/logs/celery.log
RUN chmod +x /code/logs/celery.log
EXPOSE 80
My docker-compose file:
version: '3'
services:
  app:
    container_name: myapp_django_app
    build:
      context: ./backend
      dockerfile: Dockerfile
    restart: always
    command: gunicorn myapp_settings.wsgi:application --bind 0.0.0.0:80
    networks:
      - myapp_default
    ports:
      - "80:80"
    env_file:
      - ./.env
Problem
Every time I create the image, Docker takes the settings from development.py instead of production.py. I tried to change my settings module using this command:
set DJANGO_SETTINGS_MODULE=myapp_settings.settings.production
It works fine when using conda/venv and I am able to switch to production mode; however, when creating the Docker image it does not take the production.py file into consideration at all.
Question
Is there anything else I should be aware of that causes issues like this and how can I fix it?
YES, there is something else you need to check:
When you run your docker container you can specify environment variables.
If you declare the environment variable DJANGO_SETTINGS_MODULE=myapp_settings.development, it will override what you specified inside wsgi.py!
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp_settings.settings.production")
The code above basically means: use "myapp_settings.settings.production" as the default, but if the environment variable DJANGO_SETTINGS_MODULE is already set, take the value of that variable instead.
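A quick way to see this behaviour in plain Python, as a standalone sketch:
import os

# Simulate Docker having already injected the variable:
os.environ["DJANGO_SETTINGS_MODULE"] = "myapp_settings.settings.development"

# setdefault only applies when the key is absent, so the existing value wins:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp_settings.settings.production")
print(os.environ["DJANGO_SETTINGS_MODULE"])  # -> myapp_settings.settings.development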
Edit 1
Maybe you can try specifying the environment variable inside your docker-compose file:
version: '3'
services:
  app:
    environment:
      - DJANGO_SETTINGS_MODULE=myapp_settings.settings.production
    container_name: myapp_django_app
    build:
      context: ./backend
      dockerfile: Dockerfile
    restart: always
    command: gunicorn myapp_settings.wsgi:application --bind 0.0.0.0:80
    networks:
      - myapp_default
    ports:
      - "80:80"
    env_file:
      - ./.env
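To double-check which settings module the running container actually uses, you can exec into the service (named app above) and print the variable, for example:
docker-compose exec app python -c "import os; print(os.environ.get('DJANGO_SETTINGS_MODULE'))"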
I have developed a Raspberry Pi app in Python that uses Redis as its local cache, so naturally I turned to Docker Compose to define all my services, i.e. redis and my app. I am using a Docker Hub private repository to host my container. But I do not get how to use the docker buildx bake command to target the linux/armv7 platform, as the --platform flag is not part of bake.
All the examples that the Docker team has shown use the plain docker buildx build command, which cannot be run against compose files.
My docker-compose.yml file is defined as:
version: '3.0'
services:
  redis:
    image: redis:alpine
  app:
    image: dockerhub/repository
    build: gateway
    restart: always
Dockerfile:
# set base image (Host OS)
FROM python:3.8-slim
# set the working directory in the container
WORKDIR /run
# copy the dependencies file to the working directory
COPY requirements.txt .
# install dependencies
RUN pip install -r requirements.txt
# copy the content of the local src directory to the working directory
COPY src/ .
# command to run on container start
CMD [ "python", "-u", "run.py" ]
Any help would be much appreciated. Thanks
You can supply the platforms parameter under the x-bake key, as shown below (reference document: https://docs.docker.com/engine/reference/commandline/buildx_bake/).
# docker-compose.yml
services:
  addon:
    image: ct-addon:bar
    build:
      context: .
      dockerfile: ./Dockerfile
      args:
        CT_ECR: foo
        CT_TAG: bar
      x-bake:
        tags:
          - ct-addon:foo
          - ct-addon:alp
        platforms:
          - linux/amd64
          - linux/arm64
        cache-from:
          - user/app:cache
          - type=local,src=path/to/cache
        cache-to: type=local,dest=path/to/cache
        pull: true

  aws:
    image: ct-fake-aws:bar
    build:
      dockerfile: ./aws.Dockerfile
      args:
        CT_ECR: foo
        CT_TAG: bar
      x-bake:
        secret:
          - id=mysecret,src=./secret
          - id=mysecret2,src=./secret2
        platforms: linux/arm64
        output: type=docker
        no-cache: true
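With the platforms listed under x-bake, you can then build all services from the compose file with docker buildx bake (pass -f docker-compose.yml explicitly if the file is not picked up automatically); bake treats each compose service as a build target.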
This might sound simple, but I have this problem.
I have two Docker containers running. One is for my front-end and the other is for my back-end services.
these are the Dockerfiles for both services.
front-end Dockerfile :
# Use an official node runtime as a parent image
FROM node:8
WORKDIR /app
# Install dependencies
COPY package.json /app
RUN npm install --silent
# Add rest of the client code
COPY . /app
EXPOSE 3000
CMD npm start
backend Dockerfile :
FROM python:3.7.7
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY server.py /usr/src/app
COPY . /usr/src/app
EXPOSE 8083
# CMD ["python3", "-m", "http.server", "8080"]
CMD ["python3", "./server.py"]
I am building images with the docker-compose.yaml as below:
version: "3.2"
services:
  frontend:
    build: ./frontend
    ports:
      - 80:3000
    depends_on:
      - backend
  backend:
    build: ./backends/banuka
    ports:
      - 8080:8083
How can I make these two services update whenever there is a change to the front-end or back-end code?
I found this repo, which is a boilerplate for ReactJS, Python Flask and PostgreSQL, and which says it has hot reload enabled for both the ReactJS frontend and the Python Flask backend. But I couldn't find anything related to that. Can someone help me?
repo link
What I want is: after every code change the containers should be up to date automatically!
Try this in your docker-compose.yml
version: "3.2"
services:
  frontend:
    build: ./frontend
    environment:
      CHOKIDAR_USEPOLLING: "true"
    volumes:
      - /app/node_modules
      - ./frontend:/app
    ports:
      - 80:3000
    depends_on:
      - backend
  backend:
    build: ./backends/banuka
    environment:
      CHOKIDAR_USEPOLLING: "true"
    volumes:
      - ./backends/banuka:/app
    ports:
      - 8080:8083
Basically you need the chokidar environment variable to enable hot reloading, and you need the volume bindings so that the code on your machine stays in sync with the code in the container. See if this works.
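Note that the bind mount only makes the changed files visible inside the backend container; the Python process itself also needs to reload them. Assuming the backend in server.py is a Flask app (as in the boilerplate mentioned in the question; this is an assumption, not part of the original answer), a sketch with the built-in reloader enabled would be:
# server.py (sketch, assuming a Flask backend)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # debug=True turns on the auto-reloader, so edits under the mounted
    # volume restart the server inside the container
    app.run(host="0.0.0.0", port=8083, debug=True)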
If you are mapping your react container's port to a different port:
ports:
  - "30000:3000"
you may need to tell the WebSocketClient to look at the correct port:
environment:
  - CHOKIDAR_USEPOLLING=true # create-ui-app <= 5.x
  - WATCHPACK_POLLING=true # create-ui-app >= 5.x
  - FAST_REFRESH=false
  - WDS_SOCKET_PORT=30000 # The mapped port on your host machine
See related issue:
https://github.com/facebook/create-react-app/issues/11779
I am trying to test my setup using Docker and TensorFlow. I am using the official TensorFlow image tensorflow/tensorflow:1.15.0rc2-gpu-py3.
My project has the minimum structure:
project/
  Dockerfile
  docker-compose.yml
  jupyter/
  README.md
I have the following Dockerfile:
# from official image
FROM tensorflow/tensorflow:1.15.0rc2-gpu-py3-jupyter
# add my notebooks so they are a part of the container
ADD ./jupyter /tf/notebooks
# copy-paste from tf github dockerfile in attempt to troubleshoot
# https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/dockerfiles/dockerfiles/gpu-jupyter.Dockerfile
WORKDIR /tf
RUN which jupyter
CMD ["jupyter-notebook --notebook-dir=/tf/notebooks --ip 0.0.0.0 --no-browser --allow-root"]
and the docker-compose.yml
version: '3'
services:
  tf:
    image: tensorflow/tensorflow:1.15.0rc2-gpu-py3-jupyter
    # mount host system volume to save updates from container
    volumes:
      - jupyter:/tf/notebooks
    ports:
      - '8888:8888'
    # added as part of troubleshooting
    build:
      context: .
      dockerfile: Dockerfile
volumes:
  jupyter:
Running docker-compose build and docker-compose up succeeds (if the CMD in the Dockerfile is commented out), but the container just exits. From the Docker Hub repository description, I thought adding the volume would auto-start a notebook.
Trying to run jupyter-notebook or jupyter notebook fails.
Thoughts on how to correct?
If you want to create a custom image from the official one, adding the notebook directory, then the image property in the docker-compose file should be the name of your local image, not tensorflow/tensorflow:1.15.0rc2-gpu-py3-jupyter. All you need in this case is the following Dockerfile:
FROM tensorflow/tensorflow:1.15.0rc2-gpu-py3-jupyter
ADD ./jupyter /tf/notebooks
In this case the docker-compose.yaml file should look like the following:
version: '3'
services:
  tf:
    image: tensorflow
    # mount host system volume to save updates from container
    volumes:
      - jupyter:/tf/notebooks
    ports:
      - '8888:8888'
    # added as part of troubleshooting
    build:
      context: .
      dockerfile: Dockerfile
volumes:
  jupyter:
Note that the image is tensorflow.
However, there is really no need to use the custom Dockerfile. Just use the following docker-compose.yaml file:
version: '3'
services:
  tf:
    image: tensorflow/tensorflow:1.15.0rc2-gpu-py3-jupyter
    # mount host system volume to save updates from container
    volumes:
      - ./jupyter:/tf/notebooks:Z
    ports:
      - '8888:8888'
It will directly map your local jupyter directory into the container and will use the official image without modification.
Note, though, that it might not work as expected on Windows due to issues with mapping host directories.
Try this:
RUN pip3 install nvidia-tensorflow
This will install TF 1.15.
I made a Docker image of a web application which is built on Python, and my web application needs a CouchDB server to start before running the program. Can anyone please tell me how I can install and run a CouchDB server in the Dockerfile of this web application? My Dockerfile is given below:
FROM python:2.7.15-alpine3.7
RUN mkdir /home/WebDocker
ADD ./Webpage1 /home/WebDocker/Webpage1
ADD ./requirements.txt /home/WebDocker/requirements.txt
WORKDIR /home/WebDocker
RUN pip install -r /home/WebDocker/requirements.txt
RUN apk update && \
    apk upgrade && \
    apk add bash vim sudo
EXPOSE 8080
ENTRYPOINT ["/bin/bash"]
Welcome to SO! I solved it by using Docker Compose to run a separate CouchDB container and a separate Python container. The relevant part of the configuration file docker-compose.yml looks like this:
# This helps to avoid routing conflicts within virtual machines:
networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: 192.168.112.0/24

# The CouchDB data is kept in a docker volume:
volumes:
  couchdb_data:

services:
  # The container couchServer uses the Dockerfile from the subdirectory CouchDB-DIR
  # and it has the hostname 'couchServer':
  couchServer:
    build:
      context: .
      dockerfile: CouchDB-DIR/Dockerfile
    ports:
      - "5984:5984"
    volumes:
      - type: volume
        source: couchdb_data
        target: /opt/couchdb/data
        read_only: False
      - type: volume
        source: ${DOCKER_VOLUMES_BASEPATH}/couchdb_log
        target: /var/log/couchdb
        read_only: False
    tty: true
    environment:
      - COUCHDB_PASSWORD=__secret__
      - COUCHDB_USER=admin

  python_app:
    build:
      context: .
      dockerfile: ./Python_DIR/Dockerfile
    ...
Within the Docker network, CouchDB can be reached at http://couchServer:5984 from the Python container. To ensure that the CouchDB data is not lost when restarting the container, it is kept in a separate Docker volume, couchdb_data.
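As a small sketch of how the Python container could talk to CouchDB over that hostname (this is not part of the original answer; it assumes the requests library is installed and reuses the credentials from the environment section above, where __secret__ is just a placeholder):
import requests

# 'couchServer' resolves to the CouchDB container on the compose network
resp = requests.get("http://couchServer:5984/_up",
                    auth=("admin", "__secret__"))
print(resp.status_code, resp.json())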
Use the environment variable DOCKER_VOLUMES_BASEPATH to determine in which directory CouchDB writes its logs. It can be defined in a .env file.
The network section is only necessary if you have routing problems.