Docker is taking the wrong settings file when creating an image - Python

I have a Django application where my settings are placed in a folder named settings. Inside this folder I have __init__.py, base.py, development.py and production.py.
My wsgi.py looks like this:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp_settings.settings.production")
application = get_wsgi_application()
My Dockerfile:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN mkdir /code
COPY . /code/
WORKDIR /code
RUN pip install --no-cache-dir git+https://github.com/ByteInternet/pip-install-privates.git@master#egg=pip-install-privates
RUN pip install --upgrade pip
RUN pip_install_privates --token {GITHUB-TOKEN} /code/requirements.txt
RUN playwright install --with-deps chromium
RUN playwright install-deps
RUN touch /code/logs/celery.log
RUN chmod +x /code/logs/celery.log
EXPOSE 80
My docker-compose file:
version: '3'
services:
  app:
    container_name: myapp_django_app
    build:
      context: ./backend
      dockerfile: Dockerfile
    restart: always
    command: gunicorn myapp_settings.wsgi:application --bind 0.0.0.0:80
    networks:
      - myapp_default
    ports:
      - "80:80"
    env_file:
      - ./.env
Problem
Every time I build the image, Docker takes the settings from development.py instead of production.py. I tried to change the settings module with this command:
set DJANGO_SETTINGS_MODULE=myapp_settings.settings.production
This works fine under conda/venv, where I am able to switch to production mode, but when building the Docker image the production.py file is not taken into consideration at all.
Question
Is there anything else I should be aware of that causes issues like this and how can I fix it?

YES, there is something else you need to check:
When you run your Docker container, you can specify environment variables.
If you declare the environment variable DJANGO_SETTINGS_MODULE=myapp_settings.settings.development, it will override what you specified inside wsgi.py!
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp_settings.settings.production")
The code above basically means: use "myapp_settings.settings.production" as the default, but if the environment variable DJANGO_SETTINGS_MODULE is already set, take the value of that variable instead.
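A minimal sketch of that behavior, runnable anywhere with Python:

import os

# Simulate the variable already being set before wsgi.py runs,
# e.g. via the container's env_file or `docker run -e`.
os.environ["DJANGO_SETTINGS_MODULE"] = "myapp_settings.settings.development"

# setdefault only writes the value if the key is absent.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp_settings.settings.production")

print(os.environ["DJANGO_SETTINGS_MODULE"])
# -> myapp_settings.settings.development: the pre-existing value wins.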
Edit 1
Maybe you can try specifying the environment variable inside your docker-compose file:
version: '3'
services:
  app:
    environment:
      - DJANGO_SETTINGS_MODULE=myapp_settings.settings.production
    container_name: myapp_django_app
    build:
      context: ./backend
      dockerfile: Dockerfile
    restart: always
    command: gunicorn myapp_settings.wsgi:application --bind 0.0.0.0:80
    networks:
      - myapp_default
    ports:
      - "80:80"
    env_file:
      - ./.env
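If it still picks up the development settings, it is worth checking what the running container actually sees, e.g. with docker-compose exec app printenv DJANGO_SETTINGS_MODULE, and inspecting the ./.env file loaded via env_file: a DJANGO_SETTINGS_MODULE entry there is a common source of the unexpected value.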

Related

Changes on template files inside volume not showing on Flask frontend

I am using a docker-compose Flask setup with the following configuration.
docker-compose:
version: '3'
services:
  dashboard:
    build:
      context: dashboard/
      args:
        APP_PORT: "8080"
    container_name: dashboard
    ports:
      - "8080:8080"
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_DEBUG: "False"
      APP_PORT: "8080"
    volumes:
      - ./dashboard/:/usr/src/app
dashboard/Dockerfile:
FROM python:3.7-slim-bullseye
ENV PYTHONUNBUFFERED True
ARG APP_PORT
ENV APP_HOME /usr/src/app
WORKDIR $APP_HOME
COPY requirements.txt ./requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
CMD exec gunicorn --bind :$APP_PORT --workers 1 --threads 8 --timeout 0 main:app
dashboard/main.py:
import os
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')
If I make any change to the index.html file on my host system using VSCode, the change does not appear when I refresh the page. However, I have checked inside the container with docker exec -it dashboard bash and cat /usr/src/app/templates/index.html, and the changes are reflected there, since the volume is shared between the host and the container.
If I stop the container and run it again the changes are applied, but as I am working on the frontend, doing that all the time is pretty annoying.
Why don't the changes show in the browser even though they are replicated in the container?
You should use TEMPLATES_AUTO_RELOAD=True.
From https://flask.palletsprojects.com/en/2.0.x/config/
It appears that the templates are cached after first load and won't be re-read from disk until you enable this option.
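A minimal sketch of where the option goes, assuming the app object from main.py above:

from flask import Flask, render_template

app = Flask(__name__)
# Re-check template files on every request instead of serving the
# version cached at startup.
app.config["TEMPLATES_AUTO_RELOAD"] = True

@app.route('/')
def index():
    return render_template('index.html')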

How to use docker buildx bake to build docker compose containers for both linux/armv7 and linux/amd64

I have developed a primarily Raspberry Pi app in Python that uses Redis as its local cache, so naturally I turned to Docker Compose to define all my services, i.e. redis and my app. I am using a Docker Hub private repository to host my container. But I do not understand how to use the docker buildx bake command to target the linux/armv7 platform, as the --platform flag is not part of bake.
All the examples the Docker team has shown use the plain docker buildx build command, which cannot be run against compose files.
My docker-compose.yml file is defined as:
version: '3.0'
services:
  redis:
    image: redis:alpine
  app:
    image: dockerhub/repository
    build: gateway
    restart: always
Dockerfile:
# set base image (Host OS)
FROM python:3.8-slim
# set the working directory in the container
WORKDIR /run
# copy the dependencies file to the working directory
COPY requirements.txt .
# install dependencies
RUN pip install -r requirements.txt
# copy the content of the local src directory to the working directory
COPY src/ .
# command to run on container start
CMD [ "python", "-u", "run.py" ]
Any help would be much appreciated. Thanks
You can supply the platforms parameter under the x-bake key, as shown below (reference: https://docs.docker.com/engine/reference/commandline/buildx_bake/).
# docker-compose.yml
services:
  addon:
    image: ct-addon:bar
    build:
      context: .
      dockerfile: ./Dockerfile
      args:
        CT_ECR: foo
        CT_TAG: bar
      x-bake:
        tags:
          - ct-addon:foo
          - ct-addon:alp
        platforms:
          - linux/amd64
          - linux/arm64
        cache-from:
          - user/app:cache
          - type=local,src=path/to/cache
        cache-to: type=local,dest=path/to/cache
        pull: true

  aws:
    image: ct-fake-aws:bar
    build:
      dockerfile: ./aws.Dockerfile
      args:
        CT_ECR: foo
        CT_TAG: bar
      x-bake:
        secret:
          - id=mysecret,src=./secret
          - id=mysecret2,src=./secret2
        platforms: linux/arm64
        output: type=docker
        no-cache: true
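With this in place, running docker buildx bake from the directory containing the compose file should build each service for the listed platforms; note that the x-bake fields are read only by buildx bake, not by a plain docker compose build.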

How to Hot-Reload in ReactJS Docker

This might sound simple, but I have this problem.
I have two Docker containers running: one for my front-end and one for my back-end services.
These are the Dockerfiles for both services.
Front-end Dockerfile:
# Use an official node runtime as a parent image
FROM node:8
WORKDIR /app
# Install dependencies
COPY package.json /app
RUN npm install --silent
# Add rest of the client code
COPY . /app
EXPOSE 3000
CMD npm start
Back-end Dockerfile:
FROM python:3.7.7
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY server.py /usr/src/app
COPY . /usr/src/app
EXPOSE 8083
# CMD ["python3", "-m", "http.server", "8080"]
CMD ["python3", "./server.py"]
I am building images with the docker-compose.yaml as below:
version: "3.2"
services:
frontend:
build: ./frontend
ports:
- 80:3000
depends_on:
- backend
backend:
build: ./backends/banuka
ports:
- 8080:8083
How can I make these two services update whenever there is a change to the front-end or back-end code?
I found this repo, which is a boilerplate for React, python-flask and PostgreSQL, and which says it has hot reload enabled for both the React frontend and the python-flask backend, but I couldn't find anything related to that in it. Can someone help me?
repo link
What I want is: after every code change the container should be up-to-date automatically!
Try this in your docker-compose.yml
version: "3.2"
services:
frontend:
build: ./frontend
environment:
CHOKIDAR_USEPOLLING: "true"
volumes:
- /app/node_modules
- ./frontend:/app
ports:
- 80:3000
depends_on:
- backend
backend:
build: ./backends/banuka
environment:
CHOKIDAR_USEPOLLING: "true"
volumes:
- ./backends/banuka:/app
ports:
- 8080:8083
Basically you need the chokidar environment variable to enable polling-based hot reloading, and you need the volume bindings so the code on your machine is the code the container sees. See if this works.
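One detail worth noting: the bare /app/node_modules entry is an anonymous volume that masks that directory, so the dependencies installed inside the image are not hidden by the bind mount of ./frontend from the host.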
If you are mapping your React container's port to a different host port:
ports:
  - "30000:3000"
you may need to tell the WebSocketClient to look at the correct port:
environment:
  - CHOKIDAR_USEPOLLING=true # create-react-app <= 5.x
  - WATCHPACK_POLLING=true # create-react-app >= 5.x
  - FAST_REFRESH=false
  - WDS_SOCKET_PORT=30000 # The mapped port on your host machine
See related issue:
https://github.com/facebook/create-react-app/issues/11779

Docker compose "executable file not found in $PATH": unknown

I'm having a problem.
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 0
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
compose.yml:
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./docker/data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=sampledb
      - POSTGRES_USER=sampleuser
      - POSTGRES_PASSWORD=samplesecret
      - POSTGRES_INITDB_ARGS=--encoding=UTF-8
  django:
    build: .
    environment:
      - DJANGO_DEBUG=True
      - DJANGO_DB_HOST=db
      - DJANGO_DB_PORT=5432
      - DJANGO_DB_NAME=sampledb
      - DJANGO_DB_USERNAME=sampleuser
      - DJANGO_DB_PASSWORD=samplesecret
      - DJANGO_SECRET_KEY=dev_secret_key
    ports:
      - "8000:8000"
    command:
      - python3 manage.py runserver
    volumes:
      - .:/code
Error:
ERROR: for django Cannot start service django: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"python3 manage.py runserver\": executable file not found in $PATH": unknown
At first, I thought the manage.py invocation was wrong. But when I tried the command ls, to my surprise, it succeeded. Then I tried the ls -al command, and it failed.
I think a command containing a space is causing the problem.
How can I fix it?
When you use list syntax in the docker-compose.yml file, each item is taken as a word. You're running the shell equivalent of
'python3 manage.py runserver'
You can either break this up into separate words yourself
command:
  - python3
  - manage.py
  - runserver
or have Docker Compose do it for you
command: python3 manage.py runserver
In general fixed properties of the image like this should be specified in the Dockerfile, not in the docker-compose.yml. Every time you run this image you're going to want to run this same command, and you're going to want to run the code built into the image. There are two syntaxes, with the same basic difference:
# Explicitly write out the words
CMD ["python3", "manage.py", "runserver"]
# Docker wraps in sh -c '...' which splits words for you
CMD python3 manage.py runserver
With the code built into the image and a reasonable default command defined there, you can delete the volumes: and command: from your docker-compose.yml file.
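A minimal Python sketch of the same exec behavior (the commands here are hypothetical stand-ins; runnable anywhere python3 is on $PATH):

import subprocess

# One list item is one argv word: the entire string is looked up as a
# single executable name, which is exactly what the container error shows.
try:
    subprocess.run(["python3 --version"])
except FileNotFoundError as err:
    print("fails like the container:", err)

# Each word as its own list item resolves python3 on $PATH normally.
subprocess.run(["python3", "--version"])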

docker-compose not setting environment variables with flask

I am running a Flask container, and when I try to read environment variables, they are not set by Docker Compose. I am using docker-compose file version 2.
compose file:
services:
  test:
    build: ./test
    image: test:1.0
    container_name: test_flask
    ports:
      - "80:80"
    env_file: .env
    environment:
      - COUCHDB=http://192.168.99.100:5984
    depends_on:
      - couchdb
I have tried both the env_file and environment directives.
I have also tried putting the values in double quotes, single quotes, and no quotes; none worked.
the .env file contains:
COUCHDB="http://192.168.99.100:5984", also tried without quotes
Then I read the variables from Python code like this:
COUCH_SERVER = os.environ["COUCHDB"]
I also tried:
os.environ.get('COUCHDB')
Neither worked.
The server is started from the Dockerfile like this:
CMD service apache2 restart
I start the container with the command:
docker-compose up test
I am using Docker Toolbox for Windows with this version:
Client:
 Version: 1.13.1
 API version: 1.26
 Go version: go1.7.5
 Git commit: 092cba3
 Built: Wed Feb 8 08:47:51 2017
 OS/Arch: windows/amd64
Server:
 Version: 17.04.0-ce
 API version: 1.28 (minimum version 1.12)
 Go version: go1.7.5
 Git commit: 4845c56
 Built: Wed Apr 5 18:45:47 2017
 OS/Arch: linux/amd64
 Experimental: false
Thank you for your help
Here is an example of how to get the environment variables from the application.
docker-compose.yml
version: '2'
services:
  app:
    image: python:2.7
    environment:
      - BAR=FOO
    volumes:
      - ./app.py:/app.py
    command: python app.py
app.py
import os
print(os.environ["BAR"])
I personally feel that the env variables should be set with a name and value under the environment section. At least this is what I have worked with in Kubernetes deployment YAML:
env:
  - name: CouchDB
    value: "Couch DB URL"
  - name: "ENV Variable 1"
    value: 'Some value for the variable'
Then you could exec into the pod and echo the environment variable, just to be sure.
As you mentioned, accessing it through os.environ in your Python script should get you going.
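A small sketch of a defensive read (the fallback URL here is a hypothetical default, not from the question):

import os

# os.environ["COUCHDB"] raises KeyError when the variable is missing;
# .get() lets you fall back to a default or fail with a clearer message.
COUCH_SERVER = os.environ.get("COUCHDB", "http://localhost:5984")
print("Using CouchDB at:", COUCH_SERVER)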
I also have another example: you can set the env variable in the Dockerfile, like this: ENV SECRET="abcdefg".
So the complete Dockerfile will look like this:
FROM python:3.7.9
LABEL maintainer_name="Omar Magdy"
COPY requirements.txt requirements.txt
ENV SECRET="abcdefg"
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
ENV FLASK_DEBUG=0
CMD ["flask", "run"]
Now in the docker-compose.yml:
version: "3.9"
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- "the_data_base:/db"
volumes:
the_data_base:
external: true
This assumes you have created an external volume called "the_data_base" to store the data.
So my solution suggests creating the environment variable inside the Dockerfile instead of inside the docker-compose file.
That way the environment variable is set when the image is built and is present in every container created from it.
:) :)
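One caveat: values set with ENV are baked into the image and are visible with docker image inspect (or docker history), so real secrets are better passed at runtime via environment or env_file than hard-coded in the Dockerfile.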
