Python Flask - volumes don't work after dockerizing

I'm trying to dockerize a Python Flask application, using volumes so that I get live updates when I change the code, but the volumes don't work and I have to stop the containers and run them again.
This is the code I'm trying to change (main.py):
from flask import Flask
import pandas as pd
import json
import os
app = Flask(__name__)
#app.route("/")
def hello():
return "Hello"
My dockerfile.dev:
FROM python:3.9.5-slim-buster
WORKDIR '/app'
COPY requirements.txt .
RUN pip3 install -r requirements.txt
RUN pip install python-dotenv
COPY ./ ./
ENV FLASK_APP=main.py
EXPOSE 5000
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
My docker-compose.yaml
version: "3"
services:
backend:
build:
context: .
dockerfile: Dockerfile.dev
ports:
- "5000:5000"
expose:
- "5000"
volumes:
- .:/app
stdin_open: true
environment:
- CHOKIDAR_USEPOLLING=true
- PGHOST=db
- PGUSER=userp
- PGDATABASE=p
- PGPASSWORD=pgpwd
- PGPORT=5432
- DB_HOST=db
- POSTGRES_DB=p
- POSTGRES_USER=userp
- POSTGRES_PASSWORD=pgpwd
depends_on:
- db
db:
image: postgres:latest
restart: always
environment:
- POSTGRES_DB=db
- DB_HOST=127.0.0.1
- POSTGRES_USER=userp
- POSTGRES_PASSWORD=pgpwd
- POSTGRES_ROOT_PASSWORD=pgpwd
volumes:
- db-data-p:/var/lib/postgresql/data
pgadmin-p:
container_name: pgadmin4_container_p
image: dpage/pgadmin4
restart: always
environment:
PGADMIN_DEFAULT_EMAIL: admin#admin.com
PGADMIN_DEFAULT_PASSWORD: root
ports:
- "5050:80"
logging:
driver: none
volumes:
db-data-p:
To start, I execute docker-compose up.
The /app volume doesn't seem to work.

Flask does not reload files by default. You need to enable that explicitly e.g. by passing --debug on the flask command line:
python3 -m flask --debug run --host=0.0.0.0
If you modify your Dockerfile to use the --debug flag...
CMD [ "python3", "-m" , "flask", "--debug", "run", "--host=0.0.0.0"]
...then it will work as you expect. You could also set the FLASK_DEBUG environment variable instead of using the --debug flag:
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "5000:5000"
    volumes:
      - .:/app
    environment:
      - FLASK_DEBUG=1
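If changes still don't show up after enabling debug mode, it's worth confirming that the bind mount itself is live. A quick check, assuming the service name backend from the compose file above: edit main.py on the host, then list it inside the running container and make sure the timestamp changed:
docker-compose exec backend ls -l /app/main.py
Note also that the --debug command-line flag only exists in Flask 2.2 and later; on older versions, setting FLASK_DEBUG=1 (as above) enables the debugger and reloader.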

Related

Django crontab can't connect to the database (PostgreSQL) with Docker; no such table error

I am using Django and PostgreSQL, with django-crontab to update data.
It runs well in my local environment, but we deploy with Docker, and I confirmed that when cron runs it uses sqlite3 instead.
I also made a separate cron container in docker-compose and ran it, but as a beginner I may be using it incorrectly. Please help.
#goods/cron.py
from goods.models import Goods
def test():
    print(Goods.objects.all())
./docker-compose.yml
version: '3.8'
volumes:
  postgres: {}
  django_media: {}
  django_static: {}
  static_volume: {}
services:
  postgres:
    container_name: postgres
    image: postgres:14.5
    volumes:
      - postgres:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
    restart: always
  nginx:
    container_name: nginx
    image: nginx:1.23.2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - django_media:/media/
      - django_static:/static/
    depends_on:
      - asgiserver
      - backend
    restart: always
  backend:  # host:container, django_backend:/app/media
    container_name: django_backend
    build: .
    entrypoint: sh -c "python manage.py migrate && gunicorn handsup.wsgi --workers=5 -b 0.0.0.0:8000"
    restart: always
    volumes:
      - ./:/app/
      - /etc/localtime:/etc/localtime:ro
      - django_media:/app/media/
      - django_static:/app/static/
    environment:
      - DEBUG
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - POSTGRES_PORT
    depends_on:
      - postgres
  redis:
    image: redis:5
  asgiserver:
    build: .
    command: daphne -b 0.0.0.0 -p 8080 handsup.asgi:application
    volumes:
      - ./:/app/
    restart: always
    environment:
      - DEBUG
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - POSTGRES_PORT
    depends_on:
      - redis
      - postgres
  cron:
    build: .
    restart: always
    volumes:
      - ./:/app/
    depends_on:
      - postgres
      - backend
    environment:
      - DEBUG
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - POSTGRES_PORT
    command: cron -f  # as a long-running foreground process
./Dockerfile
FROM python:3.10.8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /app/
WORKDIR /app/
RUN apt-get update -y
RUN apt-get install -y cron
COPY ./requirements.txt .
COPY ./ /app/
RUN pip install --no-cache-dir -r requirements.txt
# RUN service cron start
ENTRYPOINT ["./docker-entrypoint.sh"]
RUN pip install gunicorn psycopg2
./docker-entrypoint.sh
# If this is going to be a cron container, set up the crontab.
if [ "$1" = cron ]; then
./manage.py crontab add
fi
# Launch the main container command passed as arguments.
exec "$#"
I referred to the contents below.
How to make django-crontab execute commands in Docker container?
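One likely piece of the puzzle, assuming settings.py picks PostgreSQL based on the POSTGRES_* variables passed in docker-compose.yml: cron starts its jobs with an almost empty environment, so those variables are unset at run time and Django can silently fall back to a default sqlite3 database. A rough, unverified sketch is to persist the container's variables somewhere the jobs can read them when the cron container starts (the file name /app/cron_env is made up for illustration):
# docker-entrypoint.sh (sketch)
if [ "$1" = cron ]; then
    # cron gives its jobs a minimal environment, so save the database
    # settings where the jobs (or settings.py) can load them later
    printenv | grep -E '^(DEBUG|POSTGRES_)' > /app/cron_env
    ./manage.py crontab add
fi
exec "$@"
The generated crontab entries or settings.py would then need to read /app/cron_env, for example via django-crontab's CRONTAB_COMMAND_PREFIX setting; check the django-crontab documentation for the exact mechanism.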

Error: Invalid volume specification in docker compose

I am facing the error below; could you please help me resolve it?
invalid volume specification: '/run/desktop/mnt/host/d/Master/Projects/python_task/image: /app:rw': invalid mount config for type "bind": invalid mount path: ' /app' mount path must be absolute
My file structure can be seen in the attached image.
Docker-Compose:
services:
  psql_crxdb:
    image: postgres:13
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=dvd_rental
    volumes:
      - "./dvdrental_data:/var/lib/postgresql/data:rw"
    ports:
      - "5432:5432"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@admin.com
      - PGADMIN_DEFAULT_PASSWORD=root
    ports:
      - "8080:80"
  analytics:
    build:
      context: main
    environment:
      POSTGRESQL_CS: 'postgresql+psycopg2://postgres:password@psql_crxdb:5432/dvd_rental'
    depends_on:
      - psql_crxdb
    command: ["python", "./main.py"]
  pythontask:
    build:
      context: python_task
    command: ["python", "./circle.py"]
    volumes:
      - "./python_task/image: /app"
Dockerfile:
FROM python:3.9
RUN apt-get install wget
RUN pip install Pillow datetime
WORKDIR /app
COPY circle.py circle.py
ENTRYPOINT ["python", "circle.py"]
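The error message itself points at the culprit: the container-side path arrives as ' /app' with a leading space, because of the space after the colon in the volume entry. Removing that space should give a valid bind mount; a sketch of just the affected service:
  pythontask:
    build:
      context: python_task
    command: ["python", "./circle.py"]
    volumes:
      - "./python_task/image:/app"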

Docker can't run the file

When I run docker-compose up, the following error comes out, and I don't quite understand how to fix it:
ERROR: for a1e9335fc0e8_bot Cannot start service tgbot: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: exec: "python3 main.py": executable file not found in $PATH: unknown
My Dockerfile:
FROM python:latest
WORKDIR /src
COPY req.txt /src
RUN pip install -r req.txt
COPY . /src
My docker-compose.yml:
version: "3.1"
services:
db:
container_name: database
image: sameersbn/postgresql:10-2
environment:
PG_PASSWORD: $PGPASSWORD
restart: always
ports:
- 5432:5432
networks:
- botnet
volumes:
- ./pgdata:/var/lib/postgresql
tgbot:
container_name: bot
build:
context: .
command:
- python3 main.py
restart: always
networks:
- botnet
env_file:
- ".env"
depends_on:
- db
networks:
botnet:
driver: bridge
Your command: is in the list (array) format with a single element, so Compose thinks the executable file is called python3 main.py. That doesn't exist.
Change it to this and it'll work:
  tgbot:
    container_name: bot
    build:
      context: .
    command: python3 main.py
    restart: always
    networks:
      - botnet
    env_file:
      - ".env"
    depends_on:
      - db
More info here.
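If you prefer to keep the list form, each argument has to be its own element, so Compose runs python3 with main.py as its argument instead of looking for an executable literally named python3 main.py:
    command: ["python3", "main.py"]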

python: can't open file 'manage.py': [Errno 2] No such file or directory when compose docker

I run two different apps in containers:
Django app
Flask app
The Django app ran just fine. I configured my Flask app as follows.
This is the docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: username
      MYSQL_PASSWORD: pwd
      MYSQL_ROOT_PASSWORD: pwd
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
This is my Dockerfile:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
COPY . /app
CMD python main.py
Problem: when I run docker-compose up, the following error occurs:
backend_1 | python: can't open file 'manage.py': [Errno 2] No such file or directory
I don't know why it tries to open the manage.py file, since this is a Flask app and not a Django app. I need your help. Thanks in advance.
I'm not sure how this will work for you, but I resolved this by changing my docker-compose.yml to have a command parameter, so it looks like this:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python main.py'   # << updated line >>
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: username
      MYSQL_PASSWORD: pwd
      MYSQL_ROOT_PASSWORD: pwd
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
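For what it's worth, the reason the command: line helps is that it overrides whatever CMD is baked into the image, so the container stops running the old entry point that was looking for manage.py. If the Dockerfile shown above is current, rebuilding the image should also pick up its CMD python main.py without the override (assuming a stale image was the cause):
docker-compose up --build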
I started a new Flask app (this time in a virtualenv) and my problem was fixed :)

Celery not running at times in docker

I have been going bonkers over this one: the celery service in my docker-compose.yml just does not pick up tasks (sometimes). It works at times, though.
Dockerfile:
FROM python:3.6
RUN apt-get update
RUN mkdir /web_back
WORKDIR /web_back
COPY web/requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY web/ .
docker-compose.yml
(A few services taken out for the sake of clarity)
version: '3'
services:
  web_serv:
    restart: always
    build: .
    container_name: web_back_01
    env_file:
      - ./envs/web_back_01.env
    volumes:
      - ./web/:/web_back
    depends_on:
      - web_postgres
    expose:
      - 8282
    extra_hosts:
      - "dockerhost:104.10.4.11"
    command: bash -c "./initiate.sh"
  service_A:
    restart: always
    build: ../../web-service-A/A/
    container_name: web_back_service_a_01
    volumes:
      - ../../web-service-A/A.:/web-service-A
    depends_on:
      - web
    ports:
      - '5100:5100'
    command: bash -c "python server.py"
  service_B:
    restart: always
    build: ../../web-service-B/B/
    container_name: web_back_service_b_01
    volumes:
      - ../../web-service-B/B.:/web-service-B
    depends_on:
      - web
    ports:
      - '5200:5200'
    command: bash -c "python server.py"
  web_postgres:
    restart: always
    build: ./postgres
    container_name: web_postgres_01
    # restart: unless-stopped
    ports:
      - "5433:5432"
    environment:  # will be used by the init script
      LC_ALL: C.UTF-8
      POSTGRES_USER: web
      POSTGRES_PASSWORD: web
      POSTGRES_DB: web
    volumes:
      - pgdata:/var/lib/postgresql/data/
  nginx:
    restart: always
    build: ./nginx/
    container_name: web_nginx_01
    volumes:
      - ./nginx/:/etc/nginx/conf.d
      - ./logs/:/code/logs
      - ./web/static/:/static_cdn/
      - ./web/media/:/media_cdn/
    ports:
      - "80:80"
    links:
      - web_serv
  redis:
    restart: always
    container_name: web_redis_01
    ports:
      - "6379:6379"
    links:
      - web_serv
    image: redis
  celery:
    build: .
    volumes:
      - ./web/:/web_back
    container_name: web_celery_01
    command: celery -A web worker -l info
    links:
      - redis
    depends_on:
      - redis
volumes:
  pgdata:
  media:
  static:
settings.py
CELERY_BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
Notice service_A and service_B; those are the two services that at times do not get fired up.
Any help in understanding the odd behavior would be very helpful! Thanks
So, I think I ran into a similar problem. I was pulling my hair out because I was updating my worker.py and then not only would the autoreload not reflect any changes, but even when I'd rerun docker-compose up my changes would still not be reflected.
Sometimes when I'd run docker-compose up --build --force-recreate my changes would be reflected, but not reliably.
I was able to resolve this problem by doing two things:
Remove the __pycache__ in my worker's directory.
Run $ find . -name "*.pyc" -exec rm {} \; before doing docker-compose up --build --force-recreate when caching behavior persists.
I'm not 100% sure what's going on myself, but it's clear that with Celery + Docker and no autoreload, Docker has a tendency to use a cached version of the compiled task. I see a bit of chatter about ways to set up autoreload for Celery + Docker with things like watchdog or modd, but I have yet to set that up for my project.
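A related preventive step, assuming stale compiled bytecode under the bind mount really is the culprit: tell Python not to write .pyc files at all, so there is nothing stale to pick up. A sketch of where such a setting could go (it is not part of the original setup):
# Dockerfile (sketch): stop Python from writing .pyc files
ENV PYTHONDONTWRITEBYTECODE=1
The same variable can also be set per service under environment: in docker-compose.yml; the only cost is slightly slower imports on a fresh start.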
