docker compose up raises ModuleNotFoundError - python

Trying to bring up my FastAPI app with docker compose. It works outside of Docker, but inside Docker it doesn't see my modules like endpoints and others. Not sure what I am doing wrong...
from endpoints import pizza_endpoints, order_endpoints
ModuleNotFoundError: No module named 'endpoints'
endpoints is a folder with an __init__.py, and the things it imports are .py files inside it.
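For reference, the folder layout is roughly this (exact file names inferred from the imports above):
backend/
├── main.py
├── requirements.txt
└── endpoints/
    ├── __init__.py
    ├── pizza_endpoints.py
    └── order_endpoints.py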
dockerfile:
FROM python:3.9-slim
COPY ./backend /backend
ENV PYTHONPATH "${PYTHONPATH}:/backend"
ENV PYTHONUNBUFFERED 1
WORKDIR /backend
EXPOSE 8000
RUN pip3 install -r requirements.txt
docker compose:
version: '3.9'
services:
  backend:
    build: .
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; uvicorn backend.main:app --host 0.0.0.0'
    ports:
      - 8008:8000
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/pypizza
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - 5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=pypizza
volumes:
  postgres_data:
main.py:
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from endpoints import pizza_endpoints, order_endpoints
from dependency import database
from SQL import models
models.Base.metadata.create_all(database.engine)
app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"]
)
app.include_router(pizza_endpoints.router, prefix="/pizza")
app.include_router(order_endpoints.router, prefix="/order")
@app.get("/")
async def welcome_page():
    return {"message": "hello"}

Related

Python Flask - volumes don't work after dockerizing

I'm trying to dockerize a Python-Flask application, also using volumes so the code updates live when I change it, but the volumes don't work and I have to stop the containers and run them again.
That is the code that I try to change (main.py):
from flask import Flask
import pandas as pd
import json
import os
app = Flask(__name__)
@app.route("/")
def hello():
    return "Hello"
My dockerfile.dev:
FROM python:3.9.5-slim-buster
WORKDIR '/app'
COPY requirements.txt .
RUN pip3 install -r requirements.txt
RUN pip install python-dotenv
COPY ./ ./
ENV FLASK_APP=main.py
EXPOSE 5000
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
My docker-compose.yaml
version: "3"
services:
backend:
build:
context: .
dockerfile: Dockerfile.dev
ports:
- "5000:5000"
expose:
- "5000"
volumes:
- .:/app
stdin_open: true
environment:
- CHOKIDAR_USEPOLLING=true
- PGHOST=db
- PGUSER=userp
- PGDATABASE=p
- PGPASSWORD=pgpwd
- PGPORT=5432
- DB_HOST=db
- POSTGRES_DB=p
- POSTGRES_USER=userp
- POSTGRES_PASSWORD=pgpwd
depends_on:
- db
db:
image: postgres:latest
restart: always
environment:
- POSTGRES_DB=db
- DB_HOST=127.0.0.1
- POSTGRES_USER=userp
- POSTGRES_PASSWORD=pgpwd
- POSTGRES_ROOT_PASSWORD=pgpwd
volumes:
- db-data-p:/var/lib/postgresql/data
pgadmin-p:
container_name: pgadmin4_container_p
image: dpage/pgadmin4
restart: always
environment:
PGADMIN_DEFAULT_EMAIL: admin#admin.com
PGADMIN_DEFAULT_PASSWORD: root
ports:
- "5050:80"
logging:
driver: none
volumes:
db-data-p:
To start, I execute docker-compose up. The /app volume doesn't seem to work.
Flask does not reload files by default. You need to enable that explicitly e.g. by passing --debug on the flask command line:
python3 -m flask --debug run --host=0.0.0.0
If you modify your Dockerfile to use the --debug flag...
CMD [ "python3", "-m" , "flask", "--debug", "run", "--host=0.0.0.0"]
...then it will work as you expect. You could also set the FLASK_DEBUG environment variable instead of using the --debug flag:
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "5000:5000"
    volumes:
      - .:/app
    environment:
      - FLASK_DEBUG=1
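Or, as a sketch based on the Dockerfile.dev from the question, you could bake the flag into the image itself with an ENV instruction instead:
FROM python:3.9.5-slim-buster
WORKDIR /app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY ./ ./
ENV FLASK_APP=main.py
# enable the reloader/debugger without passing --debug on the command line
ENV FLASK_DEBUG=1
EXPOSE 5000
CMD [ "python3", "-m", "flask", "run", "--host=0.0.0.0" ]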

Error running project with docker compose

I'm building a practice application to learn Docker. The app uses Python and FastAPI (following its tutorial), is built with a Dockerfile and Docker Compose, and connects to a PostgreSQL database through the SQLAlchemy ORM. Running the application 'normally' from the command line works without any problem, but running it with Docker Compose generates several errors like the following:
ModuleNotFoundError: No module named 'routers'
When I 'solve' that one, it generates another error which I have not been able to solve and do not understand:
ModuleNotFoundError: No module named 'sqlalchemy'
But I do have SQLAlchemy installed; requirements.txt:
SQLAlchemy==1.4.39
This is my Python code; this is the main module:
from fastapi import FastAPI
from .routers import roles
app = FastAPI()
app.include_router(roles.router)
And this is the code where the error is generated:
from fastapi import APIRouter, Depends
from sqlalchemy.orm import Session
from db.postgres_connection import SessionLocal, engine
from models import roles
from schemas import roles as schemas
roles.Base.metadata.create_all(bind=engine)
router = APIRouter()
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@router.get('/api/v1/roles/', response_model=list[schemas.RoleBase])
async def get_roles(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
    roles = get_roles(db, skip=skip, limit=limit)
    return roles
This is my Dockerfile:
FROM python:3.10.5-slim-buster
WORKDIR /code
COPY ./app ./code/app
COPY ./requirements.txt /code/
RUN pip install -r requirements.txt
EXPOSE 8000
CMD [ "uvicorn", "app.main:app", "--reload" ]
This is my docker compose file:
version: '3.9'
services:
  web:
    build: .
    ports:
      - '8000:8000'
    volumes:
      - .:/app
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: hamel
      POSTGRES_PASSWORD: contrasena
      POSTGRES_DB: bankmel
    volumes:
      - /home/isla/storage:/var/lib/postgresql/data
    ports:
      - '5432:5432'
Folder structure:
You've got a couple of problems.
In the Dockerfile, your COPY path is creating an extra directory under /code; change it to:
COPY ./app ./app
Then you will have a couple of other problems in your code.
Modify your Dockerfile as follows, I'm sure this error will be fixed:
FROM python:3.10.5-slim-buster
WORKDIR /code
COPY ./app ./app
COPY ./requirements.txt ./requirements.txt
RUN pip install -r requirements.txt
EXPOSE 8000
CMD [ "uvicorn", "app.main:app", "--reload" ]
also in docker-compose.yml you should change volume in web as follows:
volumes:
  - .:/code
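As a quick sanity check (a suggestion, adjust names to your project), you can list the files inside the container and confirm the layout matches what uvicorn's app.main:app expects:
docker compose run --rm web ls /code/app
# expect main.py plus the packages imported above: routers/ models/ schemas/ db/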

Celery Task not working with redis in flask docker container

I am trying to run a Celery task in a Flask Docker container, and I get an error like the one below when the Celery task is executed:
web_1 | sock.connect(socket_address)
web_1 | OSError: [Errno 99] Cannot assign requested address
web_1 |
web_1 | During handling of the above exception, another exception occurred: **[shown below]**
web_1 | File "/opt/venv/lib/python3.8/site-packages/redis/connection.py", line 571, in connect
web_1 | raise ConnectionError(self._error_message(e))
web_1 | redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
Without the celery task the application is working fine
docker-compose.yml
version: '3'
services:
  web:
    build: ./
    volumes:
      - ./app:/app
    ports:
      - "80:80"
    environment:
      - FLASK_APP=app/main.py
      - FLASK_DEBUG=1
      - 'RUN=flask run --host=0.0.0.0 --port=80'
    depends_on:
      - redis
  redis:
    container_name: redis
    image: redis:6.2.6
    ports:
      - "6379:6379"
    expose:
      - "6379"
  worker:
    build:
      context: ./
    hostname: worker
    command: "cd /app/routes && celery -A celery_tasks.celery worker --loglevel=info"
    volumes:
      - ./app:/app
    links:
      - redis
    depends_on:
      - redis
main.py
from flask import Flask
from instance import config, exts
from decouple import config as con
def create_app(config_class=config.Config):
    app = Flask(__name__)
    app.config.from_object(config.Config)
    app.secret_key = con('flask_secret_key')
    exts.mail.init_app(app)

    from routes.test_route import test_api
    app.register_blueprint(test_api)

    return app

app = create_app()

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True, port=80)
I am using Flask blueprint for splitting the api routes
test_route.py
from flask import Flask, render_template, Blueprint
from instance.exts import celery
test_api = Blueprint('test_api', __name__)
@test_api.route('/test/<string:name>')
def testfnn(name):
    task = celery.send_task('CeleryTask.reverse', args=[name])
    return task.id
Celery tasks are also written in separate file
celery_tasks.py
from celery import Celery
from celery.utils.log import get_task_logger
from decouple import config
import time
celery = Celery('tasks',
                broker=config('CELERY_BROKER_URL'),
                backend=config('CELERY_RESULT_BACKEND'))

class CeleryTask:
    @celery.task(name='CeleryTask.reverse')
    def reverse(string):
        time.sleep(25)
        return string[::-1]
.env
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
Dockerfile
FROM tiangolo/uwsgi-nginx:python3.8
RUN apt-get update
WORKDIR /app
ENV PYTHONUNBUFFERED 1
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN python -m pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
COPY ./app /app
CMD ["python", "app/main.py"]
requirements.txt
Flask==2.0.3
celery==5.2.3
python-decouple==3.5
Flask-Mail==0.9.1
redis==4.0.2
SQLAlchemy==1.4.32
Folder Structure
Thanks in Advance
At the end of your docker-compose.yml you can add:
networks:
  your_net_name:
    name: your_net_name
And in each container:
    networks:
      - your_net_name
These two steps will put all the containers on the same network. By default Docker creates one, but since I've had problems with the auto-generated names, I think this approach gives you more control.
Finally I'd also change your env variable to use the container address:
CELERY_BROKER_URL=redis://redis_addr/0
CELERY_RESULT_BACKEND=redis://redis_addr/0
So you'd also add this section to your redis container:
hostname: redis_addr
This way the env var will get whatever address docker has assigned to the container.
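Putting the pieces together, a sketch of how the relevant parts of the compose file could look (service names are from the question, everything else is illustrative):
version: '3'
services:
  web:
    build: ./
    networks:
      - your_net_name
    depends_on:
      - redis
  redis:
    image: redis:6.2.6
    hostname: redis_addr   # the hostname the CELERY_* URLs point at
    networks:
      - your_net_name
  worker:
    build:
      context: ./
    networks:
      - your_net_name
    depends_on:
      - redis
networks:
  your_net_name:
    name: your_net_name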

Flask docker compose reload when changing source code? [duplicate]

I want my flask server to detect changes in code and reload automatically.
I'm running this on docker container.
Whenever I change something, I have to rebuild and bring the container up again. I have no idea what's wrong. This is my first time using Flask.
Here's my tree
├── docker-compose.yml
└── web
├── Dockerfile
├── app.py
├── crawler.py
└── requirements.txt
and code (app.py):
from flask import Flask
import requests
app = Flask(__name__)
@app.route('/')
def hello_world():
    return 'Hello Flask!!'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
and docker-compose
version: '2'
services:
  web:
    build: ./web
    ports:
      - "5000:5000"
    volumes:
      - ./web:/code
Please give me some advice. Thank you in advance.
Flask supports code reload when in debug mode, as you've already done. The problem is that the application is running in a container, which isolates it from the real source code you are developing. Anyway, you can share the source between the running container and the host with volumes in your docker-compose.yaml like this:
Here is the docker-compose.yaml
version: "3"
services:
web:
build: ./web
ports: ['5000:5000']
volumes: ['./web:/app']
And here the Dockerfile:
FROM python:alpine
EXPOSE 5000
WORKDIR app
COPY * /app/
RUN pip install -r requirements.txt
CMD python app.py
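With that in place (assuming requirements.txt installs Flask), one rebuild is enough; after that, edits under ./web on the host are reflected inside the container without rebuilding:
docker-compose up --build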
I managed to achieve flask auto reload in docker using docker-compose with the following config:
version: "3"
services:
web:
build: ./web
entrypoint:
- flask
- run
- --host=0.0.0.0
environment:
FLASK_DEBUG: 1
FLASK_APP: ./app.py
ports: ['5000:5000']
volumes: ['./web:/app']
You have to manually specify environment variables and entrypoint in the docker compose file in order to achieve auto reload.
Assuming your file structure is the below:
Dockerfile: (note the WORKDIR)
FROM python:3.6.5-slim
RUN mkdir -p /home/project/bottle
WORKDIR /home/project/bottle
COPY requirements.txt .
RUN pip install --upgrade pip --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
Docker Compose:
version: '3'
services:
  web:
    container_name: web
    volumes:
      - './web:/home/project/bottle/'  # <== local folder : WORKDIR
    build: ./web
    ports:
      - "8080:8080"
This is my example:
version: "3.8"
services:
local-development:
build:
context: .
dockerfile: Dockerfiles/development.Dockerfile
ports:
- "5000:5000"
volumes:
- .:/code
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return "hello world"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
debug=True enables Flask to reload as your code changes.
Docker already plugs into your fs events to change the code "in the container".
If the compose is running different services (app and rq, for instance) you need to set up the volumes on both, or it won't work.
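For example, an illustrative sketch (the rq worker service here is hypothetical):
version: "3"
services:
  app:
    build: .
    volumes:
      - .:/code   # source mounted into the web service
  rq:
    build: .
    command: rq worker
    volumes:
      - .:/code   # same mount for the worker, or it won't see your edits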

Docker, Flask, SQLAlchemy: ValueError: invalid literal for int() with base 10: 'None'

I have a Flask app that initializes successfully and connects to a PostgreSQL database. However, when I try to dockerize this app, I get the error message below. "SQLALCHEMY_DATABASE_URI" is correct and I can connect with it, so I can't figure out where I have gone wrong.
docker-compose logs
app_1 | File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 60, in __init__
app_1 | self.port = int(port)
app_1 | ValueError: invalid literal for int() with base 10: 'None'
Postgres database connects successfully in Docker container
postgres_1 | LOG: database system is ready to accept connections
config.py
from os import environ
import os
RDS_USERNAME = environ.get('RDS_USERNAME')
RDS_PASSWORD = environ.get('RDS_PASSWORD')
RDS_HOSTNAME = environ.get('RDS_HOSTNAME')
RDS_PORT = environ.get('RDS_PORT')
RDS_DB_NAME = environ.get('RDS_DB_NAME')
SQLALCHEMY_DATABASE_URI = "postgresql+psycopg2://{username}:{password}@{hostname}:{port}/{dbname}"\
    .format(username=RDS_USERNAME, password=RDS_PASSWORD,
            hostname=RDS_HOSTNAME, port=RDS_PORT, dbname=RDS_DB_NAME)
flask_app.py (entry point)
def create_app():
    app = Flask(__name__, static_folder="./static", template_folder="./static")
    app.config.from_pyfile('./app/config.py', silent=True)
    register_blueprint(app)
    register_extension(app)
    with app.app_context():
        print(db)  # this prints the correct SQLALCHEMY_DATABASE_URI
        db.create_all()
        db.session.commit()
    return app

def register_blueprint(app):
    app.register_blueprint(view_blueprint)
    app.register_blueprint(race_blueprint)

def register_extension(app):
    db.init_app(app)
    migrate.init_app(app)

app = create_app()

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080, debug=True)
Dockerfile
FROM ubuntu
RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y python-pip && pip install --upgrade pip
RUN mkdir /home/ubuntu
WORKDIR /home/ubuntu/celery-scheduler
ADD requirements.txt /home/ubuntu/celery-scheduler/
RUN pip install -r requirements.txt
COPY . /home/ubuntu/celery-scheduler
EXPOSE 5000
CMD ["python", "flask_app.py", "--host", "0.0.0.0"]
docker-compose.yml
version: '2'
services:
  app:
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    depends_on:
      - postgres
  postgres:
    restart: always
    image: postgres:9.6
    environment:
      - POSTGRES_USER=${RDS_USERNAME}
      - POSTGRES_PASSWORD=${RDS_PASSWORD}
      - POSTGRES_HOSTNAME=${RDS_HOSTNAME}
      - POSTGRES_DB=${RDS_DB_NAME}
    ports:
      - "5432:5432"
You need to set the environment variables RDS_USERNAME, RDS_PASSWORD, RDS_HOSTNAME, RDS_PORT, and RDS_DB_NAME in the Dockerfile with ENV key value, for example:
ENV RDS_PORT 5432
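Alternatively, a sketch (not part of the original answer): config.py could fall back to defaults, so a missing variable never turns into the literal string 'None' inside the URL:
from os import environ

# environ.get returns None for unset variables, and str.format renders
# that as 'None' in the URL, hence the int('None') failure on the port.
RDS_PORT = environ.get('RDS_PORT', '5432')
RDS_HOSTNAME = environ.get('RDS_HOSTNAME', 'postgres')  # compose service name, assumed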
Answer:
1) Create a .env file with the variable definitions. (I had assumed the env variables would be 'pulled' from .bash_profile, but this is not the case. Remember to add .env to .gitignore for privacy.)
RDS_USERNAME=xxx
RDS_PASSWORD=xxx
2) Specify the environment variables in docker-compose under app.
docker-compose.yml
services:
  app:
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - RDS_USERNAME=${RDS_USERNAME}
      - RDS_PASSWORD=${RDS_PASSWORD}
      - RDS_HOSTNAME=${RDS_HOSTNAME}
      - RDS_DB_NAME=${RDS_DB_NAME}
    volumes:
      - .:/app
    depends_on:
      - postgres
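Compose reads the .env file from the project directory automatically; as a quick check (not from the original answer), you can render the resolved file and confirm the substitution took effect:
docker-compose config
# the rendered output should show the actual RDS_* values, not empty strings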
