I have created a sample Docker app with Python and Redis. The Python app connects to Redis to store data. I want to pass the password and server name to Redis as environment variables in the docker-compose file. How can I achieve that?
docker-compose.yml:
version: "3.7"
services:
nginx_app:
image: nginx:latest
depends_on:
- flask_app
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
ports:
- 8082:80
networks:
- my_project_network
flask_app:
build:
context: .
dockerfile: Dockerfile
expose:
- 5000
depends_on:
- redis_app
networks:
- my_project_network
redis_app:
image: redis:latest
command: redis-server --requirepass pass123 --appendonly yes
volumes:
- ./redis-vol:/data
expose:
- 6379
networks:
- my_project_network
networks:
my_project_network:
app.py:
from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host='redis_app', port=6379, password='pass123')

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
Just define the environment variables on your flask_app service in the docker-compose file, then read them with os.getenv in the Python application:
flask_app:
  environment:
    RABBIT_USER: guest
    RABBIT_PASSWORD: pass123
In your Python file, place the following:
import os
redis = Redis(host='redis_app', port=6379, password=os.getenv('RABBIT_PASSWORD'))
As @AndriyIvaneyko says, in your docker-compose:
flask_app:
  environment:
    - PASSWORD=password
Another way to get this value is to set an environment variable in your shell (export PASSWORD="password") and import it into your docker-compose file:
flask_app:
  environment:
    - PASSWORD
This is the approach I would recommend since it ensures that your credentials are not available in plain text in the docker-compose file. Moreover, collaboration becomes simpler as the env variable can be configured independently.
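A minimal sketch of that workflow (only the PASSWORD variable from the snippet above is assumed; docker-compose config is a standard way to inspect the resolved file):
export PASSWORD="password"              # set in the host shell
docker-compose config | grep PASSWORD   # optional: check the value compose resolved
docker-compose up -d
docker-compose also reads a .env file placed next to docker-compose.yml, which keeps the value out of your shell history as well.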
In your Python:
from flask import Flask
from redis import Redis
import os

app = Flask(__name__)
redis = Redis(host='redis_app', port=6379, password=os.getenv('PASSWORD'))

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
You can do the same thing with other env variables. Here is the documentation.
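The original question also asks about passing the server name; the same pattern extends to the host. A short sketch, where REDIS_HOST is an illustrative variable name not used in the answers above:
flask_app:
  environment:
    - PASSWORD=pass123
    - REDIS_HOST=redis_app
and in Python:
import os
from redis import Redis
redis = Redis(host=os.getenv('REDIS_HOST', 'redis_app'), port=6379, password=os.getenv('PASSWORD'))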
Related
Trying to bring my FastAPI app together with Docker Compose. It works outside of Docker, but in Docker it doesn't see my modules like endpoints and others. Not sure what I am doing wrong...
from endpoints import pizza_endpoints, order_endpoints
ModuleNotFoundError: No module named 'endpoints'
endpoints is a folder with an __init__.py, and the things it imports are .py files.
Dockerfile:
FROM python:3.9-slim
COPY ./backend /backend
ENV PYTHONPATH "${PYTHONPATH}:/backend"
ENV PYTHONUNBUFFERED 1
WORKDIR /backend
EXPOSE 8000
RUN pip3 install -r requirements.txt
docker-compose.yml:
version: '3.9'
services:
  backend:
    build: .
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; uvicorn backend.main:app --host 0.0.0.0'
    ports:
      - 8008:8000
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/pypizza
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - 5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=pypizza
volumes:
  postgres_data:
main.py:
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from endpoints import pizza_endpoints, order_endpoints
from dependency import database
from SQL import models

models.Base.metadata.create_all(database.engine)

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"]
)

app.include_router(pizza_endpoints.router, prefix="/pizza")
app.include_router(order_endpoints.router, prefix="/order")

@app.get("/")
async def welcome_page():
    return {"message": "hello"}
I am trying to connect and build two Dockerfiles with docker-compose. I am new to Docker, and connecting things with docker-compose is getting confusing.
Main purpose: connecting the frontend and backend (of a Flask application) via docker-compose.
If my Dockerfile & docker-compose.yml files are wrong, please correct me.
The frontend is in one frontend folder with a Dockerfile, and the backend is in another folder with a Dockerfile (connecting these two via docker-compose).
Here's my file structure (inside the templates folder: loginpage.html).
app.py
from flask import Flask, render_template, flash, redirect, url_for, session, logging, request

app = Flask(__name__)
app.secret_key = 'hello'

@app.route("/", methods=["GET", "POST"])
def login():
    if request.method == "POST":
        uname = request.form["uname"]
        return render_template("loginpage.html", uname=uname)
    else:
        return render_template("loginpage.html")

if __name__ == "__main__":
    app.run(debug=True, port=5048, host='0.0.0.0')
backend/Dockerfile
FROM python:3.7-slim
ADD ./requirements.txt /backend/requirements.txt
WORKDIR /backend
RUN pip install -r requirements.txt
ADD . /backend
ENTRYPOINT ["python"]
CMD ["/backend/app.py"]
EXPOSE 5048
frontend/Dockerfile
FROM python:3.7-slim
COPY templates /backend/
COPY . /backend
WORKDIR /backend
docker-compose.yml
version: '2'
services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    restart: always
    ports:
      - "5048:5048"
  frontend:
    build: ./frontend
You can write the docker-compose file something like this:
version: "3.7"
services:
frontend:
build:
context: ./frontEnd
container_name: frontend
depends_on: [backend]
ports:
- "5000:5000"
networks:
- my_own_network
links:
- "backend:backend"
backend:
build:
context: ./backend
container_name: backend
ports:
- "5048:5048"
networks:
- my_own_network
networks:
my_own_network:
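Note that the frontend/Dockerfile in the question has no CMD or ENTRYPOINT, so its container will exit immediately even with this compose file. A minimal frontend Dockerfile sketch, assuming the frontend is also a Flask app with an app.py listening on port 5000:
FROM python:3.7-slim
WORKDIR /frontend
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
With both services on my_own_network, the frontend can reach the backend at http://backend:5048 by service name; the links entry is legacy and unnecessary on user-defined networks.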
I am trying to connect a Redis container to a Python app container using an environment variable. I passed the password as an environment variable, but it does not connect: if I hard-code the password it works fine, otherwise it raises redis.exceptions.ConnectionError.
version: "3.7"
services:
nginx_app:
image: nginx:latest
depends_on:
- flask_app
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
ports:
- 8090:80
networks:
- my_project_network
flask_app:
build:
context: .
dockerfile: Dockerfile
expose:
- 5000
environment:
- PASSWORD=pass123a
depends_on:
- redis_app
networks:
- my_project_network
redis_app:
image: redis:latest
command: redis-server --requirepass ${PASSWORD} --appendonly yes
environment:
- PASSWORD=pass123a
volumes:
- ./redis-vol:/data
expose:
- 6379
networks:
- my_project_network
networks:
my_project_network:
index.py
from flask import Flask
from redis import Redis
import os

app = Flask(__name__)
redis = Redis(host='redis_app', port=6379, password=os.getenv('PASSWORD'))

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
Update your docker-compose.yaml. Note that environment is a list of strings, and that docker-compose interpolates ${ENV} with a value loaded from the shell or from a .env file next to the compose file, not from the service's environment section.
Use:
command: redis-server --requirepass $PASSWORD --appendonly yes
Instead of:
command: redis-server --requirepass ${PASSWORD} --appendonly yes
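For example, a .env file next to docker-compose.yml supplies the value that compose interpolates into the command (a minimal sketch, mirroring the password from the question):
# .env
PASSWORD=pass123a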
You can verify the environment variable inside your container with:
docker-compose run --rm flask_app printenv | grep PASSWORD
That should return:
PASSWORD=pass123a
docker-compose example for environment variables: Here
Looks like you have missed passing the environment variable to your Redis container. Try this:
version: "3.7"
services:
nginx_app:
image: nginx:latest
#LOCAL IMAGE
depends_on:
- flask_app
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
ports:
- 8082:80
networks:
- my_project_network
flask_app:
build:
context: .
dockerfile: Dockerfile
expose:
- 5000
environment:
- PASSWORD=pass123a
depends_on:
- redis_app
networks:
- my_project_network
redis_app:
image: redis:latest
command: redis-server --requirepass ${PASSWORD} --appendonly yes
environment:
- PASSWORD=pass123a
volumes:
- ./redis-vol:/data
expose:
- 6379
networks:
- my_project_network
networks:
my_project_network:
I want to get a value from web3.eth.getTransactionCount. It just hangs. This function works fine elsewhere (normal app, console).
To recreate this behavior, simply create a new folder, add these 3 files to it, and run docker-compose up inside that folder. Note that the Infura credentials are safe to use.
Dockerfile:
FROM python:3.7
WORKDIR /usr/src/app
RUN pip install flask celery[redis] web3
docker-compose.yml
version: "3"
services:
redis:
image: redis:5.0.7
container_name: redis
ports:
- "6379:6379"
myapp:
build: .
container_name: myapp
ports:
- "5000:5000"
volumes:
- .:/usr/src/app
environment:
- FLASK_ENV=development
- WEB3_INFURA_PROJECT_ID=1cc71ab02b99475b8a3172b6a790c2f8
- WEB3_INFURA_API_SECRET=6a343124ed8e4a6f9b36d28c50ad65ca
entrypoint: |
bash -c "python /usr/src/app/app.py"
celery:
build: .
container_name: celery
volumes:
- .:/usr/src/app
environment:
- WEB3_INFURA_PROJECT_ID=1cc71ab02b99475b8a3172b6a790c2f8
- WEB3_INFURA_API_SECRET=6a343124ed8e4a6f9b36d28c50ad65ca
command: celery worker -A app.client -l info
app.py
from flask import Flask
from web3.auto.infura.rinkeby import w3 as web3
from celery import Celery

app = Flask(__name__)
client = Celery(app.name, broker='redis://redis:6379', backend='redis://redis:6379')

@client.task
def never_return():
    print('start')  # this is printed
    nonce = web3.eth.getTransactionCount('0x51cDD4A883144F01Bf0753b6189f3A034866465f')
    print('nonce', nonce)  # this is never printed

@app.route('/')
def index():
    never_return.apply_async()
    return "hello celery"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
I found only one similar, unresolved post: Call to Google Cloud API in Celery task never returns.
There seems to be something weird about making request calls with another library inside a Celery task. Everything worked fine when I made POST requests using the requests library. Unfortunately, I don't know how to work around this problem using requests.
Any suggestions are highly appreciated.
It seems to me this issue has something to do with websockets, so I tried switching to HTTP, and it works.
Here is the modified app.py:
from flask import Flask
from web3 import Web3
from celery import Celery
from web3.middleware import geth_poa_middleware
import os

app = Flask(__name__)
client = Celery(app.name, broker='redis://redis:6379', backend='redis://redis:6379')

@client.task
def never_return():
    w3 = Web3(Web3.HTTPProvider(f"https://rinkeby.infura.io/v3/{os.getenv('WEB3_INFURA_PROJECT_ID')}", request_kwargs={'timeout': 60}))
    w3.middleware_onion.inject(geth_poa_middleware, layer=0)
    print('started')
    l = w3.eth.getBlock('latest')
    print(f'block number: {l}')
    print('finished ok')

@app.route('/')
def index():
    never_return.apply_async()
    return "hello celery"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
I want my Flask server to detect changes in code and reload automatically.
I'm running this in a Docker container.
Whenever I change something, I have to build and up the container again. I have no idea what's wrong. This is my first time using Flask.
Here's my tree
├── docker-compose.yml
└── web
├── Dockerfile
├── app.py
├── crawler.py
└── requirements.txt
and the code (app.py):
from flask import Flask
import requests

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello Flask!!'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
and the docker-compose.yml:
version: '2'
services:
  web:
    build: ./web
    ports:
      - "5000:5000"
    volumes:
      - ./web:/code
Please give me some advice. Thank you in advance.
Flask supports code reload in debug mode, which you have already enabled. The problem is that the application runs in a container, and that isolates it from the real source code you are developing. You can share the source between the running container and the host with volumes in your docker-compose.yaml, like this:
Here is the docker-compose.yaml:
version: "3"
services:
web:
build: ./web
ports: ['5000:5000']
volumes: ['./web:/app']
And here is the Dockerfile:
FROM python:alpine
EXPOSE 5000
WORKDIR /app
COPY * /app/
RUN pip install -r requirements.txt
CMD python app.py
I managed to achieve Flask auto-reload in Docker using docker-compose with the following config:
version: "3"
services:
web:
build: ./web
entrypoint:
- flask
- run
- --host=0.0.0.0
environment:
FLASK_DEBUG: 1
FLASK_APP: ./app.py
ports: ['5000:5000']
volumes: ['./web:/app']
You have to manually specify the environment variables and the entrypoint in the docker-compose file in order to achieve auto-reload.
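For this compose file to work, the image needs Flask installed and a working directory matching the /app mount; a minimal Dockerfile sketch under those assumptions (it is not part of the original answer):
FROM python:alpine
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # assumes Flask is listed in requirements.txt
EXPOSE 5000
The entrypoint plus the FLASK_DEBUG/FLASK_APP variables take the place of a CMD, and flask run restarts whenever a file in the mounted ./web directory changes.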
Assuming your file structure is as below.
Dockerfile (note the WORKDIR):
FROM python:3.6.5-slim
RUN mkdir -p /home/project/bottle
WORKDIR /home/project/bottle
COPY requirements.txt .
RUN pip install --upgrade pip --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
Docker Compose:
version: '3'
services:
  web:
    container_name: web
    volumes:
      - './web:/home/project/bottle/' # local folder : WORKDIR
    build: ./web
    ports:
      - "8080:8080"
This is my example:
version: "3.8"
services:
local-development:
build:
context: .
dockerfile: Dockerfiles/development.Dockerfile
ports:
- "5000:5000"
volumes:
- .:/code
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return "hello world"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
debug=True enables Flask's reloader, so the server restarts as your code changes.
Docker already propagates your filesystem events into the container through the bind mount, so the code "in the container" changes with it. If the compose file runs different services (an app and an RQ worker, for instance), you need to set up the volumes on both, or it won't work, as in the sketch below.
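A minimal sketch of that two-service case (the rq-worker service name and command are illustrative, not from the answer above):
version: "3.8"
services:
  app:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  rq-worker:
    build: .
    command: rq worker
    volumes:
      - .:/code # without this, the worker keeps running the old code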