Calling API within Celery task never returns - python

I want to get a value from web3.eth.getTransactionCount, but inside the Celery task it just hangs. The same call works fine elsewhere (normal app, console).
To reproduce this behavior, create a new folder, add the three files below to it, and run docker-compose up inside that folder. Note that the Infura credentials are safe to use.
Dockerfile
FROM python:3.7
WORKDIR /usr/src/app
RUN pip install flask celery[redis] web3
docker-compose.yml
version: "3"
services:
redis:
image: redis:5.0.7
container_name: redis
ports:
- "6379:6379"
myapp:
build: .
container_name: myapp
ports:
- "5000:5000"
volumes:
- .:/usr/src/app
environment:
- FLASK_ENV=development
- WEB3_INFURA_PROJECT_ID=1cc71ab02b99475b8a3172b6a790c2f8
- WEB3_INFURA_API_SECRET=6a343124ed8e4a6f9b36d28c50ad65ca
entrypoint: |
bash -c "python /usr/src/app/app.py"
celery:
build: .
container_name: celery
volumes:
- .:/usr/src/app
environment:
- WEB3_INFURA_PROJECT_ID=1cc71ab02b99475b8a3172b6a790c2f8
- WEB3_INFURA_API_SECRET=6a343124ed8e4a6f9b36d28c50ad65ca
command: celery worker -A app.client -l info
app.py
from flask import Flask
from web3.auto.infura.rinkeby import w3 as web3
from celery import Celery

app = Flask(__name__)
client = Celery(app.name, broker='redis://redis:6379', backend='redis://redis:6379')

@client.task
def never_return():
    print('start')  # this is printed
    nonce = web3.eth.getTransactionCount('0x51cDD4A883144F01Bf0753b6189f3A034866465f')
    print('nonce', nonce)  # this is never printed

@app.route('/')
def index():
    never_return.apply_async()
    return "hello celery"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
I found only 1 similar unresolved post here: Call to Google Cloud API in Celery task never returns
There seems to be something odd about making a request through another library from inside a Celery task. Everything works fine when I make POST requests with the requests library directly. Unfortunately, I don't know how to work around this problem using the requests library.
Any kind of suggestions are highly appreciated.

It seems to me this issue has something to do with WebSockets, so I tried switching to HTTP, and it works. Here is the modified app.py:
from flask import Flask
from web3 import Web3
from celery import Celery
from web3.middleware import geth_poa_middleware
import os

app = Flask(__name__)
client = Celery(app.name, broker='redis://redis:6379', backend='redis://redis:6379')

@client.task
def never_return():
    w3 = Web3(Web3.HTTPProvider(f"https://rinkeby.infura.io/v3/{os.getenv('WEB3_INFURA_PROJECT_ID')}", request_kwargs={'timeout': 60}))
    w3.middleware_onion.inject(geth_poa_middleware, layer=0)
    print('started')
    l = w3.eth.getBlock('latest')
    print(f'block number: {l}')
    print('finished ok')

@app.route('/')
def index():
    never_return.apply_async()
    return "hello celery"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
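For completeness, since plain requests calls worked fine inside the task, the nonce can also be fetched by posting to Infura's JSON-RPC endpoint directly with the requests library. This is only a sketch of that workaround (get_nonce is a hypothetical helper, not part of the code above):
import os
import requests

def get_nonce(address):
    # eth_getTransactionCount over plain JSON-RPC, bypassing the web3 provider
    url = f"https://rinkeby.infura.io/v3/{os.getenv('WEB3_INFURA_PROJECT_ID')}"
    payload = {
        "jsonrpc": "2.0",
        "method": "eth_getTransactionCount",
        "params": [address, "latest"],
        "id": 1,
    }
    resp = requests.post(url, json=payload, timeout=60)
    return int(resp.json()["result"], 16)  # the result is a hex string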

Related

Celery Task not working with redis in flask docker container

I am trying to run a Celery task in a Flask Docker container and I get the error below when the task is executed:
web_1 | sock.connect(socket_address)
web_1 | OSError: [Errno 99] Cannot assign requested address
web_1 |
web_1 | During handling of the above exception, another exception occurred:
web_1 | ...
web_1 | File "/opt/venv/lib/python3.8/site-packages/redis/connection.py", line 571, in connect
web_1 | raise ConnectionError(self._error_message(e))
web_1 | redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
Without the Celery task, the application works fine.
docker-compose.yml
version: '3'
services:
  web:
    build: ./
    volumes:
      - ./app:/app
    ports:
      - "80:80"
    environment:
      - FLASK_APP=app/main.py
      - FLASK_DEBUG=1
      - 'RUN=flask run --host=0.0.0.0 --port=80'
    depends_on:
      - redis
  redis:
    container_name: redis
    image: redis:6.2.6
    ports:
      - "6379:6379"
    expose:
      - "6379"
  worker:
    build:
      context: ./
    hostname: worker
    command: "cd /app/routes && celery -A celery_tasks.celery worker --loglevel=info"
    volumes:
      - ./app:/app
    links:
      - redis
    depends_on:
      - redis
main.py
from flask import Flask
from instance import config, exts
from decouple import config as con

def create_app(config_class=config.Config):
    app = Flask(__name__)
    app.config.from_object(config.Config)
    app.secret_key = con('flask_secret_key')
    exts.mail.init_app(app)
    from routes.test_route import test_api
    app.register_blueprint(test_api)
    return app

app = create_app()

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True, port=80)
I am using a Flask blueprint to split the API routes.
test_route.py
from flask import Flask, render_template, Blueprint
from instance.exts import celery

test_api = Blueprint('test_api', __name__)

@test_api.route('/test/<string:name>')
def testfnn(name):
    task = celery.send_task('CeleryTask.reverse', args=[name])
    return task.id
The Celery tasks are written in a separate file.
celery_tasks.py
from celery import Celery
from celery.utils.log import get_task_logger
from decouple import config
import time

celery = Celery('tasks',
                broker=config('CELERY_BROKER_URL'),
                backend=config('CELERY_RESULT_BACKEND'))

class CeleryTask:
    @celery.task(name='CeleryTask.reverse')
    def reverse(string):
        time.sleep(25)
        return string[::-1]
.env
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
Dockerfile
FROM tiangolo/uwsgi-nginx:python3.8
RUN apt-get update
WORKDIR /app
ENV PYTHONUNBUFFERED 1
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN python -m pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
COPY ./app /app
CMD ["python", "app/main.py"]
requirements.txt
Flask==2.0.3
celery==5.2.3
python-decouple==3.5
Flask-Mail==0.9.1
redis==4.0.2
SQLAlchemy==1.4.32
Folder structure: (screenshot omitted)
Thanks in Advance
At the end of your docker-compose.yml you can add:
networks:
  your_net_name:
    name: your_net_name
And in each container:
networks:
  - your_net_name
These two steps put all the containers on the same network. Docker creates one by default, but since I've had problems with the auto-generated names, I think this approach gives you more control.
Finally, I'd also change your env variables to use the container address instead of localhost:
CELERY_BROKER_URL=redis://redis_addr/0
CELERY_RESULT_BACKEND=redis://redis_addr/0
So you'd also add this section to your redis container:
hostname: redis_addr
This way the env var will get whatever address docker has assigned to the container.
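Once the services share a network and the env vars point at redis_addr, you can sanity-check the connection from inside the web or worker container. A minimal sketch, assuming the redis package already listed in requirements.txt:
import redis

# redis_addr is the hostname given to the redis container above
r = redis.Redis(host="redis_addr", port=6379, db=0)
print(r.ping())  # True once the container can reach the broker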

serving flask app with waitress and docker

I am serving a Flask app with Docker, but the docker logs command shows that the app is running on the development server. I want to serve this app with waitress.
The project is structured as shown below: a docker-compose.yml file builds the image, exposes the port, and runs the manage.py file.
docker-compose.yml
web:
  build: .
  image: web
  container_name: web
  ports:
    - 8080:5000
  command: python manage.py run -h 0.0.0.0
The manage.py file imports create_app and passes it to FlaskGroup.
from flask.cli import FlaskGroup
from project.server import create_app

app = create_app()
cli = FlaskGroup(create_app=create_app)

if __name__ == "__main__":
    cli()
The project/server/__init__.py file imports the main_blueprint and registers it.
from project.server.main.views import main_blueprint
from flask import Flask
import os

def create_app(script_info=None):
    app = Flask(
        __name__,
        template_folder="../client/templates",
        static_folder="../client/static",
    )
    app_settings = os.getenv("APP_SETTINGS")
    app.config.from_object(app_settings)
    app.register_blueprint(main_blueprint)
    app.shell_context_processor({"app": app})
    return app
project/server/main/views.py
from flask import render_template, Blueprint, jsonify, request

main_blueprint = Blueprint("main", __name__,)

@main_blueprint.route("/", methods=["GET"])
def home():
    return render_template("pages/home.html")

@main_blueprint.route("/test", methods=["GET"])
def parse():
    return jsonify({"result": "test"}), 202
How can I modify the existing code to serve the flask app with waitress? Thank you.
I got it running by changing two things in the docker-compose.yml file:
command: changed python manage.py run -h 0.0.0.0 to waitress-serve --call "project.server:create_app"
ports: changed 8080:5000 to 8080:8080
docker-compose.yml file looks like below now:
web:
  build: .
  image: web
  container_name: web
  ports:
    - 8080:8080
  command: waitress-serve --call "project.server:create_app"
You are running with python manage.py run -h 0.0.0.0, which uses the classic flask run development server. You should use the waitress-serve command to run your app instead.
This doc might help you.
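Alternatively, waitress can be started from Python instead of the CLI. A small sketch, assuming waitress is installed in the image and using a hypothetical serve.py entry point:
# serve.py (hypothetical entry point)
from waitress import serve

from project.server import create_app

if __name__ == "__main__":
    # listen on all interfaces so the published port 8080:8080 works
    serve(create_app(), host="0.0.0.0", port=8080)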

Pass env variable to docker-compose

I have created a sample Docker app with Python and Redis; the Python app connects to Redis to store data. I want to pass the password and server name to Redis as environment variables in the docker-compose file. How can I achieve that?
Docker-compose:
version: "3.7"
services:
nginx_app:
image: nginx:latest
depends_on:
- flask_app
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
ports:
- 8082:80
networks:
- my_project_network
flask_app:
build:
context: .
dockerfile: Dockerfile
expose:
- 5000
depends_on:
- redis_app
networks:
- my_project_network
redis_app:
image: redis:latest
command: redis-server --requirepass pass123 --appendonly yes
volumes:
- ./redis-vol:/data
expose:
- 6379
networks:
- my_project_network
networks:
my_project_network:
from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host='redis_app', port=6379, password='pass123')

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
Just define the environment variables for the flask_app service in your docker-compose file and read them with os.getenv in the Python application:
flask_app:
  environment:
    RABBIT_USER: guest
    RABBIT_PASSWORD: pass123
In your Python file, place the following:
import os
redis = Redis(host='redis_app', port=6379, password=os.getenv('RABBIT_PASSWORD'))
As @AndriyIvaneyko says, in your docker-compose:
flask_app:
  environment:
    - PASSWORD=password
Another way to get this value in is to set an env variable in your shell with export PASSWORD="password" and pass it through in your docker-compose:
flask_app:
  environment:
    - PASSWORD
This is the approach I would recommend since it ensures that your credentials are not available in plain text in the docker-compose file. Moreover, collaboration becomes simpler as the env variable can be configured independently.
In your python:
from flask import Flask
from redis import Redis
import os

app = Flask(__name__)
redis = Redis(host='redis_app', port=6379, password=os.getenv('PASSWORD'))

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
You can do the same thing with other env variables. Here is the documentation.
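One thing to keep in mind: os.getenv returns None when the variable is missing, which makes Redis auth failures confusing. A minimal sketch that fails fast instead (the explicit check is my own suggestion, not part of the answer above):
import os

from redis import Redis

password = os.getenv("PASSWORD")
if password is None:
    # better to crash at startup than to connect with password=None
    raise RuntimeError("PASSWORD environment variable is not set")

redis = Redis(host="redis_app", port=6379, password=password)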

Application context error in Flask app with Celery in Docker

I'm attempting to use Flask and Celery in Docker and am having issues with the Flask application context.
Flask==1.0.2
celery==4.2.0
Flask-CeleryExt==0.3.1
Here is some pertinent code.
docker-compose.yaml
version: '3'
services:
  myapp:
    build:
      context: .
      dockerfile: compose/dev/myapp/Dockerfile
    ports:
      - '5000:5000'
      - '8888:8888'
    env_file: .env
    environment:
      - FLASK_ENV=development
    volumes:
      - .:/myapp
    entrypoint: /wait-for-postgres.sh
    command: flask run --host=0.0.0.0
    depends_on:
      - postgres
      - redis
    networks:
      - flask-redis-celery
  celery:
    build:
      context: .
      dockerfile: compose/dev/celery/Dockerfile
    command: 'celery -A myapp.tasks worker -Q default --loglevel=info'
    env_file: .env
    volumes:
      - .:/myapp
    depends_on:
      - redis
      - myapp
    networks:
      - flask-redis-celery
extensions.py
from flask_celeryext import FlaskCeleryExt
ext = FlaskCeleryExt()
In app.py, inside a register_extensions function (I'm using the application factory pattern in my app):
ext.init_app(app)
Inside the myapp container, I can access ext.celery per the documentation, see that I have a Celery instance, and send tasks to it correctly:
<Celery default at 0x7f600d0e7f98>
However, attempting to do the same in the celery container in my tasks file results in ext.celery being None.
tasks.py
from coupon.extensions import ext

celery = ext.celery  # This is None

@celery.task(name='tasks.my_task', max_retries=2, default_retry_delay=60)
def my_task(some_args):
    pass  # etc.
Error
AttributeError: 'NoneType' object has no attribute 'task'
I've attempted numerous other options as well, including make_celery as described in the Flask docs, but I cannot get to Flask and my models in the celery container, so I don't believe this is specific to Flask-CeleryExt.
I can make Celery tasks work fine if they do not access Flask objects, but I need to access SQLAlchemy models and custom classes from my Celery tasks.
How can I make Celery work properly in my celery container and be able to access Flask objects?
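For reference, this is roughly the make_celery pattern from the Flask docs that I tried (a sketch following the docs; the config keys are assumptions), in case it helps pinpoint what I'm missing:
from celery import Celery

def make_celery(app):
    celery = Celery(app.import_name,
                    broker=app.config['CELERY_BROKER_URL'],
                    backend=app.config['CELERY_RESULT_BACKEND'])
    celery.conf.update(app.config)

    class ContextTask(celery.Task):
        # run every task inside a Flask application context so that
        # SQLAlchemy models and other extensions are available
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)

    celery.Task = ContextTask
    return celery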

Celery worker in docker won't get correct message broker

I'm creating a flask service using an app factory pattern and I need to use celery for async tasks. I'm also using docker and docker-compose to contain and run everything. My structure looks like this:
server
|
+-- manage.py
+-- docker-compose.yml
+-- requirements.txt
+-- Dockerfile
|
+-- project
    |
    +-- api
    |   |
    |   +-- tasks.py
    |
    +-- __init__.py
My tasks.py file looks like this:
from project import celery_app

@celery_app.task
def celery_check(test):
    print(test)
I run it through manage.py, which looks like this:
# manage.py
from flask_script import Manager
from project import create_app

app = create_app()
manager = Manager(app)

if __name__ == '__main__':
    manager.run()
And my __init__.py looks like this:
# project/__init__.py
import os
import json

from flask_mongoalchemy import MongoAlchemy
from flask_cas import CAS
from flask import Flask
from itsdangerous import JSONWebSignatureSerializer as JWT
from flask_httpauth import HTTPTokenAuth
from celery import Celery

# instantiate the database and CAS
db = MongoAlchemy()
cas = CAS()

# Auth stuff (ReplaceMe is replaced below in create_app())
jwt = JWT("ReplaceMe")
auth = HTTPTokenAuth('Bearer')

celery_app = Celery(__name__, broker=os.environ.get("CELERY_BROKER_URL"))

def create_app():
    # instantiate the app
    app = Flask(__name__, template_folder='client/templates', static_folder='client/static')
    # set config
    app_settings = os.getenv('APP_SETTINGS')
    app.config.from_object(app_settings)
    # Send new static files every time if debug is enabled
    if app.debug:
        app.config['SEND_FILE_MAX_AGE_DEFAULT'] = 0
    # Get the secret keys
    parse_secret(app.config['CONFIG_FILE'], app)
    celery_app.conf.update(app.config)
    print(celery_app.conf)
    # set up extensions
    db.init_app(app)
    cas.init_app(app)
    # Replace the secret key with the app's
    jwt.secret_key = app.config["SECRET_KEY"]
    parse_config(app.config['CONFIG_FILE'])
    # register blueprints
    from project.api.views import twist_blueprint
    app.register_blueprint(twist_blueprint)
    return app
In my docker-compose I start a worker and define some environment variables like this:
version: '2.1'
services:
  twist-service:
    container_name: twist-service
    build: .
    volumes:
      - '.:/usr/src/app'
    ports:
      - 5001:5000  # expose ports - HOST:CONTAINER
    environment:
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_NAME_TESTING=testing
      - DATABASE_NAME_DEV=dev
      - DATABASE_URL=twist-database
      - CONFIG_FILE=./project/default_config.json
      - MONGO_PASSWORD=user
      - CELERY_RESULT_BACKEND=redis://redis:6379
      - CELERY_BROKER_URL=redis://redis:6379/0
      - MONGO_PORT=27017
    depends_on:
      - celery
      - twist-database
  celery:
    container_name: celery
    build: .
    command: celery -A project.api.tasks --loglevel=debug worker
    volumes:
      - '.:/usr/src/app'
  twist-database:
    image: mongo:latest
    container_name: "twist-database"
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_USER=mongo
    volumes:
      - /data/db
    ports:
      - 27017:27017  # expose ports - HOST:CONTAINER
    command: mongod
  redis:
    image: "redis:alpine"
    command: redis-server
    volumes:
      - '/redis'
    ports:
      - '6379:6379'
However when I run my docker-compose file and generate the containers, I end up with this in the celery worker logs:
[2017-07-20 16:53:06,721: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
This means the worker is ignoring the Redis configuration set when the Celery app was created and is trying to use RabbitMQ instead. I've tried changing project.api.tasks to project and to project.celery_app, but to no avail.
It seems to me like the celery service should have the environment variables CELERY_RESULT_BACKEND and CELERY_BROKER_URL as well.
You need to link the Docker services together. The most straightforward mechanism to do this is to add a networks section in your docker-compose file.
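On top of that, since the root cause is that the celery service never receives CELERY_BROKER_URL, it helps to fail loudly instead of silently falling back to the default amqp://guest@localhost:5672//. A minimal sketch for project/__init__.py (my own addition, assuming the env vars are also added to the celery service):
import os

from celery import Celery

broker_url = os.environ.get("CELERY_BROKER_URL")
if not broker_url:
    # without this, Celery quietly uses its default RabbitMQ broker
    raise RuntimeError("CELERY_BROKER_URL is not set in this container")

celery_app = Celery(__name__, broker=broker_url,
                    backend=os.environ.get("CELERY_RESULT_BACKEND"))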
