Python Flask Pika Consumer (RabbitMQ) - python

I have two small Python Flask apps:
Appone --> Producer
Apptwo --> Consumer
Both run in separate Docker containers and are orchestrated by docker-compose.
I don't get the data from the producer to the consumer. Even when I call start_consuming() in apptwo, the producer can't send any data to the RabbitMQ broker.
Maybe someone can help me. Thank you very much!
docker-compose:
version: '3'
services:
  appone:
    container_name: appone
    restart: always
    build:
      context: ./appone
      dockerfile: Dockerfile
    environment:
      FLASK_APP: ./app.py
    volumes:
      - './appone:/code/:cached'
    ports:
      - "5001:5001"
  apptwo:
    container_name: apptwo
    restart: always
    build:
      context: ./apptwo
      dockerfile: Dockerfile
    environment:
      FLASK_DEBUG: 1
      FLASK_APP: ./app.py
    volumes:
      - ./apptwo:/code:cached
    ports:
      - "5002:5002"
  rabbitmq:
    image: "rabbitmq:3-management"
    hostname: "rabbit"
    ports:
      - "15672:15672"
      - "5672:5672"
    labels:
      NAME: "rabbitmq"
    volumes:
      - ./rabbitmq/rabbitmq-isolated.conf:/etc/rabbitmq/rabbitmq.config
appone (Producer)
from flask import Flask
from flask_restful import Resource, Api
import pika

app = Flask(__name__)
api = Api(app)
app.config['DEBUG'] = True

message = "Hello World, its me appone"

class HelloWorld(Resource):
    def get(self):
        connection = pika.BlockingConnection(
            pika.ConnectionParameters(host='rabbitmq'))
        channel = connection.channel()
        channel.queue_declare(queue='hello', durable=True)
        channel.basic_publish(exchange='', routing_key='hello', body='Hello World!',
                              properties=pika.BasicProperties(delivery_mode=2))
        connection.close()
        return {'message': message}

api.add_resource(HelloWorld, '/api/appone/post')

if __name__ == '__main__':
    # Development
    app.run(host="0.0.0.0", port=5001)
apptwo (Consumer)
from flask import Flask
from flask_restful import Resource, Api
import pika
from threading import Thread

app = Flask(__name__)
api = Api(app)
app.config['DEBUG'] = True

data = []

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host='rabbitmq'))
channel = connection.channel()
channel.queue_declare(queue='hello', durable=True)

def callback(ch, method, properties, body):
    data.append(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='hello', on_message_callback=callback)

thread = Thread(channel.start_consuming())
thread.start()

class HelloWorld(Resource):
    def get(self):
        return {'message': data}

api.add_resource(HelloWorld, '/api/apptwo/get')

if __name__ == '__main__':
    app.run(debug=True, host="0.0.0.0", port=5002)
Goal
In this simple example I just want to receive the data in apptwo and store it in the data list.
Thanks again!

In apptwo (Consumer):
thread = Thread(channel.start_consuming())
thread.start()
Here the Thread constructor is never actually reached: channel.start_consuming() is evaluated first, as its argument, and that call blocks forever. Changing your code to the following should help:
thread = Thread(target=channel.start_consuming)
thread.start()
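For reference, here is a minimal sketch of the consumer with that fix applied (same 'rabbitmq' host and 'hello' queue as above; the separate consume() function and the daemon=True flag are additions for illustration):

import pika
from threading import Thread

data = []

def callback(ch, method, properties, body):
    data.append(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

def consume():
    # give the consumer thread its own connection; pika connections
    # are not thread-safe and should not be shared across threads
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='rabbitmq'))
    channel = connection.channel()
    channel.queue_declare(queue='hello', durable=True)
    channel.basic_consume(queue='hello', on_message_callback=callback)
    channel.start_consuming()  # blocks, but only inside this thread

thread = Thread(target=consume, daemon=True)
thread.start()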

Related

docker-compose container port not showing up on localhost

I am trying to run a docker-compose app that has two services: one builds a web server, and the other runs tests against it.
docker-compose.yml
version: "3.7"
services:
web:
build: .
ports:
- "127.0.0.1:5000:5000"
expose:
- 5000
test:
# expose:
# - 5000
depends_on:
- web
build: test_python/.
./Dockerfile
FROM python:buster
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
# Add .cargo/bin to PATH
ENV PATH="/root/.cargo/bin:${PATH}"
# Check cargo is visible
RUN cargo --help
WORKDIR /code
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
EXPOSE 5000
COPY test_python .
CMD [ "python3", "base_routes.py" ]
test_python/Dockerfile
FROM python:buster
RUN pip3 install pytest requests
COPY . .
base_routes.py
from robyn import Robyn, static_file, jsonify
import asyncio

app = Robyn(__file__)
callCount = 0

@app.get("/")
async def h(request):
    print(request)
    global callCount
    callCount += 1
    message = "Called " + str(callCount) + " times"
    return message

@app.get("/test")
async def test(request):
    import os
    path = os.path.abspath(os.path.join(os.path.dirname(os.path.realpath(__file__)), "test_python/index.html"))
    return static_file(path)

@app.get("/jsonify")
async def json_get(request):
    return jsonify({"hello": "world"})

@app.post("/jsonify")
async def json(request):
    print(request)
    return jsonify({"hello": "world"})

@app.post("/post")
async def postreq(request):
    return bytearray(request["body"]).decode("utf-8")

@app.put("/put")
async def putreq(request):
    return bytearray(request["body"]).decode("utf-8")

@app.delete("/delete")
async def deletereq(request):
    return bytearray(request["body"]).decode("utf-8")

@app.patch("/patch")
async def patchreq(request):
    return bytearray(request["body"]).decode("utf-8")

@app.get("/sleep")
async def sleeper():
    await asyncio.sleep(5)
    return "sleep function"

@app.get("/blocker")
def blocker():
    import time
    time.sleep(10)
    return "blocker function"

if __name__ == "__main__":
    app.add_header("server", "robyn")
    app.add_directory(route="/test_dir", directory_path="./test_dir/build", index_file="index.html")
    app.start(port=5000)
These are the files I have used in my project. When I try to open 127.0.0.1:5000 from my machine, it shows nothing. However, when I log in to the web container and run curl http://localhost:5000/, I get the right response.
How can I access it from the host machine?
I had to make the Python server listen on '0.0.0.0'. I added the following line to my codebase:
app.start(port=5000, url='0.0.0.0')
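The reason: inside the container the server was listening only on 127.0.0.1, the container's own loopback interface, so Docker's port mapping had nothing to forward traffic to. Binding to 0.0.0.0 makes the server reachable on all container interfaces. As a sketch, the main block of base_routes.py then becomes (using the url keyword from the line above):

if __name__ == "__main__":
    app.add_header("server", "robyn")
    app.add_directory(route="/test_dir", directory_path="./test_dir/build", index_file="index.html")
    # bind to all interfaces so Docker's port mapping can reach the server
    app.start(port=5000, url='0.0.0.0')

After that change, curl http://127.0.0.1:5000/ from the host should return the same response as it does inside the container.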

Job is visible only from redis cli but not showing in rq dashboard and not executed

I want to build a pipeline using Redis and RQ. I created a worker, a server, and a job: the worker runs and listens on the queue, the server dispatches a job to the queue, and I print the job ID to the console. In the worker logs I can see that it receives the job from the queue, but the job never executes and never shows up in rq-dashboard, even though I can see it in the Redis CLI.
Versions I am using:
rq==1.7.0
redis==3.5.0
Here is my code:
Worker in run.py
import os
import redis
from rq import Worker, Queue, Connection
listen = ['stance_queue','default']
redis_url = os.getenv('REDIS_URL', 'redis://redis:6379')
conn = redis.from_url(redis_url)
# conn = redis.Redis(host='redis', port=6379)
if __name__ == '__main__':
with Connection(conn):
print("Createing worker")
worker = Worker(map(Queue, listen))
# worker = Worker([Queue()])
worker.work()
And here is where I dispatch a job:
from flask import request
from rq import Queue
from workers.stance.run import conn

q = Queue('default', connection=conn)

@server.route("/task")
def home():
    if request.args.get("n"):
        print('create a job in default queue')
        job = q.enqueue(background_task, args=(20,))
        return f"Task ({job.id}) added to queue at {job.enqueued_at}"
    return "No value for count provided"
And here is the background job:
import time

def background_task(n):
    """ Function that returns len(n) and simulates a delay """
    delay = 2
    print("Task running", flush=True)
    print(f"Simulating a {delay} second delay", flush=True)
    time.sleep(delay)
    print(len(n))
    print("Task complete")
    return len(n)
Here is a screenshot of rq-dashboard.
And here are the logs from the worker:
Attaching to annotators_server_stance_worker_1
stance_worker_1 | Createing worker
stance_worker_1 | 08:33:44 Worker rq:worker:cae161cf792b4c998376cde2c0848291: started, version 1.7.0
stance_worker_1 | 08:33:44 Subscribing to channel rq:pubsub:cae161cf792b4c998376cde2c0848291
stance_worker_1 | 08:33:44 *** Listening on stance_queue, default...
stance_worker_1 | 08:33:44 Cleaning registries for queue: stance_queue
stance_worker_1 | 08:33:44 Cleaning registries for queue: default
stance_worker_1 | 08:33:49 default: home.annotator_server.background_task(20) (9f1f31e0-f465-4019-9dc6-85bc349feab9)
And here are the logs from redis-cli:
docker-compose exec redis redis-cli
127.0.0.1:6379> keys *
1) "rq:workers"
2) "rq:failed:default"
3) "rq:clean_registries:default"
4) "rq:queues"
5) "rq:job:9f1f31e0-f465-4019-9dc6-85bc349feab9"
6) "rq:worker:cae161cf792b4c998376cde2c0848291"
7) "rq:workers:default"
8) "rq:clean_registries:stance_queue"
9) "rq:workers:stance_queue"
And here is my compose
version: '3'
services:
  annotators_server:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    volumes:
      - ./app:/home
    depends_on:
      - redis
  redis:
    image: "redis:alpine"
  dashboard:
    image: "godber/rq-dashboard"
    ports:
      - 9181:9181
    command: rq-dashboard -H redis
    depends_on:
      - redis
  stance_worker:
    build:
      context: ./app/workers/stance
      dockerfile: Dockerfile
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
I never see any logs for the job execution. I tried adding TTL and TIMEOUT, but I am still facing the same thing.
Pass the Redis database to the connection string when starting the dashboard and the worker:
redis://redis-host:6379/0 (the /0 refers to db 0).
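Applied to the compose file above, that would look something like this (a sketch; it assumes rq-dashboard's -u/--redis-url option, which accepts a full connection string):

dashboard:
  image: "godber/rq-dashboard"
  ports:
    - 9181:9181
  # pass the full connection string, including the database number
  command: rq-dashboard -u redis://redis:6379/0
  depends_on:
    - redis
stance_worker:
  build:
    context: ./app/workers/stance
    dockerfile: Dockerfile
  environment:
    # same database number here, so worker and dashboard see the same jobs
    - REDIS_URL=redis://redis:6379/0
  depends_on:
    - redis

This way the worker, the dispatching server, and the dashboard all talk to the same Redis database instead of potentially different defaults.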

Subscriber not getting the message in Redis

I am using Docker and Redis to learn how multiple containers work in Docker. Below is a simple Flask application with publish and subscribe code; however, I don't see any message received by the subscriber.
import redis
from flask import Flask

app = Flask(__name__)
client = redis.Redis(host="redis-server", decode_responses=True)

def event_handler(msg):
    print("Handler", msg, flush=True)

@app.route("/")
def index():
    print("Request received", flush=True)
    print(client.ping(), flush=True)
    client.publish("insert", "This is a test 1")
    client.publish("insert", "This is a test 2")
    client.publish("insert", "This is a test 3")
    ps = client.pubsub()
    ps.subscribe("insert",)
    print(ps.get_message(), flush=True)
    print(ps.get_message(), flush=True)
    print(ps.get_message(), flush=True)
    return "Hello World"

if __name__ == '__main__':
    app.run(host="0.0.0.0", port="5000")
And below is my docker-compose:
version: "3"
services:
redis-server:
image: "redis"
python-app:
build: .
ports:
- "4001:5000"
And my Dockerfile:
# Specify the base image
FROM python:alpine
# specify the working directory
WORKDIR /usr/app
# copy the requirements file
COPY ./requirements.txt ./
RUN pip install -r requirements.txt
COPY ./ ./
CMD ["python","./app.py"]
And below is the output I am getting.
Can someone please help me identify what I am doing wrong? Thanks!
I was able to fix the issue by creating two separate scripts, one for the publisher and one for the subscriber, and starting the subscriber first.
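The underlying reason is that Redis Pub/Sub does not buffer: a message published before a subscription exists is simply dropped, which is why the single-script version above saw nothing. A minimal sketch of the split (file names are illustrative; same redis-server host as above):

# subscriber.py -- start this one first
import redis

client = redis.Redis(host="redis-server", decode_responses=True)
ps = client.pubsub()
ps.subscribe("insert")
for msg in ps.listen():  # blocks, yielding messages as they arrive
    print("Handler", msg, flush=True)

# publisher.py -- run after the subscriber is listening
import redis

client = redis.Redis(host="redis-server", decode_responses=True)
client.publish("insert", "This is a test 1")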

Telegram bot API setWebhook not working with Google App Engine

My app.yaml file is as follows:
runtime: python
env: flex
entrypoint: gunicorn -b :8443 main:app
threadsafe: true
runtime_config:
  python_version: 2
So, when I run this Python script on GAE (having, of course, deleted the previous webhook), the webhook doesn't get set up. I can't figure out what I did wrong.
import sys
import os
import time
from flask import Flask, request
import telegram

# CONFIG
TOKEN = '<token>'
HOST = 'example.appspot.com'  # Same FQDN used when generating SSL Cert
PORT = 8443
CERT = "certificate.pem"
CERT_KEY = "key.pem"

bot = telegram.Bot(TOKEN)
app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello World!'

@app.route('/' + TOKEN, methods=['POST', 'GET'])
def webhook():
    update = telegram.Update.de_json(request.get_json(force=True), bot)
    chat_id = update.message.chat.id
    bot.sendMessage(chat_id=chat_id, text='Hello, there')
    return 'OK'

def setwebhook():
    bot.setWebhook(url="https://%s:%s/%s" % (HOST, PORT, TOKEN), certificate=open(CERT, 'rb'))

if __name__ == '__main__':
    context = (CERT, CERT_KEY)
    setwebhook()
    time.sleep(5)
    app.run(host='0.0.0.0', port=PORT, ssl_context=context, debug=True)
I thought there might be an issue with the SSL certificates, but if I just do this without running the Python code, everything works out fine:
curl -F "url=https://example.appspot.com:8443/<token>" -F "certificate=@certificate.pem" https://api.telegram.org/bot<token>/setWebhook
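For comparison, here is the working curl call translated into Python (a sketch using the requests library, which is not part of the original code; it sends the same multipart form to the Bot API):

import requests

TOKEN = '<token>'
with open("certificate.pem", "rb") as cert:
    resp = requests.post(
        "https://api.telegram.org/bot%s/setWebhook" % TOKEN,
        data={"url": "https://example.appspot.com:8443/%s" % TOKEN},
        # same multipart file upload as curl's -F "certificate=@certificate.pem"
        files={"certificate": cert},
    )
print(resp.json())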

How to get arrived timestamp of a request in flask

I have an ordinary Flask application with just one thread to process requests. Many requests arrive at the same time and queue up, waiting to be processed. How can I get the waiting time in the queue for each request?
from flask import Flask, g
import time

app = Flask(__name__)

@app.before_request
def before_request():
    g.start = time.time()
    g.end = None

@app.teardown_request
def teardown_request(exc):
    g.end = time.time()
    print(g.end - g.start)

@app.route('/', methods=['POST'])
def serve_run():
    pass

if __name__ == '__main__':
    app.debug = True
    app.run()
There is no way to do that using Flask's debug server in single-threaded mode (which is what your example code uses). That's because by default, the Flask debug server merely inherits from Python's standard HTTPServer, which is single-threaded. (And the underlying call to select.select() does not return a timestamp.)
I just have one thread to process requests.
OK, but would it suffice to spawn multiple threads, but prevent them from doing "real" work in parallel? If so, you might try app.run(..., threaded=True), to allow the requests to start immediately (in their own thread). After the start timestamp is recorded, use a threading.Lock to force the requests to execute serially.
Another option is to use a different WSGI server (not the Flask debug server). I suspect there's a way to achieve what you want using GUnicorn, configured with asynchronous workers in a single thread.
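A sketch of that threaded=True idea (illustrative, not taken from the question's code): each request records its arrival time immediately in its own thread, and a threading.Lock then forces the actual work to run one request at a time:

import time
import threading
from flask import Flask, g

app = Flask(__name__)
work_lock = threading.Lock()

@app.before_request
def record_arrival():
    # runs as soon as the request's thread starts
    g.arrived = time.time()

@app.route('/', methods=['POST'])
def serve_run():
    with work_lock:  # serialize the "real" work across threads
        waited = time.time() - g.arrived
        # ... do the single-threaded processing here ...
        return "waited %.3f seconds in queue" % waited

if __name__ == '__main__':
    app.run(threaded=True)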
You can do something like this:
from flask import Flask, current_app, jsonify
import time

app = Flask(__name__)

@app.before_request
def before_request():
    Flask.custom_profiler = {"start": time.time()}

@app.after_request
def after_request(response):
    current_app.custom_profiler["end"] = time.time()
    print(current_app.custom_profiler)
    print(f"""execution time: {current_app.custom_profiler["end"] - current_app.custom_profiler["start"]}""")
    return response

@app.route('/', methods=['GET'])
def main():
    return jsonify({
        "message": "Hello world"
    })

if __name__ == '__main__':
    app.run()
And test it like this:
→ curl http://localhost:5000
{"message":"Hello world"}
Flask output:
→ python main.py
* Serving Flask app "main" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
{'start': 1622960256.215391, 'end': 1622960256.215549}
execution time: 0.00015807151794433594
127.0.0.1 - - [06/Jun/2021 13:17:36] "GET / HTTP/1.1" 200 -
