As the title says, I have a question about the architecture and interaction of a client, a NestJS backend, and a Python microservice. I need to develop a recognition service: a client application sends an HTTP request with an image to the NestJS backend, the backend contacts a third-party Python microservice that recognizes the text in the image (which can take a long time), and the client application should eventually receive the result.

What is the best way to implement this? My idea is to connect NestJS to the Python microservice via RabbitMQ: the client sends a POST HTTP request to the NestJS backend; the backend sends a "create task" message via RPC to the Python microservice; the microservice puts the task on a Redis RQ queue, returns the task id to the NestJS backend, and starts the long-running job; and the NestJS backend returns the task id to the client. After that, the client can send GET HTTP requests to the NestJS backend at some interval; the backend forwards each one as a "get task status" message to the microservice and returns the status to the client.

Is this a good way to do it, or is it possible to optimize this process or implement it more competently?
I think you're on the right track here.
Send the image to Nest via HTTP - yes.
Post the job to a Redis queue - yes; use NestJS's built-in queue handling (see the docs), which will also make it easier to consume the result of the job.
Instead of having your client poll for a result, check out Server-Sent Events.
Server-Sent Events are intended for exactly this use case.
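On the Python side, a minimal sketch of the queue part with Redis RQ could look like this (the function, queue, and host names are illustrative, not from your project):

    import time
    from redis import Redis
    from rq import Queue
    from rq.job import Job

    redis_conn = Redis()
    queue = Queue("recognition", connection=redis_conn)

    def recognize_text(image_bytes):
        # Placeholder for the long-running OCR work.
        time.sleep(30)
        return "recognized text"

    # Called when the "create task" RPC message arrives from NestJS:
    def create_task(image_bytes):
        job = queue.enqueue(recognize_text, image_bytes)
        return job.get_id()  # hand this id back to NestJS (and the client)

    # Called when NestJS asks for the status of a task:
    def task_status(job_id):
        job = Job.fetch(job_id, connection=redis_conn)
        # get_status() returns e.g. "queued", "started", "finished", "failed"
        return {"status": job.get_status(), "result": job.result}

Whether the client polls or you push updates over Server-Sent Events, the RQ job id is the handle both sides share.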
I have a Python application that runs continuously to monitor a system for anomalies. This project has a database connection to store information and a set of components that are in charge of the different monitoring tasks. Each component can be paused and resumed, and each has in-memory state (status, uptime, last anomaly detected, detection parameters...).
The question is: how can I expose the operations to pause and resume a component (or even change its detection parameters) to a web client (for example a React webapp)? All the options I can think of have problems:
Integrate a Flask API: the fundamental idea of a REST API is that it should be stateless, but my backend has state (the components and their status).
A separate Flask API. Here my problem is how to communicate between the Flask API and the backend. I have thought about a message queue: the client sends a request to the API, the API creates an event such as "start component 2" and adds it to the queue, and the backend receives the event. But with this approach the web client cannot receive confirmation unless the request waits for another event from the backend with the result of the operation, which makes the communication synchronous.
Maybe there is something I'm missing and the solution is easier than all of this.
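To make the confirmation problem in the second option concrete, here is a minimal sketch assuming Redis as the broker (the route, queue name, and component API are hypothetical):

    import json
    import redis
    from flask import Flask, jsonify

    app = Flask(__name__)
    r = redis.Redis()

    @app.route("/components/<int:cid>/pause", methods=["POST"])
    def pause_component(cid):
        # Fire-and-forget: the API only enqueues the event, so it cannot
        # confirm that the monitoring backend actually paused the component.
        r.rpush("commands", json.dumps({"action": "pause", "component": cid}))
        return jsonify({"queued": True}), 202

    # In the monitoring backend process:
    def command_loop(components):
        while True:
            _, raw = r.blpop("commands")  # blocks until an event arrives
            event = json.loads(raw)
            if event["action"] == "pause":
                components[event["component"]].pause()

As described above, getting a real confirmation back means the API request has to wait for a reply event from the backend (e.g. matched by a correlation id), which makes the exchange synchronous again.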
I am going to use Kafka as a message broker in my application, which is written entirely in Python. For one part of this application (login and authentication), I need to implement a request-reply messaging system; in other words, the producer needs to get the response to the produced message from the consumer, synchronously.
Is it feasible using Kafka and its Python libraries (kafka-python, ...) ?
I'm facing the same issue (request-reply for an HTTP hit in my case).
My first bet was (100% Python):
start a consumer thread,
publish the request message (including a request_id)
join the consumer thread
get the answer from the consumer thread
The consumer thread subscribes to the reply topic (seeked to the end) and deals with received messages until it finds the request_id (modulo a timeout).
While this works for basic testing, unfortunately creating a KafkaConsumer object is a slow process (~300 ms), so it's not an option for a system with massive traffic.
In addition, if your system deals with parallel request-reply (for example, a multi-threaded web server), you'll need a KafkaConsumer dedicated to each request_id (basically by using the request_id as the consumer group) to avoid having the reply to a request published by thread A consumed (and ignored) by thread B.
So you can't recycle your KafkaConsumer here and have to pay the creation time for each request (on top of the processing time on the backend).
If your request-reply processing is not parallel, you can try to keep the KafkaConsumer object alive and reuse it across the threads started to fetch answers.
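For reference, a rough sketch of that first approach with kafka-python (the topic names and broker address are illustrative):

    import json
    import threading
    import uuid
    from kafka import KafkaConsumer, KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode(),
    )

    def request_reply(payload, timeout_s=10):
        request_id = str(uuid.uuid4())
        result = {}

        def consume():
            # One consumer per request; creating it costs ~300 ms.
            consumer = KafkaConsumer(
                "reply-topic",
                bootstrap_servers="localhost:9092",
                group_id=request_id,          # isolate replies per request
                auto_offset_reset="latest",   # start from the end of the topic
                consumer_timeout_ms=timeout_s * 1000,
            )
            for msg in consumer:
                reply = json.loads(msg.value)
                if reply.get("request_id") == request_id:
                    result["reply"] = reply
                    break
            consumer.close()

        t = threading.Thread(target=consume)
        t.start()
        producer.send("request-topic", {"request_id": request_id, "payload": payload})
        producer.flush()
        t.join()
        return result.get("reply")  # None on timeout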
The only solution I can see at this point is to use a DB (relational/noSQL):
the requestor stores the request_id in a DB (as local as possible) and publishes the request to Kafka
the requestor polls the DB until it finds the answer for that request_id
in parallel, a consumer process receives messages from the reply topic and stores the results in the DB
But I don't like polling... it will generate a heavy load on the DB in a high-traffic system.
My 2 cents.
I have a Django application acting as a load balancer. It receives tasks and forwards them to an available server. All communication is handled as HTTP requests, i.e. Django posts an HTTP request to an external server, and the external server posts an HTTP request back to Django upon task completion. Unfortunately there's no way to check whether an external server is busy or not, so I need to wait for its notification.
As Django processes requests asynchronously, I need to build some synchronous task queue that will monitor free tasks and free servers, send tasks to all available servers, and then wait until any of them reports back. I tried using Celery, but I'm not sure how to "wait" for a server to report back.
I'm working with the django-websocket-redis lib, which allows establishing websockets over uWSGI in a separate Django loop.
From the documentation I understand well how to send data from the server through websockets, but I don't understand how to receive it.
Basically, I have a client, and I want to periodically send a status from the client to the server. I don't understand what I need to do to handle receiving messages from the client on the server side. What URL should I use on the client?
You can achieve that by using periodic Ajax calls from the client to the server. From the documentation:
A client wishing to trigger events on the server side, shall use XMLHttpRequests (Ajax), as they are much more suitable, rather than messages sent via Websockets. The main purpose for Websockets is to communicate asynchronously from the server to the client.
Unfortunately, I was unable to find a way to achieve it using just websocket messages.
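For the Ajax endpoint itself, a minimal Django view could look like this; the URL, facility name, and rebroadcast step are illustrative, but RedisPublisher and RedisMessage are the ws4redis classes used for server-to-client messages:

    # views.py -- wire this up to e.g. /client-status/ in urls.py
    from django.http import HttpResponse
    from django.views.decorators.csrf import csrf_exempt
    from ws4redis.publisher import RedisPublisher
    from ws4redis.redis_store import RedisMessage

    @csrf_exempt  # for brevity only; use a CSRF token in real code
    def client_status(request):
        status = request.POST.get("status", "")
        # Handle the client's status here; optionally rebroadcast it to all
        # websocket subscribers of the "status" facility:
        RedisPublisher(facility="status", broadcast=True).publish_message(
            RedisMessage(status)
        )
        return HttpResponse("OK")

The client then calls this URL periodically with XMLHttpRequest instead of sending websocket messages upstream.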
I need to implement a chat application for my web service (written in Django + Django REST framework). After doing some Google searching, I found that the available Django chat applications are all deprecated and no longer supported, and all the DIY (do-it-yourself) solutions I found use the Tornado or Twisted frameworks.
So, my question is: is it OK to make a Django-only, synchronous chat application? Do I need to use an asynchronous framework? I have very little experience in backend programming, so I want to keep everything as simple as possible.
Django, like many other web frameworks, is constructed around the concept of receiving an HTTP request from a web client, processing it, and sending a response. Breaking that flow down (simplified for the sake of clarity):
The remote client opens a TCP connection with your Django server.
The client sends an HTTP request to the server, with a path, some headers, and possibly a body.
The server sends an HTTP response.
The connection is closed.
The server goes back to waiting for a new connection.
A chat server, if it needs to be somewhat real-time, needs to be different: it needs to maintain many simultaneous open connections with connected clients, so that when new messages are published on a channel, the appropriate clients are notified accordingly.
A modern way of implementing that is using WebSockets. The communication flow between the client and server starts with an HTTP request, like the one described above, but the client sends a special Upgrade HTTP request to the server, asking for the session to switch over from a simple request/response paradigm to a persistent, "full-duplex" communication model, where both the client and server can send messages at any time, in both directions.
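For illustration, the handshake looks roughly like this (the key and accept values are the sample ones from RFC 6455):

    GET /chat HTTP/1.1
    Host: example.com
    Upgrade: websocket
    Connection: Upgrade
    Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
    Sec-WebSocket-Version: 13

to which a server that supports WebSockets answers:

    HTTP/1.1 101 Switching Protocols
    Upgrade: websocket
    Connection: Upgrade
    Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=

and from that point on, the TCP connection stays open and carries WebSocket frames in both directions.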
The fact that the connections with multiple simultaneous clients need to be persistent means you can't have a simple execution model where the server takes care of a single request at a time, which is usually what happens in what you call synchronous servers. Tornado and Twisted have a different model for doing networking: they use non-blocking I/O and an event loop, so that many connections can be left open and taken care of simultaneously by a single process, which makes a chat service possible.
Synchronous approach nevertheless
Having said that, there are ways to implement a very simple, non-scalable chat service, albeit with noticeable latency:
Clients perform POST requests to your server to send messages to channels.
Clients perform periodical GET requests to the server to ask for any new messages to the channels they're subscribed to. The rate at which they send these requests is basically the refresh rate of the chat app.
With this approach, your server will work significantly harder than if it had an asynchronous execution model for maintaining persistent connections, but it will work.
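As a sketch, the two endpoints above in plain Django could look like this (Message is a hypothetical model with a channel field, a text field, and the default auto-incrementing id):

    # views.py -- minimal polling chat; no auth, no pagination
    from django.http import JsonResponse
    from django.views.decorators.http import require_POST

    from .models import Message  # hypothetical: channel = CharField, text = TextField

    @require_POST
    def post_message(request):
        msg = Message.objects.create(
            channel=request.POST["channel"],
            text=request.POST["text"],
        )
        return JsonResponse({"id": msg.id})

    def poll_messages(request):
        # The client remembers the last id it saw and asks for anything newer.
        last_seen = int(request.GET.get("since", 0))
        messages = Message.objects.filter(
            channel=request.GET["channel"], id__gt=last_seen
        ).order_by("id")
        return JsonResponse(
            {"messages": [{"id": m.id, "text": m.text} for m in messages]}
        )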
If you're going to make a chat app, you'll want to use websockets. They'll make getting updates to all clients participating in a conversation significantly easier, and they'll give you real-time conversations within your app. Having said that, I've never seen websockets used within a synchronous framework.
Is it OK to make a Django-only, synchronous chat application? There are too many unanswered questions for a reasonable answer. How many people will use this chat app? How many people per conversation? How long will this app be around? If you're looking to make something simple for you and a couple of friends, make what you know. If you're getting paid to make this app, use websockets and an asynchronous framework.
You certainly can develop a synchronous chat app; you don't necessarily need an asynchronous framework. But it all comes down to: what do you want your app to do? How many people will use it? Will there be multiple users and multiple chats going on at the same time?