How to get an instant response (one of many) from an API - Python

I need to build the following architecture:
The first block is an application that sends five POST requests to a global API, which is the second block. This API receives all five requests, each carrying data and a number from 1 to 5. The number tells the API where in the local network to send the data. The third block is my local network, which has 5 APIs that receive the data and, after some computation, send back JSON.
Here comes my problem: how do I send each response (let's say a 200 code with specific JSON) back to the first block, through the second, immediately? I don't want the second block to wait for all 5 responses and then send them to the first block in one chunk. How do I make it instant, so that when, for example, the 4th local API finishes its computation, it sends its JSON back to the second block (the global API), which forwards it to the first app?
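One way to get this behaviour, since each of the five POSTs is its own HTTP request, is to make the second block an async relay that answers each request the moment its local API responds, without collecting anything. A minimal sketch, assuming aiohttp and made-up local API addresses and field names (number, data):

```python
from aiohttp import web, ClientSession

# Hypothetical addresses of the five local APIs, keyed by the 1-5 number.
LOCAL_APIS = {n: f"http://192.168.0.{10 + n}:8000/compute" for n in range(1, 6)}

async def relay(request: web.Request) -> web.Response:
    payload = await request.json()
    target = LOCAL_APIS[int(payload["number"])]  # route by the request's number
    async with ClientSession() as session:
        async with session.post(target, json=payload["data"]) as resp:
            body = await resp.json()
            status = resp.status
    # Respond to this POST as soon as its own local API answers;
    # the other four requests are handled independently.
    return web.json_response(body, status=status)

app = web.Application()
app.router.add_post("/relay", relay)

if __name__ == "__main__":
    web.run_app(app, port=8080)
```

Because the server handles each request concurrently, the 4th response goes back as soon as the 4th local API finishes, regardless of the other four.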

Related

Django WebSocket: send text and bytes at the same time

I have a client and a server in my project. On the client side, the user uploads an Excel file, and this file is sent to my server for processing. My Python AI code runs on the server and makes changes to the Excel file. Every time it makes a change, I want to send the updated version to the client so the client can see the change live. For example, say I have 10 functions on the server side, each of which changes some cells in the Excel file (I can get the indexes of the changed cells). As each function finishes, I want to send the changed indexes to the client, and those places will be updated in the table on the client (C++, Qt).
At first I built the server in PHP, but calling my Python AI code externally (shell_exec) was not a good approach. That's why I want to build the server part in Python.
Is Django the best way for me?
What I've tried with Django:
I wanted to send data continuously from the server to the client with a StreamingHttpResponse object, but even though I used iter_content to receive the incoming data on the client, everything arrived at once, only after all the code had finished. When I set the chunk_size of iter_content to a small value, I could get data instantly, but the chunks were cut off mid-word. So I decided to use a WebSocket.
I have a problem with the WebSocket: I can't send text and byte data at the same time.
While the client is uploading the Excel file, I also need to send some text data as a parameter to my server.
Waiting for your help, thank you!
You can send the bytes as a hexadecimal string.
Check this out: binascii.hexlify
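A minimal sketch of that idea, assuming you put the file bytes and the text parameters together in one JSON text frame (the file_hex field name is made up for illustration):

```python
import binascii
import json

def encode_message(text_params: dict, file_bytes: bytes) -> str:
    # Pack the text parameters and the hex-encoded file bytes into one JSON frame.
    payload = dict(text_params)
    payload["file_hex"] = binascii.hexlify(file_bytes).decode("ascii")
    return json.dumps(payload)

def decode_message(raw: str):
    # Recover the text parameters and the original bytes on the other side.
    payload = json.loads(raw)
    file_bytes = binascii.unhexlify(payload.pop("file_hex"))
    return payload, file_bytes
```

Both directions stay text frames, so the parameters and the bytes arrive together; the cost is that hex doubles the payload size (base64 would be more compact).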

Running a Python script concurrently based on a trigger

What would be the best way to solve the following problem with Python?
I have a real-time data stream coming into my object storage from a user application (JSON files being stored in S3 on Amazon).
Upon receiving each JSON file, I have to process the data in the file within a certain time (1 s in this instance) and generate a response that is sent back to the user. The data is processed by a simple Python script.
My issue is that the real-time stream can generate even a few hundred JSON files at the same time from user applications, all of which need to run through my Python script, and I don't know the best way to approach this.
I understand that one way to tackle this would be trigger-based Lambdas that execute a job on top of every file as it is uploaded from the real-time stream, in a serverless environment; however, this option seems quite expensive compared to having a single server instance running and somehow triggering the jobs inside it.
Any advice is appreciated. Thanks.
Serverless can actually be cheaper than using a server. It is much cheaper when there are periods of no activity because you don't need to pay for a server doing nothing.
The hardest part of your requirement is sending the response back to the user. If an object is uploaded to S3, there is no easy way to send back a response, and it isn't even obvious which user sent the file.
You could process the incoming file and then store the response in a similarly-named object, and the client could poll S3 for that response. That requires each upload to use a unique, somehow-generated name.
An alternative would be to send the data to AWS API Gateway, which can trigger an AWS Lambda function and directly return the response to the requester. No server required, automatic scaling.
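A minimal sketch of such a Lambda handler, assuming the API Gateway proxy integration and a placeholder process() standing in for the existing Python script:

```python
import json

def process(data):
    # Placeholder for the real per-file computation.
    return {"result": "ok", "received_keys": list(data)}

def handler(event, context):
    # With the proxy integration, the request body arrives as a JSON string.
    data = json.loads(event["body"])
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(process(data)),
    }
```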
If you wanted to use a server, you'd need a way for the client to send it a message with a reference to the JSON object in S3 (or with the data itself). The server would need to be running a web server that can receive the request, perform the work, and return the response.
Bottom line: Think about the data flow first, rather than the processing.

Python GET request from live/infinite API endpoint

I want to get info from an API via Python where the endpoint updates its info endlessly (it is live, for example live video or live monitoring). So I want to stop the GET request after an interval (for example 1 second), then process the information, and then repeat the cycle.
Any ideas? (Right now I am using the requests module, but I do not know how to stop receiving data and then get at what was received.)
I might be off here, but if you hit an endpoint at a specific time, it should return the JSON at that particular moment. You could then store it and use it in whatever process you have created.
If you want to hit it again, you would just use requests to hit the endpoint.
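A minimal sketch of that polling loop, with a hypothetical URL; the timeout abandons a request that hangs, and the sleep sets the roughly one-second cycle:

```python
import time
import requests

URL = "https://example.com/live/status"  # hypothetical endpoint

while True:
    resp = requests.get(URL, timeout=1)  # don't wait on a hung request
    resp.raise_for_status()
    snapshot = resp.json()
    # ... process the snapshot here ...
    time.sleep(1)  # then repeat the cycle
```

Note this assumes the endpoint returns a complete document per request; for a connection that never closes, you would pass stream=True and stop reading after your interval instead.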

Python: ValueError: Unterminated string starting at: line 1 column 1 while using response.json()

I am using a Node.js Express server that sends data to a Python client. The Express server sends data using the res.send() function; the returned data is huge, and sometimes when I process it on the Python side with response.json() I get this error. Here is my understanding of the error: either the Python side did not read the entire response, or the Node server truncated the data when it reached some maximum size.
Here are my questions:
1. Am I supposed to use res.json() instead of res.send()? To my understanding, res.json() also calls res.send() in turn.
2. Am I supposed to stream the response data in Python? Is that a good option? Is it even an option?
3. Is this a configuration issue? As I understand it, there are ways to configure the nginx server connecting these microservices to limit the amount of data transferred.
4. Is there a way to ensure that the data transmitted from the Node server always contains complete JSON, like parsing the body before sending it?
I am a beginner, and if I have made any mistakes in these questions, please do point them out.
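On question 2: streaming the response in Python is an option with requests. A minimal sketch (the URL is hypothetical) that reads the body in chunks and checks it against Content-Length before parsing, which at least distinguishes a truncated transfer from genuinely malformed JSON:

```python
import json
import requests

resp = requests.get("http://node-server/data", stream=True, timeout=30)  # hypothetical URL
resp.raise_for_status()

# Read the body in chunks instead of all at once.
body = b"".join(resp.iter_content(chunk_size=8192))

# Content-Length is absent for chunked transfers, hence the None check.
expected = resp.headers.get("Content-Length")
if expected is not None and int(expected) != len(body):
    raise IOError(f"truncated response: got {len(body)} of {expected} bytes")

data = json.loads(body)
```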

Can Django send multi-part responses for a single request?

I apologise if this is a daft question. I'm currently writing against a Django API (which I also maintain) and wish under certain circumstances to be able to generate multiple partial responses in the case where a single request yields a large number of objects, rather than sending the entire JSON structure as a single response.
Is there a technique to do this? It needs to follow a standard such that client systems using different request libraries would be able to make use of the functionality.
The issue is that the client system, at the point of asking, does not know the number of objects that will be present in the response.
If this is not possible, then I will have to chain requests on the client end: for example, getting the first 20 objects and, if the response suggests there will be more, requesting the next 20, and so on. This approach is an OK work-around, but each subsequent request relies on the previous response. I'd rather ask once and receive some kind of multi-part response.
As far as I know, no, you can't send a multipart HTTP response, at least not yet. Multipart is only valid in HTTP requests. Why? Because no browser I know of supports it completely:
Firefox 3.5: Renders only the last part, others are ignored.
IE 8: Shows all the content as if it were text/plain, including the boundaries.
Chrome 3: Saves all the content in a single file, nothing is rendered.
Safari 4: Saves all the content in a single file, nothing is rendered.
Opera 10.10: Something weird. Starts rendering the first part as text/plain, and then clears everything. The loading progress bar hangs at 31%.
(Data credits Diego Jancic)
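If you do end up with the chained-request work-around from the question, a minimal Django sketch (the Item model and its as_dict() method are hypothetical) that serves pages of 20 with a has_more flag:

```python
from django.http import JsonResponse
from myapp.models import Item  # hypothetical model

PAGE_SIZE = 20

def items(request):
    offset = int(request.GET.get("offset", 0))
    # Fetch one extra row to learn whether another page exists.
    rows = list(Item.objects.all()[offset:offset + PAGE_SIZE + 1])
    return JsonResponse({
        "objects": [row.as_dict() for row in rows[:PAGE_SIZE]],
        "has_more": len(rows) > PAGE_SIZE,
    })
```

The client repeats the request with offset increased by 20 while has_more is true.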
