I want to know about server communication principles - python

Watching the communication between the server and the client with Fiddler, I saw that a single click in Chrome produced dozens of round trips to the server.
In some cases, data from the first response seems to be included in the second request, and I would like to know how that information is extracted and carried into the next request.
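The usual mechanism is that the client keeps state (cookies) from one response and copies values (tokens, IDs) out of a response body into the next request. A minimal sketch with the standard library, assuming a hypothetical login page and token field name:

```python
import re
import urllib.request
from http.cookiejar import CookieJar

# An opener with a CookieJar automatically re-sends cookies that earlier
# responses set -- one way information from response #1 ends up in request #2.
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(CookieJar()))

def extract_token(html):
    """Pull a hidden-field token (e.g. a CSRF token) out of a response body."""
    match = re.search(r'name="csrf_token"\s+value="([^"]+)"', html)
    if not match:
        raise ValueError("token not found")
    return match.group(1)

# First request: GET the page and read the token from its body.
# (URL and field name are made up for illustration.)
#
#   with opener.open("https://example.com/login") as resp:
#       token = extract_token(resp.read().decode())
#
# Second request: echo the extracted token in the POST data.
#
#   data = urllib.parse.urlencode({"csrf_token": token}).encode()
#   opener.open("https://example.com/login", data=data)
```

Tools like Fiddler show exactly these repeated values (cookies, tokens) if you compare consecutive requests side by side.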

Is there any Instagram Web API for the new version of the site?

You're able to DM in the new version, and I thought there'd be some simple GET and POST requests for that without getting access to the official Instagram API.
I don't want to use bots that emulate the app or similar, because I can get banned for that.
I tried looking at XHR in the Network tab of the dev tools (Google Chrome), but I've never done that before and I'm having some trouble with it. I see requests, headers, and responses (where the messages are), but I can't work out how to do the same with Python, for example.
I'm looking for help with that, or for any ready-made solutions (not necessarily in Python; I think I can port them to Python or just use the language the API was written in).
Edit:
link looks like this (for the inbox page):
https://www.instagram.com/direct_v2/web/inbox/?persistentBadging=true&folder=0&limit=10&thread_message_limit=10
and a ton of headers
Instagram sends a request with a cursor to load the direct-message data in chunks.
The response contains prev_cursor and oldest_cursor.
The oldest_cursor value is the cursor you need to send to fetch the next chunk of messages.
When the prev_cursor value becomes MINCURSOR, you have reached the last chunk, i.e. the first (oldest) messages in the chat history.
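The paging loop described above can be sketched like this; the generator only encodes the cursor-following logic, and fetch_inbox_chunk shows the assumed request shape for the inbox endpoint quoted earlier (you would need the cookies and headers of a logged-in session for it to work against Instagram):

```python
import json
import urllib.parse
import urllib.request

def iter_chunks(fetch, min_cursor="MINCURSOR"):
    """Follow oldest_cursor until prev_cursor signals the first chunk.

    `fetch(cursor)` must return the decoded JSON of one chunk; it is
    called with cursor=None for the initial request.
    """
    cursor = None
    while True:
        chunk = fetch(cursor)
        yield chunk
        if chunk.get("prev_cursor") == min_cursor:
            break                             # oldest messages reached
        cursor = chunk["oldest_cursor"]

def fetch_inbox_chunk(cursor, opener=urllib.request.build_opener()):
    """One request against the inbox endpoint (parameters are assumptions)."""
    params = {"persistentBadging": "true", "folder": "0",
              "limit": "10", "thread_message_limit": "10"}
    if cursor:
        params["cursor"] = cursor
    url = ("https://www.instagram.com/direct_v2/web/inbox/?"
           + urllib.parse.urlencode(params))
    with opener.open(url) as resp:
        return json.load(resp)
```

Separating the loop from the HTTP call makes the cursor logic testable without hitting Instagram at all.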
I have been working on a script to unsend all the messages in Instagram DMs. To delete messages I first need to fetch them, so I have written a function that returns all the messages.
You can look at the repository https://github.com/pishangujeniya/instagram-helper
There is no limit on Instagram API requests for fetching messages. For delete requests, however, Instagram starts sending a 429 response code (Too Many Requests) after 83 messages have been deleted in a single session. One way to continue deleting is to log out and log back in after some time, but that has its own problems: if too many logouts and logins are done, Instagram blocks the account from logging in for a period of time. (In my case I was blocked from logging in for 30 minutes while developing the script.)
Update 20 April, 2020
I updated the script to add a delay between requests, so as to avoid 429 responses, and it is working fine as of now.
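A minimal sketch of that delay-and-back-off idea; delete_fn stands for whatever call performs one unsend and returns the HTTP status code, and the numbers are placeholders, not Instagram's actual limits:

```python
import time

def delete_with_delay(delete_fn, items, delay=2.0, backoff=60.0):
    """Call delete_fn(item) for each item, pausing to avoid 429s.

    delay   -- base pause between consecutive delete requests
    backoff -- longer pause before retrying after a 429 response
    """
    for item in items:
        while delete_fn(item) == 429:      # rate-limited: wait, then retry
            time.sleep(backoff)
        time.sleep(delay)                  # base delay between requests
```

Spacing requests out like this trades speed for not tripping the rate limiter, which beats the logout/login workaround and its lockout risk.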

Send response to all clients with FLASK

I am developing an application with Flask that has a page with a map drawn on it, generated from JSON. When a user makes a change on the map, the change is sent to the server, and the server sends a response to the user who made the request.
What I want is for the response to go to all users connected to the page, so the information is refreshed for everyone, not only for the one who made the request.
You will have to implement some registration mechanism for clients, so that when an update occurs, you iterate over the clients in the registration list and send each of them the new data.
To implement the actual push, you can use WebSockets (best for high throughput and small messages) or server-sent events (a much simpler implementation, mainly because it rides on the HTTP protocol).
There are other approaches using more advanced techniques, but those two are the simplest and most basic.
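A sketch of that registration mechanism in plain Python, with one queue per connected client; the Flask/SSE wiring shown in the comment is an assumption, not a complete app:

```python
import json
import queue

clients = []                            # one Queue per connected client

def register():
    """Called when a client opens the event stream; returns its queue."""
    q = queue.Queue()
    clients.append(q)
    return q

def broadcast(update):
    """Push a map update to every registered client."""
    data = json.dumps(update)
    for q in clients:
        q.put(data)

# In Flask, a server-sent-events endpoint would wrap this roughly like:
#
#   @app.route("/stream")
#   def stream():
#       q = register()
#       def gen():
#           while True:
#               yield f"data: {q.get()}\n\n"    # SSE message framing
#       return Response(gen(), mimetype="text/event-stream")
```

The view that handles a user's map change then calls broadcast() instead of (or in addition to) returning the change only to that user. A real app would also need to unregister queues when clients disconnect.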
What you are searching for is push notification.
I'm guessing one possible option for you is to make your JavaScript code (or HTML) send a request every few minutes to check for new JSON.
That can be done simply with AJAX and an interval.

How can one return offloaded process response from celery?

I work on a web application that generates a PDF and returns the generated file. Previously I handled the PDF generation in the main process. My superior told me that will potentially cause the app to stall, as Django is synchronous.
So he suggested offloading the work to Celery. I tried it and have an idea of how to process it with Celery, but I can't figure out how to return the response to the client.
I can return a response saying it's being processed, but what about the PDF file? Return a URL where it will eventually be available, and have the client poll it every now and then?
Typically this is handled by making the HTTP interaction asynchronous as well.
Instead of returning the PDF, you'd return a link where the PDF will eventually be available, or a processing-status page, and the client can poll that link to retrieve the PDF. Typically this is done by returning an HTTP 202 Accepted response.
Another alternative, suitable in different circumstances, is to push the response over a WebSocket. This may fit if you already have a WebSocket connection in the app, or if you need lower latency than polling can provide. If the processing takes a very long time (e.g. hours), it may be appropriate to ask the user for their email address (or take it from the user's profile) and send an email when the result is ready.
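A minimal sketch of the 202-plus-polling pattern, with a thread pool standing in for Celery (with Celery you would use `task.delay()` and `AsyncResult` instead); all names are made up for illustration:

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)
jobs = {}                                   # job_id -> Future

def submit_pdf_job(render_fn, *args):
    """Start the job; the view returns 202 plus a /status/<job_id> link."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = executor.submit(render_fn, *args)
    return job_id

def job_status(job_id):
    """What the polling endpoint would return for one job."""
    future = jobs[job_id]
    if not future.done():
        return {"state": "PENDING"}         # client keeps polling
    return {"state": "DONE", "result": future.result()}
```

With Celery the shape is the same: `task.delay(...)` gives you a task id, and the status view checks `AsyncResult(task_id).state`, serving the PDF (or a link to it) once the task has succeeded.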

Request processing time in python

I'm trying to test a web application using Selenium with Python. I've written a script to mimic a user: it logs in to the server, generates some reports, and so on. It is working fine.
Now I need to see how much time the server takes to process a specific request.
Is there a way to find that from the same Python code?
Any alternate method is acceptable.
Note:
The server is on the same LAN.
Also, I don't have privileges to do anything on the server side, so anything I do has to be from outside the server.
Any sort of help is appreciated. Thank you
Have you considered the W3C HTTP access-log field time-taken? It records, for every single request, the processing time, at millisecond precision at worst; on some platforms the reported precision is finer. For a web server, an application server with an HTTP access layer, or an enterprise service bus with an HTTP access layer (for SOAP and REST calls) to be fully W3C standards compliant, this value must be available for inclusion in the HTTP access logs.
You will see every single request and the time required to process it, from the first byte received at the server to the last byte sent, minus the final TCP ACK at the end.
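Since the server side is off limits, you can at least time the round trip from the client; note that this measures network plus server time together, not server processing time alone. A self-contained sketch (a local stand-in server replaces the real one):

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def timed_get(url):
    """Return (body, elapsed_seconds) for one GET round trip."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    return body, time.perf_counter() - start

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):                       # trivial stand-in endpoint
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):           # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

body, elapsed = timed_get(f"http://127.0.0.1:{server.server_port}/")
print(f"{elapsed * 1000:.1f} ms")           # round-trip time for one request
server.shutdown()
```

Since the server is on the same LAN, network latency should be small, so the round-trip time is a reasonable upper bound on the server's processing time; averaging over repeated requests smooths out jitter.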

How to handle a heavy request on a server?

I don't know if this is the right place to ask, but I am desperate for an answer.
The problem at hand is not the number of requests but the amount of time a single request takes. For each request, the server has to query about 12 different sources for data, and it can take up to 6 hours to get it all. (Let's leave request timeouts out of this, because this server does not communicate with the client directly: it fetches messages from Kafka and then starts gathering the data from the sources.) I am supposed to come up with a scalable solution. Can anyone help me with this?
The problem doesn't end there:
once the server gets the data, it has to push it to Kafka for further computation using Spark. The Streaming API will be used in that part.
I am open to any web framework or any scaling solution in Python.
