Getting Python HttpServer response instantly

I am using Python's HttpServer on the server side. One GET request takes a long time to respond, and I want to keep the user updated on its current status, e.g. 'Fetching module X. Please wait', 'Fetching module Y. Please wait'.
But the status is not updated on the client side even though I send it between the modules. I have tried flushing the stream, but no luck.
self.wfile.write('Fetching module X. Please wait')
self.wfile.flush()
How can I force the HttpServer to send the information immediately, instead of waiting for the full response to complete?

You can use Python threading:
from threading import Thread

# target is the slow function to call (a placeholder name here)
t = Thread(target=function_to_be_called, args=[request])
t.daemon = False  # non-daemon: the thread keeps running on its own
t.start()
This lets the handler return a response immediately while your function runs in the background.
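A minimal Python 3 sketch of that idea (fetch_modules and the 202 reply are illustrative placeholders, not from the question):
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread
import time

def fetch_modules(path):
    # Hypothetical stand-in for the slow work described in the question.
    time.sleep(10)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Job accepted. The modules are being fetched."
        self.send_response(202)  # 202 Accepted: work continues in the background
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
        # Non-daemon thread (as above), so it keeps running after the handler returns.
        Thread(target=fetch_modules, args=[self.path]).start()

if __name__ == '__main__':
    HTTPServer(('', 8080), Handler).serve_forever()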

I suggest you put the user indication in a header rather than the body. Then you can use streaming to meet your requirement.
NOTE: The following code is based on Python 2; you can switch the http-server-related calls to the Python 3 API if you like.
server.py:
import BaseHTTPServer
import time

class RequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    Page = "Main content here."

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(self.Page)))
        self.send_header("User-Information", "Fetching module X. Please wait")
        self.end_headers()
        time.sleep(10)  # simulate 10 seconds of slow work
        self.wfile.write(self.Page)

if __name__ == '__main__':
    serverAddress = ('', 8080)
    server = BaseHTTPServer.HTTPServer(serverAddress, RequestHandler)
    server.serve_forever()
client.py:
import requests
r = requests.get('http://127.0.0.1:8080', stream=True)
print(r.headers['User-Information'])
print(r.content)
Explanation:
If you use stream=True, the headers are still fetched by the client immediately, so you can show them to the user at once with print(r.headers['User-Information']).
With streaming, however, the body is not transmitted right away; it is delayed until the client requests it via r.content (Response.iter_lines() or Response.iter_content() also work). So print(r.content) takes 10 seconds to show the main content, matching the 10-second sleep in the server code.
Output (the first line is shown to the user at once; the second line appears 10 seconds later):
Fetching module X. Please wait
Main content here.
I've attached the guide for your reference; hope it helps.
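If the progress text must appear in the body rather than a header, chunked transfer encoding is one alternative. A hedged Python 3 sketch (my addition, not part of the answer above), which streams each status line as its own chunk:
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class ChunkedHandler(BaseHTTPRequestHandler):
    protocol_version = 'HTTP/1.1'  # chunked encoding requires HTTP/1.1

    def _send_chunk(self, text):
        data = text.encode()
        # Each chunk is framed as: <hex length>\r\n<data>\r\n
        self.wfile.write(b"%x\r\n%s\r\n" % (len(data), data))
        self.wfile.flush()

    def do_GET(self):
        self.send_response(200)
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        self._send_chunk("Fetching module X. Please wait\n")
        time.sleep(10)  # stand-in for the real work
        self._send_chunk("Main content here.\n")
        self.wfile.write(b"0\r\n\r\n")  # zero-length chunk terminates the response

if __name__ == '__main__':
    HTTPServer(('', 8080), ChunkedHandler).serve_forever()
A client reading with requests.get(url, stream=True) and iter_lines() then sees each line as it arrives instead of waiting for the whole body.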

Related

How to make a request inside a simple mitmproxy script?

Good day,
I am currently trying to figure out a way to make non-blocking requests inside a simple mitmproxy script, but the documentation doesn't seem clear to me at first glance.
I think it's probably easiest if I show my current code and describe my issue below:
from copy import copy
from mitmproxy import http

def request(flow: http.HTTPFlow):
    headers = copy(flow.request.headers)
    headers.update({"Authorization": "<removed>", "Requested-URI": flow.request.pretty_url})
    req = http.HTTPRequest(
        first_line_format="origin_form",
        scheme=flow.request.scheme,
        port=443,
        path="/",
        http_version=flow.request.http_version,
        content=flow.request.content,
        host="my.api.xyz",
        headers=headers,
        method=flow.request.method
    )
    print(req.get_text())
    flow.response = http.HTTPResponse.make(
        200, req.content,
    )
Basically, I would like to intercept any HTTP(S) request made and issue a non-blocking request to an API endpoint at https://my.api.xyz/, which should take all original headers and return a PNG screenshot of the originally requested URL.
However, the code above produces empty content, and the print returns nothing either.
My issue seems related to: mitmproxy http get request in script and Resubmitting a request from a response in mitmproxy, but I still couldn't figure out a proper way of sending requests inside mitmproxy.
The following piece of code probably does what you are looking for:
from copy import copy
from mitmproxy import http
from mitmproxy import ctx
from mitmproxy.addons import clientplayback

def request(flow: http.HTTPFlow):
    ctx.log.info("Inside request")
    # Skip requests we generated ourselves (see the explanation below).
    if hasattr(flow.request, 'is_custom'):
        return
    headers = copy(flow.request.headers)
    headers.update({"Authorization": "<removed>", "Requested-URI": flow.request.pretty_url})
    req = http.HTTPRequest(
        first_line_format="origin_form",
        scheme='http',
        port=8000,
        path="/",
        http_version=flow.request.http_version,
        content=flow.request.content,
        host="localhost",
        headers=headers,
        method=flow.request.method
    )
    req.is_custom = True
    playback = ctx.master.addons.get('clientplayback')
    f = flow.copy()
    f.request = req
    playback.start_replay([f])
It uses the clientplayback addon to send out the request. When this new request is sent, it generates another request event, which would produce an infinite loop. That is the reason for the is_custom attribute added to the request: if the request that triggered this event is one we created ourselves, we don't create a new request from it.

How to fire and forget a HTTP request?

Is it possible to fire a request and not wait for the response at all?
For Python, most internet searches turn up:
asynchronous-requests-with-python-requests
grequests
requests-futures
However, all of the above solutions spawn a new thread and wait for the response on each of those threads. Is it possible to not wait for any response at all, anywhere?
You can run your thread as a daemon; see the code below. If I comment out the line t.daemon = True, the code will wait for the threads to finish before exiting. With daemon set to true, it will simply exit. You can try it with the example below.
import requests
import threading
import time

def get_thread():
    g = requests.get("http://www.google.com")
    time.sleep(2)
    print(g.text[0:100])

if __name__ == '__main__':
    t = threading.Thread(target=get_thread)
    t.daemon = True  # Try commenting this out, running it, and see the difference
    t.start()
    print("Done")
I don't really know what you are trying to achieve by just firing an HTTP request, so I will list some use cases I can think of.
Ignoring the result
If the only thing you want is for your program to feel like it never stops to make a request, you can use a library like aiohttp to make concurrent requests without actually awaiting the responses.
import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        session.get('http://python.org')  # not awaited

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
But how can you know the request was made successfully if you don't check anything?
Ignoring the body
Maybe you want to be very performant and you are worried about losing time reading the body. In that case you can just fire the request, check the status code, and then close the connection.
import http.client

def make_request(url="yahoo.com", timeout=50):
    conn = http.client.HTTPConnection(url, timeout=timeout)
    conn.request("GET", "/")
    res = conn.getresponse()
    print(res.status)
    conn.close()
If you close the connection as I did above, you won't be able to reuse it.
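For completeness, a small sketch of keeping the connection reusable (example.com and the paths are placeholders): http.client requires the previous response to be drained before the connection can be reused.
import http.client

conn = http.client.HTTPConnection("example.com", timeout=50)
for path in ("/", "/about"):
    conn.request("GET", path)
    res = conn.getresponse()
    print(res.status)
    res.read()  # drain the body so the connection can be reused
conn.close()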
The right way
I would recommend awaiting the asynchronous calls with aiohttp so you can add the necessary logic without blocking.
But if you are looking for raw performance, a custom solution with the http library may be necessary. You could also consider very small requests/responses, small timeouts, and compression in your client and server.
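A sketch of that recommendation (the URLs are placeholders): await the calls concurrently and read only the status, so the body is never downloaded in full.
import aiohttp
import asyncio

async def fetch_status(session, url):
    # Awaits the request but leaves the body unread; leaving the
    # context manager releases the connection.
    async with session.get(url) as resp:
        return resp.status

async def main():
    async with aiohttp.ClientSession() as session:
        statuses = await asyncio.gather(
            fetch_status(session, 'http://python.org'),
            fetch_status(session, 'http://example.com'),
        )
        print(statuses)

asyncio.run(main())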

Why is my request in Python not registered by the server?

I am currently writing a script in Python which listens to the serial port of a connected device in order to guess the height of a person passing through a door.
The final idea is to send this information to Piwik, a web analytics package, through an HTTP request.
The code is as follows:
import serial, httplib, uuid

arduino = serial.Serial('/dev/ttyACM0', 9600)
while True:
    data = arduino.readline()
    print data
    conn = httplib.HTTPConnection("my-domain.com")
    conn.request("HEAD","/piwik/piwik.php?idsite=1&rec=1&action_name=Entree-magasin&uid="+str(uuid.uuid4())+"&e_c=entree-magasin&e_a=passage&e_n=taille&e_v="+str(data)+"")
    print conn.request
When I just ask to print the following line:
"/piwik/piwik.php?idsite=1&rec=1&action_name=Entree-magasin&uid="+str(uuid.uuid4())+"&e_c=entree-magasin&e_a=passage&e_n=taille&e_v="+str(data)+""
it works fine. But if I look in the logs of the server hosting my website, the request is not sent.
If I remove the part "&e_c=entree-magasin&e_a=passage&e_n=taille&e_v="+str(data)+", then it works fine and the request is sent.
If I leave that part in and replace +str(data)+ with a hard-coded figure, then the request is sent too.
I don't really see where the problem can be. If anyone can help, that would be great.
After reading your answers and working on it further, I found a way to simplify my code by using the requests library instead, but the result is still the same: I cannot get the str(data) value into my request:
import serial, requests, uuid

arduino = serial.Serial('/dev/ttyACM0', 9600)
while True:
    data = arduino.readline()
    print data
    r = requests.get('http://my-domain.com/piwik/piwik.php?idsite=1&rec=1&action_name=Entree-magasin&uid='+str(uuid.uuid4())+'&e_c=entree-magasin&e_a=passage&e_n=taille&e_v='+str(data)+'')
    print r
Try the code below; the parameters are not part of the header:
import httplib2  # assumption: h below is an httplib2.Http instance
h = httplib2.Http()
resp, content = h.request("http://my-domain.com/piwik/piwik.php?idsite=1&rec=1&action_name=Entree-magasin&uid="+str(uuid.uuid4())+"&e_c=entree-magasin&e_a=passage&e_n=taille&e_v="+str(data), "GET")
I think I figured it out. I tried to do the same thing with Google Analytics instead of Piwik. It works with Google Analytics; the str(data) goes through properly, but for some reason it is not working with Piwik :(
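An aside not from the thread: arduino.readline() keeps the line terminator, and an unencoded newline or space in a URL can silently break a request. requests can percent-encode each value for you via its params argument; a sketch:
import serial, requests, uuid

arduino = serial.Serial('/dev/ttyACM0', 9600)
while True:
    data = arduino.readline().strip()  # readline() keeps the trailing newline
    params = {
        'idsite': 1, 'rec': 1, 'action_name': 'Entree-magasin',
        'uid': str(uuid.uuid4()),
        'e_c': 'entree-magasin', 'e_a': 'passage',
        'e_n': 'taille', 'e_v': data,
    }
    # requests percent-encodes every value, so stray characters in data
    # cannot corrupt the query string.
    r = requests.get('http://my-domain.com/piwik/piwik.php', params=params)
    print(r.status_code)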

Can I set a header with python's SimpleHTTPServer?

I'm using SimpleHTTPServer to test some web pages I'm working on. It works great, but I need to make some cross-domain requests, which requires setting an Access-Control-Allow-Origin header listing the domains the page is allowed to access.
Is there an easy way to set a header with SimpleHTTPServer and serve the original content? The header would be the same on each request.
This is a bit of a hack because it changes end_headers() behavior, but I think it's slightly better than copying and pasting the entire SimpleHTTPServer.py file.
My approach overrides end_headers() in a subclass, and from there calls send_my_headers() followed by the superclass's end_headers().
It's not 1-2 lines either, but it's less than 20; mostly boilerplate.
#!/usr/bin/env python
try:
    from http import server  # Python 3
except ImportError:
    import SimpleHTTPServer as server  # Python 2

class MyHTTPRequestHandler(server.SimpleHTTPRequestHandler):
    def end_headers(self):
        self.send_my_headers()
        server.SimpleHTTPRequestHandler.end_headers(self)

    def send_my_headers(self):
        self.send_header("Access-Control-Allow-Origin", "*")

if __name__ == '__main__':
    server.test(HandlerClass=MyHTTPRequestHandler)
I'd say there's no simple way of doing it, where "simple" means "just add 1-2 lines that write the additional header and keep the existing functionality". The best solution is to subclass SimpleHTTPRequestHandler and re-implement the functionality, with the addition of the new header.
Why there is no simple way of doing this can be seen by looking at the implementation of the SimpleHTTPRequestHandler class in the Python library: http://hg.python.org/cpython/file/19c74cadea95/Lib/http/server.py#l654
Notice the send_head() method, particularly the lines at the end of the method which send the response headers, and the invocation of the end_headers() method. That method writes the headers to the output, together with a blank line which signals the end of all headers and the start of the response body: http://docs.python.org/py3k/library/http.server.html#http.server.BaseHTTPRequestHandler.end_headers
Therefore, it is not possible to subclass SimpleHTTPRequestHandler, invoke the superclass do_GET() method, and then just add another header, because the sending of the headers has already finished by the time the superclass do_GET() call returns. And it has to work like this, because do_GET() has to send the body (the requested file), and to send the body it has to finalize the headers first.
So, again, I think you're stuck with subclassing SimpleHTTPRequestHandler, implementing it exactly as the code in the library (just copy-paste it?), and adding another header before the call to end_headers() in send_head():
...
self.send_header("Last-Modified", self.date_time_string(fs.st_mtime))
# this below is the new header
self.send_header('Access-Control-Allow-Origin', '*')
self.end_headers()
return f
...
# coding: utf-8
import SimpleHTTPServer
import SocketServer

PORT = 9999

def do_GET(self):
    # Note: this replaces the default file-serving do_GET entirely.
    self.send_response(200)
    self.send_header('Access-Control-Allow-Origin', 'http://example.com')
    self.end_headers()

Handler = SimpleHTTPServer.SimpleHTTPRequestHandler
Handler.do_GET = do_GET
httpd = SocketServer.TCPServer(("", PORT), Handler)
httpd.serve_forever()
While this is an older answer, it's the first result in Google...
Basically what @iMon0 suggested, which seems correct. An example of do_POST:
import json

def do_POST(self):
    self.send_response(200)  # send_response requires a status code
    self.send_header('Content-type', 'application/json')
    self.send_header('Access-Control-Allow-Origin', '*')
    self.end_headers()
    sTest = {}
    sTest['dummyitem'] = "Just an example of JSON"
    self.wfile.write(json.dumps(sTest))
By doing this, the flow feels correct:
1: You get a request.
2: You apply the headers and response type you want.
3: You send back the data you want, in whatever form you want.
The above example is working fine for me and can be extended further; it's just a bare-bones JSON POST server. So I'll leave this here in case someone needs it, or in case I come back for it in a few months.
This produces a valid JSON response containing only the sTest object, the same as a PHP-generated page/file.
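A quick way to exercise such a handler from Python (hypothetical client; assumes the handler is attached to a server listening on port 9999 as in the earlier snippet):
import requests

r = requests.post('http://127.0.0.1:9999', data='')
print(r.headers['Content-type'])  # application/json
print(r.json())                   # {'dummyitem': 'Just an example of JSON'}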

Python Flask + nginx fcgi - output large response?

I'm using Python Flask + nginx with FCGI.
On some requests, I have to output large responses. Usually those responses are fetched from a socket. Currently I'm building the response like this:
response = []
while True:
    recv = s.recv(1024)
    if not recv: break
    response.append(recv)
s.close()
response = ''.join(response)
return flask.make_response(response, 200, {
    'Content-type': 'binary/octet-stream',
    'Content-length': len(response),
    'Content-transfer-encoding': 'binary',
})
The problem is that I do not actually need the data in memory, and I have a way to determine the exact response length to be fetched from the socket. So I need a good way to send the HTTP headers and then start outputting directly from the socket, instead of collecting everything in memory and then supplying it to nginx (probably via some sort of stream).
I was unable to find a solution to this seemingly common issue. How would that be achieved?
Thank you!
If response in flask.make_response is an iterable, it will be iterated over to produce the response, and each string is written to the output stream on its own.
This means you can also return a generator that yields the output when iterated over. If you know the content length, you can (and should) pass it as a header.
A simple example:
from flask import Flask
app = Flask(__name__)

import sys
import time
import flask

@app.route('/')
def generated_response_example():
    n = 20
    def response_generator():
        for i in range(n):
            print >>sys.stderr, i
            yield "%03d\n" % i
            time.sleep(.2)
    print >>sys.stderr, "returning generator..."
    gen = response_generator()
    # the call to flask.make_response is not really needed as it happens implicitly
    # if you return a tuple.
    return flask.make_response(gen, "200 OK", {'Content-length': 4*n})

if __name__ == '__main__':
    app.run()
If you run this and try it in a browser, you should see a nice incremental count...
(The content type is not set because it seems that if I do, my browser waits until the whole content has been streamed before rendering the page. wget -qO - localhost:5000 doesn't have this problem.)
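Applied back to the original socket case, a hedged sketch (s is the already-connected socket and length the known response size mentioned in the question):
import flask

def stream_socket_response(s, length):
    def read_socket():
        # Yield data straight off the socket instead of buffering it all.
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            yield chunk
        s.close()
    # Passing the generator to Response streams it without holding the
    # whole payload in memory.
    return flask.Response(read_socket(), status=200, headers={
        'Content-type': 'binary/octet-stream',
        'Content-length': str(length),
    })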
