I am putting together an HTTP POST server/client example to send and request data from a client to a server that handles multiple connections. I am using the HTTPServer class from the standard library. The code seems to work fine, but the communication slows down randomly. I have checked the traffic with Wireshark and I can see some strange messages going by.
I have checked different solutions on the internet, but I have not found anything unusual in my code.
The client code is just a simple HTTP POST request.
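For reference, a minimal client sketch (assuming the third-party requests package and placeholder host/port values, since the original client code is not shown):

import requests

# hypothetical endpoint; substitute the real SV_HOST / SV_PORT
resp = requests.post('http://localhost:8000/', data=b'payload')
print(resp.json())  # expects the server's {'ids': [5, 6]} reply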
Server code:
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn

SV_HOST, SV_PORT = 'localhost', 8000  # placeholder values, not given in the original

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        content_length = int(self.headers['Content-Length'])
        body = self.rfile.read(content_length)
        data = {
            'ids': [5, 6]
        }
        self.send_response(200)
        self.send_header('Content-type', 'application/json')
        self.end_headers()
        self.wfile.write(json.dumps(data).encode())

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    """Handle requests in a separate thread."""

test = HTTPServer((SV_HOST, SV_PORT), Handler)
test.timeout = 5
print('Starting server, use <Ctrl-C> to stop')
test.serve_forever()
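(As an aside on the snippet above: ThreadedHTTPServer is defined but never instantiated, so the server as written is single-threaded. If the intent is to actually handle each connection in its own thread, the instantiation would presumably look like the following; this is a guess at the intent, not a confirmed fix for the slowdown.)

test = ThreadedHTTPServer((SV_HOST, SV_PORT), Handler)  # threaded variant
test.serve_forever()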
Here are the Wireshark messages that I see:
I would appreciate it if someone could clarify what I am doing wrong, if anything. Is "TCP segment of a reassembled PDU" normal?
How can I keep my Python HTTP server connected (streaming) to my browser in real time? I want to keep updating an image indefinitely, like Raspberry Pi's MotionEye.
import http.server
import socketserver

class MyHttpRequestHandler(http.server.SimpleHTTPRequestHandler):
    def _set_response(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.send_header("Connection", "keep-alive")
        self.send_header("keep-alive", "timeout=999999, max=99999")
        self.end_headers()

    def do_GET(self):
        #self.send_response(204)
        #self.end_headers()
        if self.path == '/':
            self.path = 'abc.jpg'
        return http.server.SimpleHTTPRequestHandler.do_GET(self)

# Use the above class as the request handler
handler_object = MyHttpRequestHandler

PORT = 8000
my_server = socketserver.TCPServer(("", PORT), handler_object)

# Start the server
my_server.serve_forever()
Just keep writing, as in:
while True:
    self.wfile.write(b"data")
This, however, won't get you into eventstream / server-sent events territory without external helper libraries, as far as I'm aware.
I came across the same issue. I then found by chance (after much debugging) that you need to send line breaks (\r\n or \n\n) to have the packets sent:
import http.server
import time

class MyHttpRequestHandler(http.server.BaseHTTPRequestHandler):
    value = 0
    # One can also set protocol_version = 'HTTP/1.1' here

    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.send_header("Connection", "keep-alive")
        self.end_headers()
        while True:
            self.wfile.write(str(self.value).encode())
            self.wfile.write(b'\r\n')  # Or \n\n, necessary to flush
            self.value += 1
            time.sleep(1)

PORT = 8000
my_server = http.server.HTTPServer(("", PORT), MyHttpRequestHandler)

# Start the server
my_server.serve_forever()
This enables you to send Server-Sent Events (SSE), do HTTP long polling, or even stream JSON/raw HTTP with the http.server library.
As the comment in the code says, you can also set the protocol version to HTTP/1.1 to enable keep-alive by default. If you do so, you will have to specify Content-Length for every response, otherwise the connection will never be terminated.
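A minimal sketch of that HTTP/1.1 variant (the handler name is illustrative, not from the original):

import http.server

class KeepAliveHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = 'HTTP/1.1'  # keep-alive is the default here

    def do_GET(self):
        body = b'hello'
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')
        # Under HTTP/1.1 the client needs Content-Length to know where
        # the response ends, since the connection stays open afterwards.
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)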
It is probably best to combine this with a threaded server to allow concurrent connections (as sketched below), as well as perhaps setting a keepalive on the socket itself.
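For example, a sketch of the threaded variant using http.server.ThreadingHTTPServer (available since Python 3.7):

import http.server

PORT = 8000
# Each request is handled in its own thread, so one long-running
# streaming response does not block other clients.
my_server = http.server.ThreadingHTTPServer(("", PORT), MyHttpRequestHandler)
my_server.serve_forever()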
A little bit of explanation: I have multiple clients and a very simple HTTP server written in Python.
Out of all the clients, one sends a POST request to the HTTP server with 4 values (let's call this client "client alpha"), and all the remaining clients send a POST request just to establish a connection to the server (let's call these "clients beta"). The reason the beta clients send the request is so that they can receive the values that were sent by client alpha.
from http.server import BaseHTTPRequestHandler, HTTPServer
import logging

class S(BaseHTTPRequestHandler):
    def _set_response(self):
        self.send_response(200, 1)
        self.send_header('Content-type', 'int')
        self.end_headers()

    def breakRequest(self, body):
        # Splits a form-encoded body like 'a=1&b=2&c=3&d=4' into its four values
        l = []
        for i in body.split("&"):
            a = i.split("=")
            l.append(a[1])
        return l[0], l[1], l[2], l[3]

    def do_POST(self):
        content_length = int(self.headers['Content-Length'])  # <--- Gets the size of data
        post_data = self.rfile.read(content_length)  # <--- Gets the data itself
        var1, var2, var3, var4 = self.breakRequest(post_data.decode('utf-8'))
        if var1 != 'ard':
            s = "\n" + var1 + "\n" + var2 + "\n" + var3 + "\n" + var4 + "\n"
            logging.info(s)
        logging.info("POST request,\nPath: %s\nHeaders:\n%s\n\nBody:\n%s\n",
                     str(self.path), str(self.headers), post_data.decode('utf-8'))
        self._set_response()
        self.wfile.write("1".format(self.path).encode('utf-8'))

def run(server_class=HTTPServer, handler_class=S, port=6060):
    logging.basicConfig(level=logging.INFO)
    server_address = ('', port)
    httpd = server_class(server_address, handler_class)
    logging.info('Starting httpd...\n')
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass
    httpd.server_close()
    logging.info('Stopping httpd...\n')

if __name__ == '__main__':
    from sys import argv
    if len(argv) == 2:
        run(port=int(argv[1]))
    else:
        run()
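(A side note, not from the original: the hand-rolled breakRequest above parses a form-encoded body; the standard library's urllib.parse.parse_qs does the same thing more robustly.)

from urllib.parse import parse_qs

parsed = parse_qs('a=1&b=2&c=3&d=4')
# parsed == {'a': ['1'], 'b': ['2'], 'c': ['3'], 'd': ['4']}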
What client alpha sends:
Client alpha sends 4 values, which are stored in var1, var2, var3, var4.
What client beta sends:
Client beta sends an HTTP POST request only once, to establish the connection to the server.
What I am trying to achieve:
Once a beta client has established a connection to the server, I am trying to make the server store the values received from client alpha into var1, var2, var3, var4 and then send these values out to all the beta clients at once. Once the values have been sent out, the server should wait, and when new values are received from client alpha, send those new values to the beta clients.
Every time the IP address of a beta client changes, it sends the request again to re-establish the connection.
I am not very good at Python, and what I currently have is all thanks to Google: I kept searching for examples, implementing and testing them, and ended up with Python code that receives and stores the HTTP POST data into variables.
I will highly appreciate your help. Thanks in advance, and sorry for any mistakes.
You're talking about having the server connected to multiple clients and PUSHING data to them when a specific event occurs. You are going to need to look at either WebSockets (https://pypi.org/project/websockets/) or Server-Sent Events (https://medium.com/code-zen/python-generator-and-html-server-sent-events-3cdf14140e56).
Those are the only two methods by which a server can push data to clients: the clients stay connected, so the server knows that they exist.
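For instance, a minimal broadcast sketch using the websockets package (v10+); the port and the alpha/beta message handling are illustrative assumptions, not part of the original question:

import asyncio
import websockets  # third-party: pip install websockets

CLIENTS = set()

async def handler(ws):
    CLIENTS.add(ws)
    try:
        async for message in ws:
            # A message from "alpha" is pushed to every connected client
            websockets.broadcast(CLIENTS, message)
    finally:
        CLIENTS.discard(ws)

async def main():
    async with websockets.serve(handler, "", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())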
I'm building an HTTP file server using Python, so I'm using the built-in http.server package, like the following:
from http.server import HTTPServer, BaseHTTPRequestHandler
from io import BytesIO

class SimpleHTTPRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'Hello, world!')

    def do_POST(self):
        content_length = int(self.headers['Content-Length'])
        body = self.rfile.read(content_length)
        self.send_response(200)
        self.end_headers()
        response = BytesIO()
        response.write(b'This is POST request. ')
        response.write(b'Received: ')
        response.write(body)
        self.wfile.write(response.getvalue())

httpd = HTTPServer(('0.0.0.0', 8000), SimpleHTTPRequestHandler)  # bind to all interfaces
httpd.serve_forever()
I've read that this package is not recommended for production because it only implements basic security checks.
Are there any alternatives or a better way of doing this?
The Python standard library is not intended to provide a high-performance, highly secure HTTP server. It just allows you to build, simply and quickly, an acceptable HTTP server that meets your functional requirements. If you want to use it in production, hide it behind an Apache or nginx reverse proxy; that way, only the hardened proxy is exposed to the internet.
That is what is commonly done even for professional-grade Tomcat application servers.
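As an illustration, a minimal nginx reverse-proxy sketch (assuming the Python server listens on 127.0.0.1:8000; the port and listen address are placeholders):

server {
    listen 80;
    location / {
        # forward everything to the backend Python server
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}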
I am attempting to write a simple TCP server in Twisted which has to perform the following operations in sequence:
A client connects to the server and the KEEPALIVE flag for this connection is set to 1.
The server receives data from the client.
It then computes the response which is a list.
The server then sends each item of the list one by one while waiting for explicit ACKs from the client in between, i.e., after sending a single item from the list, the server waits for an ACK packet from the client and only after receiving the ACK does it proceed to send the rest of the items in the same manner.
The following is the code:
import socket

from twisted.internet.defer import Deferred
from twisted.internet.protocol import Protocol, ServerFactory

class MyProtocol(Protocol):
    def connectionMade(self):
        try:
            self.transport.setTcpKeepAlive(1)
        except AttributeError:
            pass
        self.deferred = Deferred()
        self.deferred.addCallback(self.factory.service.compute_response)
        self.deferred.addCallback(self.send_response)

    def dataReceived(self, data):
        self.fire(data)

    def fire(self, data):
        if self.deferred is not None:
            d, self.deferred = self.deferred, None
            d.callback(data)

    def send_response(self, data):
        for item in data:
            d = Deferred()
            d.addCallback(self.transport.write)
            d.addCallback(self.wait_for_ack)
            d.callback(item)

    def wait_for_ack(self, dummy):
        try:
            self.transport.socket.recv(1024)
        except socket.error as e:
            print(e)

class MyFactory(ServerFactory):
    protocol = MyProtocol

    def __init__(self, service):
        self.service = service
Upon running the server and the client I get the following exception:
Resource temporarily unavailable
I understand the reason for this exception: I'm trying to call a blocking method on a non-blocking socket.
Please help me find a solution to this problem.
There are some problems with your example:
1. You don't define compute_response anywhere (among other things), so I can't run your example. Consider making it an SSCCE (http://sscce.org).
2. You should never call either send or recv on a socket underlying a Twisted transport; let Twisted call those methods for you. In the case of recv, the results are delivered to dataReceived.
3. You can't rely upon dataReceived to receive whole messages; packets may always be arbitrarily fragmented in transit, so you need a framing protocol for encapsulating your messages.
However, since my other answer was so badly botched, I owe you a more thorough explanation of how to set up what you want to do.
As stipulated in your question, your protocol is not completely defined enough to give an answer; you cannot do requests and responses with raw TCP fragments, because your application can't know where they start and end (see point 3 above). So, I've invented a little protocol to serve for this example: it's a line-delimited protocol where the client sends "request foo\n" and the server immediately sends "thinking...\n", computes a response, then sends "response foo\n" and waits for the client to send "ok"; in response, the server will either send the next "response ..." line, or a "done\n" line indicating that it's finished sending responses.
With that as our protocol, I believe the key element of your question is that you cannot "wait for acknowledgement", or for that matter, anything else, in Twisted. What you need to do is implement something along the lines of "when an acknowledgement is received...".
Therefore, when a message is received, we need to identify the type of the message: acknowledgement or request?
if it's a request, we need to compute a response; when the response is finished being computed, we need to enqueue all the elements of the response and send the first one.
if it's an acknowledgement, we need to examine the outgoing queue of responses, and if it has any contents, send the first element of it; otherwise, send "done".
Here's a full, runnable example that implements the protocol I described in that way:
from twisted.internet.protocol import ServerFactory
from twisted.internet.task import deferLater
from twisted.internet import reactor
from twisted.internet.interfaces import ITCPTransport
from twisted.protocols.basic import LineReceiver

class MyProtocol(LineReceiver):
    delimiter = b"\n"

    def connectionMade(self):
        if ITCPTransport.providedBy(self.transport):
            self.transport.setTcpKeepAlive(1)
        self.pendingResponses = []

    def lineReceived(self, line):
        split = line.rstrip(b"\r").split(None, 1)
        command = split[0]
        if command == b"request":
            # requesting a computed response
            payload = split[1]
            self.sendLine(b"thinking...")
            (self.factory.service.computeResponse(payload)
             .addCallback(self.sendResponses))
        elif command == b"ok":
            # acknowledging a response; send the next response
            if self.pendingResponses:
                self.sendOneResponse()
            else:
                self.sendLine(b"done")

    def sendOneResponse(self):
        self.sendLine(b"response " + self.pendingResponses.pop(0))

    def sendResponses(self, listOfResponses):
        self.pendingResponses.extend(listOfResponses)
        self.sendOneResponse()

class MyFactory(ServerFactory):
    protocol = MyProtocol

    def __init__(self, service):
        self.service = service

class MyService(object):
    def computeResponse(self, request):
        return deferLater(
            reactor, 1.0,
            lambda: [request + b" 1", request + b" 2", request + b" 3"]
        )

from twisted.internet.endpoints import StandardIOEndpoint
endpoint = StandardIOEndpoint(reactor)
endpoint.listen(MyFactory(MyService()))
reactor.run()
I've made this runnable on standard I/O so that you can just run it and type into it to get a feel how it works; if you want to run it on an actual network port, just substitute that with a different type of endpoint. Hopefully this answers your question.
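For instance, a sample session, with lines typed into the server's stdin marked by '>' (inferred from the code above, not part of the original answer):

> request foo
thinking...
response foo 1
> ok
response foo 2
> ok
response foo 3
> ok
done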
I have written this HTTP web server in Python which simply sends the reply "Website Coming Soon!" to the browser/client, but I want the web server to send back the URL given by the client instead. For example, if I request
http://localhost:13555/ChessBoard_x16_y16.bmp
then the server should reply with that same URL instead of the "Website Coming Soon!" message.
Please tell me how I can do this.
Server Code:
import sys
import http.server
from http.server import HTTPServer
from http.server import SimpleHTTPRequestHandler
#import usb.core

class MyHandler(SimpleHTTPRequestHandler):  # handles client requests (by me)
    #def init(self, req, client_addr, server):
    #    SimpleHTTPRequestHandler.__init__(self, req, client_addr, server)

    def do_GET(self):
        response = "Website Coming Soon!"
        self.send_response(200)
        self.send_header("Content-type", "application/json;charset=utf-8")
        self.send_header("Content-length", str(len(response)))
        self.end_headers()
        self.wfile.write(response.encode("utf-8"))
        self.wfile.flush()
        print(response)

HandlerClass = MyHandler
Protocol = "HTTP/1.1"
port = 13555
server_address = ('localhost', port)

HandlerClass.protocol_version = Protocol

try:
    httpd = HTTPServer(server_address, MyHandler)
    print("Server Started")
    httpd.serve_forever()
except:
    print('Shutting down server due to some problems!')
    httpd.socket.close()
You can do what you're asking, sort of, but it's a little complicated.
When a client (e.g., a web browser) connects to your web server, it sends a request that looks like this:
GET /ChessBoard_x16_y16.bmp HTTP/1.1
Host: localhost:13555
This assumes your client is using HTTP/1.1, which is likely true of anything you'll find these days. If you expect HTTP/1.0 or earlier clients, life is much more difficult because there is no Host: header.
Using the value of the Host header and the path passed as an argument to the GET request, you can construct a URL that in many cases will match the URL the client was using.
But it won't necessarily match in all cases:
There may be a proxy between the client and your server, in which case both the path and the hostname/port seen by your code may differ from those used by the client (see the note after this list).
There may be packet-manipulation rules in place that modify the destination IP address and/or port, so that the connection seen by your code does not match the parameters used by the client.
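(When the middleman is a well-behaved reverse proxy, it usually forwards the original values in headers such as X-Forwarded-Host and X-Forwarded-For; a hypothetical one-liner, not from the original answer:)

# Prefer the proxy-supplied original host when present
host = self.headers.get('X-Forwarded-Host', self.headers.get('Host', ''))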
In your do_GET method, you can access the request headers via the self.headers attribute and the request path via self.path. For example (note that self.path already includes the leading slash, so no extra '/' is needed in the format string):

def do_GET(self):
    response = 'http://%s%s' % (self.headers['host'], self.path)
    self.send_response(200)
    self.end_headers()
    self.wfile.write(response.encode('utf-8'))
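With that change, requesting the original URL should echo it back (a quick check from the shell, assuming the server from the question is still listening on localhost:13555):

curl http://localhost:13555/ChessBoard_x16_y16.bmp
# -> http://localhost:13555/ChessBoard_x16_y16.bmp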