I'm using HTTPServer to listen for incoming POST requests and serve them. All is working fine with that.
I need to add some periodic tasks to the script (every X seconds: do something). Since the HTTP server takes full control after
def run(server_class=HTTPServer, handler_class=S, port=9999):
    server_address = (ethernetIP, port)
    httpd = server_class(server_address, handler_class)
    httpd.serve_forever()
I wonder whether there's any way to include a check on time.time() as part of:
class S(BaseHTTPRequestHandler):
    def _set_response(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()

    def do_GET(self):
        self._set_response()
        self.wfile.write("GET request for {}".format(self.path).encode('utf-8'))

    def do_POST(self):
        # my stuff here
Any ideas are welcome. Thanks!
Thanks to @rdas for pointing me to the separate-thread solution. I tried schedule, but it didn't work alongside the HTTP server, because I couldn't tell the script to run the pending jobs.
I tried threading instead, running my periodic task as a daemon thread, and it worked! Here's the code structure:
import sys
import time
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

polTime = 60           # how often we touch the file, in seconds
polFile = "myfile.abc"

# this is the daemon thread
def polUpdate():
    while True:
        thisSecond = int(time.time())
        if thisSecond % polTime == 0:  # every X seconds
            f = open(polFile, "w")
            f.close()                  # touch and close
        time.sleep(1)                  # so we don't touch twice in the same second
    return "should never come this way"

# here's the HTTP server starter; S is the handler class from the question,
# and ethernetIP / conf_port are defined elsewhere in the script
def run(server_class=HTTPServer, handler_class=S, port=9999):
    server_address = (ethernetIP, port)
    httpd = server_class(server_address, handler_class)
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass
    httpd.server_close()
    sys.exit(1)

# start the polling thread as a daemon
d = threading.Thread(target=polUpdate, name='Daemon', daemon=True)
d.start()

# run the HTTP server
run(port=conf_port)
The HTTP server doesn't block the thread, so it works great.
By the way, I'm using the file 'touching' as proof of life for the process.
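An alternative sketch of the same idea (my own variation, not part of the original answer): instead of checking time.time() every second, let threading.Event.wait() sleep for the whole interval. It also gives you a clean way to stop the worker if you ever need to.

import threading

stop = threading.Event()

def pol_update(path="myfile.abc", interval=60):
    # wait() returns False on timeout and True once stop.set() is called,
    # so this touches the file every `interval` seconds until stopped
    while not stop.wait(interval):
        open(path, "w").close()  # touch the proof-of-life file

d = threading.Thread(target=pol_update, name='Daemon', daemon=True)
d.start()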
Related
I created an HTTP server. My task is to make sure that each new POST request to the server is processed in a separate thread. I tried using socketserver.ThreadingTCPServer(), but this way I can't control the threads: I need to start a thread only after the previous one has completed, or after 5 minutes have passed. How can I implement the server differently, so that requests are still processed in separate threads, but each new thread does its work only after the previous one has finished? (One possible approach is sketched after the code below.)
from http.server import BaseHTTPRequestHandler
from urllib.parse import urlparse, parse_qs
import socketserver

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        self.send_response(200)
        self.end_headers()
        query_args = parse_qs(urlparse(self.path).query)
        print(query_args)

if __name__ == "__main__":
    # the with-statement closes the server socket on exit
    with socketserver.ThreadingTCPServer(("", 8080), Handler) as httpd:
        httpd.serve_forever()
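A minimal sketch of one possible approach (not from the original thread): keep a threading server so each request still gets its own thread, but serialize the actual work with a lock acquired with a timeout. A new request's work then starts only once the previous one has finished, or after 300 seconds regardless.

from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import urlparse, parse_qs
import threading

work_lock = threading.Lock()

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # wait for the previous job to finish, but at most 5 minutes
        acquired = work_lock.acquire(timeout=300)
        try:
            query_args = parse_qs(urlparse(self.path).query)
            print(query_args)  # the actual per-request work goes here
            self.send_response(200)
            self.end_headers()
        finally:
            if acquired:
                work_lock.release()

if __name__ == "__main__":
    with ThreadingHTTPServer(("", 8080), Handler) as httpd:
        httpd.serve_forever()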
I've been working on an application which contains a small, simple HTTP server to handle occasional POST requests. The server and all the functionality around it work fine, but the log output suggests my code is being run multiple times: one extra time for each request the HTTP server has already handled.
import json
import socket
from http.server import HTTPServer, BaseHTTPRequestHandler
# Util and SpawnSlave come from elsewhere in the project

class HttpApp:
    def __init__(self, host="localhost", port=25565):
        logging = Util.configure_logging(__name__)
        server_address = (host, port)
        httpd = HTTPServer(server_address, ServerObject)
        logging.info('Starting httpd...\n')
        try:
            httpd.serve_forever()
        except KeyboardInterrupt:
            pass
        httpd.server_close()
        logging.info('Stopping httpd...\n')

class ServerObject(BaseHTTPRequestHandler):
    def _set_response(self):
        self.send_response(200)
        self.send_header('Content-type', 'application/json')
        self.end_headers()

    def do_GET(self):
        print("GET request,\nPath: %s\nHeaders:\n%s\n" % (str(self.path), str(self.headers)))
        self._set_response()
        self.wfile.write("GET request for {}".format(self.path).encode('utf-8'))

    def do_POST(self):
        content_length = int(self.headers['Content-Length'])
        content_type = str(self.headers['Content-Type'])
        post_data = self.rfile.read(content_length)
        if content_type == "application/json":
            parsed_data = json.loads(post_data.decode('utf-8'))
        else:
            print("Bad request!")
            self._set_response()
            self.wfile.write(json.dumps({"Response": "Bad Request"}).encode('utf-8'))
            return  # without this, parsed_data is undefined below

        print("POST request,\nPath: %s\nHeaders:\n%s\n\nBody:\n%s\n" %
              (str(self.path), str(self.headers), parsed_data))
        print("Parsed Params: %s" % parsed_data)

        def free_port():
            # ask the OS for a free ephemeral port, then release it
            free_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            free_socket.bind(('0.0.0.0', 0))
            free_socket.listen(5)
            port = free_socket.getsockname()[1]
            free_socket.close()
            return port

        rand_port = free_port()
        SpawnSlave(category=parsed_data["category"], tag=parsed_data["tag"],
                   filename=parsed_data["filename"], port=rand_port)
        self._set_response()
        self.wfile.write(json.dumps({"port": rand_port}).encode('utf-8'))
A CLI application passes information to HttpApp, which then starts based on that information. Once it receives a connection the first time, it goes through its steps normally and only prints output once. The second time, output is printed twice, and so on. Only POST requests are handled by this server. I have gone over my code a few times to make sure I'm not calling it more than once, but I seem to be stumped. For more context, more of this code is available on GitHub, but this is the only relevant piece.
It turns out that this wasn't an issue with my code, but rather with the logger I was using, which was adding multiple console handlers to the same logger, causing output to be repeated. I fixed this in my CLI library.
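For anyone hitting the same symptom, the usual shape of the fix is to guard handler creation; a sketch (configure_logging here is a hypothetical stand-in for the Util.configure_logging call above):

import logging

def configure_logging(name):
    logger = logging.getLogger(name)
    if not logger.handlers:  # don't stack a new console handler on every call
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger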
I have a Python (2.7.13) HTTP server running on Debian, and I want to stop any GET request that takes longer than 10 seconds, but I can't find a solution anywhere.
I already tried all the snippets posted in the following question: How to implement Timeout in BaseHTTPServer.BaseHTTPRequestHandler Python
#!/usr/bin/env python
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import os

class handlr(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        self.wfile.write(os.popen('sleep 20 & echo "this took 20 seconds"').read())

def run():
    server_address = ('127.0.0.1', 8080)
    httpd = HTTPServer(server_address, handlr)
    httpd.serve_forever()

if __name__ == '__main__':
    run()
As a test, I'm running a shell command that takes 20 seconds to execute, so I need the server to stop the request before that.
Put your operation on a background thread, and then wait for the background thread to finish. There isn't a general-purpose, safe way to abort threads, so this implementation unfortunately leaves the function running in the background even after the handler has given up on it.
If you can, consider putting a proxy server (such as nginx) in front of your server and letting it handle timeouts for you, or use a more robust HTTP server implementation that offers this as a configuration option. But the answer below should basically cover it.
#!/usr/bin/env python
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import os
import threading

class handlr(BaseHTTPRequestHandler):
    def do_GET(self):
        result, succeeded = run_with_timeout(
            lambda: os.popen('sleep 20 & echo "this took 20 seconds"').read(),
            timeout=10)  # give up after the 10 seconds the question asks for
        if succeeded:
            self.send_response(200)
            self.send_header('Content-type', 'text/html')
            self.end_headers()
            self.wfile.write(result)
        else:
            self.send_response(500)
            self.send_header('Content-type', 'text/html')
            self.end_headers()
            self.wfile.write('<html><head></head><body>Sad panda</body></html>')

def run():
    server_address = ('127.0.0.1', 8080)
    httpd = HTTPServer(server_address, handlr)
    httpd.serve_forever()

def run_with_timeout(fn, timeout):
    # run fn() on a daemon thread; return (result, True) if it finished
    # within `timeout` seconds, (None, False) otherwise
    lock = threading.Lock()
    result = [None, False]

    def run_callback():
        r = fn()
        with lock:
            result[0] = r
            result[1] = True

    t = threading.Thread(target=run_callback)
    t.daemon = True
    t.start()
    t.join(timeout)
    with lock:
        return tuple(result)

if __name__ == '__main__':
    run()
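For reference, on Python 3 the same pattern is available in the standard library; a sketch using concurrent.futures, where Future.result(timeout=...) raises TimeoutError if the work doesn't finish in time (as above, the worker still runs to completion in the background):

from concurrent.futures import ThreadPoolExecutor, TimeoutError
import subprocess

executor = ThreadPoolExecutor(max_workers=4)

def run_with_timeout_py3(timeout):
    future = executor.submit(
        subprocess.check_output, ['sh', '-c', 'sleep 20; echo "this took 20 seconds"'])
    try:
        return future.result(timeout=timeout), True  # finished in time
    except TimeoutError:
        return None, False  # timed out; the thread keeps running in the background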
I am trying to create a multi-threaded web server in Python, but the requests are handled one by one. After searching for a few hours, I found this link, but the approved answer seems to be incorrect, because the requests there are also handled one by one.
Here is the code:
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
from SocketServer import ThreadingMixIn
import threading
from time import sleep

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        sleep(5)
        message = threading.currentThread().getName()
        self.wfile.write(message)
        self.wfile.write('\n')
        return

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    """Handle requests in a separate thread."""

if __name__ == '__main__':
    server = ThreadedHTTPServer(('localhost', 8080), Handler)
    print 'Starting server, use <Ctrl-C> to stop'
    server.serve_forever()
I added "sleep(5)" for 5 second delay to handle the request. After that I send multiple requests but all the requests are handled one by one and each request took 5 seconds. I am unable to find the reason. Help me.
The key requirement here is to be able to have a 5-second delay between the send_response and the data returned. That means you need streaming; you can't use ThreadingMixIn, gunicorn, or any other such hack.
You need something like this:
import time, socket, threading

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = socket.gethostname()
port = 8000
sock.bind((host, port))
sock.listen(1)

HTTP = "HTTP/1.1 200 OK\nContent-Type: text/html; charset=UTF-8\n\n"

class Listener(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.daemon = True  # stop Python from biting ctrl-C
        self.start()

    def run(self):
        conn, addr = sock.accept()
        conn.send(HTTP)
        # serve up an infinite stream
        i = 0
        while True:
            conn.send("%i " % i)
            time.sleep(0.1)
            i += 1

[Listener() for i in range(100)]
time.sleep(9e9)
I am new to web development. I am creating a web app for my home automation project, in which I need bi-directional communication: any theft/security alert from home will be sent from the server to the client, and if the client wants to control the main gate, he'll send a POST request to the server. I am still confused about which to use, SSE or WebSockets. My question is: is it possible to develop an app that uses both SSE and traditional (long-polling) HTTP requests from the client (GET/POST)? I have tested each of them individually and they work fine, but I am unable to make them work together. I am using Python's BaseHTTPServer. Or, in the end, do I have to move to WebSockets? Any suggestion will be highly appreciated. My code is here:
import time
import os
import threading
import socket
import BaseHTTPServer
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
from chk_connection import is_connected  # local helper module

HOST_NAME = socket.gethostbyname(socket.gethostname())
PORT_NUMBER = 8040  # Maybe set this to 9000.
ajax_count = 0
ajax_count_str = ""
switch = 0
IP_Update_time = 2
keep_alive = 0
connected = False

my_dir = os.getcwd()
class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_HEAD(s):
        s.send_response(200)
        s.send_header("Content-type", "text/html")
        s.end_headers()

    def do_POST(s):
        """Respond to a POST request."""
        global keep_alive
        s.send_response(200)
        s.send_header('Access-Control-Allow-Origin', '*')
        s.send_header("Content-type", "text/html")
        s.end_headers()
        HTTP_request = s.requestline
        if HTTP_request.find('keep_alive') > -1:
            keep_alive += 1
            keep_alive_str = str(keep_alive)
            s.wfile.write(keep_alive_str)  # answer the keep-alive ajax call
    def do_GET(s):
        """Respond to a GET request."""
        global ajax_count
        global my_dir
        global switch
        global ajax_count_str
        global keep_alive
        s.send_response(200)
        #s.send_header('Access-Control-Allow-Origin', '*')
        s.send_header('content-type', 'text/html')
        s.end_headers()
        print s.headers
        HTTP_request = s.requestline
        index_1 = HTTP_request.index("GET /")
        index_2 = HTTP_request.index(" HTTP/1.1")
        file_name = HTTP_request[index_1 + 5:index_2]
        #if HTTP_request.find('L2ON') > -1:
        #    print 'sending SSE'
        #    s.wfile.write('event::'.format(time.time()))
        if HTTP_request.find('GET / HTTP/1.1') > -1:  # was elif, a syntax error after the commented-out if
            print 'send main'
            file1 = open('Index.html', 'r')
            file_read = file1.read()
            s.wfile.write(file_read)
        elif file_name.find("/") == -1:
            for root, dirs, files in os.walk(my_dir):
                for file in files:
                    if HTTP_request.find(file) > -1:
                        file_path = os.path.join(root, file)
                        file1 = open(file_path, 'r')
                        file_read = file1.read()
                        s.wfile.write(file_read)
                        print 'send', file
        elif file_name.find("/") > -1:
            slash_indexes = [n for n in xrange(len(file_name)) if file_name.find('/', n) == n]
            length = len(slash_indexes)
            slash = slash_indexes[length - 1]
            file_path = file_name[0:slash]
            root_dir = (my_dir + '/' + file_path + '/')
            for root, dirs, files in os.walk(root_dir):
                for file in files:
                    if HTTP_request.find(file) > -1:
                        image_path = os.path.join(root, file)
                        image = open(image_path, 'r')
                        image_read = image.read()
                        s.wfile.write(image_read)
                        print 'send', file
        #else:
        #    print 'file not found'
class MyHandler_SSE(BaseHTTPRequestHandler):
    def do_GET(self):
        print 'this is SSE'
        self.send_response(200)
        self.send_header('content-type', 'text/event-stream')
        self.end_headers()
        while True:
            print 'SSE sent'
            self.wfile.write('event: message\nid: 1\ndata: {0}\ndata:\n\n'.format(time.time()))
            time.sleep(2)
class chk_connection(threading.Thread):
    """Checks whether an internet connection is available."""
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        global connected
        while 1:
            # inside the chk_connection module, is_connected() does roughly:
            #   REMOTE_SERVER = "www.google.com"
            #   try:
            #       # resolving the name tells us if there is a DNS listening
            #       host = socket.gethostbyname(REMOTE_SERVER)
            #       # connecting tells us if the host is actually reachable
            #       s = socket.create_connection((host, 80), 2)
            #       return True
            #   except:
            #       pass
            #   return False
            connected = is_connected()
            time.sleep(1)
class server_main(threading.Thread):
    """Runs the HTTP server, re-binding when the host IP changes."""
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        global connected
        server_class = BaseHTTPServer.HTTPServer
        HOST_NAME = socket.gethostbyname(socket.gethostname())
        last_HOST_NAME = HOST_NAME
        httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
        #http_SSE = server_class((HOST_NAME, PORT_NUMBER), MyHandler_SSE)
        print time.asctime(), "Server Starts - %s:%s" % (HOST_NAME, PORT_NUMBER)
        while 1:
            while is_connected():
                httpd._handle_request_noblock()
                #http_SSE._handle_request_noblock()
            time.sleep(1)
            HOST_NAME = socket.gethostbyname(socket.gethostname())
            if HOST_NAME != last_HOST_NAME:
                print 'Serving at new host:', HOST_NAME
                httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
                last_HOST_NAME = HOST_NAME  # remember the new address

def start():
    tx_socket_thread3 = chk_connection()  # checks whether an internet connection is available
    tx_socket_thread3.start()
    tx_socket_thread5 = server_main()
    tx_socket_thread5.start()
    print 's5:', tx_socket_thread5.is_alive()  # was tx_socket_thread1, which doesn't exist

if __name__ == '__main__':
    start()
I might need to restructure the code, but I don't know how. What I want is: if any interrupt happens on the server side, it pushes data to the client, while at the same time it also responds to GET and POST requests from the client. Help please!
It is definitely possible to develop a web application which uses a mixture of normal HTTP traffic, server-sent events and WebSockets. However, the web server classes in the Python standard library are not designed for this purpose, though one can probably make them work with enough hammering. You should install a proper web server and use its facilities.
Examples include:
- uWSGI and server-sent events with WSGI applications
- Tornado and WebSockets
Furthermore:
- Installing Python packages
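As an illustration, a minimal Tornado sketch (assuming pip install tornado; the handler names and routes are made up for the example) that serves ordinary GET/POST traffic and a WebSocket endpoint from the same process:

import tornado.ioloop
import tornado.web
import tornado.websocket

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("plain HTTP response")

    def post(self):
        self.write("handled POST")  # e.g. the 'control the main gate' request

class EventsHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        self.write_message("connected")  # push alerts to the client from here

    def on_message(self, message):
        self.write_message("got: " + message)

app = tornado.web.Application([
    (r"/", MainHandler),
    (r"/events", EventsHandler),
])

if __name__ == "__main__":
    app.listen(8040)
    tornado.ioloop.IOLoop.current().start()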