Is it possible to somehow get around the keepalive limitation of uWSGI? If not, what is the best way to implement persistent connections? I'm using nginx + uWSGI (Python), and I want clients to receive asynchronous updates from the server.
uWSGI supports keep-alive via the --http-keepalive option if you receive requests over HTTP.
/tmp$ cat app.py
def application(env, start_response):
    content = b"Hello World"
    start_response('200 OK', [
        ('Content-Type', 'text/html'),
        ('Content-Length', str(len(content))),
    ])
    return [content]
Run with:
/tmp$ uwsgi --http=:8000 --http-keepalive -w app &> /dev/null
And we can see connect calls via strace:
~$ strace -econnect wrk -d 10 -t 1 -c 1 http://127.0.0.1:8000
connect(3, {sa_family=AF_INET, sin_port=htons(8000), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
Running 10s test @ http://127.0.0.1:8000
1 threads and 1 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 92.32us 56.14us 2.81ms 97.20%
Req/Sec 11.10k 389.34 11.84k 68.32%
111505 requests in 10.10s, 7.98MB read
Requests/sec: 11040.50
Transfer/sec: 808.63KB
+++ exited with 0 +++
See? Only one connection.
You are talking about two different things. If you want persistent connections from your clients to your app, you may want to use one of the async modes (via ugreen, gevent...). That way you will be able to maintain thousands of concurrent connections. Keepalive is a way to route multiple requests over the same connection, which is pretty useless for your purpose. If instead you are referring to persistent connections between nginx and uWSGI, there is currently no way in nginx to achieve such behaviour. You may want to follow this ticket:
http://projects.unbit.it/uwsgi/ticket/66
It is about the fastrouter, but the same change will be applied to the http router too. It is still under heavy development.
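To illustrate the async-mode approach, here is a minimal sketch (my own, not from the original answer; the file name and exact uwsgi invocation are assumptions) of a WSGI app that holds each client connection open and streams updates under uWSGI's gevent loop:

# updates.py -- rough sketch of holding many client connections open
# under uWSGI's gevent loop; run with something like:
#   uwsgi --http :8000 --gevent 1000 --wsgi-file updates.py
import gevent

def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    # Stream a few updates; gevent.sleep yields to the event loop,
    # so thousands of clients can wait concurrently in one process.
    for i in range(5):
        yield ('update %d\n' % i).encode()
        gevent.sleep(1)

Each connection parks in the gevent loop while it waits, which is what makes thousands of concurrent long-lived connections feasible in a single process.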
Related
I'm trying to create a simple Flask server that redirects any HTTP requests to HTTPS. I've created a certificate and key file and registered a before_request hook to see if the request is secure and redirect appropriately, following the advice in this SO answer.
The Flask server responds to HTTPS requests as expected. However, when I send an HTTP request, the before_request hook never gets called and the server hangs forever. If I send the HTTP request from the browser, I see an "ERR_EMPTY_RESPONSE". The server doesn't even respond to HTTPS requests afterwards. No logs are printed either.
Running the app with gunicorn didn't help either. The only difference was that gunicorn is able to detect that the worker is frozen and eventually kills and replaces it. I've also tried using flask-talisman, with the same results.
Below is the code I'm running
# server.py
from flask import Flask, request, redirect

def verify_https():
    if not request.is_secure:
        url = request.url.replace("http://", "https://", 1)
        return redirect(url, 301)

def create_flask_app():
    app = Flask(__name__)
    app.before_request(verify_https)
    app.add_url_rule('/', 'root', lambda: "Hello World")
    return app

if __name__ == '__main__':
    app = create_flask_app()
    app.run(
        host="0.0.0.0",
        port=5000,
        ssl_context=('server.crt', 'server.key')
    )
Running it with either python3.8 server.py or gunicorn --keyfile 'server.key' --certfile 'server.crt' --bind '0.0.0.0:5000' 'server:create_flask_app()' and opening a browser window to localhost:5000 causes the server to hang.
Regarding the freezes: the server is not actually frozen. Flask and gunicorn can serve only one kind of connection (HTTP or HTTPS) per listener, so it isn't hanging; your browser cancelled the request and the worker is simply idle.
If you want to redirect HTTP to HTTPS, I think it is better to put a faster web server such as nginx in front and let it handle the redirect; I would recommend that.
But it's possible to trigger your verify_https function if you run multiple instances of gunicorn at the same time.
I took your example, generated a certificate, and then ran this command in my console (it starts the first instance as a background job; alternatively, run the two in separate terminals):
gunicorn --bind '0.0.0.0:80' 'server:create_flask_app()' & gunicorn --certfile server.crt --keyfile server.key --bind '0.0.0.0:443' 'server:create_flask_app()'
Now Chrome is redirected to the secure page as expected.
Typically servers don't listen for both http and https on the same port. I have a similar requirement for my personal portfolio, but I use nginx to forward http requests (port 80) to https (port 443) and then the https server passes it off to my uwsgi backend, which listens on port 3031. That's probably more complex than you need, but a possible solution. If you go that route I would recommend letsencrypt for your certificate needs. It will set up the certificates AND the nginx.conf for you.
If you don't want to go the full nginx/apache route I think your easiest solution is the one suggested here on that same thread that you linked.
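For completeness, here is a minimal sketch of that kind of approach: a tiny second app whose only job is to redirect plain-HTTP traffic to the HTTPS server, run on port 80 next to the main app (the file name and catch-all route are my illustrative assumptions, not from the linked thread):

# redirect_server.py -- illustrative sketch of a redirect-only app
from flask import Flask, request, redirect

app = Flask(__name__)

# Catch every path and bounce the client to the HTTPS equivalent.
@app.route('/', defaults={'path': ''})
@app.route('/<path:path>')
def to_https(path):
    return redirect(request.url.replace("http://", "https://", 1), code=301)

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=80)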
I am using nginx as a load balancer and proxy server for a Python application hosted by gunicorn. The application server (gunicorn) takes a long time to respond, and there is a single worker. Requests that hit nginx while the application server is busy sit in a waiting queue until they can be processed.
I need a record of the requests sitting in the waiting queue, along with their request bodies, so that I can report their status.
I tried using nginx logs and also some external third-party tools, but was unable to fulfil this requirement.
My request body looks like this:
{
    "BatchNbr": "Batch_80",
    "SharedFolderName": "0.0.0.0/SharedFolder",
    "InputPath": "TestPath/pdfs/20190516",
    "OutputPath": "TestPath/output",
    "DecryptFlag": "False"
}
And I maintain batch status in the following format:
Batch Number   StartTime (IST)       EndTime (IST)         Status
Batch_80       2019-10-16 14:16:39   2019-10-16 14:16:39   QUEUED
Batch_70       2019-10-16 14:13:04   2019-10-16 14:13:04   QUEUED
Batch_71       2019-10-16 14:13:04   2019-10-16 14:13:06   FAILURE
batch_test1    2019-10-16 14:09:22   2019-10-16 14:09:22   SUCCESS
I need the batch status to be QUEUED while the request is sitting in nginx's waiting queue, to change to RUNNING when the request reaches the application server, and to change to SUCCESS when processing completes.
Any leads would be of great help.
There is apparently no way to fetch waiting-request data from nginx. We can only get the number of hits, waiting requests, or active requests; we cannot fetch the request bodies themselves. So I took another approach.
I captured the TCP dump of the machine on the port nginx was running on, which shows the whole request as soon as the user hits the API, and then parsed that data in Python to get the details I required. Here is the code to fetch data from the TCP layer:
import subprocess as sub

# Equivalent shell command:
#   sudo tcpdump -nn -A -s1500 -l -i ens4 port 5000
while True:
    # Restart tcpdump if it ever exits.
    p = sub.Popen(('sudo', 'tcpdump', '-nn', '-A', '-s1500',
                   '-l', '-i', 'ens4', 'port', '5000'),
                  stdout=sub.PIPE)
    for row in iter(p.stdout.readline, b''):
        val = row.rstrip()
        # process the captured line here
This code watches the tcpdump output on port 5000.
You can also check it by executing this command in a Linux terminal:
sudo tcpdump -nn -A -s1500 -l -i ens4 port 5000
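To tie this back to the batch table above, here is a minimal parsing sketch (assuming the ASCII dump of the JSON body arrives on a single captured line and the field names match the example request body; the regex and the status update are illustrative only):

import re
import subprocess as sub

# Hypothetical: extract BatchNbr from the ASCII tcpdump output and mark
# the batch as QUEUED as soon as its request hits nginx.
BATCH_RE = re.compile(rb'"BatchNbr"\s*:\s*"([^"]+)"')

p = sub.Popen(('sudo', 'tcpdump', '-nn', '-A', '-s1500',
               '-l', '-i', 'ens4', 'port', '5000'),
              stdout=sub.PIPE)
for row in iter(p.stdout.readline, b''):
    m = BATCH_RE.search(row)
    if m:
        batch = m.group(1).decode()
        print('QUEUED:', batch)  # replace with your status-table update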
I have a simple flask server defined like so:
import sys
import flask
from flask import request

app = flask.Flask(__name__)
port = 4057

@app.route('/search', methods=['POST'])
def search():
    request.json['query']
    results = ['fake', 'data']
    return flask.jsonify(results)

if __name__ == '__main__':
    app.config['TEMPLATES_AUTO_RELOAD'] = True
    app.run(host='0.0.0.0', port=port, debug=(port != 80))
I have a simple client defined like this:
import json
import requests
headers = {'content-type': 'application/json'}
resp = requests.post('http://localhost:4057/search', json.dumps({'query': 'foo'}), headers=headers)
print resp.content
The client works, but it takes like 3 seconds to complete the request.
curl completes in like half a second:
curl 'http://localhost:4057/search' -H 'Content-Type: application/json' -d '{"query": "foo"}'
Try 127.0.0.1. There may be some odd name resolution rules fumbling the requests.
Ah, OK, this is my work machine. I took a look at /etc/hosts and saw ~200 entries defined that I didn't realize were there.
As mentioned in the comments, this does not explain the odd behavior, since it was not replicated using curl.
I've recently encountered a similar issue with slow connections from requests to 'localhost', and also found that using '127.0.0.1' was much faster.
After some investigation, I found that in my case, the slow connections on Windows were related to urllib3 (the transport library used by requests) attempting an IPv6 connection first, which is refused since the server is listening on IPv4 only. For Windows only, however, there is an unavoidable 1 sec timeout on socket.connect for refused TCP connections, while the OS 'helpfully' retries the connection up to 3 times. (See: Why do failed attempts of Socket.connect take 1 sec on Windows?).
On Linux, socket.connect fails immediately if the connection is refused, so the IPv4 connection can be attempted without any delay. In addition, some software, like curl, supports limiting name resolution to only IPv4 or IPv6. Unfortunately, requests and urllib3 do not support this and do not seem to plan to support it anytime soon (see: https://github.com/psf/requests/issues/1691).
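A quick way to observe the difference described above is to time both host names against the server from the question (a rough sketch):

import time
import requests

# Compare the hostname (which may attempt IPv6 first) against the
# explicit IPv4 loopback address.
for host in ('localhost', '127.0.0.1'):
    start = time.time()
    requests.post('http://%s:4057/search' % host, json={'query': 'foo'})
    print('%s took %.3f s' % (host, time.time() - start))

On an affected Windows machine, the 'localhost' request should show roughly the extra one-second connect penalty, while '127.0.0.1' should not.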
For those trying to figure this out, since no one gave a clear solution here:
I encountered this issue a while ago and noticed that the Flask test server is not concurrent. requests takes forever to retrieve the data from your Flask app unless you make it concurrent. Enabling threaded in your app.run options will do the trick!
app.run(port=5000, debug=True, threaded=True)
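The reason this helps: the Flask development server handles one request at a time by default, so a client that holds its connection open can stall the next request; with threaded=True, each incoming request is handled in its own thread.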
Good day everybody!
We have a simple web application (a REST service) written using good old web.py and hosted on nginx + uWSGI. The issue is that once in a zillion requests it fails with "504 Gateway Timeout" on the exact same handler, which does nothing but a simple SQL SELECT query on an empty database (SQLAlchemy + PostgreSQL), meaning the query returns nothing. Here is the uWSGI config:
uwsgi:
  enable-threads: 1
  threads: 1
  workers: 2
  master: 1
  socket: :8001
  plugin: python
  chown-socket: nginx:nginx
  pythonpath: /usr/lib/python2.6/site-packages/nailgun
  touch-reload: /var/lib/nailgun-uwsgi
  virtualenv: /usr
  module: nailgun.wsgi
  buffer-size: 49180
  listen: 4096
  pidfile: /tmp/nailgun.pid
  logto: /var/log/nailgun/app.log
  mule: 1
  lazy: true
  shared-pyimport: /usr/lib/python2.6/site-packages/nailgun/utils/mule.py
And here is the line from the Python application log:
GET /api/nodes/ => generated 2 bytes in 10199124 msecs
uWSGI version is 2.0.3.
Can anyone please give a hint?
Can I create an HTTP server without using
python -m http.server [port number]
using an old-school style, with sockets and such?
Latest code and errors...
import socketserver

response = """HTTP/1.0 500 Internal Server Error
Content-type: text/html
Invalid Server Error"""

class MyTCPHandler(socketserver.BaseRequestHandler):
    """
    The RequestHandler class for our server.

    It is instantiated once per connection to the server, and must
    override the handle() method to implement communication to the
    client.
    """

    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).strip()
        self.request.sendall(response)

if __name__ == "__main__":
    HOST, PORT = "localhost", 8000
    server = socketserver.TCPServer((HOST, PORT), MyTCPHandler)
    server.serve_forever()
TypeError: 'str' does not support the buffer interface
Yes, you can, but it's a terrible idea -- in fact, even http.server is at best a toy implementation.
You're better off writing whatever webapp you want as a standard WSGI application (most Python web frameworks do that -- Django, Pyramid, Flask...), and serving it with one of the dozens of production-grade HTTP servers that exist for Python.
uWSGI (https://uwsgi-docs.readthedocs.org/en/latest/) is my personal favorite, with Gevent a close second.
If you want more info about how it's done, I recommend that you read the source code to the CherryPy server (http://www.cherrypy.org/). While not as powerful as the aforementioned uWSGI, it's a good reference implementation written in pure Python, that serves WSGI apps through a thread pool.
Sure you can, and servers like Tornado already do it this way.
For a simple test server that can do only HTTP/1.0 GET requests and handle only a single request at a time, it should not be that hard once you understand the basics of the HTTP protocol. But if you care even a bit about performance, it gets complex fast.
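As a rough illustration of such a simple test server (a toy sketch: single-threaded, one request at a time, assuming the whole request arrives in one recv):

import socket

# A bare-bones HTTP/1.0 server built directly on sockets.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('localhost', 8000))
srv.listen(1)

while True:
    conn, addr = srv.accept()
    request = conn.recv(1024)  # toy assumption: whole request in one read
    body = b"Hello World"
    conn.sendall(b"HTTP/1.0 200 OK\r\n"
                 b"Content-Type: text/plain\r\n"
                 b"Content-Length: " + str(len(body)).encode() + b"\r\n"
                 b"\r\n" + body)
    conn.close()

Note that sendall() needs bytes in Python 3, which is exactly what the TypeError in the question above is complaining about.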