Add certificate to Bottle server (Python)

I have been stuck on a problem for some time and can't find the right solution for it.
I have a Python server based on Bottle (Python 3), written with PyCharm. I convert my files with "pyinstaller" into one "exe" to start the server on a fixed PC (Win7). The server works fine for the things it is needed for, but now I want to add more security to it.
I have a signed certificate (not self-signed) and a key which I want to add. I tried to start the server with them, but I'm not sure whether I have to do something else, because the certificate is not shown in the page information and the website is still shown as not secure.
My normal server is running with:
from bottle import run, ...
...
if __name__ == "__main__":
    ...
    run(host=IP, port=PORT)
I have tried several server backends for Bottle, and I ended up with cherrypy as the one that starts my server properly.
The server is running with:
run(host=IP, port=PORT, server='cherrypy', certfile='./static/MyCert.pem', keyfile='./static/key.pem')
It does not work with the current version of cherrypy, so after some searching I downgraded it to ">=3.0.8, <9.0.0".
The server runs, but the website is still not secure, and I don't know whether it simply does not load the certificate or I'm missing something else. I tried things like leaving the "keyfile" out of the code or adding the key to my certificate, but nothing changes.
Another framework I tried was gevent:
from gevent import monkey; monkey.patch_all()
...
if __name__ == "__main__":
    run(host=IP, port=PORT, reloader=False, server='gevent', certfile='./static/MyCert.pem', keyfile='./static/key.pem')
But here I can't reach the website at all. When I try, the terminal asks me for the PEM pass phrase, but I can't enter it (or just don't know how) and I get an error I have never seen before (terminal screenshot omitted).
As in my cherrypy example, I tried combinations of leaving parts of the code out or changing the certificate, but it always ends up here.
It would be nice if someone has a solution to my problem or can give me a hint about what I'm missing or just haven't thought of yet. I would like to stay with cherrypy or another server backend for Bottle, so I don't have to change much of my current code.
Thanks
P.

It sounds to me like you added a passphrase to your private key (that is what the "PEM phrase" prompt is asking for). Regenerate your key without a passphrase, or strip the passphrase from it, and try again.
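If you want to keep the existing key rather than regenerating it, openssl can write an unencrypted copy. A sketch (the file names and the demo passphrase here are placeholders; substitute your own):

```shell
# create a passphrase-protected RSA key, just for demonstration
openssl genrsa -aes256 -passout pass:demo-secret -out key_protected.pem 2048
# write an unencrypted copy that the server can load unattended
openssl rsa -in key_protected.pem -passin pass:demo-secret -out key.pem
```

Note that an unencrypted key should be readable only by the user the server runs as, since anyone who can read it can impersonate the site.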
Additionally, a word of advice. I highly recommend running your bottle/cherrypy server behind nginx in reverse proxy mode. This simplifies your configuration by letting nginx handle the termination of your SSL session, and then your python web server never needs to know anything about a certificate.
Here's a redacted copy of the nginx config we're using to terminate our (self-signed) SSL cert and reverse proxy our cherrypy site running on localhost on port 9000:
server {
    listen example.com:80;
    server_name test.example.com;
    access_log /var/log/nginx/test.example.com.access.log main;
    return 301 https://test.example.com$request_uri;
}

server {
    listen example.com:443;
    access_log /var/log/nginx/test.example.com.access.log main;
    server_name test.example.com;
    root /usr/local/www/test.example.com/html;

    ssl on;
    ssl_certificate /etc/ssl/test.example.com.crt;
    ssl_certificate_key /etc/ssl/test.example.com.key;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # don't use SSLv3 ref: POODLE
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    client_max_body_size 16M;

    # Block access to "hidden" files and directories whose names begin with a
    # period. This includes directories used by version control systems such
    # as Subversion or Git to store control files.
    location ~ (^|/)\. {
        return 403;
    }

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header X-REAL-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Related

Getting client IP address in nginx/flask application

This question seems to be asked often, but I have not found a good resolution to the problem I am having.
I have a flask application that is behind nginx. The app and nginx communicate via uwsgi unix socket. The application has a publicly exposed endpoint that is exposed via Route53. It is also exposed via AWS API Gateway. The reason for this dual exposure is that the application is replacing an existing Lambda solution. With the API Gateway, I can support legacy requests until they can transition to the new publicly exposed endpoint. An additional fact about my application, it is running in a Kubernetes pod, behind a load balancer.
I need to get access to the IP address of the client that made the request so I can use geoIP lookups and exclude collection of data for users outside of US (GDPR) among other things. With two paths into the application, I have two different ways to get to the IP address.
Hitting the endpoint from API Gateway:
When I come in through the legacy path, I get an X-Forwarded-For header, but I am only seeing IP addresses that are registered to Amazon. I was using the first one in the list, but I only see one or two different IP addresses. This is a test environment, and that may be correct, but I don't think so, because when I hit it from my local browser I do not find my IP.
Directly hitting the endpoint:
In this case, there is no data in the X-Forwarded-For list, and the only IP address I can find is request.remote_addr. Unfortunately, this only has the IP address of either the pod or maybe the load balancer; I'm not sure which, as it is in the same class but matches neither. Regardless, it is definitely not the client IP address. I found documentation in nginx that describes available variables, including $realip_remote_addr. However, when I logged that value, it was the same as remote_addr.
The following is the code that I am using to get the remote_addr:
def remote_addr(self, request):
    x_forwarded_for = request.headers.get("X-Forwarded-For")
    if x_forwarded_for:
        # take the left-most entry (the original client); strip whitespace,
        # since entries after a comma usually have a leading space
        ip_list = x_forwarded_for.split(",")
        return ip_list[0].strip()
    else:
        return request.remote_addr
If it is helpful, this is my nginx server config:
server {
    listen 8443 ssl;
    ssl_certificate /etc/certs/cert;
    ssl_certificate_key /etc/certs/key;
    ssl_dhparam /etc/certs/dhparam.pem;
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA HIGH !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
    server_tokens off;

    location = /log {
        limit_except POST {
            deny all;
        }
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi.sock;
    }

    location = /ping {
        limit_except GET {
            deny all;
        }
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi.sock;
    }

    location = /health-check {
        return 200 '{"response": "Healthy"}';
    }

    location /nginx_status {
        stub_status;
    }
}
I have spent over a day trying to sort this out. I am sure that the solution is trivial and is likely caused by lack of knowledge/experience using nginx.
Kubernetes, by default, does not preserve the client address in the target container (https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/).
Therefore, I solved this by editing my Nginx ingress Service configuration, changing the externalTrafficPolicy property to Local.
...
ports:
  - port: 8765
    targetPort: 9376
externalTrafficPolicy: Local
type: LoadBalancer
...
However, be aware that if you change this, you run the risk of unbalanced traffic among the pods:
Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type Services, but risks potentially imbalanced traffic spreading.
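For context, a complete Service manifest with this setting might look like the following sketch (the name, selector, and ports are illustrative, not taken from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress        # hypothetical Service name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app: nginx-ingress           # hypothetical pod label
  ports:
    - port: 8765
      targetPort: 9376
```

With Local, only nodes that actually run a matching pod accept traffic, which is what preserves the source IP but can skew the load distribution.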

Error about created Let's Encrypt in AWS ubuntu server for flask web deployed

I registered a domain (for example, example.com) at godaddy.com. Then I ran the commands to create a Let's Encrypt certificate, but there is an error.
(venv) ubuntu2@212.../microblog$ wget https://dl.eff.org/certbot-auto
(venv) ubuntu2@212.../microblog$ chmod a+x ./certbot-auto
(venv) ubuntu2@212...~/microblog$ ./certbot-auto certonly --webroot -w /home/ubuntu2/microblog -d example.com --email example@aa.com
But there is an error, as follows:
Requesting to rerun ./certbot-auto with root privileges...
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator webroot, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for example.com
Using the webroot path /home/ubuntu2/microblog for all unmatched domains.
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. example.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://example.com/.well-known/acme-challenge/V9B6Dz7gPx7RhyLmpYIlwYUhs1d4rWJF2HlpJbNbjbY: "<!DOCTYPE html><body style="padding:0; margin:0;"><html><body><iframe src="http://mcc.godaddy.com/park/MaO2MaO2LKWaYaOvrt==/fe/M"
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: example.com
Type: unauthorized
Detail: Invalid response from
http://example.com/.well-known/acme-challenge/V9B6Dz7gPx7RhyLmpYIlwYUhs1d4rWJF2HlpJbNbjbY:
"<!DOCTYPE html><body style="padding:0;
margin:0;"><html><body><iframe
src="http://mcc.godaddy.com/park/MaO2MaO2LKWaYaOvrt==/fe/M"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
and the /etc/nginx/sites-enabled/microblog is as follows:
server {
    # listen on port 80 (http)
    listen 80;
    server_name example.com;

    location / {
        # redirect any requests to the same URL but on https
        return 301 https://$host$request_uri;
    }
}

server {
    # listen on port 443 (https)
    listen 443 ssl;
    server_name example.com;

    # location of the self-signed SSL certificate
    #ssl_certificate /home/ubuntu/microblog2/certs/cert.pem;
    #ssl_certificate_key /home/ubuntu/microblog2/certs/key.pem;

    # write access and error logs to /var/log
    access_log /var/log/microblog_access.log;
    error_log /var/log/microblog_error.log;

    location / {
        # forward application requests to the gunicorn server
        proxy_pass http://127.0.0.1:8000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static {
        # handle static files directly, without forwarding to the application
        alias /home/ubuntu2/microblog/static;
        expires 30d;
    }

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /home/ubuntu2/microblog/;
    }

    location = /.well-known/acme-challenge/ {
        return 404;
    }
}
I don't know what is wrong. Could you help me solve this issue? Thanks!
It appears from the error message that example.com is not pointing to your Nginx server address yet. Notice this in the error message:
Detail: Invalid response from
http://example.com/.well-known/acme-challenge/V9B6Dz7gPx7RhyLmpYIlwYUhs1d4rWJF2HlpJbNbjbY:
"<!DOCTYPE html><body style="padding:0;
margin:0;"><html><body><iframe
src="http://mcc.godaddy.com/park/MaO2MaO2LKWaYaOvrt==/fe/M" <<<<<<<<<<<<<<<
The godaddy.com/park part likely means that the domain is parked at GoDaddy and is not reaching your server yet.
The error message provided is also useful for understanding the problem:
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
Check your DNS settings and try again. Also note: it could be related to DNS propagation time, if the DNS change was recent. In that case you need to wait for the change to propagate, which can take up to 24 hours.
P.S. You can confirm this by running curl example.com on the command line. Make sure it returns the home page of your server. When it does, try again.

sendfile() failed (32: Broken pipe) while sending request to upstream nginx 502

I am running Django, uwsgi, ngix server.
My server works fine for GET requests and for smaller POST requests, but when POSTing large requests, nginx returns a 502:
nginx error.log is:
2016/03/01 13:52:19 [error] 29837#0: *1482 sendfile() failed (32: Broken pipe) while sending request to upstream, client: 175.110.112.36, server: server-ip, request: "POST /me/amptemp/ HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock:", host: "servername"
So, in order to find where the real problem is, I ran uwsgi on a different port and checked if any error occurs with the same request. But the request was successful. So, the problem is with nginx or unix socket configuration.
nginx configuration:
# the upstream component nginx needs to connect to
upstream django {
    server unix:///tmp/uwsgi.sock; # for a file socket
    # server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 80;
    # the domain name it will serve for
    server_name 52.25.29.179; # substitute your machine's IP address or FQDN
    charset utf-8;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /home/usman/Projects/trequant/trequant-python/trequant/media; # your Django project's media files - amend as required
    }

    location /static {
        alias /home/usman/Projects/trequant/trequant-python/trequant/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        ######## proxy_pass http://127.0.0.1:8000;
        ######## proxy_set_header Host $host;
        ######## proxy_set_header X-Real-IP $remote_addr;
        ######## proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        uwsgi_pass django;
        uwsgi_read_timeout 600s;
        uwsgi_send_timeout 600s;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    }
}
So, any idea what I am doing wrong? Thank you in advance.
Supposedly, setting post-buffering = 8192 in your uwsgi.ini file will fix this. I got this from a 2.5-year-old answer here, which implies this fix does not address the root cause. Hope it helps!
Another fix is to use a TCP socket instead of a Unix socket in your conf files:
In uwsgi.ini, use something like socket = 127.0.0.1:8000 in the [uwsgi] section instead of:
socket = /tmp/uwsgi.sock
chown-socket = nginx:nginx
chmod-socket = 664
In your nginx.conf file (by the way, in Ubuntu I'm referring to /etc/nginx/conf.d/nginx.conf, NOT the one simply in /etc/nginx/), use uwsgi_pass 127.0.0.1:8000; instead of include uwsgi_params;.
I've posted this as a separate answer because either answer may work, and I'm interested to see which answer helps others the most.
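Putting both suggestions together, a minimal uwsgi.ini might look like the following sketch (the module path and process counts are placeholders for your project, not values from the question):

```ini
[uwsgi]
; hypothetical Django WSGI entry point - adjust to your project
module = mysite.wsgi:application
; TCP socket instead of a Unix socket
socket = 127.0.0.1:8000
; buffer request bodies above this size (bytes) before passing them on
post-buffering = 8192
master = true
processes = 4
```

If you switch to the TCP socket, remember that the chown-socket/chmod-socket lines only apply to Unix sockets and can be dropped.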
In my case this seemed to happen for requests that would have been answered with a 308 redirect. I think my Node backend was sending the response before the POST data was fully received. Updating the client to hit the new endpoint directly (no redirect) may permanently fix my case. Seems promising.
Setting a higher body buffer size, client_body_buffer_size 1M;, will fix this.
References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size
https://www.nginx.com/resources/wiki/start/topics/examples/full/

How to convert http to https using nginx for a local server (self-signed certificate)

I am trying to convert http to https (secure) inside nginx. For that purpose, I created and added a self-signed certificate inside the nginx conf file.
server {
    listen 80;
    server_name www.local.com;
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    listen 443;
    server_name www.local.com;
    ssl on;
    ssl_certificate /etc/ssl/self_signed_certificate.crt;
    ssl_certificate_key /etc/ssl/self_signed_certificate.key;
}
Now, when I open the URL, it redirects from http to https and Chrome shows me the message "The site's security certificate is not trusted!". When I click "Proceed anyway", I get an "SSL Connection Error". It works perfectly over http. How do I run my local website using https?
I am using the uwsgi server and the website is in Python/Django. What mistake did I make, or what else do I have to do? Please help. Thanks in advance (sorry for my English).
You are getting the message because you are using a self-signed certificate; you need an SSL certificate from a trusted provider to avoid the warning. You can get a free SSL cert that is trusted by most major browsers at StartSSL. You can see all of the browsers that are supported here.
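For purely local development, the self-signed route can still work if the certificate is generated correctly and then imported into the browser's or OS's trust store. A typical generation command looks like this (the file names and the CN are placeholders matching the question's config; run it where you have write access, then move the files):

```shell
# generate a private key and a self-signed certificate valid for one year
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout self_signed_certificate.key \
  -out self_signed_certificate.crt \
  -days 365 -subj "/CN=www.local.com"
```

The browser warning itself is expected for any self-signed certificate; importing the .crt into the trust store removes it for your machine only.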

Flask app gives ubiquitous 404 when proxied through nginx

I've got a flask app daemonized via supervisor. I want to proxy_pass a subfolder on the localhost to the flask app. The flask app runs correctly when run directly, however it gives 404 errors when called through the proxy. Here is the config file for nginx:
upstream apiserver {
    server 127.0.0.1:5000;
}

location /api {
    rewrite /api/(.*) /$1 break;
    proxy_pass_header Server;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Scheme $scheme;
    proxy_pass http://apiserver;
    proxy_next_upstream error timeout http_502;
    proxy_buffering off;
}
For instance, when I go to http://127.0.0.1:5000/me, I get a valid response from the app. However when I go to http://127.0.0.1/api/me I get a 404 from the flask app (not nginx). Also, the flask SERVER_NAME variable is set to 127.0.0.1:5000, if that's important.
I'd really appreciate any suggestions; I'm pretty stumped! If there's anything else I need to add, let me know!
I suggest not setting SERVER_NAME.
If SERVER_NAME is set, it will 404 any requests that don't match the value.
Since Flask is handling the request, you could just add a little bit of information to the 404 error to help you understand what's passing through to the application and give you some real feedback about what effect your nginx configuration changes cause.
from flask import request

@app.errorhandler(404)
def page_not_found(error):
    return 'This route does not exist {}'.format(request.url), 404
So when you get a 404 page, it will helpfully tell you exactly what Flask was handling, which should help you to very quickly narrow down your problem.
I ran into the same issue. Flask should really provide more verbose errors here, since the naked 404 isn't very helpful.
In my case, SERVER_NAME was set to my domain name (e.g. example.com).
nginx was forwarding requests without the server name, and as @Zoidberg notes, this caused Flask to trigger a 404.
The solution was to configure nginx to use the same server name as Flask.
In your nginx configuration file (e.g. sites_available or nginx.conf, depending on where you're defining your server):
server {
    listen 80;
    server_name example.com; # this should match Flask SERVER_NAME
    ...etc...
