django alternative to prevent header poison - python

First of all, I don't speak English very well, but anyway...
I know I should use ALLOWED_HOSTS, but I need to allow every host ("*"), and a Host header attack could turn something like:
<script src="mysite.com/js/script.js"></script>
into
<script src="attacker.com/js/script.js"></script>
or
mysite.com/new_password=blabla&token=blabla8b10918gd91d1b0i1
into
attacker.com/new_password=blabla&token=blabla8b10918gd91d1b0i1
But all static files are loaded from a Node.js server at "cdn.mysite.com", and every domain is stored in the database, so I always take the domain from the database to compare against the request's Host header, and I use the domain from the database whenever I send anything to the client:
views.py:
def Index(request):
    url = request.META['HTTP_HOST']
    cf = Config.objects.first()
    if cf.domain == url:
        form = regForm()
        return render(request, 'page/site/home.html', {'form': form})
    elif cf.user_domain == url:
        ur = request.user.is_authenticated
        if ur:
            config = {'data': request.user}
            lojas = json.loads(request.user.user_themes)
            return render(request, 'app/home.html', {"config": config, "lojas": lojas})
        else:
            forml = loginForm()
            return render(request, 'page/users/login/login.html', {'form': forml})
    else:
        return redirect("//" + cf.domain)
Would it still be unsafe to do it this way?

You do not need to reinvent the wheel here. The ALLOWED_HOSTS setting is enough to prevent spoofing of the host name in the request (see Practical HTTP Host header attacks for how host name spoofing works).
ALLOWED_HOSTS lists domains like [YourSite.com, www.YourSite.com, *.YourSite.com] - these are the domain names on which your site should operate (not the domains from which your site may load external scripts).
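For reference, a minimal settings.py sketch (the hostnames are placeholders):
# settings.py -- a minimal sketch; replace the hostnames with your own.
ALLOWED_HOSTS = [
    "yoursite.com",
    "www.yoursite.com",
    ".yoursite.com",  # a leading dot matches all subdomains
]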
And use HTTP/2 instead of HTTP/1.1 on the server, because:
according to the HTTP/1.1 protocol specification, when an absolute URI is specified for a resource, the Host header value is ignored and the host from the resource path is used instead. As a result, even a securely configured web server will accept a request with a spoofed Host value in this case, and a web application that uses HOST instead of SERVER_NAME is vulnerable to this attack.
So if you do use SERVER_NAME, this kind of attack does not affect you.
If you wish to control the possible spoofing of scripts on public CDNs, use:
Content Security Policy HTTP header (see the sketch after this list).
Subresource Integrity (supported in Safari 12).
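For the CSP item above, a minimal sketch of a custom Django middleware that sets the header (the policy string and the CDN host are illustrations only, not a recommended policy):
# A sketch of a Django middleware that adds a Content-Security-Policy header.
# Adjust the sources for your own site and register the class in MIDDLEWARE.
class ContentSecurityPolicyMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        response["Content-Security-Policy"] = (
            "default-src 'self'; "
            "script-src 'self' https://cdn.mysite.com"
        )
        return response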

Forbidden (403) CSRF verification failed. Request aborted. Reason given for failure: Origin checking failed does not match any trusted origins

Help
Reason given for failure:
Origin checking failed - https://praktikum6.jhoncena.repl.co does not match any trusted origins.
In general, this can occur when there is a genuine Cross Site Request Forgery, or when Django’s CSRF mechanism has not been used correctly. For POST forms, you need to ensure:
Your browser is accepting cookies.
The view function passes a request to the template’s render method.
In the template, there is a {% csrf_token %} template tag inside each POST form that targets an internal URL.
If you are not using CsrfViewMiddleware, then you must use csrf_protect on any views that use the csrf_token template tag, as well as those that accept the POST data.
The form has a valid CSRF token. After logging in in another browser tab or hitting the back button after a login, you may need to reload the page with the form, because the token is rotated after a login.
You’re seeing the help section of this page because you have DEBUG = True in your Django settings file. Change that to False, and only the initial error message will be displayed.
You can customize this page using the CSRF_FAILURE_VIEW setting.
Check if you are using Django 4.0. I was using 3.2 and this broke when I upgraded to 4.0.
If you are on 4.0, this was my fix. Add this line to your settings.py. It was not required on 3.2, and now I can't POST a form containing a CSRF token without it.
CSRF_TRUSTED_ORIGINS = ['https://*.mydomain.com','https://*.127.0.0.1']
Review this line for any changes needed, for example if you need to swap out https for http.
Root cause is the addition of origin header checking in 4.0.
https://docs.djangoproject.com/en/4.0/ref/settings/#csrf-trusted-origins
Changed in Django 4.0:
Origin header checking isn’t performed in older versions.
March 2022 update:
If your Django version is 4.x.x:
python -m django --version
# 4.x.x
Then, if the error is as shown below:
Origin checking failed - https://example.com does not
match any trusted origins.
Add this code to "settings.py":
CSRF_TRUSTED_ORIGINS = ['https://example.com']
In your case, you got this error:
Origin checking failed - https://praktikum6.jhoncena.repl.co does not
match any trusted origins.
So, you need to add this code to your "settings.py":
CSRF_TRUSTED_ORIGINS = ['https://praktikum6.jhoncena.repl.co']
Origin and host are the same domain
If, like me, you are getting this error even though the origin and the host are the same domain, it could be because:
You are serving your django app over HTTPS,
Your django app is behind a proxy e.g. Nginx,
You have forgotten to set SECURE_PROXY_SSL_HEADER in your settings.py e.g. SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') and/or
You have forgotten to set the header in your server configuration e.g. proxy_set_header X-Forwarded-Proto https; for Nginx.
In this case:
The origin header from the client's browser will be https://www.example.com due to 1.
request.is_secure() is returning False due to 2, 3 and 4.
Meaning _origin_verified() returns False because of line 285 of django.middleware.csrf (comparison of https://www.example.com to http://www.example.com):
def _origin_verified(self, request):
    request_origin = request.META["HTTP_ORIGIN"]
    try:
        good_host = request.get_host()
    except DisallowedHost:
        pass
    else:
        good_origin = "%s://%s" % (
            "https" if request.is_secure() else "http",
            good_host,
        )
        if request_origin == good_origin:
            return True
Make sure you read the warning in https://docs.djangoproject.com/en/4.0/ref/settings/#secure-proxy-ssl-header before changing this setting though!
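With that caveat in mind, the settings change itself is roughly this sketch (the header name and value must match what your proxy actually sends, as in the Nginx line above):
# settings.py -- a sketch, assuming the proxy sets X-Forwarded-Proto as described above.
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')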
You can also get this error because you are using a container on Proxmox.
If your HTTPS domain name is routed by Proxmox to the container over an internal HTTP connection, you will get this error:
DOMAIN NAME (https) => Proxmox => (http) => Container with Django : CSRF ERROR
I had this error and changed the routing from Proxmox to my container to an internal HTTPS connection (I had to create and sign a certificate on my CT):
DOMAIN NAME (https) => Proxmox => (https) => Container with Django
After that, the CSRF error in Django disappeared.

Flask-Session cookie works on other browsers for IP address & domain, but on Chrome it only works on the IP address

I found a question with this same problem, except it was 7 years old and they had the opposite issue: Chrome worked for their domain, but not the IP. I need this application to work on the domain, not the IP, which is unfortunate.
If I have some basic code like this:
Flask:
import os

from dotenv import load_dotenv
from flask import Flask, session
from flask_cors import CORS, cross_origin
from flask_session import Session

app = Flask(__name__)
load_dotenv()
SECRET_KEY = os.getenv('FLASK_APP_SECRET_KEY')
SESSION_TYPE = 'filesystem'
app.config.from_object(__name__)
Session(app)
CORS(app)

@app.route('/give', methods=['GET'])
@cross_origin(supports_credentials=True)
def user_make():
    session['Hi'] = 'There'
    return 'ye'

@app.route('/take', methods=['GET'])
@cross_origin(supports_credentials=True)
def user_load():
    return session['Hi']
reactjs:
let data = new FormData()
return axios
    .get('12.34.56.78' + '/give', data, {
        headers: {
            "Content-Type": "multipart/form-data",
        },
    }).then(() =>
        axios.get('12.34.56.78' + '/take', data, {
            headers: {
                "Content-Type": "multipart/form-data",
            },
        })
    )
On a server with IP '12.34.56.78' and domain 'example.com':
On Safari, the output is 'There' for both the domain and the IP.
On Chrome, however, the output is 'There' for the IP, but for the domain the output is a KeyError.
Edit:
Some more info:
This is on an AWS EC2 Ubuntu server, which is running the frontend on port 80 and the backend on port 5000. I connected the IP address to the domain name with AWS Route 53... just in case this is relevant. To access the frontend, one can go to the IP or the domain, whereas to access the backend, one must go to ip:5000.
Any more info needed?
Is this fixable?
Thanks!
I think the problem is with how Google Chrome manages cookies: it's the 'SameSite' attribute. Back on July 14th, 2020, Google started gradually rolling out a new browser policy with a few major changes. One treats cookies as SameSite=Lax by default if no SameSite attribute is specified. The other deprecates and removes support for cookies with the SameSite=None attribute that do not also include the Secure attribute. That means any cookie that requests SameSite=None but is not marked Secure is now rejected. The result is that the front end can't contact the back end and the site does not work. To fix it, you just need to make sure that when your _SESSION_ID cookie is created it includes the SameSite=None and Secure attributes.
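In Flask terms that roughly corresponds to the following config sketch (note that Secure cookies are only sent when the site is actually served over HTTPS):
# A sketch of the relevant Flask configuration, assuming the backend is served over HTTPS.
app.config.update(
    SESSION_COOKIE_SAMESITE='None',  # allow the session cookie on cross-site requests
    SESSION_COOKIE_SECURE=True,      # required by Chrome when SameSite=None
)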
P.S.1: Based on Caleb's article. The back end there is Ruby on Rails, but I don't think that matters here.
P.S.2: Before changing anything, try other Chromium-based browsers like Vivaldi, Comodo, or even the new Microsoft Edge.

OneLogin SAML with AWS load balancer

I am trying to get OneLogin SAML authentication working on my live servers and am running into a problem with my AWS load balancer setup. I think the problem is that I have a classic load balancer which is listening on both port 80 and 443 with an AWS wildcard HTTPS certificate. The load balancer forwards both ports to port 80 on my servers and adds the HTTP_X_FORWARDED_PROTO headers.
When I use my normal dev server (not behind the load balancer) the SAML authentication works fine; I get a proper response back. But when I push to live, the SAML response returns an empty POST dictionary without RelayState.
Any idea why the POST would be empty?
My setup is:
Python social auth with the SAML connector
Works fine on Dev server
When I use my live servers behind the firewall, the response is empty
I suspect it has something to do with my SSL certificate or my load balancer forwarding the 443 to the server on port 80 with the additional header. I tried fixing this by creating the auth request by analyzing the forwarded headers:
def _create_saml_auth(self, idp):
    """Get an instance of OneLogin_Saml2_Auth"""
    config = self.generate_saml_config(idp)
    request_info = {
        'https': 'on' if self.strategy.request_is_secure() else 'off',
        'http_host': self.strategy.request_host(),
        'script_name': self.strategy.request_path(),
        'server_port': self.strategy.request_port(),
        'get_data': self.strategy.request_get(),
        'post_data': self.strategy.request_post(),
    }
    if 'HTTP_X_FORWARDED_PROTO' in self.strategy.request.META:
        request_info['https'] = 'on' if self.strategy.request.META.get('HTTP_X_FORWARDED_PROTO') == 'https' else 'off'
        request_info['server_port'] = self.strategy.request.META.get('HTTP_X_FORWARDED_PORT')
But that still returns an empty POST dictionary for the SAML response from OneLogin. The initial URL is generated properly with HTTPS on, though.
Has anyone had a similar issue? I am stuck and would love to get OneLogin to work.
Thanks so much for your time.
Cheers,
Phil
You can check what's in the errors. I'm using code like this:
auth = OneLogin_Saml2_Auth(saml_req, saml_settings)
auth.process_response()
if not auth.is_authenticated():
    error = auth.get_last_error_reason()  # here's the error message
I'm guessing you are bumping into the same issue with the ELB that I was, which goes something like: Authentication error: The response was received at http://... instead of https://...
The solution is either to perform an https->https redirect, or to make pysaml think it received the response on https. Here's how I did it (in this case in a Django app, but it should be easy enough to adapt for other environments):
saml_req = {
    'http_host': request.META['HTTP_HOST'],
    'server_port': request.META['SERVER_PORT'],
    'script_name': request.META['PATH_INFO'],
    'get_data': request.GET.copy(),
    'post_data': request.POST.copy()
}
if settings.SAML_FUDGE_HTTPS:  # made a settings flag so it can be toggled
    saml_req['https'] = True  # this one forces https in the check url
    saml_req['server_port'] = None  # this one removes the port number (80)
auth = OneLogin_Saml2_Auth(saml_req, saml_settings)
auth.process_response()
Update: as @smartin mentions in the comments, you might be able to sniff out the HTTP_X_FORWARDED_FOR header and avoid creating a settings var.
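A sketch of that variant, using the X-Forwarded-Proto header the question itself already inspects (the exact header depends on your load balancer configuration):
# Detect HTTPS from the proxy header instead of using a settings flag (a sketch).
if request.META.get('HTTP_X_FORWARDED_PROTO') == 'https':
    saml_req['https'] = True
    saml_req['server_port'] = None  # drop the internal port (80) from the check URL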

Why does the session cookie work when serving from a domain but not when using an IP?

I have a Flask application with sessions that works well on my local development machine. However, when I try to deploy it on an Amazon server, sessions do not seem to work.
More specifically, the session cookie is not set. I can, however, set normal cookies. I made sure I have a static secret key, as others have indicated that might be an issue. The only difference is in how the server is set up. During development, I use
app.run()
to run locally. When deployed, I use
app.config['SERVER_NAME'] = '12.34.56.78' # <-- insert a "real" IP
app.run(host='0.0.0.0', port=80)
I suspect the problem might be in the above, but am not completely certain.
The session does seem to work on Firefox, but not Chrome.
The following small application demonstrates the problem, with the configuration differences at the bottom:
from flask import Flask, make_response, request, session

app = Flask(__name__)
app.secret_key = 'secretKey'

# this is to verify that cookies can be set
@app.route('/setcookie')
def set_cookie():
    response = make_response('Cookie set')
    response.set_cookie('cookie name', 'cookie value')
    return response

@app.route('/getcookie')
def get_cookie():
    if 'cookie name' in request.cookies:
        return 'Cookie found. Its value is %s.' % request.cookies['cookie name']
    else:
        return 'Cookie not found'

# this is to check if sessions work
@app.route('/setsession')
def set_session():
    session['session name'] = 'session value'
    return 'Session set'

@app.route('/getsession')
def get_session():
    if 'session name' in session:
        return 'Session value is %s.' % session['session name']
    else:
        return 'Session value not found'

if __name__ == '__main__':
    app.debug = True

    # windows, local development
    # app.run()

    # Ubuntu
    app.config['SERVER_NAME'] = '12.34.56.78'  # <-- insert a "real" IP
    app.run(host='0.0.0.0', port=80)
This is a "bug" in Chrome, not a problem with your application. (It may also affect other browsers as well if they change their policies.)
RFC 2109, which describes how cookies are handled, seems to indicate that cookie domains must be an FQDN with a TLD (.com, .net, etc.) or be an exact match IP address. The original Netscape cookie spec does not mention IP addresses at all.
The Chrome developers have decided to be more strict than other browsers about what values they accept for cookie domains. While at one point they corrected a bug that prevented cookies on IP addresses, they have apparently backpedaled since then and don't allow cookies on non-FQDN domains (including localhost) or IP addresses. They have stated they will not fix this, as they do not consider it a bug.
The reason "normal" cookies are working but the session cookie is not is that you are not setting a domain for the "normal" cookies (it's an optional parameter), but Flask automatically sets the domain for the session cookie to the SERVER_NAME. Chrome (and others) accept cookies without domains and auto-set them to the domain of the response, hence the observed difference in behavior. You can observe normal cookies failing if you set their domain to the IP address.
During development, you can get around this by running the app on localhost rather than letting it default to 127.0.0.1. Flask has a workaround that won't send the domain for the session cookie if the server name is localhost. app.run('localhost')
In production, there aren't any real solutions. You could serve this on a domain rather than an IP, which would solve it but might not be possible in your environment. You could mandate that all your clients use something besides Chrome, which isn't practical. Or you could provide a different session interface to Flask that does the same workaround for IPs that it already uses for localhost, although this is probably insecure in some way.
Chrome does not allow cookies with IPs for the domain, and there is no practical workaround.
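For reference, the "different session interface" idea mentioned above would look roughly like the sketch below, and, as noted, it is probably insecure for anything beyond testing:
# A sketch of a session interface that never sets a Domain attribute, so the
# browser scopes the session cookie to whatever host served the response
# (including bare IP addresses). Probably only suitable for testing.
from flask.sessions import SecureCookieSessionInterface

class NoDomainSessionInterface(SecureCookieSessionInterface):
    def get_cookie_domain(self, app):
        return None

app.session_interface = NoDomainSessionInterface()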
It is possible to create a session in the Chrome browser using an IP.
My config file has these settings:
SERVER_NAME = '192.168.0.6:5000'
SESSION_COOKIE_DOMAIN = '192.168.0.6:5000'
It allowed me to use a local virtual machine and the cookie worked perfectly on Chrome, without the need for a local FQDN.
Notice that in the workaround for localhost that @davidism posted -- https://github.com/mitsuhiko/flask/blob/master/flask/sessions.py#L211-L215 -- you can patch the Flask code and change if rv == '.localhost': rv = None to simply rv = None, and then the cookie domain won't be set and your cookies will work.
You wouldn't want to do this on a real production app, but if your server is just some kind of testing/staging server without sensitive data it might be fine. I just did this to test an app over a LAN on a 192.168.x.x address and it was fine for that purpose.

Twisted Web behind Apache - How to correct links?

I am attempting to write a web application using the Twisted framework for python.
I want the application to work if run as a standalone server (ala twistd), or if Apache reverse proxies to it. E.g.
Apache https://example.com/twisted/ --> https://internal.example.com/
After doing some research, it seemed like I needed to use the vhost.VHostMonsterResource to make this work. So I set up apache with the following directive:
ProxyPass /twisted https://localhost:8090/twisted/https/127.0.0.1:443
Here is my basic SSL server:
from twisted.web import server, resource, static
from twisted.internet import reactor
from twisted.application import service, internet
from twisted.internet.ssl import SSL
from twisted.web import vhost
import sys
import os.path
from textwrap import dedent

PORT = 8090
KEY_PATH = "/home/waldbiec/projects/python/twisted"
PATH = "/home/waldbiec/projects/python/twisted/static_files"


class Index(resource.Resource):
    def render_GET(self, request):
        html = dedent("""\
            <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
            <html>
                <head>
                    <title>Index</title>
                </head>
                <body>
                    <h1>Index</h1>
                    <ul>
                        <li><a href="/files">Files</a></li>
                    </ul>
                </body>
            </html>
            """)
        return html


class ServerContextFactory:
    def getContext(self):
        """
        Create an SSL context.

        Similar to twisted's echoserv_ssl example, except the private key
        and certificate are in separate files.
        """
        ctx = SSL.Context(SSL.SSLv23_METHOD)
        ctx.use_privatekey_file(os.path.join(KEY_PATH, 'serverkey.pem'))
        ctx.use_certificate_file(os.path.join(KEY_PATH, 'servercert.pem'))
        return ctx


class SSLService(internet.SSLServer):
    def __init__(self):
        root = resource.Resource()
        root.putChild("", Index())
        root.putChild("twisted", vhost.VHostMonsterResource())
        root.putChild("files", static.File(PATH))
        site = server.Site(root)
        internet.SSLServer.__init__(self, PORT, site, ServerContextFactory())


application = service.Application("SSLServer")
ssl_service = SSLService()
ssl_service.setServiceParent(application)
It almost works-- but the "files" link on the index page does not behave how I want it to when using apache as a reverse proxy, because it is an absolute link.
My main question is, other than using a relative link, is there some way to compute what the full URL path of the link ought to be in such a way that the link still works in standalone server mode?
A second question would be, am I using VHostMonsterResource correctly? I did not find much documentation, and I pieced together my code from examples I found on the web.
This seems like too much work. Why use VHostMonsterResource at all? You may have very specific reasons for wanting some of this, but... most of the time:
Have Apache handle the SSL; Apache then passes the request off to your Twisted app, which serves its non-SSL goodies back to Apache. Documentation on the Apache config side is all over the net.
You can still add another server on an SSL port if you really want to.
Haven't tested, but structure it more like:
root = resource.Resource()
root.putChild("", Index())
root.putChild("files", static.File(PATH))
http = internet.TCPServer(8090, server.Site(root))
# change this port # to 443 if no apache
https= internet.SSLServer(8443, server.Site(root), ServerContextFactory())
application = service.Application("http_https_Server")
http.setServiceParent(application)
https.setServiceParent(application)
Dev tip:
During development, for the cost of a couple of extra lines you can add a manhole server so that you can ssh into the running web server and inspect variables and other state. Way cool.
ssh = internet.TCPServer(8022, getManholeFactory(globals(), waldbiec='some non-system waldbiec password'))
ssh.setServiceParent(application)
Configure the Twisted application so that it knows its own root location. It can use that information to generate URLs correctly.
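A minimal sketch of that idea, using a hypothetical URL_PREFIX setting (empty when running standalone, "/twisted" when behind the Apache reverse proxy):
from twisted.web import resource

# Hypothetical configuration value: the externally visible root of the site.
URL_PREFIX = ""  # set to "/twisted" when deployed behind the Apache reverse proxy

class Index(resource.Resource):
    def render_GET(self, request):
        # Build links from the configured prefix so they work both standalone
        # and behind the reverse proxy.
        files_url = "%s/files" % URL_PREFIX
        return '<html><body><ul><li><a href="%s">Files</a></li></ul></body></html>' % files_url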
So after digging into the vhost.VHostMonsterResource source, I determined I could create another resource that could let the reverse proxied URL prefix be specified by an additional marker in the Apache ProxyPass URL.
Firstly, I finally figured out that vhost.VHostMonsterResource is supposed to be a special URL in your back end web site that figures out the reverse proxy host and port from data encoded in the URL path. The URL path (sans scheme and net location) looks like:
/$PATH_TO_VHMONST_RES/$REV_PROXY_SCHEME/$REV_PROXY_NETLOC/real/url/components/
$PATH_TO_VHMONST_RES : Path in the (internal) twisted site that corresponds to the VHostMonsterResource resource.
$REV_PROXY_SCHEME : http or https, whichever is being used by the reverse proxy (Apache).
$REV_PROXY_NETLOC : The net location (host and port) of the reverse proxy (Apache).
So you can control the configuration from the reverse proxy by encoding this information in the URL. The result is that the twisted site will understand the HTTP request came from the reverse proxy.
However, if you are proxying a subtree of the external site as per my original example, this information is lost. So my solution was to create an additional resource that can decode the extra path information. The new proxy URL path becomes:
/$PATH_TO_MANGLE_RES/$REV_PROXY_PATH_PREFIX/$VHOSTMONST_MARKER/$REV_PROXY_SCHEME/$REV_PROXY_NETLOC/real/url/components/
$PATH_TO_MANGLE_RES : The path to the resource that decodes the reverse proxy path info.
$REV_PROXY_PATH_PREFIX : The subtree prefix of the reverse proxy.
$VHOSTMONST_MARKER : A path component (e.g. "vhost") that signals a VHostMonster Resource should be used to further decode the path.
