I am using HAProxy (open to alternatives) as a rotating proxy server for my Python crawler. I want session persistence in HAProxy.
But I am not able to get it, because HAProxy can't change the headers of an HTTPS request.
I am using this:
backend bk_web
balance roundrobin
cookie SERVERID insert indirect nocache
My HAProxy server is not adding this cookie to HTTPS requests.
Thanks
Session persistence is usually used when you want to pin a client to a specific backend server. However, you seem to indicate that you want to use this as a rotating proxy server, so I'm not sure you really want session persistence.
Are you terminating HTTPS in HAProxy? You need to terminate HTTPS in HAProxy (or any proxy, for that matter) in order to modify headers within an HTTPS session. Alternatively, where header modification is not possible, you can use HAProxy's stick-tables to persist TCP connections, or use balance source.
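To make that concrete, here is a minimal sketch, assuming you terminate TLS in HAProxy with a combined certificate/key file (the file path, frontend/backend names, and server addresses are placeholders, not taken from your setup):

frontend fe_web
    bind *:443 ssl crt /etc/haproxy/certs/site.pem
    default_backend bk_web

backend bk_web
    balance roundrobin
    # cookie persistence only works once HAProxy can see and modify the HTTP layer
    cookie SERVERID insert indirect nocache
    server s1 10.0.0.1:80 cookie s1 check
    server s2 10.0.0.2:80 cookie s2 check

If you cannot terminate TLS, a sketch of the stick-table alternative, which pins clients by source IP at the TCP level instead of by cookie:

backend bk_web_tcp
    mode tcp
    balance roundrobin
    stick-table type ip size 200k expire 30m
    stick on src
    server s1 10.0.0.1:443 check
    server s2 10.0.0.2:443 check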
Related
I want to add SECURE_HSTS_SECONDS to my Django settings for extra security, but the warning from the Django docs is making me a bit scared, so I want some clarification. Here is what it says:
SECURE_HSTS_SECONDS
Default: 0
If set to a non-zero integer value, the SecurityMiddleware sets the HTTP Strict Transport Security header on all responses that do not
already have it.
Warning:
Setting this incorrectly can irreversibly (for some time) break your site. Read the HTTP Strict Transport Security documentation first.
What has to happen for it to "break my site"? I read the HTTP Strict Transport Security documentation first and it didn't make it any clearer.
HTTP Strict Transport Security
HTTP Strict Transport Security lets a web site inform the browser that
it should never load the site using HTTP and should automatically
convert all attempts to access the site using HTTP to HTTPS requests
instead. It consists in one HTTP header, Strict-Transport-Security,
sent back by the server with the resource.
In other words, if you set SECURE_HSTS_SECONDS to e.g. 518400 (6 days), your web server will inform the client's browser on their first visit to access your website exclusively over HTTPS for the entire defined period. If, for any reason, you no longer serve your website over HTTPS during that period, the browser will refuse to access your services.
Therefore, you should initially set this variable to a low value such as 60 seconds and make sure that everything works as expected; otherwise you could lock yourself and your clients out of your site.
Browsers properly respecting the HSTS header will refuse to allow
users to bypass warnings and connect to a site with an expired,
self-signed, or otherwise invalid SSL certificate. If you use HSTS,
make sure your certificates are in good shape and stay that way!
Source
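To make the advice above concrete, a minimal sketch of the relevant settings.py entries while you are still testing (the 60-second value and the extra flags are illustrative choices, not requirements):

# settings.py
SECURE_HSTS_SECONDS = 60                 # start low; raise (e.g. to 31536000) once HTTPS works everywhere
SECURE_HSTS_INCLUDE_SUBDOMAINS = False   # only enable if every subdomain is served over HTTPS
SECURE_HSTS_PRELOAD = False              # only enable if you intend to submit the site to the preload list
SECURE_SSL_REDIRECT = True               # redirect plain HTTP requests to HTTPS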
Okay so this is my current situation.
I am trying to send an AJAX request to api.mysite.com from main.mysite.com. Everything is working fine, but cookies are not being sent.
Based on a couple of hours of research, it seems like I need to change the domain of the cookie.
In my case, the cookie domain is main.mysite.com, and it should be .mysite.com if I want to include the cookie in AJAX requests.
So my question is...how do I change the cookie domain? Or are there any other ways to do it?
My current stack is:
nginx for the reverse proxy
node.js (express.js) for the front-end server
python (flask) and mysql for the API server
redis for session storage
They are all running on the same box.
It depends on how you are setting cookies. One way to do this: while setting the cookie, use the domain attribute to scope it to .mysite.com.
Attributes of HTTP cookie
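Since the API server in your stack is Flask, here is a minimal sketch of two ways to do this (the cookie name and value are placeholders):

from flask import Flask, make_response

app = Flask(__name__)

# Option 1: scope Flask's own session cookie to the parent domain.
app.config['SESSION_COOKIE_DOMAIN'] = '.mysite.com'

@app.route('/login')
def login():
    resp = make_response('ok')
    # Option 2: for cookies you set yourself, pass the domain attribute explicitly.
    resp.set_cookie('sessionid', 'abc123', domain='.mysite.com', httponly=True)
    return resp

If the cookie is still not sent, the AJAX call itself may also need to be made with credentials enabled, but that is separate from the cookie-domain change.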
I am using an HTTP proxy in my Python web browser. In my PHP script on the server side I am still able to detect that requests go through a proxy. How can I mask this so other servers are not able to find out I am using an HTTP proxy?
Thank you.
Modify your proxy so it does not add the X-Forwarded-For header identifying the request as coming from a proxy.
If you don't control the proxy, you are SOL.
You could also, conceivably, use a SOCKS proxy instead of an HTTP one.
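If you go the SOCKS route from Python, a minimal sketch with the requests library (requires the SOCKS extra, i.e. pip install "requests[socks]"; the proxy address is a placeholder for your own SOCKS server):

import requests

proxies = {
    'http': 'socks5://127.0.0.1:1080',
    'https': 'socks5://127.0.0.1:1080',
}

# The target server sees only the proxy's IP; a SOCKS proxy does not add X-Forwarded-For.
resp = requests.get('https://example.com/', proxies=proxies, timeout=10)
print(resp.status_code)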
Is there a way to specify a proxy server when using urlfetch on Google App Engine?
Specifically, every time I make a call using urlfetch, I want GAE to go through a proxy server. I want to do this on production, not just dev.
I want to use a proxy because there are problems with using Google's outbound IP addresses (rate limiting, no static outbound IP, sometimes blacklisted, etc.). Setting a proxy is normally easy if you can edit the HTTP message itself, but GAE's API does not appear to let you do this.
You can always roll your own:
In the case of a fixed destination: just set up fixed port forwarding on a proxy server, then send requests from GAE to the proxy. If you have multiple destinations, set up forwarding on separate ports, one for each destination.
In the case of dynamic destinations (too many to handle via fixed port forwarding), your GAE app adds a custom HTTP header (X-Something) containing the final destination and then connects to the custom proxy. The proxy inspects this header and forwards the request to the destination.
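A rough sketch of the GAE side of the second approach, assuming the Python standard environment's urlfetch API (the proxy URL and the X-Target-Url header name are made up for illustration; your custom proxy must agree on them):

from google.appengine.api import urlfetch

PROXY_URL = 'https://proxy.example.com/forward'  # placeholder for your custom proxy endpoint

def fetch_via_proxy(destination_url):
    # The custom proxy reads X-Target-Url and forwards the request there.
    return urlfetch.fetch(
        PROXY_URL,
        method=urlfetch.GET,
        headers={'X-Target-Url': destination_url},
        deadline=30,
    )

result = fetch_via_proxy('https://api.example.org/data')
print(result.status_code)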
We ran into this issue and reached out to Google Cloud support. They suggested we use Google App Engine Flexible with some app.yaml settings, a custom network, and an IP-forwarding NAT gateway instance.
This didn't work for us because many core features from App Engine Standard are not implemented in App Engine Flexible. In essence, we would have needed to rewrite our product.
So, to make applicable URL fetch requests appear to have a static IP we made a custom proxy: https://github.com/csgactuarial/app-engine-proxy
For redundancy, I suggest implementing this as a multi-region, multi-zone, load-balanced system.
I'm trying to fetch fresh content from a WordPress blog that uses Varnish on the server side. Is there a way to bypass Varnish's caching so that I get fresh content each time I request the site?
Thanks
The documentation claims that Varnish will not cache any request that carries a Cookie header, so a quick workaround might be to include a Cookie: … header. Alternatively, you can include an unused GET parameter like ?cachebuster=1234, which should bypass the cache.
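For example, a quick sketch of both ideas with Python's requests library (the cookie name and parameter name are arbitrary, and the URL is a placeholder for the WordPress site):

import random
import requests

url = 'https://blog.example.com/latest-posts'

# Option 1: send an arbitrary cookie; a default Varnish VCL passes cookie-carrying requests to the backend.
resp = requests.get(url, headers={'Cookie': 'nocache=1'})

# Option 2: append a throwaway query parameter so the URL never matches a cached object.
resp = requests.get(url, params={'cachebuster': random.randint(0, 10**9)})

print(resp.status_code)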
Usually, people install Varnish on port 80 and change the default web server (Apache, Nginx, etc.) to listen on port 8080.
So, add port 8080 to the URL to bypass Varnish:
Example: www.example.com:8080