I have created a web project in Web2Py and would like users to access the pages over plain http:// instead of https://.
Each time I type http://domain.pythonanywhere.com it redirects me to https://domain.pythonanywhere.com.
It takes 0.5 sec. to do the SSL check and I would like to avoid that.
This was the default:
## if SSL/HTTPS is properly configured and you want all HTTP requests to
## be redirected to HTTPS, uncomment the line below:
# request.requires_https()
PythonAnywhere dev here: that looks like a bug on our side. We "pin" HTTPS for our own site, so that people always go to https://www.pythonanywhere.com/, but it looks like that might have leaked over to customer sites.
Just for clarity -- if someone goes to http://yourusername.pythonanywhere.com/ then we won't initially force it to go to the https site -- they'll get the http one. But if they then go to https://yourusername.pythonanywhere.com, then their browser will remember that they have visited the https domain, so all future requests will redirect there.
That's actually generally good practice (it works around a number of security problems) but we shouldn't be forcing it on people.
[UPDATE] the bug is now fixed, many thanks to boje for pointing us at it :-) One caveat -- if you've ever visited your site over HTTPS before we applied the fix, then you'll still be forced to HTTPS. You need to clear your browser history to see the new unpinned behaviour.
I had an issue where I needed http:// to redirect to https://, and I found a Google Groups post about it. The following code may give you some ideas on your problem. Under db.py add:
############ FORCED SSL #############
from gluon.settings import global_settings
if global_settings.cronjob:
    print('Running as shell script.')
elif not request.is_https:
    redirect(URL(scheme='https', args=request.args, vars=request.vars))
session.secure()
#####################################
I followed the steps in this tutorial to enable SSO with Azure Active Directory for the admin portion (to start) of my Django app:
https://django-microsoft-auth.readthedocs.io/en/latest/usage.html
Navigating to /admin yields the expected sign-in page, which is good.
Clicking Microsoft brings up a new window with an error.
The important error seems to be:
AADSTS90102: 'redirect_uri' value must be a valid absolute URI.
In this window, I used the browser console and found that a GET request was being made like this:
https://login.microsoftonline.com/50ce...90ac7/oauth2/v2.0/authorize?response_type=code&client_id=f4...27&redirect_uri=https,https://example.org/microsoft/auth-callback/&s...
Note the redirect_uri=https,https://.... It seems like that leading "https," is superfluous and is causing the problem. Any ideas where that could be coming from?
In my Azure app, the redirect URI is set to https://example.org/microsoft/auth-callback/.
I'm using Python 3.9.6, Django 3.2, django-microsoft-auth 2.4.0, NGINX 1.18.0, and uvicorn 0.14.0.
I've searched for help on this and haven't found anything relevant to my situation. Thanks in advance!
Based on the SO thread referenced below:
Use http as the redirect URI instead of https; this resolves the issue in most cases.
Use
http://localhost:8080/microsoft/auth-callback/
instead of
https://localhost:8080/microsoft/auth-callback/
If there is the option, also set localhost:8080 in the django_site table.
Reference SO Thread: django-microsoft-auth : The provided value for the input parameter 'redirect_uri' is not valid
As you suspect, the first https is superfluous; you just need to delete it.
https://login.microsoftonline.com/50ce...90ac7/oauth2/v2.0/authorize?response_type=code&client_id=f4...27&redirect_uri=https://example.org/microsoft/auth-callback/&s...
By the way, I think there is no problem with the redirect_uri you set in the Azure portal.
I guess it is a problem with the redirect URL. The example.org URL is coming from the Django site table, so first of all you need to enable the sites framework:
#in settings.py
SITE_ID = 1
Afterwards you can go to the admin interface and set the URL of the site to the correct domain. From my experience I know that it won't work without that.
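For example (a minimal sketch, assuming SITE_ID = 1 and that example.org is your real domain), the same change can be made from a Django shell:
# Run inside "python manage.py shell"; assumes SITE_ID = 1.
from django.contrib.sites.models import Site

site = Site.objects.get(id=1)
site.domain = 'example.org'   # no scheme, no trailing slash
site.name = 'example.org'
site.save()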
I want to open a URL with Python code but I don't want to use the "webbrowser" module. I tried that already and it worked (It opened the URL in my actual default browser, which is what I DON'T want). So then I tried using urllib (urlopen) and mechanize. Both of them ran fine with my program but neither of them actually sent my request to the website!
Here is part of my code:
finalURL="http://www.locationary.com/access/proxy.jsp?ACTION_TOKEN=proxy_jsp$JspView$SaveAction&inPlaceID=" + str(newPID) + "&xxx_c_1_f_987=" + str(ZA[z])
print finalURL
print ""
br.open(finalURL)
page = urllib2.urlopen(finalURL).read()
When I go into the site, locationary.com, it doesn't show that any changes have been made! When I used "webbrowser" though, it did show changes on the website after I submitted my URL. How can I do the same thing that webbrowser does without actually opening a browser?
I think the website wants a "GET"
I'm not sure what OS you're working on, but if you use something like HTTPScoop (Mac), Fiddler (PC), or Wireshark, you should be able to watch the traffic and see what's happening. It may be that the website does a redirect (which your browser is following) or there's some other subsequent activity.
Start an HTTP sniffer, make the request using the web browser and watch the traffic. Once you've done that, try it with the python script and see if the request is being made, and what the difference is in the HTTP traffic. This should help identify where the disconnect is.
An HTTP GET doesn't need any specific code or action on the client side: it's just the base URL (http://server/) + path + optional query.
If the URL is correct, then the code above should work. Some pointers what you can try next:
Is the URL really correct? Use Firebug or a similar tool to watch the network traffic which gives you the full URL plus any header fields from the HTTP request.
Maybe the site requires you to log in, first. If so, make sure you set up cookies correctly.
Some sites require a correct "Referer" field (to protect themselves against deep linking). Add the referrer header your browser used to the request; a combined sketch for the cookie and referrer points follows this list.
The log file of the server is a great source of information to trouble shoot such problems - when you have access to it.
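A rough combined sketch of the cookie and referrer points (the login URL, form field names, and credentials are made-up placeholders, not anything from locationary.com's actual pages; finalURL is the variable from the question):
import urllib, urllib2, cookielib

# Keep cookies across requests so the logged-in session is reused.
cookies = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies))

# 1) Log in first (placeholder URL and form fields) so the session cookie is stored.
login_data = urllib.urlencode({'username': 'me', 'password': 'secret'})
opener.open('http://www.locationary.com/login', login_data)

# 2) Then request the real URL, sending the Referer header the browser used.
request = urllib2.Request(finalURL)
request.add_header('Referer', 'http://www.locationary.com/')
response = opener.open(request)
print(response.read())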
This isn't mentioned in the Python documentation. Recently I've been testing a website by simply refreshing it with urllib2.urlopen() to extract certain content, and I notice that sometimes, after I update the site, urllib2.urlopen() doesn't seem to get the newly added content. So I wonder: does it cache stuff somewhere?
So I wonder: does it cache stuff somewhere?
It doesn't.
If you don't see new data, this could have many reasons. Most bigger web services use server-side caching for performance reasons, for example using caching proxies like Varnish and Squid or application-level caching.
If the problem is caused by server-side caching, there's usually no way to force the server to give you the latest data.
For caching proxies like Squid, things are different. Usually, Squid adds some additional headers to the HTTP response (visible via response.info().headers).
If you see a header field called X-Cache or X-Cache-Lookup, this means that you aren't connected to the remote server directly, but through a transparent proxy.
If you have something like X-Cache: HIT from proxy.domain.tld, this means that the response you got is cached. The opposite is X-Cache: MISS from proxy.domain.tld, which means that the response is fresh.
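For example, a quick way to check for those headers with urllib2 (just a sketch; the URL is a placeholder):
import urllib2

# Print the proxy-related cache headers, if the response carries any.
response = urllib2.urlopen('http://example.org/page')
for name in ('X-Cache', 'X-Cache-Lookup'):
    value = response.info().getheader(name)
    if value:
        print(name + ': ' + value)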
Very old question, but I had a similar problem which this solution did not resolve.
In my case I had to spoof the User-Agent like this:
import urllib2

request = urllib2.Request(url)  # url = the page you want to fetch
request.add_header('User-Agent', 'Mozilla/5.0')  # pretend to be a regular browser
content = urllib2.build_opener().open(request)
Hope this helps anyone...
Your web server or an HTTP proxy may be caching content. You can try to disable caching by adding a Pragma: no-cache request header:
import urllib2

request = urllib2.Request(url)  # url = the page you want to fetch
request.add_header('Pragma', 'no-cache')  # ask caches not to serve a stored copy
content = urllib2.build_opener().open(request)
If you make changes and test the behaviour from the browser and from urllib, it is easy to make a stupid mistake.
In the browser you are logged in, but through urllib.urlopen your app may redirect you to the same login page every time, so if you only look at the page size or the top of your common layout, you could think that your changes have no effect.
I find it hard to believe that urllib2 does not do caching, because in my case, upon restart of the program the data is refreshed. If the program is not restarted, the data appears to be cached forever. Also retrieving the same data from Firefox never returns stale data.
If I point Firefox at http://bitbucket.org/tortoisehg/stable/wiki/Home/ReleaseNotes, I get a page of HTML. But if I try this in Python:
import urllib
site = 'http://bitbucket.org/tortoisehg/stable/wiki/Home/ReleaseNotes'
req = urllib.urlopen(site)
text = req.read()
I get the following:
500 Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
What am I doing wrong?
You are not doing anything wrong; Bitbucket does some user-agent detection (to detect Mercurial clients, for example). Just changing the user agent fixes it (as long as it doesn't have "urllib" as a substring).
You should file an issue regarding this: http://bitbucket.org/jespern/bitbucket/issues/new/
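For instance, a small sketch of the same fetch with a browser-like User-Agent (the exact string is arbitrary, as long as "urllib" isn't in it):
import urllib2

site = 'http://bitbucket.org/tortoisehg/stable/wiki/Home/ReleaseNotes'
req = urllib2.Request(site, headers={'User-Agent': 'Mozilla/5.0'})  # anything without "urllib"
text = urllib2.urlopen(req).read()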
You're doing nothing wrong, on the surface, and as the error page says you should contact the site's administrators because they're the ones with the server logs which may explain what's happening. Fortunately, bitbucket's site admins are a friendly bunch!
No doubt there is some header or combination of headers that browsers set one way and urllib sets another way, and a bug on the server gets tickled in the latter case. You may want to see exactly which headers are being sent, e.g. with Firebug in Firefox, and reproduce those until you isolate exactly the server bug; most likely it's the user agent or some "accept"-ish header that's tickling it.
I don't think you're doing anything wrong -- it looks like this server was just down? Your script worked fine for me ('text' contained the same data as that displayed in the browser).
I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. I'm unable to set up a public facing server. Any ideas besides paying for server/hosting? Is this even possible?
Two things:
The OAuth Service Provider in question is violating the OAuth spec if it's giving you an error if you don't specify a callback URL. callback_url is spec'd to be an OPTIONAL parameter.
But, pedantry aside, you probably want to get a callback when the user's done just so you know you can redeem the Request Token for an Access Token. Yahoo's FireEagle developer docs have lots of great information on how to do this.
Even in the second case, the callback URL doesn't actually have to be visible from the Internet at all. The OAuth Service Provider will redirect the browser that the user uses to provide his username/password to the callback URL.
The two common ways to do this are:
Create a dumb web service from within your application that listens on some port (say, http://localhost:1234/) for the completion callback (see the sketch after this answer), or
Register a protocol handler (you'll have to check with the documentation for your OS specifically on how to do such a thing, but it enables things like <a href="skype:555-1212"> to work).
(An example of the flow that I believe you're describing lives here.)
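For the first option, here is a minimal one-shot listener sketch (Python 2 style, to match the other snippets in this thread; the port and the response body are just illustrative choices):
import BaseHTTPServer
import urlparse

class CallbackHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        # The provider redirects the user's browser here; grab the query parameters.
        params = urlparse.parse_qs(urlparse.urlparse(self.path).query)
        print(params)  # e.g. oauth_token / oauth_verifier
        self.send_response(200)
        self.end_headers()
        self.wfile.write('You can close this window now.')

server = BaseHTTPServer.HTTPServer(('localhost', 1234), CallbackHandler)
server.handle_request()  # handle exactly one request, then return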
If you are using a *nix-style system, create an alias like 127.0.0.1 mywebsite.dev in /etc/hosts (you need to have a line like that in the file). Then use http://mywebsite.dev/callbackurl/for/app as the callback URL during local testing.
This was with Facebook OAuth: I was actually able to specify 'http://127.0.0.1:8080' as both the Site URL and the callback URL. It took several minutes for the changes to the Facebook app to propagate, but then it worked.
This may help you:
http://www.marcworrell.com/article-2990-en.html
It's PHP, so it should be pretty straightforward to set up on your dev server.
I've tried this one once:
http://term.ie/oauth/example/
It's pretty simple. You have a link to download the code at the bottom.
localtunnel [port] and voila
http://blogrium.wordpress.com/2010/05/11/making-a-local-web-server-public-with-localtunnel/
http://github.com/progrium/localtunnel
You could create two applications: one for deployment and the other for testing.
Alternatively, you can also include an oauth_callback parameter when you request a request token. Some providers will redirect to the URL specified by oauth_callback (e.g. Twitter, Google) but some will ignore this callback URL and redirect to the one specified during configuration (e.g. Yahoo).
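Purely as an illustration (this uses requests_oauthlib, which none of the answers above use; all keys and endpoint URLs are placeholders), passing the callback when fetching the request token looks roughly like this:
from requests_oauthlib import OAuth1Session

# Placeholder keys and endpoints -- substitute your provider's values.
oauth = OAuth1Session('CLIENT_KEY', client_secret='CLIENT_SECRET',
                      callback_uri='http://127.0.0.1:1234/callback')
request_token = oauth.fetch_request_token('https://provider.example/oauth/request_token')
print(oauth.authorization_url('https://provider.example/oauth/authorize'))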
The way I solved this issue (using BitBucket's OAuth interface) was by setting the callback URL to localhost (or whatever the hell you want, really), and then following the authorisation URL with curl, but with the twist of only returning the HTTP status code and the effective URL. Example:
curl --user BitbucketUsername:BitbucketPassword -sL -w "%{http_code} %{url_effective}\\n" "AUTH_URL" -o /dev/null
Insert your credentials and the authorisation URL (remember to escape the exclamation mark!).
What you should get is something like this:
200 http://localhost?dump&oauth_verifier=OAUTH_VERIFIER&oauth_token=OAUTH_TOKEN
And you can scrape the oauth_verifier from this.
Doing the same in python:
import pycurl

devnull = open('/dev/null', 'w')
c = pycurl.Curl()
c.setopt(pycurl.WRITEFUNCTION, devnull.write)      # discard the response body
c.setopt(pycurl.USERPWD, "BBUSERNAME:BBPASSWORD")  # your Bitbucket credentials
c.setopt(pycurl.URL, authorize_url)                # the authorisation URL from earlier in the flow
c.setopt(pycurl.FOLLOWLOCATION, 1)                 # follow the redirect to the callback URL
c.perform()
print c.getinfo(pycurl.HTTP_CODE), c.getinfo(pycurl.EFFECTIVE_URL)
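To pull the verifier out of that effective URL, a small follow-up sketch:
import urlparse

# Parse the query string of the URL that pycurl ended up on.
effective_url = c.getinfo(pycurl.EFFECTIVE_URL)
params = urlparse.parse_qs(urlparse.urlparse(effective_url).query)
print(params['oauth_verifier'][0])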
I hope this is useful for someone!