I am using the pywebpush 1.4.0 library to push Web Notifications from a Django backend. The keys I'm using were obtained from https://web-push-codelab.glitch.me/. Subscription seems to be working fine. Moreover, I tested this on Firefox and it works fine there.
I receive the following error server side while pushing on Chrome:
Push failed: <Response [400]>: <HTML>
<HEAD>
<TITLE>UnauthorizedRegistration</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>UnauthorizedRegistration</H1>
<H2>Error 400</H2>
</BODY>
</HTML>
The strange part is that my backend controls 3 domains and push is working fine even for Chrome on one domain and not working on others. I ruled out the following possible issues:
Improper private/public key pairs: it is working fine on Firefox
An outdated pywebpush library: on one domain, Chrome works fine
A few answers (e.g. "Chrome Web Notification Push Unauthorized Registration exception") pointed to updating the py-vapid version installed by pywebpush, but its version is already py-vapid==1.3.0
The only possibility I see is if Chrome doesn't allow push notifications on different domains from the same backend. Is anyone aware of such a limitation or could help me with any other pointers here?
Note: I'm using different keys for all the three domains.
Here is the code I'm using to push:
from pywebpush import webpush

webpush(
    subscription_info,
    data,
    vapid_private_key=vapid_private_key,
    vapid_claims={"sub": "mailto:xyz@example.com"},
)
The subscription_info is the json as received while subscribing a user, vapid_private_key is the corresponding private key.
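Since each domain has its own key pair, one easy failure mode is pushing with the wrong domain's key. Below is a minimal sketch of looking the key up by the domain the subscription came from before calling webpush; the domain names and file paths are hypothetical, not values from the question:

```python
# Hypothetical mapping: one VAPID private key per domain.
VAPID_KEYS = {
    "site-a.example.com": "keys/site_a_private_key.pem",
    "site-b.example.com": "keys/site_b_private_key.pem",
}

def select_vapid_key(domain):
    """Return the private key registered for this domain, or fail loudly."""
    try:
        return VAPID_KEYS[domain]
    except KeyError:
        raise ValueError("No VAPID key configured for %s" % domain)
```

The selected key would then be passed as vapid_private_key. Failing loudly on an unknown domain beats silently falling back to another domain's key, which is exactly the mistake that produces UnauthorizedRegistration on Chrome while Firefox stays lenient.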
The FAQ lists the following reasons for your error (quote):
If you fail to define an Authorization header in the request to FCM.
Your application key used to subscribe the user doesn't match the key used to sign the Authorization header.
The expiration is invalid in your JWT, i.e. the expiration exceeds 24 hours or the JWT has expired.
The JWT is malformed or has invalid values.
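The expiration rule from the list above can be checked mechanically. A small sketch of the constraint (the 24-hour limit comes from the quoted FAQ; the helper name is mine):

```python
import time

MAX_VAPID_LIFETIME = 24 * 60 * 60  # push services reject 'exp' more than 24 h out

def valid_vapid_exp(exp, now=None):
    """True if the JWT 'exp' claim is in the future and within 24 hours."""
    now = time.time() if now is None else now
    return now < exp <= now + MAX_VAPID_LIFETIME
```

Libraries like py-vapid normally set a safe exp for you, so this only matters if claims are being built or cached by hand.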
It also states the requirement of adding an applicationServerKey to the subscription request, and that this is not mandated in Firefox. Your issue may lie here: are you sure that the vapid_private_key variable refers to the correct private key for each domain? It might be that it is actually, consistently, the key of your working domain.
It might be easier to find some potential errors if we had more of the code you used. Chrome just needs a single key per server, but should be able to handle several different servers subscribing.
I am trying to use the python package yagmail to send emails but am having a tough time getting authorization to work.
My issue is getting an OAuth 2 token; there is a disconnect with yagmail, as discussed in this GitHub thread: https://github.com/kootenpv/yagmail/issues/143. It appears that Google does not supply the credential file in the correct format. I tried a bunch of things and each has its own problem.
When I set up a Client ID in the Google API console, download the JSON as credentials.json, and let the system create token.json, things work to a point: I am brought through Google to "pick an account, do you want to continue", and a token is created. I am able to print labels for the Gmail account. But when I issue yag.send(to='xxx@gmail.com', subject='Testing Yagmail', contents='Hurray, it worked!'), I get an error: "TypeError: refresh_authorization() got an unexpected keyword argument 'token'." When I look at the token file, it does contain the key 'token', which it should not per this GitHub comment: https://github.com/kootenpv/yagmail/issues/143#issuecomment-527115298.
If I edit the token to reflect the expected contents as identified in the above link, removing keys that are not specified and prefixing the names with 'google_', I get an error: "ValueError: Authorized user info was not in the expected format, missing fields refresh_token, client_id, client_secret." It doesn't seem to like the 'google_' prefix.
Editing the token file as above without the 'google_' prefix gets further, producing a different error: "An error occurred: <HttpError 403 when requesting https://gmail.googleapis.com/gmail/v1/users/me/labels?alt=json returned "Request had insufficient authentication scopes""
I am stuck. I'm relatively new to OAuth2, but it seems others are able to use yagmail. Is there a trick I am missing? I originally posted on GitHub because I found that other related post, but it seems SO is more geared toward Q&A. Is there a relation between GitHub and SO? What's the difference?
Thanks for any assistance,
Brian
I finally found a solution and the answer was hidden in plain sight.
First the Oauth authorization needed to be set up as outlined in this post (which is excellent): Sending email via Gmail & Python
As stated, when yagmail runs for the first time, the authorization process gives instructions, the final one stating "Navigate to the following URL to auth:" and asking "Enter the localhost URL you were redirected to:"
The problem is that the browser window shows what appears to be an error message: a sad face with "This site can't be reached, localhost refused to connect, reload", etc. I never thought this was expected behavior. The URL to enter is the one being navigated to in that error screen.
Simply stating in the post above that this error screen is expected, and that its URL is what needs to be copied and pasted, would help a lot.
I followed the steps in this tutorial to enable SSO with Azure Active Directory for the admin portion (to start) of my Django app:
https://django-microsoft-auth.readthedocs.io/en/latest/usage.html
Navigating to /admin yields this page, which is good:
Clicking Microsoft brings up this new window:
The important error seems to be:
AADSTS90102: 'redirect_uri' value must be a valid absolute URI.
In this window, I used the browser console and found that a GET request was being made like this:
https://login.microsoftonline.com/50ce...90ac7/oauth2/v2.0/authorize?response_type=code&client_id=f4...27&redirect_uri=https,https://example.org/microsoft/auth-callback/&s...
Note the redirect_uri=https,https://.... It seems like that leading "https," is superfluous and is causing the problem. Any ideas where that could be coming from?
In my Azure app, the redirect URI is set to https://example.org/microsoft/auth-callback/:
I'm using Python 3.9.6, Django 3.2, django-microsoft-auth 2.4.0, NGINX 1.18.0, uvicorn 0.14.0
I've searched for help on this and haven't found anything relevant to my situation. Thanks in advance!
Based on the SO thread referenced below:
Use http as the redirect URI instead of https; this resolves the issue in most cases.
use
http://localhost:8080/microsoft/auth-callback/
Instead of
https://localhost:8080/microsoft/auth-callback/
If there is an option, use localhost:8080 as the domain in the django_site table.
Reference SO Thread: django-microsoft-auth : The provided value for the input parameter 'redirect_uri' is not valid
As you suspect, the first "https," is superfluous; you just need to delete it so the request becomes:
https://login.microsoftonline.com/50ce...90ac7/oauth2/v2.0/authorize?response_type=code&client_id=f4...27&redirect_uri=https://example.org/microsoft/auth-callback/&s...
By the way, I think there is no problem with the redirect_uri you set in the Azure portal.
I guess it is a problem with the redirect URL. The example URL is coming from the Django site table, so first of all you need to enable the sites framework:
# in settings.py
SITE_ID = 1
Afterwards you can go to the admin interface and set the URL of the site to the correct domain. From my experience, I know it won't work without that.
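For reference, the same change can be made without the admin UI; a sketch to run in python manage.py shell (the domain value is a placeholder, and this assumes the sites framework is in INSTALLED_APPS):

```python
# Point site 1 at the real host so the package builds redirect_uri from it
# (replace example.org with your actual domain).
from django.contrib.sites.models import Site

Site.objects.filter(id=1).update(domain="example.org", name="example.org")
```

The domain stored here must match the host in the redirect URI registered in the Azure portal, or the two will disagree again.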
Python Django w/ Microsoft Graph -
I'm following this Microsoft tutorial for building Django apps with Microsoft Graph (using it on my existing Django webapp), and I am having an issue with authentication: https://learn.microsoft.com/en-us/graph/tutorials/python
I'm on the step 'Add Azure AD authentication' and, after implementing it, I hit the sign-in button, enter credentials... and I keep getting the ValueError "state missing from auth_code_flow".
The "callback" method makes it only as far as result = get_token_from_code(request) and then fails.
Here is the get_token_from_code method:
def get_token_from_code(request):
    cache = load_cache(request)
    auth_app = get_msal_app(cache)
    # Get the flow saved in the session during the sign-in redirect
    flow = request.session.pop('auth_flow', {})
    # Exchange the auth code in request.GET for a token
    result = auth_app.acquire_token_by_auth_code_flow(flow, request.GET)
    save_cache(request, cache)
    return result
What I'm trying to do is eventually access excel online from my webapp.
Any help is greatly appreciated!
I just had this issue and resolved it. It is one of these two things:
You are starting out at 127.0.0.1:8000 and then when you're redirected you're at localhost:8000, which is a different domain. The sessions aren't remembered from one domain to the other. The solution is to start out on localhost:8000 so that the session persists across login.
Your browser is using super-strict cookie settings. Microsoft Edge appears to default to this mode on localhost and 127.0.0.1. There is a lock or shield icon in or near your address bar that lets you relax the restrictions on your cookie settings.
Try one or both of these and you should succeed.
I'm a beginner coder, so I'm pretty sure I'm just working around the error, but replacing the website URL with http://localhost:8000/# and re-running somehow got around it. Maybe that could be of some use.
If you are running on Chrome, rather than running the application on http://127.0.0.1:8000, run it on http://localhost:8000, because Chrome doesn't save the session cookies when the raw IP address is used.
I've created a B2C setup, based on some documentation. I've referred to the following link.
https://blogs.technet.microsoft.com/ad/2015/09/16/azure-ad-b2c-and-b2b-are-now-in-public-preview/
So I have setup a redirect_uri, say,
"http s://mycompany.com/login/"
and used Google as my identity provider. However, when I do a sign-up / sign-in, the system redirects me from the sign-up / sign-in page to
"http s://mycompany.com/login/#id_token=eyJ0eXAi..."
The redirect URL returned by B2C contains an "id_token" variable, and upon checking it in "http://calebb.net/", the details it contains are as expected.
The issue I have is with the hash "#" mark after the redirect_uri, and before the id_token variable. Because of the hash, the id_token variable is not sent to our server, because of the default behavior of browsers to not send anything after the hash mark. The hash mark is a fragment identifier.
Thus I am unable to obtain the value of the id_token.
Is there a way to overcome this limitation, so that our server application can obtain the value of id_token from the URL returned by the B2C system? Or is this like a bug in B2C that needs fixing?
I am using a Python/Django web application.
Thanks.
Pass "response_mode" parameter value as "query" or "form_post" in policy linking URL to overcome the # issue.
For more information, please review: https://learn.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-reference-oauth-code
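The difference is easy to see with Python's urllib.parse: a fragment never leaves the browser, while a query parameter is part of what the server receives. A small sketch (the URLs and truncated token are illustrative):

```python
from urllib.parse import urlsplit, parse_qs

# Default response_mode=fragment: the token sits after '#', which the
# browser never transmits to the server.
fragment_url = "https://mycompany.com/login/#id_token=eyJ0eXAi..."
assert urlsplit(fragment_url).query == ""        # nothing for the server
assert urlsplit(fragment_url).fragment.startswith("id_token=")

# response_mode=query: the same value arrives as a query parameter
# that a Django view can read from request.GET.
query_url = "https://mycompany.com/login/?id_token=eyJ0eXAi..."
params = parse_qs(urlsplit(query_url).query)
print(params["id_token"][0])  # the raw token, now visible server-side
```

Note that "form_post" delivers the token in a POST body instead, which keeps it out of server logs; for a confidential server-side app that is usually the safer of the two.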
I'm also not allowed to comment, so:
If you are using AngularJS for the front end, enable HTML5 mode.
I've used this: $locationProvider.html5Mode(true);
According to AngularJS: Developer Guide
In HTML5 mode, the $location service getters and setters interact with the browser URL address through the HTML5 history API. This allows for use of regular URL path and search segments, instead of their hashbang equivalents. If the HTML5 History API is not supported by a browser, the $location service will fall back to using the hashbang URLs automatically. This frees you from having to worry about whether the browser displaying your app supports the history API or not; the $location service transparently uses the best available option.
Opening a regular URL in a legacy browser -> redirects to a hashbang URL.
Opening a hashbang URL in a modern browser -> rewrites to a regular URL.
Note that in this mode, Angular intercepts all links (subject to the "Html link rewriting" rules below) and updates the URL in a way that never performs a full page reload.
I'm not (yet) allowed to comment, so I have to put my remark in an answer.
I had the same problem with the NodeJS B2C sample some minutes ago. I put a POST route on what in your case is the https://mycompany.com/login/ endpoint
app.post('/',
  passport.authenticate('azuread-openidconnect', { failureRedirect: '/login' }),
  function (req, res) {
    log.info('We received a POST from AzureAD.');
    log.info(req.body.id_token);
    res.redirect('/');
  });
and then channeled it into the Passport JavaScript library's authenticate.
Maybe this gives you an indication and you can transfer it to Python/Django.
I want to do some web scraping with GAE. (Infinite Campus Student Information Portal, fyi). This service requires you to login to get in the website.
I had some code that worked using mechanize in normal Python. When I learned that I couldn't use mechanize in Google App Engine, I ended up using urllib2 + ClientForm. I couldn't get it to log in to the server, so after a few hours of fiddling with cookie handling I ran the exact same code in a normal Python interpreter, and it worked. I found the log file and saw a ton of messages about stripping out the 'host' header in my requests. I found the source file on Google Code: the Host header is on an 'untrusted' list and is removed from all requests made by user code.
Apparently GAE strips out the host header, which is required by I.C. to determine which school system to log you in, which is why it appeared like I couldn't login.
How would I get around this problem? I can't specify anything else in my fake form submission to the target site. Why would this be a "security hole" in the first place?
App Engine does not strip out the Host header: it forces it to be an accurate value based on the URI you are requesting. Assuming that URI is absolute, the server isn't even allowed to consider the Host header anyway, per RFC 2616:
If Request-URI is an absoluteURI, the host is part of the Request-URI. Any Host header field value in the request MUST be ignored.
...so I suspect you're misdiagnosing the cause of your problem. Try directing the request to a "dummy" server that you control (e.g. another very simple app engine app of yours) so you can look at all the headers and body of the request as it comes from your GAE app, vs, how it comes from your "normal python interpreter". What do you observe this way?
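To illustrate the point that the Host value is derived from the request URI itself rather than taken from user-supplied headers, here is a small sketch (the hostname is hypothetical, and this is modern Python rather than the urllib2 of the original question):

```python
from urllib.parse import urlsplit

def forced_host_header(uri):
    """The Host value a fetch layer would derive from the URI itself,
    ignoring any Host header supplied by user code."""
    return urlsplit(uri).netloc

print(forced_host_header("https://school.example.com/campus/login"))  # school.example.com
```

Under this behavior, the only way to reach a virtual host is to put the right hostname in the URL you fetch, not in a hand-built Host header.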