verify_oauth2_token uses object as function - python

I was doing Google auth using the backend approach from here:
https://developers.google.com/identity/sign-in/android/backend-auth
It seems a bit outdated, and the strangest thing is this line:
idinfo = id_token.verify_oauth2_token(token, requests.Request(), CLIENT_ID)
In the implementation you can see that, through nested function calls, the same request object ends up here:
def _fetch_certs(request, certs_url):
    """Fetches certificates.

    Google-style certificate endpoints return JSON in the format of
    ``{'key id': 'x509 certificate'}``.

    Args:
        request (google.auth.transport.Request): The object used to make
            HTTP requests.
        certs_url (str): The certificate endpoint URL.

    Returns:
        Mapping[str, str]: A mapping of public key ID to x.509 certificate
            data.
    """
    response = request(certs_url, method='GET')
So request is an object (the documentation even says so), yet it is used as a function. The error I get is:
TypeError: 'Request' object is not callable
What should be changed here?

Most likely you are passing a Request from the wrong library: the popular requests package instead of google-auth's transport. If you need to use both libraries in the same module, import them under distinct names:
from google.auth.transport import requests as google_auth_request
import requests

req = google_auth_request.Request()
idinfo = id_token.verify_oauth2_token(token, req, CLIENT_ID)
See: https://google-auth.readthedocs.io/en/latest/reference/google.oauth2.id_token.html
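The mismatch is easy to demonstrate: the Request class from the PyPI requests package is a plain container for request data and is not callable, which is exactly what the TypeError says (google.auth.transport.requests.Request, by contrast, is designed to be called like a function). A quick sanity check:

```python
import requests

# The `requests` library's Request is just a data container; calling it
# raises TypeError: 'Request' object is not callable.
r = requests.Request('GET', 'https://example.com')
assert not callable(r)
```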

Unit testing Python Azure Function: How do I construct a mock test request message with a JSON payload?

I want to unit test my Python Azure function. I'm following the Microsoft documentation.
The documentation mocks the call to the function as follows
req = func.HttpRequest(
    method='GET',
    body=None,
    url='/api/HttpTrigger',
    params={'name': 'Test'})
I would like to do this but with the parameters passed as a JSON object so that I can follow the req_body = req.get_json() branch of the function code. I guessed I would be able to do this with a function call like
req = func.HttpRequest(
    method='GET',
    body=json.dumps({'name': 'Test'}),
    url='/api/HttpTrigger',
    params=None)
If I construct the call like this, req.get_json() fails with the error message AttributeError: 'str' object has no attribute 'decode'.
How do I construct the request with JSON input parameters? It should be trivial but I'm clearly missing something obvious.
If I construct my mock call as follows:
import json

req = func.HttpRequest(
    method='POST',
    body=json.dumps({'name': 'Test'}).encode('utf8'),
    url='/api/HttpTrigger',
    params=None)
then I am able to make a successful call to req.get_json(). Thanks to @MrBeanBremen and @JoeyCai for pointing me in the right direction, i.e. don't use GET and make the body a byte string.
Any HTTP request message is allowed to contain a message body, and thus must be parsed with that in mind. Server semantics for GET, however, are restricted such that a body, if any, has no semantic meaning to the request. The requirements on parsing are separate from the requirements on method semantics.
Your HTTP request uses the GET method. You can send a request body with GET, but it should not have any meaning.
So use the code below to construct a mock HTTP request with a JSON payload. For more details, you could refer to this article.
req = func.HttpRequest(
    method='GET',
    body=None,
    url='/api/HttpTrigger',
    params={'name': 'Test'})
Update:
For a POST request, you can send a JSON payload with body=json.dumps({'name': 'Test'}).encode('utf8'), since body expects a byte string:
req = func.HttpRequest(
    method='POST',
    body=json.dumps({'name': 'Test'}).encode('utf8'),
    url='/api/HttpTrigger',
    params=None)
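The AttributeError in the question also makes sense with this fix: get_json() evidently decodes the raw body bytes before parsing them as JSON (an assumption based on the 'str' object has no attribute 'decode' message), so the body must be bytes, not str. A minimal stdlib sketch of that round trip:

```python
import json

# json.dumps returns str; .encode('utf8') turns it into the bytes
# that HttpRequest's body parameter expects.
body = json.dumps({'name': 'Test'}).encode('utf8')
assert isinstance(body, bytes)

# get_json() then (roughly) decodes and parses it back.
assert json.loads(body.decode('utf-8')) == {'name': 'Test'}
```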

python requests: URL without encoding

I understand that the requests library internally encodes the URL, similar to urllib.parse.quote(). Earlier this used to be configurable per request with config={'encode_uri': False}, but this option has been discontinued.
I need to PUT to an AWS S3 presigned URL that contains a signature. When I use requests.put() with the URL as received, I get a 403 response with SignatureDoesNotMatch.
The signature part of the URL given to requests.put() is
Bf%2BBufRIxITziSJGuAq9cYFbxBM%3D
The server sees it as:
Bf+BufRIxITziSJGuAq9cYFbxBM=
Is this caused by the requests library encoding the URL and thereby converting the signature part as shown above? If so, is there any way to prevent it and have it use the URL exactly as passed?
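The two strings do indeed differ by exactly one level of percent-encoding, which can be checked with the standard library (a quick sanity check, not a fix):

```python
from urllib.parse import unquote

sent = "Bf%2BBufRIxITziSJGuAq9cYFbxBM%3D"
seen = "Bf+BufRIxITziSJGuAq9cYFbxBM="

# Decoding the sent signature once yields exactly what the server saw,
# so something in the request path is unquoting the URL.
assert unquote(sent) == seen
```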
Override the encoding step in a Session subclass. For example, this (Python 3) variant un-encodes commas in the prepared request's URL before sending:
import urllib.parse

import requests

class NoQuotedCommasSession(requests.Session):
    def send(self, *args, **kwargs):
        # args[0] is the PreparedRequest; undo the quoting of commas.
        args[0].url = args[0].url.replace(urllib.parse.quote(","), ",")
        return super().send(*args, **kwargs)

s = NoQuotedCommasSession()
s.get("http://somesite.com/an,url,with,commas,that,won't,be,encoded.")

Google API client (Python): is it possible to use BatchHttpRequest with ETag caching

I'm using YouTube data API v3.
Is it possible to make a big BatchHttpRequest (e.g., see here) and also to use ETags for local caching at the httplib2 level (e.g., see here)?
ETags work fine for single queries, I don't understand if they are useful also for batch requests.
TL;DR:
BatchHttpRequest cannot be used with caching.
Here's the analysis:
First, let's look at the way a BatchHttpRequest is set up:
from apiclient.http import BatchHttpRequest

def list_animals(request_id, response, exception):
    if exception is not None:
        # Do something with the exception
        pass
    else:
        # Do something with the response
        pass

def list_farmers(request_id, response):
    """Do something with the farmers list response."""
    pass

service = build('farm', 'v2')

batch = service.new_batch_http_request()
batch.add(service.animals().list(), callback=list_animals)
batch.add(service.farmers().list(), callback=list_farmers)
batch.execute(http=http)
Second, let's see how ETags are used:
from google.appengine.api import memcache
http = httplib2.Http(cache=memcache)
Now let's analyze:
Observe the last line of the BatchHttpRequest example, batch.execute(http=http), and check the source code for execute: it calls _refresh_and_apply_credentials with the http object we pass in.
def _refresh_and_apply_credentials(self, request, http):
    """Refresh the credentials and apply to the request.

    Args:
        request: HttpRequest, the request.
        http: httplib2.Http, the global http object for the batch.
    """
    # For the credentials to refresh, but only once per refresh_token
    # If there is no http per the request then refresh the http passed in
    # via execute()
This means the execute call, which takes in http, can be passed the ETag-caching http object you created as:
http = httplib2.Http(cache=memcache)
# This would mean we would get the ETags cached http
batch.execute(http=http)
Update 1:
You could also try a custom cache object:
from googleapiclient.discovery_cache import DISCOVERY_DOC_MAX_AGE
from googleapiclient.discovery_cache.base import Cache
from googleapiclient.discovery_cache.file_cache import Cache as FileCache

custCache = FileCache(max_age=DISCOVERY_DOC_MAX_AGE)
http = httplib2.Http(cache=custCache)
# This would mean we would get the ETags cached http
batch.execute(http=http)
This is just a hunch, based on this comment in the httplib2 source:
"""If 'cache' is a string then it is used as a directory name for
a disk cache. Otherwise it must be an object that supports the
same interface as FileCache.
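That FileCache-style interface is small; a minimal in-memory stand-in (a sketch, assuming the interface is the get/set/delete trio that httplib2's own FileCache exposes) would look like this:

```python
class DictCache:
    """In-memory cache exposing the same interface as httplib2.FileCache."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        # FileCache returns None on a miss, so mirror that here.
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value

    def delete(self, key):
        self._store.pop(key, None)

cache = DictCache()
cache.set('etag-key', b'cached response')
assert cache.get('etag-key') == b'cached response'
cache.delete('etag-key')
assert cache.get('etag-key') is None
```

An object like this could then be passed as httplib2.Http(cache=DictCache()).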
Conclusion (Update 2):
After verifying the google-api-python-client source code again, I see that BatchHttpRequest is fixed to a POST request with a content type of multipart/mixed;.. - source code.
This gives a clue that BatchHttpRequest is meant for POSTing data that is then processed later.
Now, keeping that in mind, observe when httplib2's request method calls _updateCache. The cache is only used when one of the following criteria is met:
- the request method is in ["GET", "HEAD"], or response.status == 303, or the request is a redirect;
- else, response.status is in [200, 203] and the method is in ["GET", "HEAD"];
- or response.status == 304 and the method is "GET".
This means BatchHttpRequest cannot be used with caching.

Getting Azure Event Hubs metrics using rest api?

I'm trying to pull Event Hubs metrics using the REST API, after reading
https://msdn.microsoft.com/en-us/library/azure/dn163589.aspx and https://msdn.microsoft.com/en-us/library/azure/mt652158.aspx.
I have Python code that can call the URL and get a response.
I am currently trying the following code:
def get_metrics(subscription, eventhub, cert, specific_partition=None):
    apiversion = '2014-01'
    namespace = eventhub['namespace']
    eventhubname = eventhub['name']
    url = "https://management.core.windows.net/{}/services/ServiceBus/Namespaces/{}/eventhubs/{}/Metrics/requests.total/Rollups/P1D/Values/?$filter=timestamp%20gt%20datetime'2016-04-09T00:00:00.0000000Z'&api-version={}".format(
        subscription, namespace, eventhubname, apiversion)
    request = requests.Request('GET', url, headers=DEFAULT_HEADERS).prepare()
    session = requests.Session()
    if cert is None or not os.path.isfile(cert):
        raise ValueError('You must give certificate file')
    session.cert = cert
    result = session.send(request)
    return result
My problem is with the URL: when using the URL in the code above I get
<Error xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"><Code>InternalError</Code><Message>The server encountered an internal error. Please retry the request.</Message></Error>
I can get the API to output all possible rollups and all possible metrics, but trying to get actual values fails.
Is there something wrong in the URL, or is it a bug in Azure or the Azure documentation?
Usually, when we encounter this issue, it means there is something wrong with the endpoint we assembled for the REST API, so the service raises an exception when it parses the endpoint.
Comparing with my successful test, the interesting thing I found is that the issue is caused by the filter parameter timestamp, whose first letter should be uppercased as Timestamp. The following endpoint works fine on my side. Hope it helps.
url = "https://management.core.windows.net/{}/services/ServiceBus/Namespaces/{}/eventhubs/{}/Metrics/requests.total/Rollups/P1D/Values/?$filter=Timestamp%20gt%20datetime'2016-04-09T00:00:00.0000000Z'&api-version={}".format(
    subscription, namespace, eventhubname, '2012-03-01')
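Hand-writing %20 escapes makes case typos like this easy to miss. One way to reduce the risk (a sketch; the filter expression itself is taken from the question) is to write the $filter value in plain text and let urllib.parse.quote produce the escapes:

```python
from urllib.parse import quote

# Plain-text filter expression; quote() produces the %20 escapes.
# safe="':" keeps the apostrophes and the colons in the timestamp literal.
filter_expr = "Timestamp gt datetime'2016-04-09T00:00:00.0000000Z'"
query = "$filter=" + quote(filter_expr, safe="':")

assert query == (
    "$filter=Timestamp%20gt%20"
    "datetime'2016-04-09T00:00:00.0000000Z'")
```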

Best practice when using an httplib2.Http() object

I'm writing a Pythonic web API wrapper with a class like this:
import httplib2
import urllib.parse

class apiWrapper:
    def __init__(self):
        self.http = httplib2.Http()

    def _http(self, url, method, params):
        '''
        I'm using this wrapper around the http object
        all the time inside the class
        '''
        body = urllib.parse.urlencode(params)
        response, content = self.http.request(url, method, body)
As you can see, I'm using the _http() method to simplify interaction with the httplib2.Http() object. This method is called quite often inside the class, and I'm wondering what the best way to interact with this object is:
create the object in __init__ and then reuse it when the _http() method is called (as shown in the code above), or
create the httplib2.Http() object inside the method for every call of _http() (as shown in the sample below):
import httplib2
import urllib.parse

class apiWrapper:
    def __init__(self):
        pass

    def _http(self, url, method, params):
        '''I'm using this wrapper around the http object
        all the time inside the class'''
        http = httplib2.Http()
        body = urllib.parse.urlencode(params)
        response, content = http.request(url, method, body)
Supplying 'connection': 'close' in your headers should, according to the docs, close the connection after a response is received:
headers = {'connection': 'close'}
resp, content = h.request(url, headers=headers)
You should keep the Http object if you want to reuse connections. httplib2 seems capable of reusing connections the way you use it in your first code sample, so that looks like a good approach.
At the same time, from a shallow inspection of the httplib2 code, it seems that httplib2 has no support for cleaning up unused connections, or even for noticing when a server has decided to close a connection it no longer wants. If that is indeed the case, it looks like a bug in httplib2 to me, so I would rather use the standard library (httplib, renamed http.client in Python 3) instead.
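The reuse question can be seen concretely with the standard library the answer recommends (a self-contained sketch; the throwaway local server exists only so the example contacts no real host): one connection object serves several requests, as long as each response is fully read before the next request is issued.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Throwaway local server so the sketch needs no real network access.
class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive, so the socket can be reused

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One connection object, reused for two requests -- the equivalent of
# keeping a single httplib2.Http() in __init__.
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
bodies = []
try:
    for _ in range(2):
        conn.request("GET", "/")
        resp = conn.getresponse()
        bodies.append(resp.read())  # drain the response before reusing
finally:
    conn.close()
    server.shutdown()

assert bodies == [b"ok", b"ok"]
```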
