I've been tasked with updating an existing Python module that talks to an external API in many different places using requests. That is, the module contains at least 50 call sites where requests is used to hit that API.
There was a change in the API that requires a header to be added to all requests. Before manually adding the header to all 50 call sites, I was wondering if it is possible to define some kind of "middleware" (as in Django, for example) that could add the header for all requests at once.
Is something like this possible with requests?
You can monkey-patch the requests.request function with a wrapper that merges additional entries into the dict passed as the headers argument.
Below is an example that forces all requests to carry a User-Agent header with the value My Browser:
import requests

def override_headers(func, global_headers):
    def wrapper(*args, **kwargs):
        # merge the forced headers into whatever headers the caller passed
        # (copy first so the caller's dict is not mutated)
        headers = dict(kwargs.get('headers') or {})
        headers.update(global_headers)
        kwargs['headers'] = headers
        return func(*args, **kwargs)
    return wrapper

requests.request = override_headers(requests.request, {'User-Agent': 'My Browser'})
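Note that helpers such as requests.get and requests.post resolve the request function inside the requests.api module, so rebinding requests.request only affects direct requests.request(...) calls. A sketch of one way to cover the helpers as well (my own addition, not part of the original answer) is to patch Session.request with the same wrapper, since every high-level helper ultimately goes through it:

requests.Session.request = override_headers(
    requests.Session.request, {'User-Agent': 'My Browser'})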
I have created an application in Python/Django that makes API calls to some third-party services and returns the response; after that, I do some processing on the response data and generate a final document. Below is something I am doing:
def get_processed_data(url, payload, tenant, req_id, timeout=None):
    query_kwargs = HTTPRequestUtils.__get_query_kwargs(timeout)
    query_kwargs['json'] = payload
    response = HTTPRequestUtils.__get_response(url, query_kwargs, requests.post)
    ...
    data = process_response(response)
    return more_processings(data)
Above is one of the functions called during actual execution of the code, and the response varies with the URL.
Now the problem is that I am writing unit tests, and I have to emulate/mock the HTTP call so that for different URLs I can return different mocked responses, which will then be processed further.
I went through several libraries like responses, but what I concluded from them is that they only let me test the API call and its returned response. What I actually need is to emulate/mock the HTTP call itself and return a different response for each URL, like we do with patch in mock, so that the response can go on for further processing.
Is there any library or method by which I can achieve this?
If you have an idea of the order in which the API calls will take place, you can make use of the side_effect attribute of the mock library: it makes the mock return a different response each time it is called.
For example:
mock_api.side_effect = [resp1, resp2]
So when api() is called the first time, the response will be resp1, and the second time it will be resp2.
I think this will solve your problem.
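To make that concrete, here is a minimal self-contained sketch with unittest.mock (the URLs and response objects are illustrative, not from the question's codebase):

from unittest import mock
import requests

resp1 = mock.Mock(status_code=200)
resp2 = mock.Mock(status_code=404)

# Each call to the patched requests.post consumes the next item of side_effect.
with mock.patch('requests.post', side_effect=[resp1, resp2]):
    first = requests.post('https://api.example.com/a')   # returns resp1
    second = requests.post('https://api.example.com/b')  # returns resp2

assert first is resp1 and second is resp2

If the call order is not predictable, side_effect can also be a function: it receives the same arguments as the patched call, so it can inspect the URL and return the matching canned response.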
I understand that the requests library internally encodes the URL, similarly to urllib.parse.quote(). Earlier this was configurable per request with config={'encode_uri': False}, but that configuration has been discontinued.
I need to PUT to an AWS S3 presigned URL that contains a signature. When I use requests.put() with the URL exactly as received, I get a 403 response with SignatureDoesNotMatch.
The signature part of the URL given to requests.put() is
Bf%2BBufRIxITziSJGuAq9cYFbxBM%3D
The server sees it as:
Bf+BufRIxITziSJGuAq9cYFbxBM=
Is this caused by the requests library encoding the URL and thereby converting the signature part as above? If so, is there any way to prevent it and make requests use the URL exactly as passed?
Override the encoding by fixing up the prepared request in a Session subclass:
import requests
import urllib.parse

class NoQuotedCommasSession(requests.Session):
    def send(self, *args, **kwargs):
        # args[0] is the PreparedRequest; undo the percent-encoding of commas
        prepared = args[0]
        prepared.url = prepared.url.replace(urllib.parse.quote(","), ",")
        return super().send(*args, **kwargs)

s = NoQuotedCommasSession()
s.get("http://somesite.com/an,url,with,commas,that,won't,be,encoded.")
I'm writing a Python client for a RESTful API that requires a special header to be passed on each request. The header has the form X-Seq-Count: n, where n is the sequential number of the request: the first request made should carry the header X-Seq-Count: 1, the second one X-Seq-Count: 2, and so on.
I'm using the requests library to handle the low-level HTTP calls. What would be the best approach to tracking the number of requests made and injecting the custom header? What I came up with is subclassing requests.Session and overriding the Session.prepare_request method:
class CustomSession(requests.Session):
    def __init__(self):
        super().__init__()
        self.requests_count = 0

    def prepare_request(self, request):
        # increment the requests counter
        self.requests_count += 1
        # update the header
        self.headers['X-Seq-Count'] = str(self.requests_count)
        return super().prepare_request(request)
However, I'm not very happy with subclassing Session. Is there a better way? I stumbled upon the event hooks in the docs, but I'm unsure how to use them: looking at the source code, it seems that hooks can only be applied to the response object, not the request object.
As an alternative, you can take advantage of the auth mechanism of requests and modify the prepared request object in an auth callable:
import requests

def auth_provider(req):
    global requests_count
    requests_count += 1
    # header values must be strings, so convert the counter
    req.headers['X-Seq-Count'] = str(requests_count)
    print('requests_count:', requests_count)
    return req

requests_count = 0
s = requests.Session()
s.auth = auth_provider
s.get('https://www.example.com')
requests.get('https://www.example.com', auth=auth_provider)
output:
requests_count: 1
requests_count: 2
However, subclassing Session sounds okay to me.
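If you want to avoid the global counter, the same idea fits in a small callable class; here is a sketch using requests' AuthBase (my own variation, with illustrative names):

import requests

class SeqCountAuth(requests.auth.AuthBase):
    """Auth callable that stamps an incrementing X-Seq-Count header."""
    def __init__(self):
        self.count = 0

    def __call__(self, req):
        self.count += 1
        req.headers['X-Seq-Count'] = str(self.count)
        return req

s = requests.Session()
s.auth = SeqCountAuth()
s.get('https://www.example.com')  # sent with X-Seq-Count: 1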
How would one go about building a request from a variable number of kwargs when using requests?
I am using the requests module to interact directly with a REST API that requires a variable number of keyword arguments in order to succeed.
Rather than rewriting the same GET/POST request code, I would like to maintain it within a single API class. However, handling the variable number of arguments seems to boil down to a series of if-else statements, which isn't particularly readable.
For example:
def request(self):
    try:
        if self.data:
            request = requests.post(url=self.url, headers=self.headers,
                                    data=self.data, timeout=self.timeout,
                                    verify=False)
        else:
            request = requests.get(url=self.url, headers=self.headers,
                                   timeout=self.timeout, verify=False)
        ...
Preferably, the request properties would be built up over time and then passed through a single GET or POST call (granted, the above code would still be required, but that is minor).
If you make the attributes default to the same values as the corresponding arguments of requests.post (basically, None), then you can safely pass all of them as keyword arguments:
def request(self):
    try:
        request = requests.post(url=self.url, headers=self.headers,
                                data=self.data, timeout=self.timeout,
                                verify=False)
        ...
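A minimal self-contained sketch of that idea (the class and attribute names are illustrative, mirroring the question): since None matches requests' own defaults, a single requests.request call can also replace the GET/POST branches entirely.

import requests

class APIClient:
    def __init__(self, url, headers=None, data=None, timeout=None):
        # attributes default to None, matching requests' own defaults
        self.url = url
        self.headers = headers
        self.data = data
        self.timeout = timeout

    def request(self):
        # one call covers both cases: the verb depends on whether there is a body
        method = 'post' if self.data else 'get'
        return requests.request(method, self.url, headers=self.headers,
                                data=self.data, timeout=self.timeout,
                                verify=False)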
I'm writing a Pythonic web API wrapper with a class like this:
import httplib2
import urllib

class apiWrapper:
    def __init__(self):
        self.http = httplib2.Http()

    def _http(self, url, method, params_dict):
        '''
        I'm using this wrapper around the http object
        all the time inside the class
        '''
        params = urllib.urlencode(params_dict)
        # httplib2's request signature is (uri, method, body, ...)
        response, content = self.http.request(url, method, params)
As you can see, I'm using the _http() method to simplify interaction with the httplib2.Http() object. This method is called quite often inside the class, and I'm wondering about the best way to interact with this object:
create the object in __init__ and then reuse it whenever the _http() method is called (as shown in the code above), or
create a new httplib2.Http() object inside the method for every call of _http() (as shown in the sample below):
import httplib2
import urllib

class apiWrapper:
    def _http(self, url, method, params_dict):
        '''I'm using this wrapper around the http object
        all the time inside the class'''
        # create a fresh Http object on every call
        http = httplib2.Http()
        params = urllib.urlencode(params_dict)
        response, content = http.request(url, method, params)
Supplying 'connection': 'close' in your headers should, according to the docs, close the connection after a response is received:
headers = {'connection': 'close'}
resp, content = h.request(url, headers=headers)
You should keep the Http object if you want to reuse connections. It seems httplib2 is capable of reusing connections the way you use it in your first code sample, so this looks like a good approach.
At the same time, from a shallow inspection of the httplib2 code, it seems that httplib2 has no support for cleaning up unused connections, or even for noticing when a server has decided to close a connection it no longer wants. If that is indeed the case, it looks like a bug in httplib2 to me, so I would rather use the standard library (httplib) instead.
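For reference, a minimal sketch of the stdlib route the answer alludes to (Python 2's httplib, matching the Python 2 style of the question; the host and path are placeholders):

import httplib
import urllib

params = urllib.urlencode({'q': 'test'})
conn = httplib.HTTPConnection('api.example.com')
conn.request('POST', '/endpoint', params,
             {'Content-Type': 'application/x-www-form-urlencoded'})
resp = conn.getresponse()
content = resp.read()
conn.close()  # explicit cleanup, unlike httplib2's cached connections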