Issue with singleton pattern in Python

I am using a Python class with the Singleton pattern to get several benefits, e.g.:
To limit concurrent access to a shared resource.
To create a global point of access for a resource.
To create just one instance of a class throughout the lifetime of a program.
But now I am having an issue with it, so let me know how I can fix this issue while keeping the benefits of the Singleton pattern mentioned above. Note: I am using python zeep for the SOAP calls.
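For reference, the Singleton metaclass imported from shared is not shown in the question; a typical implementation (an assumption, not necessarily the actual shared code) caches one instance per class:

    class Singleton(type):
        _instances = {}

        def __call__(cls, *args, **kwargs):
            # The first instantiation is cached; every later call returns the same object.
            if cls not in cls._instances:
                cls._instances[cls] = super().__call__(*args, **kwargs)
            return cls._instances[cls]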
Sample Code:
from zeep.plugins import HistoryPlugin
from shared import Singleton
import zeep

class MySoapClient(metaclass=Singleton):
    """MyFancySoapClient"""

    def __init__(
        self,
        user: str = "username",
        password: str = "password",
        wsdl: str = "wsdl_url_goes_here",
    ):
        self._history = HistoryPlugin()
        self._soap_client = zeep.Client(wsdl, plugins=[self._history])

    def methodA(self):
        # 'body' is elided in the question; each method sends some request body
        resp = self._soap_client.ServiceA(body)
        return resp

    def methodB(self):
        resp = self._soap_client.ServiceB(body)
        return resp

    def methodC(self, request_proxy_url):
        self._soap_client.transport.session.proxies = {"https": request_proxy_url}
        resp = self._soap_client.ServiceE(body)
        return resp

    def methodD(self):
        resp = self._soap_client.ServiceC(body)
        return resp

    def methodE(self):
        resp = self._soap_client.ServiceD(body)
        return resp

client = MySoapClient()
client.methodA()
client.methodB()
client.methodC("https://example.com")  # <---- after this call it modifies the '_soap_client' attribute
client.methodD()
client.methodE()
That's why methodD() and methodE() are affected: methodC() sets self._soap_client.transport.session.proxies, and I actually need the proxy URL only for methodC(), but due to the singleton the updated attribute value propagates to methodD() and methodE() as well. This ultimately makes methodD() and methodE() fail, because the SOAP calls inside those methods must not use the proxy.

You can rewrite your methodC as:

    def methodC(self, request_proxy_url):
        original_proxies = self._soap_client.transport.session.proxies
        self._soap_client.transport.session.proxies = {"https": request_proxy_url}
        try:
            resp = self._soap_client.ServiceE(body)
        finally:
            # Restore the shared session's proxies even if the call raises,
            # so later calls on the singleton are unaffected.
            self._soap_client.transport.session.proxies = original_proxies
        return resp
(and do the same for any other method which needs to modify the setup of the self._soap_client instance before making the request)
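If several methods need this save-and-restore dance, a small context manager keeps the logic in one place. This is my own sketch, not part of zeep; temporary_proxies is a hypothetical helper:

    from contextlib import contextmanager

    @contextmanager
    def temporary_proxies(session, proxies):
        """Temporarily swap the requests session's proxy mapping."""
        original = session.proxies
        session.proxies = proxies
        try:
            yield session
        finally:
            # Restore the old mapping even if the SOAP call raises.
            session.proxies = original

methodC then shrinks to:

    def methodC(self, request_proxy_url):
        with temporary_proxies(self._soap_client.transport.session,
                               {"https": request_proxy_url}):
            return self._soap_client.ServiceE(body)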
I am a bit skeptical of enforcing the singleton pattern rather than just creating a global variable in a module and importing it from everywhere... but that is just personal taste and has no relation to your issue.
Knowing the nature of SOAP APIs, I expect the zeep.Client instance is quite a heavy object, so it totally makes sense to avoid having multiple instances if you can.
If you use a concurrent platform (e.g. Python with gevent or threads) then you have to be careful to avoid global variables which mutate their shared state, like this MySoapClient now does.
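One way to contain that risk on a threaded platform (my own suggestion, not from the question) is to keep one zeep.Client per thread via threading.local, so a proxy tweak in one thread never leaks into another:

    import threading

    class MySoapClient(metaclass=Singleton):
        def __init__(self, wsdl: str = "wsdl_url_goes_here"):
            self._wsdl = wsdl
            self._local = threading.local()

        @property
        def _soap_client(self):
            # Lazily build one client per thread; mutations stay thread-private.
            if not hasattr(self._local, "client"):
                self._local.client = zeep.Client(self._wsdl)
            return self._local.client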
An alternative would be for it to maintain a small number of distinct zeep.Client instances, and for each of methodA, methodC, etc. to use the appropriate _soap_client instance. Something like:
class MySoapClient(metaclass=Singleton):
    """MyFancySoapClient"""

    def __init__(
        self,
        user: str = "username",
        password: str = "password",
        wsdl: str = "wsdl_url_goes_here",
        request_proxy_url: str = "default value",
    ):
        self._history = HistoryPlugin()
        self._soap_client = zeep.Client(wsdl, plugins=[self._history])
        self._soap_client_proxied = zeep.Client(wsdl, plugins=[self._history])
        self._soap_client_proxied.transport.session.proxies = {"https": request_proxy_url}

    def methodB(self):
        resp = self._soap_client.ServiceB(body)
        return resp

    def methodC(self):
        # Always goes through the client that was pre-configured with the proxy.
        resp = self._soap_client_proxied.ServiceE(body)
        return resp

    # etc
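Usage then looks like this (the proxy URL is illustrative); the caller supplies the proxy once, up front, and the two clients never share mutable state:

    client = MySoapClient(request_proxy_url="http://proxy.example:8080")
    client.methodB()  # direct client
    client.methodC()  # pre-configured proxied client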

Related

How to mock client object

I am working on writing unit tests for my FastAPI project.
One endpoint includes getting a ServiceNow ticket. Here is the code I want to test:
from aiosnow.models.table.declared import IncidentModel as Incident
from fastapi import APIRouter

router = APIRouter()

@router.post("/get_ticket")
async def snow_get_ticket(req: DialogflowRequest):
    """Retrieves the status of the ticket in the parameter."""
    client = create_snow_client(
        SNOW_TEST_CONFIG.servicenow_url, SNOW_TEST_CONFIG.user, SNOW_TEST_CONFIG.pwd
    )
    params: dict = req.sessionInfo["parameters"]
    ticket_num = params["ticket_num"]
    try:
        async with Incident(client, table_name="incident") as incident:
            response = await incident.get_one(Incident.number == ticket_num)
            stage_value = response.data["state"].value
            desc = response.data["description"]
            [...data manipulation, unimportant parts]
What I am having trouble with is mocking the client response; every time, the actual client gets invoked and makes the API call, which I don't want.
Here is the current version of my unit test:
from fastapi.testclient import TestClient

client = TestClient(app)

@patch("aiosnow.models.table.declared.IncidentModel")
def test_get_ticket_endpoint_valid_ticket_num(self, mock_client):
    mock_client.return_value = {"data": {"state": "new",
                                         "description": "test"}}
    response = client.post(
        "/snow/get_ticket", json=json.load(self.test_request)
    )
    assert response.status_code == 200
I think my problem is patching the wrong object, but I am not sure what else to patch.
In your test you're calling client.post(...); if you don't want this to go to the ServiceNow API, this client should be mocked.
Edit 1:
Okay, so the way your test is set up now, the self arg is the mocked IncidentModel object. So only that object will be a mock. Since you are creating a brand new IncidentModel object in your post method, it is a real IncidentModel object, hence why it's actually calling the API.
In order to mock the IncidentModel.get_one method so that it will return your mock value any time an object calls it, you want to do something like this:
def test_get_ticket_endpoint_valid_ticket_num(self):
    mock_response = {"data": {"state": "new",
                              "description": "test"}}
    with patch.object(aiosnow.models.table.declared.IncidentModel, "get_one",
                      return_value=mock_response):
        response = client.post(
            "/snow/get_ticket", json=json.load(self.test_request)
        )
    assert response.status_code == 200
The way variable assignment works in Python, changing aiosnow.models.table.declared.IncidentModel will not change the IncidentModel that you've imported into your Python file. You have to do the mocking where you use the object.
So instead of @patch("aiosnow.models.table.declared.IncidentModel"), you want to do @patch("your_python_file.IncidentModel").
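One more wrinkle the thread doesn't spell out: get_one is awaited inside the endpoint, so whatever replaces it must be awaitable. A sketch using AsyncMock, patching Incident in the module that defines the route (your_python_file is a placeholder, and the request payload is inferred from the endpoint code):

    from unittest.mock import AsyncMock, MagicMock, patch

    @patch("your_python_file.Incident")
    def test_get_ticket_endpoint_valid_ticket_num(mock_incident_cls):
        # Shape the fake response like the endpoint expects:
        # response.data["state"].value and response.data["description"].
        fake_response = MagicMock()
        fake_response.data = {"state": MagicMock(value="new"), "description": "test"}
        # 'async with Incident(...) as incident' yields this mock...
        incident = mock_incident_cls.return_value.__aenter__.return_value
        # ...and 'await incident.get_one(...)' needs an AsyncMock to be awaitable.
        incident.get_one = AsyncMock(return_value=fake_response)
        response = client.post(
            "/snow/get_ticket",
            json={"sessionInfo": {"parameters": {"ticket_num": "INC0000001"}}},
        )
        assert response.status_code == 200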

How do you mock dynamically added object methods?

I'm trying to mock external API calls in the pyonfleet package. My desired usage is something like below:

from unittest.mock import patch

def get_task(id):
    from onfleet import Onfleet
    api = Onfleet()
    return api.tasks.get(id=id)

with patch('onfleet.Onfleet.tasks.get', return_value={}):
    task = get_task(id='...')
The problem is that the tasks attribute and its API endpoints are set on the Onfleet object using setattr in its _initialize_resources method. Because of this, all my tests raise the error:
AttributeError: type object 'Onfleet' has no attribute 'tasks'
class Onfleet(object):
    # Loading config data
    data = Config.data
    # Look up local authentication JSON if no api_key was passed
    if (os.path.isfile(".auth.json")):
        with open(".auth.json") as json_secret_file:
            secret = json.load(json_secret_file)

    def __init__(self, api_key=None):
        self._session = requests.Session()
        # auth takes api_key and api_secret
        self._session.auth = (api_key if api_key else self.secret["API_KEY"], "")
        self._initialize_resources(self._session)

    def auth_test(self):
        path = self.data["URL"]["base_url"] + self.data["URL"]["version"] + self.data["URL"]["auth_test"]
        response = self._session.get(path)
        return response.json()

    def _initialize_resources(self, session):
        resources = self.data["RESOURCES"]
        # Go through the config module to create endpoints
        for endpoint, http_methods in resources.items():
            setattr(self, endpoint, Endpoint(http_methods, session))
How do I patch methods on an object that are added dynamically?
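No answer is recorded here, but one common workaround (my sketch, not from the thread) is to patch the Onfleet class itself rather than the attribute that only exists after __init__ runs. Because get_task imports Onfleet inside the function body, patching onfleet.Onfleet takes effect at call time:

    from unittest.mock import patch

    # Replacing the whole class means __init__ (and its setattr-based
    # _initialize_resources) never runs, and 'tasks' is an ordinary mock attribute.
    with patch("onfleet.Onfleet") as MockOnfleet:
        MockOnfleet.return_value.tasks.get.return_value = {}
        task = get_task(id="...")
        assert task == {}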

Defining request and response objects for webapp2 handlers on GAE python

I already have a REST API with GAE Python built using webapp2. I was looking at the protorpc messages used in protorpc and Cloud Endpoints, and I really like how I can define the requests and responses. Is there a way to incorporate that into my webapp2 handlers?
Firstly, I use a decorator on the webapp2 method. I define the decorator as follows:
# Takes webapp2 request (self.request on baseHandler) and converts to defined protoRpc object
def jsonMethod(requestType, responseType, http_method='GET'):
    """
    NB: if both URL and POST parameters are used, do not use 'required=True' values in the protorpc Message definition
    as this will fail on one of the sets of params
    """
    def jsonMethodHandler(handler):
        def jsonMethodInner(self, **kwargs):
            requestObject = getJsonRequestObject(self, requestType, http_method, kwargs)
            logging.info(u'request={0}'.format(requestObject))
            response = handler(self, requestObject)  # Response object
            if response:
                # Convert response to Json
                responseJson = protojson.encode_message(response)
            else:
                responseJson = '{}'
            logging.info(u'response json={0}'.format(responseJson))
            if self.response.headers:
                self.response.headers['Content-Type'] = 'application/json'
            if responseJson:
                self.response.write(responseJson)
            self.response.write('')
        return jsonMethodInner
    return jsonMethodHandler
The jsonMethod decorator uses a protorpc message for 'requestType' and 'responseType'.
I have constrained the http_method to be either GET, POST or DELETE for a method; you may wish to change this.
Note that this decorator must be applied to instance methods on a webapp2.RequestHandler class (see the example below) as it needs to access the webapp2 request and response objects.
The protorpc message is populated in getJsonRequestObject():
def getJsonRequestObject(self, requestType, http_method, kwargs):
    """
    kwargs: URL keywords eg: /api/test/<key:\d+> => key
    request.GET: used for 'GET' URL query string arguments
    request.body: used for 'POST' or 'DELETE' form fields
    """
    logging.info(u'URL parameters: {0}'.format(kwargs))
    if http_method == 'POST' or http_method == 'DELETE':
        requestJson = self.request.body
        if requestJson is None:
            requestJson = ''  # Cater for no body (eg: IE10)
        try:
            logging.info("Content type = {}".format(self.request.content_type))
            logRequest = requestJson if len(requestJson) < 1024 else requestJson[0:1024]
            try:
                logging.info(u'Request JSON: {0}'.format(logRequest))
            except:
                logging.info("Cannot log request JSON (invalid character?)")
            postRequestObject = protojson.decode_message(requestType, requestJson)
        except:
            logError()
            raise
        if self.request.query_string:
            # combine POST and GET parameters [GET query string overrides POST field]
            getRequestObject = protourlencode.decode_message(requestType, self.request.query_string)
            requestObject = combineRequestObjects(requestType, getRequestObject, postRequestObject)
        else:
            requestObject = postRequestObject
    elif http_method == 'GET':
        logging.info(u'Query strings: {0}'.format(self.request.query_string))
        requestObject = protourlencode.decode_message(requestType, self.request.query_string)
        logging.info(u'Request object: {0}'.format(requestObject))
    else:
        raise ValidationException(u'Invalid HTTP method: {0}'.format(http_method))
    if len(kwargs) > 0:
        # Combine URL keywords (kwargs) with requestObject
        queryString = urllib.urlencode(kwargs)
        keywordRequestObject = protourlencode.decode_message(requestType, queryString)
        requestObject = combineRequestObjects(requestType, requestObject, keywordRequestObject)
    return requestObject
getJsonRequestObject() handles GET, POST and webapp2 URL arguments (note: these are entered as kwargs).
combineRequestObjects() combines two objects of the requestType message:
def combineRequestObjects(requestType, request1, request2):
    """
    Combines two objects of requestType; note that request2 values will override request1 values if duplicated
    """
    members = inspect.getmembers(requestType, lambda a: not(inspect.isroutine(a)))
    members = [m for m in members if not m[0].startswith('_')]
    for key, value in members:
        val = getattr(request2, key)
        if val:
            setattr(request1, key, val)
    return request1
Finally, a decorated webapp2 method example:
from protorpc import messages, message_types

class FileSearchRequest(messages.Message):
    """
    A combination of file metadata and file information
    """
    filename = messages.StringField(1)
    startDateTime = message_types.DateTimeField(2)
    endDateTime = message_types.DateTimeField(3)

class ListResponse(messages.Message):
    """
    List of strings response
    """
    items = messages.StringField(1, repeated=True)

...

class FileHandler(webapp2.RequestHandler):
    @jsonMethod(FileSearchRequest, ListResponse, http_method='POST')
    def searchFiles(self, request):
        # Can now use request.filename etc
        ...
        return ListResponse(items=items)
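To wire the handler up, the usual webapp2 routing applies; a sketch (the URL and app wiring are my assumptions, not from the answer):

    import webapp2

    # Route POST /api/files/search to FileHandler.searchFiles; the decorator
    # then decodes the request body into a FileSearchRequest for the method.
    app = webapp2.WSGIApplication([
        webapp2.Route('/api/files/search', handler=FileHandler,
                      handler_method='searchFiles', methods=['POST']),
    ])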
Hopefully this will give you some idea of how to go about implementing your own webapp2/protorpc framework.
You can also check and see how Cloud Endpoints is implementing their protorpc message handling. You may also need to dive into the protorpc code itself.
Please note that I have attempted to simplify my existing implementation, so you may come across various issues that you will need to address in your implementation.
In addition, methods like 'logError()' and classes like 'ValidationException' are non-standard, so you will need to replace them as you see fit.
You may also wish to remove the logging at some point.

Updating Attributes of Class as Parameters Change: How to Keep Brokerage Account Class up-to-date?

How does one keep the attributes of an instance of a class up to date if they are changing moment to moment?
For example, I have defined a class describing my stock trading brokerage account balances. I have defined a function which pings the brokerage API and returns a JSON object with the current status of various parameters. The status of each parameter is then set as an attribute of a given instance.
import json
import requests
from ConfigParser import SafeConfigParser

# 'parser' is a SafeConfigParser configured elsewhere (not shown in the question)

class Account_Balances:
    def Account_Balances_Update(self):
        """Pings brokerage for current status of target account"""
        # set query args
        endpoint = parser.get('endpoint', 'brokerage') + 'user/balances'
        headers = {'Authorization': parser.get('account', 'Auth'), 'Accept': parser.get('message_format', 'accept_format')}
        # send query
        r = requests.get(endpoint, headers=headers)
        response = json.loads(r.text)
        return response

    def __init__(self):
        self.response = self.Account_Balances_Update()
        self.parameterA = self.response['balances']['parameterA']
        self.parameterB = self.response['balances']['parameterB']
As it stands, this code sets the parameters at the moment the instance is created, but then they become static.
Presumably parameterA and parameterB are changing moment to moment, so I need to keep them up to date for any given instance when requested. Updating the parameters requires rerunning the Account_Balances_Update() function.
What is the pythonic way to keep the attribute of a given instance of a class up to date in a fast moving environment like stock trading?
Why not just create an update method?
class Account_Balances:
    @staticmethod
    def fetch():
        """Pings brokerage for current status of target account"""
        # set query args
        endpoint = parser.get('endpoint', 'brokerage') + 'user/balances'
        headers = {'Authorization': parser.get('account', 'Auth'), 'Accept': parser.get('message_format', 'accept_format')}
        # send query
        r = requests.get(endpoint, headers=headers)
        response = json.loads(r.text)
        balances = response['balances']
        return balances['parameterA'], balances['parameterB']

    def update(self):
        self.parameterA, self.parameterB = self.fetch()
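If you want each attribute access to be fresh without callers having to remember update(), another sketch (my own, not from the answer) is to expose the values as properties; the trade-off is one HTTP round trip per access:

    class LiveAccountBalances(Account_Balances):
        @property
        def parameterA(self):
            # Re-fetches on every access; add a timestamp-based cache if this
            # is too chatty for your API rate limits.
            return self.fetch()[0]

        @property
        def parameterB(self):
            return self.fetch()[1]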

What is the best way to force a keyword while using **kwargs?

I'm not sure if I have used the correct terminology in the question.
Currently, I am trying to make a wrapper/interface around Google's Blogger API (Blog service).
[I know it has been done already, but I am using this as a project to learn OOP/python.]
I have made a method that gets a set of 25 posts from a blog:
def get_posts(self, **kwargs):
    """ Makes an API request. Returns list of posts. """
    api_url = '/blogs/{id}/posts'.format(id=self.id)
    return self._send_request(api_url, kwargs)

def _send_request(self, url, parameters={}):
    """ Sends an HTTP GET request to the Blogger API.
    Returns JSON decoded response as a dict. """
    url = '{base}{url}?'.format(base=self.base, url=url)
    # Requests formats the parameters into the URL for me
    try:
        r = requests.get(url, params=parameters)
    except:
        print "** Could not reach url:\n", url
        return
    api_response = r.text
    return self._jload(api_response)
The problem is, I have to specify the API key every time I call the get_posts function:
someblog = BloggerClient(url='http://someblog.blogger.com', key='0123')
someblog.get_posts(key=self.key)
Every API call requires that the key be sent as a parameter on the URL.
Then, what is the best way to do that?
I'm thinking a possible way (but probably not the best way?) is to add the key to the kwargs dictionary in _send_request():
def _send_request(self, url, parameters={}):
    """ Sends an HTTP GET request to the Blogger API.
    Returns JSON decoded response. """
    # Format the full API URL:
    url = '{base}{url}?'.format(base=self.base, url=url)
    # The api key will always be added:
    parameters['key'] = self.key
    try:
        r = requests.get(url, params=parameters)
    except:
        print "** Could not reach url:\n", url
        return
    api_response = r.text
    return self._jload(api_response)
I can't really get my head around what is the best way (or most pythonic way) to do it.
You could store it in a named constant.
If this code doesn't need to be secure, simply
API_KEY = '1ih3f2ihf2f'
If it's going to be hosted on a server somewhere or needs to be more secure, you could store the value in an environment variable
In your terminal:
export API_KEY='1ih3f2ihf2f'
then in your python script:
import os
API_KEY = os.environ.get('API_KEY')
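Tying that back to the client (hypothetical wiring; the fallback literal is just the example key from above):

    import os

    # Fall back to the hard-coded example key only if the env var is unset.
    API_KEY = os.environ.get('API_KEY', '1ih3f2ihf2f')
    someblog = BloggerClient(url='http://someblog.blogger.com', key=API_KEY)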
The problem is, I have to specify the API key every time I call the get_posts function:
If it really is just this one method, the obvious idea is to write a wrapper:
def get_posts(blog, *args, **kwargs):
    return blog.get_posts(*args, key=key, **kwargs)
Or, better, wrap up the class to do it for you:
class KeyRememberingBloggerClient(BloggerClient):
    def __init__(self, *args, **kwargs):
        self.key = kwargs.pop('key')
        super(KeyRememberingBloggerClient, self).__init__(*args, **kwargs)

    def get_posts(self, *args, **kwargs):
        return super(KeyRememberingBloggerClient, self).get_posts(
            *args, key=self.key, **kwargs)
So now:
someblog = KeyRememberingBloggerClient(url='http://someblog.blogger.com', key='0123')
someblog.get_posts()
Yes, you can override or monkeypatch the _send_request method that all of the other methods use; but if there are only 1 or 2 methods that need to be fixed, why delve into the undocumented internals of the class and fork the body of one of those methods, just so you can change it in a way you clearly weren't expected to, instead of doing it cleanly?
Of course, if there are 90 different methods scattered across 4 different classes, you might want to consider building these wrappers programmatically (and/or monkeypatching the classes)… or just patching the one private method, as you're doing. That seems reasonable.
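A related option neither answer mentions: a requests.Session can carry default query parameters that are merged into every request made on it, so the key can be attached once when the client is built. A sketch, not the asker's actual class:

    import requests

    class SessionKeyedBloggerClient(object):
        def __init__(self, base, key):
            self.base = base
            self.session = requests.Session()
            # Merged into the query string of every request on this session.
            self.session.params = {'key': key}

        def _send_request(self, url, parameters=None):
            r = self.session.get('{base}{url}'.format(base=self.base, url=url),
                                 params=parameters or {})
            return r.json()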
