call method of lambda function from another lambda function - Python

I am able to call a lambda function from another lambda function, but by default the call goes to the handler method. How do I invoke some other method defined in it?
Suppose there is a lambda function master.py that holds common methods which other lambda functions can use, so that I don't have to write them again and again in every function. Now I want to call methods of master.py (let's say getTime(), authenticateUser(), etc.) from other lambda functions.
Basically, I want to keep one lambda function with common methods that can be used by other lambda functions.
Any help is appreciated.
Below is the code I have tried to call one lambda function from another (I have taken the code from this question), but it goes to the handler() method:
lambda function A
def handler(event, context):
    params = event['list']
    return {"params": params + ["abc"]}
lambda function B invoking A
import boto3
import json

lambda_client = boto3.client('lambda')

a = [1, 2, 3]
x = {"list": a}
invoke_response = lambda_client.invoke(FunctionName="functionA",
                                       InvocationType='RequestResponse',
                                       Payload=json.dumps(x))
print(invoke_response['Payload'].read())
Output
{
"params": [1, 2, 3, "abc"]
}

You can pass the data needed to select and run the desired method within the event parameter when calling invoke. Include the following code at the top of the lambda_handler of the function whose methods you want to expose:
def lambda_handler(event, context):
    """
    Intermediary method that is invoked by other lambda functions to run methods within this
    lambda function and return the results.

    Parameters
    ----------
    event : dict
        Dictionary specifying which method to run and the arguments to pass to it:
        {"function": "nameOfTheMethodToExecute", "arguments": {"arg1": val1, ..., "argn": valn}}
    context : dict
        Not used

    Returns
    -------
    object : object
        The return values from the executed function. If more than one object is returned, they
        are contained within an array.
    """
    if "function" in event:
        return globals()[event["function"]](**event["arguments"])
    else:
        ...  # your existing lambda_handler code
Then, from your other lambda function, call the method and get its return values using the following invoke helper:
import json
import boto3

lambda_client = boto3.client('lambda')

# returns the full name of a lambda function from AWS based on a given unique part
getFullName = lambda lambdaMethodName: [method["FunctionName"]
                                        for method in lambda_client.list_functions()["Functions"]
                                        if lambdaMethodName in method["FunctionName"]][0]

# execute a method in a different lambda function and return the results
def invoke(lambda_function, method_name, params):
    # wrap up the method name to run and its parameters
    payload = json.dumps({"function": method_name, "arguments": params})
    # invoke the function and record the response
    response = lambda_client.invoke(FunctionName=getFullName(lambda_function),
                                    InvocationType='RequestResponse',
                                    Payload=payload)
    # parse the return values from the response
    return json.loads(response["Payload"].read())

[rtn_val_1, rtn_val_2] = invoke("fromLambdaA", "desiredFunction", {"arg1": val1, "arg2": val2})
Note that the IAM policy attached to the function doing the invoking will need two permissions: "lambda:ListFunctions" and "lambda:InvokeFunction".
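One caveat with the globals() lookup above: any top-level name in the module becomes remotely callable. A hedged variant (a sketch, not part of the original answer; getTime is a hypothetical shared helper) dispatches through an explicit whitelist instead:

```python
def getTime():
    # stand-in for one of the shared helper methods
    return "12:00"

# explicit whitelist: only these methods may be invoked remotely
DISPATCH = {
    "getTime": getTime,
}

def lambda_handler(event, context):
    if "function" in event:
        func = DISPATCH.get(event["function"])
        if func is None:
            return {"error": "unknown function: %s" % event["function"]}
        return func(**event.get("arguments", {}))
    # ... fall through to the existing handler code ...
```

Unknown names now produce a controlled error payload instead of a KeyError inside the handler.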

Related

mock object library ANY not working as expected

I'm currently trying to mock a patch request to a server and I'm trying to make use of the ANY attribute in the mock object library. I have the following code:
@patch('path_to_patch.patch')
def test_job_restarted_succesfully(mock_patch):
    make_patch_call()
    mock_patch.assert_called_with(url=ANY, payload=ANY, callback=ANY, async_run=ANY, kwargs=ANY)
I'm getting the following error:
AssertionError: Expected call: patch(async_run=<ANY>, callback=<ANY>, kwargs=<ANY>, payload=<ANY>, url=<ANY>)
E Actual call: patch(async_run=True, callback=<function JobSvc.send_job_patch_request.<locals>.retry_on_404 at 0x000002752B873168>, payload={'analyzer': {'state': 'started'}, 'meta': {}}, svc_auth=UUID('40ed1a00-a51f-11eb-b1ed-b46bfc345269'), url='http://127.0.0.1:8080/rarecyte/1.0/jobs/slide1#20210422_203831_955885')
I found ANY in the docs linked below and can't figure out why assert_called_with() is expecting the actual parameters of the call.
Here is the relevant section in the docs: https://docs.python.org/3/library/unittest.mock.html#any
EDIT:
The make_patch_call() ultimately calls this patch function after computing all the parameters needed for the patch function.
def patch(self, url, payload, callback=None, async_run=False, **kwargs):
    payload = self._serialize_payload(payload)
    func = self._do_async_request if async_run else self._do_request
    return func('patch', (url, payload), callback, kwargs)
For assert_called_with, the arguments and the keywords used have to match exactly. Substituting ANY for an argument will always match that argument's value, but the keyword must still match the keyword actually used. The generic keywords args and kwargs are no exception: if you expect them, they have to appear in the call to match.
In this case, the kwargs keyword in the expected call:
mock_patch.assert_called_with(url=ANY, payload=ANY, callback=ANY, async_run=ANY, kwargs=ANY)
has to be changed to the really used keyword svc_auth:
mock_patch.assert_called_with(url=ANY, payload=ANY, callback=ANY, async_run=ANY, svc_auth=ANY)
Note that the same applies to keyword versus positional arguments, which is a common pitfall. If you have a function foo(bar), then you have to expect the call exactly as it is made, e.g.:
@mock.patch("my_module.foo")
def test_foo(patched):
    foo(42)
    patched.assert_called_with(ANY)      # passes
    patched.assert_called_with(bar=ANY)  # fails

    foo(bar=42)
    patched.assert_called_with(ANY)      # fails
    patched.assert_called_with(bar=ANY)  # passes
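This keyword-matching rule can be verified with a plain Mock, independent of the code in the question:

```python
from unittest.mock import Mock, ANY

m = Mock()
# simulate the real call, which used the keyword svc_auth
m.patch(url="http://127.0.0.1:8080", payload={}, svc_auth="secret")

# ANY matches any value, but only under the keyword that was actually used
m.patch.assert_called_with(url=ANY, payload=ANY, svc_auth=ANY)  # passes

# expecting a keyword named `kwargs` fails: no such keyword was used
try:
    m.patch.assert_called_with(url=ANY, payload=ANY, kwargs=ANY)
    kwargs_matched = True
except AssertionError:
    kwargs_matched = False
```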

Python/Pytest: patched function inside pytest_sessionstart() is not returning the expected value

I'm trying to patch a function in Pytest's pytest_sessionstart(). I was expecting the patched function to return {'SENTRY_DSN': "WRONG"}; however, I'm getting back a <MagicMock ... id='4342393248'> object in the test run.
import pytest
from unittest.mock import patch, Mock

def pytest_sessionstart(session):
    """
    :type request: _pytest.python.SubRequest
    :return:
    """
    mock_my_func = patch('core.my_func')
    mock_my_func.return_value = {'SENTRY_DSN': "WRONG"}
    mock_my_func.__enter__()

def unpatch():
    mock_my_func.__exit__()
This has been correctly answered by @gold_cy, so this is just an addition: as already mentioned, you are setting return_value on the patcher object, not on the mock itself. The easiest way to correct it is to use instead:
from unittest.mock import patch

def pytest_sessionstart(session):
    """
    :type request: _pytest.python.SubRequest
    :return:
    """
    mock_my_func = patch('core.my_func', return_value={'SENTRY_DSN': "WRONG"})
    mock_my_func.start()
This sets the return value to the mock without the need to create a separate Mock object.
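As a runnable illustration of the same pattern (patching os.getcwd as a stand-in target, since core.my_func belongs to the question's own codebase):

```python
import os
from unittest.mock import patch

# create the patcher with the desired return value, then activate it
patcher = patch('os.getcwd', return_value={'SENTRY_DSN': 'WRONG'})
patcher.start()

result = os.getcwd()  # the configured dict, not a MagicMock

patcher.stop()  # restore the real function
```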
The issue here seems to be that you are not properly configuring the Mock object. Given the code you have shown, I am going under the assumption that you are calling some function in the following way:
with foobar() as fb:
    # something happens with fb here
That call essentially evaluates to:
foobar().__enter__()
However, in the patching that you have shown, you have made a few critical mistakes:
You have not defined the return value of __enter__.
Calling a Mock object returns a brand new Mock object, so when you call __enter__ at the end of your function, it returns a brand new object, not the one you originally created.
If I understand correctly, you probably want something like this:
import pytest
from unittest.mock import patch, Mock

def pytest_sessionstart(session):
    """
    :type request: _pytest.python.SubRequest
    :return:
    """
    patcher = patch('core.my_func')
    mock_my_func = patcher.start()
    mock_context = Mock()
    mock_context.return_value = {'SENTRY_DSN': "WRONG"}
    mock_my_func.return_value.__enter__.return_value = mock_context
    # this now returns `mock_context`
    mock_my_func().__enter__()
Now mock_my_func().__enter__() returns mock_context which we can see works as expected when we do the following:
with mock_my_func() as mf:
    print(mf())

>> {'SENTRY_DSN': 'WRONG'}

selenium - can WebDriverWait().until(myFunc) use functions outside of the WebDriver?

Is it possible to call a function from outside the WebDriver in the .until? No matter what I try, I get the exception:
Exception: 'WebDriver' object has no attribute 'verifyObj_tag'
I have a class called 'ad_selenium', and all calls to selenium are encapsulated within the library. The explicitWait function I wrote tries to use another class method in the .until:
def explicitWait(self, tag_name, search_for, element=None, compare='contains', seconds=20):
    element = WebDriverWait(self.__WD, seconds).until(lambda self: \
        self.verifyObj_tag(tag_name, search_for, element=element, compare=compare))
I've tried all sorts of combinations of lambda functions and function variables, like:
def explicitWait(self, tag_name, search_for, element=None, compare='contains', seconds=20):
    x = self.verifyObj_tag
    element = WebDriverWait(self.__WD, seconds).until(lambda x: \
        x(tag_name, search_for, element=element, compare=compare))
Looking at the code inside selenium/webdriver/support/wait.py, it looks like it always passes the webdriver to the method given to until:
def until(self, method, message=''):
    while True:
        try:
            value = method(self._driver)  # <<-- webdriver passed here
            if value:
                return value
        except self._ignored_exceptions:
            pass
Any ideas on how to make that work?
You need to let it pass the driver as an argument; name the lambda's parameter driver so it doesn't shadow self:
element = WebDriverWait(self.__WD, seconds).until(lambda driver: \
    self.verifyObj_tag(tag_name, search_for, element=element, compare=compare))
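Why the first attempt failed: the lambda's parameter was named self, which shadowed the enclosing self, so the WebDriver that until passes in became the lookup target. A plain-Python sketch of the shadowing (no selenium required; PageHelper is a stand-in for the question's ad_selenium class):

```python
class PageHelper:
    # stand-in for the class that owns verifyObj_tag
    def verifyObj_tag(self):
        return True

helper = PageHelper()

# WRONG: the parameter named `self` shadows the enclosing object;
# until() binds the WebDriver here, which has no verifyObj_tag attribute
bad = lambda self: self.verifyObj_tag()
try:
    bad("fake-webdriver")
    bad_raised = False
except AttributeError:
    bad_raised = True

# RIGHT: accept the driver argument but close over the real helper
good = lambda driver: helper.verifyObj_tag()
result = good("fake-webdriver")  # the driver argument is simply ignored
```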

Python - Monkey Patching weird bug

My fake mock class looks like the following:
class FakeResponse:
    method = None  #
    url = None     # static class variables

    def __init__(self, method, url, data):  # , response):
        self.status_code = 200  # always return 200 OK
        FakeResponse.method = method
        FakeResponse.url = url

    @staticmethod
    def check(method, url, values):
        """ checks method and URL.
        """
        print "url fake: ", FakeResponse.url
        assert FakeResponse.method == method
        assert FakeResponse.url == url
I also have a fixture which applies to all the test cases:
@pytest.fixture(autouse=True)
def no_requests(monkeypatch):
    monkeypatch.setattr('haas.cli.do_put',
                        lambda url, data: FakeResponse('PUT', url, data))
    monkeypatch.setattr("haas.cli.do_post",
                        lambda url, data: FakeResponse('POST', url, data))
    monkeypatch.setattr("haas.cli.do_delete",
                        lambda url: FakeResponse('DELETE', url, None))
I am using Py.test to test the code.
Some example test cases are:
class Test:
    # test case passes
    def test_node_connect_network(self):
        cli.node_connect_network('node-99', 'eth0', 'hammernet')
        FakeResponse.check('POST', 'http://abc:5000/node/node-99/nic/eth0/connect_network',
                           {'network': 'hammernet'})

    # test case fails
    def test_port_register(self):
        cli.port_register('1')  # this makes an indirect REST call to the original API
        FakeResponse.check('PUT', 'http://abc:5000/port/1', None)

    # test case fails
    def test_port_delete(self):
        cli.port_delete('port', 1)
        FakeResponse.check('DELETE', 'http://abc:5000/port/1', None)
A sample error message which I get:
method = 'PUT', url = 'http://abc:5000/port/1', values = None

    @staticmethod
    def check(method, url, values):
        """ checks method and URL.
        'values': if None, verifies no data was sent.
                  if list of (name, value) pairs, verifies that each pair is in 'values'
        """
        print "url fake: ", FakeResponse.url
>       assert FakeResponse.method == method
E       assert 'POST' == 'PUT'
E         - POST
E         + PUT

haas/tests/unit/cli_v1.py:54: AssertionError
--------------------------------------------- Captured stdout call -------------------------------------
port_register <port>
Register a <port> on a switch
url fake:  http://abc:5000/node/node-99/nic/eth0/connect_network
--------------------------------------------- Captured stderr call -------------------------------------
Wrong number of arguements. Usage:
Whereas if I call the second test case in the following way, considering that the check function takes a self argument and @staticmethod is not used, then the test case works:
def test_port_register(self):
    cli.port_register('1')
    fp = FakeResponse('PUT', 'http://abc:5000/port/1', None)  # create a FakeResponse class instance
    fp.check('PUT', 'http://abc:5000/port/1', None)  # just call check with the same arguments
Questions:
Are there any side effects of using monkey patching and @staticmethod?
How is the url defined in a previous test function being used in the next function call?
Shouldn't there be scoping of arguments to disallow the above unwanted behavior?
Is there a better way to monkey patch?
Sorry for the long post; I have been trying to resolve this for a week and wanted some perspective from other programmers.
The issue was not having the right signature for one of the functions. It was resolved by changing the argument passed to the monkeypatched function to an empty dictionary {} instead of a 'None' value, which is specific to my code.
The reason I was initially hitting the issue: the call to cli.port_register failed while the parameters were being passed to port_register itself, so the patched function never ran; FakeResponse therefore still carried the class-attribute values from a previous test, and the assert in check compared against that stale state.
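The underlying state leak is easy to reproduce on its own: class attributes persist across tests, so if a test's patched call never fires, check compares against the previous test's values. A minimal sketch (written in Python 3, unlike the Python 2 code above):

```python
class FakeResponse:
    method = None  # class attributes: shared state that outlives each test
    url = None

    def __init__(self, method, url):
        FakeResponse.method = method
        FakeResponse.url = url

# "test 1" runs and its patched call constructs an instance
FakeResponse('POST', 'http://abc:5000/node/node-99/nic/eth0/connect_network')

# "test 2" errors out before its patched call fires, so nothing resets
# the class; check() would now assert against test 1's leftovers
leftover_method = FakeResponse.method
leftover_url = FakeResponse.url
```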

Patterns - Event Dispatcher without else if?

I'm creating a Python wrapper for the Detours library. One piece of the tool is a dispatcher to send all of the hooked API calls to various handlers.
Right now my code looks like this:
if event == 'CreateWindowExW':
    # do something
elif event == 'CreateProcessW':
    # do something
elif ...
This feels ugly. Is there a pattern to create an event dispatcher without my having to create an elif branch for each Windows API function?
One nice way to do this is to define a class which has methods equating to the relevant API function names, plus a dispatch method which dispatches to the correct method. For example:
class ApiDispatcher(object):
    def handle_CreateWindowExW(self):
        # do whatever
        pass

    def handle_CreateProcessW(self):
        # do this one
        pass

    def dispatch(self, event):
        method = getattr(self, 'handle_%s' % event)
        method()
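One possible hardening of this dispatcher (a sketch, not part of the original answer): an unknown event makes getattr raise AttributeError, so supplying a fallback keeps unexpected API calls from crashing the dispatcher:

```python
class ApiDispatcher(object):
    def handle_CreateWindowExW(self):
        return "window handled"

    def handle_unknown(self, event):
        # fallback for events without a dedicated handler
        return "no handler for %s" % event

    def dispatch(self, event):
        # the third argument to getattr supplies a default instead of raising
        method = getattr(self, 'handle_%s' % event, None)
        if method is None:
            return self.handle_unknown(event)
        return method()

dispatcher = ApiDispatcher()
```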
Those ifs will eventually have to go somewhere. Why not do it like this:
handler = get_handler(event)
handler.process()
and in get_handler you'd have your ifs, each returning an object which does its work in its process method.
An alternative would be a map to callables, like this:
def react_to_create_window_exw():
    # do something with event here
    pass

handlers = {
    "CreateWindowExW": react_to_create_window_exw
}
and you would use it like this:
handler = handlers[event]
handler()
This way you would not use any if/else conditions.
You can use the dispatch dict method.
def handle_CreateWindowExW():
    print "CreateWindowExW"
    # do something

events = {
    "CreateWindowExW": handle_CreateWindowExW
}
events[event]()
This way, you can just add events without having to add different if statements.
Usually in such cases, when you have a predefined list of actions to take, use a map, e.g.:
def CreateWindowExW():
    print 'CreateWindowExW'

def CreateProcessW():
    print 'CreateProcessW'

action_map = {
    'CreateWindowExW': CreateWindowExW,
    'CreateProcessW': CreateProcessW
}

for action in ['CreateWindowExW', 'UnkownAction']:
    try:
        action_map[action]()
    except KeyError:
        print action, "Not Found"
Output:
CreateWindowExW
UnkownAction Not Found
So, using a map, you can create a very powerful dispatcher.
I didn't find anything that was as graceful as it could be in this area, so I wrote something that lets you do:
from switcheroo import Switch, default

switch = Switch({
    'foo': lambda x: x + 1,
    default: lambda x: x - 1,
})

>>> switch['foo'](1)
2
>>> switch['bar'](1)
0
There are some other flavours; docs are here, code is on github, package is on pypi.
