I want to mock the read and write functions of the Serial object from pyserial, which is used inside my class, so I can check call arguments and set return values, but it doesn't seem to work.
Currently I have a file 'serialdevice.py', like this:
import serial

class SerialDevice:
    def __init__(self):
        self._serial = serial.Serial(port='someport')

    def readwrite(self, msg):
        self._serial.write(msg)
        return self._serial.read(1024)
Then I have a file 'test_serialdevice.py', like this:
import mock
from serialdevice import SerialDevice

@mock.patch('serialdevice.serial.Serial.read')
@mock.patch('serialdevice.serial.Serial.write')
@mock.patch('serialdevice.serial.Serial')
def test_write(mock_serial, mock_write, mock_read):
    mock_read.return_value = 'hello'
    sd = SerialDevice()
    resp = sd.readwrite('test')
    mock_write.assert_called_once_with('test')
    assert resp == 'hello'
But both asserts fail. Somehow mock_write is not called with the argument 'test', and the readwrite call returns a mock object instead of the 'hello' string. I've also tried @patch('serial.Serial.write') etc. with the same results. Using the return objects of mock_serial, e.g. mock_read = mock_serial.read(), does not seem to work either.
The constructor, i.e. mock_serial, does seem to be called with the expected arguments however.
How can I achieve what I want in this case?
The Python version is 2.7.9.
Apparently you have to treat the return value of the mock_serial object as the instance returned by the patched constructor, so something like
mock_serial.return_value.read.return_value = 'hello'
This works, but I still wonder if there is a better way to do it.
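Put together, a minimal working version of the test along those lines (a sketch; it assumes the serialdevice.py layout shown above):

import mock
from serialdevice import SerialDevice

@mock.patch('serialdevice.serial.Serial')
def test_write(mock_serial):
    # Configure the mock *instance* returned by the patched constructor,
    # not the class mock itself.
    mock_serial.return_value.read.return_value = 'hello'

    sd = SerialDevice()
    resp = sd.readwrite('test')

    mock_serial.return_value.write.assert_called_once_with('test')
    assert resp == 'hello'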
I want to allow two different ways of accessing a particular function. I'm not the author of sample.test_foo, and I am making use of the @service decorator to support both modes.
The method, say sample.test_foo, is decorated with @service; when it is directly imported and accessed, it runs certain code and returns the result.
The other mode I want is for the function to fetch the data from a cache. The way I currently do this is by asking users to set a context variable - cache_mode_modules - to which they are expected to add "sample". The decorator checks whether the parent module is in that variable; if so, it fetches from the cache, otherwise it calls the method. How I fetch from the cache etc. is a bit complex and isn't related to the question.
I really want to change this second mode of access: I want to create an API that takes in the "sample" module and returns me a different callable.
import sample

new_sample = magic_wrapper(sample)

sample.test_foo()      # -> calls the actual function
new_sample.test_foo()  # -> fetches from the cache
The magic_wrapper needs to set some module variable, say _cache_mode = True, which I can use in my decorator to decide how to access the function.
Note that the functionality needs to work even if a package (rather than a single module) is passed to the magic_wrapper.
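To make the intent concrete, here is a heavily simplified sketch of how the decorator could consult such a flag; the real cache logic is more complex and the in-memory dict below is only a stand-in:

import functools

_cache = {}  # stand-in for the real cache backend

def service(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # func.__globals__ is the namespace of the module the function was
        # defined in, so a _cache_mode flag set on a re-executed copy of the
        # module (as magic_wrapper would do) is visible here.
        if func.__globals__.get('_cache_mode', False):
            key = (func.__module__, func.__name__, args)
            if key not in _cache:
                _cache[key] = func(*args, **kwargs)
            return _cache[key]
        return func(*args, **kwargs)
    return wrapper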
I tried the following:
import importlib.util

def test(module):
    SPEC_OS = importlib.util.find_spec(module)
    a = importlib.util.module_from_spec(SPEC_OS)
    SPEC_OS.loader.exec_module(a)
    a._cache_mode = True
    return a
This works for a module but not for a package. Can someone suggest whether the magic_wrapper is possible?
From your comment, I think you are trying to change the behavior of specific functions in the package/module by overriding defaults, not actually change the package/module's core functionality. If that is right: functions are first-class objects in Python, so they can be passed around just like any other object and thus set as a default argument of another function.
Example:
def function_in_module(b, default=False):
    if default:
        return -b
    return b

def magic_wrapper(b, func=function_in_module):
    return func(b, default=True)

print(magic_wrapper(10))       # prints -10
print(function_in_module(10))  # prints 10
The same is true with a function imported from another module. If the above function_in_module actually lived in a module called module, you could make a file magic_wrapper.py:
from module import function_in_module

def magic_wrapper(b, func=function_in_module):
    return func(b, default=True)
and in main.py:
from module import function_in_module
from magic_wrapper import magic_wrapper

print(magic_wrapper(10))       # prints -10
print(function_in_module(10))  # prints 10
I have a Python class in a base_params.py module within an existing codebase, which looks like this:
import datetime

class BaseParams:
    TIMESTAMP = datetime.datetime.now()
    PATH1 = f'/foo1/bar/{TIMESTAMP}/baz'
    PATH2 = f'/foo2/bar/{TIMESTAMP}/baz'
Callers utilize it this way:
from base_params import BaseParams as params
print(params.PATH1)
Now, I want to replace the TIMESTAMP value with one that is dynamically specified at runtime (through e.g. CLI arguments).
Is there a way to do this in Python without requiring my callers to refactor their code in a dramatic way? This is currently confounding me because the body of class BaseParams is executed when the module is imported, so as currently structured there is no opportunity to pass in a dynamic value. And some of my existing code treats this object as "fully ready" at import time; for example, its values are used as function argument defaults:
def some_function(value1, value2=params.PATH1):
    ...
I am wondering if there is some way to work with Python modules and/or abuse Python's __special_methods__ to get this existing code pattern working more or less as-is, without a deeper refactoring of some kind.
My current expectation is "this is not really possible" because of that last example, where the default value is being specified in the function signature. But I thought I should check with the Python wizards to see if there may be a suitably Pythonic way around this.
Yes, you just need to make sure that the command line argument is parsed before the class is defined and before any function that uses the class's attribute as a default argument is defined (but that should already be the case).
(Using sys.argv for the sake of simplicity; it is better to use an actual argument parser such as argparse.)
import datetime
import sys

class BaseParams:
    try:
        TIMESTAMP = sys.argv[1]
    except IndexError:
        TIMESTAMP = datetime.datetime.now()

    PATH1 = f'/foo1/bar/{TIMESTAMP}/baz'
    PATH2 = f'/foo2/bar/{TIMESTAMP}/baz'

print(BaseParams.TIMESTAMP)
$ python main.py dummy-argument-from-cli
outputs
dummy-argument-from-cli
while
$ python main.py
outputs
2021-06-26 02:32:12.882601
You can still totally replace the value of a class attribute after the class has been defined:
BaseParams.TIMESTAMP = <whatever>
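Keep in mind that PATH1 and PATH2 (and any function defaults) were computed from the original TIMESTAMP when the class body ran, so if they should pick up the new value they have to be reassigned as well, for example (new_timestamp is just a placeholder for however you obtain the runtime value):

BaseParams.TIMESTAMP = new_timestamp
BaseParams.PATH1 = f'/foo1/bar/{BaseParams.TIMESTAMP}/baz'
BaseParams.PATH2 = f'/foo2/bar/{BaseParams.TIMESTAMP}/baz'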
There are definitely some more "magic" things you can do though, such as a class factory of some kind. Since Python 3.7 you can also take advantage of module-level __getattr__ to create a kind of factory for the BaseParams class (PEP 562).
In base_params.py you might rename BaseParams to _BaseParams or BaseParamsBase or something like that :)
Then at the module level define:
def __getattr__(attr):
    if attr == 'BaseParams':
        params = ...  # whatever code you need to determine class attributes for BaseParams
        return type('BaseParams', (_BaseParams,), params)
    raise AttributeError(attr)
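Putting the pieces together, a minimal end-to-end sketch could look like this (set_timestamp and _runtime_timestamp are illustrative names, not an existing API):

# base_params.py
import datetime

class _BaseParams:
    pass

_runtime_timestamp = None

def set_timestamp(ts):
    # Called once at startup, e.g. from CLI parsing, before BaseParams is first used.
    global _runtime_timestamp
    _runtime_timestamp = ts

def __getattr__(attr):
    if attr == 'BaseParams':
        ts = _runtime_timestamp or datetime.datetime.now()
        params = {
            'TIMESTAMP': ts,
            'PATH1': f'/foo1/bar/{ts}/baz',
            'PATH2': f'/foo2/bar/{ts}/baz',
        }
        return type('BaseParams', (_BaseParams,), params)
    raise AttributeError(attr)

Callers can keep writing from base_params import BaseParams as params, since a from-import also falls back to the module __getattr__; note that each lookup builds a fresh class, so import it once per module rather than accessing base_params.BaseParams repeatedly.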
So let's say I have this bit of code:
import coolObject
def doSomething():
    x = coolObject()
    x.coolOperation()
Now it's a simple enough method, and as you can see we are using an external library (coolObject).
In unit tests, I have to create a mock of this object that roughly replicates it. Let's call this mock object coolMock.
My question is how would I tell the code when to use coolMock or coolObject? I've looked it up online, and a few people have suggested dependency injection, but I'm not sure I understand it correctly.
Thanks in advance!
def doSomething(cool_object=None):
    cool_object = cool_object or coolObject()
    ...
In your test:
def test_do_something(self):
    cool_mock = mock.create_autospec(coolObject, ...)
    cool_mock.coolOperation.side_effect = ...
    doSomething(cool_object=cool_mock)
    ...
    self.assertEqual(cool_mock.coolOperation.call_count, ...)
As Dan's answer says, one option is to use dependency injection: have the function accept an optional argument and, if it's not passed in, use the default class, so that a test can pass in a mock.
Another option is to use the mock library to replace your coolObject.
Let's say you have a foo.py that looks like
from somewhere.else import coolObject
def doSomething():
    x = coolObject()
    x.coolOperation()
In your test_foo.py you can do:
import mock
from foo import doSomething

def test_thing():
    # The fully-qualified path to the module, class, function, whatever you want to mock.
    path = 'foo.coolObject'
    with mock.patch(path) as m:
        doSomething()
        # Whatever you want to assert here.
        assert m.called
The path you use can include properties on objects, e.g. module1.module2.MyClass.my_class_method. A big gotcha is that you need to mock the object in the module being tested, not where it is defined. In the example above, that means using a path of foo.coolObject and not somewhere.else.coolObject.
I am instantiating the object below every time I call csv in my function. I was just wondering if there is any way I could instantiate the object just once?
I tried to split the return csv out of def csv() into another function, but that failed.
Code instantiating the object
def csv():
    proj = Project.Project(db_name='test', json_file="/home/qingyong/workspace/Project/src/json_files/sys_setup.json")  #, _id='poc_1'
    csv = CSVDatasource(proj, "/home/qingyong/workspace/Project/src/json_files/data_setup.json")
    return csv
Test function
def test_df(csv, df):
    ..............
Is your csv function actually a pytest.fixture? If so, you can change its scope to session so it will only be called once per py.test session.
@pytest.fixture(scope="session")
def csv():
    # rest of code
Of course, the returned data should be immutable so tests can't affect each other.
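For illustration, a test then just declares the fixture as an argument and py.test injects the same cached object for every test in the session:

def test_df(csv):
    # 'csv' is the single CSVDatasource instance built once for the whole session
    assert csv is not None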
You can use a global variable to cache the object:
_csv = None

def csv():
    global _csv
    if _csv is None:
        proj = Project.Project(db_name='test', json_file="/home/qingyong/workspace/Project/src/json_files/sys_setup.json")  #, _id='poc_1'
        _csv = CSVDatasource(proj, "/home/qingyong/workspace/Project/src/json_files/data_setup.json")
    return _csv
Another option is to change the caller to cache the result of csv() in a manner similar to the snippet above.
Note that your "code to call the function" doesn't call the function, it only declares another function that apparently receives the csv function's return value. You didn't show the call that actually calls the function.
You can use a decorator for this if CSVDatasource doesn't have side effects like reading the input line by line.
See Efficient way of having a function only execute once in a loop
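A minimal sketch of such a decorator (hand-rolled here; on Python 3, functools.lru_cache(maxsize=None) achieves the same for an argument-less function):

import functools

def call_once(func):
    # Run func on the first call only and reuse the cached result afterwards.
    cache = {}

    @functools.wraps(func)
    def wrapper():
        if 'result' not in cache:
            cache['result'] = func()
        return cache['result']
    return wrapper

@call_once
def csv():
    proj = Project.Project(db_name='test', json_file="/home/qingyong/workspace/Project/src/json_files/sys_setup.json")
    return CSVDatasource(proj, "/home/qingyong/workspace/Project/src/json_files/data_setup.json")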
You can store the object as an attribute on the function object itself, returning it if it already exists and creating a new one if it doesn't.
def csv():
    if not hasattr(csv, 'obj'):
        proj = Project.Project(db_name='test', json_file="/home/qingyong/workspace/Project/src/json_files/sys_setup.json")  #, _id='poc_1'
        csv.obj = CSVDatasource(proj, "/home/qingyong/workspace/Project/src/json_files/data_setup.json")
    return csv.obj
def mention_notifier(self):
    print self.stat_old

if __name__ == "__main__":
    import sys
    self.stat_old = Set([])
    l = task.LoopingCall(mention_notifier).start(timeout)
This is the basic skeleton of my code. I want stat_old to be a global variable that doesn't get reinitialized every time mention_notifier is called, so I did something like this, but I got a "'self' is not defined" error. Any clues how to go about this?
I don't use Twisted, but from looking at the docs, something like this might work:
def mention_notifier(self):
    print self.stat_old

class Namespace(object):
    pass

if __name__ == "__main__":
    import sys
    self = Namespace()
    self.stat_old = Set([])
    l = task.LoopingCall(mention_notifier, self).start(timeout)
Of course, here the variable name self should probably be changed to something else -- by convention self is typically used inside of classes to reference the instance of the class in a method call ...
It looks like LoopingCall can be given arguments to be passed along to the function (in this case, the Namespace object self is passed). Then inside the function, "self" is modified (as long as you don't do something like self=... inside the function, you're golden -- self.attribute=... is completely fine)
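If more state accumulates than a single attribute, wrapping it in a small class keeps the same idea a bit tidier (a sketch; the interval value is just a placeholder, and the built-in set is used instead of sets.Set):

from twisted.internet import task, reactor

class MentionNotifier(object):
    def __init__(self):
        self.stat_old = set()

    def mention_notifier(self):
        print self.stat_old

if __name__ == "__main__":
    timeout = 60  # seconds between calls, placeholder value
    notifier = MentionNotifier()
    l = task.LoopingCall(notifier.mention_notifier).start(timeout)
    reactor.run()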