I would like to find all instances in the code where np.random.seed is called (without using grep). In order to set a breakpoint in ipdb, I tried to find the source file with
import inspect; inspect.getsourcefile(np.random.seed)
but it throws a TypeError because np.random.seed is a built-in method (it is implemented in C).
Is it possible to watch any calls to np.random.seed by modifying something in the main source file?
Additionally, it would be useful to patch this method, e.g. to log every call (or drop into a debugger):
def new_random_seed(seed):
    """
    This method should be called instead whenever np.random.seed
    is called in any module that is invoked during the execution of
    the main script
    """
    print("Called with seed {}".format(seed))
    # or: import ipdb; ipdb.set_trace()
    return np.random.seed(seed)
Maybe using a mock framework is the way to go?
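For reference, this is roughly what I have in mind; an untested sketch that plainly monkey-patches the function at the top of the main script, before any other module calls it (keeping a reference to the original to avoid recursion):

import numpy as np

_original_seed = np.random.seed  # keep a reference to the real, C-implemented function

def logging_seed(seed=None):
    print("np.random.seed called with seed {}".format(seed))
    # or: import ipdb; ipdb.set_trace()
    return _original_seed(seed)

np.random.seed = logging_seed  # modules that call np.random.seed(...) now go through the wrapper

One caveat: any module that did from numpy.random import seed before the patch keeps its own reference to the original and bypasses the wrapper. unittest.mock.patch("numpy.random.seed", logging_seed) does essentially the same thing and restores the original when the context manager exits.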
The second question concerns a scenario in which a class B in a library builds on a class A (here B holds an instance of A), and I want to keep the functionality of B but override a method it calls on A, without modifying classes A and B. Probably I should use mocking, but I am not sure about the overhead, so I wrote the following:
# in library
class A():
    def __init__(self, name):
        self.name = name
    def work(self):
        print("{} working".format(self.name))

class B():
    def __init__(self):
        self.A = A("Machine")
    def run_task(self):
        self.A.work()
# in main script
# Cannot change classes A and B, so make a subclass C
import types

class C(B):
    def __init__(self, modified_work):
        super().__init__()
        # MethodType binds modified_work to self.A so it behaves like a normal bound method
        self.A.work = types.MethodType(modified_work, self.A)

b = B()
b.run_task()

modified_work = lambda self: print("{} working faster".format(self.name))
c = C(modified_work)
c.run_task()
The output is:
Machine working
Machine working faster
Is this good style?
This might be a simpler solution to your second question:
# lib.py
class A():
    def work(self):
        print('working')

class B():
    def __init__(self):
        self.a = A()
    def run(self):
        self.a.work()
Then in your code:
import lib
class A(lib.A):
    def work(self):
        print('hardly working')

lib.A = A

b = lib.B()
b.run()
Or:
import lib
class AA(lib.A):
    def work(self):
        print('hardly working')

class BB(lib.B):
    def __init__(self):
        self.a = AA()

b = lib.B()
b.run()

b = BB()
b.run()
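If you only need the override to hold temporarily (in a test, say), the mocking mentioned in the question works too; an untested sketch with unittest.mock.patch.object, using the same lib module:

from unittest.mock import patch
import lib

def faster_work(self):
    print('working faster')

with patch.object(lib.A, 'work', faster_work):
    b = lib.B()
    b.run()   # 'working faster' while the patch is active

b = lib.B()
b.run()       # the original work() is restored once the with-block exits

Unlike the lib.A = A reassignment above, patch.object undoes the change automatically, so nothing else that uses lib.A is affected afterwards.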
I have the following situation:
class A():
    def __init__(self, log=True):
        self.log = log
    def __call__(self):
        if self.log:
            self.log = '\n'  # any non-empty string, to reinitialize the log attribute at each call
        do_things()
        self.log += 'I did things'

class B():
    def __init__(self):
        self.a = A(log=True)
        self.log_master = []
    def __call__(self):
        for i in range(num):
            self.a()
            self.log_master.append(self.a.log)
        self.log_master.append('other things')
        save_to_file(self.log_master)
So I have a class B which is initialized with an instance of class A, and B calls A. When A is called, it initializes a string which serves as a container to log the operations. When the call ends, B reads A's log string and appends it to log_master, along with other things. At the end of everything, log_master is saved to a file. Basically I have two classes, one of which serves as a container for an instance of the other, and both 'collaborate' on writing a log file.
I can feel the horror in this approach: my A code is cluttered with ugly "if self.log: ..." checks. What is the right approach to produce a decent and customizable log file?
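The usual answer is to let the standard logging module do the bookkeeping: A just emits records, and B (or the main script) decides where they go. A minimal sketch, where do_things and num are placeholders taken from your code:

import logging

logger = logging.getLogger(__name__)

class A():
    def __call__(self):
        do_things()
        logger.info('I did things')  # no "if self.log" checks needed

class B():
    def __init__(self, num, logfile='master.log'):
        self.a = A()
        self.num = num
        # the consumer of the logs configures where they end up
        logging.basicConfig(filename=logfile, level=logging.INFO)
    def __call__(self):
        for i in range(self.num):
            self.a()
        logger.info('other things')

Turning logging on and off, changing the format, or writing to several destinations then becomes a handler/level configuration issue in one place, rather than an if self.log flag threaded through every class.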
I'm trying to find a way to dynamically add methods to a class through a decorator.
The decorator I have looks like:
from functools import wraps

def deco(target):
    def decorator(function):
        @wraps(function)
        def wrapper(self, *args, **kwargs):
            return function(*args, id=self.id, **kwargs)
        setattr(target, function.__name__, wrapper)
        return function
    return decorator

class A:
    pass
# in another module
@deco(A)
def compute(id: str):
    return do_compute(id)
# in another module
@deco(A)
def compute2(id: str):
    return do_compute2(id)
# in another module
a = A()
a.compute() # this should work
a.compute2() # this should work
My hope is that the decorator adds the compute() function to class A, so that any object of A has the compute() method.
However, in my test this only works if I explicitly import compute into the module where the object of A is created. I think I'm missing something obvious, but I don't know how to fix it. Appreciate any help!
I think this becomes quite a bit simpler using a decorator implemented as a class:
class deco:
    def __init__(self, cls):
        self.cls = cls
    def __call__(self, f):
        setattr(self.cls, f.__name__, f)
        return self.cls

class A:
    def __init__(self, val):
        self.val = val

@deco(A)
def compute(a_instance):
    print(a_instance.val)

A(1).compute()
A(2).compute()
outputs
1
2
But just because you can do it does not mean you should. This can become a debugging nightmare, and will probably give a hard time to any static code analyser or linter (PyCharm, for example, complains with Unresolved attribute reference 'compute' for class 'A').
Why doesn't it work out of the box when we split it into different modules (more specifically, when compute is defined in another module)?
Assume the following:
a.py
print('importing deco and A')

class deco:
    def __init__(self, cls):
        self.cls = cls
    def __call__(self, f):
        setattr(self.cls, f.__name__, f)
        return self.cls

class A:
    def __init__(self, val):
        self.val = val
b.py
print('defining compute')
from a import A, deco

@deco(A)
def compute(a_instance):
    print(a_instance.val)
main.py
from a import A
print('running main')
A(1).compute()
A(2).compute()
If we execute main.py we get the following:
importing deco and A
running main
Traceback (most recent call last):
A(1).compute()
AttributeError: 'A' object has no attribute 'compute'
Something is missing: defining compute was never printed. Even worse, compute is never defined, let alone bound to A.
Why? Because nothing triggered the execution of b.py. Just because it sits there does not mean it gets executed.
We can force its execution by importing it. Feels kind of abusive to me, but it works, because importing a file has a side effect: it executes every piece of code that is not guarded by if __name__ == '__main__':, much like importing a package executes its __init__.py file.
main.py
from a import A
import b
print('running main')
A(1).compute()
A(2).compute()
outputs
importing deco and A
defining compute
running main
1
2
I have some code that creates instances from a list of classes that is passed to it. This cannot change, as the list of classes passed to it is designed to be dynamic and chosen at runtime through configuration files. Initialising those classes must be done by the code under test, as it depends on factors only the code under test knows how to control (i.e. it will set specific initialisation args). I've tested the code quite extensively by running it and manually trawling through reams of output. Obviously I'm at the point where I need to add some proper unit tests, as I've proven the concept to myself. The following example demonstrates what I am trying to test:
I would like to test the run method of the Foo class defined below:
# foo.py
class Foo:
    def __init__(self, stuff):
        self._stuff = stuff
    def run():
        for thing in self._stuff:
            stuff = stuff()
            stuff.run()
Where one (or more) files would contain the class definitions for stuff to run, for example:
# classes.py
class Abc:
    def run(self):
        print("Abc.run()", self)

class Ced:
    def run(self):
        print("Ced.run()", self)

class Def:
    def run(self):
        print("Def.run()", self)
And finally, an example of how it would tie together:
>>> from foo import Foo
>>> from classes import Abc, Ced, Def
>>> f = Foo([Abc, Ced, Def])
>>> f.run()
Abc.run() <__main__.Abc object at 0x7f7469f9f9a0>
Ced.run() <__main__.Ced object at 0x7f7469f9f9a1>
Def.run() <__main__.Def object at 0x7f7469f9f9a2>
Where the list of stuff to run defines the object classes (NOT instances), as the instances only have a short lifespan; they're created by Foo.run() and die when (or rather, sometime soon after) the function completes. However, I'm finding it very tricky to come up with a clear method to test this code.
I want to prove that the run method of each of the classes in the list of stuff to run was called. However, from the test I do not have visibility of the instance that the run method creates, so how can it be verified? I can't patch the import, as the code under test does not explicitly import the class (after all, it doesn't care what class it is). For example:
# test.py
from foo import Foo
class FakeStuff:
    def run(self):
        self.run_called = True

def test_foo_runs_all_stuff():
    under_test = Foo([FakeStuff])
    under_test.run()
    # How to verify that FakeStuff.run() was called?
    assert <SOMETHING>.run_called, "FakeStuff.run() was not called"
It seems that you correctly realise that you can pass anything into Foo(), so you should be able to log something in FakeStuff.run():
class Foo:
    def __init__(self, stuff):
        self._stuff = stuff
    def run(self):
        for thing in self._stuff:
            stuff = thing()
            stuff.run()

class FakeStuff:
    run_called = 0
    def run(self):
        FakeStuff.run_called += 1

def test_foo_runs_all_stuff():
    under_test = Foo([FakeStuff, FakeStuff])
    under_test.run()
    # verify that FakeStuff.run() was called once per class in the list
    assert FakeStuff.run_called == 2, "FakeStuff.run() was not called"
Note that I have modified your original Foo to what I think you meant. Please correct me if I'm wrong.
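If you would rather not write the fake class by hand, unittest.mock can do the counting for you; an untested sketch where a MagicMock stands in for the class and its return_value is the instance that Foo.run() creates:

from unittest.mock import MagicMock
from foo import Foo

def test_foo_runs_all_stuff_with_mock():
    fake_cls = MagicMock()                # stands in for a class in the list
    under_test = Foo([fake_cls])
    under_test.run()
    fake_cls.assert_called_once_with()                   # Foo instantiated the "class"
    fake_cls.return_value.run.assert_called_once_with()  # ...and called run() on the instance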
Why do I get the error "name 'listed' is not defined" in the following code snippet?
import socket,select
from threading import *
import time
neighbours=[]
def neighbourfuncall():
    print('In neighbours')

class pyserver(Thread):
    dictn={}
    HOST=socket.gethostname()
    PORT=8888
    buf=1024
    ADDR=(HOST,PORT)
    listed=[]
    sock=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
    sock.bind(ADDR)
    sock.listen(30)
    def __init__(self):
        self.interval=6
        listed.append(sock)
        thread=threading.Thread(target=neighbourfuncall,args=())
        thread.daemon=True
        thread.start()
    def run(self):
        while True:
            sel,out,spl=select.select(listed,[],[],15.0)
            for s in sel:
                if s==sock:
                    client,address=sock.accept()
                    listed.append(client)
                    dest=client.recv(buf)
                    dictn[client]=dest
                else:
                    pass
serv=pyserver()
serv.run()
You have to access listed with the following syntax:
pyserver.listed = ["I need to study more Python!"]
since it's a static class variable.
As you're inside a class, you need to write self.listed.append(smth): class variables must be accessed through self or the class name, not as bare names.
By the way, socket operations should be done in __init__(). You'd better do something like this:
def __init__(self):
    self.smth = socket.socket()
    self.other = []
    self.smth.DoSomething()
def Hello(self):
    self.other.append("Hello")  # just an example
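Applied to the class in the question, a minimal sketch of the fix for __init__ (only the attribute access and the thread creation change; everything else stays as you wrote it):

class pyserver(Thread):
    listed = []
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # ... other class attributes as before ...

    def __init__(self):
        super().__init__()                     # initialise the Thread base class
        self.interval = 6
        pyserver.listed.append(pyserver.sock)  # class attributes need the class (or self) prefix
        thread = Thread(target=neighbourfuncall, args=())  # Thread is already star-imported
        thread.daemon = True
        thread.start()

The same prefixing applies inside run(): use self.listed (or pyserver.listed), self.sock, self.buf and self.dictn instead of the bare names.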
File engine.py:
class Engine(object):
    def __init__(self, variable):
        self.variable = variable

class Event(object):
    def process(self):
        variable = '123'  # this should be the value of engine.variable
In a Python session:
>>> from engine import Engine, Event
>>> engine = Engine('123')
>>> e = Event()
>>> e.process()
What's the best way to accomplish this? Because of limitations with the Event class (it's actually a subclass of a third-party library that I'm splicing new functionality into) I can't do something like e = Event(engine).
In-depth explanation:
Why am I not using e = Event(engine)?
Because Event is actually a subclass of a third-party library. Additionally, process() is an internal method. So the class actually looks like this:
class Event(third_party_library_Event):
    def __init__(self, *args, **kwargs):
        super(Event, self).__init__(*args, **kwargs)
    def _process(self, *args, **kwargs):
        variable = engine.variable
        # more of my functionality here
        super(Event, self)._process(*args, **kwargs)
My new module also has to run seamlessly with existing code that already uses the Event class, so I can't pass the engine object to each _process() call or to the __init__ method either.
functools.partial might help:
# UNTESTED
import functools

class Engine(object):
    def __init__(self, variable):
        self.variable = variable

class Event(object):
    def __init__(self, engine):
        super().__init__()
        self.engine = engine
    def process(self):
        print(self.engine.variable)

engine = Engine('123')
Event = functools.partial(Event, engine)
ThirdPartyApiThatNeedsAnEventClass(Event)
Now, when the 3rd-party library creates an Event, it is automatically passed engine.
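A quick sanity check of the idea, assuming the classes above:

e = Event()    # the partial supplies engine as the first argument
e.process()    # prints '123'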
"Because of limitations with the Event class (it's actually a subclass
of a third-party library that I'm splicing new functionality into) I
can't do something like e = Event(engine)."
It appears that you're concerned that Event is inheriting some other class and you are therefore unable to alter the constructor method for the class.
Your question is similar to this other one. Fortunately, the super().__init__() method does this for you.
Consider the following example:
>>> class C(object):
...     def __init__(self):
...         self.b = 1
...
>>> class D(C):
...     def __init__(self):
...         super().__init__()
...         self.a = 1
...
>>> d = D()
>>> d.a
1
>>> d.b  # This works because of the call to super's init
1
Why not pass the variable into the process function? You said the class's constructor can't be changed, but it seems like you are defining process. Just make it:
def process(self, engine):
    variable = engine.variable
    <do stuff>
or
def process(self, variable):
    <do stuff>