__del__ method in a Python class gives two different outputs in different editors
class test:
    def __init__(self):
        print("init")

    def __del__(self):
        print("del")

a = test()
Output in VS Code:
init
del
Output in Jupyter:
init
When you run the Python script in a terminal (which is essentially what VS Code does), the script terminates after the last line has been executed. When a script terminates, the destructor of the test instance is called. The destructor is defined by the __del__() method of a class.
In a Jupyter notebook, the script does not terminate; the kernel stays alive and waits for your next code cell (apologies if the terminology is slightly off here). The __del__() method is not called.
This is explicitly stated in the Python Language Reference. Data model / Special method names / Basic customization says (emphasis mine):
object.__del__(self)
Called when the instance is about to be destroyed...
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.
That means different environments may behave differently with respect to when, or whether, __del__ is called.
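As an illustration (not part of the original answer): in CPython you can trigger the destructor deterministically in both environments by dropping the last reference yourself instead of relying on interpreter shutdown, assuming nothing else still holds a reference to the object.

a = test()   # prints "init"
del a        # removes the last reference; CPython runs __del__ here and prints "del"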
Introduction:
We know that, in Python, a function is an object. To some extent, we can look at it as a value that can be called and that returns a result, i.e. a callable. We also know that Python classes are callables: when we call them, we are actually creating objects that are their instances.
My implementation: In my current task, I have defined the following class with two methods:
class SomeClass:
    def some_method_1(self):
        some_code

    def some_method_2(self):
        some_code
        self.some_method_1()
        some_code
As the code above shows, some_method_2 uses some_method_1 inside it.
Now I want to test some_method_2. To do so, I need to replace some_method_1 with a mock object and have the mock return what I define:
from unittest.mock import Mock
import unittest

class TestSomeClass(unittest.TestCase):
    def test_some_method_2(self):
        some_object = SomeClass()
        some_object.some_method_1 = Mock(return_value=foo)
        self.assertEqual(an_expected_value, some_object.some_method_2())
This works fine: the script runs without error and the test passes.
A quick intro to mypy: mypy is a Python static type checker that checks whether a script is consistent with respect to the types of its variables. It does this using information such as type annotations, variable assignments, and library stub files, and the whole process happens without interpreting or running the code.
What is the Problem?
When I check my code with mypy, it reports an error for this line:
some_object.some_method_1 = Mock(return_value=foo)
The error indicates that I am not allowed to assign to a method (a callable) in Python. Mypy sometimes reports errors that are not real problems, and I wonder whether this is one of those cases, especially since I can run my code without any problem.
Now, my question is: have I done something wrong, or is the mypy report wrong? If I have done something wrong, how can I implement the same scenario correctly?
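(For illustration only, not an answer from the thread: one alternative that static type checkers tend to be happier with is patching the attribute with unittest.mock.patch.object instead of assigning a Mock to it directly. The names SomeClass, foo, and an_expected_value are the placeholders from the question.)

from unittest.mock import patch

def test_some_method_2_with_patch(self):
    some_object = SomeClass()
    # patch.object swaps the attribute for the duration of the block and
    # restores the original afterwards.
    with patch.object(some_object, "some_method_1", return_value=foo):
        self.assertEqual(an_expected_value, some_object.some_method_2())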
Using Django 1.10 and Python 3.5.1.
I'm trying to mock the call_command function to throw an exception. The problem is that once the mock has been given a side_effect, it seems to keep it for other tests as well. What am I doing wrong, or how can I revert the side_effect on that function?
In this example, after running one of the tests, every test that runs afterwards throws the same exception, even when it is not supposed to throw an exception in that test.
def test_run_migrations_raise_exception(self):
    with mock.patch('django.core.management.call_command', return_value=None, side_effect=Exception('e message')):
        self.check_migrations_called(MigrationTracker.objects.all(), data_migrations_settings_in_db)
        call_command('run_data_migrations')
        self.check_migrations_called(MigrationTracker.objects.all(), data_migrations_settings_in_db)

def test_run_migrations_raise_flow_exception(self):
    with mock.patch('django.core.management.call_command', return_value=None, side_effect=FlowException(500, 'fe message', {'a': 1})):
        self.check_migrations_called(MigrationTracker.objects.all(), data_migrations_settings_in_db)
        call_command('run_data_migrations')
        self.check_migrations_called(MigrationTracker.objects.all(), data_migrations_settings_in_db)
You should not patch a function that lives in your module-local namespace (i.e. Python's "global" namespace, which is actually per-module).
When in Python you do
from module.that import this
this becomes a variable in the module that contains the import statement. Any change to "module.that.this" will affect what that name refers to in the other module, but the local name this will still refer to the original object.
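A minimal sketch of that binding behavior (the module names here are purely illustrative):

# helpers.py  (illustrative)
from math import sqrt            # binds the name 'sqrt' inside helpers

# main.py
import math
import helpers

math.sqrt = lambda x: "patched"  # rebinds the name inside the math module only
print(math.sqrt(4))              # "patched"
print(helpers.sqrt(4))           # 2.0 -- helpers.sqrt still points at the original function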
Perhaps your code is not exactly as you show us, or maybe mock.patch can figure out that the module-local call_command points to django.core.management.call_command in the other module when it applies the patch, but not when reversing it. The fact is that your module-local name call_command is being changed.
You can fix that by simply changing your code so that it does not bind a module variable directly to the function you want to patch:
from django.core import management

def test_run_migrations_raise_exception(self):
    with mock.patch('django.core.management.call_command', return_value=None, side_effect=Exception('e message')):
        self.check_migrations_called(MigrationTracker.objects.all(), data_migrations_settings_in_db)
        management.call_command('run_data_migrations')
        self.check_migrations_called(MigrationTracker.objects.all(), data_migrations_settings_in_db)
I hope you can understand that and solve the problem. Now, that said, this use of mock makes no sense at all: the idea of using a mock is that some callable used indirectly by the code you call within the patched block does not have its original effect, so the intermediate code can run and be tested. You are calling the mock object directly, so none of the original code runs: calling call_command('run_data_migrations') runs no code from your code base at all, and thus there is nothing there to test. It just calls the mocked instance, and it will not change the status of anything that could be detected with check_migrations_called.
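For illustration only (the module and function names below are hypothetical, not from the question): the mock earns its keep when the patched call_command is reached indirectly by the code under test, for example:

def test_handles_call_command_failure(self):
    # Patch call_command where the code under test looks it up, then call your
    # own code; the indirect call inside it is what hits the mock.
    with mock.patch('myapp.migrations_runner.call_command',
                    side_effect=Exception('e message')):
        with self.assertRaises(Exception):
            run_data_migrations()  # hypothetical helper that calls call_command internally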
I have an architecture where I use a wrapper for calling functions from a package module. Inside the module there is a function that calls three others. I need to override one of them at run time. Specifically, I need to change the parameters that are forwarded to another set of functions being called.
Here is a sample case:
a.py

import b_wrapper as wrapper

def foo():
    if wrapper.bar(parameter):
        """some more code goes here"""
b_wrapper.py

import some.package.module as module

def bar(parameter):
    return module.baz(veryImportantParameter, parameter)
file.py

def functionThree(par):  # needs to be overridden
    """more functions called, forwarding par as a parameter"""

def baz(veryImportantParameter, parameter):
    functionOne(veryImportantParameter, otherParameters)
    functionTwo(veryImportantParameter, someMoreParameters)
    functionThree(veryImportantParameter, parameterToChange, evenMoreParameters)
What I tried was overriding the function in the wrapper file, but it didn't work out, as other functions interfere with it. I used this post as a reference.
I'm not quite sure this is doable, because of the specific functions that are called inside this module; I'm also looking for alternatives that won't require overriding part of the module.
Edit: mixing up arguments and parameters is intentional, for demonstration purposes only.
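(Illustrative sketch only, not from an accepted answer; the names follow the question. Since baz() looks functionThree up in its own module's namespace at call time, one run-time option is to rebind that attribute on the imported module object before going through the wrapper.)

import some.package.module as module

_original_function_three = module.functionThree

def _patched_function_three(veryImportantParameter, parameterToChange, *rest):
    # forward a replacement value in place of parameterToChange (placeholder below)
    return _original_function_three(veryImportantParameter, replacementValue, *rest)

module.functionThree = _patched_function_three  # baz() will now call the patched version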
I started a Python project and attempted to make it object-oriented.
I used a script to call functions from classes in the following way:
class Object():
    def function(input):
        print(input)
The command Object.Object.function("example") used to work fine.
I had to reinstall PyCharm, and now when running the same code I get an error about not passing enough arguments.
This can be solved by changing the call to Object.Object().function("example")
and the function definition to def function(a, input):
where the variable a is never used. This, however, causes new problems when using libraries.
How can I use the previous configuration?
Object.Object.function("example") and Object.Object().function("example") are different beasts entirely. The first invokes the method function on the class Object.Object, while in the latter, Object.Object() creates an instance of type Object.Object and invokes function on that instance (which fails because you must provide the instance itself as the first parameter of the method). It sounds like you are trying to make something like a staticmethod:
class A:
    @staticmethod
    def f(input):
        print(input)
for which both A.f and A().f will act as print.
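A quick illustrative check of both call forms:

A.f("example")    # prints "example"; called on the class
A().f("example")  # prints "example"; the staticmethod ignores the instance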
Is it possible to call a function without first fully defining it? When attempting this I get the error: "function_name is not defined". I am coming from a C++ background so this issue stumps me.
Defining the function beforehand works:

def Kerma():
    return "energy / mass"

print(Kerma())
However, attempting to call the function without first defining it gives trouble:
print(Kerma())

def Kerma():
    return "energy / mass"

In C++, you can define a function after the point where it is called, as long as you place its declaration (header) before the call.
Am I missing something here?
One way that is sort of idiomatic in Python is writing:
def main():
    print(Kerma())

def Kerma():
    return "energy / mass"

if __name__ == '__main__':
    main()
This allows you to write your code in whatever order you like, as long as you call the main function at the end.
When a Python module (.py file) is run, the top level statements in it are executed in the order they appear, from top to bottom (beginning to end). This means you can't reference something until you've defined it. For example the following will generate the error shown:
c = a + b # -> NameError: name 'a' is not defined
a = 13
b = 17
Unlike in many other languages, def and class statements in Python are executable, not merely declarative, so names are not bound until those statements actually run. This is why your second example has trouble: you are referencing the Kerma() function before its def statement has executed, its body has been processed, and the resulting function object has been bound to the function's name, so the name is not defined at that point in the script.
Programs in languages like C++ are usually compiled before being run, and during this compilation stage the entire program and any #include files it refers to are read and processed all at once. Unlike Python, such a language has declarative statements that allow the name and calling signature of a function (or the static type of a variable) to be declared, but not defined, before use. That way, when the compiler encounters the name, it has enough information to check its usage, which primarily entails type checking and type conversions, none of which requires the actual contents or code bodies to have been defined yet.
This isn't possible in Python, but quite frankly you will soon find you don't need it at all. The Pythonic way to write code is to divide your program into modules that define classes and functions, plus a single "main module" that imports all the others and runs the program.
For simple throw-away scripts get used to placing the "executable portion" at the end, or better yet, learn to use an interactive Python shell.
If you are willing to be like C++ and keep everything inside functions, you can call the first function from the bottom of the file, like this:
def main():
    print("I'm in main")
    # calling a() although it is defined further down
    a()

def b():
    print("I'm in b")

def a():
    print("I'm in a")
    b()

main()
That way Python first 'reads' the whole file and only then starts executing it.
Python is a dynamic programming language, and the interpreter always uses the state of names (variables, functions, ...) as they are at the moment they are called. You could even define a function differently in different if-blocks and have it behave differently each time it is called. That's why you have to define functions before calling them.
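A small illustrative sketch of that dynamic behavior (the condition here is arbitrary):

import random

# The name 'greet' is only bound once one of these branches executes,
# and which definition it gets depends on a run-time condition.
if random.random() < 0.5:
    def greet():
        return "heads"
else:
    def greet():
        return "tails"

print(greet())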