Organizing functions into modules based on choice - Python

I have 2 Python modules (e.g. A.py & B.py) that have the same function names (the implementation is different though). A third module (C.py) has a function that requires the functions in A or B, depending on user choice.
A fourth module (D.py) will import either A or B, and then C.
How do I correctly set up the modules and imports?
# module A.py
def write():
    print('A')

# module B.py
def write():
    print('B')

# module C.py
def foo():
    write()

# module D.py (main executed module)
if choiceA:
    import A
    import C
else:
    import B
    import C
C.foo()

This is essentially a basic case of the Strategy Pattern. Importing A or B before C does not make their names visible inside C — each module has its own namespace, so C's bare call to write() fails no matter what D imported first. Instead of doing a double import and implicitly expecting module C to pick up the right module, explicitly pass the appropriate selection in for it to call.
Using modules A and B as before:
# module C.py
def foo(writer):
    writer.write()

# module D.py
import A
import B
import C

if choiceA:  # choiceA / choiceB determined elsewhere
    my_writer = A
elif choiceB:
    my_writer = B
C.foo(my_writer)
This will also continue to work exactly the same way if you choose to define A, B, and C as a class hierarchy instead of modules.
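For instance, a single-file class-based version of the same pattern might look like the sketch below (the class names AWriter and BWriter are illustrative, not from the original post):

```python
# A minimal class-based sketch of the same Strategy Pattern.
# AWriter / BWriter stand in for modules A and B.
class AWriter:
    def write(self):
        print('A')

class BWriter:
    def write(self):
        print('B')

def foo(writer):
    # C.foo doesn't care whether `writer` is a module or an instance,
    # only that it has a write() method (duck typing)
    writer.write()

choiceA = True  # stands in for the user's choice
my_writer = AWriter() if choiceA else BWriter()
foo(my_writer)  # prints 'A'
```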

Not much really.
Module A.py:
def write():
    print('A')

Module B.py:
def write():
    print('B')

Module C.py:
def foo(choice):
    if choice == 'A':
        import A
        A.write()
    elif choice == 'B':
        import B
        B.write()
    else:
        pass  # handle an invalid choice however you like

Module D.py:
import C

choice = input('Enter choice: ')
C.foo(choice)
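A related idiom, sketched here with stdlib stand-ins so it is self-contained, is to resolve the module by name with importlib instead of an if-chain (math and cmath play the roles of A and B):

```python
import importlib

def foo(choice):
    # import the chosen module by name at call time;
    # 'math' and 'cmath' stand in for A and B here
    module = importlib.import_module('math' if choice == 'A' else 'cmath')
    return module.sqrt(9)

print(foo('A'))  # 3.0
print(foo('B'))  # (3+0j)
```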


Python pass variable to imported module at "compilation" time

I'm trying to create an object from a custom class that has a decorator method in it.
Then I need to use that object's decorator on a bunch of functions in a different file.
I found the solution of using builtins online, but it feels a bit hacky and makes my LSP spit out a bunch of errors.
I'm wondering if anyone knows a better solution.
This is my file structure:
main.py
mylib
├── A.py
└── B.py
main.py
import builtins
from mylib.A import A
a = A("This is printing in the decorator")
builtins.FOO = a
import mylib.B as B
B.foo()
B.poo()
A.py
class A:
    def __init__(self, _message):
        self.message = _message

    def our_decorator(self, func):
        def function_wrapper():
            print(self.message)
            func()
        return function_wrapper
B.py
import builtins

@builtins.FOO.our_decorator
def foo():
    print("foo")

@builtins.FOO.our_decorator
def poo():
    print("poo")
I don't want to change the file structure if it can be avoided.
Using builtins to have a magically created decorator is indeed hacky.
An alternative would be to patch the functions in B from main. It is slightly cleaner (and linters should not complain) because the B module no longer has to be aware of the decorator:
main.py:
from mylib.A import A

a = A("This is printing in the decorator")

import mylib.B as B

# decorate the functions of the B module here
B.foo = a.our_decorator(B.foo)
B.poo = a.our_decorator(B.poo)

B.foo()
B.poo()
A.py is unchanged...
B.py:
def foo():
    print("foo")

def poo():
    print("poo")
As only one version of a module exists in a process, you can even use the functions of B from a third module, provided:
- it is imported after the decoration, or
- it only imports the module name (import mylib.B as B) and not the functions directly (from mylib.B import foo).
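The whole patching idea can be condensed into a single file to see it work (same names as above, collapsed into one script):

```python
class A:
    def __init__(self, message):
        self.message = message

    def our_decorator(self, func):
        def function_wrapper():
            print(self.message)
            func()
        return function_wrapper

def foo():
    print("foo")

# patch after definition, just like B.foo = a.our_decorator(B.foo)
a = A("This is printing in the decorator")
foo = a.our_decorator(foo)
foo()  # prints the message, then "foo"
```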

How to change and save variable in different files

I have three python files:
a.py
theVariable = "Hello!"
b.py
import a
a.theVariable = "Change"
c.py
import a
while True:
    print(a.theVariable)
I want c.py to print out "Change" instead of "Hello!". How can I do this, and why won't this work?
These need to be separate files because b.py will be running a Tkinter GUI.
You can use classes to get an instance variable which holds some state
a.py
class Foo:
    def __init__(self):
        self.greeting = "Hello"
And create functions so that you can defer actions on references
b.py
def changer(f):
    f.greeting = "Change"
When you create an instance and pass it to a function, you are passing a reference to an object whose state can change
c.py
from a import Foo
from b import changer

a = Foo()
for x in range(10):  # simple example
    if x > 5:
        changer(a)
    print(a.greeting)
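The key point — mutating a shared object's attribute rather than rebinding a name — can be seen in one file:

```python
class Foo:
    def __init__(self):
        self.greeting = "Hello"

def changer(f):
    # mutate the object's attribute; every reference to the
    # same object observes the change
    f.greeting = "Change"

shared = Foo()
alias = shared         # a second reference to the same object
changer(shared)
print(alias.greeting)  # Change
```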
a.py
theVariable = "Hello!"

def change(variable):
    global theVariable  # rebind the module-level name, not a local
    theVariable = variable

b.py
import a

a.change('Change')

c.py
import a

while True:
    print(a.theVariable)
I've been working with PyQt recently and this worked fine. Setting a setter function in the origin .py file makes it much simpler.

Override function in a module with complex file tree

I have a module module containing 2 functions a and b, split across 2 different files, m1.py and m2.py.
The module's file tree:
module/
    __init__.py
    m1.py
    m2.py
__init__.py contains:
from .m1 import a
from .m2 import b
m1.py contains:
def a():
    print('a')
m2.py contains:
from . import a

def b():
    a()
Now, I want to override the function a in a main.py file, such that the function b uses the new function a. I tried the following:
import module
module.a = lambda: print('c')
module.b()
But it doesn't work; module.b() still prints 'a'.
I found a solution: instead of importing with from . import a, import the module itself with import module. The reason this works is that from . import a copies the original function object into m2's namespace once, at import time, while module.a() looks the attribute up on the module at call time, so it sees the override.
m2.py becomes:
import module

def b():
    module.a()
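The binding semantics can be demonstrated without the package, using an ad-hoc module object (built with types.ModuleType purely for this sketch):

```python
import types

# an ad-hoc module standing in for the `module` package
module = types.ModuleType("module")

def original_a():
    return 'a'

module.a = original_a
captured = module.a      # like `from . import a`: grabs the object once

module.a = lambda: 'c'   # the override done in main.py

print(captured())   # 'a' -- the early-bound reference misses the override
print(module.a())   # 'c' -- attribute lookup at call time sees it
```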

Can I use module name as a 'variable' in Python?

Is it possible to use the name of the module as a variable to send to a function and specify/restrict some object with it? Thanks!
module1.py
def foo(a, b):
    return (a + b) / 2.0
module2.py
def foo(a, b):
    return 2.0 * a*b / (a+b)
file3.py
import module1
import module2
def do(a, b, module_name):
    return module_name.foo(a, b)
Following your question, I assumed that you want to be able to invoke the foo function of any locally imported module, providing the args plus module_name (as a string, as the variable name suggests).
Using globals() instead of sys.modules narrows the search to the names imported in this file.
My solution:
import inspect

import module1
import module2

def do(a, b, module_name):
    # globals(), not locals(): inside the function, locals() would
    # only contain a, b and module_name
    module = globals().get(module_name)
    if inspect.ismodule(module):
        return module.foo(a, b)
    raise NameError('no such module imported with this name')
You can access modules in the form of a dictionary for easy lookup using sys.modules:
import module1
import module2
import sys

def do(a, b, module_name):
    return sys.modules[module_name].foo(a, b)
This way they can be called for example:
do(3, 5, 'module1') # returns 4.0
do(3, 5, 'module2') # returns 3.75
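The same lookup works with any imported modules; here is a self-contained variant where stdlib math and cmath stand in for module1 and module2:

```python
import sys
import math
import cmath

def do(x, module_name):
    # sys.modules maps every imported module's name to the module object
    return sys.modules[module_name].sqrt(x)

print(do(4, 'math'))   # 2.0
print(do(4, 'cmath'))  # (2+0j)
```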

Three python modules, calling one another

I am working on a project where I have three python modules (a.py, b.py and c.py).
Module a calls module b, module b calls module c, and module c calls module a. But the behaviour is very bizarre when it runs.
Here are my three modules:
a.py
print('module a')

def a():
    print('inside a')
    return True

import b
b.b()
b.py
print('module b')

def b():
    print('inside b')
    return True

import c
c.c()
c.py
print('module c')

def c():
    print('inside c')
    return True

import a
a.a()
When I run a.py, the output observed is :
module a
module b
module c
module a
inside b
inside a
inside c
inside b
Whereas the expected behavior is:
module a
module b
module c
module a
inside b
Why does this happen? Is there an alternative way for such an implementation?
This has to do with stack frames and how functions and imports are called.
You start by running a.py.
'module a'
First thing that happens: import b:
'module b'
Within b, c.py is imported:
'module c'
Module c imports a, and we get:
'module a'
b has already been imported from running a.py in the first place, so this import b is a no-op (Python does not re-execute b.py). We then see the effect of the next statement, calling b.b():
inside b
And we return to c.py's frame, where we call a.a():
inside a
c.py has run its course; next we jump back to b.py, where we left off (right after importing), and call c.c():
inside c
Now b.py has finished running as well. Finally we return to the a.py frame from which we ran the program, and call b.b():
inside b
Hope this helps explain this behavior. Here's an example of how you could rectify this problem and get your desired output:
a.py:
print("module a")
import b

def main():
    b.b()

def a():
    print("inside a")

if __name__ == "__main__":
    main()
b.py:
print("module b")
import c

def main():
    c.c()

def b():
    print("inside b")

if __name__ == "__main__":
    main()
c.py:
print("module c")
import a

def main():
    a.a()

def c():
    print("inside c")

if __name__ == "__main__":
    main()
The if __name__ == "__main__": main() guard means main() only runs in the script that was executed directly, not in modules that are merely imported. In this way, you get the desired output:
module a
module b
module c
module a
inside b
I think the key misunderstanding is expecting the modules to stop running after their imports — they don't. They get interrupted mid-script to perform another import, but they return and finish out the remaining statements.
So what ends up happening is (I'm removing the function definitions, just for clarity; indentation shows the nesting):
print('module a')
import b
    >>> importing b
    print('module b')
    import c
        >>> importing c
        print('module c')
        import a
            >>> importing a
            print('module a')
            import b
            >>> doesn't re-import b
            b.b()
        a.a()
    c.c()
b.b()
So to just show the order of commands without the imports and nesting:
print('module a')
print('module b')
print('module c')
print('module a')
b.b()
a.a()
c.c()
b.b()
And this does match your actual output.
