Initialize module only once on imports from multiple files - python

I have multiple python scripts, like this:
utils.py
module1.py
module2.py
module3.py
main.py
...
In utils.py:
import some_lib

role_id = "some_key_1"
secret_id = "some_key_2"

def initialize():
    real_key = some_web_request(role_id, secret_id)
    some_lib.init(real_key)

# define some other functions using some_lib
In module1.py, module2.py, module3.py:
import utils
# define some other functions using functions in utils.py
In main.py:
import module1
import module2
import module3
# do something
I want to run utils.initialize() only once to initialize some_lib. However, due to how import works in Python, if I put initialize() at module level in utils.py then it will run every time an import statement runs.
I can implement this like this, in utils.py
import some_lib

initialized = False

def initialize():
    ...  # same as above

if not initialized:
    initialize()
    initialized = True
Is this good practice? Is there a better or more elegant way to implement this?
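For what it's worth, because Python caches imported modules in sys.modules, the body of utils.py already runs only once per process, so a plain initialize() call at the bottom of the module is usually enough. If you want the function itself to be idempotent without a hand-rolled flag, functools.lru_cache can do the bookkeeping. A minimal sketch (the question's some_lib and some_web_request are replaced by a call counter so the single execution is observable):

```python
import functools

@functools.lru_cache(maxsize=None)
def initialize():
    # Count how many times the body actually runs; the real setup
    # (some_web_request + some_lib.init) would go here instead.
    initialize.calls = getattr(initialize, "calls", 0) + 1
    return True

initialize()
initialize()
initialize()
print(initialize.calls)  # → 1
```

Repeated calls hit the cache, so the expensive body executes exactly once per process, which is the same guarantee the manual `initialized` flag provides.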

Pytest path of mock.patch() for third party package

Why does the 1st case succeed but the 2nd case fail with AssertionError: Expected 'Jenkins' to have been called?
util.py
from jenkinsapi.jenkins import Jenkins
import os

class Util:
    @staticmethod
    def rm(filename):
        os.remove(filename)

    @staticmethod
    def get_jenkins_instance():
        Jenkins(
            'host',
            username='username',
            password='password',
            ssl_verify=False,
            lazy=True)
test_util.py
import pytest
from util import Util

def test_util_remove(mocker):
    m = mocker.patch('os.remove')
    Util.rm('file')
    m.assert_called()

def test_util_get_instance(mocker):
    m = mocker.patch('jenkinsapi.jenkins.Jenkins')
    Util.get_jenkins_instance()
    m.assert_called()
Two files are in the same root folder.
It's not entirely clear what the difference is between Python's import and from ... import ....
But if you use from ... import ..., the mock looks like this:
util.py
from jenkinsapi.jenkins import Jenkins # <-- difference A

class Util:
    @staticmethod
    def get_jenkins_instance():
        Jenkins(
            'host',
            username='username',
            password='password',
            ssl_verify=False,
            lazy=True)
test_util.py
import pytest
from util import Util

def test_util_get_instance(mocker):
    m = mocker.patch('util.Jenkins') # <-- difference B
    Util.get_jenkins_instance()
    m.assert_called()
If you use import directly, the mock looks like this:
util.py
import jenkinsapi.jenkins # <-- difference A

class Util:
    @staticmethod
    def get_jenkins_instance():
        jenkinsapi.jenkins.Jenkins(
            'host',
            username='username',
            password='password',
            ssl_verify=False,
            lazy=True)
test_util.py
import pytest
from util import Util

def test_util_get_instance(mocker):
    m = mocker.patch('jenkinsapi.jenkins.Jenkins') # <-- difference B
    Util.get_jenkins_instance()
    m.assert_called()
========== Edit (Aug 5, 2022) ==========
Here's the reason why it's patched like this.
a.py
    -> Defines SomeClass
b.py
    -> from a import SomeClass
    -> some_function instantiates SomeClass
Now we want to test some_function, but we want to mock out SomeClass using patch(). The problem is that when we import module b, which we will have to do, it imports SomeClass from module a. If we use patch() to mock out a.SomeClass, it will have no effect on our test; module b already has a reference to the real SomeClass, and it looks like our patching had no effect.
The key is to patch SomeClass where it is used (or where it is looked up). In this case, some_function will actually look up SomeClass in module b, where we have imported it.
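The where-to-patch rule can be demonstrated end to end with only the standard library. In this sketch, the modules "a" and "b" from the layout above are built in memory (the names and SomeClass.ping are made up for illustration):

```python
import sys
import types
from unittest import mock

# Build throwaway modules "a" and "b" mirroring the layout above.
a = types.ModuleType("a")
exec("class SomeClass:\n    def ping(self):\n        return 'real'", a.__dict__)
sys.modules["a"] = a

b = types.ModuleType("b")
exec(
    "from a import SomeClass\n"       # b copies the binding at import time
    "def some_function():\n"
    "    return SomeClass().ping()",
    b.__dict__,
)
sys.modules["b"] = b

# Patching a.SomeClass has no effect: b already holds its own reference.
with mock.patch("a.SomeClass"):
    unaffected = b.some_function()

# Patching b.SomeClass (where it is looked up) intercepts the call.
with mock.patch("b.SomeClass") as m:
    m.return_value.ping.return_value = "mocked"
    intercepted = b.some_function()

print(unaffected, intercepted)  # → real mocked
```

The first patch swaps the attribute on module a, but some_function never consults module a again; the second patch swaps the name some_function actually reads, so the mock is what gets called.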

Override function in a module with complex file tree

I have a module module containing two functions a and b split into two different files, m1.py and m2.py.
The module's file tree:
module/
    __init__.py
    m1.py
    m2.py
__init__.py contains:
from .m1 import a
from .m2 import b
m1.py contains:
def a():
    print('a')
m2.py contains:
from . import a

def b():
    a()
Now, I want to override the function a in a main.py file, such that the function b uses the new function a. I tried the following:
import module

module.a = lambda: print('c')
module.b()
But it doesn't work; module.b() still prints a.
I found a solution, which consists in importing with import module instead of from . import a.
m2.py becomes:
import module

def b():
    module.a()
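The reason this works is binding time: from . import a copies the function object into m2's namespace once, while module.a() looks the name up on the module at every call. A small sketch (the in-memory module "mod" stands in for the package above):

```python
import types

# Stand-in for the package: a module whose a() prints... well, returns 'a'.
mod = types.ModuleType("mod")
exec("def a():\n    return 'a'\n", mod.__dict__)

a = mod.a            # early binding, like `from . import a` in m2.py

def b_early():
    return a()       # uses the reference captured at import time

def b_late():
    return mod.a()   # looks `a` up on the module at every call

mod.a = lambda: 'c'  # the override performed in main.py

print(b_early())  # → a
print(b_late())   # → c
```

Only the late-binding version sees the override, which is exactly why switching m2.py to module.a() fixes the problem.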

Put function into a new module which contains objects from the main module? Python

Is there a way to put a function into a new module which calls functions/objects from the main module?
Main.py
from xxx import some
from yyy import hello
from functions import function_y

object = some.thing()

def function_x():
    dostuff...
    hello(x, y)

function_y()
functions.py
def function_y():
    dostuff...
    object.bye() # object from Main.py
    dostuff...
    function_x() # function from Main.py
If I do it like that, I get:
NameError: name 'object' is not defined
But if I try to import Main.py into functions.py, it gets worse.
Yes, as you can see, I am a newb :-) sorry for that. I just want to structure my code into pieces, another *.py for every topic. But maybe I am wrong, and I have to put all code that calls something from the main namespace into Main.py? Unless I can pass the objects as parameters?
You can do it the way you want if you import Main.py from within function_y():
functions.py
def function_y():
    import main
    dostuff...
    main.object.bye() # object from Main.py
    dostuff...
    main.function_x() # function from Main.py
I'd suggest the following structure.
functions.py
from yyy import hello

def function_x():
    dostuff...
    hello(x, y)

def function_y(object):
    dostuff...
    object.bye()
    dostuff...
    function_x()
main.py
from xxx import some
from functions import function_x, function_y

object = some.thing()
function_y(object)
Or even pack function_x and function_y into separate modules. Hard to decide without understanding the purposes.
It looks like you should be able to structure your code like this:
main.py
from xxx import some
import functions

object = some.thing()
functions.function_y(object)
functions.py
from yyy import hello

def function_x():
    dostuff...
    hello(x, y)

def function_y(obj):
    dostuff...
    obj.bye() # object passed as parameter
    dostuff...
    function_x() # function now local to functions.py
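The parameter-passing approach can be sketched as a runnable example; Thing and its bye() method here are made-up stand-ins for the question's placeholder object:

```python
# functions.py-style code: the shared object arrives as a parameter
# instead of being reached for in the main module's namespace.
def function_x():
    return "hello"

def function_y(obj):
    obj.bye()            # works on whatever object the caller passes in
    return function_x()

# main.py-style driver with a dummy object exposing bye().
class Thing:
    def __init__(self):
        self.closed = False

    def bye(self):
        self.closed = True

thing = Thing()
result = function_y(thing)
print(result, thing.closed)  # → hello True
```

Because function_y depends only on its argument, functions.py no longer needs any import of the main module, which removes the circular dependency entirely.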

Python - Intra-package importing when package modules are sometimes used as standalone?

Sorry if this has already been answered using terminology I don't know to search for.
I have one project:
project1/
    class1.py
    class2.py
class2 imports some things from class1, but each has its own if __name__ == '__main__' block that uses its respective class, and I run both frequently. But then I have a second project which creates a subclass of each of the classes from project1. So I would like project1 to be a package, so that I can import it into project2 nicely:
project2/
    project1/
        __init__.py
        class1.py
        class2.py
    subclass1.py
    subclass2.py
However, I'm having trouble with the importing with this. If I make project1 a package then inside class2.py I would want to import class1.py code using from project1.class1 import class1. This makes project2 code run correctly. But now when I'm trying to use project1 not as a package, but just running code from directly within that directory, the project1 code fails (since it doesn't know what project1 is). If I set it up for project1 to work directly within that directory (i.e. the import in class2 is from class1 import Class1), then this import fails when trying to use project1 as a package from project2.
Is there a way to have it both ways (use project1 both as a package and not as a package)? If there is a way, is it a discouraged way and I should be restructuring my code anyway? Other suggestions on how I should be handling this? Thanks!
EDIT
Just to clarify, the problem arises because subclass2 imports class2, which in turn imports class1. Depending on which way class2 imports class1, the import will fail from project2 or from project1, because one sees project1 as a package while the other sees it as the working directory.
EDIT 2
I'm using Python 3.5. Apparently this works in Python 2, but not in my current version of Python.
EDIT 2: Added code to class2.py to attach the parent directory to the PYTHONPATH to comply with how Python3 module imports work.
import sys
import os
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
Removed relative import of Class1.
Folder structure:
project2
- class3.py
- project1
- __init__.py
- class1.py
- class2.py
project2/project1/class1.py
class Class1(object):
    def __init__(self):
        super(Class1, self).__init__()
        self.name = "DAVE!"

    def printname(self):
        print(self.name)

def run():
    thingamy = Class1()
    thingamy.printname()

if __name__ == "__main__":
    run()
project2/project1/class2.py
import sys
import os
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

from class1 import Class1

class Class2(Class1):
    def childMethod(self):
        print('Calling child method')

def run():
    thingamy = Class2()
    thingamy.printname()
    thingamy.childMethod()

if __name__ == "__main__":
    run()
project2/class3.py
from project1.class2 import Class2
from project1.class1 import Class1

class Class3(Class2):
    def anotherChildMethod(self):
        print('Calling another child method')

def run():
    thingamy = Class3()
    thingamy.printname()
    thingamy.anotherChildMethod()

if __name__ == "__main__":
    run()
With this setup each of class1, 2 and 3 can be run as standalone scripts.
You could run class2.py from inside the project2 folder, i.e. with the current working directory set to the project2 folder:
user@host:.../project2$ python project1/class2.py
On windows that would look like this:
C:\...project2> python project1/class2.py
Alternatively you could modify the python path inside of class2.py:
import sys
sys.path.append(".../project2")
from project1.class1 import Class1
Or modify the PYTHONPATH environment variable similarly.
To be able to extend your project and, for example, import something in subclass1.py from subclass2.py, you should consider always starting the import paths with project2, for example in class2.py:
from project2.project1.class1 import Class1
Of course you would need to adjust the methods I just showed to match the new path.

Python module importing

I'm working on a GUI app. Let's say that I have a file main.py which is in the root directory and I want to import widgets like this:
from widgets import FancyWindow, ColorPicker
# ...
And my app is structured like so:
| main.py
+ widgets/
| __init__.py
| fancy_window.py
| color_picker.py
...
IIRC, I would have to import classes from the other modules like so:
from widgets.color_picker import ColorPicker
from widgets.fancy_window import FancyWindow
So my question is if there is something I can do in widgets/__init__.py to make it so that I can do imports in the way I want to?
You actually have it there already, just make your __init__.py have those two lines:
from widgets.color_picker import ColorPicker
from widgets.fancy_window import FancyWindow
Anything that you import (and any other symbols you define) in __init__.py will then be available.
E.g. if you put:
apple = 5
in __init__.py you could also do: from widgets import apple. No magic :)
In __init__.py you can import the modules, then export them:
from .fancy_window import FancyWindow
from .color_picker import ColorPicker

__all__ = ['FancyWindow', 'ColorPicker']
Now, you can do the following:
from widgets import * # Only imports FancyWindow and ColorPicker
from widgets import FancyWindow
from widgets import ColorPicker
import widgets
color_picker = widgets.ColorPicker()
The best way would be to use __init__.py to perform setups needed for the package.
# __init__.py
from .fancy_window import FancyWindow
from .color_picker import ColorPicker
Then in main.py you can perform imports directly.
# main.py
from widgets import FancyWindow, ColorPicker
A more convenient way to package everything in __init__.py
# __init__.py
from .fancy_window import *
from .color_picker import *
Yes, by adding the public names of the other modules to __all__ in __init__.py.
see https://docs.python.org/3/reference/simple_stmts.html#import
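The __init__.py re-export can be verified end to end. This sketch generates the widgets/ package in a temporary directory (the class bodies are empty placeholders) and then performs the import the question asks for:

```python
import os
import sys
import tempfile

# Build widgets/ on disk so the package import is the real thing.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "widgets")
os.makedirs(pkg)

with open(os.path.join(pkg, "fancy_window.py"), "w") as f:
    f.write("class FancyWindow:\n    pass\n")
with open(os.path.join(pkg, "color_picker.py"), "w") as f:
    f.write("class ColorPicker:\n    pass\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write(
        "from .fancy_window import FancyWindow\n"
        "from .color_picker import ColorPicker\n"
        "__all__ = ['FancyWindow', 'ColorPicker']\n"
    )

sys.path.insert(0, root)
from widgets import FancyWindow, ColorPicker  # enabled by __init__.py

print(FancyWindow.__name__, ColorPicker.__name__)  # → FancyWindow ColorPicker
```

Without the two re-export lines in __init__.py, the same from widgets import ... statement would raise ImportError, since the names would only exist in the submodules.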
