I'm trying to patch dependencies in my errbot tests. The problem I'm having is how errbot imports modules: the import path is not static, and it breaks my patch decorators as I add tests or as the tests run in a different order.
I have a plugin called EDB (edb.py). Inside edb.py I import pyedb with import pyedb; the package is located in my site-packages.
I have my test file test_edb.py, and I try to patch my test methods like this:
pytest_plugins = ["errbot.backends.test"]
extra_plugin_dir = '.'
from unittest.mock import patch # noqa: E402
@patch('yapsy_loaded_plugin_EDB_1.pyedb', autospec=True)
def test_edb_testlist(pyedb_mock, testbot):
    testbot.push_message('!edb testlist')
    assert "Okay, let me get..." == testbot.pop_message()
    assert "I don't see any..." == testbot.pop_message()
Errbot adds this yapsy_loaded_plugin_EDB_<xx> path for the module import, but the <xx> depends on the order in which the tests are run. So this doesn't work; I need some static import path like mypath.pyedb.
I'm hoping there is a different way to approach this. Maybe I can change how I import the module so it's not dependent on errbot's imports?
Here is a link to Errbot testing for reference.
My solution feels a bit hacky, but it works. If anyone has a more elegant solution, please share; I'll accept my own answer after a while if there are no additional responses.
So I've come across this before, but I guess I still wasn't familiar enough with how patching works in Python, specifically knowing where to patch. After reading the "Where to patch" documentation (again :) ) I have a work-around for errbot's dynamic imports.
An errbot project folder will look something like this:
errbot-project/
├── data/
│   └── ...
└── plugins/
    ├── plugin1/
    │   └── ...
    └── plugin2/
        └── ...
I noticed that when errbot runs, both the project directory (../errbot-project) and all the plugin directories (e.g. ../errbot-project/plugins/plugin1) are added to sys.path.
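You can verify this yourself; a quick debugging sketch (not part of any plugin, just something to print from inside a plugin or a test):

import sys

# Show which entries errbot added for the project and its plugins.
for path in sys.path:
    if 'errbot-project' in path:
        print(path)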
So I added a package to my project directory, and I import that in my plugins. I can then patch my dependencies reliably through that package. Again, read the "Where to patch" documentation for the full explanation of why. It looks something like this:
errbot-project/
├── data/
│   └── ...
├── plugins/
│   ├── plugin1/
│   │   └── ...
│   └── plugin2/
│       └── ...
└── plugin_deps/
    └── __init__.py
Where my ../errbot-project/plugin_deps/__init__.py looks like:
...
import dep1
import dep2
...
And then in my plugin I use:
...
import plugin_deps as pdep
...

def method():
    pdep.dep1.method()
    ...

# note, you cannot use
#   from plugin_deps import dep1
# this changes 'where' python looks up the module
# and 'breaks' your patch
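To make the difference concrete, here is a minimal sketch (assuming plugin_deps and dep1 exist as above) of why the attribute lookup keeps the patch visible:

from unittest.mock import patch

import plugin_deps as pdep

with patch('plugin_deps.dep1', autospec=True) as mock_dep:
    # The plugin resolves pdep.dep1 at call time, so it sees the mock;
    # a name bound earlier via "from plugin_deps import dep1" would
    # still point at the real module here.
    assert pdep.dep1 is mock_dep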
And finally my test code looks like:
@patch('plugin_deps.dep1', autospec=True)
def test_get_tl_tabulation(my_mock, testbot):
    # test code here
I wrote a package with several modules
pkg/
├── __init__.py
├── mod.py
├── mod_calc.py
└── mod_plotting.py
mod.py uses functions from mod_calc.py and mod_plotting.py, so in mod.py I use:
# mod.py:
import pkg.mod_calc as MC
import pkg.mod_plotting as MP
MC.calculate_this()
MP.plot_this()
I reckon this is ok.
There are also scripts (Jupyter notebooks) with the suggested workflow, designed for users with very little Python knowledge, and I'd like them to use the functions as calculate_this() (defined in mod_calc.py) etc. (as opposed to mod_calc.calculate_this() etc.).
So here is what I'm currently doing:
# __init__.py:
from pkg.mod import *
from pkg.mod_calc import *
from pkg.mod_plotting import *
# script:
from pkg import *
do_this() # from mod.py
calculate_this() # from mod_calc.py
plot_this() # from mod_plotting.py
This way the user doesn't need to worry about which function is defined in which module. It works fine, but I understand that from ... import * is not best practice. So what is the pythonic way to do this?
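One common way to keep star imports under control (a sketch; only calculate_this comes from the example above, the helper name is hypothetical) is to declare __all__ in each module, so that from ... import * exports only the intended public names:

# mod_calc.py:
__all__ = ['calculate_this']   # the only name exported by "import *"

def calculate_this():
    ...

def _helper():   # underscore-prefixed names are skipped by "import *" anyway
    ...

With __all__ defined in each module, the star imports in __init__.py re-export exactly the documented user-facing functions and nothing else.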
I am trying to work with python packages and modules for the first time and come across some import errors I don't understand.
My project has the following structure:
upper
├── __init__.py
├── upper_file.py    # contains "from middle.middle_file import *"
└── middle
    ├── __init__.py
    ├── middle_file.py    # contains "from lower.lower_file import Person, Animal"
    └── lower
        ├── __init__.py
        └── lower_file.py    # contains the classes Person and Animal
I can run middle_file.py and can create inside the file a Person() and Animal() without any problems.
If I try to run upper_file.py I get a ModuleNotFoundError: No module named 'lower' error.
However, I have no trouble importing Animal() or Person() in upper_file.py directly with from middle.lower.lower_file import *
If I change the import statement inside middle_file.py from from lower.lower_file import Person, Animal to from middle.lower.lower_file import Person, Animal, I can successfully run upper_file.py, but not middle_file.py itself (and PyCharm underlines the import in middle_file.py in red and says it doesn't know middle).
In the end, I need to access inside of upper_file.py a class that is located inside of middle_file.py, but middle_file.py itself depends on the imports of lower_file.py.
I already read through this answer and the docs but just don't get how it works and why it behaves the way it does.
Thanks for any help in advance.
You should use relative imports to accomplish this. The first link on Google turns up practical examples that could help you understand this better.
In middle_file.py, try using from .lower.lower_file import *. It should solve the issue in upper_file.py.
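For illustration, the imports would then look like this (a sketch based on the layout above):

# middle/middle_file.py
from .lower.lower_file import Person, Animal

# upper_file.py, at the top of the upper package
from middle.middle_file import *

Note that with the relative import in place, middle_file.py can no longer be run directly as a script; run it as a module from the upper directory instead, e.g. python -m middle.middle_file.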
I have a package with the following structure
package
│   __init__.py
│
└───sub
        __init__.py
        XWrap.py    # implements class X
        YWrap.py    # implements class Y
The first __init__.py looks like this:
from . import sub
The second __init__.py looks like this:
from .XWrap import X
from .YWrap import Y
Doing this, the user sees the following
package
│
└───sub
        X
        XWrap
        Y
        YWrap
I would like to have a cleaner interface where I don't see YWrap and XWrap. How can I achieve this?
This is the same question asked in a comment here: Python packages - import by class, not file
but I couldn't find a definitive answer.
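One conventional approach (a sketch, not the only option) is to mark the implementation modules as private with a leading underscore and re-export only the classes:

# sub/__init__.py, with the files renamed to _XWrap.py and _YWrap.py:
from ._XWrap import X
from ._YWrap import Y

__all__ = ['X', 'Y']   # the names advertised by "from package.sub import *"

The modules remain importable as package.sub._XWrap, but the underscore signals they are private, and star imports and most completion tooling will hide them.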
I have a helper class in a test module with a class-level member in which I cache already-created instances of the class (SQL dumps of database configurations composed in fixtures, so that I don't have to derive the data again for multiple tests).
It starts:
import os
import re

class SqlDump:
    FIXUP = re.compile(r"^(\s*CREATE SEQUENCE[^;]*)AS INTEGER([^;]*);",
                       flags=re.MULTILINE | re.IGNORECASE | re.DOTALL)
    PATH = os.path.join(os.path.dirname(__file__), 'test_data/sql_dumps/{script}.sql')
    all = {}

    def __init__(self, script):
        self.__class__.all[script] = self
        self.script = script
        self.content = self.load()
If I place a breakpoint on the line that initializes this member all and use the class outside pytest, it is initialized only once.
But when I run pytest, the line that initializes the member is executed twice. This results in some values being lost.
Is there ever any reason a class-level member should be initialized twice? Why is pytest doing this?
This is a year old now, but in case it helps someone else:
There was a very similar issue in my case, where a module was getting rerun over and over by pytest. This particular module (SQLAlchemy) is highly sensitive to reinitialization, resulting in an opaque error of Multiple classes found for path in the registry of this declarative base. This did not occur during normal runs of the platform, only when pytest was run.
Here's how the project was structured:
ProjectName
│   __init__.py
│   script_outside_modules.py
│
└───actual_module
    │   __init__.py
    │   pytest.ini
    │   some_code.py
    │
    ├───some_subfolder
    │       __init__.py
    │       a_file_to_test.py
    │
    └───tests
            __init__.py
            test_suite.py
All imports were absolute from the actual_module root, e.g. actual_module.some_code.
If you want to triage exactly how sys sees your module imports, and whether the same module was imported such that it's seen as two different modules, try using the following code in a module which you believe could be getting double-imported, outside of any encapsulation (e.g. above class SqlDump in your example):
import sys
import traceback

print(f"{__file__} imported by module {__name__}, in sys.modules as {sys.modules[__name__]}")

try:
    if hasattr(sys, 'double_import'):
        raise AssertionError(f"{__file__} was double-imported!")
except Exception as e:
    traceback.print_exc()
    raise e

sys.double_import = 1
Reading what it's registered as in sys.modules should help you identify where the disconnect is happening and whether you have odd module import scenarios playing out.
After hours of investigating possible causes and logging, I found that the extra import was due to the __init__.py at the root of the project, inside ProjectName in this case. The code above helped to illustrate this: sys.modules showed a module at actual_module.some_code during the preparation phases, but then showed another module at ProjectName.actual_module.some_code within the tests themselves.
This seems to be because pytest will identify a root separately from whatever is defined in imports and prepend it when running tests (though that's just a working hypothesis). Deleting ProjectName/__init__.py resolved my issue.
I write in Python 3.
I want to add plugin support to my program. I don't want to use heavy frameworks, so I decided to write a minimal one myself.
By the way, the plugins need to keep running over time; I can't simply unload and reload a plugin on every re-run, or the plugins will lose all their data.
My folder structure is:
interfaces/
├── dummy
├── gmail
│   ├── __init__.py
│   └── __pycache__
│       └── __init__.cpython-33.pyc
└── hello
    ├── __init__.py
    └── __pycache__
        └── __init__.cpython-33.pyc
Then, I wrote a piece of code to load and execute the plugins:
#!/usr/bin/python3
import os
import imp

INTERFACES_FOLDER = './interfaces'
MAIN_MODULE = '__init__'

def search_plugins():
    plugins = []
    plugins_folders = os.listdir(INTERFACES_FOLDER)
    for i in plugins_folders:
        plugin_folder = os.path.join(INTERFACES_FOLDER, i)
        if not os.path.isdir(plugin_folder):
            continue
        if not MAIN_MODULE + '.py' in os.listdir(plugin_folder):
            continue
        info = imp.find_module(MAIN_MODULE, [plugin_folder])
        plugins.append({'name': i, 'info': info})
    return plugins

def load_plugin(plugin):
    return imp.load_module(MAIN_MODULE, *plugin["info"])

plugins_list = search_plugins()
plugins = []
for i in plugins_list:
    module = load_plugin(i)
    print(module)
    plugins.append(module)

print(plugins)
Outputs:
# it works!
<module '__init__' from './interfaces/gmail/__init__.py'>
<module '__init__' from './interfaces/hello/__init__.py'>
# what's wrong?
[<module '__init__' from './interfaces/hello/__init__.py'>,
<module '__init__' from './interfaces/hello/__init__.py'>]
As you can see, when I load a plugin, everything works correctly. But when I append the modules to a list, the different modules become the same one.
What's wrong?
You import different modules, but with the same name (they are all called "__init__"). When you import the second module, Python will notice that you have already imported a module called "__init__" and will return that one.
Rather, the modules you should import are the "gmail" and "hello" modules. That might work.
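For example, keeping the question's imp-based loader, giving each plugin a unique module name should avoid the collision (a minimal sketch):

def load_plugin(plugin):
    # Register each plugin under its own unique name so that loading the
    # second plugin doesn't just return the module already in sys.modules.
    return imp.load_module('plugin_' + plugin['name'], *plugin['info'])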
However, I'd urge you to reconsider writing your own plugin system. It really isn't that easy. A quick search finds several plugin systems, some lightweight, some not so much. Most have been abandoned, which is an indication that this isn't as easy as you might think.
The plugin system in Python that probably is the most widely used, also the most flexible and amongst the oldest, is the Zope Component Architecture. See also the docs. Note that in Python 3 you use class decorators instead of the class body statements.
Another popular plugin system is the "entry points" of Distribute. That's quite lightweight, although I don't know if you can load or unload plugins with that.
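For reference, discovering plugins through entry points looks roughly like this (a sketch using pkg_resources; the group name 'myapp.plugins' is hypothetical and would be declared by each plugin package in its setup.py):

import pkg_resources

for entry_point in pkg_resources.iter_entry_points(group='myapp.plugins'):
    plugin = entry_point.load()   # imports and returns the registered object
    print(entry_point.name, plugin)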
Others I have never looked at are Yapsy, one called "Plugins" that seems abandoned, another one called "PyPlugin", also abandoned, and one called "easy_plugins", which seems very new and isn't abandoned yet.
And here is example code for a plugin manager.