I have a package with the following structure
package/
│
├── __init__.py
│
└── sub/
    ├── __init__.py
    │
    ├── XWrap.py    # implements class X
    └── YWrap.py    # implements class Y
The first __init__.py looks like this:
from . import sub
The second __init__.py looks like this:
from .XWrap import X
from .YWrap import Y
Doing this, the user sees the following
package
│
└── sub
    ├── X
    ├── XWrap
    ├── Y
    └── YWrap
I would like to have a cleaner interface where I don't see YWrap and XWrap. How can I achieve this?
This is the same question asked in a comment here: Python packages - import by class, not file, but I couldn't find a definitive answer there.
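One common approach (a sketch, not from the linked question; the del trick assumes the submodules are only needed at import time) is to remove the module names from the package namespace after the classes have been pulled in, or to declare the public API with __all__. The second __init__.py would become:

from .XWrap import X
from .YWrap import Y

# The imports above also bind the submodule names XWrap and YWrap
# as attributes of the package; drop them so dir(package.sub) only
# shows the classes.
del XWrap, YWrap

# Optionally, declare the public names explicitly as well:
__all__ = ["X", "Y"]

Note that the modules remain importable directly (import package.sub.XWrap still works); this only tidies what dir() and tab completion show.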
Related
I am trying to work with Python packages and modules for the first time and have come across some import errors I don't understand.
My project has the following structure:
upper
├── __init__.py
├── upper_file.py        # contains "from middle.middle_file import *"
└── middle
    ├── __init__.py
    ├── middle_file.py   # contains "from lower.lower_file import Person, Animal"
    └── lower
        ├── __init__.py
        └── lower_file.py  # contains the classes Person and Animal
I can run middle_file.py and can create inside the file a Person() and Animal() without any problems.
If I try to run upper_file.py I get a ModuleNotFoundError: No module named 'lower' error.
However, I have no trouble importing Animal() or Person() in upper_file.py directly with from middle.lower.lower_file import *
If I change the import statement inside middle_file.py from from lower.lower_file import Person, Animal to from middle.lower.lower_file import Person, Animal, I can successfully run upper_file.py but no longer middle_file.py itself (and PyCharm underlines the import in middle_file.py in red and says it doesn't know middle).
In the end, I need to access inside of upper_file.py a class that is located inside of middle_file.py, but middle_file.py itself depends on the imports of lower_file.py.
I already read through this answer and the docs but just don't get how it works and why it behaves the way it does.
Thanks for any help in advance.
You should use relative imports to accomplish this. The first link I found on Google has a practical example that may help you understand it better.
In middle_file.py, try from .lower.lower_file import *. That should solve the issue in upper_file.py.
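A minimal sketch of the fixed file, assuming Person and Animal are all that middle_file.py needs from below (explicit names are used instead of * to keep the namespace clean):

# middle/middle_file.py
# The leading dot resolves lower relative to the containing package
# (middle) instead of relying on whatever happens to be on sys.path.
from .lower.lower_file import Person, Animal

def make_pair():
    # hypothetical helper, just to exercise the imports
    return Person(), Animal()

The trade-off: a file using relative imports must be run as part of its package, e.g. python -m middle.middle_file from the directory containing middle, not as a plain script. That is also why running middle_file.py directly fails after this change: running a file as a script makes it the __main__ module with no parent package.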
As per the title, I am trying to build a Python package myself. I am already familiar with writing Python packages from reading https://packaging.python.org/en/latest/tutorials/packaging-projects/ and https://docs.python.org/3/tutorial/modules.html#packages. These gave me an idea of how to write a collection of classes and functions that I can import.
What I want is to write a package like pandas or numpy, where I run an import and it works as an "object", that is to say most/all of the functions are dotted after the package name.
E.g. after importing
import pandas as pd
import numpy as np
The pd and np objects have all the functions, which can be called with pd.read_csv() or np.arange(), and running dir(pd) and dir(np) gives me all the various functions available from them. I tried looking at the pandas source code to try and replicate this functionality, but I could not do it. Maybe there are some parts of it that I am missing or misunderstanding. Any help, or a pointer in the right direction, would be much appreciated.
In a more general example, I want to write a package and import it to have the functionalities dotted after it. E.g. import pypack and I can call pypack.FUNCTION() instead of having to import that function as such from pypack.module import FUNCTION and call FUNCTION() or instead of importing it as just a submodule.
I hope my question makes sense, as I have no formal training in writing software.
Let's assume you have a module (package) called my_library.
.
├── main.py
└── my_library/
    └── __init__.py
/my_library/__init__.py
def foo(x):
    return x
In your main.py you can import my_library
import my_library
print(my_library.foo("Hello World"))
The directory with __init__.py will be your package and can be imported.
Now consider an even deeper example.
.
├── main.py
└── my_library/
    ├── __init__.py
    └── inner_module.py
inner_module.py
def bar(x):
    return x
In your /my_library/__init__.py you can add
from .inner_module import bar

def foo(x):
    return x
You can use bar() in your main as follows
import my_library
print(my_library.foo("Hello World"))
print(my_library.bar("Hello World"))
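This re-export pattern is exactly how the pd.read_csv() style of API comes about: the package's __init__.py imports names from submodules so they hang off the package itself. A quick check, using the names from this answer:

import my_library

# Both functions are now attributes of the package, so they show up
# in dir() alongside the submodule that was imported in __init__.py.
print([name for name in dir(my_library) if not name.startswith("_")])
# ['bar', 'foo', 'inner_module']

Large packages like pandas and numpy do the same thing at scale, with their top-level __init__.py pulling names up from many internal modules.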
I have a helper class in a test module with a class-level attribute in which I cache already-created instances of the class (SQL dumps of database configurations composed in fixtures, so that I don't have to derive the data again for multiple tests).
It starts:
import os
import re

class SqlDump:
    FIXUP = re.compile(r"^(\s*CREATE SEQUENCE[^;]*)AS INTEGER([^;]*);",
                       flags=re.MULTILINE | re.IGNORECASE | re.DOTALL)
    PATH = os.path.join(os.path.dirname(__file__), 'test_data/sql_dumps/{script}.sql')

    all = {}

    def __init__(self, script):
        self.__class__.all[script] = self
        self.script = script
        self.content = self.load()
If I place a breakpoint on the initialization of the all attribute and use the class outside pytest, it is initialized only once.
But when I run pytest, the line that initializes the member is executed twice. This results in some values being lost.
Is there ever any reason a class-level member should be initialized twice? Why is pytest doing this?
This is a year old now, but in case it helps someone else:
There was a very similar issue in my case, where a module was getting re-imported over and over by pytest. This particular module (SQLAlchemy) is highly sensitive to re-initialization, producing the opaque error Multiple classes found for path in the registry of this declarative base. This did not occur during normal runs of the platform, only when pytest was run.
Here's how the project was structured:
ProjectName
│   __init__.py
│   script_outside_modules.py
│
└───actual_module
    │   __init__.py
    │   pytest.ini
    │   some_code.py
    │
    ├───some_subfolder
    │       __init__.py
    │       a_file_to_test.py
    │
    └───tests
            __init__.py
            test_suite.py
All imports were absolute from the actual_module root, e.g. actual_module.some_code.
If you want to triage exactly how sys sees your module imports, and whether the same module was imported in a way that makes it appear as two different modules, try the following code in a module you believe could be getting double-imported, at module level outside any class or function (e.g. above class SqlDump in your example):
import sys
import traceback

print(f"{__file__} imported by module {__name__}, in sys.modules as {sys.modules[__name__]}")

try:
    # On the second import, the attribute set at the bottom of this
    # module already exists on sys, so we raise and print a traceback.
    if hasattr(sys, 'double_import'):
        raise AssertionError(f"{__file__} was double-imported!")
except Exception as e:
    traceback.print_exc()
    raise e

sys.double_import = 1
Reading what it's registered as in sys.modules should help you identify where the disconnect is happening and whether you have odd module import scenarios playing out.
After hours of investigating possible causes and logging, I found that the extra import was due to the __init__.py at the root of the project, inside ProjectName in this case. The code above helped to illustrate this: sys.modules showed a module for actual_module.some_code during preparation phases, but then showed another module at ProjectName.actual_module.some_code within the tests themselves.
This seems to be because pytest will identify a root separately from whatever is defined in imports and prepend it when running tests (though that's just a working hypothesis). Deleting ProjectName/__init__.py resolved my issue.
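As a side note, a possible alternative to deleting the root __init__.py (a hedged sketch; --import-mode is a real pytest option, but whether it resolves this particular layout is an assumption): newer pytest versions can import test modules via importlib instead of inserting ancestor directories into sys.path, which avoids some of these double-import situations.

# pytest.ini
[pytest]
# importlib mode imports test modules without prepending their
# rootdir-relative parent directories to sys.path.
addopts = --import-mode=importlib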
numpy.random.randn(100)
I understand that numpy is the name of the imported module and that randn is a function defined somewhere within it, but I am not sure what .random. is.
Thanks and happy new year!
@Yann's answer is definitely correct but might not make the whole picture clear.
The best analogy for package structure is probably folders. Imagine the whole numpy package as a big folder. In that folder are a bunch of files; these are our functions. But you also have subfolders. random is one of these subfolders. It contains more files (functions) that are grouped together because they deal with the same thing, namely randomness.
numpy
├── arccos
├── vectorize
├── random
│   ├── randn
│   └── <more functions in the random subfolder>
└── <more functions in the numpy folder>
The .random part is a module within numpy. You can confirm this by using the Python interpreter:
# first, import numpy into the interpreter
import numpy

# evaluating the bare name makes the interpreter display
# info about the random module
numpy.random
The output should be something like <module 'numpy.random' from 'path to module'>.
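A short interpreter sketch tying it together (the sampled numbers are illustrative; yours will differ):

import numpy

print(type(numpy))            # <class 'module'>
print(type(numpy.random))     # <class 'module'> -- a submodule, not a function
print(numpy.random.randn(3))  # e.g. array([ 0.49, -1.32,  0.05])

So numpy.random.randn(100) reads as: in the package numpy, in its submodule random, call the function randn with argument 100.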
I'm trying to patch dependencies in my errbot tests. The problem I'm having is how errbot imports modules: it is not static, and it breaks my patch decorators as I add tests or they run in a different order.
I have a plugin called EDB (edb.py). Inside edb.py I import pyedb with import pyedb; it is located in my site-packages.
I have my test file test_edb.py and I try to patch my test methods like this
pytest_plugins = ["errbot.backends.test"]
extra_plugin_dir = '.'
from unittest.mock import patch # noqa: E402
@patch('yapsy_loaded_plugin_EDB_1.pyedb', autospec=True)
def test_edb_testlist(pyedb_mock, testbot):
    testbot.push_message('!edb testlist')
    assert "Okay, let me get..." == testbot.pop_message()
    assert "I don't see any..." == testbot.pop_message()
Errbot adds this yapsy_loaded_plugin_EDB_<xx> path for the module import, but the <xx> depends on the order the tests are run, so this doesn't work; I need some static import path like mypath.pyedb.
I'm hoping there is a different way to approach this. Maybe I can change how I import the module so it's not dependent on errbot's imports?
Here is a link to Errbot testing for reference.
My solution feels a bit hacky, but it works. If anyone has a more elegant solution, please share; I'll accept my own answer after a while if there are no additional responses.
So I've come across this before, but I guess I still wasn't familiar enough with how patching works in Python and where to patch. After reading the "Where to patch" documentation (again :) ) I have a work-around for errbot's dynamic imports.
An errbot project folder will look something like this:
errbot-project/
├── data/
│   └── ...
└── plugins/
    ├── plugin1/
    │   └── ...
    └── plugin2/
        └── ...
I noticed that when errbot runs, both the project directory (../errbot-project) and all the plugin directories (e.g. ../errbot-project/plugins/plugin1) are added to sys.path.
So I added a package to my project directory and import that in my plugins. I can then patch my dependencies reliably through that package; again, read the "Where to patch" documentation for the full explanation of why. It looks something like this:
errbot-project/
├── data/
│   └── ...
├── plugins/
│   ├── plugin1/
│   │   └── ...
│   └── plugin2/
│       └── ...
└── plugin_deps/
    └── __init__.py
Where my ../errbot-project/plugin_deps/__init__.py looks like
...
import dep1
import dep2
...
And then in my plugin I use
...
import plugin_deps as pdep
...

def method():
    pdep.dep1.method()
    ...

# note: you cannot use
#   from plugin_deps import dep1
# because it changes 'where' Python looks up the module
# and 'breaks' your patch
And finally my test code looks like
@patch('plugin_deps.dep1', autospec=True)
def test_get_tl_tabulation(my_mock, testbot):
    # test code here
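To see why the attribute-style access matters, here is a hedged sketch of the "where to patch" rule (plugin_deps and dep1 as above; the assert is illustrative):

from unittest.mock import patch
import plugin_deps as pdep

with patch('plugin_deps.dep1', autospec=True) as mock_dep:
    # pdep.dep1 is an attribute lookup on the module object at call
    # time, so it resolves to the mock while the patch is active.
    assert pdep.dep1 is mock_dep

Had the plugin done from plugin_deps import dep1 at import time, its local name dep1 would still point at the original object, untouched by the patch.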