Trouble importing a module that imports a module - python

I'm having trouble with a Python package that uses separate modules to structure the code. The package itself works, but importing it from another environment fails with a ModuleNotFoundError.
Here's the structure:
Project-root
|
|--src/
| __init__.py
| module_a.py
| module_b.py
| module_c.py
| module_d.py
|--tests
etc.
In module_a.py I have:
from module_b import function_b1,...
from module_c import function_c1,...
In module_c I import module_d like:
from module_d import function_d1,...
Executing module_a or module_c directly from the CLI works as expected, and the unit tests I've created in the tests directory also pass (with the help of sys.path.insert). However, if I create a new environment and import the package, I get the following error:
>>> import module_a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/<abs_path>/.venv/lib/python3.9/site-packages/module_a.py", line 22, in <module>
from module_c import function_c1, function_c2
File "/<abs_path>/.venv/lib/python3.9/site-packages/module_c.py", line 9, in <module>
import module_d
ModuleNotFoundError: No module named 'module_d'
>>>
I've run out of ideas for overcoming this, short of combining the code of modules c and d into one file (which I'd hate to do) or rethinking the flow so that all modules are imported from module_a.
Any suggestions how to approach this would be greatly appreciated.
Update: It turned out to be a typo in the name of module_d in setup.py. For whatever reason, python setup.py install was failing silently, or I wasn't reading the logs carefully.

The problem comes down to understanding the basics of the import system and the PYTHONPATH.
When you try to import a module (import module_a), Python searches, in order, every directory listed in sys.path. If a directory contains a matching module file (module_a.py) or a matching package directory (module_a/)1, Python imports it; for a package, it runs the package's __init__.py if one exists.
When you get a ModuleNotFoundError (a subclass of ImportError, https://docs.python.org/3/library/exceptions.html#ImportError), it means that no directory on sys.path contains a module or package with the requested name.
You said that for your tests you did something like sys.path.insert(0, "some/path/"), but that is a fragile workaround, not a solution.
What you should do is set your PYTHONPATH environment variable to contain the directory where your modules are located, Project-root/src in your case. That way there is no need to ever use sys.path.insert or to fiddle with relative/absolute paths in import statements.
When you create your new environment, just set PYTHONPATH to include Project-root/src and you are done. This is how installed Python modules (libraries) work: they are all put into a directory, site-packages, that is already on the search path.
1: This changed in Python 3.3; older versions required the directory to contain an __init__.py file.
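The effect of putting a directory on PYTHONPATH can be sketched with throwaway files (the paths and module names below are hypothetical, standing in for Project-root/src and module_d):

```python
import os
import subprocess
import sys
import tempfile

# A module becomes importable from anywhere once its directory is on
# PYTHONPATH -- no sys.path.insert needed in the code itself.
with tempfile.TemporaryDirectory() as root:
    src = os.path.join(root, "src")
    os.makedirs(src)
    with open(os.path.join(src, "module_d.py"), "w") as f:
        f.write("def function_d1():\n    return 'found'\n")

    # Equivalent of: export PYTHONPATH=/path/to/Project-root/src
    env = dict(os.environ, PYTHONPATH=src)
    proc = subprocess.run(
        [sys.executable, "-c",
         "from module_d import function_d1; print(function_d1())"],
        env=env, capture_output=True, text=True,
    )
    result = proc.stdout.strip()

print(result)  # -> found
```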

Related

How to import modules in Azure Machine Learning run script?

I am new to Azure Machine Learning and have been struggling with importing modules into my run script. I am using the AzureML SDK for Python. I think I somehow have to append the script location to PYTHONPATH, but have been unable to do so.
To illustrate the problem, assume I have the following project directory:
project/
    src/
        utilities.py
        test.py
    run.py
    requirements.txt
I want to run test.py on a compute instance on AzureML and I submit the run via run.py.
A simple version of run.py looks as follows:
from azureml.core import Workspace, Experiment, ScriptRunConfig, Environment
from azureml.core.compute import ComputeInstance

ws = Workspace.get(...)  # my credentials here
exp = Experiment(workspace=ws, name='test-experiment')
env = Environment.from_pip_requirements(name='test-env', file_path='requirements.txt')
instance = ComputeInstance(ws, '<instance-name>')
config = ScriptRunConfig(source_directory='./src', script='test.py',
                         environment=env, compute_target=instance)
run = exp.submit(config)
run.wait_for_completion()
Now, test.py imports functions from utilities.py, e.g.:
from src.utilities import test_func
test_func()
Then, when I submit a run, I get the error:
Traceback (most recent call last):
File "src/test.py", line 13, in <module>
from src.utilities import test_func
ModuleNotFoundError: No module named 'src.utilities'; 'src' is not a package
This looks like a standard error where the directory is not on the Python path. I tried two things to get rid of it:
- Include an __init__.py file in src. This didn't work, and for various reasons I would prefer not to use __init__.py files anyway.
- Fiddle with the environment_variables passed to AzureML, like so:
  env.environment_variables = {'PYTHONPATH': f'./src:${{PYTHONPATH}}'}
  but that didn't really work either, and I assume it is simply not the correct way to append to the PYTHONPATH.
I would greatly appreciate any suggestions on extending PYTHONPATH or any other ways to import modules when running a script in AzureML.
The source directory set in ScriptRunConfig is automatically added to the PYTHONPATH, which means you should remove the "src" prefix from the import line:
from utilities import test_func
Hope that helps
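You can reproduce why this works locally with throwaway files (the file contents below are hypothetical): when src/ itself is the script's directory, its modules are importable without the "src." prefix, which mirrors what happens after ScriptRunConfig uploads source_directory.

```python
import os
import subprocess
import sys
import tempfile

# Local sketch of the remote layout: test.py imports its sibling
# utilities.py directly, because src/ is on the path.
with tempfile.TemporaryDirectory() as project:
    src = os.path.join(project, "src")
    os.makedirs(src)
    with open(os.path.join(src, "utilities.py"), "w") as f:
        f.write("def test_func():\n    return 'ok'\n")
    with open(os.path.join(src, "test.py"), "w") as f:
        f.write("from utilities import test_func\nprint(test_func())\n")

    # Running with src/ as the working directory mirrors the remote run.
    proc = subprocess.run([sys.executable, "test.py"], cwd=src,
                          capture_output=True, text=True)
    result = proc.stdout.strip()

print(result)  # -> ok
```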

How to fix relative import error in python without code change?

In python I have a folder with three files
- __init__.py
- module.py
- test_module.py
in which the module module.py is imported inside the file test_module.py as follows:
from . import module
Of course, when I just run test_module.py I get an error
> python test_module.py
Traceback (most recent call last):
File "test_module.py", line 4, in <module>
from . import module
ImportError: attempted relative import with no known parent package
But even after I set PYTHONPATH to the absolute path of the directory I am working in,
export PYTHONPATH=`pwd`
I expected the import to work. To my surprise, I get the same error!
So can I fix the relative import error without any code change?
Since the directory you describe (let's call it thatdirectory) is a package, as marked by an __init__.py file, you can "fix" this by cd-ing one directory higher up and then running
python -m thatdirectory.test_module
since running modules with -m fixes up sys.path in a way that makes this particular configuration work.
(In general, any code that messes with sys.path manually, or requires changes to PYTHONPATH to work, is broken in my eyes...)
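The difference between the two ways of running can be shown with a throwaway package (the names below are hypothetical): executing the file directly fails with the relative-import error, while -m from one directory up works.

```python
import os
import subprocess
import sys
import tempfile

# Build thatdirectory/ with __init__.py, module.py and test_module.py,
# then try both invocations.
with tempfile.TemporaryDirectory() as parent:
    pkg = os.path.join(parent, "thatdirectory")
    os.makedirs(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "module.py"), "w") as f:
        f.write("VALUE = 42\n")
    with open(os.path.join(pkg, "test_module.py"), "w") as f:
        f.write("from . import module\nprint(module.VALUE)\n")

    # Direct execution: no parent package, so the relative import fails.
    direct = subprocess.run(
        [sys.executable, os.path.join(pkg, "test_module.py")],
        capture_output=True, text=True)
    # -m from the parent directory: sys.path is set up so it works.
    via_m = subprocess.run(
        [sys.executable, "-m", "thatdirectory.test_module"],
        cwd=parent, capture_output=True, text=True)

print("ImportError" in direct.stderr)  # -> True
print(via_m.stdout.strip())            # -> 42
```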

Relative Imports & Directory Structure (Trillionth Time)

SUMMARY
I am fairly new to designing full-fledged Python projects; all my earlier Python work has been in Jupyter notebooks. Now that I am designing an application in Python, I am having considerable difficulty making it 'run'.
I have visited the following sites -
Relative imports in Python
Ultimate answer to relative python imports
python relative import example code does not work
But none of them seem to solve my issue.
PROBLEM
Here's my repo structure -
my_app/
    __init__.py
    code/
        __init__.py
        module_1/
            some_code_1.py
        module_2/
            some_code_2.py
        module_3/
            some_code_3.py
        main.py
    tests/
        __init__.py
        module_1/
            test_some_code_1.py
        module_2/
            test_some_code_2.py
        module_3/
            test_some_code_3.py
    resources/
        __init__.py
        config.json
        data.csv
I am primarily using PyCharm and VS Code for development and testing.
The main.py file has the following imports -
from code.module_1.some_code_1 import class_1
from code.module_2.some_code_2 import class_2
from code.module_3.some_code_3 import class_3
In the PyCharm run configuration, I have the working directory set to User/blah/blah/my_app/
Whenever I run the main.py from PyCharm, it runs perfectly.
But if I run the program from terminal like -
$ python code/main.py
Traceback (most recent call last):
File "code/main.py", line 5, in <module>
from code.module_1.some_code_1 import class_1
ModuleNotFoundError: No module named 'code.module_1.some_code_1'; 'code' is not a package
I get the same error if I run the main.py from VS Code.
Is there a way to make this work for PyCharm as well as terminal?
If I change the imports to -
from module_1.some_code_1 import class_1
from module_2.some_code_2 import class_2
from module_3.some_code_3 import class_3
This works on the terminal but doesn't work in PyCharm. The test cases fail too.
Is there something I am missing, or some configuration that can be done to make all this work seamlessly?
Can someone help me with this?
Thanks!
The problem is that when you do python code/main.py, Python puts the script's directory, code/, at the front of sys.path instead of your project root. That makes all of your absolute imports fail, since Python doesn't search above that directory unless you explicitly change a setting such as the PYTHONPATH environment variable.
Your best option is to rename main.py to __main__.py and then use python -m code (although do note that this package name clashes with the code module in the stdlib).
You also don't need the __init__.py in my_app/ unless you're going to treat that entire directory as a package.
And I would consider using relative imports instead of absolute ones (I would also advise importing the module rather than an object/class from the module in your import statements, to avoid circular import issues). For instance, for the from code.module_1.some_code_1 import class_1 line in code.main, I would write from .module_1 import some_code_1.
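The __main__.py plus python -m layout can be sketched with throwaway files (the names below are hypothetical; "app" stands in for "code" to sidestep the stdlib clash):

```python
import os
import subprocess
import sys
import tempfile

# Build app/ as a package with a __main__.py that uses a relative import,
# then run it with -m from the project root.
with tempfile.TemporaryDirectory() as root:
    pkg = os.path.join(root, "app")
    mod1 = os.path.join(pkg, "module_1")
    os.makedirs(mod1)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    open(os.path.join(mod1, "__init__.py"), "w").close()
    with open(os.path.join(mod1, "some_code_1.py"), "w") as f:
        f.write("class class_1:\n    name = 'class_1'\n")
    with open(os.path.join(pkg, "__main__.py"), "w") as f:
        f.write("from .module_1 import some_code_1\n"
                "print(some_code_1.class_1.name)\n")

    # python -m app runs app/__main__.py with the package context set,
    # so the relative import resolves.
    proc = subprocess.run([sys.executable, "-m", "app"], cwd=root,
                          capture_output=True, text=True)
    result = proc.stdout.strip()

print(result)  # -> class_1
```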

Import object declared inside __init__.py

I'm having trouble understanding how objects declared inside '__init__.py' are/should be imported to other files.
I have a directory structure like so
top/
|
|_ lib/
   |_ __init__.py
   |_ one.py
File contents are as follows
lib/__init__.py
a=object()
lib/one.py
from lib import a
Here is the problem. If I fire a python shell from top directory, the following command runs well
>>> from lib.one import a
However, if I change directory to top/lib and run a similar command in a new Python shell, I get an error.
>>> from one import a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "one.py", line 1, in <module>
from lib import a
ImportError: No module named lib
Of course, I can change one.py like so, and everything will work:
from __init__ import a
But I'm really trying to understand why the import works from the top directory and not from top/lib.
Thanks.
Generally speaking, I think it's best practice to have data funnel up to __init__.py from the modules/subpackages rather than relying on data from __init__.py in the surrounding modules. In other words, __init__.py can use one.py, but one.py shouldn't use data/functions defined in __init__.py.
Now, to your question...
It works in top because the interpreter puts the current directory on the search path: Python looks in the current directory for a module or package named lib and imports it. (The from __init__ import a workaround relies on an implicit relative import, which is gone in Python 3.x IIRC, so don't depend on it ;-) Running from lib.one import a first imports lib (its __init__.py), which works fine. It then imports one -- and the from lib import a inside one.py still succeeds because the lookup is relative to your current working directory, not to the source file.
When you move into the lib directory, Python can no longer find lib in the current directory, so it isn't importable. Note that with most packages this is fixed by installing the package, which puts it somewhere Python can find it without needing it to be in the current directory.
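The cwd-dependence can be reproduced with a throwaway copy of the question's layout: the import in one.py only resolves when the directory *containing* lib/ is on the search path.

```python
import os
import subprocess
import sys
import tempfile

# Build top/lib/{__init__.py,one.py} and try the import from both
# directories; -c adds the working directory to sys.path, like the
# interactive shell.
code = "from lib.one import a; print(type(a).__name__)"
with tempfile.TemporaryDirectory() as top:
    lib = os.path.join(top, "lib")
    os.makedirs(lib)
    with open(os.path.join(lib, "__init__.py"), "w") as f:
        f.write("a = object()\n")
    with open(os.path.join(lib, "one.py"), "w") as f:
        f.write("from lib import a\n")

    from_top = subprocess.run([sys.executable, "-c", code], cwd=top,
                              capture_output=True, text=True)
    from_lib = subprocess.run([sys.executable, "-c", code], cwd=lib,
                              capture_output=True, text=True)

print(from_top.stdout.strip())                   # -> object
print("ModuleNotFoundError" in from_lib.stderr)  # -> True
```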

gobject-introspection overrides cause import errors

I am using gobject-introspection in python2.7 on ubuntu raring and I run into an import error while building some packages. I have isolated a minimal set of steps to replicate it:
Make a local directory structure:
gi/
    __init__.py
    overrides/
        __init__.py
Put the standard boilerplate
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
print __path__, __name__
in both __init__.py files.
From the directory containing your local copy of gi, run the following:
python -c "from gi import repository"
I get an error message that looks like:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/gi/repository/__init__.py", line 25, in <module>
from ..importer import DynamicImporter
File "/usr/lib/python2.7/dist-packages/gi/importer.py", line 28, in <module>
from .module import DynamicModule
File "/usr/lib/python2.7/dist-packages/gi/module.py", line 37, in <module>
from .overrides import registry
ImportError: cannot import name registry
Any explanation? I can't find any decent documentation of the intended behavior; gobject-introspection seems to be a very poorly documented project. Help is greatly appreciated!
From the Python documentation:
The __init__.py files are required to make Python treat the
directories as containing packages; this is done to prevent
directories with a common name, such as string, from unintentionally
hiding valid modules that occur later on the module search path.
Simply by having those __init__.py files accessible from the directory you run in, you are telling the interpreter that this is an implementation of the gi module. Any use of the real gi module will no longer work correctly.
Now, why does the error appear to come from /usr/lib? Because gi was found in your local gi/, but gi.repository was found in /usr/lib/python2.7/dist-packages/gi/repository (the extend_path boilerplate merges the system package's path into __path__). Python runs /usr/lib/python2.7/dist-packages/gi/repository/__init__.py; from there, some other submodules import correctly, but when it tries to import overrides it finds your local stub in gi/overrides. Your stub does not define registry, so the import fails.
Try putting registry = 'dumb_string' in your local gi/overrides/__init__.py and you will see that the error goes away.
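The shadowing can be reproduced generically with throwaway packages (the name "realpkg" below is hypothetical, standing in for the system gi): a local stub directory with the same name as an installed package hides the real package's attributes.

```python
import os
import subprocess
import sys
import tempfile

# One "installed" copy that defines registry, and one local stub that
# doesn't -- like the boilerplate gi/ directory in the question.
with tempfile.TemporaryDirectory() as root:
    site = os.path.join(root, "site")
    cwd = os.path.join(root, "cwd")
    os.makedirs(os.path.join(site, "realpkg"))
    os.makedirs(os.path.join(cwd, "realpkg"))
    with open(os.path.join(site, "realpkg", "__init__.py"), "w") as f:
        f.write("registry = 'real'\n")
    open(os.path.join(cwd, "realpkg", "__init__.py"), "w").close()

    # The current directory is searched before PYTHONPATH, so the empty
    # stub wins and registry is missing.
    proc = subprocess.run(
        [sys.executable, "-c", "from realpkg import registry"],
        cwd=cwd, env=dict(os.environ, PYTHONPATH=site),
        capture_output=True, text=True)

print("ImportError" in proc.stderr)  # -> True
```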
