Module not found running on command line - python

I have the following project structure:
project/
    example/
        __init__.py
        foo.py
        boo.py
        meh.py
    tests/
        example/
            test_foo.py
            test_boo.py
            test_meh.py
For example, I'm importing foo.py in boo.py with import example.foo as f, and I'm running the tests with python3 -m pytest -s -v --cov tests from the root folder (project). The unit tests run smoothly, but when I try to run a single file with python3 example/boo.py I get an error:
ModuleNotFoundError: No module named 'example'

Modules inside a package aren't really meant to be run directly (with some exceptions).
But you can set PYTHONPATH before running the module if you really want. For a one-off, use e.g.
PYTHONPATH=$(pwd) python3 example/boo.py
An alternative is to use relative imports: from . import foo as f inside boo.py. But that still implies that modules shouldn't really be run.
To elaborate a bit more:
A module should be imported, not run like a script. That is what a module is for. If, for some reason, you really, really feel you need to execute a module, then 1/ reconsider, 2/ rewrite your module, 3/ wrap a script around that module by calling the necessary function(s) inside the module (and keep the script itself relatively short).
Note that setuptools already has this functionality through entry points.
A simpler alternative is to use a proper
if __name__ == '__main__':
    main()
block at the end of your module, where main() calls into your module's functionality, then execute the module using Python's -m switch:
python -m mypackage.mymodule
But, again, try and limit this functionality.
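Applied to the layout above, a minimal sketch of boo.py following that pattern (do_foo is a hypothetical name standing in for whatever foo.py actually defines):
# example/boo.py
from . import foo as f

def main():
    # call into the module's functionality; f.do_foo is a hypothetical name
    print(f.do_foo())

if __name__ == '__main__':
    main()
Run it from the project root with python3 -m example.boo.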

It's usually a problem with the module search path. You can force the path with the following, and the import should then work under all circumstances:
import sys
sys.path.append("/absolute/module/path")
import local_module
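If you'd rather not hard-code an absolute path, a common variant is to derive it from the file's own location. A sketch, assuming the script sits one directory below the project root (as example/boo.py does above):
import os
import sys

# append the project root (the parent of this file's directory) to the search path
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import example.foo as f  # now resolvable regardless of the working directory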

Related

Run python script from within the python project, which is a part of a big project (not only Python source code)

I have a big project:
main
    golang
        src
            file1.go
    python
        src
            file1.py
            file2.py
            __init__.py
    java
        src
            file1.java
    scripts
        script.py
        validator.sh
    venv
        bin
            pip
            python3
        pyyaml
        dateutil
The Python project uses the interpreter from:
venv/bin/python3
So anywhere inside
file1.py
file2.py
I can use imports:
import pyyaml
import dateutil
And this works when running from the CLI:
venv/bin/python3 python/src/file1.py
However, I wish to use some functions from file1.py inside file2.py,
and have a "relative reference" like this (inside file2.py):
from src.file1 import some_function
But with this kind of import, running the same way as before from the CLI fails with the error:
ModuleNotFoundError: No module named 'src'
What should I do? Note that I do have an __init__.py file.
When you do from src.file1, the import is resolved against your sys.path.
Usually your current working directory is the first element in sys.path.
Thus you need to cd to main/python and then run ../venv/bin/python3 src/file1.py to make the imports work.
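Alternatively, since src contains an __init__.py and is therefore a package, you can run the file as a module with the -m switch, which puts the current directory on sys.path. A sketch, using the same relative paths as above:
cd main/python
../venv/bin/python3 -m src.file2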
I've had similar issues and thus I have created an experimental, new import library: ultraimport.
It gives you more control over your imports and allows file system based imports.
You could then write in file2.py:
import ultraimport
file1 = ultraimport('__dir__/file1.py')
This will always work, no matter how you run your code.
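You could then call file1.some_function() just as you would after a regular import.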

How to make python file importable relatively and absolutely? I get either ModuleNotFoundError or ImportError

I have the following structure of python code:
.
├── my_main.py
└── my_pkg
    ├── my_dep.py
    └── my_script.py
Both my_main.py and my_script.py should be callable (i.e. have an if __name__ == '__main__' section):
my_main.py:
import my_pkg.my_script

if __name__ == '__main__':
    print(my_pkg.my_script.bar())
and my_script.py:
import my_dep

def bar():
    return my_dep.foo() + 1

if __name__ == '__main__':
    print(bar())
This imports my_dep.py, which looks like:
def foo():
    return 1
If you want to look at it all together, look here: https://github.com/ct2034/python_import_trouble
Problem:
If I run my_script.py, all works well.
But if I run my_main.py, I get:
ModuleNotFoundError: No module named 'my_dep'
If I change the import in my_script.py to from . import my_dep, my_main.py works.
But when I run my_script.py, I get:
ImportError: attempted relative import with no known parent package
How can I make both of them work?
Note: This is on Python 3.8
And sorry for the long-winded explanation. Was not able to make it any more concise. Hope it is understandable.
Both my_main.py and my_script.py should be callable (have an if __name__ == '__main__' section)
No they should not.
What you want to do is a non-Pythonic design, and even if you manage to achieve it with some work, you will be bitten later. And the later it happens, the worse it is, because it could break more things, including production code.
I strongly urge you to stick to the common rules:
modules can only be found if they are in sys.path
relative imports only make sense in packages
a module inside a package should never be called as a script
It will indeed force you to adapt your design, but it will work smoothly and could easily be packaged into a distribution.
My advice if you have multiple related modules is to design that as a package (with __init__.py files). You will then get natural relative imports.
If you additionally need to start scripts, you can either build wrapper scripts using the package modules or use a __main__.py file in your top-level package folder: it will be executed if you run python -m package. But please, please do not try to run the same file sometimes as a package module and sometimes as a script. Or at least do not expect me to help you down that path...
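As a sketch of the __main__.py approach, assuming my_pkg gets an __init__.py and my_script.py uses the relative import from . import my_dep:
# my_pkg/__main__.py
from . import my_script

print(my_script.bar())
Then python -m my_pkg (run from the directory containing my_pkg) executes it, and the relative imports inside the package resolve normally.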
I found a way to make it work. Serge is absolutely right in saying that this should be done differently altogether (https://stackoverflow.com/a/70594758/1493204).
But if you still want it, this is how to do it:
my_script.py:
if __name__ != '__main__':
    from . import my_dep

def bar():
    return my_dep.foo() + 1

if __name__ == '__main__':
    import my_dep
    print(bar())
Also here: https://github.com/ct2034/python_import_trouble/tree/bad_solution
Set up your code to use absolute imports like from my_pkg import my_dep. Then you can run the scripts like this:
my_main.py can be executed directly
my_script.py can be executed with python -m my_pkg.my_script
If this is meant to become an installable package, you can use the setup.py to create entry points or scripts, which will be installed into some system wide accessible bin directory anyway.
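For instance, a minimal setup.py sketch using a console_scripts entry point (the names are illustrative, and it assumes my_script exposes a main() function):
# setup.py
from setuptools import setup, find_packages

setup(
    name='my_pkg',
    version='0.1',
    packages=find_packages(),
    entry_points={
        'console_scripts': [
            'my-script = my_pkg.my_script:main',  # hypothetical main() in my_script.py
        ],
    },
)
After installation, the my-script command is available on the PATH.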

Resolving module conflict in python [duplicate]

Okay, the scenario is very simple. I have this file structure:
.
├── interface.py
└── pkg
    ├── __init__.py
    ├── mod1.py
    └── mod2.py
Now, these are my conditions:
mod2 needs to import mod1.
both interface.py and mod2 needs to be run independently as a main script. If you want, think interface as the actual program and mod2 as an internal tester of the package.
So, in Python 2 I would simply do import mod1 inside mod2.py and both python2 mod2.py and python2 interface.py would work as expected.
However, and this is the part I understand least: using Python 3.5.2, if I do import mod1, then I can run python3 mod2.py, but python3 interface.py throws: ImportError: No module named 'mod1' :(
So, apparently, Python 3 proposes to use import pkg.mod1 to avoid collisions against built-in modules. OK, if I use that I can run python3 interface.py; but then I can't run python3 mod2.py because: ImportError: No module named 'pkg'
Similarly, if I use the relative import
from . import mod1, then python3 interface.py works; but mod2.py says SystemError: Parent module '' not loaded, cannot perform relative import :( :(
The only "solution", I've found is to go up one folder and do python -m pkg.mod2 and then it works. But do we have to be adding the package prefix pkg to every import to other modules within that package? Even more, to run any scripts inside the package, do I have to remember to go one folder up and use the -m switch? That's the only way to go??
I'm confused. This scenario was pretty straightforward with python 2, but looks awkward in python 3.
UPDATE: I have upload those files with the (referred as "solution" above) working source code here: https://gitlab.com/Akronix/test_python3_packages. Note that I still don't like it, and looks much uglier than the python2 solution.
Related SO questions I've already read:
Python -- import the package in a module that is inside the same package
How to do relative imports in Python?
Absolute import module in same package
Related links:
https://docs.python.org/3.5/tutorial/modules.html
https://www.python.org/dev/peps/pep-0328/
https://www.python.org/dev/peps/pep-0366/
TLDR:
Run your code with python -m pkg.mod2.
Import your code with from . import mod1.
The only "solution", I've found is to go up one folder and do python -m pkg.mod2 and then it works.
Using the -m switch is indeed the "only" solution - it was already the only solution before. The old behaviour simply only ever worked out of sheer luck; it could be broken without even modifying your code.
Going "one folder up" merely adds your package to the search path. Installing your package or modifying the search path works as well. See below for details.
But do we have to be adding the package prefix pkg to every import to other modules within that package?
You must have a reference to your package - otherwise it is ambiguous which module you want. The package reference can be either absolute or relative.
A relative import is usually what you want. It saves writing pkg explicitly, making it easier to refactor and move modules.
# inside mod1.py
# import mod2 - this is wrong! It can pull in an arbitrary mod2 module
# these are correct, they uniquely identify the module
import pkg.mod2
from pkg import mod2
from . import mod2
from .mod2 import foo # if pkg.mod2.foo exists
Note that you can always use <import> as <name> to bind your import to a different name. For example, import pkg.mod2 as mod2 lets you work with just the module name.
Even more, to run any scripts inside the package, do I have to remember to go one folder up and use the -m switch? That's the only way to go??
If your package is properly installed, you can use the -m switch from anywhere. For example, you can always use python3 -m json.tool.
echo '{"json":"obj"}' | python -m json.tool
If your package is not installed (yet), you can set PYTHONPATH to its base directory. This includes your package in the search path, and allows the -m switch to find it properly.
If you are in the executable's directory, you can execute export PYTHONPATH="$(pwd)/.." to quickly mount the package for import.
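Putting those pieces together for this layout (the path is illustrative):
export PYTHONPATH=/path/to/project   # the directory that contains pkg/
python -m pkg.mod2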
I'm confused. This scenario was pretty straightforward with python 2, but looks awkward in python 3.
This scenario was basically broken in python 2. While it was straightforward in many cases, it was difficult or outright impossible to fix in any other cases.
The new behaviour is more awkward in the straightforward case, but robust and reliable in any case.
I had a similar problem.
I solved it by adding
import sys
sys.path.insert(0, ".package_name")
into the __init__.py file in the package folder.

import from parent directory in script which lies in sub directory

Here's my directory structure:
my_package
|
+--__init__.py
|
+--setup.py
|
+--module.py
|
+--sub_package
    |
    +--__init__.py
    |
    +--script.py
The script script.py needs to import a function from module.py, and I need to be able to run script.py using the Python interpreter.
I know that Guido calls this an "anti-pattern". And I know you have 10 reasons why I shouldn't do this. I probably agree with most of them - really, I do. I wouldn't be asking this question if I could avoid this. So can we just skip the part where we go over why this is bad and move on to the solution? Thanks!
I also know that there are about 1000 other SO questions about this. I've probably read them all by now. Most of them were made obsolete by changes to Python's import system, and the rest just aren't right.
What I have tried:
Using from .module import my_function in script.py and then either running python script.py inside the directory sub_package or python sub_package/script.py inside the directory my_package. Either way, I get the error:
SystemError: Parent module '' not loaded, cannot perform relative import
Using from module import my_function or from my_package.module import my_function and running script.py as above (either from sub_package or my_package). I get:
ImportError: No module named 'module'
(or similarly with my_package instead of module)
Running python -m sub_package/script from the directory my_package or python -m my_package/sub_package/script from the parent directory of my_package. I get:
No module named sub_package/script
(or similarly with my_package/sub_package/script)
Is there anything else I should be trying? I would really rather avoid messing with sys.path or PYTHONPATH for a whole host of reasons.
I believe you can just say
import module
from within script.py if you do the following:
From your terminal, in the my_package directory, run:
$ python setup.py install
This will allow you to make the import statement that you want. The one issue that I have found with this method is that you will have to call
$ python setup.py install
every time you make a change to module.py, but if you are OK with that, this method should work.
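Note that an editable install avoids having to reinstall after every change:
$ pip install -e .
(run from the directory containing setup.py).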
In fact, the Python interpreter looks for modules in sys.path. That means that if you add your module's directory (in this example, somepath/my_package) to sys.path, Python will find it with import module.
So how do you achieve this?
If your directory's absolute path never changes, you can simply do:
sys.path.append(yourAbsPath)
But sometimes the path of the directory changes while the relative path does not, so I recommend the following; add it to the beginning of your script.py:
import sys
import os

# my_package is the parent of the directory containing this file (sub_package)
my_packagePath = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(my_packagePath)

import module
# from module import yourFunction  # replace yourFunction with your real function
Then, on the command line, run Python with your script's absolute path: python /absolute/path/to/script.py.

Python importing works from one folder but not another

I have a project directory that is set up in the following way:
root
    modules
        __init__.py
        module1.py
        moduleClass
            __init__.py
            moduleClass1.py
            moduleClass2.py
    scripts
        runTests.py
    tests
        __init__.py
        test1.py
        test2.py
    run.sh
In runTests.py I have the following import statements:
import modules.module1
import modules.moduleClass.moduleClass2
import tests.test1
import tests.test2
The first two import statements work fine, but the second two give me the errors ImportError: No module named test1 and ImportError: No module named test2. I can't see what is different between the tests and modules directories.
I'd be happy to provide more information as needed.
When you run a script, Python adds the script's containing directory (here, scripts/) to sys.path. If your modules don't appear in sys.path any other way, that means Python may not be able to find them at all.
The usual solution is to put your scripts somewhere in your module hierarchy and "run" them with python -m path.to.module. But in this case, you could just use an existing test runner: Python comes with python -m unittest discover, or you might appreciate something fancier like py.test (pip install --user pytest).
The problem turned out to be that Python didn't like the folder name tests; renaming it to unit_tests solved the problem (most likely another module named tests elsewhere on sys.path was shadowing it).
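A quick way to check which tests module Python actually resolves is:
python -c "import tests; print(tests.__file__)"
If the printed path is not your project's tests directory, another module of the same name is shadowing it.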
