I created a project that runs fine locally, but fails with ModuleNotFoundError when deployed to and installed from PyPI. The link contains a very small sample and run/deploy instructions. My original thought was that it had to do with inheritance, but it seems more basic.
Traceback (most recent call last):
File "/Users/val/python/vscode/inheritance/venv/bin/inheritance-run", line 5, in <module>
from inheritance.run import start
ModuleNotFoundError: No module named 'inheritance'
Your setup.py has:
packages=find_packages(),
This will find packages (directories with __init__.py files) in the same directory as your setup.py file, but you don't have any. Instead, you have a base.py file.
If you want to be able to import inheritance, you should move this into an __init__.py file in a new directory named inheritance, so you have something like:
.
├── inheritance
│ └── __init__.py
└── setup.py
Then setuptools will find this package and include it as an importable package in your project.
FYI, it's usually preferable to use a src-based layout. See https://github.com/pypa/sampleproject/ for a full example.
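For reference, a minimal setup.py for that layout might look roughly like this (a sketch, not the project's actual file; the inheritance-run script name and the inheritance.run:start target are inferred from the traceback above, so a run.py with a start() function would also need to live inside the inheritance package):
# setup.py (sketch)
from setuptools import setup, find_packages

setup(
    name="inheritance",
    version="0.1.0",
    packages=find_packages(),  # now picks up the `inheritance` package
    entry_points={
        "console_scripts": [
            # script name and target inferred from the traceback; adjust as needed
            "inheritance-run=inheritance.run:start",
        ],
    },
)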
I have a project hierarchy like the one below. When I run python src/bot/main I get no error, but when I run python -m src.bot.main I get an error. Why?
This is my file hierarchy:
MyProject
└── src
├── __init__.py
├── bot
│ ├── __init__.py
│ ├── main.py
│ └── sib1.py
└── mod
├── __init__.py
└── module1.py
This is the content of main.py:
import sys

if __name__ == "__main__":
    # sys.path should include the parent folder.
    print(sys.path, end="\n\n")
    # my problem is on this line.
    import sib1
    sib1.test()
The error:
Traceback (most recent call last):
File "/usr/local/Caskroom/miniconda/base/envs/test/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/Caskroom/miniconda/base/envs/test/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/me/Desktop/test_py/src/bot/main.py", line 16, in <module>
import sib1
ModuleNotFoundError: No module named 'sib1'
Some conclusions I've made so far:
Since the output of sys.path in both cases includes /Users/me/Desktop/test_py, the reason shouldn't be related to scope?
The output of sys.path for both python -m src.bot.main and python src/bot/main:
(test) ✔ me Desktop/test_py % python -m src.bot.main
['/Users/me/Desktop/test_py', '/usr/local/Caskroom/miniconda/base/envs/test/lib/python39.zip', '/usr/local/Caskroom/miniconda/base/envs/test/lib/python3.9', '/usr/local/Caskroom/miniconda/base/envs/test/lib/python3.9/lib-dynload', '/usr/local/Caskroom/miniconda/base/envs/test/lib/python3.9/site-packages']
I will try my best to clarify each of my confusions in Q&A form, organizing Brain's comments along the way:
Q1: When I run python src/bot/main, I get no error. Why?
sys.path will include the directory containing the file main.py, i.e. the interpreter will see the files in MyProject/src/bot, so:
import sib1
is (logically) equivalent to:
import /path/to/MyProject/src/bot/sib1.py
Hence, no error.
BUT this is effectively an IMPLICIT-RELATIVE-IMPORT (it relies on the script's directory being put on sys.path), and it should be used only in the main script. See Example.
Q2: While if I run python -m src.bot.main I got error. Why?
Now it's time to quote Brain's valuable (first) comment:
Using python -m src.bot.main tells Python that src is a top-level package. Everything below src in the directory structure will be considered submodules/subpackages of src. The proper name for sib1 under that organization is src.bot.sib1. No top-level module named sib1 exists as far as Python is concerned.
(emphasis by me)
So:
The -m option provides some context information for the file you're going to run (for details, I recommend reading the first two highly-upvoted answers in full).
Now every absolute import path that does not begin with src. will be treated as a built-in module or a third-party library installed in your virtual environment. Since I didn't install any package called sib1, I got the error. (Don't worry about the fact that the current directory is still included in sys.path; I guess this is for compatibility reasons. Remember that we DON'T want IMPLICIT-RELATIVE-IMPORTs in Python 3.)
You can use relative imports, BUT you should not go above src (since src is the top-level package in this execution) by prepending too many dots.
For example, this will work:
from . import sib1
from ..mod import module1  # `..` refers to MyProject/src
module1.hello()
sib1.test()
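Alternatively, following the quoted comment ("the proper name for sib1 under that organization is src.bot.sib1"), you can keep absolute imports and spell out the full path from the top-level package (a sketch; module1.hello() is carried over from the example above):
# main.py, still run as `python -m src.bot.main` from inside MyProject/
from src.bot import sib1
from src.mod import module1

module1.hello()
sib1.test()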
Finally, don't test your package by inserting lots of if __name__ == '__main__' blocks. Do it with proper tools:
If you only need to run submodules for testing purposes, consider using more robust testing tools like doctest (lightweight) or unittest (heavyweight).
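For example, a minimal unittest-based check might look like this (just a sketch; it assumes sib1.test() runs without raising, which is all the original post tells us):
# src/bot/test_sib1.py (hypothetical file), run from MyProject/ with:
#   python -m unittest src.bot.test_sib1
import unittest

from src.bot import sib1


class TestSib1(unittest.TestCase):
    def test_sib1_runs(self):
        # only checks that calling sib1.test() does not raise
        sib1.test()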
I have a Python project structure like this:
└── my_project
├── __init__.py (empty)
├── options.py
└── main.py
Inside main.py I import options like so:
from my_project.options import some_function
However (and this is the weird part: it only happens on one computer, running Python 3.9.1), when running main.py I get the following error:
Traceback (most recent call last):
File ".../my_project/main.py", line 1, in <module>
from my_project.options import some_function
ModuleNotFoundError: No module named 'my_project'
This used to work before, but has now for some reason broken. Do you have any idea why it might not want to work, and how I can fix it? It also works just fine when I install this program with pip - just not when run locally with my Python interpreter.
Check the PYTHONPATH environment variable, which holds the default paths to search for Python modules.
It must contain the root path of your project. You can find more info here.
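A quick way to see what the interpreter actually picks up is a diagnostic like the following (a sketch; run it with the same interpreter that fails):
# check_paths.py (hypothetical helper)
import os
import sys

print("PYTHONPATH:", os.environ.get("PYTHONPATH", "<not set>"))
print("sys.path entries:")
for entry in sys.path:
    print("  ", entry)
# the directory that contains the my_project package must appear in sys.path
# for `from my_project.options import some_function` to succeed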
Basically, I have two Python projects: one is located under myapp/screening and the other under myapp/server. I'm currently developing the server module and want to import functions from screening using myapp.screening.
My folder structure is as shown below:
myapp/
    screening/
        screening-env/
        myapp/
            __init__.py
            screening/
                __init__.py
                screening_task.py
                submodule1/
                    # __init__.py and sub module files
                submodule2/
                    # __init__.py and sub module files
                submodule3/
                    # __init__.py and sub module files
                tests/
                    # __init__.py and test scripts
        setup.py
    server/
        server-env/
        myapp/
            __init__.py
            server/
                __init__.py
                server_task.py
        tests/
            __init__.py
            server_test.py
I structured my project following this answer.
My setup.py is basically as below:
from setuptools import setup

setup(
    name='myapp-screening',
    version='0.1.0',
    packages=[
        'myapp.screening',
        'myapp.screening.submodule1',
        'myapp.screening.submodule2',
        'myapp.screening.submodule3'
    ],
)
I activated my server-env and installed the screening project by navigating to myapp/screening/ (the same directory as setup.py) and running python setup.py develop.
Finally, both server_test.py and server_task.py are as below:
from myapp.screening.screening_task import foo
foo()
When I run python -m myapp.server.server_task or python -m test.server_test I get:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'myapp.screening'
This error makes sense if I'm running python -m myapp.server.server_task, because the local myapp exists and might be shadowing the installed myapp that contains the screening modules.
Is there a way to import stuff from screening using from myapp.screening.screening_task import foo?!
So, after some more research I found this similar (in a way) question, which leads to "import python modules with the same name" and "How do I create a namespace package in Python?".
The answer for "importing modules with the same name" is not useful, since it says to rename one module or turn the project directory into a package.
The other question is exactly what I want. It basically talks about pkgutil, with which you can 'append' modules to a given namespace.
I understand and share some of the opinions against this technique in certain cases (such as this one), but as presented there, sometimes you want separate structures so you don't have to patch everything together even when you don't want everything.
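A minimal sketch of the pkgutil approach: both projects ship a myapp/__init__.py whose only content is the extend_path line, so the installed myapp.screening and the local myapp.server get stitched into one myapp namespace:
# myapp/__init__.py -- identical in the screening and the server project
# (pkgutil-style namespace package; this should be the only code in the file)
__path__ = __import__('pkgutil').extend_path(__path__, __name__)
On Python 3.3+, implicit namespace packages (simply omitting myapp/__init__.py in both projects) are another option, but then setup.py has to declare the packages accordingly (e.g. with find_namespace_packages).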
I have problems with Python nosetests.
When I try to run the command, I get an import error.
I checked that the module is correctly installed on my machine.
In fact, if I run the interpreter from the directory where I run nosetests,
I am able to import the module.
I checked that the problem is not limited to that module; other modules fail to import in the same way.
Where could the fix be?
Here is a possible traceback after I run nosetests:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
addr.filename, addr.module)
File "/Library/Python/2.7/site-packages/nose/importer.py", line 47, in importFromPath
return self.importFromDir(dir_path, fqname)
File "/Library/Python/2.7/site-packages/nose/importer.py", line 94, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
File "/Users/user-me/Desktop/bla/tests/blatest1", line 1, in <module>
import a_module as mo
ImportError: No module named a_module
but if I open the Python interpreter, I am able to import a_module.
Here is my directory structure:
ROOT
└── package
├── __init__.py
├── package1
└── tests
├── tests1
│ └── package1 -> ../../package1
└── tests2
Your problem has nothing to do with nose itself. nose doesn't perform any magic beyond what the Python interpreter does when looking up modules and packages.
So if we assume a directory structure like this
ROOT
`-- package
|-- __init__.py
`-- tests
`-- __init__.py
and try to run python -c "import package" - when does that work, and when does it fail?
It's simple: it works only if you invoke the command from within ROOT. Nowhere else is package known.
The reason is that Python keeps a list of paths (sys.path) whose entries mark roots for packages and modules. For a statement import package, Python iterates through all entries in sys.path and searches each one for either a package.py, a package directory with an __init__.py in it, or a few other cases (C extensions, new-style namespace packages).
So where does ROOT get into that list of paths? Simple: when started interactively or with -c, the Python interpreter adds the current working directory to the list of paths.
To sum it up: just call nose from within ROOT.
There are additional ways to add a path to sys.path, e.g. by installing the package into a virtualenv. Then you can import package from anywhere, as long as you use that venv's interpreter.
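If you want to check where (or whether) package would be found, a small diagnostic like this can help (a sketch; run it with the same interpreter and from the same directory you use for nosetests):
# diagnostic sketch
import sys

print(sys.path)  # does it contain ROOT?
try:
    import package
    print("package found at:", package.__file__)
except ImportError:
    print("package is not importable from here")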
I can't seem to get the nose testing framework to recognize modules beneath my test script in the file structure. I've set up the simplest example that demonstrates the problem. I'll explain it below.
Here's the package file structure:
./__init__.py
./foo.py
./tests
   ./__init__.py
   ./test_foo.py
foo.py contains:
def dumb_true():
    return True
tests/test_foo.py contains:
import foo
def test_foo():
    assert foo.dumb_true()
Both __init__.py files are empty.
If I run nosetests -vv in the main directory (where foo.py is), I get:
Failure: ImportError (No module named foo) ... ERROR
======================================================================
ERROR: Failure: ImportError (No module named foo)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python/site-packages/nose-0.11.1-py2.6.egg/nose/loader.py", line 379, in loadTestsFromName
addr.filename, addr.module)
File "/usr/lib/python/site-packages/nose-0.11.1-py2.6.egg/nose/importer.py", line 39, in importFromPath
return self.importFromDir(dir_path, fqname)
File "/usr/lib/python/site-packages/nose-0.11.1-py2.6.egg/nose/importer.py", line 86, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
File "/home/user/nose_testing/tests/test_foo.py", line 1, in <module>
import foo
ImportError: No module named foo
----------------------------------------------------------------------
Ran 1 test in 0.002s
FAILED (errors=1)
I get the same error when I run from inside the tests/ directory. According to the documentation and an example I found, nose is supposed to add all parent packages to the path as well as the directory from which it is called, but this doesn't seem to be happening in my case.
I'm running Ubuntu 8.04 with Python 2.6.2. I've built and installed nose manually (not with setup_tools) if that matters.
You've got an __init__.py in your top level directory. That makes it a package. If you remove it, your nosetests should work.
If you don't remove it, you'll have to change your import to import dir.foo, where dir is the name of your directory.
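Concretely, keeping the top-level __init__.py, the test would look something like this (a sketch; nose_testing is the directory name taken from the paths in the traceback, and nosetests then has to be run from the directory above it):
# tests/test_foo.py, with the top-level __init__.py kept in place
from nose_testing import foo


def test_foo():
    assert foo.dumb_true()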
Are you in a virtualenv? In my case, nosetests was the one in /usr/bin/nosetests, which was using /usr/bin/python. The packages in the virtualenv definitely won't be in the system path. The following fixed this:
source myvirtualenv/bin/activate
pip install nose
which nosetests
/home/me/myvirtualenv/bin/nosetests
To those of you finding this question later on: I get the import error if I don't have an __init__.py file in my tests directory.
My directory structure was like this:
./tests/
   ./test_some_random_stuff.py
If I ran nosetests:
nosetests -w tests
It would give the ImportError that everyone else is seeing. If I add a blank __init__.py file it works just fine:
./tests/
   ./__init__.py
   ./test_some_random_stuff.py
Another potential problem appears to be hyphens/dashes in the directory tree. I recently fixed a nose ImportError issue by renaming a directory from sub-dir to sub_dir.
Of course, if you have a syntax error in the module being imported, that will cause this. For me the problem reared its head when I had a backup of a tests file with a path like module/tests.bak.py in the same directory as tests.py. Also, to deal with the __init__ package/module problem in a Django app, you can run the following (in a bash/OSX shell) to make sure you don't have any __init__.pyc files lying around:
find . -name '*.pyc' -delete
I got this error message because I run the nosetests command from the wrong directory.
Silly, but happens.
I just ran into one more thing that might cause this issue: naming of tests in the form testname.test.py. That extra . confounds nose and leads to it importing things it should not. I suppose it may be obvious that using unconventional test naming conventions will break things, but I thought it might be worth noting.
For example, with the following directory structure, if you want to run nosetests in m1, m2 or m3 to test some functions in n.py, you should use from m2.m3 import n in test.py.
m1
└── m2
├── __init__.py
└── m3
├── __init__.py
├── n.py
└── test
└── test.py
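A sketch of what that test.py could contain (nothing is known about n.py's contents, so the test only checks that the import resolves):
# m1/m2/m3/test/test.py
from m2.m3 import n  # resolves when nosetests is run as described above


def test_n_is_importable():
    assert n is not None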
Just to complete the picture:
If you're struggling with structure like this:
project
├── m1
│   ├── __init__.py
│   ├── foo1.py
│   └── m2
│       ├── __init__.py
│       └── foo2.py
└── test
    ├── __init__.py
    └── test.py
And maybe you want to run tests from a path outside the project: include your project path in your PYTHONPATH.
export PYTHONPATH=$PYTHONPATH:$HOME/path/to/project
Paste it into your .profile. If you're using a virtual environment, paste it into the activate script in your venv.
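With the project root on PYTHONPATH, a test run from anywhere can use absolute imports; a minimal sketch (foo1.py and foo2.py contents aren't shown, so the test only verifies that the imports work):
# test/test.py -- assumes $HOME/path/to/project is on PYTHONPATH
from m1 import foo1
from m1.m2 import foo2


def test_modules_import():
    assert foo1 is not None
    assert foo2 is not None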