I used easy_install to install pytest on a Mac and started writing tests for a project with a file structure like so:
repo/
|-- app.py
|-- settings.py
|-- models.py
|-- tests/
    |-- test_app.py
Run py.test while in the repo directory, and everything behaves as you would expect.
But when I try that same thing on either Linux or Windows (both have pytest 2.2.3 on them), it barks whenever it hits its first import of something from my application path. For instance, from app import some_def_in_app.
Do I need to be editing my PATH to run py.test on these systems?
I'm not sure why py.test does not add the current directory to sys.path itself, but here's a workaround (to be executed from the root of your repository):
python -m pytest tests/
It works because Python adds the current directory to sys.path for you.
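For instance, with the layout from the question, a minimal test like the following (some_def_in_app is the hypothetical name from the question) imports cleanly when launched via python -m pytest tests/ from the repo root:

# tests/test_app.py -- minimal sketch
from app import some_def_in_app  # resolvable because `python -m pytest` puts the repo root on sys.path

def test_some_def_in_app_is_importable():
    assert callable(some_def_in_app)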
Recommended approach for pytest>=7: use the pythonpath setting
Recently, pytest has added a new core plugin that supports sys.path modifications via the pythonpath configuration value. The solution is thus much simpler now and doesn't require any workarounds anymore:
pyproject.toml example:
[tool.pytest.ini_options]
pythonpath = [
"."
]
pytest.ini example:
[pytest]
pythonpath = .
The path entries are calculated relative to the rootdir, thus . adds the repo directory to sys.path in this case.
Multiple path entries are also allowed: for a layout
repo/
├── src/
│   └── lib.py
├── app.py
└── tests
    ├── test_app.py
    └── test_lib.py
the configuration
[tool.pytest.ini_options]
pythonpath = [
".", "src",
]
or
[pytest]
pythonpath = . src
will add both app and lib modules to sys.path, so
import app
import lib
will both work.
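As a sketch of what a test can then do (nothing here beyond the imports is prescribed; the assertion is only illustrative):

# tests/test_lib.py -- sketch
import app  # found via pythonpath = "."
import lib  # found via pythonpath = "src"

def test_modules_are_importable():
    assert app is not None and lib is not None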
Original answer (not recommended for recent pytest versions; use for pytest<7 only): conftest solution
The least invasive solution is adding an empty file named conftest.py in the repo/ directory:
$ touch repo/conftest.py
That's it. No need to write custom code to mangle sys.path, to remember to drag PYTHONPATH along, or to place __init__.py into dirs where it doesn't belong (using python -m pytest as suggested in Apteryx's answer is a good solution though!).
The project directory afterwards:
repo
├── conftest.py
├── app.py
├── settings.py
├── models.py
└── tests
    └── test_app.py
Explanation
pytest looks for conftest modules during test collection to gather custom hooks and fixtures, and in order to import the custom objects from them, pytest adds the parent directory of conftest.py to sys.path (in this case, the repo directory).
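If you want to see this for yourself, a throwaway diagnostic sketch in that conftest.py can print what happened (normally you would leave the file empty):

# repo/conftest.py -- diagnostic sketch only; an empty file is all that is actually needed
import os
import sys

repo_dir = os.path.dirname(os.path.abspath(__file__))
print("repo dir:", repo_dir)
print("on sys.path:", repo_dir in sys.path)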
Other project structures
If you have a different project structure, place conftest.py in the package root dir (the one that contains packages but is not a package itself, so does not contain an __init__.py), for example:
repo
├── conftest.py
├── spam
│   ├── __init__.py
│   ├── bacon.py
│   └── egg.py
├── eggs
│   ├── __init__.py
│   └── sausage.py
└── tests
    ├── test_bacon.py
    └── test_egg.py
src layout
Although this approach can be used with the src layout (place conftest.py in the src dir):
repo
├── src
│   ├── conftest.py
│   ├── spam
│   │   ├── __init__.py
│   │   ├── bacon.py
│   │   └── egg.py
│   └── eggs
│       ├── __init__.py
│       └── sausage.py
└── tests
    ├── test_bacon.py
    └── test_egg.py
beware that adding src to sys.path defeats the purpose and benefits of the src layout! You will end up testing the code from the repository and not the installed package. If you need to do it, maybe you don't need the src dir at all.
Where to go from here
Of course, conftest modules are not just some files to help with source code discovery; they are where all the project-specific enhancements of the pytest framework and the customization of your test suite happen. pytest has a lot of information on conftest modules scattered throughout its docs; start with conftest.py: local per-directory plugins
Also, SO has an excellent question on conftest modules: In py.test, what is the use of conftest.py files?
I had the same problem. I fixed it by adding an empty __init__.py file to my tests directory.
Yes, the source folder is not in Python's path if you cd to the tests directory.
You have two choices:
Add the path manually to the test files. Something like this:
import sys, os

# add the parent directory (the source folder) to sys.path
myPath = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, myPath + '/../')
Or run the tests with the env var PYTHONPATH=../.
Run pytest itself as a module with:
python -m pytest tests
This happens when the project hierarchy is, for example, package/src and package/tests, and in tests you import from src. Executing as a module will consider imports as absolute rather than relative to the execution location.
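As an illustration (the module and function names are hypothetical), with package/src/lib.py and package/tests/test_lib.py, running python -m pytest tests from the package root lets the test use an absolute import:

# tests/test_lib.py -- sketch; works under Python 3, where src can behave as a namespace package
from src.lib import do_something  # hypothetical function defined in src/lib.py

def test_do_something():
    assert do_something is not None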
You can run it with PYTHONPATH set to the project root:
PYTHONPATH=. py.test
Or install the package in editable mode with pip:
pip install -e . # install package using setup.py in editable mode
I had the same problem in Flask.
When I added:
__init__.py
to the tests folder, the problem disappeared :)
Probably the application couldn't recognize the tests folder as a module.
I created this as an answer to your question and my own confusion. I hope it helps. Pay attention to PYTHONPATH in both the py.test command line and in the tox.ini.
https://github.com/jeffmacdonald/pytest_test
Specifically: You have to tell py.test and tox where to find the modules you are including.
With py.test you can do this:
PYTHONPATH=. py.test
And with tox, add this to your tox.ini:
[testenv]
deps = -r{toxinidir}/requirements.txt
commands = py.test
setenv =
    PYTHONPATH = {toxinidir}
I fixed it by removing the top-level __init__.py in the parent folder of my sources.
I started getting weird ConftestImportFailure: ImportError('No module named ...') errors when I had accidentally added an __init__.py file to my src directory (which was not supposed to be a Python package, just a container for all the source).
It is a bit of a shame that this is an issue in Python... But just adding this environment variable is the most comfortable way, IMO:
export PYTHONPATH=$PYTHONPATH:.
You can put this line in your .zshrc or .bashrc file.
I was having the same problem when following the Flask tutorial and I found the answer on the official Pytest documentation.
It's a little shift from the way I (and I think many others) are used to doing things.
You have to create a setup.py file in your project's root directory with at least the following two lines:
from setuptools import setup, find_packages
setup(name="PACKAGENAME", packages=find_packages())
where PACKAGENAME is your app's name. Then you have to install it with pip:
pip install -e .
The -e flag tells pip to install the package in editable or "develop" mode. So the next time you run pytest it should find your app in the standard PYTHONPATH.
I had a similar issue. pytest did not recognize a module installed in the environment I was working in.
I resolved it by also installing pytest into the same environment.
Also if you run pytest within your virtual environment make sure pytest module is installed within your virtual environment. Activate your virtual environment and run pip install pytest.
For me the problem was the tests.py file generated by Django along with the tests directory. Removing tests.py solved the problem.
I got this error because I used relative imports incorrectly. In the OP example, test_app.py should import functions using e.g.
from repo.app import *
No matter how liberally __init__.py files are scattered around the file structure, the following does not work and creates the kind of ImportError seen, unless the files and test files are in the same directory:
from app import *
Here's an example of what I had to do with one of my projects. This is my project structure:
microbit/
microbit/activity_indicator/activity_indicator.py
microbit/tests/test_activity_indicator.py
To be able to access activity_indicator.py from test_activity_indicator.py I needed to:
start test_activity_indicator.py with the correct import:
from microbit.activity_indicator.activity_indicator import *
put __init__.py files throughout the project structure:
microbit/
microbit/__init__.py
microbit/activity_indicator/__init__.py
microbit/activity_indicator/activity_indicator.py
microbit/tests/__init__.py
microbit/tests/test_activity_indicator.py
According to a post on Medium by Dirk Avery (and supported by my personal experience) if you're using a virtual environment for your project then you can't use a system-wide install of pytest; you have to install it in the virtual environment and use that install.
In particular, if you have it installed in both places then simply running the pytest command won't work because it will be using the system install. As the other answers have described, one simple solution is to run python -m pytest instead of pytest; this works because it uses the environment's version of pytest. Alternatively, you can just uninstall the system's version of pytest; after reactivating the virtual environment the pytest command should work.
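A quick way to confirm which interpreter and pytest you are actually using (a generic diagnostic sketch, run inside the activated virtual environment):

# check_env.py -- diagnostic sketch
import sys
import pytest

print(sys.executable)   # should point into the virtual environment
print(pytest.__file__)  # should point into the venv's site-packages, not the system install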
I was getting this error due to something even simpler (you could even say trivial). I hadn't installed the pytest module. So a simple apt install python-pytest fixed it for me.
'pytest' would have been listed in setup.py as a test dependency. Make sure you install the test requirements as well.
Since no one has suggested it, you could also pass the path to the tests in your pytest.ini file:
[pytest]
...
testpaths = repo/tests
See documentation: https://docs.pytest.org/en/6.2.x/customize.html#pytest-ini
Side effect for Visual Studio Code: it should pick up the unit test in the UI.
We have fixed the issue by adding the following environment variable.
PYTHONPATH=${PYTHONPATH}:${PWD}/src:${PWD}/test
As pointed out by Luiz Lezcano Arialdi, the correct solution is to install your package as an editable package.
Since I am using Pipenv, I thought about adding to his answer a step-by-step guide on how to install the current path as an editable package with Pipenv, allowing you to run pytest without the need for any path-mangling code or loose files.
You will need to have the following minimal folder structure (documentation):
package/
    package/
        __init__.py
        module.py
    tests/
        module_test.py
    setup.py
setup.py mostly has the following minimum code (documentation):
import setuptools

setuptools.setup(
    name='package',  # Change to your package name
    packages=setuptools.find_packages()
)
Then you just need to run pipenv install --dev -e . and Pipenv will install the current path as an editable package (the --dev flag is optional) (documentation).
Now you should be able to run pytest without problems.
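A test can then import the installed package directly; here is a sketch using the hypothetical names from the layout above:

# tests/module_test.py -- sketch
from package import module  # resolvable because the package was installed in editable mode

def test_module_is_importable():
    assert module is not None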
If this pytest error appears not for your own package, but for a Git-installed package in your package's requirements.txt, the solution is to switch to editable installation mode.
For example, suppose your package's requirements.txt had the following line:
git+https://github.com/foo/bar.git
You would instead replace it with the following:
-e git+https://github.com/foo/bar.git#egg=bar
If nothing works, make sure your test_module.py sits under the directory that mirrors the correct src directory.
Sometimes it will give ModuleNotFoundError not because modules are misplaced or because export PYTHONPATH="${PWD}:${PYTHONPATH}" is not working, but because test_module.py is placed in the wrong directory under the tests folder.
The tests tree should mirror the source tree as a recursive 1-to-1 mapping; in addition, the root folder should be named "tests" and each file that contains test code should have a name starting with "test_",
for example,
./nlu_service/models/transformers.py
./tests/models/test_transformers.py
This was my experience.
Very often the tests were interrupted because a module could not be imported.
After research, I found out that the system was looking for the file in the wrong place, and that the problem can easily be overcome by copying the file containing the module into the same folder as stated, so that it can be imported properly.
Another solution would be to change the import declaration and show MutPy the correct path of the unit. However, because multiple units can have this dependency, which would mean committing changes in their declarations as well, we preferred to simply move the unit to the folder.
My solution:
Create the conftest.py file in the test directory containing:
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.realpath(__file__)) + "/relative/path/to/code/")
This will add the folder of interest to the Python interpreter path without modifying every test file, setting an environment variable, or messing with absolute/relative paths.
The Setup
OS: Ubuntu 20.04
Python: 3.8.5 | pip: 20.0.2 | venv
Repo
.
├── build
├── dist
├── source.egg-info
├── source
├── readme.md
├── requirements.txt
├── setup.py
└── venv
source dir
.
├── config
├── examples
├── script.py
├── __init__.py
├── tests
└── utils
The important directories within the source directory are config, which contains a few .env and .json files; and utils, which is a package that contains a sub-package called config.
Running script.py, which references config and imports modules from utils, is how the CLI app is started. Ideally when it is run, it should load a bunch of environment variables, create some command aliases and display the application's prompt. (After which the user can start working within that shell.)
I created a wheel to install this application. The setup.py contains an entry point as follows:
entry_points={
'console_scripts': [
'script=source.script:main'
]
}
The Problem
I pip installed the wheel in a test directory with its own virtual environment. When I go to the corresponding site-packages directory and run python script.py, the CLI loads properly with the information about the aliases etc. However, when I simply run script (the entry point) from the root directory of the environment, the shell loads but I don't see any of the messages about the aliases etc., and some of the functionality that depends on the utils package isn't available either.
What could I be doing wrong? How can I make the command work as if it was running with all the necessary packages available?
Other information that may be useful
site-packages has copies of config and utils
config is included in the package as part of the package_data parameter in setup.py as ['./config/*.env', './config/*.json']
All import statements begin from source, i.e. from source.utils.config import etc.
which script gives me the location as venv/bin/script, but that bin directory does not have the packages. (Which is expected, I think.)
This is my folder structure:
.
├── main.py
├── formats
│ ├── __init__.py
│ └── writer.py
└── misc
├── __init__.py
└── util.py
In main.py, I can import util.py using:
from misc.util import sth
However, I can't import util.py in writer.py, using the above statement, and this command:
python formats/writer.py
Now the simplest solution is to mess with PYTHONPATH: a simple export PYTHONPATH=. will do it. However, I don't like doing so, and I don't like relative imports. What are my options now?
The import mechanism is based on PYTHONPATH.
When you run python main.py, then the directory containing main.py is in PYTHONPATH, so all other packages there are importable as well.
When you run python formats/writer.py, then the formats directory is in PYTHONPATH and its parent directory is not, so you cannot import modules and packages which are not in formats.
What you can do is run the writer module, but with the root directory in PYTHONPATH, and you can do that without even messing with environment variables:
cd /directory/in/which/main.py/is
python -m formats.writer
Unlike python formats/writer.py, which changes PYTHONPATH and runs writer.py, this keeps the default PYTHONPATH (current directory) and tells Python to look within that path for a module named formats.writer and run that as the main module.
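So, as a sketch (sth stands for whatever misc/util.py actually defines), writer.py can keep the same import as main.py, as long as it is launched with -m from the project root:

# formats/writer.py -- sketch
from misc.util import sth  # resolves because the project root stays on sys.path

def write():
    return sth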
My project hierarchy (Python 3.5):
/home/pavarine/Projects/brp (project root)
config.py
main.py
lib/
Class1.py
I'm trying to dynamically transform my project (the brp folder) into a Python package and call its modules from wherever I want, like this:
Execution from 'main.py':
sys.path.insert(0, "/home/pavarine/Projects/brp")
print('\n'.join(sys.path))
That gives me:
/home/pavarine/Projects/brp
/home/pavarine/Projects/brp
/usr/lib/python35.zip
/usr/lib/python3.5
/usr/lib/python3.5/plat-x86_64-linux-gnu
/usr/lib/python3.5/lib-dynload
/home/pavarine/.local/lib/python3.5/site-packages
/usr/local/lib/python3.5/dist-packages
/usr/lib/python3/dist-packages
where I can clearly see that my project's root path is now inside sys.path, but when I try to import config.py I get an error:
Execution from 'main.py':
from brp import config
Results in:
ImportError: No module named 'brp'
What am I doing wrong?
You need to insert the parent directory into the python path.
However, please don't do this. Manipulating the python path from inside scripts is possibly dangerous and using an absolute path makes it unportable.
For your project, create another folder called brp and create a simple setup.py there.
It should look like this
/home/pavarine/Projects/brp (project root)
    setup.py
    brp/
        __init__.py  # can be empty
        config.py
        main.py
        lib/
            __init__.py  # can be empty
            Class1.py
For the start, setup.py can be as simple as
from setuptools import setup, find_packages

setup(
    name='brp',
    packages=find_packages(),
)
You can then install your package with pip install . from the root folder and use it anywhere on your system without manipulating the sys.path.
If you want, you can use pip's developer mode: pip install -e .; this will just create symlinks, so that changes in your project directory take effect directly without needing to reinstall the package.
Usually you structure your project in one of two ways.
You have one module in your root folder
brp/
    brp.py
Or you have multiple modules which you put into one subfolder:
mymod/
    mymod/
        __init__.py
        a.py
        b.py
The simplest solution for you is to add a brp subfolder in the existing brp and move everything in that subfolder. Resulting in:
brp/
    brp/
        __init__.py
        config.py
        main.py
        lib/
            __init__.py
            Class1.py
The __init__.py files are there so Python knows that these are packages and submodules. For more information there are several guides, including this one.
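Once the package is installed (pip install . or pip install -e .), the import from the question works from anywhere, without touching sys.path; a sketch:

# any script, anywhere on the system
from brp import config        # the import that previously failed
from brp.lib import Class1    # imports the Class1 module; adjust to whatever it defines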
I have a GUI app that another developer wrote that I am trying to turn into a conda package that will install a desktop icon on the desktop that users can then launch seamlessly.
Below is the folder structure and the code that I can share:
Documents/
└── project/
    ├── bld.bat
    ├── meta.yaml
    ├── setup.py
    ├── setup.cfg
    └── mygui/
        ├── MainGUI.py
        ├── __init__.py
        ├── __main__.py
        └── images/
            └── icon.ico
Documents\project\bld.bat:
python setup.py install install_shortcuts
if errorlevel 1 exit 1
Documents\project\meta.yaml:
package:
  name: mygui
  version: 1.2.3

source:
  path: ./

build:
  number: 1
  string: py{{ CONDA_PY }}_{{ ARCH }}

requirements:
  build:
    - python 2.7.13
    - pyvisa 1.4
    - setuptools
    - setuptools-shortcut
    - pydaqmx
    - pmw
    - matplotlib
    - pyserial
    - pil
  run:
    - python 2.7.13
    - pyvisa 1.4
    - pydaqmx
    - pmw
    - matplotlib
    - pyserial
    - pil

about:
  license:
  summary: My GUI application
Documents\project\setup.py:
from setuptools import setup, find_packages

setup(
    name='mygui',
    version='1.2.3',
    author='Me',
    author_email='me@myemail.com',
    description=(
        "An App I wrote."
    ),
    long_description="Actually, someone else wrote it but I'm making the conda package.",
    packages=find_packages(),
    package_data={
        'mygui': ['images/*ico'],
    },
    entry_points={
        'gui_scripts': [
            'MyApp = mygui.__main__:main'
        ],
    },
    install_requires=['pyvisa==1.4', 'pmw', 'pydaqmx', 'matplotlib', 'pyserial', 'pil']
)
Documents\project\setup.cfg:
[install]
single-version-externally-managed=1
record=out.txt
[install_shortcuts]
iconfile=mygui/images/icon.ico
name=MyApp
group=My Custom Apps
desktop=1
Documents\project\mygui\__main__.py:
from MainGUI import main

if __name__ == '__main__':
    main()
The original GUI developer had a code block that went like:
if __name__ == '__main__':
    <code here>
so I took all the code where <code here> would be and cut/pasted it into:
def main():
    <code here>

if __name__ == '__main__':
    main()
all inside the MainGUI.py file. I cannot share the specifics of the code. But it works as I'll describe below.
When I open up my code in PyCharm and hit run or debug in a conda environment with all the packages listed in the meta.yaml file the application works just fine with no warnings or errors. However, when I run conda build, upload to the anaconda channel, and then install on the machine, the desktop icon gets created but the application won't run when I click on it.
Is there something wrong in my setup files? How can I debug the reason why the application fails? I don't see any command window or output of any kind and PyCharm doesn't complain so it must be something after the application gets made.
Update: This is my first time creating a conda package that installs itself as an app like this, and I used a colleague's setup.py files as a template. I was curious whether the conda package he created on one of his projects was structurally different from the conda package coming out of my conda-build, and it is. If I take that tar.bz file and unzip it, this is the structure that I get:
mygui-1.2.3-py27_32/
├── info/
│   ├── about.json
│   ├── files
│   ├── has_prefix
│   ├── index.json
│   └── paths.json
├── Lib/
│   └── site-packages/
│       └── mygui-1.2.3-py2.7.egg-info/
│           ├── dependency_links.txt
│           ├── entry_points.txt
│           ├── PKG-INFO
│           ├── requires.txt
│           ├── SOURCES.txt
│           └── top_level.txt
├── Menu/
│   ├── mygui.ico
│   └── mygui_menu.json
└── Scripts/
    ├── MyApp.exe
    ├── MyApp.manifest
    └── MyApp.pyw
My colleague gets the same structure, but he also gets a directory called Lib/site-packages/mygui/, for example, which contains the source code in .py and .pyc files and directories. Why is my package not getting these source files, and could this be the reason my application won't launch? I also don't see any of the data files which I've indicated in my setup.py file (the *.ico files).
I was finally able to get this app made where it would install the shortcuts on the desktop and include the source code.
The problem was with the imports. Since the original source code was written YEARS ago, it didn't use absolute imports.
I had to go through and make sure
from __future__ import (
    unicode_literals,
    print_function,
    division,
    absolute_import
)
was at the top of every file that made imports, and then also change the relative imports to absolute imports. In the root __init__.py file, however, I left the relative imports. Also, another thing I was doing wrong: in one version of my setup.py I was including these four __future__ imports. Don't do that, or Python will complain about unicode_literals. I just left them out of setup.py and it was fine.
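As a hedged illustration of the kind of change involved (the package name mygui is from this question; whether a given file needed it depended on its imports):

# mygui/__main__.py -- before (implicit relative import, Python 2 only):
#     from MainGUI import main
# after switching to absolute imports:
from __future__ import absolute_import

from mygui.MainGUI import main

if __name__ == '__main__':
    main()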
To debug the conda package and find more import errors, I would do the following:
1. Test the code in PyCharm by running __main__.py.
2. If that worked, build the conda package.
3. Install the conda package.
4. In a command window, run python "C:\Miniconda2\envs\myenv\Scripts\MyApp-script.pyw". This would give me the next error that PyCharm did not.
I would then return to the source code, make the necessary change, and repeat steps 1-4 until the program launched from the desktop icon.