I just got set up to use pytest with Python 2.6. It has worked well so far with the exception of handling "import" statements: I can't seem to get pytest to respond to imports in the same way that my program does.
My directory structure is as follows:
src/
    main.py
    util.py
    test/
        test_util.py
    geom/
        vector.py
        region.py
        test/
            test_vector.py
            test_region.py
To run, I call python main.py from src/.
In main.py, I import both vector and region with
from geom.region import Region
from geom.vector import Vector
In vector.py, I import region with
from geom.region import Region
These all work fine when I run the code in a standard run. However, when I call "py.test" from src/, it consistently exits with import errors.
Some Problems and My Solution Attempts
My first problem was that, when running "test/test_foo.py", py.test could not "import foo.py" directly. I solved this by using the "imp" tool. In "test_util.py":
import imp
util = imp.load_source("util", "util.py")
This works great for many files. It also seems to imply that when pytest is running "path/test/test_foo.py" to test "path/foo.py", it is based in the directory "path".
However, this fails for "test_vector.py". Pytest can find and import the vector module, but it cannot locate any of vector's imports. The following imports (from "vector.py") both fail when using pytest:
from geom.region import *
from region import *
These both give errors of the form
ImportError: No module named [geom.region / region]
I don't know what to do next to solve this problem; my understanding of imports in Python is limited.
What is the proper way to handle imports when using pytest?
Edit: Extremely Hacky Solution
In vector.py, I changed the import statement from
from geom.region import Region
to simply
from region import Region
This makes the import relative to the directory of "vector.py".
Next, in "test/test_vector.py", I add the directory of "vector.py" to the path as follows:
import sys, os
sys.path.append(os.path.realpath(os.path.dirname(__file__)+"/.."))
This enables Python to find "../region.py" from "geom/test/test_vector.py".
This works, but it seems extremely problematic because I am adding a ton of new directories to the path. What I'm looking for is either
1) An import strategy that is compatible with pytest, or
2) An option in pytest that makes it compatible with my import strategy
So I am leaving this question open for answers of these kinds.
The issue here is that Pytest walks the filesystem to discover files that contain tests, but then needs to generate a module name that will cause import to load that file. (Remember, files are not modules.)
Pytest comes up with this test package name by finding the first directory at or above the level of the file that does not include an __init__.py file and declaring that directory to be the "basedir" for the module tree containing the module generated from this file. It then adds the basedir to sys.path and imports using the module name that will find that file relative to the basedir.
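For illustration only, here is a rough sketch in Python of the name-derivation logic just described; this is not pytest's actual code, just the idea:

import importlib
import os
import sys

def import_the_way_pytest_does(test_file):
    # Sketch only: walk upwards past every directory that contains an
    # __init__.py; the first directory without one is the "basedir".
    module_path = os.path.abspath(test_file)
    parts = [os.path.splitext(os.path.basename(module_path))[0]]
    basedir = os.path.dirname(module_path)
    while os.path.isfile(os.path.join(basedir, "__init__.py")):
        parts.insert(0, os.path.basename(basedir))
        basedir = os.path.dirname(basedir)
    sys.path.insert(0, basedir)                       # the basedir goes on sys.path
    return importlib.import_module(".".join(parts))   # dotted name relative to the basedir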
There are some implications of this of which you should beware:
The basedir may not match your intended basedir, in which case the module will have a name that doesn't match what you would normally use. E.g., what you think of as geom.test.test_vector will actually be named just test_vector during the Pytest run, because Pytest found no __init__.py in src/geom/test/ and so added that directory to sys.path.
You may run into module naming collisions if two files in different directories have the same name. For example, lacking __init__.py files anywhere, adding geom/test/test_util.py will conflict with test/test_util.py because both are imported as test_util, with both test/ and geom/test/ on the path.
The system you're using here, without explicit __init__.py modules, is having Python create implicit namespace packages for your directories. (A package is a module with submodules.) Ideally we'd configure Pytest with a path from which it would also generate this, but it doesn't seem to know how to do that.
The easiest solution here is simply to add empty __init__.py files to all of the subdirectories under src/; this will cause Pytest to import everything using package/module names that start with directory names under src/.
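For example, with those __init__.py files in place (and pytest run from src/), a test file can import the code under test by its package path. A minimal sketch, where the file path and the assertion are only placeholders:

# src/geom/test/test_vector.py  (sketch)
from geom.vector import Vector   # resolves because src/ becomes the basedir

def test_vector_is_importable():
    assert Vector is not None    # placeholder check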
The question How do I Pytest a project using PEP 420 namespace packages? discusses other solutions to this.
import looks in the following directories to find a module:
The home directory of the program. This is the directory of your root script. When you run pytest, your home directory is wherever pytest is installed (probably /usr/local/bin). It doesn't matter that you run it from your src directory; the location of the pytest executable determines the home directory. That is the reason why it doesn't find the modules.
PYTHONPATH. This is an environment variable. You can set it from the command line of your operating system. On Linux/Unix systems you can do this by executing export PYTHONPATH=/your/custom/path. If you want Python to find your modules from the test directory, you should include the src path in this variable.
The standard libraries directory. This is the directory where all your libraries are installed.
There is a less common option using a pth file.
sys.path is the result of combining the home directory, PYTHONPATH and the standard libraries directory. Modifying sys.path, as you are doing, is correct; it is something I do regularly. You could try using PYTHONPATH if you don't like messing with sys.path.
If you include an __init__.py file inside your tests directory, then when the program is looking to set a home directory it will walk 'upwards' until it finds a directory that does not contain an __init__.py file; in this case, src/.
From here you can import by saying:
from geom.region import *
You must also make sure that you have an __init__.py file in any other subdirectories, such as the other nested test directory.
I was wondering what to do about this problem too. After reading this post, and playing around a bit, I figured out an elegant solution. I created a file called "test_setup.py" and put the following code in it:
import sys, os
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
I put this file in the top-level directory (such as src). When pytest is run from the top-level directory, it will pick this file up along with the real test files, since its name starts with "test"; the file contains no tests, but it is still imported and executed.
The code appends the directory containing test_setup.py to the system path within the test environment. This is done only once, so the path does not end up cluttered with extra entries.
Then, from within any test, you can import modules relative to that top-level folder (such as import geom.region), and Python knows where to find them since the src directory was added to the path.
If you want to run a single test file (such as test_util.py) instead of all the files, you would use:
pytest test_setup.py test\test_util.py
This runs both the test_setup and test_util code so that the test_setup code can still be used.
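As a side note, the same sys.path tweak is often placed in a conftest.py at the top level instead of a pretend test file; pytest picks up conftest.py files automatically before collecting tests, so no extra file has to be passed on the command line. A minimal sketch of that variant:

# src/conftest.py  (sketch; an alternative to test_setup.py)
import os
import sys

# Make the directory containing this file (src/) importable,
# so tests can do e.g. `import geom.region`.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))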
I am very late to answer this question, but using Python 3.9 or 3.10 you just need to add an __init__.py file to the test folders.
When you add this file, Python interprets those folders as packages.
It would look like this:
src/
    main.py
    util.py
    test/
        __init__.py
        test_util.py
    geom/
        vector.py
        region.py
        test/
            __init__.py
            test_vector.py
            test_region.py
Then you just run pytest.
Sorry for my poor English.
Not the best solution, but maybe the fastest one:
cd path/python_folder
python -m pytest python_file.py
Running pytest via python -m pytest also inserts the current directory at the start of sys.path, which a plain pytest invocation does not, so imports in python_file.py can resolve against that folder.
Related
I'm trying to keep a data science project well-organized so I've created a directory inside my src directory called utils that contains a file called helpers.py, which contains some helper functions that will be used in many scripts. What is the best practice for how I should import func_name from src/utils/helpers.py into a file in a totally different directory, such as src/processing/clean_data.py?
I see answers to this question, and I've implemented a solution that works, but this feels ugly:
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.realpath(__file__)))))
Am I doing this right? Do I need to add this to every script that wants to import func_name, like train_model.py?
My current project folder structure:
myproject
    /notebooks
        notebook.ipynb
    /src
        /processing
            clean_data.py
        /utils
            helpers.py
        /models
            train_model.py
        __init__.py
Example files:
# clean_data.py
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.realpath(__file__)))))
from src.utils.helpers import func_name
func_name()
# helpers.py
def func_name():
    print("I'm a helper function.")
The correct way to do it is to use __init__.py, setup.py and the setuptools Python package:
myPackage/
    myPackage/
        __init__.py
    setup.py
This link has all the steps.
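For reference, a minimal sketch of what such a setup.py might contain; the name and version here are placeholders, and the linked page has the full steps:

# setup.py  (sketch)
from setuptools import setup, find_packages

setup(
    name="myPackage",           # placeholder
    version="0.1.0",            # placeholder
    packages=find_packages(),   # finds myPackage/ via its __init__.py
)

After that, running pip install -e . from the outer directory makes the package importable from anywhere in that environment.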
First of all, let me describe the difference between a Python module and a Python package so that we are both on the same page.
A module is a single .py file that is imported under one import statement and used.
import aModuleName
# Here 'aModuleName' is just a regular .py file.
A package, on the other hand, is a collection of modules in directories that give a package hierarchy. A package contains a distinct __init__.py file.
from aPackageName import aModuleName
# Here 'aPackageName' is a folder with an '__init__.py' file
# and 'aModuleName', which is just a regular .py file.
Therefore, when we have a project directory named proj-dir with the following structure:
proj-dir
--|--__init__.py
--package1
--|--__init__.py
--|--module1.py
--package2
--|--__init__.py
--|--module2.py
Notice that I've also added an empty __init__.py into proj-dir itself, which makes it a package too.
Now, if you want to import any Python object from module2 of package2 into module1 of package1, then the import statement in the file module1.py would be
from package2.module2 import object2
# if you were to import the entire module2 then,
from package2 import module2
I hope this simple explanation clarifies your doubts about Python's import mechanism and solves the problem. If not, do comment here.
First of all, let me clarify that importing an entire module when you are only going to use part of it is not a good idea. Instead, you can use from to import a specific function from a library/package. By doing this, you make your program more efficient in terms of memory and performance.
To know more refer these:
'import module' or 'from module import'
difference between import and from
Next, let us look into the solution.
Before starting off with the solution, let me clarify the use of the __init__.py file. It just tells the Python interpreter that the *.py files present there are importable, which means they are modules and are (or may be) part of a package.
So, if you have N subdirectories, you have to put an __init__.py file in all of those subdirectories so that they can also be imported. Inside an __init__.py file you can also add some additional information, such as which paths should be included, default functions, variables, scope, etc. To learn more, just search for information about the __init__.py file, or take some Python library and go through its __init__.py file. (Here lies the solution.)
More Info:
modules
Be pythonic
So, as stated by Sushant Chaudhary, your project structure should be like this:
proj-dir
--|--__init__.py
--package1
--|--__init__.py
--|--module1.py
--package2
--|--__init__.py
--|--module2.py
So now, if I put an __init__.py file under my directory like above, will it be importable and work fine?
Yes and no.
Yes:
If you are importing the modules within that project/package directory.
For example, in your case, you are importing package1.module1 in package2.module2 as from package1 import module1.
Here you have to make the base dir visible to the submodules. Why? The project will run fine if you run the module from its own directory, i.e. inside package2 as python module2.py, but it will throw ModuleNotFoundError if you run the module from some other directory, i.e. any path other than package2, for example from proj-dir as python package2/module2.py. That is what is happening in your case: you are running the module from proj-dir.
So How to fix this?
1- You have to append the basedir path to the system path in module2.py:
import sys
dir_path = "/absolute/path/to/proj-dir"
sys.path.insert(0, dir_path)
So that module2 will be able to find package1 (and module1 inside it).
2- You have to add all the submodule paths in the __init__.py file under proj-dir.
For example:
# __init__.py under lxml
# this is a package
def get_include():
    """
    Returns a list of header include paths (for lxml itself, libxml2
    and libxslt) needed to compile C code against lxml if it was built
    with statically linked libraries.
    """
    import os
    lxml_path = __path__[0]
    include_path = os.path.join(lxml_path, 'includes')
    includes = [include_path, lxml_path]
    for name in os.listdir(include_path):
        path = os.path.join(include_path, name)
        if os.path.isdir(path):
            includes.append(path)
    return includes
This is the __init__.py file of lxml (a Python library for parsing HTML/XML data). You can refer to the __init__.py file of any Python library that has submodules, e.g. os or sys. I've mentioned lxml here because I thought it would be easy for you to understand. You can also check the __init__.py file of other libraries/packages; each will have its own way of defining the paths for its submodules.
No
If you are trying to import modules from outside the directory, then you have to export the module path via an environment variable so that other modules can find them. This can be done by appending the absolute path of the base dir to PYTHONPATH.
To know more:
PATH variables in OS
PYTHONPATH variable
So, to solve your problem, include the paths to all the submodules in the __init__.py file under proj-dir and add /absolute/path/to/proj-dir to PYTHONPATH.
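As a small illustration of putting extra information in an __init__.py, here is a sketch that simply re-exports a submodule, using the proj-dir example names from above:

# package2/__init__.py  (sketch only)
from . import module2          # `import package2` now exposes package2.module2

__all__ = ["module2"]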
I hope this answer explains the usage of __init__.py and solves your problem.
On Linux, you can just add the path to the parent folder of your src directory to ~/.local/lib/python3.6/site-packages/my_modules.pth. See
Using .pth files. You can then import modules in src from anywhere on your system.
NB1: Replace python3.6 by any version of Python you want to use.
NB2: If you use Python2.7 (don't know for other versions), you will need to create __init__.py (empty) files in src/ and src/utils.
NB3: Any name.pth file is ok for my_modules.pth.
Yes, you can only import code from installed packages or from files in your working directory or subdirectories.
The way I see it, your problem would be solved if you had your module or package installed, like any other package one installs and then imports (numpy, xml, json, etc.).
I also have a package I constantly use in all my projects, utilities, and I know it's a pain with the importing.
Here is a description of how to package a Python application to make it pip-installable:
https://marthall.github.io/blog/how-to-package-a-python-app/
Navigate to your python installation folder
Navigate to lib
Navigate to site-packages
Make a new file called any_thing_you_want.pth
Type the path of the folder that contains your module (for example, .../src/utils) inside that file with your favorite text editor
Note: the ellipsis before src/utils will look something like: C:/Users/blahblahblah/python_folders/src... <- YOU DO NEED THIS!
This is a cheap way out, but it keeps code clean and is the least complicated. The downside is that the .pth file needs an entry for every folder your modules are in. Upside: works with Windows all the way up to Windows 10
I read various answers (about relative imports), but none of them worked for importing a simple class. I have this structure:
__init__.py (empty)
my_script.py
other_script.py
my_class.py
I want to use my_class.py in both scripts (my_script.py and other_script.py). Like all the answers suggest, I just use:
from .my_class import My_class
but I always get
SystemError: Parent module '' not loaded, cannot perform relative import
I am using Python 3.5 and PyCharm 2016.1.2. Do I need to configure the __init__.py? How can I import a simple class?
Edit
All the files are in the working directory. I just use PyCharm to run them, and I wasn't having any problems until I tried to import the class.
Ensure that your current working directory is not the one that contains these files. For example, if the path to __init__.py is spam/eggs/__init__.py, make sure you are working in the directory spam, or alternatively, that /path/to/spam is in sys.path and you are working in some third place. Either way, do not attempt to work in directory eggs. To test your code, you say import eggs.
The reasoning is as follows. Since you're using __init__.py you clearly want to treat this collection of files as a package. To work as a package, a directory must fulfill both the following criteria:
it contains a file called __init__.py
its parent directory is part of the search path (or the parent directory is your current working directory); in effect, a package directory must masquerade as a file, i.e. be findable in exactly the same way that a single-file module might be found.
If you're working inside the package directory you may not be fulfilling criterion 2. You can do a straightforward import my_class from there, certainly, but the "relative import" thing you're trying is a feature that only a fully-working package will support.
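A minimal sketch of the working setup, assuming the directory holding these files is named eggs and its parent is spam, as above (the class name comes from the question; its constructor arguments are assumed to be none):

# spam/eggs/my_script.py  (sketch)
from .my_class import My_class   # relative import works once eggs is treated as a package

obj = My_class()                 # assumed no-argument constructor

# Run it from the directory `spam` (or with spam on sys.path):
#     python -m eggs.my_script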
What exactly is the use of __init__.py? Yes, I know this file makes a directory into an importable package. However, consider the following example:
project/
    foo/
        __init__.py
        a.py
    bar/
        b.py
If I want to import a into b, I have to add the following statement:
sys.path.append('/path_to_foo')
import foo.a
This will run successfully with or without __init__.py. However, if there is no sys.path.append statement, a "no module" error will occur, with or without __init__.py. This makes it seem like only the system path matters, and that __init__.py does not have any effect.
Why would this import work without __init__.py?
__init__.py has nothing to do with whether Python can find your package. You've run your code in such a way that your package isn't on the search path by default, but if you had run it differently or configured your PYTHONPATH differently, the sys.path.append would have been unnecessary.
__init__.py used to be necessary to create a package, and in most cases, you should still provide it. Since Python 3.3, though, a folder without an __init__.py can be considered part of an implicit namespace package, a feature for splitting a package across multiple directories.
During import processing, the import machinery will continue to iterate over each directory in the parent path as it does in Python 3.2. While looking for a module or package named "foo", for each directory in the parent path:
If <directory>/foo/__init__.py is found, a regular package is imported and returned.
If not, but <directory>/foo.{py,pyc,so,pyd} is found, a module is imported and returned. The exact list of extensions varies by platform and whether the -O flag is specified. The list here is representative.
If not, but <directory>/foo is found and is a directory, it is recorded and the scan continues with the next directory in the parent path.
Otherwise the scan continues with the next directory in the parent path.
If the scan completes without returning a module or package, and at least one directory was recorded, then a namespace package is created.
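To see the namespace-package case from that quote in action, here is a small sketch (Python 3.3+; the directory names are hypothetical, and neither foo/ directory contains an __init__.py):

import sys

# Hypothetical layout:
#   /tmp/part1/foo/a.py
#   /tmp/part2/foo/b.py
sys.path.extend(["/tmp/part1", "/tmp/part2"])

import foo.a
import foo.b            # both resolve; `foo` is a single namespace package
print(foo.__path__)     # spans both directories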
If you really want to avoid __init__.py for some reason, you don't need to touch sys.path. Rather, create a module object and set its __path__ to a list of directories.
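And a minimal sketch of that last suggestion (the directory path is a placeholder):

import sys
import types

foo = types.ModuleType("foo")
foo.__path__ = ["/path_to_foo/foo"]   # directories in which to look for submodules
sys.modules["foo"] = foo

import foo.a                          # found in the listed directory, no __init__.py involved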
if I want to import a into b, I have to add following statement:
No! You'd just say: import foo.a. All this is provided you run the entire package at once using python -m main.module, where main.module is the entry point to your entire application. It imports all other modules, and the modules that import more modules will look for them from the root of this project. For instance, foo/bar/c.py would import foo/bar/b.py as foo.bar.b.
Then it seems that only the system path matters and __init__.py does not have any effect.
You need to modify sys.path only when you are importing modules from locations that are not in your project or in the places where Python looks for libraries. __init__.py not only makes a folder look like a package, it also does a few more things, like "exporting" objects to the outside world (__all__).
When you import something it has to either:
Retrieve an already loaded module or
Load the module that was imported
When you do import foo and python finds a folder called foo in a folder on your sys.path then it will look in that folder for an __init__.py to be considered the top level module.
(Note that if the package is not on your sys.path then you would need to append its location to be able to import it.)
If that is not present, it will look for an __init__.pyc version, possibly in the __pycache__ folder; if that is also missing, then that folder foo is not considered a loadable Python package. If no other options for foo are found, an ImportError is raised.
If you try deleting the __init__.pyc file as well, you will see that the initializer script for a package is indeed necessary.
I'm developing a system in Python that includes a calculation engine and a front end. I've split them up into two projects as the calculation engine can be used for other front ends as well.
I'm using Eclipse and PyDev. Everything works perfectly in Eclipse/PyDev, but as soon as I try to run it outside of PyDev (from command line) I get importing errors. I've done quite a bit of research to find the problem, but I just don't see a solution that works nicely. I believe that PyDev modifies the Python path.
In my project layout below I have two packages (package1 and tests) within one project (Calculations). I can't seem to import anything from package1 in tests. I also have another project (Frontend). Here I also can't import anything from package1.
What I want to understand is the proper way of calling my script/tests files from the command line? Both for two separate projects and two packages in the same project. I assume it would be similar to how PyDev does it. So far I think I have the following options:
Create python code to append to sys.path (seems hacky/not good practice)
Modify the PYTHONPATH when I call test_some_calc.py, like this: PYTHONPATH= python test_some_calc.py (I think this is how PyDev does it, but it seems a bit long - there must be a simpler way?)
Make an installable package (eventually I might go this route, but not yet).
I have the following project layout.
CodeSolution/
    Calculations/
        package1/
            __init__.py
            subpackage/
                __init__.py
                some_calc.py
            subpackage2/
                __init__.py
                another_calc.py
        tests/
            __init__.py
            subpackage/
                __init__.py
                test_some_calc.py # Unable to import from package1
            subpackage2/
                __init__.py
                test_another_calc.py # Unable to import from package1
    Frontend/
        some_script.py # Unable to import from package1
Comments on my project layout will also be appreciated.
A clean, quick and modular way to include certain Python code from anywhere on your system is to make a file named mymodule.pth and put it inside your site-packages directory.
mymodule.pth should contain the path of your project. The project folder must have an __init__.py file.
For example, put:
for Linux:
/home/user/myproject
inside
/usr/lib/python2.7/site-packages/mymodule.pth
or
for Windows
C:\\Users\myUsername\My Documents\myproject
inside
C:\PythonXY\Lib\site-packages\mymodule.pth
I wrote a script to load the PYTHONPATH entries from PyDev's project properties. It allows you to run your code from the console without problems like "ModuleNotFoundError: No module named ...".
import os
import sys
from xml.dom import minidom

print(sys.path)

def loadPathsFromPyDev():
    # Directory of the running script, which should contain .pydevproject
    sys_path = sys.path[0]
    # Load the XML project file
    xmldoc = minidom.parse(os.path.join(sys_path, '.pydevproject'))
    # Get the <path> elements
    xmlpaths = xmldoc.getElementsByTagName('path')
    # Get the paths' values
    paths = list()
    for xmlpath in xmlpaths:
        paths.append(xmlpath.firstChild.data)
    # Set the path variable
    for path in paths:
        # Normalize the separators for the current platform
        path = os.path.normpath(path)
        # Substitute the project directory for PyDev's placeholder
        path = path.replace(os.sep + '${PROJECT_DIR_NAME}', sys_path)
        if path not in sys.path:
            # Add to the system path
            sys.path.insert(1, path)

loadPathsFromPyDev()
print(sys.path)
I hope it will help you :)
I am running into pathing issues with some scripts that I wrote to test parsers I've written. It would appear that Python (2.7 and 3) both do not act like IronPython when it comes to using the current working directory as part of sys.path. As a result my inner package pathing does not work.
This is my package layout.
MyPackage
    __init__.py
    _test
        __init__.py
        common.py
    parsers
        __init__.py
        _test
            my_test.py
I am attempting to call the scripts from within the MyPackage directory.
Command Line Statements:
python ./parsers/_test/my_test.py
ipy ./parsers/_test/my_test.py
The import statement located in my my_test.py file is.
from _test.common import TestClass
In the Python scenario I get ONLY the MyPackage/parsers/_test directory appended to sys.path, so as a result MyPackage cannot be found. In the IronPython scenario both the MyPackage/parsers/_test directory AND MyPackage/ are in sys.path. I would get the same bad reference if I called the test from within the _test directory (it would have no idea about MyPackage). Even if I consolidated the test files into the _test directory I would still have this pathing issue.
I should note that I just tested, and if I do something like
import sys
import os
sys.path.append(os.getcwd())
It will load correctly. But I would have to do this for every single file.
Is there any way to fix this kind of pathing issue without doing the following?
A. Appending a relative pathing for sys.path in every file.
B. Appending PATH to include MyPackage (I do not want my package as part of the python "global" system pathing)
Thanks a lot for any tips!
Two options spring to mind:
Use the environment variable PYTHONPATH and include .
Add it to your sys.path at the beginning of your program
import sys
sys.path.append('.')
If you need to import a relative path dynamically you can always do something like
import os
import sys
import somemodule

dirname = os.path.dirname(os.path.abspath(somemodule.__file__))
sys.path.append(dirname)