Attempted relative import with no known parent package error - python

Error: While importing "wsgi-contract-test", an ImportError was raised:
Traceback (most recent call last):
File "/Users/karl/Development/tral/bin/tral-env/lib/python3.9/site-packages/flask/cli.py", line 236, in locate_app
__import__(module_name)
File "/Users/karl/Development/tral/test/contract/inject/wsgi-contract-test.py", line 8, in <module>
from . import (
ImportError: attempted relative import with no known parent package
ERROR (run-tral): trap on error (rc=2) near line 121
make: *** [component.mk:114: tral-local-run-api-contract-test] Error 2
wsgi-contract-test.py:
from ...libs.tral import app, setup_from_env
from ...libs.tral import routes
I have the regular source files under the libs/tral directory; however, the entry file for this test is located under test/contract/inject. I do NOT want to move this test file into libs, since this file should be nowhere near production code as it is rather hazardous security-wise.
In Node.js this would have worked fine, but there seems to be something about Python imports I'm not grasping?

Since tests don't belong inside the src tree, there are basically two approaches. If you are testing a library you will frequently install it (in a virtualenv) and then just import it exactly the same way you would anywhere else. Frequently you also do this with a fresh install for the test: this has the great advantage that your tests mirror real setups, and helps to catch bugs like forgetting to commit/include files (guilty!) and only noticing once you've deployed...
The other option is to modify sys.path so that your test is effectively in the same place as your production code:
# /test/contract/inject.py
from pathlib import Path
import sys
sys.path.insert(0, str(Path(__file__).parent.parent.parent))
Since sys.path uses strs you may prefer to use os.path, like every single other example on SO. Personally I'm addicted to pathlib. Here:
__file__ is the absolute path of the current file
Path(str) wraps it in a pathlib.Path object
.parent gets you up the tree, and
str() casts back to the format sys.path expects, or it won't work.
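If you'd rather stick with os.path, a rough equivalent is the following sketch (untested against your exact layout, but it climbs the same three directories as the pathlib version):
# /test/contract/inject.py
import os
import sys

# climb three directories up from this file and put that directory on sys.path
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))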
Using a test runner
Normally we use a test runner for tests. But these tests need a running instance! No problem:
# tests/conftest.py
import sys
from pathlib import Path

import pytest

sys.path.insert(0, str(Path(__file__).parent.parent))

# now pytest can find the src
from app import get_instance  # dummy


@pytest.fixture
def instance():
    instance = get_instance(some_param)  # some_param: whatever your app needs
    # maybe
    instance.do_some_setup()
    yield instance
    # maybe
    instance.do_some_cleanup()
If you don't need to do any cleanup, you can just return the instance rather than yielding it. Then in another file (for neatness) you write tests like this:
# tests/test_instance.py
def test_instance(instance):  # note how we requested the fixture
    assert instance.some_method()
And you run your tests with pytest:
pytest tests/
Pytest will run the code in conftest.py, discover all test functions whose names start with test in files whose names start with test, run the tests (supplying the fixtures you have defined) and report.
Fixture Lifetime
Sometimes spinning up a fixture can be expensive. See the docs on fixture scope for telling pytest to keep the fixture around and supply it to multiple tests.
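For example, a session-scoped fixture (reusing the dummy names from above) is created once and shared by every test in the run:
# tests/conftest.py
import pytest

from app import get_instance  # dummy, as above


@pytest.fixture(scope="session")
def instance():
    # expensive setup happens only once per test session
    instance = get_instance(some_param)
    yield instance
    instance.do_some_cleanup()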
Using a runner vs. just running a script
You don't have to use a runner. But they do have many advantages:
decent metrics e.g. code/branch coverage
everyone uses them
parallel testing against different envs (e.g. python versions)
tests tend to be a lot neater

I took for granted that Python was able to handle simple relative paths; not the case. Instead I just added the path to the packages I wanted to include to the PYTHONPATH variable and voilà, it found everything.
export PYTHONPATH="${PYTHONPATH}:$(pwd)/libs"
Then I ran it from the root project directory.
I had to change the code to the following though:
from tral import app, setup_from_env
from tral import routes

Related

How to import a module in Python in a file that is both used as a __main__ and is imported by a file in a different directory?

Say I am using Python 3 (and hence absolute imports) and my directory structure looks like this:
package/
    sub_directory/
        __init__.py
        sub_dir_file.py
        sub_dir_file2.py
    __init__.py
    main_dir_file.py
In the file sub_dir_file.py I wish to import a function from sub_dir_file2.py. The catch is, I want to be able to run sub_dir_file.py with __name__ == '__main__', as well as import it in main_dir_file.py. Hence, if in sub_dir_file.py I use a relative import:
from .sub_dir_file2 import some_function
the module executes perfectly fine when run from main_dir_file.py, but throws an error when executed directly (as the relative import cannot be executed when __name__ == '__main__'). If I however use a normal absolute import, sub_dir_file.py will execute as a main, but cannot be imported from main_dir_file.py.
What would be the most elegant way of solving this problem? One obvious solution seems to be:
if __name__ == '__main__':
    from sub_dir_file2 import some_function
else:
    from .sub_dir_file2 import some_function
However, it doesn't seem very pythonic.
You should use the relative import syntax from .sub_dir_file2 import some_function or, alternatively, the absolute syntax from package.sub_directory.sub_dir_file2 import some_function.
Then, in order to run one of the package's submodules, it is simpler to use the -m option of the Python interpreter to execute its content as the __main__ module.
Search sys.path for the named module and execute its contents as the
main module.
Since the argument is a module name, you must not give a file
extension (.py). The module name should be a valid absolute Python
module name, but the implementation may not always enforce this (e.g.
it may allow you to use a name that includes a hyphen).
For example:
> python -m package.main_dir_file
> python -m package.sub_directory.sub_dir_file
I would suggest using a main() function invoked if __name__ == '__main__'. It's a good habit anyway, as far as I'm aware.
That way you can just call the imported module's main() yourself. It has other benefits, too, like allowing you to test or re-invoke the module without necessarily executing the file every time.
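A minimal sketch of that pattern, using the file names from the question:
# sub_dir_file.py
from .sub_dir_file2 import some_function

def main():
    # whatever the module should do when invoked directly
    some_function()

if __name__ == '__main__':
    main()
Run it as a module (python -m package.sub_directory.sub_dir_file) so the relative import still resolves, and main_dir_file.py can simply import the module and call main() itself.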

PyCharm - no tests were found?

I've been getting an error in PyCharm and I can't figure out why I'm getting it:
No tests were found
This is what I have for my point_test.py:
import unittest
import sys
import os

sys.path.insert(0, os.path.abspath('..'))

from ..point import Point


class TestPoint(unittest.TestCase):
    def setUp(self):
        pass

    def xyCheck(self, x, y):
        point = Point(x, y)
        self.assertEqual(x, point.x)
        self.assertEqual(y, point.y)
and this is point.py, which I'm trying to test:
import unittest
from .utils import check_coincident, shift_point


class Point(object):
    def __init__(self, x, y, mark={}):
        self.x = x
        self.y = y
        self.mark = mark

    def patched_coincident(self, point2):
        point1 = (self.x, self.y)
        return check_coincident(point1, point2)

    def patched_shift(self, x_shift, y_shift):
        point = (self.x, self.y)
        self.x, self.y = shift_point(point, x_shift, y_shift)
Is it something wrong with my run configuration? I looked at this SO post but I'm still utterly confused. My run configuration currently looks like this:
In order for the runner to recognize test functions, they must be named with a test_ prefix. In your case, rename xyCheck to test_xyCheck :)
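That is, something along these lines (with x and y hard-coded here just for illustration, since a test method takes no extra arguments):
class TestPoint(unittest.TestCase):
    def test_xyCheck(self):
        point = Point(1, 2)
        self.assertEqual(1, point.x)
        self.assertEqual(2, point.y)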
I know it's more than a year since the question was asked, but I had the same issue and that post was the first result in search.
As I understand it, PyCharm (or the IntelliJ IDEA Python plugin) needs your test to meet the following criteria if you want it to be launched when you run all the tests in a directory.
test functions should start with "test" (an underscore is not necessary)
the file containing the test should also start with "test" ("Test" with a capital T doesn't work in my case)
I'm using Intellij IDEA 2016.3.5 with Python plugin
If you want to run your tests from the command line
python -m unittest
Then you should add __init__.py to the test directory. Python still wants your test function names to start with "test", and your test file name to start with "test", but in the case of files it doesn't care whether the first "t" is capital or not. TestCase and test_case are equally fine.
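For example, with a layout like this (the names are just an illustration):
project/
    point.py
    test/
        __init__.py
        test_point.py
running python -m unittest from the project root will discover and run test_point.py.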
One thing that can also cause this problem is if you have not selected the right testing framework in your settings:
settings > Python Integrated Tools > Testing > Default test runner
All my tests are in pytest, but it was set to unit test
Don't use dashes ("-") in your filename. These files were being ignored on my machine. Renaming them from project-tests.py to project_tests.py solved the problem.
Another gotcha that has just bitten me.
I had a test file within my test package called test_queue.py which followed all of the above advice, however when I selected "Run UnitTests" in PyCharm the console reported no tests found.
The issue in my case was that I had a non-unit test file in the root of the project also called test_queue.py which had been used for some other purpose early on in the project and forgotten about.
Even though I was specifically selecting the test file in my tests folder, and the path was set to absolutely point at the unit test version of the file, it seems the file in the root of the project was being used.
So, one more thing to check, make sure there are no other files in your project with the same name.
Another issue you may want to check is if you see in the console output "## tests deselected by '-k XXXX'". This means you've added a Keyword to the Run/Debug Configuration, and Pycharm will only run tests whose name contains the keyword.
Adding another answer in the hopes that it helps someone down the road. I've been fighting this problem and just figured out the answer (I think). I originally deleted the standard Django file tests.py out of my application folder. I also created a subdirectory of my project called tests, which contains separate test scripts. In this situation, Pycharm failed to find the tests. I corrected this simply by creating an empty file called tests.py in my application folder.
So:
Make sure you have a file called tests.py in your application directory (it can be an empty file)
It seems that a folder called tests, in your project, can contain separate test scripts and Pycharm seems to find and run these.
Here's a picture of the directory structure that's working for me:
I had this exception when running individual tests in a Django 1.8 project in PyCharm 2018.1. I could run all the tests together, but individual tests in one file crashed.
The exception was happening in unittest's loader.py
It was getting an ImportError trying to import test_admin_views.py, though the exception was hiding the details of that error.
To see the details of the ImportError, I opened a Python Console and ran:
import my_app.users.tests.test_admin_views
This gave me:
Traceback (most recent call last):
[...]
File "my_app/my_app/users/tests/model_factories.py", line 42, in <module>
from my_app.applications.tests.model_factories import ApplicationFactory
ImportError: cannot import name ApplicationFactory
I noticed that some of the other factories in that file are imported without using the full path, so I tried adding ApplicationFactory to the relative path import list:
from .model_factories import UserFactory, UserGroupFactory, ApplicationFactory
Which worked!
None of these helped me, but I was able to fix the issue by setting the working directory to the project directory.
I debugged this by first creating a new Django test configuration with a target of one of the app names that has tests.
That ran successfully. No idea why working directory should need to be specified.

Test if code is executed from within a py.test session

I'd like to connect to a different database if my code is running under py.test. Is there a function to call or an environment variable that I can test that will tell me if I'm running under a py.test session? What's the best way to handle this?
A simpler solution I came to:
import sys

if "pytest" in sys.modules:
    ...
Pytest runner will always load the pytest module, making it available in sys.modules.
Of course, this solution only works if the code you're trying to test does not use pytest itself.
There's also another way documented in the manual:
https://docs.pytest.org/en/latest/example/simple.html#pytest-current-test-environment-variable
Pytest will set the following environment variable PYTEST_CURRENT_TEST.
Checking the existence of said variable should reliably allow one to detect if code is being executed from within the umbrella of pytest.
import os

if "PYTEST_CURRENT_TEST" in os.environ:
    # We are running under pytest, act accordingly...
    ...
Note
This method works only when an actual test is being run.
This detection will not work when modules are imported during pytest collection.
A solution came from RTFM, although not in an obvious place. The manual also had an error in code, corrected below.
Detect if running from within a pytest run
Usually it is a bad idea to make application code behave differently
if called from a test. But if you absolutely must find out if your
application code is running from a test you can do something like
this:
# content of conftest.py
def pytest_configure(config):
    import sys
    sys._called_from_test = True

def pytest_unconfigure(config):
    import sys  # This was missing from the manual
    del sys._called_from_test
and then check for the sys._called_from_test flag:
if hasattr(sys, '_called_from_test'):
    # called from within a test run
    ...
else:
    # called "normally"
    ...
accordingly in your application. It's also a good idea to use your own application module rather than sys for handling the flag.
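For example, the flag could live in your own package instead of on sys (the myapp.config module here is made up for illustration):
# myapp/config.py
called_from_test = False

# content of conftest.py
import myapp.config

def pytest_configure(config):
    myapp.config.called_from_test = True

def pytest_unconfigure(config):
    myapp.config.called_from_test = False
Application code then checks myapp.config.called_from_test instead of poking at sys.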
Working with pytest==4.3.1 the methods above failed, so I just went old school and checked with:
import os
import sys

script_name = os.path.basename(sys.argv[0])
if script_name in ['pytest', 'py.test']:
    print('Running with pytest!')
While the hack explained in the other answer (http://pytest.org/latest/example/simple.html#detect-if-running-from-within-a-pytest-run) does indeed work, you could probably design the code in such a way you would not need to do this.
If you design the code to take the database to connect to as an argument somehow, via a connection or something else, then you can simply inject a different argument when you're running the tests than when the application drives this. Your code will end up with less global state and be more modular and reusable. So to me it sounds like an example where testing drives you to design the code better.
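A minimal sketch of that idea, with sqlite3 standing in for whatever database you actually use (module and function names are made up):
# myapp/db.py
import sqlite3

def get_connection(db_path):
    # the caller decides which database to use, so tests can pass a throwaway one
    return sqlite3.connect(db_path)

# tests/conftest.py
import pytest
from myapp.db import get_connection

@pytest.fixture
def connection():
    conn = get_connection(":memory:")  # tests inject an in-memory database
    yield conn
    conn.close()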
This could be done by setting an environment variable inside the testing code. For example, given a project
conftest.py
mypkg/
    __init__.py
    app.py
tests/
    test_app.py
In test_app.py you can add
import os
os.environ['PYTEST_RUNNING'] = 'true'
And then you can check inside app.py:
import os

if os.environ.get('PYTEST_RUNNING', '') == 'true':
    print('pytest is running')

In Python, can one get the path of a relative import imported after and before chdir calls?

I'm looking to get the path of a module after os.chdir has been called.
In this example:
import os
os.chdir('/some/location')
import foo # foo is located in the current directory.
os.chdir('/other/location')
# How do I get the path of foo now? ..is this impossible?
..the foo.__file__ variable will be 'foo.py', as will inspect.stack()[0][1] -- yet, there's no way to know where 'foo.py' is located now, right?
What could I use, outside (or inside, without storing it as a variable at import time) of 'foo', which would allow me to discover the location of foo?
I'm attempting to build a definitive method to determine which file a module is executing from. Since I use IPython as a shell, this is something I could actually run into.
Example usage:
I have two versions of a project I'm working on, and I'm comparing their behavior during the process of debugging them. ..let's say they're in the directories 'proj1' and 'proj2'. ..which foo do I have loaded in the IPython interpreter again?
The ideal:
In [242]: from my_tools import loc
In [243]: loc(foo)
'/home/blah/projects/proj2/foo.py'
** As abarnert noted, that is not possible, as python does not record the base directory location of relative imports. This will, however, work with normal (non-relative) imports.
** Also, regular python (as opposed to IPython) does not allow imports from the current directory, but rather only from the module directory.
The information isn't available anymore, period. Tracebacks, the debugger, ipython magic, etc. can't get at it. For example:
# foo.py
def baz():
    1/0
$ ipython
In [1]: import foo
In [2]: os.chdir('/tmp')
In [3]: foo.baz()
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
<ipython-input-5-a70d319d0d05> in <module>()
----> 1 foo.baz()
/private/tmp/foo.pyc in baz()
ZeroDivisionError: integer division or modulo by zero
So:
the foo.__file__ variable will be 'foo.py', as will inspect.stack()[0][1] -- yet, there's no way to know where 'foo.py' is located now, right?
Right. As you can see, Python treats it as a relative path, and (incorrectly) resolves it according to the current working directory whenever it needs an absolute path.
What could I use, outside (or inside, without storing it as a variable at import time) of 'foo', which would allow me to discover the location of foo?
Nothing. You have to store it somewhere.
The obvious thing to do is to store os.path.abspath(foo.__file__) from outside, or os.path.abspath(__file__) from inside, at import time. Not what you were hoping for, but I can't think of anything better.
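For example, a couple of one-liners (nothing clever, just capturing the value early):
# from outside, right after importing foo
import os
import foo
foo_path = os.path.abspath(foo.__file__)

# or inside foo.py itself, at import time
import os
_MODULE_PATH = os.path.abspath(__file__)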
If you want to get tricky, you can build an import hook that modifies modules as they're imported, adding a new __abspath__ attribute or, more simply, changing __file__ to always be an abspath. This is easier with the importlib module in Python 3.1+.
As a quick proof of concept, I slapped together abspathimporter. After doing an import imppath, every further import you do that finds a normal .py file or package will absify its __file__.
I don't know whether it works for .so/.pyd modules, or .pyc modules without source. It definitely doesn't work for modules inside zipfiles, frozen modules, or anything else that doesn't use the stock FileFinder. It won't retroactively affect the paths of anything imported before it. It requires 3.3+, and is horribly fragile (most seriously, the FileFinder class or its hook function has to be the last thing in sys.path_hooks—which it is by default in CPython 3.3.0-3.3.1 on four Mac and linux boxes I tested, but certainly isn't guaranteed).
But it shows what you can do if you want to. And honestly, for playing around in iPython for the past 20 minutes or so, it's kind of handy.
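If you just want the effect without writing a real import hook, a cruder sketch (this is not the linked abspathimporter) is to absolutize __file__ on everything imported so far, before any chdir happens:
import os
import sys

def absify_loaded_modules():
    # rewrite relative __file__ values on already-imported modules
    for module in list(sys.modules.values()):
        path = getattr(module, "__file__", None)
        if path and not os.path.isabs(path):
            module.__file__ = os.path.abspath(path)
Of course this only helps if you call it while the working directory is still the one the modules were imported from.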
import os
import foo
foodir = os.getcwd()
os.chdir('/other/location')
foodir now has the original directory stored in it...

module reimported if imported from different path

In a big application I am working on, several people import the same modules differently, e.g.
import x
or
from y import x
The side effect of that is that x is imported twice, which may introduce very subtle bugs if someone is relying on global attributes.
e.g. suppose I have a package mypackage with three files: mymodule.py, main.py and __init__.py
mymodule.py contents
l = []
class A(object): pass
main.py contents
def add(x):
    from mypackage import mymodule
    mymodule.l.append(x)
    print "updated list", mymodule.l

def get():
    import mymodule
    return mymodule.l

add(1)
print "lets check", get()
add(1)
print "lets check again", get()
it prints
updated list [1]
lets check []
updated list [1, 1]
lets check again []
because now there are two lists in two different module objects; similarly, class A is different.
To me it looks serious enough, because the classes themselves will be treated differently.
E.g. the code below prints False:
def create():
    from mypackage import mymodule
    return mymodule.A()

def check(a):
    import mymodule
    return isinstance(a, mymodule.A)

print check(create())
Question:
Is there any way to avoid this, other than enforcing that the module should be imported one way only? Can't this be handled by the Python import mechanism? I have seen several bugs related to this in Django code and elsewhere too.
Each module namespace is imported only once. The issue is that you're importing them differently: in the first case you're importing from the package, and in the second you're doing a local, non-package import, so Python sees the modules as different. The first import is internally cached as mypackage.mymodule and the second one as mymodule only.
A way to solve this is to always use absolute imports. That is, always give your module absolute import paths from the top-level package onwards:
def add(x):
    from mypackage import mymodule
    mymodule.l.append(x)
    print "updated list", mymodule.l

def get():
    from mypackage import mymodule
    return mymodule.l
Remember that your entry point (the file you run, main.py) should also be outside the package. When you want the entry point code to be inside the package, you usually run a small wrapper script instead. Example:
runme.py, outside the package:
from mypackage.main import main
main()
And in main.py you add:
def main():
    # your code
I find this document by Jp Calderone to be a great tip on how to (not) structure your python project. Following it you won't have issues. Pay attention to the bin folder - it is outside the package. I'll reproduce the entire text here:
Filesystem structure of a Python project
Do:
- name the directory something related to your project. For example, if your project is named "Twisted", name the top-level directory for its source files Twisted. When you do releases, you should include a version number suffix: Twisted-2.5.
- create a directory Twisted/bin and put your executables there, if you have any. Don't give them a .py extension, even if they are Python source files. Don't put any code in them except an import of and call to a main function defined somewhere else in your projects.
- if your project is expressable as a single Python source file, then put it into the directory and name it something related to your project. For example, Twisted/twisted.py. If you need multiple source files, create a package instead (Twisted/twisted/, with an empty Twisted/twisted/__init__.py) and place your source files in it. For example, Twisted/twisted/internet.py.
- put your unit tests in a sub-package of your package (note - this means that the single Python source file option above was a trick - you always need at least one other file for your unit tests). For example, Twisted/twisted/test/. Of course, make it a package with Twisted/twisted/test/__init__.py. Place tests in files like Twisted/twisted/test/test_internet.py.
- add Twisted/README and Twisted/setup.py to explain and install your software, respectively, if you're feeling nice.
Don't:
- put your source in a directory called src or lib. This makes it hard to run without installing.
- put your tests outside of your Python package. This makes it hard to run the tests against an installed version.
- create a package that only has a __init__.py and then put all your code into __init__.py. Just make a module instead of a package, it's simpler.
- try to come up with magical hacks to make Python able to import your module or package without having the user add the directory containing it to their import path (either via PYTHONPATH or some other mechanism). You will not correctly handle all cases and users will get angry at you when your software doesn't work in their environment.
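Putting that advice together, a concrete layout might look like this (the project name is just the example from the text):
Twisted/
    README
    setup.py
    bin/
        twistd              (executable without .py; only imports and calls a main() defined in the package)
    twisted/
        __init__.py
        internet.py
        test/
            __init__.py
            test_internet.py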
I can only replicate this if main.py is the file you are actually running. In that case you will get the current directory of main.py on the sys path. But you apparently also have a system path set so that mypackage can be imported.
Python will in that situation not realize that mymodule and mypackage.mymodule is the same module, and you get this effect. This change illustrates this:
def add(x):
    from mypackage import mymodule
    print "mypackage.mymodule path", mymodule
    mymodule.l.append(x)
    print "updated list", mymodule.l

def get():
    import mymodule
    print "mymodule path", mymodule
    return mymodule.l

add(1)
print "lets check", get()
add(1)
print "lets check again", get()
$ export PYTHONPATH=.
$ python mypackage/main.py
mypackage.mymodule path <module 'mypackage.mymodule' from '/tmp/mypackage/mymodule.pyc'>
mymodule path <module 'mymodule' from '/tmp/mypackage/mymodule.pyc'>
But add another main file, in the current directory:
realmain.py:
from mypackage import main
and the result is different:
mypackage.mymodule path <module 'mypackage.mymodule' from '/tmp/mypackage/mymodule.pyc'>
mymodule path <module 'mypackage.mymodule' from '/tmp/mypackage/mymodule.pyc'>
So I suspect that you have your main python file within the package. And in that case the solution is to not do that. :-)
