Python module search path problem

I am trying to work in a dev environment, but I am finding that Python seems to be using modules from the site-packages directory when I want it to use the modules from my dev directory.
sys.path returns a bunch of directories, like this:
['', '/usr/lib/python26.zip', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/lib/python2.6/site-packages' etc
This is good: it's using the current directory as the first place to look (at least, that's how I understand it).
OK, now if I create, say, a file called commands.py in the current directory, things work as I would expect:
>>> import commands
>>> commands.__file__
'commands.pyc'
I then exit the Python shell, start another one, and do this:
>>> import foo.bar.commands
Now, what I'm expecting it to do is go down from the current directory into ./foo/bar/ and get me the commands module from there. What I get, though, is this:
>>> foo.bar.commands.__file__
'/usr/lib/python2.6/site-packages/foo/bar/commands.pyc'
Even though there is a ./foo/bar/commands.py under my current directory.
Using imp.find_module() and imp.load_module() I can load the local module properly. What's actually interesting (although I don't really know what it means) is the last line printed in this sequence:
>>> import foo.bar.commands
>>> foo.bar.commands.__file__
'/usr/lib/python2.6/site-packages/foo/bar/commands.pyc'
>>> foo.bar.__file__
'/usr/lib/python2.6/site-packages/foo/bar/__init__.pyc'
>>> foo.__file__
'./foo/__init__.pyc'
So if it can find foo/__init__.pyc in the local dir, why can't it find the other files in the local dir?
Cheers

You mention that there's a foo directory under your current directory, but you don't tell us whether foo/__init__.py exists (even possibly empty): if it doesn't, this tells Python that foo is not a package. Similarly for foo/bar/__init__.py -- if that file doesn't exist, even if foo/__init__.py does, then foo.bar is not a package.
You can play around a little by placing .pth files and/or setting __path__ explicitly in your packages, but the basic, simple rule is to just place an __init__.py in every directory that you want Python to recognize as a package. The contents of that file are "the body" of the package itself, so if you import foo and foo is a directory with a foo/__init__.py file, then that's what you're importing (in any case, the package's body executes the first time you import anything from the package or any subpackage thereof).
If that is not the problem, it looks like some other import (or explicit sys.path manipulation) may be messing you up. Running python with a -v flag makes imports highly visible, which can help. Another good technique is to place an
import pdb; pdb.set_trace()
just before the import that you think is misbehaving, and examine sys.path and sys.modules (and possibly other advanced structures such as import hooks) at that point -- is there a sys.modules['foo'] already defined, for example? Interactively trying the functions from the standard library module imp that locate modules on your behalf, given a path, may also prove instructive.
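For instance, here is a minimal sketch of that kind of interactive probing, assuming the foo/bar layout from the question (using Python 2's imp module, matching the 2.6 setup above):
import imp
import sys

print('foo' in sys.modules)    # has something already imported (and cached) foo?

# Restrict the lookup to the current directory only:
f, pathname, description = imp.find_module('foo', ['.'])
print(pathname)                # where a fresh lookup would find foo
if f is not None:              # find_module returns None as the file handle for a package
    f.close()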

What is foo doing in /usr/lib/python2.6/site-packages?
It sounds like you have created foo in your local directory but that is not necessarily the one you are importing.
Try getting rid of the foo/bar in site-packages.
Make sure your directory structure looks like this:
foo/__init__.py
foo/bar/__init__.py
foo/bar/commands.py
Also, it is a good idea not to reuse Python standard library names for your own modules -- can you call your commands.py something else?

Related

How are python module paths translated to filesystem paths?

This may seem like a simple question, but I haven't found an answer that explains the behavior I'm seeing. It's hard to provide a simple repro case, but I basically have a package structure like this:
a.b.c
a.b.utils
I have one project that has files in a.b.c (let's call this aux_project) and another that has files in a.b.d, a.b.utils, etc. (call it main_project). I'm trying to import a.b.utils inside pytest tests in the first project, using tests_require. This does not work because a.b is for some reason sourced from aux_project/a/b/__init__.pyc instead of from the virtualenv, and it shadows the other package (i.e. this a.b only has a c in it, not d or utils). This happens ONLY in the test context. In ipython I can load all the packages fine, and they are correctly loaded from the virtualenv.
What's weirder is that if I simply delete the actual directory, the tests do load the pycs from the virtualenv and everything works (I need that directory, though).
python==2.7.9
What is going on?
OK, the problem was simply that the cwd is prepended to sys.path. sys.path.pop(1) (index 0 is the tests dir, prepended by pytest) resolved the issue.
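For anyone hitting the same thing, a sketch of that fix at the top of conftest.py (index 1 assumes, as above, that pytest prepended the tests dir at index 0 and the offending cwd entry sits at index 1 -- print sys.path first to verify):
import sys

print(sys.path[:3])   # confirm what actually sits at indexes 0 and 1
sys.path.pop(1)       # drop the cwd entry that shadows the virtualenv packages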

Importing user defined modules in python from a directory

I'm trying to import a module I wrote in Python that just prints out a list containing numbers. The issue I'm having is that I want to be able to import it from a separate directory, but the answers I have read so far don't seem to work for my situation.
For example, given that I want to import printnumbers.py from a directory in my Documents folder, I am supposed to do the following:
import sys
sys.path.append('/home/jake/Documents')
import printnumbers.py
This snippet of code results in an ImportError telling me that the specified module does not exist. I'm not exactly sure how to proceed from here; I have checked multiple times to make sure the spelling is right for both the path and the module name. I'm still trying to understand exactly what appending to sys.path does. From what I understand, it tells the program to search for modules in that directory?
Thanks to anyone who answers my rather novice question. I'm just looking for a better understanding of this than the Python documentation provides.
When the file is printnumbers.py, the module is called printnumbers (without the .py). Therefore use
import printnumbers
import sys
sys.path.append('/home/jake/Documents')
appends '/home/jake/Documents' to the end of sys.path. The directories listed in sys.path are searched (in the order listed) whenever an import statement causes Python to search for a module. (Already imported modules are cached in sys.modules, so Python does not always need to search sys.path directories to import a module...)
So if you have a file /home/jake/Documents/printnumbers.py, then import printnumbers will cause Python to import it provided there is no other file named printnumbers.py in a directory listed in sys.path ahead of /home/jake/Documents/.
Note that injecting directories into sys.path is not the usual way to set up Python to search for modules. Usually it is preferable to add /home/jake/Documents to your PYTHONPATH environment variable. sys.path will automatically include the directories listed in the PYTHONPATH environment variable.
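Putting the pieces together, a minimal sketch (the path is the one from the question); printing __file__ confirms which copy was actually loaded:
import sys
sys.path.append('/home/jake/Documents')

import printnumbers                 # module name only, no .py suffix
print(printnumbers.__file__)        # e.g. /home/jake/Documents/printnumbers.py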
One more thing: put an empty __init__.py file in your directory to make it a Python package (only then will Python know that the directory is a Python package directory rather than an ordinary directory). That way you can import modules from that package from a different directory.

How do I structure my Python project to allow named modules to be imported from sub directories

This is my directory structure:
Projects
+ Project_1
+ Project_2
- Project_3
    - Lib1
        __init__.py  # empty
        moduleA.py
    - Tests
        __init__.py  # empty
        foo_tests.py
        bar_tests.py
        setpath.py
    __init__.py  # empty
    foo.py
    bar.py
Goals:
1. Have an organized project structure
2. Be able to independently run each .py file when necessary
3. Be able to reference/import both sibling and cousin modules
4. Keep all import/from statements at the beginning of each file
I achieved #1 by using the above structure.
I've mostly achieved #2, #3, and #4 by doing the following (as recommended by this excellent guide):
In any package that needs to access parent or cousin modules (such as the Tests directory above) I include a file called setpath.py which has the following code:
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('...'))
Then, in each module that needs parent/cousin access, such as foo_tests.py, I can write a nice clean list of imports like so:
import setpath # Annoyingly, PyCharm warns me that this is an unused import statement
import foo
Inside setpath.py, the second and third inserts are not strictly necessary for this example, but are included as a troubleshooting step.
My problem is that this only works for imports that reference the module name directly, and not for imports that reference the package. For example, inside bar_tests.py, neither of the two statements below works when running bar_tests.py directly:
import setpath
import Project_3.foo # Error
from Project_3 import foo # Error
I receive the error "ImportError: No module named 'Project_3'".
What is odd is that I can run the file directly from within PyCharm and it works fine. I know that PyCharm is doing some behind-the-scenes magic with the Python path to make everything work, but I can't figure out what it is. As PyCharm simply runs python.exe and sets some environment variables, it should be possible to clone this behavior from within a Python script itself.
For reasons not really germane to this question, I have to reference bar using the Project_3 qualifier.
I'm open to any solution that accomplishes the above while still meeting my earlier goals. I'm also open to an alternate directory structure if there is one that works better. I've read the Python doc on imports and packages but am still at a loss. I think one possible avenue might be manually setting the __path__ variable, but I'm not sure which one needs to be changed or what to set it to.
Those types of questions qualify as "primarily opinion based", so let me share my opinion on how I would do it.
First, "be able to independently run each .py file when necessary": either the file is a module, in which case it should not be called directly, or it is a standalone executable, in which case it should import its dependencies starting from the top level (you may avoid this in code, or rather move it to a common place, by using setup.py entry_points, but then your former executable effectively turns into a module). And yes, this is one of the weak points of Python's module model, and it causes misunderstandings.
Second, use virtualenv (or venv in Python 3) and put each of your Project_x into a separate one. This way the project's name won't be part of the Python module path.
Third, the link you've provided mentions setup.py -- you may make use of it. Put your custom code into Project_x/src/mylib1, create src/mylib1/setup.py, and finally put your modules into src/mylib1/mylib1/module.py. Then you may install your code with pip like any other package (or with pip install -e, so you can work on the code directly without reinstalling it, though that unfortunately has some limitations).
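As a rough sketch (the name mylib1 is just the placeholder from the layout above), src/mylib1/setup.py could look like:
from setuptools import setup, find_packages

setup(
    name='mylib1',
    version='0.1',
    packages=find_packages(),
)
After that, pip install -e src/mylib1 installs the package in editable mode, so the code can keep changing in place.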
And finally, as you've confirmed in a comment already ;). The problem with your current model was that in sys.path.insert(0, os.path.abspath('...')) you had mistakenly used Python's module notation, which is incorrect for filesystem paths; it should be '../..' to work as expected.
I think your goals are not reasonable. Specifically, goal number 2 is a problem:
Be able to independently run each .py file when neccessary
This doesn't work well for modules in a package. At least, not if you're running the .py files naively (e.g. with python foo_tests.py on the command line). When you run the files that way, Python can't tell where the package hierarchy should start.
There are two alternatives that can work. The first option is to run your scripts from the top-level folder (e.g. Projects) using the -m flag to the interpreter to give it a dotted path to the main module, and using explicit relative imports to get the sibling and cousin modules. So rather than running python foo_tests.py directly, run python -m Project_3.Tests.foo_tests from the Projects folder (or python -m Tests.foo_tests from within Project_3, perhaps), and have foo_tests.py use from .. import foo.
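A sketch of what foo_tests.py would then contain (assuming it is always launched via -m as described, never as a plain script):
# Run as: python -m Project_3.Tests.foo_tests (from the Projects folder)
from .. import foo            # module in the parent package (Project_3)
from ..Lib1 import moduleA    # module in a cousin package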
The other (less good) option is to add a top-level folder to your Python installation's module search path on a system-wide basis (e.g. add the Projects folder to the PYTHONPATH environment variable), and then use absolute imports for all your modules (e.g. import Project_3.foo). This is effectively what your setpath module does, but doing it system-wide as part of your system's configuration, rather than at run time, is much cleaner. It also avoids the multiple names that setpath allows you to use to import a module (e.g. try import foo_tests and then import Tests.foo_tests, and you'll get two separate copies of the same module).

How do I get the file location of libraries imported by Python

Considering this line of Python code:
import foo
how do I then find the location on disk of the file that contains the source code to be executed in foo?
(And I'm on Win7)
You can find the information in
foo.__file__
Note that not all modules come from files, though. Some modules are also compiled directly into the interpreter, and those modules won't have a __file__ attribute.
A list of the modules included in the interpreter can be found in sys.builtin_module_names.
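A small sketch of a defensive lookup (module_location here is just an illustrative helper, not a standard function):
import sys

def module_location(module):
    # Built-in and frozen modules have no __file__ attribute at all.
    return getattr(module, '__file__', '<built-in or frozen>')

import os
print(module_location(os))       # a real file, e.g. C:\Python26\lib\os.pyc
print(module_location(sys))      # sys is compiled into the interpreter: no __file__
print(sys.builtin_module_names)  # the full list of built-ins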
I'm not sure about the value of __file__ when the module does not have a one-to-one mapping to a file in the file-system. Specifically, if it is loading a module found within a ZIP file, I'm not sure what __file__ will look like. (When I've needed such mappings, I've used tuples with ("path/to/file.zip", "path/within/zip/module.pyc") but that was a result of walking sys.path manually.) It wouldn't surprise me if it just set __file__ to None.
When it is an issue as to which module of the same name will be loaded, it is always the first one found on the sys.path.
It is possible to sys.path.insert(0, some_dir) before you do your first import of the package to tweak things manually. (In the environment, this is set with PYTHONPATH.)
Ideally, you can find out which is being used simply by checking PYTHONPATH and/or sys.path. If the wrong one is being loaded, the only way to fix it is with PYTHONPATH or sys.path.
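A quick sketch of that check:
import os
import sys

print(os.environ.get('PYTHONPATH', ''))   # the environment's contribution
for directory in sys.path:
    print(directory)                      # searched in this order; first match wins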

PYTHONPATH hell with overlapping package structures

I'm having problems with my PYTHONPATH on Windows XP, and I'm wondering if I'm doing something wrong.
Say that I have a project (created with Pydev) that has an src directory. Under src I have a single package, named common, and in it a single module, named service.py, with a class named Service.
Say now that I have another project (also created with Pydev) with an src directory and a common package. In the common package, I have a single script, client.py, that imports service.
So in other words, two separate disk locations, but same package.
I've noticed that even if I set my PYTHONPATH to include both src directories, the import fails unless the files are both in the same directory. I get the dreaded "No module named" error.
Am I misunderstanding how python resolves module names? I'm used to Java and its classpath hell.
If you really must have a split package like this, read up on the module-level attribute __path__.
In short, make one of the two common directories the main one, and give it an __init__.py that appends the path of the other common to the __path__ list. Python will then look in both places when looking up submodules of common.
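A sketch of what that __init__.py could contain, in the common package of whichever project you pick as the main one (the second project's location below is hypothetical; substitute your own):
# common/__init__.py in the main project
__path__.append(r'C:\other_project\src\common')   # also search the other common here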
I really don't recommend this for the long term though. It is kind of brittle and breaks if you move things around.
I think in Python you are best off avoiding this problem by providing each package with a unique name. Don't name both packages common. Then you can import both with something like
import common1.service as cs
import common2.client as cc
If you try to import like this:
import src.common.service
Python will look on the Python path for a directory named "src" (or an egg, etc). Once it finds "src", it will not consider another one. If the first "src" doesn't have common and service inside it, then you will get an ImportError, even if another "src" directory in the path does have those things.
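A quick sketch of how to see which "src" won (assuming src is a package, i.e. it has an __init__.py):
import src.common.service
print(src.__file__)    # points into whichever "src" directory was found first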
