Python: Force import to prefer .py over .so - python

I have a situation where the same Python module is present in the same directory in two different versions: mymodule.py and mymodule.so (I obtain the latter from the former via Cython, but that's irrelevant to my question). When, from Python, I do
import mymodule
it always chooses mymodule.so. Sometimes I really want to import mymodule.py instead. I could temporarily move mymodule.so to another location, but that does not play well if I simultaneously have another Python instance running which needs to import mymodule.so.
The question is: how do I make import prefer .py files over .so, rather than vice versa?
Here are my thoughts on a solution:
I imagine performing some magic using importlib and possibly edit sys.meta_path. Specifically I see that sys.meta_path[2] holds _frozen_importlib_external.PathFinder which is used to import external modules, i.e. this is used for both mymodule.py and mymodule.so. If I could just replace this with a similar PathFinder which used the reverse ordering for file types, I would have a solution.
I'm using Python 3.7, if that affects the solution.
Edit
Note that simply reading in the source lines of mymodule.py and exec'ing them won't do, as mymodule.py may itself import other modules which again exist in both a .py and .so version (I want to import the .py version of these as well).
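A minimal sketch of that idea (an assumption on my part, not tested beyond simple cases): register an extra path hook built with importlib.machinery.FileFinder.path_hook, listing the source-file loader ahead of the extension loader. Everything used below comes from importlib.machinery:
import sys
from importlib.machinery import (
    FileFinder, SourceFileLoader, SourcelessFileLoader, ExtensionFileLoader,
    SOURCE_SUFFIXES, BYTECODE_SUFFIXES, EXTENSION_SUFFIXES,
)

# Same loader set as the default hook, but with source files listed first.
py_first_hook = FileFinder.path_hook(
    (SourceFileLoader, SOURCE_SUFFIXES),
    (SourcelessFileLoader, BYTECODE_SUFFIXES),
    (ExtensionFileLoader, EXTENSION_SUFFIXES),
)

sys.path_hooks.insert(0, py_first_hook)
sys.path_importer_cache.clear()  # drop finders cached with the old ordering

import mymodule  # should now resolve to mymodule.py rather than mymodule.so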

Here is another solution that works by tweaking the finders the runtime generates by default. It relies on a hidden implementation detail (FileFinder._loaders), but I've tested it on CPython 3.7, 3.8, and 3.9.
from contextlib import contextmanager
from dataclasses import dataclass
from importlib.machinery import FileFinder
from importlib.abc import Finder
import sys
from typing import Callable

@dataclass
class PreferPureLoaderHook:
    orig_hook: Callable[[str], Finder]

    def __call__(self, path: str) -> Finder:
        finder = self.orig_hook(path)
        if isinstance(finder, FileFinder):
            # Move pure Python file loaders to the front
            finder._loaders.sort(key=lambda pair: 0 if pair[0] in (".py", ".pyc") else 1)  # type: ignore
        return finder

@contextmanager
def prefer_pure_python_imports():
    sys.path_hooks = [PreferPureLoaderHook(h) for h in sys.path_hooks]
    sys.path_importer_cache.clear()
    yield
    assert all(isinstance(h, PreferPureLoaderHook) for h in sys.path_hooks)
    sys.path_hooks = [h.orig_hook for h in sys.path_hooks]
    sys.path_importer_cache.clear()

with prefer_pure_python_imports():
    ...

Using these notes I came up with this. It's not too pretty, but it seems to work.
import glob, importlib.util, sys

class Finder(object):
    def __init__(self, modnames):
        self.modnames = modnames

    def find_spec(self, modname, target=None):
        if modname in self.modnames:
            origin = './' + modname + '.py'
            loader = Loader()
            return importlib.util.spec_from_loader(modname, loader, origin=origin)
        else:
            return None

class Loader(object):
    def create_module(self, target):
        return None

    def exec_module(self, module):
        with open(module.__spec__.origin, 'r', encoding='utf-8') as f:
            code = f.read()
        code_object = compile(code, module.__spec__.origin, 'exec')
        exec(code_object, module.__dict__)

def hook(name):
    if name != '.':
        raise ImportError()
    # Strip the '.py' suffix explicitly; str.rstrip would also eat trailing
    # characters of the module name itself (e.g. 'happy.py' -> 'ha').
    modnames = set(f[:-3] for f in glob.glob('*.py'))
    return Finder(modnames)

sys.path_hooks.insert(1, hook)
sys.path.insert(0, '.')

Related

pygame open script which opens another script doesn't work [duplicate]

How do I load a Python module given its full path?
Note that the file can be anywhere in the filesystem where the user has access rights.
See also: How to import a module given its name as string?
For Python 3.5+ use (docs):
import importlib.util
import sys
spec = importlib.util.spec_from_file_location("module.name", "/path/to/file.py")
foo = importlib.util.module_from_spec(spec)
sys.modules["module.name"] = foo
spec.loader.exec_module(foo)
foo.MyClass()
For Python 3.3 and 3.4 use:
from importlib.machinery import SourceFileLoader
foo = SourceFileLoader("module.name", "/path/to/file.py").load_module()
foo.MyClass()
(Although this has been deprecated in Python 3.4.)
For Python 2 use:
import imp
foo = imp.load_source('module.name', '/path/to/file.py')
foo.MyClass()
There are equivalent convenience functions for compiled Python files and DLLs.
See also http://bugs.python.org/issue21436.
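For reference, a hedged sketch of what those equivalents look like with importlib in Python 3.5+, using explicit loaders from importlib.machinery (the paths are placeholders); in Python 2 the corresponding imp helpers are imp.load_compiled and imp.load_dynamic:
import importlib.util
from importlib.machinery import ExtensionFileLoader, SourcelessFileLoader

# Bytecode-only module (.pyc)
spec = importlib.util.spec_from_file_location(
    "mymod", "/path/to/mymod.pyc",
    loader=SourcelessFileLoader("mymod", "/path/to/mymod.pyc"))
mymod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mymod)

# C extension module (.so / .pyd)
spec = importlib.util.spec_from_file_location(
    "myext", "/path/to/myext.so",
    loader=ExtensionFileLoader("myext", "/path/to/myext.so"))
myext = importlib.util.module_from_spec(spec)
spec.loader.exec_module(myext)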
The advantage of adding a path to sys.path (over using imp) is that it simplifies things when importing more than one module from a single package. For example:
import sys
# the mock-0.3.1 dir contains testcase.py, testutils.py & mock.py
sys.path.append('/foo/bar/mock-0.3.1')
from testcase import TestCase
from testutils import RunTests
from mock import Mock, sentinel, patch
To import your module, you need to add its directory to the module search path, either temporarily (via sys.path) or permanently (via the PYTHONPATH environment variable).
Temporarily
import sys
sys.path.append("/path/to/my/modules/")
import my_module
Permanently
Add the following line to your .bashrc (or alternative) file in Linux
and execute source ~/.bashrc (or alternative) in the terminal:
export PYTHONPATH="${PYTHONPATH}:/path/to/my/modules/"
Credit/Source: saarrrr, another Stack Exchange question
If your top-level module is not a file but is packaged as a directory with __init__.py, then the accepted solution almost works, but not quite. In Python 3.5+ the following code is needed (note the added line that begins with 'sys.modules'):
MODULE_PATH = "/path/to/your/module/__init__.py"
MODULE_NAME = "mymodule"
import importlib
import sys
spec = importlib.util.spec_from_file_location(MODULE_NAME, MODULE_PATH)
module = importlib.util.module_from_spec(spec)
sys.modules[spec.name] = module
spec.loader.exec_module(module)
Without this line, when exec_module is executed, it tries to bind relative imports in your top level __init__.py to the top level module name -- in this case "mymodule". But "mymodule" isn't loaded yet so you'll get the error "SystemError: Parent module 'mymodule' not loaded, cannot perform relative import". So you need to bind the name before you load it. The reason for this is the fundamental invariant of the relative import system: "The invariant holding is that if you have sys.modules['spam'] and sys.modules['spam.foo'] (as you would after the above import), the latter must appear as the foo attribute of the former" as discussed here.
It sounds like you don't want to specifically import the configuration file (which has a whole lot of side effects and additional complications involved). You just want to run it, and be able to access the resulting namespace. The standard library provides an API specifically for that in the form of runpy.run_path:
from runpy import run_path
settings = run_path("/path/to/file.py")
That interface is available in Python 2.7 and Python 3.2+.
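For example, assuming file.py defines a top-level variable DEBUG (a hypothetical name), the returned dictionary gives you direct access to it:
from runpy import run_path

settings = run_path("/path/to/file.py")
print(settings["DEBUG"])  # DEBUG is a hypothetical name defined in file.py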
You can also add the directory that the configuration file sits in to the Python load path, and then just do a normal import, assuming you know the name of the file in advance, in this case "config".
Messy, but it works.
configfile = '~/config.py'
import os
import sys
sys.path.append(os.path.dirname(os.path.expanduser(configfile)))
import config
I have come up with a slightly modified version of @SebastianRittau's wonderful answer (for Python > 3.4, I think), which will allow you to load a file with any extension as a module using spec_from_loader instead of spec_from_file_location:
from importlib.util import spec_from_loader, module_from_spec
from importlib.machinery import SourceFileLoader
spec = spec_from_loader("module.name", SourceFileLoader("module.name", "/path/to/file.py"))
mod = module_from_spec(spec)
spec.loader.exec_module(mod)
The advantage of encoding the path in an explicit SourceFileLoader is that the machinery will not try to figure out the type of the file from the extension. This means that you can load something like a .txt file using this method, but you could not do it with spec_from_file_location without specifying the loader because .txt is not in importlib.machinery.SOURCE_SUFFIXES.
I've placed an implementation based on this, and @SamGrondahl's useful modification, into my utility library, haggis. The function is called haggis.load.load_module. It adds a couple of neat tricks, like the ability to inject variables into the module namespace as it is loaded.
You can use the
load_source(module_name, path_to_file)
method from the imp module.
Do you mean load or import?
You can manipulate the sys.path list to specify the path to your module, and then import your module. For example, given a module at:
/foo/bar.py
You could do:
import sys
sys.path[0:0] = ['/foo'] # Puts the /foo directory at the start of your path
import bar
Here is some code that works in all Python versions, from 2.7-3.5 and probably even others.
config_file = "/tmp/config.py"
with open(config_file) as f:
    code = compile(f.read(), config_file, 'exec')
exec(code, globals(), locals())
I tested it. It may be ugly, but so far it is the only one that works in all versions.
You can do this using __import__ and chdir:
def import_file(full_path_to_module):
    try:
        import os
        module_dir, module_file = os.path.split(full_path_to_module)
        module_name, module_ext = os.path.splitext(module_file)
        save_cwd = os.getcwd()
        os.chdir(module_dir)
        module_obj = __import__(module_name)
        module_obj.__file__ = full_path_to_module
        globals()[module_name] = module_obj
        os.chdir(save_cwd)
    except Exception as e:
        raise ImportError(e)
    return module_obj
import_file('/home/somebody/somemodule.py')
If we have scripts in the same project but in different directories, we can solve this problem with the following method.
In this situation utils.py is in src/main/util/
import sys
sys.path.append('./')
import src.main.util.utils
#or
from src.main.util.utils import json_converter # json_converter is example method
To add to Sebastian Rittau's answer:
At least for CPython, there's pydoc, and, while not officially declared, importing files is what it does:
from pydoc import importfile
module = importfile('/path/to/module.py')
PS. For the sake of completeness, there's a reference to the current implementation at the moment of writing: pydoc.py, and I'm pleased to say that in the vein of xkcd 1987 it uses neither of the implementations mentioned in issue 21436 -- at least, not verbatim.
I believe you can use imp.find_module() and imp.load_module() to load the specified module. You'll need to split the module name off of the path and pass the containing directory as a list, i.e. if you wanted to load /home/mypath/mymodule.py you'd need to do:
imp.find_module('mymodule', ['/home/mypath/'])
...but that should get the job done.
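To sketch the complete call sequence (hedged, since imp is long deprecated): find_module returns a (file, pathname, description) tuple that you then hand to load_module, closing the file handle afterwards:
import imp

# The second argument is a list of directories to search.
fp, pathname, description = imp.find_module('mymodule', ['/home/mypath/'])
try:
    mymodule = imp.load_module('mymodule', fp, pathname, description)
finally:
    if fp:
        fp.close()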
You can use the pkgutil module (specifically the walk_packages method) to get a list of the packages in the current directory. From there it's trivial to use the importlib machinery to import the modules you want:
import pkgutil
import importlib
packages = pkgutil.walk_packages(path=['.'])  # path must be a list of directories
for importer, name, is_package in packages:
    mod = importlib.import_module(name)
    # do whatever you want with module now, it's been imported!
There's a package that's dedicated to this specifically:
from thesmuggler import smuggle
# À la `import weapons`
weapons = smuggle('weapons.py')
# À la `from contraband import drugs, alcohol`
drugs, alcohol = smuggle('drugs', 'alcohol', source='contraband.py')
# À la `from contraband import drugs as dope, alcohol as booze`
dope, booze = smuggle('drugs', 'alcohol', source='contraband.py')
It's tested across Python versions (Jython and PyPy too), but it might be overkill depending on the size of your project.
Create Python module test.py:
import sys
sys.path.append("<project-path>/lib/")
from tes1 import Client1
from tes2 import Client2
import tes3
Create Python module test_check.py:
from test import Client1
from test import Client2
from test import tes3
That is, a module can re-export names that it has itself imported, so we can import an already-imported module from another module.
This area of Python 3.4 seems to be extremely tortuous to understand! However with a bit of hacking using the code from Chris Calloway as a start I managed to get something working. Here's the basic function.
import importlib.util
import os

def import_module_from_file(full_path_to_module):
    """
    Import a module given the full path/filename of the .py file.
    Python 3.4
    """
    module = None
    try:
        # Get module name and path from full path
        module_dir, module_file = os.path.split(full_path_to_module)
        module_name, module_ext = os.path.splitext(module_file)
        # Get module "spec" from filename
        spec = importlib.util.spec_from_file_location(module_name, full_path_to_module)
        module = spec.loader.load_module()
    except Exception as ec:
        # Simple error printing
        # Insert "sophisticated" stuff here
        print(ec)
    finally:
        return module
This appears to use non-deprecated modules from Python 3.4. I don't pretend to understand why, but it seems to work from within a program. I found Chris' solution worked on the command line but not from inside a program.
I made a package that uses imp for you. I call it import_file and this is how it's used:
>>> from import_file import import_file
>>> mylib = import_file('c:\\mylib.py')
>>> another = import_file('relative_subdir/another.py')
You can get it at:
http://pypi.python.org/pypi/import_file
or at
http://code.google.com/p/import-file/
To import a module from a given filename, you can temporarily extend the path and restore the system path in a finally block:
import os
import sys

filename = "directory/module.py"
directory, module_name = os.path.split(filename)
module_name = os.path.splitext(module_name)[0]

path = list(sys.path)
sys.path.insert(0, directory)
try:
    module = __import__(module_name)
finally:
    sys.path[:] = path  # restore
A simple solution using importlib instead of the imp package (tested for Python 2.7, although it should work for Python 3 too):
import importlib
import os
import sys

dirname, basename = os.path.split(pyfilepath)  # pyfilepath: '/my/path/mymodule.py'
sys.path.append(dirname)  # only directories should be added to PYTHONPATH
module_name = os.path.splitext(basename)[0]  # '/my/path/mymodule.py' --> 'mymodule'
module = importlib.import_module(module_name)  # namespace of the defined module (otherwise we would literally look for "module_name")
Now you can directly use the namespace of the imported module, like this:
a = module.myvar
b = module.myfunc(a)
The advantage of this solution is that we don't even need to know the actual name of the module we would like to import, in order to use it in our code. This is useful, e.g. in case the path of the module is a configurable argument.
I have written my own global and portable import function, based on the importlib module, in order to:
Be able to import modules as submodules, and to import the content of a module into a parent module (or into globals if there is no parent module).
Be able to import modules with period characters in their file names.
Be able to import modules with any extension.
Be able to use a standalone name for a submodule instead of the file name without extension (which is the default).
Be able to define the import order based on the previously imported module instead of depending on sys.path or any other search-path storage.
The examples directory structure:
<root>
|
+- test.py
|
+- testlib.py
|
+- /std1
| |
| +- testlib.std1.py
|
+- /std2
| |
| +- testlib.std2.py
|
+- /std3
|
+- testlib.std3.py
Inclusion dependency and order:
test.py
-> testlib.py
-> testlib.std1.py
-> testlib.std2.py
-> testlib.std3.py
Implementation:
Latest changes store: https://sourceforge.net/p/tacklelib/tacklelib/HEAD/tree/trunk/python/tacklelib/tacklelib.py
test.py:
import os, sys, inspect, copy
SOURCE_FILE = os.path.abspath(inspect.getsourcefile(lambda:0)).replace('\\','/')
SOURCE_DIR = os.path.dirname(SOURCE_FILE)
print("test::SOURCE_FILE: ", SOURCE_FILE)
# portable import to the global space
sys.path.append(TACKLELIB_ROOT) # TACKLELIB_ROOT - path to the library directory
import tacklelib as tkl
tkl.tkl_init(tkl)
# cleanup
del tkl # must be used instead of `tkl = None`, otherwise the variable would still persist
sys.path.pop()
tkl_import_module(SOURCE_DIR, 'testlib.py')
print(globals().keys())
testlib.base_test()
testlib.testlib_std1.std1_test()
testlib.testlib_std1.testlib_std2.std2_test()
#testlib.testlib.std3.std3_test() # not reachable directly ...
getattr(globals()['testlib'], 'testlib.std3').std3_test() # ... but reachable through the `globals` + `getattr`
tkl_import_module(SOURCE_DIR, 'testlib.py', '.')
print(globals().keys())
base_test()
testlib_std1.std1_test()
testlib_std1.testlib_std2.std2_test()
#testlib.std3.std3_test() # not reachable directly ...
globals()['testlib.std3'].std3_test() # ... but reachable through the `globals` + `getattr`
testlib.py:
# optional for 3.4.x and higher
#import os, inspect
#
#SOURCE_FILE = os.path.abspath(inspect.getsourcefile(lambda:0)).replace('\\','/')
#SOURCE_DIR = os.path.dirname(SOURCE_FILE)
print("1 testlib::SOURCE_FILE: ", SOURCE_FILE)
tkl_import_module(SOURCE_DIR + '/std1', 'testlib.std1.py', 'testlib_std1')
# SOURCE_DIR is restored here
print("2 testlib::SOURCE_FILE: ", SOURCE_FILE)
tkl_import_module(SOURCE_DIR + '/std3', 'testlib.std3.py')
print("3 testlib::SOURCE_FILE: ", SOURCE_FILE)
def base_test():
    print('base_test')
testlib.std1.py:
# optional for 3.4.x and higher
#import os, inspect
#
#SOURCE_FILE = os.path.abspath(inspect.getsourcefile(lambda:0)).replace('\\','/')
#SOURCE_DIR = os.path.dirname(SOURCE_FILE)
print("testlib.std1::SOURCE_FILE: ", SOURCE_FILE)
tkl_import_module(SOURCE_DIR + '/../std2', 'testlib.std2.py', 'testlib_std2')
def std1_test():
    print('std1_test')
testlib.std2.py:
# optional for 3.4.x and higher
#import os, inspect
#
#SOURCE_FILE = os.path.abspath(inspect.getsourcefile(lambda:0)).replace('\\','/')
#SOURCE_DIR = os.path.dirname(SOURCE_FILE)
print("testlib.std2::SOURCE_FILE: ", SOURCE_FILE)
def std2_test():
    print('std2_test')
testlib.std3.py:
# optional for 3.4.x and higher
#import os, inspect
#
#SOURCE_FILE = os.path.abspath(inspect.getsourcefile(lambda:0)).replace('\\','/')
#SOURCE_DIR = os.path.dirname(SOURCE_FILE)
print("testlib.std3::SOURCE_FILE: ", SOURCE_FILE)
def std3_test():
    print('std3_test')
Output (3.7.4):
test::SOURCE_FILE: <root>/test01/test.py
import : <root>/test01/testlib.py as testlib -> []
1 testlib::SOURCE_FILE: <root>/test01/testlib.py
import : <root>/test01/std1/testlib.std1.py as testlib_std1 -> ['testlib']
import : <root>/test01/std1/../std2/testlib.std2.py as testlib_std2 -> ['testlib', 'testlib_std1']
testlib.std2::SOURCE_FILE: <root>/test01/std1/../std2/testlib.std2.py
2 testlib::SOURCE_FILE: <root>/test01/testlib.py
import : <root>/test01/std3/testlib.std3.py as testlib.std3 -> ['testlib']
testlib.std3::SOURCE_FILE: <root>/test01/std3/testlib.std3.py
3 testlib::SOURCE_FILE: <root>/test01/testlib.py
dict_keys(['__name__', '__doc__', '__package__', '__loader__', '__spec__', '__annotations__', '__builtins__', '__file__', '__cached__', 'os', 'sys', 'inspect', 'copy', 'SOURCE_FILE', 'SOURCE_DIR', 'TackleGlobalImportModuleState', 'tkl_membercopy', 'tkl_merge_module', 'tkl_get_parent_imported_module_state', 'tkl_declare_global', 'tkl_import_module', 'TackleSourceModuleState', 'tkl_source_module', 'TackleLocalImportModuleState', 'testlib'])
base_test
std1_test
std2_test
std3_test
import : <root>/test01/testlib.py as . -> []
1 testlib::SOURCE_FILE: <root>/test01/testlib.py
import : <root>/test01/std1/testlib.std1.py as testlib_std1 -> ['testlib']
import : <root>/test01/std1/../std2/testlib.std2.py as testlib_std2 -> ['testlib', 'testlib_std1']
testlib.std2::SOURCE_FILE: <root>/test01/std1/../std2/testlib.std2.py
2 testlib::SOURCE_FILE: <root>/test01/testlib.py
import : <root>/test01/std3/testlib.std3.py as testlib.std3 -> ['testlib']
testlib.std3::SOURCE_FILE: <root>/test01/std3/testlib.std3.py
3 testlib::SOURCE_FILE: <root>/test01/testlib.py
dict_keys(['__name__', '__doc__', '__package__', '__loader__', '__spec__', '__annotations__', '__builtins__', '__file__', '__cached__', 'os', 'sys', 'inspect', 'copy', 'SOURCE_FILE', 'SOURCE_DIR', 'TackleGlobalImportModuleState', 'tkl_membercopy', 'tkl_merge_module', 'tkl_get_parent_imported_module_state', 'tkl_declare_global', 'tkl_import_module', 'TackleSourceModuleState', 'tkl_source_module', 'TackleLocalImportModuleState', 'testlib', 'testlib_std1', 'testlib.std3', 'base_test'])
base_test
std1_test
std2_test
std3_test
Tested in Python 3.7.4, 3.2.5, 2.7.16
Pros:
Can import a module both as a submodule and by merging its content into a parent module (or into globals if there is no parent module).
Can import modules with periods in their file names.
Can import modules with any file extension from modules with any file extension.
Can use a standalone name for a submodule instead of the file name without extension, which is the default (for example, testlib.std.py as testlib, testlib.blabla.py as testlib_blabla, and so on).
Does not depend on sys.path or on any other search-path storage.
Does not require saving/restoring global variables like SOURCE_FILE and SOURCE_DIR between calls to tkl_import_module.
[for 3.4.x and higher] Can mix module namespaces in nested tkl_import_module calls (e.g., named->local->named or local->named->local and so on).
[for 3.4.x and higher] Can auto-export global variables/functions/classes from where they are declared to all child modules imported through tkl_import_module (via the tkl_declare_global function).
Cons:
Does not support a complete import:
Ignores enumerations and subclasses.
Ignores builtins, because each such type has to be copied exclusively.
Ignores classes that are not trivially copyable.
Avoids copying builtin modules, including all packaged modules.
[for 3.3.x and lower] Requires declaring tkl_import_module in every module that calls tkl_import_module (code duplication).
Update 1, 2 (for 3.4.x and higher only):
In Python 3.4 and higher you can bypass the requirement to declare tkl_import_module in each module by declaring tkl_import_module in a top-level module; the function then injects itself into all child modules in a single call (a kind of self-deploying import).
Update 3:
Added the function tkl_source_module as an analog of bash source, with support for an execution guard upon import (implemented through a module merge instead of an import).
Update 4:
Added the function tkl_declare_global to auto-export a module global variable to all child modules where it is otherwise not visible because it is not part of a child module.
Update 5:
All functions have moved into the tacklelib library; see the link above.
This should work:
import glob
import imp
import os

path = os.path.join('./path/to/folder/with/py/files', '*.py')
for infile in glob.glob(path):
    basename = os.path.basename(infile)
    basename_without_extension = basename[:-3]
    # http://docs.python.org/library/imp.html?highlight=imp#module-imp
    imp.load_source(basename_without_extension, infile)
Import package modules at runtime (Python recipe)
http://code.activestate.com/recipes/223972/
####################
##                ##
## classloader.py ##
##                ##
####################
import sys, types

def _get_mod(modulePath):
    try:
        aMod = sys.modules[modulePath]
        if not isinstance(aMod, types.ModuleType):
            raise KeyError
    except KeyError:
        # The last [''] is very important!
        aMod = __import__(modulePath, globals(), locals(), [''])
        sys.modules[modulePath] = aMod
    return aMod

def _get_func(fullFuncName):
    """Retrieve a function object from a full dotted-package name."""
    # Parse out the path, module, and function
    lastDot = fullFuncName.rfind(u".")
    funcName = fullFuncName[lastDot + 1:]
    modPath = fullFuncName[:lastDot]
    aMod = _get_mod(modPath)
    aFunc = getattr(aMod, funcName)
    # Assert that the function is a *callable* attribute.
    assert callable(aFunc), u"%s is not callable." % fullFuncName
    # Return a reference to the function itself,
    # not the results of the function.
    return aFunc

def _get_class(fullClassName, parentClass=None):
    """Load a module and retrieve a class (NOT an instance).

    If the parentClass is supplied, className must be of parentClass
    or a subclass of parentClass (or None is returned).
    """
    aClass = _get_func(fullClassName)
    # Assert that the class is a subclass of parentClass.
    if parentClass is not None:
        if not issubclass(aClass, parentClass):
            raise TypeError(u"%s is not a subclass of %s" %
                            (fullClassName, parentClass))
    # Return a reference to the class itself, not an instantiated object.
    return aClass

####################
##     Usage      ##
####################
class StorageManager: pass
class StorageManagerMySQL(StorageManager): pass

def storage_object(aFullClassName, allOptions={}):
    aStoreClass = _get_class(aFullClassName, StorageManager)
    return aStoreClass(allOptions)
I'm not saying that it is better, but for the sake of completeness, I wanted to suggest the exec function, available in both Python 2 and Python 3.
exec allows you to execute arbitrary code in either the global scope, or in an internal scope, provided as a dictionary.
For example, if you have a module stored in "/path/to/module" with the function foo(), you could run it by doing the following:
module = dict()
with open("/path/to/module") as f:
    exec(f.read(), module)
module['foo']()
This makes it a bit more explicit that you're loading code dynamically, and grants you some additional power, such as the ability to provide custom builtins.
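As a hedged illustration of the custom-builtins point (the whitelist below is arbitrary): you can place your own __builtins__ mapping in the globals dictionary so the executed code only sees the names you choose:
safe_globals = {"__builtins__": {"len": len, "min": min, "max": max}}
with open("/path/to/module") as f:
    exec(f.read(), safe_globals)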
And if having access through attributes, instead of keys is important to you, you can design a custom dict class for the globals, that provides such access, e.g.:
class MyModuleClass(dict):
    def __getattr__(self, name):
        return self.__getitem__(name)
In Linux, adding a symbolic link in the directory your Python script is located works.
I.e.:
ln -s /absolute/path/to/module/module.py /absolute/path/to/script/module.py
The Python interpreter will create /absolute/path/to/script/module.pyc and will update it if you change the contents of /absolute/path/to/module/module.py.
Then include the following in file mypythonscript.py:
from module import *
This will allow imports of compiled (pyd) Python modules in 3.4:
import sys
import importlib.machinery
def load_module(name, filename):
    # If the Loader finds the module name in this list it will use
    # module_name.__file__ instead so we need to delete it here
    if name in sys.modules:
        del sys.modules[name]
    loader = importlib.machinery.ExtensionFileLoader(name, filename)
    module = loader.load_module()
    locals()[name] = module
    globals()[name] = module

load_module('something', r'C:\Path\To\something.pyd')
something.do_something()
A quite simple way: suppose you want to import a file with the relative path ../../MyLibs/pyfunc.py:
libPath = '../../MyLibs'
import sys
if not libPath in sys.path: sys.path.append(libPath)
import pyfunc as pf
But if you do it without the guard, you can eventually end up with a very long path.
These are my two utility functions, using only pathlib. They infer the module name from the path.
By default, they recursively load all Python files from folders and replace __init__.py with the parent folder name. But you can also give a Path and/or a glob to select specific files.
from pathlib import Path
from importlib.util import spec_from_file_location, module_from_spec
from typing import Optional


def get_module_from_path(path: Path, relative_to: Optional[Path] = None):
    if not relative_to:
        relative_to = Path.cwd()

    abs_path = path.absolute()
    relative_path = abs_path.relative_to(relative_to.absolute())
    if relative_path.name == "__init__.py":
        relative_path = relative_path.parent
    module_name = ".".join(relative_path.with_suffix("").parts)
    mod = module_from_spec(spec_from_file_location(module_name, path))
    return mod


def get_modules_from_folder(folder: Optional[Path] = None, glob_str: str = "*/**/*.py"):
    if not folder:
        folder = Path(".")
    mod_list = []
    for file_path in sorted(folder.glob(glob_str)):
        mod_list.append(get_module_from_path(file_path))
    return mod_list
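One caveat, as a hedged usage sketch (the folder name below is hypothetical): module_from_spec only creates the module objects, so to actually run their code you still have to execute each one through its spec's loader:
for mod in get_modules_from_folder(Path("src")):  # "src" is a hypothetical folder
    mod.__spec__.loader.exec_module(mod)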
This answer is a supplement to Sebastian Rittau's answer, responding to the comment: "but what if you don't have the module name?" This is a quick and dirty way of getting the likely Python module name given a filename: it just goes up the tree until it finds a directory without an __init__.py file and then turns the remaining path back into a dotted module name. It's for Python 3.4+ (it uses pathlib), which makes sense since Python 2 people can use "imp" or other ways of doing relative imports:
import pathlib


def likely_python_module(filename):
    '''
    Given a filename or Path, return the "likely" python module name. That is, iterate
    the parent directories until it doesn't contain an __init__.py file.

    :rtype: str
    '''
    p = pathlib.Path(filename).resolve()
    paths = []
    if p.name != '__init__.py':
        paths.append(p.stem)
    while True:
        p = p.parent
        if not p:
            break
        if not p.is_dir():
            break

        inits = [f for f in p.iterdir() if f.name == '__init__.py']
        if not inits:
            break

        paths.append(p.stem)

    return '.'.join(reversed(paths))
There are certainly possibilities for improvement, and the optional __init__.py files might necessitate other changes, but if you have __init__.py in general, this does the trick.
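For instance, assuming a hypothetical layout where /project/pkg/sub/mod.py exists and both pkg and sub contain an __init__.py file (while /project does not), the walk stops at /project and the parts are reassembled into a dotted name:
print(likely_python_module("/project/pkg/sub/mod.py"))  # -> "pkg.sub.mod"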

Can importlib import a module from any directory (non-packages)? [duplicate]


Import hooks for PyQt4.QtCore

I am attempting to set up some import hooks through sys.meta_path, in a somewhat similar approach to this SO question. For this, I need to define two functions, find_module and load_module, as explained in the link above. Here is my load_module function,
import imp

def load_module(name, path):
    fp, pathname, description = imp.find_module(name, path)
    try:
        module = imp.load_module(name, fp, pathname, description)
    finally:
        if fp:
            fp.close()
    return module
which works fine for most modules, but fails for PyQt4.QtCore when using Python 2.7:
name = "QtCore"
path = ['/usr/lib64/python2.7/site-packages/PyQt4']
mod = load_module(name, path)
which returns,
Traceback (most recent call last):
  File "test.py", line 19, in <module>
    mod = load_module(name, path)
  File "test.py", line 13, in load_module
    module = imp.load_module(name, fp, pathname, description)
SystemError: dynamic module not initialized properly
The same code works fine with Python 3.4 (although imp is getting deprecated and importlib should ideally be used instead there).
I suppose this has something to do with the SIP dynamic module initialization. Is there anything else I should try with Python 2.7?
Note: this applies both with PyQt4 and PyQt5.
Edit: this may be related to this question as indeed,
cd /usr/lib64/python2.7/site-packages/PyQt4
python2 -c 'import QtCore'
fails with the same error. Still, I'm not sure what a way around it would be...
Edit2: following @Nikita's request for a concrete use case example, what I am trying to do is to redirect the import, so that when one does import A, what actually happens is import B. One could indeed think that for this it would be sufficient to do module renaming in find_spec/find_module and then use the default load_module. However, it is unclear where to find a default load_module implementation in Python 2. The closest implementation I have found of something similar is future.standard_library.RenameImport. It does not look like there is a backport of the complete implementation of importlib from Python 3 to 2.
A minimal working example for the import hooks that reproduces this problem can be found in this gist.
UPD: This part is not really relevant after the answer updates, so see the UPD below.
Why not just use importlib.import_module, which is available in both Python 2.7 and Python 3:
#test.py
import importlib
mod = importlib.import_module('PyQt4.QtCore')
print(mod.__file__)
on Ubuntu 14.04:
$ python2 test.py
/usr/lib/python2.7/dist-packages/PyQt4/QtCore.so
Since it's a dynamic module, as said in the error (and the actual file is QtCore.so), may be also take a look at imp.load_dynamic.
Another solution might be to force the execution of the module initialization code, but IMO it's too much of a hassle, so why not just use importlib.
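For completeness, a minimal sketch of the imp.load_dynamic route in Python 2 (the path is a placeholder, and this may well hit the same SIP initialization problem for PyQt4.QtCore):
import imp

# Load a C extension module directly from its shared-object file (Python 2).
mod = imp.load_dynamic('QtCore', '/usr/lib64/python2.7/site-packages/PyQt4/QtCore.so')
print(mod.__file__)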
UPD: There are things in pkgutil that might help. As I was saying in my comment, try to modify your finder like this:
import pkgutil

class RenameImportFinder(object):

    def find_module(self, fullname, path=None):
        """ This is the finder function that renames all imports like
        PyQt4.module or PySide.module into PyQt4.module """
        for backend_name in valid_backends:
            if fullname.startswith(backend_name):
                # just rename the import (That's what i thought about)
                name_new = fullname.replace(backend_name, redirect_to_backend)
                print('Renaming import:', fullname, '->', name_new)
                print('    Path:', path)

                # (And here, don't create a custom loader, get one from the
                # system, either by using 'pkgutil.get_loader' as suggested
                # in PEP302, or instantiate 'pkgutil.ImpLoader').
                return pkgutil.get_loader(name_new)

                # (Original return statement, probably 'pkgutil.ImpLoader'
                # instantiation should be inside 'RenameImportLoader' after
                # 'find_module()' call.)
                #return RenameImportLoader(name_orig=fullname, path=path,
                #                          name_new=name_new)
        return None
Can't test the code above now, so please try it yourself.
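A minimal sketch of how the finder above might be wired up, assuming the finder class and these two globals live in the same module; the backend names here are placeholders and, as said, untested:

import sys

# Hypothetical configuration: redirect any 'PySide.*' import to 'PyQt4.*'.
valid_backends = ['PySide']
redirect_to_backend = 'PyQt4'

# Meta-path finders are consulted before the normal path-based machinery.
sys.meta_path.insert(0, RenameImportFinder())

import PySide.QtCore   # intended to load PyQt4.QtCore instead, if the sketch works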
P.S. Note that imp.load_module(), which worked for you in Python 3, is deprecated since Python 3.3.
Another solution is not to use hooks at all, but instead wrap the __import__:
print(__import__)

valid_backends = ['shelve']
redirect_to_backend = 'pickle'

# Using closure with parameters
def import_wrapper(valid_backends, redirect_to_backend):
    def wrapper(import_orig):
        def import_mod(*args, **kwargs):
            fullname = args[0]
            for backend_name in valid_backends:
                if fullname.startswith(backend_name):
                    fullname = fullname.replace(backend_name, redirect_to_backend)
                    args = (fullname,) + args[1:]
            return import_orig(*args, **kwargs)
        return import_mod
    return wrapper

# Here it's important to assign to __import__ in __builtin__ and not
# local __import__, or it won't affect the import statement.
import __builtin__
__builtin__.__import__ = import_wrapper(valid_backends,
                                         redirect_to_backend)(__builtin__.__import__)

print(__import__)

import shutil
import shelve
import re
import glob

print shutil.__file__
print shelve.__file__
print re.__file__
print glob.__file__
output:
<built-in function __import__>
<function import_mod at 0x02BBCAF0>
C:\Python27\lib\shutil.pyc
C:\Python27\lib\pickle.pyc
C:\Python27\lib\re.pyc
C:\Python27\lib\glob.pyc
shelve is renamed to pickle, and pickle is imported by the default machinery under the variable name shelve.
When finding a module that is part of a package, like PyQt4.QtCore, you have to find each dot-separated part of the name recursively, since imp.find_module does not accept dotted names. imp.load_module, on the other hand, requires its name parameter to be the full module name, with . separating the package and the module name.
Because QtCore is part of a package, you should do python -c 'import PyQt4.QtCore' instead. Here's the code to load such a module.
import imp

def load_module(name):
    def _load_module(name, pkg=None, path=None):
        rest = None
        if '.' in name:
            name, rest = name.split('.', 1)
        find = imp.find_module(name, path)
        if pkg is not None:
            name = '{}.{}'.format(pkg, name)
        try:
            mod = imp.load_module(name, *find)
        finally:
            if find[0]:
                find[0].close()
        if rest is None:
            return mod
        return _load_module(rest, name, mod.__path__)
    return _load_module(name)
Test:
print(load_module('PyQt4.QtCore').qVersion())
4.8.6

How do I change the name of an imported library?

I am using jython with a third party application. The third party application has some builtin libraries foo. To do some (unit) testing we want to run some code outside of the application. Since foo is bound to the application we decided to write our own mock implementation.
However, there is one issue: we implemented our mock class in Python, while their class is in Java. With their code, one does import foo and foo is the class afterwards. However, if we import our Python module like this, we get the module bound to the name, so one has to write foo.foo to get to the class.
For convenience reasons we would love to be able to write from ourlib.thirdparty import foo to bind foo to the foo class. However, we would like to avoid importing all the classes in ourlib.thirdparty directly, since the loading time for each file takes quite a while.
Is there any way to do this in Python? (I did not get far with import hooks: I tried simply returning the class from load_module, or overwriting what I write to sys.modules; I think both approaches are ugly, particularly the latter.)
Edit:
OK: here is what the files in ourlib.thirdparty look like, simplified (without magic):
foo.py:
try:
    import foo
except ImportError:
    class foo:
        ...
Actually they look like this:
foo.py:
class foo:
    ...
__init__.py in ourlib.thirdparty
import sys
import os.path
import imp

# TODO: 3.0 importlib.util abstract base classes could greatly simplify this code or make it prettier.
class Importer(object):
    def __init__(self, path_entry):
        if not path_entry.startswith(os.path.join(os.path.dirname(__file__), 'thirdparty')):
            raise ImportError('Custom importer only for thirdparty objects')
        self._importTuples = {}

    def find_module(self, fullname):
        module = fullname.rpartition('.')[2]
        try:
            if fullname not in self._importTuples:
                fileObj, self._importTuples[fullname] = imp.find_module(module)
                if isinstance(fileObj, file):
                    fileObj.close()
        except:
            print 'backup'
            path = os.path.join(os.path.join(os.path.dirname(__file__), 'thirdparty'), module+'.py')
            if not os.path.isfile(path):
                return None
                raise ImportError("Could not find dummy class for %s (%s)\n(searched:%s)" % (module, fullname, path))
            self._importTuples[fullname] = path, ('.py', 'r', imp.PY_SOURCE)
        return self

    def load_module(self, fullname):
        fp = None
        python = False
        print fullname
        if self._importTuples[fullname][1][2] in (imp.PY_SOURCE, imp.PY_COMPILED, imp.PY_FROZEN):
            fp = open(self._importTuples[fullname][0], self._importTuples[fullname][1][1])
            python = True
        try:
            imp.load_module(fullname, fp, *self._importTuples[fullname])
        finally:
            if python:
                module = fullname.rpartition('.')[2]
                #setattr(sys.modules[fullname], module, getattr(sys.modules[fullname], module))
                #sys.modules[fullname] = getattr(sys.modules[fullname], module)
                if isinstance(fp, file):
                    fp.close()
        return getattr(sys.modules[fullname], module)

sys.path_hooks.append(Importer)
As others have remarked, this is such a plain thing in Python that the import statement itself has a syntax for it:
from foo import foo as original_foo, for example -
or even import foo as module_foo
It is interesting to note that the import statement binds a name to the imported module or object in the local context. However, the dictionary sys.modules (in the sys module, of course) is a live reference to all imported modules, using their names as keys. This mechanism plays a key role in preventing Python from re-reading and re-executing an already imported module (that is, if several of your modules or sub-modules import the same foo module, it is only read once -- the subsequent imports use the reference stored in sys.modules).
And -- besides the "import ... as" syntax -- modules in Python are just another kind of object: you can assign any other name to them at run time.
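A tiny illustration of that caching behaviour, using the standard json module just as an example:

import sys
import json

# The module object and its sys.modules entry are the same object;
# any later "import json" simply reuses this entry.
print(sys.modules['json'] is json)   # True

# Aliasing a module works like any other assignment:
parser = json
print(parser.loads('{"a": 1}'))      # {'a': 1}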
So, the following code would also work perfectly for you:
import foo

original_foo = foo

class foo(Mock):
    ...

How can I discover classes in a specific package in python?

I have a package of plug-in style modules. It looks like this:
/Plugins
/Plugins/__init__.py
/Plugins/Plugin1.py
/Plugins/Plugin2.py
etc...
Each .py file contains a class that derives from PluginBaseClass. So I need to list every module in the Plugins package and then search for any classes that implement PluginBaseClass. Ideally I want to be able to do something like this:
for klass in iter_plugins(project.Plugins):
action = klass()
action.run()
I have seen some other answers out there, but my situation is different. I have an actual import to the base package (ie: import project.Plugins) and I need to find the classes after discovering the modules.
Edit: here's a revised solution. I realised I was making a mistake while testing my previous one, and it doesn't really work the way you would expect. So here is a more complete solution:
import os
from imp import find_module
from types import ModuleType, ClassType

def iter_plugins(package):
    """Receives package (as a string) and, for all of its contained modules,
    generates all classes that are subclasses of PluginBaseClass."""

    # Despite the function name, "find_module" will find the package
    # (the "filename" part of the return value will be None, in this case)
    filename, path, description = find_module(package)

    # dir(some_package) will not list the modules within the package,
    # so we explicitly look for files. If you need to recursively descend
    # a directory tree, you can adapt this to use os.walk instead of os.listdir
    modules = sorted(set(i.partition('.')[0]
                         for i in os.listdir(path)
                         if i.endswith(('.py', '.pyc', '.pyo'))
                         and not i.startswith('__init__.py')))
    pkg = __import__(package, fromlist=modules)
    for m in modules:
        module = getattr(pkg, m)
        if type(module) == ModuleType:
            for c in dir(module):
                klass = getattr(module, c)
                if (type(klass) == ClassType and
                        klass is not PluginBaseClass and
                        issubclass(klass, PluginBaseClass)):
                    yield klass
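A minimal usage sketch for the generator above; note that it takes the package name as a string, and it assumes PluginBaseClass is importable where iter_plugins is defined:

# Instantiate and run every discovered plug-in class.
for klass in iter_plugins('Plugins'):
    action = klass()
    action.run()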
My previous solution was:
You could try something like:
from types import ModuleType

import Plugins

classes = []
for item in dir(Plugins):
    module = getattr(Plugins, item)
    # Get all (and only) modules in Plugins
    if type(module) == ModuleType:
        for c in dir(module):
            klass = getattr(module, c)
            if isinstance(klass, PluginBaseClass):
                classes.append(klass)
Actually, even better, if you want some modularity:
from types import ModuleType

def iter_plugins(package):
    # This assumes "package" is a package name.
    # If it's the package itself, you can remove this __import__
    pkg = __import__(package)
    for item in dir(pkg):
        module = getattr(pkg, item)
        if type(module) == ModuleType:
            for c in dir(module):
                klass = getattr(module, c)
                if issubclass(klass, PluginBaseClass):
                    yield klass
You may (and probably should) define __all__ in __init__.py as a list of the submodules in your package; this is so that you support people doing from Plugins import *. If you have done so, you can iterate over the modules with
import Plugins
import sys

modules = {}
for module in Plugins.__all__:
    __import__(module)
    modules[module] = sys.modules[module]
    # iterate over dir(module) as above
The reason another answer posted here fails is that __import__ imports the lowest-level module, but returns the top-level one (see the docs). I don't know why.
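A short illustration of that behaviour, using os.path as a stand-in for a submodule:

# __import__ with a dotted name imports the submodule but returns the top-level package.
top = __import__('os.path')
print(top.__name__)          # 'os'

# Passing a non-empty fromlist makes it return the submodule itself instead.
sub = __import__('os.path', fromlist=['path'])
print(sub.__name__)          # e.g. 'posixpath' on Linux, 'ntpath' on Windows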
Scanning modules isn't a good idea. If you need a class registry, you should look at metaclasses or use existing solutions like zope.interface.
A simple solution using metaclasses may look like this:
from functools import reduce

class DerivationRegistry(type):
    def __init__(cls, name, bases, cls_dict):
        type.__init__(cls, name, bases, cls_dict)
        cls._subclasses = set()
        for base in bases:
            if isinstance(base, DerivationRegistry):
                base._subclasses.add(cls)

    def getSubclasses(cls):
        return reduce(set.union,
                      (succ.getSubclasses() for succ in cls._subclasses if isinstance(succ, DerivationRegistry)),
                      cls._subclasses)

class Base(object):
    __metaclass__ = DerivationRegistry

class Cls1(object):
    pass

class Cls2(Base):
    pass

class Cls3(Cls2, Cls1):
    pass

class Cls4(Cls3):
    pass

print(Base.getSubclasses())
If you don't know what's going to be in Plugins ahead of time, you can get a list of python files in the package's directory, and import them like so:
# compute a list of modules in the Plugins package
import os
import Plugins
plugin_modules = [f[:-3] for f in os.listdir(os.path.dirname(Plugins.__file__))
if f.endswith('.py') and f != '__init__.py']
Sorry, that comprehension might be a mouthful for someone relatively new to python. Here's a more verbose version (might be easier to follow):
plugin_modules = []

package_path = Plugins.__file__
file_list = os.listdir(os.path.dirname(package_path))

for file_name in file_list:
    if file_name.endswith('.py') and file_name != '__init__.py':
        plugin_modules.append(file_name[:-3])  # strip the '.py' extension, as in the comprehension
Then you can use __import__ to get the module:
# get the first one
plugin = __import__('Plugins.' + plugin_modules[0])
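Note that, as another answer here points out, __import__ with a dotted name returns the top-level package, so you may still need to fetch the submodule explicitly; a small sketch:

# __import__('Plugins.Plugin1') returns the Plugins package, not the plug-in module.
pkg = __import__('Plugins.' + plugin_modules[0])
plugin = getattr(pkg, plugin_modules[0])

# Alternatively, importlib returns the submodule directly:
import importlib
plugin = importlib.import_module('Plugins.' + plugin_modules[0])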
