How do I do from some.module import * where name of the module is defined in a string variable?
This code imports all symbols from os:
import importlib

# Load the module as `module`
module = importlib.import_module("os")

# Now extract the attributes into the locals() namespace, as
# `from .. import *` would do
if hasattr(module, "__all__"):
    # A module can define __all__ to explicitly list which names
    # are imported by the `.. import *` statement
    attrs = {key: getattr(module, key) for key in module.__all__}
else:
    # Otherwise, the statement imports all names that do not start
    # with an underscore
    attrs = {key: value for key, value in module.__dict__.items()
             if not key.startswith("_")}

# Copy the attributes into the locals() namespace
locals().update(attrs)
See e.g. this question for more information on the logic behind the from ... import * operation.
Now while this works, you should not use this code. It is already considered bad practice to import all symbols from a named module, but it certainly is even worse to do this with a user-given name. Search for PHP's register_globals if you need a hint for what could go wrong.
In Python, the built-in __import__ function accomplishes the same goal as the import statement, but it is an actual function, and it takes a string as an argument.
sys = __import__('sys')
The variable sys is now the sys module, just as if you had said import sys.
I want to write a function which takes the name of a variable, a file name, and a third string, and tries to import the given variable from the file and if it can not do that, it sets the variable to the third string. Let me show you. This is in my config.py:
variable = 'value'
This is my function (it doesn't work):
#!/usr/bin/python
def importvar(var, fname, notfound):
    try:
        from fname import var
    except:
        var = notfound
    return var

value = importvar('variable', 'config', 'value not found')
print value  # prints 'value not found'
This is what I am trying to achieve:
from config import variable
print variable #prints 'value'
This question is similar to "How to use a variable name as a variable in python?", but the answers I found to those didn't seem to work for me. I don't necessarily need to store them in a variable, but I couldn't come up with anything better. I know this is a perfect example of "What you shouldn't do in python", but I still need this. Thanks for the help!
What you want is to dynamically import a module given a string containing its path. You can do this with import_module from the importlib package.
import importlib

def importvar(var, fname, notfound):
    try:
        return getattr(importlib.import_module(fname), var)
    except (ImportError, AttributeError):
        return notfound
This should give you the clue:
>>> from importlib import import_module
>>> config = import_module('config')
>>> print( getattr(config, 'variable') )
value
See the docs for getattr.
Basically, getattr(x, 'variable') is equivalent to x.variable
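A quick demonstration of that equivalence, using the stdlib math module:

```python
import math

# getattr with a string name is the same as direct attribute access
assert getattr(math, 'pi') == math.pi

# An optional third argument provides a default for missing attributes
missing = getattr(math, 'no_such_name', None)
print(missing)  # → None
```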
A function that performs the import and returns the imported variable:
def importvar(var, fname, notfound):
    try:
        exec('from {f} import {v}'.format(f=fname, v=var))
        return locals().get(var, notfound)
    except ImportError:
        return notfound
If you just want a simple import from a string, the __import__ builtin may be good enough. It takes the module name as a string and returns it. If you also need to get an attribute from it programmatically use the builtin getattr, which takes the attribute name as a string.
If you're trying to import a package submodule, though, importlib.import_module is easier--you can import a name with a dot in it and get the module directly. This just calls __import__ for you. Compare __import__("logging.config").config vs import_module("logging.config").
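A small check of that difference, using the stdlib logging.config submodule:

```python
import importlib

# __import__ with a dotted name returns the *top-level* package,
# so you have to walk down to the submodule yourself
config_via_dunder = __import__('logging.config').config

# import_module returns the submodule directly
config_direct = importlib.import_module('logging.config')

assert config_via_dunder is config_direct
print(config_direct.__name__)  # → logging.config
```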
If you're trying to import an arbitrary file not on the Python path, it gets a little more involved. The Python docs have a recipe for this.
import importlib.util
spec = importlib.util.spec_from_file_location(module_name, file_path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
Unlike __import__, this doesn't add the module to the cache, because it doesn't have a canonical import name. But you can add it yourself (using whatever name you want) if you want to import it normally later, e.g.
import sys
sys.modules["foo_module"] = module
After running this, it allows you to get the same module instance again with a simple
import foo_module
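Putting the recipe together, here is a self-contained sketch; the name foo_module and the generated file are made up for the demonstration:

```python
import importlib.util
import os
import sys
import tempfile

# Create a throwaway module file to load from an arbitrary path
tmp = tempfile.mkdtemp()
file_path = os.path.join(tmp, "foo.py")
with open(file_path, "w") as f:
    f.write("answer = 42\n")

# Load it via the documented recipe
spec = importlib.util.spec_from_file_location("foo_module", file_path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

# Register it ourselves so a normal import finds the same instance
sys.modules["foo_module"] = module

import foo_module
assert foo_module is module
print(foo_module.answer)  # → 42
```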
What is the equivalent of import * in Python using functions (presumably from importlib)?
I know that you can import a module with mod = __import__(...), which will delegate to whatever the currently configured implementation is. You can also do something like
mod_spec = importlib.util.spec_from_file_location(...)
mod = importlib.util.module_from_spec(mod_spec)
mod_spec.loader.exec_module(mod)
which allows you to do crazy things like injecting things into the module by inserting them before the call to exec_module. (Courtesy of https://stackoverflow.com/a/67692/2988730 and https://stackoverflow.com/a/38650878/2988730)
However, my question remains. How does import * work in function form? What function determines which names to load from a module depending on the presence/contents of __all__?
There's no function for from whatever import *. In fact, there's no function for import whatever, either! When you do
mod = __import__(...)
the __import__ function is only responsible for part of the job. It provides you with a module object, but you have to assign that module object to a variable separately. There's no function that will import a module and assign it to a variable the way import whatever does.
In from whatever import *, there are two parts:
prepare the module object for whatever
assign variables
The "prepare the module object" part is almost identical to in import whatever, and it can be handled by the same function, __import__. There's a minor difference in that import * will load any not-yet-loaded submodules in a package's __all__ list; __import__ will handle this for you if you provide fromlist=['*']:
module = __import__('whatever', fromlist=['*'])
The part about assigning names is where the big differences occur, and again, you have to handle that yourself. It's fairly straightforward, as long as you're at global scope:
if hasattr(module, '__all__'):
    all_names = module.__all__
else:
    all_names = [name for name in dir(module) if not name.startswith('_')]

globals().update({name: getattr(module, name) for name in all_names})
Function scopes don't support assigning variables determined at runtime.
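In fact, CPython rejects a star import inside a function at compile time, which you can verify by compiling a small snippet:

```python
import textwrap

# "from os import *" inside a function body is a SyntaxError in Python 3
src = textwrap.dedent("""
    def f():
        from os import *
""")
try:
    compile(src, "<demo>", "exec")
    rejected = False
except SyntaxError as exc:
    rejected = True
    print("SyntaxError:", exc.msg)

assert rejected
```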
I'm searching for how to do
from myLib import *
inside my Python code, in order to do the import in a loop.
The __import__() method does not seem to provide the * feature, as I have to specify every name I want to import.
Is there a way to do the *?
Thanks a lot for your help.
EDIT:
To clarify: the goal is to import a bunch of classes that live in several modules inside a package, and to access them directly through their class names, not as myPackage.myModule.myClass() or myModule.myClass(), but just myClass().
imagine you have:
myScript.py
myPackage/
    __init__.py
    myModule_0.py
    myModule_1.py
    myModule_2.py
    myModule_3.py
    myModule_4.py
Each myModule_X.py contains a bunch of classes. While editing myScript.py, you want to have access to all the classes in each myModule_X.py like:
myClass()
myOtherClass()
myOtherOtherClass()
and so on, not like myModule_X.myClass() nor myPackage.myModule_X.myClass().
__import__ returns the imported module's namespace. If you want to do import * from it, then you can iterate over that namespace and stuff all the module's names into your module's globals, which is what from modulename import * does. You probably shouldn't, just like you shouldn't use import * (except more so because you don't even know what module you're importing) but you can.
module = __import__("modulename")
if hasattr(module, "__all__"):  # module tells us which names are its public names
    globals().update((name, getattr(module, name)) for name in module.__all__)
else:  # import all non-private names
    globals().update((name, getattr(module, name)) for name in dir(module)
                     if not name.startswith("_"))
You could also write it like so, which is a little safer since it avoids clobbering any global names already defined (at the risk of potentially not having a name you need):
module = __import__("modulename")
if hasattr(module, "__all__"):  # module tells us which names are its public names
    globals().update((name, getattr(module, name)) for name in module.__all__
                     if name not in globals())
else:  # import all non-private names
    globals().update((name, getattr(module, name)) for name in dir(module)
                     if not (name.startswith("_") or name in globals()))
import myLib
will import the package, but I advise against an import-all. To use its contents, you would prefix names with the module path:
myLib.my_module
If you only want to import certain things on the fly you'd want to do a conditional import, eg:
if condition_met:
    import myLib.my_module
import * wreaks havoc with static code checking and debugging, so I don't recommend using it in a script. Assuming you're not trying to do something ill-advised with this, you might consider using the __all__ attribute to get a list of strings of the members of the package.
import my_package

for sub_package in my_package.__all__:
    print("found " + sub_package)
I have a system that collects all classes that derive from certain base classes and stores them in a dictionary. I want to avoid having to specify which classes are available (I would like to discover them programmatically), so I have used a from ModuleName import * statement. The user is then directed to place all tests to be collected in the ModuleName module. However, I cannot find a way to programmatically determine which symbols were imported with that import statement. I have tried using dir() and __dict__ as indicated in the following example, but to no avail. How does one programmatically find symbols imported in this manner (with import *)? I am unable to find them with the above methods.
testTypeFigureOuterrer.py:
from testType1 import *
from testType2 import *

class TestFigureOuterrer(object):

    def __init__(self):
        self.existingTests = {'type1': {}, 'type2': {}}

    def findAndSortTests(self):
        for symbol in dir():  # Also tried: dir(self) and __dict__
            try:
                thing = self.__getattribute__(symbol)
            except AttributeError:
                continue
            if issubclass(thing, TestType1):
                self.existingTests['type1'].update(dict(symbol, thing))
            elif issubclass(thing, TestType2):
                self.existingTests['type2'].update(dict(symbol, thing))
            else:
                continue

if __name__ == "__main__":
    testFigureOuterrer = TestFigureOuterrer()
    testFigureOuterrer.findAndSortTests()
testType1.py:
class TestType1(object):
    pass

class TestA(TestType1):
    pass

class TestB(TestType1):
    pass
testType2.py:
class TestType2:
    pass

class TestC(TestType2):
    pass

class TestD(TestType2):
    pass
Since you know the imports yourself, you should just import the module manually again, and then check the contents of the module. If an __all__ attribute is defined, its contents are the names imported when you do from module import *. Otherwise, just use all its public members:
def getImportedNames(module):
    names = module.__all__ if hasattr(module, '__all__') else dir(module)
    return [name for name in names if not name.startswith('_')]
This has the benefit that you do not need to go through the globals, and filter everything out. And since you know the modules you import from at design time, you can also check them directly.
from testType1 import *
from testType2 import *
import testType1, testType2
print(getImportedNames(testType1))
print(getImportedNames(testType2))
Alternatively, you can also look up the module by its module name from sys.modules, so you don’t actually need the extra import:
import sys
def getImportedNames(moduleName):
    module = sys.modules[moduleName]
    names = module.__all__ if hasattr(module, '__all__') else dir(module)
    return [name for name in names if not name.startswith('_')]
print(getImportedNames('testType1'))
print(getImportedNames('testType2'))
Take a look at this SO answer, which describes how to determine the names of loaded classes; with it you can get the name of every class defined within the context of the module.
import sys, inspect
clsmembers = inspect.getmembers(sys.modules['testType1'], inspect.isclass)
which is now defined as
[('TestA', testType1.TestA),
 ('TestB', testType1.TestB),
 ('TestType1', testType1.TestType1)]
You can also replace testType1 with __name__ when you're within the function of interest.
Don't use the * form of import. This dumps the imported names into your script's global namespace. Not only could they clobber some important bit of data by using the same name, you don't have any easy way to fish out the names you just imported. (Easiest way is probably to take a snapshot of globals().keys() before and after.)
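That snapshot idea can be sketched like this (using math as a stand-in module):

```python
# Record the global names before the star import, then diff afterwards
# to see exactly which names it added.
before = set(globals())

from math import *  # noqa: F401,F403

imported = set(globals()) - before - {"before"}
print(sorted(imported)[:4])
```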
Instead, import just the module:
import testType1
import testType2
Now you can easily get a list of what's in each module:
tests = dir(testType1)
And access each using getattr() on the module object:
for testname in tests:
    test = getattr(testType1, testname)
    if callable(test):
        ...  # do something with it
I want my_module to export __all__ as empty list, i.e.
from my_module import *
assert '__all__' in dir() and __all__ == []
I can export __all__ like this (in 'my_module.py'):
__all__ = ['__all__']
However, it predictably binds __all__ to itself, so that
from my_module import *
assert '__all__' in dir() and __all__ == ['__all__']
How can I export __all__ as an empty list? Failing that, how can I hook into import process to put __all__ into importing module's __dict__ on every top level import my_module statement, circumventing module caching.
I'll start with saying this is, in my mind, a terrible idea. You really should not implicitly alter what is exported from a module, this goes counter to the Zen of Python: Explicit is better than implicit..
I also agree with the highest-voted answer on the question you cite; Python already has a mechanism to mark functions 'private', by convention we use a leading underscore to indicate a function should not be considered part of the module API. This approach works with existing tools, vs. the decorator dynamically setting __all__ which certainly breaks static code analysers.
That out of the way, here is a shotgun pointing at your foot. Use it with care.
What you want here is a way to detect when names are imported. You cannot normally do this; there are no hooks for import statements. Once a module has been imported from source, a module object is added to sys.modules and re-used for subsequent imports, but that object is not notified of imports.
What you can do is hook into attribute access. Not with the default module object, but you can stuff any object into sys.modules and it'll be treated as a module. You could just subclass the module type even, then add a __getattribute__ method to that. It'll be called when importing any name with from module import name, for all names listed in __all__ when using from module import *, and in Python 3, __spec__ is accessed for all import forms, even when doing just import module.
You can then use this to hack your way into the calling frame globals, via sys._getframe():
import sys
import types

class AttributeAccessHookModule(types.ModuleType):
    def __getattribute__(self, name):
        if name == '__all__':
            # assume we are being imported with from module import *
            g = sys._getframe(1).f_globals
            if '__all__' not in g:
                g['__all__'] = []
        return super(AttributeAccessHookModule, self).__getattribute__(name)

# Replace *this* module with our hacked-up version.
# This part goes at the *end* of your module.
replacement = sys.modules[__name__] = AttributeAccessHookModule(__name__, __doc__)
for name, obj in globals().items():
    setattr(replacement, name, obj)
The answer there sets __all__ on the first decorator application, so not explicitly exporting anything causes it to implicitly export everything. I am trying to improve on this design: if the decorator is imported, then export nothing by default, regardless of its usage.
Just set __all__ to an empty list at the start of your module, e.g.:
# this is my_module.py
from utilitymodule import public
__all__ = []
# and now you could use your @public decorator to optionally add names to it