If a large module is loaded by some submodule of your code, is there any benefit to referencing the module from that namespace instead of importing it again?
For example:
I have a module MyLib, which makes extensive use of ReallyBigLib. If I have code that imports MyLib, should I dig the module out like so
import MyLib
ReallyBigLib = MyLib.SomeModule.ReallyBigLib
or just
import MyLib
import ReallyBigLib
Python modules can be considered singletons: no matter how many times you import them, they are initialized only once, so it's better to do:
import MyLib
import ReallyBigLib
Relevant documentation on the import statement:
https://docs.python.org/2/reference/simple_stmts.html#the-import-statement
Once the name of the module is known (unless otherwise specified, the term “module” will refer to both packages and modules), searching for the module or package can begin. The first place checked is sys.modules, the cache of all modules that have been imported previously. If the module is found there then it is used in step (2) of import.
The imported modules are cached in sys.modules:
This is a dictionary that maps module names to modules which have already been loaded. This can be manipulated to force reloading of modules and other tricks. Note that removing a module from this dictionary is not the same as calling reload() on the corresponding module object.
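A quick way to see the cache in action (using math here, but any module works):
import sys
import math

print("math" in sys.modules)        # True: cached after the first import
print(sys.modules["math"] is math)  # True: the import bound the cached object

import math as math_again           # the second import is just a cache lookup
print(math_again is math)           # True: the very same module object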
As others have pointed out, Python maintains an internal list of all modules that have been imported. When you import a module for the first time, the module (a script) is executed in its own namespace until the end, the internal list is updated, and execution continues after the import statement.
Try this code:
# module/file a.py
print "Hello from a.py!"
import b
# module/file b.py
print "Hello from b.py!"
import a
There is no loop: there is only a cache lookup.
>>> import b
Hello from b.py!
Hello from a.py!
>>> import a
>>>
One of the beauties of Python is how everything devolves to executing a script in a namespace.
It makes no substantial difference. If the big module has already been loaded, the second import in your second example does nothing except add the name ReallyBigLib to the current namespace.
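Using the question's hypothetical names, you can verify that both spellings end up bound to the very same module object:
import MyLib
import ReallyBigLib

# One cached module object, two ways to reach it:
print(ReallyBigLib is MyLib.SomeModule.ReallyBigLib)  # True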
WARNING: Python does not guarantee that a module will not be initialized twice.
I've stumbled upon such an issue. See this discussion:
http://code.djangoproject.com/ticket/8193
The internal registry of imported modules is the sys.modules dictionary, which maps module names to module objects. You can look there to see all the modules that are currently imported.
You can also pull some useful tricks (if you need to) by monkeying with sys.modules - for example adding your own objects as pseudo-modules which can be imported by other modules.
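A minimal sketch of that pseudo-module trick (the name fake_config is made up for illustration):
import sys
import types

# Hand-craft a module object and register it in the import cache:
fake = types.ModuleType("fake_config")
fake.DEBUG = True
sys.modules["fake_config"] = fake

import fake_config  # found in sys.modules, so no file search happens
print(fake_config.DEBUG)  # True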
It is the same performance-wise. There is no JIT compiler in Python yet.
Related
I have an Excel sheet containing thousands of import statements, e.g.:
from XYZ.loghelper import LogHelper
import os
from models import CustomUser, VerticalApp
from django.http import HttpResponse
Some of them are built-in and some are user-defined.
Now I have to find out whether each one is user-defined or built-in.
How can I do that?
I assume that by builtin you mean "part of Python's stdlib" (Python's "builtin" features, being "builtin", don't need to be imported at all). The definition of "user defined" is much more vague - is a 3rd-party package like Django "builtin" or "user-defined"?
But anyway: the short answer is that technically you CAN NOT tell just from the import statement. Modules are looked up in sys.path and the first matching module will be selected, so if you have a module named "os.py" in a local directory that comes before your Python installation's stdlib directory in sys.path then "import os" will indeed import your own "os.py" module instead of the stdlib's one. IOW, you need to use the exact same environment, import the module, and check the module's __file__ attribute to find out where it's been imported from.
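For example (the path shown is just what a typical Linux install might print):
import os
print(os.__file__)  # e.g. /usr/lib/python3.11/os.py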
Now most Python devs try to avoid shadowing stdlib module names (for obvious reasons), so you can also just build a list (technically you want a set for better performance, but anyway) of the stdlib's module names, parse your import statements, and check if the name of the module to be imported belongs to the set of stdlib names. This should yield correct results in most cases, but it's not guaranteed to be 100% accurate.
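Here is a minimal sketch of that approach. It assumes Python 3.10+, where sys.stdlib_module_names provides exactly such a set of names, and uses ast to parse the import statements:
import ast
import sys  # sys.stdlib_module_names requires Python 3.10+

def classify_imports(source):
    """Map each imported module name to True (stdlib) or False (other)."""
    results = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            # Only the top-level name matters: "django.http" -> "django"
            results[name] = name.split(".")[0] in sys.stdlib_module_names
    return results

statements = "import os\nfrom django.http import HttpResponse"
for name, is_stdlib in classify_imports(statements).items():
    print(name, "-> stdlib" if is_stdlib else "-> third-party/user-defined")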
Let's say I have imported two modules like this:
from module0 import hello_func
from directory.module1 import hello_var
Where in module0:
def hello_func(): return "hello from module0"
And module1:
hello_var = "hello from module1"
How can I know from which file is each object being imported?
I tried checking the locals() function, but nothing in there gives a reference to the file...
Actually, you kind of answered your question yourself:
Let's say I have imported two modules
(insert "from xxx import *" here)
How can I know from which file is each object being imported?
One of the reasons NOT to use wildcard imports is precisely to make it clear where names are imported from (the other one being to avoid having one imported name shadow a previously imported one - something that tends to break your code in the most unexpected - and sometimes quite hard to spot - ways).
Note that in your edited question:
from module0 import hello_func
from directory.module1 import hello_var
you already have a much better idea where a name comes from. Not the exact file paths yet, but at least the name of the package/module.
And that's one of the main reasons why one should NOT use wildcard imports.
Now if you want to know the exact file paths, you have two distinct cases.
Some objects keep track of where they were created (mostly modules, classes, functions, etc - cf the list of types supported by inspect.getfile()), and then, well, you already know the answer (use inspect.getfile() xD).
But most types won't (because there's no reason for it). In this case, you have to know which module they were imported from and call inspect.getfile() on the module itself. If you used wildcard imports, you will have to manually inspect all the modules you imported from to find out which one defined this name. Enjoy. Especially if one of those modules also used wildcard imports...
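A minimal sketch of both cases, assuming the question's module0 and directory/module1 layout:
import inspect
import sys

from module0 import hello_func
from directory.module1 import hello_var

# A function remembers where it was defined:
print(inspect.getfile(hello_func))

# A plain string doesn't, so ask the module it was imported from:
print(inspect.getfile(sys.modules["directory.module1"]))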
one question please: where do they keep these traces? And what do the traces look like?
Modules keep it in their __file__ attribute. Functions and classes keep a reference to their module's name in their __module__ attribute, and from this you can retrieve the module from the sys.modules dict (a cache of all modules already imported in the current process), which will give you the file.
I never had a need to search this info for tracebacks, frames, code objects etc so you'll have to check it yourself I'm afraid ;-)
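For instance, here is the __module__ -> sys.modules -> __file__ chain on a stdlib function:
import sys
from json import dumps

mod = sys.modules[dumps.__module__]  # dumps.__module__ is 'json'
print(mod.__file__)                  # e.g. .../lib/python3.11/json/__init__.py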
You can define in each module a constant with its path, something like this should work:
import os
FILE_PATH = os.path.abspath(__file__)
When you import that module you can access its location like this:
import module
print(module.FILE_PATH)
Another solution using the inspect and os modules.
import module0
import os
import inspect
print(os.path.abspath(inspect.getfile(module0.hello_func)))
If you are looking for the absolute path of the directory containing the current file, this should work:
import os
abs_path = os.path.dirname(os.path.abspath(__file__))
print(abs_path)
Let's say I have 3 modules within the same directory (module1, module2, module3).
Suppose the 2nd module imports the 3rd module. Then, if I import module2 in module1, does that automatically import module3 into module1?
Thanks
No. An import only binds the imported name inside the module that performs it. You can verify that with a quick test:
# module1
import module2
# module2
import module3
# in module1
module3.foo() # oops - NameError: name 'module3' is not defined
This is reasonable because you can think of it in reverse: if an import pulled a whole chain of further imports into the importer's namespace, it would be hard to decide which function came from which module, causing complex naming conflicts.
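Note that module3 is still loaded and cached in sys.modules; it just isn't bound as a name in module1. It remains reachable as an attribute of module2:
# in module1
import module2

module2.module3.foo()  # works: "import module3" bound module3 on module2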
No, it will not be imported unless you explicitly tell Python to, like so:
from module2 import *
What importing does conceptually is outlined below.
import some_module
The statement above is equivalent to:
module_variable = import_module("some_module")
All we have done so far is bind some object to a variable name.
When it comes to the implementation of import_module it is also not that hard to grasp.
import sys

def import_module(module_name):
    if module_name in sys.modules:
        # Already imported: reuse the cached module object.
        module = sys.modules[module_name]
    else:
        # The helpers below are pseudo-functions, named for illustration only:
        filename = find_file_for_module(module_name)
        python_code = open(filename).read()
        module = create_module_from_code(python_code)
        # Cache the new module so later imports find it.
        sys.modules[module_name] = module
    return module
First, we check if the module has been imported before. If it was, then it will be available in the global list of all modules (sys.modules), and so will simply be reused. If the module is not available yet, we create it from the code. Once the function returns, the module will be assigned to the variable name that you have chosen. As you can see, the process is not inefficient or wasteful: all you are doing is creating an alias for your module. In most cases, transparency is preferred, hence having a quick look at the top of the file can tell you what resources are available to you. Otherwise, you might end up in a situation where you are wondering where a given resource is coming from. So that is why you do not get modules inherently "imported".
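The real counterpart of the sketch above is importlib.import_module, which goes through the same sys.modules cache:
import importlib
import sys

mod = importlib.import_module("json")
print(mod is sys.modules["json"])              # True: the cached object
print(mod is importlib.import_module("json"))  # True: second call is a lookup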
Resource:
Python doc on importing
I'm noticing some weird situations where tests like the following fail:
x = <a function from some module, passed around some big application for a while>
mod = __import__(x.__module__)
x_ref = getattr(mod, x.__name__)
assert x_ref is x # Fails
(Code like this appears in the pickle module)
I don't think I have any import hooks, reload calls, or sys.modules manipulation that would mess with python's normal import caching behavior.
Is there any other reason why a module would be loaded twice? I've seen claims about this (e.g, https://stackoverflow.com/a/10989692/1332492), but I haven't been able to reproduce it in a simple, isolated script.
I believe you misunderstood how __import__ works:
>>> from my_package import my_module
>>> my_module.function.__module__
'my_package.my_module'
>>> __import__(my_module.function.__module__)
<module 'my_package' from './my_package/__init__.py'>
From the documentation:
When the name variable is of the form package.module, normally, the top-level package (the name up till the first dot) is returned, not the module named by name. However, when a non-empty fromlist argument is given, the module named by name is returned.
As you can see, __import__ does not return the submodule, but only the top-level package. If you also have function defined at the package level, you will indeed end up with two different references to it.
If you want to just load a module you should use importlib.import_module instead of __import__.
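A quick comparison, assuming the my_package/my_module layout from the session above:
import importlib

# __import__ hands back the top-level package...
pkg = __import__("my_package.my_module")
print(pkg.__name__)  # 'my_package'

# ...while importlib.import_module returns the named submodule itself:
mod = importlib.import_module("my_package.my_module")
print(mod.__name__)  # 'my_package.my_module'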
As to your actual question: AFAIK there is no way to import the same module, with the same name, twice without messing around with the importing mechanism. However, you could have a submodule of a package that is also available on sys.path directly; in this case you can import it twice under different names:
from some.package import submodule
import submodule as submodule2
print(submodule is submodule2) # False. They have *no* relationship.
This sometimes can cause problems with, e.g., pickle. If you pickle something referenced by submodule you cannot unpickle it using submodule2 as reference.
However, this doesn't address the specific example you gave us, because importing via the __module__ attribute should return the correct module.
A Python namespace package can be spread over many directories, and zip files or custom importers. What's the correct way to iterate over all the importable submodules of a namespace package?
Here is a way that works well for me. Create a new submodule all.py, say, in one of the packages in the namespace.
If you write
import mynamespace.all
you are given the object for the mynamespace module. This object contains all of the loaded modules in the namespace, irrespective of where they were loaded, since there is only one instance of mynamespace around.
So, just load all the packages in the namespace in all.py!
# all.py
from pkgutil import iter_modules
# import this module's namespace (= parent) package
pkg = __import__(__package__)
# iterate all modules in pkg's paths,
# prefixing the returned module names with namespace-dot,
# and import the modules by name
for m in iter_modules(pkg.__path__, __package__ + '.'):
__import__(m.name)
Or in a one-liner that keeps the all module empty, if you care about that sort of thing:
# all.py
(lambda: [__import__(_.name) for _ in __import__('pkgutil').iter_modules(__import__(__package__).__path__, __package__ + '.')])() # noqa
After importing the all module from your namespace, you then actually receive a fully populated namespace module:
import mynamespace.all
mynamespace.mymodule1 # works
mynamespace.mymodule2 # works
...
Of course, you can use the same mechanism to enumerate or otherwise process the modules in the namespace, if you do not want to import them immediately.
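For instance, to enumerate the submodules without importing them (reusing the mynamespace name from above):
from pkgutil import iter_modules
import mynamespace  # the namespace package from the example above

# Walk every directory on the namespace package's __path__:
for info in iter_modules(mynamespace.__path__, mynamespace.__name__ + "."):
    print(info.name, "(package)" if info.ispkg else "(module)")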
Please read import confusion.
It very clearly distinguishes all the different ways you can import packages and their submodules, and in the process answers your question. When you need a certain submodule from a package, it's often much more convenient to write from io.drivers import zip than import io.drivers.zip, since the former lets you refer to the module simply as zip instead of its full name.
from modname import * provides an easy way to import all the items from a module into the current namespace; however, this statement should be used sparingly.