Does a Python dynamic import of a module generate a .pyc?

I would like to have dotted name strings that I can evaluate. Those dotted names could point to functions from new files the project does not know about (so someone can quickly add new functionality to the project without being part of the development team).
Right now, I resolve and compile the dotted names using a library (pyramid) then I save the compiled function object somewhere to be able to use it later.
I've seen that importlib let us import a module and it works perfectly fine, like so:
importlib.import_module('my_library')
Still, normally when you import a module, a .pyc is generated so that later imports don't take as long (the source doesn't have to be compiled again).
Do imports using importlib create .pyc files?
If not, would adding it to locals() change anything? (Adding it to globals() did not seem to work for me) Like so:
locals()['my_library'] = importlib.import_module('my_library')
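If it helps, here is the quick check I had in mind (a sketch only; my_library stands in for whatever ordinary .py module is being imported, and it assumes the module lives on sys.path):

import importlib
import pathlib
import sys

mod = importlib.import_module('my_library')

print(mod.__cached__)            # the .pyc path the import machinery uses for this module
print(sys.dont_write_bytecode)   # if True, Python never writes .pyc files

# List whatever bytecode files actually exist next to the source.
cache_dir = pathlib.Path(mod.__file__).parent / '__pycache__'
print(sorted(cache_dir.glob('*.pyc')))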

Related

PyCharm procedural __all__ generation and syntax highlighting

I'm using this decorator to manage __all__ in a DRY manner:
import sys

def export(obj):
    mod = sys.modules[obj.__module__]
    if hasattr(mod, '__all__'):
        mod.__all__.append(obj.__name__)
    else:
        mod.__all__ = [obj.__name__]
    return obj
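For context, here is a minimal usage sketch (mymodule and the import path for the decorator are made up):

# mymodule.py -- hypothetical module that uses the decorator above
from myproject.helpers import export   # wherever you keep the decorator

@export
def public_function():
    return "reachable via from mymodule import *"

def _internal_function():
    return "not added to __all__"

# After importing mymodule, mymodule.__all__ == ['public_function'].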
For names imported with import * PyCharm issues an unresolved reference error, which is understandable, since it doesn't run the code before analysis. But it is an obvious inconvenience.
How would you solve it (or maybe already solved)?
My assumptions:
Adding some automatic linter plugin or altering PyCharm's existing inspection code would be fine.
Something that's actually editing a .py source is viable, but not fine.
This method is probably not the best one, therefore suggesting another convenient technique of dealing with exports is fine too.
You may be interested in an alternative approach to managing __all__:
https://pypi.org/project/auto-all/
This provides a start_all() and end_all() function to place in your module around the items you want to make accessible. This approach works with PyCharm's code inspection.
from auto_all import start_all, end_all

# Imports outside the start and end function calls are not included in __all__.
from pathlib import Path

def a_private_function():
    print("This is a private function.")

# Start defining externally accessible objects.
start_all(globals())

def a_public_function():
    print("This is a public function.")

# Stop defining externally accessible objects.
end_all(globals())
I feel like this is a reasonable approach to managing __all__, and one that I have used on more complex packages. The source code for the package is small, so it could easily be included directly in your code to avoid external dependencies if you need to.
The reason I use this is that I have some modules where lots of items need to be "exported", and I want to keep imported items out of the export list. I have multiple developers working on the code, and it's easy to add new items and forget to include them in __all__, so automating this helps.

Why do Python modules altered during execution persist over separate files?

Sorry for the confusing title; let me explain what I mean. I came across a piece of code similar to the following using Google's PrettyTensor API, which allows custom functions to be added to the PrettyTensor class through its @prettytensor.Register() decorator.
(located in custom_ops.py)
import prettytensor as pt

@pt.Register(...)
def custom_foo(bar):
    ...
(located in main.py)
import prettytensor as pt
import custom_ops
x = pt.custom_foo(bar)
This code accesses prettytensor through 2 separate files, and I don't understand why the changes made in one file carry over to the other. What's also interesting is that the order of the imports doesn't matter.
import custom_ops
import prettytensor as pt
x = pt.custom_foo(bar)
The code above still works fine. I would like help finding an explanation for this phenomenon, as I could not find documentation for it anywhere. It seems to me like the python interpreter is caching the module in memory, and when it is altered by the custom_ops file it persists in the interpreter when it is imported again. If anyone knows why this happens, how would you stop it from occurring?
The reason both your modules see the same version of the prettytensor module is that Python caches the module objects it creates when it loads a module for the first time. The same module object can then be imported any number of times in different places (or even several times within the same module, if you had a reason to do that), without being reloaded from its file.
You can see all the modules that have been loaded in the dictionary sys.modules. Whenever you do an import of a module that's already been loaded, Python will see it in sys.modules and you'll get a reference to the module object that already exists instead of a new module loaded from the .py file.
In general, this is what you want. It's usually a very bad thing if two different parts of the code can get a reference to a module loaded from the same file via two different module names. For instance, you can have two objects that both claim to be instances of class foo.Foo, but they could be instances of two different foo.Foo classes if foo can be accessed two different ways. This can make debugging a real nightmare.
Duplicated modules can happen if your Python module search path is messed up (so that the modules inside a package are also exposed at the top level). It can also happen with the __main__ module (created from the file you're running as a script), which can also be imported using its normal name (e.g. main in your example with main.py).
You can also manually reload a module using the reload function. In Python 2 this was a builtin, but it's stashed away in importlib now in Python 3.
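A minimal sketch of both points, assuming some module named mymod exists on your sys.path:

import sys
import importlib

import mymod            # first import: executes mymod.py and caches the module
import mymod as again   # second import: just a lookup in sys.modules

print(mymod is again)                    # True -- the very same module object
print(mymod is sys.modules['mymod'])     # True

# Force the module's top-level code to run again (Python 3):
importlib.reload(mymod)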

Proper way of setting classes and constants in python package

I'm writing a small package for internal use and have come to a design problem. I define a few classes and constants (e.g., a server IP address) in some file, let's call it mathfunc.py. Now, some of these classes and constants will be used in other files in the same package. My current setup is like this:
/mypackage
__init__.py
mathfunc.py
datefunc.py
So, at the moment I think I have to import mathfunc.py in datefunc.py to use the classes defined there (or alternatively import both of them all the time). This sounds wrong to me because then I'll be in a lot of pain importing lots of files everywhere. Is it a proper design at all or there is some other way? Maybe I can put all definitions in some file which will not be a subpackage on its own, but will be used by all other files?
Nope, that's pretty much how Python works. If you want to use objects declared in another file, you have to import from it.
Tips:
You can keep your namespace clean by only importing the things you need, rather than using from foo import *.
If you really need to do a "circular import" (where A needs things in B, and B needs things in A) you can solve that by only importing inside the functions where you need the object, not at the top of a file.
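For example, a sketch of that deferred-import trick (the module names a and b are illustrative):

# a.py -- only needs b's helper at call time, so import it inside the function
def use_b():
    from b import helper   # deferred import: b is loaded the first time this runs
    return helper()

# b.py -- needs use_b at import time, which now works because a no longer
# imports b at its top level
from a import use_b

def helper():
    return "called from b"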

Import statement: Config file Python

I'm maintaining a dictionary that is loaded inside the config file. The dictionary is loaded from a JSON file.
In config.py
import json

name_dict = json.load(open(dict_file))
I'm importing this config file in several other scripts (file1.py, file2.py, ..., filen.py) using an
import config
statement. My question is: when will the config.py script be executed? I'm sure it won't be executed for every import call made from my multiple scripts. But what exactly happens when an import statement is executed?
The top-level code in a module is executed once, the first time you import it. After that, the module object will be found in sys.modules, and the code will not be re-executed to re-generate it.
There are a few exceptions to this:
reload, obviously.
Accidentally importing the same module under two different names (e.g., if the module is in a package, and you've got some directory in the middle of the package in sys.path, you could end up with mypackage.mymodule and mymodule being two copies of the same thing, in which case the code gets run twice).
Installing import hooks/custom importers that replace the standard behavior.
Explicitly monkeying with sys.modules.
Directly calling functions out of imp/importlib or the like.
Certain cases with multiprocessing (and modules that use it indirectly, like concurrent.futures).
For Python 3.1 and later, this is all described in detail under The import system. In particular, look at the Searching section. (The multiprocessing-specific cases are described for that module.)
For earlier versions of Python, you pretty much have to infer the behavior from a variety of different sources and either reading the code or experimenting. However, the well-documented new behavior is intended to work like the old behavior except in specifically described ways, so you can usually get away with reading the 3.x docs even for 2.x.
Note that in general, you don't want to rely on whether top-level code in the module is run once or multiple times. For example, given a top-level function definition, as long as you never compare function objects, or rebind any globals that it (meaning the definition itself, not just the body) depends on, it doesn't make any difference. However, there are some exceptions to that, and loading start-time config files is a perfect example of an exception.
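If you want to convince yourself of the once-only behaviour, a quick sketch (the print is just for demonstration, and names.json stands in for your dict_file):

# config.py
import json
print("loading config...")                  # side effect: runs only on the first import
name_dict = json.load(open("names.json"))

# elsewhere (file1.py)
import config      # executes config.py, prints "loading config..."
import config      # no output: the cached module in sys.modules is reused

import importlib
importlib.reload(config)   # explicitly re-runs the top-level code (prints again)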

What happens when you import a package?

For efficiency's sake I am trying to figure out how python works with its heap of objects (and system of namespaces, but it is more or less clear). So, basically, I am trying to understand when objects are loaded into the heap, how many of them are there, how long they live etc.
And my question is when I work with a package and import something from it:
from pypackage import pymodule
what objects get loaded into the memory (into the object heap of the python interpreter)? And more generally: what happens? :)
I guess the above example does something like:
some object for the package pypackage was created in memory (which contains some information about the package but not too much), the module pymodule was loaded into memory, and a reference to it was created in the local namespace. The important thing here is: no other modules of pypackage (or other objects) were created in memory, unless stated explicitly (in the module itself, or somewhere in the package initialization hooks, which I am not familiar with). In the end, the only big thing in memory is pymodule (i.e., all the objects that were created when the module was imported). Is that so? I would appreciate it if someone clarified this matter a little bit. Maybe you could recommend a useful article about it? (The documentation covers more particular things.)
I have found the following answer to the same question about module imports:
When Python imports a module, it first checks the module registry (sys.modules) to see if the module is already imported. If that’s the case, Python uses the existing module object as is.
Otherwise, Python does something like this:
Create a new, empty module object (this is essentially a dictionary)
Insert that module object in the sys.modules dictionary
Load the module code object (if necessary, compile the module first)
Execute the module code object in the new module’s namespace. All variables assigned by the code will be available via the module object.
And I would be grateful for the same kind of explanation about packages.
By the way, with packages a module name is added to sys.modules oddly:
>>> import sys
>>> from pypacket import pymodule
>>> "pymodule" in sys.modules.keys()
False
>>> "pypacket" in sys.modules.keys()
True
And also there is a practical question concerning the same matter.
When I build a set of tools that might be used in different processes and programs, I put them in modules. I then have no choice but to load a full module even when all I want is to use a single function declared there. As I see it, one can make this problem less painful by making small modules and putting them into a package (if a package doesn't load all of its modules when you import only one of them).
Is there a better way to make such libraries in Python? (With the mere functions, which don't have any dependencies within their module.) Is it possible with C-extensions?
PS sorry for such a long question.
You have a few different questions here...
About importing packages
When you import a package, the sequence of steps is the same as when you import a module. The only difference is that the package's code (i.e., the code that creates the "module code object") is the code of the package's __init__.py.
So yes, the sub-modules of the package are not loaded unless the __init__.py does so explicitly. If you do from package import module, only module is loaded, unless of course it imports other modules from the package.
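A quick sketch of that, assuming a package layout like the one in your question (othermodule is a hypothetical sibling module):

import sys
from pypackage import pymodule   # runs pypackage/__init__.py, then pymodule

print('pypackage' in sys.modules)              # True: the package itself is loaded
print('pypackage.pymodule' in sys.modules)     # True: keyed by its full dotted name
print('pypackage.othermodule' in sys.modules)  # False, unless __init__.py imports it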
sys.modules names of modules loaded from packages
When you import a module from a package, the name that is added to sys.modules is the "qualified name" that specifies the module name together with the dot-separated names of any packages you imported it from. So if you do from package.subpackage import mod, what is added to sys.modules is "package.subpackage.mod".
Importing only part of a module
It is usually not a big concern to have to import the whole module instead of just one function. You say it is "painful" but in practice it almost never is.
If, as you say, the functions have no external dependencies, then they are just pure Python and loading them will not take much time. Usually, if importing a module takes a long time, it's because it loads other modules, which means it does have external dependencies and you have to load the whole thing.
If your module has expensive operations that happen on module import (i.e., they are global module-level code and not inside a function), but aren't essential for use of all functions in the module, then you could, if you like, redesign your module to defer that loading until later. That is, if your module does something like:
def simpleFunction():
    pass

# open files, read huge amounts of data, do slow stuff here
you can change it to
def simpleFunction():
    pass

def loadData():
    # open files, read huge amounts of data, do slow stuff here
    ...
and then tell people "call someModule.loadData() when you want to load the data". Or, as you suggested, you could put the expensive parts of the module into their own separate module within a package.
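If the data only needs to be loaded once, a cached variant of loadData (a sketch; the placeholder stands in for the slow work) avoids repeating the expensive part:

_data = None

def loadData():
    # Load lazily on the first call and cache the result for later callers.
    global _data
    if _data is None:
        # open files, read huge amounts of data, do slow stuff here
        _data = {}   # placeholder for whatever the expensive load produces
    return _data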
I've never found it to be the case that importing a module caused a meaningful performance impact unless the module was already large enough that it could reasonably be broken down into smaller modules. Making tons of tiny modules that each contain one function is unlikely to gain you anything except maintenance headaches from having to keep track of all those files. Do you actually have a specific situation where this makes a difference for you?
Also, regarding your last point, as far as I'm aware, the same all-or-nothing load strategy applies to C extension modules as for pure Python modules. Obviously, just like with Python modules, you could split things up into smaller extension modules, but you can't do from someExtensionModule import someFunction without also running the rest of the code that was packaged as part of that extension module.
The approximate sequence of steps that occurs when a module is imported is as follows:
Python tries to locate the module in sys.modules and does nothing else if it is found. Packages are keyed by their full name, so while pymodule is missing from sys.modules, pypacket.pymodule will be there (and can be obtained as sys.modules["pypacket.pymodule"]).
Python locates the file that implements the module. If the module is part of the package, as determined by the x.y syntax, it will look for directories named x that contain both an __init__.py and y.py (or further subpackages). The bottom-most file located will be either a .py file, a .pyc file, or a .so/.pyd file. If no file that fits the module is found, an ImportError will be raised.
An empty module object is created, and the code in the module is executed with the module's __dict__ as the execution namespace. [1]
The module object is placed in sys.modules, and injected into the importer's namespace.
Step 3 is the point at which "objects get loaded into memory": the objects in question are the module object, and the contents of the namespace contained in its __dict__. This dict typically contains top-level functions and classes created as a side effect of executing all the def, class, and other top-level statements normally contained in each module.
Note that the above only describes the default implementation of import. There are a number of ways to customize import behavior, for example by overriding the __import__ built-in or by implementing import hooks.
[1] If the module file is a .py source file, it will be compiled into memory first, and the code objects resulting from the compilation will be executed. If it is a .pyc, the code objects will be obtained by deserializing the file contents. If the module is a .so or a .pyd shared library, it will be loaded using the operating system's shared-library loading facility, and its C init function (init<module> in Python 2, PyInit_<module> in Python 3) will be invoked to initialize the module.
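As an aside on the bytecode caching mentioned above: importlib.util can tell you where Python would put the cached bytecode for a given source file (the file name here is illustrative; the exact tag depends on your interpreter):

import importlib.util

# Map a source path to the bytecode cache path Python would use for it.
print(importlib.util.cache_from_source('pypacket/pymodule.py'))
# e.g. pypacket/__pycache__/pymodule.cpython-312.pyc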
