I want to override a module of an existing Python application. The application is structured in modules and submodules and I have registered an extension. Within the extension I want to provide a module (new_module.py) which overrides the original module (module.py). So whenever another module imports it, my version is used.
/application
    __init__.py
    folder_a
    folder_b
        __init__.py
        folder_b_a
            __init__.py
            module.py
/extension
    __init__.py
    new_module.py
I guess this can be achieved by replacing the entry in sys.modules, like this:
import sys
import extension.new_module
sys.modules["application.folder_b.folder_b_a.module"] = extension.new_module
But I am not sure where to put those lines. I tried them in all of the __init__.py files, but it does not work. Or is there another way?
Documented here: https://docs.python.org/2/using/cmdline.html#envvar-PYTHONSTARTUP
Force Python to load your extension via the environment variable, and in that extension import sys and mess about with sys.modules.
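For instance, a minimal sketch of such a startup file (note that PYTHONSTARTUP is only read for interactive sessions; for a normal run the same lines could live in a sitecustomize.py or in the application's own startup code):
# startup.py -- point PYTHONSTARTUP at this file, or reuse the lines elsewhere
import sys
import extension.new_module

# Any later "import application.folder_b.folder_b_a.module" now receives
# the replacement module instead of the original one.
sys.modules["application.folder_b.folder_b_a.module"] = extension.new_module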
This is an extension to the related question Dynamic module import in Python, on dynamic execution of Python modules.
While the consensus seems to be to use the import statement or importlib to accomplish the dynamic loading/execution of Python modules, this solution tends to break down when the dynamically loaded module itself contains additional imports.
Take the original example:
myapp/
    __init__.py
    commands/
        __init__.py
        command1.py
        command2.py
    foo.py
    bar.py
If command1.py imports command2.py, then when you try to dynamically load command1.py using importlib or the import statement, it will fail with
ModuleNotFoundError: No module named 'command2'
Now I can get around this by adding the commands directory to sys.path, but that pollutes the global namespace. It gets even more problematic if there are multiple commands folders with different third-party pip dependencies: one command may depend on a different version of a pip-installed library than another command.
So in essence, I am looking for a way to dynamically load/execute a python module in isolation. Any ideas on how to achieve this?
Since myapp is your project's root folder, not myapp/commands, you should do:
from commands import command2
in command1.py, so that the interpreter will be able to load command2.py.
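If the command still has to be loaded dynamically, here is a minimal sketch, assuming the application is run from the myapp directory so that it is on sys.path:
import importlib

def load_command(name):
    # Importing by the full dotted name keeps intra-package imports such as
    # "from commands import command2" inside command1.py working.
    return importlib.import_module("commands." + name)

command1 = load_command("command1")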
What exactly is the use of __init__.py? Yes, I know this file makes a directory into an importable package. However, consider the following example:
project/
    foo/
        __init__.py
        a.py
    bar/
        b.py
If I want to import a into b, I have to add the following statements:
import sys
sys.path.append('/path_to_foo')
import foo.a
This will run successfully with or without __init__.py. However, without the sys.path.append statement, a "no module" error occurs, with or without __init__.py. This makes it seem like only the system path matters, and that __init__.py does not have any effect.
Why would this import work without __init__.py?
__init__.py has nothing to do with whether Python can find your package. You've run your code in such a way that your package isn't on the search path by default, but if you had run it differently or configured your PYTHONPATH differently, the sys.path.append would have been unnecessary.
__init__.py used to be necessary to create a package, and in most cases, you should still provide it. Since Python 3.3, though, a folder without an __init__.py can be considered part of an implicit namespace package, a feature for splitting a package across multiple directories.
During import processing, the import machinery will continue to iterate over each directory in the parent path as it does in Python 3.2. While looking for a module or package named "foo", for each directory in the parent path:
- If <directory>/foo/__init__.py is found, a regular package is imported and returned.
- If not, but <directory>/foo.{py,pyc,so,pyd} is found, a module is imported and returned. The exact list of extensions varies by platform and whether the -O flag is specified. The list here is representative.
- If not, but <directory>/foo is found and is a directory, it is recorded and the scan continues with the next directory in the parent path.
- Otherwise the scan continues with the next directory in the parent path.
If the scan completes without returning a module or package, and at least one directory was recorded, then a namespace package is created.
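To make the namespace-package behaviour concrete, here is a minimal sketch; the directory names are made up for illustration, and neither directory contains an __init__.py:
#   dir1/pkg/mod_a.py
#   dir2/pkg/mod_b.py
import sys
sys.path.extend(["dir1", "dir2"])

import pkg.mod_a
import pkg.mod_b   # both resolve, because pkg becomes a namespace package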
If you really want to avoid __init__.py for some reason, you don't need sys.path tricks. Rather, create a module object and set its __path__ to a list of directories.
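A minimal sketch of that trick (the path is a placeholder for wherever a.py actually lives):
import sys
import types

foo = types.ModuleType("foo")
foo.__path__ = ["/path/to/foo"]   # directories searched for foo's submodules
sys.modules["foo"] = foo

import foo.a   # finds a.py inside /path/to/foo, no __init__.py needed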
If I want to import a into b, I have to add the following statement:
No! You'd just say import foo.a. All this is provided you run the entire package at once using python -m main.module, where main.module is the entry point to your entire application. It imports all other modules, and the modules that import more modules will look for them from the root of this project. For instance, foo.bar.c would import b as foo.bar.b.
Then it seems that only the system path matters and __init__.py does not have any effect.
You need to modify sys.path only when you are importing modules from locations that are not in your project or in the places where Python looks for libraries. __init__.py not only makes a folder look like a package, it also does a few more things, like "exporting" objects to the outside world (__all__).
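As a small illustration of that "export" point (useful_function is a made-up name standing in for something defined in a.py):
# foo/__init__.py
from .a import useful_function   # re-export something defined in a.py

__all__ = ["useful_function"]    # controls what "from foo import *" exposes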
When you import something it has to either:
Retrieve an already loaded module or
Load the module that was imported
When you do import foo and Python finds a folder called foo in a folder on your sys.path, it will look in that folder for an __init__.py to treat it as the top-level module.
(Note that if the package is not on your sys.path then you would need to append its location to be able to import it.)
If that is not present it will look for an __init__.pyc version, possibly in the __pycache__ folder; if that is also missing then the folder foo is not considered a loadable Python package. If no other options for foo are found, an ImportError is raised.
If you try deleting the __init__.pyc file as well, you will see that the initializer script for a package is indeed necessary.
I have a question. I have a directory setup like this:
folder/
    main.py
    stuff/
        __init__.py
        function.py
    items/
        __init__.py
        class.py
My question is: how would I import class.py into function.py? This setup is very specific and cannot be changed. What would I need to put in order for this to work?
Your current directory structure seems ideal, so long as the application is started via main.py.
Python will always automatically add the parent directory of the main script to the start of sys.path (i.e. folder in your example). This means that the import machinery will give that directory priority when searching for modules and packages that are not part of the standard library.
Given this, you can import the classes.py module into function.py, like so:
from items import classes
(Note that I have renamed the module, because class is a Python keyword.)
If you later added another module to stuff and wanted to import it into function.py, you would do:
from stuff import another
and if a sub-package was added to items, and you wanted to import a module from that, you would do:
from items.subpackage import module
Imports specified in this top-down way can be used from any module within the application, because they are always relative to the parent directory of the main script, which has priority.
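Putting it together, a minimal sketch of function.py under those assumptions (SomeClass is a made-up name standing in for whatever classes.py defines):
# stuff/function.py
from items import classes   # folder/items/classes.py (class.py renamed)

def make_object():
    return classes.SomeClass()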
Basically I have written two modules for my Python Program. I need one module to import the other module.
Here is a sample of my file structure.
test_app/
    main.py
    module_1/
        __init__.py
        main.py
    module_2/
        __init__.py
        main.py
main.py is able to import either of the two modules, but I need module_1 to import module_2. Is that possible?
If you started your program from test_app/main.py, you can just use from module_2 import main in the test_app/module_1/main.py file.
If you add an (empty) __init__.py to test_app, test_app will be a package. This means that Python will search for modules/packages a bit more intelligently.
Having done that, in module_1 you can now write import test_app.module_2 (or, as a relative import, from .. import module_2) and it works.
(This answer was combined from other comments and answers here, hence CW)
Yes. If your PYTHONPATH environment variable is set to the test_app directory, you should be able to import module_1 from module_2 and vice versa.
I assume that you run your program like this:
python test_app/main.py
and that the program imports module_1.main, which in turn imports module_2.main. In that case there is no need to alter the value of PYTHONPATH, since Python has already added the test_app directory to the module search path. See the Module Search Path section in the Python docs.
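For example, a minimal sketch of module_1/main.py under that assumption:
# test_app/module_1/main.py
from module_2 import main as module_2_main   # resolves test_app/module_2/main.py

def run():
    # real code would call whatever module_2/main.py actually provides
    print(module_2_main.__name__)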
This question has been answered by the official Python documentation, in the section called Intra-package References (Python tutorial, Modules).
The submodules often need to refer to each other.
You don't need to care about PYTHONPATH; a relative import will do.
For your case, just write from .. import module_2 in module_1/main.py.
Sure it does. In your module_1 package, in any file:
from module_2 import your_function, your_class
My Python project path is /project and these are its files:
/project/test.py
/project/templates/
/project/includes/
/project/includes/config.py
# config.py
import os
template_path = os.path.join(os.path.dirname(__file__), "templates")
When I used the execute() function to include config.py from test.py, the /project/includes/config.py file was executed and the result was returned to /project/test.py, so the template_path variable was saved as /project/includes/templates/ and the templates folder would not be found there.
I want a function to include /project/includes/config.py from /project/test.py and execute all functions in /project/includes/config.py through test.py.
Create an (empty) __init__.py file in includes.
In test.py, pull in config.py by doing an import:
from includes import config
So if there is a function, say foo, in config, you will call it like:
config.foo()
Refer to the documentation (as pointed out by The MYYN)
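A minimal sketch of test.py under that setup:
# /project/test.py
from includes import config

print(config.template_path)   # names defined in config.py are now available
# config.foo() would work the same way for any function foo defined there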
In Python you don't include, you import. Also, you don't import files and directories; you import modules and packages, which are often (but not always) files and directories present somewhere on the Python path. Therefore the common practice is to put all your modules in a package which the user should place on the Python path, and then import the modules from there. Importing files directly is possible but discouraged.
If your config module is a real module that is part of your library, internal configuration, or site configuration, you need to create a new package named after your project and put config and the other modules inside it. Also note that, because of the module/package structure used in Python, creating a project with an includes directory for the modules isn't going to work very well.
If your config.py file is supposed to be user configuration, you can look at the ConfigParser module, or use the third-party cfgparse.
You usually include code from other files (called modules) via import.
Documentation on modules
Reference: import statement