Importing modules dynamically in Python 3.X - python

I would like to import a module from inside a function. For example, from this:
from directory.folder.module import module
def import():
    app.register_blueprint(module)
To this:
def import():
    from directory.folder.module import module
But, without hardcoding it. For example:
def import():
    m = "module"
    from directory.folder.m import m
Is it possible? Thanks in advance

You want the importlib module.
Here's the simplest way to use this module; there are lots of different ways of weaving the results of its calls into the environment:
import importlib
math = importlib.import_module("math")
print(math.cos(math.pi))
Result:
-1.0
I've used this library a lot. I built a whole plug-in deployment system with it. Scripts for all the various deploys were dropped in directories and only imported when they were mentioned in a config file rather than everything having to be imported right away.
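As a rough illustration of that pattern (the config file name and format here are made up, not taken from the real system), the plug-in loader boiled down to something like this:
import importlib

# Hypothetical config: one importable module name per line
with open("deploy_plugins.cfg") as f:
    plugin_names = [line.strip() for line in f if line.strip()]

# Import each plug-in only when the config actually mentions it
plugins = {name: importlib.import_module(name) for name in plugin_names}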
Something I find very cool about this module is what's stated at the very top of its documentation:
The purpose of the importlib package is two-fold. One is to provide the implementation of the import statement (and thus, by extension, the __import__() function) in Python source code.
The intro in the 2.7 docs is interesting as well:
New in version 2.7.
This module is a minor subset of what is available in the more full-featured package of the same name from Python 3.1 that provides a complete implementation of import. What is here has been provided to help ease in transitioning from 2.7 to 3.1.

No, Python's import does not work this way.
For example, say you try to import a module named mod by running import mod. The interpreter will then search for mod.py in a list of directories gathered from the following sources:
The directory from where the input script was run or the current directory if the interpreter is being run interactively.
The list of directories contained in the PYTHONPATH environment variable, if it is set. (The format for PYTHONPATH is OS-dependent but should mimic the PATH environment variable.)
An installation-dependent list of directories configured at the time Python is installed.
So if you have a variable named m = 'mod' and run import m, the interpreter will search for m.py, not mod.py.
A crude and dangerous workaround is to use exec() (beware of untrusted input):
m = "module"
exec(f'from directory.folder.{m} import {m}')
A cleaner option is importlib, which is part of the standard library.
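For completeness, here is a hedged sketch of the importlib route applied to the example from the question (package path and names come from the question; app is assumed to be the Flask application mentioned there):
import importlib

m = "module"
# Import directory.folder.<m>, then pull out the object named <m> from it
mod = importlib.import_module(f"directory.folder.{m}")
blueprint = getattr(mod, m)
# app.register_blueprint(blueprint)  # as in the question, if app is a Flask app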

You can use the importlib module to programmatically import modules.
import importlib
full_name = "package." + "module"
m = importlib.import_module(full_name)
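If you ever need the relative form, import_module also takes a package argument to anchor the import (the names below are illustrative):
import importlib

# Resolves the relative name against "package", i.e. imports package.module
m = importlib.import_module(".module", package="package")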

Related

How to use globally imported packages in script

I want to put my interactive commands in a script, but I can't run the same commands in the script.
We are using a heavily packaged version of Python for our tests. We usually run tests in interactive mode, but now I want to place all the commands in a script. Below is an example using the time package.
In interactive mode:
>>> import time
>>> import myscript
In my script:
time.sleep(5)
I expected the script to refer to the globally imported packages and allow me to run sleep, but it says NameError: global name 'time' is not defined
How do I get my script to recognize all packages imported into the interactive terminal? We use thousands of packages in our toolkit, and I can't import them all into my script.
You have to import these libraries in the .py file where you are going to use them as well. Python does not allow using them when they are only imported in a higher-level module, and that's the way it should be: in a way, Python forces you to be a better programmer. Your script should be something like this:
import time
time.sleep(5)
If I have Module A:
import time
and Module B:
import A
then in module B I do have access to libraries that A imported, but they must be qualified thus:
A.time.sleep(5)
In short, when you import a module, the public names in that module become accessible to the importer. But what you are attempting to do is quite different. In essence, you have Module A as:
import time
import B
And module B as:
time.sleep(5)
Module B neither directly imports the time package nor module A, and therefore has no access to time. Being imported by a module that does have access to time does not confer that access on the imported module.
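Here is a minimal sketch of both situations (the file names a.py/b.py and c.py/d.py are hypothetical):
# a.py -- imports time itself
import time

# b.py -- imports a, so a's names are reachable, but only qualified
import a
a.time.sleep(5)   # works: qualified through the module that imported time
# time.sleep(5)   # NameError: 'time' was never imported in b.py

# c.py -- imports time and then d, like Module A above
import time
import d          # fails while importing d

# d.py -- like Module B above: no import of its own
time.sleep(5)     # NameError: being imported by c does not inject 'time' here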

Cannot import module in same directory and package

I'm using a from . import module statement to do exactly that: import a local module to my script. The script and module reside in the same folder.
# module.py
def foo():
    print('Foo!')
# script.py
from . import module
module.foo()
> ImportError: cannot import name 'module'
This should be pretty easy, and doing just import module does work, but because this answer suggests one should use the explicit relative form, I modified the statements accordingly.
The end goal is to have a package, from which I can use things, but also to have executable scripts inside the package that import other parts of that package. Apparently, after a few days worth of searching and a few questions I still don't quite understand the import and packaging machinery.
These might be the cause:
Import statements are different in 2.7 and 3.x, I'm using 3.6, the question was on 2.7
Relative imports are different inside packages (folder with __init__.py)
The working directory is different or the folders are not in sys.path
Having an __init__.py file does not make a difference, at least in a fresh project in PyCharm. Also, the working directory is set to the source folder, and that folder is on the path.
Have I missed something? Or rather, what's the correct way of achieving the functionality described in the end goal? Any help is greatly appreciated!
Since writing this answer I have realised it is more convenient, and in my humble opinion better style, to install the package with pip install -e . and use absolute imports, so that even within a package you write from package.sub.module import thing. This makes refactoring a lot easier and there's no need to ever manipulate module variables or sys.path.
When running a script directly, Python considers the name of that script (the special variable __name__) to be "__main__". In the case of an import, the name is set to the name of the module. In the latter case relative imports are fine. But import actually looks at the combination of __name__ and another special variable, __package__, which is None for an executed script but the package path for an imported module, e.g. parent.sub.
The searched variable is... drumroll...
__package__ + '.' + __name__
The secret ingredient is manipulating __package__:
# script.py
__package__ = 'package_name' # or parent.sub.package
from . import module
This lets Python know you are inside a package even though the script is executed directly. However, the top level folder needs to be in sys.path and the package name has to reflect that directory structure.
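A minimal sketch of what that can look like, assuming a hypothetical layout where project/ contains the package:
# Assumed layout (hypothetical):
#   project/                  <- this folder must be on sys.path
#       package_name/
#           __init__.py
#           module.py
#           script.py

# script.py
import os
import sys

# Ensure the folder *containing* the package is importable
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

__package__ = 'package_name'
import package_name           # make sure the parent package itself is initialised
from . import module
module.foo()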
See this very comprehensive answer on the topic of relative imports.

How does Python import packages when multiple copies are installed?

I believed that when importing a package, Python would search sys.path and use the first hit. However, that seems not to be true:
import mpl_toolkits
print(mpl_toolkits.__path__)
And it outputs:
['/Library/Python/2.7/site-packages/matplotlib-1.5.0-py2.7-macosx-10.11-intel.egg/mpl_toolkits', '/usr/local/lib/python2.7/site-packages/mpl_toolkits']
Can someone please explain how exactly Python looks for packages when one is installed multiple times on the machine (in different locations searchable via sys.path)? A pointer to a relevant reference would also be good.
When you import a module, Python walks the folders on the module search path (sys.path, which includes the directories listed in the PYTHONPATH environment variable) looking for something importable under that name.
For each folder it tests whether the name is a package (a folder containing __init__.py) or a module (a .py file). It stops at the first match; if nothing is found, Python raises an ImportError.
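On Python 3 you can also ask the import machinery where it would load a name from, without importing it, and compare that with the order of sys.path:
import importlib.util
import sys

spec = importlib.util.find_spec("mpl_toolkits")
print(spec.origin)                        # file chosen, or None for a namespace package
print(spec.submodule_search_locations)    # the entries that end up in __path__

for p in sys.path:                        # directories are tried in this order
    print(p)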

How does the Python interpreter find the module search path?

I'm new to Python, and I find that to see the import search paths you have to import the sys module and then access the list of paths using sys.path. If this list is not available until I explicitly import the sys module, how does the interpreter figure out where that module resides?
Thanks for any explanation.
The module search path always exists, even before you import the sys module. The sys module just makes it available for you.
It reflects the contents of the system variable $PYTHONPATH, or a system default, if you have not set that environment variable.
There is a default search path within the interpreter. (https://docs.python.org/2/install/#modifying-python-s-search-path )
A default value for the path is configured into the Python binary when the interpreter is built.
BTW, sys is built into the Python interpreter. (https://docs.python.org/2/tutorial/modules.html#standard-modules)
One particular module deserves some attention: sys, which is built into every Python interpreter.
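Both points are easy to check from a fresh interpreter: the search path is already populated, and sys appears in the list of modules compiled into the interpreter:
import sys

print(sys.path)                             # already populated, even with no PYTHONPATH set
print("sys" in sys.builtin_module_names)    # True: sys is compiled into the interpreter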

imp.load_source() in Python

When is it useful to use the imp.load_source() method for importing a Python module? Does it have any advantage in some scenario over normal importing with the import keyword?
import always looks in the following order:
already imported modules
import hooks
builtin modules
files in the locations in sys.path
If you want to import a module which would not be found by any of these mechanisms, but you know the filename, then you could use imp.load_source(). Or if you want to import a module that would be shadowed by an earlier import mechanism, for example if you want to import foo from a directory in sys.path but there is a custom import hook that would find its own version of foo first, then you could use imp.load_source() for that too. Basically it lets you control the source of the module's code in a way that import does not.
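A small illustrative example (the module name and file path below are made up); note that imp has been deprecated in favour of importlib in Python 3:
import imp

# Load a module straight from a known file path, regardless of sys.path or import hooks
plugin = imp.load_source("my_plugin", "/opt/plugins/my_plugin.py")
plugin.run()   # assuming the file defines a run() function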
