How to get variables from another file in python [duplicate] - python

This question already has answers here:
How can I import a module dynamically given its name as string?
(10 answers)
Closed 8 years ago.
For example, I have a variable config in config.py. I want to use that variable in main.py, and config.py must be passed to main.py through the command line, like this:
python ./main.py ./config.py
I know that in Lua I can use the dofile function. How can I do this in Python?
Fixed by Dynamic module import in Python

Disclaimer: I can't comment.
Can you just import the variable in main.py?
from .config import config
#do stuff
The only other way I could imagine this working is if you piped the line to main.py and parsed it by hand, something like:
cat config.py | grep 'config = ' | sed -e 's,config = ,,g' | python main.py
Though this will only work if config appears only once in the file, the value it represents follows the =, and you know whether it is a string or an int, etc.
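A minimal sketch of that stdin-parsing variant (the use of ast.literal_eval is my addition, assuming the piped text is a plain Python literal such as a quoted string or an int):

import ast
import sys

raw = sys.stdin.readline().strip()   # e.g. "'some value'" or "42" from the pipe
config = ast.literal_eval(raw)       # safely evaluates the literal into a Python value
print(config)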

# main.py
import sys
import os

if __name__ == "__main__":
    var_name = "config"
    arg = sys.argv[1]
    module_path = os.path.realpath(os.path.dirname(arg))
    if module_path not in sys.path:
        sys.path.insert(0, module_path)
    module_name = os.path.basename(arg).replace(".py", "")
    module = __import__(module_name)
    config_var = getattr(module, var_name)
    # use config_var
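For reference, a sketch of the same idea using importlib.util (Python 3.5+), which avoids touching sys.path; the module name "config_module" is arbitrary and not part of the original answer:

# main.py (alternative sketch)
import sys
import importlib.util

if __name__ == "__main__":
    arg = sys.argv[1]  # e.g. ./config.py
    spec = importlib.util.spec_from_file_location("config_module", arg)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)   # executes config.py inside the new module object
    config_var = getattr(module, "config")
    # use config_var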

Related

import error when importing module from the same directory [duplicate]

I have a directory that stores all the .py files.
bin/
    main.py
    user.py   # where class User resides
    dir.py    # where class Dir resides
I want to use classes from user.py and dir.py in main.py.
How can I import these Python classes into main.py?
Furthermore, how can I import class User if user.py is in a sub directory?
bin/
    dir.py
    main.py
    usr/
        user.py
Python 2
Make an empty file called __init__.py in the same directory as the files. That will signify to Python that it's "ok to import from this directory".
Then just do...
from user import User
from dir import Dir
The same holds true if the files are in a subdirectory - put an __init__.py in the subdirectory as well, and then use regular import statements, with dot notation. For each level of directory, you need to add to the import path.
bin/
    main.py
    classes/
        user.py
        dir.py
So if the directory was named "classes", then you'd do this:
from classes.user import User
from classes.dir import Dir
Python 3
Same as previous, but prefix the module name with a . if not using a subdirectory:
from .user import User
from .dir import Dir
I just learned (thanks to martineau's comment) that, in order to import classes from files within the same directory, you would now write in Python 3:
from .user import User
from .dir import Dir
From Python 3.3 upwards, __init__.py is no longer necessary. If the console's current directory is the directory where the Python script is located, everything works fine with
import user
However, this won't work if called from a different directory, which does not contain user.py.
In that case, use
from . import user
This works even if you want to import the whole file instead of just a class from there.
In your main.py:
from user import Class
where Class is the name of the class you want to import.
If you want to call a method of Class, you can call it using:
Class.method
Note that there should be an empty __init__.py file in the same directory.
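A minimal sketch with hypothetical names (a user.py defining class User with a method greet(); the method name is mine, not the answer's):

# main.py
from user import User

u = User()
u.greet()        # call a method on an instance
# User.greet(u)  # equivalent call through the class, passing the instance explicitly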
If user.py and dir.py do not contain classes, then
from .user import User
from .dir import Dir
will not work. You should then import them as
from . import user
from . import dir
You can import the module and access its contents through its name, if you don't want to mix its functions and classes with yours:
import util # imports util.py
util.clean()
util.setup(4)
or you can import the functions and classes to your code
from util import clean, setup
clean()
setup(4)
You can use the wildcard * to import everything in that module into your code:
from util import *
clean()
setup(4)
To make it simpler to understand:
Step 1: let's go to a directory where everything will be included:
$ cd /var/tmp
Step 2: now let's make a class1.py file which has a class named Class1 with some code:
$ cat > class1.py <<\EOF
class Class1:
    OKBLUE = '\033[94m'
    ENDC = '\033[0m'
    OK = OKBLUE + "[Class1 OK]: " + ENDC
EOF
Step 3: now let's make a class2.py file which has a class named Class2 with some code:
$ cat > class2.py <<\EOF
class Class2:
    OKBLUE = '\033[94m'
    ENDC = '\033[0m'
    OK = OKBLUE + "[Class2 OK]: " + ENDC
EOF
Step 4: now let's make one main.py, which will be executed once, to use Class1 and Class2 from the two different files:
$ cat > main.py <<\EOF
"""this is how we are actually calling class1.py and from that file loading Class1"""
from class1 import Class1
"""this is how we are actually calling class2.py and from that file loading Class2"""
from class2 import Class2

print(Class1.OK)
print(Class2.OK)
EOF
Step 5: Run the program
$ python main.py
The output would be
[Class1 OK]:
[Class2 OK]:
Python 3
Same directory.
To import the file log.py and use its class SampleApp():
import log

if __name__ == "__main__":
    app = log.SampleApp()
    app.mainloop()
or, if log.py is in the directory basic, import the file and its class SampleApp() like this:
from basic import log

if __name__ == "__main__":
    app = log.SampleApp()
    app.mainloop()
from user import User
from dir import Dir
For Python 3+, suppose you have this structure:
A/
    __init__.py
    bar.py
    foo.py
In your __init__.py file, you can put from . import foo
then you can import foo in bar file
# A/bar.py
from foo import YourClass
The purpose of the __init__.py files is to include optional initialization code that runs as different levels of a package are encountered. Everything you put in __init__.py is initialized during the package load.
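As a small sketch of what such initialization code might look like (the submodule names foo and bar and the DEFAULTS dict are assumptions for illustration):

# A/__init__.py
print("initializing package A")   # runs once, on first import of the package
from . import foo
from . import bar

DEFAULTS = {"verbose": False}     # package-level state set up at import time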
I'm not sure why this works, but in a PyCharm build, from file_in_same_dir import class_name did the trick.
The IDE complained about it, but it still seemed to work. I'm using Python 3.7.
For Python 3:
import from sibling: from .user import User
import from nephew: from .usr.user import User
If you have filename.py in the same folder, you can easily import it like this:
import filename
I am using Python 3.7.
Python 3
Use
from .user import User inside the dir.py file,
and
use from classes.dir import Dir inside main.py
or from classes.usr import User inside main.py,
like so:
# My Python version: 3.7
# IDE: Pycharm 2021.1.2 Community

# Have "myLib.py" in folder "labs":
class Points:
    def __init__(self, x=0, y=0):
        self.__x = x
        self.__y = y

    def __str__(self):
        return f"x = {self.__x}, y = {self.__y}"

# Have "myFile.py" in the (same) folder "labs":
from myLib import Points

p1 = Points(1, 4)
p2 = Points(1, 4)
print(f"p1: {p1}, p2: {p2}")

# Result:
# p1: x = 1, y = 4, p2: x = 1, y = 4
# Good Luck!
Indeed Python does not provide an elegant solution for this everyday use-case. It is especially problematic when you are testing your code that eventually will be delivered as part of a Python package. Here is an approach that has worked for me:
dir/
    file1.py
    file2.py
And let's say you want to import file2 from file1.
# In file1.py:
try:
    # This works when packaged as a Python package
    from . import file2
except ImportError:
    # This works when simply invoking file1 as a script (i.e. python file1.py)
    import file2

# rest of the code ...
I cannot submit an edit for the top answer, so based on some pointers given in comments above, another thing to try out is:
from subfolder.MyClassFile import MyClass
And that's it. Just remember to have an empty __init__.py file in your subfolder.
Just for reference, the solution works if your structure is something like this:
your_project/
    __init__.py
    main.py
    subfolder/
        __init__.py
        MyClassFile.py  <-- You want this
MyClassFile.py contains the class MyClass.
Just to be brief:
Create a file __init__.py in the classes directory and then import into your script as follows (import-all case):
from classes.myscript import *
Import selected classes only
from classes.myscript import User
from classes.myscript import Dir
To import from the same directory:
from . import the_file_you_want_to_import
To import from a subdirectory, the directory should contain an
__init__.py
file in addition to your files; then:
from directory import your_file

Execute python script with same name in different directory

I have the following file structure:
A:
|_ a.py
|_ b.py
B:
|_ a.py
|_ b.py
I want to dynamically execute either A/b.py or B/b.py.
I am using the following code:
import sys
from importlib import import_module

path = '/home/username/test/' + module + '/'
if path not in sys.path:
    sys.path.append(path)
script = import_module('b', 'Script')
myClass = getattr(script, 'Script')
run = myClass()
Doing this, if I run B/b.py and then A/b.py, it will execute B/b.py instead of A/b.py.
Whichever script is run first keeps being executed on subsequent runs.
I need help in making sure the file in the directory I want is run only.
I'm making some assumptions about what you want to accomplish here. Even if this is not exactly what you want, it might still push you in the right direction: you have two different subdirectories, A and B. These contain scripts with identical names, a.py and b.py. Based on some condition, your script should call either A/a.py or B/a.py, and then maybe A/b.py or B/b.py.
I would set up A and B as actual Python modules, that is, create an __init__.py file in both folders. Then have a master script which somehow determines which module to use:
# root_folder/master.py
import sys
import A
import B

master_script_name = sys.argv[0]
print("I'm the master script : " + str(master_script_name))

def choose_module_A_or_B(arg):
    if arg == "A":
        print(" You chose module A !")
        return A
    return B

module = choose_module_A_or_B("A")
module.b.print_locations()
Then,
# root_folder/A/__init__.py
from A import b
and,
# root_folder/A/b.py
import os
import sys
# how to obtain paths and script name:
folder = os.path.dirname(os.path.realpath(__file__))
script = __file__
parent = os.path.abspath(os.path.join(folder, os.pardir))
def print_locations():
    print(" --> script : " + str(script))
    print(" --> folder : " + str(folder))
    print(" --> parent : " + str(parent))
Similarly,
# root_folder/B/__init__.py
from B import b
and,
# root_folder/B/b.py
import os
import sys
# how to obtain paths and script name:
folder = os.path.dirname(os.path.realpath(__file__))
script = __file__
parent = os.path.abspath(os.path.join(folder, os.pardir))
def print_locations():
    print(" --> script : " + str(script))
    print(" --> folder : " + str(folder))
    print(" --> parent : " + str(parent))
OUTPUT:
$ python master.py
I'm the master script : master.py
You chose module A !
--> script : A\b.py
--> folder : C:\dev\ScriptTesting\py\script_by_name\A
--> parent : C:\dev\ScriptTesting\py\script_by_name
I've read your other, similar question and come up with a solution without any pre-imports. This is (imho) highly un-pythonic, and may by all means be considered a dirty "hack". I highly recommend you to consider my other answer and just properly deal with the imports.
Your problem occurs because you're trashing the namespace, and all it holds dear. When you pollute the namespace with functions/methods with the same signature, there is absolutely no way for the Python-interpreter to distinguish them: it resolves to the one that was first imported.
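A short sketch of that caching behaviour (directory names taken from the question; this is an illustration, not the answer's code): after the first import, import_module simply returns the entry cached in sys.modules, so the second directory is never consulted.

import sys
from importlib import import_module

sys.path.append("A")
mod_first = import_module("b")     # loads A/b.py and caches it as sys.modules["b"]

sys.path.remove("A")
sys.path.append("B")
mod_second = import_module("b")    # returns the cached A/b.py, not B/b.py
assert mod_first is mod_second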
However, as stated, there is a workaround: There is (currently) no way to unload a python module, but you may reload it, using the imp module. Essentially, it lets you clean up (redefine) the namespace. A complete, working example can be found at my repl.it
# root_folder/main.py
import sys
import imp
from importlib import import_module

def import_script(mod_dir, script):
    sys.path.append(mod_dir)
    mod = imp.reload(import_module(script, 'Script'))
    sys.path.remove(mod_dir)
    return mod

# input:
mod_dir = "A"
script = "b"

# import module/script.py
active_mod = import_script(mod_dir, script)

# use module/script.py
mod_name = active_mod.get_mod_name()
print(mod_name)  # Prints "A : b.py"

# New input: different module/script.py
mod_dir = "C"
script = "b"

# import module/script.py
active_mod = import_script(mod_dir, script)

# use module/script.py
mod_name = active_mod.get_mod_name()
print(mod_name)  # Prints "C : b.py"
when the modules look like this:
# root_folder/A/b.py
def get_mod_name():
    return "A : b.py"
Do note that every import is doubled, since every time you import a module (possibly with a duplicate name), it must also be reloaded to clean up the namespace. It is not enough to just del the module.
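A brief sketch of why a plain del is not enough (json here is just a stand-in module for illustration):

import sys
import json                     # stand-in module for the example

del json                        # only removes the local name
print("json" in sys.modules)    # True: the module object is still cached

import json                     # no re-execution; the cached module is returned

del sys.modules["json"]         # dropping the cache entry...
import json                     # ...forces a genuine re-import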

Python reusing of module without new load

I have two modules, and I need to import one from the other. In the first module, I only need to declare a variable and load it (if it's not already loaded). In the second, I just read this variable and use it (if it's not loaded, the other module loads it).
First module:
run_engine.py:
#run_engine.py
import matlab
eng = None
if(eng == None):
    eng = matlab.engine.start_matlab()
start.py:
#!/usr/bin/python
import matlab.engine
import os
import run_engine
def app():
    currentDir = os.path.dirname(os.path.abspath(__file__))
    matlabInstance = run_engine.eng
    matlabInstance.addpath(currentDir)
    matlabInstance.sim('thermo_simple')

if __name__ == '__main__':
    app()
matlab.engine.start_matlab takes about 30 seconds to start, and I'm running start.py repeatedly, so I need only one instance of eng, but one that is loaded correctly. How can I do that?
Run app() repeatedly in an interactive python shell:
$ python -i start.py
>>> app()
#run_engine.py
import matlab
import time
eng = None
while not eng:
    eng = matlab.engine.start_matlab()
    time.sleep(1)
BTW, if you want to write something like "if X == None", it should instead be written as
if X is None
This is the preferred Python idiom ;)
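As a sketch of the module-level caching idea (my own illustration, not code from either answer), run_engine.py could expose a lazy getter; within a single interpreter the engine then starts only once, though it still cannot survive separate python start.py processes:

# run_engine.py (sketch)
import matlab.engine

_eng = None

def get_engine():
    global _eng
    if _eng is None:
        _eng = matlab.engine.start_matlab()   # ~30 s, done at most once per process
    return _eng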

Python: Importing files based on which files the user wants [duplicate]

This question already has answers here:
How to import a module in Python with importlib.import_module
(3 answers)
Closed 8 years ago.
I have the following directory structure
code/
    plugins/
        __init__.py
        test_plugin.py            (has a class TestPlugin)
        another_test_plugin.py    (has a class AnotherTestPlugin)
    load.py
    __init__.py
In load.py, I want to be able to initialize only those classes that the user specifies. For example, let's say I do something like
$ python load.py -c test_plugin # Should only import test_plugin.py and initialize an object of the TestPlugin class
I am having trouble trying to use the "imp" module to do it. It keeps on saying "No such file or directory". My understanding is that it is somehow not understanding the path properly. Can someone help me out with this?
OK, your problem is path-related. You expect the script to be run in the same directory as load.py, which is not the case.
What you have to do is something like:
import imp, os, plugins

path = os.path.dirname(plugins.__file__)
imp.load_source('TestPlugin', os.path.join(path, 'test_plugin.py'))
where plugins is the module containing all your plugins (i.e. just the empty __init__.py), that will help you get the full path to your plugin modules' files.
Another solution, if you want a "plugins" discovery tool:
import imp, os, sys
import glob

def load_plugins(path):
    """
    Assuming `path` is the only directory in which you store your plugins,
    and assuming each name follows the syntax:
        plugin_file.py -> PluginFile
    Please note that we don't import files starting with an underscore.
    """
    plugins = {}
    plugin_files = glob.glob(path + os.sep + r'[!_]*.py')
    for plugin_path in plugin_files:
        module_name, ext = os.path.splitext(plugin_path)
        module_name = os.path.basename(module_name)
        class_name = module_name.title().replace('_', '')
        loaded_module = imp.load_source(class_name, plugin_path)  # we import the plugin
        plugins[module_name] = getattr(loaded_module, class_name)
    return plugins

plugins = load_plugins(your_path_here)
plugin_name = sys.argv[3]
plugin = plugins.get(plugin_name)
if not plugin:
    pass  # manage a non-existing plugin
else:
    plugin_instance = plugin()  # creates an instance of your plugin
This way, you can also specify different names by changing your keys, e.g., 'test_plugins' => 'tp'. You don't have to initialize your plugins, but you can still run this function whenever you want to load your plugins at runtime.
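Note that the imp module is deprecated (and removed in Python 3.12); a hedged sketch of the same file-based loading using importlib.util, with a helper name of my own choosing, would be:

import importlib.util

def load_module_from_path(module_name, file_path):
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)   # executes the plugin file
    return module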
exec('import ' + sys.argv[2])
obj = test_plugin.TestPlugin()
Here sys.argv[2] is the 'test_plugin' string from the command-line arguments.
EDIT: Another way to avoid using exec:
import importlib
mod = importlib.import_module(sys.argv[2])

How to import all submodules?

I have a directory structure as follows:
main.py
scripts/
    __init__.py
    script1.py
    script2.py
    script3.py
From main.py, the module scripts is imported. I tried using pkgutil.walk_packages in combination with __all__, but using that, I can only import all the submodules directly under main using from scripts import *. I would like to get them all under scripts. What would be the cleanest way to import all the submodules of scripts so that I could access scripts.script1 from main?
EDIT: I am sorry that I was a bit vague. I would like to import the submodules at runtime without specifying them explicitly in __init__.py. I can use pkgutil.walk_packages to get the submodule names (unless someone knows of a better way), but I am not sure of the cleanest way to use these names (or maybe the ImpImporters that walk_packages returns?) to import them.
Edit: Here's one way to recursively import everything at runtime...
(Contents of __init__.py in top package directory)
import pkgutil
__all__ = []
for loader, module_name, is_pkg in pkgutil.walk_packages(__path__):
    __all__.append(module_name)
    _module = loader.find_module(module_name).load_module(module_name)
    globals()[module_name] = _module
I'm not using __import__(__name__ + '.' + module_name) here, as it's difficult to properly recursively import packages using it. If you don't have nested sub-packages, and want to avoid using globals()[module_name], though, it's one way to do it.
There's probably a better way, but this is the best I can do, anyway.
Original Answer (For context, ignore otherwise. I misunderstood the question initially):
What does your scripts/__init__.py look like? It should be something like:
import script1
import script2
import script3
__all__ = ['script1', 'script2', 'script3']
You could even do without defining __all__, but things (pydoc, if nothing else) will work more cleanly if you define it, even if it's just a list of what you imported.
This is based on the answer that kolypto provided, but his answer does not perform recursive import of packages, whereas this does. Although not required by the main question, I believe recursive import applies and can be very useful in many similar situations. I, for one, found this question when searching on the topic.
This is a nice, clean way of performing the import of the subpackage's modules, and should be portable as well, and it uses the standard lib for python 2.7+ / 3.x.
import importlib
import pkgutil
def import_submodules(package, recursive=True):
    """ Import all submodules of a module, recursively, including subpackages

    :param package: package (name or actual module)
    :type package: str | module
    :rtype: dict[str, types.ModuleType]
    """
    if isinstance(package, str):
        package = importlib.import_module(package)
    results = {}
    for loader, name, is_pkg in pkgutil.walk_packages(package.__path__):
        full_name = package.__name__ + '.' + name
        results[full_name] = importlib.import_module(full_name)
        if recursive and is_pkg:
            results.update(import_submodules(full_name))
    return results
Usage:
# from main.py, as per the OP's project structure
import scripts
import_submodules(scripts)
# Alternatively, from scripts.__init__.py
import_submodules(__name__)
Simply works, and allows relative import inside packages:
import sys
import importlib
import pkgutil

def import_submodules(package_name):
    """ Import all submodules of a module, recursively

    :param package_name: Package name
    :type package_name: str
    :rtype: dict[types.ModuleType]
    """
    package = sys.modules[package_name]
    return {
        name: importlib.import_module(package_name + '.' + name)
        for loader, name, is_pkg in pkgutil.walk_packages(package.__path__)
    }
Usage:
__all__ = import_submodules(__name__).keys()
Not nearly as clean as I would like, but none of the cleaner methods worked for me. This achieves the specified behaviour:
Directory structure:
pkg/
    __init__.py
    main.py
    scripts/
        __init__.py
        script1.py
        script2.py
        script3.py
Where pkg/scripts/__init__.py is empty, and pkg/__init__.py contains:
import importlib as _importlib
import pkgutil as _pkgutil

__all__ = [_mod[1].split(".")[-1] for _mod in
           filter(lambda _mod: _mod[1].count(".") == 1 and not
                  _mod[2] and __name__ in _mod[1],
                  [_mod for _mod in _pkgutil.walk_packages("." + __name__)])]
__sub_mods__ = [".".join(_mod[1].split(".")[1:]) for _mod in
                filter(lambda _mod: _mod[1].count(".") > 1 and not
                       _mod[2] and __name__ in _mod[1],
                       [_mod for _mod in
                        _pkgutil.walk_packages("." + __name__)])]

from . import *
for _module in __sub_mods__:
    _importlib.import_module("." + _module, package=__name__)
Although it's messy, it should be portable. I've used this code for several different packages.
I got tired of this problem myself, so I wrote a package called automodinit to fix it. You can get it from http://pypi.python.org/pypi/automodinit/. Usage is like this:
Include the automodinit package into your setup.py dependencies.
Add the following to the beginning of the __init__.py file:
__all__ = ["I will get rewritten"]
# Don't modify the line above, or this line!
import automodinit
automodinit.automodinit(__name__, __file__, globals())
del automodinit
# Anything else you want can go after here, it won't get modified.
That's it! From now on importing a module will set __all__ to
a list of .py[co] files in the module and will also import each
of those files as though you had typed:
for x in __all__: import x
Therefore the effect of from M import * matches exactly import M.
automodinit is happy running from inside ZIP archives and is therefore ZIP safe.
To just load all submodules of a package, you can use this simple function:
import importlib
import pkgutil
def import_submodules(module):
    """Import all submodules of a module, recursively."""
    for loader, module_name, is_pkg in pkgutil.walk_packages(
            module.__path__, module.__name__ + '.'):
        importlib.import_module(module_name)
Use case: load all database models of a Flask app, so that Flask-Migrate could detect changes to the schema. Usage:
import myproject.models
import_submodules(myproject.models)
I was writing a small personal library and adding new modules all the time, so I wrote a shell script to look for scripts and create the __init__.py files. The script is executed just outside of the main directory for my package, pylux.
I know it probably isn't the answer you're looking for, but it served its purpose for me and it might be useful to someone else, too.
#!/bin/bash
echo 'Traversing folder hierarchy...'
CWD=`pwd`

for directory in `find pylux -type d -exec echo {} \;`;
do
    cd $directory
    #echo Entering $directory
    echo -n "" > __init__.py
    for subdirectory in `find . -type d -maxdepth 1 -mindepth 1`;
    do
        subdirectory=`echo $subdirectory | cut -b 3-`
        #echo -n ' ' ...$subdirectory
        #echo -e '\t->\t' import $subdirectory
        echo import $subdirectory >> __init__.py
    done
    for pyfile in *.py ;
    do
        if [ $pyfile = $(echo __init__.py) ]; then
            continue
        fi
        #echo -n ' ' ...$pyfile
        #echo -e '\t->\t' import `echo $pyfile | cut -d . -f 1`
        echo import `echo $pyfile | cut -d . -f 1` >> __init__.py
    done
    cd $CWD
done

for directory in `find pylux -type d -exec echo {} \;`;
do
    echo $directory/__init__.py:
    cat $directory/__init__.py | awk '{ print "\t"$0 }'
done
I've played around with Joe Kington's Answer and have built a solution that uses globals and get/setattr and thus doesn't need eval. A slight modification is that instead of directly using the packages __path__ for walk_packages, I use the packages parent directory and then only import modules starting with __name__ + ".". This was done to reliably get all subpackages from walk_packages - in my use case I had a subpackage named test which caused pkgutil to iterate over the test package from python's library; furthermore, using __path__ would not recurse into the packages subdirectories. All these issues were observed using jython and python2.5, the code below is only tested in jython thus far.
Also note that OPs question only talks about importing all modules from a package, this code recursively imports all packages too.
from pkgutil import walk_packages
from os import path
__all__ = []
__pkg_prefix = "%s." % __name__
__pkg_path = path.abspath(__path__[0]).rsplit("/", 1)[0] #parent directory
for loader, modname, _ in walk_packages([__pkg_path]):
    if modname.startswith(__pkg_prefix):
        #load the module / package
        module = loader.find_module(modname).load_module(modname)
        modname = modname[len(__pkg_prefix):]  #strip package prefix from name
        #append all toplevel modules and packages to __all__
        if not "." in modname:
            __all__.append(modname)
            globals()[modname] = module
        #set everything else as an attribute of their parent package
        else:
            #get the toplevel package from globals()
            pkg_name, rest = modname.split(".", 1)
            pkg = globals()[pkg_name]
            #recursively get the module's parent package via getattr
            while "." in rest:
                subpkg, rest = rest.split(".", 1)
                pkg = getattr(pkg, subpkg)
            #set the module (or package) as an attribute of its parent package
            setattr(pkg, rest, module)
As a future improvement I'll try to make this dynamic with a __getattr__ hook on the package, so the actual modules are only imported when they are accessed...
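For what it's worth, on Python 3.7+ a module-level __getattr__ (PEP 562) makes that kind of lazy loading possible; a minimal sketch for the package's __init__.py, with hypothetical submodule names:

import importlib

_submodules = {"script1", "script2", "script3"}   # hypothetical names

def __getattr__(name):
    if name in _submodules:
        module = importlib.import_module("." + name, __name__)
        globals()[name] = module   # cache so __getattr__ isn't hit again for this name
        return module
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")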
This works nicely for me in Python 3.3. Note that this works only for submodules which are in files in the same directory as the __init__.py. With some work however it can be enhanced for supporting submodules in directories too.
from glob import iglob
from os.path import basename, relpath, sep, splitext
def import_submodules(__path__to_here):
    """Imports all submodules.
    Import this function in __init__.py and put this line to it:
    __all__ = import_submodules(__path__)"""
    result = []
    for smfile in iglob(relpath(__path__to_here[0]) + "/*.py"):
        submodule = splitext(basename(smfile))[0]
        importstr = ".".join(smfile.split(sep)[:-1])
        if not submodule.startswith("_"):
            __import__(importstr + "." + submodule)
            result.append(submodule)
    return result
