Unable to use dynamically imported modules - python

I'm dynamically importing every .py module from a subdirectory, except __init__.py and my skeleton.py file. I first build a list that looks like this (the .py extension is omitted):
mods_in_dir = ['my_mods.frst_mod', 'my_mods.scnd_mod']
Here my_mods is the subdirectory, and frst_mod.py and scnd_mod.py are the available modules. Every module I import contains a class Operation that always offers the same three functions. Then I import them using importlib:
import importlib
imports = {}
for i in mods_in_dir:
    imports[i] = importlib.import_module(i)
This results in apparently successful imports; I checked with print(sys.modules.keys()), where they show up as [...], 'my_mods.frst_mod', 'my_mods.scnd_mod', [...].
For testing purposes, I defined a function in every one of the modules:
def return_some_string():
return "some string"
But I'm unable to call this function. I've tried everything that came to mind, e.g. print(my_mods.frst_mod.Operation.return_some_string()) or print(frst_mod.Operation.return_some_string()).
Both result in NameError: name 'my_mods' is not defined or NameError: name 'frst_mod' is not defined.
edit:
Solved.
#m170897017 helped me solve the problem in my initial attempt.
I had missed that the modules were already stored in the imports dict and simply wasn't using it.
print(imports['my_mods.frst_mod'].ModsClassName.return_some_string()) now successfully prints some string.
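Putting the whole working pattern together, a minimal sketch (module and class names as described in the question; it assumes my_mods is a package on the path whose modules each define Operation.return_some_string()):

import importlib

mods_in_dir = ['my_mods.frst_mod', 'my_mods.scnd_mod']

# keep each imported module object keyed by its dotted name
imports = {name: importlib.import_module(name) for name in mods_in_dir}

# the names my_mods / frst_mod were never bound in this namespace,
# which is why the earlier attempts raised NameError -- go through the dict instead
for name, module in imports.items():
    print(name, module.Operation.return_some_string())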

Change the dynamic import to this and try again:
...
mods_in_dir = ['my_mods.frst_mod','my_mods.scnd_mod']
# __import__ with a non-empty fromlist returns the submodule itself
# (bare __import__('my_mods.frst_mod') would return the top-level my_mods package)
modules = [__import__(name, fromlist=['']) for name in mods_in_dir]
# if you want to invoke func1 in my_mods.frst_mod and get result from it
result1 = modules[0].func1()
# if you want to invoke func2 in my_mods.scnd_mod and get result from it
result2 = modules[1].func2()
...


Importing variables from python script into another script is throwing errors that variables are undefined

I am currently writing automation scripts for a proprietary Windows desktop application at work using WinAppDriver with Python. Our application has the user upload a handful of files, does some behind the scenes calculating based on the files uploaded and then spits out results. I have automation in place that uploads these files using the UI and am not having any issues with this specifically. The process to do this is as follows:
Click the ‘Choose File’ button. Browse to file location in pop up window
Click in ‘File Name’ field and input the direct path to the file. Click OK (This is being done with the Python Keyboard library)
Repeat previous steps for all necessary files
Click ‘Go’
To tidy up my scripts, I have set the file paths to variables instead of using their direct paths in my code. Then I just call the variable name for the file I need.
E.g. file_to_upload_1: str = r"C:\Users\user\...\filename.txt"
I have created a separate filePaths.py where all these file paths set to variables are stored so adding/modifying them in the future is easy and all in one place.
The issue I am running into with all of this is when I import the .py that contains my file paths set to variables. Right now I am doing from filePaths import * for simplicity's sake. This is generally frowned upon, and VS Code throws warnings at me advising that I have unused imports. I went ahead and put my variables into separate classes and then tried to import them the following way: from filePaths import dataset_1. When I do this I get the following error: Undefined variable "variable_name", and my tests fail to run. It seems like I can only get this all to work if I import everything, and I would like to avoid doing that if possible. All my scripts are in the same directory. What am I missing here?
Sample of code:
from filePaths import *  # <-- THIS WORKS!
# from filePaths import class_1 <-- THIS DOES NOT
#Open App
desired_caps = {}
desired_caps["app"] = "C:\\Users\\Public\\Desktop\\Application_Being_Tested.lnk"
driver = webdriver.Remote("http://127.0.0.1:4723", desired_caps)
#Login
driver.find_element_by_accessibility_id("Username").send_keys("tester")
driver.find_element_by_accessibility_id("UserPassword").send_keys("password")
driver.find_element_by_accessibility_id("btnLogin").click()
###Upload Files###
#First File To Upload
driver.find_element_by_accessibility_id("ChooseFile").click()
time.sleep(.1)
driver.find_element_by_accessibility_id("FileName").click()
keyboard.write(filePaths_variable)
keyboard.press_and_release('enter')
You have three options:
Import everything using the wildcard (i.e. from filePaths import *)
Import select objects (i.e. from filePaths import object1, object2, object3 #...)
Use dot notation (i.e. import filePaths then filePaths.object1 #etc)
Some options may be considered better programming style than others.
The reason the wildcard works is that it is the same as option 2 above, as if you had listed every object created within filePaths on your import statement. In general, you should either selectively import only the methods and objects you need, or just import the script and use dot notation to selectively use methods and objects as needed.
The following example code shows how to use dot notation.
file 1:
# objects_to_import.py
bob = 127
string = 'my string'
def foo():
    print('bar')

def bar():
    print('foo')

def print_var(var):
    print(var)
file 2:
# main.py in the same directory as objects_to_import.py
import objects_to_import
print(objects_to_import.bob)
objects_to_import.print_var(objects_to_import.bob)
objects_to_import.foo()
objects_to_import.bar()
try:
    print(string)
except NameError:
    print("You didn't import that variable or use correct notation!")
Then, running main.py outputs:
"""
127
127
bar
foo
You didn't import that variable or use correct notation!
"""
The results are identical if main.py instead read:
from objects_to_import import bob, foo, bar, print_var
print(bob)
print_var(bob)
foo()
bar()
try:
    print(string)
except NameError:
    print("You didn't import that variable or use correct notation!")
Note that if we add the following code to both versions of main.py:
if 'bob' in globals():
    print('Bob is in your globals!')
else:
    print("Can't find bob in your globals")
We find that bob is in your globals namespace when explicitly imported, but is not present when using dot notation with the general, non-explicit import statement. There might therefore be pragmatic reasons to choose one import method over the other (e.g. if your program is long and complex and you would like to more easily manage potential name collisions, you should use dot notation).
Alright I've come up with a solution!
I have my filePaths.py module with class_1 in it, containing a set of variables: var_1, var_2, etc.
In my script that wants these variables, I'm bringing the module in like so:
import filePaths
path = filePaths.class_1
When I need one of the variables in class_1, instead of just var_1 I call path.var_1, and it comes in with no issues. Thank you everyone for helping out with this!
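A minimal sketch of that layout (class_1 and the var_* names are the ones described above; the file paths are placeholders, not the real ones):

# filePaths.py
class class_1:
    var_1 = r"C:\placeholder\first_file.txt"   # placeholder path
    var_2 = r"C:\placeholder\second_file.txt"  # placeholder path

# in the test script (same directory as filePaths.py):
import filePaths

path = filePaths.class_1
print(path.var_1)  # -> C:\placeholder\first_file.txt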

How can I pass imports in Python higher up the hierarchy?

I am developing a program which runs on two different platforms. Depending on which platform I want to run it on, the import directories and the names of the libraries change. For this reason I set a variable called RUN_ON_PC to True or False.
I want to implement a helper which sets the paths correctly and imports the libraries with the correct name depending on the platform, and which gives the main program an interface under the same library names. The module myimporthelper is either in the "/mylib" or in the "/sd/mylib" directory. The other module names in these directories differ.
I tried the following, which does not work, since the modules imported in myimporthelper.py are not visible to main.py:
main.py:
RUN_ON_PC = True
import sys
if RUN_ON_PC:
    sys.path.append("/mylib1")
else:
    sys.path.append("/sd/mylib1")
import myimporthelper
myimporthelper.importall(RUN_ON_PC)
a = moduleA.ClassA()  # -> produces NameError: name not defined
myimporthelper.py:
import sys
def importall(run_on_pc):
    if run_on_pc:
        sys.path.append("C:\\Users\\.....\\mylib")
        import module1 as moduleA
    else:
        sys.path.append("/sd/mylib")
        import module_a as moduleA
I want to keep main.py short and outsource the platform-dependent importing to another module. I was not able to find a solution for this and would appreciate any help.
Thanks a lot in advance.
You just have to qualify the name with the helper module name:
a = myimporthelper.moduleA.ClassA()
But the moduleA name has to be accessible. If you import it inside a function in the helper, it won't be, because of function scope, unless you declare the name as global in that helper function.
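A minimal sketch of that idea, assuming the platform-specific libraries are called module1 and module_a as in the question (the global declaration is what makes moduleA reachable as an attribute of the helper module):

# myimporthelper.py
import sys

moduleA = None  # module-level name the main program will reach through the helper

def importall(run_on_pc):
    global moduleA          # make the import below bind the module-level name
    if run_on_pc:
        sys.path.append("/mylib")
        import module1 as moduleA
    else:
        sys.path.append("/sd/mylib")
        import module_a as moduleA

# main.py
import myimporthelper

myimporthelper.importall(RUN_ON_PC)
a = myimporthelper.moduleA.ClassA()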

How to dynamically import modules?

I am trying to import modules dynamically in Python. Right now, I have a directory called 'modules' with two files inside; they are mod1.py and mod2.py. They are simple test functions that return the time (i.e. mod1.what_time('now') returns the current time).
From my main application, I can import as follows:
sys.path.append('/Users/dxg/import_test/modules')
import mod1
Then execute :
mod1.what_time('now')
and it works.
I am not always going to know what modules are available in the directory. I wanted to import as follows:
tree = []
tree = os.listdir('modules')
sys.path.append('/Users/dxg/import_test/modules')
for i in tree:
    import i
However I get the error :
ImportError: No module named i
What am I missing?
The import statement does not work with variable contents (such as strings); it works with literal module names. If you want to import dynamically, you can use the importlib.import_module method:
import importlib
import os

tree = os.listdir('modules')
...
for i in tree:
    # strip the '.py' extension: import_module expects a module name, not a file name
    importlib.import_module(i[:-3])
Note:
You cannot import like that from a directory whose modules are not under Lib or the current directory (adding the directory to the path alone won't help here). The simplest solution would be to make this directory (modules) a package (just drop an empty __init__.py file there), and call importlib.import_module('..' + i, 'modules.subpkg') or use the __import__ method.
You might also review this question. It discusses a similar situation.
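For the package route suggested above, a minimal sketch (assuming modules/ sits next to the main script and now contains an empty __init__.py alongside mod1.py and mod2.py; using pkgutil to list the importable submodules is my addition, not part of the answer):

import importlib
import pkgutil

import modules  # the directory, now a package thanks to __init__.py

loaded = {}
for info in pkgutil.iter_modules(modules.__path__):
    # import each submodule by its dotted name, e.g. 'modules.mod1'
    loaded[info.name] = importlib.import_module('modules.' + info.name)

print(loaded['mod1'].what_time('now'))  # what_time is the test function from the question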
You can achieve something like what you are proposing, but it will involve some un-pythonic code. I do not recommend doing this:
dynamic_imports = dict()
for filename in tree:
    name = filename.replace('.py', '')
    dynamic_imports[name] = __import__(name)
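If you do go that route, you would then call into the loaded modules through the dict rather than by name, e.g. (using the what_time function from the question):
current_time = dynamic_imports['mod1'].what_time('now')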

python: dynamically loading one-time plugins?

I'm writing a python application in which I want to make use of dynamic, one-time-runnable plugins.
By this I mean that at various times during the running of this application, it looks for python source files with special names in specific locations. If any such source file is found, I want my application to load it, run a pre-named function within it (if such a function exists), and then forget about that source file.
Later during the running of the application, that file might have changed, and I want my python application to reload it afresh, execute its method, and then forget about it, like before.
The standard import system keeps the module resident after the initial load, and this means that subsequent "import" or "__import__" calls won't reload the same module after its initial import. Therefore, any changes to the python code within this source file are ignored during its second through n-th imports.
In order for such packages to be loaded uniquely each time, I came up with the following procedure. It works, but it seems kind of "hacky" to me. Are there any more elegant or preferred ways of doing this? (note that the following is an over-simplified, illustrative example)
import sys
import imp
# The following module name can be anything, as long as it doesn't
# change throughout the life of the application ...
modname = '__whatever__'
def myimport(path):
    '''Dynamically load python code from "path"'''
    # get rid of previous instance, if it exists
    try:
        del sys.modules[modname]
    except:
        pass
    # load the module
    try:
        return imp.load_source(modname, path)
    except Exception, e:
        print 'exception: {}'.format(e)
        return None

path = '/path/to/plugin.py'
mymod = myimport(path)
if mymod is not None:
    # call the plugin function:
    try:
        mymod.func()
    except:
        print 'func() not defined in plugin: {}'.format(path)
Addendum: one problem with this is that func() runs within a separate module context, and it has no access to any functions or variables within the caller's space. I therefore have to do inelegant things like the following if I want func_one(), func_two() and abc to be accessible within the invocation of func():
def func_one():
    # whatever
    pass

def func_two():
    # whatever
    pass

abc = '123'

# Load the module as shown above, but before invoking mymod.func(),
# the following has to be done ...
mymod.func_one = func_one
mymod.func_two = func_two
mymod.abc = abc
# This is a PITA, and I'm hoping there's a better way to do all of this.
Thank you very much.
I use the following code to do this sort of thing.
Note that I don't actually import the code as a module, but instead execute the code in a particular context. This lets me define a bunch of api functions automatically available to the plugins without users having to import anything.
def load_plugin(filename, context):
    source = open(filename).read()
    code = compile(source, filename, 'exec')
    exec(code, context)
    return context['func']

context = {'func_one': func_one, 'func_two': func_two, 'abc': abc}
func = load_plugin(filename, context)
func()
This method works in python 2.6+ and python 3.3+
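As an aside (not part of either answer): on Python 3.5+ the same load-it-fresh-each-time behaviour is available without the deprecated imp module. A minimal sketch, reusing the path and function name from the question:

import importlib.util

def load_plugin(path, modname='__whatever__'):
    # nothing is registered in sys.modules here, so every call
    # re-reads and re-executes the plugin source file
    spec = importlib.util.spec_from_file_location(modname, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

mymod = load_plugin('/path/to/plugin.py')
if hasattr(mymod, 'func'):
    mymod.func()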
The approach you use is totally fine. For this question
one problem with this is that func() runs within a separate module context, and it has no access to any functions or variables within the caller's space.
It may be better to use the execfile function (or, in Python 3, the equivalent exec call shown below):
# main.py
def func1():
    print('func1 called')

# similar to import, except everything is executed in the current module's namespace
exec(open('/path/to/plugin.py', 'r').read(), globals())
# execfile('/path/to/plugin.py', globals())  # Python 2 version
func()
Test it:
#/path/to/plugin.py
def func():
    func1()
Result:
python main.py
# func1 called
One potential problem with this approach is namespace pollution: because every file is run in the current namespace, the chance of name conflicts increases.
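If that pollution is a concern, a middle ground (essentially the context-dict idea from the first answer) is to exec the plugin into its own dictionary and share only what you choose; a small sketch:

# main.py
def func1():
    print('func1 called')

plugin_ns = {'func1': func1}  # expose only what the plugin is allowed to see
with open('/path/to/plugin.py') as f:
    exec(f.read(), plugin_ns)

plugin_ns['func']()  # the plugin's names stay out of this module's globals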

module reimported if imported from different path

In a big application I am working on, several people import the same modules differently, e.g.
import x
or
from y import x
The side effect of that is that x is imported twice, which may introduce very subtle bugs if someone is relying on global attributes.
E.g. suppose I have a package mypackage with three files: mymodule.py, main.py and __init__.py.
mymodule.py contents
l = []
class A(object): pass
main.py contents
def add(x):
    from mypackage import mymodule
    mymodule.l.append(x)
    print "updated list", mymodule.l

def get():
    import mymodule
    return mymodule.l

add(1)
print "lets check", get()
add(1)
print "lets check again", get()
it prints
updated list [1]
lets check []
updated list [1, 1]
lets check again []
because now there are two lists in two different modules; similarly, class A is different.
To me it looks serious enough, because the classes themselves will be treated differently.
e.g. below code prints False
def create():
    from mypackage import mymodule
    return mymodule.A()

def check(a):
    import mymodule
    return isinstance(a, mymodule.A)

print check(create())
Question:
Is there any way to avoid this, other than enforcing that the module should be imported only one way? Can't this be handled by the Python import mechanism? I have seen several bugs related to this in Django code and elsewhere too.
Each module namespace is imported only once. The issue is that you're importing them differently: in the first case you're importing from the package, and in the second you're doing a local, non-packaged import. Python sees the modules as different: the first import is internally cached as mypackage.mymodule and the second one as mymodule only.
A way to solve this is to always use absolute imports. That is, always give your module absolute import paths from the top-level package onwards:
def add(x):
    from mypackage import mymodule
    mymodule.l.append(x)
    print "updated list", mymodule.l

def get():
    from mypackage import mymodule
    return mymodule.l
Remember that your entry point (the file you run, main.py) should also be outside the package. When you want the entry point code to be inside the package, you usually run a small wrapper script instead. Example:
runme.py, outside the package:
from mypackage.main import main
main()
And in main.py you add:
def main():
    # your code
I find this document by Jp Calderone to be a great tip on how to (not) structure your python project. Following it you won't have issues. Pay attention to the bin folder - it is outside the package. I'll reproduce the entire text here:
Filesystem structure of a Python project
Do:
name the directory something related to your project. For example, if your project is named "Twisted", name the top-level directory for its source files Twisted. When you do releases, you should include a version number suffix: Twisted-2.5.
create a directory Twisted/bin and put your executables there, if you have any. Don't give them a .py extension, even if they are Python source files. Don't put any code in them except an import of and call to a main function defined somewhere else in your projects.
If your project is expressable as a single Python source file, then put it into the directory and name it something related to your project. For example, Twisted/twisted.py. If you need multiple source files, create a package instead (Twisted/twisted/, with an empty Twisted/twisted/__init__.py) and place your source files in it. For example, Twisted/twisted/internet.py.
put your unit tests in a sub-package of your package (note - this means that the single Python source file option above was a trick - you always need at least one other file for your unit tests). For example, Twisted/twisted/test/. Of course, make it a package with Twisted/twisted/test/__init__.py. Place tests in files like Twisted/twisted/test/test_internet.py.
add Twisted/README and Twisted/setup.py to explain and install your software, respectively, if you're feeling nice.
Don't:
put your source in a directory called src or lib. This makes it hard to run without installing.
put your tests outside of your Python package. This makes it hard to run the tests against an installed version.
create a package that only has a __init__.py and then put all your code into __init__.py. Just make a module instead of a package, it's simpler.
try to come up with magical hacks to make Python able to import your module or package without having the user add the directory containing it to their import path (either via PYTHONPATH or some other mechanism). You will not correctly handle all cases and users will get angry at you when your software doesn't work in their environment.
I can only replicate this if main.py is the file you are actually running. In that case the directory of main.py ends up on sys.path. But you apparently also have a system path set so that mypackage can be imported.
Python will, in that situation, not realize that mymodule and mypackage.mymodule are the same module, and you get this effect. This change illustrates it:
def add(x):
    from mypackage import mymodule
    print "mypackage.mymodule path", mymodule
    mymodule.l.append(x)
    print "updated list", mymodule.l

def get():
    import mymodule
    print "mymodule path", mymodule
    return mymodule.l

add(1)
print "lets check", get()
add(1)
print "lets check again", get()
$ export PYTHONPATH=.
$ python mypackage/main.py
mypackage.mymodule path <module 'mypackage.mymodule' from '/tmp/mypackage/mymodule.pyc'>
mymodule path <module 'mymodule' from '/tmp/mypackage/mymodule.pyc'>
But add another main file, in the current directory:
realmain.py:
from mypackage import main
and the result is different:
mypackage.mymodule path <module 'mypackage.mymodule' from '/tmp/mypackage/mymodule.pyc'>
mymodule path <module 'mypackage.mymodule' from '/tmp/mypackage/mymodule.pyc'>
So I suspect that you have your main python file within the package. And in that case the solution is to not do that. :-)
