Dynamically reload a class definition in Python

I've written an IRC bot using Twisted and now I've gotten to the point where I want to be able to dynamically reload functionality.
In my main program, I do from bots.google import GoogleBot and I've looked at how to use reload to reload modules, but I still can't figure out how to do dynamic re-importing of classes.
So, given a Python class, how do I dynamically reload the class definition?

Reload is unreliable and has many corner cases where it may fail. It is suitable for reloading simple, self-contained scripts. If you want to dynamically reload your code without a restart, consider using a forkloop instead:
http://opensourcehacker.com/2011/11/08/sauna-reload-the-most-awesomely-named-python-package-ever/

You cannot reload the module using reload(module) when using the from X import Y form. You'd have to do something like reload(sys.modules['module']) in that case.
This might not necessarily be the best way to do what you want, but it works!
import bots.google

class BotClass(irc.IRCClient):
    def __init__(self):
        global plugins
        plugins = [bots.google.GoogleBot()]

    def privmsg(self, user, channel, msg):
        global plugins
        parts = msg.split(' ')
        trigger = parts[0]
        if trigger == '!reload':
            reload(bots.google)
            plugins = [bots.google.GoogleBot()]
            print "Successfully reloaded plugins"

I figured it out, here's the code I use:
import re

def reimport_class(self, cls):
    """
    Reload and re-import class "cls". Return the new definition of the class.
    """
    # Get the fully qualified name of the class.
    from twisted.python import reflect
    full_path = reflect.qual(cls)
    # Naively parse out the module name and class name.
    # Can be done much better...
    match = re.match(r'(.*)\.([^\.]+)', full_path)
    module_name = match.group(1)
    class_name = match.group(2)
    # This is where the good stuff happens.
    mod = __import__(module_name, fromlist=[class_name])
    reload(mod)
    # Return the (reloaded definition of the) class itself.
    return getattr(mod, class_name)
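The plugin list can then be refreshed by rebinding each instance to its reloaded class. A hypothetical usage sketch, assuming the plugins keep no state worth preserving:

# Hypothetical: swap every plugin for a fresh instance of its reloaded class.
self.plugins = [self.reimport_class(type(p))() for p in self.plugins]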

Better yet, run the plugins in a subprocess and supervise that subprocess; when the files change, restart the plugins process.
Edit: cleaned up.
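A minimal sketch of that idea, assuming a plugin entry point at plugins/runner.py and a one-second mtime poll (both the path and the interval are made up for illustration):

import os
import subprocess
import time

SCRIPT = "plugins/runner.py"  # hypothetical plugin entry point

def supervise():
    proc = subprocess.Popen(["python", SCRIPT])
    last_mtime = os.path.getmtime(SCRIPT)
    while True:
        time.sleep(1.0)
        mtime = os.path.getmtime(SCRIPT)
        if mtime != last_mtime:  # file changed: restart the plugins process
            last_mtime = mtime
            proc.terminate()
            proc.wait()
            proc = subprocess.Popen(["python", SCRIPT])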

You can use sys.modules to dynamically reload modules based on user input.
Say that you have a folder with multiple plugins such as:
module/
cmdtest.py
urltitle.py
...
You can use sys.modules in this way to load/reload modules based on user input:
import sys

if 'module.' + userinput in sys.modules:
    reload(sys.modules['module.' + userinput])
else:
    print 'Module not loaded. Cannot reload.'
    try:
        module = __import__("module." + userinput)
        module = sys.modules["module." + userinput]
    except ImportError:
        print 'Error when trying to load %s' % userinput

When you do a from ... import ... it binds the object into the local namespace, so all you need to do is re-import it. However, since the module is already loaded, it will just re-import the same version of the class, so you would need to reload the module too. So this should do it:
from bots.google import GoogleBot
...
# do stuff
...
reload(bots.google)
from bots.google import GoogleBot
If for some reason you don't know the module name, you can get it from GoogleBot.__module__.
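For example (a small sketch; reload here is the Python 2 builtin):

import sys

module_name = GoogleBot.__module__  # e.g. "bots.google"
reload(sys.modules[module_name])
from bots.google import GoogleBot  # rebind the fresh class definition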

import sys
import string

def reload_class(class_obj):
    module_name = class_obj.__module__
    module = sys.modules[module_name]
    pycfile = module.__file__
    modulepath = string.replace(pycfile, ".pyc", ".py")
    code = open(modulepath, 'rU').read()
    # Compile first, so a source file with syntax errors is caught here.
    compile(code, module_name, "exec")
    module = reload(module)
    return getattr(module, class_obj.__name__)
There is a lot of error checking you can do on this; and if you're using global variables you will probably have to figure out what happens to them.
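On Python 3, where reload moved to importlib, a trimmed-down equivalent might look like this (a sketch; it skips the source-file syntax check above):

import importlib
import sys

def reload_class(class_obj):
    # Reload the defining module, then pull the fresh class object from it.
    module = importlib.reload(sys.modules[class_obj.__module__])
    return getattr(module, class_obj.__name__)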

Related

How can I redirect module imports with modern Python?

I am maintaining a python package in which I did some restructuring. Now, I want to support clients who still do from my_package.old_subpackage.foo import Foo instead of the new from my_package.new_subpackage.foo import Foo, without explicitly reintroducing many files that do the forwarding. (old_subpackage still exists, but no longer contains foo.py.)
I have learned that there are "loaders" and "finders", and my impression was that I should implement a loader for my purpose, but I only managed to implement a finder so far:
import importlib.util
import sys

RENAMED_PACKAGES = {
    'my_package.old_subpackage.foo': 'my_package.new_subpackage.foo',
}

# TODO: ideally, we would not just implement a "finder", but also a "loader"
# (using the importlib.util.module_for_loader decorator); this would enable us
# to get module contents that also pass identity checks

class RenamedFinder:
    @classmethod
    def find_spec(cls, fullname, path, target=None):
        renamed = RENAMED_PACKAGES.get(fullname)
        if renamed is not None:
            sys.stderr.write(
                f'WARNING: {fullname} was renamed to {renamed}; please adapt import accordingly!\n')
            return importlib.util.find_spec(renamed)
        return None

sys.meta_path.append(RenamedFinder())
https://docs.python.org/3.5/library/importlib.html#importlib.util.module_for_loader and related functionality, however, seem to be deprecated. I know it's not a very pythonic thing I am trying to achieve, but I would be glad to learn that it's achievable.
On import of your package's __init__.py, you can place whatever objects you want into sys.modules; the values you put in there will be returned by import statements:
from . import new_package
from .new_package import module1, module2
import sys
sys.modules["my_lib.old_package"] = new_package
sys.modules["my_lib.old_package.module1"] = module1
sys.modules["my_lib.old_package.module2"] = module2
If someone now uses import my_lib.old_package or import my_lib.old_package.module1 they will obtain a reference to my_lib.new_package.module1. Since the import machinery already finds the keys in the sys.modules dictionary, it never even begins looking for the old files.
If you want to avoid importing all the submodules immediately, you can emulate a bit of lazy loading by placing a module with a __getattr__ in sys.modules:
from types import ModuleType
import importlib
import sys

class LazyModule(ModuleType):
    def __init__(self, name, mod_name):
        super().__init__(name)
        self.__mod_name = mod_name  # fully qualified name of the real module

    def __getattr__(self, attr):
        if "_lazy_module" not in self.__dict__:
            # First access: import the real module and cache it.
            self._lazy_module = importlib.import_module(self.__mod_name, package="my_lib")
        return getattr(self._lazy_module, attr)

sys.modules["my_lib.old_package"] = LazyModule("my_lib.old_package", "my_lib.new_package")
In the __init__ file of the old module, have it import from the newer modules.
Old (package.oldpkg):
foo = __import__("Path to new module")
New (package.newpkg):
class foo:
    bar = "thing"
so
package.oldpkg.foo.bar is the same as package.newpkg.foo.bar
Hope this helps!
I think that this is what you are looking for:
import importlib
import importlib.util
import sys

RENAMED_PACKAGES = {
    'my_package.old_subpackage.foo': 'my_package.new_subpackage.foo',
}

class RenamedFinder:
    @classmethod
    def find_spec(cls, fullname, path, target=None):
        renamed = RENAMED_PACKAGES.get(fullname)
        if renamed is not None:
            sys.stderr.write(
                f'WARNING: {fullname} was renamed to {renamed}; please adapt import accordingly!\n')
            spec = importlib.util.find_spec(renamed)
            spec.loader = cls
            return spec
        return None

    @staticmethod
    def create_module(spec):
        return importlib.import_module(spec.name)

    @staticmethod
    def exec_module(module):
        pass

sys.meta_path.append(RenamedFinder())
Still, IMO the approach that manipulates sys.modules is preferable, as it is more readable, more explicit, and gives you much more control. It might become useful especially in future versions of your package, when my_package.new_subpackage.foo starts to diverge from my_package.old_subpackage.foo while you still need to provide the old one for backward compatibility. For that reason, you would maybe need to preserve the code of both anyway.
Consolidate all the old package names into my_package.
Old packages (old_package):
- image_processing (class): will be deleted and replaced by super_image_processing
- text_recognition (class): will be deleted and replaced by better_text_recognition
- foo (variable): will be moved to better_text_recognition
- still_there (class): will not move
New packages:
- super_image_processing
- better_text_recognition
Redirector (class of my_package):
class old_package:
    image_processing = super_image_processing  # Will be replaced
    text_recognition = better_text_recognition  # Will be replaced
Your main new module (my_package):
# imports here

class super_image_processing:
    def its(gets, even, better):
        pass

class better_text_recognition:
    def now(better, than, ever):
        pass

class old_package:
    # Links
    image_processing = super_image_processing
    text_recognition = better_text_recognition
    still_there = __import__("path to unchanged module")
This allows you to delete some files and keep the rest. If you want to redirect variables you would do:
class super_image_processing:
    def its(gets, even, better):
        pass

class better_text_recognition:
    def now(better, than, ever):
        pass

class old_package:
    # Links
    image_processing = super_image_processing
    text_recognition = better_text_recognition
    foo = text_recognition.foo
    still_there = __import__("path to unchanged module")
Would this work?

How to dynamically reload function in Python?

I'm trying to create a process that dynamically watches Jupyter notebooks, compiles them on modification, and imports them into my current file; however, I can't seem to execute the updated code. It only executes the first version that was loaded.
There's a file called producer.py that calls this function repeatedly:
import fs.fs_util as fs_util

while True:
    fs_util.update_feature_list()
In fs_util.py I do the following:
from fs.feature import Feature
import inspect
from importlib import reload
import os

def is_subclass_of_feature(o):
    return inspect.isclass(o) and issubclass(o, Feature) and o is not Feature

def get_instances_of_features(name):
    module = __import__(COMPILED_MODULE, fromlist=[name])
    module = reload(module)
    feature_members = getattr(module, name)
    all_features = inspect.getmembers(feature_members, predicate=is_subclass_of_feature)
    return [f[1]() for f in all_features]
This function is called by:
def update_feature_list(name):
    os.system("jupyter nbconvert --to script {}{} --output {}{}"
              .format(PATH + "/" + s3.OUTPUT_PATH, name + JUPYTER_EXTENSION, PATH + "/" + COMPILED_PATH, name))
    features = get_instances_of_features(name)
    for f in features:
        try:
            feature = f.create_feature()
        except Exception as e:
            print(e)
There is other irrelevant code that checks for whether a file has been modified etc.
I can tell the file is being reloaded correctly because when I use inspect.getsource(f.create_feature) on the class it displays the updated source code, however during execution it returns older values. I've verified this by changing print statements as well as comparing the return values.
Also, for some more context, this is the file I'm trying to import:
from fs.feature import Feature

class SubFeature(Feature):
    def __init__(self):
        Feature.__init__(self)

    def create_feature(self):
        return "hello"
I was wondering what I was doing incorrectly?
So I found out what I was doing wrong.
When calling reload I was reloading the module I had newly imported, which was fairly idiotic I suppose. The correct solution (in my case) was to reload the module from sys.modules, so it would be something like reload(sys.modules[COMPILED_MODULE + "." + name]).
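That is (a minimal sketch, reusing COMPILED_MODULE and name from the code above):

import sys
from importlib import reload

module = reload(sys.modules[COMPILED_MODULE + "." + name])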

Creating a pseudo-module that creates submodules at runtime

To support extensions in my Python project, I'm trying to create a pseudo-module that will serve "extension modules" as its submodules. I'm having a problem treating the submodules as modules: it seems like I need to access them using from..import on the main pseudo-module and can't just use their full path.
Here is a minimal working example:
import sys
from types import ModuleType

class Foo(ModuleType):
    @property
    def bar(self):
        # Here I would actually find the location of `bar.py` and load it
        bar = ModuleType('foo.bar')
        sys.modules['foo.bar'] = bar
        return bar

sys.modules['foo'] = Foo('foo')

from foo import bar  # without this line the next line fails
import foo.bar
This works, but if I comment out the from foo import bar line, it'll fail with:
ImportError: No module named bar
on Python2, and on Python3 it'll fail with:
ModuleNotFoundError: No module named 'foo.bar'; 'foo' is not a package
If I add the fields to make it a package:
class Foo(ModuleType):
    __all__ = ('bar',)
    __package__ = 'foo'
    __path__ = []
    __file__ = __file__
It'll fail on:
ModuleNotFoundError: No module named 'foo.bar'
From what I understand, the problem is that I did not set sys.modules['foo.bar'] yet. But... to fill sys.modules I need to load the module first, and I don't want to do it unless the user of my project explicitly imports it.
Is there any way to make Python realize that when it sees import foo.bar it needs to load foo first(or I can just guarantee foo will already be loaded at that point) and take bar from it?
This post does NOT answer "This is how you do it."
If you want to know how to do this yourself look at PEP 302 or Idan Arye's solution.
This post instead presents a recipe that makes it easy to write. The recipe is at the end of this answer.
The block of code below defines two classes intended for use: PseudoModule and PseudoPackage. They differ only in whether import foo.x should raise an error stating that foo isn't a package, or should try to load x and make sure it's a module. Several example uses are outlined below.
PseudoModule
PseudoModule can be used as a decorator on a function; it creates a new module object that, when an attribute is accessed for the first time, calls the decorated function with the name of the attribute and the namespace of previously defined elements.
For example, this will make a module that assigns a new integer to each attribute accessed:
@PseudoModule
def access_tracker(attr, namespace):
    namespace["_count"] = namespace.get("_count", -1) + 1
    return namespace["_count"]

# PseudoModule will set `namespace[attr] = <return value>` for you;
# this can be overridden by passing `remember_results=False` to the constructor
sys.modules["access_tracker"] = access_tracker

from access_tracker import zero, one, two, three
assert zero == 0 and one == 1 and two == 2 and three == 3
PseudoPackage
PseudoPackage is used the same way as PseudoModule; however, if the decorated function returns a module (or package), its name is corrected to be qualified as a subpackage, and sys.modules is updated as needed. (The top-level package still needs to be added to sys.modules manually.)
Here is an example use of PseudoPackage:
spam_submodules = {"bacon"}
spam_attributes = {"eggs", "ham"}

@PseudoPackage
def spam(name, namespace):
    print("getting a component of spam:", name)
    if name in spam_submodules:
        @PseudoModule
        def submodule(attr, nested_namespace):
            print("getting a component of submodule {}: {}".format(name, attr))
            return attr  # use the string of the attribute
        return submodule  # PseudoPackage will rename the module to be spam.bacon for us
    elif name in spam_attributes:
        return "supported attribute"
    else:
        raise AttributeError("spam doesn't have any {!r}.".format(name))

sys.modules["spam"] = spam

import spam.bacon
# prints "getting a component of spam: bacon"
assert spam.bacon.something == "something"
# prints "getting a component of submodule bacon: something"

from spam import eggs
# prints "getting a component of spam: eggs"
assert eggs == "supported attribute"

import spam.ham  # ham isn't a submodule, raises error!
The way PseudoPackage is set up also makes arbitrary-depth packages very easy, although this specific example doesn't accomplish much:
def make_abstract_package(qualname=""):
    "makes a PseudoPackage that has arbitrary nesting of subpackages"
    def gen_func(attr, namespace):
        print("getting {!r} from package {!r}".format(attr, qualname))
        return make_abstract_package("{}.{}".format(qualname, attr))
    # can pass the name of the module as second argument if needed
    return PseudoPackage(gen_func, qualname)

sys.modules["foo"] = make_abstract_package("foo")

from foo.bar.baz import thing_I_want
## prints:
#   getting 'bar' from package 'foo'
#   getting 'baz' from package 'foo.bar'
#   getting 'thing_I_want' from package 'foo.bar.baz'

print(thing_I_want)
# prints "<module 'foo.bar.baz.thing_I_want' from '<PseudoPackage>'>"
A few notes on implementation
As general guidelines:
- The function that computes attributes of the module should not import the module it is defining the attributes for.
- If you want a package or module to be available for import, you need to put it in sys.modules yourself.
- PseudoPackage assumes each submodule is unique; don't reuse module objects.
It is also worth noting that sys.modules is only updated with submodules of PseudoPackages when an import statement requires the name to be a module. For example, if foo is a package already in sys.modules but foo.x has not been referenced yet, then all these assertions will pass:
assert "foo.x" not in sys.modules and not hasattr(foo,"x")
import foo; foo.x #foo.x is computed but not added to sys.modules
assert "foo.x" not in sys.modules and hasattr(foo,"x")
from foo import x #x is retrieved from namespace but sys.modules is still not affected
assert "foo.x" not in sys.modules
import foo.x #if x is a module then "foo.x" is added to sys.modules
assert "foo.x" in sys.modules
Likewise, in the above case, if foo.x isn't a module then the statement import foo.x raises a ModuleNotFoundError.
Finally, while the problematic edge cases I have identified can be avoided by following the guidelines above, the docstring for _PseudoPackageLoader describes the implementation details responsible for the unwanted behaviour, for the sake of possible future modifications.
The recipe
import sys
from types import ModuleType
import importlib.abc  # uses Loader and MetaPathFinder, more for inspection purposes than use

class RawPseudoModule(ModuleType):
    """
    See PseudoModule for documentation; this class is not intended for direct use.

    RawPseudoModule does not handle __path__, so the generating function of direct
    instances is expected to make and return an appropriate value for __path__.

    *** if you do not know what an appropriate value for __path__ is
        then use PseudoModule instead ***
    """
    # using slots keeps these two variables out of the module dictionary
    __slots__ = ["__generating_func", "__remember_results"]

    def __init__(self, func, name=None, remember_results=True):
        name = name or func.__name__
        super(RawPseudoModule, self).__init__(name)
        self.__file__ = "<{0.__class__.__name__}>".format(self)
        self.__generating_func = func
        self.__remember_results = remember_results

    def __getattr__(self, attr):
        value = self.__generating_func(attr, vars(self))
        if self.__remember_results:
            setattr(self, attr, value)
        return value

class PseudoModule(RawPseudoModule):
    """
    A module that has attributes generated from a specified function.

    The generating function passed to the constructor should have the signature:

        f(attr: str, namespace: dict) -> object

    where attr is the name of the attribute accessed and namespace holds the
    currently defined values in the module. The function should return a value
    for the attribute or raise an AttributeError if it doesn't exist.

    By default the result is then saved to the namespace, so you don't have to
    explicitly do "namespace[attr] = <value>"; this behaviour can be overridden
    by specifying "remember_results=False" in the constructor.

    If no name is specified in the constructor, the function name will be used
    for the module name instead; this allows the class to be used as a decorator.

    Note: the PseudoModule class is set up so that "import foo.bar" when foo is
    a PseudoModule will fail stating "'foo' is not a package".
    - to allow importing submodules use PseudoPackage.
    - to handle the internal __path__ manually use RawPseudoModule.
    Note: the module is NOT added to sys.modules automatically.
    """
    def __getattr__(self, attr):
        # to not have submodules, __path__ must not exist
        if attr == "__path__":
            msg = "{0.__name__} is a PseudoModule, it is not a package so it doesn't have a __path__"
            # this error message would only be seen by people who explicitly access __path__
            raise AttributeError(msg.format(self))
        return super(PseudoModule, self).__getattr__(attr)

class PseudoPackage(RawPseudoModule):
    """
    A version of PseudoModule that sets itself up to allow importing subpackages.

    When a submodule is imported from a PseudoPackage:
    - it is evaluated with the generating function,
    - the name of the submodule is overridden to be correctly qualified,
    - and it is added to sys.modules to allow repeated imports.

    Note: the top-level package still needs to be added to sys.modules manually.
    Note: a RecursionError will be raised if the code that generates submodules
    attempts to import another submodule from the PseudoPackage.
    """
    # IMPLEMENTATION DETAIL: technically this doesn't deal with adding submodules
    #   to sys.modules; that is handled in _PseudoPackageLoader, which explicitly
    #   checks for instances of PseudoPackage
    __path__ = []  # packages must have a __path__ to be recognized as packages.

    def __getattr__(self, attr):
        value = super(PseudoPackage, self).__getattr__(attr)
        if isinstance(value, ModuleType):
            # I'm just going to say if it's a module then the name must be in this format.
            value.__name__ = self.__name__ + "." + attr
        return value

class _PseudoPackageLoader(importlib.abc.Loader, importlib.abc.MetaPathFinder):
    """
    Singleton finder and loader for pseudo packages.

    Whenever a subpackage of a PseudoPackage (that is already in sys.modules) is
    imported, this will handle loading it and adding the subpackage to sys.modules.

    Note that although PEP 302 states the finder should not depend on the parent
    being loaded in sys.modules, this is implemented under the understanding that
    the user of PseudoPackage will add their module to sys.modules manually
    themselves, so this will work only when the parent is present in sys.modules.

    Also, PEP 302 indicates the module should be added to sys.modules first in
    case it is imported during its execution; however, this is impossible due to
    the nature of how the module actually gets loaded.
    So for heaven's sake don't try to import a pseudo package or a module that
    uses a pseudo package from within the code that generates it.

    I have only tested this when the submodule is either a PseudoModule or a
    PseudoPackage and it was created new from the generating function; ideally
    there would be a way to allow the generating function to return an unexecuted
    module and this would properly handle executing it, but I don't know how to
    deal with that.
    """
    def find_module(self, fullname, path):
        # this will only support loading if the parent package is a PseudoPackage
        base, _, _ = fullname.rpartition(".")
        if isinstance(sys.modules.get(base), PseudoPackage):
            return self
        # I found that `if path is PseudoPackage.__path__` worked the same way for
        # all the cases I tested; however, since load_module will fail if the base
        # part isn't in sys.modules, it seems safer to just check for that.

    def load_module(self, fullname):
        if fullname in sys.modules:
            return sys.modules[fullname]
        base, _, sub = fullname.rpartition(".")
        parent = sys.modules[base]
        try:
            submodule = getattr(parent, sub)
        except AttributeError:
            # when we just access `foo.x` it raises an AttributeError
            # but `import foo.x` should instead raise an ImportError
            raise ImportError("cannot import name {!r}".format(sub))
        if not isinstance(submodule, ModuleType):
            # match the format of error raised when the submodule isn't a module
            # example: `import sys.path` raises the same format of error.
            raise ModuleNotFoundError("No module named {}".format(fullname))
        # fill all the fields as described in PEP 302 except __name__
        submodule.__loader__ = self
        submodule.__package__ = base
        submodule.__file__ = getattr(submodule, "__file__", "<submodule of PseudoPackage>")
        # if there was a way to do this before the module was made that'd be nice
        sys.modules[fullname] = submodule
        # if we needed to execute the body of an unloaded module it'd be done here.
        return submodule

# add the loader to sys.meta_path so it will handle our pseudo packages
sys.meta_path.append(_PseudoPackageLoader())
Thanks to the link @TadhgMcDonald-Jensen provided, I managed to solve it:
import sys
from types import ModuleType

class FooImporter(object):
    module = ModuleType('foo')
    module.__path__ = [module.__name__]

    def find_module(self, fullname, path):
        if fullname == self.module.__name__:
            return self
        if path == [self.module.__name__]:
            return self

    def load_module(self, fullname):
        if fullname == self.module.__name__:
            return sys.modules.setdefault(fullname, self.module)
        assert fullname.startswith(self.module.__name__ + '.')
        try:
            return sys.modules[fullname]
        except KeyError:
            submodule = ModuleType(fullname)
            name = fullname[len(self.module.__name__) + 1:]
            setattr(self.module, name, submodule)
            sys.modules[fullname] = submodule
            return submodule

sys.meta_path.append(FooImporter())

from foo import bar
@TadhgMcDonald-Jensen - please make an answer so that I can accept it.
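For what it's worth, find_module/load_module are deprecated in modern Python; a rough equivalent using the find_spec/exec_module protocol might look like this (a sketch under that assumption, not the accepted answer's exact code):

import sys
import importlib.abc
import importlib.util

class FooFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    prefix = 'foo'

    def find_spec(self, fullname, path, target=None):
        # Claim 'foo' itself and anything below it.
        if fullname == self.prefix or fullname.startswith(self.prefix + '.'):
            return importlib.util.spec_from_loader(fullname, self, is_package=True)
        return None

    def create_module(self, spec):
        return None  # use the default module creation

    def exec_module(self, module):
        # Here you would locate and execute the real source of the submodule.
        pass

sys.meta_path.append(FooFinder())

from foo import bar  # both forms now work
import foo.bar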

Pythonic way to dynamically load and call modules

I have working code, but I would like to know what the proper Pythonic approach is.
My goal: have a directory of "plugins" (one module per plugin), which is dynamically loaded when the program runs. All of the modules will have a function defined, which will act as an "entrypoint".
The aim is to have a script which is easily extended by some extra functionality.
What I have come up with is the following (reporter = plugin in this case).
import os
import importlib
import reporters  # Package where plugins (reporters) will reside

def find_reporters():
    # Find all modules in directory "reporters" which look like "*_reporter.py"
    reporters = [rep.rsplit('.py', 1)[0] for rep in os.listdir('reporters') if rep.endswith('_reporter.py')]
    functions = []
    for reporter in reporters:
        module = importlib.import_module('.' + reporter, 'reporters')
        try:
            func = getattr(module, 'entry_function')  # Read the entry_function if present
            functions.append(func)  # Add the function to the list to be returned
        except AttributeError as e:
            print(e)
    return functions

def main():
    funcs = find_reporters()
    for func in funcs:
        func()  # Execute all collected functions
I am not too seasoned in Python, so is this an acceptable solution?
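For comparison, a common variant of this pattern enumerates the package with pkgutil.iter_modules instead of os.listdir, which avoids hard-coding the directory path (a sketch; it keeps the entry_function convention from the question):

import importlib
import pkgutil

import reporters  # package where plugins (reporters) reside

def find_reporters():
    functions = []
    # iter_modules walks the package's __path__, so no directory name is hard-coded
    for info in pkgutil.iter_modules(reporters.__path__):
        if not info.name.endswith('_reporter'):
            continue
        module = importlib.import_module('reporters.' + info.name)
        entry = getattr(module, 'entry_function', None)  # skip modules without an entry point
        if entry is not None:
            functions.append(entry)
    return functions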

python windows directory mtime: how to detect package directory new file?

I'm working on an auto-reload feature for WHIFF (http://whiff.sourceforge.net), so that you have to restart the HTTP server less often, ideally never.
I have the following code to reload a package module "location" if a file is added to the package directory. It doesn't work on Windows XP. How can I fix it? I think the problem is that getmtime(dir) doesn't change on Windows when the directory content changes?
I'd really rather not compare an os.listdir(dir) with the last directory content every time I access the package...
if not do_reload and hasattr(location, "__path__"):
    path0 = location.__path__[0]
    if os.path.exists(path0):
        dir_mtime = int(os.path.getmtime(path0))
        if fn_mtime < dir_mtime:
            print "dir change: reloading package root", location
            do_reload = True
            md_mtime = dir_mtime
In the code, "fn_mtime" is the recorded mtime from the last (re)load.
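On platforms where a directory's mtime is unreliable, one fallback is to cache a snapshot of the directory listing and compare it on access. This is the os.listdir comparison the question hopes to avoid, but a frozenset comparison is cheap. A sketch, with _dir_snapshots as a hypothetical module-level cache:

import os

_dir_snapshots = {}  # hypothetical cache: directory path -> frozenset of entries

def dir_changed(path):
    # Detect added/removed files by comparing against the cached listing.
    current = frozenset(os.listdir(path))
    previous = _dir_snapshots.get(path)
    _dir_snapshots[path] = current
    return previous is not None and previous != current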
... added comment: I came up with the following workaround, which I think may work, but I don't care for it too much since it involves code generation. I dynamically generate a code fragment to load a module, and if that fails it tries again after a reload. Not tested yet.
GET_MODULE_FUNCTION = """
def f():
import %(parent)s
try:
from %(parent)s import %(child)s
except ImportError:
# one more time...
reload(%(parent)s)
from %(parent)s import %(child)s
return %(child)s
"""
def my_import(partname, parent):
    f = None  # for pychecker
    parentname = parent.__name__
    defn = GET_MODULE_FUNCTION % {"parent": parentname, "child": partname}
    #pr "executing"
    #pr defn
    try:
        exec(defn)  # defines function f()
    except SyntaxError:
        raise ImportError, "bad function name " + repr(partname) + "?"
    partmodule = f()
    #pr "got", partmodule
    setattr(parent, partname, partmodule)
    #pr "setattr", parent, ".", partname, "=", getattr(parent, partname)
    return partmodule
Other suggestions welcome. I'm not happy about this...
long time no see. I'm not sure exactly what you're doing, but the equivalent of your code:
GET_MODULE_FUNCTION = """
def f():
import %(parent)s
try:
from %(parent)s import %(child)s
except ImportError:
# one more time...
reload(%(parent)s)
from %(parent)s import %(child)s
return %(child)s
"""
to be execed with:
defn = GET_MODULE_FUNCTION % {"parent": parentname, "child": partname}
exec(defn)
is (per the docs), assuming parentname names a package and partname names a module in that package (if partname is a top-level name of the parentname package, such as a function or class, you'll have to use a getattr at the end):
import sys

def f(parentname, partname):
    name = '%s.%s' % (parentname, partname)
    try:
        __import__(name)
    except ImportError:
        parent = __import__(parentname)
        reload(parent)
        __import__(name)
    return sys.modules[name]
without exec or anything weird, just call this f appropriately.
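For instance (assuming a package mypkg containing a module plugin):

mod = f("mypkg", "plugin")  # imports mypkg.plugin, reloading mypkg once on an ImportError
print mod.__name__  # prints "mypkg.plugin"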
You can try using os.path.getatime() instead.
I'm not understanding your question completely...
Are you calling getmtime() on a directory or an individual file?
There are two things about your first code snippet that concern me:
You cast the float from getmtime to int. Depending on the frequency this code is run at, you might get unreliable results.
At the end of the code you assign dir_mtime to a variable md_mtime, while fn_mtime, which you check against, seems never to be updated.
