How to check in Python if some class (by string name) exists?

Is it possible to check if some class exists? I have class names in a json config file.
I know I could simply try to create an object from the class name string, but that is actually a bad idea: the class constructor can do unexpected things, and at this point I only want to check that my config is valid and all the mentioned classes are available.
Is there any way to do it?
EDIT: I also understand that you can get all the classes from some module, but in my case I am not sure, and don't actually care, which module a class comes from. It can come from any import statement, and I probably don't know exactly where.

Using eval() leaves the door open for arbitrary code execution; for security's sake it should be avoided, especially if you are asking for a solution to a problem like this here, since then we can assume you are not fully aware of the risks.
import sys
from functools import reduce  # reduce is no longer a builtin in Python 3

def str_to_class(name):
    # Walk dotted names attribute by attribute, starting from this module.
    return reduce(getattr, name.split("."), sys.modules[__name__])

try:
    cls = str_to_class(<json-fragment-here>)
except AttributeError:
    cls = None

if cls:
    obj = cls(...)
else:
    # fight against this
    pass
This avoids using eval and has been approved by several SO users. The solution is similar to Convert string to Python class object?.

You can parse the source to get all the class names:
from ast import ClassDef, parse
import importlib
import inspect
mod = "test"
mod = importlib.import_module(mod)
p = parse(inspect.getsource(mod))
names = [kls.name for kls in p.body if isinstance(kls, ClassDef)]
Input:
class Foo(object):
    pass

class Bar(object):
    pass
Output:
['Foo', 'Bar']
Just compare the class names from the config to the names returned.
{set of names in config}.difference(names)
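For example, if the config mentions a class the module does not define, the difference is non-empty and can be turned into a validation error. A minimal sketch; config_names is a hypothetical set loaded from the JSON file:
config_names = {"Foo", "Bar", "Baz"}  # hypothetical names from the JSON config
missing = config_names.difference(names)
if missing:
    raise ValueError("Unknown classes in config: %s" % ", ".join(sorted(missing)))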
If you want to include imported names, you can record the import statements as well, but depending on how a class was imported you can still find cases that won't work:
from ast import ClassDef, parse, ImportFrom
import importlib
import inspect

mod = "test"
mod = importlib.import_module(mod)
p = parse(inspect.getsource(mod))
names = []
for node in p.body:
    if isinstance(node, ClassDef):
        names.append(node.name)
    elif isinstance(node, ImportFrom):
        names.extend(imp.name for imp in node.names)
print(names)
Input:
from test2 import Foobar, Barbar, foo
class Foo(object):
    pass

class Bar(object):
    pass
test2:
foo = 123
foo = 123

class Foobar(object):
    pass

class Barbar(object):
    pass
Output:
['Foobar', 'Barbar', 'Foo', 'Bar']
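If you need this in several places, the two loops above can be wrapped into a small helper. A sketch under the same assumptions; collect_class_names is a hypothetical name, not part of any library:
from ast import ClassDef, ImportFrom, parse
import importlib
import inspect

def collect_class_names(module_name):
    # Classes defined in the module plus names pulled in via "from ... import ...".
    mod = importlib.import_module(module_name)
    tree = parse(inspect.getsource(mod))
    names = []
    for node in tree.body:
        if isinstance(node, ClassDef):
            names.append(node.name)
        elif isinstance(node, ImportFrom):
            names.extend(imp.name for imp in node.names)
    return names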

I tried the built-in type function, which worked for me, but maybe there is a more Pythonic way to test for the existence of a class:
import types

def class_exist(className):
    # Note: types.ClassType (old-style classes) and the print statements
    # below are Python 2 only.
    result = False
    try:
        result = (eval("type(" + className + ")") == types.ClassType)
    except NameError:
        pass
    return result

# this is a test class, its only purpose is pure existence:
class X:
    pass

print class_exist('X')
print class_exist('Y')
The output is
True
False
Of course, this is a basic solution which should be used only with well-known input: the eval function can be a great back-door opener. There is a more reliable (but also compact) solution by wenzul.
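If you can live with checking only names visible in the current module (which is all the eval variant can see anyway), a lookup via sys.modules avoids eval entirely. A minimal eval-free sketch, Python 3:
import sys

def class_exists(class_name):
    # Resolve the name in this module's namespace and verify it is a class.
    obj = getattr(sys.modules[__name__], class_name, None)
    return isinstance(obj, type)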

Related

How can I redirect module imports with modern Python?

I am maintaining a Python package in which I did some restructuring. Now I want to support clients who still do from my_package.old_subpackage.foo import Foo instead of the new from my_package.new_subpackage.foo import Foo, without explicitly reintroducing many files that do the forwarding. (old_subpackage still exists, but no longer contains foo.py.)
I have learned that there are "loaders" and "finders", and my impression was that I should implement a loader for my purpose, but I only managed to implement a finder so far:
import importlib.util
import sys

RENAMED_PACKAGES = {
    'my_package.old_subpackage.foo': 'my_package.new_subpackage.foo',
}

# TODO: ideally, we would not just implement a "finder", but also a "loader"
# (using the importlib.util.module_for_loader decorator); this would enable us
# to get module contents that also pass identity checks
class RenamedFinder:
    @classmethod
    def find_spec(cls, fullname, path, target=None):
        renamed = RENAMED_PACKAGES.get(fullname)
        if renamed is not None:
            sys.stderr.write(
                f'WARNING: {fullname} was renamed to {renamed}; please adapt import accordingly!\n')
            return importlib.util.find_spec(renamed)
        return None

sys.meta_path.append(RenamedFinder())
https://docs.python.org/3.5/library/importlib.html#importlib.util.module_for_loader and related functionality, however, seem to be deprecated. I know it's not a very pythonic thing I am trying to achieve, but I would be glad to learn that it's achievable.
On import of your package's __init__.py, you can place whatever objects you want into sys.modules; the values you put there will be returned by import statements:
from . import new_package
from .new_package import module1, module2
import sys
sys.modules["my_lib.old_package"] = new_package
sys.modules["my_lib.old_package.module1"] = module1
sys.modules["my_lib.old_package.module2"] = module2
If someone now uses import my_lib.old_package or import my_lib.old_package.module1 they will obtain a reference to my_lib.new_package.module1. Since the import machinery already finds the keys in the sys.modules dictionary, it never even begins looking for the old files.
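A quick way to convince yourself that the aliasing works (hypothetical package names, matching the example above):
import my_lib.old_package.module1 as old_module1
import my_lib.new_package.module1 as new_module1
assert old_module1 is new_module1  # both names resolve to the same module object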
If you want to avoid importing all the submodules immediately, you can emulate a bit of lazy loading by placing a module with a __getattr__ in sys.modules:
from types import ModuleType
import importlib
import sys

class LazyModule(ModuleType):
    def __init__(self, name, mod_name):
        super().__init__(name)
        self.__mod_name = mod_name

    def __getattr__(self, attr):
        # Import the real module on first attribute access, then delegate.
        if "_lazy_module" not in self.__dict__:
            self._lazy_module = importlib.import_module(self.__mod_name, package="my_lib")
        return getattr(self._lazy_module, attr)

sys.modules["my_lib.old_package"] = LazyModule("my_lib.old_package", "my_lib.new_package")
In the __init__ file of the old package, have it import from the newer modules.
Old (package.oldpkg):
foo = __import__("Path to new module")
New (package.newpkg):
class foo:
    bar = "thing"
so
package.oldpkg.foo.bar is the same as package.newpkg.foo.bar
Hope this helps!
I think that this is what you are looking for:
import importlib
import importlib.util
import sys

RENAMED_PACKAGES = {
    'my_package.old_subpackage.foo': 'my_package.new_subpackage.foo',
}

class RenamedFinder:
    @classmethod
    def find_spec(cls, fullname, path, target=None):
        renamed = RENAMED_PACKAGES.get(fullname)
        if renamed is not None:
            sys.stderr.write(
                f'WARNING: {fullname} was renamed to {renamed}; please adapt import accordingly!\n')
            spec = importlib.util.find_spec(renamed)
            spec.loader = cls
            return spec
        return None

    @staticmethod
    def create_module(spec):
        return importlib.import_module(spec.name)

    @staticmethod
    def exec_module(module):
        pass

sys.meta_path.append(RenamedFinder())
Still, IMO the approach that manipulates sys.modules is preferable, as it is more readable, more explicit, and gives you much more control. It might become especially useful in future versions of your package, when my_package.new_subpackage.foo starts to diverge from my_package.old_subpackage.foo while you still need to provide the old one for backward compatibility. For that reason, you might need to preserve the code of both anyway.
Consolidate all the old package names into my_package.
Old packages (old_package):
image_processing (class): will be deleted and replaced by super_image_processing
text_recognition (class): will be deleted and replaced by better_text_recognition
foo (variable): will be moved to better_text_recognition
still_there (class): will not move
New packages:
super_image_processing
better_text_recognition
Redirector (class of my_package):
class old_package:
    image_processing = super_image_processing  # Will be replaced
    text_recognition = better_text_recognition  # Will be replaced
Your main new module (my_package):
# imports here

class super_image_processing:
    def its(gets, even, better):
        pass

class better_text_recognition:
    def now(better, than, ever):
        pass

class old_package:
    # Links
    image_processing = super_image_processing
    text_recognition = better_text_recognition
    still_there = __import__("path to unchanged module")
This allows you to delete some files and keep the rest. If you want to redirect variables you would do:
class super_image_processing:
    def its(gets, even, better):
        pass

class better_text_recognition:
    def now(better, than, ever):
        pass

class old_package:
    # Links
    image_processing = super_image_processing
    text_recognition = better_text_recognition
    foo = text_recognition.foo
    still_there = __import__("path to unchanged module")
Would this work?

AttributeError: while using monkeypatch of pytest

Contents of src/mainDir/mainFile.py:
import src.tempDir.tempFile as temp

data = 'someData'

def foo():
    ans = temp.boo(data)
    return ans
Contents of src/tempDir/tempFile.py:
def boo(data):
    ans = data
    return ans
Now I want to test foo() from src/tests/test_mainFile.py, mocking out the temp.boo(data) call inside foo():
import src.mainDir.mainFile as mainFunc

testData = 'testData'

def test_foo(monkeypatch):
    monkeypatch.setattr('src.tempDir.tempFile', 'boo', testData)
    ans = mainFunc.foo()
    assert ans == testData
but I get the error
AttributeError: 'src.tempDir.tempFile' has no attribute 'boo'
I expect ans = testData.
I would like to know whether I am correctly mocking my tempDir.boo() method, or whether I should use pytest's mocker instead of monkeypatch.
You're telling monkeypatch to patch the attribute boo of the string object you pass in.
You'll either need to pass in a module like monkeypatch.setattr(tempFile, 'boo', testData), or pass the attribute as a string too (using the two-argument form), like monkeypatch.setattr('src.tempDir.tempFile.boo', testData).
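Putting that together, a corrected version of the test might look like this. A sketch under the question's layout; note that boo also has to be replaced with a callable, not the plain string testData, because foo() calls temp.boo(data):
import src.mainDir.mainFile as mainFunc

def test_foo(monkeypatch):
    # Replace boo with a stub that ignores its argument and returns testData.
    monkeypatch.setattr('src.tempDir.tempFile.boo', lambda data: 'testData')
    assert mainFunc.foo() == 'testData'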
My use case was slightly different, but should still apply. I wanted to patch the value of sys.frozen, which is set when running an application bundled by something like PyInstaller; otherwise, the attribute does not exist. Looking through the pytest docs, the raising kwarg controls whether or not AttributeError is raised when the attribute does not already exist. (docs)
Usage Example
import sys

def test_frozen_func(monkeypatch):
    monkeypatch.setattr(sys, 'frozen', True, raising=False)
    # can use ('fq_import_path.sys.frozen', ...)
    # if what you are trying to patch is imported in another file
    assert sys.frozen
Update: mocking function calls can be done with monkeypatch.setattr('package.main.slow_fun', lambda: False) (see answer and comments in https://stackoverflow.com/a/44666743/3219667) and updated snippet below
I don't think this can be done with pytest's monkeypatch, but you can use the pytest-mock package. Docs: https://github.com/pytest-dev/pytest-mock
Quick example with the two files below:
# package/main.py
def slow_fun():
    return True

def main_fun():
    if slow_fun():
        raise RuntimeError('Slow func returned True')

# tests/test_main.py
from package.main import main_fun

# Make sure to install pytest-mock so that the mocker argument is available
def test_main_fun(mocker):
    mocker.patch('package.main.slow_fun', lambda: False)
    main_fun()

# UPDATE: Alternative with monkeypatch
def test_main_fun_monkeypatch(monkeypatch):
    monkeypatch.setattr('package.main.slow_fun', lambda: False)
    main_fun()
Note: this also works if the functions are in different files

How to patch a class in python unit test and get a handle on patched object's return value

I am testing a class's method in Python 2.7, using the Mock 2.0.0 library. Here is what the method under test looks like:
from sklearn.externals import joblib

class ClassUnderTest():
    def MethodUnderTest(self, path, myDict):
        newDict = {}
        for key, val in myDict.iteritems():
            retVal = (joblib.load(path + val))
            newDict[key] = retVal
        return newDict
Now, my intention is to test MethodUnderTest but mock joblib.load so it is not actually called. To achieve this, I use the @patch decorator available in the Mock library. My test looks as follows:
import unittest
from mock import MagicMock, patch
from sklearn.externals import joblib
import ClassUnderTest

class TestClass(unittest.TestCase):
    @patch('ClassUnderTest.joblib')
    def test_MethodUnderTest(self, mockJoblibLoad):
        dict = {"v1": "p1.pkl"}
        retVal = ClassUnderTest.MethodUnderTest("whateverpath", dict)
Now, to assert on retVal's keys and values, I need to know the mocked return value of joblib.load. If I knew that value, I would know what MethodUnderTest returns. The problem is that I don't know what the mocked value of joblib.load is when it is mocked using the @patch decorator.
Does someone know how to get around this problem? Or is there a better way to mock Python libraries like joblib and their methods like load, and get a handle on that mock object?
class TestClass(unittest.TestCase):
    @patch('path.to.module.joblib.load')  # Your path is probably wrong here
    def test_MethodUnderTest(self, mockJoblibLoad):
        # Set the side_effect if you want to return different things on
        # each iteration, e.g. mockJoblibLoad.side_effect = [...]
        mockJoblibLoad.return_value = ...
        test_dict = {"v1": "p1.pkl"}
        expect = {"v1": mockJoblibLoad.return_value}
        actual = ClassUnderTest.MethodUnderTest("whateverpath", test_dict)
        self.assertEqual(expect, actual)
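If you prefer not to use the decorator, the same test can be written with patch as a context manager. A sketch only; the target path is still a placeholder you have to adapt:
def test_MethodUnderTest(self):
    with patch('path.to.module.joblib.load') as mockJoblibLoad:
        mockJoblibLoad.return_value = 'fake-model'  # any sentinel value works
        actual = ClassUnderTest.MethodUnderTest("whateverpath", {"v1": "p1.pkl"})
        self.assertEqual({"v1": 'fake-model'}, actual)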

Vim's omnicompletion fails with "from" imports in Python

Omnicompletion for Python seems to fail when there is a "from" import instead of a normal one.
For example, if I have these two files:
Test.py:
class Test:
    def method(self):
        pass
main.py:
from Test import Test
class Test2:
    def __init__(self):
        self.x = Test()
If I try to activate omnicompletion for self.x... it says "Pattern not found".
However, if I change the import statement to:
import Test
and the self.x declaration to:
self.x = Test.Test()
then I'm able to use omnicompletion as expected (it suggests "method", for example).
I'm using Vim 7.2.245 and the default plugin for Python code completion (pythoncomplete).
Should I set some variable? Or is this behavior expected?
Update:
Based on Jared's answer, I found out something by accident:
Omnicompletion doesn't work on this:
from StringIO import StringIO

class Test:
    def __init__(self):
        self.x = StringIO()
        self.x.<C-x><C-o>

s = Test()
But works on this:
from StringIO import StringIO

class Test:
    def __init__(self):
        self.x = StringIO()
        self.x.<C-x><C-o>

s = Test()
s.x = StringIO()
The only difference is the redeclaration of x (actually, it also works if I remove the declaration inside __init__).
I tested my example again, and I think the problem is not the "from" import, but the use of the imported class inside another class.
If I change the file main.py to:
from Test import Test

class Test2:
    def __init__(self):
        self.x = Test()
        self.x.<C-x><C-o>

y = Test()
y.<C-x><C-o>
The first attempt to use omnicompletion fails, but the second works fine.
So yep, looks like a bug in the plugin :)
update: ooh, so I checked your example, and I get completion for
x = Test()
x.<C-x><C-o>
but not
o = object()
o.x = Test()
o.x.<C-x><C-o>
...I'm gonna do some digging
update 2: revenge of Dr. Strangelove
and...this is where it gets weird.
from StringIO import StringIO

class M:
    pass

s = M()
s.x = StringIO()
s.x.<C-x><C-o>
completes. But this:
from StringIO import StringIO

class M: pass

s = M()
s.x = StringIO()
s.x.<C-x><C-o>
Did you catch the difference? Nothing syntactic -- just a little whitespace.
And yet it breaks completion. So there's definitely a parsing bug in there somewhere (why they don't just use the ast module, I have no idea...)
[end of updates]
On first blush, I can't reproduce your problem; here's my test file:
from os import path
path.<C-x><C-o>
and I get completion. Now, I know it's not exactly your situation, but it shows that pythoncomplete knows about 'from'.
And now the more in-depth example:
from StringIO import StringIO
s = StringIO()
s.<C-x><C-o>
And...completion! Could you try that example to see if it works with builtin modules for you? If that's the case, you should probably check paths...
If it still doesn't work, and you're up for some digging around, check out line #555 of pythoncomplete.vim [at /usr/share/vim/vim72/autoload/pythoncomplete.vim on my Ubuntu machine]:
elif token == 'from':
    mod, token = self._parsedotname()
    if not mod or token != "import":
        print "from: syntax error..."
        continue
    names = self._parseimportlist()
    for name, alias in names:
        loc = "from %s import %s" % (mod, name)
        if len(alias) > 0: loc += " as %s" % alias
        self.scope.local(loc)
    freshscope = False
As you can see, this is where it handles from statements.
Cheers

How can I get a list of all classes within current module in Python?

I've seen plenty of examples of people extracting all of the classes from a module, usually something like:
# foo.py
class Foo:
    pass

# test.py
import inspect
import foo

for name, obj in inspect.getmembers(foo):
    if inspect.isclass(obj):
        print obj
Awesome.
But I can't find out how to get all of the classes from the current module.
# foo.py
import inspect

class Foo:
    pass

def print_classes():
    for name, obj in inspect.getmembers(???):  # what do I do here?
        if inspect.isclass(obj):
            print obj
# test.py
import foo
foo.print_classes()
This is probably something really obvious, but I haven't been able to find anything. Can anyone help me out?
Try this:
import sys
current_module = sys.modules[__name__]
In your context:
import sys, inspect

def print_classes():
    for name, obj in inspect.getmembers(sys.modules[__name__]):
        if inspect.isclass(obj):
            print(obj)
And even better:
clsmembers = inspect.getmembers(sys.modules[__name__], inspect.isclass)
Because inspect.getmembers() takes a predicate.
I don't know if there's a 'proper' way to do it, but your snippet is on the right track: just add import foo to foo.py, do inspect.getmembers(foo), and it should work fine.
What about
g = globals().copy()
for name, obj in g.iteritems():
?
I was able to get all I needed from the dir builtin plus getattr.
# Works on pretty much everything, but be mindful that
# you get lists of strings back
print dir(myproject)
print dir(myproject.mymodule)
print dir(myproject.mymodule.myfile)
print dir(myproject.mymodule.myfile.myclass)
# But, the string names can be resolved with getattr, (as seen below)
Though, it does come out looking like a hairball:
def list_supported_platforms():
    """
    List supported platforms (to match sys.platform)

    Returns:
        list str: platform names
    """
    return list(itertools.chain(
        *list(
            # Get the class's constant
            getattr(
                # Get the module's first class, which we wrote
                getattr(
                    # Get the module
                    getattr(platforms, item),
                    dir(
                        getattr(platforms, item)
                    )[0]
                ),
                'SYS_PLATFORMS'
            )
            # For each include in platforms/__init__.py
            for item in dir(platforms)
            # Ignore magic, ourselves (index.py) and a base class.
            if not item.startswith('__') and item not in ['index', 'base']
        )
    ))
import pyclbr
print(pyclbr.readmodule(__name__).keys())
Note that the stdlib's Python class browser module uses static source analysis, so it only works for modules that are backed by a real .py file.
If you want all the classes that belong to the current module, you could use this:
import sys, inspect

def print_classes():
    is_class_member = lambda member: inspect.isclass(member) and member.__module__ == __name__
    clsmembers = inspect.getmembers(sys.modules[__name__], is_class_member)
If you use Nadia's answer and you are importing other classes into your module, those classes will be included too. That is why member.__module__ == __name__ is added to the predicate used in is_class_member: it checks that the class really belongs to the module.
A predicate is a function (callable) that returns a boolean value.
This is the line that I use to get all of the classes that have been defined in the current module (i.e. not imported). It's a little long according to PEP 8, but you can change it as you see fit.
import sys
import inspect

classes = [name for name, obj in inspect.getmembers(sys.modules[__name__], inspect.isclass)
           if obj.__module__ == __name__]
This gives you a list of the class names. If you want the class objects themselves, just keep obj instead.
classes = [obj for name, obj in inspect.getmembers(sys.modules[__name__], inspect.isclass)
           if obj.__module__ == __name__]
This has been more useful in my experience.
Another solution which works in Python 2 and 3:
# foo.py
import sys

class Foo(object):
    pass

def print_classes():
    current_module = sys.modules[__name__]
    for key in dir(current_module):
        if isinstance(getattr(current_module, key), type):
            print(key)

# test.py
import foo
foo.print_classes()
I think that you can do something like this.
class custom(object):
    __custom__ = True

class Alpha(custom):
    something = 3

def GetClasses():
    return [x for x in globals() if hasattr(globals()[str(x)], '__custom__')]

print(GetClasses())
if you need your own classes.
I frequently find myself writing command line utilities wherein the first argument is meant to refer to one of many different classes. For example ./something.py feature command --arguments, where Feature is a class and command is a method on that class. Here's a base class that makes this easy.
The assumption is that this base class resides in a directory alongside all of its subclasses. You can then call ArgBaseClass(foo = bar).load_subclasses() which will return a dictionary. For example, if the directory looks like this:
arg_base_class.py
feature.py
Assuming feature.py implements class Feature(ArgBaseClass), then the above invocation of load_subclasses will return { 'feature' : <Feature object> }. The same kwargs (foo = bar) will be passed into the Feature class.
#!/usr/bin/env python3
import os, pkgutil, importlib, inspect

class ArgBaseClass():
    # Assign all keyword arguments as properties on self, and keep the kwargs for later.
    def __init__(self, **kwargs):
        self._kwargs = kwargs
        for (k, v) in kwargs.items():
            setattr(self, k, v)
        ms = inspect.getmembers(self, predicate=inspect.ismethod)
        self.methods = dict([(n, m) for (n, m) in ms if not n.startswith('_')])

    # Add the names of the methods to a parser object.
    def _parse_arguments(self, parser):
        parser.add_argument('method', choices=list(self.methods))
        return parser

    # Instantiate one of each of the subclasses of this class.
    def load_subclasses(self):
        module_dir = os.path.dirname(__file__)
        module_name = os.path.basename(os.path.normpath(module_dir))
        parent_class = self.__class__
        modules = {}
        # Load all the modules in the package:
        for (module_loader, name, ispkg) in pkgutil.iter_modules([module_dir]):
            modules[name] = importlib.import_module('.' + name, module_name)
        # Instantiate one of each class, passing the keyword arguments.
        ret = {}
        for cls in parent_class.__subclasses__():
            path = cls.__module__.split('.')
            ret[path[-1]] = cls(**self._kwargs)
        return ret
import Foo
dir(Foo)
import collections
dir(collections)
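Since dir() only returns strings, you can combine it with getattr and a class check to keep just the classes. A minimal sketch using the stdlib collections module:
import collections
import inspect

# Resolve each name and keep only the ones that are classes.
classes = [getattr(collections, name) for name in dir(collections)
           if inspect.isclass(getattr(collections, name))]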
The following can be placed at the top of the file:
def get_classes():
    import inspect, sys
    return dict(inspect.getmembers(
        sys.modules[__name__],
        lambda member: inspect.isclass(member) and member.__module__ == __name__
    ))
Note, this can be placed at the top of the module because we've wrapped the logic in a function definition. If you want the dictionary to exist as a top-level object you will need to place the definition at the bottom of the file to ensure all classes are included.
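A hypothetical illustration of that caveat: get_classes only sees the classes that exist at the moment it is called.
class Early(object):
    pass

print(sorted(get_classes()))  # ['Early'] -- Late does not exist yet

class Late(object):
    pass

print(sorted(get_classes()))  # ['Early', 'Late']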
Go to the Python interpreter, type help('module_name'), then press Enter, e.g. help('os'). Here, I've pasted one part of the output below:
class statvfs_result(__builtin__.object)
| statvfs_result: Result from statvfs or fstatvfs.
|
| This object may be accessed either as a tuple of
| (bsize, frsize, blocks, bfree, bavail, files, ffree, favail, flag, namemax),
| or via the attributes f_bsize, f_frsize, f_blocks, f_bfree, and so on.
|
| See os.statvfs for more information.
|
| Methods defined here:
|
| __add__(...)
| x.__add__(y) <==> x+y
|
| __contains__(...)
| x.__contains__(y) <==> y in x
