Instantiate a Python class from a name [duplicate] - python

This question already has answers here:
Python: Reference to a class from a string?
(4 answers)
Closed 7 years ago.
So I have a set of classes and a string containing one of the class names. How do I instantiate a class based on that string?
class foo:
    def __init__(self, left, right):
        self.left = left
        self.right = right

str = "foo"
x = Init(str, A, B)
I want x to be an instantiation of class foo.

In your case you can use something like:
get_class = lambda x: globals()[x]
c = get_class("foo")
And it's even easier to get the class from the module:
import somemodule
getattr(somemodule, "SomeClass")
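Putting the two together, a minimal sketch (the module name shapes and class name Circle are made up for illustration):
import shapes  # hypothetical module that defines a Circle class

def make_instance(class_name, *args, **kwargs):
    cls = getattr(shapes, class_name)  # look the class object up by name
    return cls(*args, **kwargs)        # call it like any other class

c = make_instance("Circle", radius=2.0)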

If you know the namespace involved, you can use it directly -- for example, if all classes are in module zap, the dictionary vars(zap) is that namespace; if they're all in the current module, globals() is probably the handiest way to get that dictionary.
If the classes are not all in the same namespace, then building an "artificial" namespace (a dedicated dict with class names as keys and class objects as values), as @Ignacio suggests, is probably the simplest approach.
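A hedged sketch of the vars(zap) route (the module name zap comes from this answer; the class name and constructor arguments are placeholders):
import zap                        # module that defines the classes

namespace = vars(zap)             # dict mapping names defined in zap to their objects
x = namespace["SomeClass"](1, 2)  # look the class up by name and instantiate it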

classdict = {'foo': foo}
x = classdict['foo'](A, B)

classname = "Foo"
foo = vars()[classname](Bar, 0, 4)
Or perhaps
def mkinst(cls, *args, **kwargs):
    try:
        return globals()[cls](*args, **kwargs)
    except KeyError:
        raise NameError("Class %s is not defined" % cls)

x = mkinst("Foo", bar, 0, 4, disc="bust")
y = mkinst("Bar", foo, batman="robin")
Miscellaneous notes on the snippet:
*args and **kwargs are special parameters in Python: *args collects the extra positional arguments into a tuple, and **kwargs collects the extra keyword arguments into a dict.
PEP 8 (the official Python style guide) reserves cls as the conventional name for a class object (it is the recommended name for the first argument of class methods); here it simply holds the class name string.
vars() with no argument returns the dict of variables defined in the local scope (at module level this is the same dict as globals()).
globals() returns the dict representing the module-level (global) namespace of the calling code.
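A quick illustration of that packing behaviour:
def show(*args, **kwargs):
    # args arrives as a tuple of positional arguments,
    # kwargs as a dict of keyword arguments
    print(args, kwargs)

show(1, 2, three=3)  # prints: (1, 2) {'three': 3}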

Try this:
mod = __import__('module_name')  # __import__ imports a module (not a class); then use getattr(mod, 'ClassName')
This article may also be helpful: http://effbot.org/zone/import-confusion.htm
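A sketch of the same idea with importlib, which is the more explicit modern spelling (the module and class names here are placeholders):
import importlib

module = importlib.import_module("somemodule")  # like `import somemodule`
cls = getattr(module, "SomeClass")              # fetch the class object by name
instance = cls()                                # and instantiate it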

You might also consider creating the class dynamically with type() (the default metaclass):
Cls = type('foo', (), foo.__dict__)
x = Cls(A, B)
Note that this creates a new, similar class rather than handing you back the original foo.

Related

How to call a function within an attribute? [duplicate]

This question already has answers here:
How to access (get or set) object attribute given string corresponding to name of that attribute
(3 answers)
Closed 3 years ago.
I have a Python class that has attributes named: date1, date2, date3, etc.
During runtime, I have a variable i, which is an integer.
What I want to do is to access the appropriate date attribute at run time, based on the value of i.
For example,
if i == 1, I want to access myobject.date1
if i == 2, I want to access myobject.date2
And I want to do something similar for class instead of attribute.
For example, I have a bunch of classes: MyClass1, MyClass2, MyClass3, etc. And I have a variable k.
if k == 1, I want to instantiate a new instance of MyClass1
if k == 2, I want to instantiate a new instance of MyClass2
How can I do that?
EDIT
I'm hoping to avoid using a giant if-then-else statement to select the appropriate attribute/class.
Is there a way in Python to compose the class name on the fly using the value of a variable?
You can use getattr() to access an attribute when you don't know its name until runtime:
obj = myobject()
i = 7
date7 = getattr(obj, 'date%d' % i) # same as obj.date7
If you keep your numbered classes in a module called foo, you can use getattr() again to access them by number.
foo.py:
class Class1: pass
class Class2: pass
[ etc ]
bar.py:
import foo
i = 3
someClass = getattr(foo, "Class%d" % i) # Same as someClass = foo.Class3
obj = someClass() # someClass is a pointer to foo.Class3
# short version:
obj = getattr(foo, "Class%d" % i)()
Having said all that, you really should avoid this sort of thing because you will never be able to find out where these numbered properties and classes are being used except by reading through your entire codebase. You are better off putting everything in a dictionary.
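For example, a minimal sketch of the dictionary approach (all names here are illustrative):
class MyObject:
    def __init__(self):
        # keep the dates in a dict keyed by number instead of numbered attributes
        self.dates = {1: "2001-01-01", 2: "2002-02-02"}

obj = MyObject()
i = 2
print(obj.dates[i])          # no getattr needed

# likewise, map numbers to classes explicitly
class MyClass1: pass
class MyClass2: pass

classes = {1: MyClass1, 2: MyClass2}
k = 1
instance = classes[k]()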
For the first case, you should be able to do:
getattr(myobject, 'date%s' % i)
For the second case, you can do:
myobject = locals()['MyClass%s' % k]()
However, the fact that you need to do this in the first place can be a sign that you're approaching the problem in a very non-Pythonic way.
OK, well... it seems like this needs a bit of work. Firstly, the date* values should probably be stored as a dict, e.g. myobj.dates[1], and so on.
For the classes, it sounds like you want polymorphism. All of your MyClass* classes should have a common ancestor. The ancestor's __new__ method should figure out which of its children to instantiate.
One way for the parent to know what to make is to keep a dict of the children. There are ways for the parent class to avoid enumerating its children by hand, by searching for all of its subclasses instead, but that is a bit more complex to implement (a sketch of that variant follows the example below). See here for more info on how you might take that approach; read the comments especially, they expand on it.
class Parent(object):
    _children = {}  # filled in below, once the subclasses exist

    def __new__(cls, k):
        # pick the concrete subclass based on k
        return object.__new__(cls._children[k])

class MyClass1(Parent):
    def __init__(self, k):
        self.foo = 1

class MyClass2(Parent):
    def __init__(self, k):
        self.foo = 2

Parent._children = {1: MyClass1, 2: MyClass2}

bar = Parent(1)
print(bar.foo)  # 1
baz = Parent(2)
print(baz.foo)  # 2
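A hedged sketch of the subclass-search variant mentioned above, using __subclasses__() and a class-level key attribute (the key attribute is an invention for this example):
class Parent(object):
    def __new__(cls, k):
        # search the direct subclasses instead of keeping a registry by hand
        for child in cls.__subclasses__():
            if child.key == k:
                return object.__new__(child)
        raise ValueError("no subclass with key %r" % k)

class MyClass1(Parent):
    key = 1
    def __init__(self, k):
        self.foo = 1

class MyClass2(Parent):
    key = 2
    def __init__(self, k):
        self.foo = 2

print(Parent(2).foo)  # 2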
Thirdly, you really should rethink your variable naming. Don't use numbers to enumerate your variables; give them meaningful names instead. i and k are bad choices, as by convention they are reserved for loop indexes.
A sample of your existing code would be very helpful in improving it.
To get a list of all the attributes, try:
dir(<class instance>)
I agree with Daenyth, but if you're feeling sassy you can use the __dict__ attribute that comes with all classes:
>>> class nullclass(object):
...     def nullmethod():
...         pass
...
>>> nullclass.__dict__.keys()
['__dict__', '__module__', '__weakref__', 'nullmethod', '__doc__']
>>> nullclass.__dict__["nullmethod"]
<function nullmethod at 0x013366A8>

Why does __dict__ not contain class members unless direct initialization is used?

I need to access the list of class members in class methods, specifically in the __init__() function, with the intention of initializing them en masse. I tried to use __dict__ and vars(self), but unfortunately they return an empty dict object unless I use direct initialization such as self.y = 5. The questions are: why is it empty, how can I initialize the members at once, and is __dict__ suitable for bulk initialization at all?
Thank you.
Sample code:
class P:
    def __init__(self):
        print("inside __init__() :", self.y, self.__dict__)
    x = 8
    y = 9

p = P()
print(" y is: ", p.y)
print("and __dict__ is:", p.__dict__)
output:
inside __init__() : 9 {}
y is: 9
and __dict__ is: {}
Python version: 3.8.5
Tested operating systems: Windows 10, macOS 10.15.7, and Linux CentOS 7
Python class instances are dynamic - they don't have variables until you put them there (see note below). And they don't have any pre-knowledge of what those variables should be. Usually that's done by adding them one by one (e.g., self.foo = 1) in __init__ or other methods in the class. But if you have a different source of variables, you can do it by adding to self.__dict__. Suppose I have a row from a CSV file and I want a class that is initialized by that row. I could do
class P:
    cell_names = ('a', 'b', 'c')

    def __init__(self, row):
        print("before", self.__dict__)
        self.__dict__.update(zip(self.cell_names, row))
        print("after", self.__dict__)

p = P((1, 2, 3))
print("attributes", p.a, p.b, p.c)
Outputs
before {}
after {'a': 1, 'b': 2, 'c': 3}
attributes 1 2 3
There are other ways to initialize, of course. It depends on what the source of your "en masse" data is.
If you want to add all of the keyword arguments, you could do:
class P1:
    def __init__(self, **kw):
        self.__dict__.update(kw)

p = P1(a=1, b=3, c=3)
print("P1", p.a, p.b, p.c)
(Note: You can use __slots__ to predefine attributes in a class; instances of such a class bypass the per-instance __dict__.)
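A minimal sketch of that __slots__ behaviour:
class Point:
    __slots__ = ("x", "y")   # predeclare the only allowed attributes

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
# p.__dict__   # AttributeError: instances of Point have no __dict__ at all
# p.z = 3      # AttributeError: 'Point' object has no attribute 'z'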
__dict__ only holds instance attributes. x and y in your examples are class attributes. self.x is an expression that could mean many different things; the actual result depends on which lookup succeeds first.
When you make an assignment like self.x = 5, then the value 5 is associated with the key x in self.__dict__.
When you try to get the value of self.x, the first thing that is tried is self.__dict__['x']. If that fails, though, it tries P.x. If that failed, it would check for x as an attribute in each ancestor of P in the MRO, until a value is found. If nothing is found, an AttributeError is raised. (This ignores the use of __getattr__ or __getattribute__, but is sufficient to explain how self.x can provide a value when self.__dict__ is empty.)
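A short sketch of that lookup order:
class P:
    x = 8            # class attribute

p = P()
print(p.x)           # 8: nothing in p.__dict__, so the class attribute is found
print(p.__dict__)    # {}

p.x = 5              # assignment creates an instance attribute that shadows it
print(p.x)           # 5
print(p.__dict__)    # {'x': 5}
print(P.x)           # 8: the class attribute is unchanged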

Behavior of setattr vs self.<attr> = value [duplicate]

I'm trying to set up a simple test example of setattr() in Python, but it fails to assign a new value to the member.
class Foo(object):
    __bar = 0

    def modify_bar(self):
        print(self.__bar)
        setattr(self, "__bar", 1)
        print(self.__bar)
Here I tried the assignment with setattr(self, "__bar", 1), but it was unsuccessful:
>>> foo = Foo()
>>> foo.modify_bar()
0
0
Can someone explain what is happening under the hood? I'm new to Python, so please forgive the elementary question.
A leading double underscore invokes Python name mangling.
So:
class Foo(object):
    __bar = 0  # actually `_Foo__bar`

    def modify_bar(self):
        print(self.__bar)          # actually self._Foo__bar
        setattr(self, "__bar", 1)  # sets a literal "__bar" attribute, unmangled
        print(self.__bar)          # actually self._Foo__bar
Name mangling only applies to identifiers, not strings, which is why the __bar in the setattr function call is unaffected.
class Foo(object):
    _bar = 0

    def modify_bar(self):
        print(self._bar)
        setattr(self, "_bar", 1)
        print(self._bar)
should work as expected.
Leading double underscores are not used very frequently in most Python code, since their use is typically discouraged. There are a few valid use cases (mainly avoiding name clashes when subclassing), but those are rare enough that name mangling is generally avoided in the wild.
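To confirm the explanation above, a small sketch that passes the already-mangled name to setattr:
class Foo(object):
    __bar = 0                          # stored on the class as _Foo__bar

    def modify_bar(self):
        setattr(self, "_Foo__bar", 1)  # spell out the mangled name by hand
        print(self.__bar)              # 1: self._Foo__bar now finds the instance attribute

Foo().modify_bar()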

How to pass variables in and out of functions in Python [duplicate]

This question already has answers here:
Alternatives for returning multiple values from a Python function [closed]
(14 answers)
Closed 4 years ago.
When I write functions in Python, I typically need to pass quite a few variables to the function, and the output often contains more than a few variables as well. To manage this variable I/O, I resort to the dictionary datatype: I pack all input variables into a dictionary to inject into the function, then compile another dictionary at the end of the function to return to the main program. This, of course, requires another unpacking of the output dictionary.
dict_in = {'var1': var1,
           'var2': var2,
           'varn': varn}

def foo(dict_in):
    var1 = dict_in['var1']
    var2 = dict_in['var2']
    varn = dict_in['varn']
    """ my code """
    dict_out = {'op1': op1,
                'op2': op2,
                'op_m': op_m}
    return dict_out
As the list of variables grows, I suspect that this will be an inefficient approach to deal with the variables I/O.
Can someone suggest a better, more efficient and less error-prone approach to this practice?
You could take advantage of kwargs to unpack named variables:
def foo(**kwargs):
    kwargs['var1'] = do_something(kwargs['var1'])
    ...
    return kwargs
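Calling such a function is then just a matter of passing keyword arguments, or unpacking an existing dict with ** (a self-contained sketch, with do_something standing in for real work):
def do_something(x):
    return x * 2                  # placeholder for real work

def foo(**kwargs):
    kwargs['var1'] = do_something(kwargs['var1'])
    return kwargs

print(foo(var1=1, var2=2))        # {'var1': 2, 'var2': 2}
inputs = {'var1': 10, 'var2': 20}
print(foo(**inputs))              # {'var1': 20, 'var2': 20}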
If you find yourself writing a lot of functions that act on the same data, one better way would be using classes to contain your data.
class Thing:
    def __init__(self, a, b, c):
        self.var_1 = a
        self.var_2 = b
        self.var_3 = c

    # you can then define methods on it
    def foo(self):
        self.var_1 *= self.var_2

# and use it
t = Thing(1, 2, 3)
t.foo()
print(t.var_1)  # 2
There are a number of ways to create such classes more easily. Some of them include:
attrs:
>>> import attr
>>> @attr.s
... class SomeClass(object):
...     a_number = attr.ib(default=42)
...     list_of_numbers = attr.ib(default=attr.Factory(list))
...
...     def hard_math(self, another_number):
...         return self.a_number + sum(self.list_of_numbers) * another_number
namedtuples:
>>> from collections import namedtuple
>>> Point = namedtuple('Point', ['x', 'y'])
>>> p = Point(11, y=22)  # instantiate with positional or keyword arguments
>>> p.x + p.y            # fields accessible by name
33
Dataclasses
These were not yet in Python when this was written, but were added in 3.7 (PEP 557). I am adding them here because they will likely be the tool of choice in the future.
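For reference, a minimal sketch of the dataclass equivalent of the attrs example above (requires Python 3.7+):
from dataclasses import dataclass, field

@dataclass
class SomeClass:
    a_number: int = 42
    list_of_numbers: list = field(default_factory=list)

    def hard_math(self, another_number):
        return self.a_number + sum(self.list_of_numbers) * another_number

s = SomeClass(a_number=2, list_of_numbers=[1, 2, 3])
print(s.hard_math(10))   # 2 + 6 * 10 = 62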

Is there a Python 'shortcut' to define a class variable equal to a string version of its own name?

This is a bit of a silly thing, but I want to know if there is a concise way in Python to define class variables that contain string representations of their own names. For example, one can define:
class foo(object):
    bar = 'bar'
    baz = 'baz'
    baf = 'baf'
Probably a more concise way to write it in terms of lines consumed is:
class foo(object):
    bar, baz, baf = 'bar', 'baz', 'baf'
Even there, though, I still have to type each identifier twice, once on each side of the assignment, and the opportunity for typos is rife.
What I want is something like what sympy provides in its var method:
sympy.var('a,b,c')
The above injects into the namespace the variables a, b, and c, defined as the corresponding sympy symbolic variables.
Is there something comparable that would do this for plain strings?
class foo(object):
    [nifty thing]('bar', 'baz', 'baf')
EDIT: To note, I want to be able to access these as separate identifiers in code that uses foo:
>>> f = foo(); print(f.bar)
bar
ADDENDUM: Given the interest in the question, I thought I'd provide more context on why I want to do this. I have two use-cases at present: (1) typecodes for a set of custom exceptions (each Exception subclass has a distinct typecode set); and (2) lightweight enum. My desired feature set is:
Only having to type the typecode / enum name (or value) once in the source definition. class foo(object): bar = 'bar' works fine but means I have to type it out twice in-source, which gets annoying for longer names and exposes a typo risk.
Valid typecodes / enum values exposed for IDE autocomplete.
Values stored internally as comprehensible strings:
For the Exception subclasses, I want to be able to define myError.__str__ as just something like return self.typecode + ": " + self.message + " (" + self.source + ")", without having to do a whole lot of dict-fu to back-reference an int value of self.typecode to a comprehensible and meaningful string.
For the enums, I want to just be able to obtain widget as output from e = myEnum.widget; print(e), again without a lot of dict-fu.
I recognize this will increase overhead. My application is not speed-sensitive (GUI-based tool for driving a separate program), so I don't think this will matter at all.
Straightforward membership testing, by also including (say) a frozenset containing all of the typecodes / enum string values as myError.typecodes/myEnum.E classes. This addresses potential problems from accidental (or intentional.. but why?!) use of an invalid typecode / enum string via simple sanity checks like if not enumVal in myEnum.E: raise(ValueError('Invalid enum value: ' + str(enumVal))).
Ability to import individual enum / exception subclasses via, say, from errmodule import squirrelerror, to avoid cluttering the namespace of the usage environment with non-relevant exception subclasses. I believe this prohibits any solutions requiring post-twiddling on the module level like what Sinux proposed.
For the enum use case, I would rather avoid introducing an additional package dependency since I don't (think I) care about any extra functionality available in the official enum class. In any event, it still wouldn't resolve #1.
I've already figured out an implementation I'm satisfied with for all of the above but #1. My interest in a solution to #1 (without breaking the others) is partly a desire to typo-proof entry of the typecode / enum values into source, and partly plain ol' laziness. (Says the guy who just typed up a gigantic SO question on the topic.)
I recommend using collections.namedtuple:
Example:
>>> from collections import namedtuple as nifty_thing
>>> Data = nifty_thing("Data", ["foo", "bar", "baz"])
>>> data = Data(foo=1, bar=2, baz=3)
>>> data.foo
1
>>> data.bar
2
>>> data.baz
3
Side note: if you are on Python 3.x I'd recommend Enum, as per @user2357112's comment. This is the standardized approach going forward for Python 3+.
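For example, a minimal sketch using the standard-library Enum (its functional API only requires typing each name once; the member names here are made up):
from enum import Enum

Typecode = Enum('Typecode', ['bar', 'baz', 'baf'])

print(Typecode.bar.name)               # 'bar'
print('baz' in Typecode.__members__)   # membership test by name: True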
Update: Okay so if I understand the OP's exact requirement(s) here I think the only way to do this (and presumably sympy does this too) is to inject the names/variables into the globals() or locals() namespaces. Example:
#!/usr/bin/env python

def nifty_thing(*names):
    d = globals()
    for name in names:
        d[name] = None

nifty_thing("foo", "bar", "baz")
print(foo, bar, baz)
Output:
$ python foo.py
None None None
NB: I don't really recommend this! :)
Update #2: The other example you showed in your question is implemented like this:
#!/usr/bin/env python
import sys

def nifty_thing(*names):
    frame = sys._getframe(1)
    caller_locals = frame.f_locals
    for name in names:
        caller_locals[name] = None

class foo(object):
    nifty_thing("foo", "bar", "baz")

f = foo()
print(f.foo, f.bar, f.baz)
Output:
$ python foo.py
None None None
NB: This is inspired by zope.interface.implements().
current_list = ['bar', 'baz', 'baf']

class foo(object):
    """to be added"""

for i in current_list:
    setattr(foo, i, i)
Then run this:
>>> f = foo()
>>> print(f.bar)
bar
>>> print(f.baz)
baz
This doesn't work exactly like what you asked for, but it seems like it should do the job:
class AutoNamespace(object):
    def __init__(self, names):
        try:
            # Support space-separated name strings
            names = names.split()
        except AttributeError:
            pass
        for name in names:
            setattr(self, name, name)
Demo:
>>> x = AutoNamespace('a b c')
>>> x.a
'a'
If you want to do what SymPy does with var, you can, but I would strongly recommend against it. That said, here's a function based on the source code of sympy.var:
def var(names):
    from inspect import currentframe
    frame = currentframe().f_back
    try:
        names = names.split()
    except AttributeError:
        pass
    for name in names:
        frame.f_globals[name] = name
Demo:
>>> var('foo bar baz')
>>> bar
'bar'
It'll always create global variables, even if you call it from inside a function or class. inspect is used to get at the caller's globals, whereas globals() would get var's own globals.
How about defining __getitem__ so that looking up any name on an instance simply returns that name as a string:
class foo(object):
    def __getitem__(self, item):
        return item

foo = foo()
print(foo['test'])  # prints: test
Here's an extension of bman's idea. This has its advantages and disadvantages, but at least it does work with some autocompleters.
class FooMeta(type):
    def __getattr__(self, attr):
        return attr

    def __dir__(self):
        return ['bar', 'baz', 'baf']

class foo:
    __metaclass__ = FooMeta  # Python 2 syntax; in Python 3 use `class foo(metaclass=FooMeta):`
This allows access like foo.xxx → 'xxx' for all xxx, but also guides autocomplete through __dir__.
Figured out what I was looking for:
>>> class tester:
...     E = frozenset(['this', 'that', 'the', 'other'])
...     for s in E:
...         exec(str(s) + "='" + str(s) + "'")  # <--- THIS
...
>>> tester()
<__main__.tester instance at 0x03018BE8>
>>> t = tester()
>>> t.this
'this'
>>> t.that in tester.E
True
Only have to define the element strings once, and I'm pretty sure it will work for all of my requirements listed in the question. In actual implementation, I plan to encapsulate the str(s) + "='" + str(s) + "'" in a helper function, so that I can just call exec(helper(s)) in the for loop. (I'm pretty sure that the exec has to be placed in the body of the class, not in the helper function, or else the new variables would be injected into the (transitory) scope of the helper function, not that of the class.)
EDIT: Upon detailed testing, this DOES NOT WORK -- the use of exec prevents the introspection of the IDE from knowing of the existence of the created variables.
I think you could achieve a rather elegant solution using metaclasses, but I'm not fluent enough with them to present that as an answer. I do have an option that seems to work rather nicely, though:
def new_enum(name, *class_members):
    """Builds a class <name> with <class_members> having the name as value."""
    return type(name, (object,), {val: val for val in class_members})

Foo = new_enum('Foo', 'bar', 'baz', 'baf')
This should recreate the class you've given as an example, and if you want you can change the inheritance by changing the second parameter of the call to type(name, bases, dict).
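Usage of the new_enum helper above then looks like this:
print(Foo.bar)           # prints: bar (the attribute value is the name itself)
print(Foo.baz == 'baz')  # True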
