Using Python eval to evaluate classes

How can I use eval to make class definitions inside another class's method?
evalstr = str("class MyScreen(Screen):\n\tpass\n")
eval(evalstr)
I want to execute this code in another class's method, but it raises an error.

Use the type function instead:
MyScreen = type("MyScreen", (Screen,), {})
This is the correct way to create a class at run-time (and in fact is essentially what executing a class statement does, since type is also the default metaclass in Python).
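For illustration, a minimal, self-contained sketch of that (Screen below is just a stand-in for whatever base class you actually use, and the attributes are invented): the third argument to type is the class namespace, so methods and attributes go in the same way a class body would put them there.
class Screen(object):          # stand-in base class, for illustration only
    pass

# Roughly equivalent to:
#   class MyScreen(Screen):
#       name = "main"
#       def describe(self): ...
MyScreen = type("MyScreen", (Screen,), {
    "name": "main",
    "describe": lambda self: "MyScreen(name=%s)" % self.name,
})

print(MyScreen().describe())   # MyScreen(name=main)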
You can also simply define the class the "normal" way; there's nothing that says a class statement must be at the global level of a module:
class Something(object):
    # Define a new class here...
    class MyScreen(Screen):
        pass

    def __init__(self):
        """Initialize a Something object"""
        # ... or here
        class MyScreen(Screen):
            pass

You want exec not eval here (security concerns aside)
evalstr = str("class MyScreen(Screen):\n\tpass\n")
exec(evalstr)
eval will only evaluate an expression and return its value. exec is used for executing arbitrary code strings as statements, which is what you need here.
But seriously this is a security disaster waiting to happen in most cases, consider alternatives (like writing the code not as a string).
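If you do go the exec route, here is a minimal sketch (Screen again being a stand-in base class) of how to actually get hold of the new class: run the string in an explicit namespace and pull the name out of it.
class Screen(object):          # stand-in base class, for illustration only
    pass

evalstr = "class MyScreen(Screen):\n\tpass\n"
namespace = {"Screen": Screen}
exec(evalstr, namespace)       # the class statement binds MyScreen in namespace

MyScreen = namespace["MyScreen"]
print(MyScreen, issubclass(MyScreen, Screen))   # the freshly created class, True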
EDIT:
Chepner's answer is probably the right way to go.

You can't use eval for that because class is a statement, and eval only evaluates expressions, not statements. You could use exec.


Invisible argument python [duplicate]

This question already has answers here:
What is the purpose of the `self` parameter? Why is it needed?
When defining a method on a class in Python, it looks something like this:
class MyClass(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
But in some other languages, such as C#, you have a reference to the object that the method is bound to with the "this" keyword without declaring it as an argument in the method prototype.
Was this an intentional language design decision in Python or are there some implementation details that require the passing of "self" as an argument?
I like to quote Tim Peters' Zen of Python: "Explicit is better than implicit."
In Java and C++, 'this.' can be deduced, except when you have variable names that make it impossible to deduce. So you sometimes need it and sometimes don't.
Python elects to make things like this explicit rather than based on a rule.
Additionally, since nothing is implied or assumed, parts of the implementation are exposed. self.__class__, self.__dict__ and other "internal" structures are available in an obvious way.
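A tiny illustration of that point (the class here is made up):
class Counter(object):
    def __init__(self):
        self.count = 0

    def describe(self):
        # the instance's class and attribute dictionary are ordinary attributes
        return self.__class__.__name__, self.__dict__

print(Counter().describe())   # ('Counter', {'count': 0})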
It's to minimize the difference between methods and functions. It allows you to easily generate methods in metaclasses, or add methods at runtime to pre-existing classes.
e.g.
>>> class C:
...     def foo(self):
...         print("Hi!")
...
>>>
>>> def bar(self):
...     print("Bork bork bork!")
...
>>>
>>> c = C()
>>> C.bar = bar
>>> c.bar()
Bork bork bork!
>>> c.foo()
Hi!
>>>
It also (as far as I know) makes the implementation of the python runtime easier.
I suggest that one should read Guido van Rossum's blog on this topic - Why explicit self has to stay.
When a method definition is decorated, we don't know whether to automatically give it a 'self' parameter or not: the decorator could turn the function into a static method (which has no 'self'), or a class method (which has a funny kind of self that refers to a class instead of an instance), or it could do something completely different (it's trivial to write a decorator that implements '@classmethod' or '@staticmethod' in pure Python). There's no way without knowing what the decorator does whether to endow the method being defined with an implicit 'self' argument or not.
I reject hacks like special-casing '@classmethod' and '@staticmethod'.
Python doesn't force you on using "self". You can give it whatever name you want. You just have to remember that the first argument in a method definition header is a reference to the object.
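For example (purely illustrative; self remains the strong convention):
class Greeter(object):
    def greet(this, name):             # 'this' works, but 'self' is the convention
        return "Hello, %s!" % name

print(Greeter().greet("world"))        # Hello, world!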
It also allows you to do this (in short, invoking Outer(3).create_inner_class(4)().weird_sum_with_closure_scope(5) will return 12, but will do so in the craziest of ways):
class Outer(object):
    def __init__(self, outer_num):
        self.outer_num = outer_num

    def create_inner_class(outer_self, inner_arg):
        class Inner(object):
            # inner_arg is read below through the closure over
            # create_inner_class's scope, not through the class namespace
            def weird_sum_with_closure_scope(inner_self, num):
                return num + outer_self.outer_num + inner_arg
        return Inner
Of course, this is harder to imagine in languages like Java and C#. By making the self reference explicit, you're free to refer to any object by that self reference. Also, such a way of playing with classes at runtime is harder to do in the more static languages - not that it's necessarily good or bad. It's just that the explicit self allows all this craziness to exist.
Moreover, imagine this: We'd like to customize the behavior of methods (for profiling, or some crazy black magic). This can lead us to think: what if we had a class Method whose behavior we could override or control?
Well here it is:
from functools import partial

class MagicMethod(object):
    """Does black magic when called"""
    def __get__(self, obj, obj_type):
        # This binds the <other> class instance to the <innocent_self> parameter
        # of the method MagicMethod.invoke
        return partial(self.invoke, obj)

    def invoke(magic_self, innocent_self, *args, **kwargs):
        # do black magic here
        ...
        print(magic_self, innocent_self, args, kwargs)

class InnocentClass(object):
    magic_method = MagicMethod()
And now: InnocentClass().magic_method() will act as expected. The method will be bound with the innocent_self parameter to the InnocentClass instance, and with magic_self to the MagicMethod instance. Weird, huh? It's like having two keywords, this1 and this2, in languages like Java and C#. Magic like this allows frameworks to do stuff that would otherwise be much more verbose.
Again, I don't want to comment on the ethics of this stuff. I just wanted to show things that would be harder to do without an explicit self reference.
I think it has to do with PEP 227:
Names in class scope are not accessible. Names are resolved in the
innermost enclosing function scope. If a class definition occurs in a
chain of nested scopes, the resolution process skips class
definitions. This rule prevents odd interactions between class
attributes and local variable access. If a name binding operation
occurs in a class definition, it creates an attribute on the resulting
class object. To access this variable in a method, or in a function
nested within a method, an attribute reference must be used, either
via self or via the class name.
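A short sketch of the rule described above: a class attribute is not visible as a bare name inside a method; you have to reach it through self or through the class name.
class Config(object):
    retries = 3                        # class attribute

    def attempts(self):
        # return retries + 1           # NameError: the class scope is skipped
        return self.retries + 1        # attribute access is required
        # (Config.retries + 1 would also work)

print(Config().attempts())             # 4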
I think the real reason, besides "The Zen of Python", is that functions are first-class citizens in Python.
Which essentially makes them objects. Now the fundamental issue is: if your functions are objects as well, then in the object-oriented paradigm how would you send messages to objects when the messages themselves are objects?
It looks like a chicken-and-egg problem. To reduce this paradox, the only possible way is to either pass a context of execution to methods or detect it. But since Python can have nested functions, detection would be impossible, as the context of execution changes for inner functions.
This means the only possible solution is to explicitly pass 'self' (the context of execution).
So I believe it is an implementation problem; the Zen came much later.
As explained in self in Python, Demystified:
anything like obj.meth(args) becomes Class.meth(obj, args). The calling process is automatic while the receiving process is not (it's explicit). This is the reason the first parameter of a function in a class must be the object itself.
class Point(object):
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def distance(self):
        """Find distance from origin"""
        return (self.x**2 + self.y**2) ** 0.5
Invocations:
>>> p1 = Point(6,8)
>>> p1.distance()
10.0
__init__() defines three parameters but we just passed two (6 and 8). Similarly, distance() requires one but zero arguments were passed.
Why is Python not complaining about this argument number mismatch?
Generally, when we call a method with some arguments, the corresponding class function is called by placing the method's object before the first argument. So, anything like obj.meth(args) becomes Class.meth(obj, args). The calling process is automatic while the receiving process is not (it's explicit).
This is the reason the first parameter of a function in a class must be the object itself. Writing this parameter as self is merely a convention. It is not a keyword and has no special meaning in Python. We could use other names (like this) but I strongly suggest you don't. Using names other than self is frowned upon by most developers and degrades the readability of the code ("Readability counts").
...
In the first example, self.x is an instance attribute whereas x is a local variable. They are not the same and they lie in different namespaces.
Self Is Here To Stay
Many have proposed to make self a keyword in Python, like this in C++ and Java. This would eliminate the redundant use of explicit self from the formal parameter list in methods. While this idea seems promising, it's not going to happen. At least not in the near future. The main reason is backward compatibility. Here is a blog from the creator of Python himself explaining why the explicit self has to stay.
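To tie this back to the obj.meth(args) → Class.meth(obj, args) equivalence above, here is the same call written both ways, reusing the Point class from the snippet above:
# assumes the Point class defined above
p1 = Point(6, 8)
print(p1.distance())        # 10.0 -- the usual instance call
print(Point.distance(p1))   # 10.0 -- the equivalent explicit call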
The 'self' parameter keeps the current calling object.
class class_name:
    class_variable = 0
    def method_name(self, arg):
        self.var = arg

obj = class_name()
obj.method_name(10)
Here, the self parameter holds the calling object, obj. Hence, the statement self.var = arg sets obj.var.
There is also another very simple answer: according to the zen of python, "explicit is better than implicit".

Why does my class run automatically without calling it?

name_player = None
health_player = None
inventory_player = []
class engine:
    print name_player
I have no idea why this runs without calling it with engine()
The Python interpreter starts by reading your file, one line at a time.
Step 1:
name_player = None adds name_player : None to locals()
Step 2 and 3 proceed in the same way.
Step 4: class engine: Python sees a class statement and prepares to create the class object. To do so, it reads the class body and collects the resulting names (fields and method definitions) into the class's namespace dictionary. In order to build that namespace, it has to execute the statements in the class body.
So normally a class might look like
class Foo():
    def my_method(self):
        return "I'm foo!"
This would define a method, and put that definition with the class definition on the heap.
So your definition proceeds as follows. We've started creating the class object and then we come across a statement, so the interpreter executes it. In your case, it's a print statement, so you see it executed.
You'll see now if you call engine(), another print won't happen.
What you probably want is to have this statement in a constructor like so:
class engine:
    def __init__(self):  # __init__() is a constructor in Python
        print name_player
For more information about classes in Python, see https://docs.python.org/2/tutorial/classes.html
When you define a class, python evaluates the statements making up the class's definition. If those statements have side effects, for example sending text to the standard output, then that text will get sent.
If you were to instantiate this, by calling engine(), you would get back an empty object.
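A minimal sketch of the two execution times involved (class body at class-definition time, __init__ at instantiation time):
class Engine(object):
    print("this line runs when the class statement itself is executed")

    def __init__(self):
        print("this line runs each time Engine() is called")

e = Engine()   # only the __init__ message is printed here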

Why do we use @staticmethod?

I just can't see why we need to use @staticmethod. Let's start with an example.
class test1:
    def __init__(self, value):
        self.value = value

    @staticmethod
    def static_add_one(value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

a = test1(3)
print(a.new_val)  ## >>> 4
class test2:
    def __init__(self, value):
        self.value = value

    def static_add_one(self, value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

b = test2(3)
print(b.new_val)  ## >>> 4
In the example above, the method static_add_one in the two classes does not require the instance of the class (self) in its calculation.
The method static_add_one in the class test1 is decorated with @staticmethod and works properly.
But at the same time, the method static_add_one in the class test2, which has no @staticmethod decoration, also works properly, by using a trick that accepts a self argument but doesn't use it at all.
So what is the benefit of using @staticmethod? Does it improve performance? Or is it just due to the Zen of Python, which states that "explicit is better than implicit"?
The reason to use staticmethod is if you have something that could be written as a standalone function (not part of any class), but you want to keep it within the class because it's somehow semantically related to the class. (For instance, it could be a function that doesn't require any information from the class, but whose behavior is specific to the class, so that subclasses might want to override it.) In many cases, it could make just as much sense to write something as a standalone function instead of a staticmethod.
Your example isn't really the same. A key difference is that, even though you don't use self, you still need an instance to call static_add_one --- you can't call it directly on the class with test2.static_add_one(1). So there is a genuine difference in behavior there. The most serious "rival" to a staticmethod isn't a regular method that ignores self, but a standalone function.
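Using the classes from the question, the difference shows up as soon as you try to call the helper without an instance (a small sketch, assuming the definitions above):
# assumes test1 and test2 as defined in the question
print(test1.static_add_one(1))   # 2 -- callable straight off the class

b = test2(3)
print(b.static_add_one(1))       # 2 -- works, but only through an instance
# test2.static_add_one(1)        # TypeError: the plain function still expects (self, value)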
Today I suddenly found a benefit of using @staticmethod.
If you create a staticmethod within a class, you don't need to create an instance of the class before using it.
For example,
class File1:
    def __init__(self, path):
        out = self.parse(path)

    def parse(self, path):
        # ..parsing works..
        return x

class File2:
    def __init__(self, path):
        out = self.parse(path)

    @staticmethod
    def parse(path):
        # ..parsing works..
        return x

if __name__ == '__main__':
    path = 'abc.txt'
    File1.parse(path)  # TypeError: unbound method parse() ....
    File2.parse(path)  # Goal!!!!!!!!!!!!!!!!!!!!
Since the method parse is strongly related to the classes File1 and File2, it is more natural to put it inside the class. However, sometimes this parse method may also be used in other classes under some circumstances. If you want to do so using File1, you must create an instance of File1 before calling the method parse. While using staticmethod in the class File2, you may directly call the method by using the syntax File2.parse.
This makes your work more convenient and natural.
I will add something other answers didn't mention. It's not only a matter of modularity, of putting something next to other logically related parts. It's also that the method could be non-static at another point of the hierarchy (i.e. in a subclass or superclass) and thus participate in polymorphism (type-based dispatching). So if you put that function outside the class you will be precluding subclasses from effectively overriding it. Now, say you realize you don't need self in function C.f of class C; you have three options:
1. Put it outside the class. But we just decided against this.
2. Do nothing new: while unused, still keep the self parameter.
3. Declare that you are not using the self parameter, while still letting other C methods call f as self.f, which is required if you wish to keep open the possibility of further overrides of f that do depend on some instance state.
Option 2 demands less conceptual baggage (you already have to know about self and methods-as-bound-functions, because it's the more general case). But you still may prefer to be explicit about self not being used (and the interpreter could even reward you with some optimization, not having to partially apply a function to self). In that case, you pick option 3 and add @staticmethod on top of your function.
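A small sketch of option 3 with the override possibility kept open (the names here are illustrative):
class C(object):
    @staticmethod
    def f(x):
        return x + 1              # needs no instance state today

    def g(self, x):
        return self.f(x) * 2      # still dispatches through self, so overrides win

class D(C):
    def __init__(self, offset):
        self.offset = offset

    def f(self, x):               # a later override that does use instance state
        return x + self.offset

print(C().g(1))    # 4
print(D(10).g(1))  # 22
Here C.f needs no instance today, so it is a staticmethod, while the subclass remains free to override it with a regular method that does use instance state.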
Use @staticmethod for methods that don't need to operate on a specific object, but that you still want located in the scope of the class (as opposed to module scope).
Your example in test2.static_add_one wastes its time passing an unused self parameter, but otherwise works the same as test1.static_add_one. Note that this extraneous parameter can't be optimized away.
One example I can think of is in a Django project I have, where a model class represents a database table, and an object of that class represents a record. There are some functions used by the class that are stand-alone and do not need an object to operate on, for example a function that converts a title into a "slug", which is a representation of the title that follows the character set limits imposed by URL syntax. The function that converts a title to a slug is declared as a staticmethod precisely to strongly associate it with the class that uses it.
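A hedged sketch of that pattern (not the actual project code; Article and the regex are made up for illustration):
import re

class Article(object):
    """Stands in for a Django model; only the staticmethod matters here."""

    def __init__(self, title):
        self.title = title
        self.slug = self.slugify(title)

    @staticmethod
    def slugify(title):
        # lowercase, then replace runs of non-alphanumerics with hyphens
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

print(Article.slugify("Hello, World!"))   # hello-world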

Is it bad practice to put stuff into new function properties?

Say I have a class and a function:
class AddressValidator(object):
    def __init__(self):
        pass

    def validate(self, address):
        # ...
        pass

def validate_address(addr):
    validator = AddressValidator()
    return validator.validate(addr)
The function is a shortcut for using the class, if you will. Now, what if this function has to be run thousands of times? If the validator class actually has to do something on instantiation, like connecting to a database, creating it over and over thousands of times is pretty wasteful. I was wondering if I could perhaps do something like this:
def validate_address(addr):
    if not hasattr(validate_address, 'validator'):
        validate_address.validator = AddressValidator()
    validator = validate_address.validator
    return validator.validate(addr)
Now the validator class is only instantiated once and saved "in the function", to put it that way. I've never seen this done though, so I'm guessing it's bad practice. If so, why?
Note: I know I can just cache the validator object in a module global. I'm just curious if this is a viable solution when I want to avoid littering my module.
Despite "everithing is an object", not everithing work as nice as instances of well controlled class.
This problem looks like typical case for "functor" or "callable object" as it called in python.
the code will be look something like
class AddressValidator(object):
    def __init__(self):
        pass

    def __call__(self, address):
        # ...
        pass

validate_address = AddressValidator()
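As a usage sketch (filling in __call__ with a stand-in check, since the real validation logic isn't shown), the expensive setup runs once in __init__ and every later call reuses it:
class AddressValidator(object):
    def __init__(self):
        # stand-in for the expensive one-time setup (e.g. a DB connection)
        self.connection = object()

    def __call__(self, address):
        # stand-in validation; a real implementation would use self.connection
        return bool(address.strip())

validate_address = AddressValidator()          # instantiated exactly once

print(validate_address("221B Baker Street"))   # True
print(validate_address("   "))                 # False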
Or you could just define your function as a shortcut to a bound method:
class AddressValidator(object):
    def __init__(self):
        pass

    def validate(self, address):
        # ...
        pass

validate_address = AddressValidator().validate
I'd go with a default argument (evaluated once at function definition time and bound to the function):
def validate_address(addr, validator=AddressValidator()):
    return validator.validate(addr)
This is perfectly acceptable if instances of AddressValidator are considered immutable (i.e. they don't contain methods that modify their internal state), and it also allows you to later override the choice of validator should you find the need to (e.g. to provide a validator specialized for a particular country).

Does Python have something like anonymous inner classes of Java?

In Java you can define a new class inline using anonymous inner classes. This is useful when you need to rewrite only a single method of the class.
Suppose that you want to create a subclass of OptionParser that overrides only a single method (for example exit()). In Java you can write something like this:
new OptionParser() {
    public void exit() {
        // body of the method
    }
};
This piece of code creates an anonymous class that extends OptionParser and overrides only the exit() method.
Is there a similar idiom in Python? Which idiom is used in these circumstances?
You can use the type(name, bases, dict) builtin function to create classes on the fly. For example:
op = type("MyOptionParser", (OptionParser,object), {"foo": lambda self: "foo" })
op().foo()
Since OptionParser isn't a new-style class, you have to explicitly include object in the list of base classes.
Java uses anonymous classes mostly to imitate closures or simply code blocks. Since in Python you can easily pass around methods there's no need for a construct as clunky as anonymous inner classes:
def printStuff():
    print "hello"

def doit(what):
    what()

doit(printStuff)
Edit: I'm aware that this is not what is needed in this special case. I just described the most common Python solution to the problem most commonly solved by anonymous inner classes in Java.
You can accomplish this in three ways:
1. A proper subclass (of course).
2. A custom method that you invoke with the object as an argument.
3. (What you probably want) adding a new method to an object (or replacing an existing one).
Example of option 3 (edited to remove use of the "new" module -- it's deprecated, I did not know):
import types

class someclass(object):
    val = "Value"

    def some_method(self):
        print self.val

def some_method_upper(self):
    print self.val.upper()

obj = someclass()
obj.some_method()
obj.some_method = types.MethodType(some_method_upper, obj)
obj.some_method()
Well, classes are first class objects, so you can create them in methods if you want. e.g.
from optparse import OptionParser

def make_custom_op(i):
    class MyOP(OptionParser):
        def exit(self):
            print 'custom exit called', i
    return MyOP

custom_op_class = make_custom_op(3)
custom_op = custom_op_class()
custom_op.exit()   # prints 'custom exit called 3'
dir(custom_op)     # shows all the regular attributes of an OptionParser
But, really, why not just define the class at the normal level? If you need to customise it, put the customisation in as arguments to __init__.
(edit: fixed typing errors in code)
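For instance, a hedged sketch of that suggestion: push the per-use customisation into __init__ instead of subclassing at each call site (exit_message is an invented argument, not part of OptionParser):
from optparse import OptionParser

class MyOptionParser(OptionParser):
    def __init__(self, exit_message=None, **kwargs):
        OptionParser.__init__(self, **kwargs)
        self.exit_message = exit_message      # the per-use customisation

    def exit(self, status=0, msg=None):
        # illustrative override: report instead of exiting
        if self.exit_message:
            print(self.exit_message)

p = MyOptionParser(exit_message="custom exit called")
p.exit()   # custom exit called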
Python doesn't support this directly (anonymous classes) but because of its terse syntax it isn't really necessary:
class MyOptionParser(OptionParser):
    def exit(self, status=0, msg=None):
        # body of method
        pass

p = MyOptionParser()
The only downside is you add MyOptionParser to your namespace, but as John Fouhy pointed out, you can hide that inside a function if you are going to do it multiple times.
Python probably has better ways to solve your problem. If you could provide more specific details of what you want to do it would help.
For example, if you need to change the method being called in a specific point in code, you can do this by passing the function as a parameter (functions are first class objects in python, you can pass them to functions, etc). You can also create anonymous lambda functions (but they're restricted to a single expression).
Also, since Python is very dynamic, you can change methods of an object after it's been created: object.method1 = alternative_impl1. Although it's actually a bit more complicated, see gnud's answer.
In Python you have anonymous functions, declared using the lambda keyword. I do not like them very much - they are not very readable, and have limited functionality.
However, what you are talking about may be implemented in python with a completely different approach:
class a(object):
    def meth_a(self):
        print "a"

def meth_b(obj):
    print "b"

b = a()
b.__class__.meth_a = meth_b
You can always hide a class behind a variable:
class var(...):
    pass

var = var()
instead of
var = new ...() {};
This is what you would do in Python 3.7
#!/usr/bin/env python3
class ExampleClass:
    def exit(self):
        print('this should NOT print since we are going to override')

ExampleClass = type('', (ExampleClass,), {'exit': lambda self: print('you should see this printed only')})()
ExampleClass.exit()
I usually do this in Python 3 with inner classes:
class SomeSerializer():
    class __Paginator(Paginator):
        page_size = 10

    # defining it for e.g. Rest:
    pagination_class = __Paginator

    # you could also be accessing it to e.g. create an instance via method:
    def get_paginator(self):
        return self.__Paginator()
As I used a double underscore, this mixes the idea of name "mangling" with inner classes: from outside you can still access the inner class with SomeSerializer._SomeSerializer__Paginator (and so can subclasses), but SomeSerializer.__Paginator will not work, which might or might not be your wish if you want it a bit more "anonymous".
However, I suggest using the "private" notation with a single underscore if you do not need the mangling.
In my case, all I need is a fast subclass to set some class attributes, followed by assigning it to the class attribute of my RestSerializer class, so the double underscore denotes "do not use it at all further", and might change to no underscores if I start reusing it elsewhere.
Being perverse, you could use the throwaway name _ for the derived class name:
class _(OptionParser):
    def exit(self):
        pass  # your override impl
Here is a fancier way of doing Maciej's method.
I defined the following decorator:
def newinstance(*args, **kwargs):
    def decorator(cls):
        return cls(*args, **kwargs)
    return decorator
The following snippets are roughly equivalent (it also works with constructor args!):
// java
MyClass obj = new MyClass(arg) {
    public void method() {
        // body of the method
    }
};

# python
@newinstance(arg)
class obj(MyClass):
    def method(self):
        pass  # body of the method
You can use this code from within a class/method/function if you want to define an "inner" class instance.
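For example, a small sketch of doing exactly that inside a function, using the newinstance decorator defined above (Greeter and the method bodies are made up for illustration):
class Greeter(object):
    def greet(self):
        return "hello"

def make_loud_greeter():
    # 'loud' ends up bound to an *instance* of the one-off subclass
    @newinstance()
    class loud(Greeter):
        def greet(self):
            return Greeter.greet(self).upper()
    return loud

print(make_loud_greeter().greet())   # HELLO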
