How do you programmatically set an attribute? [duplicate] - python

Suppose I have a python object x and a string s, how do I set the attribute s on x? So:
>>> x = SomeObject()
>>> attr = 'myAttr'
>>> # magic goes here
>>> x.myAttr
'magic'
What's the magic? The goal of this, incidentally, is to cache calls to x.__getattr__().

setattr(x, attr, 'magic')
For help on it:
>>> help(setattr)
Help on built-in function setattr in module __builtin__:
setattr(...)
setattr(object, name, value)
Set a named attribute on an object; setattr(x, 'y', v) is equivalent to
``x.y = v''.
However, note that you can't do that to a "pure" instance of object; it works on instances of any simple subclass of object, which is almost certainly what you have. There is rarely a good reason to create bare object() instances like that in the first place.
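As for the caching goal mentioned in the question, here is a minimal sketch (the _expensive_lookup helper is hypothetical) of how setattr can cache values produced by __getattr__, which only runs when normal attribute lookup fails:
class SomeObject(object):
    def __getattr__(self, name):
        # store the computed value on the instance, so the next access
        # finds it in the instance __dict__ and __getattr__ is skipped
        value = self._expensive_lookup(name)
        setattr(self, name, value)
        return value

    def _expensive_lookup(self, name):  # hypothetical stand-in for the real work
        return 'magic'

x = SomeObject()
x.myAttr   # computed via __getattr__, then cached
x.myAttr   # served straight from the instance __dict__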

Usually, we define classes for this.
class XClass(object):
    def __init__(self):
        self.myAttr = None

x = XClass()
x.myAttr = 'magic'
x.myAttr
You can, to an extent, do this with the setattr and getattr built-in functions; however, they don't work on instances of object directly.
>>> a = object()
>>> setattr(a, 'hi', 'mom')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'object' object has no attribute 'hi'
They do, however, work on all kinds of simple classes.
class YClass(object):
    pass

y = YClass()
setattr(y, 'myAttr', 'magic')
y.myAttr

Let x be an object; then you can do it in two ways:
x.attr_name = s
setattr(x, 'attr_name', s)

Also works fine within a class:
def update_property(self, property, value):
    setattr(self, property, value)
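For instance (using a made-up Config class to illustrate), the method above behaves like a plain attribute assignment:
class Config(object):
    def update_property(self, property, value):
        setattr(self, property, value)

cfg = Config()
cfg.update_property('timeout', 30)  # same effect as cfg.timeout = 30
print(cfg.timeout)                  # 30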

If you want a filename from an argument:
import sys
filename = sys.argv[1]
file = open(filename, 'r')
contents = file.read()
If you want an argument to show on your terminal (using print()):
import sys
arg = sys.argv[1]
print(arg)

Related


Partial function object has no attribute "__code__"

I am writing a small application that takes users' input to give them a set of optimal parameters to use. (Each of these sets is ranked and the user can select whichever one they want to use.)
To be able to do this, I select one function from an array of choices (depending on the context), partially fill the function using functools.partial, then return the partial object to another module, which in turn calls a C++ library (dlib) that has a Python interface.
Up until today, I wasn't using functools.partial to fill the function and faced no issues. But to make the code less repetitive and easier to understand, I added it in. After adding that part, I get the following error:
AttributeError: 'functools.partial' object has no attribute '__code__'
I read a few posts and realized that this is an issue with partial objects, as they often lack attributes like __name__, __module__, etc., but I am not sure how to resolve it.
PS: I am using python 3.7
EDIT
I am adding a small code that reproduces the error
from functools import partial
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.datasets import load_breast_cancer
from dlib import find_max_global

def objective_calculator(*args, X, y):
    args = args[0]
    model = LogisticRegression()
    model.set_params(**{'class_weight': args[0], 'max_iter': args[1]})
    model.fit(train['data'], train['target'])
    predictions = model.predict(X)
    return accuracy_score(y, predictions)

train = load_breast_cancer()
obj_calc = partial(objective_calculator, X=train['data'], y=train['target'])

lower_bounds = [0.1, 10]  # class_weight, max_iter
upper_bounds = [0.5, 200]  # class_weight, max_iter
is_integer_variable = [False, True]

find_max_global(f=obj_calc,
                bound1=lower_bounds,
                bound2=upper_bounds,
                num_function_calls=2,
                is_integer_variable=is_integer_variable,
                solver_epsilon=1)
Running the above code results in the following error
AttributeError: 'functools.partial' object has no attribute '__code__'
Is it advisable to manually add the __code__ attribute to the partial object?
AttributeError: 'functools.partial' object has no attribute '__code__'
To solve this error we can use functools.wraps, which relies on functools.WRAPPER_ASSIGNMENTS to decide which attributes to copy; in Python 2.7.6 it defaults to ('__module__', '__name__', '__doc__').
Alternatively, we can copy only the attributes that are actually present:
import functools
import itertools

def wraps_safely(obj, attr_names=functools.WRAPPER_ASSIGNMENTS):
    return functools.wraps(obj, assigned=itertools.ifilter(functools.partial(hasattr, obj), attr_names))

>>> from functools import partial
>>> def foo():
...     """ Ubiquitous foo function ...."""
...
>>> functools.wraps(partial(foo))(foo)()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/functools.py", line 33, in update_wrapper
    setattr(wrapper, attr, getattr(wrapped, attr))
AttributeError: 'functools.partial' object has no attribute '__module__'
>>> wraps_safely(partial(foo))(foo)()
>>>
This just filters out the attributes which aren't present on the wrapped object.
A second approach would be to deal strictly with partial objects: unwrap the partial down to the underlying function and let functools.wraps take the wrapper's attributes from that function, reached through the partial's func attribute.
import functools

def wraps_partial(wrapper, *args, **kwargs):
    """Creates a callable object whose attributes will be set from the partial's nested func attribute."""
    while isinstance(wrapper, functools.partial):
        wrapper = wrapper.func
    return functools.wraps(wrapper, *args, **kwargs)
def foo():
    """ Foo Function, returns None """
    pass

>>> wraps_partial(partial(partial(foo)))(lambda: None).__doc__
' Foo Function, returns None '
>>> wraps_partial(partial(partial(foo)))(lambda: None).__name__
'foo'
>>> wraps_partial(partial(partial(foo)))(lambda: None)()
>>> pfoo = partial(partial(foo))
>>> @wraps_partial(pfoo)
... def not_foo():
...     """ Not Foo function ... """
...
>>> not_foo.__doc__
' Foo Function, returns None '
>>> not_foo.__name__
'foo'
>>>
Now we can get the original function's docs.
In Python (CPython) 3:
from functools import wraps, partial, WRAPPER_ASSIGNMENTS

try:
    wraps(partial(wraps))(wraps)
except AttributeError:
    @wraps(wraps)
    def wraps(obj, attr_names=WRAPPER_ASSIGNMENTS, wraps=wraps):
        return wraps(obj, assigned=(name for name in attr_names if hasattr(obj, name)))
We define a new wraps function only if we fail to wrap a partial, and we use the original wraps to copy the docs.
Also, in Python 3.5 (and later) we have a reference to the original function, and it is maintained in the partial. You can access it as .func:
from functools import partial

def a(b):
    print(b)

In [20]: c = partial(a, 5)
In [21]: c.func.__module__
Out[21]: '__main__'
In [22]: c.func.__name__
Out[22]: 'a'
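Building on that (a small sketch, not from the original answer): since the original function stays reachable through .func, its metadata, including the __code__ object that dlib complains about, can be read from there:
from functools import partial

def a(b):
    print(b)

c = partial(a, 5)

print(hasattr(c, '__code__'))        # False: the partial itself carries no code object
print(c.func.__name__)               # a
print(c.func.__code__.co_varnames)   # ('b',): the wrapped function's code object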
I would not dare to add a __code__ attribute to a partial object. The __code__ attribute allows low-level access to the compiled Python code. It is normally never used in common scripts and is probably used here to interface with the underlying C++ library.
The bullet proof way is to define a new function. In Python, def is an executable statement, and it is possible to iteratively redefine a function:
def objective_calculator(*args, X, y):
    ...

for X, y in ...:
    def obj_calc(*args):
        return objective_calculator(*args, X=X, y=y)
    ...
    find_max_global(f=obj_calc, ...)
obj_calc is now a true Python function and it will have its own __code__ attribute.
If the dlib library supports it, it could be possible to use a lambda:
find_max_global(f=lambda *args: objective_calculator(*args, X=X, y=y), ...)
A lambda is almost a true function and indeed has a __code__ attribute, but it is defined with its own syntax in Python, so depending on the dlib library requirements (I could not find any reference on them) it may or may not work.
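A self-contained sketch of the difference (the objective function below is made up, not the one from the question): a functools.partial has no __code__, while both a def wrapper and a lambda do:
from functools import partial

def objective(a, b, scale):              # made-up stand-in for objective_calculator
    return scale * (a + b)

obj_partial = partial(objective, scale=10)

def obj_func(a, b):                      # plain def wrapper that forwards to the partial
    return obj_partial(a, b)

obj_lambda = lambda a, b: obj_partial(a, b)

print(hasattr(obj_partial, '__code__'))  # False: partial objects carry no code object
print(hasattr(obj_func, '__code__'))     # True
print(hasattr(obj_lambda, '__code__'))   # True: lambdas also carry a code object
print(obj_func(1, 2), obj_lambda(1, 2))  # 30 30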

How to Create a Python Method in Execution Time?

This following code works fine, and shows a way to create attributes and methods in execution time:
class Pessoa:
    pass

p = Pessoa()
p.nome = 'fulano'
if hasattr(p, 'nome'):
    print(p)

p.get_name = lambda self: 'Sr.{}'.format(self.nome)
But I think my way of creating methods is not correct. Is there another way to create a method dynamically?
[Although this has really been answered in Steven Rumbalski's comment, pointing to two independent questions, I'm adding a short combined answer here.]
Yes, you're right that this does not correctly define a method.
>>> class C:
...     pass
...
>>> p = C()
>>> p.name = 'nickie'
>>> p.get_name = lambda self: 'Dr. {}'.format(self.name)
>>> p.get_name()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: <lambda>() takes exactly 1 argument (0 given)
Here's how you can call the function that is stored in object p's attribute called get_name:
>>> p.get_name(p)
'Dr. nickie'
For properly defining an instance method dynamically, take a look at the answers to a relevant question.
If you want to define a class method dynamically, you have to define it as:
>>> C.get_name = lambda self: 'Dr. {}'.format(self.name)
Although the method will be added to existing objects, this will not work for p (as it already has its own attribute get_name). However, for a new object:
>>> q = C()
>>> q.name = 'somebody'
>>> q.get_name()
'Dr. somebody'
And (obviously), the method will fail for objects that don't have a name attribute:
>>> r = C()
>>> r.get_name()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <lambda>
AttributeError: C instance has no attribute 'name'
There are two ways to dynamically create methods in Python 3:
create a method on the class itself: just assign a function to a class attribute; it becomes accessible to all objects of the class, even those created before the method was added:
>>> class A:  # create a class
        def __init__(self, v):
            self.val = v

>>> a = A(1)  # create an instance
>>> def double(self):  # define a plain function
        self.val *= 2

>>> A.double = double  # makes it a method on the class
>>> a.double()  # use it...
>>> a.val
2
create a method on an instance of the class. It is possible in Python 3 thanks to the types module:
>>> import types
>>> def add(self, x):  # create a plain function
        self.val += x

>>> a.add = types.MethodType(add, a)  # make it a method on an instance
>>> a.add(2)
>>> a.val
4
>>> b = A(1)
>>> b.add(2)  # chokes on another instance
Traceback (most recent call last):
  File "<pyshell#55>", line 1, in <module>
    b.add(2)
AttributeError: 'A' object has no attribute 'add'
>>> type(a.add)  # it is a true method on an instance
<class 'method'>
>>> type(a.double)
<class 'method'>
A slight variation on method 1 (on class) can be used to create static or class methods:
>>> def static_add(a,b):
return a+b
>>> A.static_add = staticmethod(static_add)
>>> a.static_add(3,4)
7
>>> def show_class(cls):
return str(cls)
>>> A.show_class = classmethod(show_class)
>>> b.show_class()
"<class '__main__.A'>"
Here is how I add methods to classes imported from a library. If I modified the library I would lose the changes at the next library upgrade. I can't create a new derived class because I can't tell the library to use my modified instance. So I monkey patch the existing classes by adding the missing methods:
# Import the standard classes of the shapely library
import shapely.geometry

# Define a function that returns the points of the outer
# and the inner polygons of a Polygon
def _coords_ext_int_polygon(self):
    exterior_coords = [self.exterior.coords[:]]
    interior_coords = [interior.coords[:] for interior in self.interiors]
    return exterior_coords, interior_coords

# Define a function that returns the points of the outer
# and the inner polygons of a MultiPolygon
def _coords_ext_int_multi_polygon(self):
    if self.is_empty:
        return [], []
    exterior_coords = []
    interior_coords = []
    for part in self:
        e, i = part.coords_ext_int()
        exterior_coords += e
        interior_coords += i
    return exterior_coords, interior_coords

# Define a function that saves outer and inner points to a .pt file
def _export_to_pt_file(self, file_name=r'C:\WizardTemp\test.pt'):
    '''create a .pt file in the format that pleases thinkdesign'''
    e, i = self.coords_ext_int()
    with open(file_name, 'w') as f:
        for rings in (e, i):
            for ring in rings:
                for x, y in ring:
                    f.write('{} {} 0\n'.format(x, y))

# Add the functions to the definition of the classes
# by assigning the functions to new class members
shapely.geometry.Polygon.coords_ext_int = _coords_ext_int_polygon
shapely.geometry.Polygon.export_to_pt_file = _export_to_pt_file
shapely.geometry.MultiPolygon.coords_ext_int = _coords_ext_int_multi_polygon
shapely.geometry.MultiPolygon.export_to_pt_file = _export_to_pt_file
Notice that the same function definition can be assigned to two different classes.
EDIT
In my example I'm not adding methods to a class of mine, I'm adding methods to shapely, an open source library I installed.
In your post you use p.get_name = ... to add a member to the object instance p. I first define a function _xxx(), then I add it to the class definition with class.xxx = _xxx.
I don't know your use case, but usually you add variables to instances and methods to class definitions; that's why I am showing you how to add methods to the class definition instead of to the instance.
Shapely manages geometric objects and offers methods to calculate the area of the polygons, to add or subtract polygons to each other, and many other really cool things.
My problem is that I need some methods that shapely doesn't provide out of the box.
In my example I created my own method that returns the list of points of the outer profile and the list of points of the inner profiles. I made two methods, one for the Polygon class and one for the MultiPolygon class.
I also need a method to export all the points to a .pt file format. In this case I made only one method that works with both the Polygon and the MultiPolygon classes.
This code is inside a module called shapely_monkeypatch.py (see monkey patch). When the module is imported the functions with the name starting by _ are defined, then they are assigned to the existing classes with names without _. (It is a convention in Python to use _ for names of variables or functions intended for internal use only.)
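To illustrate the instance-versus-class split mentioned above with a quick sketch (the Robot class is made up, not part of the shapely example): data usually goes on the instance, behaviour on the class, so every instance picks the method up:
class Robot(object):
    pass

def _greet(self):
    return 'Hello, I am {}'.format(self.name)

Robot.greet = _greet   # method added to the class definition

r = Robot()
r.name = 'R2'          # variable added to the instance
print(r.greet())       # Hello, I am R2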
I shall be maligned, pilloried, and excoriated, but... here is one way I make a keymap for an alphabet of methods within __init__(self).
def __init__(this):
    this.keymap = {}  # the mapping has to exist before it is filled
    for c in "abcdefghijklmnopqrstuvwxyz":
        this.keymap[ord(c)] = eval(f"this.{c}")
Now, with appropriate code, I can press a key in pygame to execute the mapped method.
It is easy enough to use lambdas so one does not even need pre-existing methods... for instance, if __str__(this) is a method, capital P can print the instance string representation using this code:
this.keymap[ord('P')] = lambda: print(this)
but everyone will tell you that eval is bad.
I live to break rules and color outside the boundaries.
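For comparison, a sketch of the same keymap built with getattr instead of eval (a hypothetical class with only a couple of single-letter methods):
class Keyboard(object):
    def __init__(self):
        # getattr looks the bound method up by name, so no eval is needed
        self.keymap = {ord(c): getattr(self, c)
                       for c in "abcdefghijklmnopqrstuvwxyz"
                       if hasattr(self, c)}

    def a(self):
        print('a pressed')

    def b(self):
        print('b pressed')

kb = Keyboard()
kb.keymap[ord('a')]()   # calls the bound method Keyboard.a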

Iterating class object

It's not a real-world program, but I would like to know why it can't be done.
I was thinking about the numpy.r_ object and tried to do something similar, but just making a class and not instantiating it.
A simple code (with some flaws) for integers could be:
class r_:
    @classmethod
    def __getitem__(cls, sl):
        try:
            return range(sl)
        except TypeError:
            sl = sl.start, sl.stop, sl.step
            return range(*(i for i in sl if i is not None))
But when I try to do r_[1:10] I receive TypeError: 'type' object is not subscriptable.
Of course the code works with r_.__getitem__(slice(1,10)) but that's not what I want.
Is there something I can do in this case instead of using r_()[1:10]?
The protocol for resolving obj[index] is to look for a __getitem__ method in the type of obj, not to directly look up a method on obj (which would normally fall back to looking up a method on the type if obj didn't have an instance attribute with the name __getitem__).
This can be easily verified.
>>> class Foo(object):
        pass

>>> def __getitem__(self, index):
        return index

>>> f = Foo()
>>> f.__getitem__ = __getitem__
>>> f[3]
Traceback (most recent call last):
  File "<pyshell#8>", line 1, in <module>
    f[3]
TypeError: 'Foo' object does not support indexing
>>> Foo.__getitem__ = __getitem__
>>> f[3]
3
I don't know why exactly it works this way, but I would guess that at least part of the reason is exactly to prevent what you're trying to do; it would be surprising if every class that defined __getitem__ so that its instances were indexable accidentally gained the ability to be indexed itself. In the overwhelming majority of cases, code that tries to index a class will be a bug, so if the __getitem__ method happened to be able to return something, it would be bad if that didn't get caught.
Why don't you just call the class something else, and bind an instance of it to the name r_? Then you'd be able to do r_[1:10].
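A sketch of that suggestion (the class name here is made up): keep the class under another name and bind a module-level instance to r_:
class _RangeMaker(object):
    def __getitem__(self, sl):
        # accept either a plain integer or a slice
        if isinstance(sl, slice):
            args = (i for i in (sl.start, sl.stop, sl.step) if i is not None)
            return range(*args)
        return range(sl)

r_ = _RangeMaker()

print(list(r_[1:10]))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(list(r_[5]))      # [0, 1, 2, 3, 4]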
What you are trying to do is like list[1:5] or set[1:5] =) The special __getitem__ method only works on instances.
What one would normally do is just create a single ("singleton") instance of the class:
class r_class(object):
    ...

r_ = r_class()
Now you can do:
r_[1:5]
You can also use metaclasses, but that may be more than is necessary.
"No, my question was about getitem in the class, not in the instance"
Then you do need metaclasses.
class r_meta(type):
    def __getitem__(cls, key):
        return range(key)

class r_(object, metaclass=r_meta):
    pass
Demo:
>>> r_[5]
range(0, 5)
If you pass in r_[1:5] you will get a slice object. Do help(slice) for more info; you can access values like key.stop if isinstance(key,slice) else key.
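A sketch of that slice handling combined with the metaclass above (Python 3 syntax):
class r_meta(type):
    def __getitem__(cls, key):
        if isinstance(key, slice):
            # drop the parts of the slice that were not given
            args = (i for i in (key.start, key.stop, key.step) if i is not None)
            return range(*args)
        return range(key)

class r_(object, metaclass=r_meta):
    pass

print(r_[5])        # range(0, 5)
print(r_[1:10:2])   # range(1, 10, 2)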
Define __getitem__() as a normal method in r_'s metaclass.
The reason for this behavior lies in the way special methods like __getitem__() are looked up.
Attributes are looked up first in the object's __dict__ and, if not found there, in the class __dict__. That's why e.g. this works:
>>> class Test1(object):
...     x = 'hello'
...
>>> t = Test1()
>>> t.__dict__
{}
>>> t.x
'hello'
Methods that are defined in the class body are stored in the class __dict__:
>>> class Test2(object):
...     def foo(self):
...         print 'hello'
...
>>> t = Test2()
>>> t.foo()
hello
>>> Test2.foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method foo() must be called with Test2 instance as first argument (got nothing instead)
So far there's nothing surprising here. When it comes to special methods however, Python's behavior is a little (or very) different:
>>> class Test3(object):
...     def __getitem__(self, key):
...         return 1
...
>>> t = Test3()
>>> t.__getitem__('a key')
1
>>> Test3['a key']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'type' object is unsubscriptable
The error messages are very different. With Test2, Python complains about an unbound method call, whereas with Test3 it complains about the unsubscriptability.
If you try to invoke a special method, by way of using its associated operator, on an object, Python doesn't try to find it in the object's __dict__ but goes straight to the __dict__ of the object's class, which, if the object is itself a class, is a metaclass. So that's where you have to define it:
>>> class Test4(object):
...     class __metaclass__(type):
...         def __getitem__(cls, key):
...             return 1
...
>>> Test4['a key']
1
There's no other way. To quote PEP20: There should be one-- and preferably only one --obvious way to do it.
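The Test4 example above uses the Python 2 __metaclass__ hook; for reference, a sketch of the same thing in Python 3 spelling:
class Meta(type):
    def __getitem__(cls, key):
        return 1

class Test4(metaclass=Meta):
    pass

print(Test4['a key'])   # 1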

How to create inline objects with properties?

In Javascript it would be:
var newObject = { 'propertyName' : 'propertyValue' };
newObject.propertyName; // returns "propertyValue"
But the same syntax in Python would create a dictionary, and that's not what I want
new_object = {'propertyName': 'propertyValue'}
new_object.propertyName # raises an AttributeError
obj = type('obj', (object,), {'propertyName' : 'propertyValue'})
There are two kinds of uses of the type function.
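A short sketch of those two uses (the names below are made up for illustration): with one argument type reports an object's class, with three arguments it creates a new class:
x = 42
print(type(x))          # <class 'int'>: one argument reports the class

Point = type('Point', (object,), {'x': 0, 'y': 0})   # three arguments create a class
p = Point()
print(p.x, p.y)         # 0 0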
Python 3.3 added the SimpleNamespace class for that exact purpose:
>>> from types import SimpleNamespace
>>> obj = SimpleNamespace(propertyName='propertyValue')
>>> obj
namespace(propertyName='propertyValue')
>>> obj.propertyName
'propertyValue'
In addition to the appropriate constructor to build the object, SimpleNamespace defines __repr__ and __eq__ (documented in 3.4) to behave as expected.
Peter's answer
obj = lambda: None
obj.propertyName = 'propertyValue'
I don't know if there's a built-in way to do it, but you can always define a class like this:
class InlineClass(object):
    def __init__(self, dict):
        self.__dict__ = dict

obj = InlineClass({'propertyName': 'propertyValue'})
I like Smashery's idea, but Python seems content to let you modify classes on your own:
>>> class Inline(object):
...     pass
...
>>> obj = Inline()
>>> obj.test = 1
>>> obj.test
1
>>>
Works just fine in Python 2.5 for me. Note that you do have to do this to a class derived from object - it won't work if you change the line to obj = object.
It is easy in Python to declare a class with an __init__() function that can set up the instance for you, with optional arguments. If you don't specify the arguments you get a blank instance, and if you specify some or all of the arguments you initialize the instance.
I explained it here (my highest-rated answer to date) so I won't retype the explanation. But, if you have questions, ask and I'll answer.
If you just want a generic object whose class doesn't really matter, you can do this:
class Generic(object):
    pass

x = Generic()
x.foo = 1
x.bar = 2
x.baz = 3
An obvious extension would be to add an __str__() function that prints something useful.
This trick is nice sometimes when you want a more-convenient dictionary. I find it easier to type x.foo than x["foo"].
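A sketch of the suggested extension, adding a __str__() that shows the stored attributes:
class Generic(object):
    def __str__(self):
        # show whatever attributes have been attached to the instance
        return 'Generic({})'.format(self.__dict__)

x = Generic()
x.foo = 1
x.bar = 2
print(x)   # e.g. Generic({'foo': 1, 'bar': 2})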
SilentGhost had a good answer, but his code actually creates a new object of metaclass type, in other words it creates a class. And classes are objects in Python!
obj = type('obj', (object,), {'propertyName' : 'propertyValue'})
type(obj)
gives
<class 'type'>
To create a new object of a custom or built-in class with dict attributes (aka properties) in one line, I'd suggest just calling it:
new_object = type('Foo', (object,), {'name': 'new object'})()
and now
type(new_object)
is
<class '__main__.Foo'>
which means it's an object of class Foo
I hope it helps those who are new to Python.
Another viable option is to use namedtuple:
from collections import namedtuple
message = namedtuple('Message', ['propertyName'])
messages = [
    message('propertyValueOne'),
    message('propertyValueTwo')
]
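The field is then read like an attribute, which is the behaviour the question asks for; a compact sketch:
from collections import namedtuple

Message = namedtuple('Message', ['propertyName'])
msg = Message('propertyValueOne')
print(msg.propertyName)   # propertyValueOne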
class test:
    def __setattr__(self, key, value):
        return value

myObj = test()
myObj.mykey = 'abc'  # set your property and value
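Note that as written, __setattr__ returning the value does not actually store anything, so reading myObj.mykey afterwards raises AttributeError. A minimal corrected sketch, if the goal is a class that accepts arbitrary properties:
class Test(object):
    def __setattr__(self, key, value):
        # actually store the value; the return value of __setattr__ is ignored
        self.__dict__[key] = value

myObj = Test()
myObj.mykey = 'abc'
print(myObj.mykey)   # abc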
