Is there a solution to sympify() strings including custom methods?
SymPy has the beautiful function sympify.
It can convert strings into SymPy expressions and simplify them. Nice!
It also allows you to make your own classes 'sympify'-able.
This is the official example.
from sympy import Matrix, sympify

class MyList1(object):
    def __iter__(self):
        yield 1
        yield 2
        return

    def __getitem__(self, i):
        return list(self)[i]

    def _sympy_(self):
        return Matrix(self)

local_dict = {"MyList1": MyList1}

print(sympify(MyList1()))                       # Matrix([[1], [2]])
print(sympify('MyList1()'))                     # MyList1()
print(sympify('MyList1()', locals=local_dict))  # <__main__.MyList1 object at 0x0000000006D0AA20>
The last two lines are not converted by sympify: our class is obviously not known when sympifying a string, and putting the class into locals did not work for me either.
Is there a solution that converts such strings?
The approach from "Need sympy function for log2(x) capable of being used in sympy.solve" did not work for me.
"SymPy: Safely parsing strings" was not solved.
Also: in https://stackoverflow.com/a/58487317/5626139, the class derives from Function rather than object. Which one should be used?
I think it did what you wanted... MyList1 doesn't define a printing method, but the data is there:
>>> list(sympify('MyList1()', locals=local_dict))
[1, 2]
I would consider this to be a bug in SymPy: it isn't calling _sympy_ when it constructs an object from a string. You can work around it by calling sympify() twice, as in sympify(sympify('MyList1()', locals=local_dict)). Under normal operation sympify() is idempotent, so there is no harm in doing this.
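A minimal sketch of that workaround, reusing the class from the question (the variable name inner is only for illustration):

from sympy import Matrix, sympify

class MyList1(object):
    def __iter__(self):
        yield 1
        yield 2

    def __getitem__(self, i):
        return list(self)[i]

    def _sympy_(self):
        return Matrix(self)

local_dict = {"MyList1": MyList1}

inner = sympify('MyList1()', locals=local_dict)  # still a plain MyList1 instance
print(sympify(inner))                            # Matrix([[1], [2]]), via _sympy_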
Related
Let's say we have a class which has an instance method that accepts another instance of that class, and then returns a new instance of that class.
An example of this type of class is an integer. It has the __mul__ method, which accepts another integer and returns an integer, which is the product of both numbers.
Here's the problem. I have a class that implements a method like __mul__. I have a list of instances of this class, and I want to apply the aforementioned method of the last object to the object before it, then take the result of that and apply it to the one before it, etc., until we have processed the entire list, and have ourselves one object.
A concrete example looks like this. Imagine we have a list of objects...
my_objs = [do, re, me, fa, so, la, te, do]
... And imagine they have the "combine" method, which follows the pattern outlined above, and we want to apply the procedure I outlined to it. You might think of it like this ...
my_objs_together = do.combine(re.combine(me.combine(fa.combine(so.combine(la.combine(te.combine(do)))))))
That's pretty gnarly, obviously. This makes me want to write a generic function like this...
def together(list_of_objects, method_name):
    combined = list_of_objects[0]
    for obj in list_of_objects[1:]:
        combined = getattr(combined, method_name)(obj)
    return combined
...But it occurs to me that there's likely already a standard library function that does this, right?
It's reduce! (I was in the middle of writing the question when I found it :/)
https://docs.python.org/2/library/functions.html#reduce
Apply function of two arguments cumulatively to the items of iterable,
from left to right, so as to reduce the iterable to a single value.
For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates
((((1+2)+3)+4)+5).
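As a sketch of how this would look for the combine example above (the Note class is a made-up stand-in, since the question doesn't show the real class):

from functools import reduce  # built in on Python 2, in functools on Python 3

class Note(object):
    # Toy stand-in for the objects in the question.
    def __init__(self, name):
        self.name = name

    def combine(self, other):
        return Note(self.name + other.name)

my_objs = [Note(s) for s in ("do", "re", "me", "fa", "so", "la", "te", "do")]

# Folds left to right, just like the hand-written together() above.
my_objs_together = reduce(lambda acc, obj: acc.combine(obj), my_objs)
print(my_objs_together.name)  # doremefasolatedo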
Does the Python language have a built-in function for an analog of map that sends an argument to a sequence of functions, rather than a function to a sequence of arguments?
Plain map would have "type" (thinking like Haskell) (a -> b) -> [a] -> [b]; is there anything with the corresponding type a -> [(a -> b)] -> [b]?
I could implement this in a number of ways. Here's one using a lambda:
def rev_map(x, seq):
    evaluate_yourself_at_x = lambda f: f(x)
    return map(evaluate_yourself_at_x, seq)

rev_map([1, 2], [sum, len, type])
which gives [3, 2, list] (in Python 2; in Python 3, map returns an iterator, so wrap the call in list()).
I'm just curious if this concept of "induce a function to evaluate itself at me" has a built-in or commonly used form.
One motivation for me is thinking about dual spaces in functional analysis, where a space of elements which used to be conceived of as arguments passed to functions is suddenly conceived of as a space of elements which are functions whose operation is to induce another function to be evaluated at them.
You could think of a function like sin as being an infinite map from numbers to numbers, you give sin a number, sin gives you some associated number back, like sin(3) or something.
But then you could also think of the number 3 as an infinite map from functions to numbers, you give 3 a function f and 3 gives you some associated number, namely f(3).
I'm finding cases where I'd like some efficient syntax to suddenly view "arguments" or "elements" as "function-call-inducers" but most things, e.g. my lambda approach above, seem clunky.
Another thought I had was to write wrapper classes for the "elements" where this occurs. Something like:
from __future__ import print_function

class MyFloat(float):
    def __call__(self, f):
        return f(self)

m = MyFloat(3)
n = MyFloat(2)

MyFloat(m + n)(type)
MyFloat(m + n)(print)
which will print __main__.MyFloat and 5.0.
But this requires a lot of overhead to redefine data model operators and so on, and clearly it's not a good idea to push around your own version of very basic things like float, which will be ubiquitous in most programs. It's also easy to get it wrong; continuing the example above, doing this:
# Will result in a recursion error.
MyFloat(3)(MyFloat(4))
There is no built-in function for that, simply because it's not a commonly used concept, and Python is not designed primarily for solving mathematical problems.
As for the implementation, here's the shortest one you can get, IMHO:
rev_map = lambda x, seq: [f(x) for f in seq]
Note that the list comprehension is so short and easy that wrapping it with a function seems to be unnecessary in the first place.
Let's assume that we have a function f and an operator L. In this case, it can be something simple, like
L[f](x) = \sum_{k=1}^{4} f(x+k)
My main objective is to compute compositions of operators, like L above, using SymPy. SymPy has no problem handling compositions of functions, but we can quickly see that there is going to be a problem with the operator above.
For example, I can define it as,
class L(Function):
    @classmethod
    def eval(cls, f, x):
        k = Symbol('k')
        return summation(f(k + x), (k, 1, 4))
And this indeed computes L[f] but returns an evaluated object that is no longer a function of x, so computing L[L[f]] no longer makes sense.
Is there a way in sympy to convert what L returns to be a function of x? I think that would solve the problem, since then I would be able to re-apply L on the new object.
Thanks for your time.
This question had a simple answer after all: SymPy's Lambda does the trick in this case, and then I can re-apply L after the evaluation is done.
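A minimal sketch of that idea, assuming a plain helper function in place of the Function subclass from the question (names are illustrative):

from sympy import Lambda, Symbol, summation, sin

x = Symbol('x')
k = Symbol('k')

def L(f):
    # Wrap the evaluated sum in a Lambda, so the result is again a function of x.
    return Lambda(x, summation(f(x + k), (k, 1, 4)))

g = L(sin)     # Lambda(x, sin(x + 1) + sin(x + 2) + sin(x + 3) + sin(x + 4))
h = L(L(sin))  # composing works because L(sin) is itself callable
print(h(x))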
I really like the syntax of the "magic methods" or whatever they are called in Python, like
class foo:
    def __add__(self, other):  # It can be called like c = a + b
        pass
The call
c = a + b
is then translated to
a.__add__(b)
Is it possible to mimic such behaviour for "non-magic" functions? In numerical computations I need the Kronecker product, and am eager to have a "kron" function such that
kron(a,b)
is in fact
a.kron(b)?
The use case is: I have two similar classes, say, matrix and vector, both having a Kronecker product. I would like to call it like this:
a = matrix()
b = matrix()
c = kron(a,b)
a = vector()
b = vector()
c = kron(a,b)
The matrix and vector classes are defined in one .py file and thus share a common namespace. So, what is the best (Pythonic?) way to implement functions like the above? Possible solutions:
1) Have one kron() function and do a type check
2) Have different namespaces
3) ?
The Python default operator methods (__add__ and such) are hard-wired; the interpreter looks them up whenever the corresponding operator is used.
However, there is nothing stopping you from defining a kron function that does the same thing; look for __kron__ or __rkron__ on the objects passed to it:
def kron(a, b):
    if hasattr(a, '__kron__'):
        return a.__kron__(b)
    if hasattr(b, '__rkron__'):
        return b.__rkron__(a)
    # Default kron implementation here
    return complex_operation_on_a_and_b(a, b)
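For illustration, a class could opt in by defining __kron__, and the kron() sketched above then dispatches to it (a toy example; the actual Kronecker computation is omitted):

class Matrix(object):
    def __init__(self, label):
        self.label = label

    def __kron__(self, other):
        # A real implementation would compute the Kronecker product here.
        return Matrix('kron(%s, %s)' % (self.label, other.label))

a = Matrix('a')
b = Matrix('b')
print(kron(a, b).label)  # kron(a, b), dispatched through a.__kron__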
What you're describing is multiple dispatch or multimethods. Magic methods is one way to implement them, but it's actually more usual to have an object that you can register type-specific implementations on.
For example, http://pypi.python.org/pypi/multimethod/ will let you write
@multimethod(matrix, matrix)
def kron(lhs, rhs):
    pass

@multimethod(vector, vector)
def kron(lhs, rhs):
    pass
It's quite easy to write a multimethod decorator yourself; the BDFL describes a typical implementation in an article. The idea is that the multimethod decorator associates the type signature and method with the method name in a registry, and replaces the method with a generated method that performs type lookup to find the best match.
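A simplified sketch of that registry idea (this is not the API of the PyPI package above; it only does exact type matching, with no inheritance handling or caching):

_registry = {}

def multimethod(*types):
    def register(func):
        # Associate the type signature with the implementation under the function's name.
        table = _registry.setdefault(func.__name__, {})
        table[types] = func

        def dispatcher(*args):
            impl = table.get(tuple(type(arg) for arg in args))
            if impl is None:
                raise TypeError("no matching implementation for %r" % (args,))
            return impl(*args)

        dispatcher.__name__ = func.__name__
        return dispatcher
    return register

@multimethod(int, int)
def kron(a, b):
    return a * b

@multimethod(str, str)
def kron(a, b):
    return a + b

print(kron(2, 3))      # 6
print(kron('x', 'y'))  # xy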
Technically speaking, implementing something similar to the "standard" operator (and operator-like - think len() etc) behaviour is not difficult:
def kron(a, b):
    if hasattr(a, '__kron__'):
        return a.__kron__(b)
    elif hasattr(b, '__kron__'):
        return b.__kron__(a)
    else:
        raise TypeError("your error message here")
Now you just have to add a __kron__(self, other) method on the relevant types (assuming you have control over these types or they don't use slots or whatever else that would prevent adding methods outside the class statement's body).
Now I'd not use a __magic__ naming scheme as in my above snippet since this is supposed to be reserved for the language itself.
Another solution would be to maintain a type-to-function mapping and have the "generic" kron function look it up, i.e.:
# kron.py
from somewhere import Matrix, Vector

def matrix_kron(a, b):
    pass  # matrix-specific code here

def vector_kron(a, b):
    pass  # vector-specific code here

KRON_IMPLEMENTATIONS = {
    Matrix: matrix_kron,
    Vector: vector_kron,
}

def kron(a, b):
    for typ in (type(a), type(b)):
        implementation = KRON_IMPLEMENTATIONS.get(typ)
        if implementation:
            return implementation(a, b)
    raise TypeError("your message here")
This solution doesn't work well with inheritance, but it is "less surprising": it requires neither monkeypatching nor __magic__ names, etc.
I think having one single function that delegates the actual computation is a nice way to do it. If the Kronecker product only works on two similar classes, you can even do the type checking in the function:
def kron(a, b):
    if type(a) != type(b):
        raise TypeError('expected two instances of the same class, got %s and %s' % (type(a), type(b)))
    return a._kron_(b)
Then, you just need to define a _kron_ method on the class. This is only a basic example; you might want to improve it to handle the cases where a class doesn't have the _kron_ method more gracefully, or to handle subclasses.
Binary operations in the standard library usually have a reverse dual (__add__ and __radd__), but since your operator only works on same-type objects, that isn't useful here.
class foo(object):
    def __init__(self, f):
        self.f = f

    def __call__(self, args_list):
        def wrapped_f(args_list):
            return [self.f(*args) for args in args_list]
        return wrapped_f(args_list)

if __name__ == '__main__':
    class abc(object):
        @foo
        def f(a, b, c):
            return a + b + c

    a = range(5)
    b = range(5)
    c = range(5)
    data = list(zip(a, b, c))
    print(abc.f(data))
I wrote this a few years back. When you decorate any function f(X) with @foo, it becomes
f(list of Xs).
What is this process called? What is it? What is its functional programming name?
It's not currying. I know a simple map(f, list of Xs) could have done it.
What are decorators/operation of decorating called mathematically?
There are two transformations performed on your original function:
it is converted from a function of three arguments into a function that takes a single 3-tuple
it is then converted from a function of a 3-tuple into a function that takes a list of 3-tuples
First transformation
In Haskell, there is a function called uncurry, documented here. (This is a two-argument version; 3-, 4-, ... versions could be easily created, too).
Second transformation
Also in Haskell, there are sets of functions with lift in their names. Here's a page on the Haskell wiki about lifting. I think that page explains it better than I could:
Lifting is a concept which allows you to transform a function into a
corresponding function within another (usually more general) setting.
So in your case, you're lifting a function from operating on tuples to operating on a list of tuples.
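Translated into Python for concreteness (the helper names uncurry3 and lift_to_list are made up for this sketch):

def uncurry3(f):
    # First transformation: f(a, b, c) becomes f((a, b, c))
    return lambda triple: f(*triple)

def lift_to_list(g):
    # Second transformation: g(x) becomes a function over a list of xs
    return lambda xs: [g(x) for x in xs]

def f(a, b, c):
    return a + b + c

lifted = lift_to_list(uncurry3(f))
print(lifted([(0, 0, 0), (1, 2, 3)]))  # [0, 6]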
Notes:
the OP asked for the mathematical name for decorators. I don't know what that would be, but I've heard that Haskell is supposed to be like executable mathematics, so I think Haskell's terminology is a good starting point. YMMV.
the OP asked for the FP name of these processes. Again, I don't know, but I assume that Haskell's terminology is acceptable.
Decorators just have special syntax, but there are no rules about what decorators can return and no mathematical description. They can be any callable, after all.
Your function is just a partially applied starmap:
from functools import partial
from itertools import starmap

def foo(f):
    return partial(starmap, f)
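For example, applied to the function from the question (a sketch; note that starmap returns an iterator, so the result needs a list() where the original decorator returned a list):

@foo
def f(a, b, c):
    return a + b + c

data = list(zip(range(5), range(5), range(5)))
print(list(f(data)))  # [0, 3, 6, 9, 12]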
In a functional language like Haskell, you would do this by partially applying the map function to a function which takes a tuple of arguments, resulting in a function which takes a list of argument tuples. As Jochen Ritzel pointed out in another answer, even in Python you can implement this pretty trivially using functools.partial.
Therefore I suppose this process is called "partial application of map", or some such thing. I'm not aware of any particular name for this special case.
They are simply called decorators. What this one does could be called function chaining or function annotation, but I looked around quite a bit and found no special functional/mathematical name for this process besides those two (chaining/annotation).
PEP Index > PEP 318 -- Decorators for Functions and Methods
On the name 'Decorator'
There's been a number of complaints about the choice of the name
'decorator' for this feature. The major one is that the name is not
consistent with its use in the GoF book [11]. The name 'decorator'
probably owes more to its use in the compiler area -- a syntax tree is
walked and annotated. It's quite possible that a better name may turn
up.