Inequalities as Python parameters

First of all, sorry for the bad title of this question; I simply don't know a better one.
If you have a better one, tell me!
So my problem is that I would like to write a simplex solver in Python myself, to deeply understand how such solvers work.
Therefore, I would like to have something like this in my program:
m.addConstr(x[0] <= 7)
Which basically should add a constraint to my model m.
This works in Gurobi, which is just wonderful because it's so easy to read.
The problem is that x[0] has to be an object for which I can define myself what should happen when it appears in an inequality or equality, right?
I am happy to figure out most of the stuff by myself; I would just like to get an idea of how this works.

It looks like you want to overload the comparison operators of whatever objects you're working with. So if Foo is the class of x[0] in your example, then you could write it like this:
class Foo:
    def __gt__(self, other):
        # construct and return some kind of constraint object
        ...

    def __lt__(self, other):
        # likewise
        ...
These special methods (__gt__, __ge__, __lt__, __le__, __ne__ and __eq__) are called for the left-hand object in a comparison relation. So if you have x > y, then the __gt__ method on x is called with y as an argument.
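To make this concrete, here is a minimal runnable sketch of how a comparison can build a constraint object instead of a bool; Variable and Constraint are hypothetical names for this sketch, not Gurobi's actual classes:

class Constraint:
    def __init__(self, lhs, sense, rhs):
        self.lhs, self.sense, self.rhs = lhs, sense, rhs

    def __repr__(self):
        return f"{self.lhs!r} {self.sense} {self.rhs!r}"


class Variable:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name

    def __le__(self, other):
        # x <= 7 returns a Constraint instead of a bool
        return Constraint(self, "<=", other)

    def __ge__(self, other):
        return Constraint(self, ">=", other)


x = [Variable("x0")]
print(x[0] <= 7)   # prints: x0 <= 7

A model's addConstr method would then simply store the Constraint object it receives.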

I don't think your first concern should be coming up with an elegant input syntax; you should implement the simplex algorithm itself first.
I suggest you handle the input by writing a parser for the two standard file formats for linear programming problems: .lp and .mps.
If you still want to know how to implement proper expression handling in Python, I recommend you have a look at PySCIPOpt, since it does exactly what you want and you can inspect the entire source code.

Related

Where to put a function that acts on two instances of a specific class

This is really a design question, and I would like to know a bit about which design patterns to use.
I have a module, let's say curves.py, that defines a Bezier class. Then I want to write a function intersections which uses a recursive algorithm to find the intersections between two instances of Bezier.
What options do I have for where to put this function? What are some best practices in this case? Currently I have written the function in the module itself (and not as a method of the class).
So currently I have something like:
def intersections(inst1, inst2): ...
class Bezier: ...
and I can call the function by passing two instances:
from curves import Bezier, intersections
a = Bezier()
b = Bezier()
result = intersections(a, b)
However, another option (that I can think of) would be to make intersections a method of the class. In this case I would instead use
a.intersections(b)
For me the first choice makes a bit more sense, since it feels more natural to call intersections(a, b) than a.intersections(b). However, the other option also feels natural in the sense that the function really only acts on Bezier instances, so it feels more encapsulated.
Do you think one of these is better than the other, and in that case, for what reasons? Are there any other design options to use here? Are there any best practices?
As an example, you can compare how the builtin set class does this:
intersection(*others)
set & other & ...
Return a new set with elements common to the set and all others.
So intersection is defined as a regular instance method on the class that takes another (or multiple) sets and returns the intersection, and it can be called as a.intersection(b).
However, due to the standard mechanics of how instance methods work, you can also spell it set.intersection(a, b), and in practice you'll see this quite often since, as you say, it feels more natural.
You can also override the __and__ method so this becomes available as a & b.
In terms of ease of use, putting it on the class is also friendlier, because you can just import the Bezier class and have all associated features available automatically, and they're also discoverable via help(Bezier).
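To tie this back to the question, here is a minimal sketch of the method-based design; this Bezier is a stub (the recursive intersection algorithm is elided), so the names are illustrative only:

class Bezier:
    def __init__(self, control_points):
        self.control_points = control_points

    def intersections(self, other):
        # recursive subdivision would go here; stubbed out
        return []

    # optional operator spelling, mirroring set's __and__
    __and__ = intersections


a = Bezier([(0, 0), (1, 1)])
b = Bezier([(0, 1), (1, 0)])
print(a.intersections(b))          # usual method call
print(Bezier.intersections(a, b))  # "free function" spelling
print(a & b)                       # operator spelling

All three spellings dispatch to the same method, so you don't actually have to choose between the two styles.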

How to "fool" duck typing in Python

Suppose I had a class A:
class A:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def sum(self):
        return self.x + self.y
And I defined a factory method called factory:
def factory(x, y):
    class B: pass
    b = B()
    setattr(b, 'x', x)
    setattr(b, 'y', y)
    B.__name__ = 'A'
    return b
Now, if I do print(type(A(1, 2))) and print(type(factory(1, 2))), they will show that these are different types, and if I try to do factory(1, 2).sum() I'll get an exception. But A.__name__ and type(factory(1, 2)).__name__ are equivalent, and if I do A.sum(factory(1, 2)) I get 3, as if I were calling it on an A. So, my question is this:
What would I need to do here to make factory(1, 2).sum() work without defining sum on B or doing inheritance?
I think you're fundamentally misunderstanding the factory pattern, and possibly getting confused with how interfaces work in Python. Either that, or I am fundamentally confused by the question. Either way, there's some sorting out we need to do.
What would I need to do here to make factory(1, 2).sum() work without defining sum on B or doing inheritance?
Just return an A instead of some other type:
def factory(x, y):
    return A(x, y)
then
print(factory(1,2).sum())
will output 3 as expected. But that's kind of a useless factory... you could just call A(x, y) directly and be done with it!
Some notes:
You typically use a "factory" (or the factory pattern) when you have easily "nameable" types that may be non-trivial to construct. Consider how, when you use scipy.interpolate.interp1d (see here), there's an option kind, which is basically an enum for all the different strategies you might use to do an interpolation. This is, in essence, a factory (but hidden inside the function for ease of use). You could imagine this being standalone: you'd call your "strategy" factory and then pass the result on to the interp1d call. However, doing it inline is a common pattern in Python. Note that these strategies are easy to "name" but somewhat hard to construct in general (you can imagine it would be annoying to have to pass in a function that does linear interpolation as opposed to just saying kind='linear'). That's what makes the factory pattern useful.
If you don't know what A is up front, then it's definitely not the factory pattern you'd want to apply. Furthermore, if you don't know what you're serializing/deserializing, it would be impossible to call it or use it. You'd have to know that, or have some way of inferring it.
Interfaces in Python are not enforced like they are in languages such as Java or C++; that's the spirit of duck typing. If an interface does something like call x.sum(), then it doesn't matter what type x actually is; it just has to have a method called sum(). If it acts like the "sum" duck and quacks like the "sum" duck, then it is the "sum" duck from Python's perspective. It doesn't matter if x is a numpy array or an A; it'll work all the same. In Java/C++, code like that won't compile unless the compiler is absolutely certain that x has the method sum defined. Fortunately Python isn't like that, so you can even define it on the fly (which maybe is what you were trying to do with B). Either way, interfaces are a much different concept in Python than in other mainstream languages.
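To see duck typing in action, here is a self-contained sketch (total and FakeA are invented names for illustration):

class A:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def sum(self):
        return self.x + self.y


class FakeA:
    def sum(self):
        return 3


def total(obj):
    # no isinstance check: anything with a sum() method works
    return obj.sum()


print(total(A(1, 2)))   # 3, using the real A
print(total(FakeA()))   # 3, using a completely unrelated type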
P.S.
But A.__name__ and type(factory(1, 2)).__name__ are equivalent
Of course they are; you explicitly arrange that when you say B.__name__ = 'A'. So I'm not sure what you were trying to get at there...
HTH!

Does this qualify as polymorphism?

I'm trying to get a grasp on the concept of polymorphism and I kind of understand what it is in the context of OOP, but I've read that it can also apply to procedural programming and I don't really understand how.
I know that there are a bunch of methods and operators in Python, for example, which already use polymorphism, but what function that I write would qualify as polymorphic? For example, would the function what_type below be considered polymorphic, because by definition it behaves differently based on the data type of the parameter it takes in? Or is this just a regular example of a function with conditional statements in it?
def what_type(input_var):
    'Returns a statement of the data type of the input variable'
    # If the input variable is a string
    if type(input_var) is str:
        this_type = "This variable is a string!"
    # If it's an integer
    elif type(input_var) is int:
        this_type = "This variable is an integer!"
    # If it's a float
    elif type(input_var) is float:
        this_type = "This variable is a float!"
    # If it's any other type
    else:
        this_type = "I don't know this type!"
    return this_type
Perhaps it could be considered a rather uninteresting example of ad-hoc polymorphism, but it isn't the sort of thing most people have in mind when they talk about polymorphism.
In addition to the OOP-style polymorphism which you seem to be somewhat comfortable with, Python has an interesting protocol-based polymorphism. For example, iterators are objects which implement a __next__() method. You can write polymorphic functions which work with any iterator. For example, if you wanted to wrap two iterators into one which alternates between the two, you could do something like this:
def alternate(source1, source2):
    for x, y in zip(source1, source2):
        yield x
        yield y
This could be applied to strings:
>>> for item in alternate("cat","dog"): print(item)
c
d
a
o
t
g
but it could be equally applied to e.g. two files that are open for reading.
The point of such polymorphic code is you really don't care what type of iterators are passed to it. As long as they are iterators, the code will work as expected.
what_type checks the type of the argument and decides by itself how to handle it.
That is neither polymorphism nor OOP.
In OOP, what_type should not concern itself with the concrete type of the argument; the argument itself should handle the type-specific behavior.
So what_type should be written like:
def what_type(input_var):
    input_var.print_whats_myself()
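For instance, a runnable sketch of that idea (MyString and MyInteger are invented stand-ins for type-specific classes):

class MyString:
    def print_whats_myself(self):
        print("This variable is a string!")


class MyInteger:
    def print_whats_myself(self):
        print("This variable is an integer!")


def what_type(input_var):
    # dispatch happens on the object itself, not via if/elif chains
    input_var.print_whats_myself()


what_type(MyString())    # This variable is a string!
what_type(MyInteger())   # This variable is an integer!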

Working with abstract mathematical Operators/Objects in Sympy

I am wondering if there is an easy way to implement abstract mathematical operators in SymPy. By operators I simply mean objects that take 3 values as input, or that have 3 indices, something like Operator(a, b, c). Please note that I am referring to a mathematical operator (over a Hilbert space) and not an operator in the programming sense. Depending on these values I want to teach SymPy how to multiply two operators of this kind, how to multiply one with a float, and so on. At some point of the calculation I want to replace these operators with some others...
So far I couldn't figure out whether SymPy provides such abstract calculation. Therefore I started to write a new Python class for these objects, but this quickly went beyond the scope of my limited Python knowledge... Is there an easier way to implement this than creating a new class?
You may also want to look at SageMath, in addition to SymPy, since Sage goes to great lengths to come with prebuilt mathematical structures. However, it has been developed more with an eye toward algebraic geometry, various areas of algebra, and combinatorics. I'm not sure to what extent it implements any operator algebra.
Yes, you can do this. Just create a subclass of sympy.Function. You can specify the number of arguments with nargs:
from sympy import Function

class Operator(Function):
    nargs = 3
If you want the function to evaluate for certain arguments, define the class method eval. It should return None when the expression should remain unevaluated. For instance, to evaluate to 0 when all three arguments are 0, you might use
class Operator(Function):
    @classmethod
    def eval(cls, a, b, c):
        if a == b == c == 0:
            return Integer(0)
(note that nargs is not required if you define eval).
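A quick usage sketch, repeating the eval variant above so it runs standalone (assumes a standard SymPy install):

from sympy import Function, Integer, symbols

class Operator(Function):
    @classmethod
    def eval(cls, a, b, c):
        if a == b == c == 0:
            return Integer(0)

a, b, c = symbols('a b c')
print(Operator(0, 0, 0))   # 0 -- eval fired
print(Operator(a, b, c))   # Operator(a, b, c) -- stays unevaluated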

The advantages of having static functions like len(), max(), and min() over inherited method calls

I am a Python newbie, and I am not sure why Python implemented len(obj), max(obj), and min(obj) as static-like functions (I am coming from Java) instead of obj.len(), obj.max(), and obj.min().
What are the advantages and disadvantages (other than the obvious inconsistency) of having len() and friends over the method calls?
Why did Guido choose this over method calls? (This could have been changed in Python 3 if needed, but it wasn't, so there must be good reasons... I hope.)
Thanks!
The big advantage is that built-in functions (and operators) can apply extra logic when appropriate, beyond simply calling the special methods. For example, min can look at several arguments and apply the appropriate inequality checks, or it can accept a single iterable argument and proceed similarly; abs, when called on an object without a special method __abs__, could try comparing said object with 0 and using the object's change-sign method if needed (though it currently doesn't); and so forth.
So, for consistency, all operations with wide applicability must always go through built-ins and/or operators, and it's those built-ins' responsibility to look up and apply the appropriate special methods (on one or more of the arguments), use alternate logic where applicable, and so forth.
An example where this principle wasn't correctly applied (but the inconsistency was fixed in Python 3) is "stepping an iterator forward": in 2.5 and earlier, you needed to define and call the non-specially-named next method on the iterator. In 2.6 and later you can do it the right way: the iterator object defines the stepping method (renamed to __next__ in Python 3), and the new next built-in calls it and can apply extra logic, for example to supply a default value (in 2.6 you can still do it the bad old way, for backwards compatibility, though in 3.x you can't any more).
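That extra logic is easy to see with the default-value form of the built-in:

it = iter([1])
print(next(it))         # 1 -- calls the iterator's __next__() under the hood
print(next(it, None))   # None -- default supplied instead of raising StopIteration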
Another example: consider the expression x + y. In a traditional object-oriented language (able to dispatch only on the type of the leftmost argument -- like Python, Ruby, Java, C++, C#, &c) if x is of some built-in type and y is of your own fancy new type, you're sadly out of luck if the language insists on delegating all the logic to the method of type(x) that implements addition (assuming the language allows operator overloading;-).
In Python, the + operator (and similarly of course the builtin operator.add, if that's what you prefer) tries x's type's __add__, and if that one doesn't know what to do with y, then tries y's type's __radd__. So you can define your types that know how to add themselves to integers, floats, complex, etc etc, as well as ones that know how to add such built-in numeric types to themselves (i.e., you can code it so that x + y and y + x both work fine, when y is an instance of your fancy new type and x is an instance of some builtin numeric type).
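A small sketch of that two-sided dispatch (Money is an invented toy type, not a library class):

class Money:
    def __init__(self, amount):
        self.amount = amount

    def __add__(self, other):    # handles Money + number
        return Money(self.amount + other)

    def __radd__(self, other):   # handles number + Money
        return Money(other + self.amount)

    def __repr__(self):
        return f"Money({self.amount})"


print(Money(5) + 2)   # Money(7), via Money.__add__
print(2 + Money(5))   # Money(7): int.__add__ gives up, so Money.__radd__ runs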
"Generic functions" (as in PEAK) are a more elegant approach (allowing any overriding based on a combination of types, never with the crazy monomaniac focus on the leftmost arguments that OOP encourages!-), but (a) they were unfortunately not accepted for Python 3, and (b) they do of course require the generic function to be expressed as free-standing (it would be absolutely crazy to have to consider the function as "belonging" to any single type, where the whole POINT is that can be differently overridden/overloaded based on arbitrary combination of its several arguments' types!-). Anybody who's ever programmed in Common Lisp, Dylan, or PEAK, knows what I'm talking about;-).
So, free-standing functions and operators are just THE right, consistent way to go (even though the lack of generic functions, in bare-bones Python, does remove some fraction of the inherent elegance, it's still a reasonable mix of elegance and practicality!-).
It emphasizes the capabilities of an object, not its methods or type. Capabilities are declared by "helper" methods such as __iter__ and __len__, but they don't make up the interface. The interface is in the built-in functions, and besides these also in the built-in operators like + and [] for indexing and slicing.
Sometimes it is not a one-to-one correspondence: for example, iter(obj) returns an iterator for an object and will work even if __iter__ is not defined. If it is not defined, iter checks whether the object defines __getitem__ and, if so, returns an iterator that accesses the object index-wise (like an array).
This goes together with Python's duck typing: we care only about what we can do with an object, not whether it is of a particular type.
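The __getitem__ fallback is easy to demonstrate (Squares is an invented example class):

class Squares:
    # no __iter__ defined, only index-wise access
    def __getitem__(self, index):
        if index >= 5:
            raise IndexError
        return index * index


print(list(iter(Squares())))   # [0, 1, 4, 9, 16]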
Actually, those aren't "static" methods in the way you are thinking about them. They are built-in functions that really just delegate to certain special methods on Python objects that implement them.
>>> class Foo(object):
...     def __len__(self):
...         return 42
...
>>> f = Foo()
>>> len(f)
42
The built-in names are always available, whether or not a given object supports them. The point is to have some consistency: instead of one class having a method called length() and another called size(), the convention is to implement __len__ and let callers always access it via the more readable len(obj) instead of obj.methodThatDoesSomethingCommon.
I thought the reason was so these basic operations could be done on iterators with the same interface as containers. However, it actually doesn't work with len:
def foo():
    for i in range(10):
        yield i

print(len(foo()))
... fails with TypeError. len() won't consume and count an iterator; it only works with objects that have a __len__ method.
So, as far as I'm concerned, len() shouldn't exist. It's much more natural to say obj.len than len(obj), and much more consistent with the rest of the language and the standard library. We don't say append(lst, 1); we say lst.append(1). Having a separate global method for length is an odd, inconsistent special case, and eats a very obvious name in the global namespace, which is a very bad habit of Python.
This is unrelated to duck typing; you can say getattr(obj, "len") to decide whether you can use len on an object just as easily--and much more consistently--than you can use getattr(obj, "__len__").
All that said, as language warts go--for those who consider this a wart--this is a very easy one to live with.
On the other hand, min and max do work on iterators, which gives them a use apart from any particular object. This is straightforward, so I'll just give an example:
import random

def foo():
    for i in range(10):
        yield random.randint(0, 100)

print(max(foo()))
However, there are no __min__ or __max__ methods to override its behavior, so there's no consistent way to provide efficient searching for sorted containers. If a container is sorted on the same key that you're searching, min/max are O(1) operations instead of O(n), and the only way to expose that is by a different, inconsistent method. (This could be fixed in the language relatively easily, of course.)
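For illustration, here is what such an ad-hoc method looks like (SortedList is an invented example, not a standard class):

class SortedList:
    def __init__(self, items):
        self._items = sorted(items)

    def min(self):
        return self._items[0]    # O(1): first element is smallest

    def max(self):
        return self._items[-1]   # O(1): last element is largest


s = SortedList([5, 2, 9])
print(s.min(), s.max())   # 2 9
print(min(s._items))      # the builtin still scans every element: O(n)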
To follow up with another issue with this: it prevents use of Python's method binding. As a simple, contrived example, you can do this to supply a function to add values to a list:
def add(f):
    f(1)
    f(2)
    f(3)

lst = []
add(lst.append)
print(lst)
and this works on all member functions. You can't do that with min, max or len, though, since they're not methods of the object they operate on. Instead, you have to resort to functools.partial, a clumsy second-class workaround common in other languages.
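For completeness, the functools.partial workaround looks something like this:

from functools import partial

lst = [3, 1, 2]
length_of_lst = partial(len, lst)   # stands in for a bound "lst.len"
print(length_of_lst())              # 3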
Of course, this is an uncommon case; but it's the uncommon cases that tell us about a language's consistency.
