Python -- Optimize system of inequalities

I am working on a program in Python in which a small part involves optimizing a system of equations / inequalities. Ideally, I would have liked to do what can be done in Modelica: write out the equations and let the solver take care of them.
The operation of solvers and linear programming is a bit out of my comfort zone, but I decided to try anyway. The problem is that the general design of the program is object-oriented, and there are many possible combinations that can form the equations, as well as some non-linearities, so I have not been able to translate this into a linear programming problem (but I might be wrong).
After some research I found that the Z3 solver seemed to do what I wanted. I came up with the following (a typical case of what I would like to optimize):
from z3 import *
a = Real('a')
b = Real('b')
c = Real('c')
d = Real('d')
e = Real('e')
g = Real('g')
f = Real('f')
cost = Real('cost')
opt = Optimize()
opt.add(a + b - 350 == 0)
opt.add(a - g == 0)
opt.add(c - 400 == 0)
opt.add(b - d * 0.45 == 0)
opt.add(c - f - e - d == 0)
opt.add(d <= 250)
opt.add(e <= 250)
opt.add(cost == If(f > 0, f * 50, f * 0.4) + e * 40 + d * 20 +
If(g > 0, g * 50, g * 0.54))
h = opt.minimize(cost)
opt.check()
opt.lower(h)
opt.model()
Now this works, and gives me the result I want, despite not being extremely fast (I need to solve such systems several thousand times).
But I am not sure I am using the right tool for the job (Z3 is a "theorem prover").
The API is basically exactly what I need, but I would be curious if other packages allow a similar syntax. Or should I try to formulate the problem in a different way to allow a standard LP approach? (although I have no idea how)

Z3 is the most powerful solver I have found for such flexible systems of equations, and it is an excellent choice now that it is released under the MIT license.
There are a lot of different types of tools with overlapping use cases. You mentioned linear programming -- there are also theorem provers, SMT solvers, and many other types of tools. Although Z3 is marketed as a theorem prover, it is better described as an SMT solver. At the moment, SMT solvers are leading the pack for the flexible and automated solution of coupled algebraic equations and inequalities over the booleans, reals, and integers, and in the world of SMT solvers, Z3 is king. Take a look at the results of the latest SMT-COMP if you want evidence of this. That being said, if your equations are all linear, then you might also find better performance with CVC4. It doesn't hurt to shop around.
If your equations have a very controlled form (for example, minimize some function subject to some constraints) then you might be able to get better performance using a numerical library such as GSL or NAG. However, if you really need the flexibility, then I doubt you are going to find a better tool than Z3.

The best solution will probably be to use an ILP solver. Your problem can be formulated as an integer linear programming (ILP) instance. There are many ILP solvers, and some might perform better than Z3. For only 7 variables, any decent ILP solver should find a solution very rapidly.
The only tricky bit is the conditional expressions (If(...)). However, as @Erwin Kalvelagen suggests, the conditionals can be handled using variable splitting. For instance, introduce variables fplus and fminus, with the constraints f = fplus - fminus, fplus >= 0, and fminus >= 0. Now you can replace If(f > 0, f * 50, f * 0.4) with 50 * fplus - 0.4 * fminus. In this case, that will be equivalent.
Variable splitting doesn't always work. You have to think about whether it might introduce spurious solutions (where both fplus > 0 and fminus > 0). In this case, though, a spurious split will never be optimal -- one can show that the optimal solution never has both fplus > 0 and fminus > 0. Consequently, variable splitting works fine here.
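To make the substitution concrete, here is a minimal sketch of the split in the question's Z3 syntax (fplus and fminus are illustrative names I introduced; the same trick carries over directly to an LP/ILP formulation):
from z3 import Real, Optimize

f = Real('f')
fplus = Real('fplus')    # non-negative "positive part" of f
fminus = Real('fminus')  # non-negative "negative part" of f

opt = Optimize()
opt.add(f == fplus - fminus)
opt.add(fplus >= 0)
opt.add(fminus >= 0)
# linear replacement for If(f > 0, f * 50, f * 0.4):
cost_f = 50 * fplus - 0.4 * fminus
Because increasing fplus and fminus together by the same amount adds 50 - 0.4 = 49.6 per unit to the cost, the minimizer never chooses a split with both strictly positive, which is why the spurious solutions mentioned above are never optimal.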
If you have a situation where you do have conditional statements but variable splitting doesn't work, you can often use the techniques at https://cs.stackexchange.com/q/12102/755 to formulate the problem as an instance of ILP.


Automatic reduction and cancellation of units with SymPy

I am having issues with getting units to cleanly simplify. The program I am writing deals with the calculation of spring properties, requiring 10 of 11 user-controlled variables and about a dozen equations, all of which involve mixed sets of units. The one missing variable can be any of the non-material properties (one of five), so I am trying to use symbolic equations that can solve for whichever variable is missing. I tried setting the variables with pint, which reduced properly and did not have this problem, but those quantities could not be sympified properly, so I had to switch back to SymPy's unit system.
Here is some code that demonstrates the problem:
from sympy.physics.units import *
from sympy import pi, sqrt, N, Eq, symbols, solve
lbf=Quantity('lbf', abbrev='lbf')
lbf.set_global_relative_scale_factor((convert_to(pound*acceleration_due_to_gravity,newton))/newton, newton)
F, Y, L, A, m, ns, xi, d = symbols('F Y L A m ns xi d')
ssy, alpha, beta, C = symbols('ssy alpha beta C')
F= 20*lbf
Y= 2*inch
L= 3.25*inch
d= .08*inch
m= .145
A= 201.00
A*=kilo*psi*inch**m
ns= 1.20
xi= .15
eqSsy=Eq(ssy,.45*A/(d**m))
ssy=solve(eqSsy)[0]
eqAlpha=Eq(alpha,eqSsy.rhs/ns)
alpha=solve(eqAlpha)[0]
eqBeta=Eq(beta,(8*(1+xi)*F)/(N(pi)*(d**2)))
beta=solve(eqBeta)[0]
eqC=Eq(C,((2*eqAlpha.rhs-eqBeta.rhs)/(4*eqBeta.rhs))+sqrt((((2*eqAlpha.rhs-eqBeta.rhs)/(4*eqBeta.rhs))**2)-(3*eqAlpha.rhs)/(4*eqBeta.rhs)))
C=solve(eqC)[0]
print(ssy, '\n', alpha, '\n', beta, '\n', C)
This issue is not related to the lbf unit I had to create; it still happened when I was using the raw units, before I cleaned the code up. It leads to C coming out as 1.62e-28*(3.66645442100299e+28*inch**2*psi - 1.54320987654321e+27*lbf + 3.666454421003e+28*sqrt(-0.252539870841386*inch**2*lbf*psi + (inch**2*psi - 0.0420899784735643*lbf)**2))/lbf instead of 10.5334875999498, because none of the units are cancelled during the calculation.
The "fix" that I want to avoid is changing line 27, the creation of eqBeta, to hard-convert the output units to psi so that they do not come out as lbf/inch**2 instead of a pressure unit:
eqBeta=Eq(beta,convert_to((8*(1+xi)*F)/(N(pi)*(d**2)),psi))
Is there any way I can make beta automatically reduce to the appropriate pressure unit? The input values are given through a PyQt5 program, not like in this demo, and they can be given either imperial or metric units, so I don't want to be manually forcing a conversion into psi (or forcibly converting C into being unitless).
I would also appreciate if someone knew of a cleaner or better way to do these calculations, as I have just been bashing my head against SymPy because I haven't found another solution. These equations and variable names are taken from a machine design textbook, and I don't want to have to manually create a step-by-step solving process for each possible missing variable.
You might try listing the base units as the quantities to be reduced to:
>>> convert_to(C, [kg, meter, second]).n(2)
11.0
Or if you don't know which ones to use,
>>> from sympy.physics.units import UnitSystem
>>> sibase = UnitSystem.get_unit_system("SI")._base_units
>>> convert_to(C, sibase).n(2)
11.0
cf this issue

Why is i**2 slower than i*i? [duplicate]

I'm curious as to why it's so much faster to multiply than to take powers in python (though from what I've read this may well be true in many other languages too). For example it's much faster to do
x*x
than
x**2
I suppose the ** operator is more general and can also deal with fractional powers. But if that's why it's so much slower, why doesn't it perform a check for an int exponent and then just do the multiplication?
Edit: Here's some example code I tried...
def pow1(r, n):
    for i in range(r):
        p = i**n

def pow2(r, n):
    for i in range(r):
        p = 1
        for j in range(n):
            p *= i
Now, pow2 is just a quick example and is clearly not optimised!
But even so I find that using n = 2 and r = 1,000,000, then pow1 takes ~ 2500ms and pow2 takes ~ 1700ms.
I admit that for large values of n, then pow1 does get much quicker than pow2. But that's not too surprising.
Basically, naive multiplication is O(n) with a very low constant factor. Taking the power is O(log n) with a higher constant factor (there are special cases that need to be tested: fractional exponents, negative exponents, etc.). Edit: just to be clear, that's O(n) where n is the exponent.
Of course the naive approach will be faster for small n: you're only really implementing a small subset of exponentiation, so your constant factor is negligible.
Adding a check is an expense, too. Do you always want that check there? A compiled language could make the check for a constant exponent to see if it's a relatively small integer because there's no run-time cost, just a compile-time cost. An interpreted language might not make that check.
It's up to the particular implementation unless that kind of detail is specified by the language.
Python doesn't know what distribution of exponents you're going to feed it. If it's going to be 99% non-integer values, do you want the code to check for an integer every time, making runtime even slower?
Adding such a check to the exponentiation operator would slightly slow down all the cases where the exponent isn't the simple literal 2, so it isn't necessarily a win. However, in cases where the exponent is known in advance (e.g. the literal 2 is used), the bytecode generated could be optimised with a simple peephole optimisation. Presumably this simply hasn't been considered worth doing (it's a fairly specific case).
Here's a quick proof of concept that does such an optimisation (usable as a decorator). Note: you'll need the byteplay module to run it.
import byteplay, timeit

def optimise(func):
    c = byteplay.Code.from_code(func.func_code)
    prev = None
    for i, (op, arg) in enumerate(c.code):
        if op == byteplay.BINARY_POWER:
            if c.code[i-1] == (byteplay.LOAD_CONST, 2):
                c.code[i-1] = (byteplay.DUP_TOP, None)
                c.code[i] = (byteplay.BINARY_MULTIPLY, None)
    func.func_code = c.to_code()
    return func

def square(x):
    return x**2

print "Unoptimised :", timeit.Timer('square(10)','from __main__ import square').timeit(10000000)
square = optimise(square)
print "Optimised :", timeit.Timer('square(10)','from __main__ import square').timeit(10000000)
Which gives the timings:
Unoptimised : 6.42024898529
Optimised : 4.52667593956
[Edit]
Actually, thinking about it a bit more, there's a very good reason why this optimisation isn't done. There's no guarantee that someone won't create a user-defined class that overrides the __mul__ and __pow__ methods and does something different for each. The only way to do it safely is if you can guarantee that the object on the top of the stack is one for which "x**2" and "x*x" give the same result, but working that out is much harder. E.g. in my example it's impossible, as any object could be passed to the square function.
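As a toy illustration of that point (my own sketch, not part of the original answer), here is a class whose __pow__ deliberately disagrees with repeated __mul__, so rewriting x**2 into x*x would silently change the result:
class Weird(object):
    def __init__(self, v):
        self.v = v
    def __mul__(self, other):
        return Weird(self.v * other.v)
    def __pow__(self, n):
        return Weird(self.v + n)  # deliberately NOT repeated multiplication

x = Weird(10)
print((x * x).v)   # 100
print((x ** 2).v)  # 12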
An implementation of b^p with binary exponentiation
def power(b, p):
    """
    Calculates b^p
    Complexity O(log p)
    b -> double
    p -> integer
    res -> double
    """
    res = 1
    while p:
        if p & 0x1:
            res *= b
        b *= b
        p >>= 1
    return res
I'd suspect that nobody was expecting this to be all that important. Typically, if you want to do serious calculations, you do them in Fortran or C or C++ or something like that (and perhaps call them from Python).
Treating everything as exp(n * log(x)) works well in cases where n isn't integral or is pretty large, but is relatively inefficient for small integers. Checking to see if n is a small enough integer does take time, and adds complication.
Whether the check is worth it depends on the expected exponents, how important it is to get best performance here, and the cost of the extra complexity. Apparently, Guido and the rest of the Python gang decided the check wasn't worth doing.
If you like, you could write your own repeated-multiplication function.
How about x*x*x*x*x?
Is it still faster than x**5?
As int exponents get larger, taking powers might become faster than repeated multiplication.
But the exponent where the actual crossover occurs depends on various conditions, so in my opinion that's why the optimization was not done (or couldn't be done) at the language/library level. Users can still optimize for some special cases :)
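If you want to check where the crossover lies on your own interpreter, a quick (machine- and version-dependent) timing sketch:
import timeit

setup = 'x = 777'
print(timeit.timeit('x*x*x*x*x', setup=setup, number=1000000))
print(timeit.timeit('x**5', setup=setup, number=1000000))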

Right matrix division in Scipy/NumPy? [duplicate]

I have this line of MATLAB code:
a/b
I am using these inputs:
a = [1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9]
b = ones(25, 18)
This is the result (a 1x25 matrix):
[5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
What is MATLAB doing? I am trying to duplicate this behavior in Python, and the mrdivide documentation in MATLAB was unhelpful. Where does the 5 come from, and why are the rest of the values 0?
I have tried this with other inputs and receive similar results, usually just a different first element and zeros filling the remainder of the matrix. In Python when I use linalg.lstsq(b.T,a.T), all of the values in the first matrix returned (i.e. not the singular one) are 0.2. I have already tried right division in Python and it gives something completely off with the wrong dimensions.
I understand what a least square approximation is, I just need to know what mrdivide is doing.
Related:
Array division- translating from MATLAB to Python
MRDIVIDE or the / operator actually solves the xb = a linear system, as opposed to MLDIVIDE or the \ operator which will solve the system bx = a.
To solve a system xb = a with a non-symmetric, non-invertible matrix b, you can either rely on mrdivide(), which is done via factorization of b with Gauss elimination, or pinv(), which is done via singular value decomposition and zeroing of the singular values below a (default) tolerance level.
Here is the difference (for the case of mldivide): What is the difference between PINV and MLDIVIDE when I solve A*x=b?
When the system is overdetermined, both algorithms provide the same answer. When the system is underdetermined, PINV will return the solution x that has the minimum norm (min NORM(x)); MLDIVIDE will pick the solution with the least number of non-zero elements.
In your example:
% solve xb = a
a = [1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9];
b = ones(25, 18);
the system is underdetermined, and the two different solutions will be:
x1 = a/b; % MRDIVIDE: sparsest solution (min L0 norm)
x2 = a*pinv(b); % PINV: minimum norm solution (min L2)
>> x1 = a/b
Warning: Rank deficient, rank = 1, tol = 2.3551e-014.
ans =
5.0000 0 0 ... 0
>> x2 = a*pinv(b)
ans =
0.2 0.2 0.2 ... 0.2
In both cases the approximation error of xb-a is non-negligible (non-exact solution) and the same, i.e. norm(x1*b-a) and norm(x2*b-a) will return the same result.
What is MATLAB doing?
A great break-down of the algorithms (and checks on properties) invoked by the '\' operator, depending upon the structure of matrix b, is given in this post on scicomp.stackexchange.com. I am assuming similar options apply for the / operator.
For your example, MATLAB is most probably doing Gaussian elimination, giving the sparsest solution amongst an infinitude of solutions (that's where the 5 comes from).
What is Python doing?
Python's linalg.lstsq uses the pseudo-inverse/SVD, as demonstrated above (that's why you get a vector of 0.2's). In effect, each of the following will give you the same result as MATLAB's pinv():
from numpy import *
a = array([1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9])
b = ones((25, 18))
# xb = a: solve b.T x.T = a.T instead
x2 = linalg.lstsq(b.T, a.T)[0]
x2 = dot(a, linalg.pinv(b))
TL;DR: A/B = np.linalg.solve(B.conj().T, A.conj().T).conj().T
I did not find that the earlier answers provided a satisfactory substitute, so I dug further into MATLAB's reference documentation for mrdivide and found the solution. I cannot explain the actual mathematics here or take credit for coming up with the answer; I'm just following MATLAB's explanation. Additionally, I wanted to post the actual detail from MATLAB to give credit. If it's a copyright issue, someone tell me and I'll remove the actual text.
%/ Slash or right matrix divide.
% A/B is the matrix division of B into A, which is roughly the
% same as A*INV(B) , except it is computed in a different way.
% More precisely, A/B = (B'\A')'. See MLDIVIDE for details.
%
% C = MRDIVIDE(A,B) is called for the syntax 'A / B' when A or B is an
% object.
%
% See also MLDIVIDE, RDIVIDE, LDIVIDE.
% Copyright 1984-2005 The MathWorks, Inc.
Note that the ' symbol indicates the complex conjugate transpose. In python using numpy, that requires .conj().T chained together.
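Putting that formula into a small helper (a sketch of my own, not MathWorks code; note that np.linalg.lstsq returns the minimum-norm solution when B is rank deficient, i.e. the pinv-style answer rather than MATLAB's sparse one):
import numpy as np

def mrdivide(A, B):
    # A/B = (B'\A')': solve B.conj().T @ X = A.conj().T, then transpose back
    X = np.linalg.lstsq(B.conj().T, A.conj().T, rcond=None)[0]
    return X.conj().T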
Per this handy "cheat sheet" of NumPy for MATLAB users, the equivalent is linalg.lstsq(b, a) -- here linalg is numpy.linalg, a light-weight version of the full scipy.linalg.
a/b finds the least squares solution to the system of linear equations xb = a.
If b is invertible, this is a*inv(b); if it isn't, it is the x which minimises norm(xb - a).
You can read more about least squares on wikipedia.
According to the MATLAB documentation, mrdivide will return at most k non-zero values, where k is the computed rank of b. My guess is that in your case MATLAB solves the least squares problem obtained by replacing b with b(1,:) (which has the same rank). In this case the Moore-Penrose inverse b2 = b(1,:); inv(b2*b2')*b2*a' is defined and gives the same answer.
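A quick NumPy check of that last claim, using the question's inputs (my own sketch):
import numpy as np

a = np.array([1,2,3,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9], dtype=float)
b = np.ones((25, 18))
b2 = b[0:1, :]                          # b(1,:) in MATLAB, same rank as b
x = np.linalg.inv(b2 @ b2.T) @ b2 @ a   # inv(b2*b2')*b2*a'
print(x)                                # [5.] -- the non-zero entry of a/b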

MATLAB matrix power algorithm

I'm looking to port an algorithm from MATLAB to Python. One step in said algorithm involves taking A^(-1/2), where A is a 9x9 square complex matrix. As I understand it, square roots of matrices (and by extension their inverses) are not unique.
I've been experimenting with scipy.linalg.fractional_matrix_power and an approximation using A^(-1/2) = exp((-1/2)*log(A)) with SciPy's built-in expm and logm functions. The former is exceptionally poor and only provides 3 decimal places of precision, whereas the latter is decently correct for elements in the top left corner but gets progressively worse as you move down and to the right. This may or may not be a perfectly valid mathematical solution to the expression, but it doesn't suffice for this application.
As a result, I'm looking to directly implement MATLAB's matrix power algorithm in Python so that I can 100% confirm the same result each time. Does anyone have any insight or documentation on how this would work? The more parallelizable this algorithm is, the better, as eventually the goal would be to rewrite it in OpenCL for GPU acceleration.
EDIT: An MCVE as requested:
[[(0.591557294607941+4.33680868994202e-19j), (-0.219707725574605-0.35810724986609j), (-0.121305654177909+0.244558388829046j), (0.155552026648172-0.0180264818714123j), (-0.0537690384136066-0.0630740244116577j), (-0.0107526931263697+0.0397896274845627j), (0.0182892503609312-0.00653264433724856j), (-0.00710188853532244-0.0050445035279044j), (-2.20414002823034e-05+0.00373184532662288j)], [(-0.219707725574605+0.35810724986609j), (0.312038814492119+2.16840434497101e-19j), (-0.109433401402399-0.174379997015402j), (-0.0503362231078033+0.108510948023091j), (0.0631826956936223-0.00992931123813742j), (-0.0219902325360141-0.0233215237172002j), (-0.00314837555001163+0.0148621558916679j), (0.00630295247506065-0.00266790359447072j), (-0.00249343102520442-0.00156160619280611j)], [(-0.121305654177909-0.244558388829046j), (-0.109433401402399+0.174379997015402j), (0.136649392858215-1.76182853028894e-19j), (-0.0434623984527311-0.0669251299161109j), (-0.0168737559719828+0.0393768358149159j), (0.0211288536117387-0.00417146769324491j), (-0.00734306979471257-0.00712443264825166j), (-0.000742681625102133+0.00455752452374196j), (0.00179068247786595-0.000862706240042082j)], [(0.155552026648172+0.0180264818714123j), (-0.0503362231078033-0.108510948023091j), (-0.0434623984527311+0.0669251299161109j), (0.0467980890488569+5.14996031930615e-19j), (-0.0140208255975664-0.0209483313237692j), (-0.00472995448413803+0.0117916398375124j), (0.00589653974090387-0.00134198920550751j), (-0.00202109265416585-0.00184021636458858j), (-0.000150793859056431+0.00116822322464066j)], [(-0.0537690384136066+0.0630740244116577j), (0.0631826956936223+0.00992931123813742j), (-0.0168737559719828-0.0393768358149159j), (-0.0140208255975664+0.0209483313237692j), (0.0136137125669776-2.03287907341032e-20j), (-0.00387854073283377-0.0056769786724813j), (-0.0011741038702424+0.00306007798625676j), (0.00144000687517355-0.000355251914809693j), (-0.000481433965262789-0.00042129815655098j)], [(-0.0107526931263697-0.0397896274845627j), (-0.0219902325360141+0.0233215237172002j), (0.0211288536117387+0.00417146769324491j), (-0.00472995448413803-0.0117916398375124j), (-0.00387854073283377+0.0056769786724813j), (0.00347771689075251+8.21621958836671e-20j), (-0.000944046302699304-0.00136521328407881j), (-0.00026318475762475+0.000704212317211994j), (0.00031422288569727-8.10033316327328e-05j)], [(0.0182892503609312+0.00653264433724856j), (-0.00314837555001163-0.0148621558916679j), (-0.00734306979471257+0.00712443264825166j), (0.00589653974090387+0.00134198920550751j), (-0.0011741038702424-0.00306007798625676j), (-0.000944046302699304+0.00136521328407881j), (0.000792908166233942-7.41153828847513e-21j), (-0.00020531962049495-0.000294952695922854j), (-5.36226164765808e-05+0.000145645628243286j)], [(-0.00710188853532244+0.00504450352790439j), (0.00630295247506065+0.00266790359447072j), (-0.000742681625102133-0.00455752452374196j), (-0.00202109265416585+0.00184021636458858j), (0.00144000687517355+0.000355251914809693j), (-0.00026318475762475-0.000704212317211994j), (-0.00020531962049495+0.000294952695922854j), (0.000162971629601464-5.39321759384574e-22j), (-4.03304806590714e-05-5.77159110863666e-05j)], [(-2.20414002823034e-05-0.00373184532662288j), (-0.00249343102520442+0.00156160619280611j), (0.00179068247786595+0.000862706240042082j), (-0.000150793859056431-0.00116822322464066j), (-0.000481433965262789+0.00042129815655098j), (0.00031422288569727+8.10033316327328e-05j), (-5.36226164765808e-05-0.000145645628243286j), (-4.03304806590714e-05+5.77159110863666e-05j), 
(3.04302590501313e-05-4.10281583826302e-22j)]]
I can think of two explanations; in both cases I accuse user error. In chronological order:
Theory #1 (the subtle one)
My suspicion is that you're copying the printed values of the input matrix from one program into the other as input. I.e. you're throwing away double precision when you switch codes, and that loss gets amplified during the inverse-square-root calculation.
As proof, I compared MATLAB's inverse square root with the very function you're using in python. I will show a 3x3 example due to size considerations, but—spoiler warning—I did the same with a 9x9 random matrix and got two results with condition number 11.245754109790719 (MATLAB) and 11.245754109790818 (numpy). That should tell you something about the similarity of the results without having to save and load the actual matrices between the two codes. I suggest you do this though: keywords are scipy.io.loadmat and savemat.
What I did was generate the random data in python (because that's what I prefer):
>>> import numpy as np
>>> print((np.random.rand(3,3) + 1j*np.random.rand(3,3)).tolist())
[[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)], [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)], [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]]
By copying the same truncated output into both codes, I guarantee the correspondence of the inputs.
Example in MATLAB:
>> M = [[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)]; [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)]; [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]];
>> A = M^(-0.5);
>> format long
>> disp(A)
0.922112307438377 + 0.919346397931976i 0.108620882045523 - 0.649850434897895i -0.778737740194425 - 0.320654127149988i
-0.423384022626231 - 0.842737730824859i 0.592015668030645 + 0.661682656423866i 0.529361991464903 - 0.388343838121371i
-0.550789874427422 + 0.021129515921025i 0.472026152514446 - 0.502143106675176i 0.942976466768961 + 0.141839849623673i
>> cond(A)
ans =
3.429368520364765
Example in python:
>>> import numpy as np
>>> from scipy.linalg import fractional_matrix_power
>>> M = [[(0.8404782758300281+0.29389006737780765j), (0.741574080512219+0.7944606900644321j), (0.12788250870304718+0.37304665786925073j)], [(0.8583402784463595+0.13952117266781894j), (0.2138809231406249+0.6233427148017449j), (0.7276466404131303+0.6480559739625379j)], [(0.1784816129006297+0.72452362541158j), (0.2870462766764591+0.8891190037142521j), (0.0980355896905617+0.03022344706473823j)]]
>>> A = fractional_matrix_power(M,-0.5)
>>> print(A)
[[ 0.92211231+0.9193464j 0.10862088-0.64985043j -0.77873774-0.32065413j]
[-0.42338402-0.84273773j 0.59201567+0.66168266j 0.52936199-0.38834384j]
[-0.55078987+0.02112952j 0.47202615-0.50214311j 0.94297647+0.14183985j]]
>>> np.linalg.cond(A)
3.4293685203647408
My suspicion is that if you scipy.io.loadmat the matrix into python, do the calculation, scipy.io.savemat the result and load it back in with MATLAB, you'll see less than 1e-12 absolute error (hopefully even less) between the results.
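For completeness, a sketch of that round trip (the file names and the variable name 'M' are assumptions; adjust them to whatever you actually save from MATLAB):
from scipy.io import loadmat, savemat
from scipy.linalg import fractional_matrix_power

M = loadmat('M.mat')['M']          # full double precision, no copy-pasting of printed values
A = fractional_matrix_power(M, -0.5)
savemat('A_python.mat', {'A': A})  # load this back into MATLAB and compare with M^(-0.5)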
Theory #2 (the facepalm one)
My suspicion is that you're using Python 2, where -1/2 is evaluated with integer division, so your "-1/2" power is actually a plain inverse:
>>> # python 3 below
>>> # python 3's // is python 2's /, i.e. integer division
>>> 1/2
0.5
>>> 1//2
0
>>> -1/2
-0.5
>>> -1//2
-1
So if you're using python 2, then calling
fractional_matrix_power(M,-1/2)
is actually the inverse of M. The obvious solution is to switch to python 3. The less obvious solution is to keep using python 2 (which you shouldn't, as the above exemplifies), but use
from __future__ import division
at the top of every one of your source files. This will override the behaviour of the plain / division operator so that it matches the Python 3 behaviour, and you will have one less headache.
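A minimal sketch of the effect of that import (written with parentheses so it runs the same on Python 2 and 3):
from __future__ import division

print(-1 / 2)   # -0.5: true division, as in Python 3
print(-1 // 2)  # -1: floor division is still available via //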

Reversing pow function - finding the power [duplicate]

Given positive integers b, c, m where (b < m) is True, the task is to find a positive integer e such that
(b**e % m == c) is True
where ** is exponentiation (as in Ruby and Python; ^ in some other languages) and % is the modulo operation. What is the most efficient algorithm (with the lowest big-O complexity) for solving it?
Example:
Given b=5; c=8; m=13 this algorithm must find e=7 because 5**7%13 = 8
From the % operator I'm assuming that you are working with integers.
You are trying to solve the Discrete Logarithm problem. A reasonable algorithm is Baby step, giant step, although there are many others, none of which are particularly fast.
The difficulty of finding a fast solution to the discrete logarithm problem is a fundamental part of some popular cryptographic algorithms, so if you find a better solution than any of those on Wikipedia please let me know!
This isn't a simple problem at all. It is called calculating the discrete logarithm, and it is the inverse operation to modular exponentiation.
There is no efficient algorithm known. That is, if N denotes the number of bits in m, all known algorithms run in O(2^(N^C)) where C>0.
Python 3 Solution:
Thankfully, SymPy has implemented this for you!
SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python.
This is the documentation on the discrete_log function. Use this to import it:
from sympy.ntheory import discrete_log
Their example computes log_7(15) (mod 41):
>>> discrete_log(41, 15, 7)
3
Because of the (state-of-the-art, mind you) algorithms it employs to solve it, you'll get O(sqrt(n)) on most inputs you try. It's considerably faster when your prime modulus has the property that p - 1 factors into a lot of small primes.
Consider a prime on the order of 100 bits (~2^100). With sqrt(n) complexity, that's still 2^50 iterations. That being said, don't reinvent the wheel. This does a pretty good job. I might also add that it was almost 4x more memory efficient than Mathematica's MultiplicativeOrder function when I ran it with large-ish inputs (44 MiB vs. 173 MiB).
Since a duplicate of this question was asked under the Python tag, here is a Python implementation of baby step, giant step, which, as @MarkBeyers points out, is a reasonable approach (as long as the modulus isn't too large):
import math

def baby_steps_giant_steps(a, b, p, N=None):
    if not N:
        N = 1 + int(math.sqrt(p))
    # initialize baby_steps table
    baby_steps = {}
    baby_step = 1
    for r in range(N + 1):
        baby_steps[baby_step] = r
        baby_step = baby_step * a % p
    # now take the giant steps
    giant_stride = pow(a, (p - 2) * N, p)
    giant_step = b
    for q in range(N + 1):
        if giant_step in baby_steps:
            return q * N + baby_steps[giant_step]
        else:
            giant_step = giant_step * giant_stride % p
    return "No Match"
In the above implementation, an explicit N can be passed to fish for a small exponent even if p is cryptographically large. It will find the exponent as long as the exponent is smaller than N**2. When N is omitted, the exponent will always be found, but not necessarily in your lifetime or with your machine's memory if p is too large.
For example, if
p = 70606432933607
a = 100001
b = 54696545758787
then 'pow(a,b,p)' evaluates to 67385023448517
and
>>> baby_steps_giant_steps(a,67385023448517,p)
54696545758787
This took about 5 seconds on my machine. For the exponent and the modulus of those sizes, I estimate (based on timing experiments) that brute force would have taken several months.
Discrete logarithm is a hard problem
Computing discrete logarithms is believed to be difficult. No efficient general method for computing discrete logarithms on conventional computers is known.
I will add here a simple brute-force algorithm which tries every possible value from 1 to m and outputs a solution if one is found. Note that there may be more than one solution to the problem, or none at all. This algorithm will return the smallest possible value, or -1 if no solution exists.
def bruteLog(b, c, m):
    s = 1
    for i in xrange(m):
        s = (s * b) % m
        if s == c:
            return i + 1
    return -1

print bruteLog(5, 8, 13)
and here you can see that 3 is in fact a solution:
print 5**3 % 13
There is a better algorithm, but because it is often asked to be implemented in programming competitions, I will just give you a link to explanation.
As said, the general problem is hard. However, a practical way to find e, if you know e is going to be small (as in your example), is simply to try each e starting from 1.
By the way, e == 3 is the first solution to your example, and you can obviously find that in 3 steps. Compare that with solving the non-discrete version and naively looking for integer solutions, i.e.
e = log(c + n*m)/log(b) where n is a non-negative integer
which finds e==3 in 9 steps
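For completeness, a small sketch of that naive search (the max_n cut-off is an arbitrary safety limit I added; the approach only makes sense when e is known to be small):
import math

def naive_log_search(b, c, m, max_n=10**6):
    # try e = log(c + n*m) / log(b) for n = 0, 1, 2, ... and keep integer hits
    for n in range(max_n):
        e = math.log(c + n * m) / math.log(b)
        if abs(e - round(e)) < 1e-9 and pow(b, int(round(e)), m) == c:
            return int(round(e))
    return None

print(naive_log_search(5, 8, 13))  # 3, found after checking n = 0..9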
