Fastest Integer Search for Any Arbitrary X and Y - python

With the application of Discrete Math, what's the fastest algorithm in Python to solve this problem:
With the equation ax + by = d, where a, b and d are user inputs, search for the integer pairs (x, y) in the range [L, R] (including L and R) that satisfy the equation.
L and R are user inputs as well.
Output all possible values for x and y in increasing order. Print none if there are no possible pairs.
Case 1:
a = 1
b = 5
d = 40
L = 0
R = 10
Result:
0 8
5 7
10 6
Case 2:
a = 14
b = 91
d = 53
L = 1
R = 100
Result:
none
Here's my code, but I believe there is a much faster way to search; this is too inefficient.
a = int(input())
b = int(input())
d = int(input())
L = int(input())
R = int(input())

isNone = True
for x in range(L, R+1):
    for y in range(L, R+1):
        if (a*x) + (b*y) == d:
            print(x, y)
            isNone = False
if isNone: print("none")
Can there be an algorithm with O(1)? What's the fastest way?

The identity I imagine you want to apply is as follows (straight from Wikipedia, though a similar phrasing can be found in most discrete math texts or could be proven on your own):
The simplest linear Diophantine equation takes the form ax + by = c, where a, b and c are given integers. The solutions are described by the following theorem:
This Diophantine equation has a solution (where x and y are integers) if and only if c is a multiple of the greatest common divisor of a and b. Moreover, if (x, y) is a solution, then the other solutions have the form (x + kv, y − ku), where k is an arbitrary integer, and u and v are the quotients of a and b (respectively) by the greatest common divisor of a and b.
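For example, with a = 1, b = 5, d = 40 (Case 1 above): gcd(1, 5) = 1 divides 40, so solutions exist. One particular solution is (x, y) = (40, 0), and every other solution has the form (40 + 5k, 0 - k) for integer k; choosing k = -8, -7, -6 produces exactly the three pairs in the expected output.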
This theorem nearly immediately answers the question, especially since the standard proof of it uses the Euclidean algorithm. In the interest of simplicity we're going to do the following:
Use the Euclidean algorithm in the forward direction to find g = gcd(a, b).
Backsolve through the Euclidean algorithm to find _x, _y such that _x*a + _y*b == g.
If d isn't a multiple of g then there can't possibly be any solutions, so exit early.
Otherwise, x, y = _x*(d//g), _y*(d//g) is a possible solution. Use that to find all solutions in the desired range.
def gcd(a, b):
    # forward euclidean algorithm
    q, r, x, qs = None, b, a, []
    while x % r:
        (q, r), x = divmod(x, r), r
        qs.append(q)
    # save the gcd for later
    g = r
    # backsolve euclidean algorithm
    if not qs:
        return 1, 1 - a//b, g
    theta, omega = 1, -qs[-1]
    for q in reversed(qs[:-1]):
        theta, omega = omega, theta - q * omega
    # theta * a + omega * b == g
    # g might be negative, but we don't care about a canonical solution
    return theta, omega, g
def ceil_div(a, b):
    # integer division a/b, rounded up instead of down (assumes b > 0)
    return -((-a) // b)

def bounded_solutions(a, b, d, L, R):
    _x, _y, g = gcd(a, b)
    if d % g:
        return
    # a*x + b*y == d
    x, y = _x*(d//g), _y*(d//g)
    # all solutions have the form (x + k*v, y - k*u)
    u, v = a//g, b//g
    # The next trick is to find all solutions in [L, R].
    # We need L <= x+k*v <= R and L <= y-k*u <= R.
    # Valid choices of k form a contiguous interval, so we only need its two
    # endpoints (the intersection of both ranges) to enumerate all options.
    # This assumes a, b > 0, so that u, v > 0.
    lo = max(ceil_div(L - x, v), ceil_div(y - R, u))
    hi = min((R - x) // v, (y - L) // u)
    for k in range(lo, hi + 1):
        yield x + k*v, y - k*u
a = int(input())
b = int(input())
d = int(input())
L = int(input())
R = int(input())

empty = True
for x, y in bounded_solutions(a, b, d, L, R):
    print(x, y)
    empty = False
if empty:
    print('none')
Code is untested. It's right in spirit, but there may be some debugging left.
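For a quick sanity check, the generator can be compared against the brute-force double loop from the question on the two sample cases (a small test sketch, assuming gcd, ceil_div and bounded_solutions are defined as above):

# Sanity check against the O((R-L)^2) brute force from the question.
def brute(a, b, d, L, R):
    return [(x, y) for x in range(L, R+1) for y in range(L, R+1) if a*x + b*y == d]

assert list(bounded_solutions(1, 5, 40, 0, 10)) == brute(1, 5, 40, 0, 10)        # [(0, 8), (5, 7), (10, 6)]
assert list(bounded_solutions(14, 91, 53, 1, 100)) == brute(14, 91, 53, 1, 100)  # both empty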

largest and others constraints in portfolio optimization (MILP problem) CVXPY

Here is python code in cvxpy:
import numpy as np
import time
import cvxpy as cp

n = 10
a = np.random.randint(1, 10, size=n)
b = np.random.randint(1, 10, size=n)
c = np.random.randint(1, 10, size=n)
d = np.random.randint(1, 10, size=n)

x = cp.Variable(shape=n, boolean=True)

# objective function
objective = cp.Maximize(cp.sum(cp.multiply(x, a)))

# constraints
constraints = []
constraints.append(cp.sum(cp.multiply(x, b)) <= 50)  # constraint 1
constraints.append(cp.sum_largest(cp.hstack([
    cp.sum(cp.multiply(x, b)),
    cp.sum(cp.multiply(x, c)),
    cp.sum(cp.multiply(x, d))]), 2) <= 100)  # constraint 2

prob = cp.Problem(objective, constraints)

# solve model
prob.solve(solver=cp.CBC, verbose=False, maximumSeconds=100)
print("status:", prob.status)
x is boolean (binary); a, b, c and d are data vectors (random integers in the example above). The objective is max(sum(x*a)) and the constraints are:
sum(x*b) <= 50
sum of the largest 2 values in [sum(x*b), sum(x*c), sum(x*d)] <= 100; this is implemented via sum_largest(cp.hstack([sum(x*b), sum(x*c), sum(x*d)]), 2) <= 100
define others = [b, c, d] - b - (the largest 2 values in [b, c, d])
For example:
case 1: [sum(x*b), sum(x*c), sum(x*d)] = [1,2,3], so (the largest 2 values in [b, c, d]) = [c, d] and others = [b, c, d] - b - [c, d] = None
case 2: [sum(x*b), sum(x*c), sum(x*d)] = [3,2,1], so (the largest 2 values in [b, c, d]) = [b, c] and others = [b, c, d] - b - [b, c] = d
constraint 3:
for i in others:
    constraints.append(cp.sum(cp.multiply(x, i)) <= 10)
Constraints 1 and 2 have already been implemented. How can I implement constraint 3? Is it even possible in cvxpy?
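For concreteness, the "others" selection described above can be written out in plain Python (an illustrative sketch of the selection rule only, not a CVXPY formulation; the function name is mine):

# Illustrative only: pick "others" from the realized sums, per the rule above.
def others_from_sums(sum_b, sum_c, sum_d):
    sums = {'b': sum_b, 'c': sum_c, 'd': sum_d}
    largest2 = sorted(sums, key=sums.get, reverse=True)[:2]   # names of the two largest sums
    return [name for name in sums if name != 'b' and name not in largest2]

print(others_from_sums(1, 2, 3))   # case 1: []  (i.e. None)
print(others_from_sums(3, 2, 1))   # case 2: ['d']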
Note: the question has changed, so this is no longer a valid answer.
The original problem was:
(1) sum(x*b) <= 5
(2) max([sum(x*b), sum(x*c), sum(x*d)]) <= 10
(3) define others=[b, c, d] - b - maxBCD([b, c, d]) (others is a set of symbols, maxBCD returns a symbol)
for i in others:
    constraints.append(cp.sum(cp.multiply(x, i)) <= 1)
This is an answer to that original problem statement. I have not deleted the answer as it is a good starting point (also because the answer states the problem a bit more precisely: in mathematical terms).
(1) and (2) can be written as:
xb = sum(x*b)
xc = sum(x*c)
xd = sum(x*d)
xb <= 5
xc <= 10
xd <= 10
(3) needs to know which one is the maximum. I assume two or all three can be maximum.
bIsMax, cIsMax, dIsMax: binary variables

# bIsMax=1 => xb is largest
xb >= xc - (1-bIsMax)*10
xb >= xd - (1-bIsMax)*10

# cIsMax=1 => xc is largest
xc >= xb - (1-cIsMax)*5
xc >= xd - (1-cIsMax)*10

# dIsMax=1 => xd is largest
xd >= xb - (1-dIsMax)*5
xd >= xc - (1-dIsMax)*10

# at least one of them is largest
bIsMax + cIsMax + dIsMax >= 1

Note that others = [b, c, d] - b - maxBCD can be restated as: others = [c, d] - maxBCD.

# if c is not max then xc <= 1
xc <= 1 + 9*cIsMax

# if d is not max then xd <= 1
xd <= 1 + 9*dIsMax
Left to the reader:
check the math and implement it in CVXPY (a sketch of one possible encoding follows below).
update the answer to the revised question.
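One possible CVXPY encoding of the big-M constraints above, for the original problem's bounds (5/10/1). This is a sketch only: it is untested, the data and variable names mirror the question, and solving with cp.CBC requires the cylp package (any installed mixed-integer solver would do).

import numpy as np
import cvxpy as cp

n = 10
a = np.random.randint(1, 10, size=n)
b = np.random.randint(1, 10, size=n)
c = np.random.randint(1, 10, size=n)
d = np.random.randint(1, 10, size=n)

x = cp.Variable(n, boolean=True)
bIsMax = cp.Variable(boolean=True)
cIsMax = cp.Variable(boolean=True)
dIsMax = cp.Variable(boolean=True)

xb = cp.sum(cp.multiply(x, b))
xc = cp.sum(cp.multiply(x, c))
xd = cp.sum(cp.multiply(x, d))

constraints = [
    xb <= 5, xc <= 10, xd <= 10,
    # bIsMax=1 => xb is largest (the big-M terms deactivate the constraints otherwise)
    xb >= xc - (1 - bIsMax)*10,
    xb >= xd - (1 - bIsMax)*10,
    xc >= xb - (1 - cIsMax)*5,
    xc >= xd - (1 - cIsMax)*10,
    xd >= xb - (1 - dIsMax)*5,
    xd >= xc - (1 - dIsMax)*10,
    bIsMax + cIsMax + dIsMax >= 1,
    # if c (resp. d) is not the maximum, then xc (resp. xd) <= 1
    xc <= 1 + 9*cIsMax,
    xd <= 1 + 9*dIsMax,
]

prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(x, a))), constraints)
prob.solve(solver=cp.CBC)
print("status:", prob.status, "objective:", prob.value)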

Graphing Integration in Python

In the following code I have implemented Simpson's rule in Python. I have attempted to plot the absolute error as a function of n for a suitable range of integer values n. I know that the exact result should be 1 - cos(pi/2). However, my graph doesn't seem to be correct. How can I fix my code to get the correct output? There were two loops, and I don't think I implemented the graphing part correctly.
from math import pi, cos, sin
from matplotlib import pyplot as plt

def simpson(f, a, b, n):
    """Approximates the definite integral of f from a to b by the composite Simpson's rule, using n subintervals (with n even)"""
    h = (b - a) / n
    s = f(a) + f(b)
    diffs = {}
    for i in range(1, n, 2):
        s += 4 * f(a + i * h)
    for i in range(2, n-1, 2):
        s += 2 * f(a + i * h)
    r = s
    exact = 1 - cos(pi/2)
    diff = abs(r - exact)
    diffs[n] = diff
    ordered = sorted(diffs.items())
    x, y = zip(*ordered)
    plt.autoscale()
    plt.loglog(x, y)
    plt.xlabel("Intervals")
    plt.ylabel("Error")
    plt.show()
    return s * h / 3

simpson(lambda x: sin(x), 0.0, pi/2, 100)
Your simpson method should just calculate the integral for a single value of n (as it does), but creating the plot for many values of n should happen outside that method, like this:
from math import pi, cos, sin
from matplotlib import pyplot as plt

def simpson(f, a, b, n):
    """Approximates the definite integral of f from a to b by the composite Simpson's rule, using 2n subintervals"""
    h = (b - a) / (2*n)
    s = f(a) + f(b)
    for i in range(1, 2*n, 2):
        s += 4 * f(a + i * h)
    for i in range(2, 2*n-1, 2):
        s += 2 * f(a + i * h)
    return s * h / 3

diffs = {}
exact = 1 - cos(pi/2)
for n in range(1, 100):
    result = simpson(lambda x: sin(x), 0.0, pi/2, n)
    diffs[2*n] = abs(exact - result)  # use 2*n or n here, your choice

ordered = sorted(diffs.items())
x, y = zip(*ordered)
plt.autoscale()
plt.loglog(x, y)
plt.xlabel("Intervals")
plt.ylabel("Error")
plt.show()
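As a rough check of the result, composite Simpson's rule has error O(h^4), so the log-log error curve should be close to a straight line with slope about -4 (until round-off eventually dominates for very large n). A small continuation of the script above (assuming x and y from the plotting step are still in scope):

import numpy as np

# Fit the slope of log(error) versus log(intervals); for composite Simpson's
# rule the fitted slope should come out near -4.
slope = np.polyfit(np.log(x), np.log(y), 1)[0]
print("fitted slope:", slope)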

A system of two multivariable coupled ODEs

I'm trying to solve the following problem of coupled ODEs using odeint() from scipy. The system looks like this:
X'_k = mean(Y_k) + F
Y'_{k,j} = X_k - Y_{k,j}
This is a system with 3 X variables, and for each X variable there are 3 further Y variables.
From what I read in the documentation and the examples here and here, I can pass the system of equations as a list, and that is what I tried in the following example:
import numpy as np
from scipy.integrate import odeint

def dZdt(Z, t):
    X = Z[0]
    Y = Z[1]
    F = 4
    d_x = np.zeros(3)
    d_y = np.zeros(3*3).reshape(3,3)
    # Compute the Y values
    for k in range(3):
        for j in range(3):
            d_y[k][j] = X[k] - Y[k][j]
        # X values
        d_x[k] = Y[k].mean() + F
    d = [d_x, d_y]
    return d

# Initial conditions
X0 = np.random.uniform(size=3)
Y0 = np.random.uniform(size=3*3).reshape(3,3)
Z0 = [X0, Y0]
t = range(20)

Z = odeint(dZdt, Z0, t)
Where k, j = (1,2,3) and Z = [X,Y]
But I'm afraid I'm getting the following error:
ValueError: could not broadcast input array from shape (3,3) into shape (3)
My real problem is more complex, because j and k can be bigger than 3 (they go from 1 to j_max and k_max, respectively), so I cannot write the 12 variables one by one.
My guess is that somewhere in the code the Y values are being forced into X's shape, but I have no clue where.
Any idea of what I'm doing wrong?
You are trying to represent the unknown function by two arrays inside a list, but odeint requires the state to be a single one-dimensional array. So, instead of 3 X-variables and 9 Y-variables held separately, it must be a flat array of 12 variables. Like this:
import numpy as np
from scipy.integrate import odeint

def dZdt(Z, t):
    X = Z[:3]
    Y = Z[3:].reshape(3, 3)
    F = 4
    d_x = np.zeros(3)
    d_y = np.zeros((3, 3))
    # Compute the Y values
    for k in range(3):
        for j in range(3):
            d_y[k, j] = X[k] - Y[k, j]
        # X values
        d_x[k] = Y[k].mean() + F
    d = np.concatenate((d_x.ravel(), d_y.ravel()))
    return d

# Initial conditions
X0 = np.random.uniform(size=3)
Y0 = np.random.uniform(size=(3, 3))
Z0 = np.concatenate((X0.ravel(), Y0.ravel()))
t = range(20)

Z = odeint(dZdt, Z0, t)
NumPy arrays are more idiomatically indexed as Y[k, j] rather than Y[k][j], and there are ample vectorization opportunities that would eliminate the loops in the computation of dZdt. Like this:
def dZdt(Z, t):
    X = Z[:3]
    Y = Z[3:].reshape(3, 3)
    F = 4
    d_y = X[:, None] - Y
    d_x = Y.mean(axis=1) + F
    d = np.concatenate((d_x.ravel(), d_y.ravel()))
    return d
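Either way, odeint returns an array of shape (len(t), 12), so the trajectories can be split back into the original shapes afterwards (a small follow-up to the code above, reusing its Z):

# Z has shape (len(t), 12): the first 3 columns are X, the remaining 9 are Y flattened.
X_t = Z[:, :3]                      # X_k(t), shape (len(t), 3)
Y_t = Z[:, 3:].reshape(-1, 3, 3)    # Y_{k,j}(t), shape (len(t), 3, 3)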

Integrate a function by the trapezoidal rule - Python

Here is the homework assignment I'm trying to solve:
A further improvement of the approximate integration method from the last question is to divide the area under the f(x) curve into n equally-spaced trapezoids.
Based on this idea, the following formula can be derived for approximating the integral:
integral from a to b of f(x) dx ≈ h * [ f(x_0)/2 + f(x_1) + f(x_2) + ... + f(x_{n-1}) + f(x_n)/2 ]
where h is the width of the trapezoids, h = (b - a)/n, and x_i = a + i*h, i ∈ {0, ..., n}, are the coordinates of the sides of the trapezoids.
Implement this formula in a Python function trapezint(f, a, b, n). You may need to check whether b > a; if not, swap the variables.
For instance, the result of trapezint(math.sin, 0, 0.5*math.pi, 10) should be 0.9979 (with some numerical error). The result of trapezint(abs, -1, 1, 10) should be 1.0.
This is my code, but it doesn't seem to return the right values.
For print(trapezint(math.sin, 0, 0.5*math.pi, 10))
I get 0.012286334153465965, when I am supposed to get 0.9979.
For print(trapezint(abs, -1, 1, 10))
I get 0.18000000000000002, when I am supposed to get 1.0.
import math

def trapezint(f, a, b, n):
    g = 0
    if b > a:
        h = (b-a)/float(n)
        for i in range(0, n):
            k = 0.5*h*(f(a + i*h) + f(a + (i+1)*h))
            g = g + k
            return g
    else:
        a, b = b, a
        h = (b-a)/float(n)
        for i in range(0, n):
            k = 0.5*h*(f(a + i*h) + f(a + (i+1)*h))
            g = g + k
            return g

print(trapezint(math.sin, 0, 0.5*math.pi, 10))
print(trapezint(abs, -1, 1, 10))
Essentially, your return g statement was indented inside the loop, when it should not have been.
Also, I removed your duplicated code so that it adheres to the "DRY" ("Don't Repeat Yourself") principle, which prevents errors and keeps the code simpler and more readable.
import math

def trapezint(f, a, b, n):
    g = 0
    if b > a:
        h = (b-a)/float(n)
    else:
        h = (a-b)/float(n)
    for i in range(0, n):
        k = 0.5 * h * (f(a + i*h) + f(a + (i+1)*h))
        g = g + k
    return g

print(trapezint(math.sin, 0, 0.5*math.pi, 10))
print(trapezint(abs, -1, 1, 10))

0.9979429863543573
1.0000000000000002
This variation reduces the branching and the number of operations; the summation in the last step is reduced to a single operation on a list.
from math import pi, sin

def trapezoid(f, a, b, n):
    if b < a:
        a, b = b, a
    h = (b - a)/float(n)
    g = [0.5 * h * (f(a + i*h) + f(a + (i+1)*h)) for i in range(0, n)]
    return sum(g)

assert trapezoid(sin, 0, 0.5*pi, 10) == 0.9979429863543573
assert trapezoid(abs, -1, 1, 10) == 1.0000000000000002
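For reference, NumPy ships the same composite rule as numpy.trapz (available as numpy.trapezoid in newer NumPy releases), which makes a handy cross-check (a sketch, assuming NumPy is installed):

import numpy as np
from math import pi

xs = np.linspace(0, 0.5*pi, 11)    # 10 subintervals -> 11 sample points
print(np.trapz(np.sin(xs), xs))    # ~0.99794, matching trapezint/trapezoid above

xs = np.linspace(-1, 1, 11)
print(np.trapz(np.abs(xs), xs))    # ~1.0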

Implementing the birthday paradox continuation of elliptic curve factorization

I would like to add the birthday paradox continuation of the elliptic curve factorization algorithm to my collection of factoring programs. Brent describes the algorithm in two papers, Montgomery also describes the algorithm, and I am trying to implement the algorithm according to a detailed description by Bosma and Lenstra. Here is what I have so far, in Python, which you can run at ideone.com/vMXSab:
# lenstra's algorithm per bosma/lenstra
from random import randint
from fractions import gcd

def primes(n):
    b, p, ps = [True] * (n+1), 2, []
    for p in xrange(2, n+1):
        if b[p]:
            ps.append(p)
            for i in xrange(p, n+1, p):
                b[i] = False
    return ps

def bezout(a, b):
    if b == 0: return 1, 0, a
    q, r = divmod(a, b)
    x, y, g = bezout(b, r)
    return y, x-q*y, g

def add(p, q, a, b, m):
    if p[2] == 0: return q
    if q[2] == 0: return p
    if p[0] == q[0]:
        if (p[1] + q[1]) % m == 0:
            return 0, 1, 0 # infinity
        n = (3 * p[0] * p[0] + a) % m
        d = (2 * p[1]) % m
    else:
        n = (q[1] - p[1]) % m
        d = (q[0] - p[0]) % m
    x, y, g = bezout(d, m)
    if g > 1: return 0, 0, d # failure
    z = (n*x*n*x - p[0] - q[0]) % m
    return z, (n * x * (p[0] - z) - p[1]) % m, 1

def mul(k, p, a, b, m):
    r = (0,1,0)
    while k > 0:
        if k % 2 == 1:
            r = add(p, r, a, b, m)
            if r[2] > 1: return r
        k = k // 2
        p = add(p, p, a, b, m)
        if p[2] > 1: return p
    return r

def lenstra1(n, limit):
    g = n
    while g == n:
        q = randint(0, n-1), randint(0, n-1), 1
        a = randint(0, n-1)
        b = (q[1]*q[1] - q[0]*q[0]*q[0] - a*q[0]) % n
        g = gcd(4*a*a*a + 27*b*b, n)
    if g > 1: return 0, g # lucky factor
    for p in primes(limit):
        pp = p
        while pp < limit:
            q = mul(p, q, a, b, n)
            if q[2] > 1:
                return 1, gcd(q[2], n)
            pp = p * pp
    return False

def parms(b1):
    b2 = 10 * b1
    er = [(1,31), (2,63), (3,127), (6,255), (12,511),
          (18,511), (24,1023), (30,1023), (60,2047)]
    prev = 1,31
    for (e, r) in er:
        if e*e > b1/1250: break
        prev = e, r
    e, r = prev
    rBar = int(round(b2/r))
    u = randint(0, pow(2,30)//(e+2))
    v = randint(0, pow(2,30)//(e+2))
    uBar = randint(0, pow(2,30)//(e+2))
    vBar = randint(0, pow(2,30)//(e+2))
    return b2, e, r, rBar, u, v, uBar, vBar

def lenstra2(n, b1):
    g = n
    while g == n:
        q = randint(0, n-1), randint(0, n-1), 1
        a = randint(0, n-1)
        b = (q[1]*q[1] - q[0]*q[0]*q[0] - a*q[0]) % n
        g = gcd(4*a*a*a + 27*b*b, n)
    if g > 1: return 0, g # lucky factor
    for p in primes(b1):
        pp = p
        while pp < b1:
            q = mul(p, q, a, b, n)
            if q[2] > 1: return 1, gcd(q[2], n)
            pp = p * pp
    b2, e, r, rBar, u, v, uBar, vBar = parms(b1)
    f = [1] * (r+1)
    for i in range(1, r):
        p = mul(pow(u*i+v,e), q, a, b, n)
        if p[2] > 1: return 2, gcd(p[2], n)
        f[i] = (f[i-1] * (q[0] - p[0])) % n
    d = 1
    for j in range(1, rBar):
        pBar = mul(pow(uBar*j+vBar,e), q, a, b, n)
        if pBar[2] > 1: return 3, gcd(pBar[2], n)
        t = 0
        for i in range(0, r):
            t = (t + p[0] * f[i]) % n
        d = (d * t) % n
    g = gcd(d, n)
    if 1 < g < n: return 4, g
    return False
The primes function implements a simple version of the Sieve of Eratosthenes, returning a list of the prime numbers up to n, and the bezout function implements the extended Euclidean algorithm, returning coefficients x and y with a*x + b*y == g, together with the greatest common divisor g. The elliptic arithmetic is given by the add and mul functions; add returns a "point" (0, 0, d) to signal a non-invertible denominator, mul propagates it, and uses of mul in the factoring functions must check for it each time mul is called. Function lenstra1 is a simple one-stage version of elliptic curve factorization, and works properly.
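For example, a quick check of these helpers (values verified by hand):

print(primes(20))        # [2, 3, 5, 7, 11, 13, 17, 19]
print(bezout(240, 46))   # (-9, 47, 2): -9*240 + 47*46 == 2 == gcd(240, 46)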
Function lenstra2 and its auxiliary function parms are my attempt to implement the algorithm given in the Bosma/Lenstra paper cited above. I'm first trying to get a basic version working, as described in Section 6.1, without considering the optimizations in Sections 6.4 and 6.7. I think the calculations in parms are correct. The function runs, but it either always returns False, indicating that it did not find a factor, or it returns after an early break in the elliptic arithmetic, before completing the algorithm and reaching the final gcd calculation. I think the problem is in the computation of the coefficients of f, and in the use of f to calculate d.
So my questions:
Have I correctly calculated the coefficients of f?
Have I correctly calculated the value of d?
How do I implement the optimizations of sections 6.4 and 6.7? I don't understand either of them.
How do I implement the Suyama curve of section 5.1 using Weierstrass coordinates?
Many thanks.
