In "The Great Mathematical Problems -- Visions of Infinity", page 18, Ian Stewart refers to Euclid's Proposition 2, Book VII of the Elements, which is a very elementary method of finding the greatest common divisor. I quote: "It works by repeatedly subtracting the smaller number from the larger one, then applying a similar process to the resulting remainder and the smaller number, and continuing until there is no remainder." The example uses 630 and 135. 135 is repeatedly subtracted from 630 (495, 360, 225) until we finally obtain 90, which is less than 135. So the numbers are swapped, and 90 is repeatedly subtracted from 135 to finally obtain 45. Then 45 is subtracted from 90 to finally obtain 0, yielding 45 as the gcd. This is sometimes called the Euclidean algorithm for finding the gcd.
To teach a beginner (a child of 10) I need to write the code in Python. There should not be any function definition, nor should it use any mathematical operation other than subtraction. I want to use only if/while/else/elif/continue/break. There should be a provision that if three numbers (or more) are given, the whole process is repeated, deciding which is the smaller one.
Earlier questions on gcd do not look at the algorithm from this perspective.
A typical fast solution to the gcd algorithm would be some iterative version like this one:
def gcd(x, y):
    while y != 0:
        (x, y) = (y, x % y)
    return x
In fact, in Python you'd just directly use the gcd function provided by the fractions module.
And if you wanted such a function to deal with multiple values you'd just use reduce:
reduce(gcd,your_array)
Now, it seems you want to constrain yourself to use only loops + subtractions, so one possible solution dealing with positive integers x, y would be this:
def gcd_unopt(x, y):
    print x, y
    while x != y:
        while x > y:
            x -= y
            print x, y
        while y > x:
            y -= x
            print x, y
    return x

print reduce(gcd_unopt, [630, 135])
Not sure why you wanted to avoid using functions; gcd is a function by definition. Even so, that's simple: just get rid of the function declaration and use the parameters as global variables, for instance:
x = 630
y = 135
print x, y
while x != y:
    while x > y:
        x -= y
        print x, y
    while y > x:
        y -= x
        print x, y
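To cover the asker's last requirement (three or more numbers) without any function definition, the same subtraction loops can be folded over a list. This is my own sketch, not part of the answer above; the example input list is an assumption, and the index bookkeeping uses `i += 1` for brevity even though the question wanted subtraction only:

```python
numbers = [630, 135, 225]  # example input, assumed

g = numbers[0]  # running gcd so far
i = 1
while i < len(numbers):
    x = g
    y = numbers[i]
    # subtraction-only Euclidean algorithm, as in the answer above
    while x != y:
        while x > y:
            x -= y
        while y > x:
            y -= x
    g = x  # gcd of everything seen so far
    i += 1
print(g)
```

Since gcd(a, b, c) = gcd(gcd(a, b), c), folding the list pairwise like this is enough to handle any number of inputs.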
I have the following two algorithms. My analysis says that both of them are O(m^2 * 4^n), i.e. they are equivalent for big numbers. Is this right? Note that m and n are the numbers of bits in x and y.
def pow1(x, y):
    if y == 0:
        return 1
    temp = x
    while y > 1:
        y -= 1
        temp *= x
    return temp
def pow2(x, y):
    if y == 0:
        return 1
    temp = pow2(x, y//2)
    if y & 1: return temp * temp * x
    return temp * temp
Whether the divide-and-conquer algorithm is more efficient depends on a ton of factors. In Python it is more efficient.
Your analysis is right; assuming standard grade-school multiplication, divide-and-conquer does fewer, more expensive multiplications, and asymptotically that makes total runtime a wash (constant factors probably matter -- I'd still guess divide-and-conquer would be faster because the majority of the work is happening in optimized C rather than in Python loop overhead, but that's just a hunch, and it'd be hard to test given that Python doesn't use an elementary multiplication algorithm).
Before going further, note that big integer multiplication in Python is little-o of m^2. In particular, it uses Karatsuba multiplication, which is around O(m^0.58 * n) for an m-bit integer and an n-bit integer with m <= n.
The small terms using ordinary multiplication won't matter asymptotically, so focusing on the large ones we can replace the multiplication cost and find that your iterative algorithm is around O(4^n m^1.58) and your divide-and-conquer solution is around O(3^n m^1.58).
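To make the difference in the number of multiplications concrete, here is a small instrumented sketch of my own (not from the question) that tallies how many big-integer multiplications each variant performs:

```python
def pow1_count(x, y):
    # iterative version, counting multiplications
    count = 0
    if y == 0:
        return 1, count
    temp = x
    while y > 1:
        y -= 1
        temp *= x
        count += 1
    return temp, count

def pow2_count(x, y):
    # divide-and-conquer version, counting multiplications
    if y == 0:
        return 1, 0
    temp, count = pow2_count(x, y // 2)
    if y & 1:
        return temp * temp * x, count + 2
    return temp * temp, count + 1

r1, c1 = pow1_count(3, 100)
r2, c2 = pow2_count(3, 100)
print(c1, c2)  # linear in y vs. logarithmic in y
assert r1 == r2 == 3**100
```

The iterative version always does y - 1 multiplications; the divide-and-conquer version does at most 2*log2(y) of them, but on ever-larger operands, which is exactly the trade-off analyzed above.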
Beginner question - I'm trying to solve CodeAbbey's Problem #174, "Calculation of Pi", and so far I have written a function that accurately calculates the side lengths of a regular polygon with 6*N corners, thus approximating a circle.
In the code below, the function x(R,d) prints the correct values for "h" and "side" (compare the values given in the example on CodeAbbey), but when I ran my code through pythontutor, I saw that it returns slightly different values, for example 866025403784438700 instead of 866025403784438646 for the first value of h.
Can someone help me understand why this is?
As you can probably tell, I'm an amateur. I took the isqrt function from here, as the math.sqrt(x) method seems to give very imprecise results for large values of x.
def isqrt(x):
    # Returns the integer square root. This seems to be unproblematic
    if x < 0:
        raise ValueError('square root not defined for negative numbers')
    n = int(x)
    if n == 0:
        return 0
    a, b = divmod(n.bit_length(), 2)
    x = 2**(a+b)
    while True:
        y = (x + n//x)//2
        if y >= x:
            return x
        x = y
def x(R, d):
    # given Radius and sidelength of initial polygon,
    # this should return sidelength of new polygon.
    h = isqrt(R**2 - d**2)
    side = isqrt(d**2 + (R-h)**2)
    print (h, side)  # the values in this line are slightly
    return (h, side) # different than the ones here. Why?
def approximate_pi(K, N):
    R = int(10**K)
    d = R // 2
    for i in range(N):
        d = (x(R, d)[1] // 2)
    return int(6 * 2**(N) * d)

print (approximate_pi(18, 4))
That's an artifact of Python Tutor. It's not something actually happening in your code.
From a very brief look at the Python Tutor source code, it looks like the Python execution backend is a slightly hacked-up, mostly standard CPython instance with debug instrumentation through bdb, but the visualization is in JavaScript. The printed output comes from the Python standard output, but the visualization goes through JavaScript, and the Python integers get converted to JavaScript Number values, losing precision because Number is 64-bit floating point.
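The precision loss is easy to reproduce directly in Python by round-tripping the integer through a 64-bit float, which is what a JavaScript Number is. A short sketch of my own:

```python
h = 866025403784438646   # exact integer from the question
as_double = float(h)     # what a JavaScript Number stores

# h lies between 2**59 and 2**60, where a 53-bit mantissa can only
# represent multiples of 2**7 = 128, so the exact value is lost.
print(h == as_double)          # False
print(int(as_double) % 128)    # 0
```

The value shown by the visualization is simply the nearest representable double, formatted back as a decimal integer by JavaScript.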
It has to do with rounding to the nearest integer.
In your function, in divmod(n.bit_length(), 2), try changing 2 to 2.0; it will give a value similar to what you saw on their platform.
I'm trying to minimize a function f of ~80 variables stored in an array. The function is defined by two nested loops: the outer one indexes array by i, while the inner loop is performed array[i] times and adds the result of a computation to a running total. The computation depends on some conditions x and y and changes slightly every time it's performed, which is why I need the loop structure. Here is a minimal working example in Python:
def f(array):
    total = 0
    x = 0
    y = 0
    for i in range(len(array)):
        for j in range(array[i]):
            result = 2*x + y
            total = total + result
            x = x+1
        x = 0
        y = y+1
    return total
So for instance, print f([2,1]) returns 3, since [(2*0) + 0] + [(2*1) + 0] + [(2*0) + 1] = 0+2+1 = 3.
I want to find the entries of array that minimize the value of f. However, when I tell (e.g.) Mathematica to minimize f([x1, x2, ..., x80]) and spit out the minimizer array, the program complains because it can't perform the loops defining f an indeterminate number of times.
In light of this, my question is the following:
How do I minimize a multivariate function whose parameters describe the number of times a given loop is to be iterated?
I had originally tried to implement this in Mathematica, but found that I could not define f by the procedure above. The best I could do was tell Mathematica to perform the loops above, then define f[array_] := total after total had been computed. When I ran my code, Mathematica naturally claimed that it could not evaluate f, throwing an error even before it executed my command to NMinimize[{f[array], array ∈ Integers}, array]. The fact that Mathematica is trying to evaluate f before it is called in NMinimize indicates that I don't quite understand how functions work in Mathematica. Any help in untangling this situation would be greatly appreciated!
As written your function has an analytical minimum and there is no need for numerical optimization. Unfortunately, StackOverflow won't let me show the mathematics of it (if you ask it on MathExchange I can provide a derivation), but given an array A = [a0 a1 ... an] where each ai is a positive integer, and an array Y = [0 1 ... n] the function you posted reduces to the following matrix multiplication A * (A - 1 + Y)' where ' denotes a matrix transpose and * denotes matrix multiplication. So, trivially, the function is minimized when each ai is minimized. So, if this is part of a larger optimization, your task should be focused on finding the minimum of each element of A if the elements themselves are constrained.
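The reduction can be checked numerically. Below is a sketch of my own comparing the looped definition of f with the closed form that the matrix product A * (A - 1 + Y)' expands to, namely the sum of a_i * (a_i - 1 + i) over all i:

```python
def f(array):
    # the looped definition from the question
    total = 0
    x = 0
    y = 0
    for i in range(len(array)):
        for j in range(array[i]):
            result = 2*x + y
            total = total + result
            x = x+1
        x = 0
        y = y+1
    return total

def f_closed(array):
    # closed form: each inner loop contributes a_i*(a_i - 1) + a_i*i
    return sum(a * (a - 1 + i) for i, a in enumerate(array))

print(f([2, 1]), f_closed([2, 1]))  # 3 3
for arr in ([2, 1], [3, 5, 4], [1, 1, 1, 7]):
    assert f(arr) == f_closed(arr)
```

The agreement on a few hand-picked arrays is of course not a proof, but it makes the analytical reduction easy to trust.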
The goal is to calculate large digit numbers raised to other large digit numbers, e.g., 100 digit number raised to another 100 digit number, using recursion.
My plan was to recursively calculate exp/2, where exp is the exponent, and make an additional calculation depending on whether exp is even or odd.
My current code is:
def power(y, x, n):
    # Base case
    if x == 0:
        return 1
    # If x is even
    if (x % 2 == 0):
        m = power(y, x//2, n)
        # Print statement only used as check
        print(x, m)
        return m*m
    # If x is odd
    else:
        m = y*power(y, x//2, n)
        # Print statement only used as check
        print(x, m)
        return m*m
The problem I run into is that it makes one too many calculations, and I'm struggling to figure out how to fix it. For example, 2^3 returns 64, 2^4 returns 256, 2^5 returns 1024 and so on. It's calculating the m*m one too many times.
Note: this is part of solving modulus of large numbers. I'm strictly testing the exponent component of my code.
First of all, there is a weird thing in your implementation: you take a parameter n that you never use; you simply keep passing it along unmodified.
Secondly, the second recursive call is incorrect:
else:
    m = y*power(y, x//2, n)
    # Print statement only used as check
    print(x, m)
    return m*m
If you do the math, you will see that you return (y * y^(x//2))^2 = y^(2*(x//2) + 2) (mind the // instead of /), which is thus one factor of y too many. In order to do this correctly, you should thus rewrite it as:
else:
    m = power(y, x//2, n)
    # Print statement only used as check
    print(x, m)
    return y*m*m
(so removing the y* from the m part and adding it to the return statement, such that it is not squared).
Doing this will make your implementation at least semantically sound. But it will not solve the performance/memory aspect.
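Putting the fix together, here is a quick sanity check of my own against Python's built-in exponentiation (the unused n parameter is kept only to mirror the original signature):

```python
def power(y, x, n):
    # corrected recursive exponentiation by squaring
    if x == 0:
        return 1
    m = power(y, x // 2, n)
    if x % 2 == 0:
        return m * m
    return y * m * m  # the extra factor of y stays outside the square

# spot-check against the ** operator
for base in (2, 3, 7):
    for exp in range(10):
        assert power(base, exp, None) == base ** exp
```

In particular, power(2, 3, None) now returns 8 rather than 64.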
Your comment makes it clear that you want to do a modulo on the result, so this is probably Project Euler?
The strategy is to make use of the fact that modulo is closed under multiplication. In other words the following holds:
(a*b) mod c = ((a mod c) * (b mod c)) mod c
You can use this in your program to prevent generating huge numbers and thus work with small numbers that require little computational effort to run.
Another optimization is that you can simply square the base in the recursive call. So a faster implementation is something like:
def power(y, x, n):
    if x == 0:  # base case
        return 1
    elif (x % 2 == 0):  # x even
        return power((y*y) % n, x//2, n) % n
    else:  # x odd
        return (y*power((y*y) % n, x//2, n)) % n
If we do a small test with this function, we see that the two results are identical for small numbers (where pow() can be processed in reasonable time/memory): (12347**2742)%1009 returns 787L and power(12347, 2742, 1009) returns 787, so they generate the same result. Of course this is no proof that both are equivalent; it's just a short test that filters out obvious mistakes.
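That spot check can also be automated against Python's built-in three-argument pow, which computes modular exponentiation directly (a small sketch of my own):

```python
def power(y, x, n):
    # modular fast exponentiation, as in the answer above
    if x == 0:
        return 1
    elif x % 2 == 0:
        return power((y * y) % n, x // 2, n) % n
    else:
        return (y * power((y * y) % n, x // 2, n)) % n

# compare against the built-in for a range of inputs
for y in range(1, 20):
    for x in range(0, 12):
        assert power(y, x, 1009) == pow(y, x, 1009)
print(power(12347, 2742, 1009))  # 787
```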
Here is my approach, based on the C version of this problem. It works with both positive and negative exponents:
def power(a, b):
    """this function will raise a to the power b, but recursively"""
    # first of all we need to verify the input
    if isinstance(a, (int, float)) and isinstance(b, int):
        if a == 0:
            # to save time
            return 0
        if b == 0:
            return 1
        if b > 0:
            if b % 2 == 0:
                # this halves the work when the exponent is even: compute
                # the power of one half and then square it
                if b == 2:
                    return a*a
                else:
                    return power(power(a, b//2), 2)
            else:
                # the main case, when the exponent is odd
                return a * power(a, b - 1)
        else:
            # this is for negative exponents
            return 1./float(power(a, -b))
    else:
        raise TypeError('Argument must be integer or float')
Let's say that I know that x is bigger than y and both x and y are bigger than 0.
Can someone please help me write a function that takes two linear formulas (+/- only) and returns which one is bigger?
For example:
foo("x+y","2*x") #should return 2
foo("2*x","x+y") #should return 1
foo("x","2*y") #should return 0 (can't decide)
Thanks a lot!
The best way to do this in SymPy is to use the assumptions system.
First off, don't try to do tokenizing. Just use sympify if you have to input as strings, and if you don't have to, just create the expressions using symbols, like
x, y = symbols('x y')
a = x - y
b = 2*x
Please read the SymPy tutorial for more information.
The assumptions system doesn't support inequalities directly yet, so to represent x > y, you need to state that x - y is positive. To ask if 2*x > x - y, i.e., if 2*x - (x - y) is positive, given that x, y, and x - y are positive, do
In [27]: ask(Q.positive((2*x) - (x - y)), Q.positive(x) & Q.positive(y) & Q.positive(x - y))
Out[27]: True
The first argument to ask is what you are asking and the second argument is what you are assuming. & is logical and, so Q.positive(x) & Q.positive(y) & Q.positive(x - y) means to assume all three of those things.
It will return False if it knows it is false, and None if it can't determine. Note that SymPy works in a complex domain, so not positive doesn't necessarily mean negative. Hence, you should probably call ask on the negated expression as well if you get None, or call it again with negative instead of positive. If you want to include 0 (i.e., use >= instead of >), use nonnegative instead of positive and nonpositive instead of negative.
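Putting this together into the foo the question asks for, here is a sketch of my own (the name foo and the 1/2/0 return convention come from the question; to keep ask reliable across SymPy versions, the fact x > y is encoded by substituting x = y + z with z > 0 rather than asserted directly):

```python
from sympy import symbols, sympify, Q, ask

x, y, z = symbols('x y z')

def foo(s1, s2):
    # compare two linear formulas in x and y, given x > y > 0
    e1 = sympify(s1, locals={'x': x, 'y': y})
    e2 = sympify(s2, locals={'x': x, 'y': y})
    # encode x > y by the substitution x = y + z with z > 0
    diff = (e1 - e2).subs(x, y + z).expand()
    facts = Q.positive(y) & Q.positive(z)
    if ask(Q.positive(diff), facts):
        return 1  # first formula is bigger
    if ask(Q.positive(-diff), facts):
        return 2  # second formula is bigger
    return 0      # can't decide

print(foo("x+y", "2*x"))
print(foo("2*x", "x+y"))
print(foo("x", "2*y"))
```

On the question's examples this returns 2, 1, and 0 respectively: for instance, "2*x" minus "x+y" reduces to z after the substitution, which the assumptions say is positive.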
It isn't as smart as it could be yet, so you'll get a lot of Nones now when the answer could be known. In particular, I don't think it will really use the x > y fact very well at this point.