I am asked to write a program to solve the equation x^3 + x - 1 = 0 using fixed point iteration.
What is the algorithm for fixed point iteration?
Is there any fixed point iteration code sample in Python? (Not a function from some module, but code that implements the algorithm itself.)
Thank you.
First, read this:
Fixed point iteration: Applications
I chose Newton's method.
Now, if you'd like to learn about generator functions, you could define a generator function and instantiate a generator object as follows:
def newtons_method(n):
    n = float(n)  # force float arithmetic
    # Newton update for f(x) = x**3 + x - 1: next = x - f(x)/f'(x)
    nPlusOne = n - (pow(n, 3) + n - 1) / (3 * pow(n, 2) + 1)
    while 1:
        yield nPlusOne
        n = nPlusOne
        nPlusOne = n - (pow(n, 3) + n - 1) / (3 * pow(n, 2) + 1)
approxAnswer = newtons_method(1.0) #1.0 can be any initial guess...
Then you can gain successively better approximations by calling:
approxAnswer.next()
See PEP 255 or Classes (Generators) - Python v2.7 for more info on generators.
For example
approx1 = approxAnswer.next()
approx2 = approxAnswer.next()
Or better yet use a loop!
As for deciding when your approximation is good enough... ;)
Pseudocode is here; you should be able to figure it out from there.
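For instance, a minimal sketch of such a loop (the tolerance eps and the variable names here are illustrative, not part of the original answer):

eps = 1e-10  # stop when successive approximations differ by less than this
approxAnswer = newtons_method(1.0)
prev = approxAnswer.next()
while True:
    curr = approxAnswer.next()
    if abs(curr - prev) < eps:  # successive guesses agree: good enough
        break
    prev = curr
print curr  # the real root of x^3 + x - 1 = 0, near 0.6823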
I need help evaluating an expression. I just started it, but I am at a loss for what comes next, and all the for loops I am using seem unnecessary. The expression has sums, products, and combinations:
What I tried is incomplete and, in my opinion, not accurate. I tried several approaches, but this is all I can come up with for now. I don't have the denominator yet.
i = 10
N = 3.1
j = []
for x in range(1, i + 1):
    for y in range(1, i):
        for z in range(1, n - i):  # note: n is not defined here
            l = N * y * z
            j.append(l)
ll = sum(j)
Any help is appreciated. I want to be able to understand it so I can do more complex examples.
Here are some hints. If you try them and are still stuck, ask for more help.
First, you know that the expression involves "combinations," also called "binomial coefficients." So you will need a routine that calculates those. Here is a question with multiple answers on how to calculate these numbers. Briefly, you can use the scipy package or make your own routine that uses Python's factorial function or that uses iteration.
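For instance, a routine based on Python's factorial function might look like this (a sketch; the helper name comb is mine, and recent Python versions also ship math.comb):

from math import factorial

def comb(n, k):
    """Binomial coefficient C(n, k) computed from factorials."""
    return factorial(n) // (factorial(k) * factorial(n - k))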
Next, you see that the expression involves sums and products and is written as a single expression. Python has a sum function which works on generator expressions (as well as list and set comprehensions and other iterables). Your conversion from math to Python will be easier if you know how to set up such expressions. If you do not understand these generators/iterables and how to sum them, research the topic. This approach is not strictly necessary, since you could use loops rather than generators, but it will make things easier. Study until you understand an expression (including why the final number in the range has 1 added to it) such as
sum(N * f(x) for x in range(1, 5+1))
Last, your expression has products, but Python has no built-in way to take the product of an iterable. Here is such a function in Python 3.
from operator import mul
from functools import reduce

def prod(iterable):
    """Return the product of the numbers in an iterable."""
    return reduce(mul, iterable, 1)
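For example, a quick sanity check:

>>> prod(range(1, 5))
24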
With all of that, your desired expression will look like this (you will need to finish the job by replacing the ... with something more useful):
numerator = sum(N * prod(... for y in range(1, 1+1)) for x in range(1, 5+1))
denominator = prod(y + N for y in range(1, 5+1))
result = numerator / denominator
Note that your final result is a function of N.
The question is available here. My Python code is
def solution(A, B):
    if len(A) == 1:
        return [1]
    ways = [0] * (len(A) + 1)
    ways[1], ways[2] = 1, 2
    for i in xrange(3, len(ways)):
        ways[i] = ways[i-1] + ways[i-2]
    result = [1] * len(A)
    for i in xrange(len(A)):
        result[i] = ways[A[i]] & ((1 << B[i]) - 1)
    return result
The time complexity detected by the system is O(L^2), and I can't see why. Thank you in advance.
First, let's show that the runtime genuinely is O(L^2). I copied a section of your code, and ran it with increasing values of L:
import time
import matplotlib.pyplot as plt

def solution(L):
    if L == 0:
        return
    ways = [0] * (L + 5)
    ways[1], ways[2] = 1, 2
    for i in xrange(3, len(ways)):
        ways[i] = ways[i-1] + ways[i-2]

points = []
for L in xrange(0, 100001, 10000):
    start = time.time()
    solution(L)
    points.append(time.time() - start)

plt.plot(points)
plt.show()
The resulting graph (not reproduced here) shows the running time growing quadratically in L.
To understand why this is O(L^2) when the obvious "time complexity" calculation suggests O(L), note that "time complexity" is not a well-defined concept on its own, since it depends on which basic operations you're counting. Normally the basic operations are taken for granted, but in some cases you need to be more careful. Here, if you count additions as a basic operation, then the code is O(L). However, if you count bit (or byte) operations, then the code is O(L^2). Here's the reason:
You're building an array of the first L Fibonacci numbers. The number of digits in the i-th Fibonacci number is Theta(i), so ways[i] = ways[i-1] + ways[i-2] adds two numbers with approximately i digits each, which takes O(i) time if you count bit or byte operations.
This observation gives you an O(L^2) bit operation count for this loop:
for i in xrange(3, len(ways)):
    ways[i] = ways[i-1] + ways[i-2]
In the case of this program, it's quite reasonable to count bit operations: your numbers grow without bound as L increases, and adding huge numbers takes time linear in their length rather than O(1).
You can fix the complexity of your code by computing the Fibonacci numbers mod 2^32 (since 2^B[i] divides 2^32, the final masking still gives the right answer). That keeps a finite bound on the size of the numbers you're dealing with:
for i in xrange(3, len(ways)):
    ways[i] = (ways[i-1] + ways[i-2]) & ((1 << 32) - 1)
There are some other issues with the code, but this will fix the slowness.
I've taken the relevant parts of the function:
def solution(A, B):
    for i in xrange(3, len(A) + 1):  # replaced ways for clarity
        # ...
    for i in xrange(len(A)):
        # ...
    return result
Observations:
1. A is an iterable object (e.g. a list).
2. You're iterating over the elements of A in sequence.
3. The behavior of your function depends on the number of elements in A, making it O(A).
4. You're iterating over A twice, meaning 2 * O(A) -> O(A).
On point 4: since 2 is a constant factor, 2 * O(A) is still in O(A).
I think the page is not correct in its measurement. Had the loops been nested, then it would've been O(A²), but the loops are not nested.
This short sample is O(N²):
def process_list(my_list):
    for i in range(0, len(my_list)):
        for j in range(0, len(my_list)):
            # do something with my_list[i] and my_list[j]
            pass
I've not seen the code the page is using to 'detect' the time complexity of the code, but my guess is that the page is counting the number of loops you're using without understanding much of the actual structure of the code.
EDIT1:
Note that, based on this answer, the time complexity of the len function is actually O(1), not O(N), so the page is not getting its measurement from incorrectly counting calls to len. If it were, it would've claimed an even larger order of growth, because len is used 4 separate times.
EDIT2:
As @PaulHankin notes, asymptotic analysis also depends on what's considered a "basic operation". In my analysis, I counted additions and assignments as "basic operations", using the uniform cost method rather than the logarithmic cost method, which I did not mention at first.
Simple arithmetic operations are usually treated as basic operations. This is what I see most commonly done, unless the algorithm being analysed is itself an implementation of a basic operation (e.g. the time complexity of a multiplication function), which is not the case here.
The only reason why we have different results appears to be this distinction. I think we're both correct.
EDIT3:
While an algorithm in O(N) is also in O(N²), I think it's reasonable to state that the code is still in O(N), because, at the level of abstraction we're using, the computational steps that seem most relevant (i.e. most influential) are the loop iterations as a function of the size of the input iterable A, not the number of bits used to represent each value.
Consider the following algorithm to compute a^n:
def function(a, n):
    r = 1
    for i in range(0, n):
        r *= a
    return r
Under the uniform cost method, this is in O(N), because the loop is executed n times. Under the logarithmic cost method, however, the algorithm turns out to be in O(N²), because the multiplication in r *= a is itself in O(N): the number of bits needed to represent r grows with the size of the number itself.
The Codility Ladder task is best solved as described here.
It is super tricky.
We first compute the Fibonacci sequence for the first L+2 numbers. The first two numbers are used only as fillers, so we have to index the sequence as A[idx]+1 instead of A[idx]-1. The second step is to replace the modulo operation by removing all but the n lowest bits, as in the sketch below.
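A minimal sketch of that approach (assuming Codility's solution(A, B) signature and that B[i] never exceeds 30, which is my reading of the task's constraints):

def solution(A, B):
    L = len(A)
    mask = (1 << 30) - 1   # keep only the 30 lowest bits (enough if B[i] <= 30)
    fib = [0] * (L + 2)    # fib[0] and fib[1] are the fillers
    fib[1] = 1
    for i in xrange(2, L + 2):
        fib[i] = (fib[i-1] + fib[i-2]) & mask
    # the number of ways to climb A[i] rungs is the (A[i]+1)-th Fibonacci number
    return [fib[A[i] + 1] & ((1 << B[i]) - 1) for i in xrange(L)]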
Thank you for your help.
I am very new to programming, but have decided to learn Python. I am writing a program that checks whether a number is prime. Mathematically, this can be done by checking whether (x-1)^p - (x^p - 1) is divisible by p (capable of being divided with no remainder); if it is, then p is a prime.
However, I have run into trouble. This is my code so far:
from sympy import *

x = symbols('x')
p = 11
f = pow(x - 1, p) - (pow(x, p) - 1)  # (x-1)^p - (x^p - 1)
f1 = expand(f)
# f1 == -11*x**10 + 55*x**9 - 165*x**8 + 330*x**7 - 462*x**6 + 462*x**5 - 330*x**4 + 165*x**3 - 55*x**2 + 11*x
f2 = f1/p
# f2 == -x**10 + 5*x**9 - 15*x**8 + 30*x**7 - 42*x**6 + 42*x**5 - 30*x**4 + 15*x**3 - 5*x**2 + x
To tell whether the number p is prime, I need to check whether the coefficients of the polynomial are divisible by p, so I have to check whether the coefficients of f2 are whole numbers or fractions.
This is what I would like the program to check: https://www.youtube.com/watch?v=HvMSRWTE2mI
I have tried converting to int, but it still shows fractions like 1/2 and 3/7. I want it to show only whole numbers.
How do I do that?
What the method effectively does is expand the polynomial and drop the first (x^p) and last (x^0) coefficients. Then you have to iterate through the rest and check for divisibility. Since a polynomial expansion of power p produces p+1 terms (from 0 to p), we want to collect the p-1 middle terms (from 1 to p-1). This is all summed up in the following code.
from sympy.abc import x

def is_prime_sympy(p):
    poly = pow(x - 1, p).expand()
    return not any(poly.coeff(x, i) % p for i in xrange(1, p))
This works, but the higher the number you input, e.g. 1013, the longer you'll notice it takes. Sympy is slow because internally it stores all expressions as class instances, and all the multiplications and additions take a long time. Instead, we can simply generate the coefficients using Pascal's triangle. For the polynomial (x - 1)^p the coefficients alternate in sign, but we don't care about that; we just want the raw numbers. Credit to Copperfield for pointing out that you only need half of the coefficients, because of symmetry.
import math

def combination(n, r):
    return math.factorial(n) // (math.factorial(r) * math.factorial(n - r))

def pascals_triangle(row):
    # only generate half of the coefficients, because of symmetry;
    # the bound row//2 + 1 ensures the middle term of even rows is included
    return (combination(row, term) for term in xrange(1, row//2 + 1))

def is_prime_math(p):
    return not any(c % p for c in pascals_triangle(p))
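For example:

>>> is_prime_math(11)
True
>>> is_prime_math(12)
False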
We can time both methods now to see which one is faster.
import time

def benchmark(p):
    t0 = time.time()
    is_prime_math(p)
    t1 = time.time()
    is_prime_sympy(p)
    t2 = time.time()
    print 'Math: %.3f, Sympy: %.3f' % (t1-t0, t2-t1)
And some tests.
>>> benchmark(512)
Math: 0.001, Sympy: 0.241
>>> benchmark(2003)
Math: 3.852, Sympy: 41.695
We know that 512 is not a prime. The very second term we check for divisibility fails the test, so most of the time is actually spent generating the coefficients. Python computes them lazily, while sympy must expand the whole polynomial before we can start collecting them. This shows us that a generator approach is preferable.
2003 is prime, and here we notice sympy performs about 10 times slower. In fact, virtually all of the time is spent generating the coefficients, since iterating over 2000 elements for a modulo operation takes no time. So if there are any further optimisations to be made, that's where one should focus.
numpy.poly1d()
Numpy has a class that can manipulate polynomial coefficients, and it's exactly what we want. It even works relatively fast for powers up to 50k. However, in its original implementation it's useless to us, because the coefficients are stored as signed int32 values, which means they very quickly overflow and our modulo operations are thrown off. In fact, it fails for a number as small as 37.
But it's fast, though, right? Maybe we could hack it to accept infinite-precision integers... Maybe that's possible, maybe it isn't. But even if it is, we have to consider that the reason it is so fast may be exactly that it uses a fixed-precision type under the hood.
For the sake of curiosity, this is what the implementation would look like if it were of any use.
import numpy as np

def is_prime_numpy(p):
    poly = pow(np.poly1d([1, -1]), p)
    return not any(c % p for c in poly.coeffs[1:-1])
And for the curious ones, the source code is located in ...\numpy\lib\polynomial.py.
I am not sure I understood what you mean, but to check whether a number is an integer or a float you can use isinstance:
>>> isinstance(1/2.0, float)
True
>>> isinstance(1/2, float)
False
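Note that the coefficients you get from sympy (as in your f2) are sympy number objects such as Rational and Integer, not Python floats, so an isinstance check against float won't behave as expected there. A small sketch of an alternative, using sympy's is_integer attribute:

>>> from sympy import Rational, Integer
>>> Rational(1, 2).is_integer
False
>>> Integer(3).is_integer
True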
Beginner programmer here. I have been assigned to create a function Roots that takes two parameters x and n (n has to be an integer) and then calculates all complex and real roots of the equation z^n = x. However, the only module/package I can use is math. Also, I have been told that certain aspects of the following function Power_complex play a big role in creating Roots:
def Power_complex(re, im, n):
    # calculates the n-th power of a complex number a,
    # where 're' is the real part and 'im' the imaginary part
    import math
    r = math.sqrt(re**2 + im**2)   # modulus
    h = math.atan2(im, re)         # argument (angle); atan2 takes (y, x)
    ren = (r**n) * math.cos(h*n)   # real part of a^n
    imn = (r**n) * math.sin(h*n)   # imaginary part of a^n
    print '(', re, '+', im, 'i', ')', '^', n, '=', ren, '+', imn, 'i'  # display the result
    return ren, imn
Also, I need to somehow implement a for loop in Roots.
I have been pondering this for hours, but alas, I really can't figure it out, and I am hoping one of you can help me out.
BTW, my Python version is 2.7.10.
The expression for the solutions (explained here) is
z_k = r^(1/n) * exp(i * (theta + 2*pi*k) / n), for k = 0, 1, ..., n-1
where r is the modulus and theta the argument.
In the case that z^n is real, equal to the x in your question, then r = |x| and the angle is 0 or pi for positive and negative values, respectively.
So you compute the modulus and argument as you have done, and then build one solution for each value of k:
z = [r**(1./n) * exp(1j * (theta + 2*pi*k) / n ) for k in range(n)]
This line uses a Python technique called list comprehension. An equivalent way of doing it (that you may be more familiar with) could be:
z = []
for k in range(n):
    nthroot = r**(1./n) * exp(1j * (theta + 2*pi*k) / n)
    z.append(nthroot)
Printing them out could be done in the same fashion, using a for-loop:
for i in range(len(z)):
    print "Root #%d = %g + i*%g" % (i, z[i].real, z[i].imag)
Note that the exp function used must come from the module cmath (math can't handle complex numbers). If you are not allowed to use cmath, I suggest you rewrite the expression for the solutions into a form without modulus and argument.
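If cmath is indeed off-limits, here is a minimal sketch using only math, via Euler's formula (the function name and the (re, im) return convention are my own, building on the formulas above):

import math

def Roots(x, n):
    """Return the n complex n-th roots of the real number x as (re, im) pairs."""
    r = abs(x) ** (1.0 / n)              # modulus of every root
    theta = 0.0 if x >= 0 else math.pi   # argument of x: 0 if positive, pi if negative
    roots = []
    for k in range(n):
        angle = (theta + 2 * math.pi * k) / n
        roots.append((r * math.cos(angle), r * math.sin(angle)))
    return roots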
Successive approximation is a general method in which, on each iteration of an algorithm, we find a closer estimate of the answer we are seeking. One class of successive approximation algorithms uses the idea of a fixed point. If f(x) is a mathematical function, then finding the x such that f(x) = x gives us the fixed point of f.
One way to find a fixed point is to start with some guess (e.g. guess = 1.0) and, if this is not good enough, use the value of f(guess) as the next guess. We can keep repeating this process until we get a guess that is within epsilon of f(guess).
Here's my code:
def fixedPoint(f, epsilon):
    """
    f: a function of one argument that returns a float
    epsilon: a small float
    returns the best guess when that guess is less than epsilon
    away from f(guess), or after 100 trials, whichever comes first.
    """
    guess = 1.0
    for i in range(100):
        if abs(f(guess) - guess) < epsilon:
            return guess
        else:
            guess = f(guess)
    return guess
Going further:
def sqrt(a):
    def tryit(x):
        return 0.5 * (a/x + x)
    return fixedPoint(tryit, 0.0001)
I want to compute the square root of a number a, which is the fixed point of the function f(x) = 0.5 * (a/x + x). And done.
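For example (digits truncated):

>>> sqrt(2)
1.4142156862...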
(Above solutions are correct)
Another example in Python, using the same fixed-point idea:
def fixedPoint(f, epsilon):
    """
    f: a function of one argument that returns a float
    epsilon: a small float
    returns the best guess when that guess is less than epsilon
    away from f(guess), or after 100 trials, whichever comes first.
    """
    guess = 1.0
    for i in range(100):
        if -epsilon < f(guess) - guess < epsilon:
            return guess
        else:
            guess = f(guess)
    return guess
Further, let the function f find square roots using the Babylonian method:
def sqrt(a):
    def babylon(x):
        def test(x):
            return 0.5 * ((a / x) + x)
        return test(x)
    return fixedPoint(babylon, 0.0001)
You might want to take a look at the optimization and root-finding functions in scipy.
Especially scipy.optimize.fixed_point. The actual source code is here.
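For instance, a minimal sketch applying it to the equation from the question, x^3 + x - 1 = 0, rewritten in fixed-point form as x = 1 / (x^2 + 1):

from scipy.optimize import fixed_point

# solve x = 1 / (x**2 + 1), i.e. x**3 + x - 1 = 0
root = fixed_point(lambda x: 1.0 / (x**2 + 1), 1.0)
print root  # approximately 0.6823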