Egyptian fraction using Fibonacci's Algorithm - python

I have this problem in which we are trying to find an Egyptian fraction using Fibonacci's algorithm. The numerator must always be equal to one. Then we have to determine whether the denominator is a practical number.
We take 2 inputs from the user, each of which must be a positive number.
I have already found a way to determine whether or not the bottom number of the rational number is a practical number (a very similar example: Practical Number), but I am lost on how to convert it to an Egyptian fraction.
In the instructions, it states that we should find the biggest fraction based on our factors list. For example: if the rational number is 5/8, the factors of 8 are [1,2,4]. The largest fraction that could be subtracted from this is 1/2.
I don't even know where to start with this conversion. I just know that if the second number from the user input is a practical number, I must calculate the equivalent Egyptian fraction.
The output should run similarly to this:
Num1 : 7
Num 2: 8
Denominator factors: [1,2,4,8]
Num 2 is a practical number.
Fraction can be represented by:
1/2 + 1/4 + 1/8
Any starting help would be appreciated. I truly understand the concept and what it's asking - I am just stuck on where to start. Example code would be a great help.

Okay ... I'm going to echo what you've just told us, using the example 7/8.
Start with the two parts of the fraction: numer=7, denom=8
Determine that denom is a practical number; this includes returning its factors, [1, 2, 4, 8].
Sort the factors in order; if you're guaranteed that the fraction is always less than 1, you can discard the 1 factor.
Iterate through the list, one factor at a time, building the terms of your Egyptian fraction.
pseudo-code:
for factor in factor_list:
    weight = denom / factor
    while weight <= numer:   # note: <=, so the final 1/denom term is taken too
        # Add the fraction 1/factor to the solution;
        # reduce numer by weight (subtracting that fraction).

# When you exit these loops,
# numer should be 0, and
# you should have accumulated all of
# the "1/factor" fractions in your solution.

How does python handle very small float numbers?

This is more a curiosity than a technical problem.
I'm trying to better understand how floating point numbers are handled in Python. In particular, I'm curious about the number returned by sys.float_info.epsilon = 2.220446049250313e-16.
I can see, looking at the documentation on double-precision floating point, that this number can also be written as 1/pow(2, 52). So far, so good.
I decided to write a small Python script (see below; disclaimer: this code is ugly and can burn your eyes) which starts from eps = 0.1 and makes the comparison 1.0 == 1.0 + eps. If False, it means eps is big enough to make a difference. Then I try to find a smaller number by subtracting 1 from the last digit, appending the digit 1 to the right of it, and looking for False again by incrementing that new last digit.
I am pretty confident that the code is OK because at a certain point (32 decimal places) I get eps = 0.00000000000000011102230246251567 = 1.1102230246251567e-16, which is very close to 1/pow(2, 53) = 1.1102230246251565e-16 (the last digit differs by 2).
I thought the code would not produce sensible numbers after that. However, the script kept working, always zeroing in on a more precise decimal number, until it reached 107 decimal places. Beyond that, the code did not find a False result for the test. I was very intrigued by this and could not wrap my head around it.
Does this 107-decimal-place float have any meaning? If so, what is particular about it?
If not, what is Python doing past the 32-decimal-place eps? Surely there is some algorithm Python is cranking through to get to the 107-digit float.
The script:
total = 520  # hard-coded after trial-and-error max number of iterations
dig = [1]
n = 1
for t in range(total):
    eps = '0.' + ''.join(str(x) for x in dig)
    if 1.0 == 1.0 + float(eps):
        if dig[-1] == 9:
            print(eps, n)
            n += 1
            dig.append(1)
        else:
            dig[-1] += 1
    else:
        print(eps, n)
        n += 1
        dig[-1] -= 1
        dig.append(1)
The output (part of it); the values are eps and the number of decimal places:
0.1 1
0.01 2
(...)
0.000000000000001 15
0.0000000000000002 16
0.00000000000000012 17
0.000000000000000112 18
0.0000000000000001111 19
0.00000000000000011103 20
(...)
0.0000000000000001110223024625157 31
0.00000000000000011102230246251567 32
0.000000000000000111022302462515667 33
(...)
0.000000000000000111022302462515666368314810887391490808258832543534838643850548578484449535608291625976563 105
0.0000000000000001110223024625156663683148108873914908082588325435348386438505485784844495356082916259765626 106
0.00000000000000011102230246251566636831481088739149080825883254353483864385054857848444953560829162597656251 107
I ran this code in Python 3.8.3 (tags/v3.8.3:6f8c832, May 13 2020, 22:20:19) [MSC v.1925 32 bit (Intel)] on win32.
Your test involves a double rounding and is finding the number 2^−53 + 2^−105.
Many Python implementations use the IEEE-754 binary64 format. (This is not required by the Python documentation.) In this format, the significand (fraction portion) of a floating-point number has 53 bits. (52 are encoded in a primary significand field. 1 is encoded via the exponent field.) For numbers in the interval [1, 2), the significand is scaled (by the exponent portion of the floating-point representation) so that its leading bit corresponds to a value of 1 (2^0). This means its trailing bit corresponds to a value of 2^−52.
Thus, the difference between 1 and the next number representable in this format is 2^−52; that is the smallest change that can be made in the number by increasing the low bit.
Now, suppose x contains 1. If we add 2^−52 to it, we will of course get 1 + 2^−52, since that result is representable. What happens if we add something slightly smaller, say ¾•2^−52? In this case, the real-number result, 1 + ¾•2^−52, is not representable. It must be rounded to a representable number. The common default rounding method is to round to the nearest representable number. In this case, that is 1 + 2^−52.
Thus, adding to 1 some numbers smaller than 2^−52 still produces 1 + 2^−52. What is the smallest number we can add to 1 and get this result?
In case of ties, where the real-number result is exactly halfway between two representable numbers, the common default rounding method uses the one with the even low bit. So, with a choice between 1 (trailing bit 0) and 1 + 2^−52 (trailing bit 1), it chooses 1. That means if we add ½•2^−52 to 1, it will produce 1.
If we add any number greater than ½•2^−52 to 1, there will be no tie; the real-number result will be nearer to 1 + 2^−52, and that will be the result.
The next question is: what is the smallest number greater than ½•2^−52 (that is, 2^−53) that we can add to 1? If the number has to be in the IEEE-754 binary64 format, it is limited by its significand. With the leading bit scaled to represent 2^−53, the trailing bit represents 2^(−53−52) = 2^−105.
Therefore, 2^−53 + 2^−105 is the smallest binary64 value we can add to 1 to get 1 + 2^−52.
As your program tests values, it works with a decimal numeral. That decimal numeral is converted to the floating-point format and then added to 1. So it is finding the smallest number in the floating-point format that produces a sum greater than 1, and that is the number described above, 2^−53 + 2^−105. Its value in decimal is 1.110223024625156663683148108873914908082588325435348386438505485784844495356082916259765625•10^−16.
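A quick sketch to confirm those values from Python (assuming the binary64 behavior described above):
eps = 2.0**-53 + 2.0**-105
print(1.0 + eps == 1.0 + 2.0**-52)  # True: the sum rounds up to 1 + 2^-52
print(1.0 + 2.0**-53 == 1.0)        # True: the exact tie rounds to even, i.e. back to 1
print(format(eps, '.100g'))         # the 1.110223024625156663...e-16 decimal shown above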

technical problem on python with infinite float

I am using Python, and I have a problem: I want to write a program that can count from 1 to infinity, to find out how much infinity is.
Here is my code :
a = 0
for i in range(1, 10e+99):
    a += 1
print(a)
but it says "'float' object cannot be interpreted as an integer",
whereas 10e+99 is not a float.
Help me please.
Per the Python 2 documentation and Python 3 documentation, range requires integer arguments.
In IEEE-754 32-bit binary floating-point, the largest representable finite number is about 3.4028e38. When converting numerals, such as 1e99 in source code, to this format, any number greater than or equal to 2^128 − 2^104 (340,282,377,062,143,265,289,209,819,405,393,854,464) will be converted to infinity, assuming the common round-to-nearest-ties-to-even method is used. Because of this, 10e+99 (which stands for 10•10^99 and hence 10^100) would act like infinity. However, Python implementations more typically use IEEE-754 64-bit binary floating-point, in which the largest representable finite number is 2^1024 − 2^971, and 10e99 acts as a finite number.1 Thus, to get infinity, you would need around 1e309.
It is not humanly possible to test whether a loop incrementing by 1 from 1 to 10e99 will produce infinity, because the total computing power available to humans is only around 10^30 additions per year (for a loose sense of "around", give or take some orders of magnitude). This is insufficient to count to the limit of 32-bit floating-point finite numbers, let alone that of the 64-bit floating-point numbers.
If the arithmetic were done in a floating-point format, it would never reach infinity even with unlimited computing power because, once the sum reached 2^53 in IEEE-754 64-bit binary, adding 1 would not change the number; 2^53 would be produced in each iteration. This is because IEEE-754 64-bit binary has only 53 bits available for the significand, so 2^53 + 1 is not representable. The nearest representable values are 2^53 and 2^53 + 2. When arithmetic is performed, the exact real-number result is by default rounded to the nearest representable value, with ties rounded to the number with the even low bit in its significand. When 1 is added to 2^53, the real-number result 2^53 + 1 is rounded to 2^53, and the sum thus stays at 2^53 for all future iterations.
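A two-line sketch of that sticking point (assuming the usual binary64 floats):
x = 2.0**53
print(x + 1 == x)   # True: 2^53 + 1 is not representable, so the sum rounds back to 2^53
print((x + 2) - x)  # 2.0: 2^53 + 2 is representable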
Footnote
1 The representable value nearest 10^100 is 10,000,000,000,000,000,159,028,911,097,599,180,468,360,808,563,945,281,389,781,327,557,747,838,772,170,381,060,813,469,985,856,815,104.
The problem arises because the range() function takes an int, whereas 10e+99 is indeed a float. While 10e+99 is of course not infinity (and therefore you shouldn't expect infinity to pop up anywhere during the execution of your program), if you really wanted to get your for loop to work as it is, you could simply do
a = 0
for i in range(1, int(10e+99)):
    a += 1
print(a)
As other users have pointed out, I would however rethink your strategy entirely: using a range-based for loop to "find out" the value of infinity just doesn't work. Infinity is not a number.
Perhaps you meant your program to go on forever:
a = 0
while True:
    a += 1
    print(a)
In my head when I see while True: I replace it with 'forever'.
With this code you can check whether your variable is infinity or not.
import math

infinity = float('inf')
a = 99999999999999999999999999999999
if a > infinity:
    print('Your number is an infinity number')
else:
    print('Your number is not an infinity number')

# or you can check with math.isinf
print('Your number is Infinity: ', math.isinf(infinity))

# Also, infinity can be both positive and negative
Note: infinity has no end, so whatever value or number you enter, the comparison a > infinity will always return False.
Here is what is going to happen if you correct and execute your program:
a = 0
for i in range(1, 10**100):
    a += 1
    print(a)
Suppose you have a super-efficient Python virtual machine (everyone knows how efficient they are...).
Suppose you have a very efficient implementation of (unbounded) large integers.
Suppose each loop takes a few machine cycles to print those numbers in decimal form (say only 1000, which is well under reality).
Suppose each cycle takes approximately 1.0e-10 s (10 GHz), which implies an implementation of print that takes advantage of parallelism.
With those unrealistic hypotheses, that's already 10^93 s necessary for the program to complete.
The age of the universe is estimated to be less than 10^18 s. Wow! It's going to be long.
Now let's compute the energy it's going to take on the basis of a 400 W computer.
Assuming that all the Sun's matter (2e30 kg) could be converted into electrical power for your computer (through E = mc^2), you are going to consume about 2•10^48 Suns' worth of matter to perform this computation.
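The same back-of-the-envelope arithmetic in Python, under the hypotheses above:
iterations = 10.0**100                 # the loop count, as a float to keep it simple
seconds = iterations * 1000 * 1.0e-10  # 1000 cycles per loop at 1e-10 s per cycle
sun_joules = 2e30 * (3e8)**2           # E = m c^2 for the Sun's mass, ~1.8e47 J
print(seconds)                         # ~1e93 s
print(seconds * 400 / sun_joules)      # ~2e48 Suns for a 400 W computer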
Before you hit return, I kindly ask you: think twice! Save the universe!

What are the odds of a repeat in numpy.random.rand(n) (assuming perfect randomness)?

For the moment, put aside any issues relating to pseudorandom number generators and assume that numpy.random.rand perfectly samples from the discrete distribution of floating point numbers over [0, 1). What are the odds of getting at least two exactly identical floating point numbers in the result of:
numpy.random.rand(n)
for any given value of n?
Mathematically, I think this is equivalent to first asking how many IEEE 754 singles or doubles there are in the interval [0, 1). Then I guess the next step would be to solve the equivalent birthday problem? I'm not really sure. Anyone have some insight?
The computation performed by numpy.random.rand for each element generates a number 0.<53 random bits>, for a total of 2^53 equally likely outputs. (Of course, the memory representation isn't a fixed-point 0.stuff; it's still floating point.) This computation is incapable of producing most binary64 floating-point numbers between 0 and 1; for example, it cannot produce 1/2^60. You can see the code in numpy/random/mtrand/randomkit.c:
double
rk_double(rk_state *state)
{
    /* shifts : 67108864 = 0x4000000, 9007199254740992 = 0x20000000000000 */
    long a = rk_random(state) >> 5, b = rk_random(state) >> 6;
    return (a * 67108864.0 + b) / 9007199254740992.0;
}
(Note that rk_random produces 32-bit outputs, regardless of the size of long.)
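If it helps to see it outside C, here is my Python rendering of the same computation (a sketch; random.getrandbits stands in for rk_random's 32-bit outputs, and this is an illustration, not NumPy's actual code path):
import random

def rk_double_py():
    a = random.getrandbits(32) >> 5  # keep the top 27 bits
    b = random.getrandbits(32) >> 6  # keep the top 26 bits
    return (a * 67108864.0 + b) / 9007199254740992.0  # (a*2^26 + b) / 2^53

print(rk_double_py())  # one of 2^53 equally likely values in [0, 1)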
Assuming a perfect source of randomness, the probability of repeats in numpy.random.rand(n) is 1-(1-0/k)(1-1/k)(1-2/k)...(1-(n-1)/k), where k=2^53. It's probably best to use an approximation instead of calculating this directly for large values of n. (The approximation may even be more accurate, depending on how the approximation error compares to the rounding error accumulated in a direct computation.)
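For example, a sketch using the standard birthday-problem approximation 1 - exp(-n(n-1)/(2k)):
import math

def p_repeat(n, k=2.0**53):
    # Probability of at least one repeat among n draws from k equally likely outcomes.
    return -math.expm1(-n * (n - 1) / (2 * k))

print(p_repeat(10**6))  # ~5.5e-5: a million samples almost never collide
print(p_repeat(10**8))  # ~0.43: at a hundred million samples, repeats are likely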
I think you are correct, this is like the birthday problem.
But you need to decide on the number of possible options. You do this by deciding the precision of your floating point numbers.
For example, if you decide to have a precision of 2 digits after the dot, then there are 100 options (including zero and excluding 1).
And if you have n numbers, then the probability of not having a collision is:
P = (100/100) * (99/100) * (98/100) * ... * ((100 - n + 1)/100)
Or, when given R possible numbers and N data points, the probability of no collision is:
P = R! / ((R - N)! * R^N)
And the probability of collision is 1 - P.
This is because the probability of getting any given number is 1/R. And at any point, the probability of a data point not colliding with prior data points is (R-i)/R for i being the index of the data point. But to get the probability of no data points colliding with each other, we need to multiply all the probabilities of data points not colliding with those prior to them. Applying some algebraic operations, we get the equation above.
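If you prefer to evaluate that product exactly instead of approximating, here is a sketch using the fractions module (R and N as in the formulas above; this is only practical for modest N):
from fractions import Fraction

def p_no_collision(R, N):
    # Exact product of (R - i)/R for i = 0 .. N-1.
    p = Fraction(1)
    for i in range(N):
        p *= Fraction(R - i, R)
    return p

# The 2-decimal-places example: R=100 options, N=10 data points.
print(float(1 - p_no_collision(100, 10)))  # ~0.37 chance of a collision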

Does this truly generate a random floating point number? (Python)

For an introduction to Python course, I'm looking at generating a random floating point number in Python, and I have seen a standard recommended code of
import random
lower = 5
upper = 10
range_width = upper - lower
x = random.random() * range_width + lower
for a random floating point from 5 up to but not including 10.
It seems to me that the same effect could be achieved by:
import random
x = random.randrange(5, 10) + random.random()
Since that would give an integer of 5, 6, 7, 8, or 9, and then tack a fractional part onto it.
The question I have is would this second code still give a fully even probability distribution, or would it not keep the full randomness of the first version?
According to the documentation, yes: random() is indeed a uniform distribution.
random(), which generates a random float uniformly in the semi-open range [0.0, 1.0). Python uses the Mersenne Twister as the core generator.
So both code examples should be fine. To shorten your code, you can equally do:
random.uniform(5, 10)
Note that uniform(a, b) is simply a + (b - a) * random() so the same as your first example.
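You can check that equivalence by reseeding the generator before each call (a quick sketch; the printed value will vary by seed, but the two lines match each other):
import random

random.seed(42)
print(random.uniform(5, 10))           # the library call
random.seed(42)
print(5 + (10 - 5) * random.random())  # the formula written out; same value as above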
The second example depends on the version of Python you're using.
Prior to 3.2, randrange() could produce a slightly uneven distribution.
There is a difference. Your second method is theoretically superior, although in practice it only matters for large ranges. Indeed, both methods will give you a uniform distribution. But only the second method can return all values in the range that are representable as a floating point number.
Since your range is so small, there is no appreciable difference. But still there is a difference, which you can see by considering a larger range. If you take a random real number between 0 and 1, you get a floating-point representation with a given number of bits. Now suppose your range is, say, in the order of 2**32. By multiplying the original random number by this range, you lose 32 bits of precision in the result. Put differently, there will be gaps between the values that this method can return. The gaps are still there when you multiply by 4: You have lost the two least significant bits of the original random number.
The two methods can give different results, but you'll only notice the difference in fairly extreme situations (with very wide ranges). For instance, if you generate random numbers between 0 and 2/sys.float_info.epsilon (9007199254740992.0, or a little more than 9 quintillion), you'll notice that the version using multiplication will never give you any floats with fractional values. If you increase the maximum bound to 4/sys.float_info.epsilon, you won't get any odd integers, only even ones. That's because the 64-bit floating point type Python uses doesn't have enough precision to represent all integers at the upper end of that range, and it's trying to maintain a uniform distribution (so it omits small odd integers and fractional values even though those can be represented in parts of the range).
The second version of the calculation will give extra precision to the smaller random numbers generated. For instance, if you're generating numbers between 0 and 2/sys.float_info.epsilon and the randrange call returned 0, you can use the full precision of the random call to add a fractional part to the number. On the other hand if the randrange returned the largest number in the range (2/sys.float_info.epsilon - 1), very little of the precision of the fraction would be used (the number will round to the nearest integer without any fractional part remaining).
Adding a fractional value also can't help you deal with ranges that are too large for every integer to be represented. If randrange returns only even numbers, adding a fraction usually won't make odd numbers appear (it can in some parts of the range, but not for others, and the distribution may be very uneven). Even for ranges where all integers can be represented, the odds of a specific floating point number appearing will not be entirely uniform, since the smaller numbers can be more precisely represented. Large but imprecise numbers will be more common than smaller but more precisely represented ones.
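Here is a small sketch that makes those gaps visible at the extreme ranges mentioned above:
import random
import sys

big = 2 / sys.float_info.epsilon     # 9007199254740992.0, i.e. 2.0**53
x = random.random() * big            # the multiplication method
print(x == int(x))                   # always True: no fractional part survives

bigger = 4 / sys.float_info.epsilon  # 2.0**54
y = random.random() * bigger
print(int(y) % 2 == 0)               # always True: only even integers come out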

python 3.3.2 do I get the right understanding of the function "round"?

Sorry, but I really don't know the meaning of the definition of round in the Python 3.3.2 docs:
round(number[, ndigits])
Return the floating point value number rounded to ndigits digits after the decimal point. If ndigits is omitted, it defaults to zero. Delegates to number.__round__(ndigits).
For the built-in types supporting round(), values are rounded to the closest multiple of 10 to the power minus ndigits; if two multiples are equally close, rounding is done toward the even choice (so, for example, both round(0.5) and round(-0.5) are 0, and round(1.5) is 2). The return value is an integer if called with one argument, otherwise of the same type as number.
I don't understand where the "multiple of 10" and the power come from.
After reading the following examples, I think round(number, n) works like this:
If number is 123.456 and n is 2:
round will get two numbers: 123.45 and 123.46;
round compares abs(number - 123.45) (0.006) and abs(number - 123.46) (0.004), and chooses the smaller one.
So, 123.46 is the result.
And if number is 123.455 and n is 2:
round will get two numbers: 123.45 and 123.46;
round compares abs(number - 123.45) (0.005) and abs(number - 123.46) (0.005). They are equal. So round checks the last digit of 123.45 and 123.46. The even one is the result.
So, the result is 123.46.
Am I right?
If not, could you offer an understandable version of "values are rounded to the closest multiple of 10 to the power minus ndigits"?
ndigits = 0 => pow(10, -ndigits) = 10^(-ndigits) = 1
ndigits = 1 => pow(10, -ndigits) = 10^(-ndigits) = 0.1
etc.
>>> for ndigits in range(6):
...     print(round(123.456789, ndigits) / pow(10, -ndigits))
123.0
1235.0
12346.0
123457.0
1234568.0
12345679.0
Basically, the number you get is always an integer multiple of 10^(-ndigits). For ndigits=0, that means the number you get is itself an integer, for ndigits=1 it means it won't have more than one non-zero value after the decimal point.
It helps to know that anything to the power of 0 equals 1. As ndigits increases, the value
f(ndigits) = 10^(-ndigits) gets smaller. Specifically, as you increase ndigits by 1, you simply shift the decimal place of precision one to the left; e.g. 10^-0 = 1, 10^-1 = 0.1 and 10^-2 = 0.01. The place where the 1 is in the answer is the last point of precision for round.
For the part where it says
For the built-in types supporting round(), values are rounded to the
closest multiple of 10 to the power minus ndigits; if two multiples
are equally close, rounding is done toward the even choice (so, for
example, both round(0.5) and round(-0.5) are 0, and round(1.5) is 2).
This has unexpected behavior in Python 3, and it will not work for all floats. Consider the example you gave: round(123.455, 2) yields the value 123.45. This is not the expected behavior, because the closest multiple of 10^-2 (taking the even choice in the tie) is 123.46, not 123.45!
To understand this, you have to pay special attention to the note below this:
Note The behavior of round() for floats can be surprising: for
example, round(2.675, 2) gives 2.67 instead of the expected 2.68. This
is not a bug: it’s a result of the fact that most decimal fractions
can’t be represented exactly as a float.
And that is why certain floats will round to the "wrong" value, and there is really no easy workaround as far as I am aware. (sadface) You could use fractions (i.e. two variables representing the numerator and the denominator) to represent floats in a custom round function if you want behavior different from the unpredictable behavior of floats.
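Along those lines, here is one possible sketch of a custom round built on the fractions module (the function name and calling convention are mine, not a standard API):
from fractions import Fraction

def round_exact(numer, denom, ndigits):
    # Round the exact rational numer/denom to ndigits decimal places, ties to even.
    scaled = Fraction(numer, denom) * 10**ndigits
    q, r = divmod(scaled.numerator, scaled.denominator)
    frac = Fraction(r, scaled.denominator)
    if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and q % 2 == 1):
        q += 1
    return Fraction(q, 10**ndigits)

print(round(123.455, 2))                    # 123.45: the float is really 123.45499999...
print(float(round_exact(123455, 1000, 2)))  # 123.46: the exact tie rounds to even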
