Right now I am trying to solve Project Euler 71.
Consider the fraction, n/d, where n and d are positive integers. If
n < d and HCF(n,d) = 1, it is called a reduced proper fraction.
If we list the set of reduced proper fractions for d ≤ 8 in ascending
order of size, we get:
1/8, 1/7, 1/6, 1/5, 1/4, 2/7, 1/3, 3/8, 2/5, 3/7, 1/2, 4/7, 3/5, 5/8,
2/3, 5/7, 3/4, 4/5, 5/6, 6/7, 7/8
It can be seen that 2/5 is the fraction immediately to the left of
3/7.
By listing the set of reduced proper fractions for d ≤ 1,000,000 in
ascending order of size, find the numerator of the fraction
immediately to the left of 3/7.
The Current Code:
from fractions import Fraction
import math

n = 428572
d = 1000000
x = Fraction(3,7)
best = Fraction(0)
while d > 1:
    if Fraction(n,d) >= x:
        n -= 1
    else:
        y = Fraction(n,d)
        if (x - y) < (x - best):
            best = y
        d -= 1
        n = int(math.ceil(d*0.428572))
print(best.denominator)
Explanation:
from fractions import Fraction
import math
Needed for Fractions and math.ceil.
n = 428572
d = 1000000
These two variables represent the n and d stated in the original problem. They start out this way because 428572/1000000 is slightly larger than 3/7 (it will be converted to a Fraction later).
x = Fraction(3,7)
best = Fraction(0)
x is just a quick reference to Fraction(3,7) so I don't have to keep retyping it. best keeps track of the fraction that is closest to 3/7 while still being to its left.
while d > 1:
If d <= 1 there is nothing left to check, since n has to be smaller than d. So stop the loop there.
if Fraction(n,d) >= x:
    n -= 1
If the fraction is greater than or equal to 3/7, it isn't to the left of it, so keep subtracting from n until it is to the left of 3/7.
else:
    y = Fraction(n,d)
    if (x - y) < (x - best):
        best = y
If it is to the left of 3/7, check whether 3/7 minus y (the fraction under test) is smaller than 3/7 minus best. The difference closer to zero identifies the fraction that is least far to the left, i.e. closest to 3/7.
d -= 1
n = int(math.ceil(d*0.428572))
Regardless of whether best changes or not, the denominator needs to move on. So subtract one from the denominator and set n so that Fraction(n,d) is slightly greater than 3/7 (the ceil call makes sure it is greater!) to prune the test space.
print(best.denominator)
Finally print what the question wants.
Note
Changing d to 8 and n to 4 (like the test case) gives the desired result of 5 for the denominator. Keeping it as is gives: 999997.
Can someone please explain to me what I am doing wrong?
This isn't the correct way to do things. You are supposed to use the Stern-Brocot tree. You shouldn't have to mess around with floating points at all.
What you are doing wrong: the problem asks you to "find the numerator", but you are printing the denominator.
Apart from that, follow @Antimony's advice and learn about the Stern-Brocot tree; it's useful and fun.
Not to make you feel stupid, but your answer is perfectly correct. Read the question again and change the last line to:
print(best.numerator)
Also, for the record there is a MUCH more efficient way of calculating this.
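For the record, here is what that more efficient approach can look like: descend the Stern-Brocot tree until the mediant hits the target, then push the adjacent left neighbour as close as the denominator limit allows. This is only a sketch (the helper name left_neighbour is mine; it assumes 0 < target < 1):

```python
from fractions import Fraction

def left_neighbour(target, limit):
    # Walk the Stern-Brocot tree: keep a left bound la/lb and a right
    # bound ra/rb whose mediant converges on `target`.
    t = Fraction(target)
    la, lb = 0, 1          # left bound 0/1
    ra, rb = 1, 1          # right bound 1/1
    while True:
        ma, mb = la + ra, lb + rb          # the mediant
        if Fraction(ma, mb) < t:
            la, lb = ma, mb
        elif Fraction(ma, mb) > t:
            ra, rb = ma, mb
        else:
            break          # mediant == target, so la/lb is adjacent to it
    # Step the left neighbour toward the target while respecting the limit:
    # (la + k*t.numerator) / (lb + k*t.denominator) stays adjacent for any k.
    k = (limit - lb) // t.denominator
    return Fraction(la + k * t.numerator, lb + k * t.denominator)

print(left_neighbour(Fraction(3, 7), 8))          # 2/5, as in the example
print(left_neighbour(Fraction(3, 7), 1_000_000))  # its numerator is the answer
```

No floating point is involved anywhere, and the loop runs only a handful of times, since adjacency (lb*3 - la*7 == 1) is preserved by every mediant step.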
Related
I want to do division, but with subtraction. I don't necessarily need the exact answer,
and preferably no floating-point numbers.
How can this be achieved?
The process should also be almost as fast as normal division.
Thanks in advance :)
To approximate x divided by y you can subtract y from x until the result is smaller than y; the result of the division is then the number of times you subtracted y from x. However, this doesn't work with negative numbers as written.
Well, let's say you have your numerator and your denominator. The division basically consists in estimating how many denominator you have in your numerator.
So a simple loop should do:
def divide_by_sub(numerator, denominator):
    # Init
    result = 0
    remains = numerator
    # Subtract as much as possible
    while remains >= denominator:
        remains -= denominator
        result += 1
    # Here we have the "floor" part of the result
    return result
This will give you the "floor" part of your result. Please consider adding some guardrails to handle "denominator is zero", "numerator is negative", etc.
My best guess, if you want to go further, would be to add a precision argument to the function (for instance 10 or 100), multiply remains by it, and loop again. It's doable recursively:
def divide_by_sub(numerator, denominator, precision):
    # Init
    result = 0
    remains = numerator
    # Subtract as much as possible
    while remains >= denominator:
        remains -= denominator
        result += 1
    # Here we have the "floor" part of the result. We proceed to more digits
    if precision > 1:
        remains = remains * precision
        float_result = divide_by_sub(remains, denominator, 1)
        result += float_result / precision
    return result
Giving you, for instance for divide_by_sub(7,3,1000) the following:
2.333
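As suggested above, guardrails are worth adding. One possible sketch (the wrapper name divide_by_sub_safe is mine) that rejects a zero denominator and handles negative operands by working on absolute values:

```python
def divide_by_sub_safe(numerator, denominator):
    # Guardrail: repeated subtraction never terminates on a zero denominator
    if denominator == 0:
        raise ZeroDivisionError("division by zero")
    # Work with absolute values, then restore the sign at the end.
    # Note this truncates toward zero, unlike Python's floor division.
    sign = -1 if (numerator < 0) != (denominator < 0) else 1
    remains = abs(numerator)
    step = abs(denominator)
    result = 0
    while remains >= step:
        remains -= step
        result += 1
    return sign * result

print(divide_by_sub_safe(-7, 3))  # -2 (truncated toward zero; -7 // 3 is -3)
```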
I was attempting to approximate the value of pi through the formula
pi = 3 + (4/(2*3*4)) - (4/(4*5*6)) + (4/(6*7*8)) - … (and so on). However, my code (shown below) gave 2 different answers (3.1415926535900383 and 3.141592653590042) when:
the approx variable started with "0" and "3" respectively
n=10000
Does anyone know why?
def approximate_pi(n):
    approx = 0
    deno = 2
    if n == 1:
        return 3
    for x in range(n-1):
        if x % 2:
            approx -= 4/((deno)*(deno+1)*(deno+2))
        else:
            approx += 4/((deno)*(deno+1)*(deno+2))
        deno += 2
    return approx + 3
and
def approximate_pi(n):
    approx = 3
    deno = 2
    if n == 1:
        return 3
    for x in range(n-1):
        if x % 2:
            approx -= 4/((deno)*(deno+1)*(deno+2))
        else:
            approx += 4/((deno)*(deno+1)*(deno+2))
        deno += 2
    return approx
I think it is because most decimal numbers can't be represented exactly as floats. More info here: Why can't decimal numbers be represented exactly in binary?
It is because of the way floats behave in Python. When you start from 0, all the small series terms are summed at a small magnitude and the 3 is added last; when you start from 3, every small term is added to (and rounded against) a value of magnitude around 3. The two orders of operations accumulate different rounding errors, which changes the last few digits of the answer.
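One way to confirm that the gap comes from float rounding rather than from the series itself is to redo both orderings with exact rationals: with fractions.Fraction the two versions agree exactly. A sketch, using a smaller n to keep the exact arithmetic fast:

```python
from fractions import Fraction

def terms(n):
    # the same n-1 alternating series terms used above, as exact rationals
    deno = 2
    for x in range(n - 1):
        term = Fraction(4, deno * (deno + 1) * (deno + 2))
        yield -term if x % 2 else term
        deno += 2

n = 1000
start_at_zero = sum(terms(n), Fraction(0)) + 3  # add the 3 at the end
start_at_three = sum(terms(n), Fraction(3))     # start from 3
assert start_at_zero == start_at_three          # identical in exact arithmetic
```

With floats, the same two orderings differ in the last digits, which pins the discrepancy on rounding order, not on the mathematics.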
An approximation algorithm approximates. Neither number is the true value of π, and what matters is that the longer you run them, the closer both will get to it.
As for the discrepancy itself: mathematically, adding the 3 before the series terms or after them gives exactly the same sum, so with unlimited-precision arithmetic the two versions would agree. The difference you see is an artifact of finite-precision floats: floating-point addition is not associative, so performing the same additions in a different order accumulates slightly different rounding errors.
I have a curious question about Python coding. I have a simple piece of code I use to perform Euler approximations, which numerically approximate the solution of a differential equation. It does this by taking a section of the curve and dividing it into intervals of equal width 'w'.
The code is:
import math
x = 0
y = 2
w = 0.5
while x < 1:
    dydx = 1 - 2*x + y
    deltaY = dydx*w
    y = y + deltaY
    x += w
print(x,y)
Curiously, I found that the code works for 'w' from 1 to 1/5, but not any smaller.
For example, using w = 1/5, the code correctly outputs (1.0, 5.48832...)
Or using w = 1/4, the code correctly outputs (1.0, 5.4414...)
But if I used w = 1/6, the output is (1.16667,6.27523...)
I have adapted the same code for programs running Euler's modified method and Romberg's method (for approximating the same thing) and they do the same thing for w < 1/5.
I feel like the answer to this is either very obvious or very obscure. If anyone has a solution, I would very much appreciate it.
Thank you
The 0.2 cutoff is just a coincidence. What's really going on here is float rounding.
float values can't represent most fractions exactly; they just give you the closest 53-bit binary fraction to the number you wanted. Which leads to rounding errors.
If you add 1/2 to 0 twice, you get exactly 1.
If you add 1/3 to 0 three times, you get a number a tiny bit larger than 1, but 1 is actually the closest binary fraction to that number.
If you add 1/4 to 0 four times, you get exactly 1.
If you add 1/5 to 0 five times, you get a number a tiny bit larger than 1.
If you add 1/6 to 0 six times, you get a number a tiny bit smaller than 1.
If you add 1/7 to 0 seven times, you get a number a tiny bit smaller than 1.
If you add 1/8 to 0 eight times, you get exactly 1.
So, 1/3 is fine because it happens to round to 1 anyway; 1/5 is fine, because when x is a tiny bit larger than 1, x < 1 is false, and your loop stops. But 1/6 and 1/7 are not fine, because when x is a tiny bit smaller than 1, x < 1 is still true, so your loop goes one time too many.
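You can check each of those claims on your own machine with a few lines (a quick sketch; the helper name add_n_times is mine):

```python
def add_n_times(k, n):
    # start from 0.0 and add the float closest to 1/k, n times
    x = 0.0
    step = 1.0 / k
    for _ in range(n):
        x += step
    return x

for k in range(2, 9):
    x = add_n_times(k, k)
    print(k, repr(x), x == 1.0, x < 1.0)
```

The power-of-two steps (1/2, 1/4, 1/8) land on exactly 1.0, while 1/6 and 1/7 fall a hair short, which is precisely the case where the `while x < 1` loop runs one extra time.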
The simplest fix is to use isclose:
while not math.isclose(x, 1):
… although that will mean an infinite loop if w isn't pretty close to a unit fraction. Of course your method doesn't work for such values, but it might be nice to get an error or an incorrect result instead of waiting until the end of the universe. So you might want to do something a little more clever, like:
while x < 0.999999:
A nicer fix, at the cost of some speed, is to use the Fraction type for w and x instead of float. You can still leave y as a float, so your calculations won't eat up all of your memory and time building fractions with ridiculous denominators when you're just looking for an approximation:
import fractions
x = 0
y = 2.0
w = fractions.Fraction(1, 6)
while x < 1:
    dydx = 1 - 2*x + y
    deltaY = dydx*w
    y = y + deltaY
    x += w
print(x,y)
Now you'll get:
1 5.521626371742112
But the best option is probably to just keep track of the fact that w is 1/6, like this:
import math
x = 0
y = 2
w_inv = 6
w = 1/w_inv
for _ in range(w_inv):
    dydx = 1 - 2*x + y
    deltaY = dydx*w
    y = y + deltaY
    x += w
print(x,y)
Now the rounding error isn't a problem; we're definitely going to loop 6 times anyway.
0.9999999999999999 5.521626371742112
This question is only for Python programmers. It is not a duplicate of Increment a python floating point value by the smallest possible amount — the solution there does not work; see the explanation at the bottom.
I want to add to (or subtract from) any float the smallest value that will change it by one bit of the mantissa/significand. How can such a small number be calculated efficiently in pure Python?
For example I have such array of x:
xs = [1e300, 1e0, 1e-300]
What would a function look like that generates the smallest such change? All of the following assertions should hold.
for x in xs:
    assert x < x + smallestChange(x)
    assert x > x - smallestChange(x)
Consider that 1e308 + 1 == 1e308, since 1 amounts to 0 in the mantissa at that magnitude, so `smallestChange` has to be dynamic.
Pure Python solution will be the best.
Why this is not a duplicate of Increment a python floating point value by the smallest possible amount — two simple tests prove that it gives invalid results:
(1) The question is not answered there; the code proposed in Increment a python floating point value by the smallest possible amount just does not work. Try it:
import math

epsilon = math.ldexp(1.0, -53)  # smallest double such that 0.5 + epsilon != 0.5
maxDouble = float(2**1024 - 2**971)  # from the IEEE 754 standard
minDouble = math.ldexp(1.0, -1022)  # min positive normalized double
smallEpsilon = math.ldexp(1.0, -1074)  # smallest increment for doubles < minDouble
infinity = math.ldexp(1.0, 1023) * 2

def nextafter(x, y):
    """returns the next IEEE double after x in the direction of y if possible"""
    if y == x:
        return y  # if x == y, no increment
    # handle NaN
    if x != x or y != y:
        return x + y
    if x >= infinity:
        return infinity
    if x <= -infinity:
        return -infinity
    if -minDouble < x < minDouble:
        if y > x:
            return x + smallEpsilon
        else:
            return x - smallEpsilon
    m, e = math.frexp(x)
    if y > x:
        m += epsilon
    else:
        m -= epsilon
    return math.ldexp(m, e)

print(nextafter(0.0, -1.0), 'nextafter(0.0, -1.0)')
print(nextafter(-1.0, 0.0), 'nextafter(-1.0, 0.0)')
The results from the code in Increment a python floating point value by the smallest possible amount are invalid:
>>> nextafter(0.0, -1)
0.0
Should be nonzero.
>>> nextafter(-1,0)
-0.9999999999999998
Should be '-0.9999999999999999'.
(2) The linked question asks how to add/subtract a value in a specific direction, so its proposed solution needs to know both x and y. Here, only x is required.
(3) The solution proposed in Increment a python floating point value by the smallest possible amount does not work at the boundary conditions.
>>> (1.0).hex()
'0x1.0000000000000p+0'
>>> float.fromhex('0x0.0000000000001p+0')
2.220446049250313e-16
>>> 1.0 + float.fromhex('0x0.0000000000001p+0')
1.0000000000000002
>>> (1.0 + float.fromhex('0x0.0000000000001p+0')).hex()
'0x1.0000000000001p+0'
Just use the same sign and exponent.
Mark Dickinson's answer to a duplicate fares much better, but still fails to give the correct results for the parameters (0, 1).
This is probably a good starting point for a pure Python solution. However, getting this exactly right in all cases is not easy, as there are many corner cases. So you should have a really good unit test suite to cover all corner cases.
Whenever possible, you should consider using one of the solutions that are based on the well-tested C runtime function instead (i.e. via ctypes or numpy).
You mentioned somewhere that you are concerned about the memory overhead of numpy. However, the effect of this one function on your working set should be very small, certainly not several megabytes (that figure might be virtual memory or private bytes).
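For completeness: on Python 3.9+ the standard library already provides math.nextafter and math.ulp, which are the well-tested route. For older versions, a pure-Python bit-twiddling sketch (assuming IEEE 754 binary64; NaN handling kept minimal, and the names next_up/smallestChange are mine, following the question):

```python
import math
import struct

def next_up(x):
    # smallest double strictly greater than x, by stepping the bit pattern
    if x != x or x == math.inf:
        return x                      # NaN and +inf map to themselves
    if x == 0.0:
        x = 0.0                       # fold -0.0 into +0.0
    n = struct.unpack("<q", struct.pack("<d", x))[0]
    n += 1 if n >= 0 else -1          # move one ulp toward +infinity
    return struct.unpack("<d", struct.pack("<q", n))[0]

def smallestChange(x):
    # a delta satisfying x < x + smallestChange(x) and x > x - smallestChange(x)
    return next_up(x) - x

for x in [1e300, 1e0, 1e-300]:
    assert x < x + smallestChange(x)
    assert x > x - smallestChange(x)
```

The "<" in the struct format forces the same byte order for both pack and unpack, so the round trip is portable; the trick works because, for doubles of one sign, the IEEE 754 bit patterns order the same way as the values.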
When researching for this question and reading the sourcecode in random.py, I started wondering whether randrange and randint really behave as "advertised". I am very much inclined to believe so, but the way I read it, randrange is essentially implemented as
start + int(random.random()*(stop-start))
(assuming integer values for start and stop), so randrange(1, 10) should return a random number between 1 and 9.
randint(start, stop) is calling randrange(start, stop+1), thereby returning a number between 1 and 10.
My question is now:
If random() were ever to return 1.0, then randint(1,10) would return 11, wouldn't it?
From random.py and the docs:
"""Get the next random number in the range [0.0, 1.0)."""
The ) indicates that the interval is exclusive of 1.0; that is, random() will never return 1.0.
This is a general convention in mathematics: [ and ] are inclusive, while ( and ) are exclusive, and the two types of parenthesis can be mixed, as in (a, b] or [a, b). Have a look at Wikipedia: Interval (mathematics) for a formal explanation.
Other answers have pointed out that the result of random() is always strictly less than 1.0; however, that's only half the story.
If you're computing randrange(n) as int(random() * n), you also need to know that for any Python float x satisfying 0.0 <= x < 1.0, and any positive integer n, it's true that 0.0 <= x * n < n, so that int(x * n) is strictly less than n.
There are two things that could go wrong here: first, when we compute x * n, n is implicitly converted to a float. For large enough n, that conversion might alter the value. But if you look at the Python source, you'll see that it only uses the int(random() * n) method for n smaller than 2**53 (here and below I'm assuming that the platform uses IEEE 754 doubles), which is the range where the conversion of n to a float is guaranteed not to lose information (because n can be represented exactly as a float).
The second thing that could go wrong is that the result of the multiplication x * n (which is now being performed as a product of floats, remember) probably won't be exactly representable, so there will be some rounding involved. If x is close enough to 1.0, it's conceivable that the rounding will round the result up to n itself.
To see that this can't happen, we only need to consider the largest possible value for x, which is (on almost all machines that Python runs on) 1 - 2**-53. So we need to show that (1 - 2**-53) * n < n for our positive integer n, since it'll always be true that random() * n <= (1 - 2**-53) * n.
Proof (Sketch) Let k be the unique integer such that 2**(k-1) < n <= 2**k. Then the next float down from n is n - 2**(k-53). We need to show that n*(1-2**-53) (i.e., the actual, unrounded, value of the product) is closer to n - 2**(k-53) than to n, so that it'll always be rounded down. But a little arithmetic shows that the distance from n*(1-2**-53) to n is 2**-53 * n, while the distance from n*(1-2**-53) to n - 2**(k-53) is (2**k - n) * 2**-53. But 2**k - n < n (because we chose k so that 2**(k-1) < n), so the product is closer to n - 2**(k-53), so it will get rounded down (assuming, that is, that the platform is doing some form of round-to-nearest).
So we're safe. Phew!
Addendum (2015-07-04): The above assumes IEEE 754 binary64 arithmetic, with round-ties-to-even rounding mode. On many machines, that assumption is fairly safe. However, on x86 machines that use the x87 FPU for floating-point (for example, various flavours of 32-bit Linux), there's a possibility of double rounding in the multiplication, and that makes it possible for random() * n to round up to n in the case where random() returns the largest possible value. The smallest such n for which this can happen is n = 2049. See the discussion at http://bugs.python.org/issue24546 for more.
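The inequality at the heart of the proof can be spot-checked numerically on a platform with standard IEEE 754 binary64 arithmetic (i.e. not the x87 double-rounding case mentioned in the addendum):

```python
x_max = 1.0 - 2.0 ** -53   # the largest double strictly below 1.0
for n in (1, 2, 3, 7, 10, 2049, 10 ** 6, 2 ** 52, 2 ** 53 - 1):
    product = x_max * n
    assert product < n            # the multiplication never rounds up to n
    assert int(product) <= n - 1  # so int(random() * n) stays in range
```

Note that 2049 is included deliberately: it is the smallest n for which x87 double rounding can break the first assertion, so on such hardware this check would fail.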
From Python documentation:
Almost all module functions depend on the basic function random(), which generates a random float uniformly in the semi-open range [0.0, 1.0).
Like almost every PRNG that produces floats.