I am looking for a way in Python to perform right shifts and bit masking on a binary number that has a fractional part as well. For example, if a number has 1 integer bit and 2 fraction bits, then 0b101 corresponds to 1.25 in decimal. First, I want to know the Pythonic way to represent such a number in Python.
Second, I want to perform a right shift of 1 on this number (0b101 >> 1) so that the result is 0b010, which is 0.5 in decimal. Is there an intrinsic, Pythonic way to perform this operation? Similarly, how do I mask and get a specific bit from the binary number?
Presently, for the shift I am multiplying the number by 2**-x, where x is the number of right shifts. I cannot think of a similar operation for the bit mask.
If you really must get directly at the internal representation of a float you can use struct, like this:
>>> import struct
>>> a = 1.25
>>> b = struct.pack('>d',a)
>>> b
b'?\xf4\x00\x00\x00\x00\x00\x00' # the ? is \x3f: the sign bit plus the leftmost 7 bits of the exponent
>>> a.hex()
'0x1.4000000000000p+0'
You can mask the bit you want out of the bytestring that struct.pack() returns.
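For instance, here is a minimal sketch of masking out the IEEE-754 fields, assuming you first reinterpret the packed bytes as one 64-bit integer with int.from_bytes so that ordinary shifts and masks apply:
>>> import struct
>>> as_int = int.from_bytes(struct.pack('>d', 1.25), 'big')
>>> (as_int >> 63) & 1                 # sign bit
0
>>> ((as_int >> 52) & 0x7FF) - 1023    # exponent, with the bias removed
0
>>> hex(as_int & ((1 << 52) - 1))      # the 52 fraction bits
'0x4000000000000'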
[edit] The question mark representing \x3f appears because the default output representation of a bytestring is a string, and Python will show an ASCII character where possible, not two hex digits.
[edit] This representation is in principle platform-dependent, but in practice it isn't, because virtually every computer (even IBM mainframes nowadays) has a floating-point processor that uses this format.
Finding out which bit you want may be something of a challenge.
>>> c = struct.pack('>d',a/2)
>>> c
b'?\xe4\x00\x00\x00\x00\x00\x00'
>>> (a/2).hex()
'0x1.4000000000000p-1'
As you can see, division by 2 is not quite the simple one-bit shift to the right that your question seems to suggest you are expecting. In this case, the division by 2 has decremented the exponent by 1 (from 0x3ff to 0x3fe; 1023 to 1022) and left the bit pattern of the fraction (0x4000) unchanged. The exponent appears large because it is biased by 1023.
The main difficulties are
Sign, exponent and fraction don't align to byte boundaries, but to nybble boundaries (sign plus exponent: 12 bits; fraction: 52 bits)
The number is normalized so that it has no leading zeroes (much as scientific notation in decimal is normalized so that it has no leading zeroes) and, since everyone knows it's there, the leading 1 is not stored.
I can recommend the Wikipedia article on this subject: it has lots of useful examples.
But I suspect that you don't really want to get at the internal representation of a float. Instead, you want a fixed-point binary class, without pesky binary exponents, that works much the same as you would do it on paper, and where division by a power of 2 really does reflect as a shift of so many bits to the right.
Depending on how much work you want to put into it, you could do this by defining a FixedBinary class as a subclass of numbers.Real, with the integer portion internally represented by one int and the fractional component by another int, and the sign by a third int, so that 1.25 would be represented as (1, int(0.25 * 65536), +1) (or some other power of 2).
This also shows you the simplest way to get a bit representation of your fraction.
[edit] I recommend storing the sign separately. You could store it in the integer portion, or the fraction, or both, but all have disadvantages.
If you store it in the sign of the fraction, the twos-complement representation of negative integers will give you difficulty when you want to mask your bits.
If you don't store it in the sign of the fraction, there will be no way to represent -0.5.
If you don't store it in the sign of the integer portion, there will be no way to represent -1.0.
A multiplicand of 65536 will give you 4 decimal digits of accuracy. You can increase it if you want more. I also recommend that you store your fraction in the rightmost bits and simply ignore the leftmost bits. In other words, be content with the binary point being in the middle of the int, don't insist on it being on the left. That is because you will need headroom to the left of the binary point when you do multiplication.
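To make this concrete, here is a minimal sketch along those lines (not a full numbers.Real subclass; the class name, the 16-bit fraction, and the method set are illustrative choices, not a finished design):

FRAC_BITS = 16  # fraction scaled by 2**16 = 65536, about 4 decimal digits

class FixedBinary:
    def __init__(self, integer, fraction, sign=1):
        self.integer, self.fraction, self.sign = integer, fraction, sign

    @classmethod
    def from_float(cls, value):
        sign = -1 if value < 0 else 1
        value = abs(value)
        whole = int(value)
        return cls(whole, round((value - whole) * (1 << FRAC_BITS)), sign)

    def __rshift__(self, n):
        # shift the combined bit pattern right by n places
        bits = (self.integer << FRAC_BITS) | self.fraction
        bits >>= n
        return FixedBinary(bits >> FRAC_BITS, bits & ((1 << FRAC_BITS) - 1), self.sign)

    def bit(self, i):
        # mask out bit i (bit 0 is the least significant fraction bit)
        bits = (self.integer << FRAC_BITS) | self.fraction
        return (bits >> i) & 1

    def __float__(self):
        return self.sign * (self.integer + self.fraction / (1 << FRAC_BITS))

x = FixedBinary.from_float(1.25)
print(float(x >> 1))         # 0.625: the one-bit shift halves the value
print(x.bit(FRAC_BITS - 2))  # 1: the 0.25 bit of 1.25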
Implementing your own numeric class is a considerable amount of work, though.
You can do this with fxpmath.
Info about this package is at:
https://github.com/francof2a/fxpmath
For your example:
from fxpmath import Fxp
x = Fxp('0b0101', signed=True, n_word=4, n_frac=2)  # 4-bit word, 2 of them fraction bits
print(x)
y = x >> 1  # one-bit right shift
print(y)
# example of AND mask
z = x & Fxp('0b0110', signed=True, n_word=4, n_frac=2)
print(z.bin())
outputs:
1.25
0.5
0100
Apologies if a question like this is inappropriate for this platform but I can't find any information on this anywhere. I'm using sklearn to do a cluster analysis on some points; this is the relevant portion of my code:
clustering = AgglomerativeClustering(n_clusters=None, affinity='euclidean',
                                     distance_threshold=d, linkage='single').fit(i)
number = clustering.n_clusters_
I would like to know the precision to which I can define 'd', which in this case is the distance threshold above which clusters won't be merged. For example, if I set d = 0.000002, would it use this value or would it be rounded to zero? Basically, how many decimal places can I use?
Thanks in advance
Scikit-learn's AgglomerativeClustering class stores the distance_threshold value as a float type, which on most Python systems means double precision, that is 64 bit. The decimal number you enter is converted to a base-2 exponential number under the hood and rounded accordingly if necessary to fit into the 64 bit storage slot. 1 bit is reserved for the sign, 11 bits for the exponent, and 52 bits for the significant digits.
Note that when you have a number such as 0.000002, starting with many zeros and having only one significant digit, the factor determining the smallest possible value is the number of bits for the exponent. So the question is, how small a number can be represented with 11 bits storage for the exponent?
Let's see:
>>> 2 ** -(2 ** 11)
0.0
>>> 2 ** -(2 ** 10)
5.562684646268003e-309
So if you enter your d value as a decimal number, without using exponential notation, you would have to type at least 309 zeros for that limit to kick in. Thus the value will practically never be rounded to zero, but there will be a small rounding error unless your decimal number happens to have a simple base-2 representation.
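You can confirm these limits directly from the interpreter (5e-324 is the smallest positive subnormal double):
>>> import sys
>>> sys.float_info.min   # smallest positive normal double
2.2250738585072014e-308
>>> 5e-324 / 2           # half the smallest subnormal rounds to zero
0.0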
Why do some float multiplications in Python produce these weird residues?
e.g.
>>> 50*1.1
55.00000000000001
but
>>> 30*1.1
33.0
The reason must lie somewhere in the binary representation of floats, but where exactly is the difference between the two examples?
(This answer assumes your Python implementation uses IEEE-754 binary64, which is common.)
When 1.1 is converted to floating-point, the result is exactly 1.100000000000000088817841970012523233890533447265625, because this is the nearest representable value. (This number is 4953959590107546 • 2^−52 — an integer with at most 53 bits multiplied by a power of two.)
When that is multiplied by 50, the exact mathematical result is 55.00000000000000444089209850062616169452667236328125. That cannot be exactly represented in binary64. To fit it into the binary64 format, it is rounded to the nearest representable value, which is 55.00000000000000710542735760100185871124267578125 (which is 7740561859543041 • 2^−47).
When it is multiplied by 30, the exact result is 33.00000000000000266453525910037569701671600341796875. It also cannot be represented exactly in binary64. It is rounded to the nearest representable value, which is 33. (The next higher representable value is 33.00000000000000710542735760100185871124267578125, and we can see …026 is closer to …000 than to …071.)
That explains what the internal results are. Next there is an issue of how your Python implementation formats the output. I do not believe the Python implementation is strict about this, but it is likely one of two methods is used:
In effect, the number is converted to a certain number of decimal digits, and then trailing insignificant zeros are removed. Converting 55.00000000000000710542735760100185871124267578125 to a numeral with 16 digits after the decimal point yields 55.00000000000001, which has no trailing zeros to remove. Converting 33 to a numeral with 16 digits after the decimal point yields 33.00000000000000, which has 15 trailing zeros to remove. (Presumably your Python implementation always leaves at least one trailing zero after a decimal point to clearly distinguish that it is a floating-point number rather than an integer.)
Just enough decimal digits are used to uniquely distinguish the number from adjacent representable values. This method is required in Java and JavaScript but is not yet common in other programming languages. In the case of 55.00000000000000710542735760100185871124267578125, printing “55.00000000000001” distinguishes it from the neighboring values 55 (which would be formatted as “55.0”) and 55.0000000000000142108547152020037174224853515625 (which would be “55.000000000000014”).
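CPython has used the second method for repr since 3.1; you can still see the full stored value by asking for more digits explicitly (a quick illustration):
>>> 50 * 1.1
55.00000000000001
>>> format(50 * 1.1, '.47f')
'55.00000000000000710542735760100185871124267578125'
>>> 30 * 1.1
33.0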
When I execute these 2 lines, I get 2 different results. Why?
item variable is a type numpy.float32
print(item)
print(item * 1)
output:
0.0006
0.0006000000284984708
I suspect this is related to the numpy.float32 type somehow?
If I try to convert the numpy.float32 to float I get this:
item = float(item)
print(item)
output:
0.0006000000284984708
What you observe unfortunately is not avoidable. It has to do with the internal representation of a float number. In this case it doesn't even have to do with calculation issues, as suggested in comments here.
(Binary base) float numbers as used by most languages are represented as (+/- mantissa) × 2^exponent.
The important part here is the mantissa, which cannot represent all numbers exactly. The value ranges of the mantissa and the exponent depend on the bit length of the float you use. The exponent determines the maximum and minimum representable numbers, while the mantissa determines the precision of the representable numbers (loosely speaking, the "granularity" of the numbers).
So for your question, the mantissa is more important. As said, it is like a bit array. In a byte, a bit has a value of 1, 2, 4, ... depending on its position.
In the mantissa it is similar, but instead of 1, 2, 4, 8, the bits have the values 1/2, 1/4, 1/8, ...
So if you want to represent 0.75, the bits with the values 1/2 and 1/4 would be set in your mantissa and the exponent would be 0. That's it, in short.
Now, if you try to represent a value like 0.11 as a float, you will notice that it is not possible, no matter whether you use float32 or float64:
import numpy as np
item=np.float64('0.11')
print('{0:2.60f}'.format(item))
output: 0.110000000000000000555111512312578270211815834045410156250000
item=np.float32('0.11')
print('{0:2.60f}'.format(item))
output: 0.109999999403953552246093750000000000000000000000000000000000
Btw., if you want to represent the value 0.25 (1/4), it is not the bit for 1/4 that is set; instead, the bit for 1/2 is set and the exponent is set to -1, so 1/2 * 2^(-1) is again 0.25. This is done in a normalization process.
But if you want to increase the precision you could use float64, as I did in my example. It will reduce this phenomenon a bit.
It seems that some systems also support decimal-based floats. I haven't worked with them, but presumably they would avoid this kind of problem (though not the calculation issues mentioned in the post linked in another answer).
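Python ships such a decimal type in the standard library; a quick illustration of the difference:
>>> from decimal import Decimal
>>> 0.1 + 0.2                        # binary floats: each value is rounded
0.30000000000000004
>>> Decimal('0.1') + Decimal('0.2')  # decimal floats: these values are exact
Decimal('0.3')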
The reason you see two different results is that your variable item is in numpy.float32, as you said. Python internally uses 64 bit floating point numbers, so
print(item)
returns the (lower precision) result in 32 bit, while
print(item * 1)
first multiplies by 1, which is an integer. An integer and a float cannot be multiplied directly, so both are converted to floats - 64-bit floats, since you do not specify anything else. The result is then a 64-bit float.
If you would specify another type of "1",
print(item * numpy.float32(1))
returns the same result as print(item), because there is no type conversion and everything can stay in 32 bit.
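You can make the widths visible explicitly (a small check; note that the exact promotion behavior of item * 1 depends on your NumPy version, since the rules for mixing Python scalars with float32 changed in NumPy 2.0):
>>> import numpy as np
>>> item = np.float32(0.0006)
>>> print(float(item))   # widening shows the digits float32 actually stored
0.0006000000284984708
>>> print(item == item * np.float32(1))
True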
You haven't specified exactly what the problem is, beyond "the numbers don't match". How you handle floating point depends a little on your application, but in general you can't rely on comparing floating point numbers exactly. With a few obvious exceptions: 0 times anything should be 0, 1 times anything should be that thing unchanged (there's more, but let's stop there). So why is 1*item different from item?
>>> item = np.float32(0.0006)
>>> item
0.0006
>>> item*1
0.0006000000284984708
Right, this seems to contradict common sense. But no, it's just the wrong way to look at it. Do an actual comparison and everything is still alright with the world.
>>> item == item*1
True
The numbers are the same. This should make sense - increasing the precision of a floating point number shouldn't change its value, and multiplying by 1 should not change a number.
So, what's going on? Numpy converts an np.float32 value to a python float, which prints with nice rounding. However, item*1 is an np.float64, which by default shows more significant figures. If you print both of these with the same number of significant figures, you can see there's no real difference.
>>> "{:0.015f}".format(item*1)
'0.000600000028498'
>>> "{:0.015f}".format(item)
'0.000600000028498'
So that's it. What python prints isn't meant to be a completely accurate representation of numbers. The other answers get into why 0.0006 can't be represented exactly.
Edit
Rounding doesn't change this, it just converts item to a python float which prints with rounding.
>>> "{:0.015f}".format(round(item, 4))
'0.000600000028498'
I cannot seem to find the logic in this, but have made a workaround simply converting the numpy.float32 to float and rounding the numbers to a specific decimal.
This is more of a numerical analysis rather than programming question, but I suppose some of you will be able to answer it.
In the sum of two floats, is there any precision lost? Why?
In the sum of a float and an integer, is there any precision lost? Why?
Thanks.
In the sum of two floats, is there any precision lost?
If both floats have differing magnitude and both are using the complete precision range (of about 7 decimal digits) then yes, you will see some loss in the last places.
Why?
This is because floats are stored in the form (sign) (mantissa) × 2^(exponent). If two values have differing exponents and you add them, then the smaller value will get reduced to fewer digits in the mantissa (because it has to adapt to the larger exponent):
PS> [float]([float]0.0000001 + [float]1)
1
In the sum of a float and an integer, is there any precision lost?
Yes, a normal 32-bit integer can represent values exactly that do not fit exactly into a float. A float can still store approximately the same number, but no longer exactly. Of course, this only applies to numbers that are large enough, i.e. longer than 24 bits.
Why?
Because float has 24 bits of precision and (32-bit) integers have 32. float will still be able to retain the magnitude and most of the significant digits, but the last places may likely differ:
PS> [float]2100000050 + [float]100
2100000100
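The same effect is easy to reproduce in Python with NumPy's 32-bit floats (a small check; np.spacing reports the gap to the next representable value):
>>> import numpy as np
>>> print(np.float32(1) + np.float32(1e-8))  # 1e-8 is below half an ulp of 1.0
1.0
>>> print(np.spacing(np.float32(1)))         # the ulp of 1.0 in float32
1.1920929e-07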
The precision depends on the magnitude of the original numbers. In floating point, the computer represents the number 312 internally as scientific notation:
3.12000000000 * 10 ^ 2
The decimal places in the left hand side (mantissa) are fixed. The exponent also has an upper and lower bound. This allows it to represent very large or very small numbers.
If you try to add two numbers which are the same in magnitude, the result should remain the same in precision, because the decimal point doesn't have to move:
312.0 + 643.0 <==>
3.12000000000 * 10 ^ 2 +
6.43000000000 * 10 ^ 2
-----------------------
9.55000000000 * 10 ^ 2
If you tried to add a very big and a very small number, you would lose precision because they must be squeezed into the above format. Consider 312 + 12300000000000 (1.23 × 10^13). First you have to scale the smaller number to line up with the bigger one, which squeezes 312 = 0.0000000000312 × 10^13 into the 11 available places, then add:
1.23000000000 * 10 ^ 13 +
0.00000000003 * 10 ^ 13
-----------------------
1.23000000003 * 10 ^ 13 <-- precision lost here!
Floating point can handle very large, or very small numbers. But it can't represent both at the same time.
As for ints and doubles being added, the int gets turned into a double immediately, then the above applies.
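You can watch the same alignment loss with Python's 64-bit floats (a quick check; at magnitude 1e16 the gap between adjacent doubles is 2):
>>> 1e16 + 1 == 1e16   # 1 is below half the gap, so it vanishes
True
>>> 1e16 + 2
1.0000000000000002e+16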
When adding two floating point numbers, there is generally some error. D. Goldberg's "What Every Computer Scientist Should Know About Floating-Point Arithmetic" describes the effect and the reasons in detail, and also how to calculate an upper bound on the error, and how to reason about the precision of more complex calculations.
When adding a float to an integer, the integer is first converted to a float by C++, so two floats are being added and error is introduced for the same reasons as above.
The precision available for a float is limited, so of course there is always the risk that any given operation drops precision.
The answer for both your questions is "yes".
If you try adding a very large float to a very small one, you will for instance have problems.
Or if you try to add an integer to a float, where the integer uses more bits than the float has available for its mantissa.
The short answer: a computer represents a float with a limited number of bits, which is often done with mantissa and exponent, so only a few bytes are used for the significant digits, and the others are used to represent the position of the decimal point.
If you were to try to add (say) 10^23 and 7, then it won't be able to accurately represent that result. A similar argument applies when adding a float and integer -- the integer will be promoted to a float.
In the sum of two floats, is there any precision lost?
In the sum of a float and an integer, is there any precision lost? Why?
Not always. If the sum is representable with the precision you ask for, you won't get any precision loss.
Example: 0.5 + 0.75 => no precision loss
x * 0.5 => no precision loss (except if x is so small that the result underflows)
In the general case, one adds floats of slightly different magnitudes, so there is a precision loss, which actually depends on the rounding mode.
I.e.: if you're adding numbers with totally different ranges, expect precision problems.
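A quick way to check a particular sum in Python (0.5, 0.75 and 1.25 are all exact in binary, while 0.1, 0.2 and 0.3 are not):
>>> (0.5 + 0.75) == 1.25
True
>>> (0.1 + 0.2) == 0.3
False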
Denormals are there to give extra precision in extreme cases, at the expense of CPU time.
Depending on how your compiler handle floating-point computation, results can vary.
With strict IEEE semantics, adding two 32 bits floats should not give better accuracy than 32 bits.
In practice it may require more instructions to ensure that, so you shouldn't rely on accurate and repeatable results with floating-point.
In both cases yes:
assert( 1E+36f + 1.0f == 1E+36f );
assert( 1E+36f + 1 == 1E+36f );
The case float + int is the same as float + float, because a standard conversion is applied to the int. In the case of float + float, this is implementation dependent, because an implementation may choose to do the addition at double precision. There may be some loss when you store the result, of course.
In both cases, the answer is "yes". When adding an int to a float, the integer is converted to floating point representation before the addition takes place anyway.
To understand why, I suggest you read this gem: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Why do some numbers lose accuracy when stored as floating point numbers?
For example, the decimal number 9.2 can be expressed exactly as a ratio of two decimal integers (92/10), both of which can be expressed exactly in binary (0b1011100/0b1010). However, the same ratio stored as a floating point number is never exactly equal to 9.2:
32-bit "single precision" float: 9.19999980926513671875
64-bit "double precision" float: 9.199999999999999289457264239899814128875732421875
How can such an apparently simple number be "too big" to express in 64 bits of memory?
In most programming languages, floating point numbers are represented a lot like scientific notation: with an exponent and a mantissa (also called the significand). A very simple number, say 9.2, is actually this fraction:
5179139571476070 × 2^−49
Where the exponent is −49 and the mantissa is 5179139571476070. The reason it is impossible to represent some decimal numbers this way is that both the exponent and the mantissa must be integers. In other words, all floats must be an integer multiplied by an integer power of 2.
9.2 may be simply 92/10, but 10 cannot be expressed as 2^n if n is limited to integer values.
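You can ask Python for the exact stored ratio directly; it returns the fraction above reduced to lowest terms:
>>> (9.2).as_integer_ratio()
(2589569785738035, 281474976710656)
>>> 2589569785738035 / 281474976710656
9.2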
Seeing the Data
First, a few functions to see the components that make a 32- and 64-bit float. Gloss over these if you only care about the output (example in Python):
import struct
from itertools import islice

def float_to_bin_parts(number, bits=64):
    if bits == 32:  # single precision
        int_pack = 'I'
        float_pack = 'f'
        exponent_bits = 8
        mantissa_bits = 23
        exponent_bias = 127
    elif bits == 64:  # double precision. all python floats are this
        int_pack = 'Q'
        float_pack = 'd'
        exponent_bits = 11
        mantissa_bits = 52
        exponent_bias = 1023
    else:
        raise ValueError('bits argument must be 32 or 64')
    bin_iter = iter(bin(struct.unpack(int_pack, struct.pack(float_pack, number))[0])[2:].rjust(bits, '0'))
    return [''.join(islice(bin_iter, x)) for x in (1, exponent_bits, mantissa_bits)]
There's a lot of complexity behind that function, and it'd be quite the tangent to explain, but if you're interested, the important resource for our purposes is the struct module.
Python's float is a 64-bit, double-precision number. In other languages such as C, C++, Java and C#, double-precision has a separate type double, which is often implemented as 64 bits.
When we call that function with our example, 9.2, here's what we get:
>>> float_to_bin_parts(9.2)
['0', '10000000010', '0010011001100110011001100110011001100110011001100110']
Interpreting the Data
You'll see I've split the return value into three components. These components are:
Sign
Exponent
Mantissa (also called Significand, or Fraction)
Sign
The sign is stored in the first component as a single bit. It's easy to explain: 0 means the float is a positive number; 1 means it's negative. Because 9.2 is positive, our sign value is 0.
Exponent
The exponent is stored in the middle component as 11 bits. In our case, 0b10000000010. In decimal, that represents the value 1026. A quirk of this component is that you must subtract a number equal to 2^(# of bits − 1) − 1 to get the true exponent; in our case, that means subtracting 0b1111111111 (decimal number 1023) to get the true exponent, 0b00000000011 (decimal number 3).
Mantissa
The mantissa is stored in the third component as 52 bits. However, there's a quirk to this component as well. To understand this quirk, consider a number in scientific notation, like this:
6.0221413 × 10^23
The mantissa would be the 6.0221413. Recall that the mantissa in scientific notation always begins with a single non-zero digit. The same holds true for binary, except that binary only has two digits: 0 and 1. So the binary mantissa always starts with 1! When a float is stored, the 1 at the front of the binary mantissa is omitted to save space; we have to place it back at the front of our third element to get the true mantissa:
1.0010011001100110011001100110011001100110011001100110
This involves more than just a simple addition, because the bits stored in our third component actually represent the fractional part of the mantissa, to the right of the radix point.
When dealing with decimal numbers, we "move the decimal point" by multiplying or dividing by powers of 10. In binary, we can do the same thing by multiplying or dividing by powers of 2. Since our third element has 52 bits, we divide it by 2^52 to move it 52 places to the right:
0.0010011001100110011001100110011001100110011001100110
In decimal notation, that's the same as dividing 675539944105574 by 4503599627370496 to get 0.1499999999999999. (Note that this decimal is truncated for display; because the denominator is a power of two, the ratio does have an exact, if long, decimal expansion.)
Now that we've transformed the third component into a fractional number, adding 1 gives the true mantissa.
Recapping the Components
Sign (first component): 0 for positive, 1 for negative
Exponent (middle component): Subtract 2^(# of bits − 1) − 1 to get the true exponent
Mantissa (last component): Divide by 2^(# of bits) and add 1 to get the true mantissa
Calculating the Number
Putting all three parts together, we're given this binary number:
1.0010011001100110011001100110011001100110011001100110 × 10^11 (all digits binary; i.e. × 2^3)
Which we can then convert from binary to decimal:
1.1499999999999999 × 2^3 (inexact!)
And multiply to reveal the final representation of the number we started with (9.2) after being stored as a floating point value:
9.1999999999999993
Representing as a Fraction
9.2
Now that we've built the number, it's possible to reconstruct it into a simple fraction:
1.0010011001100110011001100110011001100110011001100110 × 10^11 (binary)
Shift mantissa to a whole number:
10010011001100110011001100110011001100110011001100110 × 10^(11−110100) (binary)
Convert to decimal:
5179139571476070 × 2^(3−52)
Subtract the exponent:
5179139571476070 × 2^−49
Turn negative exponent into division:
5179139571476070 / 2^49
Multiply exponent:
5179139571476070 / 562949953421312
Which equals:
9.1999999999999993
9.5
>>> float_to_bin_parts(9.5)
['0', '10000000010', '0011000000000000000000000000000000000000000000000000']
Already you can see the mantissa is only 4 digits followed by a whole lot of zeroes. But let's go through the paces.
Assemble the binary scientific notation:
1.0011 × 10^11 (binary)
Shift the decimal point:
10011 × 10^(11−100) (binary)
Subtract the exponent:
10011 × 10^−1 (binary)
Binary to decimal:
19 × 2^−1
Negative exponent to division:
19 / 2^1
Multiply exponent:
19 / 2
Equals:
9.5
Further reading
The Floating-Point Guide: What Every Programmer Should Know About Floating-Point Arithmetic, or, Why don’t my numbers add up? (floating-point-gui.de)
What Every Computer Scientist Should Know About Floating-Point Arithmetic (Goldberg 1991)
IEEE Double-precision floating-point format (Wikipedia)
Floating Point Arithmetic: Issues and Limitations (docs.python.org)
Floating Point Binary
This isn't a full answer (mhlester already covered a lot of good ground I won't duplicate), but I would like to stress how much the representation of a number depends on the base you are working in.
Consider the fraction 2/3
In good-ol' base 10, we typically write it out as something like
0.666...
0.666
0.667
When we look at those representations, we tend to associate each of them with the fraction 2/3, even though only the first representation is mathematically equal to the fraction. The second and third representations/approximations have an error on the order of 0.001, which is actually much worse than the error between 9.2 and 9.1999999999999993. In fact, the second representation isn't even rounded correctly! Nevertheless, we don't have a problem with 0.666 as an approximation of the number 2/3, so we shouldn't really have a problem with how 9.2 is approximated in most programs. (Yes, in some programs it matters.)
Number bases
So here's where number bases are crucial. If we were trying to represent 2/3 in base 3, then
(2/3)₁₀ = 0.2₃
In other words, we have an exact, finite representation for the same number by switching bases! The take-away is that even though you can convert any number to any base, all rational numbers have exact finite representations in some bases but not in others.
To drive this point home, let's look at 1/2. It might surprise you that even though this perfectly simple number has an exact representation in base 10 and 2, it requires a repeating representation in base 3.
(1/2)₁₀ = 0.5₁₀ = 0.1₂ = 0.1111...₃
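Here is a small sketch that expands a fraction digit by digit in a given base (up to base 10; the helper name and digit count are just illustrative):

from fractions import Fraction

def digits(frac, base, n=10):
    # repeatedly multiply by the base and peel off the integer digit
    out = []
    for _ in range(n):
        frac *= base
        d = int(frac)
        out.append(str(d))
        frac -= d
    return '0.' + ''.join(out)

print(digits(Fraction(2, 3), 3))  # 0.2000000000 -- exact in base 3
print(digits(Fraction(1, 2), 3))  # 0.1111111111 -- repeating in base 3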
Why are floating point numbers inaccurate?
Because often-times, they are approximating rationals that cannot be represented finitely in base 2 (the digits repeat), and in general they are approximating real (possibly irrational) numbers which may not be representable in finitely many digits in any base.
While all of the other answers are good there is still one thing missing:
It is impossible to represent irrational numbers (e.g. π, sqrt(2), log(3), etc.) precisely!
And that actually is why they are called irrational. No amount of bit storage in the world would be enough to hold even one of them. Only symbolic arithmetic is able to preserve their precision.
Although if you limit your math needs to rational numbers only, the problem of precision becomes manageable. You would need to store a pair of (possibly very big) integers a and b to hold the number represented by the fraction a/b. All your arithmetic would have to be done on fractions just like in high school math (e.g. a/b * c/d = ac/bd).
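Python's fractions module implements exactly this kind of arithmetic (a quick illustration):
>>> from fractions import Fraction
>>> Fraction(1, 3) * Fraction(3, 4)   # a/b * c/d = ac/bd, reduced exactly
Fraction(1, 4)
>>> Fraction(1, 10) + Fraction(2, 10)
Fraction(3, 10)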
But of course you would still run into the same kind of trouble when pi, sqrt, log, sin, etc. are involved.
TL;DR
For hardware-accelerated arithmetic, only a limited amount of rational numbers can be represented. Every non-representable number is approximated. Some numbers (i.e. the irrational ones) can never be represented, no matter the system.
There are infinitely many real numbers (so many that you can't enumerate them), and there are infinitely many rational numbers (it is possible to enumerate them).
The floating-point representation is a finite one (like anything in a computer), so unavoidably many, many numbers are impossible to represent. In particular, 64 bits only allow you to distinguish among 18,446,744,073,709,551,616 different values (which is nothing compared to infinity). With the standard convention, 9.2 is not one of them. Those that can be represented are of the form m·2^e for some integers m and e.
You might come up with a different numeration system, 10 based for instance, where 9.2 would have an exact representation. But other numbers, say 1/3, would still be impossible to represent.
Also note that double-precision floating-point numbers are extremely accurate. They can represent any number in a very wide range with as many as 15 exact digits. For daily life computations, 4 or 5 digits are more than enough. You will never really need those 15, unless you want to count every millisecond of your lifetime.
Why can we not represent 9.2 in binary floating point?
Floating point numbers are (simplifying slightly) a positional numbering system with a restricted number of digits and a movable radix point.
A fraction can only be expressed exactly using a finite number of digits in a positional numbering system if the prime factors of the denominator (when the fraction is expressed in its lowest terms) are factors of the base.
The prime factors of 10 are 5 and 2, so in base 10 we can represent any fraction of the form a/(2^b·5^c).
On the other hand, the only prime factor of 2 is 2, so in base 2 we can only represent fractions of the form a/(2^b).
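You can see that rule at work from Python, since as_integer_ratio returns the exact stored fraction in lowest terms:
>>> (0.375).as_integer_ratio()   # 3/8: denominator is a power of two, stored exactly
(3, 8)
>>> (0.1).as_integer_ratio()     # 1/10 has a factor of 5, so only a nearby value is stored
(3602879701896397, 36028797018963968)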
Why do computers use this representation?
Because it's a simple format to work with and it is sufficiently accurate for most purposes. Basically the same reason scientists use "scientific notation" and round their results to a reasonable number of digits at each step.
It would certainly be possible to define a fraction format, with (for example) a 32-bit numerator and a 32-bit denominator. It would be able to represent numbers that IEEE double precision floating point could not, but equally there would be many numbers that can be represented in double precision floating point that could not be represented in such a fixed-size fraction format.
However the big problem is that such a format is a pain to do calculations on. For two reasons.
If you want to have exactly one representation of each number, then after each calculation you need to reduce the fraction to its lowest terms. That means that for every operation you basically need to do a greatest common divisor calculation.
If after your calculation you end up with an unrepresentable result because the numerator or denominator is too large, you need to find the closest representable result. This is non-trivial.
Some languages do offer fraction types, but usually they do it in combination with arbitrary precision. This avoids needing to worry about approximating fractions, but it creates its own problem: when a number passes through a large number of calculation steps, the size of the denominator, and hence the storage needed for the fraction, can explode.
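A toy sketch of that blow-up using Python's arbitrary-precision fractions (the iteration is arbitrary, chosen only so that the terms don't reduce):

from fractions import Fraction

x = Fraction(1, 3)
for _ in range(6):
    x = x * x + Fraction(1, 7)         # each squaring roughly doubles the denominator's size
    print(x.denominator.bit_length())  # bits needed just to store the denominator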
Some languages also offer decimal floating point types; these are mainly used in scenarios where it is important that the results the computer gets match pre-existing rounding rules that were written with humans in mind (chiefly financial calculations). These are slightly more difficult to work with than binary floating point, but the biggest problem is that most computers don't offer hardware support for them.