Does anyone know how to round a number to the nearest 0.0125? For example, the number 167.1131 needs to become 167.1125. I have tried to do it with round(), but that only rounds to decimal places like 0.x.
Convert it to "0.0125's", round THAT, and convert back:
round(x/0.0125)*0.0125
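A quick sanity check with the number from the question (a hedged sketch; because 0.0125 is not exactly representable as a float, the printed result may carry a tiny tail such as 167.11250000000001):

x = 167.1131
step = 0.0125
print(round(x / step) * step)   # ~167.1125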
The round() function is focused on rounding to tenths, hundredths, thousandths and so on - essentially rounding to some negative power of 10.
So, since 0.0125 is not a step size that round() can handle directly, you can
either apply a multiplication to your input number before giving it to round(), so that it can do a rounding it is designed for, and afterwards undo the initial multiplication. One of the other answers does it like this.
Or you can, if you think the first approach looks complicated, solve the problem with pure arithmetic. The code below essentially computes how much the value sits above a multiple of 0.0125. This excess is the remainder (modulus) of a division. The division is done on integers, so there is an initial multiplication and a correction afterwards, just like in the first approach.
Code for the second approach:
def round_0125(number):
    mult_number = number * 10000     # scale so the step 0.0125 becomes the integer 125
    remainder = mult_number % 125    # how far we sit above a multiple of 125
    return (mult_number - remainder) / 10000   # drop the excess and scale back

round_0125(167.1131)
# 167.1125
round_0125(167.5738)
# 167.5625
import numpy as np

value = 167.1125
dec_value = value % 1               # get the decimal part
whole_value = value // 1            # get the whole part
my_range = np.arange(0, 1, 0.0125)  # all multiples of 0.0125 in [0, 1)
distance = np.abs(dec_value - my_range)  # get the absolute distance to each multiple
index = np.argmin(distance)         # find the index of the smallest distance
result = whole_value + my_range[index]
I know that questions about rounding in Python have been asked multiple times already, but the answers did not help me. I'm looking for a method that rounds a float number half up and returns a float, and that also accepts a parameter defining the decimal place to round to. I wrote a method that implements this kind of rounding, but I don't think it looks elegant at all.
import decimal

def round_half_up(number, dec_places):
    s = str(number)
    d = decimal.Decimal(s).quantize(
        decimal.Decimal(10) ** -dec_places,
        rounding=decimal.ROUND_HALF_UP)
    return float(d)
I don't like that I have to convert the float to a string (to avoid floating point inaccuracy) and then work with the decimal module.
Do you have any better solutions?
Edit: As pointed out in the answers below, the solution to my problem is not as obvious as it seems, because correct rounding requires a correct representation of the numbers in the first place, and that is not the case with float. So I would expect the following code
def round_half_up(number, dec_places):
    d = decimal.Decimal(number).quantize(
        decimal.Decimal(10) ** -dec_places,
        rounding=decimal.ROUND_HALF_UP)
    return float(d)
(which differs from the code above only in that the float is converted directly into a Decimal rather than to a string first) to return 2.18 when used like this: round_half_up(2.175, 2). But it doesn't, because Decimal(2.175) returns Decimal('2.17499999999999982236431605997495353221893310546875'), which is how the float is actually represented by the computer.
Surprisingly, the first code returns 2.18 because the float is converted to a string first. It seems that the str() function performs an implicit rounding to the number that was originally meant to be rounded, so two roundings take place. Even though this is the result I would expect, it is technically wrong.
Rounding is surprisingly hard to do right, because you have to handle floating-point calculations very carefully. If you are looking for an elegant solution (short, easy to understand), what you have looks like a good starting point. To be correct, you should replace decimal.Decimal(str(number)) with creating the Decimal from the number itself, which gives you a Decimal version of its exact representation:
d = Decimal(number).quantize(...)...
Decimal(str(number)) effectively rounds twice, because formatting the float into its string representation performs its own rounding. This happens because str(float_value) won't print the full decimal expansion of the float; it only prints enough digits to ensure that you get the same float back if you pass those exact digits to the float constructor.
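A quick way to see the difference, using the 2.175 example from the question (the long digit string below is the float's exact stored value):

from decimal import Decimal

print(Decimal(2.175))       # 2.17499999999999982236431605997495353221893310546875
print(Decimal(str(2.175)))  # 2.175 -- str() has already rounded to the shortest repr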
If you want to retain correct rounding but avoid depending on the big and complex decimal module, you can certainly do it, but you'll still need some way to implement the exact arithmetic needed for correct rounding. For example, you can use fractions:
import fractions, math

def round_half_up(number, dec_places=0):
    sign = math.copysign(1, number)
    number_exact = abs(fractions.Fraction(number))
    shifted = number_exact * 10**dec_places
    shifted_trunc = int(shifted)
    if shifted - shifted_trunc >= fractions.Fraction(1, 2):
        result = (shifted_trunc + 1) / 10**dec_places
    else:
        result = shifted_trunc / 10**dec_places
    return sign * float(result)

assert round_half_up(1.49) == 1
assert round_half_up(1.5) == 2
assert round_half_up(1.51) == 2
assert round_half_up(2.49) == 2
assert round_half_up(2.5) == 3
assert round_half_up(2.51) == 3
Note that the only tricky part in the above code is the precise conversion of a floating-point number to a fraction, and that can be off-loaded to the as_integer_ratio() float method, which is what both decimals and fractions use internally. So if you really want to remove the dependency on fractions, you can reduce the fractional arithmetic to pure integer arithmetic; you stay within the same line count at the expense of some legibility:
import math

def round_half_up(number, dec_places=0):
    sign = math.copysign(1, number)
    exact = abs(number).as_integer_ratio()
    shifted = (exact[0] * 10**dec_places), exact[1]
    shifted_trunc = shifted[0] // shifted[1]
    difference = (shifted[0] - shifted_trunc * shifted[1]), shifted[1]
    if difference[0] * 2 >= difference[1]:  # difference >= 1/2
        shifted_trunc += 1
    return sign * (shifted_trunc / 10**dec_places)
Note that testing these functions puts a spotlight on the approximations performed when floating-point numbers are created. For example, print(round_half_up(2.175, 2)) prints 2.17 because the decimal number 2.175 cannot be represented exactly in binary, so it is replaced by an approximation that happens to be slightly smaller than the decimal 2.175. The function receives that value, finds it smaller than the actual fraction corresponding to the decimal 2.175, and decides to round it down. This is not a quirk of the implementation; the behavior derives from properties of floating-point numbers and is also present in the round built-in of Python 3 and 2.
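For comparison, the built-in round shows the same effect (assuming Python 3; the second value matches the example quoted from the docs further down):

print(round(2.175, 2))  # 2.17 -- the stored float is slightly below 2.175
print(round(2.675, 2))  # 2.67 -- same story, as noted in the Python docs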
I don't like it, that I have to convert float to a string (to avoid
floating point inaccuracy) and then work with the decimal module. Do
you have any better solutions?
Yes; use Decimal to represent your numbers throughout your whole program, if you need to represent numbers such as 2.675 exactly and have them round to 2.68 instead of 2.67.
There is no other way. The floating point number which is shown on your screen as 2.675 is not the real number 2.675; in fact, it is very slightly less than 2.675, which is why it gets rounded down to 2.67:
>>> 2.675 - 2
0.6749999999999998
It only shows in string form as '2.675' because that happens to be the shortest string such that float(s) == 2.6749999999999998. Note that this longer representation (with lots of 9s) isn't exact either.
However you write your rounding function, it is not possible for my_round(2.675, 2) to round up to 2.68 and also for my_round(2 + 0.6749999999999998, 2) to round down to 2.67; because the inputs are actually the same floating point number.
So if your number 2.675 ever gets converted to a float and back again, you have already lost the information about whether it should round up or down. The solution is not to make it float in the first place.
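A minimal sketch of "never make it a float in the first place", assuming the half-up rule the question asked for:

from decimal import Decimal, ROUND_HALF_UP

price = Decimal("2.675")  # built from a string literal, never from a float
print(price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 2.68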
After trying for a very long time to produce an elegant one-line function, I ended up getting something that is comparable to a dictionary in size.
I would say the simplest way to do this is just:

def round_half_up(inp, dec_places):
    return round(inp + 0.0000001, dec_places)

I acknowledge that this is not accurate in every case, but it should work if you just want a simple, quick workaround.
Hey, I'm still a bit of a noob, but I'm playing around with Python and I want to round D to its nearest decimal. C and H are fixed and D is the raw input; the whole answer should be rounded, but I keep getting decimals. I want this formula:
Q = square root of [(2 * C * D) / H]
Here's my code:
import math

C = 50
H = 30
D = int(raw_input())
a = int(round(D))
Q = math.sqrt(2*C*a/H)
print Q
If I enter 100, I get 18.24.
I just want it to be 18.
I would really appreciate your help, thanks.
import math
C = 50
H = 30
a = int(raw_input())
# prints as a float
# Q = round(math.sqrt(2 * C * a / H), 0)
# prints as an int
Q = int(round(math.sqrt(2 * C * a / H), 0))
print Q
Your code appears to be rounding in the wrong place. You're rounding the input a, which was already the integer D. You're not rounding the result of the square root Q, which is a float.
Note that your code actually has an extra rounding step in it that you may not intend. When you divide two integers in Python 2, you get another integer, even if the computation should have had a remainder. You get floor division, which always rounds towards negative infinity rather than to the nearest integer (so e.g. 9/10 is 0). In your code, 2*C*a is an integer, since all the values you're multiplying are, so when you divide by H (another integer), the division gets rounded off. In the case you gave, where you entered 100 as the user input, you get 333 as the result of the division instead of the more precise 333.3333333333333. This in turn makes your square root calculation give a different value (18.24828759089466 instead of 18.257418583505537).
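A short illustration of that Python 2 behavior (a hedged sketch; in Python 3 the same / expressions would return floats):

print 9 / 10               # 0 -- floor division between two ints
print 2 * 50 * 100 / 30    # 333 -- the remainder is silently dropped
print 2 * 50.0 * 100 / 30  # 333.333333333 -- one float operand changes everything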
Anyway, you probably want to use floating point values everywhere except maybe at the end when you round off the value before printing it. You almost certainly don't want to be using integer math by accident as your current code does. One way to do that is to turn one of your constant values into a float, and move the rounding to the end:
C = 50.0  # make the calculations use floats, rather than ints
H = 30
D = int(raw_input())  # no need for `a` anymore, we're rounding later instead
Q = int(round(math.sqrt(2*C*D/H)))  # round after taking the square root, not before
An alternative to using C=50.0 is to put from __future__ import division at the top of your file, which tells Python that you want division between integers to return a float. That's the default behavior in Python 3, and it's much nicer most of the time. If you specifically want "floor" division, you can explicitly ask for it with the // operator. You might also consider actually using Python 3, rather than making do with Python 2's forwards compatibility features.
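A sketch of that alternative if you stay on Python 2 (the __future__ import has to sit at the top of the file):

from __future__ import division  # / between ints now returns a float
import math

C = 50
H = 30
D = int(raw_input())
print int(round(math.sqrt(2 * C * D / H)))  # prints 18 for an input of 100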
Alrighty, first post here, so please forgive and ignore if the question is not workable;
Background:
I'm in computer science 160. I haven't taken any computer related classes since high school, so joining this class was a big shift for me. It all seemed very advanced. We have been working in Python and each week we are prompted to write a program.
I have been working with this problem for over a week and am having a hard time even starting.
The prompt is to read an integer containing only 1's and 0's,
process the binary number digit by digit and report the decimal equivalent. Now, I have gotten some tips from a classmate and it sent me at least in a direction.
Set up a couple of counters;
use the % operator to check the remainder of the number divided by 2, and slice off the last digit (on the right) to move on to and process the next digit.
I am having an incredibly hard time wrapping my head around what formula to use on the binary digits themselves which will convert the number to decimal.
setbitval = 0
counter = 0
user = int(input("enter a binary value. "))

if user % 2 == 1:
    user = (user/10) - .1
    setbitval += 1
This is all I've got so far. My thinking is getting in the way. I've searched and searched, even through these forums.
Any information or thoughts are extremely appreciated,
T
Edit: Okay guys, everyone's help has been extremely useful, but I'm having a problem checking whether the user input is actually a binary number.
for i in reversed(bits):
    decimal += 2**counter * int(i)
    counter += 1
This is the formula someone here gave me and I've been trying different iterations of "for i in bits: if i in bits: != 0 or 1" and also "if i in bits: >= 1 or <=0".
Any thoughts?
You can use this code:

binary = raw_input("Binary: ")
d = int(binary, 2)
print d
To convert a binary value to decimal you need to do the following:
Take the least significant bit and multiply it by 2^0, then take the next least significant bit and multiply it by 2^1, the next one by 2^2, and so on...
Let's say, for example, you need to convert the number 1010 to decimal:
You would have 0*2^0 + 1*2^1 + 0*2^2 + 1*2^3 = 0 + 2 + 0 + 8 = 10
So in your Python code, you need to:
read the int that the user inputted (representing the binary value),
convert that int to a string, so you can break it into a list of digits,
make a list of digits from the string you created (a list in Python can be created from a string, not from an int, which is why you need the conversion to string first),
go through that list of bits in reverse and multiply every bit by 2^k, with k being a counter starting from 0.
Here's the code that demonstrates what I just tried to explain:
user_input = int(input("enter a binary value"))
bits = list(str(user_input))
decimal = 0
counter = 0

for i in reversed(bits):
    decimal += 2**counter * int(i)
    counter += 1

print 'The decimal value is: ', decimal
I'll agree this is close to the "code this for me" territory, but I'll try to answer in a way that gets you on the right track, instead of just posting a working code snippet.
A simple way of doing this is just to use int()'s base argument, but I'm guessing that is disallowed.
You already have a way of testing the current bit in your question, namely checking whether n % 2 == 1. If this is the case, we need to add a power of two.
Then, we need some way of going to the next digit. If we were working on the bits directly we would use bit shifts, but since the "binary" number here is stored as a decimal integer, we don't have those. a >> b is equivalent to a // (2**b) - can you write a decimal equivalent of that?
You also need to keep a counter of which power of two the current bit represents, a loop, and some way of detecting an end condition. Those are left as exercises to the reader.
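For reference, one possible shape of that loop (a sketch only, since the answer above deliberately leaves the details as an exercise):

n = int(input("enter a binary value: "))
decimal = 0
power = 0
while n > 0:
    if n % 2 == 1:       # the current last digit is a 1
        decimal += 2 ** power
    n = n // 10          # decimal equivalent of a right shift: drop the last digit
    power += 1
print(decimal)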
I’d recommend reading the following articles on Wikipedia:
https://en.wikipedia.org/wiki/Radix
https://en.wikipedia.org/wiki/Binary_number
The first one gives you an idea how the numeral systems work in general and the second one explains and shows the formula to convert between binary and decimal systems.
Try to implement the solution after reading this. That’s what I did when I dealt with this problem. If that doesn’t help, let me know and I’ll post the code.
Hopefully, this code clarifies things a bit.
x = input("Enter binary number: ").strip()
decimal = 0
for i in range(len(x)):
decimal += int(x[i]) * 2**abs((i - (len(x) - 1)))
print(decimal)
This code takes in a binary number as a string, converts it to a decimal number and outputs it as an integer. The procedure is the following:
1st element of binary number * 2^(length of binary number - 1)
2nd element of binary number * 2^(length of binary number - 2)
and so on till we get to the last element and ...2^0
If we take the number 10011, the conversion using this formula looks like this:
1*2^4 + 0*2^3 + 0*2^2 + 1*2^1 + 1*2^0, which equals 19.
This code, however, assumes that the binary number is valid. Let me know if it helps.
Another implementation using while loop might look like this. Maybe it'll be easier to understand than the code with the for loop.
x = input("Enter binary number: ").strip()
decimal = 0
index = 0
exp = len(x) - 1
while index != len(x):
decimal += int(x[index]) * 2**exp
index += 1
exp -= 1
print(decimal)
In this one we start from the beginning of the number with the highest power, which is the length of the binary number minus one; we loop through the number, lowering the power and advancing the index.
Regarding checking whether the number is binary:
try using a helper function to determine if the number is binary, and then call that function inside your main function. For example:
def is_binary(x):
    """ Returns True if number x is binary and False otherwise.
    input: x as a string
    """
    for i in list(x):
        if i not in ["1", "0"]:
            return False
    return True

def binary_decimal(x):
    """ Converts binary to decimal.
    input: binary number x as a string
    output: decimal number as int
    """
    if not is_binary(x):
        return "Number is invalid"
    decimal = 0
    for i in range(len(x)):
        decimal += int(x[i]) * 2**abs((i - (len(x) - 1)))
    return decimal
The first function checks if number consists only of ones and zeros and the second function actually converts your number only if it's binary according to the first function.
You can also try using an assert statement or try / except if you'd rather raise an error when the number is not binary instead of simply returning the "Number is invalid" message.
Of course, you can implement this solution without any functions.
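As an illustration of that assert / exception option, here is a hypothetical wrapper around the two functions above (binary_decimal_strict is a made-up name):

def binary_decimal_strict(x):
    """ Like binary_decimal, but raises instead of returning a message. """
    if not is_binary(x):
        raise ValueError("Number is invalid: %r" % x)
    return binary_decimal(x)

try:
    print(binary_decimal_strict("10210"))
except ValueError as err:
    print(err)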
I have a big integer below, stored as max. How come dividing max by 27 is not equivalent to simply omitting the first factor 27? Mathematically they should be equal, but in Python they are not. How can I get the same answer by dividing the max value by 27 in this example?
max = 27*37*47*30*17*6*20*17*21*43*5*49*49*50*20*42*45*1*22*44
no27 = 37*47*30*17*6*20*17*21*43*5*49*49*50*20*42*45*1*22*44
div27 = (max/27)
modno27 = no27%40
moddiv27 = div27%40
The values printed are:
no27 = 35882855955274315680000000
div27 = 3.5882855955274316e+25
modno27 = 0
moddiv27 = 8.0
Assuming this is Python 3, you used true division, which computes float results, but float (based on C double) has representation limitations that Python ints do not (above ~2**53, it can't represent every integer value).
When you know the number is evenly divisible, use // to preserve int-ness. If it's not evenly divisible, you'll round down, e.g. 5 // 3 == 1. If that's unacceptable, you can use divmod to compute both quotient and remainder at once (so no information is lost) or the fractions.Fraction type or decimal.Decimal type (with appropriate precision) to get more precise results in a single result type.
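A quick check with the numbers from the question shows the difference (the values are copied from above; max is renamed max_value to avoid shadowing the built-in):

max_value = 27*37*47*30*17*6*20*17*21*43*5*49*49*50*20*42*45*1*22*44
no27 = 37*47*30*17*6*20*17*21*43*5*49*49*50*20*42*45*1*22*44

print(max_value / 27)           # 3.5882855955274316e+25 -- a float, precision lost
print(max_value // 27)          # 35882855955274315680000000 -- exact int
print(max_value // 27 == no27)  # True
print((max_value // 27) % 40)   # 0, matching no27 % 40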
Sorry, but I really don't understand the meaning of the definition of round in the Python 3.3.2 docs:
round(number[, ndigits])
Return the floating point value number rounded to ndigits digits after the decimal point. If ndigits is omitted, it defaults to zero. Delegates to number.__round__(ndigits).
For the built-in types supporting round(), values are rounded to the closest multiple of 10 to the power minus ndigits; if two multiples are equally close, rounding is done toward the even choice (so, for example, both round(0.5) and round(-0.5) are 0, and round(1.5) is 2). The return value is an integer if called with one argument, otherwise of the same type as number.
I don't understand where the multiple of 10 and the power come in.
After reading the following examples, I think round(number, n) works like this:
if the number is 123.456 and n is 2,
round will get two numbers: 123.45 and 123.46;
round compares abs(number - 123.45) (0.006) and abs(number - 123.46) (0.004), and chooses the smaller one;
so, 123.46 is the result.
And if the number is 123.455 and n is 2:
round will get two numbers: 123.45 and 123.46;
round compares abs(number - 123.45) (0.005) and abs(number - 123.46) (0.005). They are equal, so round checks the last digit of 123.45 and 123.46. The even one is the result;
so, the result is 123.46.
Am I right?
If not, could you offer an understandable version of "values are rounded to the closest multiple of 10 to the power minus ndigits"?
ndigits = 0 => pow(10, -ndigits) = 10^(-ndigits) = 1
ndigits = 1 => pow(10, -ndigits) = 10^(-ndigits) = 0.1
etc.
>>> for ndigits in range(6):
...     print round(123.456789, ndigits) / pow(10, -ndigits)
123.0
1235.0
12346.0
123457.0
1234568.0
12345679.0
Basically, the number you get is always an integer multiple of 10^(-ndigits). For ndigits=0, that means the number you get is itself an integer; for ndigits=1 it means it won't have more than one non-zero digit after the decimal point.
It helps to know that anything to the power of 0 equals 1. As ndigits increases, the function f(ndigits) = 10^(-ndigits) gets smaller. Specifically, each time you increase ndigits by 1, you shift the point of precision one decimal place to the right: 10^-0 = 1, 10^-1 = 0.1 and 10^-2 = 0.01. The place where the 1 sits is the last point of precision for round.
For the part where it says
For the built-in types supporting round(), values are rounded to the
closest multiple of 10 to the power minus ndigits; if two multiples
are equally close, rounding is done toward the even choice (so, for
example, both round(0.5) and round(-0.5) are 0, and round(1.5) is 2).
This has unexpected behavior in Python 3, and it will not work for all floats. Consider the example you gave: round(123.455, 2) yields the value 123.45. This is not the expected behavior, because the closest even multiple of 10^-2 is 123.46, not 123.45!
To understand this, you have to pay special attention to the note below this:
Note The behavior of round() for floats can be surprising: for
example, round(2.675, 2) gives 2.67 instead of the expected 2.68. This
is not a bug: it’s a result of the fact that most decimal fractions
can’t be represented exactly as a float.
And that is why certain floats round to the "wrong" value, and there is really no easy workaround as far as I am aware (sadface). You could use fractions (i.e. two variables representing the numerator and the denominator) to represent floats in a custom round function if you want behavior different from the unpredictable behavior of floats.
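A hedged sketch of that fraction-based idea (round_half_up_exact is a hypothetical helper; it builds the fraction from the decimal string the user meant, so 2.675 really is 2675/1000, and it only handles non-negative inputs):

from fractions import Fraction

def round_half_up_exact(text, ndigits):
    """ Round a non-negative decimal string half up at ndigits places. """
    q = Fraction(text) * 10**ndigits
    n = q.numerator // q.denominator   # floor
    if (q - n) * 2 >= 1:               # remainder >= 1/2 -> round up
        n += 1
    return n / 10**ndigits

print(round_half_up_exact("2.675", 2))  # 2.68
print(round(2.675, 2))                  # 2.67 -- the float can't store 2.675 exactly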