I want to print a large Decimal object in Python 3.6.
import decimal

a = decimal.Decimal('0.0')
for idx in range(10):
    a += decimal.Decimal('1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111.11111111111111111111')
    print(a)
The output I want is below.
1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111.11111111111111111111
2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222.22222222222222222222
3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333.33333333333333333333
4444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444.44444444444444444444
5555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555.55555555555555555555
6666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666.66666666666666666666
7777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777.77777777777777777777
8888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888.88888888888888888888
9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999.99999999999999999999
11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111.11111111111111111110
But the actual output is below.
1.111111111111111111111111111E+99
2.222222222222222222222222222E+99
3.333333333333333333333333333E+99
4.444444444444444444444444444E+99
5.555555555555555555555555555E+99
6.666666666666666666666666666E+99
7.777777777777777777777777777E+99
8.888888888888888888888888888E+99
9.999999999999999999999999999E+99
1.111111111111111111111111111E+100
How can I get the results I want?
You can use string formatting, but before that you have to raise the precision (the default for Decimal is 28 significant digits):
import decimal

a = decimal.Decimal('0.0')
with decimal.localcontext() as ctx:
    ctx.prec = 120
    for idx in range(10):
        a += decimal.Decimal('1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111.11111111111111111111')
        print('{:.20f}'.format(a))
Prints:
1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111.11111111111111111111
2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222.22222222222222222222
3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333.33333333333333333333
4444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444.44444444444444444444
5555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555.55555555555555555555
6666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666666.66666666666666666666
7777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777.77777777777777777777
8888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888.88888888888888888888
9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999.99999999999999999999
11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111.11111111111111111110
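Raising the precision is the key step. Once no digits are lost, you can also use the bare 'f' presentation type instead of hardcoding .20f; it prints the Decimal in positional notation with however many digits it actually has (a small sketch, with a short stand-in for the long literal above):

import decimal

with decimal.localcontext() as ctx:
    ctx.prec = 120
    a = decimal.Decimal('0.0')
    for idx in range(10):
        a += decimal.Decimal('1111.11')  # short stand-in for the long literal above
        print('{:f}'.format(a))          # fixed-point notation, no exponent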
Related
Currently I am logging stuff and I am using my own formatter with a custom formatTime():
def formatTime(self, _record, _datefmt):
    t = datetime.datetime.now()
    return t.strftime('%Y-%m-%d %H:%M:%S.%f')
My issue is that the microseconds, %f, are six digits. Is there any way to spit out fewer digits, like only the first three digits of the microseconds?
The simplest way would be to use slicing to just chop off the last three digits of the microseconds:
def format_time():
    t = datetime.datetime.now()
    s = t.strftime('%Y-%m-%d %H:%M:%S.%f')
    return s[:-3]
I strongly recommend just chopping. I once wrote some logging code that rounded the timestamps rather than chopping, and I found it actually kind of confusing when the rounding changed the last digit. There was timed code that stopped running at a certain timestamp yet there were log events with that timestamp due to the rounding. Simpler and more predictable to just chop.
If you want to actually round the number rather than just chopping, it's a little more work but not horrible:
def format_time():
    t = datetime.datetime.now()
    s = t.strftime('%Y-%m-%d %H:%M:%S.%f')
    head = s[:-7]  # everything up to the '.'
    tail = s[-7:]  # the '.' and the 6 digits after it
    f = float(tail)
    temp = "{:.03f}".format(f)  # for Python 2.x: temp = "%.3f" % f
    new_tail = temp[1:]  # temp[0] is always '0'; get rid of it
    return head + new_tail
Obviously you can simplify the above with fewer variables; I just wanted it to be very easy to follow.
As of Python 3.6 the language has this feature built in:
def format_time():
    t = datetime.datetime.now()
    s = t.isoformat(timespec='milliseconds')
    return s
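If you prefer the question's space-separated layout over the ISO 'T', isoformat also accepts a sep argument (a quick sketch; the printed value is illustrative):

import datetime

s = datetime.datetime.now().isoformat(sep=' ', timespec='milliseconds')
print(s)  # e.g. '2016-08-05 18:18:54.776'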
The following method should always return a timestamp that looks exactly like this (with or without the timezone depending on whether the input dt object contains one):
2016-08-05T18:18:54.776+0000
It takes a datetime object as input (which you can produce with datetime.datetime.now()). To get the time zone like in my example output you'll need to import pytz and pass datetime.datetime.now(pytz.utc).
import pytz, datetime

def time_format(dt):
    # %06.3f zero-pads seconds below 10 (e.g. '06.350'), which plain %.3f would not
    return "%s:%06.3f%s" % (
        dt.strftime('%Y-%m-%dT%H:%M'),
        dt.second + dt.microsecond / 1e6,
        dt.strftime('%z')
    )

time_format(datetime.datetime.now(pytz.utc))
I noticed that some of the other methods above would omit the trailing zero if there was one (e.g. 0.870 became 0.87) and this was causing problems for the parser I was feeding these timestamps into. This method does not have that problem.
An easy solution that should work in all cases:
def format_time():
    t = datetime.datetime.now()
    if t.microsecond % 1000 >= 500:  # check if there will be rounding up
        t = t + datetime.timedelta(milliseconds=1)  # manually round up
    return t.strftime('%Y-%m-%d %H:%M:%S.%f')[:-3]
Basically you do manual rounding on the date object itself first, then you can safely trim the microseconds.
Edit: As some pointed out in the comments below, the rounding of this solution (and the one above) introduces problems when the microsecond value reaches 999500, as 999.5 is rounded to 1000 (overflow).
Short of reimplementing strftime to support the format we want (the potential overflow caused by the rounding would need to be propagated up to seconds, then minutes, etc.), it is much simpler to just truncate to the first 3 digits as outlined in the accepted answer, or using something like:
'{:03}'.format(int(999999/1000))
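Spelled out as a complete, runnable snippet (a sketch; the :03d padding is what keeps 5 ms from printing as '.5'):

import datetime

t = datetime.datetime.now()
# integer division truncates to whole milliseconds; {:03d} keeps the leading zeros
print('{}.{:03d}'.format(t.strftime('%H:%M:%S'), t.microsecond // 1000))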
-- Original answer preserved below --
In my case, I was trying to format a datestamp with milliseconds formatted as 'ddd'. The solution I ended up using to get milliseconds was to use the microsecond attribute of the datetime object, divide it by 1000.0, pad it with zeros if necessary, and round it with format. It looks like this:
'{:03.0f}'.format(datetime.now().microsecond / 1000.0)
# Produces: '033', '499', etc.
You can subtract the microseconds from the current datetime.
d = datetime.datetime.now()
current_time = d - datetime.timedelta(microseconds=d.microsecond)
This will turn 2021-05-14 16:11:21.916229 into 2021-05-14 16:11:21
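The same subtraction idea works at millisecond granularity, if you want to keep the milliseconds and drop only the remainder (a sketch, not part of the original answer):

import datetime

d = datetime.datetime.now()
# drop only the sub-millisecond remainder, keeping whole milliseconds
d = d - datetime.timedelta(microseconds=d.microsecond % 1000)
print(d.strftime('%Y-%m-%d %H:%M:%S.%f')[:-3])  # e.g. 2021-05-14 16:11:21.916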
This method allows flexible precision and simply uses the whole microsecond value if you ask for more digits than it has.

def formatTime(self, _record, _datefmt, precision=3):
    dt = datetime.datetime.now()
    us = '%06d' % dt.microsecond  # zero-pad, so e.g. 5000 us -> '005000', not '5000'
    f = us[:precision] if len(us) > precision else us
    return "%04d-%02d-%02d %02d:%02d:%02d.%s" % (dt.year, dt.month, dt.day,
                                                 dt.hour, dt.minute, dt.second, f)
This method implements rounding to 3 decimal places:

import datetime
from decimal import Decimal, ROUND_HALF_UP

def formatTime(self, _record, _datefmt, precision='0.001'):
    dt = datetime.datetime.now()
    # build the seconds from a zero-padded string; '%d.%d' would turn 5000 us into .5
    seconds = Decimal('%d.%06d' % (dt.second, dt.microsecond))
    return "%d-%d-%d %d:%d:%s" % (dt.year, dt.month, dt.day, dt.hour, dt.minute,
                                  seconds.quantize(Decimal(precision), rounding=ROUND_HALF_UP))
I avoided using the strftime method purposely because I would prefer not to modify a fully serialized datetime object without revalidating it. This way also shows the date internals in case you want to modify it further.
In the rounding example, note that the precision is string-based for the Decimal module.
Here is my solution using a regexp:

import re

# Capture the 6 digits after the dot in a group.
regexp = re.compile(r'\.(\d{6})')

def to_splunk_iso(dt):
    """Converts the datetime object to the Splunk isoformat string."""
    microseconds = regexp.search(dt.isoformat()).group(1)  # 6-digit string
    # %03d keeps the leading zeros (5 ms must print as '.005', not '.5');
    # note that microseconds >= 999500 still round up to '.1000' (overflow)
    return regexp.sub('.%03d' % round(float(microseconds) / 1000), dt.isoformat())
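A quick check with a fixed, made-up timestamp, assuming the function above:

import datetime

print(to_splunk_iso(datetime.datetime(2021, 5, 14, 16, 11, 21, 916229)))
# 2021-05-14T16:11:21.916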
Fixing the proposed solution based on Pablojim's comments:

from datetime import datetime

dt = datetime.now()
ms = min(round(dt.microsecond / 1000), 999)  # round to whole milliseconds, clamp overflow
dt = dt.replace(microsecond=ms * 1000)       # keep only the millisecond part
print(dt.strftime('%Y-%m-%d %H:%M:%S.%f')[:-3])
If you also want the day of the week (e.g. 'Sunday') in the result, slicing with '[:-3]' will not work, since the day name would come after the microseconds. In that case you can use:
dt = datetime.datetime.now()
print("{}.{:03d} {}".format(dt.strftime('%Y-%m-%d %I:%M:%S'), dt.microsecond//1000, dt.strftime("%A")))
#Output: '2019-05-05 03:11:22.211 Sunday'
%H - for 24 Hour format
%I - for 12 Hour format
Adding my two cents here: this method allows you to write your microsecond format the way you would format a float in C style. It takes advantage of the fact that both use %f.
import datetime
import re

def format_datetime(date, fmt):
    """Format a ``datetime`` object with sub-second precision.

    Pass your microseconds as you would format a C-style float,
    e.g. "%.3f".

    Args:
        date (datetime.datetime): Your input ``datetime`` object.
        fmt (str): Your strftime format string.

    Returns:
        str: Your formatted datetime string.
    """
    # Check whether fmt contains "%.xf" (x = a number)
    float_format = r"(%\.\d+f)"
    has_float_format = re.search(float_format, fmt)
    if has_float_format:
        # express the microseconds as a fraction of a second
        fraction = date.microsecond / 1e6
        ms_str = has_float_format.group(1) % fraction
        # splice the digits (minus the leading "0.") into the format string
        fmt = re.sub(float_format, ms_str[2:], fmt)
    return date.strftime(fmt)

print(format_datetime(datetime.datetime.now(), "%H:%M:%S.%.3f"))
# '17:58:54.424'
I'm new to programming so I thought I'd ask here for help.
So when I use:
eval('12.5 + 3.2')
it converts 12.5 and 3.2 into floats.
But I want them to be converted into the Decimal datatype.
I can use:
from decimal import Decimal
Decimal(12.5) + Decimal(3.2)
But I can't do that in my program as I'm accepting user input.
I've found a solution but it uses regular expressions, which I'm not familiar with right now (and I can't find it again for some reason).
It would be great if someone could help me out. Thanks!
UPDATE: apparently the official docs have a recipe that does exactly what you're looking for. From https://docs.python.org/3/library/tokenize.html#examples:
from tokenize import tokenize, untokenize, NUMBER, STRING, NAME, OP
from io import BytesIO

def decistmt(s):
    """Substitute Decimals for floats in a string of statements.

    >>> from decimal import Decimal
    >>> s = 'print(+21.3e-5*-.1234/81.7)'
    >>> decistmt(s)
    "print (+Decimal ('21.3e-5')*-Decimal ('.1234')/Decimal ('81.7'))"

    The format of the exponent is inherited from the platform C library.
    Known cases are "e-007" (Windows) and "e-07" (not Windows). Since
    we're only showing 12 digits, and the 13th isn't close to 5, the
    rest of the output should be platform-independent.

    >>> exec(s) #doctest: +ELLIPSIS
    -3.21716034272e-0...7

    Output from calculations with Decimal should be identical across all
    platforms.

    >>> exec(decistmt(s))
    -3.217160342717258261933904529E-7
    """
    result = []
    g = tokenize(BytesIO(s.encode('utf-8')).readline)  # tokenize the string
    for toknum, tokval, _, _, _ in g:
        if toknum == NUMBER and '.' in tokval:  # replace NUMBER tokens
            result.extend([
                (NAME, 'Decimal'),
                (OP, '('),
                (STRING, repr(tokval)),
                (OP, ')')
            ])
        else:
            result.append((toknum, tokval))
    return untokenize(result).decode('utf-8')
Which you can then use like so:
from decimal import Decimal
s = "12.5 + 3.2 + 1.0000000000000001 + (1.0 if 2.0 else 3.0)"
s = decistmt(s)
print(s)
print(eval(s))
Result:
Decimal ('12.5')+Decimal ('3.2')+Decimal ('1.0000000000000001')+(Decimal ('1.0')if Decimal ('2.0')else Decimal ('3.0'))
17.7000000000000001
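Applied to the exact expression from the question, the floats never come into existence:

>>> from decimal import Decimal
>>> eval(decistmt('12.5 + 3.2'))
Decimal('15.7')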
Feel free to skip the rest of this answer, which is now only of interest to historians of half-correct solutions.
As far as I know, there's no easy way to "hook into" eval in order to change how it interprets float objects.
But if we use the ast module to convert your string into an abstract syntax tree before evaling it, then we can manipulate the tree to replace the floats with Decimal calls.
import ast
from decimal import Decimal

def construct_decimal_node(value):
    return ast.Call(
        func=ast.Name(id="Decimal", ctx=ast.Load()),
        args=[value],
        keywords=[]
    )
class FloatLiteralReplacer(ast.NodeTransformer):
    def visit_Num(self, node):
        return construct_decimal_node(node)

s = '12.5 + 3.2'
node = ast.parse(s, mode="eval")
node = FloatLiteralReplacer().visit(node)
ast.fix_missing_locations(node)  # add diagnostic information to the nodes we created
code = compile(node, filename="", mode="eval")
result = eval(code)
print("The type of the result of this expression is:", type(result))
print("The result of this expression is:", result)
Result:
The type of the result of this expression is: <class 'decimal.Decimal'>
The result of this expression is: 15.70000000000000017763568394
As you can see, the result is identical to what you would have gotten if you had calculated Decimal(12.5) + Decimal(3.2) directly.
But perhaps you're thinking "Why isn't the result 15.7?". This is because Decimal(3.2) is not exactly identical to 3.2. It's actually equal to 3.20000000000000017763568394002504646778106689453125. This is a hazard when it comes to initializing decimals using float objects -- the inaccuracy is already present. Better to use strings to create decimals, e.g. Decimal("3.2").
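You can see the difference directly at the REPL:

>>> from decimal import Decimal
>>> Decimal(3.2)    # the float's inaccuracy is baked in
Decimal('3.20000000000000017763568394002504646778106689453125')
>>> Decimal('3.2')  # the string preserves exactly what was written
Decimal('3.2')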
Maybe you're now thinking "Ok, so how do I turn 12.5 + 3.2 into Decimal("12.5") + Decimal("3.2")?". The quickest approach would be to modify construct_decimal_node so the Call's args is an ast.Str rather than an ast.Num:
import ast
from decimal import Decimal

def construct_decimal_node(value):
    return ast.Call(
        func=ast.Name(id="Decimal", ctx=ast.Load()),
        args=[ast.Str(str(value.n))],
        keywords=[]
    )
class FloatLiteralReplacer(ast.NodeTransformer):
    def visit_Num(self, node):
        return construct_decimal_node(node)

s = '12.5 + 3.2'
node = ast.parse(s, mode="eval")
node = FloatLiteralReplacer().visit(node)
ast.fix_missing_locations(node)  # add diagnostic information to the nodes we created
code = compile(node, filename="", mode="eval")
result = eval(code)
print("The type of the result of this expression is:", type(result))
print("The result of this expression is:", result)
Result:
The type of the result of this expression is: <class 'decimal.Decimal'>
The result of this expression is: 15.7
But take care: while I expect this approach to return good results most of the time, there is a corner case where it returns surprising results. In particular, when the expression contains a float f such that float(str(f)) != f. In other words, when the printed representation of the float lacks the precision necessary to represent the float exactly.
For example, if you changed s in the above code to "1.0000000000000001 + 0", the result would be 1.0. This is incorrect, since the result of Decimal("1.0000000000000001") + Decimal("0") is 1.0000000000000001.
I'm not sure how you could prevent this problem... By the time ast.parse has finished executing, the float literal has already been converted into a float object, and there's no obvious way to retrieve the string that was used to create it. Perhaps you could extract it from the expression string, but you'd basically have to reinvent Python's parser to do that.
To format a number to two decimal places I can do:
'%.2f' % 20.283928
'20.28'
However, I want to allow the user to be able to specify the number of decimal places. For example:
value = 20.283928
num_decimal_places = 7 # from user input
'%.%sf' % (20, '7')
How could I do something like the above?
If your Python version is 3.6 or greater:

print(f'%.{num_decimal_places}f' % value)
Output:
20.2839280
If you are willing to use newer format you can do this:
>>> n = 4
>>> value = 20.283928
>>> '{:.{count}f}'.format(value, count=n)
'20.2839'
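On Python 3.6+, the two answers combine: an f-string format spec can itself contain a nested replacement field, so no %-formatting or .format() call is needed:

>>> value = 20.283928
>>> num_decimal_places = 7
>>> f'{value:.{num_decimal_places}f}'
'20.2839280'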
I'm trying to add an amount of time to a timestamp.
I want a time format like this: 13:30:45,123 (in Java: "HH:mm:ss,SSS"), but Python gives me 13:30:45,123456 ("%H:%M:%S,%f"), i.e. six digits of microseconds.
I read on the web and found possible solutions like:
from datetime import datetime
hour = datetime.utcnow().strftime('%H:%M:%S,%f')[:-3]
print(hour)
The output is: 04:33:16,123
But it's a bad solution, because if the time is, for example, 01:49:56,020706, the output is 01:49:56,020, when the correct rounded output should be 01:49:56,021.
The real goal is that when I add milliseconds, the carry should propagate into the seconds.
Example (I want to add 500 milliseconds):
If the input is 00:01:48,557, the output should be: 00:01:49,057
The code of the program in Java (which works well) is:
SimpleDateFormat df = new SimpleDateFormat("HH:mm:ss,SSS");
System.out.print("Input the time: ");
t1 = in.next();
Date d = df.parse(t1);
Calendar cal = Calendar.getInstance();
cal.setTime(d);
cal.add(Calendar.MILLISECOND, 500); // here increase by 500 milliseconds
t2 = df.format(cal.getTime());
System.out.print("The Output (+500): "+t2);
I don't know if exists in Python something like SimpleDateFormat (in Java).
As to addition, you can add 500ms to your datetime object, using a timedelta object:
from datetime import datetime, timedelta
t1 = datetime.utcnow()
t2 = t1 + timedelta(milliseconds=500)
So as long as you're working with datetime objects instead of strings, you can easily do all the time-operations you'd like.
So we're left with the question of how to format the time when you want to display it.
As you pointed out, the [:-3]-trick seems to be the common solution, and seems to me it should work fine. If you really care about rounding correctly to the closest round millisecond, you can use the following "rounding trick":
You must have seen this trick in the past, for floats:
def round(x):
    return int(x + 0.5)
The same idea (i.e. adding 0.5) can also be applied to datetimes:
def format_dt(t):
    tr = t + timedelta(milliseconds=0.5)
    return tr.strftime('%H:%M:%S,%f')[:-3]
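Putting the two pieces together on the question's own example (the date part is an arbitrary placeholder):

from datetime import datetime, timedelta

t = datetime(2021, 1, 1, 0, 1, 48, 557000)         # the question's 00:01:48,557
print(format_dt(t + timedelta(milliseconds=500)))  # 00:01:49,057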
You can round off digits using decimal:

from datetime import datetime
from decimal import Decimal

ts = datetime.utcnow()
sec = Decimal(ts.strftime('%S.%f'))
# note: seconds below 10 will print without a leading zero, e.g. '6.35'
print(ts.strftime('%H:%M:') + str(round(sec, 3)))
I'm in need of a function returning only the significant part of a value with respect to a given error. Meaning something like this:
def significant(value, error):
    """ This function takes a value and determines its significant
    accuracy by its error.
    It returns only the scientifically important part of a value and drops the rest. """
    magic magic magic....
    return formatted value as String.
What I have written so far to show what I mean:
import numpy as np

def significant(value, error):
    """ Returns a number in a scientific format. Meaning a value has an error
    and that error determines how many digits of the
    value are significant. e.g. value = 12.345MHz,
    error = 0.1MHz => 12.3MHz because the error is at the first digit.
    (in reality drop the MHz, it's just to show why.)"""
    xx = "%E" % error  # I assume this is most ineffective.
    xx = xx.split("E")
    xx = int(xx[1])  # exponent of the error's magnitude
    if error <= value:  # this should be the normal case
        yy = np.around(value, -xx)
        if xx >= 0:  # error is 1 or bigger
            return "%i" % yy
        else:  # error is smaller than 1
            string = "%." + str(-xx) + "f"
            return string % yy
    if error > value:  # this should not be usual, but it can happen
        return "%g" % value
What I don't want is a function like numpy's around or round. Those functions take a value and you have to tell them which digits of the value are important. The point is that in general I don't know in advance how many digits are significant; it depends on the size of the error of that value.
Another example:
value = 123, error = 12 => 120
One can drop the 3, because the error is of the order of tens. However this behaviour is not so important, because some people would still write 123 for the value. Here it is okay, but not perfectly right.
For big numbers the "g" string operator is a usable choice, but not always what I need. For example, if the error is bigger than the value (which happens e.g. when someone tries to measure something that does not exist):
value = 10, error = 100
I still wish to keep the 10 as the value, because I don't know any better. The function should then return 10, not 0.
The stuff I have written does work more or less, but it's clearly not efficient or elegant in any way. Also I assume this question concerns hundreds of people, because every scientist has to format numbers in this way. So I'm sure there is a ready-to-use solution somewhere, but I haven't found it yet. Probably my Google skills aren't good enough, but I wasn't able to find a solution in two days, so now I'm asking here.
For testing my code I used the following, but more cases are needed.

errors = [0.2, 1.123, 1.0, 123123.1233215, 0.123123123768]
values = [12.3453, 123123321.4321432, 0.000321, 321321.986123612361236, 0.00001233214]
for value, error in zip(values, errors):
    print("Test value:", value, "error:", error)
    print("Result:", significant(value, error))
import math

def round_on_error(value, error):
    significant_digits = 10**math.floor(math.log(error, 10))
    return value // significant_digits * significant_digits
Example:
>>> errors = [0.2,1.123,1.0, 123123.1233215,0.123123123768]
>>> values = [12.3453,123123321.4321432, 0.000321 ,321321.986123612361236,0.00001233214 ]
>>> list(map(round_on_error, values, errors))
[12.3, 123123321.0, 0.0, 300000.0, 0.0]
And if you want to keep a value that is inferior to its error:

def round_on_error(value, error):
    if value < error:  # the error exceeds the value, so keep the value as is
        return value
    significant_digits = 10**math.floor(math.log(error, 10))
    return value // significant_digits * significant_digits
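A quick check of both branches with the question's examples:

>>> round_on_error(10, 100)   # error larger than the value: keep the value
10
>>> round_on_error(123, 12)   # error in the tens: drop the ones digit
120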