I am trying to read a two-dimensional range of values from a ".xlsb" file using xlwings. The range contains a series of formulas that return floats. When I read the values, they come back as Decimals rather than floats, and the Decimals are truncated after 4 decimal places. For example, a value of 0.0913495 in Excel is read in as Decimal('0.0913'). Converting these Decimals to floats does not help, because any precision beyond 4 decimal places is already gone: float(Decimal('0.0913')) simply returns 0.0913!
So far I have tried the following to fix this problem; none have worked:
1. Set precision to 28 by calling decimal.getcontext().prec = 28. I have also tried 7, 8, etc. This changes nothing.
2. Use the .options method: sheet.range("myrange").options(numbers=lambda x: float(x)).value
3. Use .raw_value
Oddly, (2) still returns the numbers as Decimals, as if my options were ignored.
This is a problem because my application relies on more than 4 decimal places of accuracy, yet xlwings refuses to read the values at any higher precision. How do I fix this?
For reference, I am using xlwings 0.23.0 with Python 3.8.8 and Excel version 2108 (Build 14326.20238)
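As a side note on attempt (1): decimal.getcontext().prec only affects the results of arithmetic operations, not construction, so raising it cannot restore digits that were already lost by the time the Decimal was created. A minimal demonstration:

```python
import decimal

decimal.getcontext().prec = 28   # context precision applies to arithmetic only
d = decimal.Decimal('0.0913')    # constructed exactly from the (already truncated) string

# the missing digits were gone before precision ever came into play
print(float(d))  # 0.0913
```

In other words, the truncation happens upstream of the Decimal context, so the fix has to happen where the value is read, not in the decimal module.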
I am trying to read a dataframe from a csv, do some calculations with it and then export the results to another csv. While doing that I noticed that the value 8.1e-202 is getting changed to 8.1000000000000005e-202. But all the other numbers are represented correctly.
Example:
An example.csv looks like this:
id,e-value
ID1,1e-20
ID2,8.1e-202
ID3,9.24e-203
If I do:
import pandas as pd
df = pd.read_csv("example.csv")
df.iloc[1]["e-value"]
>>> 8.1000000000000005e-202
df.iloc[2]["e-value"]
>>> 9.24e-203
Why is 8.1e-202 being altered but 9.24e-203 isn't?
I tried to change the datatype that pandas is using from the default
df["e-value"].dtype
>>> dtype('float64')
to numpy datatypes like this:
import numpy as np
df = pd.read_csv("./temp/test", dtype={"e-value" : np.longdouble})
but this will just result in:
df.iloc[1]["e-value"]
>>> 8.100000000000000522e-202
Can someone explain to me why this is happening? I can't replicate this problem with any other number; everything bigger or smaller than 8.1e-202 seems to behave normally.
EDIT:
To specify my problem. I am aware that floats are not perfect. My actual problem with this is that once I write the dataframe back to a csv the resulting file will then look like this:
id,e-value
ID1,1e-20
ID2,8.1000000000000005e-202
ID3,9.24e-203
And I need the second row to be ID2,8.1e-202
I "fixed" this by just formatting this column before I write the csv, but I'm unhappy with this solution since the formatting will change other elements to something scientific notation where it was just a normal float.
def format_eval(e):
    return "{0:.1e}".format(e)

df["e-value"] = df["e-value"].apply(format_eval)
Floating-point representation is not so simple. Not every real number can be represented, and almost all of them (relatively speaking) are approximations. It is not like integers: the spacing between representable values varies with magnitude, and Python floats are fixed-width binary approximations, not arbitrary-precision values.
Each floating-point standard has its own set of real numbers that it can represent exactly. There is no workaround for numbers outside that set.
https://en.wikipedia.org/wiki/Single-precision_floating-point_format
https://en.wikipedia.org/wiki/IEEE_754-2008_revision
If the problem really is the arithmetic or the comparisons, you should consider whether the error will grow or shrink. For example, multiplying by large numbers can grow the representation error.
Also, when comparing you should use something like math.isclose, which basically compares the distance between the two numbers against a tolerance.
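For example, with math.isclose:

```python
import math

a = 0.1 + 0.2
b = 0.3

print(a == b)              # False: the sum carries representation error
print(math.isclose(a, b))  # True: equal within the default relative tolerance
```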
If you are trying to represent and operate on real numbers that are not irrational — integers, fractions, or decimal numbers with finitely many digits — you can also consider casting to the proper exact representation: int, decimal.Decimal, or fractions.Fraction.
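For instance, the same sum that fails with binary floats is exact with the decimal and rational types:

```python
from decimal import Decimal
from fractions import Fraction

# binary floats: 0.1 and 0.2 are approximations, so the sum is slightly off
print(0.1 + 0.2 == 0.3)  # False

# exact decimal and rational types carry no binary representation error
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))     # True
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```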
See this for further ideas:
https://davidamos.dev/the-right-way-to-compare-floats-in-python/#:~:text=How%20To%20Compare%20Floats%20in%20Python&text=If%20abs(a%20%2D%20b),rel_tol%20keyword%20argument%20of%20math.
I am attempting to decode some data from a Shark 100 Power Meter via Modbus TCP. I have successfully pulled down the registers that I need, and am left with two raw values from the registers like so:
[17138, 59381]
From the manual, I know that I need to convert these two numbers into a 32-bit IEEE floating-point number. I also know from the manual that "The lower-addressed register is the high order half (i.e., contains the exponent)." The first number in the list shown above is the lower-addressed register.
Using Python (any library will do if needed), how would I take these two values and turn them into a 32-bit IEEE floating-point value?
I have tried to use various online converters and calculators to figure out a non-programmatic way to do this, however, anything I have tried gets me a result that is way out of bounds (I am reading volts in this case so the end result should be around 120-122 from the supplied values above).
Update for Python 3.6+ (f-strings).
I am not sure why the fill in B.Go's answer was only 2. Also, since the byte order is big-endian, I hardcoded it as such.
import struct
a = 17138
b = 59381
struct.unpack('>f', bytes.fromhex(f"{a:0>4x}" + f"{b:0>4x}"))[0]
Output: 121.45304107666016
The following code works:
import struct
a=17138
b=59381
struct.unpack('!f', bytes.fromhex('{0:02x}'.format(a) + '{0:02x}'.format(b)))
It gives
(121.45304107666016,)
Adapted from Convert hex to float and Integer to Hexadecimal Conversion in Python
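An equivalent approach that avoids building hex strings entirely (and with them any padding-width issues) is to pack the two register values as big-endian unsigned 16-bit integers and reinterpret the 4 bytes as a float:

```python
import struct

a = 17138  # lower-addressed register (high-order half)
b = 59381  # higher-addressed register (low-order half)

# pack both registers as big-endian unsigned shorts, then unpack as float32
value = struct.unpack('>f', struct.pack('>HH', a, b))[0]
print(value)  # 121.45304107666016
```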
I read in the comments that @Sanju had posted this link: https://github.com/riptideio/pymodbus/blob/master/examples/common/modbus_payload.py
For anyone using pymodbus, the built-in BinaryPayloadDecoder is useful. It's very easy to pass it a result.registers, as shown in the example. It also has logging integrated, which can help debug why a conversion isn't working (e.g., wrong endianness).
As such, I made a working example for this question (using pymodbus==2.3.0):
from pymodbus.constants import Endian
from pymodbus.payload import BinaryPayloadDecoder
a = 17138
b = 59381
registers = [a, b]
decoder = BinaryPayloadDecoder.fromRegisters(registers, byteorder=Endian.Big)
decoder.decode_32bit_float() # type: float
Output: 121.45304107666016
I used the netCDF4 Python library to read a netCDF variable; list(variable) returns the correct decimal precision (inspected in the PyCharm IDE). However, when I get an element by index, e.g. variable[0], it returns a rounded value (e.g. 5449865.55794), while I need 5449865.55793999997.
How can I iterate this list with the correct decimal precision?
Some basic code
from netCDF4 import Dataset
nc_dataset = Dataset(self.get_file().get_filepath(), "r")
variable_name = "E"
# netCDF file contains a few variables (axis dimensions)
variable = nc_dataset.variables[variable_name]
variable is not a list but a netCDF Variable object; however, list(variable) or variable[index] returns the element values of the axis dimension.
The decimals you are chasing are bogus. The difference is only in the way these numbers are represented on your screen, not in how they are stored in your computer.
Try the following to convince yourself
a = 5449865.55793999997
a
# prints 5449865.55794
The difference between the two numbers, if we take them literally, is 3×10^-11. The smallest difference a 64-bit floating-point variable can resolve at the magnitude of a is about 9×10^-10, more than an order of magnitude larger. So your computer cannot tell these two decimal numbers apart.
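This can be checked directly with math.ulp (Python 3.9+), which gives the spacing between adjacent doubles at a given magnitude:

```python
import math

a = 5449865.55794
b = 5449865.55793999997

print(a == b)       # True: both literals round to the same double
print(math.ulp(a))  # ~9.3e-10, far larger than the 3e-11 difference
```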
But look at the bright side. Your data aren't corrupted by some mysterious process.
Hope this suits your needs:
import decimal
print(decimal.Decimal('5449865.55793999997'))
In my Python program, I am loading a lot of floating-point numbers for later use — 100 million numbers or more — and I am running into RAM limits. Since the numbers I am saving do not need high precision (3-4 digits would be more than enough) and are usually small (in the range -1000..1000), I do not need the precision provided by a 64-bit float.
Is there a possibility to save a floating number using less memory (maybe 8 or 16 bit)?
Thank you!
I would use the types in the numpy library, which provides the following types of interest:
float_
float16
float32
float64
So, if you wanted a 16-bit floating-point number (1 sign bit, 5 exponent bits, and 10 mantissa bits), you could use the following:
import numpy as np
x = np.float16(10.0)
See also, data types in NumPy
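Note that the savings only materialize when the values are stored in a numpy array rather than as individual Python objects. A sketch comparing memory for one million values (the nbytes figures assume the standard 8-byte and 2-byte element sizes):

```python
import numpy as np

n = 1_000_000
xs64 = np.linspace(-1000.0, 1000.0, n)  # default float64: 8 bytes per element
xs16 = xs64.astype(np.float16)          # half precision: 2 bytes per element

print(xs64.nbytes)  # 8000000
print(xs16.nbytes)  # 2000000
```

float16 keeps roughly 3 significant decimal digits and covers ±65504, which fits the stated -1000..1000 range and precision needs.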
Alternatively, pack them into arrays of 32-bit float values using the struct module's f format.
I have a question about plotting. I want to plot some data between the limits:
3825229325678980.0786812569752124806963380417361932
and
3825229325678980.078681262584097479512892231994772
but I get the following error:
Attempting to set identical bottom==top results
in singular transformations; automatically expanding.
bottom=3.82522932568e+15, top=3.82522932568e+15
How should I increase the decimal points here to solve the problem?
The difference between your min and max values is smaller than the relative precision (machine epsilon) of a double (~2e-16).
Basically, using an 8-byte (64-bit) floating-point representation you cannot distinguish between the two numbers.
I suggest removing the integer digits from your data and representing only the fractional part. The integer part is just a big constant that you can always add back later.
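A quick demonstration of why the limits collapse, and of the subtract-the-offset idea (using the two limits from the question):

```python
# the two plot limits from the question collapse to the same double
bottom = 3825229325678980.0786812569752124806963380417361932
top = 3825229325678980.078681262584097479512892231994772
print(bottom == top)  # True: indistinguishable at a magnitude of ~3.8e15

# subtract the common integer part and work with the fractional offsets
frac_bottom = 0.0786812569752124806963380417361932
frac_top = 0.078681262584097479512892231994772
print(frac_bottom == frac_top)  # False: now easily resolvable
```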
It might be easiest to scale your data to provide a range that looks less like zero.