How do I print the entire number in Python from the describe() function?

I am doing some statistical work using Python's pandas, and I have the following code to print out the data description (count, mean, median, etc.):
data=pandas.read_csv(input_file)
print(data.describe())
But my data is pretty big (around 4 million rows) and each row holds very small values. So inevitably the count is big and the mean is pretty small, and Python prints them in scientific notation.
I just want these numbers printed out in full, for ease of reading and understanding; for example, 4393476 is better than 4.393476e+06. I have googled around and the most relevant thing I can find is Display a float with two decimal places in Python and some other similar posts. But those only work if I already have the numbers in a variable, which is not my case: the numbers are created by the describe() function, so I don't know in advance what numbers I will get.
Sorry if this seems like a very basic question; I am still new to Python. Any response is appreciated. Thanks.

Edit
I checked the docs and you should probably use the pandas.set_option API to do this:
In [13]: df
Out[13]:
a b c
0 4.405544e+08 1.425305e+08 6.387200e+08
1 8.792502e+08 7.135909e+08 4.652605e+07
2 5.074937e+08 3.008761e+08 1.781351e+08
3 1.188494e+07 7.926714e+08 9.485948e+08
4 6.071372e+08 3.236949e+08 4.464244e+08
5 1.744240e+08 4.062852e+08 4.456160e+08
6 7.622656e+07 9.790510e+08 7.587101e+08
7 8.762620e+08 1.298574e+08 4.487193e+08
8 6.262644e+08 4.648143e+08 5.947500e+08
9 5.951188e+08 9.744804e+08 8.572475e+08
In [14]: pd.set_option('float_format', '{:f}'.format)
In [15]: df
Out[15]:
a b c
0 440554429.333866 142530512.999182 638719977.824965
1 879250168.522411 713590875.479215 46526045.819487
2 507493741.709532 300876106.387427 178135140.583541
3 11884941.851962 792671390.499431 948594814.816647
4 607137206.305609 323694879.619369 446424361.522071
5 174424035.448168 406285189.907148 445616045.754137
6 76226556.685384 979050957.963583 758710090.127867
7 876261954.607558 129857447.076183 448719292.453509
8 626264394.999419 464814260.796770 594750038.747595
9 595118819.308896 974480400.272515 857247528.610996
In [16]: df.describe()
Out[16]:
a b c
count 10.000000 10.000000 10.000000
mean 479461624.877280 522785202.100082 536344333.626082
std 306428177.277935 320806568.078629 284507176.411675
min 11884941.851962 129857447.076183 46526045.819487
25% 240956633.919592 306580799.695412 445818124.696121
50% 551306280.509214 435549725.351959 521734665.600552
75% 621482597.825966 772901261.744377 728712562.052142
max 879250168.522411 979050957.963583 948594814.816647
End of edit
Suppose you have the following DataFrame:
In [7]: df
Out[7]:
a b c
0 4.405544e+08 1.425305e+08 6.387200e+08
1 8.792502e+08 7.135909e+08 4.652605e+07
2 5.074937e+08 3.008761e+08 1.781351e+08
3 1.188494e+07 7.926714e+08 9.485948e+08
4 6.071372e+08 3.236949e+08 4.464244e+08
5 1.744240e+08 4.062852e+08 4.456160e+08
6 7.622656e+07 9.790510e+08 7.587101e+08
7 8.762620e+08 1.298574e+08 4.487193e+08
8 6.262644e+08 4.648143e+08 5.947500e+08
9 5.951188e+08 9.744804e+08 8.572475e+08
In [8]: df.describe()
Out[8]:
a b c
count 1.000000e+01 1.000000e+01 1.000000e+01
mean 4.794616e+08 5.227852e+08 5.363443e+08
std 3.064282e+08 3.208066e+08 2.845072e+08
min 1.188494e+07 1.298574e+08 4.652605e+07
25% 2.409566e+08 3.065808e+08 4.458181e+08
50% 5.513063e+08 4.355497e+08 5.217347e+08
75% 6.214826e+08 7.729013e+08 7.287126e+08
max 8.792502e+08 9.790510e+08 9.485948e+08
You need to fiddle with the pandas.options.display.float_format attribute. Note that in my code I've used import pandas as pd. A quick fix is something like:
In [29]: pd.options.display.float_format = "{:.2f}".format
In [10]: df
Out[10]:
a b c
0 440554429.33 142530513.00 638719977.82
1 879250168.52 713590875.48 46526045.82
2 507493741.71 300876106.39 178135140.58
3 11884941.85 792671390.50 948594814.82
4 607137206.31 323694879.62 446424361.52
5 174424035.45 406285189.91 445616045.75
6 76226556.69 979050957.96 758710090.13
7 876261954.61 129857447.08 448719292.45
8 626264395.00 464814260.80 594750038.75
9 595118819.31 974480400.27 857247528.61
In [11]: df.describe()
Out[11]:
a b c
count 10.00 10.00 10.00
mean 479461624.88 522785202.10 536344333.63
std 306428177.28 320806568.08 284507176.41
min 11884941.85 129857447.08 46526045.82
25% 240956633.92 306580799.70 445818124.70
50% 551306280.51 435549725.35 521734665.60
75% 621482597.83 772901261.74 728712562.05
max 879250168.52 979050957.96 948594814.82
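Putting this together for the original question, a minimal sketch (the CSV path is a placeholder): set the option once before printing the description, and reset it afterwards if you want the default back:
import pandas as pd
pd.set_option('display.float_format', '{:.2f}'.format)  # or pd.options.display.float_format = ...
data = pd.read_csv('input_file.csv')  # placeholder path
print(data.describe())  # prints e.g. 4393476.00 instead of 4.393476e+06
pd.reset_option('display.float_format')  # restore the default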

import numpy as np
import pandas as pd
np.random.seed(2016)
N = 4393476
df = pd.DataFrame(np.random.uniform(1e-4, 0.1, size=(N,3)), columns=list('ABC'))
desc = df.describe()
desc.loc['count'] = desc.loc['count'].astype(int).astype(str)
desc.iloc[1:] = desc.iloc[1:].applymap('{:.6f}'.format)
print(desc)
yields
A B C
count 4393476 4393476 4393476
mean 0.050039 0.050056 0.050057
std 0.028834 0.028836 0.028849
min 0.000100 0.000100 0.000100
25% 0.025076 0.025081 0.025065
50% 0.050047 0.050050 0.050037
75% 0.074987 0.075027 0.075055
max 0.100000 0.100000 0.100000
Under the hood, DataFrames are organized in columns. The values in a column can only have one data type (the column's dtype).
The DataFrame returned by df.describe() has columns of floating-point dtype:
In [116]: df.describe().info()
<class 'pandas.core.frame.DataFrame'>
Index: 8 entries, count to max
Data columns (total 3 columns):
A 8 non-null float64
B 8 non-null float64
C 8 non-null float64
dtypes: float64(3)
memory usage: 256.0+ bytes
DataFrames do not allow one row to hold integers while the other rows hold floats.
However, if you convert the contents of the DataFrame to strings, you get full control over how the values are displayed, since all the values are then just strings.
Thus, to create a DataFrame in the desired format, you could use
desc.loc['count'] = desc.loc['count'].astype(int).astype(str)
to convert the count row to integers (by calling astype(int)), and then convert the integers to strings (by calling astype(str)). Then
desc.iloc[1:] = desc.iloc[1:].applymap('{:.6f}'.format)
converts the rest of the floats to strings using the str.format method to format the floats to 6 digits after the decimal point.
Alternatively, you could use
import numpy as np
import pandas as pd
np.random.seed(2016)
N = 4393476
df = pd.DataFrame(np.random.uniform(1e-4, 0.1, size=(N,3)), columns=list('ABC'))
desc = df.describe().T
desc['count'] = desc['count'].astype(int)
print(desc)
which yields
count mean std min 25% 50% 75% max
A 4393476 0.050039 0.028834 0.0001 0.025076 0.050047 0.074987 0.1
B 4393476 0.050056 0.028836 0.0001 0.025081 0.050050 0.075027 0.1
C 4393476 0.050057 0.028849 0.0001 0.025065 0.050037 0.075055 0.1
By transposing the desc DataFrame, the counts end up in their own column,
so the problem can be solved by converting that column's dtype to int.
One advantage of doing it this way is that the values in desc remain numerical,
so further calculations based on them can still be done.
I think this solution is preferable, provided that the transposed format is acceptable.
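If you prefer to keep desc fully numeric and only control the formatting at print time, DataFrame.to_string accepts a float_format callable that applies to float columns only, so the integer count column keeps its plain rendering. A minimal sketch along the lines of the example above (with a smaller N):
import numpy as np
import pandas as pd
np.random.seed(2016)
df = pd.DataFrame(np.random.uniform(1e-4, 0.1, size=(1000, 3)), columns=list('ABC'))
desc = df.describe().T
desc['count'] = desc['count'].astype(int)  # count stays numeric, as an int column
print(desc.to_string(float_format='{:.6f}'.format))  # floats get 6 decimals; count prints plain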

Related

Understanding the behavior of sqrt: giving different results when written differently

I have a pandas series that contains the following numbers:
0 -1.309176
1 -1.226239
2 -1.339079
3 -1.298509
...
I'm trying to calculate the square root of each number in the series.
When I try the whole series:
s**0.5
>>>
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
..
10778 NaN
but if I type the number directly, it works:
-1.309176**0.5
I also tried to slice the numbers from the series:
b1[0]**0.5
>>>
nan
So I'm trying to understand why it works when I write the number directly but doesn't work when I use the series.
*The values are float type:
s.dtype
>>>dtype('float64')
s.to_frame().info()
>>>
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10783 entries, 0 to 10782
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 B1 10783 non-null float64
dtypes: float64(1)
memory usage: 84.4 KB
You can't take the square root of a negative number (without venturing into complex numbers).
>>> np.sqrt(-1.30)
<stdin>:1: RuntimeWarning: invalid value encountered in sqrt
nan
When you write -1.309176**0.5, you're actually computing -(1.309176 ** 0.5), which is valid.
This has to do with operator precedence in Python: ** binds more tightly than the unary - operator.
The square root of a negative number should be a complex number, but when you compute -1.309176**0.5, Python first computes 1.309176**0.5 and then negates it, because ** has higher precedence than unary -.
>>> -1.309176**0.5
-1.144192291531454
>>> (-1.309176)**0.5
(7.006157137165352e-17+1.144192291531454j)
The numbers in your series are already negative; you are not applying the unary - operation to them, so their square roots should be complex numbers, which the Series shows as NaN because its dtype is float.
>>> s = pd.Series([-1.30, -1.22])
>>> s
0 -1.30
1 -1.22
dtype: float64
Square root of this series gives nan.
>>> s**0.5
0 NaN
1 NaN
dtype: float64
Change the dtype to complex128 (the old np.complex alias has since been removed from NumPy, so use np.complex128 or the builtin complex):
>>> s = s.astype(np.complex128)
>>> s
0 -1.300000+0.000000j
1 -1.220000+0.000000j
dtype: complex128
Now you get the square root of s.
>>> s**0.5
0 0.000000+1.140175j
1 0.000000+1.104536j
dtype: complex128
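As an alternative to casting the Series yourself, NumPy's emath module switches to complex output automatically when the input contains negative values; a small sketch:
import numpy as np
import pandas as pd
s = pd.Series([-1.309176, -1.226239])
roots = np.emath.sqrt(s.to_numpy())  # complex array, e.g. 1.144192j for -1.309176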

Pandas not changing dtype when using describe()

This is probably related to my lack of understanding about programming.
If i do:
data = {'A': [10.0, 9.10, 8.93, 9.5],
        'B': [3.0, 7.45, 5.6, 10.3],
        'C': [5.32, 4.30, 8.0, 9.8]}  # renamed from dict, which shadows the builtin
df = pd.DataFrame(data)
df['A'].describe()
Out:
count 4.000000
mean 9.382500
std 0.475981
min 8.930000
25% 9.057500
50% 9.300000
75% 9.625000
max 10.000000
Name: A, dtype: float64
If I try to change to integer type:
df['A'] = df['A'].round(0).astype('int32')
df['A'].describe()
Out:
count 4.00000
mean 9.50000
std 0.57735
min 9.00000
25% 9.00000
50% 9.50000
75% 10.00000
max 10.00000
Name: A, dtype: float64
It seems it hasn't changed. However:
df['A'].describe
Out:
<bound method NDFrame.describe of
0 10
1 9
2 9
3 10
Name: A, dtype: int32>
The latter result can be confirmed by using df.dtypes.
What is happening here?
Thanks in advance!
describe is a method of the DataFrame object. To call a method, you have to use parentheses. Without them, you just get the method object itself, not the result.
Also, if you want the describe output in int format, you can write (with the original float column):
df['A'].describe().astype(int)
result:
count 4
mean 9
std 0
min 8
25% 9
50% 9
75% 9
max 10
Name: A, dtype: int64
The describe() method generates descriptive statistics; you can refer to the describe method docs.
To check a data type, you should use dtypes.
When you write describe without parentheses, it does not return a Series or DataFrame; it is just a method object that happens to show the underlying data in its repr. You should use () to actually retrieve the statistics.
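To make the distinction concrete: the column's dtype really does change, but describe() always computes floating-point statistics, so its result is float64 regardless of the input dtype. A quick check:
import pandas as pd
df = pd.DataFrame({'A': [10.0, 9.10, 8.93, 9.5]})
df['A'] = df['A'].round(0).astype('int32')
print(df['A'].dtype)             # int32  -- the column itself changed
print(df['A'].describe().dtype)  # float64 -- describe() returns float statistics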

How can I get the row with a min for a certain column in a Pandas DataFrame?

My DataFrame is:
model epochs loss
0 <keras.engine.sequential.Sequential object at ... 1 0.0286867
1 <keras.engine.sequential.Sequential object at ... 1 0.0210836
2 <keras.engine.sequential.Sequential object at ... 1 0.0250625
3 <keras.engine.sequential.Sequential object at ... 1 0.109146
4 <keras.engine.sequential.Sequential object at ... 1 0.253897
I want to get the row with the lowest loss.
I'm trying self.models['loss'].idxmin(), but that gives an error:
TypeError: reduction operation 'argmin' not allowed for this dtype
There are a number of ways to do exactly that:
Consider this example dataframe
df
level beta
0 0 0.338
1 1 0.294
2 2 0.308
3 3 0.257
4 4 0.295
5 5 0.289
6 6 0.269
7 7 0.259
8 8 0.288
9 9 0.302
1) Using pandas conditionals
df[df.beta == df.beta.min()] #returns pandas DataFrame object
level beta
3 3 0.257
2) Using sort_values and taking the first (0th) row
df.sort_values(by="beta").iloc[0] #returns pandas Series object
level 3
beta 0.257
Name: 3, dtype: object
These are the most readable methods, I guess.
Edit:
I made a graph to visualize the time taken by the two methods as the number of rows in the dataframe grows. Although it largely depends on the dataframe in question, sort_values is considerably faster than the conditional approach when the number of rows is greater than 1000 or so.
self.models[self.models['loss'] == self.models['loss'].min()]
will give you the row with the lowest loss (as long as self.models is your df). Add .index to get the index number.
Hope this works
import pandas as pd
df = pd.DataFrame({'epochs':[1,1,1,1,1],'loss':[0.0286867,0.0286867,0.0210836,0.0109146,0.0109146]})
out = df.loc[df['loss'].idxmin()]
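None of the answers addresses the TypeError itself: reduction operation 'argmin' not allowed for this dtype usually means the loss column has object dtype (for example, the values arrived as strings). A sketch of that fix, assuming this is the cause:
import pandas as pd
df = pd.DataFrame({'epochs': [1, 1, 1],
                   'loss': ['0.0286867', '0.0210836', '0.0250625']})  # object dtype
df['loss'] = pd.to_numeric(df['loss'])  # cast to float64 so idxmin works
best_row = df.loc[df['loss'].idxmin()]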

using pandas to store experimental data

I am using a pandas DataFrame to store data from a series of experiments so that I can easily make cuts across various parameter values for the next stage of analysis. I have a few questions about how to do this most effectively.
Currently I create my DataFrame from a dictionary of lists. There is typically a few thousand rows in the DataFrame. One of the columns is a device_id which indicates which of the 20 devices that the experimental data pertains to. Other columns include info about the experimental setup, like temperature, power, etc. and measurement results, like resonant_frequency, bandwidth, etc.
So far, I've been using this DataFrame rather "naively," that is, I use it sort of like a numpy record array, and so I don't think I'm fully taking advantage of the power of the DataFrame. The following are some examples of what I'm trying to achieve.
First I want to create a new column which is the maximum resonant_frequency measured for a given device over all experiments: call it max_freq. I do this like so:
df['max_freq'] = np.zeros(df.shape[0])  # create the new column
for index in np.unique(df.device_index):
    group = df[df.device_index == index]
    max_freq = group.resonant_frequency.max()
    df.max_freq[df.device_index == index] = max_freq
Second One of my columns contains 1-D numpy arrays of a noise measurement. I want to compute a statistic on this 1-D array and put it into a new column. Currently I do this as:
noise_est = []
for vals, freq in zip(df.noise, df.resonant_freq):
    noise_est.append(vals.std() / (1e6 * freq))
df['noise_est'] = noise_est
Third Related the the previous one: Is it possible to iterate through rows of a DataFrame where the resulting object has attribute access to the columns? I.e. something like:
for row in df:
    row.noise_est = row.noise.std() / (1e6 * row.resonant_freq)
I know that this instead iterates through columns. I also know there is an iterrows method, but this provides a Series which doesn't allow attribute access.
I think this should get me started for now, thanks for your time!
edited to add df.info(), df.head() as requested:
df.info() # df.head() looks the same, but 5 non-null values
<class 'pandas.core.frame.DataFrame'>
Int64Index: 9620 entries, 0 to 9619
Data columns (total 83 columns):
A_mag 9620 non-null values
A_mag_err 9620 non-null values
A_phase 9620 non-null values
A_phase_err 9620 non-null values
....
total_dac_atten 9600 non-null values
round_temp 9620 non-null values
dtypes: bool(1), complex128(4), float64(39), int64(12), object(27)
I trimmed this down because it's 83 columns, and I don't think this adds much to the example code snippets I shared, but have posted this bit in case it's helpful.
Create the data. Note that storing a NumPy array INSIDE a frame is generally not a good idea, as it's pretty inefficient.
In [84]: df = pd.DataFrame(dict(A = np.random.randn(20), B = np.random.randint(0,3,size=20), C = [ np.random.randn(5) for i in range(20) ]))
In [85]: df
Out[85]:
A B C
0 -0.493730 1 [-0.8790126045, -1.87366673214, 0.76227570837,...
1 -0.105616 2 [0.612075134682, -1.64452324091, 0.89799758012...
2 1.487656 1 [-0.379505426885, 1.17611806172, 0.88321152932...
3 0.351694 2 [0.132071242514, -1.54701609348, 1.29813626801...
4 -0.330538 2 [0.395383858214, 0.874419943107, 1.21124463921...
5 0.360041 0 [0.439133138619, -1.98615530266, 0.55971723554...
6 -0.505198 2 [-0.770830608002, 0.243255072359, -1.099514797...
7 0.631488 1 [0.676233200011, 0.622926691271, -0.1110029751...
8 1.292087 1 [1.77633938532, -0.141683361957, 0.46972952154...
9 0.641987 0 [1.24802709304, 0.477527098462, -0.08751885691...
10 0.732596 2 [0.475771915314, 1.24219702097, -0.54304296895...
11 0.987054 1 [-0.879620967644, 0.657193159735, -0.093519342...
12 -1.409455 1 [1.04404325784, -0.310849157425, 0.60610368623...
13 1.063830 1 [-0.760467872808, 1.33659372288, -0.9343171844...
14 0.533835 1 [0.985463451645, 1.76471927635, -0.59160181340...
15 0.062441 1 [-0.340170594584, 1.53196133354, 0.42397775978...
16 1.458491 2 [-1.79810090668, -1.82865815817, 1.08140831482...
17 -0.886119 2 [0.281341969073, -1.3516126536, 0.775326038501...
18 0.662076 1 [1.03992509625, 1.17661862104, -0.562683934951...
19 1.216878 2 [0.0746149754367, 0.156470450639, -0.477269150...
In [86]: df.dtypes
Out[86]:
A float64
B int64
C object
dtype: object
Apply an operation to the value of a series (2 and 3)
In [88]: df['C_std'] = df['C'].apply(np.std)
Get the max of each group and return the value (1)
In [91]: df['A_max_by_group'] = df.groupby('B')['A'].transform(lambda x: x.max())
In [92]: df
Out[92]:
A B C A_max_by_group C_std
0 -0.493730 1 [-0.8790126045, -1.87366673214, 0.76227570837,... 1.487656 1.058323
1 -0.105616 2 [0.612075134682, -1.64452324091, 0.89799758012... 1.458491 0.987980
2 1.487656 1 [-0.379505426885, 1.17611806172, 0.88321152932... 1.487656 1.264522
3 0.351694 2 [0.132071242514, -1.54701609348, 1.29813626801... 1.458491 1.150026
4 -0.330538 2 [0.395383858214, 0.874419943107, 1.21124463921... 1.458491 1.045408
5 0.360041 0 [0.439133138619, -1.98615530266, 0.55971723554... 0.641987 1.355853
6 -0.505198 2 [-0.770830608002, 0.243255072359, -1.099514797... 1.458491 0.443872
7 0.631488 1 [0.676233200011, 0.622926691271, -0.1110029751... 1.487656 0.432342
8 1.292087 1 [1.77633938532, -0.141683361957, 0.46972952154... 1.487656 1.021847
9 0.641987 0 [1.24802709304, 0.477527098462, -0.08751885691... 0.641987 0.676835
10 0.732596 2 [0.475771915314, 1.24219702097, -0.54304296895... 1.458491 0.857441
11 0.987054 1 [-0.879620967644, 0.657193159735, -0.093519342... 1.487656 0.628655
12 -1.409455 1 [1.04404325784, -0.310849157425, 0.60610368623... 1.487656 0.835633
13 1.063830 1 [-0.760467872808, 1.33659372288, -0.9343171844... 1.487656 0.936746
14 0.533835 1 [0.985463451645, 1.76471927635, -0.59160181340... 1.487656 0.991327
15 0.062441 1 [-0.340170594584, 1.53196133354, 0.42397775978... 1.487656 0.700299
16 1.458491 2 [-1.79810090668, -1.82865815817, 1.08140831482... 1.458491 1.649771
17 -0.886119 2 [0.281341969073, -1.3516126536, 0.775326038501... 1.458491 0.910355
18 0.662076 1 [1.03992509625, 1.17661862104, -0.562683934951... 1.487656 0.666237
19 1.216878 2 [0.0746149754367, 0.156470450639, -0.477269150... 1.458491 0.275065
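For the third part of the question, itertuples (rather than iterrows) yields namedtuples, so attribute access like row.noise works; the tuples are read-only snapshots, though, so collect the results and assign them back as a column. A minimal sketch with made-up column names matching the question:
import numpy as np
import pandas as pd
df = pd.DataFrame({'resonant_freq': [1.1, 2.2, 3.3],
                   'noise': [np.random.randn(5) for _ in range(3)]})
# namedtuple rows give attribute access; build the new column in one pass
df['noise_est'] = [row.noise.std() / (1e6 * row.resonant_freq)
                   for row in df.itertuples()]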

Pandas dataframe total row

I have a dataframe, something like:
foo bar qux
0 a 1 3.14
1 b 3 2.72
2 c 2 1.62
3 d 9 1.41
4 e 3 0.58
and I would like to add a 'total' row to the end of dataframe:
foo bar qux
0 a 1 3.14
1 b 3 2.72
2 c 2 1.62
3 d 9 1.41
4 e 3 0.58
5 total 18 9.47
I've tried to use the sum command but I end up with a Series, which although I can convert back to a Dataframe, doesn't maintain the data types:
tot_row = pd.DataFrame(df.sum()).T
tot_row['foo'] = 'tot'
tot_row.dtypes:
foo object
bar object
qux object
I would like to maintain the data types from the original data frame as I need to apply other operations to the total row, something like:
baz = 2*tot_row['qux'] + 3*tot_row['bar']
Update June 2022
DataFrame.append is now deprecated. You could use pd.concat instead, but it's probably easier to use df.loc['Total'] = df.sum(numeric_only=True), as Kevin Zhu commented. Or, better still, don't modify the data frame in place and keep your data separate from your summary statistics!
Append a totals row with
df.append(df.sum(numeric_only=True), ignore_index=True)
The conversion is necessary only if you have a column of strings or objects.
It's a bit of a fragile solution, so I'd recommend sticking to operations on the dataframe, though. E.g.:
baz = 2*df['qux'].sum() + 3*df['bar'].sum()
df.loc["Total"] = df.sum()
works for me and I find it easier to remember. Am I missing something?
It probably wasn't possible in earlier versions.
I'd actually like to add the total row only temporarily though.
Adding it permanently is good for display but makes it a hassle in further calculations.
Just found
df.append(df.sum().rename('Total'))
This prints what I want in a Jupyter notebook and appears to leave the df itself untouched.
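On pandas 2.x, where DataFrame.append has been removed, the same leave-the-frame-untouched display can be sketched with pd.concat (column names here are from the question's example):
import pandas as pd
df = pd.DataFrame({'bar': [1, 3, 2, 9, 3], 'qux': [3.14, 2.72, 1.62, 1.41, 0.58]})
# pd.concat builds a new frame for display; df itself stays untouched
pd.concat([df, df.sum().to_frame('Total').T])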
New Method
To get both row and column total:
import numpy as np
import pandas as pd
df = pd.DataFrame({'a': [10,20],'b':[100,200],'c': ['a','b']})
df.loc['Column_Total']= df.sum(numeric_only=True, axis=0)
df.loc[:,'Row_Total'] = df.sum(numeric_only=True, axis=1)
print(df)
a b c Row_Total
0 10.0 100.0 a 110.0
1 20.0 200.0 b 220.0
Column_Total 30.0 300.0 NaN 330.0
Use DataFrame.pivot_table with margins=True:
import pandas as pd
data = [('a',1,3.14),('b',3,2.72),('c',2,1.62),('d',9,1.41),('e',3,.58)]
df = pd.DataFrame(data, columns=('foo', 'bar', 'qux'))
Original df:
foo bar qux
0 a 1 3.14
1 b 3 2.72
2 c 2 1.62
3 d 9 1.41
4 e 3 0.58
Since pivot_table requires some sort of grouping (without the index argument, it'll raise a ValueError: No group keys passed!), and your original index is vacuous, we'll use the foo column:
df.pivot_table(index='foo',
               margins=True,
               margins_name='total',  # defaults to 'All'
               aggfunc=sum)
Voilà!
bar qux
foo
a 1 3.14
b 3 2.72
c 2 1.62
d 9 1.41
e 3 0.58
total 18 9.47
Alternative way (verified on Pandas 0.18.1):
import numpy as np
total = df.apply(np.sum)
total['foo'] = 'tot'
df.append(pd.DataFrame(total.values, index=total.keys()).T, ignore_index=True)
Result:
foo bar qux
0 a 1 3.14
1 b 3 2.72
2 c 2 1.62
3 d 9 1.41
4 e 3 0.58
5 tot 18 9.47
Building on JMZ answer
df.append(df.sum(numeric_only=True), ignore_index=True)
if you want to continue using your current index you can name the sum series using .rename() as follows:
df.append(df.sum().rename('Total'))
This will add a row at the bottom of the table.
This is the way I do it: transposing and using the assign method in combination with a lambda function. It makes it simple for me.
df.T.assign(GrandTotal = lambda x: x.sum(axis=1)).T
Building on answer from Matthias Kauer.
To add row total:
df.loc["Row_Total"] = df.sum()
To add column total,
df.loc[:,"Column_Total"] = df.sum(axis=1)
New method [September 2022]
TL;DR:
Just use
df.style.concat(df.agg(['sum']).style)
for a solution that won't change your dataframe, works even if you have a "sum" label in your index, and can be styled!
Explanation
In pandas 1.5.0, a new method named .style.concat() gives you the ability to display several dataframes together. This is a good way to show the total (or any other statistic), because it does not change the original dataframe, and it works even if you have an index named "sum" in your original dataframe.
For example:
import pandas as pd
df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=['A', 'B', 'C'])
df.style.concat(df.agg(['sum']).style)
and it will return a formatted table that is visible in jupyter as this:
Styling
With a little more code, you can even make the last row look different:
df.style.concat(
    df.agg(['sum']).style
    .set_properties(**{'background-color': 'yellow'})
)
to get the same table with the sum row highlighted in yellow.
See other ways to style (such as bold fonts or table lines) in the docs.
The following helped me add a column total and a row total to a dataframe.
Assume dft1 is your original dataframe; now add a column total and a row total with the following steps.
from io import StringIO
import pandas as pd
#create dataframe string
dfstr = StringIO(u"""
a;b;c
1;1;1
2;2;2
3;3;3
4;4;4
5;5;5
""")
#create dataframe dft1 from string
dft1 = pd.read_csv(dfstr, sep=";")
## add a column total to dft1
dft1['Total'] = dft1.sum(axis=1)
## add a row total to dft1 with the following steps
sum_row = dft1.sum(axis=0) #get sum_row first
dft1_sum=pd.DataFrame(data=sum_row).T #change it to a dataframe
dft1_sum=dft1_sum.reindex(columns=dft1.columns) #line up the col index to dft1
dft1_sum.index = ['row_total'] #change row index to row_total
dft1.append(dft1_sum) # append the row to dft1
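Note that DataFrame.append was removed in pandas 2.0; on current versions the last step above can be written with pd.concat instead (reusing dft1 and dft1_sum from the snippet):
dft1 = pd.concat([dft1, dft1_sum])  # pandas 2.x replacement for dft1.append(dft1_sum)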
Actually, all the proposed solutions render the original DataFrame unusable for any further analysis and can invalidate subsequent computations, which is easy to overlook and could lead to false results.
This is because you add a row to the data which pandas cannot differentiate from an additional row of data.
Example:
import pandas as pd
data = [1, 5, 6, 8, 9]
df = pd.DataFrame(data)
df
df.describe()
yields
   0
0  1
1  5
2  6
3  8
4  9
and df.describe() yields
       0
count  5
mean   5.8
std    3.11448
min    1
25%    5
50%    6
75%    8
max    9
After
df.loc['Totals']= df.sum(numeric_only=True, axis=0)
the dataframe looks like this
        0
0       1
1       5
2       6
3       8
4       9
Totals  29
This looks nice, but the new row is treated as if it was an additional data item, so df.describe will produce false results:
       0
count  6
mean   9.66667
std    9.87252
min    1
25%    5.25
50%    7
75%    8.75
max    29
So: watch out! Apply this only after doing all other analyses of the data, or work on a copy of the DataFrame!
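A minimal sketch of the copy-based approach the warning recommends: add the total to a throwaway frame used only for display, and keep all statistics on the original:
import pandas as pd
df = pd.DataFrame([1, 5, 6, 8, 9])
display_df = df.copy()                                # throwaway copy for display
display_df.loc['Totals'] = df.sum(numeric_only=True)  # total row only on the copy
print(display_df)                                     # table with the Totals row
print(df.describe())                                  # statistics from the untouched data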
When the "totals" need to be added to an index column:
totals = pd.DataFrame(df.sum(numeric_only=True)).transpose().set_index(pd.Index({"totals"}))
df.append(totals)
e.g.
(Pdb) df
count min bytes max bytes mean bytes std bytes sum bytes
row_0 837200 67412.0 368733992.0 2.518989e+07 5.122836e+07 2.108898e+13
row_1 299000 85380.0 692782132.0 2.845055e+08 2.026823e+08 8.506713e+13
row_2 837200 67412.0 379484173.0 8.706825e+07 1.071484e+08 7.289354e+13
row_3 239200 85392.0 328063972.0 9.870446e+07 1.016989e+08 2.361011e+13
row_4 59800 67292.0 383487021.0 1.841879e+08 1.567605e+08 1.101444e+13
row_5 717600 112309.0 379483824.0 9.687554e+07 1.103574e+08 6.951789e+13
row_6 119600 664144.0 358486985.0 1.611637e+08 1.171889e+08 1.927518e+13
row_7 478400 67300.0 593141462.0 2.824301e+08 1.446283e+08 1.351146e+14
row_8 358800 215002028.0 327493141.0 2.861329e+08 1.545693e+07 1.026645e+14
row_9 358800 202248016.0 321657935.0 2.684668e+08 1.865470e+07 9.632590e+13
(Pdb) totals = pd.DataFrame(df.sum(numeric_only=True)).transpose()
(Pdb) totals
count min bytes max bytes mean bytes std bytes sum bytes
0 4305600.0 418466685.0 4.132815e+09 1.774725e+09 1.025805e+09 6.365722e+14
(Pdb) totals = pd.DataFrame(df.sum(numeric_only=True)).transpose().set_index(pd.Index({"totals"}))
(Pdb) totals
count min bytes max bytes mean bytes std bytes sum bytes
totals 4305600.0 418466685.0 4.132815e+09 1.774725e+09 1.025805e+09 6.365722e+14
(Pdb) df.append(totals)
count min bytes max bytes mean bytes std bytes sum bytes
row_0 837200.0 67412.0 3.687340e+08 2.518989e+07 5.122836e+07 2.108898e+13
row_1 299000.0 85380.0 6.927821e+08 2.845055e+08 2.026823e+08 8.506713e+13
row_2 837200.0 67412.0 3.794842e+08 8.706825e+07 1.071484e+08 7.289354e+13
row_3 239200.0 85392.0 3.280640e+08 9.870446e+07 1.016989e+08 2.361011e+13
row_4 59800.0 67292.0 3.834870e+08 1.841879e+08 1.567605e+08 1.101444e+13
row_5 717600.0 112309.0 3.794838e+08 9.687554e+07 1.103574e+08 6.951789e+13
row_6 119600.0 664144.0 3.584870e+08 1.611637e+08 1.171889e+08 1.927518e+13
row_7 478400.0 67300.0 5.931415e+08 2.824301e+08 1.446283e+08 1.351146e+14
row_8 358800.0 215002028.0 3.274931e+08 2.861329e+08 1.545693e+07 1.026645e+14
row_9 358800.0 202248016.0 3.216579e+08 2.684668e+08 1.865470e+07 9.632590e+13
totals 4305600.0 418466685.0 4.132815e+09 1.774725e+09 1.025805e+09 6.365722e+14
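As an aside, pd.Index({"totals"}) builds the index from a set literal; it works for a single label, but naming the summed Series directly avoids both the set and the deprecated append. A sketch, reusing df from the example above:
totals = df.sum(numeric_only=True).to_frame("totals").T  # one-row frame labelled "totals"
pd.concat([df, totals])  # append-free equivalent of df.append(totals)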
Since I generally want to do this at the very end, to avoid breaking the integrity of the dataframe (right before printing), I created a summary_rows_cols method which returns a printable dataframe:
def summary_rows_cols(df: pd.DataFrame,
                      column_sum: bool = False,
                      column_avg: bool = False,
                      column_median: bool = False,
                      row_sum: bool = False,
                      row_avg: bool = False,
                      row_median: bool = False) -> pd.DataFrame:
    ret = df.copy()
    if column_sum: ret.loc['Sum'] = df.sum(numeric_only=True, axis=0)
    if column_avg: ret.loc['Avg'] = df.mean(numeric_only=True, axis=0)
    if column_median: ret.loc['Median'] = df.median(numeric_only=True, axis=0)
    if row_sum: ret.loc[:, 'Sum'] = df.sum(numeric_only=True, axis=1)
    if row_avg: ret.loc[:, 'Avg'] = df.mean(numeric_only=True, axis=1)
    if row_median: ret.loc[:, 'Median'] = df.median(numeric_only=True, axis=1)
    ret.fillna('-', inplace=True)
    return ret
This allows me to pass in a generic (numeric) df and get a summarized output such as:
a b c Sum Median
0 1 4 7 12 4
1 2 5 8 15 5
2 3 6 9 18 6
Sum 6 15 24 - -
from:
data = {
    'a': [1, 2, 3],
    'b': [4, 5, 6],
    'c': [7, 8, 9]
}
df = pd.DataFrame(data)
printable = summary_rows_cols(df, row_sum=True, column_sum=True, row_median=True)
