I have some measurements organized in *.csv files as follows:
m_number,value
0,0.154
1,0.785
…
55,0.578
NaN,NaN
0,1.214
1,0.742
…
So there is always a set of x measurements (x should be constant inside a single file but it's not guaranteed and I have to check this number) separated by a NaN line.
After reading the data into a dataframe, I want to reorganize it for later usage:
m_number value 1 value 2 value 3 value 4
0 0 0.154 0.214 0.229 0.234
1 1 0.785 0.742 0.714 0.771
...
55 55 0.578 0.647 0.597 0.623
Each set of measurements should be one column.
Here's a snippet of the code:
split_index = df.index[df['m_number'].isnull()]
df_sliced = pd.DataFrame()
for i, index in enumerate(split_index):
    if i == 0:
        df_sliced = df.loc[0:index - 1].copy()
    else:
        # ToDo: Rename first column to 'value 1' if more than 1 measurement
        temp = df['value'].loc[0:index - 1].copy()
        temp.reset_index(drop=True, inplace=True)
        df_sliced['value ' + str(i)] = temp
        df.drop(df.index[0:index - split_index[i - 1]], inplace=True)
The code works, but I do not like my current approach. So I'm asking if there's a better and more elegant solution for this problem.
Best,
Julz
You can use cumsum, set_index, and unstack to do this in three lines of code:
import numpy as np
import pandas as pd

# Create dummy data with 4 runs of 10 measures
df = pd.DataFrame({'m_number': np.tile(np.arange(10), 4), 'value': np.random.random(40)})
# Use the condition to find the start of each run, increment with cumsum, and
# unstack to create MultiIndex column headers
df_u = df.set_index([df['m_number'], df['m_number'].eq(0).cumsum()])[['value']].unstack()
#Flatten MultiIndex column headers
df_u.columns = [f'{i}_{j}' for i, j in df_u.columns]
#Display results
df_u
Output:
value_1 value_2 value_3 value_4
m_number
0 0.919057 0.064409 0.288592 0.742759
1 0.449587 0.867031 0.193493 0.853700
2 0.551929 0.925111 0.895273 0.117306
3 0.487501 0.893696 0.696540 0.381469
4 0.389431 0.818801 0.771516 0.489404
5 0.790619 0.478995 0.023236 0.344112
6 0.015389 0.815073 0.195856 0.628263
7 0.068860 0.483731 0.752803 0.581106
8 0.109404 0.281335 0.330910 0.909965
9 0.695120 0.538676 0.766864 0.247283
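The question also mentions needing to check that every run really has the same number of measurements. A hedged sketch of that check, reusing the same cumsum trick on the dummy data above:

```python
import numpy as np
import pandas as pd

# Same dummy data as above: 4 runs of 10 measures
df = pd.DataFrame({'m_number': np.tile(np.arange(10), 4),
                   'value': np.random.random(40)})

# Label each run by counting how often the measurement counter resets to 0
run = df['m_number'].eq(0).cumsum()
run_lengths = df.groupby(run).size()

# All runs should have the same length; complain otherwise
assert run_lengths.nunique() == 1, f"unequal run lengths: {run_lengths.tolist()}"
```

With real data containing NaN separator rows, the sizes would include the separator, but the equality check works the same way.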
I have a data frame like this, but with many more columns. I would like to multiply each two adjacent columns, put the product in a new column beside them called Sub_pro, have the total sum of all Sub_pro columns at the end in a column called F_Pro, and reduce the precision to 3 decimal places. I don't know how to get the Sub_pro columns. Below is my code.
import pandas as pd
df = pd.read_excel("C:dummy")
df['F_Pro'] = ("Result" * "Attribute").sum(axis=1)
df.round(decimals=3)
print (df)
Input
Id Result Attribute Result1 Attribute1
1 0.5621 0.56 536 0.005642
2 0.5221 0.5677 2.15 93
3 0.024564 5.23 6.489 8
4 11.564256 4.005 0.45556 5.25
5 0.6123 0.4798 0.6667 5.10
Desired output
id Result Attribute Sub_Pro Result1 Attribute1 Sub_pro1 F_Pro
1 0.5621 0.56 0.314776 536 0.005642 3.024112 3.338888
2 0.5221 0.5677 0.29639617 2.15 93 199.95 200.2463962
3 0.024564 5.23 0.12846972 6.489 8 51.912 52.04046972
4 11.564256 4.005 46.31484528 0.45556 5.25 2.39169 48.70653528
5 0.6123 0.4798 0.29378154 0.6667 5.1 3.40017 3.69395154
Because you have several columns with similar names, here is one way using filter. To see how it works: on your df, run df.filter(like='Result') and you get the columns whose names contain Result:
Result Result1
0 0.562100 536.00000
1 0.522100 2.15000
2 0.024564 6.48900
3 11.564256 0.45556
4 0.612300 0.66670
You can create an array holding the values of the Sub_pro columns:
import numpy as np
arr_sub_pro = np.round(df.filter(like='Result').values * df.filter(like='Attribute').values, 3)
and you get the Sub_pro values in arr_sub_pro:
array([[3.1500e-01, 3.0240e+00],
[2.9600e-01, 1.9995e+02],
[1.2800e-01, 5.1912e+01],
[4.6315e+01, 2.3920e+00],
[2.9400e-01, 3.4000e+00]])
Now you need to add them at the right position in the dataframe; I think a for loop is necessary:
for nb, col in zip( range(arr_sub_pro.shape[1]), df.filter(like='Attribute').columns):
df.insert(df.columns.get_loc(col)+1, 'Sub_pro{}'.format(nb), arr_sub_pro[:,nb])
Here I get the location of Attribute column number nb and insert the values from column nb of arr_sub_pro at the next position.
To add the column 'F_Pro', you can do:
df.insert(len(df.columns), 'F_Pro', arr_sub_pro.sum(axis=1))
the final df looks like:
Id Result Attribute Sub_pro0 Result1 Attribute1 Sub_pro1 \
0 1 0.562100 0.5600 0.315 536.00000 0.005642 3.024
1 2 0.522100 0.5677 0.296 2.15000 93.000000 199.950
2 3 0.024564 5.2300 0.128 6.48900 8.000000 51.912
3 4 11.564256 4.0050 46.315 0.45556 5.250000 2.392
4 5 0.612300 0.4798 0.294 0.66670 5.100000 3.400
F_Pro
0 3.339
1 200.246
2 52.040
3 48.707
4 3.694
import pandas as pd

src = "/opt/repos/pareto/test/stack/data.csv"
df = pd.read_csv(src)

def multiply(x):
    res = x.copy()
    keys_len = len(x)
    idx = 1  # skip the Id column
    while idx + 1 < keys_len:
        left = x.iloc[idx]
        right = x.iloc[idx + 1]
        new_key = "sub_prod_{}".format(idx)
        # Multiply and round to three decimal places.
        res[new_key] = round(left * right, 3)
        idx = idx + 2  # step to the next adjacent pair
    return res

res_df = df.apply(multiply, axis=1)
This solves the problem, but you then need to reorder the columns. You could also iterate over the keys instead of making a deep copy of the full row. I hope the code helps you.
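Since the products end up appended after the original columns, a hedged sketch of one way to compute them and interleave each product next to its pair (recreating the question's frame inline; the column names come from the question):

```python
import pandas as pd

df = pd.DataFrame({'Id': [1, 2],
                   'Result': [0.5621, 0.5221], 'Attribute': [0.56, 0.5677],
                   'Result1': [536.0, 2.15], 'Attribute1': [0.005642, 93.0]})

res_cols = df.filter(like='Result').columns
att_cols = df.filter(like='Attribute').columns

# Compute each adjacent product, rounded to 3 decimal places
sub_cols = []
for i, (r, a) in enumerate(zip(res_cols, att_cols)):
    name = f'Sub_pro{i}'
    df[name] = (df[r] * df[a]).round(3)
    sub_cols.append(name)
df['F_Pro'] = df[sub_cols].sum(axis=1)

# Interleave: Id, Result, Attribute, Sub_pro0, Result1, Attribute1, Sub_pro1, F_Pro
order = ['Id'] + [c for trio in zip(res_cols, att_cols, sub_cols) for c in trio] + ['F_Pro']
df = df[order]
```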
Here's one way using NumPy and a dictionary comprehension:
# extract NumPy array for relevant columns
import numpy as np
A = df.iloc[:, 1:].values
n = A.shape[1] // 2
# calculate products and feed to pd.DataFrame
prods = pd.DataFrame({'Sub_Pro_'+str(i): np.prod(A[:, 2*i: 2*(i+1)], axis=1) \
for i in range(n)})
# calculate sum of product rows
prods['F_Pro'] = prods.sum(axis=1)
# join to original dataframe
df = df.join(prods)
print(df)
Id Result Attribute Result1 Attribute1 Sub_Pro_0 Sub_Pro_1 \
0 1 0.562100 0.5600 536.00000 0.005642 0.314776 3.024112
1 2 0.522100 0.5677 2.15000 93.000000 0.296396 199.950000
2 3 0.024564 5.2300 6.48900 8.000000 0.128470 51.912000
3 4 11.564256 4.0050 0.45556 5.250000 46.314845 2.391690
4 5 0.612300 0.4798 0.66670 5.100000 0.293782 3.400170
F_Pro
0 3.338888
1 200.246396
2 52.040470
3 48.706535
4 3.693952
I have the df given below, which I am splitting column-wise.
>>> df
ID Started
0 NaN 20.06.2017 13:19:04
1 NaN 10.04.2018 04:48:32
2 WBTS-1509 06.11.2017 10:28:14
3 WBTS-1509 03.03.2018 10:12:29
4 WBTS-1117 07.03.2018 17:04:28
df['Started'].apply(lambda x: x.split(':')[0])
df['ID'].apply(lambda x: x.split('-')[1])
I would like to set 3 list variables
col_names = ['ID' , 'Started']
splitby = ['-' , ':']
index_after_split = [1 , 0]
and do the splitting in one line (avoiding a loop), using inplace=True.
Please help me do the same.
Thanks
I think a loop is necessary here, with str.split and indexing by str[]:
for a, b, c in zip(col_names, splitby, index_after_split):
    df[a] = df[a].str.split(b).str[c]
print(df)
ID Started
0 NaN 20.06.2017 13
1 NaN 10.04.2018 04
2 1509 06.11.2017 10
3 1509 03.03.2018 10
4 1117 07.03.2018 17
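One reason to prefer the .str accessor over the question's apply(lambda x: x.split(...)): it propagates NaN instead of raising. A small self-contained sketch reproducing the question's frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': [np.nan, 'WBTS-1509'],
                   'Started': ['20.06.2017 13:19:04', '06.11.2017 10:28:14']})

col_names = ['ID', 'Started']
splitby = ['-', ':']
index_after_split = [1, 0]

# .str.split passes NaN through unchanged, unlike the plain
# apply(lambda x: x.split(...)) in the question, which raises
# AttributeError on a float NaN
for a, b, c in zip(col_names, splitby, index_after_split):
    df[a] = df[a].str.split(b).str[c]
```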
I have a CSV file "trainning_data.csv" that contains 7 columns of data, but I only read the last one.
the format of csv file is as below:
A B C D E F Last
1 1.5 14.2 21.5 50.1 25.5 14.2 25.2
2 ... ... ... ... ... ... ...
3
.
.
.
I read the data file using pandas and then visualized it:
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('trainning_data.csv')
saved_column = df['Last']
plt.plot(saved_column, 'o')
plt.show()
Then I removed the outliers:
import numpy as np

Q1 = np.percentile(saved_column, 25)
Q3 = np.percentile(saved_column, 75)
limits = [Q1 - 1.5*(Q3 - Q1), Q3 + 1.5*(Q3 - Q1)]  # renamed from `range` to avoid shadowing the builtin
id_max = np.where(saved_column > limits[1])
id_min = np.where(saved_column < limits[0])
position = np.concatenate( (id_max, id_min), axis=1)
saved_column = np.array(saved_column, dtype = 'double')
new_column = np.delete(saved_column, position.T)
len(new_column)
plt.plot(new_column, 'o')
plt.xlim(0, 1000)
plt.ylim(0,500)
plt.show()
After removing all the outliers, I want to rebuild the data set. I tried:
import csv

fileHeader = ["Last"]
myFile = open('Training_Data_New.csv', 'w')
writer = csv.writer(myFile)
writer.writerow(fileHeader)
writer.writerows(new_column)
but it throws an error: iterable expected, not numpy.float64.
Another problem is that I also need to delete all the data related to the positions of the outliers I found. How do I fix this?
You can create a DataFrame from the NumPy array and write it to a file with to_csv:
pd.DataFrame({'Last':new_column}).to_csv('Training_Data_New.csv', index=False)
Pandas solution for removing the outliers:
I think you can use quantile and filter by between with boolean indexing; last, write the DataFrame to a file with to_csv:
df = pd.DataFrame({'Last':[1,2,3,5,8,10,45,100], 'A': np.arange(8)})
print (df)
A Last
0 0 1
1 1 2
2 2 3
3 3 5
4 4 8
5 5 10
6 6 45
7 7 100
Q1 = df['Last'].quantile(.25)
Q3 = df['Last'].quantile(.75)
q1 = Q1-1.5*(Q3-Q1)
q3 = Q3+1.5*(Q3-Q1)
df1 = df[df['Last'].between(q1, q3)]
print (df1)
A Last
0 0 1
1 1 2
2 2 3
3 3 5
4 4 8
5 5 10
plt.plot(df1['Last'].values, 'o')
plt.xlim(0, 1000)
plt.ylim(0,500)
plt.show()
#if want write only Last column
df1[['Last']].to_csv('Training_Data_New.csv', index=False)
#if you want write all columns
df1.to_csv('Training_Data_New.csv', index=False)
You can add your new_column variable as a column in your existing DataFrame and then use to_csv() to save.
After you get the new_column variable:
1. Drop the column last from df:
df.drop('last', axis=1, inplace=True)
2. Assign the new values:
df['last'] = new_column
3. Save your df:
df.to_csv('Training_Data_New.csv', index=False)
I have a dataframe, something like:
foo bar qux
0 a 1 3.14
1 b 3 2.72
2 c 2 1.62
3 d 9 1.41
4 e 3 0.58
and I would like to add a 'total' row to the end of dataframe:
foo bar qux
0 a 1 3.14
1 b 3 2.72
2 c 2 1.62
3 d 9 1.41
4 e 3 0.58
5 total 18 9.47
I've tried to use the sum command but I end up with a Series, which although I can convert back to a Dataframe, doesn't maintain the data types:
tot_row = pd.DataFrame(df.sum()).T
tot_row['foo'] = 'tot'
tot_row.dtypes:
foo object
bar object
qux object
I would like to maintain the data types from the original data frame as I need to apply other operations to the total row, something like:
baz = 2*tot_row['qux'] + 3*tot_row['bar']
Update June 2022
DataFrame.append is now deprecated (and removed in pandas 2.0). You could use pd.concat instead, but it's probably easier to use df.loc['Total'] = df.sum(numeric_only=True), as Kevin Zhu commented. Or, better still, don't modify the data frame in place and keep your data separate from your summary statistics!
Append a totals row with
df.append(df.sum(numeric_only=True), ignore_index=True)
The conversion is necessary only if you have a column of strings or objects.
It's a bit of a fragile solution, so I'd recommend sticking to operations on the dataframe, e.g.:
baz = 2*df['qux'].sum() + 3*df['bar'].sum()
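A minimal sketch of the pd.concat route mentioned in the update, recreating the question's frame (note, as an assumption worth checking, that concatenating the float totals promotes the integer bar column to float):

```python
import pandas as pd

df = pd.DataFrame({'foo': list('abcde'),
                   'bar': [1, 3, 2, 9, 3],
                   'qux': [3.14, 2.72, 1.62, 1.41, 0.58]})

# Sum the numeric columns, turn the result into a one-row frame,
# label it, and concatenate below the original data
totals = df.sum(numeric_only=True).to_frame().T
totals['foo'] = 'total'
out = pd.concat([df, totals], ignore_index=True)
```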
df.loc["Total"] = df.sum()
works for me and I find it easier to remember. Am I missing something?
Probably wasn't possible in earlier versions.
I'd actually like to add the total row only temporarily though.
Adding it permanently is good for display but makes it a hassle in further calculations.
Just found
df.append(df.sum().rename('Total'))
This prints what I want in a Jupyter notebook and appears to leave the df itself untouched.
New Method
To get both row and column total:
import numpy as np
import pandas as pd
df = pd.DataFrame({'a': [10,20],'b':[100,200],'c': ['a','b']})
df.loc['Column_Total']= df.sum(numeric_only=True, axis=0)
df.loc[:,'Row_Total'] = df.sum(numeric_only=True, axis=1)
print(df)
a b c Row_Total
0 10.0 100.0 a 110.0
1 20.0 200.0 b 220.0
Column_Total 30.0 300.0 NaN 330.0
Use DataFrame.pivot_table with margins=True:
import pandas as pd
data = [('a',1,3.14),('b',3,2.72),('c',2,1.62),('d',9,1.41),('e',3,.58)]
df = pd.DataFrame(data, columns=('foo', 'bar', 'qux'))
Original df:
foo bar qux
0 a 1 3.14
1 b 3 2.72
2 c 2 1.62
3 d 9 1.41
4 e 3 0.58
Since pivot_table requires some sort of grouping (without the index argument, it'll raise a ValueError: No group keys passed!), and your original index is vacuous, we'll use the foo column:
df.pivot_table(index='foo',
margins=True,
margins_name='total', # defaults to 'All'
aggfunc=sum)
Voilà!
bar qux
foo
a 1 3.14
b 3 2.72
c 2 1.62
d 9 1.41
e 3 0.58
total 18 9.47
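A self-contained reproduction of the above; as an aside, newer pandas versions prefer the string 'sum' over the builtin sum for aggfunc (an assumption worth verifying against your version):

```python
import pandas as pd

data = [('a', 1, 3.14), ('b', 3, 2.72), ('c', 2, 1.62), ('d', 9, 1.41), ('e', 3, 0.58)]
df = pd.DataFrame(data, columns=('foo', 'bar', 'qux'))

# Group on foo and let margins=True append the totals row
out = df.pivot_table(index='foo',
                     margins=True,
                     margins_name='total',  # defaults to 'All'
                     aggfunc='sum')
```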
Alternative way (verified on Pandas 0.18.1):
import numpy as np
total = df.apply(np.sum)
total['foo'] = 'tot'
df.append(pd.DataFrame(total.values, index=total.keys()).T, ignore_index=True)
Result:
foo bar qux
0 a 1 3.14
1 b 3 2.72
2 c 2 1.62
3 d 9 1.41
4 e 3 0.58
5 tot 18 9.47
Building on JMZ answer
df.append(df.sum(numeric_only=True), ignore_index=True)
if you want to continue using your current index you can name the sum series using .rename() as follows:
df.append(df.sum().rename('Total'))
This will add a row at the bottom of the table.
This is the way that I do it: transposing and using the assign method in combination with a lambda function. It keeps things simple for me.
df.T.assign(GrandTotal = lambda x: x.sum(axis=1)).T
Building on answer from Matthias Kauer.
To add row total:
df.loc["Row_Total"] = df.sum()
To add column total,
df.loc[:,"Column_Total"] = df.sum(axis=1)
New method [September 2022]
TL;DR:
Just use
df.style.concat(df.agg(['sum']).style)
for a solution that won't change your dataframe, works even if you have a "sum" in your index, and can be styled!
Explanation
In pandas 1.5.0, a new method named .style.concat() gives you the ability to display several dataframes together. This is a good way to show the total (or any other statistics), because it is not changing the original dataframe, and works even if you have an index named "sum" in your original dataframe.
For example:
import pandas as pd
df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=['A', 'B', 'C'])
df.style.concat(df.agg(['sum']).style)
and it will return a formatted table that is visible in jupyter as this:
Styling
With a little more code, you can even make the last row look different:
df.style.concat(
df.agg(['sum']).style
.set_properties(**{'background-color': 'yellow'})
)
to get:
See other ways to style (such as bold font or table lines) in the docs.
The following helped me add a column total and a row total to a dataframe.
Assume dft1 is your original dataframe; now add a column total and a row total with the following steps.
from io import StringIO
import pandas as pd
#create dataframe string
dfstr = StringIO(u"""
a;b;c
1;1;1
2;2;2
3;3;3
4;4;4
5;5;5
""")
#create dataframe dft1 from string
dft1 = pd.read_csv(dfstr, sep=";")
## add a column total to dft1
dft1['Total'] = dft1.sum(axis=1)
## add a row total to dft1 with the following steps
sum_row = dft1.sum(axis=0) #get sum_row first
dft1_sum=pd.DataFrame(data=sum_row).T #change it to a dataframe
dft1_sum=dft1_sum.reindex(columns=dft1.columns) #line up the col index to dft1
dft1_sum.index = ['row_total'] #change row index to row_total
dft1.append(dft1_sum) # append the row to dft1
Actually all proposed solutions render the original DataFrame unusable for any further analysis and can invalidate following computations, which will be easy to overlook and could lead to false results.
This is because you add a row to the data, which Pandas cannot differentiate from an additional row of data.
Example:
import pandas as pd
data = [1, 5, 6, 8, 9]
df = pd.DataFrame(data)
df
df.describe()
yields
   0
0  1
1  5
2  6
3  8
4  9

and

              0
count  5.00000
mean   5.80000
std    3.11448
min    1.00000
25%    5.00000
50%    6.00000
75%    8.00000
max    9.00000
After
df.loc['Totals']= df.sum(numeric_only=True, axis=0)
the dataframe looks like this
         0
0        1
1        5
2        6
3        8
4        9
Totals  29
This looks nice, but the new row is treated as if it was an additional data item, so df.describe will produce false results:
              0
count   6.00000
mean    9.66667
std     9.87252
min     1.00000
25%     5.25000
50%     7.00000
75%     8.75000
max    29.00000
So: Watch out! and apply this only after doing all other analyses of the data or work on a copy of the DataFrame!
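Following that advice, a minimal sketch that keeps the totals on a throwaway display copy so that df.describe() on the original stays correct:

```python
import pandas as pd

df = pd.DataFrame([1, 5, 6, 8, 9])

# Totals go on a copy used only for display
display_df = df.copy()
display_df.loc['Totals'] = df.sum(numeric_only=True, axis=0)

# The analysis frame is untouched: count is still 5, mean still 5.8
assert len(df) == 5
assert float(df[0].mean()) == 5.8
```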
When the "totals" need to be added to an index column:
totals = pd.DataFrame(df.sum(numeric_only=True)).transpose().set_index(pd.Index({"totals"}))
df.append(totals)
e.g.
(Pdb) df
count min bytes max bytes mean bytes std bytes sum bytes
row_0 837200 67412.0 368733992.0 2.518989e+07 5.122836e+07 2.108898e+13
row_1 299000 85380.0 692782132.0 2.845055e+08 2.026823e+08 8.506713e+13
row_2 837200 67412.0 379484173.0 8.706825e+07 1.071484e+08 7.289354e+13
row_3 239200 85392.0 328063972.0 9.870446e+07 1.016989e+08 2.361011e+13
row_4 59800 67292.0 383487021.0 1.841879e+08 1.567605e+08 1.101444e+13
row_5 717600 112309.0 379483824.0 9.687554e+07 1.103574e+08 6.951789e+13
row_6 119600 664144.0 358486985.0 1.611637e+08 1.171889e+08 1.927518e+13
row_7 478400 67300.0 593141462.0 2.824301e+08 1.446283e+08 1.351146e+14
row_8 358800 215002028.0 327493141.0 2.861329e+08 1.545693e+07 1.026645e+14
row_9 358800 202248016.0 321657935.0 2.684668e+08 1.865470e+07 9.632590e+13
(Pdb) totals = pd.DataFrame(df.sum(numeric_only=True)).transpose()
(Pdb) totals
count min bytes max bytes mean bytes std bytes sum bytes
0 4305600.0 418466685.0 4.132815e+09 1.774725e+09 1.025805e+09 6.365722e+14
(Pdb) totals = pd.DataFrame(df.sum(numeric_only=True)).transpose().set_index(pd.Index({"totals"}))
(Pdb) totals
count min bytes max bytes mean bytes std bytes sum bytes
totals 4305600.0 418466685.0 4.132815e+09 1.774725e+09 1.025805e+09 6.365722e+14
(Pdb) df.append(totals)
count min bytes max bytes mean bytes std bytes sum bytes
row_0 837200.0 67412.0 3.687340e+08 2.518989e+07 5.122836e+07 2.108898e+13
row_1 299000.0 85380.0 6.927821e+08 2.845055e+08 2.026823e+08 8.506713e+13
row_2 837200.0 67412.0 3.794842e+08 8.706825e+07 1.071484e+08 7.289354e+13
row_3 239200.0 85392.0 3.280640e+08 9.870446e+07 1.016989e+08 2.361011e+13
row_4 59800.0 67292.0 3.834870e+08 1.841879e+08 1.567605e+08 1.101444e+13
row_5 717600.0 112309.0 3.794838e+08 9.687554e+07 1.103574e+08 6.951789e+13
row_6 119600.0 664144.0 3.584870e+08 1.611637e+08 1.171889e+08 1.927518e+13
row_7 478400.0 67300.0 5.931415e+08 2.824301e+08 1.446283e+08 1.351146e+14
row_8 358800.0 215002028.0 3.274931e+08 2.861329e+08 1.545693e+07 1.026645e+14
row_9 358800.0 202248016.0 3.216579e+08 2.684668e+08 1.865470e+07 9.632590e+13
totals 4305600.0 418466685.0 4.132815e+09 1.774725e+09 1.025805e+09 6.365722e+14
Since I generally want to do this at the very end, so as to avoid breaking the integrity of the dataframe (right before printing), I created a summary_rows_cols method which returns a printable dataframe:
def summary_rows_cols(df: pd.DataFrame,
                      column_sum: bool = False,
                      column_avg: bool = False,
                      column_median: bool = False,
                      row_sum: bool = False,
                      row_avg: bool = False,
                      row_median: bool = False
                      ) -> pd.DataFrame:
    ret = df.copy()
    if column_sum: ret.loc['Sum'] = df.sum(numeric_only=True, axis=0)
    if column_avg: ret.loc['Avg'] = df.mean(numeric_only=True, axis=0)
    if column_median: ret.loc['Median'] = df.median(numeric_only=True, axis=0)
    if row_sum: ret.loc[:, 'Sum'] = df.sum(numeric_only=True, axis=1)
    if row_avg: ret.loc[:, 'Avg'] = df.mean(numeric_only=True, axis=1)
    if row_median: ret.loc[:, 'Median'] = df.median(numeric_only=True, axis=1)
    ret.fillna('-', inplace=True)
    return ret
This allows me to enter a generic (numeric) df and get a summarized output such as:
a b c Sum Median
0 1 4 7 12 4
1 2 5 8 15 5
2 3 6 9 18 6
Sum 6 15 24 - -
from:
data = {
'a': [1, 2, 3],
'b': [4, 5, 6],
'c': [7, 8, 9]
}
df = pd.DataFrame(data)
printable = summary_rows_cols(df, row_sum=True, column_sum=True, row_median=True)