I have a data frame (df) in pandas with four columns, and I want a new column representing the mean of these four columns: df['mean'] = df.mean(1)
1 2 3 4 mean
NaN NaN NaN NaN NaN
5.9 5.4 2.4 3.2 4.225
0.6 0.7 0.7 0.7 0.675
2.5 1.6 1.5 1.2 1.700
0.4 0.4 0.4 0.4 0.400
So far so good. But when I save the results to a csv file this is what I found:
5.9,5.4,2.4,3.2,4.2250000000000005
0.6,0.7,0.7,0.7,0.6749999999999999
2.5,1.6,1.5,1.2,1.7
0.4,0.4,0.4,0.4,0.4
I guess I can force the format of the mean column, but any idea why this is happening?
I am using WinPython with Python 3.3.2 and pandas 0.11.0.
You could use the float_format parameter:
import pandas as pd
import io
content = '''\
1 2 3 4 mean
NaN NaN NaN NaN NaN
5.9 5.4 2.4 3.2 4.225
0.6 0.7 0.7 0.7 0.675
2.5 1.6 1.5 1.2 1.700
0.4 0.4 0.4 0.4 0.400'''
df = pd.read_table(io.StringIO(content), sep=r'\s+')
df.to_csv('/tmp/test.csv', float_format='%g', index=False)
yields
1,2,3,4,mean
,,,,
5.9,5.4,2.4,3.2,4.225
0.6,0.7,0.7,0.7,0.675
2.5,1.6,1.5,1.2,1.7
0.4,0.4,0.4,0.4,0.4
The answers seem correct. Floating-point numbers cannot be represented exactly in binary, so small differences are bound to appear. Read The Floating Point Guide.
>>> a = 5.9+5.4+2.4+3.2
>>> a / 4
4.2250000000000005
As you said, you could always format the results if you only want a fixed number of digits after the decimal point.
>>> "{:.3f}".format(a/4)
'4.225'
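If you only need the one column fixed, rounding the mean before writing also works (a small sketch using the values from the example above):

```python
import pandas as pd

# the four columns from the example above
df = pd.DataFrame({'1': [5.9, 0.6], '2': [5.4, 0.7],
                   '3': [2.4, 0.7], '4': [3.2, 0.7]})

# round to 3 decimals so the CSV shows 4.225 rather than 4.2250000000000005
df['mean'] = df.mean(axis=1).round(3)
print(df['mean'].tolist())  # [4.225, 0.675]
```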
I have a dataframe as follows:
id        return1  return2
weekday1  0.1      0.2
weekday1  0.2      0.4
weekday1  0.3      0.5
weekday2  0.4      0.7
weekday2  0.5      0.6
weekday2  0.6      0.1
I know how to do the rolling-groupby-sum, which is
(df.groupby(df.index.dayofweek)  # originally the index is a time series
   .rolling(52).sum()
   .droplevel(level=0).sort_index())
Now I need to add 1 to all the elements first and then multiply those in the same group as follows.
Step 1 - add 1:
id        return1  return2
weekday1  1.1      1.2
weekday1  1.2      1.4
weekday1  1.3      1.5
weekday2  1.4      1.7
weekday2  1.5      1.6
weekday2  1.6      1.1
Step 2 - multiply by group:
id        return1      return2
weekday1  1.1×1.2×1.3  1.2×1.4×1.5
weekday2  1.4×1.5×1.6  1.7×1.6×1.1
I use the following code:
(df.transform(lambda x: x + 1)
   .groupby(df.index.dayofweek)
   .rolling(52).mul()
   .droplevel(level=0).sort_index())
but it gives an AttributeError: 'RollingGroupby' object has no attribute 'mul'.
cumprod() doesn't work either. Perhaps it has something to do with the rolling part, since there is no such thing as rolling.cumprod() or rolling.mul().
Is there a better way to do the multiplication within a group with rolling part?
Use numpy.prod in Rolling.apply:
import numpy as np

df.add(1).groupby(df.index.dayofweek).rolling(52).apply(np.prod)
By the way, judging from the expected output, it seems you need GroupBy.prod:
df.add(1).groupby(df.index.dayofweek).prod()
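A minimal sketch of the Rolling.apply approach on a toy frame (a window of 2 instead of 52, and invented Monday dates, just to show the mechanics):

```python
import numpy as np
import pandas as pd

idx = pd.to_datetime(['2021-01-04', '2021-01-11', '2021-01-18'])  # all Mondays
df = pd.DataFrame({'return1': [0.1, 0.2, 0.3]}, index=idx)

# add 1, group by weekday, then take a rolling product via apply
res = (df.add(1)
         .groupby(df.index.dayofweek)
         .rolling(2)
         .apply(np.prod)
         .droplevel(0))
print(res)  # rolling products: NaN, then 1.1*1.2, then 1.2*1.3
```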
I read somewhere that the preferred way of accessing a DataFrame's columns is through the method .loc, but I found some drawbacks, and I am wondering if they are normal.
Say that I do the following:
import pandas as pd
df = pd.read_csv("MyFile.csv")
and assume that the DataFrame df looks like this:
ColA ColB ColC
Time
0.0 9.2 -3.5 2.0
0.1 10.2 -0.9 1.1
0.2 4.3 2.1 4.2
If I type df[" and then hit TAB, autocompletion kicks in and I can choose the column name from a pop-up list, whereas if I type df.loc[" and then hit TAB, nothing happens. I am wondering whether this is normal behavior.
Also, it seems that if the column names are tuples, e.g.
('ColA','X') ('ColB','Y') ('ColC','Z')
Time
0.0 9.2 -3.5 2.0
0.1 10.2 -0.9 1.1
0.2 4.3 2.1 4.2
then I can access them with e.g. df[('ColA','X')] but I cannot with df.loc[('ColA','X')].
I am running IPython 7.2.2 (console) on a Windows 10 machine, if that helps.
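For what it's worth, the tuple case seems to be because .loc indexes rows first, so a bare tuple is interpreted as a row key; spelling out the row slice works. A small sketch with invented data (note pandas stores tuple keys from a dict as a MultiIndex, but the indexing behavior is the same):

```python
import pandas as pd

df = pd.DataFrame({('ColA', 'X'): [9.2, 10.2],
                   ('ColB', 'Y'): [-3.5, -0.9]})

print(df[('ColA', 'X')].tolist())         # plain [] looks up columns directly
print(df.loc[:, ('ColA', 'X')].tolist())  # .loc needs the row slice first
# df.loc[('ColA', 'X')] would look for a *row* labelled ('ColA', 'X')
```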
Background:
I'm currently developing some data profiling in SQL Server. This consists of calculating aggregate statistics on the values in targeted columns.
I'm using SQL for most of the heavy lifting, but calling Python for some of the statistics that SQL is poor at calculating. I'm leveraging the pandas package through SQL Server Machine Learning Services.
However,
I'm currently developing this script on Visual Studio. The SQL portion is irrelevant other than as background.
Problem:
My issue is that when I call one of the Python statistics functions, it produces the output as a series whose labels are seemingly not part of the data; I cannot access the labels at all. I need the values of these labels, and I also need to normalize the data and insert a column of static values describing which calculation was performed on each row.
Constraints:
I will need to normalize each statistic so I can union the datasets and pass the values back to SQL for further processing. All output needs to accept dynamic schemas, so no hardcoding labels etc.
Attempted solutions:
I've tried explicitly coercing output to dataframes. This just results in a series with label "0".
I've also tried adding static values to the columns. This just adds the target column name as one of the inaccessible labels, and the intended static value as part of the series.
I've searched many times for a solution, and couldn't find anything relevant to the problem.
Code and results below. Using the iris dataset as an example.
###########################
## AGG STATS TEST SCRIPT
##
###########################
#LOAD MODULES
import pandas as pds
#GET SAMPLE DATASET
iris = pds.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
#CENTRAL TENDENCY
mode1 = iris.mode()
stat_mode = pds.melt(
mode1
)
stat_median = iris.median()
stat_median['STAT_NAME'] = 'STAT_MEDIAN' #Try to add a column with the value 'STAT_MEDIAN'
#AGGREGATE STATS
stat_describe = iris.describe()
#PRINT RESULTS
print(iris)
print(stat_median)
print(stat_describe)
###########################
## OUTPUT
##
###########################
>>> #PRINT RESULTS
... print(iris) #ORIGINAL DATASET
...
sepal_length sepal_width petal_length petal_width species
0 5.1 3.5 1.4 0.2 setosa
1 4.9 3.0 1.4 0.2 setosa
2 4.7 3.2 1.3 0.2 setosa
3 4.6 3.1 1.5 0.2 setosa
4 5.0 3.6 1.4 0.2 setosa
.. ... ... ... ... ...
145 6.7 3.0 5.2 2.3 virginica
146 6.3 2.5 5.0 1.9 virginica
147 6.5 3.0 5.2 2.0 virginica
148 6.2 3.4 5.4 2.3 virginica
149 5.9 3.0 5.1 1.8 virginica
[150 rows x 5 columns]
>>> print(stat_median) #YOU CAN SEE THAT IT INSERTED COLUMN INTO ROW LABELS, VALUE INTO RESULTS SERIES
sepal_length 5.8
sepal_width 3
petal_length 4.35
petal_width 1.3
STAT_NAME STAT_MEDIAN
dtype: object
>>> print(stat_describe) #BASIC DESCRIPTIVE STATS, NEED TO LABEL THE STATISTIC NAMES TO UNPIVOT THIS
sepal_length sepal_width petal_length petal_width
count 150.000000 150.000000 150.000000 150.000000
mean 5.843333 3.057333 3.758000 1.199333
std 0.828066 0.435866 1.765298 0.762238
min 4.300000 2.000000 1.000000 0.100000
25% 5.100000 2.800000 1.600000 0.300000
50% 5.800000 3.000000 4.350000 1.300000
75% 6.400000 3.300000 5.100000 1.800000
max 7.900000 4.400000 6.900000 2.500000
>>>
Any assistance is greatly appreciated. Thank you!
I figured it out. There's a function called reset_index that will convert the index to a column, and create a new numerical index.
stat_median = pds.DataFrame(stat_median)
stat_median.reset_index(inplace=True)
stat_median = stat_median.rename(columns={'index' : 'fieldname', 0: 'value'})
stat_median['stat_name'] = 'median'
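For reference, the same trick generalizes to describe() without hardcoding any labels: reset the index to expose the statistic names, then melt into long form (a sketch on a small invented frame rather than the iris download):

```python
import pandas as pd

df = pd.DataFrame({'sepal_length': [5.1, 4.9, 4.7],
                   'sepal_width': [3.5, 3.0, 3.2]})

# expose the statistic names as a column, then unpivot to long form
stat_describe = df.describe().reset_index().rename(columns={'index': 'stat_name'})
long_form = stat_describe.melt(id_vars='stat_name', var_name='fieldname')
print(long_form.head())
```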
I have a data frame, and I am looking to get the max value for each row along with the column header of the column where that max value is located, returned as a new dataframe. In reality my data frame has over 50 columns and over 30,000 rows:
df1:
ID Tis RNA DNA Prot Node Exv
AB 1.4 2.3 0.0 0.3 2.4 4.4
NJ 2.2 3.4 2.1 0.0 0.0 0.2
KL 0.0 0.0 0.0 0.0 0.0 0.0
JC 5.2 4.4 2.1 5.4 3.4 2.3
So the ideal output looks like this:
df2:
ID
AB Exv 4.4
NJ RNA 3.4
KL N/A N/A
JC Prot 5.4
I have tried the following without any success:
df2 = df1.max(axis=1)
result.index = df1.idxmax(axis=1)
also tried:
df2=pd.Series(df1.columns[np.argmax(df1.values,axis=1)])
final=pd.DataFrame(df1.lookup(s.index,s),s)
I have looked at other posts but still can't seem to solve this.
Any help would be great
If ID is the index, use DataFrame.agg, then replace all-zero rows with missing values:
df = df1.agg(['idxmax','max'], axis=1).mask(lambda x: x['max'].eq(0))
print (df)
idxmax max
AB Exv 4.4
NJ RNA 3.4
KL NaN NaN
JC Prot 5.4
If ID is a column:
df = df1.set_index('ID').agg(['idxmax','max'], axis=1).mask(lambda x: x['max'].eq(0))
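A runnable sketch of the same idea on a cut-down version of the example frame:

```python
import pandas as pd

df1 = pd.DataFrame({'Tis': [1.4, 0.0], 'RNA': [2.3, 0.0],
                    'Exv': [4.4, 0.0]}, index=['AB', 'KL'])

# per-row max and the column it came from; all-zero rows become NaN
df2 = df1.agg(['idxmax', 'max'], axis=1).mask(lambda x: x['max'].eq(0))
print(df2)
```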
There is a large csv file imported. Below is the output, where Flavor_Score and Overall_Score are the results of applying df.groupby('Beer_name').mean() across a multitude of testers. I would like to add a standard-deviation column for each of Flavor_Score and Overall_Score, to the right of the corresponding mean column. The function to apply is clear, but how do I add the columns for display? Of course, I could generate an array and append it (right?), but that seems a cumbersome way to do it.
Beer_name  Beer_Style  Flavor_Score  Overall_Score
Coors      Light       2.0           3.0
Sam Adams  Dark        4.0           4.5
Becks      Light       3.5           3.5
Guinness   Dark        2.0           2.2
Heineken   Light       3.5           3.7
You could use
df.groupby('Beer_name').agg(['mean','std'])
This computes the mean and the std for each group.
For example,
import numpy as np
import pandas as pd
np.random.seed(2015)
N = 100
beers = ['Coors', 'Sam Adams', 'Becks', 'Guinness', 'Heineken']
style = ['Light', 'Dark', 'Light', 'Dark', 'Light']
df = pd.DataFrame({'Beer_name': np.random.choice(beers, N),
'Flavor_Score': np.random.uniform(0, 10, N),
'Overall_Score': np.random.uniform(0, 10, N)})
df['Beer_Style'] = df['Beer_name'].map(dict(zip(beers, style)))
print(df.groupby('Beer_name').agg(['mean','std']))
yields
Flavor_Score Overall_Score
mean std mean std
Beer_name
Becks 5.779266 3.033939 6.995177 2.697787
Coors 6.521966 2.008911 4.066374 3.070217
Guinness 4.836690 2.644291 5.577085 2.466997
Heineken 4.622213 3.108812 6.372361 2.904932
Sam Adams 5.443279 3.311825 4.697961 3.164757
groupby.agg([fun1, fun2]) computes any number of functions in one step:
from random import choice, random
import pandas as pd
import numpy as np
beers = ['Coors', 'Sam Adams', 'Becks', 'Guinness', 'Heineken']
styles = ['Light', 'Dark']
def generate():
    for i in range(100):
        yield dict(beer=choice(beers), style=choice(styles),
                   flavor_score=random()*10.0,
                   overall_score=random()*10.0)

pd.options.display.float_format = ' {:,.1f} '.format
df = pd.DataFrame(generate())
print(df.groupby(['beer', 'style']).agg([np.mean, np.std]))
=>
flavor_score overall_score
mean std mean std
beer style
Becks Dark 7.1 3.6 1.9 1.6
Light 4.7 2.4 2.0 1.0
Coors Dark 5.5 3.2 2.6 1.1
Light 5.3 2.5 1.9 1.1
Guinness Dark 3.3 1.4 2.1 1.1
Light 4.7 3.6 2.2 1.1
Heineken Dark 4.4 3.0 2.7 1.0
Light 6.0 2.3 2.1 1.3
Sam Adams Dark 3.4 3.0 1.7 1.2
Light 5.2 3.6 1.6 1.3
What if I need to apply a user-defined function to just the flavor_score column? Let's say I want to subtract 0.5 from the flavor_score mean for all rows, except for Heineken, for which I want to add 0.25.
grouped[grouped.beer != 'Heineken']['flavor_score']['mean'] - 0.5
grouped[grouped.beer == 'Heineken']['flavor_score']['mean'] + 0.25
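In case it helps, one way to get that kind of per-group adjustment is to build an offset array from the group labels in the index with numpy.where and add it to the target column (a sketch with invented scores; the column and group names are taken from the example above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'beer': ['Coors', 'Heineken', 'Coors', 'Heineken'],
                   'flavor_score': [4.0, 6.0, 6.0, 8.0]})

grouped = df.groupby('beer').agg(['mean', 'std'])
col = ('flavor_score', 'mean')  # MultiIndex column after agg

# subtract 0.5 everywhere, except add 0.25 for Heineken
offset = np.where(grouped.index == 'Heineken', 0.25, -0.5)
grouped[col] = grouped[col] + offset
print(grouped[col])  # Coors: 5.0 - 0.5 = 4.5, Heineken: 7.0 + 0.25 = 7.25
```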