How to assign a value_counts output to a dataframe - python

I am trying to assign the output from value_counts to a new df. My code follows:
import pandas as pd
import glob
df = pd.concat((pd.read_csv(f, names=['date','bill_id','sponsor_id']) for f in glob.glob('/home/jayaramdas/anaconda3/df/s11?_s_b')))
column_list = ['date', 'bill_id']
df = df.set_index(column_list, drop = True)
df = df['sponsor_id'].value_counts()
df.columns=['sponsor', 'num_bills']
print (df)
The value counts are not being assigned the column headers I specified ('sponsor', 'num_bills'). I'm getting the following output from print:
1036 426
791 408
1332 401
1828 388
136 335
Name: sponsor_id, dtype: int64

Your column lengths don't match: you read 3 columns from the csv and then set the index to 2 of them. value_counts produces a Series with the column's values as the index and the counts as the values, so there are no columns to rename. You need to reset_index and then overwrite the column names:
df = df.reset_index()
df.columns=['sponsor', 'num_bills']
Example:
In [276]:
df = pd.DataFrame({'col_name':['a','a','a','b','b']})
df
Out[276]:
col_name
0 a
1 a
2 a
3 b
4 b
In [277]:
df['col_name'].value_counts()
Out[277]:
a 3
b 2
Name: col_name, dtype: int64
In [278]:
type(df['col_name'].value_counts())
Out[278]:
pandas.core.series.Series
In [279]:
df = df['col_name'].value_counts().reset_index()
df.columns = ['col_name', 'count']
df
Out[279]:
col_name count
0 a 3
1 b 2
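Applied back to the original code, the whole pipeline would look something like this (a sketch; it drops the set_index step, which is not needed for the count, and keeps the question's file path as-is):
import glob
import pandas as pd

df = pd.concat(
    pd.read_csv(f, names=['date', 'bill_id', 'sponsor_id'])
    for f in glob.glob('/home/jayaramdas/anaconda3/df/s11?_s_b')
)

# value_counts returns a Series; reset_index turns it back into a DataFrame
counts = df['sponsor_id'].value_counts().reset_index()
counts.columns = ['sponsor', 'num_bills']
print(counts.head())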

Appending value_counts() to a multi-column dataframe:
df = pd.DataFrame({'C1':['A','B','A'],'C2':['A','B','A']})
vc_df = df.value_counts().to_frame('Count').reset_index()
display(df, vc_df)
C1 C2
0 A A
1 B B
2 A A
C1 C2 Count
0 A A 2
1 B B 1
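For reference, the same counts can also be produced with a groupby (a sketch; note that DataFrame.value_counts itself only exists in pandas 1.1+):
vc_df = df.groupby(['C1', 'C2']).size().reset_index(name='Count')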

Merging df in python

Say I have two DataFrames
df1 = pd.DataFrame({'A':[1,2], 'B':[3,4]}, index = [0,1])
df2 = pd.DataFrame({'B':[8,9], 'C':[10,11]}, index = [1,2])
I want to merge so that any value in df1 is overwritten if there is a value in df2 at that location, and any new values in df2 are added, including new rows and columns.
The result should be:
A B C
0 1 3 nan
1 2 8 10
2 nan 9 11
I've tried combine_first, but that causes only NaN values to be overwritten.
update has the issue where new rows are created rather than overwritten.
merge has many issues.
I've tried writing my own function:
import math
import numpy as np
import pandas as pd

def take_right(df1, df2, j, i):
    print(df1)
    print(df2)
    try:
        s1 = df1[j][i]
    except:
        s1 = np.NaN
    try:
        s2 = df2[j][i]
    except:
        s2 = np.NaN
    if math.isnan(s2):
        #print(s1)
        return s1
    else:
        #print(s2)
        return s2

def combine_df(df1, df2):
    rows = set(df1.index.values.tolist()) | set(df2.index.values.tolist())
    #print(rows)
    columns = set(df1.columns.values.tolist()) | set(df2.columns.values.tolist())
    #print(columns)
    df = pd.DataFrame()
    #df.columns = columns
    for i in rows:
        #df[:][i] = []
        for j in columns:
            df = df.insert(int(i), j, take_right(df1, df2, j, i), allow_duplicates=False)
            #print(df)
    return df
This won't add new columns or rows to an empty DataFrame.
Thank you!!
One approach is to create an empty output dataframe with the union of columns and indices from df1 and df2, and then use the df.update method to write their values into out_df:
import pandas as pd

df1 = pd.DataFrame({'A':[1,2], 'B':[3,4]}, index=[0,1])
df2 = pd.DataFrame({'B':[8,9], 'C':[10,11]}, index=[1,2])

out_df = pd.DataFrame(
    columns=df1.columns.union(df2.columns),
    index=df1.index.union(df2.index),
)
out_df.update(df1)
out_df.update(df2)
out_df
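One caveat worth hedging: because out_df starts empty, its columns come out with object dtype even after the updates. If the dtypes matter, pandas can re-infer them:
out_df = out_df.infer_objects()  # e.g. A and C become float64 here, since they contain NaN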
Why does combine_first not work? It does, if you make df2 the caller so that its non-NaN values take precedence:
df = df2.combine_first(df1)
print(df)
Output:
A B C
0 1.0 3 NaN
1 2.0 8 10.0
2 NaN 9 11.0

Apply pd.Series while keeping another column

My initial df is:
df =
ID location
0 141 [(45.1024, 7.7498), (45.1027, 7.75), (45.1072,...
1 403 [(45.0595, 7.6829), (45.0595, 7.6829), (45.056...
2 920 [(45.0695, 7.5454), (44.9727, 7.666), (44.9726..
after applying b = df['location'].apply(pd.Series) I get
1 2 3 4
(45.1024, 7.7498) (45.1027, 7.75) (45.1072, 7.7568) (45.1076, 7.7563)
...
I get partially what I want, but I'm losing my column 'ID'. How can I keep it while applying pd.Series?
Concatenate the expanded columns back onto the original frame, then drop the old 'location' column:
df = pd.concat([df, b], axis=1)
df = df.drop(labels=['location'], axis=1)
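Putting the whole thing together (a minimal sketch; the coordinate lists are shortened stand-ins for the truncated ones in the question):
import pandas as pd

df = pd.DataFrame({
    'ID': [141, 403],
    'location': [[(45.1024, 7.7498), (45.1027, 7.75)],
                 [(45.0595, 7.6829), (45.0595, 7.6829)]],
})

# expand each list of tuples into its own columns (0, 1, ...)
b = df['location'].apply(pd.Series)

# keep 'ID' by concatenating the expanded columns back on, then drop 'location'
df = pd.concat([df, b], axis=1).drop(labels=['location'], axis=1)
print(df)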

Sum columns in a pandas dataframe which contain a string

I am trying to do something relatively simple: summing all columns in a pandas dataframe whose names contain a certain string, and then making that sum a new column in the dataframe. These columns are all numeric float values...
I can get the list of columns which contain the string I want
StmCol = [col for col in cdf.columns if 'Stm_Rate' in col]
But when I try to sum them using:
cdf['PadStm'] = cdf[StmCol].sum()
I get a new column full of "nan" values.
You need to pass axis=1 to .sum; by default (axis=0) it sums over each column:
In [11]: df = pd.DataFrame([[1, 2], [3, 4]], columns=["A", "B"])
In [12]: df
Out[12]:
A B
0 1 2
1 3 4
In [13]: df[["A"]].sum() # Here I'm passing the list of columns ["A"]
Out[13]:
A 4
dtype: int64
In [14]: df[["A"]].sum(axis=1)
Out[14]:
0 1
1 3
dtype: int64
Only the latter matches the index of df:
In [15]: df["C"] = df[["A"]].sum()
In [16]: df["D"] = df[["A"]].sum(axis=1)
In [17]: df
Out[17]:
A B C D
0 1 2 NaN 1
1 3 4 NaN 3
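Applied to the question's frame, the fix is simply:
cdf['PadStm'] = cdf[StmCol].sum(axis=1)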

Pandas flatten hierarchical index on non overlapping columns

I have a dataframe, and I set the index to a column of the dataframe. This creates a hierarchical column index. I want to flatten the columns to a single level. This is similar to Python Pandas - How to flatten a hierarchical index in columns; however, the columns do not overlap (i.e. 'id' is not at level 0 of the hierarchical index, with the other columns at level 1).
df = pd.DataFrame([(101,3,'x'), (102,5,'y')], columns=['id', 'A', 'B'])
df.set_index('id', inplace=True)
A B
id
101 3 x
102 5 y
Desired output is flattened columns, like this:
id A B
101 3 x
102 5 y
You are misinterpreting what you are seeing.
A B
id
101 3 x
102 5 y
This is not showing you a hierarchical column index. id is the name of the row index; in order to display that name, pandas puts it on its own line, which creates the extra space you see.
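A quick way to convince yourself (a small sketch): the columns are a flat, single-level Index, and 'id' is merely the row index's name:
print(df.columns)          # Index(['A', 'B'], dtype='object')
print(df.columns.nlevels)  # 1, i.e. not hierarchical
print(df.index.name)       # 'id'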
The answer to your question depends on what you really want or need.
As the df is, you can dump it to a csv just the way you want:
print(df.to_csv(sep='\t'))
id A B
101 3 x
102 5 y
print(df.to_csv())
id,A,B
101,3,x
102,5,y
Or you can alter the df so that it displays the way you'd like
print(df.rename_axis(None))
A B
101 3 x
102 5 y
Please do not do this!!!!
I'm only including it to demonstrate the manipulation.
I could also keep the index as it is, but manipulate both the column and row index names so that it prints the way you'd like.
print(df.rename_axis(None).rename_axis('id', axis=1))
id A B
101 3 x
102 5 y
But this names the columns' index 'id', which makes no sense.
There will always be an index in your dataframes. If you don't set 'id' as the index, it will be at the same level as the other columns, and pandas will populate an increasing integer index starting from 0.
df = pd.DataFrame([(101,3,'x'), (102,5,'y')], columns=['id', 'A', 'B'])
In[52]: df
Out[52]:
id A B
0 101 3 x
1 102 5 y
The index is there so you can slice the original dataframe, such as:
df.iloc[0]
Out[53]:
id 101
A 3
B x
Name: 0, dtype: object
So let's say you want 'id' as the index and also as a column, which is very redundant; you could do:
df = pd.DataFrame([(101,3,'x'), (102,5,'y')], columns=['id', 'A', 'B'])
df.set_index('id', inplace=True)
df['id'] = df.index
df
Out[55]:
A B id
id
101 3 x 101
102 5 y 102
With this you can slice by 'id', such as:
df.loc[101]
Out[57]:
A 3
B x
id 101
Name: 101, dtype: object
But it would give the same info as:
df = pd.DataFrame([(101,3,'x'), (102,5,'y')], columns=['id', 'A', 'B'])
df.set_index('id', inplace=True)
df.loc[101]
Out[58]:
A 3
B x
Name: 101, dtype: object
Given:
>>> df2=pd.DataFrame([(101,3,'x'), (102,5,'y')], columns=['id', 'A', 'B'])
>>> df2.set_index('id', inplace=True)
>>> df2
A B
id
101 3 x
102 5 y
For printing purdy, you can produce a copy of the DataFrame with a reset index and use .to_string:
>>> print(df2.reset_index().to_string(index=False))
id A B
101 3 x
102 5 y
Then play around with the formatting options so that the output suits your needs:
>>> fmts = [lambda s: u"{:^5}".format(str(s).strip())]*3
>>> print(df2.reset_index().to_string(index=False, formatters=fmts))
id A B
101 3 x
102 5 y

How to do an update on only part of a DataFrame

Let's say one has a DataFrame df1 with INDEX, Column1, Column2 and another df2 with INDEX, Column1, Column3.
Both INDEXes have similar values, so I want to use that to merge the information from one table onto the other.
I have been told to do as follows by other users:
df1.update(df2, join='left', overwrite=True)
This works if both INDEXes have similar values. The result is that df1 now has INDEX, Column1 (from df2) and Column2 (unchanged from df1). Column3 is not added to df1 (this behaviour is wanted, vs. the merge command, which adds everything).
Now, I would like to update df1 only on a few cases and based on Column2. I thought this would work:
df1[df1['Column2'] == 'Cond'].update(df2, join='left', overwrite=True)
But it doesn't; sometimes I get an error, other times the command works but ALL of df1's values have been modified.
Any idea on how to do this?
PS: Using .loc won't work as that requires that whatever INDEX you search for exists and this is not the case.
EDIT: Additional example
In [37]: df1 = pd.DataFrame([['USA',1],['USA',2],['USA',3],['FRA',1],['FRA',2]], columns = ['country', 'value'])
In [38]: df2 = pd.DataFrame([['USA',10],['FRA',20]], columns = ['country', 'value'])
In [39]: df1 = df1.set_index('country')
In [40]: df2 = df2.set_index('country')
In [41]: mask = df1['value'] >= 2
In [42]: idx = df1.index[mask]
In [43]: idx = idx.unique()
In [44]: df1
Out[44]:
value
country
USA 1
USA 2
USA 3
FRA 1
FRA 2
In [45]: df2
Out[45]:
value
country
USA 10
FRA 20
In [46]: idx
Out[46]: array(['USA', 'FRA'], dtype=object)
In [47]: df1.update(df2.loc[idx])
In [48]: df1
Out[48]:
value
country
USA 10
USA 10
USA 10
FRA 20
FRA 20
Define the boolean mask
mask = (df1['Column2'] == 'Cond')
If df1.index is identical to df2.index, then mask can be used to select
rows from df2 -- i.e., df2.loc[mask]. But if they are not identical, then
df2.loc[mask] may raise an error (if len(df1) != len(df2)), or worse, silently select the wrong rows
because the boolean mask is not aligning index values between df1 and df2.
So in the more general case when the indexes are not identical, the trick is to
convert the boolean mask into an Index that can be used to restrict
df2.
If df1.index is unique then call df1.update on the restricted df2:
idx = df1.index[mask]
df1.update(df2.loc[idx])
For example,
import pandas as pd
df1 = pd.DataFrame({'Column1':[1,2,3], 'Column2':['Cond',5,'Cond']}, index=['A','B','C'])
# Column1 Column2
# A 1 Cond
# B 2 5
# C 3 Cond
df2 = pd.DataFrame({'Column1':[10,20,30], 'Column3':[40,50,60]}, index=['D','B','C'])
# Column1 Column3
# D 10 40
# B 20 50
# C 30 60
mask = df1['Column2'] == 'Cond'
idx = df1.index[mask]
df1.update(df2.loc[idx])
print(df1)
prints
Column1 Column2
A 1 Cond
B 2 5
C 30 Cond
If df1.index is not unique, then make the index unique by adding mask to it:
df1['mask'] = df1['value'] >= 2
df2['mask'] = True
df1 = df1.set_index('mask', append=True)
df2 = df2.set_index('mask', append=True)
Then calling df1.update(df2) produces the desired result because update aligns indices.
For example,
import pandas as pd
df1 = pd.DataFrame([['USA',1],['USA',2],['USA',3],['FRA',1],['FRA',2]],
columns = ['country', 'value'])
df2 = pd.DataFrame([['USA',10],['FRA',20]], columns = ['country', 'value'])
df1 = df1.set_index('country')
# value
# country
# USA 1
# USA 2
# USA 3
# FRA 1
# FRA 2
df2 = df2.set_index('country')
# value
# country
# USA 10
# FRA 20
df1['mask'] = df1['value'] >= 2
df2['mask'] = True
df1 = df1.set_index('mask', append=True)
# value
# country mask
# USA False 1
# True 2
# True 3
# FRA False 1
# True 2
df2 = df2.set_index('mask', append=True)
# value
# country mask
# USA True 10
# FRA True 20
df1.update(df2)
df1.index = df1.index.droplevel('mask')
print(df1)
yields
value
country
USA 1
USA 10
USA 10
FRA 1
FRA 20
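An alternative sketch, if you would rather not add the temporary index level: look the masked rows' labels up in df2 directly (this assumes df2's index is unique, as in the example, and starts again from the freshly indexed df1 and df2):
mask = (df1['value'] >= 2).to_numpy()
# per-row lookup of df1's index labels in df2; NaN where df2 has no such label
mapped = df1.index.map(df2['value']).to_numpy()
take = mask & ~pd.isna(mapped)
df1.loc[take, 'value'] = mapped[take]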
