Get value from pandas series object - python

I have a bunch of data files with columns 'Names', 'Gender', 'Count', one file per year. I need to concatenate all the files for some period, sum all counts for all unique names, and add a new column with the number of consonants. I can't extract the string value from 'Names'. How can I implement that?
Here is my code:
import os
import re
import pandas as pd

PATH = ...

def consonants_dynamics(years):
    names_by_year = {}
    for year in years:
        names_by_year[year] = pd.read_csv(PATH + "\\yob{}.txt".format(year),
                                          names=['Names', 'Gender', 'Count'])
    names_all = pd.concat(names_by_year, names=['Year', 'Pos'])
    dynamics = names_all.groupby('Names').sum().sort_values(by='Count', ascending=False).unstack('Names')
    dynamics['Consonants'] = dynamics.apply(count_vowels(dynamics.Names), axis=1)
    return dynamics.head(10)

def count_vowels(name):
    vowels = re.compile('A|E|I|O|U|a|e|i|o|u')
    return len(name) - len(vowels.findall(name))
If I run something like
a = consonants_dynamics(i for i in range(1900, 2001, 10))
I get the following error message:
<ipython-input-9-942fc155267e> in consonants_dynamics(years)
...
---> 12 dynamics['Consonants'] = dynamics.apply(count_vowels(dynamics.Names), axis=1)
AttributeError: 'Series' object has no attribute 'Names'
I tried various ways, but they all failed. How can it be done?

After doing unstack you converted dynamics to a Series object, so you no longer have a Names column to access as dynamics.Names. I think it should be fixed by removing .unstack('Names').
After that, get the names back out of the index:
dynamics['Consonants'] = dynamics.reset_index()['Names'].apply(count_vowels)
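A minimal runnable sketch of that fix, assuming made-up name data and the count_vowels function from the question; the trailing .values is an extra step so the counts are assigned by position, since dynamics is indexed by the names while reset_index() produces a 0..n-1 index:
import re
import pandas as pd

def count_vowels(name):
    vowels = re.compile('A|E|I|O|U|a|e|i|o|u')
    return len(name) - len(vowels.findall(name))

# toy stand-in for the concatenated yearly files
names_all = pd.DataFrame({'Names': ['James', 'James', 'John', 'Robert'],
                          'Count': [10, 20, 30, 40]})

# without .unstack('Names'), dynamics stays a DataFrame indexed by Names
dynamics = names_all.groupby('Names').sum().sort_values(by='Count', ascending=False)

# pull Names out of the index, count consonants, assign by position with .values
dynamics['Consonants'] = dynamics.reset_index()['Names'].apply(count_vowels).values

print(dynamics)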

Convert the index with to_series and apply the function:
print (dynamics)
        Count
Names        
James       2
John        3
Robert     10
def count_vowels (name):
    vowels = re.compile('A|E|I|O|U|a|e|i|o|u')
    return len(name) - len (vowels.findall(name))
dynamics['Consonants'] = dynamics.index.to_series().apply(count_vowels)
Solution without a function, with str.len and subtracting only the vowels counted by str.count:
pat = 'A|E|I|O|U|a|e|i|o|u'
s = dynamics.index.to_series()
dynamics['Consonants_new'] = s.str.len() - s.str.count(pat)
print (dynamics)
        Count  Consonants_new  Consonants
Names                                    
James       2               3           3
John        3               3           3
Robert     10               4           4
EDIT:
Solution without to_series: add as_index=False to groupby so it returns a DataFrame:
names_all = pd.DataFrame({
    'Names': ['James', 'James', 'John', 'John', 'Robert', 'Robert'],
    'Count': [10, 20, 10, 30, 80, 20]
})

dynamics = (names_all.groupby('Names', as_index=False).sum()
                     .sort_values(by='Count', ascending=False))

pat = 'A|E|I|O|U|a|e|i|o|u'
dynamics['Consonants'] = dynamics['Names'].str.len() - dynamics['Names'].str.count(pat)
print (dynamics)
    Names  Count  Consonants
2  Robert    100           4
1    John     40           3
0   James     30           3

Related

Cannot set a DataFrame with multiple columns to the single column total_servings

I am a beginner getting familiar with pandas.
It throws an error when I try to create a new column this way:
drinks['total_servings'] = drinks.loc[:, 'beer_servings':'wine_servings'].apply(calculate, axis=1)
Below is my code, and I get the following error for line number 9:
"Cannot set a DataFrame with multiple columns to the single column total_servings"
Any help or suggestion would be appreciated :)
import pandas as pd

drinks = pd.read_csv('drinks.csv')

def calculate(drinks):
    return drinks['beer_servings'] + drinks['spirit_servings'] + drinks['wine_servings']

print(drinks)

drinks['total_servings'] = drinks.loc[:, 'beer_servings':'wine_servings'].apply(calculate, axis=1)
drinks['beer_sales'] = drinks['beer_servings'].apply(lambda x: x*2)
drinks['spirit_sales'] = drinks['spirit_servings'].apply(lambda x: x*4)
drinks['wine_sales'] = drinks['wine_servings'].apply(lambda x: x*6)
drinks
In your code, when the function calculate is called with axis=1, each row of the DataFrame is passed to it as an argument. Here, the function calculate is returning a DataFrame with multiple columns, but you are trying to assign it to a single column, which is not possible. You can try updating your code to this:
def calculate(each_row):
    return each_row['beer_servings'] + each_row['spirit_servings'] + each_row['wine_servings']

drinks['total_servings'] = drinks.apply(calculate, axis=1)
drinks['beer_sales'] = drinks['beer_servings'].apply(lambda x: x*2)
drinks['spirit_sales'] = drinks['spirit_servings'].apply(lambda x: x*4)
drinks['wine_sales'] = drinks['wine_servings'].apply(lambda x: x*6)
print(drinks)
I suppose the reason is the wrong argument name inside the calculate method: the given argument is drink, but drinks is used to calculate the sum of the columns.
The point is that drink is a Series object that represents a row, and the sum of its elements is a scalar. Meanwhile, drinks is a DataFrame, and the sum of its columns is a Series object.
The sample code below shows that this method works.
import pandas as pd

df = pd.DataFrame({
    "A": [1, 1, 1, 1, 1],
    "B": [2, 2, 2, 2, 2],
    "C": [3, 3, 3, 3, 3]
})

def calculate(to_calc_df):
    return to_calc_df["A"] + to_calc_df["B"] + to_calc_df["C"]

df["total"] = df.loc[:, "A":"C"].apply(calculate, axis=1)
print(df)
Result
   A  B  C  total
0  1  2  3      6
1  1  2  3      6
2  1  2  3      6
3  1  2  3      6
4  1  2  3      6

Count list length in a column of a DataFrame

This is my DataFrame:
   CustomerID                                         InvoiceNo
0     12346.0                                 [541431, C541433]
1     12347.0  [537626, 542237, 549222, 556201, 562032, 57351]
2     12348.0                  [539318, 541998, 548955, 568172]
3     12349.0                                          [577609]
4     12350.0                                          [543037]
Desired Output:
   CustomerID  InvoiceCount
0     12346.0             2
1     12347.0             6
2     12348.0             4
3     12349.0             1
4     12350.0             1
I want to calculate the total number of invoices each customer (CustomerID) has.
Please help.
See if this works:
df["InvoiceCount"] = df['InvoiceNo'].str.len()
If you have a real list then you can do
df['InvoiceCount'] = df['InvoiceNo'].apply(len)
If you have a string that looks like a list then you would have to convert the string to a real list before counting
df['InvoiceNo'] = df['InvoiceNo'].apply(eval)
But that may not work if the number C541433 (with the C) is correct, and you may need
df['InvoiceCount'] = df['InvoiceNo'].apply(lambda x: len(x.split(',')))
or, similar to the example in @Datanovice's comment,
df['InvoiceCount'] = df['InvoiceNo'].str.split(',').str.len()
Minimal working example
import pandas as pd
import io
text = '''CustomerID;InvoiceNo
12346.0;[541431, 541433]
12347.0;[537626, 542237, 549222, 556201, 562032, 57351]
12348.0;[539318, 541998, 548955, 568172]
12349.0;[577609]
12350.0;[543037]'''
df = pd.read_csv(io.StringIO(text), sep=';')
print( df['InvoiceNo'].apply(lambda x: len(eval(x))) )
print( df['InvoiceNo'].apply(eval).apply(len) )
print( df['InvoiceNo'].apply(lambda x: len(x.split(','))) )
print( df['InvoiceNo'].str.split(',').str.len() )
df['InvoiceNo'] = df['InvoiceNo'].apply(eval)
print( df['InvoiceNo'].apply(len) )
If that's in a list, you can use the function len.
So let's say the list is in the variable values:
values = [537626, 542237, 549222, 556201, 562032, 57351]
then the amount is:
len(values) # 6
this would return 6 in this example

How to select specific data from the DataFrame after using value_counts()?

I used Python to read a file which contains babies' names, genders and birth years. Now I want to find the names that are used by both boys and girls. I used value_counts() to get the number of appearances of each name, but now I don't know how to extract those names from all the names.
Here is my code:
def names_both(year):
    names = []
    path = 'babynames/yob%d.txt' % year
    columns = ['name', 'sex', 'birth']
    frame = pd.read_csv(path, names=columns)
    frame = frame['name'].value_counts()
    print(frame)
    """if len(names) != 0:
        print(names)
    else:
        print('None')"""
The frame now is like this:
Lou         2
Willie      2
Erie        2
Cora        2
           ..
Perry       1
Coy         1
Adolphus    1
Ula         1
Emily       1
Name: name, Length: 1889, dtype: int64
Here is the csv:
Anna,F,2604
Emma,F,2003
Elizabeth,F,1939
Minnie,F,1746
Margaret,F,1578
Ida,F,1472
Alice,F,1414
Bertha,F,1320
Sarah,F,1288
Annie,F,1258
Clara,F,1226
Ella,F,1156
Florence,F,1063
...
Thanks for helping!
Here is a way to count the names given to both girls and boys:
common_girl_and_boys_names = (
    # work name by name
    frame.groupby('name')
    # check how many sexes are recorded for the name and keep those given to both sexes; this boolean ends up in a column called 0
    .apply(lambda x: len(x['sex'].unique()) == 2)
    # the names are now in the index, reset it in order to get the names
    .reset_index()
    # keep only names where the column 0 holds True
    .loc[lambda x: x[0], 'name']
)
final_df = (
    # keep only the names common to boys and girls (the series built before)
    frame.loc[frame['name'].isin(common_girl_and_boys_names), :]
    # sex is now useless
    .drop(['sex'], axis='columns')
    # work name by name and sum the number of births
    .groupby('name')
    .sum()
)
You can put those lines after the read_csv call. I hope it is what you want.
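To see what those two blocks produce, here is a minimal sketch on an invented frame (the CSV excerpt above only lists girls' names, so a shared name is made up for illustration):
import pandas as pd

# invented sample: 'Willie' is given to both sexes
frame = pd.DataFrame({'name': ['Anna', 'Emma', 'Willie', 'Willie'],
                      'sex': ['F', 'F', 'F', 'M'],
                      'birth': [2604, 2003, 20, 15]})

common_girl_and_boys_names = (
    frame.groupby('name')
    .apply(lambda x: len(x['sex'].unique()) == 2)
    .reset_index()
    .loc[lambda x: x[0], 'name']
)

final_df = (
    frame.loc[frame['name'].isin(common_girl_and_boys_names), :]
    .drop(['sex'], axis='columns')
    .groupby('name')
    .sum()
)

print(final_df)   # a single row for Willie with birth 20 + 15 = 35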

groupby and sum two columns and set as one column in pandas

I have the following data frame:
import pandas as pd
data = pd.DataFrame()
data['Home'] = ['A','B','C','D','E','F']
data['HomePoint'] = [3,0,1,1,3,3]
data['Away'] = ['B','C','A','E','D','D']
data['AwayPoint'] = [0,3,1,1,0,0]
I want to group by the columns ['Home', 'Away'] under a single name, Team. Then I would like to sum HomePoint and AwayPoint into a column named Points.
Team  Points
A          4
B          0
C          4
D          1
E          4
F          3
How can I do it?
I tried a different approach using the following post:
Link
But I was not able to get the format that I wanted.
Greatly appreciate your advice.
Thanks
Zep.
A simple way is to create two new Series indexed by the teams:
home = pd.Series(data.HomePoint.values, data.Home)
away = pd.Series(data.AwayPoint.values, data.Away)
Then, the result you want is:
home.add(away, fill_value=0).astype(int)
Note that home + away does not work, because team F never played away, so it would result in NaN for them. That is why we use Series.add() with fill_value=0.
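A runnable sketch of the same idea; since D appears twice in the Away column of the sample data, the sketch below first sums each side with groupby (an extra step, not part of the answer above) so the add lines up one row per team, and the final rename_axis/reset_index is just one assumed way to get the Team/Points layout asked for:
import pandas as pd

data = pd.DataFrame({'Home': ['A', 'B', 'C', 'D', 'E', 'F'],
                     'HomePoint': [3, 0, 1, 1, 3, 3],
                     'Away': ['B', 'C', 'A', 'E', 'D', 'D'],
                     'AwayPoint': [0, 3, 1, 1, 0, 0]})

# total points per team on each side of the fixture
home = data.groupby('Home')['HomePoint'].sum()
away = data.groupby('Away')['AwayPoint'].sum()

# add aligns on the team index; fill_value=0 covers F, which never played away
points = home.add(away, fill_value=0).astype(int)

# reshape into the Team / Points frame from the question
result = points.rename('Points').rename_axis('Team').reset_index()
print(result)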
A complicated way is to use DataFrame.melt():
goo = data.melt(['HomePoint', 'AwayPoint'], var_name='At', value_name='Team')
goo.HomePoint.where(goo.At == 'Home', goo.AwayPoint).groupby(goo.Team).sum()
Or from the other perspective:
ooze = data.melt(['Home', 'Away'])
ooze.value.groupby(ooze.Home.where(ooze.variable == 'HomePoint', ooze.Away)).sum()
You can concatenate, pairwise, columns of your input dataframe. Then use groupby.sum.
# calculate number of column pairs
n = len(data.columns) // 2

# create list of pairwise dataframes
df_lst = [data.iloc[:, 2*i:2*(i+1)].set_axis(['Team', 'Points'], axis=1, inplace=False)
          for i in range(n)]

# concatenate list of dataframes
df = pd.concat(df_lst, axis=0)

# perform groupby
res = df.groupby('Team', as_index=False)['Points'].sum()
print(res)
  Team  Points
0    A       4
1    B       0
2    C       4
3    D       1
4    E       4
5    F       3

comparing column values based on other column values in pandas

I have a dataframe:
import pandas as pd
import numpy as np
df = pd.DataFrame([['M', 2014, 'Seth', 5],
                   ['M', 2014, 'Spencer', 5],
                   ['M', 2014, 'Tyce', 5],
                   ['F', 2014, 'Seth', 25],
                   ['F', 2014, 'Spencer', 23]], columns=['sex', 'year', 'name', 'number'])
print(df)
I would like to find the most gender ambiguous name for 2014. I have tried many ways but haven't had any luck yet.
NOTE: I do write a function at the end of my answer, but I decided to run through the code part by part for better understanding.
Obtaining Gender Ambiguous Names
First, you would want to get the list of gender ambiguous names. I would suggest using set intersection:
>>> male_names = df[df.sex == "M"].name
>>> female_names = df[df.sex == "F"].name
>>> gender_ambiguous_names = list(set(male_names).intersection(set(female_names)))
Now, you want to actually subset the data to show only gender ambiguous names in 2014. You would want to use membership conditions and chain the boolean conditions as a one-liner:
>>> gender_ambiguous_data_2014 = df[(df.name.isin(gender_ambiguous_names)) & (df.year == 2014)]
Aggregating the Data
Now you have this as gender_ambiguous_data_2014:
>>> gender_ambiguous_data_2014
  sex  year     name  number
0   M  2014     Seth       5
1   M  2014  Spencer       5
3   F  2014     Seth      25
4   F  2014  Spencer      23
Then you just have to aggregate by number:
>>> gender_ambiguous_data_2014.groupby('name').number.sum()
name
Seth       30
Spencer    28
Name: number, dtype: int64
Extracting the Name(s)
Now, the last thing you want is to get the name with the highest numbers. But in reality you might have gender ambiguous names that have the same total numbers. We should apply the previous result to a new variable gender_ambiguous_numbers_2014 and play with it:
>>> gender_ambiguous_numbers_2014 = gender_ambiguous_data_2014.groupby('name').number.sum()
>>> # get the max and find the list of names:
>>> gender_ambiguous_max_2014 = gender_ambiguous_numbers_2014[gender_ambiguous_numbers_2014 == gender_ambiguous_numbers_2014.max()]
Now you get this:
>>> gender_ambiguous_max_2014
name
Seth    30
Name: number, dtype: int64
Cool, let's extract the index names then!
>>> gender_ambiguous_max_2014.index
Index([u'Seth'], dtype='object')
Wait, what the heck is this type? (HINT: it's pandas.core.index.Index)
No problem, just apply list coercion:
>>> list(gender_ambiguous_max_2014.index)
['Seth']
Let's Write This in a Function!
So, in this case, our list has only one element. But maybe we want to write a function that returns a string for the sole contender, or a list of strings if several gender ambiguous names share the same total number in that year.
In the wrapper function below, I abbreviated my variable names with ga to shorten the code. Of course, this is assuming the data set is in the same format you have shown and is named df. If it's named otherwise just change the df accordingly.
def get_most_popular_gender_ambiguous_name(year):
    """Get the gender ambiguous name with the most numbers in a certain year.

    Returns:
        a string, or a list of strings

    Note:
        'gender_ambiguous' will be abbreviated as 'ga'
    """
    # get the gender ambiguous names
    male_names = df[df.sex == "M"].name
    female_names = df[df.sex == "F"].name
    ga_names = list(set(male_names).intersection(set(female_names)))
    # filter by year
    ga_data = df[(df.name.isin(ga_names)) & (df.year == year)]
    # aggregate to get total numbers
    ga_total_numbers = ga_data.groupby('name').number.sum()
    # find the max number
    ga_max_number = ga_total_numbers.max()
    # subset the Series to only those that have max numbers
    ga_max_data = ga_total_numbers[ga_total_numbers == ga_max_number]
    # get the index (the names) for those satisfying the conditions
    most_popular_ga_names = list(ga_max_data.index)  # list coercion
    # if the list only contains one element, return the only element
    if len(most_popular_ga_names) == 1:
        return most_popular_ga_names[0]
    return most_popular_ga_names
Now, calling this function is as easy as it gets:
>>> get_most_popular_gender_ambiguous_name(2014) # assuming df is dataframe var name
'Seth'
Not sure what you mean by 'most gender ambiguous', but you can start from this:
>>> dfy = (df.year == 2014)
>>> dfF = df[(df.sex == 'F') & dfy][['name', 'number']]
>>> dfM = df[(df.sex == 'M') & dfy][['name', 'number']]
>>> pd.merge(dfF, dfM, on=['name'])
      name  number_x  number_y
0     Seth        25         5
1  Spencer        23         5
If you want just the name with highest total number then:
>>> dfT = pd.merge(dfF, dfM, on=['name'])
>>> dfT
      name  number_x  number_y
0     Seth        25         5
1  Spencer        23         5
>>> dfT['total'] = dfT['number_x'] + dfT['number_y']
>>> dfT.sort_values('total', ascending=False).head(1)
   name  number_x  number_y  total
0  Seth        25         5     30
