How do I group by two columns and loop to create subplots? - python

I have a large dataframe (df) with this structure:
year person purchase
2016 Peter 0
2016 Peter 223820
2016 Peter 0
2017 Peter 261740
2017 Peter 339987
2018 Peter 200000
2016 Carol 256400
2017 Carol 33083820
2017 Carol 154711
2018 Carol 3401000
2016 Frank 824043
2017 Frank 300000
2018 Frank 214416259
2018 Frank 4268825
2018 Frank 463080
2016 Rita 0
To see how much each person spent per year I do groupby year and person, which gives me what I want.
code:
df1 = df.groupby(['person','year']).sum().reset_index()
How do I create a loop to create subplots for each person containing what he/she spent on purchase each year?
So a subplot for each person where x = year and y = purchase.
I've tried a lot of different things explained here but none seems to work.
Thanks!

You can either do pivot_table or groupby().sum().unstack('person') and then plot:
(df.pivot_table(index='year',
                columns='person',
                values='purchase',
                aggfunc='sum')
   .plot(subplots=True)
);
Or
(df.groupby(['person','year'])['purchase']
   .sum()
   .unstack('person')
   .plot(subplots=True)
);
Output: one subplot per person, plotting the summed purchase against year.
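As a self-contained sketch of the pivot_table route (sample data rebuilt from the question; a non-interactive matplotlib backend is assumed so it runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so the sketch runs headless
import pandas as pd

# Rebuild a small version of the question's frame
df = pd.DataFrame({
    'year':     [2016, 2016, 2017, 2017, 2018, 2016, 2017, 2018],
    'person':   ['Peter', 'Peter', 'Peter', 'Peter', 'Peter',
                 'Carol', 'Carol', 'Carol'],
    'purchase': [0, 223820, 261740, 339987, 200000,
                 256400, 33083820, 3401000],
})

# One row per year, one column per person, purchases summed
pivoted = df.pivot_table(index='year', columns='person',
                         values='purchase', aggfunc='sum')

# subplots=True draws one axes per person (i.e. per column)
axes = pivoted.plot(subplots=True)
print(pivoted)
```

The groupby().sum().unstack('person') variant produces the same frame, so either feeds .plot(subplots=True) identically.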

Related

Pandas where function

I'm using the Pandas where function, trying to find the percentage for each state
filter1 = df['state']=='California'
filter2 = df['state']=='Texas'
filter3 = df['state']=='Florida'
df['percentage']= df['total'].where(filter1)/df['total'].where(filter1).sum()
The output is
Year state total percentage
2014 California 914198.0 0.134925
2014 Florida 766441.0 NaN
2014 Texas 1045274.0 NaN
2015 California 874642.0 0.129087
2015 Florida 878760.0 NaN
How do I apply the other two filters there too?
Don't use where here; use groupby.transform:
df['percentage'] = df['total'].div(df.groupby('state')['total'].transform('sum'))
Output:
Year state total percentage
0 2014 California 914198.0 0.511056
1 2014 Florida 766441.0 0.465865
2 2014 Texas 1045274.0 1.000000
3 2015 California 874642.0 0.488944
4 2015 Florida 878760.0 0.534135
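As a runnable sketch of the transform approach, with the frame rebuilt from the question's sample output:

```python
import pandas as pd

df = pd.DataFrame({
    'Year':  [2014, 2014, 2014, 2015, 2015],
    'state': ['California', 'Florida', 'Texas', 'California', 'Florida'],
    'total': [914198.0, 766441.0, 1045274.0, 874642.0, 878760.0],
})

# For every row, divide total by the summed total of that row's state;
# transform('sum') broadcasts each group's sum back onto its rows
df['percentage'] = df['total'] / df.groupby('state')['total'].transform('sum')
print(df)
```

Texas appears only once, so its single row carries the whole state total and gets percentage 1.0.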
You can combine the filters with df.loc[filter1 | filter2 | filter3] to apply several at once. Note that & would return no rows here, since a row's state can only match one of these filters.

I want to filter rows from a data frame where the year is 2020 or 2021, using the re.search and re.match functions

Data Frame:
Unnamed: 0 date target insult tweet year
0 1 2014-10-09 thomas-frieden fool Can you believe this fool, Dr. Thomas Frieden ... 2014
1 2 2014-10-09 thomas-frieden DOPE Can you believe this fool, Dr. Thomas Frieden ... 2014
2 3 2015-06-16 politicians all talk and no action Big time in U.S. today - MAKE AMERICA GREAT AG... 2015
3 4 2015-06-24 ben-cardin It's politicians like Cardin that have destroy... Politician #SenatorCardin didn't like that I s... 2015
4 5 2015-06-24 neil-young total hypocrite For the nonbeliever, here is a photo of #Neily... 2015
I want the data frame to contain only the rows whose year is 2020 or 2021, using the search and match methods.
Cast year to string first, since str.contains only works on strings:
df_filtered = df.loc[df['year'].astype(str).str.contains('2020|2021', regex=True)]
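Since the question asks for re.search specifically, here is a hedged sketch (columns simplified from the sample frame) that applies it row-wise after casting year to string:

```python
import re
import pandas as pd

# Simplified stand-in for the question's frame
df = pd.DataFrame({
    'insult': ['fool', 'DOPE', 'hypocrite'],
    'year':   [2014, 2020, 2021],
})

# re.search returns a Match object (truthy) when the pattern occurs anywhere
mask = df['year'].astype(str).apply(
    lambda y: re.search(r'20(20|21)', y) is not None)
df_filtered = df.loc[mask]
print(df_filtered)
```

re.match would work the same way here, since the year string starts with the pattern.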

Histogram using plot in Pandas - set x label

Dataframe:
Horror films released in 2019
Title Director Country Year
3 from Hell Rob Zombie United States 2019
Bliss Joe Begos United States 2019
Bedeviled The Vang Brothers United States 2016
Creep 2 Patrick Brice United States 2017
Brightburn David Yarovesky United States 2019
Delirium Dennis Iliadis Ireland 2018
Child's Play Lars Klevberg United States 2019
The Conjuring 2 James Wan United States 2016
Bloodlands Steven Kastrissios Albania 2017
Bird Box Susanne Bier United States 2017
I need to plot a histogram showing the number of titles released over the years, using the Pandas plot function.
code:
df = pd.read_csv(filename)
grouped = df.groupby('Year').count()[['Title']]
new_df = grouped.reset_index()
xtick = new_df['Year'].tolist()
width = new_df.Year[1] - new_df.Year[0]
new_df.iloc[:, 1:2].plot(kind='bar', width=width)
Cannot figure out a way to label x axis with values from the Year column, also unsure if my approach is correct.
Thanks in advance :)
It sounds like you want a bar chart, not a histogram, because you have discrete/categorical variables (years). And you pass kind='bar' in your plot statement, so you are on the right track. Try this and see if it works for you. I forced the y-axis to be integers since you are looking for counts, but that is optional.
import pandas as pd
import matplotlib.pyplot as plt

title = ['Movie1', 'Movie2', 'Movie3',
         'Movie4', 'Movie5', 'Movie6',
         'Movie7', 'Movie8', 'Movie9',
         ]
year = [2019, 2019, 2018,
        2017, 2019, 2018,
        2019, 2017, 2018,
        ]
df = pd.DataFrame(list(zip(title, year)),
                  columns=['Title', 'Year'])
print(df)

group = df.groupby('Year').count()[['Title']] \
          .rename(columns={'Title': 'No. of Movies'}) \
          .reset_index()
print(group)

ax = group.plot.bar(x='Year', rot=0)
ax.yaxis.get_major_locator().set_params(integer=True)
plt.show()
Title Year
0 Movie1 2019
1 Movie2 2019
2 Movie3 2018
3 Movie4 2017
4 Movie5 2019
5 Movie6 2018
6 Movie7 2019
7 Movie8 2017
8 Movie9 2018
Year No. of Movies
0 2017 2
1 2018 3
2 2019 4
The API offers a few different ways to do this (not a great thing, IMO). Here is one way to get what you want:
df = pd.read_csv(filename)
group = df.groupby('Year').count()[['Title']]
df2 = group.reset_index()
df2.plot(kind='bar', x="Year", y="Title")
Or, even more concisely:
df.value_counts("Year").plot(kind="bar")
Note that in the second case, you're creating a bar plot from a Series object. Also, value_counts sorts by count (descending) rather than by year, so append .sort_index() if you want the years in order.
You can simply do
df.groupby('Year').Title.count().plot(kind='bar')
Output: a bar chart of the number of titles per year.
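Pulling the pieces together as a self-contained sketch (a handful of rows borrowed from the question's table; a headless matplotlib backend is assumed), which also gets the years onto the x axis:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import pandas as pd

# A few rows from the question's film table
df = pd.DataFrame({
    'Title': ['3 from Hell', 'Bliss', 'Bedeviled', 'Creep 2', 'Brightburn',
              'Delirium', "Child's Play", 'The Conjuring 2'],
    'Year':  [2019, 2019, 2016, 2017, 2019, 2018, 2019, 2016],
})

# Count titles per year; the Series index (Year) becomes the x labels
counts = df.groupby('Year')['Title'].count()
ax = counts.plot(kind='bar', rot=0)
ax.set_ylabel('No. of titles')
print(counts)
```

Because the grouped Series is indexed by Year, the bar labels come for free; no manual xtick list is needed.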

Python - Extract multiple values from string in pandas df

I've searched for an answer to the following question but haven't found one yet. I have a large dataset like this small example:
df =
A B
1 I bought 3 apples in 2013
3 I went to the store in 2020 and got milk
1 In 2015 and 2019 I went on holiday to Spain
2 When I was 17, in 2014 I got a new car
3 I got my present in 2018 and it broke down in 2019
What I would like is to extract all the values > 1950 and have this as the end result:
A B C
1 I bought 3 apples in 2013 2013
3 I went to the store in 2020 and got milk 2020
1 In 2015 and 2019 I went on holiday to Spain 2015_2019
2 When I was 17, in 2014 I got a new car 2014
3 I got my present in 2018 and it broke down in 2019 2018_2019
I tried to extract values first, but didn't get further than:
df["C"] = df["B"].str.extract(r'(\d+)').astype(int)
df["C"] = df["B"].apply(lambda x: re.search(r'\d+', x).group())
But all I get are error messages (I only started Python and working with text a few weeks ago). Could someone help me?
With a single regex pattern (considering your comment that you "need the year it took place"):
In [268]: pat = re.compile(r'\b(19(?:[6-9]\d|5[1-9])|[2-9]\d{3})')
In [269]: df['C'] = df['B'].apply(lambda x: '_'.join(pat.findall(x)))
In [270]: df
Out[270]:
A B C
0 1 I bought 3 apples in 2013 2013
1 3 I went to the store in 2020 and got milk 2020
2 1 In 2015 and 2019 I went on holiday to Spain 2015_2019
3 2 When I was 17, in 2014 I got a new car 2014
4 3 I got my present in 2018 and it broke down in ... 2018_2019
Here's one way using str.findall and joining those items from the resulting lists that are greater than 1950:
s = df["B"].str.findall(r'\d+')
df['C'] = s.apply(lambda x: '_'.join(i for i in x if int(i) > 1950))
A B C
0 1 I bought 3 apples in 2013 2013
1 3 I went to the store in 2020 and got milk 2020
2 1 In 2015 and 2019 I went on holiday to Spain 2015_2019
3 2 When I was 17, in 2014 I got a new car 2014
4 3 I got my present in 2018 and it broke down in ... 2018_2019
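As a self-contained sketch of the findall route (three sample rows taken from the question):

```python
import pandas as pd

df = pd.DataFrame({
    'A': [1, 3, 2],
    'B': ['I bought 3 apples in 2013',
          'In 2015 and 2019 I went on holiday to Spain',
          'When I was 17, in 2014 I got a new car'],
})

# Find every digit run, keep only those greater than 1950, join with '_'
years = df['B'].str.findall(r'\d+')
df['C'] = years.apply(lambda xs: '_'.join(x for x in xs if int(x) > 1950))
print(df)
```

Note how the filter drops the stray "3" and "17", which a bare `(\d+)` extract would have kept.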

Looking back at previous row in data-frame and selecting specific records

I have a dataframe df which looks like:
name year dept metric
0 Steve Jones 2018 A 0.703300236
1 Steve Jones 2019 A 0.255587222
2 Jane Smith 2018 A 0.502505934
3 Jane Smith 2019 B 0.698808749
4 Barry Evans 2019 B 0.941325241
5 Tony Edwards 2017 B 0.880940126
6 Tony Edwards 2018 B 0.649086123
7 Tony Edwards 2019 A 0.881365905
I would like to create 2 new data-frames: one containing the records where someone has moved from dept A to B, and another where someone has moved from dept B to A. Therefore my desired output is:
name year dept metric
0 Jane Smith 2018 A 0.502505934
1 Tony Edwards 2018 B 0.649086123
name year dept metric
0 Jane Smith 2019 B 0.698808749
1 Tony Edwards 2019 A 0.881365905
The last year that someone is in their old dept is captured in one data-frame, and the first year in the new dept in the other. The records are sorted by name and year, so they will be in the correct order.
I've tried :
for row in agg_data.rows:
df['match'] = np.where(df.dept == 'A' and df.dept.shift() =='B','1')
df['match'] = np.where(df.dept == 'B' and df.dept.shift() =='A','2')
and then select the records into a data-frame, but I can't get it to work.
I believe you need:
df = df[df.groupby('name')['dept'].transform('nunique') > 1]
df = df.drop_duplicates(['name','dept'], keep='last')
df1 = df.drop_duplicates('name')
print (df1)
name year dept metric
2 Jane Smith 2018 A 0.502506
6 Tony Edwards 2018 B 0.649086
df2 = df.drop_duplicates('name', keep='last')
print (df2)
name year dept metric
3 Jane Smith 2019 B 0.698809
7 Tony Edwards 2019 A 0.881366
You could join the initial dataframe with a shift of itself to put consecutive rows on the same line. Then you require the names to match and the departments to be the pair you want, which gives you the indices of one of the expected rows; the other row simply has the adjacent index. It gives:
df = agg_data.join(agg_data.shift(), rsuffix='_old')

df1 = df[(df.name_old == df.name) & (df.dept_old == 'A') & (df.dept == 'B')]
print(pd.concat([agg_data.loc[df1.index],
                 agg_data.loc[df1.index - 1]]).sort_index())

df2 = df[(df.name_old == df.name) & (df.dept_old == 'B') & (df.dept == 'A')]
print(pd.concat([agg_data.loc[df2.index],
                 agg_data.loc[df2.index - 1]]).sort_index())
with following output:
name year dept metric
2 Jane Smith 2018 A 0.502506
3 Jane Smith 2019 B 0.698809
name year dept metric
6 Tony Edwards 2018 B 0.649086
7 Tony Edwards 2019 A 0.881366
I came up with a solution using drop_duplicates, groupby and rank: df2 holds the rows where rank == 2, and df1 the rows where rank == 1 whose name also appears in df2.
df['rk'] = df.sort_values(['name', 'dept', 'year']).drop_duplicates(['name', 'dept'], keep='last').groupby('name').year.rank()
df2 = df[df.rk.eq(2)].drop(columns='rk')
df1 = df[df.rk.eq(1) & df.name.isin(df2.name)].drop(columns='rk')
df1:
name year dept metric
2 Jane Smith 2018 A 0.502506
6 Tony Edwards 2018 B 0.649086
df2:
name year dept metric
3 Jane Smith 2019 B 0.698809
7 Tony Edwards 2019 A 0.881366
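The "previous row" idea can also be phrased with a per-person shift, which avoids joining the frame to itself; a sketch on a subset of the question's data:

```python
import pandas as pd

# Sample rows from the question, already sorted by name and year
df = pd.DataFrame({
    'name': ['Jane Smith', 'Jane Smith',
             'Tony Edwards', 'Tony Edwards', 'Tony Edwards'],
    'year': [2018, 2019, 2017, 2018, 2019],
    'dept': ['A', 'B', 'B', 'B', 'A'],
})

# dept of the previous row *for the same person* (NaN on each person's first row)
prev = df.groupby('name')['dept'].shift()

a_to_b = prev.eq('A') & df['dept'].eq('B')   # first year in B after leaving A
b_to_a = prev.eq('B') & df['dept'].eq('A')   # first year in A after leaving B

# The matching "last year in the old dept" is the row just above a move
first_after_move = df[a_to_b | b_to_a]
last_before_move = df[(a_to_b | b_to_a).shift(-1, fill_value=False)]
print(last_before_move)
print(first_after_move)
```

Because the shift is computed per person, a move is never detected across two different people, even though the two frames are then sliced by plain positional adjacency.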
