Adding missing dates to a dataframe - python

I have a dataframe with a list of One Piece manga chapters, which currently looks like this:
             #                                      Title  Pages
Date
1997-07-19   1  Romance Dawn - The Dawn of the Adventure     53
1997-07-28   2                That Guy, "Straw Hat Luffy"     23
1997-08-04   3           Introducing "Pirate Hunter Zoro"     21
1997-08-11   4           Marine Captain "Axe-Hand Morgan"     19
1997-08-25   5           Pirate King and Master Swordsman     19
1997-09-01   6                      The First Crew Member     23
1997-09-08   7                                    Friends     20
1997-09-13   8                           Introducing Nami     19
Although every episode is supposed to be issued weekly, sometimes they are delayed or on break, resulting in irregular intervals between the dates. What I would like to do is add the missing dates. For example, between 1997-08-11 and 1997-08-25 there should be a 1997-08-18 row (7 days after 1997-08-11) for the week when no episode was issued. Could you help me work out how to code this?
Thank you.

You should use the built-in shift method:
df['day_between'] = df['Date'].shift(-1) - df['Date']
The output of print(df[['Date', 'day_between']]) is then:
        Date day_between
0 1997-07-19      9 days
1 1997-07-28      7 days
2 1997-08-04      7 days
3 1997-08-11     14 days
4 1997-08-25      7 days
5 1997-09-01      7 days
6 1997-09-08      5 days
7 1997-09-13         NaT
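To then actually insert the missing weeks, here is a minimal sketch building on that (it assumes Date is a regular datetime column, as in the snippet above, and inserts one filler row per fully missed week, so gaps of 21+ days are covered too):
import pandas as pd

gaps = df['Date'].diff().shift(-1)  # time until the next issue
new_dates = []
for date, gap in zip(df['Date'], gaps):
    # one filler row per complete week skipped inside the gap
    n_missed = int(gap.days // 7) - 1 if pd.notna(gap) else 0
    new_dates += [date + pd.Timedelta(days=7 * (k + 1)) for k in range(n_missed)]
filler = pd.DataFrame({'Date': new_dates})
df = pd.concat([df, filler]).sort_values('Date').reset_index(drop=True)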

I used relativedelta and a list comprehension to get a 14-day interval per row, and .shift(1) with np.where() to compare against the previous row, with a 1 marking a row before which we would want to insert one. Then I looped through the dataframe and appended the relevant rows to another dataframe, used pd.concat to bring the two dataframes together, sorted by date, deleted the helper columns, and reset the index.
There may be some gaps left, as others have mentioned, such as 22+ days, but this should get you pointed in the right direction. Perhaps you could turn it into a function and run it multiple times, which is why I added .reset_index(drop=True) at the end. Obviously you could make this more advanced, but I hope this helps.
from dateutil.relativedelta import relativedelta
import pandas as pd
import numpy as np

df = pd.DataFrame({'Date': {0: '1997-07-19',
                            1: '1997-07-28',
                            2: '1997-08-04',
                            3: '1997-08-11',
                            4: '1997-08-25',
                            5: '1997-09-01',
                            6: '1997-09-08',
                            7: '1997-09-13'},
                   '#': {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8},
                   'Title': {0: 'Romance Dawn - The Dawn of the Adventure',
                             1: 'That Guy, "Straw Hat Luffy"',
                             2: 'Introducing "Pirate Hunter Zoro"',
                             3: 'Marine Captain "Axe-Hand Morgan"',
                             4: 'Pirate King and Master Swordsman',
                             5: 'The First Crew Member',
                             6: 'Friends',
                             7: 'Introducing Nami'},
                   'Pages': {0: 53, 1: 23, 2: 21, 3: 19, 4: 19, 5: 23, 6: 20, 7: 19}})
df['Date'] = pd.to_datetime(df['Date'])
# date 14 days after each issue; the next row starting on or after it means a week was skipped
df['Date2'] = [d + relativedelta(days=14) for d in df['Date']]
# 1 marks a row that starts 14+ days after the previous row
df['Date3'] = np.where(df['Date'] >= df['Date2'].shift(1), 1, 0)
df1 = pd.DataFrame({})
n = 0
for j in df['Date3']:
    n += 1
    if j == 1:
        new_row = pd.DataFrame({"Date": df['Date'][n-1] - relativedelta(days=7)}, index=[n])
        df1 = pd.concat([df1, new_row])  # DataFrame.append was removed in pandas 2.0
df = pd.concat([df, df1]).sort_values('Date').drop(['Date2', 'Date3'], axis=1).reset_index(drop=True)
df
Output:
         Date    #                                      Title  Pages
0  1997-07-19  1.0  Romance Dawn - The Dawn of the Adventure   53.0
1  1997-07-28  2.0                That Guy, "Straw Hat Luffy"   23.0
2  1997-08-04  3.0           Introducing "Pirate Hunter Zoro"   21.0
3  1997-08-11  4.0           Marine Captain "Axe-Hand Morgan"   19.0
4  1997-08-18  NaN                                        NaN    NaN
5  1997-08-25  5.0           Pirate King and Master Swordsman   19.0
6  1997-09-01  6.0                      The First Crew Member   23.0
7  1997-09-08  7.0                                    Friends   20.0
8  1997-09-13  8.0                           Introducing Nami   19.0

Related

Python: increase code efficiency of a multiple-column filter

I was wondering if someone could help me find a more efficient way to run my code.
I have a dataset that contains 7 columns: country, sector, year, month, week, weekday, value.
The year column has only 3 values: 2019, 2020, 2021.
What I have to do here is subtract every value in 2020 and 2021 from the corresponding 2019 value.
But it's more complicated than that, because I also need to match on the weekday column.
For example, I need to take the year 2020, month 1, week 1, weekday 0 (Monday) value and subtract
the year 2019, month 1, week 1, weekday 0 (Monday) value; if no match can be found, that row is skipped, and so on (i.e. the weekday - Monday, Tuesday, ... - must match).
And here is my code. It runs, but it takes hours :(
for i in itertools.product(year_list, country_list, sector_list, month_list, week_list, weekday_list):
    try:
        data_2 = df_carbon[(df_carbon['country'] == i[1])
                           & (df_carbon['sector'] == i[2])
                           & (df_carbon['year'] == i[0])
                           & (df_carbon['month'] == i[3])
                           & (df_carbon['week'] == i[4])
                           & (df_carbon['weekday'] == i[5])]['co2'].tolist()[0]
        data_1 = df_carbon[(df_carbon['country'] == i[1])
                           & (df_carbon['sector'] == i[2])
                           & (df_carbon['year'] == 2019)
                           & (df_carbon['month'] == i[3])
                           & (df_carbon['week'] == i[4])
                           & (df_carbon['weekday'] == i[5])]['co2'].tolist()[0]
        co2.append(data_2 - data_1)
        country.append(i[1])
        sector.append(i[2])
        year.append(i[0])
        month.append(i[3])
        week.append(i[4])
        weekday.append(i[5])
    except:
        pass
I changed the nested for loops to itertools, but it is still not fast enough. Any other ideas?
Many thanks :)
##############################
Here is the sample dataset:
  country           co2 sector        date  week  weekday  year  month
   Brazil    108.767782  Power  2019-01-01     0        1  2019      1
    China  14251.044482  Power  2019-01-01     0        1  2019      1
EU27 & UK   1886.493814  Power  2019-01-01     0        1  2019      1
   France     53.856398  Power  2019-01-01     0        1  2019      1
  Germany    378.323440  Power  2019-01-01     0        1  2019      1
    Japan     21.898788     IA  2021-11-30    48        1  2021     11
   Russia     19.773822     IA  2021-11-30    48        1  2021     11
    Spain     42.293944     IA  2021-11-30    48        1  2021     11
       UK     56.425121     IA  2021-11-30    48        1  2021     11
       US    166.425000     IA  2021-11-30    48        1  2021     11
or this
import pandas as pd
pd.DataFrame({
    'year': [2019, 2020, 2021],
    'co2': [1, 2, 3],
    'country': ['Brazil', 'Brazil', 'Brazil'],
    'sector': ['power', 'power', 'power'],
    'month': [1, 1, 1],
    'week': [0, 0, 0],
    'weekday': [0, 0, 0]
})
pandas can subtract two dataframes index-by-index, so the idea is to separate your data into a minuend and a subtrahend, set ['country', 'sector', 'month', 'week', 'weekday'] as their indices, simply subtract them, and remove the rows (via dropna) where no match in year 2019 is found.
df_carbon = pd.DataFrame({
    'year': [2019, 2020, 2021],
    'co2': [1, 2, 3],
    'country': ['ab', 'ab', 'bc']
})
index = ['country']
# index = ['country', 'sector', 'month', 'week', 'weekday']
df_2019 = df_carbon[df_carbon['year'] == 2019].set_index(index)
df_rest = df_carbon[df_carbon['year'] != 2019].set_index(index)
ans = (df_rest - df_2019).reset_index().dropna()
ans['year'] += 2019
Two additional points:
- The subtraction also covers the year column, so I need to add 2019 back.
- I created a small example of df_carbon to test my code. If you had provided a more realistic version in text form, I would have tested my code using your data.
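Applied to the full dataset, the same idea would look roughly like this (a sketch assuming df_carbon has the columns shown in the question; non-numeric columns such as date are left out of the subtraction):
index = ['country', 'sector', 'month', 'week', 'weekday']
df_2019 = df_carbon[df_carbon['year'] == 2019].set_index(index)
df_rest = df_carbon[df_carbon['year'] != 2019].set_index(index)

# subtract only the numeric columns of interest; unmatched rows become NaN and are dropped
ans = (df_rest[['co2', 'year']] - df_2019[['co2', 'year']]).dropna().reset_index()
ans['year'] += 2019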

Performing an Operation On Grouped Pandas Data

I have a pandas DataFrame with the following information:
 year    state        candidate  percvotes  electoral_votes  perc_evotes  vote_frac  vote_int
 1976  ALABAMA    CARTER, JIMMY  55.727269                9     5.015454   0.015454         5
 1976  ALABAMA     FORD, GERALD  42.614871                9     3.835338   0.835338         3
 1976  ALABAMA   MADDOX, LESTER   0.777613                9     0.069985   0.069985         0
 1976  ALABAMA  BUBAR, BENJAMIN   0.563808                9     0.050743   0.050743         0
 1976  ALABAMA        HALL, GUS   0.165194                9     0.014867   0.014867         0
where percvotes is the percentage of the total votes cast that the candidate received (calculated earlier), electoral_votes is the number of electoral college votes for that state, perc_evotes is the calculated percentage of the electoral votes, and vote_frac and vote_int are the fractional and whole-number parts of the electoral votes earned, respectively. This data repeats for each election year, and then by state within each year; each candidate has a similar row for each state.
What I want to do is allocate the leftover electoral votes to the candidate with the highest fraction. This number differs by state and year. In this case there would be 1 leftover electoral vote (9 total votes, of which 5+3=8 are already allocated) and the remaining one would go to 'FORD, GERALD', since he has 0.835338 in the vote_frac column. Sometimes there are 2 or 3 left unallocated.
I have a solution that adds the data to a dictionary, but it uses for loops. I know there must be a better way to do this in a more "pandas" way. I have touched on groupby in this loop, but I feel like I am not utilizing pandas to its full potential.
My for loop:
results = {}
grouped = electdf.groupby(["year", "state"])
for key, group in grouped:
    year, state = key
    group['vote_remaining'] = group['electoral_votes'] - group['vote_int'].sum()
    remaining = group['vote_remaining'].iloc[0]
    top_fracs = group['vote_frac'].nlargest(remaining)
    group['total'] = (group['vote_frac'].isin(top_fracs)).astype(int) + group['vote_int']
    if year not in results:
        results[year] = {}
    for candidate, evotes in zip(group['candidate'], group['total']):
        if candidate not in results[year] and evotes:
            results[year][candidate] = 0
        if evotes:
            results[year][candidate] += evotes
Thanks in advance!
Perhaps an apply function which finds the available electoral votes and the number of votes already cast, and conditionally updates the 'vote_int' value of the row with the max 'vote_frac' by the difference between the available and cast votes:
import pandas as pd

df = pd.DataFrame({'year': {0: 1976, 1: 1976, 2: 1976, 3: 1976, 4: 1976},
                   'state': {0: 'ALABAMA', 1: 'ALABAMA', 2: 'ALABAMA',
                             3: 'ALABAMA', 4: 'ALABAMA'},
                   'candidate': {0: 'CARTER, JIMMY', 1: 'FORD, GERALD',
                                 2: 'MADDOX, LESTER', 3: 'BUBAR, BENJAMIN',
                                 4: 'HALL, GUS'},
                   'percvotes': {0: 55.727269, 1: 42.614871, 2: 0.777613, 3: 0.563808,
                                 4: 0.165194},
                   'electoral_votes': {0: 9, 1: 9, 2: 9, 3: 9, 4: 9},
                   'perc_evotes': {0: 5.015454, 1: 3.835338, 2: 0.069985,
                                   3: 0.050743, 4: 0.014867},
                   'vote_frac': {0: 0.015454, 1: 0.835338, 2: 0.069985,
                                 3: 0.050743, 4: 0.014867},
                   'vote_int': {0: 5, 1: 3, 2: 0, 3: 0, 4: 0}})

def apply_extra_e_votes(grp):
    # Get the available electoral votes
    # (assumes the first row in the group contains the
    # correct number of electoral votes for the group)
    available_e_votes = grp['electoral_votes'].iloc[0]
    # Get the sum of the vote_int column
    current_e_votes = grp['vote_int'].sum()
    # If there are more available votes than votes cast
    if available_e_votes > current_e_votes:
        # Update the 'vote_int' column at the max value of 'vote_frac'
        grp.loc[
            grp['vote_frac'].idxmax(),
            'vote_int'
        ] += available_e_votes - current_e_votes  # (remaining votes)
    return grp

# Group by and apply the function
new_df = df.groupby(['year', 'state']).apply(apply_extra_e_votes)

# For display
print(new_df.to_string(index=False))
Output:
 year    state        candidate  percvotes  electoral_votes  perc_evotes  vote_frac  vote_int
 1976  ALABAMA    CARTER, JIMMY  55.727269                9     5.015454   0.015454         5
 1976  ALABAMA     FORD, GERALD  42.614871                9     3.835338   0.835338         4
 1976  ALABAMA   MADDOX, LESTER   0.777613                9     0.069985   0.069985         0
 1976  ALABAMA  BUBAR, BENJAMIN   0.563808                9     0.050743   0.050743         0
 1976  ALABAMA        HALL, GUS   0.165194                9     0.014867   0.014867         0
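Note that the apply above hands every leftover vote to the single row with the largest fraction; when 2 or 3 votes are left over, you may want them spread across the 2 or 3 largest fractions instead. A vectorized sketch of that variant (assuming the same df as above):
# leftover votes per (year, state): available minus already allocated
allocated = df.groupby(['year', 'state'])['vote_int'].transform('sum')
leftover = df['electoral_votes'] - allocated

# rank fractions within each (year, state) group, largest first
rank = df.groupby(['year', 'state'])['vote_frac'].rank(method='first', ascending=False)

# each of the top-`leftover` fractions receives one extra vote
df['vote_int'] += (rank <= leftover).astype(int)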

How can I create a stacked bar chart in matplotlib where the stacks vary from bar to bar?

So I have a pandas DataFrame that looks something like this:
   year country  total
0  2010     USA     10
1  2010    CHIN     12
2  2011     USA      8
3  2011    JAPN     12
4  2012    KORR      7
5  2012     USA     10
6  2013    CHIN      9
7  2013     USA     13
I'd like to create a stacked bar chart in matplotlib, where there is one bar for each year and stacks for the two countries in that year with height based on the total column. The color should be based on the country and be represented in the legend.
I can't seem to figure out how to make this happen. I think I could do it using for loops to go through each year and each country, then construct each bar with the color corresponding to values in a dictionary. However, this would create an individual legend entry for each individual bar, so there would be 8 total values in the legend. It is also a horribly inefficient way to graph in matplotlib, as far as I can tell.
Can anyone give some pointers?
You need to transform your df first. It can be done via the below:
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({'year': {0: 2010, 1: 2010, 2: 2011, 3: 2011, 4: 2012, 5: 2012, 6: 2013, 7: 2013},
                   'country': {0: 'USA', 1: 'CHIN', 2: 'USA', 3: 'JAPN', 4: 'KORR', 5: 'USA', 6: 'CHIN', 7: 'USA'},
                   'total': {0: 10, 1: 12, 2: 8, 3: 12, 4: 7, 5: 10, 6: 9, 7: 13}})
df2 = df.groupby(['year', 'country'])['total'].sum().unstack('country')
print(df2)
#
# country  CHIN  JAPN  KORR   USA
# year
# 2010     12.0   NaN   NaN  10.0
# 2011      NaN  12.0   NaN   8.0
# 2012      NaN   NaN   7.0  10.0
# 2013      9.0   NaN   NaN  13.0
#
ax = df2.plot(kind='bar', stacked=True)
plt.show()
Result: (stacked bar chart with one bar per year and stacked segments colored by country)

Select rows where values in at least one column is negative

Given a DataFrame:
from numpy import nan
import pandas as pd

df = pd.DataFrame(
    {'AgeAtMedStart': {1: -46.47, 2: 46.47, 3: 46.8, 4: 51.5, 5: 51.5},
     'AgeAtMedStop': {1: 46.8, 2: 46.8, 3: nan, 4: -51.9, 5: 51.81},
     'MedContinuing': {1: 'No', 2: 'No', 3: 'Yes', 4: 'No', 5: 'No'},
     'Medication': {1: 'Med1', 2: 'Med2', 3: 'Med3', 4: 'Med4', 5: 'Med4'},
     'YearOfMedStart': {1: 2016.0, 2: 2016.0, 3: 2016.0, 4: 2016.0, 5: 2016.0}}
)
df
   AgeAtMedStart  AgeAtMedStop MedContinuing Medication  YearOfMedStart
1         -46.47         46.80            No       Med1          2016.0
2          46.47         46.80            No       Med2          2016.0
3          46.80           NaN           Yes       Med3          2016.0
4          51.50        -51.90            No       Med4          2016.0
5          51.50         51.81            No       Med4          2016.0
I want to filter to retain the rows where any of the numeric values in the "AgeAt*" columns is negative.
My expected output is the row with index 1, since "AgeAtMedStart" has the value -46.47, and the row with index 4, since "AgeAtMedStop" is -51.9, so the output would be
   AgeAtMedStart  AgeAtMedStop MedContinuing Medication  YearOfMedStart
1         -46.47          46.8            No       Med1          2016.0
4          51.50         -51.9            No       Med4          2016.0
EDIT1:
So I've tried the different answers provided so far, but they all return an empty dataframe. I believe part of the problem is that I also have columns called AgeAtMedStartFlag and AgeAtMedStopFlag, which contain strings. So for this sample csv:
RecordKey Medication CancerSiteForTreatment CancerSiteForTreatmentCode TreatmentLineCodeKey AgeAtMedStart AgeAtMedStartFlag YearOfMedStart MedContinuing AgeAtMedStop AgeAtMedStopFlag ChangeOfTreatment
1 Drug1 Site1 C1.0 First -46.47 Year And Month Are Known But Day Is Missing And Coded To 15 2016 No 46.8 Year And Month Are Known But Day Is Missing And Coded To 15 Yes
1 Drug2 Site2 C1.1 First 46.47 Year And Month Are Known But Day Is Missing And Coded To 15 2016 No 46.8 Year And Month Are Known But Day Is Missing And Coded To 15 Yes
1 Drug3 Site3 C1.2 First 46.8 Year And Month Are Known But Day Is Missing And Coded To 15 2016 Yes Yes
2 Drug4 Site4 C1.3 First 51.5 2016 No 51.9 Yes
2 Drug5 Site5 C1.4 First 51.5 2016 No -51.81 Yes
3 Drug6 Site6 C1.5 First 73.93 2016 No 74.42 Yes
3 Drug7 Site7 C1.6 First 73.93 2016 No 74.42 Yes
4 Drug8 Site8 C1.7 First 36.66 2015 No 37.24 Yes
4 Drug9 Site9 C1.8 First 36.66 2015 No 37.24 Yes
4 Drug10 Site10 C1.9 First 36.66 2015 No 37.24 Yes
9 Drug11 Site11 C1.10 First 43.55 2016 No 43.68 Yes
9 Drug12 Site12 C1.11 First 43.22 2016 No 43.49 Yes
9 Drug13 Site13 C1.12 First 43.55 2016 No 43.68 Yes
9 Drug14 Site14 C1.13 First 43.22 2016 No 43.49 Yes
10 Drug15 Site15 C1.14 First 74.42 2016 No 74.84 Yes
10 Drug16 Site16 C1.15 First 73.56 2015 No 73.98 Yes
10 Drug17 Site17 C1.16 First 73.56 2015 No 73.98 No
10 Drug18 Site18 C1.17 First 74.42 2016 No 74.84 No
10 Drug19 Site19 C1.18 First 73.56 2015 No 73.98 No
10 Drug20 Site20 C1.19 First 74.42 2016 No 74.84 No
11 Drug21 Site21 C1.20 First 70.72 2013 No 72.76 No
11 Drug22 Site22 C1.21 First 68.76 2011 No 70.62 No
11 Drug23 Site23 C1.22 First 73.43 2016 No 73.96 No
11 Drug24 Site24 C1.23 First 72.76 2015 No 73.43 No
with this change to my script:
age_df = df.columns[(df.columns.str.startswith('AgeAt')) & (~df.columns.str.endswith('Flag'))]
df[df[age_df] < 0].to_excel('invalid.xlsx', 'Benjamin_Button')
It returns:
RecordKey  AgeAtMedStart  AgeAtMedStop   (all other columns are entirely NaN)
1                 -46.47
1
1
2
2                                -51.81
3
3
4
4
4
9
9
9
9
10
10
10
10
10
10
11
11
11
11
Can I modify this implementation to return only the rows where the negatives are and, if possible, the rest of the values for those rows? Or, even better, just the negative ages and the RecordKey for each such row.
Here's a simple one-liner for you. If you need to programmatically determine whether a column is numeric, refer to coldspeed's answer. But if you are OK with explicit column references, a simple method like this will work.
Note I'm also filling NaNs with 0; this meets your requirement even though the data is missing. NaNs can be handled in other ways, but this will suffice here. If you have missing values in other columns you'd like to preserve, that can also be done (I didn't include it here for simplicity).
myData = df.fillna(0).query('AgeAtMedStart < 0 or AgeAtMedStop < 0')
Returns:
   AgeAtMedStart  AgeAtMedStop MedContinuing Medication  YearOfMedStart
1         -46.47          46.8            No       Med1          2016.0
4          51.50         -51.9            No       Med4          2016.0
Pandas' native query method is very handy for simple filter expressions.
Refer to the docs for more info: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html
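For the edited question, here is a short sketch (assuming, as in your script, that the AgeAt* columns are numeric, the *Flag columns are excluded, and RecordKey is a regular column) that keeps only the rows containing at least one negative age:
# age columns, excluding the string 'Flag' columns
age_cols = df.columns[df.columns.str.startswith('AgeAt')
                      & ~df.columns.str.endswith('Flag')]

# whole rows where any age column is negative
bad_rows = df[(df[age_cols] < 0).any(axis=1)]

# or just the RecordKey plus the offending ages
bad_ages = bad_rows[['RecordKey', *age_cols]]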
First get the columns of interest:
cols = [col for col in df if col.startswith('AgeAt')]
Then get the DF with those columns:
df_wanted = df[cols]
Then get the rows:
x = df_wanted[df_wanted < 0]
Of course, if you are looking at multiple columns, some of the cells will contain NaN.

Aggregate a bunch of different data in a single groupby with multiple columns

I have a large pandas DataFrame (let's say of courses at a university) looking like:
ID  name      credits  enrolled  ugrad/grad  year  semester
1   Math      4        62        ugrad       2016  Fall
2   History   3        15        ugrad       2016  Spring
3   Adv Math  3         8        grad        2017  Fall
...
and I want to group it by year and semester and then get a bunch of different aggregate data on it, all at once if I can. For example, I want a total count of courses, a count of only the undergraduate courses, and the sum of enrollment for each semester. I can do each of these individually using value_counts, but I'd like to get output such as:
year  semester  count  count_ugrad  total_enroll
2016  Fall      #      #            #
      Spring    #      #            #
2017  Fall      #      #            #
      Spring    #      #            #
...
Is this possible?
Here I added a new Python course to the data and provided it as a dict to load into the dataframe.
The solution is a combination of the agg() method on a groupby, where the aggregations are provided in a dictionary, and a custom aggregation function for your ugrad requirement:
import pandas as pd

def my_custom_ugrad_aggregator(arr):
    return sum(arr == 'ugrad')

data = {'name': {0: 'Math', 1: 'History', 2: 'Adv Math', 3: 'Python'},
        'year': {0: 2016, 1: 2016, 2: 2017, 3: 2017}, 'credits': {0: 4, 1: 3, 2: 3, 3: 4},
        'semester': {0: 'Fall', 1: 'Spring', 2: 'Fall', 3: 'Spring'},
        'ugrad/grad': {0: 'ugrad', 1: 'ugrad', 2: 'grad', 3: 'ugrad'},
        'enrolled': {0: 62, 1: 15, 2: 8, 3: 8}, 'ID': {0: 1, 1: 2, 2: 3, 3: 4}}
df = pd.DataFrame(data)
   ID  credits  enrolled      name semester ugrad/grad  year
0   1        4        62      Math     Fall      ugrad  2016
1   2        3        15   History   Spring      ugrad  2016
2   3        3         8  Adv Math     Fall       grad  2017
3   4        4         8    Python   Spring      ugrad  2017
print(df.groupby(['year', 'semester']).agg({'name': ['count'], 'enrolled': ['sum'],
                                            'ugrad/grad': my_custom_ugrad_aggregator}))
gives:
               name                 ugrad/grad enrolled
              count my_custom_ugrad_aggregator      sum
year semester
2016 Fall         1                          1       62
     Spring       1                          1       15
2017 Fall         1                          0        8
     Spring       1                          1        8
Use agg with a dictionary specifying how to roll up each column:
df_out = df.groupby(['year', 'semester'])[['enrolled', 'ugrad/grad']]\
           .agg({'ugrad/grad': lambda x: (x == 'ugrad').sum(), 'enrolled': ['sum', 'size']})\
           .set_axis(['Ugrad Count', 'Total Enrolled', 'Count Courses'], axis=1)
df_out
Output:
               Ugrad Count  Total Enrolled  Count Courses
year semester
2016 Fall                1              62              1
     Spring              1              15              1
2017 Fall                0               8              1
     Spring              1               8              1
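On pandas 0.25+, named aggregation produces the same table without the set_axis step; a sketch assuming the same df:
df_out = (df.groupby(['year', 'semester'])
            .agg(ugrad_count=('ugrad/grad', lambda x: (x == 'ugrad').sum()),
                 total_enrolled=('enrolled', 'sum'),
                 count_courses=('enrolled', 'size')))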
