Concat 2 DataFrames with 58 entries yields 1 row - python

I have created 2 dataframes with a common index based on Year and District. There are 58 rows in each dataframe and the Year and Districts are exact matches. Yet when I try to join them, I get a new dataframe with all of the columns combined (which is what I want) but only one single row - New York City. That row exists in both dataframes, as do all the rest, but only this one makes it to the merged DF. I have tried a few different methods of joining the dataframes but they all do the same thing. This example uses:
pd.concat([groupeddf, Popdf], axis=1)
This is the Popdf with (Year, District) as Index:
                  Population
Year District
2017 Albany           309612
     Allegany          46894
     Broome           193639
     Cattaraugus       77348
     Cayuga            77603
This is the groupeddf indexed on Year and District (some columns eliminated for clarity):
                  Total SNAP Households  Total SNAP Persons
Year District
2017 Albany                      223057              416302
     Allegany                     36935               69802
     Broome                      201586              363504
     Cattaraugus                  75567              144572
     Cayuga                       64168              121988
This is the merged DF after executing pd.concat([groupeddf, Popdf], axis=1):
                    Population  Total SNAP Households  Total SNAP Persons
Year District
2017 New York City     8622698               11314598            19987958
This shows the merged dataframe has only 1 entry:
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 1 entries, (2017, New York City) to (2017, New York City)
Data columns (total 4 columns):
Population 1 non-null int64
Total SNAP Households 1 non-null int64
Total SNAP Persons 1 non-null int64
Total SNAP Benefits 1 non-null float64
dtypes: float64(1), int64(3)
memory usage: 170.0+ bytes
UPDATE: I tried another approach and it demonstrates that the indices, which appear identical to me, are not being seen as identical.
When I execute this code, I get duplicates instead of a merge:
combined_df = groupeddf.merge(Popdf, how='outer', left_index=True, right_index=True)
The results look like this:
Year District
2017 Albany        223057.0  416302.0
     Albany             NaN       NaN
     Allegany       36935.0   69802.0
     Allegany           NaN       NaN
     Broome        201586.0  363504.0
     Broome             NaN       NaN
     Cattaraugus    75567.0  144572.0
     Cattaraugus        NaN       NaN
The only exception is when you get down to New York City. That one does not duplicate, so it is actually seen as the same index. So there is something wrong with the data, but I am not sure what.

Did you try using merge, like this:
combined_df = pd.merge(groupeddf, Popdf, how='inner', on=['Year', 'District'])
I used an inner join since you want to combine only where the district and year exist in both dataframes. If you want to keep everything in the left dataframe but only the matching rows from the right, do a left join, etc.

It took a while but I finally sorted it out. The District name in the population dataframe had a space at the end of the name, where there was no space in the SNAP df:
"Albany " vs "Albany"
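For anyone hitting the same thing, a minimal sketch of the fix: strip stray whitespace from the District level before concatenating. The dataframe names come from the question; the two-row data below is invented for illustration.

```python
import pandas as pd

# Hypothetical reproduction: the District level in Popdf carries a
# trailing space ("Albany ") that groupeddf lacks.
Popdf = pd.DataFrame(
    {"Population": [309612, 46894]},
    index=pd.MultiIndex.from_tuples(
        [(2017, "Albany "), (2017, "Allegany ")], names=["Year", "District"]
    ),
)
groupeddf = pd.DataFrame(
    {"Total SNAP Households": [223057, 36935]},
    index=pd.MultiIndex.from_tuples(
        [(2017, "Albany"), (2017, "Allegany")], names=["Year", "District"]
    ),
)

# Strip whitespace from the District level so the indices line up
Popdf.index = Popdf.index.set_levels(
    Popdf.index.levels[1].str.strip(), level="District"
)

combined = pd.concat([groupeddf, Popdf], axis=1)
print(combined)
```

With the level cleaned, the concat aligns on all rows instead of just the one district that happened to match exactly.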

Related

How to collapse all rows in pandas dataframe across all columns

I am trying to collapse all the rows of a dataframe into one single row across all columns.
My data frame looks like the following:
name  job       value
bob   business  100
NaN   dentist   NaN
jack  NaN       NaN
I am trying to get the following output:
name      job               value
bob jack  business dentist  100
I am trying to group across all columns, I do not care if the value column is converted to dtype object (string).
I'm just trying to collapse all the rows across all columns.
I've tried groupby(index=0) but did not get good results.
You could apply join:
out = df.apply(lambda x: ' '.join(x.dropna().astype(str))).to_frame().T
Output:
name job value
0 bob jack business dentist 100.0
Try this:
new_df = df.agg(lambda x: x.dropna().astype(str).tolist()).str.join(' ').to_frame().T
Output:
>>> new_df
name job value
0 bob jack business dentist 100.0

Compare different df's row by row and return changes

Every month I collect data that contains details of employees to be stored in our database.
I need to find a solution to compare the data stored in the previous month to the data received and, for each row that any of the columns had a change, it would return into a new dataframe.
I would also need to know somehow which columns in each row of this new returned dataframe had a change when this comparison happened.
There are also some important details to mention:
Each column can also contain blank values in any of the dataframes;
The dataframes have the same column names but not necessarily the same data type;
The dataframes do not have the same number of rows necessarily;
If a row does not find its Index match, do not return it to the new dataframe;
The rows of the dataframes can be matched by a column named "Index"
So, for example, we would have this dataframe (which is just a slice of the real one as it has 63 columns):
df1:
Index  Department  Salary   Manager   Email     Start_Date
1      IT          6000.00  Jack      ax#i.com  01-01-2021
2      HR          7000     O'Donnel  ay#i.com
3      MKT         $7600    Maria     d         30-06-2021
4      I'T         8000     Peter     az#i.com  14-07-2021
df2:
Index  Department  Salary   Manager   Email         Start_Date
1      IT          6000.00  Jack      ax#i.com      01-01-2021
2      HR          7000     O'Donnel  ay#i.com      01-01-2021
3      MKT         7600     Maria     dy#i.com      30-06-2021
4      IT          8000     Peter     az#i.com      14-07-2021
5      IT          9000     John      NOT PROVIDED
6      IT          9900     John      NOT PROVIDED
df3:
Index  Department  Salary  Manager   Email     Start_Date
2      HR          7000    O'Donnel  ay#i.com  01-01-2021
3      MKT         7600    Maria     dy#i.com  30-06-2021
4      IT          8000    Peter     az#i.com  14-07-2021
The differences in this example are:
Start date added in row of Index 2
Salary format corrected and email corrected for row Index 3
Department format corrected for row Index 4
What would be the best way to do this comparison?
I am not sure if there is an easy solution to understand what changed in each field but returning the dataframe with rows that had at least 1 change would be helpful.
Thank you for the support!
I think compare could do the trick: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.compare.html
But first you would need to align the rows between old and new dataframe via the index:
new_df_to_compare=new_df.loc[old_df.index]
When the datatypes don't match, you also need to align them:
new_df_to_compare = new_df_to_compare.astype(old_df.dtypes.to_dict())
Then compare should work just like this:
difference_df = old_df.compare(new_df_to_compare)
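Putting those three steps together, a minimal runnable sketch. The column names and values below are invented; the intersection step guards against rows whose Index has no match in the other snapshot, per the question's requirement that unmatched rows be dropped.

```python
import pandas as pd

# Toy stand-ins for the monthly snapshots, keyed on "Index"
old_df = pd.DataFrame(
    {"Index": [2, 3], "Salary": [7000.0, 7600.0], "Email": ["ay#i.com", "d"]}
).set_index("Index")
new_df = pd.DataFrame(
    {"Index": [2, 3, 5],
     "Salary": ["7000", "7600", "9000"],       # arrived as strings this month
     "Email": ["ay#i.com", "dy#i.com", "NOT PROVIDED"]}
).set_index("Index")

# 1) keep only rows whose Index exists in both snapshots
new_aligned = new_df.loc[new_df.index.intersection(old_df.index)]
# 2) align dtypes to the old snapshot
new_aligned = new_aligned.astype(old_df.dtypes.to_dict())
# 3) rows/columns that changed, old ("self") vs new ("other") values
difference_df = old_df.compare(new_aligned)
print(difference_df)
```

compare only keeps rows and columns where something actually changed, which answers both parts of the question: which rows changed, and which fields in them.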

When merging DataFrames on a common column like ID (primary key), how do you handle data that appears more than once for a single ID in the second df?

So I have two dfs.
DF1
Superhero ID  Superhero   City
212121        Spiderman   New york
364331        Ironman     New york
678523        Batman      Gotham
432432        Dr Strange  New york
665544        Thor        Asgard
123456        Superman    Metropolis
555555        Nightwing   Gotham
666666        Loki        Asgard
Df2
SID     Mission End date
665544  10/10/2020
665544  03/03/2021
212121  02/02/2021
665544  05/12/2020
212121  15/07/2021
123456  03/06/2021
666666  12/10/2021
I need to create a new df that summarizes how many heroes are in each city and in which quarter will their missions be complete. I'll be able to match the superhero (and their city) in df1 to the mission end date via their Superhero ID or SID in Df2 ('Superhero Id'=='SID'). Superhero IDs appear only once in Df1 but can appear multiple times in DF2.
Ultimately I need a count for the total no. of heroes in the different cities (which I can do - see below) as well as how many heroes will be free per quarter.
These are the thresholds for the quarters
Quarter 1 – Apr, May, Jun
Quarter 2 – Jul, Aug, Sept
Quarter 3 – Oct, Nov, Dec
Quarter 4 – Jan, Feb, Mar
The following code tells me how many heroes are in each city:
df_Count = pd.DataFrame(df1.City.value_counts().reset_index())
Which produces:
City Count
New york 3
Gotham 2
Asgard 2
Metropolis 1
I can also convert the dates into datetime format via the following operation:
#Convert to datetime series
Df2['Mission End date'] = pd.to_datetime(Df2['Mission End date'], dayfirst=True)
Ultimately I need a new df that looks like this
City        Total Count  No. of heroes free in Q3  No. of heroes free in Q4  Free in Q1 2021+
New york    3            2                         0                         1
Gotham      2            2                         2                         0
Asgard      2            1                         2                         0
Metropolis  1            0                         0                         1
If anyone can help me create the appropriate quarters and sort them into the appropriate columns I'd be extremely grateful. I'd also like a way to handle heroes having multiple mission end dates; I can't ignore them, I need to still count them. I suspect I'll need to create a custom function which I can then apply to each row via the apply() method and a lambda expression. This issue has been a pain for a while now so I'd appreciate all the help I can get. Thank you very much :)
After merging your dataframe with
df = df1.merge(df2, left_on='Superhero ID', right_on='SID')
And converting your date column to datetime format
df = df.assign(mission_end_date=lambda x: pd.to_datetime(x['Mission End date'], dayfirst=True))
You can create two columns; one to extract the quarter and one to extract the year of the newly created datetime column
df = (df.assign(quarter_end_date=lambda x: x.mission_end_date.dt.quarter)
        .assign(year_end_date=lambda x: x.mission_end_date.dt.year))
And combine them into a column that shows the quarter in a format Qx, yyyy
df = df.assign(quarter_year_end=lambda x: 'Q' + x.quarter_end_date.astype(str) + ', ' + x.year_end_date.astype(str))
Finally, groupby the city and quarter, count the number of superheroes, and pivot the dataframe to get your desired result:
result = (df.groupby(['City', 'quarter_year_end'])
            .count()
            .reset_index()
            .pivot(index='City', columns='quarter_year_end', values='Superhero'))
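Putting the steps above together, a runnable sketch on toy data (rows invented from the sample). Note this uses calendar quarters via dt.quarter; the Apr-Jun=Q1 scheme the question defines would need a 3-month shift before taking the quarter.

```python
import pandas as pd

df1 = pd.DataFrame({
    "Superhero ID": [212121, 665544],
    "Superhero": ["Spiderman", "Thor"],
    "City": ["New york", "Asgard"],
})
df2 = pd.DataFrame({
    "SID": [665544, 212121, 212121],
    "Mission End date": ["10/10/2020", "02/02/2021", "15/07/2021"],
})

# merge missions onto heroes (one row per mission), then label "Qx, yyyy"
df = df1.merge(df2, left_on="Superhero ID", right_on="SID")
end = pd.to_datetime(df["Mission End date"], format="%d/%m/%Y")
df["quarter_year_end"] = "Q" + end.dt.quarter.astype(str) + ", " + end.dt.year.astype(str)

# count heroes per city and quarter, then pivot quarters into columns
out = (
    df.groupby(["City", "quarter_year_end"])["Superhero"].count()
      .reset_index()
      .pivot(index="City", columns="quarter_year_end", values="Superhero")
      .fillna(0)
      .astype(int)
)
print(out)
```

Because the merge is one row per mission, heroes with multiple end dates are counted once per quarter they appear in, which matches the requirement of not ignoring repeat missions.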

How to aggregate rows in a pandas dataframe

I have a dataframe, shown in image 1. It is a sample of pubs in London, UK (3337 pubs/rows), and the geometry is at an LSOA level. In some LSOAs there is more than 1 pub. I want my dataframe to summarise the number of pubs in every LSOA. I already have the information by using
psdf['lsoa11nm'].value_counts()
prints out:
City of London 001F 103
City of London 001G 40
Westminster 013B 36
Westminster 018A 36
Westminster 013E 30
...
Lambeth 005A 1
Croydon 043C 1
Hackney 002E 1
Merton 022D 1
Bexley 008B 1
Name: lsoa11nm, Length: 1630, dtype: int64
I can't use this as a new dataframe because it is a Series with the LSOA as the key and the count as a single column, as opposed to two columns where one would be lsoa11nm and the other the pub count.
Does anyone know how to groupby the dataframe so that there will be only one row for every lsoa, that says how many pubs are in it?
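One approach, sketched on toy data (only the column name lsoa11nm comes from the question): group on the LSOA column and take the group sizes, which gives a proper two-column DataFrame rather than a keyed Series.

```python
import pandas as pd

# Hypothetical stand-in for the pubs dataframe
psdf = pd.DataFrame({"lsoa11nm": [
    "City of London 001F", "City of London 001F", "Westminster 013B",
]})

# one row per LSOA, with the number of pubs in it
pub_counts = psdf.groupby("lsoa11nm").size().reset_index(name="pub_count")
print(pub_counts)
```

reset_index(name=...) is what turns the grouped sizes back into a flat frame with a named count column, which is exactly the two-column shape the question asks for.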

Output after groupby and aggregation

I have a pandas DataFrame. When I do a GROUP BY with an aggregation function such as min or max, I get only a partial result, namely the numeric column on which I performed the min/max aggregation. How can I get the full row, i.e. all the data corresponding to this min/max?
The dataframe looks like:
Place Year Time TimeS
BOSTON 1973 02:16:03 8163
FUKUOKA 1973 02:11:45 7905
NEW YORK 1973 02:21:54 8514
BERLIN 1974 02:44:53 9893
BOSTON 1974 02:13:39 8019
FUKUOKA 1974 02:11:32 7892
NEW YORK 1974 02:26:30 8790
I want the min or max time realized per year, together with the city. I can only get the time with (marathon is the name of the pandas DataFrame):
marathon.groupby('Year').TimeS.max()
that gives:
1973 02:21:54
1974 02:44:53
How can I have the place that corresponds to this time?
Namely:
NEW YORK 1973 02:21:54
BERLIN 1974 02:44:53
There are many ways to do this, definitely. Here are two:
marathon[marathon.TimeS == marathon.groupby('Year').TimeS.transform('max')]
or
marathon[marathon.TimeS.isin(marathon.groupby('Year').TimeS.max())]
Let's check out some of these intermediate objects
In [29]: marathon.groupby('Year').TimeS.max()
Out[29]:
Year
1973 8514
1974 9893
Name: TimeS, dtype: int64
So we get a series, but only of two values. So we can index the dataframe wherever the column values are equal to one of these, which is the second solution.
The first solution uses transform('max') instead, which preserves the size of the dataframe:
In [30]: marathon.groupby('Year').TimeS.transform('max')
Out[30]:
0 8514
1 8514
2 8514
3 9893
4 9893
5 9893
6 9893
Name: TimeS, dtype: int64
So now it's the same size and we can just compare equality directly to the columns that it's equal to.
Note that if the max values occur multiple times, both of these methods will return the duplicates as well---that may or may not be what you want.
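A third option worth a mention is idxmax, which returns the row label of each group's maximum (one label per group, even with ties), so you can index straight back into the frame. A minimal sketch on a few rows invented from the sample:

```python
import pandas as pd

marathon = pd.DataFrame({
    "Place": ["BOSTON", "NEW YORK", "BERLIN", "BOSTON"],
    "Year":  [1973, 1973, 1974, 1974],
    "TimeS": [8163, 8514, 9893, 8019],
})

# idxmax gives the row label of each group's max; .loc pulls the full rows
maxes = marathon.loc[marathon.groupby("Year").TimeS.idxmax()]
print(maxes)
```

Unlike the equality/isin approaches, this always returns exactly one row per year, which may or may not be what you want when there are ties.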
