Rename a hundred or more columns in a pandas dataframe - python

I am working with the Johns Hopkins Covid data for personal use to create charts. The data shows cumulative deaths by country, but I want deaths per day. It seems to me the easiest way is to create two dataframes and subtract one from the other. However, the file has dates as column names, so code like df3 = df2 - df1 subtracts the columns with the matching dates. I therefore want to rename all the columns with some easy index, for example 1, 2, 3, ....
I cannot figure out how to do this.

new_names = list(range(data.shape[1]))
data.columns = new_names
This renames the columns of data from 0 upwards.

You could re-shape the data: use dates as row labels, and (country, province) as column labels.
import pandas as pd
covid_csv = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv'
df_raw = (pd.read_csv(covid_csv)
            .set_index(['Country/Region', 'Province/State'])
            .drop(columns=['Lat', 'Long'])
            .transpose())
df_raw.index = pd.to_datetime(df_raw.index)
print(df_raw.iloc[-5:, 0:5])
Country/Region Afghanistan Albania Algeria Andorra Angola
Province/State NaN NaN NaN NaN NaN
2020-07-27 1269 144 1163 52 41
2020-07-28 1270 148 1174 52 47
2020-07-29 1271 150 1186 52 48
2020-07-30 1271 154 1200 52 51
2020-07-31 1272 157 1210 52 52
Now, you can use the rich set of pandas tools for time-series analysis. For example, use diff() to go from cumulative deaths to per-day rates. Or, you could compute N-day moving averages, create time-series plots, ...
print(df_raw.diff().iloc[-5:, 0:5])
Country/Region Afghanistan Albania Algeria Andorra Angola
Province/State NaN NaN NaN NaN NaN
2020-07-27 10.0 6.0 8.0 0.0 1.0
2020-07-28 1.0 4.0 11.0 0.0 6.0
2020-07-29 1.0 2.0 12.0 0.0 1.0
2020-07-30 0.0 4.0 14.0 0.0 3.0
2020-07-31 1.0 3.0 10.0 0.0 1.0
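For instance, a 7-day moving average of the daily counts can be layered on top of diff() (a minimal sketch, using the df_raw built above):
daily = df_raw.diff()
rolling_week = daily.rolling(window=7).mean()
print(rolling_week.iloc[-5:, 0:5])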
Finally, df_raw.sum(level='Country/Region', axis=1) will aggregate all Provinces within a Country.
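On newer pandas versions, where DataFrame.sum(level=...) is no longer available, the same province-to-country aggregation can be written as a groupby over the column level (a sketch, again assuming the df_raw built above):
per_country = df_raw.T.groupby(level='Country/Region').sum().T
per_country_daily = per_country.diff()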

Thanks for the time and effort, but I figured out a simple way.
for i, row in enumerate(df):
    df.rename(columns={row: str(i)}, inplace=True)
to change the column names, and then
for i, row in enumerate(df):
    df.rename(columns={row: str(i + 43853)}, inplace=True)
to change them back to the dates I want.
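For reference, the same renaming can be done without a loop by assigning to df.columns directly (a small sketch, assuming the same 43853 offset used above to map positions back to Excel-style date serials):
df.columns = [str(i) for i in range(df.shape[1])]          # plain positional names
df.columns = [str(i + 43853) for i in range(df.shape[1])]  # back to the date serials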

Related

Groupby two columns on two axes

I'd like to group a pandas dataframe by two different columns on two different axes, but I'm struggling to figure it out.
Sample code:
import numpy as np
import pandas as pd
x = pd.date_range("2022-01-01", "2022-06-01", freq="D")
y = np.arange(0, x.shape[0])
z = np.random.choice(["Jack", "Jul", "John"], size=x.shape[0])
df = pd.DataFrame({"Date": x, "numbers": y, "names": z})
So far I have the following solution; I cannot use .resample because then I lose all the names:
min_ = x.min()
max_ = x.max()
dt_range = pd.date_range(min_, max_, freq="W")
list_ = []
for date in dt_range:
    temp_df = df[df["Date"].dt.week == date.week]
    temp_df = temp_df.groupby("names").sum()
    list_.append(temp_df)
pd.concat(list_, axis=1)
Sample output:
numbers numbers numbers numbers numbers numbers ... numbers numbers numbers numbers numbers numbers
names ...
Jack 0.0 7 36.0 39 53 99 ... 113 237 247 260 416 NaN
John 1.0 16 48.0 54 78 68 ... 436 233 250 262 139 726.0
Jul NaN 12 NaN 40 51 64 ... 221 349 371 395 411 289.0
You can use df.pivot to get this (a groupby has been added first, following comments that pivot alone raises an error), using the below:
df_out = (df.groupby(['names', 'Date'], as_index=False).sum()
            .pivot(index='names', columns='Date', values='numbers'))
However this will output with Date as the column names, rather than 'numbers' as in your question:
Date 2022-01-01 2022-01-02 2022-01-03 ... 2022-05-30 2022-05-31 2022-06-01
names ...
Jack NaN NaN NaN ... NaN NaN NaN
John 0.0 1.0 2.0 ... 149.0 NaN NaN
Jul NaN NaN NaN ... NaN 150.0 151.0
(Note: not an exact match to the output in the question, due to the random data in the df in the question.)
To correct this, you can just set all the columns to be 'numbers' using the below:
df_out.columns = ['numbers']*len(df_out.columns)
numbers numbers numbers numbers ... numbers numbers numbers numbers
names ...
Jack NaN NaN NaN 3.0 ... NaN NaN NaN NaN
John 0.0 1.0 2.0 NaN ... 148.0 149.0 NaN NaN
Jul NaN NaN NaN NaN ... NaN NaN 150.0 151.0
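If the goal really is the weekly grouping from the loop in the question, a pd.Grouper can replace the manual iteration over weeks (a sketch, assuming the df built in the question; it yields one column per week with names on the index):
weekly = (df.groupby(['names', pd.Grouper(key='Date', freq='W')])['numbers'].sum()
          .unstack('Date'))
The same trick of overwriting the resulting column labels with 'numbers' still applies afterwards if needed.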

Merging multiple dataframes with overlapping rows and different columns

I have multiple pandas data frames with some common columns and some overlapping rows. I would like to combine them in such a way that I have one final data frame with all of the columns and all of the unique rows (overlapping/duplicate rows dropped). The remaining gaps should be nans.
I have come up with the function below. In essence it goes through all columns one by one, appending all of the values from each data frame, dropping the duplicates (overlap), and building a new output data frame column by column.
import numpy as np
import pandas as pd

def combine_dfs(dataframes: list):
    ## Identifying all unique columns in all data frames
    columns = []
    for df in dataframes:
        columns.extend(df.columns)
    columns = np.unique(columns)
    ## Appending values from each data frame per column
    output_df = pd.DataFrame()
    for col in columns:
        column = pd.Series(dtype="object", name=col)
        for df in dataframes:
            if col in df.columns:
                column = column.append(df[col])
        ## Removing overlapping data (assuming consistent values)
        column = column[~column.index.duplicated()]
        ## Adding column to output data frame
        column = pd.DataFrame(column)
        output_df = pd.concat([output_df, column], axis=1)
    output_df.sort_index(inplace=True)
    return output_df
df_1 = pd.DataFrame([[10,20,30],[11,21,31],[12,22,32],[13,23,33]], columns=["A","B","C"])
df_2 = pd.DataFrame([[33,43,54],[34,44,54],[35,45,55],[36,46,56]], columns=["C","D","E"], index=[3,4,5,6])
df_3 = pd.DataFrame([[50,60],[51,61],[52,62],[53,63],[54,64]], columns=["E","F"])
print(combine_dfs([df_1,df_2,df_3]))
The output looks like this, as intended:
A B C D E F
0 10.0 20.0 30 NaN 50 60.0
1 11.0 21.0 31 NaN 51 61.0
2 12.0 22.0 32 NaN 52 62.0
3 13.0 23.0 33 43.0 54 63.0
4 NaN NaN 34 44.0 54 64.0
5 NaN NaN 35 45.0 55 NaN
6 NaN NaN 36 46.0 56 NaN
This method works well on small data sets. Is there a way to optimize this?
IIUC you can chain combine_first:
print (df_1.combine_first(df_2).combine_first(df_3))
A B C D E F
0 10.0 20.0 30 NaN 50.0 60.0
1 11.0 21.0 31 NaN 51.0 61.0
2 12.0 22.0 32 NaN 52.0 62.0
3 13.0 23.0 33 43.0 54.0 63.0
4 NaN NaN 34 44.0 54.0 64.0
5 NaN NaN 35 45.0 55.0 NaN
6 NaN NaN 36 46.0 56.0 NaN
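With more than a handful of frames, the chaining generalizes with functools.reduce (a minimal sketch, using the example frames from the question):
from functools import reduce
combined = reduce(lambda left, right: left.combine_first(right), [df_1, df_2, df_3])
print(combined)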

Loop over pandas column names to create lists

Here comes an easy one... I want to create a list from each of the columns in my data frame and tried to loop over it.
for columnName in grouped.iteritems():
    columnName = grouped[columnName]
It gives me a TypeError: '('africa', year (note: africa is one of the columns and year is the index). Does anybody know what is going on here?
This is my dataframe:
continent africa antarctica asia ... north america oceania south america
year ...
2009 NaN NaN 1.0 ... NaN NaN NaN
2010 94.0 1.0 306.0 ... 72.0 12.0 21.0
2011 26.0 NaN 171.0 ... 21.0 2.0 4.0
2012 975.0 28.0 5318.0 ... 480.0 58.0 140.0
2013 1627.0 30.0 7363.0 ... 725.0 124.0 335.0
2014 3476.0 41.0 7857.0 ... 1031.0 202.0 520.0
2015 2999.0 43.0 12048.0 ... 1374.0 256.0 668.0
2016 2546.0 55.0 11429.0 ... 1798.0 325.0 3021.0
2017 7486.0 155.0 18467.0 ... 2696.0 640.0 2274.0
2018 10903.0 340.0 22979.0 ... 2921.0 723.0 1702.0
2019 7367.0 194.0 15928.0 ... 1971.0 457.0 993.0
[11 rows x 7 columns]
So I would expect to get one list with eleven elements for each column.
iteritems returns pairs of column_name, column_data, similar to python's dict.items(). If you want to iterate over the column names, you can just iterate over grouped like so:
result = {}
for column_name in grouped:
    result[column_name] = [*grouped[column_name]]
This will leave you with a plain python dict containing plain python lists in result. Note that you would get pandas Series instead of lists if you just did result[column_name] = grouped[column_name].
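Equivalently, a dict comprehension does the same thing in one line (a small sketch; on recent pandas versions, .items() is the preferred spelling of .iteritems()):
result = {name: column.tolist() for name, column in grouped.items()}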

Pandas DataFrame mean of data in columns occurring before certain date time

I have a dataframe with IDs of clients and their expenses for 2014-2018. What I want is the mean of the expenses per ID, but only the years before a certain date may be taken into account when calculating the mean (so the 'Date' column dictates which year columns count towards the mean).
Example: for index 0 (ID: 12), the date states '2016-03-08', so the mean should be taken from the columns 'y_2014' and 'y_2015', giving 111.0 for this index. If the date is too early (e.g. somewhere in 2014 or earlier in this case), then NaN should be returned (see index 6 and 9).
Desired output:
y_2014 y_2015 y_2016 y_2017 y_2018 Date ID mean
0 100.0 122.0 324 632 NaN 2016-03-08 12 111.0
1 120.0 159.0 54 452 541.0 2015-04-09 96 120.0
2 NaN 164.0 687 165 245.0 2016-02-15 20 164.0
3 180.0 421.0 512 184 953.0 2018-05-01 73 324.25
4 110.0 654.0 913 173 103.0 2017-08-04 84 559.0
5 130.0 NaN 754 124 207.0 2016-07-03 26 130.0
6 170.0 256.0 843 97 806.0 2013-02-04 87 NaN
7 140.0 754.0 95 101 541.0 2016-06-08 64 447
8 80.0 985.0 184 84 90.0 2019-03-05 11 284.6
9 96.0 65.0 127 130 421.0 2014-05-14 34 NaN
The code below is what I tried.
Tried code:
import pandas as pd
import numpy as np


df = pd.DataFrame({"ID": [12,96,20,73,84,26,87,64,11,34],
"y_2014": [100,120,np.nan,180,110,130,170,140,80,96],
"y_2015": [122,159,164,421,654,np.nan,256,754,985,65],
"y_2016": [324,54,687,512,913,754,843,95,184,127],
"y_2017": [632,452,165,184,173,124,97,101,84,130],
"y_2018": [np.nan,541,245,953,103,207,806,541,90,421],
"Date": ['2016-03-08', '2015-04-09', '2016-02-15', '2018-05-01', '2017-08-04',
 '2016-07-03', '2013-02-04', '2016-06-08', '2019-03-05', '2014-05-14']})

print(df)

# the years from columns
data = df.filter(like='y_')
data_years = data.columns.str.extract(r'(\d+)')[0].astype(int)

# the years from Date
years = pd.to_datetime(df.Date).dt.year.values


df['mean'] = data.where(data_years<years[:,None]).mean(1)
print(df)
-> ValueError: Lengths must match to compare
Solved: one possible answer to my own question
import pandas as pd
import numpy as np

df = pd.DataFrame({"ID": [12,96,20,73,84,26,87,64,11,34],
"y_2014": [100,120,np.nan,180,110,130,170,140,80,96],
"y_2015": [122,159,164,421,654,np.nan,256,754,985,65],
"y_2016": [324,54,687,512,913,754,843,95,184,127],
"y_2017": [632,452,165,184,173,124,97,101,84,130],
"y_2018": [np.nan,541,245,953,103,207,806,541,90,421],
"Date": ['2016-03-08', '2015-04-09', '2016-02-15', '2018-05-01', '2017-08-04',
'2016-07-03', '2013-02-04', '2016-06-08', '2019-03-05', '2014-05-14']})
#Subset from original df to calculate mean
subset = df.loc[:,['y_2014', 'y_2015', 'y_2016', 'y_2017', 'y_2018']]
# An expense value only becomes available for the mean once that year has passed,
# so 2015-01-01 is the cut-off label for the 'y_2014' column, and so on, for the
# comparison with the 'Date' column.
subset.columns = ['2015-01-01', '2016-01-01', '2017-01-01', '2018-01-01', '2019-01-01']

s = subset.columns[0:].values < df.Date.values[:,None]
t = s.astype(float)
t[t == 0] = np.nan
df['mean'] = (subset.iloc[:,0:]*t).mean(1)

print(df)
#Additionally: gives the sum of expenses before the date in the 'Date' column
df['sum'] = (subset.iloc[:,0:]*t).sum(1)

print(df)
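For reference, the ValueError in the first attempt comes from comparing a pandas Series (data_years) against a 2-D NumPy array; converting the Series to a plain array first lets the comparison broadcast, which gives a shorter route to the same means (a sketch, reusing data, data_years and years from the tried code above):
mask = data_years.to_numpy() < years[:, None]   # boolean mask, one row per client, one column per year
df['mean'] = data.where(mask).mean(axis=1)      # rows with no qualifying year stay NaN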

How to use the melt function in pandas for a large table?

I currently have data which looks like this:
Afghanistan_co2 Afghanistan_income Year Afghanistan_population Albania_co2
1 NaN 603 1801 3280000 NaN
2 NaN 603 1802 3280000 NaN
3 NaN 603 1803 3280000 NaN
4 NaN 603 1804 3280000 NaN
and I would like to use melt to turn it into a long format, with the labels 'Year', 'Country', 'population Value', 'co2 Value', and 'income value'.
It is a large dataset with many rows and columns, so I don't know what to do. I only have this so far:
pd.melt(merged_countries_final, id_vars=['Year'])
I've done this since there does exist a column in the dataset titled 'Year'.
What should I do?
Just do it with str.split on your columns:
df.set_index('Year',inplace=True)
df.columns=pd.MultiIndex.from_tuples(df.columns.str.split('_').map(tuple))
df=df.stack(level=0).reset_index().rename(columns={'level_1':'Country'})
df
Year Country co2 income population
0 1801 Afghanistan NaN 603.0 3280000.0
1 1802 Afghanistan NaN 603.0 3280000.0
2 1803 Afghanistan NaN 603.0 3280000.0
3 1804 Afghanistan NaN 603.0 3280000.0
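Since the question asks about melt specifically, the same reshape can also be written as melt followed by a pivot (a sketch, starting again from the original wide frame where 'Year' is still a column; it assumes a reasonably recent pandas where pivot accepts a list of index columns):
long_df = merged_countries_final.melt(id_vars='Year', var_name='country_metric', value_name='value')
long_df[['Country', 'metric']] = long_df['country_metric'].str.split('_', expand=True)
out = (long_df.pivot(index=['Year', 'Country'], columns='metric', values='value')
       .reset_index())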
