I'd like to group a pandas DataFrame by two different columns along two different axes, but I'm struggling to figure it out.
Sample code:
import numpy as np
import pandas as pd
x = pd.date_range("2022-01-01", "2022-06-01", freq="D")
y = np.arange(0, x.shape[0])
z = np.random.choice(["Jack", "Jul", "John"], size=x.shape[0])
df = pd.DataFrame({"Date": x, "numbers": y, "names": z})
So far I have the following solution. I cannot use .resample because then I lose all the names:
min_ = x.min()
max_ = x.max()
dt_range = pd.date_range(min_, max_, freq="W")
list_ = []
for date in dt_range:
    # .dt.week was removed in pandas 2.0; isocalendar().week is the replacement
    temp_df = df[df["Date"].dt.isocalendar().week == date.week]
    # select the numeric column explicitly so newer pandas doesn't try to sum the dates
    temp_df = temp_df.groupby("names")[["numbers"]].sum()
    list_.append(temp_df)
pd.concat(list_, axis=1)
Sample output:
numbers numbers numbers numbers numbers numbers ... numbers numbers numbers numbers numbers numbers
names ...
Jack 0.0 7 36.0 39 53 99 ... 113 237 247 260 416 NaN
John 1.0 16 48.0 54 78 68 ... 436 233 250 262 139 726.0
Jul NaN 12 NaN 40 51 64 ... 221 349 371 395 411 289.0
You can use df.pivot to get this (I have added a groupby, following comments saying pivot alone causes an error), using the below:
df_out = (df.groupby(['names', 'Date'], as_index=False).sum()
            .pivot(index='names', columns='Date', values='numbers'))
However this will output with Date as the column names, rather than 'numbers' as in your question:
Date 2022-01-01 2022-01-02 2022-01-03 ... 2022-05-30 2022-05-31 2022-06-01
names ...
Jack NaN NaN NaN ... NaN NaN NaN
John 0.0 1.0 2.0 ... 149.0 NaN NaN
Jul NaN NaN NaN ... NaN 150.0 151.0
(Note: not an exact match to the output in the question, due to the random data in the question's df.)
To correct this, you can just set all the columns to be 'numbers' using the below:
df_out.columns = ['numbers']*len(df_out.columns)
numbers numbers numbers numbers ... numbers numbers numbers numbers
names ...
Jack NaN NaN NaN 3.0 ... NaN NaN NaN NaN
John 0.0 1.0 2.0 NaN ... 148.0 149.0 NaN NaN
Jul NaN NaN NaN NaN ... NaN NaN 150.0 151.0
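If you would rather keep the weekly buckets from the original loop than end up with one column per day, pd.Grouper lets you resample inside the groupby without losing the names. A minimal sketch, assuming weekly sums over 'numbers' are what's wanted:

# group by name and by week at the same time, then move the weeks to columns
df_weekly = (df.groupby(['names', pd.Grouper(key='Date', freq='W')])['numbers']
               .sum()
               .unstack('Date'))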
I have multiple pandas data frames with some common columns and some overlapping rows. I would like to combine them in such a way that I have one final data frame with all of the columns and all of the unique rows (overlapping/duplicate rows dropped). The remaining gaps should be NaN.
I have come up with the function below. In essence it goes through all columns one by one, appending all of the values from each data frame, dropping the duplicates (overlap), and building a new output data frame column by column.
def combine_dfs(dataframes: list):
    ## Identifying all unique columns in all data frames
    columns = []
    for df in dataframes:
        columns.extend(df.columns)
    columns = np.unique(columns)
    ## Appending values from each data frame per column
    output_df = pd.DataFrame()
    for col in columns:
        column = pd.Series(dtype="object", name=col)
        for df in dataframes:
            if col in df.columns:
                # Series.append was removed in pandas 2.0; pd.concat is the replacement
                column = pd.concat([column, df[col]])
        ## Removing overlapping data (assuming consistent values)
        column = column[~column.index.duplicated()]
        ## Adding column to output data frame
        column = pd.DataFrame(column)
        output_df = pd.concat([output_df, column], axis=1)
    output_df.sort_index(inplace=True)
    return output_df
df_1 = pd.DataFrame([[10,20,30],[11,21,31],[12,22,32],[13,23,33]], columns=["A","B","C"])
df_2 = pd.DataFrame([[33,43,54],[34,44,54],[35,45,55],[36,46,56]], columns=["C","D","E"], index=[3,4,5,6])
df_3 = pd.DataFrame([[50,60],[51,61],[52,62],[53,63],[54,64]], columns=["E","F"])
print(combine_dfs([df_1,df_2,df_3]))
The output, as intended, looks like this:
A B C D E F
0 10.0 20.0 30 NaN 50 60.0
1 11.0 21.0 31 NaN 51 61.0
2 12.0 22.0 32 NaN 52 62.0
3 13.0 23.0 33 43.0 54 63.0
4 NaN NaN 34 44.0 54 64.0
5 NaN NaN 35 45.0 55 NaN
6 NaN NaN 36 46.0 56 NaN
This method works well on small data sets. Is there a way to optimize this?
IIUC you can chain combine_first:
print (df_1.combine_first(df_2).combine_first(df_3))
A B C D E F
0 10.0 20.0 30 NaN 50.0 60.0
1 11.0 21.0 31 NaN 51.0 61.0
2 12.0 22.0 32 NaN 52.0 62.0
3 13.0 23.0 33 43.0 54.0 63.0
4 NaN NaN 34 44.0 54.0 64.0
5 NaN NaN 35 45.0 55.0 NaN
6 NaN NaN 36 46.0 56.0 NaN
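If the number of frames isn't fixed, the same chain can be folded over a list; a small sketch using functools.reduce:

from functools import reduce

# combine_first keeps the first non-NaN value seen for each cell,
# so folding it across the list reproduces the chained call above
combined = reduce(lambda left, right: left.combine_first(right), [df_1, df_2, df_3])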
I have a dataset that looks like below:
Zn Pb Ag Cu Mo Cr Ni Co Ba
87 7 0.02 42 2 57 38 14 393
70 6 0.02 56 2 27 29 20 404
75 5 0.02 69 2 44 23 17 417
70 6 0.02 54 1 20 19 12 377
I want to create a pandas dataframe out of this dataset. I have written the function below:
def correlation_iterated(raw_data, element_concentration):
    columns = element_concentration.split()
    df1 = pd.DataFrame(columns=columns)
    data1 = []
    selected_columns = raw_data.loc[:, element_concentration.split()].columns
    for i in selected_columns:
        for j in selected_columns:
            # another function that takes 'i' and 'j' and returns 'a'
            zipped1 = zip([i], a)
            data1.append(dict(zipped1))
    df1 = df1.append(data1, True)
    print(df1)
This function is supposed to run the calculation for each pair of elements and store each result in the corresponding cell of a 9 by 9 pandas dataframe. But I get the following:
Zn Pb Ag Cu Mo Cr Ni Co Ba
0 1.000000 NaN NaN NaN NaN NaN NaN NaN NaN
1 0.460611 NaN NaN NaN NaN NaN NaN NaN NaN
2 0.127904 NaN NaN NaN NaN NaN NaN NaN NaN
3 0.276086 NaN NaN NaN NaN NaN NaN NaN NaN
4 -0.164873 NaN NaN NaN NaN NaN NaN NaN NaN
.. ... .. .. .. .. .. .. .. ...
76 NaN NaN NaN NaN NaN NaN NaN NaN 0.113172
77 NaN NaN NaN NaN NaN NaN NaN NaN 0.027251
78 NaN NaN NaN NaN NaN NaN NaN NaN -0.036409
79 NaN NaN NaN NaN NaN NaN NaN NaN 0.041396
80 NaN NaN NaN NaN NaN NaN NaN NaN 1.000000
[81 rows x 9 columns]
which basically calculates the results for the first column, storing them in the first column only and appending a new row for each value. How can I write the code so that it moves on to the next column once it has finished one column? I want something like this:
Zn Pb Ag Cu Mo Cr Ni Co Ba
0 1.000000 0.460611 ...
1 0.460611 1.000000 ...
2 0.127904 0.111559 ...
3 0.276086 0.303925 ...
4 -0.164873 -0.190886 ...
5 0.402046 0.338073 ...
6 0.174774 0.096724 ...
7 0.165760 -0.005301 ...
8 -0.043695 0.174193 ...
[9 rows x 9 columns]
Could you not just do something like this:
def correlation_iterated(raw_data, element_concentration):
    columns = element_concentration.split()
    data = {}
    selected_columns = raw_data.loc[:, columns].columns
    for i in selected_columns:
        temp = []
        for j in selected_columns:
            # another function that takes 'i' and 'j' and returns 'a'
            temp.append(a)
        data[i] = temp
    df = pd.DataFrame(data)
    print(df)
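As an aside, if the pairwise calculation hidden behind the comment is an ordinary correlation, pandas can build the same 9 by 9 table in one call; a sketch, assuming Pearson correlation is the metric:

# pairwise correlation of the selected element columns
corr_df = raw_data[element_concentration.split()].corr()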
I'm new to Python pandas and have been struggling with the following problem for a while now.
The values in dataframe df1 below are the indices of the values in df2 that should be looked up.
Name1 Name2 ... Name160 Name161
0 62 18 ... NaN 75
1 79 46 ... NaN 5
2 3 26 ... NaN 0
df2 contains the values that those indices refer to.
Name1 Name2 ... Name160 Name161
0 152.0 204.0 ... NaN 164.0
1 175.0 308.0 ... NaN 571.0
2 252.0 695.0 ... NaN 577.0
3 379.0 722.0 ... NaN 655.0
4 398.0 834.0 ... NaN 675.0
.. ... ... ... ... ...
213 NaN NaN ... NaN NaN
214 NaN NaN ... NaN NaN
215 NaN NaN ... NaN NaN
216 NaN NaN ... NaN NaN
217 NaN NaN ... NaN NaN
For example, df1 shows the value '0' in column 'Name161'. Then df3 should show the value listed in df2 at index 0, in this case '164'.
So far I've got df3 showing the first 3 rows of df2, but of course that's not what I want to achieve.
Input:
df3 = df1*0
for c in df1.columns:
    df3[c] = df2[c]
print(df3)
Output:
Name1 Name2 ... Name160 Name161
0 152.0 204.0 ... NaN 164.0
1 175.0 308.0 ... NaN 571.0
2 252.0 695.0 ... NaN 577.0
Any help would be much appreciated, thanks!
Use DataFrame.stack with Series.reset_index to reshape both DataFrames, then merge them with DataFrame.merge using a left join, and finally pivot with DataFrame.pivot:
# df1's values changed here so they match index labels present in the df2 sample data
print (df1)
Name1 Name2 Name160 Name161
0 2 4 NaN 4
1 0 213 NaN 216
2 3 2 NaN 0
df11 = df1.stack().reset_index(name='idx')
df22 = df2.stack().reset_index(name='val')
df = (df11.merge(df22,
                 left_on=['idx', 'level_1'],
                 right_on=['level_0', 'level_1'],
                 how='left')
          # pivot's arguments are keyword-only as of pandas 2.0
          .pivot(index='level_0_x', columns='level_1', values='val')
          .reindex(df1.columns, axis=1)
          .rename_axis(None)
      )
print (df)
Name1 Name2 Name160 Name161
0 252.0 834.0 NaN 675.0
1 152.0 NaN NaN NaN
2 379.0 695.0 NaN 164.0
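A more compact alternative is to look each column of df1 up in the matching column of df2 with reindex; a sketch, assuming every non-NaN value in df1 is a valid label in df2's index (NaN indices simply propagate as NaN):

# use each df1 column's values as index labels into the same-named df2 column
df3 = df1.apply(lambda col: df2[col.name].reindex(col.to_numpy()).to_numpy())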
I have searched around, but could not find the answer I was looking for. I have two dataframes: one has fairly discrete integer values in column A (df2), the other does not (df1). I would like to merge the two such that where the values in column A are within 1 of each other, columns C and D are merged in once, and are NaN otherwise.
df1=
A B
0 30.00 -52.382420
1 33.14 -50.392513
2 36.28 -53.699646
3 39.42 -49.228439
.. ... ...
497 1590.58 -77.646561
498 1593.72 -77.049423
499 1596.86 -77.711639
500 1600.00 -78.092979
df2=
A C D
0 0.009 NaN NaN
1 0.036 NaN NaN
2 0.100 NaN NaN
3 10.000 12.4 0.29
4 30.000 12.82 0.307
.. ... ... ...
315 15000.000 NaN 7.65
316 16000.000 NaN 7.72
317 17000.000 NaN 8.36
318 18000.000 NaN 8.35
I would like the output to be
merged=
A B C D
0 30.00 -52.382420 12.82 0.29
1 33.14 -50.392513 NaN NaN
2 36.28 -53.699646 NaN NaN
3 39.42 -49.228439 NaN NaN
.. ... ... ... ...
497 1590.58 -77.646561 NaN NaN
498 1593.72 -77.049423 NaN NaN
499 1596.86 -77.711639 NaN NaN
500 1600.00 -78.092979 28.51 2.5
I tried:
merged = pd.merge_asof(df1, df2, left_on='A', tolerance=1, direction='nearest')
Which gives me a MergeError: key must be integer or timestamp.
So far the only way I've been able to successfully merge the dataframes is with:
merged = pd.merge_asof(df1, df2, on='A')
But this takes whatever value was close enough in columns C and D and fills in the NaN values.
For anyone else facing a similar problem: the tolerance must be compatible with the dtype of the column the merge is performed on. Since my tolerance was an integer, this meant changing column A to an int.
df1['A Int'] = df1['A'].astype(int)
df2['A Int'] = df2['A'].astype(int)
merged = pd.merge_asof(df1, df2, on='A Int', direction='nearest', tolerance=1)
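Alternatively, newer pandas versions accept float keys in merge_asof as long as the tolerance has a matching dtype, so the int cast can be avoided; a sketch, assuming column A is float64 and both frames are sorted on A as merge_asof requires:

merged = pd.merge_asof(df1, df2, on='A', direction='nearest', tolerance=1.0)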
I have a dataframe that looks like the following. There are one or more consecutive rows where y_l is populated and y_h is NaN, and vice versa.
When there is more than one consecutive populated row between the NaNs, we only want to keep the one with the lowest y_l or the highest y_h.
e.g. in the df below, of the three consecutive populated rows (1-3) we would only keep row 2 and discard the other two.
What would be a smart way to implement that?
df = pd.DataFrame({'y_l': [np.nan, 97, 95, 98, np.nan], 'y_h': [90, np.nan, np.nan, np.nan, 95]}, columns=['y_l', 'y_h'])
>>> df
y_l y_h
0 NaN 90.0
1 97.0 NaN
2 95.0 NaN
3 98.0 NaN
4 NaN 95
Desired result:
y_l y_h
0 NaN 90.0
1 95.0 NaN
2 NaN 95
You need to create a new Series that distinguishes each consecutive run, then use groupby and aggregate with agg; finally, restore the column order with reindex:
a = df['y_l'].isnull()
b = a.ne(a.shift()).cumsum()
df = (df.groupby(b, as_index=False)
.agg({'y_l':'min', 'y_h':'max'})
.reindex(columns=['y_l','y_h']))
print (df)
y_l y_h
0 NaN 90.0
1 95.0 NaN
2 NaN 95.0
Detail:
print (b)
0 1
1 2
2 2
3 2
4 3
Name: y_l, dtype: int32
What if you had more columns?
for example
df = pd.DataFrame({'A': [np.nan, 15, 20, 25, np.nan], 'y_l': [np.nan, 97, 95, 98, np.nan], 'y_h': [90, np.nan, np.nan, np.nan, 95]}, columns=['A', 'y_l', 'y_h'])
>>> df
A y_l y_h
0 NaN NaN 90.0
1 15.0 97.0 NaN
2 20.0 95.0 NaN
3 25.0 98.0 NaN
4 NaN NaN 95.0
How could you keep the values in column A after filtering out the irrelevant rows as below?
A y_l y_h
0 NaN NaN 90.0
1 20.0 95.0 NaN
2 NaN NaN 95.0
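One way to extend the accepted approach so that the extra columns survive is to pick a row label per consecutive run instead of aggregating, then select those rows; a sketch, assuming each run is either all y_l or all y_h as in the example:

a = df['y_l'].isnull()
b = a.ne(a.shift()).cumsum()

# for every consecutive run, keep the label of the row with the lowest y_l,
# or with the highest y_h when y_l is entirely NaN within the run
keep = df.groupby(b).apply(
    lambda g: g['y_l'].idxmin() if g['y_l'].notna().any() else g['y_h'].idxmax()
)
out = df.loc[keep].reset_index(drop=True)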