Pandas merge_asof tolerance must be integer - python

I have searched around, but could not find the answer I was looking for. I have two dataframes: one (df2) has fairly discrete, integer-like values in column A, while the other (df1) does not. I would like to merge the two so that, where the column A values are within 1 of each other, the values from columns C and D are merged once, and left as NaN otherwise.
df1=
A B
0 30.00 -52.382420
1 33.14 -50.392513
2 36.28 -53.699646
3 39.42 -49.228439
.. ... ...
497 1590.58 -77.646561
498 1593.72 -77.049423
499 1596.86 -77.711639
500 1600.00 -78.092979
df2=
A C D
0 0.009 NaN NaN
1 0.036 NaN NaN
2 0.100 NaN NaN
3 10.000 12.4 0.29
4 30.000 12.82 0.307
.. ... ... ...
315 15000.000 NaN 7.65
316 16000.000 NaN 7.72
317 17000.000 NaN 8.36
318 18000.000 NaN 8.35
I would like the output to be
merged=
A B C D
0 30.00 -52.382420 12.82 0.29
1 33.14 -50.392513 NaN NaN
2 36.28 -53.699646 NaN NaN
3 39.42 -49.228439 NaN NaN
.. ... ... ... ...
497 1590.58 -77.646561 NaN NaN
498 1593.72 -77.049423 NaN NaN
499 1596.86 -77.711639 NaN NaN
500 1600.00 -78.092979 28.51 2.5
I tried:
merged = pd.merge_asof(df1, df2, on='A', tolerance=1, direction='nearest')
Which gives me a MergeError: key must be integer or timestamp.
So far the only way I've been able to successfully merge the dataframes is with:
merged = pd.merge_asof(df1, df2, on='A')
But this simply takes whichever value in columns C and D happened to be nearest and fills in rows that should stay NaN.

For anyone else facing a similar problem: in the pandas version I was using, the column the merge is performed on must be an integer (or timestamp) when a tolerance is given. In my case this meant casting column A to int.
df1['A Int'] = df1['A'].astype(int)
df2['A Int'] = df2['A'].astype(int)
merged = pd.merge_asof(df1, df2, on='A Int', direction='nearest', tolerance=1)
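A hedged aside, not part of the original workaround: newer pandas releases also accept float merge keys, so on a recent version a float tolerance may work without the integer cast, assuming both frames are sorted on 'A':
# may work on newer pandas where merge_asof accepts float keys;
# merge_asof requires both frames to be sorted on the key column
merged = pd.merge_asof(df1.sort_values('A'), df2.sort_values('A'),
                       on='A', direction='nearest', tolerance=1.0)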

Related

Groupby two columns on two axes

I'd like to group a pandas dataframe by two different columns on two different axes; however, I'm struggling to figure it out.
Sample code:
import numpy as np
import pandas as pd
x = pd.date_range("2022-01-01", "2022-06-01", freq="D")
y = np.arange(0, x.shape[0])
z = np.random.choice(["Jack", "Jul", "John"], size=x.shape[0])
df = pd.DataFrame({"Date": x, "numbers": y, "names": z})
So far I have the following solution; I cannot use .resample because then I lose all the names:
min_ = x.min()
max_ = x.max()
dt_range = pd.date_range(min_, max_, freq="W")
list_ = []
for date in dt_range:
    temp_df = df[df["Date"].dt.week == date.week]
    temp_df = temp_df.groupby("names").sum()
    list_.append(temp_df)
pd.concat(list_, axis=1)
Sample output:
numbers numbers numbers numbers numbers numbers ... numbers numbers numbers numbers numbers numbers
names ...
Jack 0.0 7 36.0 39 53 99 ... 113 237 247 260 416 NaN
John 1.0 16 48.0 54 78 68 ... 436 233 250 262 139 726.0
Jul NaN 12 NaN 40 51 64 ... 221 349 371 395 411 289.0
You can use df.pivot to get this (I have added a groupby first, following comments that pivot alone raises an error on duplicate entries):
df_out = (df.groupby(['names', 'Date'], as_index=False).sum()
            .pivot(index='names', columns='Date', values='numbers'))
However this will output with Date as the column names, rather than 'numbers' as in your question:
Date 2022-01-01 2022-01-02 2022-01-03 ... 2022-05-30 2022-05-31 2022-06-01
names ...
Jack NaN NaN NaN ... NaN NaN NaN
John 0.0 1.0 2.0 ... 149.0 NaN NaN
Jul NaN NaN NaN ... NaN 150.0 151.0
(Note: not an exact match to the output in the question due to the random data used to build the df.)
To correct this, you can just set all the columns to be 'numbers' using the below:
df_out.columns = ['numbers']*len(df_out.columns)
numbers numbers numbers numbers ... numbers numbers numbers numbers
names ...
Jack NaN NaN NaN 3.0 ... NaN NaN NaN NaN
John 0.0 1.0 2.0 NaN ... 148.0 149.0 NaN NaN
Jul NaN NaN NaN NaN ... NaN NaN 150.0 151.0
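If weekly buckets are actually what you're after (rather than one column per day), here is a hedged sketch of the same idea using pd.Grouper instead of the manual week loop; it reuses the df built from your sample code:
# group by name and by weekly bucket, then move the weeks to the columns
weekly = (df.groupby(["names", pd.Grouper(key="Date", freq="W")])["numbers"]
            .sum()
            .unstack("Date"))
print(weekly)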

Create Dataframe by calling indices of df1 that are listed in df2

I'm new to Python Pandas and struggling with the following problem for a while now.
The values in the following dataframe df1 are the indices of the df2 values that should be looked up:
Name1 Name2 ... Name160 Name161
0 62 18 ... NaN 75
1 79 46 ... NaN 5
2 3 26 ... NaN 0
df2 contains the values that belong to those indices:
Name1 Name2 ... Name160 Name161
0 152.0 204.0 ... NaN 164.0
1 175.0 308.0 ... NaN 571.0
2 252.0 695.0 ... NaN 577.0
3 379.0 722.0 ... NaN 655.0
4 398.0 834.0 ... NaN 675.0
.. ... ... ... ... ...
213 NaN NaN ... NaN NaN
214 NaN NaN ... NaN NaN
215 NaN NaN ... NaN NaN
216 NaN NaN ... NaN NaN
217 NaN NaN ... NaN NaN
For example, df1 shows the value 0 in column 'Name161', so df3 should show the value listed in df2 at index 0 of that column; in this case 164.
So far I have only got df3 to show the first 3 rows of df2, which is of course not what I would like to achieve.
Input:
df3 = df1*0
for c in df1.columns:
    df3[c] = df2[c]
print(df3)
Output:
Name1 Name2 ... Name160 Name161
0 152.0 204.0 ... NaN 164.0
1 175.0 308.0 ... NaN 571.0
2 252.0 695.0 ... NaN 577.0
Any help would be much appreciated, thanks!
Use DataFrame.stack with Series.reset_index to reshape both DataFrames, then merge them with DataFrame.merge using a left join, and finally pivot back with DataFrame.pivot:
# df1's values were changed here so they point at rows that exist in the sample df2
print (df1)
Name1 Name2 Name160 Name161
0 2 4 NaN 4
1 0 213 NaN 216
2 3 2 NaN 0
df11 = df1.stack().reset_index(name='idx')
df22 = df2.stack().reset_index(name='val')
df = (df11.merge(df22,
                 left_on=['idx', 'level_1'],
                 right_on=['level_0', 'level_1'],
                 how='left')
          .pivot(index='level_0_x', columns='level_1', values='val')
          .reindex(df1.columns, axis=1)
          .rename_axis(None))
print(df)
Name1 Name2 Name160 Name161
0 252.0 834.0 NaN 675.0
1 152.0 NaN NaN NaN
2 379.0 695.0 NaN 164.0
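A hedged alternative sketch, if df1's entries can be treated as positional row numbers of df2 (true here, since df2 has a default RangeIndex) and the two frames share the same column order:
import numpy as np

# placeholder 0 where df1 is NaN; those cells are masked out again below
idx = df1.fillna(0).astype(int).to_numpy()
vals = np.take_along_axis(df2.to_numpy(), idx, axis=0)
df3 = pd.DataFrame(vals, index=df1.index, columns=df1.columns).where(df1.notna())
print(df3)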

Pandas corr() returning NaN too often

I'm attempting to run what I think should be a simple correlation function on a dataframe but it is returning NaN in places where I don't believe it should.
Code:
# setup
import pandas as pd
import io
csv = io.StringIO(u'''
id date num
A 2018-08-01 99
A 2018-08-02 50
A 2018-08-03 100
A 2018-08-04 100
A 2018-08-05 100
B 2018-07-31 500
B 2018-08-01 100
B 2018-08-02 100
B 2018-08-03 0
B 2018-08-05 100
B 2018-08-06 500
B 2018-08-07 500
B 2018-08-08 100
C 2018-08-01 100
C 2018-08-02 50
C 2018-08-03 100
C 2018-08-06 300
''')
df = pd.read_csv(csv, sep = '\t')
# Format manipulation
df = df[df['num'] > 50]
df = df.pivot(index = 'date', columns = 'id', values = 'num')
df = pd.DataFrame(df.to_records())
# Main correlation calculations
print(df.iloc[:, 1:].corr())
Subject DataFrame:
A B C
0 NaN 500.0 NaN
1 99.0 100.0 100.0
2 NaN 100.0 NaN
3 100.0 NaN 100.0
4 100.0 NaN NaN
5 100.0 100.0 NaN
6 NaN 500.0 300.0
7 NaN 500.0 NaN
8 NaN 100.0 NaN
corr() Result:
A B C
A 1.0 NaN NaN
B NaN 1.0 1.0
C NaN 1.0 1.0
According to the (limited) documentation on the function, it should exclude "NA/null values". Since there are overlapping values for each column, should the result not all be non-NaN?
There are good discussions here and here, but neither answered my question. I've tried the float64 idea discussed here, but that failed as well.
@hellpanderr's comment brought up a good point: I'm using 0.22.0.
Bonus question - I'm no mathematician, but how is there a 1:1 correlation between B and C in this result?
The result seems to be an artefact of the data you work with. As you write, NAs are ignored, so it basically boils down to:
df[['B', 'C']].dropna()
B C
1 100.0 100.0
6 500.0 300.0
So, there are only two values per column left for the calculation, which therefore leads to correlation coefficients of 1:
df[['B', 'C']].dropna().corr()
B C
B 1.0 1.0
C 1.0 1.0
So, where do the NAs then come from for the remaining combinations?
df[['A', 'B']].dropna()
A B
1 99.0 100.0
5 100.0 100.0
df[['A', 'C']].dropna()
A C
1 99.0 100.0
3 100.0 100.0
So, here too you end up with only two values per column. The difference is that the B and C columns each contain only one distinct value (100), which gives a standard deviation of 0:
df[['A', 'C']].dropna().std()
A 0.707107
C 0.000000
When the correlation coefficient is calculated, you divide by the standard deviation, and dividing by zero produces NaN.
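A hedged addition, not in the original answer: DataFrame.corr has a min_periods argument, which lets you demand a minimum number of paired observations so that tiny overlaps come back as NaN instead of a misleading ±1:
# require at least 3 paired, non-NaN observations per column pair;
# pairs with fewer overlaps return NaN instead of a spurious 1.0
print(df.iloc[:, 1:].corr(min_periods=3))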

Interpolate missing values using row and column values

In Python Pandas, how should I interactively interpolate a dataframe with some NaN rows and columns?
For example, the following dataframe -
90 92.5 95 100 110 120
Index
1 NaN NaN NaN NaN NaN NaN
2 0.469690 NaN NaN NaN NaN NaN
3 0.478220 NaN 0.492232 0.505685 NaN NaN
4 0.486377 NaN 0.503853 0.518890 0.550517 NaN
5 0.485862 NaN 0.502130 0.515076 0.537675 0.564383
My goal is to interpolate and fill all the NaNs efficiently, i.e. to interpolate whichever NaNs can be interpolated. However, if I use
df.interpolate(inplace=True, axis=0, method='spline', order=1, limit=20, limit_direction='both')
it will return "TypeError: Cannot interpolate with all NaNs."
You can try this (thank you #Boud for df.dropna(axis=1, how='all')):
In [138]: new = df.dropna(axis=1, how='all').interpolate(limit=20, limit_direction='both')
In [139]: new
Out[139]:
90 95 100 110 120
Index
1 0.469690 0.492232 0.505685 0.550517 0.564383
2 0.469690 0.492232 0.505685 0.550517 0.564383
3 0.478220 0.492232 0.505685 0.550517 0.564383
4 0.486377 0.503853 0.518890 0.550517 0.564383
5 0.485862 0.502130 0.515076 0.537675 0.564383
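If you also want the row values to contribute (interpolating across the columns as well as down them), a hedged extension of the same call, assuming plain linear interpolation is acceptable:
new = (df.dropna(axis=1, how='all')
         .interpolate(limit=20, limit_direction='both')           # down each column
         .interpolate(axis=1, limit=20, limit_direction='both'))  # across each row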

Combine_first and null values in Pandas

df1:
0 1
0 nan 3.00
1 -4.00 nan
2 nan 7.00
df2:
0 1 2
1 -42.00 nan 8.00
2 -5.00 nan 4.00
df3 = df1.combine_first(df2)
df3:
0 1 2
0 nan 3.00 nan
1 -4.00 nan 8.00
2 -5.00 7.00 4.00
This is what I'd like df3 to be:
0 1 2
0 nan 3.00 nan
1 -4.00 nan 8.00
2 nan 7.00 4.00
(The difference is the cell at row 2, column 0, i.e. df3.loc[2, 0].)
That is, if the column and index are the same for any cell in both df1 and df2, I'd like df1's value to prevail, even if that value is nan. combine_first does that, except when the value in df1 is nan.
Here's a bit of a hacky way to do it. First, align df2 with df1, which creates a frame indexed with the union of df1/df2, filled with df2's values. Then assign back df1's values.
In [325]: df3, _ = df2.align(df1)
In [327]: df3.loc[df1.index, df1.columns] = df1
In [328]: df3
Out[328]:
0 1 2
0 NaN 3 NaN
1 -4 NaN 8
2 NaN 7 4
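An equivalent hedged sketch that keeps combine_first and then re-imposes df1's cells (NaNs included) afterwards:
df3 = df1.combine_first(df2)
# overwrite every cell df1 actually covers, so df1's NaNs win there too
df3.loc[df1.index, df1.columns] = df1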
