I have two pandas dataframes:
df1:
a b c
1 1 2
2 1 2
3 1 3
df2:
a b c
4 0 2
5 5 2
1 1 2
df1 = {'a': [1, 2, 3], 'b': [1, 1, 1], 'c': [2, 2, 3]}
df2 = {'a': [4, 5, 1], 'b': [0, 5, 1], 'c': [2, 2, 2]}
df1= pd.DataFrame(df1)
df2 = pd.DataFrame(df2)
I'm looking for a function that will display whether df1 and df2 contain the same value in column a.
In the example I provided df1.a and df2.a both have a=1.
If df1 and df2 do not have an entry where the values in column a are equal, then the function should return None or False.
How do I do this? I've tried a couple of combinations of pandas.merge.
Define your own function using isin and any:
def yourf(x, y):
    if any(x.isin(y)):
        # print(x[x.isin(y)])
        return x[x.isin(y)]
    else:
        return 'No match'  # you can change this to None
yourf(df1.a, df2.a)
Out[316]:
0    1
Name: a, dtype: int64
yourf(df1.b, df2.c)
Out[318]: 'No match'
You could use set intersection:
def col_intersect(df1, df2, col='a'):
    s1 = set(df1[col])
    s2 = set(df2[col])
    common = s1 & s2
    return common if common else None
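For example, with the sample frames from the question (using the corrected df2 above):
col_intersect(df1, df2)  # {1}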
Using merge, as you attempted, you could do this:
def col_match(df1, df2, col='a'):
    merged = df1.merge(df2, how='inner', on=col)
    if len(merged):
        return merged[col]
    else:
        return None
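A quick sanity check with the sample frames from the question (the exact Out formatting may vary with your pandas version):
col_match(df1, df2)
# 0    1
# Name: a, dtype: int64
When the inner merge comes back empty, col_match returns None.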
I want to select the whole row in which the minimal value of 3 selected columns is found, in a dataframe like this:
It is supposed to look like this afterwards:
I tried something like
dfcheckminrow = dfquery[dfquery == dfquery['A':'C'].min().groupby('ID')]
but obviously it didn't work out well.
Thanks in advance!
Bkeesey's answer looks like it almost got you to your solution. I added one more step to get the overall minimum for each group.
import pandas as pd
# create sample df
df = pd.DataFrame({'ID': [1, 1, 2, 2, 3, 3],
'A': [30, 14, 100, 67, 1, 20],
'B': [10, 1, 2, 5, 100, 3],
'C': [1, 2, 3, 4, 5, 6],
})
# set "ID" as the index
df = df.set_index('ID')
# get the group-wise min of columns A and B
mindf = df[['A','B']].groupby('ID').transform('min')
# take the row-wise min across those columns and add it to df
df['min'] = mindf.min(axis=1)
# filter df for when A or B matches the min
df2 = df.loc[(df['A'] == df['min']) | (df['B'] == df['min'])]
print(df2)
In my simplified example, I'm just finding the minimum between columns A and B. Here's the output:
A B C min
ID
1 14 1 2 1
2 100 2 3 2
3 1 100 5 1
One method to filter the initial DataFrame based on a groupby conditional is to use transform to find the minimum for each "ID" group, then use loc to filter the initial DataFrame where the any(axis=1) condition (checking across each row) is met.
# create sample df
df = pd.DataFrame({'ID': [1, 1, 2, 2, 3, 3],
'A': [30, 14, 100, 67, 1, 20],
'B': [10, 1, 2, 5, 100, 3]})
# set "ID" as the index
df = df.set_index('ID')
Sample df:
A B
ID
1 30 10
1 14 1
2 100 2
2 67 5
3 1 100
3 20 3
Use groupby and transform to find the minimum value for each "ID" group.
Then use loc to filter the initial df to the rows where any(axis=1) holds:
df.loc[(df == df.groupby('ID').transform('min')).any(axis=1)]
Output:
A B
ID
1 14 1
2 100 2
2 67 5
3 1 100
3 20 3
In this example only the first row is removed, since it is the only row where neither column holds the minimum for its "ID" group.
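For the original three-column case, the same idea extends by taking the row-wise minimum across "A", "B", and "C" first and comparing it to its group-wide minimum. A sketch, assuming "ID" is the index as above and a "C" column exists as in the first answer:
# row-wise minimum across the three columns of interest
row_min = df[['A', 'B', 'C']].min(axis=1)
# keep each row whose row-wise min equals the overall min of its "ID" group
df.loc[row_min == row_min.groupby(level='ID').transform('min')]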
I have 2 DataFrames; one is
df1 = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}
df2 = {'col_1': [3, 2, 1, 3]}
I want the result as follows
df3 = {'col_1': [3, 2, 1, 3], 'col_2': ['a', 'b', 'c', 'a']}
col_2 of the new df should be taken from col_2 of df1, matched on the value in col_1.
Add the new column by mapping the values from df1 after setting its first column as index (assuming df1 and df2 have been converted to DataFrames):
df3 = df2.copy()
df3['col_2'] = df2['col_1'].map(df1.set_index('col_1')['col_2'])
output:
col_1 col_2
0 3 a
1 2 b
2 1 c
3 3 a
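Note that if a value in df2['col_1'] has no match in df1['col_1'], map leaves NaN in col_2 for that row.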
You can do it with merge after converting the dicts to df with pd.DataFrame():
output = pd.DataFrame(df2)
output = output.merge(pd.DataFrame(df1),on='col_1',how='left')
Or in a one-liner:
output = pd.DataFrame(df2).merge(pd.DataFrame(df1),on='col_1',how='left')
Outputs:
col_1 col_2
0 3 a
1 2 b
2 1 c
3 3 a
This could be a simple way of doing it.
# use df1 to create a lookup dictionary
lookup = df1.set_index("col_1").to_dict()["col_2"]
# look up each value from df2's "col_1" in the lookup dict
df2["col_2"] = df2["col_1"].apply(lambda d: lookup[d])
In short: I would like to avoid iterrows here.
From my dataframe I want to create a new column "unique": the first time a particular ("a", "b") pair appears, it gets a value "uniqueN", and every later occurrence of that exact pair gets the same "uniqueN".
In this case
"1", "3" (the first row) from "a" and "b" is the first unique pair, so I give that the value "unique1", and the seventh row will also have the same value which is "unique1" as it is also "1", "3".
"2", "2" (the second row) is the next unique "a", "b" pair so I give them "unique2" and the eight row also has "2", "2" so that will also have "unique2".
"3", "1" (third row) is the next unique, so "unique3", no more rows in the df is "3", "1" so that value wont repeat.
and so on
I have a working code that uses loops but this is not the pandas way, can anyone suggest how I can do this using pandas functions?
Expected Output (my code works, but it's not using pandas methods)
a b unique
0 1 3 unique1
1 2 2 unique2
2 3 1 unique3
3 4 2 unique4
4 3 3 unique5
5 4 2 unique4
6 1 3 unique1
7 2 2 unique2
Code
import pandas as pd
df = pd.DataFrame({'a': [1, 2, 3, 4, 3, 4, 1, 2], 'b': [3, 2, 1, 2, 3, 2, 3, 2]})
c = 1
seen = {}
for i, j in df.iterrows():
    j = tuple(j)
    if j not in seen:
        seen[j] = 'unique' + str(c)
        c += 1
for key, value in seen.items():
    df.loc[(df.a == key[0]) & (df.b == key[1]), 'unique'] = value
Let's use groupby ngroup with sort=False to ensure values are enumerated in order of appearance, add 1 so group numbers start at one, then convert to string with astype so we can add the prefix unique to the number:
df['unique'] = 'unique' + \
df.groupby(['a', 'b'], sort=False).ngroup().add(1).astype(str)
Or with map and format instead of converting and concatenating:
df['unique'] = (
df.groupby(['a', 'b'], sort=False).ngroup()
.add(1)
.map('unique{}'.format)
)
df:
a b unique
0 1 3 unique1
1 2 2 unique2
2 3 1 unique3
3 4 2 unique4
4 3 3 unique5
5 4 2 unique4
6 1 3 unique1
7 2 2 unique2
Setup:
import pandas as pd
df = pd.DataFrame({
'a': [1, 2, 3, 4, 3, 4, 1, 2], 'b': [3, 2, 1, 2, 3, 2, 3, 2]
})
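To see the intermediate step on the Setup frame above: ngroup alone assigns group numbers in order of first appearance, repeating them for repeated pairs:
df.groupby(['a', 'b'], sort=False).ngroup()
# 0    0
# 1    1
# 2    2
# 3    3
# 4    4
# 5    3
# 6    0
# 7    1
# dtype: int64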
I came up with a slightly different solution. I'll add this for posterity, but the groupby answer is superior.
import pandas as pd
df = pd.DataFrame({'a': [1, 2, 3, 4, 3, 4, 1, 2], 'b': [3, 2, 1, 2, 3, 2, 3, 2]})
print(df)
df1 = df[~df.duplicated()].copy()  # copy to avoid SettingWithCopyWarning when adding a column
print(df1)
df1['unique'] = df1.index
print(df1)
df2 = df.merge(df1, how='left')
print(df2)
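Note that this labels each pair with the index of its first occurrence (0, 1, 2, ...) rather than the "unique1", "unique2", ... strings from the expected output. If the exact labels matter, one extra step (not part of the original answer) before the merge would produce them:
# relabel the deduplicated rows with sequential "uniqueN" strings
df1['unique'] = ['unique' + str(i + 1) for i in range(len(df1))]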
I have the following CSV, and I need to get the values duplicated in the DialedNumber column and then the average Duration of those duplicates.
I already got the duplicates with this code:
df = pd.read_csv('cdrs.csv')
dnidump = pd.DataFrame(df, columns=['DialedNumber'])
pd.options.display.float_format = '{:.0f}'.format
dupl_dni = dnidump.pivot_table(index=['DialedNumber'], aggfunc='size')
a1 = dupl_dni.to_frame().rename(columns={0:'TimesRepeated'}).sort_values(by=['TimesRepeated'], ascending=False)
b = a1.head(10)
print(b)
Output:
DialedNumber TimesRepeated
50947740194 4
50936564292 2
50931473242 3
I can't figure out how to get the average duration of those duplicates, any ideas?
thx
try:
df_mean = df.groupby('DialedNumber').mean()
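Since the question asks only for the average Duration, you can select that column before aggregating so other numeric columns are not averaged as well (assuming the column in cdrs.csv is literally named Duration):
avg_duration = df.groupby('DialedNumber')['Duration'].mean()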
Use df.groupby('column').mean()
Here is sample code.
Input
df = pd.DataFrame({'A': [1, 1, 1, 2, 2],
'B': [2461, 1023, 9, 5614, 212],
'C': [2, 4, 8, 16, 32]}, columns=['A', 'B', 'C'])
df.groupby('A').mean()
Output
B C
A
1 1164.333333 4.666667
2 2913.000000 24.000000
API reference of pandas.core.groupby.GroupBy.mean
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.mean.html
While calling apply on a pandas DataFrame with axis=1, I get a ValueError when trying to set a list as a cell value.
Note: lists in different rows are of varying lengths, and this seems to be the cause, but I am not sure how to overcome it.
import numpy as np
import pandas as pd
data = [{'a': 1, 'b': '3412', 'c': 0}, {'a': 88, 'b': '56\t23', 'c': 1},
{'a': 45, 'b': '412\t34\t324', 'c': 2}]
df = pd.DataFrame.from_dict(data)
print("df: ")
print(df)
def get_rank_array(ids):
    ids = list(map(int, ids))
    return np.random.randint(0, 10, len(ids))

def get_rank_list(ids):
    ids = list(map(int, ids))
    return np.random.randint(0, 10, len(ids)).tolist()
df['rank'] = df.apply(lambda row: get_rank_array(row['b'].split('\t')), axis=1)
ValueError: could not broadcast input array from shape (2) into shape (3)
df['rank'] = df.apply(lambda row: get_rank_list(row['b'].split('\t')), axis=1)
print("df: ")
print(df)
df:
a b c rank
0 1 3412 0 [6]
1 88 56\t23 1 [0, 0]
2 45 412\t34\t324 2 [3, 3, 6]
get_rank_list works, but get_rank_array does not produce the expected result above.
I understand that the (3,) shape comes from the number of columns in the dataframe, and (2,) from the length of the list after splitting 56\t23 in the second row.
But I do not get the reason behind the error itself.
When
data = [{'a': 45, 'b': '412\t34\t324', 'c': 2},
{'a': 1, 'b': '3412', 'c': 0}, {'a': 88, 'b': '56\t23', 'c': 1}]
the error occurs with lists too.
Observe -
df.apply(lambda x: [0, 1, 2])
a b c
0 0 0 0
1 1 1 1
2 2 2 2
df.apply(lambda x: [0, 1])
a [0, 1]
b [0, 1]
c [0, 1]
dtype: object
Pandas does two things inside apply:
it special-cases np.arrays and lists, and
it attempts to snap the results into a DataFrame if the shape is compatible.
Note that arrays are special-cased a little differently from lists: if the shape is not compatible, the result for lists is a Series (as you see in the second output above), but for arrays,
df.apply(lambda x: np.array([0, 1, 2]))
a b c
0 0 0 0
1 1 1 1
2 2 2 2
df.apply(lambda x: np.array([0, 1]))
ValueError: Shape of passed values is (3, 2), indices imply (3, 3)
In short, this is a consequence of the pandas internals. For more information, peruse the apply function code on GitHub.
To get your desired output, use a list comprehension and assign the result to df['new']. Don't use apply.
df['new'] = [
    np.random.randint(0, 10, len(x.split('\t'))).tolist() for x in df.b
]
df
a b c new
0 1 3412 0 [8]
1 88 56\t23 1 [4, 2]
2 45 412\t34\t324 2 [9, 0, 3]
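One last note: because both approaches draw from np.random.randint, the exact numbers shown will differ between runs; call np.random.seed beforehand if you need reproducible output.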