How to merge by a column of collections using Python Pandas?

I have 2 lists of Stack Overflow questions, group A and group B. Both have two columns, Id and Tag, e.g.:
| Id | Tag |
| --- | --- |
| 2 | c#,winforms,type-conversion,decimal,opacity |
For each question in group A, I need to find in group B all matched questions that have at least one overlapping tag with the question in group A, independent of the position of the tags. For example, these questions should all be matched questions:
| Id | Tag |
| --- | --- |
| 3 | c# |
| 4 | winforms,type-conversion |
| 5 | winforms,c# |
My first thought was to convert the Tag variable into a set and merge using Pandas, because sets ignore position. However, it seems that Pandas doesn't allow a set to be the merge key. So I am now using a for loop to search over group B, but it is extremely slow since I have 13 million observations in group B.
My question is:
1. Is there any other way in Python to merge by a column of collections that can also tell me the number of overlapping tags?
2. How can I improve the efficiency of the for loop search?

This can be achieved using df.merge and df.groupby.
This is the setup I'm working with:
df1 = pd.DataFrame({ 'Id' : [2], 'Tag' : [['c#', 'winforms', 'type-conversion', 'decimal', 'opacity']]})
Id Tag
0 2 [c#, winforms, type-conversion, decimal, opacity]
df2 = pd.DataFrame({ 'Id' : [3, 4, 5], 'Tag' : [['c#'], ['winforms', 'type-conversion'], ['winforms', 'c#']]})
Id Tag
0 3 [c#]
1 4 [winforms, type-conversion]
2 5 [winforms, c#]
Let's flatten out the Tag column in both data frames. This helper does it:
In [2330]: import numpy as np
In [2331]: from itertools import chain
In [2332]: def flatten(df):
      ...:     return pd.DataFrame({"Id": np.repeat(df.Id.values, df.Tag.str.len()),
      ...:                          "Tag": list(chain.from_iterable(df.Tag))})
      ...:
In [2333]: df1 = flatten(df1)
In [2334]: df2 = flatten(df2)
In [2335]: df1.head()
Out[2335]:
Id Tag
0 2 c#
1 2 winforms
2 2 type-conversion
3 2 decimal
4 2 opacity
And similarly for df2, which is also flattened.
Now for the magic. We'll do a merge on the Tag column, and then group by the joined ID pairs to find the count of overlapping tags.
In [2337]: df1.merge(df2, on='Tag').groupby(['Id_x', 'Id_y']).count().reset_index()
Out[2337]:
Id_x Id_y Tag
0 2 3 1
1 2 4 2
2 2 5 2
The output shows each pair of IDs along with the number of overlapping tags. Pairs with no overlapping tags never appear, since the inner merge only matches shared tags.
The count call counts the overlapping tags, and reset_index just prettifies the output, since groupby moves the grouped columns into the index.
To see the matching tags themselves, modify the above slightly:
In [2359]: df1.merge(df2, on='Tag').groupby(['Id_x', 'Id_y'])['Tag'].apply(list).reset_index()
Out[2359]:
Id_x Id_y Tag
0 2 3 [c#]
1 2 4 [winforms, type-conversion]
2 2 5 [c#, winforms]
To drop pairs with only one overlapping tag, chain a df.query call onto the first expression:
In [2367]: df1.merge(df2, on='Tag').groupby(['Id_x', 'Id_y']).count().reset_index().query('Tag > 1')
Out[2367]:
Id_x Id_y Tag
1 2 4 2
2 2 5 2

Step 1: List all the tags.
Step 2: Create a binary representation of each question's tags, i.e. use a 1 or 0 bit to indicate whether the question has each tag.
Step 3: To find the IDs that share a tag, decode that binary representation with a simple apply function.
In terms of processing speed it should be all right. However, if the number of distinct tags is very large, there may be memory issues. If you only need to find questions sharing a tag with a single Id, I suggest writing a simple function and calling df.apply. If you need to check many IDs and find the questions sharing tags with each, the approach above will be better.
(Intended to leave it as comment, but not enough reputation... sigh)
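A minimal sketch of that one-hot idea, assuming comma-separated Tag strings as in the question (the frames and names below are illustrative): str.get_dummies builds the binary representation, and a matrix product then counts the shared tags for every pair of Ids.

import pandas as pd

df1 = pd.DataFrame({'Id': [2],
                    'Tag': ['c#,winforms,type-conversion,decimal,opacity']})
df2 = pd.DataFrame({'Id': [3, 4, 5],
                    'Tag': ['c#', 'winforms,type-conversion', 'winforms,c#']})

# Steps 1-2: one binary column per tag (1 = the question has that tag).
a = df1.set_index('Id')['Tag'].str.get_dummies(sep=',')
b = df2.set_index('Id')['Tag'].str.get_dummies(sep=',')
a, b = a.align(b, join='outer', axis=1, fill_value=0)  # same tag columns in both frames

# Step 3: the matrix product counts shared tags for every (group A Id, group B Id) pair.
overlap = a.dot(b.T)  # rows: group A Ids, columns: group B Ids, values: number of shared tags

With 13 million rows in group B, the dense 0/1 matrix is the memory concern mentioned above; a sparse representation (e.g. scipy.sparse) is one way around that.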

Related

Pandas: using groupby and nunique taking time into account

I have a dataframe in this form:
A B time
1 2 2019-01-03
1 3 2018-04-05
1 4 2020-01-01
1 4 2020-02-02
where A and B contain some integer identifiers.
I want to measure the number of different identifiers each A has interacted with. To do this I usually simply do
df.groupby('A')['B'].nunique()
I now have to do a slightly different thing: each A identifier has a date assigned (different for each identifier) that splits its interactions into 2 parts: the ones happening before that date and the ones happening after it. The same operation as before (counting the number of unique B values interacted with) needs to be done for both parts separately.
For example, if the date for A=1 was 2018-07-01, the output would be
A before after
1 1 2
In the real data, A contains millions of different identifiers, each with its unique date assigned.
EDITED
To be clearer, I added a line to df. I want to count the number of different values of B each A interacts with before and after the date.
I would map A to its assigned date (here via a dict date_dict, mapping each A to its date), compare those with df['time'] and then use groupby().value_counts():
(df['A'].map(date_dict)
    .gt(df['time'])
    .groupby(df['A'])
    .value_counts()
    .unstack()
    .rename({False: 'after', True: 'before'}, axis=1)
)
Output:
after before
A
1 2 1
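Note that value_counts counts rows, while the edited question asks for distinct B values (B=4 appears twice after the split date), so nunique on B is what's needed there. A self-contained sketch of that variant, assuming a hypothetical date_dict that maps each A to its split date:

import pandas as pd

df = pd.DataFrame({'A': [1, 1, 1, 1],
                   'B': [2, 3, 4, 4],
                   'time': pd.to_datetime(['2019-01-03', '2018-04-05',
                                           '2020-01-01', '2020-02-02'])})
date_dict = {1: pd.Timestamp('2018-07-01')}  # assumed per-A split dates

# 'before' where the interaction happened before A's assigned date, else 'after'.
side = (df['A'].map(date_dict).gt(df['time'])
        .map({True: 'before', False: 'after'})
        .rename('when'))

# Count distinct B values per A on each side of the date.
out = df['B'].groupby([df['A'], side]).nunique().unstack()
# when  after  before
# A
# 1         2       1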

Merge pandas dataframe, with column operation

I searched the archive but did not find what I wanted (probably because I don't really know what keywords to use).
Here is my problem: I have a bunch of dataframes that need to be merged; I also want to update the values of a subset of columns with the sum across the dataframes.
For example, I have two dataframes, df1 and df2:
df1 = pd.DataFrame([[1, 2], [1, 3], [0, 4]], columns=["a", "b"])
df2 = pd.DataFrame([[1, 5], [0, 6]], columns=["a", "b"], index=[0, 2])
df1:            df2:
   a  b            a  b
0  1  2         0  1  5
1  1  3         2  0  6
2  0  4
After merging, I'd like column 'b' to be updated with the sum of the matched records, while column 'a' should stay just like in df1 (or df2, I don't really care):
a b
0 1 7
1 1 3
2 0 10
Now, expand this to merging three or more data frames.
Are there straightforward, built-in tricks to do this, or do I need to process them one by one, line by line?
===== Edit / Clarification =====
In the real-world example, each data frame may contain indexes that are not in the other data frames. In this case, the merged data frame should contain all of them and update the shared entries/indexes with the sum (or some other operation).
This is only a partial, not complete, solution yet, but the main point is solved:
df3 = pd.concat([df1, df2], join = "outer", axis=1)
df4 = df3.b.sum(axis=1)
df3 will have two 'a' columns and two 'b' columns. The sum() call on df3.b adds the two 'b' columns and ignores NaNs, so df4 now holds column 'b' as the sum of df1's and df2's 'b' columns, over all the indexes.
This did not solve column 'a', though. In my real case there are quite a few NaNs in df3.a, while the other values within each row of df3.a should be the same. I haven't found a straightforward way to build a column 'a' in df4 filled with the non-NaN values. I am now searching for a "count"-like function to get the occurrence of elements in the rows of df3.a (imagine it has a few dozen 'a' columns).
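One way to get both columns at once, sketched under the assumption (from the clarification above) that the shared row labels are what the frames should be matched on: stack the frames with pd.concat and group by the index, summing 'b' and taking the first available 'a'.

import pandas as pd

df1 = pd.DataFrame([[1, 2], [1, 3], [0, 4]], columns=["a", "b"])
df2 = pd.DataFrame([[1, 5], [0, 6]], columns=["a", "b"], index=[0, 2])

merged = pd.concat([df1, df2])                        # rows stacked, original indexes kept
result = merged.groupby(level=0).agg({'a': 'first', 'b': 'sum'})
#    a   b
# 0  1   7
# 1  1   3
# 2  0  10

The same two lines work unchanged for three or more frames (pass them all to pd.concat), and indexes present in only one frame simply form their own one-row groups.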

How to use python pandas groupby or .DataFrameGroupBy objects to create unique list of combinations

Is there a more efficient way to use pandas groupby, or a pandas.core.groupby.DataFrameGroupBy object, to create a unique list, series or dataframe of the unique combinations of 2 of N columns? E.g., if I have the columns Date, Name, Item Purchased and I just want to know the unique Name and Date combinations, this works fine:
y = x.groupby(['Date','Name']).count()
y = y.reset_index()[['Date', 'Name']]
but I feel like there should be a cleaner way using
y = x.groupby(['Date','Name'])
but y.index gives me an error, although y.keys works. This actually leads me to the general question: what are pandas.core.groupby.DataFrameGroupBy objects convenient for?
Thanks!
You don't need to use -- and in fact shouldn't use -- groupby here. You could use drop_duplicates to get unique rows instead:
x.drop_duplicates(['Date','Name'])
Demo:
In [156]: x = pd.DataFrame({'Date':[0,1,2]*2, 'Name':list('ABC')*2})
In [158]: x
Out[158]:
Date Name
0 0 A
1 1 B
2 2 C
3 0 A
4 1 B
5 2 C
In [160]: x.drop_duplicates(['Date','Name'])
Out[160]:
Date Name
0 0 A
1 1 B
2 2 C
You shouldn't use groupby here because:
1. x.groupby(['Date','Name']).count() performs a count of the number of elements in each group, but the count is never used -- it's a wasted computation.
2. x.groupby(['Date','Name']).count() raises an AttributeError if x has only the Date and Name columns.
3. drop_duplicates is much, much faster for this purpose.
Use groupby when you want to perform some operation on each group, such as counting the number of elements in each group, or computing some statistic (e.g. a sum or mean, etc.) per group.
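For instance, a small illustrative example of the kind of per-group work where groupby is the right tool (the data below is made up, with column names following the question):

import pandas as pd

x = pd.DataFrame({'Date': [0, 0, 1, 1],
                  'Name': ['A', 'A', 'B', 'B'],
                  'Item Purchased': ['pen', 'ink', 'pen', 'pen']})

# Per (Date, Name) group: how many purchases and how many distinct items.
summary = (x.groupby(['Date', 'Name'])['Item Purchased']
             .agg(['size', 'nunique'])
             .reset_index())
#    Date Name  size  nunique
# 0     0    A     2        2
# 1     1    B     2        1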

pandas dataframe groupby and get arbitrary member of each group

This question is similar to this other question.
I have a pandas dataframe. I want to split it into groups, and select an arbitrary member of each group, defined elsewhere.
Example: I have a dataframe that can be divided into 6 groups of 4 observations each. I want to extract the observations according to:
selected = [0,3,2,3,1,3]
This is very similar to
df.groupby('groupvar').nth(n)
But, crucially, n varies for each group according to the selected list.
Thanks!
Typically, everything that you do within groupby should be group-independent: within any groupby.apply(), you only get the group itself, not its context. An alternative is to compute the positional index into the whole sample (index below) from the within-group positions (here, selected). Note that the dataset must be sorted by group if you want to apply the following.
I use a dataframe test, from which I want to select the positions given by selected:
In[231]: test
Out[231]:
score
name
0 A -0.208392
1 A -0.103659
2 A 1.645287
0 B 0.119709
1 B -0.047639
2 B -0.479155
0 C -0.415372
1 C -1.390416
2 C -0.384158
3 C -1.328278
selected = [0, 2, 1]
import numpy as np
c = test.groupby(level=1).count()
In[242]: index = c.shift(1).cumsum().add(np.array([selected]).T, fill_value=0)
In[243]: index
Out[243]:
score
name
A 0
B 5
C 7
In[255]: test.iloc[index.values[:,0]]
Out[255]:
score
name
0 A -0.208392
2 B -0.479155
1 C -1.390416
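An alternative sketch that avoids building the positional index by hand, using groupby.cumcount (the data and the pos mapping below are illustrative): number the rows within each group and keep the row whose within-group position matches the one requested for that group. This also works whether or not the frame is sorted by group.

import numpy as np
import pandas as pd

df = pd.DataFrame({'name': list('AAABBBCCCC'),
                   'score': np.arange(10, dtype=float)})
selected = [0, 2, 1]                        # position to take from A, B, C respectively
pos = dict(zip(['A', 'B', 'C'], selected))  # group -> desired within-group position

# cumcount numbers rows 0, 1, 2, ... within each group.
picked = df[df.groupby('name').cumcount() == df['name'].map(pos)]
#   name  score
# 0    A    0.0
# 5    B    5.0
# 7    C    7.0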

Only allow one to one mapping between two columns in pandas dataframe

I have a two-column dataframe df whose rows are distinct; one element in one column can map to one or more elements in the other column. I want to filter OUT those elements, so that in the final dataframe an element in one column maps only to a unique element in the other column.
What I am doing is to groupby one column and count the duplicates, remove rows with counts of more than 1, and then do it again for the other column. I am wondering if there is a better, simpler way.
Thanks
edit1: I just realized my solution is INCORRECT: removing the multi-mapping elements in column A changes the mappings in column B. Consider the following example:
A B
1 4
1 3
2 4
1 maps to 3,4, so the first two rows should be removed; and 4 maps to 1,2, so the third row should be removed as well. The final table should be empty. However, my solution will keep the last row.
Can anyone provide a fast and simple solution? Thanks.
Well, you could do something like the following:
>>> df
A B
0 1 4
1 1 3
2 2 4
3 3 5
You only want to keep a row if no other row has that value of 'A' and no other row has that value of 'B'. Only the row at index 3 meets those conditions in this example:
>>> Aone = df.groupby('A').filter(lambda x: len(x) == 1)
>>> Bone = df.groupby('B').filter(lambda x: len(x) == 1)
>>> Aone.merge(Bone,on=['A','B'],how='inner')
A B
0 3 5
Explanation:
>>> Aone = df.groupby('A').filter(lambda x: len(x) == 1)
>>> Aone
A B
2 2 4
3 3 5
The above grabs the rows that may be allowed based on looking at column 'A' alone.
>>> Bone = df.groupby('B').filter(lambda x: len(x) == 1)
>>> Bone
A B
1 1 3
3 3 5
The above grabs the rows that may be allowed based on looking at column 'B' alone. Merging the two then gives the intersection, leaving only the rows that meet both conditions:
>>> Aone.merge(Bone,on=['A','B'],how='inner')
Note that you could also do a similar thing using groupby/transform, but transform tends to be slow, so I didn't include it as an alternative.
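For reference, a sketch of that transform-based variant (same data as above); it checks both columns in a single pass over the original frame, though it may indeed be slower on large data:

import pandas as pd

df = pd.DataFrame({'A': [1, 1, 2, 3], 'B': [4, 3, 4, 5]})  # same example as above

# Keep rows whose 'A' value occurs exactly once AND whose 'B' value occurs exactly once.
mask = (df.groupby('A')['A'].transform('size') == 1) & \
       (df.groupby('B')['B'].transform('size') == 1)
one_to_one = df[mask]
#    A  B
# 3  3  5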
