I'm not sure if this is a duplicate question, but here goes.
Assuming I have the following table:
import pandas as pd

lst = [1, 1, 1, 2, 2, 3, 3, 4, 5]
lst2 = ['A', 'A', 'B', 'D', 'E', 'A', 'A', 'A', 'E']
df = pd.DataFrame(list(zip(lst, lst2)), columns=['ID', 'val'])
which produces the following table:
+----+-----+
| ID | Val |
+----+-----+
| 1 | A |
+----+-----+
| 1 | A |
+----+-----+
| 1 | B |
+----+-----+
| 2 | D |
+----+-----+
| 2 | E |
+----+-----+
| 3 | A |
+----+-----+
| 3 | A |
+----+-----+
| 4 | A |
+----+-----+
| 5 | E |
+----+-----+
The goal is to flag the duplicates of Val within each ID group:
+----+-----+--------------+
| ID | Val | is_duplicate |
+----+-----+--------------+
| 1 | A | 1 |
+----+-----+--------------+
| 1 | A | 1 |
+----+-----+--------------+
| 1 | B | 0 |
+----+-----+--------------+
| 2 | D | 0 |
+----+-----+--------------+
| 2 | E | 0 |
+----+-----+--------------+
| 3 | A | 1 |
+----+-----+--------------+
| 3 | A | 1 |
+----+-----+--------------+
| 4 | A | 0 |
+----+-----+--------------+
| 5 | E | 0 |
+----+-----+--------------+
I tried the following code, but it counts duplicates across the whole frame:
df_grouped = df.groupby(['val']).size().reset_index(name='count')
while the following only flags the duplicates:
df.duplicated(subset=['val'])
What would be the best approach for this?
Let's try duplicated:
df['is_dup'] = df.duplicated(subset=['ID', 'val'], keep=False).astype(int)
df
Out[21]:
ID val is_dup
0 1 A 1
1 1 A 1
2 1 B 0
3 2 D 0
4 2 E 0
5 3 A 1
6 3 A 1
7 4 A 0
8 5 E 0
You can use .groupby on the relevant columns and get the count with transform('count'). Comparing that count with > 1 then tells you whether the specified group contains duplicates; the > 1 creates a boolean True/False Series. Finally, to change that to 1 or 0, use .astype(int), which converts True to 1 and False to 0:
df['is_duplicate'] = (df.groupby(['ID','val'])['val'].transform('count') > 1).astype(int)
Out[7]:
   ID val  is_duplicate
0   1   A             1
1   1   A             1
2   1   B             0
3   2   D             0
4   2   E             0
5   3   A             1
6   3   A             1
7   4   A             0
8   5   E             0
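Both answers produce the same flags; as a quick sanity check (a sketch reconstructing the sample frame, not part of either answer), transform('count') > 1 agrees with duplicated(keep=False):

```python
import pandas as pd

# Reconstruct the sample frame from the question
df = pd.DataFrame({
    "ID":  [1, 1, 1, 2, 2, 3, 3, 4, 5],
    "val": ["A", "A", "B", "D", "E", "A", "A", "A", "E"],
})

# Approach 1: group size per (ID, val) pair, flag groups larger than 1
via_count = (df.groupby(["ID", "val"])["val"].transform("count") > 1).astype(int)

# Approach 2: mark every member of a duplicated (ID, val) pair
via_dup = df.duplicated(subset=["ID", "val"], keep=False).astype(int)

print(via_count.tolist())  # [1, 1, 0, 0, 0, 1, 1, 0, 0]
print(via_count.equals(via_dup))  # True
```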
Related
I have various columns in a pandas dataframe that have dummy values and I want to fill them as follows:
Input Columns
+----+----+
| c1 | c2 |
+----+----+
|  0 |  1 |
|  0 |  0 |
|  1 |  0 |
|  0 |  0 |
|  0 |  1 |
|  0 |  1 |
|  1 |  0 |
|  0 |  1 |
+----+----+
Output columns:
+----+----+
| c1 | c2 |
+----+----+
|  0 |  1 |
|  0 |  1 |
|  1 |  1 |
|  1 |  1 |
|  1 |  2 |
|  1 |  3 |
|  2 |  3 |
|  2 |  4 |
+----+----+
How can I get this output in pandas?
This works if there are only 0 and 1 values - use cumulative sum with DataFrame.cumsum:
df1 = df.cumsum()
print (df1)
c1 c2
0 0 1
1 0 1
2 1 1
3 1 1
4 1 2
5 1 3
6 2 3
7 2 4
If there are 0 and other values, you can take the cumulative sum of a mask that tests for values not equal to 0:
df2 = df.ne(0).cumsum()
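A small sketch (with made-up non-binary values) showing why the mask version is needed once values other than 0 and 1 appear:

```python
import pandas as pd

# Made-up values other than 0/1
df = pd.DataFrame({"c1": [0, 0, 5, 0], "c2": [7, 0, 0, 3]})

# Mask of nonzero entries, then a running count of them per column
df2 = df.ne(0).cumsum()
print(df2)
#    c1  c2
# 0   0   1
# 1   0   1
# 2   1   1
# 3   1   2
```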
I have a time series dataframe with the following structure:
| ID | second | speaker1 | speaker2 | company | ... |
|----|--------|----------|----------|---------|-----|
| A | 1 | 1 | 1 | name1 | |
| A | 2 | 1 | 1 | name1 | |
| A | 3 | 1 | 1 | name1 | |
| B | 1 | 1 | 1 | name2 | |
| B | 2 | 1 | 1 | name2 | |
| B | 3 | 1 | 1 | name2 | |
| B | 4 | 1 | 1 | name2 | |
| C | 1 | 1 | 1 | name3 | |
| C | 2 | 1 | 1 | name3 | |
*note that speaker1 and speaker2 can be either 0 or 1; I set them all to 1 for clarity here
I would like to add rows to each group until every group has the same number of rows (where the target number of rows is that of the ID with the most rows).
For every new row, I would like to populate the speaker1 and speaker2 columns with 0s while keeping the values in the other columns the same for that ID.
So the output should be:
| ID | second | speaker1 | speaker2 | company | ... |
|:--:|:------:|:--------:|:--------:|:-------:|:---:|
| A | 1 | 1 | 1 | name1 | |
| A | 2 | 1 | 1 | name1 | |
| A | 3 | 1 | 1 | name1 | |
| A | 4 | 0 | 0 | name1 | |
| B | 1 | 1 | 1 | name2 | |
| B | 2 | 1 | 1 | name2 | |
| B | 3 | 1 | 1 | name2 | |
| B | 4 | 1 | 1 | name2 | |
| C | 1 | 1 | 1 | name3 | |
| C | 2 | 1 | 1 | name3 | |
| C | 3 | 0 | 0 | name3 | |
| C | 4 | 0 | 0 | name3 | |
So far I have tried a groupby and apply, but found it to be extremely slow as I have many rows and columns in this dataframe.
def add_rows_sec(w):
    'input: dataframe grouped by ID, output: dataframe with added rows until max call length'
    while w['second'].max() < clean_data['second'].max():  # if duration is less than max duration in full data set
        last_row = w.iloc[-1]
        last_row['second'] += 1
        last_row['speaker1'] = 0
        last_row['speaker2'] = 0
        return w.append(last_row)
    return w

df.groupby('ID').apply(add_rows_sec).reset_index(drop=True)
Is there a way of doing this with numpy? Something like
condition = w['second'].max() < df['second'].max()
choice = pd.Series([w.ID, w.second + 1, 0, 0, w.company...])
df = np.select(condition, choice, default = np.nan)
Any help is much appreciated!
A different approach with pandas:
construct a DataFrame that is the Cartesian product of ID and second
outer join it back to the original data frame
fill missing values based on your spec
No groupby(), no loops.
df = pd.DataFrame({"ID": ["A","A","A","B","B","B","B","C","C"],
                   "second": ["1","2","3","1","2","3","4","1","2"],
                   "speaker1": ["1","1","1","1","1","1","1","1","1"],
                   "speaker2": ["1","1","1","1","1","1","1","1","1"],
                   "company": ["name1","name1","name1","name2","name2","name2","name2","name3","name3"]})

df2 = (pd.DataFrame({"ID": df["ID"].unique()}).assign(foo=1)
         .merge(pd.DataFrame({"second": df["second"].unique()}).assign(foo=1))
         .drop("foo", axis=1)
         .merge(df, on=["ID", "second"], how="outer"))
df2["company"] = df2["company"].fillna(method="ffill")
df2.fillna(0)
output
ID second speaker1 speaker2 company
0 A 1 1 1 name1
1 A 2 1 1 name1
2 A 3 1 1 name1
3 A 4 0 0 name1
4 B 1 1 1 name2
5 B 2 1 1 name2
6 B 3 1 1 name2
7 B 4 1 1 name2
8 C 1 1 1 name3
9 C 2 1 1 name3
10 C 3 0 0 name3
11 C 4 0 0 name3
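On newer pandas (1.2+), the same Cartesian-product idea can be written with merge(how="cross"), which avoids both the dummy foo column and the ffill on company; a sketch, assuming the same toy frame (with second as integers):

```python
import pandas as pd

# Same toy frame as above, but with second as integers
df = pd.DataFrame({
    "ID": ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "second": [1, 2, 3, 1, 2, 3, 4, 1, 2],
    "speaker1": 1,
    "speaker2": 1,
    "company": ["name1"] * 3 + ["name2"] * 4 + ["name3"] * 2,
})

# Cartesian product of every (ID, company) pair with every observed second
grid = (df[["ID", "company"]].drop_duplicates()
          .merge(pd.DataFrame({"second": sorted(df["second"].unique())}),
                 how="cross"))

# Left-join the real rows back; the gaps get their speaker columns filled with 0
out = grid.merge(df, on=["ID", "company", "second"], how="left")
out[["speaker1", "speaker2"]] = out[["speaker1", "speaker2"]].fillna(0).astype(int)
```

Carrying company inside the grid works because it is constant per ID, so no forward fill is needed.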
My Current excel looks like:
----------------
| Type | Val |
|--------------|
| A | 1 |
|--------------|
| A | 2 |
|--------------|
| B | 3 |
|--------------|
| B | 4 |
|--------------|
| B | 5 |
|--------------|
| C | 6 |
----------------
This is the required excel:
----------------------
| Type | Val | Sum |
|--------------------|
| A | 1 | 3 |
| |------| |
| | 2 | |
|--------------------|
| B | 3 | 12 |
| |------| |
| | 4 | |
| |------| |
| | 5 | |
|--------------------|
| C | 6 | 6 |
----------------------
Is it possible in Python using Pandas or any other module?
IIUC use:
df['Sum'] = df.groupby('Type')['Val'].transform('sum')
df.loc[df[['Type','Sum']].duplicated(), ['Type','Sum']] = ''
print(df)
Type Val Sum
0 A 1 3
1 2
2 B 3 12
3 4
4 5
5 C 6 6
P.S.: you can also set this as the index:
df = df.set_index(['Type','Sum'])  # then export to Excel without passing index=False
For merging the first 2 levels, it is possible to set all 3 columns as a MultiIndex - only the order of the columns differs:
#specify column name after groupby
df['Sum'] = df.groupby('Type')['Val'].transform('sum')
df = df.set_index(['Type','Sum', 'Val'])
df.to_excel('file.xlsx')
But in my opinion it is best to work with the duplicated values:
df['Sum'] = df.groupby('Type')['Val'].transform('sum')
print (df)
Type Val Sum
0 A 1 3
1 A 2 3
2 B 3 12
3 B 4 12
4 B 5 12
5 C 6 6
df.to_excel('file.xlsx', index=False)
You can use
import pandas as pd
df = pd.DataFrame({'Type': ['A', 'A','B','B','B','C'], 'Val': [1,2 ,3,4,5,6]})
df_result = df.merge(df.groupby(by='Type', as_index=False).agg({'Val': 'sum'})
                       .rename(columns={'Val': 'Sum'}),
                     on='Type')
which gives the output as
print(df_result)
Type Val Sum
0 A 1 3
1 A 2 3
2 B 3 12
3 B 4 12
4 B 5 12
5 C 6 6
Is this what you are looking for?
I have a table that looks something like this:
+------------+------------+------------+------------+
| Category_1 | Category_2 | Category_3 | Category_4 |
+------------+------------+------------+------------+
| a | b | b | y |
| a | a | c | y |
| c | c | c | n |
| b | b | c | n |
| a | a | a | y |
+------------+------------+------------+------------+
I'm hoping for a pivot_table like result, with the counts of the frequency for each category. Something like this:
+---+------------+----+----+----+
| | | a | b | c |
+---+------------+----+----+----+
| | Category_1 | 12 | 10 | 40 |
| y | Category_2 | 15 | 48 | 26 |
| | Category_3 | 10 | 2 | 4 |
| | Category_1 | 5 | 6 | 4 |
| n | Category_2 | 9 | 5 | 2 |
| | Category_3 | 8 | 4 | 3 |
+---+------------+----+----+----+
I know I could pull it off by splitting the table, assigning value_counts to column values, then rejoining. Is there a simpler, more 'pythonic' way of pulling this off? I figure it may be along the lines of a pivot paired with a transform, but my tests so far have been ugly at best.
So we need to melt (or stack) your original dataframe, then do pd.crosstab; you can use pd.pivot_table as well.
s = df.set_index('Category_4').stack().reset_index().rename(columns={0: 'value'})
pd.crosstab([s.Category_4, s.level_1], s['value'])
Out[532]:
value a b c
Category_4 level_1
n Category_1 0 1 1
Category_2 0 1 1
Category_3 0 0 2
y Category_1 3 0 0
Category_2 2 1 0
Category_3 1 1 1
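The pd.pivot_table variant mentioned above could look like this (a sketch on the same reshaped frame; the names column and value are my own labels, not from the question):

```python
import pandas as pd

# Sample frame from the question
df = pd.DataFrame({
    "Category_1": ["a", "a", "c", "b", "a"],
    "Category_2": ["b", "a", "c", "b", "a"],
    "Category_3": ["b", "c", "c", "c", "a"],
    "Category_4": ["y", "y", "n", "n", "y"],
})

# Long form: one row per (Category_4, source column, value)
s = (df.set_index("Category_4").stack().reset_index()
       .rename(columns={"level_1": "column", 0: "value"}))

# Count occurrences of each value per (Category_4, column) pair
counts = s.pivot_table(index=["Category_4", "column"], columns="value",
                       aggfunc="size", fill_value=0)
print(counts)
```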
Using get_dummies first, then summing across index levels:
d = pd.get_dummies(df.set_index('Category_4'))
d.columns = d.columns.str.rsplit('_', n=1, expand=True)
d = d.stack(0)
# This shouldn't be necessary but is because the
# index gets bugged and I'm "resetting" it
d.index = pd.MultiIndex.from_tuples(d.index.values)
d.sum(level=[0, 1])
a b c
y Category_1 3 0 0
Category_2 2 1 0
Category_3 1 1 1
n Category_1 0 1 1
Category_2 0 1 1
Category_3 0 0 2
I have a DataFrame that looks something like this:
| event_type | object_id
------ | ------ | ------
0 | A | 1
1 | D | 1
2 | A | 1
3 | D | 1
4 | A | 2
5 | A | 2
6 | D | 2
7 | A | 3
8 | D | 3
9 | A | 3
What I want to do is get the index of the next row where the event_type is A and the object_id is still the same, so as an additional column this would look like this:
| event_type | object_id | next_A
------ | ------ | ------ | ------
0 | A | 1 | 2
1 | D | 1 | 2
2 | A | 1 | NaN
3 | D | 1 | NaN
4 | A | 2 | 5
5 | A | 2 | NaN
6 | D | 2 | NaN
7 | A | 3 | 9
8 | D | 3 | 9
9 | A | 3 | NaN
and so on.
I want to avoid using .apply() because my DataFrame is quite large, is there a vectorized way to do this?
EDIT: for multiple A/D pairs for the same object_id, I'd like it to always use the next index of A, like this:
| event_type | object_id | next_A
------ | ------ | ------ | ------
0 | A | 1 | 2
1 | D | 1 | 2
2 | A | 1 | 4
3 | D | 1 | 4
4 | A | 1 | NaN
You can do it with groupby like:
def populate_next_a(object_df):
    object_df['a_index'] = pd.Series(object_df.index, index=object_df.index)[object_df.event_type == 'A']
    object_df['a_index'].fillna(method='bfill', inplace=True)
    object_df['next_A'] = object_df['a_index'].where(object_df.event_type != 'A', object_df['a_index'].shift(-1))
    object_df.drop('a_index', axis=1, inplace=True)
    return object_df
result = df.groupby(['object_id']).apply(populate_next_a)
print(result)
event_type object_id next_A
0 A 1 2.0
1 D 1 2.0
2 A 1 NaN
3 D 1 NaN
4 A 2 5.0
5 A 2 NaN
6 D 2 NaN
7 A 3 9.0
8 D 3 9.0
9 A 3 NaN
GroupBy.apply will not have as much overhead as a simple apply.
Note that you cannot (yet) store integers with NaN: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#support-for-integer-na - so they end up as float values.
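Since then, pandas (0.24+) has gained the nullable Int64 extension dtype, which can hold integers alongside missing values; a minimal sketch:

```python
import pandas as pd

# Nullable integer dtype (note the capital "I") keeps ints next to missing values
s = pd.Series([2, 9, None, None], dtype="Int64")
print(s.dtype)  # Int64
print(s.isna().sum())  # 2
```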