I'm trying to delete columns with duplicate data in pandas. For example, in the following data, 'one' and 'three' hold the same data under different column names:
df1 = pd.DataFrame({'one': [1, 2, 3, 4], 'two': ['a', 'b', 'c', 'd'], 'three': [1, 2, 3, 4]})
one two three
0 1 a 1
1 2 b 2
2 3 c 3
3 4 d 4
I hope to get this result:
one two
0 1 a
1 2 b
2 3 c
3 4 d
The method I use now is:
df2 = df1.T.drop_duplicates().T
But this is too inefficient. Is there a better way?
Hope to get your help, thanks.
I tried to improve efficiency a little, like this:
In [935]: df_int = df1.select_dtypes(include=['int'])
In [933]: df_other = df1.select_dtypes(exclude=['int'])
In [949]: if df_int.T.drop_duplicates().shape[0] == 1:
...: res = pd.concat([df_int.iloc[:,0], df_other], axis=1)
...:
In [950]: res
Out[950]:
one two
0 1 a
1 2 b
2 3 c
3 4 d
To remove the transpose completely, you can do something like this:
In [995]: import numpy as np
In [997]: if (pd.DataFrame(np.diff(df_int.values)).sum() == 0).all():
...: res = pd.concat([df_int.iloc[:,0], df_other], axis=1)
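A more general transpose-free approach (my sketch, not from the thread above) is to fingerprint every column and keep only the first column per fingerprint. hash(tuple(col)) is an assumption that works as long as the cell values are hashable; fingerprints can in principle collide, so treat this as a sketch rather than a guaranteed method:
import pandas as pd

df1 = pd.DataFrame({'one': [1, 2, 3, 4],
                    'two': ['a', 'b', 'c', 'd'],
                    'three': [1, 2, 3, 4]})

# One fingerprint per column; duplicate columns share a fingerprint.
fingerprints = df1.apply(lambda col: hash(tuple(col)))
res = df1.loc[:, ~fingerprints.duplicated()]
print(res)
#    one two
# 0    1   a
# 1    2   b
# 2    3   c
# 3    4   d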
Related
Starting with a dataframe like this:
df = pd.DataFrame({'A': [1, 2, 3, 4, 5], 'B': ['a', 'b', 'b', 'b', 'a']})
A B
0 1 a
1 2 b
2 3 b
3 4 b
4 5 a
What is the best way of getting to a dataframe like this?
pd.DataFrame({'source': [1, 2, 2, 3], 'target': [5, 3, 4, 4]})
source target
0 1 5
1 2 3
2 2 4
3 3 4
Whenever two rows in column A share the same value in column B, I want to save the unique instances of that relationship in a new dataframe.
This is pretty close:
df.groupby('B')['A'].unique()
B
a [1, 5]
b [2, 3, 4]
Name: A, dtype: object
But ideally I'd convert it into a single dataframe now, and my brain has gone kaput.
In your case, you can use itertools.combinations:
import itertools
s = df.groupby('B')['A'].apply(lambda x: set(itertools.combinations(x, 2))).explode().tolist()
out = pd.DataFrame(s, columns=['source', 'target'])
out
Out[312]:
source target
0 1 5
1 3 4
2 2 3
3 2 4
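Note that going through a set makes the row order nondeterministic; if you want the exact layout from the question, sort afterwards (my addition):
out = out.sort_values(['source', 'target']).reset_index(drop=True)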
Use the merge function:
df.merge(df, how = "outer", on = ["B"]).query("A_x < A_y")
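To match the requested source/target layout exactly, the merge result can be tidied up like this (my sketch):
out = (df.merge(df, how='outer', on=['B'])
         .query('A_x < A_y')
         .rename(columns={'A_x': 'source', 'A_y': 'target'})
         [['source', 'target']]
         .reset_index(drop=True))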
I need to filter a pandas dataframe with two boolean conditions, meaning I want to keep the rows for which either condition is True.
dataframe:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
                  columns=['a', 'b', 'c'])
output:
a b c
0 1 2 3
1 4 5 6
2 7 8 9
A single filter works:
filter = (df.b == 2)
df = df[filter]
output:
a b c
0 1 2 3
But how can I filter with df.b == 2 or df.b == 5?
I tried:
filter = [(df['b']==2) | (df['b']==5)]
df = df[filter]
print(df)
I get:
ValueError: Item wrong length 1 instead of 3
Any suggestions how to achieve it?
My desired output is:
a b c
0 1 2 3
1 4 5 6
You passed a list as the filter; try this instead (also, better not to use filter as a variable name, since it shadows a Python built-in function):
mask = ((df['b']==2) | (df['b']==5))
df = df[mask]
You can use .isin() as an alternative solution, like below:
values = [2, 5]
df = df[df['b'].isin(values)]
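For completeness, df.query offers a readable spelling of the same filter (my addition):
df = df.query('b == 2 or b == 5')
# equivalently: df.query('b in [2, 5]')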
I have a dataframe like this where the columns are the scores of some metrics:
A B C D
4 3 3 1
2 5 2 2
3 5 2 4
I want to create a new column to summarize which metrics each row scored over a set threshold in, using the column name as a string. So if the threshold was A > 2, B > 3, C > 1, D > 3, I would want the new column to look like this:
A B C D NewCol
4 3 3 1 AC
2 5 2 2 BC
3 5 2 4 ABCD
I tried using a series of np.where:
df['NewCol'] = np.where(df['A'] > 2, 'A', '')
df['NewCol'] = np.where(df['B'] > 3, 'B', '')
etc.
but realized that each assignment overwrites the whole column, so whenever a row didn't meet all four conditions it ended up with only the last matching metric, like so:
A B C D NewCol
4 3 3 1 C
2 5 2 2 C
3 5 2 4 ABCD
I am pretty sure there is an easier and correct way to do this.
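(For what it's worth, a minimal fix to the np.where attempt, sketched here, is to accumulate the pieces by string concatenation instead of overwriting the column:
import numpy as np
import pandas as pd

thresholds = {'A': 2, 'B': 3, 'C': 1, 'D': 3}
newcol = pd.Series('', index=df.index)
for col, t in thresholds.items():
    newcol += np.where(df[col] > t, col, '')  # append the name where the test passes
df['NewCol'] = newcol
The answers below are cleaner ways to do the same thing.)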
You could do:
import pandas as pd
data = [[4, 3, 3, 1],
        [2, 5, 2, 2],
        [3, 5, 2, 4]]
df = pd.DataFrame(data=data, columns=['A', 'B', 'C', 'D'])
th = {'A': 2, 'B': 3, 'C': 1, 'D': 3}
df['result'] = [''.join(k for k in df.columns if record[k] > th[k]) for record in df.to_dict('records')]
print(df)
Output
A B C D result
0 4 3 3 1 AC
1 2 5 2 2 BC
2 3 5 2 4 ABCD
Using dot
s = pd.Series([2, 3, 1, 3], index=df.columns)
df.gt(s, axis=1).dot(df.columns)
Out[179]:
0 AC
1 BC
2 ABCD
dtype: object
#df['New'] = df.gt(s, axis=1).dot(df.columns)
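This works because in Python True * 'A' is 'A' and False * 'A' is '', so dotting the boolean matrix with the column names multiplies each entry by its column name and concatenates whatever survives per row, e.g. True * 'A' + False * 'B' + True * 'C' == 'AC'.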
Another option that operates in an array fashion. It would be interesting to compare performance.
import pandas as pd
import numpy as np
# Data to test.
data = pd.DataFrame(
    [[4, 3, 3, 1],
     [2, 5, 2, 2],
     [3, 5, 2, 4]],
    columns=['A', 'B', 'C', 'D']
)
# Series to hold the thresholds.
thresholds = pd.Series([2, 3, 1, 3], index=['A', 'B', 'C', 'D'])
# Subtract the series from the data, broadcasting, and then use sum to concatenate the strings.
data['result'] = np.where(data - thresholds > 0, data.columns, '').sum(axis=1)
print(data)
Gives:
A B C D result
0 4 3 3 1 AC
1 2 5 2 2 BC
2 3 5 2 4 ABCD
Consider this:
df = pd.DataFrame({'B': ['a', 'a', 'b', 'b'], 'C': [1, 2, 6, 2]})
df
Out[128]:
B C
0 a 1
1 a 2
2 b 6
3 b 2
I want to create a variable that simply corresponds to the ordering of observations after sorting by 'C' within each groupby('B') group.
df.sort_values(['B','C'])
Out[129]:
B C order
0 a 1 1
1 a 2 2
3 b 2 1
2 b 6 2
How can I do that? I am thinking about creating a column of ones and using cumsum, but that seems too clunky...
I think you can use range with len(df):
import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3],
'B': ['a', 'a', 'b'],
'C': [5, 3, 2]})
print(df)
A B C
0 1 a 5
1 2 a 3
2 3 b 2
df.sort_values(by='C', inplace=True)
#or without inplace
#df = df.sort_values(by='C')
print(df)
A B C
2 3 b 2
1 2 a 3
0 1 a 5
df['order'] = range(1,len(df)+1)
print(df)
A B C order
2 3 b 2 1
1 2 a 3 2
0 1 a 5 3
EDIT by comment:
I think you can use groupby with cumcount:
import pandas as pd
df = pd.DataFrame({'B': ['a', 'a', 'b', 'b'], 'C': [1, 2, 6,2]})
df.sort_values(['B','C'], inplace=True)
#or without inplace
#df = df.sort_values(['B','C'])
print(df)
B C
0 a 1
1 a 2
3 b 2
2 b 6
df['order'] = df.groupby('B', sort=False).cumcount() + 1
print(df)
B C order
0 a 1 1
1 a 2 2
3 b 2 1
2 b 6 2
Nothing wrong with Jezrael's answer but there's a simpler (though less general) method in this particular example. Just add groupby to JohnGalt's suggestion of using rank.
>>> df['order'] = df.groupby('B')['C'].rank()
B C order
0 a 1 1.0
1 a 2 2.0
2 b 6 2.0
3 b 2 1.0
In this case you don't really need the ['C'], but it makes the ranking a little more explicit, and if you had other unrelated columns in the dataframe you would need it.
But if you are ranking by more than 1 column, you should use Jezrael's method.
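If you prefer integer labels like cumcount gives, with ties broken by position, rank(method='first') cast to int produces the same ordering (my note, not from the original answer):
df['order'] = df.groupby('B')['C'].rank(method='first').astype(int)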
Given a multi-indexed dataframe, I would like to return only the rows whose entire group (every entry of the inner index level within an outer-index value) satisfies a condition. Here is a small working example:
df = pd.DataFrame({'a': [1, 1, 2, 2], 'b': [1, 2, 3, 4], 'c': [0, 2, 2, 2]})
df = df.set_index(['a', 'b'])
print(df)
out:
c
a b
1 1 0
2 2
2 3 2
4 2
Now, I would like to return the entries for which c > 1 holds for the whole group. A per-row filter like
df[df['c'] > 1]
out:
c
a b
1 2 2
2 3 2
4 2
But I want to get
out:
c
a b
2 3 2
4 2
Any thoughts on how to do this in the most efficient way?
I ended up using groupby:
df.groupby(level=0).filter(lambda x: all(v > 1 for v in x['c']))
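A vectorized alternative (my sketch) computes the condition once and broadcasts the per-group result with transform, avoiding the Python-level loop inside filter:
mask = (df['c'] > 1).groupby(level=0).transform('all')
res = df[mask]
# res keeps only group a=2, where every row has c > 1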