Selecting different rows from different GroupBy groups - python

As opposed to GroupBy.nth, which selects the same index for each group, I would like to take specific indices from each group. For example, if my GroupBy object consisted of four groups and I would like the 1st, 5th, 10th, and 15th from each respectively, then I would like to be able to pass x = [0, 4, 9, 14] and get those rows.
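For contrast, a minimal sketch (with hypothetical data) of what GroupBy.nth does — it takes the same position from every group:

```python
import pandas as pd

# hypothetical data: three groups of different sizes
df = pd.DataFrame({'group': ['a', 'a', 'b', 'b', 'b', 'c', 'c'],
                   'value': [1, 2, 3, 4, 5, 6, 7]})

# GroupBy.nth(1) takes the row at the SAME position (here the second)
# from every group; groups with fewer rows are simply dropped
second_rows = df.groupby('group')['value'].nth(1)
```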

This is kind of a strange thing to want; is there a reason?
In any case, to do what you want, try this:
import pandas as pd

df = pd.DataFrame([['a', 1], ['a', 2],
                   ['b', 3], ['b', 4], ['b', 5],
                   ['c', 6], ['c', 7]],
                  columns=['group', 'value'])

def index_getter(which):
    def get(series):
        return series.iloc[which[series.name]]
    return get

which = {'a': 0, 'b': 2, 'c': 1}
df.groupby('group')['value'].apply(index_getter(which))
Which results in:
group
a    1
b    5
c    7
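A sketch of an alternative (assuming the same df and which mapping as above) that avoids apply entirely: number the rows within each group with cumcount and keep the rows whose within-group position matches the requested one:

```python
import pandas as pd

df = pd.DataFrame([['a', 1], ['a', 2],
                   ['b', 3], ['b', 4], ['b', 5],
                   ['c', 6], ['c', 7]],
                  columns=['group', 'value'])
which = {'a': 0, 'b': 2, 'c': 1}

# position of each row within its own group: 0, 1, 0, 1, 2, 0, 1
pos = df.groupby('group').cumcount()

# keep the rows whose within-group position equals the requested one
result = df[pos == df['group'].map(which)]
```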

Related

How to get index for all the duplicates in a dataframe (pandas - python)

I have a data frame with multiple columns, and I want to find the duplicates in some of them. My columns go from A to Z. I want to know which lines have the same values in columns A, D, F, K, L, and G.
I tried:
df = df[df.duplicated(keep=False)]
df = df.groupby(df.columns.tolist()).apply(lambda x: tuple(x.index)).tolist()
However, this uses all of the columns.
I also tried
print(df[df.duplicated(['A', 'D', 'F', 'K', 'L', 'P'])])
However, this only returns the indices of the later occurrences. I want the indices of all lines that have the same values.
Your final attempt is close. Instead of grouping by all columns, just use a list of the ones you want to consider:
df = pd.DataFrame({'A': [1, 1, 1, 2, 2, 2],
                   'B': [3, 3, 3, 4, 4, 5],
                   'C': [6, 7, 8, 9, 10, 11]})

res = df.groupby(['A', 'B']).apply(lambda x: x.index.tolist()).reset_index()
print(res)
#    A  B          0
# 0  1  3  [0, 1, 2]
# 1  2  4     [3, 4]
# 2  2  5        [5]
A different groupby layout:
df.index.to_series().groupby([df['A'], df['B']]).apply(list)
Out[449]:
A  B
1  3    [0, 1, 2]
2  4       [3, 4]
   5          [5]
dtype: object
You can also have .groupby return a dict whose keys are the group labels (tuples when grouping by multiple columns) and whose values are the corresponding Index objects:
df.groupby(['A', 'B']).groups
#{(1, 3): Int64Index([0, 1, 2], dtype='int64'),
# (2, 4): Int64Index([3, 4], dtype='int64'),
# (2, 5): Int64Index([5], dtype='int64')}
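Coming back to the asker's original goal — the indices of all rows that repeat on a subset of columns — duplicated with keep=False flags every member of each duplicate set, not just the later occurrences; a sketch on the sample data:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 1, 2, 2, 2],
                   'B': [3, 3, 3, 4, 4, 5],
                   'C': [6, 7, 8, 9, 10, 11]})

# keep=False marks every row in a duplicate set (first occurrences too);
# subset restricts the comparison to the columns you care about
mask = df.duplicated(subset=['A', 'B'], keep=False)
dup_index = df.index[mask].tolist()
```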

plotting a given column name across different data frames in python

All, I have multiple dataframes like
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.array([['a', 1, 2],
                             ['b', 3, 4],
                             ['c', 5, 6]]),
                   columns=['name', 'attr1', 'attr2'])
df2 = pd.DataFrame(np.array([['a', 2, 3],
                             ['b', 4, 5],
                             ['c', 6, 7]]),
                   columns=['name', 'attr1', 'attr2'])
df3 = pd.DataFrame(np.array([['a', 3, 4],
                             ['b', 5, 6],
                             ['c', 7, 8]]),
                   columns=['name', 'attr1', 'attr2'])
Each of these dataframes is generated at a specific time step, say T = [t1, t2, t3]. I would like to plot attr1 or attr2 of the different dataframes as a function of time T, for 'a', 'b', and 'c' all on the same graph:
Plot attr1 vs. time for 'a', 'b', and 'c'.
If I understand correctly, first assign a column T to each of your dataframes, then concatenate the three. Then, you can groupby the name column, iterate through each, and plot T against attr1 or attr2:
import matplotlib.pyplot as plt

dfs = pd.concat([df1.assign(T=1), df2.assign(T=2), df3.assign(T=3)])
for name, data in dfs.groupby('name'):
    plt.plot(data['T'], data['attr2'], label=name)
plt.xlabel('Time')
plt.ylabel('attr2')
plt.legend()
plt.show()
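An alternative sketch using hypothetical numeric frames (note that the np.array construction in the question coerces every value to a string): pivot the concatenated frame to one column per name, after which a single plot call draws all three lines:

```python
import pandas as pd

# hypothetical numeric stand-ins for df1/df2/df3
df1 = pd.DataFrame({'name': ['a', 'b', 'c'], 'attr2': [2, 4, 6]})
df2 = pd.DataFrame({'name': ['a', 'b', 'c'], 'attr2': [3, 5, 7]})
df3 = pd.DataFrame({'name': ['a', 'b', 'c'], 'attr2': [4, 6, 8]})

# tag each frame with its time step, then reshape to one column per name
dfs = pd.concat([df1.assign(T=1), df2.assign(T=2), df3.assign(T=3)])
wide = dfs.pivot(index='T', columns='name', values='attr2')
# wide.plot() would now draw one attr2-vs-T line per name
```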

Can I do a conditional sort on two different columns, but where the order of two columns is reversed based on the secondary condition?

Edit: Since writing this, I remembered a third necessary condition: if the difference between the values at index 1 (time) is greater than or equal to 2, then the rows should be sorted normally by the time column. So because B's time of 6 is within a difference of 2 of T's time of 5, B and T are ordered by address, putting B before T. However, for T and K, because K's time of 7 is 2 greater than T's time of 5, T should come first.
Let's say I have this array
input = [['user_id', 'time', 'address'],
         ['F', 5, 5],
         ['T', 5, 8],
         ['B', 6, 6],
         ['K', 7, 7],
         ['J', 7, 9],
         ['M', 9, 10]]
I'd like to sort the rows -- first in ascending order by index 1 (time). However, secondarily, if index 2 (address) for a given user_id such as 'B' is less than index 2 (address) for another user such as 'T', I'd like user_id 'B' to come before user_id 'T'.
So the final output would look like this:
output = [['user_id', 'time', 'address'],
          ['F', 5, 5],
          ['B', 6, 6],
          ['T', 5, 8],
          ['K', 7, 7],
          ['J', 7, 9],
          ['M', 9, 10]]
If possible, I'd like to do this without Pandas.
>>> import functools
>>> from pprint import pprint
>>>
>>> def compare(item1, item2):
...     # sort by time when the times differ by 2 or more; otherwise by address
...     return item1[1] - item2[1] if abs(item1[1] - item2[1]) >= 2 else item1[2] - item2[2]
...
>>> output = [input[0]] + sorted(input[1:], key=functools.cmp_to_key(compare))
>>> pprint(output)
[['user_id', 'time', 'address'],
 ['F', 5, 5],
 ['B', 6, 6],
 ['T', 5, 8],
 ['K', 7, 7],
 ['J', 7, 9],
 ['M', 9, 10]]
For the built-in function sorted you can provide a custom key function. Here it's enough for the key to return a tuple of columns 1 and 2: rows are ordered first by the value of column 1, and rows with the same value there are ordered by column 2.
data = [['user_id', 'time', 'address'],
['F', 5, 5],
['B', 6, 6],
['T', 5, 8],
['K', 7, 7],
['J', 7, 9],
['M', 9, 10]]
data_sorted = [data[0]] + sorted(data[1:], key=lambda row: (row[1], row[2]))
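The same tuple key can also be spelled with operator.itemgetter, which accepts multiple indices and builds the tuple for you:

```python
from operator import itemgetter

data = [['user_id', 'time', 'address'],
        ['F', 5, 5],
        ['B', 6, 6],
        ['T', 5, 8],
        ['K', 7, 7],
        ['J', 7, 9],
        ['M', 9, 10]]

# itemgetter(1, 2) returns (row[1], row[2]) for each row,
# so rows sort by time first, then by address
data_sorted = [data[0]] + sorted(data[1:], key=itemgetter(1, 2))
```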

Sort two lists of lists by index of inner list [duplicate]

This question already has answers here:
Sorting list based on values from another list
(20 answers)
Closed 5 years ago.
Assume I want to sort a list of lists like explained here:
>>> from operator import itemgetter
>>> L = [[0, 1, 'f'], [4, 2, 't'], [9, 4, 'afsd']]
>>> sorted(L, key=itemgetter(2))
[[9, 4, 'afsd'], [0, 1, 'f'], [4, 2, 't']]
(Or with lambda.) Now I have a second list which I want to sort in the same order, so I need the new order of the indices. sorted() or .sort() do not return indices. How can I do that?
Actually in my case both lists contain numpy arrays. But the numpy sort/argsort aren't intuitive for that case either.
If I understood you correctly, you want to order B in the example below, based on a sorting rule you apply on L. Take a look at this:
L = [[0, 1, 'f'], [4, 2, 't'], [9, 4, 'afsd']]
B = ['a', 'b', 'c']
result = [i for _, i in sorted(zip(L, B), key=lambda x: x[0][2])]
print(result) # ['c', 'a', 'b']
# that corresponds to [[9, 4, 'afsd'], [0, 1, 'f'], [4, 2, 't']]
If I understand correctly, you want to know how the list has been rearranged. i.e. where is the 0th element after sorting, etc.
If so, you are one step away:
L2 = [L.index(x) for x in sorted(L, key=itemgetter(2))]
which gives:
[2, 0, 1]
As tobias points out, this is needlessly complex compared to
list(map(itemgetter(0), sorted(enumerate(L), key=lambda x: x[1][2])))
(the list() call is needed in Python 3, where map returns an iterator).
NumPy
Setup:
import numpy as np
L = np.array([[0, 1, 'f'], [4, 2, 't'], [9, 4, 'afsd']])
S = np.array(['a', 'b', 'c'])
Solution:
print(S[L[:, 2].argsort()])
Output:
['c' 'a' 'b']
Just Python
You could combine both lists, sort them together, and separate them again.
>>> L = [[0, 1, 'f'], [4, 2, 't'], [9, 4, 'afsd']]
>>> S = ['a', 'b', 'c']
>>> L, S = zip(*sorted(zip(L, S), key=lambda x: x[0][2]))
>>> L
([9, 4, 'afsd'], [0, 1, 'f'], [4, 2, 't'])
>>> S
('c', 'a', 'b')
I guess you could do something similar in NumPy as well...

Get DataFrame selection's row positions

Instead of the index labels, I'd like to obtain the row positions, so I can use the result later with df.iloc[row_positions].
This is the example:
df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']}, index=[10, 2, 7])
print(df[df['a'] >= 2].index)
# Int64Index([2, 7], dtype='int64')
# How do I convert the index list [2, 7] to [1, 2] (the row positions)?
# I managed to do this for one index element, but how can I do it for the entire selection/index list?
df.index.get_loc(2)
Update
I could use a list comprehension to apply the selected result on the get_loc function, but perhaps there's some Pandas-built-in function.
You can use where from NumPy:
import numpy as np

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']}, index=[10, 2, 7])
np.where(df.a >= 2)
which returns the row positions:
(array([1, 2], dtype=int64),)
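The positions np.where returns can be fed straight back into iloc; a small sketch continuing the same example:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']}, index=[10, 2, 7])

# np.where on a boolean mask yields integer row positions, not index labels
positions = np.where(df['a'] >= 2)[0]

# iloc accepts those positions directly
selected = df.iloc[positions]
```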
@ssm's answer is what I would normally use. However, to answer your specific query of how to select multiple rows, try this:
df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']}, index=[10, 2, 7])
indices = df[df['a'] >= 2].index
print(df.loc[indices])
More information on label-based indexing with .loc is in the pandas indexing documentation (the older .ix indexer has since been removed from pandas).
[EDIT to answer the specific query]
How do I convert the index list [2, 7] to [1, 2] (the row positions)?
df.index.get_indexer(df[df['a'] >= 2].index)
# array([1, 2])
