I have a series with a MultiIndex like this:
import numpy as np
import pandas as pd
buckets = np.repeat(['a','b','c'], [3,5,1])
sequence = [0,1,5,0,1,2,4,50,0]
s = pd.Series(
    np.random.randn(len(sequence)),
    index=pd.MultiIndex.from_tuples(list(zip(buckets, sequence)))
)
# In [6]: s
# Out[6]:
# a  0    -1.106047
#    1     1.665214
#    5     0.279190
# b  0     0.326364
#    1     0.900439
#    2    -0.653940
#    4     0.082270
#    50   -0.255482
# c  0    -0.091730
I'd like to get the s['b'] values where the second index ('sequence') is between 2 and 10.
Slicing on the first index works fine:
s['a':'b']
# Out[109]:
# a  0     1.828176
#    1     0.160496
#    5     0.401985
# b  0    -1.514268
#    1    -0.973915
#    2     1.285553
#    4    -0.194625
#    50   -0.144112
But not on the second, at least not by what seems to be the obvious way:
1) This slices by position (returning the elements at positions 1 through the end), and has nothing to do with the index values:
s['b'][1:10]
# In [61]: s['b'][1:10]
# Out[61]:
# 1 0.900439
# 2 -0.653940
# 4 0.082270
# 50 -0.255482
However, if I reverse the levels, so that the first index is the integer and the second is the string, label slicing works:
In [26]: s
Out[26]:
0 a -0.126299
1 a 1.810928
5 a 0.571873
0 b -0.116108
1 b -0.712184
2 b -1.771264
4 b 0.148961
50 b 0.089683
0 c -0.582578
In [25]: s[0]['a':'b']
Out[25]:
a -0.126299
b -0.116108
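As an aside (a sketch of a workaround using swaplevel, not from the original question): instead of rebuilding the Series with the levels reversed, you can swap them on the fly:
s.swaplevel(0, 1).sort_index().loc[2:10]  # inner level moved out front, then sliced 2..10 across all buckets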
As Robbie-Clarken's answer says, since pandas 0.14 you can pass a slice inside the tuple you pass to loc:
In [11]: s.loc[('b', slice(2, 10))]
Out[11]:
b 2 -0.65394
4 0.08227
dtype: float64
Indeed, you can pass a slice for each level:
In [12]: s.loc[(slice('a', 'b'), slice(2, 10))]
Out[12]:
a 5 0.27919
b 2 -0.65394
4 0.08227
dtype: float64
Note: the slice endpoints are inclusive, since this is label-based slicing.
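On more recent pandas versions, pd.IndexSlice gives the same label slicing with a more readable syntax. A minimal sketch (assuming the index is lexsorted, e.g. via s.sort_index(), otherwise you may hit an UnsortedIndexError):
idx = pd.IndexSlice
s.sort_index().loc[idx['a':'b', 2:10]]  # inclusive label slices on both levels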
Old answer:
You can also do this using:
s.ix[1:10, "b"]
(It's good practice to do this in a single ix/loc/iloc call, since that form allows assignment. Note that ix was deprecated in pandas 0.20 and removed in 1.0, so prefer loc/iloc today.)
This answer was written prior to the introduction of iloc in early 2013 (positional/integer location), which may be preferred in this case. iloc was created to remove the ambiguity of integer-indexed pandas objects and to be more descriptive: "I'm slicing on position".
s["b"].iloc[1:10]
That said, I kinda disagree with the docs that ix is the
"most robust and consistent way"
It's not; the most consistent way is to describe what you're doing:
use loc for labels
use iloc for position
use ix for both (if you really have to)
Remember the Zen of Python:
explicit is better than implicit
Since pandas 0.15.0 this works:
s.loc['b', 2:10]
Output:
b 2 -0.503023
4 0.704880
dtype: float64
With a DataFrame it's slightly different:
df.loc(axis=0)['b', 2:10]
As of pandas 0.14.0 it is possible to slice multi-indexed objects by providing .loc a tuple containing slice objects:
In [2]: s.loc[('b', slice(2, 10))]
Out[2]:
b 2 -1.206052
4 -0.735682
dtype: float64
The best way I can think of is to use select in this case, although even the docs say "This method should be used only when there is no more direct way." See:
Indexing and selecting data
In [116]: s
Out[116]:
a  0     1.724372
   1     0.305923
   5     1.780811
b  0    -0.556650
   1     0.207783
   4    -0.177901
   50    0.289365
c  0     1.168115
In [117]: s.select(lambda x: x[0] == 'b' and 2 <= x[1] <= 10)
Out[117]:
b  4   -0.177901
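A note for current readers: select was deprecated (around pandas 0.21) and later removed; the deprecation message suggested a loc-plus-index-map replacement, roughly along these lines (a sketch, not part of the original answer):
# equivalent of s.select(crit): map the criterion over the index tuples
s.loc[s.index.map(lambda x: x[0] == 'b' and 2 <= x[1] <= 10)]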
Not sure if this is ideal, but it works, by creating a mask:
In [59]: s.index
Out[59]:
MultiIndex
[('a', 0) ('a', 1) ('a', 5) ('b', 0) ('b', 1) ('b', 2) ('b', 4)
('b', 50) ('c', 0)]
In [77]: s[[tpl for tpl in s.index if 2 <= tpl[1] <= 10 and tpl[0] == 'b']]
Out[77]:
b  2   -0.586568
   4    1.559988
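A more idiomatic way to build such a mask, assuming you just want vectorized comparisons per level, is Index.get_level_values (a sketch):
lvl0 = s.index.get_level_values(0)  # the bucket letters
lvl1 = s.index.get_level_values(1)  # the integer sequence
s[(lvl0 == 'b') & (lvl1 >= 2) & (lvl1 <= 10)]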
EDIT: hayden's solution is the way to go.
Related
I've been exploring how to optimize my code and ran across pandas .at method. Per the documentation
Fast label-based scalar accessor
Similarly to loc, at provides label based scalar lookups. You can also set using these indexers.
So I ran some samples:
Setup
import pandas as pd
import numpy as np
from string import ascii_letters, ascii_lowercase, ascii_uppercase
lt = list(ascii_letters)
lc = list(ascii_lowercase)
uc = list(ascii_uppercase)
def gdf(rows, cols, seed=None):
    """rows and cols are what you'd pass
    to pd.MultiIndex.from_product()"""
    gmi = pd.MultiIndex.from_product
    df = pd.DataFrame(index=gmi(rows), columns=gmi(cols))
    np.random.seed(seed)
    df.iloc[:, :] = np.random.rand(*df.shape)
    return df
seed = [3, 1415]
df = gdf([lc, uc], [lc, uc], seed)
print(df.head().T.head().T)
df looks like:
a
A B C D E
a A 0.444939 0.407554 0.460148 0.465239 0.462691
B 0.032746 0.485650 0.503892 0.351520 0.061569
C 0.777350 0.047677 0.250667 0.602878 0.570528
D 0.927783 0.653868 0.381103 0.959544 0.033253
E 0.191985 0.304597 0.195106 0.370921 0.631576
Let's use .at and .loc and ensure I get the same thing:
print "using .loc", df.loc[('a', 'A'), ('c', 'C')]
print "using .at ", df.at[('a', 'A'), ('c', 'C')]
using .loc 0.37374090276
using .at 0.37374090276
Test speed using .loc
%%timeit
df.loc[('a', 'A'), ('c', 'C')]
10000 loops, best of 3: 180 µs per loop
Test speed using .at
%%timeit
df.at[('a', 'A'), ('c', 'C')]
The slowest run took 6.11 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 8 µs per loop
This looks to be a huge speed increase. Even the slowest run (6.11 x 8 = roughly 49 µs) is a lot faster than 180 µs.
Question
What are the limitations of .at? I'm motivated to use it. The documentation says it's similar to .loc but it doesn't behave similarly. Example:
# small df
sdf = gdf([lc[:2]], [uc[:2]], seed)
print(sdf.loc[:, :])
A B
a 0.444939 0.407554
b 0.460148 0.465239
whereas print(sdf.at[:, :]) results in TypeError: unhashable type
So obviously not the same even if the intent is to be similar.
That said, who can provide guidance on what can and cannot be done with the .at method?
Update: df.get_value is deprecated as of version 0.21.0. Using df.at or df.iat is the recommended method going forward.
df.at can only access a single value at a time.
df.loc can select multiple rows and/or columns.
Note that there is also df.get_value, which may be even quicker at accessing single values:
In [25]: %timeit df.loc[('a', 'A'), ('c', 'C')]
10000 loops, best of 3: 187 µs per loop
In [26]: %timeit df.at[('a', 'A'), ('c', 'C')]
100000 loops, best of 3: 8.33 µs per loop
In [35]: %timeit df.get_value(('a', 'A'), ('c', 'C'))
100000 loops, best of 3: 3.62 µs per loop
Under the hood, df.at[...] calls df.get_value, but it also does some type checking on the keys.
As you asked about the limitations of .at, here is one thing I recently ran into (using pandas 0.22). Let's use the example from the documentation:
df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]], index=[4, 5, 6], columns=['A', 'B', 'C'])
df2 = df.copy()
A B C
4 0 2 3
5 0 4 1
6 10 20 30
If I now do
df.at[4, 'B'] = 100
the result looks as expected
A B C
4 0 100 3
5 0 4 1
6 10 20 30
However, when I try to do
df.at[4, 'C'] = 10.05
it seems that .at tries to preserve the datatype (here: int):
A B C
4 0 100 10
5 0 4 1
6 10 20 30
That seems to be a difference from .loc:
df2.loc[4, 'C'] = 10.05
yields the desired
A B C
4 0 2 10.05
5 0 4 1.00
6 10 20 30.00
The risky thing in the example above is that it happens silently (the conversion from float to int). When one tries the same with strings it will throw an error:
df.at[5, 'A'] = 'a_string'
ValueError: invalid literal for int() with base 10: 'a_string'
It will work, however, if one uses a string on which int() actually works, as noted by @n1k31t4 in the comments, e.g.
df.at[5, 'A'] = '123'
A B C
4 0 2 3
5 123 4 1
6 10 20 30
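A defensive workaround for this silent truncation, if you control the frame, is to widen the column's dtype before assigning; a sketch (note that the exact .at casting behavior has changed across pandas versions, so verify on yours):
df['C'] = df['C'].astype(float)  # widen the int column first
df.at[4, 'C'] = 10.05            # now stored without truncation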
Adding to the above, the pandas documentation for the at function states:
Access a single value for a row/column label pair.
Similar to loc, in that both provide label-based lookups. Use at if
you only need to get or set a single value in a DataFrame or Series.
For setting data loc and at are similar, for example:
df = pd.DataFrame({'A': [1,2,3], 'B': [11,22,33]}, index=[0,0,1])
Both loc and at will produce the same result
df.at[0, 'A'] = [101,102]
df.loc[0, 'A'] = [101,102]
A B
0 101 11
0 102 22
1 3 33
df.at[0, 'A'] = 103
df.loc[0, 'A'] = 103
A B
0 103 11
0 103 22
1 3 33
Also, for accessing a single value, both are the same
df.loc[1, 'A'] # returns a single value (<class 'numpy.int64'>)
df.at[1, 'A'] # returns a single value (<class 'numpy.int64'>)
3
However, when the label matches multiple values, loc will return a group of rows/cols from the DataFrame, while at will return an array of values
df.loc[0, 'A'] # returns a Series (<class 'pandas.core.series.Series'>)
0 103
0 103
Name: A, dtype: int64
df.at[0, 'A'] # returns array of values (<class 'numpy.ndarray'>)
array([103, 103])
What's more, loc can be used to match a group of rows/cols and can be given only an index, while at must also receive the column
df.loc[0] # returns a DataFrame view (<class 'pandas.core.frame.DataFrame'>)
A B
0 103 11
0 103 22
# df.at[0] # ERROR: must receive column
.at is an optimized data access method compared to .loc.
.loc selects all the elements matched by the given row index and column labels, whereas .at selects the single element at the given row index and column label.
Also, .at takes one row and one column as its arguments, whereas .loc may take multiple rows and columns. The output of .at is a single element, while .loc may be a Series or a DataFrame.
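A minimal sketch of that contrast, using a hypothetical toy frame:
toy = pd.DataFrame({'A': [1, 2]}, index=['x', 'y'])
toy.at['x', 'A']  # scalar: 1
toy.loc['x']      # a Series for the whole row
# toy.at['x']     # raises: .at needs both a row and a column label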
I have noticed a quirky thing. Let's say A and B are DataFrames.
A is:
A
a b c
0 x 1 a
1 y 2 b
2 z 3 c
3 w 4 d
B is:
B
a b c
0 1 x a
1 2 y b
2 3 z c
3 4 w d
As we can see above, the elements under columns a and b in A and B are different, but A.equals(B) yields True.
A==B correctly shows that the elements are not equal:
A==B
a b c
0 False False True
1 False False True
2 False False True
3 False False True
Question: Can someone please explain why .equals() yields True? I researched this topic on SO, and per the contract of pandas.DataFrame.equals, I'd expect it to return False. I am a beginner, so I'd appreciate any help.
Here are the JSON representations and ._data of A and B:
A
`A.to_json()`
Out[114]: '{"a":{"0":"x","1":"y","2":"z","3":"w"},"b":{"0":1,"1":2,"2":3,"3":4},"c":{"0":"a","1":"b","2":"c","3":"d"}}'
and A._data is
BlockManager
Items: Index(['a', 'b', 'c'], dtype='object')
Axis 1: RangeIndex(start=0, stop=4, step=1)
IntBlock: slice(1, 2, 1), 1 x 4, dtype: int64
ObjectBlock: slice(0, 4, 2), 2 x 4, dtype: object
B
B's json format:
B.to_json()
'{"a":{"0":1,"1":2,"2":3,"3":4},"b":{"0":"x","1":"y","2":"z","3":"w"},"c":{"0":"a","1":"b","2":"c","3":"d"}}'
B._data
BlockManager
Items: Index(['a', 'b', 'c'], dtype='object')
Axis 1: RangeIndex(start=0, stop=4, step=1)
IntBlock: slice(0, 1, 1), 1 x 4, dtype: int64
ObjectBlock: slice(1, 3, 1), 2 x 4, dtype: object
As an alternative to sacul's and U9-Forward's answers, I've done some further analysis, and it looks like the reason you are seeing True rather than the False you expected might have more to do with this line of the docs:
This function requires that the elements have the same dtype as their respective elements in the other Series or DataFrame.
With the above dataframes (plus two further test frames, C and D, whose definitions aren't shown here), when I run .equals(), this is what is returned:
>>> A.equals(B)
Out: True
>>> B.equals(C)
Out: False
These two align with what the other answers are saying: A and B have the same shape and the same elements, so they are equal, while B and C have the same shape but different elements, so they are not.
On the other hand:
>>> A.equals(D)
Out: False
Here A and D have the same shape and the same elements, but equals still returns False. The difference between this case and the one above is that all of the dtypes in the comparison match up, as the docs quote above says: A and D both have the dtypes str (object), int, str (object).
As in the answer you linked in your question, essentially the behaviour of pandas.DataFrame.equals mimics numpy.array_equal.
The docs for np.array_equal state that it returns:
True if two arrays have the same shape and elements, False otherwise.
Which your two dataframes satisfy.
From the docs:
Determines if two NDFrame objects contain the same elements. NaNs in the same location are considered equal.
Determines if two NDFrame objects contain the same elements!!!
ELEMENTS, not COLUMNS.
So that's why it returns True.
If you want it to return False, checking the columns' values as well, do:
print((A==B).all().all())
Output:
False
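If what you actually want is a check that also requires matching per-column dtypes (so that A vs B above comes out False), one hedged sketch of a stricter comparison:
def strict_equals(x, y):
    # element equality plus identical column labels and per-column dtypes
    return (x.columns.equals(y.columns)
            and (x.dtypes == y.dtypes).all()
            and x.equals(y))
strict_equals(A, B)  # False: column 'a' is object in A but int64 in B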
I am currently working with a pandas DataFrame that uses tuples for column names. When attempting to use .loc as I would for normal columns, the tuple names cause it to error out.
Test code is below:
import pandas as pd
import numpy as np
df1 = pd.DataFrame(np.random.randn(6, 4),
                   columns=[('a', '1'), ('b', '2'), ('c', '3'), 'nontuple'])
df1.loc[:3, 'nontuple']
df1.loc[:3, ('c','3')]
The 'nontuple' line works as expected and displays that column for rows 0 through 3. The tuple line does not work and instead gives the error:
KeyError: "None of [('c', '3')] are in the [columns]"
Any idea how to resolve this issue short of not using tuples as column names?
Also, I have found that the code below works even though the .loc doesn't (note that ix has since been removed from pandas):
df1.ix[:3][('c','3')]
Documentation
access by tuple, returns DF:
In [508]: df1.loc[:3, [('c', '3')]]
Out[508]:
(c, 3)
0 1.433004
1 -0.731705
2 -1.633657
3 0.565320
access by non-tuple column, returns series:
In [514]: df1.loc[:3, 'nontuple']
Out[514]:
0 0.783621
1 1.984459
2 -2.211271
3 -0.532457
Name: nontuple, dtype: float64
access by non-tuple column, returns DF:
In [517]: df1.loc[:3, ['nontuple']]
Out[517]:
nontuple
0 0.783621
1 1.984459
2 -2.211271
3 -0.532457
access any column by its number, returns series:
In [515]: df1.iloc[:3, 2]
Out[515]:
0 1.433004
1 -0.731705
2 -1.633657
Name: (c, 3), dtype: float64
access any column(s) by its number, returns DF:
In [516]: df1.iloc[:3, [2]]
Out[516]:
(c, 3)
0 1.433004
1 -0.731705
2 -1.633657
NOTE: pay attention to the difference between .loc[] and .iloc[]: they filter rows differently!
this works like Python's slicing:
In [531]: df1.iloc[0:2]
Out[531]:
(a, 1) (b, 2) (c, 3) nontuple
0 0.650961 -1.130000 1.433004 0.783621
1 0.073805 1.907998 -0.731705 1.984459
this includes the right index boundary:
In [532]: df1.loc[0:2]
Out[532]:
(a, 1) (b, 2) (c, 3) nontuple
0 0.650961 -1.130000 1.433004 0.783621
1 0.073805 1.907998 -0.731705 1.984459
2 -1.511939 0.167122 -1.633657 -2.211271
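One workaround, if you control how the frame is built, is to make the columns a proper MultiIndex instead of mixing tuples and plain strings; a sketch (the ('nontuple', '') padding is my own convention here, not from the question):
df1.columns = pd.MultiIndex.from_tuples(
    [c if isinstance(c, tuple) else (c, '') for c in df1.columns])
df1.loc[:3, ('c', '3')]        # now selects that column by label
df1.loc[:3, ('nontuple', '')]  # the padded former string column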
Say I have a list (or numpy array or pandas series) as below
l = [1,2,6,6,4,2,4]
I want to return a list of each value's ordinal: 1-->1 (smallest), 2-->2, 4-->3, 6-->4, so that
to_ordinal(l) == [1,2,4,4,3,2,3]
and I want it to also work for a list of strings as input.
I can try
s = numpy.unique(l)
then loop over each element in l and find its index in s. I just wonder if there is a more direct method?
In pandas you can call rank and pass method='dense':
In [18]:
l = [1,2,6,6,4,2,4]
s = pd.Series(l)
s.rank(method='dense')
Out[18]:
0 1
1 2
2 4
3 4
4 3
5 2
6 3
dtype: float64
This also works for strings:
In [19]:
l = ['aaa','abc','aab','aba']
s = pd.Series(l)
Out[19]:
0 aaa
1 abc
2 aab
3 aba
dtype: object
In [20]:
s.rank(method='dense')
Out[20]:
0 1
1 4
2 2
3 3
dtype: float64
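Since rank returns floats, one small hedged addition: cast back if you want the integer list the question asked for:
s = pd.Series([1, 2, 6, 6, 4, 2, 4])
s.rank(method='dense').astype(int).tolist()  # [1, 2, 4, 4, 3, 2, 3]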
I don't think that there is a "direct method" for this.[1] The most straightforward way that I can think to do it is to sort a set of the elements:
sorted_unique = sorted(set(l))
Then make a dictionary mapping each value to its ordinal:
ordinal_map = {val: i for i, val in enumerate(sorted_unique, 1)}
Now one more pass over the data and we can get your list:
ordinals = [ordinal_map[val] for val in l]
Note that this is roughly an O(N log N) algorithm (due to the sort), and the more non-unique elements you have, the closer it gets to O(N).
[1] Certainly not in vanilla Python, and I don't know of anything in numpy. I'm less familiar with pandas so I can't speak to that.
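For what it's worth, numpy does have something close to a direct method: np.unique with return_inverse=True gives 0-based dense ranks in one call, and it works for strings too. A short sketch:
import numpy as np
l = [1, 2, 6, 6, 4, 2, 4]
_, inv = np.unique(l, return_inverse=True)  # positions in the sorted uniques
ordinals = (inv + 1).tolist()               # [1, 2, 4, 4, 3, 2, 3]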