I've been exploring how to optimize my code and ran across the pandas .at method. Per the documentation:
Fast label-based scalar accessor
Similarly to loc, at provides label based scalar lookups. You can also set using these indexers.
So I ran some samples:
Setup
import pandas as pd
import numpy as np
from string import ascii_letters, ascii_lowercase, ascii_uppercase
lt = list(ascii_letters)
lc = list(ascii_lowercase)
uc = list(ascii_uppercase)
def gdf(rows, cols, seed=None):
    """rows and cols are what you'd pass
    to pd.MultiIndex.from_product()"""
    gmi = pd.MultiIndex.from_product
    df = pd.DataFrame(index=gmi(rows), columns=gmi(cols))
    np.random.seed(seed)
    df.iloc[:, :] = np.random.rand(*df.shape)
    return df
seed = [3, 1415]
df = gdf([lc, uc], [lc, uc], seed)
print(df.head().T.head().T)
df looks like:
            a
            A         B         C         D         E
a A  0.444939  0.407554  0.460148  0.465239  0.462691
  B  0.032746  0.485650  0.503892  0.351520  0.061569
  C  0.777350  0.047677  0.250667  0.602878  0.570528
  D  0.927783  0.653868  0.381103  0.959544  0.033253
  E  0.191985  0.304597  0.195106  0.370921  0.631576
Let's use .at and .loc and ensure I get the same thing:
print "using .loc", df.loc[('a', 'A'), ('c', 'C')]
print "using .at ", df.at[('a', 'A'), ('c', 'C')]
using .loc 0.37374090276
using .at 0.37374090276
Test speed using .loc
%%timeit
df.loc[('a', 'A'), ('c', 'C')]
10000 loops, best of 3: 180 µs per loop
Test speed using .at
%%timeit
df.at[('a', 'A'), ('c', 'C')]
The slowest run took 6.11 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 8 µs per loop
This looks to be a huge speed increase. Even the slowest run, at 6.11 × 8 µs ≈ 49 µs, is a lot faster than 180 µs.
Question
What are the limitations of .at? I'm motivated to use it. The documentation says it's similar to .loc but it doesn't behave similarly. Example:
# small df
sdf = gdf([lc[:2]], [uc[:2]], seed)
print(sdf.loc[:, :])
A B
a 0.444939 0.407554
b 0.460148 0.465239
whereas print(sdf.at[:, :]) results in TypeError: unhashable type
So obviously not the same even if the intent is to be similar.
That said, can anyone provide guidance on what can and cannot be done with the .at method?
Update: df.get_value is deprecated as of version 0.21.0. Using df.at or df.iat is the recommended method going forward.
df.at can only access a single value at a time.
df.loc can select multiple rows and/or columns.
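To make the contrast concrete, here is a minimal sketch (a toy frame of my own, not the one from the question):

import pandas as pd

toy = pd.DataFrame({'x': [1, 2], 'y': [3, 4]}, index=['r1', 'r2'])

toy.at['r1', 'y']   # scalar lookup: returns 3
toy.loc['r1', 'y']  # also returns 3, via the more general machinery
toy.loc[:, 'y']     # returns a Series; .at cannot do this
# toy.at[:, 'y']    # raises an error: .at only accepts single labels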
Note that there is also df.get_value, which may be even quicker at accessing single values:
In [25]: %timeit df.loc[('a', 'A'), ('c', 'C')]
10000 loops, best of 3: 187 µs per loop
In [26]: %timeit df.at[('a', 'A'), ('c', 'C')]
100000 loops, best of 3: 8.33 µs per loop
In [35]: %timeit df.get_value(('a', 'A'), ('c', 'C'))
100000 loops, best of 3: 3.62 µs per loop
Under the hood, df.at[...] calls df.get_value, but it also does some type checking on the keys.
As you asked about the limitations of .at, here is one thing I recently ran into (using pandas 0.22). Let's use the example from the documentation:
df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]], index=[4, 5, 6], columns=['A', 'B', 'C'])
df2 = df.copy()
A B C
4 0 2 3
5 0 4 1
6 10 20 30
If I now do
df.at[4, 'B'] = 100
the result looks as expected
A B C
4 0 100 3
5 0 4 1
6 10 20 30
However, when I try to do
df.at[4, 'C'] = 10.05
it seems that .at tries to conserve the datatype (here: int):
A B C
4 0 100 10
5 0 4 1
6 10 20 30
That seems to be a difference from .loc:
df2.loc[4, 'C'] = 10.05
yields the desired
A B C
4 0 2 10.05
5 0 4 1.00
6 10 20 30.00
The risky thing in the example above is that it happens silently (the conversion from float to int). When one tries the same with strings it will throw an error:
df.at[5, 'A'] = 'a_string'
ValueError: invalid literal for int() with base 10: 'a_string'
It will work, however, if one uses a string on which int() actually works, as noted by @n1k31t4 in the comments, e.g.
df.at[5, 'A'] = '123'
A B C
4 0 2 3
5 123 4 1
6 10 20 30
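A possible workaround (my own sketch, using the same frame): widen the column's dtype before the scalar assignment, so neither accessor has to coerce the value:

# cast 'C' to float first, so .at no longer truncates 10.05 to 10
df['C'] = df['C'].astype(float)
df.at[4, 'C'] = 10.05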
Adding to the above, the pandas documentation for the at function states:
Access a single value for a row/column label pair.
Similar to loc, in that both provide label-based lookups. Use at if
you only need to get or set a single value in a DataFrame or Series.
For setting data, loc and at are similar. For example:
df = pd.DataFrame({'A': [1,2,3], 'B': [11,22,33]}, index=[0,0,1])
Both loc and at will produce the same result
df.at[0, 'A'] = [101,102]
df.loc[0, 'A'] = [101,102]
A B
0 101 11
0 102 22
1 3 33
df.at[0, 'A'] = 103
df.loc[0, 'A'] = 103
A B
0 103 11
0 103 22
1 3 33
Also, for accessing a single value, both are the same
df.loc[1, 'A'] # returns a single value (<class 'numpy.int64'>)
df.at[1, 'A'] # returns a single value (<class 'numpy.int64'>)
3
However, when matching multiple values, loc will return a group of rows/columns from the DataFrame, while at will return an array of values:
df.loc[0, 'A'] # returns a Series (<class 'pandas.core.series.Series'>)
0 103
0 103
Name: A, dtype: int64
df.at[0, 'A'] # returns array of values (<class 'numpy.ndarray'>)
array([103, 103])
What's more, loc can match a group of rows/columns and can be given only a row index, while at must also be given the column:
df.loc[0] # returns a DataFrame view (<class 'pandas.core.frame.DataFrame'>)
A B
0 103 11
0 103 22
# df.at[0] # ERROR: must receive column
.at is an optimized data-access method compared to .loc.
.loc selects all the elements located by the given row and column labels. .at, by contrast, selects a single element positioned at the given row and column label.
Also, .at takes one row and one column as its arguments, whereas .loc may take multiple rows and columns. The output of .at is a single element, and that of .loc may be a Series or a DataFrame.
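As a compact recap, a sketch against the MultiIndex frame from the question:

df.at[('a', 'A'), ('c', 'C')]  # one full row label, one full column label -> scalar
df.loc['a', 'c']               # partial labels -> DataFrame; not possible with .at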
Related
With a numpy ndarray it is possible to write to multiple columns at a time without making a copy first (as long as they are adjacent). If I wanted to write to the first three columns of an array I would write
a[0,0:3] = 1,2,3 # this is very fast ('a' is a numpy ndarray)
I was hoping that in pandas I would similarly be able to select multiple adjacent columns by "label-slicing" like so (assuming the first 3 columns are labeled 'a','b','c')
a.loc[0,'a':'c'] = 1,2,3 # this works but is very slow ('a' is a pandas DataFrame)
or similarly
a.iloc[0,0:3] = 1,2,3 # this is equally slow
However, this takes several hundred milliseconds, compared to only a few microseconds for writing to a numpy array. I'm unclear on whether pandas is making a copy of the array under the hood. The only way I found to write to the dataframe at this kind of speed is to work on the underlying ndarray directly
a.values[0,0:3] = 1,2,3 # this works fine and is fast
Have I missed something in the pandas docs, or is there no way to do multiple adjacent column indexing on a pandas DataFrame with speed comparable to numpy?
Edit
Here's the actual dataframe I am working with.
>> conn = sqlite3.connect('prath.sqlite')
>> prath = pd.read_sql("select image_id,pixel_index,skin,r,g,b from pixels",conn)
>> prath.shape
(5913307, 6)
>> prath.head()
image_id pixel_index skin r g b
0 21 113764 0 0 0 0
1 13 187789 0 183 149 173
2 17 535758 0 147 32 35
3 31 6255 0 116 1 16
4 15 119272 0 238 229 224
>> prath.dtypes
image_id int64
pixel_index int64
skin int64
r int64
g int64
b int64
dtype: object
Here are some runtime comparisons for the different indexing methods (again, pandas indexing is very slow):
>> %timeit prath.loc[0,'r':'b'] = 4,5,6
1 loops, best of 3: 888 ms per loop
>> %timeit prath.iloc[0,3:6] = 4,5,6
1 loops, best of 3: 894 ms per loop
>> %timeit prath.values[0,3:6] = 4,5,6
100000 loops, best of 3: 4.8 µs per loop
Edit to clarify: I don't believe pandas has a direct analog to setting a view in numpy in terms of both speed and syntax. iloc and loc are probably the most direct analogs in terms of syntax and purpose, but they are much slower. This is a fairly common situation with numpy and pandas: pandas does a lot more than numpy (labeled columns/indexes, automatic alignment, etc.) but is slower to varying degrees. When you need speed and can do things in numpy, then do them in numpy.
In a nutshell, the tradeoff here is that loc and iloc will be slower but work 100% of the time, whereas values will be fast but not always work (to be honest, I didn't even realize it would work the way you got it to work).
But here's a really simple example where values doesn't work, because column 'g' is a float rather than an integer:
prath['g'] = 3.33
prath.values[0,3:6] = 4,5,6
prath.head(3)
image_id pixel_index skin r g b
0 21 113764 0 0 3.33 0
1 13 187789 0 183 3.33 173
2 17 535758 0 147 3.33 35
prath.iloc[0,3:6] = 4,5,6
prath.head(3)
image_id pixel_index skin r g b
0 21 113764 0 4 5.00 6
1 13 187789 0 183 3.33 173
2 17 535758 0 147 3.33 35
You can often get numpy-like speed and behavior from pandas when columns are of homogeneous type, but you want to be careful about this. Edit to add: as @toes notes in the comments, the documentation does state that you can do this with homogeneous data. However, it's potentially very error-prone, as the example above shows, and I don't think many people would consider it good general practice in pandas.
My general recommendation would be to do things in numpy in cases where you need the speed (and have homogeneous data types), and pandas when you don't. The nice thing is that numpy and pandas play well together so it's really not that hard to convert between dataframes and arrays as you go.
Edit to add: The following seems to work (albeit with a warning) even with column 'g' as a float. The speed is in between the .values approach and the loc/iloc approaches. I'm not sure if this can be expected to work all the time though. Just putting it out as a possible middle way.
prath[0:1][['r','g','b']] = 4,5,6
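As a defensive sketch of that recommendation (the helper name is my own, and whether .values returns a view is an implementation detail that has changed across pandas versions, so treat this as a sketch rather than a guarantee):

def set_row_slice_fast(df, row, col_start, col_stop, values):
    # fast path only when every column shares one dtype; otherwise
    # .values may be a fresh copy and the write would be lost
    if df.dtypes.nunique() == 1:
        df.values[row, col_start:col_stop] = values
    else:
        df.iloc[row, col_start:col_stop] = values  # slower but always safe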
We are adding the ability to index directly even in a multi-dtype frame. This is in master now and will be in 0.17.0. You can do this in < 0.17.0, but it requires (more) manipulation of the internals.
In [1]: df = pd.DataFrame({'A': range(5), 'B': range(6, 11), 'C': 'foo'})
In [2]: df.dtypes
Out[2]:
A int64
B int64
C object
dtype: object
The copy=False flag is new. This gives you a dict of dtypes->blocks (which are dtype separable)
In [3]: b = df.as_blocks(copy=False)
In [4]: b
Out[4]:
{'int64': A B
0 0 6
1 1 7
2 2 8
3 3 9
4 4 10, 'object': C
0 foo
1 foo
2 foo
3 foo
4 foo}
Here is the underlying numpy array.
In [5]: b['int64'].values
Out[5]:
array([[ 0, 6],
[ 1, 7],
[ 2, 8],
[ 3, 9],
[ 4, 10]])
This is the array in the original data set
In [7]: id(df._data.blocks[0].values)
Out[7]: 4429267232
Here is our view on it. They are the same
In [8]: id(b['int64'].values.base)
Out[8]: 4429267232
Now you can access the frame, and use pandas set operations to modify.
You can also directly access the numpy array via .values, which is now a VIEW into the original.
You will not incur any speed penalty for modifications as copies won't be made as long as you don't change the dtype of the data itself (e.g. don't try to put a string here; it will work but the view will be lost)
In [9]: b['int64'].loc[0,'A'] = -1
In [11]: b['int64'].values[0,1] = -2
Since we have a view, you can then change the underlying data.
In [12]: df
Out[12]:
A B C
0 -1 -2 foo
1 1 7 foo
2 2 8 foo
3 3 9 foo
4 4 10 foo
Note that if you modify the shape of the data (e.g. if you add a column for example) then the views will be lost.
I have a dataframe in which all values are of the same variety (e.g. a correlation matrix -- but where we expect a unique maximum). I'd like to return the row and the column of the maximum of this matrix.
I can get the max across rows or columns by changing the first argument of
df.idxmax()
however I haven't found a suitable way to return the row/column index of the max of the whole dataframe.
For example, I can do this in numpy:
>>>npa = np.array([[1,2,3],[4,9,5],[6,7,8]])
>>>np.where(npa == np.amax(npa))
(array([1]), array([1]))
But when I try something similar in pandas:
>>>df = pd.DataFrame([[1,2,3],[4,9,5],[6,7,8]],columns=list('abc'),index=list('def'))
>>>df.where(df == df.max().max())
a b c
d NaN NaN NaN
e NaN 9 NaN
f NaN NaN NaN
At a second level, what I actually want to do is to return the rows and columns of the top n values, e.g. as a Series.
E.g. for the above I'd like a function which does:
>>>topn(df,3)
b e
c f
b f
dtype: object
>>>type(topn(df,3))
pandas.core.series.Series
or even just
>>>topn(df,3)
(['b','c','b'],['e','f','f'])
a la numpy.where()
I figured out the first part:
npa = df.values
indx, cols = np.where(npa == np.amax(npa))
([df.columns[c] for c in cols], [df.index[i] for i in indx])
Now I need a way to get the top n. One naive idea is to copy the array and iteratively replace the top values with NaN, grabbing the index as you go. That seems inefficient. Is there a better way to get the top n values of a numpy array? Fortunately, as shown here, there is, through argpartition, but we have to use flattened indexing.
def topn(df, n):
    npa = df.values
    topn_ind = np.argpartition(npa, -n, None)[-n:]  # flattened indices, unsorted
    topn_ind = topn_ind[np.argsort(npa.flat[topn_ind])][::-1]  # sort in descending order
    indx, cols = np.unravel_index(topn_ind, npa.shape)  # unflatten (C order, matching .flat)
    return ([df.columns[c] for c in cols], [df.index[i] for i in indx])
Trying this on the example:
>>>df = pd.DataFrame([[1,2,3],[4,9,5],[6,7,8]],columns=list('abc'),index=list('def'))
>>>topn(df,3)
(['b', 'c', 'b'], ['e', 'f', 'f'])
As desired. Mind you, the sorting was not originally asked for, but it adds little overhead if n is not large.
What you want to use is stack:
df = pd.DataFrame([[1,2,3],[4,9,5],[6,7,8]], columns=list('abc'), index=list('def'))
stacked = df.stack().sort_values(ascending=False)
stacked.head(4)
e  b    9
f  c    8
   b    7
   a    6
dtype: int64
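On pandas versions that have Series.nlargest, an equivalent one-liner (an alternative sketch, starting again from the original unstacked frame):

df.stack().nlargest(4)  # top 4 values with their (row, column) labels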
I guess for what you are trying to do a DataFrame might not be the best choice, since the idea of the columns in the DataFrame is to hold independent data.
>>> def topn(df, n):
        # pull the data out of the DataFrame
        # and flatten it to an array in column-major order
        vals = df.values.flatten(order='F')
        # next we sort the array and store the sort mask
        p = np.argsort(vals)
        # create two arrays with the column names and indexes
        # in the same order as vals
        cols = np.array([[col] * len(df.index) for col in df.columns]).flatten()
        idxs = np.array([list(df.index) for _ in df.columns]).flatten()
        # sort both and return the top n in descending order
        return cols[p][:-(n + 1):-1], idxs[p][:-(n + 1):-1]
>>> topn(df,3)
(array(['b', 'c', 'b'],
dtype='|S1'),
array(['e', 'f', 'f'],
dtype='|S1'))
>>> %timeit(topn(df,3))
10000 loops, best of 3: 29.9 µs per loop
watsonic's solution takes slightly less:
%timeit(topn(df,3))
10000 loops, best of 3: 24.6 µs per loop
but both are way faster than stack:
def topStack(df, n):
    stacked = df.stack().sort_values(ascending=False)
    return stacked.head(n)
%timeit(topStack(df,3))
1000 loops, best of 3: 1.91 ms per loop
Suppose I have two tables A and B.
Table A has a multi-level index (a, b) and one column (ts).
b uniquely determines ts.
A = pd.DataFrame(
    [('a', 'x', 4),
     ('a', 'y', 6),
     ('a', 'z', 5),
     ('b', 'x', 4),
     ('b', 'z', 5),
     ('c', 'y', 6)],
    columns=['a', 'b', 'ts']).set_index(['a', 'b'])
AA = A.reset_index()
Table B is another one-column (ts) table with non-unique index (a).
The ts's are sorted "inside" each group, i.e., B.ix[x] is sorted for each x.
Moreover, there is always a value in B.ix[x] that is greater than or equal to
the values in A.
B = pd.DataFrame(
    dict(a=list('aaaaabbcccccc'),
         ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')
The semantics in this is that B contains observations of occurrences of an event of type indicated by the index.
I would like to find from B the timestamp of the first occurrence of each event type after the timestamp specified in A for each value of b. In other words, I would like to get a table with the same shape of A, that instead of ts contains the "minimum value occurring after ts" as specified by table B.
So, my goal would be:
C:
('a', 'x') 4
('a', 'y') 7
('a', 'z') 5
('b', 'x') 7
('b', 'z') 7
('c', 'y') 8
I have some working code, but it is terribly slow:
C = AA.apply(lambda row: (
    row[0],
    row[1],
    B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))), axis=1).set_index(['a', 'b'])
Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run.
Consider that I now have 1000 a's, assume the average number of b's per a is constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1,000 times as many a's.
1,000,000 x 200 x 300 = 60,000,000,000 rows
may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above.
How would I improve the performance?
Thanks for providing sample data. I've updated this answer with general suggestions given anticipated array sizes in the hundreds of millions.
Line profile
Line profiling the guts of your lambda function shows that most time is spent
in B.ix[] (which has been refactored here to only be called once).
In [91]: lprun -f stack.foo1 AA.apply(stack.foo1, B=B, axis=1)
Timer unit: 1e-06 s

File: stack.py
Function: foo1 at line 4
Total time: 0.006651 s

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     4                                           def foo1(row, B):
     5         6         6158   1026.3     92.6      subset = B.ix[row[0]].ts
     6         6          418     69.7      6.3      idx = np.searchsorted(subset, row[2])
     7         6           56      9.3      0.8      val = subset.irow(idx)
     8         6           19      3.2      0.3      return val
Consider built-in data types and raw numpy arrays over higher-level constructs.
Since B behaves like a dict here and the same key is accessed many times, let's compare df.ix to a normal Python
dictionary (precomputed elsewhere). A dictionary with 1M keys (unique A values) should only require ~34MB (33% capacity: 3 * 1e6 * 12 bytes).
In [102]: timeit B.ix['a']
10000 loops, best of 3: 122 us per loop
In [103]: timeit dct['a']
10000000 loops, best of 3: 53.2 ns per loop
Replace function calls with loops
The last major improvement I can think of would be to replace df.apply() with a for loop to avoid calling any function 200M times (or however large A is).
Hopefully these ideas help.
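For concreteness, here is a sketch that combines those suggestions (the names ts_by_a and out are mine; it assumes A and B as defined in the question, where each group's ts values are sorted and a match always exists):

import numpy as np
import pandas as pd

# precompute: one sorted numpy array of timestamps per event type
ts_by_a = {key: grp.values for key, grp in B.ts.groupby(level=0)}

# a plain loop instead of DataFrame.apply: one searchsorted per row of A
out = []
for (a, b), ts in A.ts.items():
    arr = ts_by_a[a]
    out.append(arr[np.searchsorted(arr, ts)])  # first value >= ts
C = pd.Series(out, index=A.index)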
Original, expressive solution, though not memory efficient:
In [5]: CC = AA.merge(B, left_on='a', right_index=True)
In [6]: CC[CC.ts_x <= CC.ts_y].groupby(['a', 'b']).first()
Out[6]:
     ts_x  ts_y
a b
a x     4     4
  y     6     7
  z     5     5
b x     4     7
  z     5     7
c y     6     8
Another option uses numpy's boolean array notation, which seems an order of magnitude faster than the original (in this tiny example, and I suspect it'll be even better on larger datasets...):
I suspect this is largely because picking the minimum is a much faster task than sorting.
In [11]: AA.apply(lambda row: (B.ts.values[(B.ts.values >= row['ts']) &
                                           (B.index == row['a'])].min()),
                  axis=1)
Out[11]:
0 4
1 7
2 5
3 7
4 7
5 8
In [12]: %timeit AA.apply(lambda row: (B.ts.values[(B.ts.values >= row['ts']) &(B.index == row['a'])].min()), axis=1)
1000 loops, best of 3: 1.46 ms per loop
This seems like the fastest method if you were simply adding this as a column to AA.
If you were creating a new dataframe as in your example - trying to test this "fairly" - it is slower (but still twice as fast as the original):
In [13]: %timeit C = AA.apply(lambda row: (row[0], row[1], B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))), axis=1).set_index(['a', 'b'])
100 loops, best of 3: 10.3 ms per loop
In [14]: %timeit C = AA.apply(lambda row: (row[0], row[1], B.ts.values[(B.ts.values >= row['ts']) & (B.index == row['a'])].min()), axis=1)
100 loops, best of 3: 4.32 ms per loop
I have a series with a MultiIndex like this:
import numpy as np
import pandas as pd
buckets = np.repeat(['a','b','c'], [3,5,1])
sequence = [0,1,5,0,1,2,4,50,0]
s = pd.Series(
    np.random.randn(len(sequence)),
    index=pd.MultiIndex.from_tuples(list(zip(buckets, sequence)))
)
# In [6]: s
# Out[6]:
# a  0   -1.106047
#    1    1.665214
#    5    0.279190
# b  0    0.326364
#    1    0.900439
#    2   -0.653940
#    4    0.082270
#    50  -0.255482
# c  0   -0.091730
I'd like to get the s['b'] values where the second index ('sequence') is between 2 and 10.
Slicing on the first index works fine:
s['a':'b']
# Out[109]:
# bucket  value
# a  0    1.828176
#    1    0.160496
#    5    0.401985
# b  0   -1.514268
#    1   -0.973915
#    2    1.285553
#    4   -0.194625
#    5   -0.144112
But not on the second, at least by what seems to be the two most obvious ways:
1) This returns the elements at positions 1 through 4, which has nothing to do with the index values:
s['b'][1:10]
# In [61]: s['b'][1:10]
# Out[61]:
# 1 0.900439
# 2 -0.653940
# 4 0.082270
# 50 -0.255482
However, if I reverse the index so that the first level is an integer and the second is a string, it works:
In [26]: s
Out[26]:
0 a -0.126299
1 a 1.810928
5 a 0.571873
0 b -0.116108
1 b -0.712184
2 b -1.771264
4 b 0.148961
50 b 0.089683
0 c -0.582578
In [25]: s[0]['a':'b']
Out[25]:
a -0.126299
b -0.116108
As Robbie-Clarken answers, since 0.14 you can pass a slice in the tuple you pass to loc:
In [11]: s.loc[('b', slice(2, 10))]
Out[11]:
b  2   -0.65394
   4    0.08227
dtype: float64
Indeed, you can pass a slice for each level:
In [12]: s.loc[(slice('a', 'b'), slice(2, 10))]
Out[12]:
a  5    0.27919
b  2   -0.65394
   4    0.08227
dtype: float64
Note: the slice is inclusive.
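The same lookups read a little more naturally with pd.IndexSlice (also introduced in 0.14), which builds those slice tuples for you:

idx = pd.IndexSlice
s.loc[idx['b', 2:10]]      # same as s.loc[('b', slice(2, 10))]
s.loc[idx['a':'b', 2:10]]  # a slice per level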
Old answer:
You can also do this using:
s.ix[1:10, "b"]
(It's good practice to do this in a single ix/loc/iloc call, since this version allows assignment.)
This answer was written prior to the introduction of iloc in early 2013, i.e. position/integer location - which may be preferred in this case. The reason it was created was to remove the ambiguity from integer-indexed pandas objects, and be more descriptive: "I'm slicing on position".
s["b"].iloc[1:10]
That said, I kinda disagree with the docs that ix is:
most robust and consistent way
It's not; the most consistent way is to describe what you're doing:
use loc for labels
use iloc for position
use ix for both (if you really have to)
Remember the zen of python:
explicit is better than implicit
Since pandas 0.15.0 this works:
s.loc['b', 2:10]
Output:
b  2   -0.503023
   4    0.704880
dtype: float64
With a DataFrame it's slightly different (source):
df.loc(axis=0)['b', 2:10]
As of pandas 0.14.0 it is possible to slice multi-indexed objects by providing .loc a tuple containing slice objects:
In [2]: s.loc[('b', slice(2, 10))]
Out[2]:
b  2   -1.206052
   4   -0.735682
dtype: float64
The best way I can think of is to use select in this case, although the docs do say that "This method should be used only when there is no more direct way."
Indexing and selecting data
In [116]: s
Out[116]:
a  0     1.724372
   1     0.305923
   5     1.780811
b  0    -0.556650
   1     0.207783
   4    -0.177901
   50    0.289365
c  0     1.168115
In [117]: s.select(lambda x: x[0] == 'b' and 2 <= x[1] <= 10)
Out[117]:
b  4   -0.177901
Not sure if this is ideal, but it works by creating a mask:
In [59]: s.index
Out[59]:
MultiIndex
[('a', 0) ('a', 1) ('a', 5) ('b', 0) ('b', 1) ('b', 2) ('b', 4)
('b', 50) ('c', 0)]
In [77]: s[[tpl for tpl in s.index if 2 <= tpl[1] <= 10 and tpl[0] == 'b']]
Out[77]:
b 2 -0.586568
4 1.559988
EDIT: hayden's solution is the way to go.