I have a pandas DataFrame with 387 rows and 26 columns. This DataFrame is then groupby()-ed and agg()-ed, turning it into a DataFrame with 1 row and 111 columns. This takes about 0.05 s. For example:
frames = frames.groupby(['id']).agg({"bytes": ["count",
                                               "sum",
                                               "median",
                                               "std",
                                               "min",
                                               "max"],
                                     # === add about 70 more lines of this ===
                                     "pixels": "sum"})
All of these use pandas' built-in Cython-backed aggregations, e.g. sum, std, min, max, first, etc. I am looking to speed this process up, but is there even a way to do so? As I understand it, this is already "vectorized", so there isn't anything more to gain from Cython, is there?
Maybe calculating each column separately without the .agg() would be faster?
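For reference, here is a minimal sketch of what that per-column approach could look like (assuming the same frames, "bytes" and "pixels" columns as above; I have not benchmarked it):
import pandas as pd

gb = frames.groupby('id')
funcs = ['count', 'sum', 'median', 'std', 'min', 'max']
# Run each cythonized reduction separately, then stitch the results back together.
parts = [gb['bytes'].agg(f).rename(('bytes', f)) for f in funcs]
parts.append(gb['pixels'].sum().rename(('pixels', 'sum')))
result = pd.concat(parts, axis=1)
Whether this beats a single .agg() call is worth timing on the real data.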
Would greatly appreciate any ideas, or confirmation that there is nothing else to be done. Thanks!
Edit!
Here's a working example:
import pandas as pd
import numpy as np
aggs = ["sum", "mean", "std", "min"]
cols = {k:aggs for k in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'}
df = pd.DataFrame(np.random.randint(0,100,size=(387, 26)), columns=list('ABCDEFGHIJKLMNOPQRSTUVWXYZ'))
df['id'] = 1
print(df.groupby("id").agg(cols))
cProfile results:
import cProfile
cProfile.run('df.groupby("id").agg(cols)', sort='cumtime')
79825 function calls (78664 primitive calls) in 0.076 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.076 0.076 {built-in method builtins.exec}
1 0.000 0.000 0.076 0.076 <string>:1(<module>)
1 0.000 0.000 0.076 0.076 generic.py:964(aggregate)
1 0.000 0.000 0.075 0.075 apply.py:143(agg)
1 0.000 0.000 0.075 0.075 apply.py:405(agg_dict_like)
1 0.000 0.000 0.062 0.062 apply.py:435(<dictcomp>)
130/26 0.000 0.000 0.059 0.002 generic.py:225(aggregate)
26 0.001 0.000 0.058 0.002 generic.py:278(_aggregate_multiple_funcs)
78 0.001 0.000 0.023 0.000 generic.py:322(_cython_agg_general)
28 0.000 0.000 0.023 0.001 frame.py:573(__init__)
I ran some benchmarks (with 10 columns to aggregate and 6 aggregation functions for each column, and at most 100 unique ids). It seems that the total time to run the aggregation does not change until the number of rows is somewhere between 10k and 100k.
If you know your dataframes in advance, you can concatenate them into a single big DataFrame with two-level index, run groupby on two columns and get significant speedup. In a way, this runs the calculation on batches of dataframes.
In my example, it takes around 400ms to process a single DataFrame, and 600ms to process a batch of 100 DataFrames, with an average speedup of around 60x.
Here is the general approach (using 100 columns instead of 10):
import numpy as np
import pandas as pd
num_rows = 100
num_cols = 100
# builds a random df
def build_df():
    # build some df with num_rows rows and num_cols cols to aggregate
    df = pd.DataFrame({
        "id": np.random.randint(0, 100, num_rows),
    })
    for i in range(num_cols):
        df[i] = np.random.randint(0, 10, num_rows)
    return df

agg_dict = {
    i: ["count", "sum", "median", "std", "min", "max"]
    for i in range(num_cols)
}
# get a single small df
df = build_df()
# build 100 random dataframes
dfs = [build_df() for _ in range(100)]
# set the first df to be equal to the "small" df we computed before
dfs[0] = df.copy()
big_df = pd.concat(dfs, keys=range(100))
%timeit big_df.groupby([big_df.index.get_level_values(0), "id"]).agg(agg_dict)
# 605 ms per loop, for 100 dataframes
agg_from_big = big_df.groupby([big_df.index.get_level_values(0), "id"]).agg(agg_dict).loc[0]
%timeit df.groupby("id").agg(agg_dict)
# 417 ms per loop, for one dataframe
agg_from_small = df.groupby("id").agg(agg_dict)
assert agg_from_small.equals(agg_from_big)
Here is the benchmarking code. The timings are comparable until the number of rows grows to somewhere between 10k and 100k:
def get_setup(n):
    return f"""
import pandas as pd
import numpy as np
N = {n}
num_cols = 10
df = pd.DataFrame({{
    "id": np.random.randint(0, 100, N),
}})
for i in range(num_cols):
    df[i] = np.random.randint(0, 10, N)
agg_dict = {{
    i: ["count", "sum", "median", "std", "min", "max"]
    for i in range(num_cols)
}}
"""
from timeit import timeit
def time_n(n):
    return timeit(
        "df.groupby('id').agg(agg_dict)", setup=get_setup(n), number=100
    )
times = pd.Series({n: time_n(n) for n in [10, 100, 1000, 10_000, 100_000]})
# 10 4.532458
# 100 4.398949
# 1000 4.426178
# 10000 5.009555
# 100000 11.660783
I have the following code:
import numpy as np
A = np.random.random((128,4,46,23)) + np.random.random((128,4,46,23)) * 1j
signal = np.random.random((355,256,4)) + np.random.random((355,256,4)) * 1j
# start timing here
Signal = np.fft.fft(signal, axis=1)[:,:128,:]
B = Signal[..., None, None] * A[None,...]
B = B.sum(1)
b = np.fft.ifft(B, axis=1).real
b_squared = b**2
res = b_squared.sum(1)
I need to run this code for two different values of A each time. The problem is that the code is too slow to be used in a real-time application. I tried using np.einsum, and although it sped things up a little, it wasn't enough for my application.
So now I'm trying to speed things up using a GPU, but I'm not sure how. I looked into OpenCL, and multiplying two matrices seems fine, but I'm not sure how to do it with complex numbers and arrays with more than two dimensions (I guess using a for loop to send two axes at a time to the GPU). I also don't know how to do something like array.sum(axis). I have worked with a GPU before, using OpenGL.
I would have no problem using C++ to optimize the code if needed, or using something besides OpenCL, as long as it works with more than one GPU manufacturer (so no CUDA).
Edit:
Running cProfile:
import cProfile
def f():
    Signal = np.fft.fft(signal, axis=1)[:,:128,:]
    B = Signal[..., None, None] * A[None,...]
    B = B.sum(1)
    b = np.fft.ifft(B, axis=1).real
    b_squared = b**2
    res = b_squared.sum(1)
cProfile.run("f()", sort="cumtime")
Output:
56 function calls (52 primitive calls) in 1.555 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 1.555 1.555 {built-in method builtins.exec}
1 0.005 0.005 1.555 1.555 <string>:1(<module>)
1 1.240 1.240 1.550 1.550 <ipython-input-10-d4613cd45f64>:3(f)
2 0.000 0.000 0.263 0.131 {method 'sum' of 'numpy.ndarray' objects}
2 0.000 0.000 0.263 0.131 _methods.py:45(_sum)
2 0.263 0.131 0.263 0.131 {method 'reduce' of 'numpy.ufunc' objects}
6/2 0.000 0.000 0.047 0.024 {built-in method numpy.core._multiarray_umath.implement_array_function}
2 0.000 0.000 0.047 0.024 _pocketfft.py:49(_raw_fft)
2 0.047 0.023 0.047 0.023 {built-in method numpy.fft._pocketfft_internal.execute}
1 0.000 0.000 0.041 0.041 <__array_function__ internals>:2(ifft)
1 0.000 0.000 0.041 0.041 _pocketfft.py:189(ifft)
1 0.000 0.000 0.006 0.006 <__array_function__ internals>:2(fft)
1 0.000 0.000 0.006 0.006 _pocketfft.py:95(fft)
4 0.000 0.000 0.000 0.000 <__array_function__ internals>:2(swapaxes)
4 0.000 0.000 0.000 0.000 fromnumeric.py:550(swapaxes)
4 0.000 0.000 0.000 0.000 fromnumeric.py:52(_wrapfunc)
4 0.000 0.000 0.000 0.000 {method 'swapaxes' of 'numpy.ndarray' objects}
2 0.000 0.000 0.000 0.000 _asarray.py:14(asarray)
4 0.000 0.000 0.000 0.000 {built-in method builtins.getattr}
2 0.000 0.000 0.000 0.000 {built-in method numpy.array}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
2 0.000 0.000 0.000 0.000 {built-in method numpy.core._multiarray_umath.normalize_axis_index}
4 0.000 0.000 0.000 0.000 fromnumeric.py:546(_swapaxes_dispatcher)
2 0.000 0.000 0.000 0.000 _pocketfft.py:91(_fft_dispatcher)
Most of the number crunching libraries that interface well with the Python ecosystem have tight dependencies on Nvidia's ecosystem, but this is changing slowly. Here are some things you could try:
Profile your code. The built-in profiler (cProfile) is probably a good place to start; I'm also a fan of snakeviz for looking at performance traces. This will tell you whether NumPy's FFT implementation is what's blocking you. Is memory being allocated efficiently? Is there some way you could hand more data off to np.fft.ifft? How much time is Python spending reading the signal from its source and converting it into a NumPy array?
Numba is a JIT compiler which takes Python code and further optimizes it, or compiles it to either CUDA or ROCm (AMD). I'm not sure how far off you are from your performance goals, but perhaps this could help (see the sketch after this list).
Here is a list of C++ libraries to try.
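To illustrate the Numba suggestion, here is a minimal, hedged sketch. Numba does not accelerate np.fft itself, so only the elementwise square-and-sum step is JIT-compiled here (CPU-parallel; the function name and layout are my own, not from the question):
import numpy as np
from numba import njit, prange

@njit(parallel=True, cache=True)
def squared_sum_axis1(b):
    # b is the real 4-D array from np.fft.ifft(...).real, shape (355, 4, 46, 23);
    # returns (b ** 2).sum(axis=1), i.e. shape (355, 46, 23)
    n0, n1, n2, n3 = b.shape
    out = np.zeros((n0, n2, n3))
    for i in prange(n0):
        for j in range(n1):
            for k in range(n2):
                for l in range(n3):
                    out[i, k, l] += b[i, j, k, l] ** 2
    return out
Whether this beats the plain NumPy (b ** 2).sum(1) depends on array sizes and hardware; the FFTs themselves stay in NumPy.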
Honestly, I'm kind of surprised that the NumPy build distributed on PyPI isn't fast enough for real-time use. I'll post an update to my comment where I benchmark this code snippet.
UPDATE: Here's an implementation which uses a multiprocessing.Pool, and the np.einsum trick kindly provided by @hpaulj.
import time
import numpy as np
import multiprocessing
NUM_RUNS = 50
A = np.random.random((128, 4, 46, 23)) + np.random.random((128, 4, 46, 23)) * 1j
signal = np.random.random((355, 256, 4)) + np.random.random((355, 256, 4)) * 1j
def worker(signal_chunk: np.ndarray, a_chunk: np.ndarray) -> np.ndarray:
    return (
        np.fft.ifft(np.einsum("ijk,jklm->iklm", signal_chunk, a_chunk), axis=1).real ** 2
    )

# old code
serial_times = []
for _ in range(NUM_RUNS):
    start = time.monotonic()
    Signal = np.fft.fft(signal, axis=1)[:, :128, :]
    B = Signal[..., None, None] * A[None, ...]
    B = B.sum(1)
    b = np.fft.ifft(B, axis=1).real
    b_squared = b ** 2
    res = b_squared.sum(1)
    serial_times.append(time.monotonic() - start)

parallel_times = []
# My CPU is hyperthreaded, so I'm only spawning workers for the number of physical cores
with multiprocessing.Pool(multiprocessing.cpu_count() // 2) as p:
    for _ in range(NUM_RUNS):
        start = time.monotonic()
        # Get the FFT of the whole sample before splitting
        transformed = np.fft.fft(signal, axis=1)[:, :128, :]
        a_chunks = np.split(A, A.shape[0] // multiprocessing.cpu_count(), axis=0)
        signal_chunks = np.split(
            transformed, transformed.shape[1] // multiprocessing.cpu_count(), axis=1
        )
        res = np.sum(np.hstack(p.starmap(worker, zip(signal_chunks, a_chunks))), axis=1)
        parallel_times.append(time.monotonic() - start)

print(
    f"ORIGINAL AVG TIME: {np.mean(serial_times):.3f}\t POOLED TIME: {np.mean(parallel_times):.3f}"
)
And here are the results I'm getting on a Ryzen 3700X (8 cores, 16 threads):
ORIGINAL AVG TIME: 0.897 POOLED TIME: 0.315
I would have loved to offer you an FFT library written in OpenCL, but I'm not sure whether you'd have to write the Python bridge yourself (more code) or whether you'd trust the first implementation you came across on GitHub. If you're willing to give in to CUDA's vendor lock-in, there is an "almost drop-in replacement" for NumPy called CuPy, and it has FFT and IFFT kernels. Hope this helps!
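For what it's worth, here is a hedged sketch of what the same pipeline could look like with CuPy (assumed: a CUDA-capable GPU and a working CuPy install, which the vendor-lock-in constraint may rule out; the API mirrors NumPy closely):
import numpy as np
import cupy as cp

# Move the inputs to the GPU once, then keep all intermediates there.
A = cp.asarray(np.random.random((128, 4, 46, 23)) + np.random.random((128, 4, 46, 23)) * 1j)
signal = cp.asarray(np.random.random((355, 256, 4)) + np.random.random((355, 256, 4)) * 1j)

Signal = cp.fft.fft(signal, axis=1)[:, :128, :]
B = cp.einsum("ijk,jklm->iklm", Signal, A)   # same contraction as the NumPy version
b = cp.fft.ifft(B, axis=1).real
res = cp.asnumpy((b ** 2).sum(1))            # copy the result back to host memory
cp.asnumpy (or .get()) copies results back to the host; keeping intermediate arrays on the GPU avoids transfer overhead.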
With your arrays:
In [42]: Signal.shape
Out[42]: (355, 128, 4)
In [43]: A.shape
Out[43]: (128, 4, 46, 23)
The B calc takes minutes on my modest machine:
In [44]: B = (Signal[..., None, None] * A[None,...]).sum(1)
In [45]: B.shape
Out[45]: (355, 4, 46, 23)
einsum is much faster:
In [46]: B2=np.einsum('ijk,jklm->iklm',Signal,A)
In [47]: np.allclose(B,B2)
Out[47]: True
In [48]: timeit B2=np.einsum('ijk,jklm->iklm',Signal,A)
1.05 s ± 21.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
reworking the dimensions to move the 128 to the classic dot positions:
In [49]: B21=np.einsum('ikj,kjn->ikn',Signal.transpose(0,2,1),A.reshape(128, 4, 46*23).transpose(1,0,2))
In [50]: B21.shape
Out[50]: (355, 4, 1058)
In [51]: timeit B21=np.einsum('ikj,kjn->ikn',Signal.transpose(0,2,1),A.reshape(128, 4, 46*23).transpose(1,0,2))
1.04 s ± 3.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
With a bit more tweaking I can use matmul/@ and cut the time in half:
In [52]: B3=Signal.transpose(0,2,1)[:,:,None,:]@(A.reshape(1,128, 4, 46*23).transpose(0,2,1,3))
In [53]: timeit B3=Signal.transpose(0,2,1)[:,:,None,:]@(A.reshape(1,128, 4, 46*23).transpose(0,2,1,3))
497 ms ± 11.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [54]: B3.shape
Out[54]: (355, 4, 1, 1058)
In [56]: np.allclose(B, B3[:,:,0,:].reshape(B.shape))
Out[56]: True
Casting the arrays into the matmul format took a fair bit of experimentation. matmul makes optimal use of BLAS-like libraries, so you may be able to improve speed further by installing a faster BLAS.
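If you want to check which BLAS/LAPACK build your NumPy is linked against before swapping libraries, a quick way is:
import numpy as np
# Prints the BLAS/LAPACK build configuration (e.g. OpenBLAS or MKL); output varies by install.
np.show_config()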
Running NumPy version 1.19.2, I get better performance by taking the mean over each axis of an array in succession than by calculating the mean over an already flattened array.
shape = (10000,32,32,3)
mat = np.random.random(shape)
# Call this Method A.
%%timeit
mat_means = mat.mean(axis=0).mean(axis=0).mean(axis=0)
14.6 ms ± 167 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
mat_reshaped = mat.reshape(-1,3)
# Call this Method B
%%timeit
mat_means = mat_reshaped.mean(axis=0)
135 ms ± 227 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
This is odd, since taking the mean multiple times has the same bad access pattern as (perhaps even worse than) the one on the reshaped array. We also do more operations this way. As a sanity check, I converted the array to Fortran order:
mat_reshaped_fortran = mat.reshape(-1,3, order='F')
%%timeit
mat_means = mat_reshaped_fortran.mean(axis=0)
12.2 ms ± 85.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
This yields the performance improvement I expected.
For Method A, prun gives:
36 function calls in 0.019 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
3 0.018 0.006 0.018 0.006 {method 'reduce' of 'numpy.ufunc' objects}
1 0.000 0.000 0.019 0.019 {built-in method builtins.exec}
3 0.000 0.000 0.019 0.006 _methods.py:143(_mean)
3 0.000 0.000 0.000 0.000 _methods.py:59(_count_reduce_items)
1 0.000 0.000 0.019 0.019 <string>:1(<module>)
3 0.000 0.000 0.019 0.006 {method 'mean' of 'numpy.ndarray' objects}
3 0.000 0.000 0.000 0.000 _asarray.py:86(asanyarray)
3 0.000 0.000 0.000 0.000 {built-in method numpy.array}
3 0.000 0.000 0.000 0.000 {built-in method numpy.core._multiarray_umath.normalize_axis_index}
6 0.000 0.000 0.000 0.000 {built-in method builtins.isinstance}
6 0.000 0.000 0.000 0.000 {built-in method builtins.issubclass}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
While for Method B:
14 function calls in 0.166 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.166 0.166 0.166 0.166 {method 'reduce' of 'numpy.ufunc' objects}
1 0.000 0.000 0.166 0.166 {built-in method builtins.exec}
1 0.000 0.000 0.166 0.166 _methods.py:143(_mean)
1 0.000 0.000 0.000 0.000 _methods.py:59(_count_reduce_items)
1 0.000 0.000 0.166 0.166 <string>:1(<module>)
1 0.000 0.000 0.166 0.166 {method 'mean' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 _asarray.py:86(asanyarray)
1 0.000 0.000 0.000 0.000 {built-in method numpy.array}
1 0.000 0.000 0.000 0.000 {built-in method numpy.core._multiarray_umath.normalize_axis_index}
2 0.000 0.000 0.000 0.000 {built-in method builtins.isinstance}
2 0.000 0.000 0.000 0.000 {built-in method builtins.issubclass}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Note: np.setbufsize(1e7) doesn't seem to have any effect.
What is the reason for this performance difference?
Let's call your original matrix mat. mat.shape = (10000,32,32,3). Visually, this is like having a "stack" of 10,000 32x32x3 rectangular prisms (I think of them as LEGOs) of floats.
Now let's think about what you did in terms of floating point operations (flops):
In Method A, you do mat.mean(axis=0).mean(axis=0).mean(axis=0). Let's break this down:
You take the mean of each position (i,j,k) across all 10,000 LEGOs. This gives you back a single LEGO of size 32x32x3 which now contains the first set of means. This means you have performed 10,000 additions and 1 division per mean, of which there are 32*32*3 = 3,072. In total, you've done 30,723,072 flops.
You then take the mean again, this time of each position (j,k), where i is now the number of the layer (vertical position) you are currently on. This gives you a piece of paper with 32x3 means written on it. You have performed 32 additions and 1 division per mean, of which there are 32*3 = 96. In total, you've done 3,168 flops.
Finally, you take the mean of each column k, where j is now the row you are currently on. This gives you a stub with 3 means written on it. You have performed 32 additions and 1 division per mean, of which there are 3. In total, you've done 99 flops.
The grand total of all this is 30,723,072 + 3,168 + 99 = 30,726,339 flops.
In Method B, you do mat_reshaped = mat.reshape(-1,3); mat_means = mat_reshaped.mean(axis=0). Let's break this down:
You reshaped everything, so mat is a long roll of paper of size 10,240,000x3. You take the mean of each column k, where j is now the row you are currently on. This gives you a stub with 3 means written on it. You have performed 10,240,000 additions and 1 division per mean, of which there are 3. In total, you've done 30,720,003 flops.
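Writing that bookkeeping out as a small check (using the author's convention of n additions plus 1 division per mean over n values):
method_a = 32 * 32 * 3 * (10_000 + 1) + 32 * 3 * (32 + 1) + 3 * (32 + 1)
method_b = 3 * (10_000 * 32 * 32 + 1)
print(method_a, method_b)   # 30726339 30720003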
So now you're saying to yourself "What! All of that work, only to show that the slower method actually does less work?!" Here's the problem: although Method B has less work to do, it does not have a lot less work to do, so from a flop standpoint alone we would expect the runtimes to be similar.
You also have to consider the size of your reshaped array in Method B: a matrix with 10,240,000 rows is HUGE!!! It's really hard/inefficient for the computer to access all of that, and more memory accesses mean longer runtimes. The fact is that in its original 10,000x32x32x3 shape, the matrix was already partitioned into convenient slices that the computer could access more efficiently. This is actually a common technique when handling giant matrices; see Jaime's response to a similar question or even this article: both talk about how breaking up a big matrix into smaller slices helps your program be more memory efficient and therefore run faster.
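As a small illustration of that "smaller slices" idea, here is a hedged sketch (the chunk size is my own choice) that averages per-chunk means instead of reducing the full 10,240,000-row array in one go; it gives the same result because every chunk has equal size:
import numpy as np

mat = np.random.random((10000, 32, 32, 3))
flat = mat.reshape(-1, 3)

def chunked_mean(a, chunk=32 * 32):
    # Reduce in equally sized chunks, then average the chunk means.
    n = a.shape[0] // chunk
    return a[:n * chunk].reshape(n, chunk, a.shape[1]).mean(axis=1).mean(axis=0)

print(np.allclose(chunked_mean(flat), flat.mean(axis=0)))   # True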
Test code:
import numpy as np
import pandas as pd
COUNT = 1000000
df = pd.DataFrame({
    'y': np.random.normal(0, 1, COUNT),
    'z': np.random.gamma(50, 1, COUNT),
})
%timeit df.y[(10 < df.z) & (df.z < 50)].mean()
%timeit df.y.values[(10 < df.z.values) & (df.z.values < 50)].mean()
%timeit df.eval('y[(10 < z) & (z < 50)].mean()', engine='numexpr')
The output on my machine (a fairly fast x86-64 Linux desktop with Python 3.6) is:
17.8 ms ± 1.3 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
8.44 ms ± 502 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
46.4 ms ± 2.22 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
I understand why the second line is a bit faster (it ignores the pandas index). But why is the eval() approach using numexpr so slow? Shouldn't it at least be faster than the first approach? The documentation sure makes it seem like it would be: https://pandas.pydata.org/pandas-docs/stable/enhancingperf.html
From the investigation presented below, it looks like the unspectacular reason for the worse performance is "overhead".
Only a small part of the expression y[(10 < z) & (z < 50)].mean() is handled by the numexpr module. numexpr doesn't support indexing, so we can only hope for (10 < z) & (z < 50) to be sped up; everything else will be mapped to pandas operations.
However, (10 < z) & (z < 50) is not the bottleneck here, as can easily be seen:
%timeit df.y[(10 < df.z) & (df.z < 50)].mean() # 16.7 ms
mask=(10 < df.z) & (df.z < 50)
%timeit df.y[mask].mean() # 13.7 ms
%timeit df.y[mask] # 13.2 ms
df.y[mask] takes the lion's share of the running time.
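For completeness, here is a minimal sketch of the part numexpr can actually handle on its own: evaluate only the mask with numexpr and leave the indexing to NumPy/pandas (assumes the numexpr package is available):
import numexpr as ne

y = df.y.values
z = df.z.values
mask = ne.evaluate("(10 < z) & (z < 50)")   # the only sub-expression numexpr accelerates
result = y[mask].mean()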
We can compare the profiler output for df.y[mask] and df.eval('y[mask]') to see what makes the difference.
When I use the following script:
import numpy as np
import pandas as pd
COUNT = 1000000
df = pd.DataFrame({
    'y': np.random.normal(0, 1, COUNT),
    'z': np.random.gamma(50, 1, COUNT),
})
mask = (10 < df.z) & (df.z < 50)
df['m'] = mask

for _ in range(500):
    df.y[df.m]
    # OR
    # df.eval('y[m]', engine='numexpr')
and run it with python -m cProfile -s cumulative run.py (or %prun -s cumulative <...> in IPython), I can see the following profiles.
For direct call of the pandas functionality:
ncalls tottime percall cumtime percall filename:lineno(function)
419/1 0.013 0.000 7.228 7.228 {built-in method builtins.exec}
1 0.006 0.006 7.228 7.228 run.py:1(<module>)
500 0.005 0.000 6.589 0.013 series.py:764(__getitem__)
500 0.003 0.000 6.475 0.013 series.py:812(_get_with)
500 0.003 0.000 6.468 0.013 series.py:875(_get_values)
500 0.009 0.000 6.445 0.013 internals.py:4702(get_slice)
500 0.006 0.000 3.246 0.006 range.py:491(__getitem__)
505 3.146 0.006 3.236 0.006 base.py:2067(__getitem__)
500 3.170 0.006 3.170 0.006 internals.py:310(_slice)
635/2 0.003 0.000 0.414 0.207 <frozen importlib._bootstrap>:958(_find_and_load)
We can see that almost 100% of the time is spent in series.__getitem__ without any overhead.
For the call via df.eval(...), the situation is quite different:
ncalls tottime percall cumtime percall filename:lineno(function)
453/1 0.013 0.000 12.702 12.702 {built-in method builtins.exec}
1 0.015 0.015 12.702 12.702 run.py:1(<module>)
500 0.013 0.000 12.090 0.024 frame.py:2861(eval)
1000/500 0.025 0.000 10.319 0.021 eval.py:153(eval)
1000/500 0.007 0.000 9.247 0.018 expr.py:731(__init__)
1000/500 0.004 0.000 9.236 0.018 expr.py:754(parse)
4500/500 0.019 0.000 9.233 0.018 expr.py:307(visit)
1000/500 0.003 0.000 9.105 0.018 expr.py:323(visit_Module)
1000/500 0.002 0.000 9.102 0.018 expr.py:329(visit_Expr)
500 0.011 0.000 9.096 0.018 expr.py:461(visit_Subscript)
500 0.007 0.000 6.874 0.014 series.py:764(__getitem__)
500 0.003 0.000 6.748 0.013 series.py:812(_get_with)
500 0.004 0.000 6.742 0.013 series.py:875(_get_values)
500 0.009 0.000 6.717 0.013 internals.py:4702(get_slice)
500 0.006 0.000 3.404 0.007 range.py:491(__getitem__)
506 3.289 0.007 3.391 0.007 base.py:2067(__getitem__)
500 3.282 0.007 3.282 0.007 internals.py:310(_slice)
500 0.003 0.000 1.730 0.003 generic.py:432(_get_index_resolvers)
1000 0.014 0.000 1.725 0.002 generic.py:402(_get_axis_resolvers)
2000 0.018 0.000 1.685 0.001 base.py:1179(to_series)
1000 0.003 0.000 1.537 0.002 scope.py:21(_ensure_scope)
1000 0.014 0.000 1.534 0.002 scope.py:102(__init__)
500 0.005 0.000 1.476 0.003 scope.py:242(update)
500 0.002 0.000 1.451 0.003 inspect.py:1489(stack)
500 0.021 0.000 1.449 0.003 inspect.py:1461(getouterframes)
11000 0.062 0.000 1.415 0.000 inspect.py:1422(getframeinfo)
2000 0.008 0.000 1.276 0.001 base.py:1253(_to_embed)
2035 1.261 0.001 1.261 0.001 {method 'copy' of 'numpy.ndarray' objects}
1000 0.015 0.000 1.226 0.001 engines.py:61(evaluate)
11000 0.081 0.000 1.081 0.000 inspect.py:757(findsource)
Once again, about 7 seconds are spent in series.__getitem__, but there are also about 6 seconds of overhead; for example, about 2 seconds in frame.py:2861(eval) and about 2 seconds in expr.py:461(visit_Subscript).
I did only a superficial investigation (see more details further below), but this overhead doesn't seem to be just constant but at least linear in the number of elements in the series. For example, there is method 'copy' of 'numpy.ndarray' objects, which means that data is copied (it is quite unclear why this would be necessary per se).
My take-away from it: using pd.eval has advantages as long as the evaluated expression can be evaluated with numexpr alone. As soon as this is not the case, there may no longer be gains, but rather losses, due to the quite large overhead.
Using line_profiler (here I use the %lprun magic, after loading it with %load_ext line_profiler, on the function run(), which is more or less a copy of the script above), we can easily find where the time is lost in DataFrame.eval:
%lprun -f pd.core.frame.DataFrame.eval \
       -f pd.core.frame.DataFrame._get_index_resolvers \
       -f pd.core.frame.DataFrame._get_axis_resolvers \
       -f pd.core.indexes.base.Index.to_series \
       -f pd.core.indexes.base.Index._to_embed \
       run()
Here we can see where the additional 10% are spent:
Line # Hits Time Per Hit % Time Line Contents
==============================================================
2861 def eval(self, expr,
....
2951 10 206.0 20.6 0.0 from pandas.core.computation.eval import eval as _eval
2952
2953 10 176.0 17.6 0.0 inplace = validate_bool_kwarg(inplace, 'inplace')
2954 10 30.0 3.0 0.0 resolvers = kwargs.pop('resolvers', None)
2955 10 37.0 3.7 0.0 kwargs['level'] = kwargs.pop('level', 0) + 1
2956 10 17.0 1.7 0.0 if resolvers is None:
2957 10 235850.0 23585.0 9.0 index_resolvers = self._get_index_resolvers()
2958 10 2231.0 223.1 0.1 resolvers = dict(self.iteritems()), index_resolvers
2959 10 29.0 2.9 0.0 if 'target' not in kwargs:
2960 10 19.0 1.9 0.0 kwargs['target'] = self
2961 10 46.0 4.6 0.0 kwargs['resolvers'] = kwargs.get('resolvers', ()) + tuple(resolvers)
2962 10 2392725.0 239272.5 90.9 return _eval(expr, inplace=inplace, **kwargs)
and _get_index_resolvers() can be drilled down to Index._to_embed:
Line # Hits Time Per Hit % Time Line Contents
==============================================================
1253 def _to_embed(self, keep_tz=False, dtype=None):
1254 """
1255 *this is an internal non-public method*
1256
1257 return an array repr of this object, potentially casting to object
1258
1259 """
1260 40 73.0 1.8 0.0 if dtype is not None:
1261 return self.astype(dtype)._to_embed(keep_tz=keep_tz)
1262
1263 40 201490.0 5037.2 100.0 return self.values.copy()
Where the O(n)-copying happens.
I am seeing a huge difference in performance between pandas 0.11 and pandas 0.13 on simple series operations.
In [7]: df = pandas.DataFrame({'a':np.arange(1000000), 'b':np.arange(1000000)})
In [8]: pandas.__version__
Out[8]: '0.13.0'
In [9]: %timeit df['a'].values+df['b'].values
100 loops, best of 3: 4.33 ms per loop
In [10]: %timeit df['a']+df['b']
10 loops, best of 3: 42.5 ms per loop
On version 0.11 however (on the same machine),
In [10]: pandas.__version__
Out[10]: '0.11.0'
In [11]: df = pandas.DataFrame({'a':np.arange(1000000), 'b':np.arange(1000000)})
In [12]: %timeit df['a'].values+df['b'].values
100 loops, best of 3: 2.22 ms per loop
In [13]: %timeit df['a']+df['b']
100 loops, best of 3: 2.3 ms per loop
So on 0.13, it's about 20x slower. Profiling it, I see
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.047 0.047 <string>:1(<module>)
1 0.000 0.000 0.047 0.047 ops.py:462(wrapper)
3 0.000 0.000 0.044 0.015 series.py:134(__init__)
1 0.000 0.000 0.044 0.044 series.py:2394(_sanitize_array)
1 0.000 0.000 0.044 0.044 series.py:2407(_try_cast)
1 0.000 0.000 0.044 0.044 common.py:1708(_possibly_cast_to_datetime)
1 0.044 0.044 0.044 0.044 {pandas.lib.infer_dtype}
1 0.000 0.000 0.003 0.003 ops.py:442(na_op)
1 0.000 0.000 0.003 0.003 expressions.py:193(evaluate)
1 0.000 0.000 0.003 0.003 expressions.py:93(_evaluate_numexpr)
So it's spending a huge amount of time on _possibly_cast_to_datetime and pandas.lib.infer_dtype.
Is this change expected? How can I get the old, faster performance back?
Note: It appears that the problem is that the output is of an integer type. If I make one of the columns a double, it goes back to being fast ...
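A hedged illustration of that note (df2 here is my own hypothetical variant; timings not reproduced):
In [14]: df2 = pandas.DataFrame({'a': np.arange(1000000, dtype='float64'),
    ...:                         'b': np.arange(1000000)})
In [15]: %timeit df2['a'] + df2['b']   # fast again on 0.13 when one operand is float, per the note above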
This was a very odd bug having to do (I think) with a strange lookup going on in Cython. For some reason
_TYPE_MAP = { np.int64 : 'integer' }
np.int64 in _TYPE_MAP
was not evaluating correctly, ONLY for int64 (but it worked just fine for all other dtypes). It's possible the hash of the np.dtype object was screwy for some reason. In any event, it is fixed here: https://github.com/pydata/pandas/pull/7342, so we use name hashing instead.
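A rough illustration of the "name hashing" idea (simplified; not the actual pandas code): key the lookup by the dtype's name rather than by the type object whose hash misbehaved.
import numpy as np

# Hypothetical simplified lookup keyed by dtype name instead of the type object.
_TYPE_MAP = {np.dtype('int64').name: 'integer'}
print(np.dtype(np.int64).name in _TYPE_MAP)   # True, independent of how np.int64 hashes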
Here's the perf comparison:
master
In [1]: df = pandas.DataFrame({'a':np.arange(1000000), 'b':np.arange(1000000)})
In [2]: %timeit df['a'] + df['b']
100 loops, best of 3: 2.49 ms per loop
0.14.0
In [6]: df = pandas.DataFrame({'a':np.arange(1000000), 'b':np.arange(1000000)})
In [7]: %timeit df['a'] + df['b']
10 loops, best of 3: 35.1 ms per loop