I have the following code:
import numpy as np
A = np.random.random((128,4,46,23)) + np.random.random((128,4,46,23)) * 1j
signal = np.random.random((355,256,4)) + np.random.random((355,256,4)) * 1j
# start timing here
Signal = np.fft.fft(signal, axis=1)[:,:128,:]
B = Signal[..., None, None] * A[None,...]
B = B.sum(1)
b = np.fft.ifft(B, axis=1).real
b_squared = b**2
res = b_squared.sum(1)
I need to run this code for two different values of A each time. The problem is that the code is too slow to be used in a real-time application. I tried using np.einsum, and although it did speed things up a little, it wasn't enough for my application.
So now I'm trying to speed things up using a GPU, but I'm not sure how. I looked into OpenCL, and multiplying two matrices seems fine, but I'm not sure how to do it with complex numbers and arrays with more than two dimensions (I guess using a for loop to send two axes at a time to the GPU). I also don't know how to do something like array.sum(axis). I have worked with a GPU before, using OpenGL.
I would have no problem using C++ to optimize the code if needed, or using something besides OpenCL, as long as it works with more than one GPU manufacturer (so no CUDA).
Edit:
Running cProfile:
import cProfile

def f():
    Signal = np.fft.fft(signal, axis=1)[:,:128,:]
    B = Signal[..., None, None] * A[None,...]
    B = B.sum(1)
    b = np.fft.ifft(B, axis=1).real
    b_squared = b**2
    res = b_squared.sum(1)

cProfile.run("f()", sort="cumtime")
Output:
56 function calls (52 primitive calls) in 1.555 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 1.555 1.555 {built-in method builtins.exec}
1 0.005 0.005 1.555 1.555 <string>:1(<module>)
1 1.240 1.240 1.550 1.550 <ipython-input-10-d4613cd45f64>:3(f)
2 0.000 0.000 0.263 0.131 {method 'sum' of 'numpy.ndarray' objects}
2 0.000 0.000 0.263 0.131 _methods.py:45(_sum)
2 0.263 0.131 0.263 0.131 {method 'reduce' of 'numpy.ufunc' objects}
6/2 0.000 0.000 0.047 0.024 {built-in method numpy.core._multiarray_umath.implement_array_function}
2 0.000 0.000 0.047 0.024 _pocketfft.py:49(_raw_fft)
2 0.047 0.023 0.047 0.023 {built-in method numpy.fft._pocketfft_internal.execute}
1 0.000 0.000 0.041 0.041 <__array_function__ internals>:2(ifft)
1 0.000 0.000 0.041 0.041 _pocketfft.py:189(ifft)
1 0.000 0.000 0.006 0.006 <__array_function__ internals>:2(fft)
1 0.000 0.000 0.006 0.006 _pocketfft.py:95(fft)
4 0.000 0.000 0.000 0.000 <__array_function__ internals>:2(swapaxes)
4 0.000 0.000 0.000 0.000 fromnumeric.py:550(swapaxes)
4 0.000 0.000 0.000 0.000 fromnumeric.py:52(_wrapfunc)
4 0.000 0.000 0.000 0.000 {method 'swapaxes' of 'numpy.ndarray' objects}
2 0.000 0.000 0.000 0.000 _asarray.py:14(asarray)
4 0.000 0.000 0.000 0.000 {built-in method builtins.getattr}
2 0.000 0.000 0.000 0.000 {built-in method numpy.array}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
2 0.000 0.000 0.000 0.000 {built-in method numpy.core._multiarray_umath.normalize_axis_index}
4 0.000 0.000 0.000 0.000 fromnumeric.py:546(_swapaxes_dispatcher)
2 0.000 0.000 0.000 0.000 _pocketfft.py:91(_fft_dispatcher)
Most of the number-crunching libraries that interface well with the Python ecosystem have tight dependencies on Nvidia's ecosystem, but this is changing slowly. Here are some things you could try:
Profile your code. The built-in profiler (cProfile) is probably a good place to start; I'm also a fan of snakeviz for looking at performance traces. This will actually tell you whether NumPy's FFT implementation is what's blocking you. Is memory being allocated efficiently? Is there some way you could hand more data off to np.fft.ifft? How much time is Python taking to read the signal from its source and convert it into a NumPy array?
Numba is a JIT compiler that takes Python code and further optimizes it, or compiles it to either CUDA or ROCm (AMD). I'm not sure how far off you are from your performance goals, but perhaps this could help (see the sketch after this list).
Here is a list of C++ libraries to try.
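To make the Numba suggestion concrete, here is a minimal sketch (my code, not benchmarked) of how the broadcast-multiply-and-sum step from the question could be JIT-compiled; the FFT/IFFT calls stay in NumPy, since Numba does not compile np.fft:
import numpy as np
from numba import njit, prange  # assumes Numba is installed

@njit(parallel=True)
def contract(signal_fft, a):
    # Equivalent to (signal_fft[..., None, None] * a[None, ...]).sum(1):
    # contract the shared 128-sample frequency axis.
    n_t, n_f, n_c = signal_fft.shape   # (355, 128, 4)
    _, _, n_x, n_y = a.shape           # (128, 4, 46, 23)
    out = np.zeros((n_t, n_c, n_x, n_y), dtype=np.complex128)
    for i in prange(n_t):              # outer loop runs across CPU cores
        for k in range(n_c):
            for l in range(n_x):
                for m in range(n_y):
                    acc = 0j
                    for j in range(n_f):
                        acc += signal_fft[i, j, k] * a[j, k, l, m]
                    out[i, k, l, m] = acc
    return out

# Usage with the question's arrays:
# Signal = np.fft.fft(signal, axis=1)[:, :128, :]
# B = contract(Signal, A)
# res = (np.fft.ifft(B, axis=1).real ** 2).sum(1)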
Honestly, I'm kind of surprised that the NumPy build distributed on PyPI isn't fast enough for real-time use. I'll post an update where I benchmark this code snippet.
UPDATE: Here's an implementation which uses a multiprocessing.Pool, and the np.einsum trick kindly provided by @hpaulj.
import time
import numpy as np
import multiprocessing

NUM_RUNS = 50

A = np.random.random((128, 4, 46, 23)) + np.random.random((128, 4, 46, 23)) * 1j
signal = np.random.random((355, 256, 4)) + np.random.random((355, 256, 4)) * 1j


def worker(signal_chunk: np.ndarray, a_chunk: np.ndarray) -> np.ndarray:
    return (
        np.fft.ifft(np.einsum("ijk,jklm->iklm", signal_chunk, a_chunk), axis=1).real ** 2
    )


# old code
serial_times = []
for _ in range(NUM_RUNS):
    start = time.monotonic()
    Signal = np.fft.fft(signal, axis=1)[:, :128, :]
    B = Signal[..., None, None] * A[None, ...]
    B = B.sum(1)
    b = np.fft.ifft(B, axis=1).real
    b_squared = b ** 2
    res = b_squared.sum(1)
    serial_times.append(time.monotonic() - start)

parallel_times = []
# My CPU is hyperthreaded, so I'm only spawning workers for the number of physical cores
with multiprocessing.Pool(multiprocessing.cpu_count() // 2) as p:
    for _ in range(NUM_RUNS):
        start = time.monotonic()
        # Get the FFT of the whole sample before splitting
        transformed = np.fft.fft(signal, axis=1)[:, :128, :]
        a_chunks = np.split(A, A.shape[0] // multiprocessing.cpu_count(), axis=0)
        signal_chunks = np.split(
            transformed, transformed.shape[1] // multiprocessing.cpu_count(), axis=1
        )
        res = np.sum(np.hstack(p.starmap(worker, zip(signal_chunks, a_chunks))), axis=1)
        parallel_times.append(time.monotonic() - start)

print(
    f"ORIGINAL AVG TIME: {np.mean(serial_times):.3f}\t POOLED TIME: {np.mean(parallel_times):.3f}"
)
And here are the results I'm getting on a Ryzen 3700X (8 cores, 16 threads):
ORIGINAL AVG TIME: 0.897 POOLED TIME: 0.315
I'd have loved to offer you an FFT library written in OpenCL, but I'm not sure whether you'd have to write the Python bridge yourself (more code) or whether you'd trust the first implementation you came across on GitHub. If you're willing to give in to CUDA's vendor lock-in, there is an "almost drop-in replacement" for NumPy called CuPy, and it has FFT and IFFT kernels. Hope this helps!
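For reference, here is a minimal sketch of what the question's pipeline could look like with CuPy (my code, untested; it assumes CuPy and a compatible GPU are available, and that A and signal are the arrays from the question):
import cupy as cp

A_gpu = cp.asarray(A)            # copy the (128, 4, 46, 23) array to the GPU once
signal_gpu = cp.asarray(signal)  # copy each incoming (355, 256, 4) block

Signal_gpu = cp.fft.fft(signal_gpu, axis=1)[:, :128, :]
B_gpu = cp.einsum("ijk,jklm->iklm", Signal_gpu, A_gpu)
b_gpu = cp.fft.ifft(B_gpu, axis=1).real
res = cp.asnumpy((b_gpu ** 2).sum(1))  # bring only the final result back to the host
Note that CuPy itself targets CUDA (with experimental ROCm builds), so the vendor caveat above still applies.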
With your arrays:
In [42]: Signal.shape
Out[42]: (355, 128, 4)
In [43]: A.shape
Out[43]: (128, 4, 46, 23)
The B calc takes minutes on my modest machine:
In [44]: B = (Signal[..., None, None] * A[None,...]).sum(1)
In [45]: B.shape
Out[45]: (355, 4, 46, 23)
einsum is much faster:
In [46]: B2=np.einsum('ijk,jklm->iklm',Signal,A)
In [47]: np.allclose(B,B2)
Out[47]: True
In [48]: timeit B2=np.einsum('ijk,jklm->iklm',Signal,A)
1.05 s ± 21.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
reworking the dimensions to move the 128 to the classic dot positions:
In [49]: B21=np.einsum('ikj,kjn->ikn',Signal.transpose(0,2,1),A.reshape(128, 4, 46*23).transpose(1,0,2))
In [50]: B21.shape
Out[50]: (355, 4, 1058)
In [51]: timeit B21=np.einsum('ikj,kjn->ikn',Signal.transpose(0,2,1),A.reshape(128, 4, 46*23).transpose(1,0,2))
1.04 s ± 3.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
With a bit more tweaking I can use matmul/@ and cut the time in half:
In [52]: B3=Signal.transpose(0,2,1)[:,:,None,:]@(A.reshape(1,128, 4, 46*23).transpose(0,2,1,3))
In [53]: timeit B3=Signal.transpose(0,2,1)[:,:,None,:]@(A.reshape(1,128, 4, 46*23).transpose(0,2,1,3))
497 ms ± 11.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [54]: B3.shape
Out[54]: (355, 4, 1, 1058)
In [56]: np.allclose(B, B3[:,:,0,:].reshape(B.shape))
Out[56]: True
Casting the arrays to the matmul format took a fair bit of experimentation. matmul makes good use of BLAS-like libraries, so you may be able to improve speed further by installing a faster BLAS.
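If you want to check which BLAS your NumPy build is actually linked against before installing a different one, this built-in call prints the build configuration (a quick check of mine, not part of the timings above):
import numpy as np
np.show_config()  # lists the BLAS/LAPACK libraries NumPy was built with (e.g. OpenBLAS, MKL)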
Running NumPy version 1.19.2, I get better performance by cumulatively taking the mean over each axis of an array, one at a time, than by calculating the mean over an already flattened array.
shape = (10000,32,32,3)
mat = np.random.random(shape)
# Call this Method A.
%%timeit
mat_means = mat.mean(axis=0).mean(axis=0).mean(axis=0)
14.6 ms ± 167 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
mat_reshaped = mat.reshape(-1,3)
# Call this Method B
%%timeit
mat_means = mat_reshaped.mean(axis=0)
135 ms ± 227 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
This is odd, since taking the mean multiple times has the same bad access pattern as (perhaps even worse than) the one on the reshaped array. We also do more operations this way. As a sanity check, I converted the array to Fortran order:
mat_reshaped_fortran = mat.reshape(-1,3, order='F')
%%timeit
mat_means = mat_reshaped_fortran.mean(axis=0)
12.2 ms ± 85.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
This yields the performance improvement I expected.
For Method A, prun gives:
36 function calls in 0.019 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
3 0.018 0.006 0.018 0.006 {method 'reduce' of 'numpy.ufunc' objects}
1 0.000 0.000 0.019 0.019 {built-in method builtins.exec}
3 0.000 0.000 0.019 0.006 _methods.py:143(_mean)
3 0.000 0.000 0.000 0.000 _methods.py:59(_count_reduce_items)
1 0.000 0.000 0.019 0.019 <string>:1(<module>)
3 0.000 0.000 0.019 0.006 {method 'mean' of 'numpy.ndarray' objects}
3 0.000 0.000 0.000 0.000 _asarray.py:86(asanyarray)
3 0.000 0.000 0.000 0.000 {built-in method numpy.array}
3 0.000 0.000 0.000 0.000 {built-in method numpy.core._multiarray_umath.normalize_axis_index}
6 0.000 0.000 0.000 0.000 {built-in method builtins.isinstance}
6 0.000 0.000 0.000 0.000 {built-in method builtins.issubclass}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
While for Method B:
14 function calls in 0.166 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.166 0.166 0.166 0.166 {method 'reduce' of 'numpy.ufunc' objects}
1 0.000 0.000 0.166 0.166 {built-in method builtins.exec}
1 0.000 0.000 0.166 0.166 _methods.py:143(_mean)
1 0.000 0.000 0.000 0.000 _methods.py:59(_count_reduce_items)
1 0.000 0.000 0.166 0.166 <string>:1(<module>)
1 0.000 0.000 0.166 0.166 {method 'mean' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 _asarray.py:86(asanyarray)
1 0.000 0.000 0.000 0.000 {built-in method numpy.array}
1 0.000 0.000 0.000 0.000 {built-in method numpy.core._multiarray_umath.normalize_axis_index}
2 0.000 0.000 0.000 0.000 {built-in method builtins.isinstance}
2 0.000 0.000 0.000 0.000 {built-in method builtins.issubclass}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Note: np.setbufsize(1e7) doesn't seem to have any effect.
What is the reason for this performance difference?
Let's call your original matrix mat. mat.shape = (10000,32,32,3). Visually, this is like having a "stack" of 10,000 32x32x3 rectangular prisms (I think of them as LEGOs) of floats.
Now lets think about what you did in terms of floating point operations (flops):
In Method A, you do mat.mean(axis=0).mean(axis=0).mean(axis=0). Let's break this down:
You take the mean of each position (i,j,k) across all 10,000 LEGOs. This gives you back a single LEGO of size 32x32x3 which now contains the first set of means. This means you have performed 10,000 additions and 1 division per mean, of which there are 32*32*3 = 3072. In total, you've done 30,723,072 flops.
You then take the mean again, this time of each position (j,k), where i is now the number of the layer (vertical position) you are currently on. This gives you a piece of paper with 32x3 means written on it. You have performed 32 additions and 1 division per mean, of which there are 32*3 = 96. In total, you've done 3,168 flops.
Finally, you take the mean of each column k, where j is now the row you are currently on. This gives you a stub with 3 means written on it. You have performed 32 additions and 1 division per mean, of which there are 3. In total, you've done 99 flops.
The grand total of all this is 30,723,072 + 3,168 + 99 = 30,726,339 flops.
In Method B, you do mat_reshaped = mat.reshape(-1,3); mat_means = mat_reshaped.mean(axis=0). Let's break this down:
You reshaped everything, so mat is a long roll of paper of size 10,240,000x3. You take the mean of each column k, where j is now the row you are currently on. This gives you a stub with 3 means written on it. You have performed 10,240,000 additions and 1 division per mean, of which there are 3. In total, you've done 30,720,003 flops.
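The stated counts can be reproduced with a couple of lines (my sanity check of the arithmetic, using the per-mean counts as given above):
# Method A: 3072 means over 10,000 values, then 96 and 3 means over 32 values each
method_a_flops = 3072 * (10_000 + 1) + 96 * (32 + 1) + 3 * (32 + 1)
# Method B: 3 means over 10,240,000 values each
method_b_flops = 3 * (10_240_000 + 1)
print(method_a_flops, method_b_flops)  # 30726339 30720003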
So now you're saying to yourself, "What! All of that work, only to show that the slower method actually does less work?!" Here's the problem: although Method B has less work to do, it does not have a lot less work to do, so from a flop standpoint alone we would expect the runtimes to be similar.
You also have to consider the size of your reshaped array in Method B: a matrix with 10,240,000 rows is HUGE! It's really hard/inefficient for the computer to access all of that, and more memory accesses mean longer runtimes. The fact is that in its original 10,000x32x32x3 shape, the matrix was already partitioned into convenient slices that the computer could access more efficiently. This is actually a common technique when handling giant matrices (see Jaime's response to a similar question, or this article): both talk about how breaking up a big matrix into smaller slices helps your program be more memory-efficient, and therefore makes it run faster.
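As a rough illustration of that slicing idea (my sketch, not from the linked posts), the flattened array can be reduced in LEGO-sized blocks, which gives the same result while touching memory in smaller pieces:
import numpy as np

mat = np.random.random((10000, 32, 32, 3))
flat = mat.reshape(-1, 3)

block = 32 * 32                          # one 32x32x3 "LEGO" worth of rows
acc = np.zeros(3)
for start in range(0, flat.shape[0], block):
    acc += flat[start:start + block].sum(axis=0)
mat_means = acc / flat.shape[0]

assert np.allclose(mat_means, flat.mean(axis=0))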
I have the following code that takes 800 ms to execute, although the data is not that large: just a few columns and a few rows.
Is there an opportunity to make it faster? I really don't know where the bottleneck in this code is.
def compute_s_t(df,
                gb=('session_time', 'trajectory_id'),
                params=('t', 's', 's_normalized', 'v_direct', 't_abs', ),
                fps=25, inplace=True):
    if not inplace:
        df = df.copy()
    orig_columns = df.columns.tolist()
    # compute travelled distance
    df['dx'] = df['x_world'].diff()
    df['dy'] = df['y_world'].diff()
    t1 = datetime.datetime.now()
    df['ds'] = np.sqrt(np.array(df['dx'] ** 2 + df['dy'] ** 2, dtype=np.float32))
    df['ds'].iloc[0] = 0  # to avoid NaN returned by .diff()
    df['s'] = df['ds'].cumsum()
    df['s'] = (df.groupby('trajectory_id')['s']
               .transform(subtract_nanmin))
    # compute travelled time
    df['dt'] = df['frame'].diff() / fps
    df['dt'].iloc[0] = 0  # to avoid NaN returned by .diff()
    df['t'] = df['dt'].cumsum()
    df['t'] = (df.groupby('trajectory_id')['t']
               .transform(subtract_nanmin))
    df['t_abs'] = df['frame'] / fps
    # compute velocity
    # why values[:, 0]? why duplicate column?
    df['v_direct'] = df['ds'].values / df['dt'].values
    df.loc[df['t'] == 0, 'v'] = np.NaN
    # compute normalized s
    df['s_normalized'] = (df.groupby('trajectory_id')['s']
                          .transform(divide_nanmax))
    # skip intermediate results
    cols = orig_columns + list(params)
    t2 = datetime.datetime.now()
    print((t2 - t1).microseconds / 1000)
    return df[cols]
Here is the profiler output:
18480 function calls (18196 primitive calls) in 0.593 seconds
Ordered by: call count
ncalls tottime percall cumtime percall filename:lineno(function)
11 0.000 0.000 0.580 0.053 frame.py:3105(__setitem__)
11 0.000 0.000 0.000 0.000 frame.py:3165(_ensure_valid_index)
11 0.000 0.000 0.580 0.053 frame.py:3182(_set_item)
11 0.000 0.000 0.000 0.000 frame.py:3324(_sanitize_column)
11 0.000 0.000 0.003 0.000 generic.py:2599(_set_item)
11 0.000 0.000 0.577 0.052 generic.py:2633(_check_setitem_copy)
11 0.000 0.000 0.000 0.000 indexing.py:2321(convert_to_index_sliceable)
As suggested in the comments, I ran a profiler; the profiling result for the function is shown above.
def subtract_nanmin(x):
    return x - np.nanmin(x)

def divide_nanmax(x):
    return x / np.nanmax(x)
One thing to do is replace:
df.columns.tolist()
with
df.columns.values.tolist()
This is much faster. Here's an experiment with a random 100x100 dataframe:
%timeit df.columns.values.tolist()
output:
1.29 µs ± 19.8 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
and with the same df:
%timeit df.columns.tolist()
output:
6.91 µs ± 241 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
UPDATE:
What are subtract_nanmin and divide_nanmax?
Instead of
df['ds'].iloc[0] = 0 # to avoid NaN returned by .diff()
df['dt'].iloc[0] = 0 # to avoid NaN returned by .diff()
You can use df.fillna(0) or df['ds'].fillna(0) to get rid of the NaNs.
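For example (a minimal sketch using the column names from the question):
df['ds'] = df['ds'].fillna(0)  # replaces the NaN that .diff() leaves in the first row
df['dt'] = df['dt'].fillna(0)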
Test code:
import numpy as np
import pandas as pd
COUNT = 1000000
df = pd.DataFrame({
'y': np.random.normal(0, 1, COUNT),
'z': np.random.gamma(50, 1, COUNT),
})
%timeit df.y[(10 < df.z) & (df.z < 50)].mean()
%timeit df.y.values[(10 < df.z.values) & (df.z.values < 50)].mean()
%timeit df.eval('y[(10 < z) & (z < 50)].mean()', engine='numexpr')
The output on my machine (a fairly fast x86-64 Linux desktop with Python 3.6) is:
17.8 ms ± 1.3 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
8.44 ms ± 502 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
46.4 ms ± 2.22 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
I understand why the second line is a bit faster (it ignores the Pandas index). But why is the eval() approach using numexpr so slow? Shouldn't it be faster than at least the first approach? The documentation sure makes it seem like it would be: https://pandas.pydata.org/pandas-docs/stable/enhancingperf.html
From the investigation presented below, it looks like the unspectacular reason for the worse performance is "overhead".
Only a small part of the expression y[(10 < z) & (z < 50)].mean() is handled by the numexpr module. numexpr doesn't support indexing, thus we can only hope for (10 < z) & (z < 50) to be sped up; everything else is mapped to pandas operations.
However, (10 < z) & (z < 50) is not the bottleneck here, as can easily be seen:
%timeit df.y[(10 < df.z) & (df.z < 50)].mean() # 16.7 ms
mask=(10 < df.z) & (df.z < 50)
%timeit df.y[mask].mean() # 13.7 ms
%timeit df.y[mask] # 13.2 ms
df.y[mask] takes the lion's share of the running time.
We can compare the profiler output for df.y[mask] and df.eval('y[mask]') to see what makes the difference.
When I use the following script:
import numpy as np
import pandas as pd
COUNT = 1000000
df = pd.DataFrame({
'y': np.random.normal(0, 1, COUNT),
'z': np.random.gamma(50, 1, COUNT),
})
mask = (10 < df.z) & (df.z < 50)
df['m']=mask
for _ in range(500):
    df.y[df.m]
    # OR
    # df.eval('y[m]', engine='numexpr')
and run it with python -m cProfile -s cumulative run.py (or %prun -s cumulative <...> in IPython), I can see the following profiles.
For direct call of the pandas functionality:
ncalls tottime percall cumtime percall filename:lineno(function)
419/1 0.013 0.000 7.228 7.228 {built-in method builtins.exec}
1 0.006 0.006 7.228 7.228 run.py:1(<module>)
500 0.005 0.000 6.589 0.013 series.py:764(__getitem__)
500 0.003 0.000 6.475 0.013 series.py:812(_get_with)
500 0.003 0.000 6.468 0.013 series.py:875(_get_values)
500 0.009 0.000 6.445 0.013 internals.py:4702(get_slice)
500 0.006 0.000 3.246 0.006 range.py:491(__getitem__)
505 3.146 0.006 3.236 0.006 base.py:2067(__getitem__)
500 3.170 0.006 3.170 0.006 internals.py:310(_slice)
635/2 0.003 0.000 0.414 0.207 <frozen importlib._bootstrap>:958(_find_and_load)
We can see that almost 100% of the time is spent in series.__getitem__ without any overhead.
For the call via df.eval(...), the situation is quite different:
ncalls tottime percall cumtime percall filename:lineno(function)
453/1 0.013 0.000 12.702 12.702 {built-in method builtins.exec}
1 0.015 0.015 12.702 12.702 run.py:1(<module>)
500 0.013 0.000 12.090 0.024 frame.py:2861(eval)
1000/500 0.025 0.000 10.319 0.021 eval.py:153(eval)
1000/500 0.007 0.000 9.247 0.018 expr.py:731(__init__)
1000/500 0.004 0.000 9.236 0.018 expr.py:754(parse)
4500/500 0.019 0.000 9.233 0.018 expr.py:307(visit)
1000/500 0.003 0.000 9.105 0.018 expr.py:323(visit_Module)
1000/500 0.002 0.000 9.102 0.018 expr.py:329(visit_Expr)
500 0.011 0.000 9.096 0.018 expr.py:461(visit_Subscript)
500 0.007 0.000 6.874 0.014 series.py:764(__getitem__)
500 0.003 0.000 6.748 0.013 series.py:812(_get_with)
500 0.004 0.000 6.742 0.013 series.py:875(_get_values)
500 0.009 0.000 6.717 0.013 internals.py:4702(get_slice)
500 0.006 0.000 3.404 0.007 range.py:491(__getitem__)
506 3.289 0.007 3.391 0.007 base.py:2067(__getitem__)
500 3.282 0.007 3.282 0.007 internals.py:310(_slice)
500 0.003 0.000 1.730 0.003 generic.py:432(_get_index_resolvers)
1000 0.014 0.000 1.725 0.002 generic.py:402(_get_axis_resolvers)
2000 0.018 0.000 1.685 0.001 base.py:1179(to_series)
1000 0.003 0.000 1.537 0.002 scope.py:21(_ensure_scope)
1000 0.014 0.000 1.534 0.002 scope.py:102(__init__)
500 0.005 0.000 1.476 0.003 scope.py:242(update)
500 0.002 0.000 1.451 0.003 inspect.py:1489(stack)
500 0.021 0.000 1.449 0.003 inspect.py:1461(getouterframes)
11000 0.062 0.000 1.415 0.000 inspect.py:1422(getframeinfo)
2000 0.008 0.000 1.276 0.001 base.py:1253(_to_embed)
2035 1.261 0.001 1.261 0.001 {method 'copy' of 'numpy.ndarray' objects}
1000 0.015 0.000 1.226 0.001 engines.py:61(evaluate)
11000 0.081 0.000 1.081 0.000 inspect.py:757(findsource)
Once again, about 7 seconds are spent in series.__getitem__, but there are also about 6 seconds of overhead, for example about 2 seconds in frame.py:2861(eval) and about 2 seconds in expr.py:461(visit_Subscript).
I did only a superficial investigation (see more details further below), but this overhead doesn't seem to be constant; it is at least linear in the number of elements in the series. For example, there is method 'copy' of 'numpy.ndarray' objects, which means that data is copied (it is quite unclear why this would be necessary per se).
My take-away: using pd.eval has advantages as long as the evaluated expression can be evaluated by numexpr alone. As soon as this is not the case, there may no longer be gains but rather losses due to the quite large overhead.
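To illustrate that take-away (my sketch, not part of the measurements above), you can let numexpr evaluate only the boolean expression it supports and do the indexing and reduction in plain pandas:
mask = df.eval('(10 < z) & (z < 50)', engine='numexpr')  # numexpr handles the comparisons
result = df.y[mask].mean()                               # pandas handles indexing and the mean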
Using line_profiler (here I use the %lprun magic, after loading it with %load_ext line_profiler) for the function run(), which is more or less a copy of the script above, we can easily find where the time is lost in DataFrame.eval:
%lprun -f pd.core.frame.DataFrame.eval \
       -f pd.core.frame.DataFrame._get_index_resolvers \
       -f pd.core.frame.DataFrame._get_axis_resolvers \
       -f pd.core.indexes.base.Index.to_series \
       -f pd.core.indexes.base.Index._to_embed \
       run()
Here we can see where the additional ~10% of the time is spent:
Line # Hits Time Per Hit % Time Line Contents
==============================================================
2861 def eval(self, expr,
....
2951 10 206.0 20.6 0.0 from pandas.core.computation.eval import eval as _eval
2952
2953 10 176.0 17.6 0.0 inplace = validate_bool_kwarg(inplace, 'inplace')
2954 10 30.0 3.0 0.0 resolvers = kwargs.pop('resolvers', None)
2955 10 37.0 3.7 0.0 kwargs['level'] = kwargs.pop('level', 0) + 1
2956 10 17.0 1.7 0.0 if resolvers is None:
2957 10 235850.0 23585.0 9.0 index_resolvers = self._get_index_resolvers()
2958 10 2231.0 223.1 0.1 resolvers = dict(self.iteritems()), index_resolvers
2959 10 29.0 2.9 0.0 if 'target' not in kwargs:
2960 10 19.0 1.9 0.0 kwargs['target'] = self
2961 10 46.0 4.6 0.0 kwargs['resolvers'] = kwargs.get('resolvers', ()) + tuple(resolvers)
2962 10 2392725.0 239272.5 90.9 return _eval(expr, inplace=inplace, **kwargs)
and _get_index_resolvers() can be drilled down to Index._to_embed:
Line # Hits Time Per Hit % Time Line Contents
==============================================================
1253 def _to_embed(self, keep_tz=False, dtype=None):
1254 """
1255 *this is an internal non-public method*
1256
1257 return an array repr of this object, potentially casting to object
1258
1259 """
1260 40 73.0 1.8 0.0 if dtype is not None:
1261 return self.astype(dtype)._to_embed(keep_tz=keep_tz)
1262
1263 40 201490.0 5037.2 100.0 return self.values.copy()
Where the O(n)-copying happens.
Given a dictionary of lists, such as
d = {'1':[11,12], '2':[21,21]}
Which is more pythonic or otherwise preferable:
for k in d:
    for x in d[k]:
        # whatever with k, x
or
for k, dk in d.iteritems():
    for x in dk:
        # whatever with k, x
or is there something else to consider?
EDIT, in case a list might be useful (e.g., standard dicts don't preserve order), this might be appropriate, although it's much slower.
d2 = d.items()
for k in d2:
    for x in k[1]:
        # whatever with k, x
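Side note (my addition): on Python 3, dict.iteritems() no longer exists and plain dicts preserve insertion order (guaranteed since 3.7), so the second form would simply be written as:
for k, dk in d.items():  # items() returns a view in Python 3
    for x in dk:
        pass             # whatever with k, x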
Here's a speed test, why not:
import random

numEntries = 1000000
d = dict(zip(range(numEntries), [random.sample(range(0, 100), 2) for x in range(numEntries)]))

def m1(d):
    for k in d:
        for x in d[k]:
            pass

def m2(d):
    for k, dk in d.iteritems():
        for x in dk:
            pass

import cProfile

cProfile.run('m1(d)')
print
cProfile.run('m2(d)')

# Ran 3 trials:
# m1: 0.205, 0.194, 0.193: average 0.197 s
# m2: 0.176, 0.166, 0.173: average 0.172 s
# Method 1 takes 15% more time than method 2
cProfile example output:
3 function calls in 0.194 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.194 0.194 <string>:1(<module>)
1 0.194 0.194 0.194 0.194 stackoverflow.py:7(m1)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
4 function calls in 0.179 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.179 0.179 <string>:1(<module>)
1 0.179 0.179 0.179 0.179 stackoverflow.py:12(m2)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1 0.000 0.000 0.000 0.000 {method 'iteritems' of 'dict' objects}
I considered a couple methods:
import itertools

COLORED_THINGS = {'blue': ['sky', 'jeans', 'powerline insert mode'],
                  'yellow': ['sun', 'banana', 'phone book/monitor stand'],
                  'red': ['blood', 'tomato', 'test failure']}

def forloops():
    """ Nested for loops. """
    for color, things in COLORED_THINGS.items():
        for thing in things:
            pass

def iterator():
    """ Use itertools and list comprehension to construct iterator. """
    for color, thing in (
            itertools.chain.from_iterable(
                [itertools.product((k,), v) for k, v in COLORED_THINGS.items()])):
        pass

def iterator_gen():
    """ Use itertools and generator to construct iterator. """
    for color, thing in (
            itertools.chain.from_iterable(
                (itertools.product((k,), v) for k, v in COLORED_THINGS.items()))):
        pass
I used IPython and memory_profiler to test performance:
>>> %timeit forloops()
1000000 loops, best of 3: 1.31 µs per loop
>>> %timeit iterator()
100000 loops, best of 3: 3.58 µs per loop
>>> %timeit iterator_gen()
100000 loops, best of 3: 3.91 µs per loop
>>> %memit -r 1000 forloops()
peak memory: 35.79 MiB, increment: 0.02 MiB
>>> %memit -r 1000 iterator()
peak memory: 35.79 MiB, increment: 0.00 MiB
>>> %memit -r 1000 iterator_gen()
peak memory: 35.79 MiB, increment: 0.00 MiB
As you can see, the choice of method had no observable impact on peak memory usage, but nested for loops were unbeatable for speed (not to mention readability).
Here's the list comprehension approach. Nested...
r = [[i for i in d[x]] for x in d.keys()]
print r
[[11, 12], [21, 21]]
My results from Brionius's code:
3 function calls in 0.173 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.173 0.173 <string>:1(<module>)
1 0.173 0.173 0.173 0.173 speed.py:5(m1)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
4 function calls in 0.185 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.185 0.185 <string>:1(<module>)
1 0.185 0.185 0.185 0.185 speed.py:10(m2)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1 0.000 0.000 0.000 0.000 {method 'iteritems' of 'dict' objects}