When reading a large HDF file with pandas.read_hdf() I get extremely slow read times. The file has 50 million rows: 3 columns with integers and 2 with strings. Writing it with to_hdf() in table format with indexing took almost 10 minutes. While this is also slow, I am not too concerned, since read speed is more important.
I have tried saving in fixed and table formats, with and without compression, but the read time ranges between 2 and 5 minutes. For comparison, read_csv() on the same data takes 4 minutes.
I have also tried reading the HDF file with PyTables directly. This is much faster, at 6 seconds, and is the speed I would like to see:
import tables

h5file = tables.open_file("data.h5", "r")
table = h5file.root.data.table.read()
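A rough sketch of turning that raw read back into a DataFrame, assuming the table was written with data_columns=True so each column is its own field in the structured array (string fields come back as bytes):

# 'table' is the structured array from the PyTables read above; the 'index'
# field holds the original DataFrame index.
df = pd.DataFrame.from_records(table, index="index")
h5file.close()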
I noticed that all the speed comparisons in the documentation use only numeric data, and running those myself I saw similar performance.
Is there anything I can do to optimise read performance?
Edit
Here is a sample of the data:
              col_A     col_B  col_C     col_D  col_E
30649671 1159660800  10217383      0  10596000  LACKEY
26198715 1249084800   0921720      0         0  KEY CLIFTON
19251910  752112000   0827092    104    243000  WEMPLE
47636877 1464739200  06247715      0         0  FLOYD
14121495 1233446400  05133815      0    988000  OGU ALLYN CH 9
41171050 1314835200  7C140009      0     39000  DEBERRY A
45865543 1459468800   0314892     76    254000  SABRINA
13387355  970358400  04140585     19   6956000  LA PERLA
 4186815  849398400  02039719      0  19208000  NPU UNIONSPIELHAGAN1
32666568  733622400  10072006      0   1074000  BROWN
And info on the dataframe:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 52046850 entries, 0 to 52046849
Data columns (total 5 columns):
col_A int64
col_B object
col_C int64
col_D int64
col_E object
dtypes: int64(3), object(2)
memory usage: 1.9+ GB
Here is a small demo:
Generating sample DF (1M rows):
import numpy as np
import pandas as pd

N = 10**6
df = pd.DataFrame({
    'n1': np.random.randint(10**6, size=N),
    'n2': np.random.randint(10**6, size=N),
    'n3': np.random.randint(10**6, size=N),
    's1': pd.util.testing.rands_array(10, size=N),
    's2': pd.util.testing.rands_array(40, size=N),
})
Let's write it to disk in CSV, HDF5 (fixed, table, and table + data_columns=True) and Feather formats:
df.to_csv(r'c:/tmp/test.csv', index=False)
df.to_hdf(r'c:/tmp/test_fix.h5', 'a')
df.to_hdf(r'c:/tmp/test_tab.h5', 'a', format='t')
df.to_hdf(r'c:/tmp/test_tab_idx.h5', 'a', format='t', data_columns=True)
import feather
feather.write_dataframe(df, 'c:/tmp/test.feather')
Reading:
In [2]: %timeit pd.read_csv(r'c:/tmp/test.csv')
1 loop, best of 3: 4.48 s per loop
In [3]: %timeit pd.read_hdf(r'c:/tmp/test_fix.h5','a')
1 loop, best of 3: 1.24 s per loop
In [4]: %timeit pd.read_hdf(r'c:/tmp/test_tab.h5','a')
1 loop, best of 3: 5.65 s per loop
In [5]: %timeit pd.read_hdf(r'c:/tmp/test_tab_idx.h5','a')
1 loop, best of 3: 5.6 s per loop
In [6]: %timeit feather.read_dataframe(r'c:/tmp/test.feather')
1 loop, best of 3: 589 ms per loop
Conditional reading: let's select only those rows where n2 <= 100000:
In [7]: %timeit pd.read_hdf(r'c:/tmp/test_tab_idx.h5','a', where="n2 <= 100000")
1 loop, best of 3: 1.18 s per loop
The less data we need to select (after filtering), the faster it is:
In [8]: %timeit pd.read_hdf(r'c:/tmp/test_tab_idx.h5','a', where="n2 <= 100000 and n1 > 500000")
1 loop, best of 3: 763 ms per loop
In [10]: %timeit pd.read_hdf(r'c:/tmp/test_tab_idx.h5','a', where="n2 <= 100000 and n1 > 500000 and n3 < 50000")
1 loop, best of 3: 379 ms per loop
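If you don't need everything in memory at once, the table format also supports chunked reads; a quick sketch (the chunk size here is an arbitrary choice):

# Each chunk comes back as a regular DataFrame, keeping memory bounded.
for chunk in pd.read_hdf(r'c:/tmp/test_tab.h5', 'a', chunksize=200000):
    pass  # process each chunk here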
UPDATE: for Pandas versions 0.20.0+ we can write and read directly to/from the Feather format (thanks @jezrael for the hint):
In [3]: df.to_feather(r'c:/tmp/test2.feather')
In [4]: %timeit pd.read_feather(r'c:/tmp/test2.feather')
1 loop, best of 3: 583 ms per loop
Example of generated DF:
In [13]: df
Out[13]:
n1 n2 n3 s1 s2
0 719458 808047 792611 Fjv4CoRv2b 2aWQTkutPlKkO38fRQh2tdh1BrnEFavmIsDZK17V
1 526092 950709 804869 dfG12EpzVI YVZzhMi9sfazZEW9e2TV7QIvldYj2RPHw0TXxS2z
2 109107 801344 266732 aoyBuHTL9I ui0PKJO8cQJwcvmMThb08agWL1UyRumYgB7jjmcw
3 873626 814409 895382 qQQms5pTGq zvf4HTaKCISrdPK98ROtqPqpsG4WhSdEgbKNHy05
4 212776 596713 924623 3YXa4PViAn 7Y94ykHIHIEnjKvGphYfAWSINRZtJ99fCPiMrfzl
5 375323 401029 973262 j6QQwYzfsK PNYOM2GpHdhrz9NCCifRsn8gIZkLHecjlk82o44Y
6 232655 937230 40883 NsI5Y78aLT qiKvXcAdPVbhWbXnyD3uqIwzS7ZsCgssm9kHAETb
7 69010 438280 564194 N73tQaZjey ttj1IHtjPyssyADMYiNScflBjN4SFv5bk3tbz93o
8 988081 8992 968871 eb9lc7D22T sb3dt1Ndc8CUHyvsFJgWRrQg4ula7KJ76KrSSqGH
9 127155 66042 881861 tHSBB3RsNH ZpZt5sxAU3zfiPniSzuJYrwtrytDvqJ1WflJ4vh3
... ... ... ... ... ...
999990 805220 21746 355944 IMCMWuf97L bj7tSrgudA5wLvWkWVQyNVamSGmFGOeQlIUoKXK3
999991 232596 293850 741881 JD0SVS5uob kWeP8DEw19rwxVN3XBBcskibMRGxfoToNO9RDeCT
999992 532752 733958 222003 9X4PopnltN dKhsdKFK1EfAATBFsB5hjKZzQWERxzxGEQZWAvSe
999993 308623 717897 703895 Fg0nuq63hA kHzRecZoaG5tAnLbtlq1hqtfd2l5oEMFbJp4NjhC
999994 841670 528518 70745 vKQDiAzZNf M5wdoUNfkdKX2VKQEArvBLYl5lnTNShjDLwnb8VE
999995 986988 599807 901853 r8iHjo39NH 72CfzCycAGoYMocbw3EbUbrV4LRowFjSDoDeYfT5
999996 384064 429184 203230 EJy0mTAmdQ 1jfUQCj2SLIktVqIRHfYQW2QYfpvhcWCbRLO5wqL
999997 967270 565677 146418 KWp2nH1MbM hzhn880cuEpjFhd5bd7vpgsjjRNgaViANW9FHwrf
999998 130864 863893 5614 L28QGa22f1 zfg8mBidk8NTa3LKO4rg31Z6K4ljK50q5tHHq8Fh
999999 528532 276698 553870 0XRJwqBAWX 0EzNcDkGUFklcbKELtcr36zPCMu9lSaIDcmm0kUX
[1000000 rows x 5 columns]
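A closing note: it's the object (string) columns that make the HDF5 round-trip expensive here. If your real string columns repeat a lot (the random strings in this demo would not benefit), a sketch worth trying is storing them as categoricals, which the table format supports:

# Categoricals are stored as integer codes plus a small lookup table,
# which can shrink the file and speed up reads for low-cardinality strings.
df['s1'] = df['s1'].astype('category')
df['s2'] = df['s2'].astype('category')
df.to_hdf(r'c:/tmp/test_cat.h5', 'a', format='t')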
I have a dataframe like this:
df = pd.DataFrame({'id': [205, 205, 205, 211, 211, 211],
                   'date': pd.to_datetime(['2019-12-01', '2020-01-01', '2020-02-01',
                                           '2019-12-01', '2020-01-01', '2020-03-01'])})
df
id date
0 205 2019-12-01
1 205 2020-01-01
2 205 2020-02-01
3 211 2019-12-01
4 211 2020-01-01
5 211 2020-03-01
where the date column consists of consecutive months for id 205 but not for id 211.
I want to keep only the observations (ids) for which I have monthly data without jumps. In this example I want:
id date
0 205 2019-12-01
1 205 2020-01-01
2 205 2020-02-01
Here I am collecting the ids to keep:
keep_id = []
for num in pd.unique(df['id']):
    dates = df.loc[df['id'] == num, 'date']
    # difference in months from the previous date
    temp = (dates.dt.year - dates.shift(1).dt.year) * 12 \
           + (dates.dt.month - dates.shift(1).dt.month)
    temp.values[0] = 1.0  # the first entry has no previous date, so count it as a 1-month step
    if (temp == 1.).all():
        keep_id.append(num)
where (dates.dt.year - dates.shift(1).dt.year) * 12 + (dates.dt.month - dates.shift(1).dt.month) computes the difference in months from the previous date for every id.
This seems to work when tested on a small portion of df, but I'm sure there is a better way of doing this, maybe using the .groupby() method.
Since df is made of millions of observations, my code takes too much time (and I'd like to learn a more efficient and pythonic way of doing this).
What you want to do is use a groupby-filter rather than a groupby-apply.
df.groupby('id').filter(lambda x: not (x.date.diff() > pd.Timedelta(days=32)).any())
provides exactly:
id date
0 205 2019-12-01
1 205 2020-01-01
2 205 2020-02-01
And I would keep the index unique; a unique index has too many useful properties to give up.
Both this response and Michael's above are correct in terms of output. In terms of performance, they are very similar as well:
%timeit df.groupby('id').filter(lambda x: not (x.date.diff() > pd.Timedelta(days=32)).any())
1.48 ms ± 12.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
and
%timeit df[df.groupby('id')['date'].transform(lambda x: x.diff().max() < pd.Timedelta(days=32))]
1.7 ms ± 163 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
For most operations, this difference is negligible.
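As a further sketch of my own (not benchmarked above): the same check can be done without any Python-level lambda, which tends to scale better on millions of rows:

# Month number of each row; consecutive months differ by exactly 1.
months = df['date'].dt.year * 12 + df['date'].dt.month
gap = months.groupby(df['id']).diff()  # NaN on each id's first row
bad_ids = df.loc[gap.notna() & gap.ne(1), 'id'].unique()
result = df[~df['id'].isin(bad_ids)]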
You can use the following approach. Only ~3x faster in my tests.
df[df.groupby('id')['date'].transform(lambda x: x.diff().max() < pd.Timedelta(days=32))]
Out:
    id       date
0  205 2019-12-01
1  205 2020-01-01
2  205 2020-02-01
Appreciate any help from the community on this. I've been toying with it for a few days now.
I have 2 dataframes, df1 and df2. The first will always be 1-minute data, about 20-30 thousand rows. The second contains random times with associated relevant data and will always be relatively small (1000-4000 rows x 4 or 5 columns). I'm iterating over df1 with itertuples in order to perform a time-specific (trailing) slice of df2. This gets repeated thousands of times, and the single slice line below (df3 = df2...) accounts for over 50% of the runtime. Simply adding a couple of slicing criteria to that one line can add 30+% to final runtimes that already run for hours!
I've considered pandas query(), but have read that it really only helps on larger dataframes. My thought is that it may be better to reduce df2 to a NumPy array or a plain Python list since it is always fairly short, though I think I'll need it back as a dataframe for the subsequent sorting, summations, and vector multiplications in the primary code. I did succeed in using concurrent.futures on a 12-core setup, which sped up my overall application about 5x, though I'm still talking hours of runtime.
Any help or suggestions would be appreciated.
Example code illustrating the issue:
import pandas as pd
import numpy as np
import random
from datetime import datetime as dt
from datetime import timedelta, timezone
def random_dates(start, end, n=10):
    start_u = start.value // 10**9
    end_u = end.value // 10**9
    return pd.to_datetime(np.random.randint(start_u, end_u, n), unit='s')
dfsize = 34000
df1 = pd.DataFrame({'datetime': pd.date_range('2010-01-01', periods=dfsize, freq='1min'),
                    'val': np.random.uniform(10, 100, size=dfsize)})

sizedf = 3000
start = pd.to_datetime('2010-01-01')
end = pd.to_datetime('2010-01-24')
test_list = [5, 30]
df2 = pd.DataFrame({'datetime': random_dates(start, end, sizedf),
                    'a': np.random.uniform(10, 100, size=sizedf),
                    'b': np.random.choice(test_list, sizedf),
                    'c': np.random.uniform(10, 100, size=sizedf),
                    'd': np.random.uniform(10, 100, size=sizedf),
                    'e': np.random.uniform(10, 100, size=sizedf)})
df2.set_index('datetime', inplace=True)

daysback5 = 3
daysback30 = 8
#%%timeit -r1  # time this section here:
# Slow portion here - performing ~4000+ slices on a dataframe (df2) of
# ~1000 to 3000 rows. Some slowdown is due to itertuples, which I don't
# think is avoidable.
for line, row in enumerate(df1.itertuples(index=False), 0):
    if row.datetime.minute % 5 == 0:
        # Lion's share of the slowdown:
        df3 = df2[(df2['a'] <= row.val * 1.25)
                  & (df2['a'] >= row.val * .75)
                  & (df2.index <= row.datetime)
                  & (((df2.index >= row.datetime - timedelta(days=daysback30)) & (df2['b'] == 30))
                     | ((df2.index >= row.datetime - timedelta(days=daysback5)) & (df2['b'] == 5)))
                  ].reset_index(drop=True).copy()
Time of slow part:
8.53 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
df1:
datetime val
0 2010-01-01 00:00:00 58.990147
1 2010-01-01 00:01:00 27.457308
2 2010-01-01 00:02:00 20.657251
3 2010-01-01 00:03:00 36.416561
4 2010-01-01 00:04:00 71.398897
... ... ...
33995 2010-01-24 14:35:00 77.763085
33996 2010-01-24 14:36:00 21.151239
33997 2010-01-24 14:37:00 83.741844
33998 2010-01-24 14:38:00 93.370216
33999 2010-01-24 14:39:00 99.720858
34000 rows × 2 columns
df2:
a b c d e
datetime
2010-01-03 23:38:13 22.363251 30 81.158073 21.806457 11.116421
2010-01-09 16:27:32 78.952070 5 27.045279 29.471537 29.559228
2010-01-13 04:49:57 85.985935 30 79.206437 29.711683 74.454446
2010-01-07 22:29:22 36.009752 30 43.072552 77.646257 57.208626
2010-01-15 09:33:02 13.653679 5 87.987849 37.433810 53.768334
... ... ... ... ... ...
2010-01-12 07:36:42 30.328512 5 81.281791 14.046032 38.288534
2010-01-08 20:26:31 80.911904 30 32.524414 80.571806 26.234552
2010-01-14 08:32:01 12.198825 5 94.270709 27.255914 87.054685
2010-01-06 03:25:09 82.591519 5 91.160917 79.042083 17.831732
2010-01-07 14:32:47 38.337405 30 10.619032 32.557640 87.890791
3000 rows × 5 columns
Actually, a cross merge and query work pretty well for your data size:
(df1[df1.datetime.dt.minute % 5 == 0].assign(dummy=1)
 .merge(df2.reset_index().assign(dummy=1),
        on='dummy', suffixes=['_1', '_2'])
 .query('val*1.25 >= a >= val*.75 and datetime_2 <= datetime_1')
 # daysback30/daysback5 are day counts, so wrap them in Timedelta before
 # subtracting from a datetime column:
 .loc[lambda x: ((x.datetime_2 >= x.datetime_1 - pd.Timedelta(days=daysback30)) & x['b'].eq(30))
              | ((x.datetime_2 >= x.datetime_1 - pd.Timedelta(days=daysback5)) & x['b'].eq(5))]
)
On my system this takes about:
2.05 s ± 60.4 ms per loop (mean ± std. dev. of 7 runs, 3 loops each)
whereas your code runs for about 10 s.
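As a side note, pandas 1.2+ has a built-in cross join, so the dummy-column trick can be dropped; a sketch of the same first step under that assumption:

# how='cross' (pandas >= 1.2) replaces the assign(dummy=1) / on='dummy' trick;
# suffixes still disambiguate the two overlapping 'datetime' columns.
pairs = (df1[df1.datetime.dt.minute % 5 == 0]
         .merge(df2.reset_index(), how='cross', suffixes=['_1', '_2']))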
When I call df.groupby([...]).apply(lambda x: ...) the performance is horrible. Is there a faster / more direct way to do this simple query?
To demonstrate my point, here is some code to set up the DataFrame:
import pandas as pd

df = pd.DataFrame(data={
    'ticker': ['AAPL', 'AAPL', 'AAPL', 'IBM', 'IBM', 'IBM'],
    'side': ['B', 'B', 'S', 'S', 'S', 'B'],
    'size': [100, 200, 300, 400, 100, 200],
    'price': [10.12, 10.13, 10.14, 20.3, 20.2, 20.1]})
price side size ticker
0 10.12 B 100 AAPL
1 10.13 B 200 AAPL
2 10.14 S 300 AAPL
3 20.30 S 400 IBM
4 20.20 S 100 IBM
5 20.10 B 200 IBM
Now here is the part that is extremely slow and that I need to speed up:
%timeit avgpx = df.groupby(['ticker', 'side']) \
                  .apply(lambda group: (group['size'] * group['price']).sum() / group['size'].sum())
3.23 ms ± 148 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
This produces the correct result, but as you can see above it takes super long (3.23 ms doesn't seem like much, but this is only 6 rows; on my real dataset it takes forever).
ticker  side
AAPL    B       10.126667
        S       10.140000
IBM     B       20.100000
        S       20.280000
dtype: float64
You can save some time by precomputing the product and getting rid of the apply.
df['scaled_size'] = df['size'] * df['price']
g = df.groupby(['ticker', 'side'])
g['scaled_size'].sum() / g['size'].sum()
ticker  side
AAPL    B       10.126667
        S       10.140000
IBM     B       20.100000
        S       20.280000
dtype: float64
100 loops, best of 3: 2.58 ms per loop
Sanity Check
df.groupby(['ticker', 'side']).apply(
    lambda group: (group['size'] * group['price']).sum() / group['size'].sum())
ticker  side
AAPL    B       10.126667
        S       10.140000
IBM     B       20.100000
        S       20.280000
dtype: float64
100 loops, best of 3: 5.02 ms per loop
Getting rid of apply appears to result in a 2X speedup on my machine.
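A variant along the same lines that avoids leaving a helper column on df (a sketch, same arithmetic):

# Sum the notional (size * price) and the size in one grouped pass, then divide.
tmp = (df.assign(notional=df['size'] * df['price'])
         .groupby(['ticker', 'side'])[['notional', 'size']].sum())
avgpx = tmp['notional'] / tmp['size']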
I have a dataframe Df which looks like:
date        XNGS         BBG     FX
16/11/2007  19.41464766  0.6819  19.41464766
19/11/2007  19.34059332  0.6819  19.34059332
20/11/2007  19.49080536  0.6739  19.49080536
21/11/2007  19.2399259   0.673   19.2399259
22/11/2007               0.6734
23/11/2007  19.2009794   0.674   19.2009794
I would like to remove any rows where XNGS is empty. In this example I would like to remove the row with the date index 22/11/2007. So the resulting Df would look like:
date        XNGS         BBG     FX
16/11/2007  19.41464766  0.6819  19.41464766
19/11/2007  19.34059332  0.6819  19.34059332
20/11/2007  19.49080536  0.6739  19.49080536
21/11/2007  19.2399259   0.673   19.2399259
23/11/2007  19.2009794   0.674   19.2009794
The dataframe changes a lot so the fix needs to be dynamic. I have tried:
Df = Df[Df.XNGS != ""]
and
Df.dropna(subset=["XNGS"])
but they don't work. What can I try next?
Safe Option
The canonical dropna after a replace:
import numpy as np

df.replace({'XNGS': {'': np.nan}}).dropna(subset=['XNGS'])
date XNGS BBG FX
0 16/11/2007 19.414648 0.6819 19.414648
1 19/11/2007 19.340593 0.6819 19.340593
2 20/11/2007 19.490805 0.6739 19.490805
3 21/11/2007 19.239926 0.6730 19.239926
5 23/11/2007 19.200979 0.6740 19.200979
Less Safe, but Cool
Empty strings evaluate to False
df[df.XNGS.values.astype(bool)]
date XNGS BBG FX
0 16/11/2007 19.414648 0.6819 19.414648
1 19/11/2007 19.340593 0.6819 19.340593
2 20/11/2007 19.490805 0.6739 19.490805
3 21/11/2007 19.239926 0.6730 19.239926
5 23/11/2007 19.200979 0.6740 19.200979
Timing
small data
%timeit (df.replace({'XNGS': {'': np.nan}}).dropna(subset=['XNGS']))
1000 loops, best of 3: 1.39 ms per loop
%timeit df[df.XNGS.values.astype(bool)]
1000 loops, best of 3: 192 µs per loop
large data
df = pd.concat([df] * 10000, ignore_index=True)
%timeit (df.replace({'XNGS': {'': np.nan}}).dropna(subset=['XNGS']))
100 loops, best of 3: 10.5 ms per loop
%timeit df[df.XNGS.values.astype(bool)]
100 loops, best of 3: 2.11 ms per loop
What about query?
Df.query('XNGS != ""', inplace=True)
or
Df = Df.query('XNGS != ""')
A long way of doing it is:
df["column name"].fillna(9999, inplace=True)
df = df[df["column name"]!= 9999]
I have a pandas dataframe holding more than a million records. One of its columns is datetime. A sample of my data looks like the following:
time,x,y,z
2015-05-01 10:00:00,111,222,333
2015-05-01 10:00:03,112,223,334
...
I need to efficiently fetch the records within a specific time period. The following naive way is very time consuming:
new_df = df[(df["time"] > start_time) & (df["time"] < end_time)]
I know that on a DBMS like MySQL, an index on the time field makes it efficient to fetch records for a given time period.
My questions are:
Q1. Does indexing in pandas, i.e. df.index = df.time, make the slicing faster?
Q2. If the answer to Q1 is 'No', what is the common, effective way to get records within a specific time period in pandas?
Let's create a dataframe with 1 million rows and time the performance. The index is a pandas DatetimeIndex.
df = pd.DataFrame(np.random.randn(1000000, 3),
                  columns=list('ABC'),
                  index=pd.DatetimeIndex(start='2015-1-1', freq='10s', periods=1000000))
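(In recent pandas the DatetimeIndex constructor no longer accepts start/freq/periods; the equivalent setup would be:)

# pd.date_range replaces the old DatetimeIndex(start=..., freq=..., periods=...) form.
df = pd.DataFrame(np.random.randn(1000000, 3),
                  columns=list('ABC'),
                  index=pd.date_range(start='2015-1-1', freq='10s', periods=1000000))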
Here are the results sorted from fastest to slowest (tested on the same machine with both v. 0.14.1 (don't ask...) and the most recent version 0.17.1):
%timeit df2 = df['2015-2-1':'2015-3-1']
1000 loops, best of 3: 459 µs per loop (v. 0.14.1)
1000 loops, best of 3: 664 µs per loop (v. 0.17.1)
%timeit df2 = df.ix['2015-2-1':'2015-3-1']
1000 loops, best of 3: 469 µs per loop (v. 0.14.1)
1000 loops, best of 3: 662 µs per loop (v. 0.17.1)
%timeit df2 = df.loc[(df.index >= '2015-2-1') & (df.index <= '2015-3-1'), :]
100 loops, best of 3: 8.86 ms per loop (v. 0.14.1)
100 loops, best of 3: 9.28 ms per loop (v. 0.17.1)
%timeit df2 = df.loc['2015-2-1':'2015-3-1', :]
1 loops, best of 3: 341 ms per loop (v. 0.14.1)
1000 loops, best of 3: 677 µs per loop (v. 0.17.1)
Here are the timings with the Datetime index as a column:
df.reset_index(inplace=True)
%timeit df2 = df.loc[(df['index'] >= '2015-2-1') & (df['index'] <= '2015-3-1')]
100 loops, best of 3: 12.6 ms per loop (v. 0.14.1)
100 loops, best of 3: 13 ms per loop (v. 0.17.1)
%timeit df2 = df.loc[(df['index'] >= '2015-2-1') & (df['index'] <= '2015-3-1'), :]
100 loops, best of 3: 12.8 ms per loop (v. 0.14.1)
100 loops, best of 3: 12.7 ms per loop (v. 0.17.1)
All of the above indexing techniques produce the same dataframe:
>>> df2.shape
(250560, 3)
It appears that either of the first two methods is best in this situation, and the fourth method works just as well with the latest version of Pandas.
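One more option, as a sketch of my own: if the datetime column is sorted, two binary searches avoid building the boolean mask entirely (this mirrors the boolean-mask variant above):

# searchsorted needs the 'index' column sorted ascending.
lo = df['index'].searchsorted(pd.Timestamp('2015-2-1'))
hi = df['index'].searchsorted(pd.Timestamp('2015-3-1'), side='right')
df2 = df.iloc[lo:hi]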
I've never dealt with a data set that large, but maybe you can try recasting the time column as a datetime index and then slicing directly. Something like this.
timedata.txt (extended from your example):
time,x,y,z
2015-05-01 10:00:00,111,222,333
2015-05-01 10:00:03,112,223,334
2015-05-01 10:00:05,112,223,335
2015-05-01 10:00:08,112,223,336
2015-05-01 10:00:13,112,223,337
2015-05-01 10:00:21,112,223,338
df = pd.read_csv('timedata.txt')
df.time = pd.to_datetime(df.time)
df = df.set_index('time')
print(df['2015-05-01 10:00:02':'2015-05-01 10:00:14'])
x y z
time
2015-05-01 10:00:03 112 223 334
2015-05-01 10:00:05 112 223 335
2015-05-01 10:00:08 112 223 336
2015-05-01 10:00:13 112 223 337
Note that in the example the times used for slicing are not present in the column, so this also works when you only know the time interval.
If your data has a fixed time period you can create a datetime index directly, which may provide more options. I didn't want to assume your time period was fixed, so I constructed this for the more general case.
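For completeness, a sketch of the fixed-period case (assuming, say, a strict 3-second cadence, which the sample above does not actually have):

# With a known fixed frequency the index can be built directly,
# skipping to_datetime parsing of the time column entirely.
df = pd.read_csv('timedata.txt')
df.index = pd.date_range(df.time.iloc[0], periods=len(df), freq='3s')
df = df.drop(columns='time')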