I have a dataframe Df which looks like:
date XNGS BBG FX
16/11/2007 19.41464766 0.6819 19.41464766
19/11/2007 19.34059332 0.6819 19.34059332
20/11/2007 19.49080536 0.6739 19.49080536
21/11/2007 19.2399259 0.673 19.2399259
22/11/2007 0.6734
23/11/2007 19.2009794 0.674 19.2009794
I would like to remove any rows where XNGS is empty. In this example I would like to remove the row with the date index 22/11/2007. So the resulting Df would look like:
date XNGS BBG FX
16/11/2007 19.41464766 0.6819 19.41464766
19/11/2007 19.34059332 0.6819 19.34059332
20/11/2007 19.49080536 0.6739 19.49080536
21/11/2007 19.2399259 0.673 19.2399259
23/11/2007 19.2009794 0.674 19.2009794
The dataframe changes a lot so the fix needs to be dynamic. I have tried:
Df = Df[Df.XNGS != ""]
and
Df.dropna(subset=["XNGS"])
but they don't work. What can I try next?
Safe Option
The canonical approach is to replace the empty strings with NaN and then use dropna:
import numpy as np

df.replace({'XNGS': {'': np.nan}}).dropna(subset=['XNGS'])
date XNGS BBG FX
0 16/11/2007 19.414648 0.6819 19.414648
1 19/11/2007 19.340593 0.6819 19.340593
2 20/11/2007 19.490805 0.6739 19.490805
3 21/11/2007 19.239926 0.6730 19.239926
5 23/11/2007 19.200979 0.6740 19.200979
Less Safe, but Cool
Empty strings evaluate to False, so casting the column's values to bool gives a mask that drops them:
df[df.XNGS.values.astype(bool)]
date XNGS BBG FX
0 16/11/2007 19.414648 0.6819 19.414648
1 19/11/2007 19.340593 0.6819 19.340593
2 20/11/2007 19.490805 0.6739 19.490805
3 21/11/2007 19.239926 0.6730 19.239926
5 23/11/2007 19.200979 0.6740 19.200979
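One caveat, and the reason this is the less safe option: real NaN values are truthy, so they survive the boolean cast and only empty strings get dropped. A minimal sketch of that behaviour:
import numpy as np

# bool('') is False, but bool(np.nan) is True, so NaN rows would be kept
np.array(['', '19.2', np.nan], dtype=object).astype(bool)
# array([False,  True,  True])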
Timing
small data
%timeit (df.replace({'XNGS': {'': np.nan}}).dropna(subset=['XNGS']))
1000 loops, best of 3: 1.39 ms per loop
%timeit df[df.XNGS.values.astype(bool)]
1000 loops, best of 3: 192 µs per loop
large data
df = pd.concat([df] * 10000, ignore_index=True)
%timeit (df.replace({'XNGS': {'': np.nan}}).dropna(subset=['XNGS']))
100 loops, best of 3: 10.5 ms per loop
%timeit df[df.XNGS.values.astype(bool)]
100 loops, best of 3: 2.11 ms per loop
What about query?
Df.query('XNGS != ""', inplace=True)
or
Df = Df.query('XNGS != ""')
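If the blanks turn out to be NaN rather than empty strings, a similar query still works, because NaN never compares equal to itself (a small sketch, not specific to your data):
Df = Df.query('XNGS == XNGS')  # keeps only rows where XNGS is not NaN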
A long way of doing it (this only works if the missing values are NaN rather than empty strings) is:
df["column name"].fillna(9999, inplace=True)
df = df[df["column name"]!= 9999]
Related
Here's the thing, I have the dataset below where date is the index:
date value
2020-01-01 100
2020-02-01 140
2020-03-01 156
2020-04-01 161
2020-05-01 170
.
.
.
And I want to transform it in this other dataset:
value_t0 value_t1 value_t2 value_t3 value_t4 ...
100 NaN NaN NaN NaN ...
140 100 NaN NaN NaN ...
156 140 100 NaN NaN ...
161 156 140 100 NaN ...
170 161 156 140 100 ...
First I thought about using pandas.pivot_table, but that would just give a different layout grouped by some column, which is not exactly what I want. Later I thought about using pandasql and applying a 'case when', but that wouldn't work because I would have to type dozens of lines of code. So I'm stuck here.
try this:
new_df = pd.DataFrame({f"value_t{i}": df['value'].shift(i) for i in range(len(df))})
The Series .shift(n) method can get you a single column of your desired output by shifting everything down n rows and filling in NaNs above. So we build a new dataframe by feeding it a dictionary of the form {column name: column data, ...}, using a dictionary comprehension to iterate through your original dataframe.
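For illustration, here is what a single shift looks like on the sample values from the question (a minimal sketch; note the integer column becomes float because of the introduced NaNs):
import pandas as pd

s = pd.Series([100, 140, 156, 161, 170])
s.shift(2)
# 0      NaN
# 1      NaN
# 2    100.0
# 3    140.0
# 4    156.0
# dtype: float64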
I think the best approach is to use numpy:
import numpy as np
import pandas as pd

# build an N x N lower-triangular matrix of the repeated values, NaN above the diagonal
values = np.asarray(df['value'].astype(float))
new_values = np.tril(np.repeat([values], values.shape[0], axis=0).T)
new_values[np.triu_indices(new_values.shape[0], 1)] = np.nan
new_df = pd.DataFrame(new_values).add_prefix('value_t')
Times for 5000 rows
%%timeit
values = np.asarray(df['value'].astype(float))
new_values = np.tril(np.repeat([values], values.shape[0], axis=0).T)
new_values[np.triu_indices(new_values.shape[0],1)] = np.nan
new_df = pd.DataFrame(new_values).add_prefix('value_t')
556 ms ± 35.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
new_df = pd.DataFrame({f"value_t{i}": df['value'].shift(i) for i in range(len(df))})
1.31 s ± 36.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Time without add_prefix
%%timeit
values = np.asarray(df['value'].astype(float))
new_values = np.tril(np.repeat([values], values.shape[0], axis=0).T)
new_values[np.triu_indices(new_values.shape[0],1)] = np.nan
new_df = pd.DataFrame(new_values)
357 ms ± 8.09 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
When I call df.groupby([...]).apply(lambda x: ...) the performance is horrible. Is there a faster / more direct way to do this simple query?
To demonstrate my point, here is some code to set up the DataFrame:
import pandas as pd
df = pd.DataFrame(data=
{'ticker': ['AAPL','AAPL','AAPL','IBM','IBM','IBM'],
'side': ['B','B','S','S','S','B'],
'size': [100, 200, 300, 400, 100, 200],
'price': [10.12, 10.13, 10.14, 20.3, 20.2, 20.1]})
price side size ticker
0 10.12 B 100 AAPL
1 10.13 B 200 AAPL
2 10.14 S 300 AAPL
3 20.30 S 400 IBM
4 20.20 S 100 IBM
5 20.10 B 200 IBM
Now here is the part that is extremely slow that I need to speed up:
%timeit avgpx = df.groupby(['ticker','side']) \
.apply(lambda group: (group['size'] * group['price']).sum() / group['size'].sum())
3.23 ms ± 148 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
This produces the correct result, but as you can see above it takes very long (3.23 ms doesn't seem like much, but this is only 6 rows... when I use this on a real dataset it takes forever).
ticker side
AAPL B 10.126667
S 10.140000
IBM B 20.100000
S 20.280000
dtype: float64
You can save some time by precomputing the product and getting rid of the apply.
df['scaled_size'] = df['size'] * df['price']
g = df.groupby(['ticker', 'side'])
g['scaled_size'].sum() / g['size'].sum()
ticker side
AAPL B 10.126667
S 10.140000
IBM B 20.100000
S 20.280000
dtype: float64
100 loops, best of 3: 2.58 ms per loop
Sanity Check
df.groupby(['ticker','side']).apply(
lambda group: (group['size'] * group['price']).sum() / group['size'].sum())
ticker side
AAPL B 10.126667
S 10.140000
IBM B 20.100000
S 20.280000
dtype: float64
100 loops, best of 3: 5.02 ms per loop
Getting rid of apply appears to result in a 2X speedup on my machine.
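If you would rather not add a helper column to the original DataFrame, the same precompute-then-sum idea works on a temporary copy via assign (a sketch of the same technique; the column name notional is just an illustrative choice):
g = df.assign(notional=df['size'] * df['price']).groupby(['ticker', 'side'])
avgpx = g['notional'].sum() / g['size'].sum()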
When reading a large HDF file with pandas.read_hdf() I get extremely slow read times. My HDF file has 50 million rows, 3 columns with integers and 2 with strings. Writing it using to_hdf() with table format and indexing took almost 10 minutes. While this is also slow, I am not too concerned, as read speed is more important.
I have tried saving in fixed/table format, with and without compression, but the read time ranges between 2 and 5 minutes. By comparison, read_csv() on the same data takes 4 minutes.
I have also tried reading the HDF file using pytables directly. This is much faster, at 6 seconds, and that is the speed I would like to see:
h5file = tables.open_file("data.h5", "r")
table = h5file.root.data.table.read()
I noticed that all the speed comparisons in the documentation use only numeric data, and running those myself I got similar performance.
Is there something I can do to optimise read performance?
Edit
Here is a sample of the data
col_A col_B col_C col_D col_E
30649671 1159660800 10217383 0 10596000 LACKEY
26198715 1249084800 0921720 0 0 KEY CLIFTON
19251910 752112000 0827092 104 243000 WEMPLE
47636877 1464739200 06247715 0 0 FLOYD
14121495 1233446400 05133815 0 988000 OGU ALLYN CH 9
41171050 1314835200 7C140009 0 39000 DEBERRY A
45865543 1459468800 0314892 76 254000 SABRINA
13387355 970358400 04140585 19 6956000 LA PERLA
4186815 849398400 02039719 0 19208000 NPU UNIONSPIELHAGAN1
32666568 733622400 10072006 0 1074000 BROWN
And info on the dataframe:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 52046850 entries, 0 to 52046849
Data columns (total 5 columns):
col_A int64
col_B object
col_C int64
col_D int64
col_E object
dtypes: int64(3), object(2)
memory usage: 1.9+ GB
Here is a small demo:
Generating sample DF (1M rows):
import numpy as np
import pandas as pd

N = 10**6
df = pd.DataFrame({
    'n1': np.random.randint(10**6, size=N),
    'n2': np.random.randint(10**6, size=N),
    'n3': np.random.randint(10**6, size=N),
    's1': pd.util.testing.rands_array(10, size=N),
    's2': pd.util.testing.rands_array(40, size=N),
})
Let's write it to disk in CSV, HDF5 (fixed, table, and table + data_columns=True) and Feather formats:
df.to_csv(r'c:/tmp/test.csv', index=False)
df.to_hdf(r'c:/tmp/test_fix.h5', 'a')
df.to_hdf(r'c:/tmp/test_tab.h5', 'a', format='t')
df.to_hdf(r'c:/tmp/test_tab_idx.h5', 'a', format='t', data_columns=True)
import feather
feather.write_dataframe(df, 'c:/tmp/test.feather')
Reading:
In [2]: %timeit pd.read_csv(r'c:/tmp/test.csv')
1 loop, best of 3: 4.48 s per loop
In [3]: %timeit pd.read_hdf(r'c:/tmp/test_fix.h5','a')
1 loop, best of 3: 1.24 s per loop
In [4]: %timeit pd.read_hdf(r'c:/tmp/test_tab.h5','a')
1 loop, best of 3: 5.65 s per loop
In [5]: %timeit pd.read_hdf(r'c:/tmp/test_tab_idx.h5','a')
1 loop, best of 3: 5.6 s per loop
In [6]: %timeit feather.read_dataframe(r'c:/tmp/test.feather')
1 loop, best of 3: 589 ms per loop
Conditional reading: let's select only those rows where n2 <= 100000:
In [7]: %timeit pd.read_hdf(r'c:/tmp/test_tab_idx.h5','a', where="n2 <= 100000")
1 loop, best of 3: 1.18 s per loop
The less data we need to select (after filtering), the faster it is:
In [8]: %timeit pd.read_hdf(r'c:/tmp/test_tab_idx.h5','a', where="n2 <= 100000 and n1 > 500000")
1 loop, best of 3: 763 ms per loop
In [10]: %timeit pd.read_hdf(r'c:/tmp/test_tab_idx.h5','a', where="n2 <= 100000 and n1 > 500000 and n3 < 50000")
1 loop, best of 3: 379 ms per loop
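Note that data_columns doesn't have to be True for every column; passing a list limits the on-disk indexing to the columns you actually filter on, which keeps the write cheaper while still allowing where queries on those columns. A sketch (the file name here is hypothetical):
df.to_hdf(r'c:/tmp/test_tab_n1n2.h5', 'a', format='t', data_columns=['n1', 'n2'])
pd.read_hdf(r'c:/tmp/test_tab_n1n2.h5', 'a', where='n2 <= 100000')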
UPDATE: for Pandas versions 0.20.0+ we can write and read directly to/from the Feather format (thanks #jezrael for the hint):
In [3]: df.to_feather(r'c:/tmp/test2.feather')
In [4]: %timeit pd.read_feather(r'c:/tmp/test2.feather')
1 loop, best of 3: 583 ms per loop
Example of generated DF:
In [13]: df
Out[13]:
n1 n2 n3 s1 s2
0 719458 808047 792611 Fjv4CoRv2b 2aWQTkutPlKkO38fRQh2tdh1BrnEFavmIsDZK17V
1 526092 950709 804869 dfG12EpzVI YVZzhMi9sfazZEW9e2TV7QIvldYj2RPHw0TXxS2z
2 109107 801344 266732 aoyBuHTL9I ui0PKJO8cQJwcvmMThb08agWL1UyRumYgB7jjmcw
3 873626 814409 895382 qQQms5pTGq zvf4HTaKCISrdPK98ROtqPqpsG4WhSdEgbKNHy05
4 212776 596713 924623 3YXa4PViAn 7Y94ykHIHIEnjKvGphYfAWSINRZtJ99fCPiMrfzl
5 375323 401029 973262 j6QQwYzfsK PNYOM2GpHdhrz9NCCifRsn8gIZkLHecjlk82o44Y
6 232655 937230 40883 NsI5Y78aLT qiKvXcAdPVbhWbXnyD3uqIwzS7ZsCgssm9kHAETb
7 69010 438280 564194 N73tQaZjey ttj1IHtjPyssyADMYiNScflBjN4SFv5bk3tbz93o
8 988081 8992 968871 eb9lc7D22T sb3dt1Ndc8CUHyvsFJgWRrQg4ula7KJ76KrSSqGH
9 127155 66042 881861 tHSBB3RsNH ZpZt5sxAU3zfiPniSzuJYrwtrytDvqJ1WflJ4vh3
... ... ... ... ... ...
999990 805220 21746 355944 IMCMWuf97L bj7tSrgudA5wLvWkWVQyNVamSGmFGOeQlIUoKXK3
999991 232596 293850 741881 JD0SVS5uob kWeP8DEw19rwxVN3XBBcskibMRGxfoToNO9RDeCT
999992 532752 733958 222003 9X4PopnltN dKhsdKFK1EfAATBFsB5hjKZzQWERxzxGEQZWAvSe
999993 308623 717897 703895 Fg0nuq63hA kHzRecZoaG5tAnLbtlq1hqtfd2l5oEMFbJp4NjhC
999994 841670 528518 70745 vKQDiAzZNf M5wdoUNfkdKX2VKQEArvBLYl5lnTNShjDLwnb8VE
999995 986988 599807 901853 r8iHjo39NH 72CfzCycAGoYMocbw3EbUbrV4LRowFjSDoDeYfT5
999996 384064 429184 203230 EJy0mTAmdQ 1jfUQCj2SLIktVqIRHfYQW2QYfpvhcWCbRLO5wqL
999997 967270 565677 146418 KWp2nH1MbM hzhn880cuEpjFhd5bd7vpgsjjRNgaViANW9FHwrf
999998 130864 863893 5614 L28QGa22f1 zfg8mBidk8NTa3LKO4rg31Z6K4ljK50q5tHHq8Fh
999999 528532 276698 553870 0XRJwqBAWX 0EzNcDkGUFklcbKELtcr36zPCMu9lSaIDcmm0kUX
[1000000 rows x 5 columns]
I have a pandas dataframe holding more than a million records. One of its columns is a datetime. A sample of my data looks like the following:
time,x,y,z
2015-05-01 10:00:00,111,222,333
2015-05-01 10:00:03,112,223,334
...
I need to efficiently get the records within a specific period. The following naive way is very time consuming:
new_df = df[(df["time"] > start_time) & (df["time"] < end_time)]
I know that on DBMS like MySQL the indexing by the time field is effective for getting records by specifying the time period.
My questions are:
Does indexing in pandas, such as df.index = df.time, make the slicing process faster?
If the answer to Q1 is 'No', what is the common, efficient way to get records within a specific time period in pandas?
Let's create a dataframe with 1 million rows and time the performance. The index is a Pandas Timestamp.
df = pd.DataFrame(np.random.randn(1000000, 3),
columns=list('ABC'),
index=pd.DatetimeIndex(start='2015-1-1', freq='10s', periods=1000000))
Here are the results sorted from fastest to slowest (tested on the same machine with both v. 0.14.1 (don't ask...) and the most recent version 0.17.1):
%timeit df2 = df['2015-2-1':'2015-3-1']
1000 loops, best of 3: 459 µs per loop (v. 0.14.1)
1000 loops, best of 3: 664 µs per loop (v. 0.17.1)
%timeit df2 = df.ix['2015-2-1':'2015-3-1']
1000 loops, best of 3: 469 µs per loop (v. 0.14.1)
1000 loops, best of 3: 662 µs per loop (v. 0.17.1)
%timeit df2 = df.loc[(df.index >= '2015-2-1') & (df.index <= '2015-3-1'), :]
100 loops, best of 3: 8.86 ms per loop (v. 0.14.1)
100 loops, best of 3: 9.28 ms per loop (v. 0.17.1)
%timeit df2 = df.loc['2015-2-1':'2015-3-1', :]
1 loops, best of 3: 341 ms per loop (v. 0.14.1)
1000 loops, best of 3: 677 µs per loop (v. 0.17.1)
Here are the timings with the Datetime index as a column:
df.reset_index(inplace=True)
%timeit df2 = df.loc[(df['index'] >= '2015-2-1') & (df['index'] <= '2015-3-1')]
100 loops, best of 3: 12.6 ms per loop (v. 0.14.1)
100 loops, best of 3: 13 ms per loop (v. 0.17.1)
%timeit df2 = df.loc[(df['index'] >= '2015-2-1') & (df['index'] <= '2015-3-1'), :]
100 loops, best of 3: 12.8 ms per loop (v. 0.14.1)
100 loops, best of 3: 12.7 ms per loop (v. 0.17.1)
All of the above indexing techniques produce the same dataframe:
>>> df2.shape
(250560, 3)
It appears that either of the first two methods is best in this situation, and the fourth method works just as well using the latest version of Pandas.
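One more thing worth checking before relying on partial-string slices like df['2015-2-1':'2015-3-1']: they work best on a sorted DatetimeIndex. If your real index is not already monotonic, sorting it once up front is usually the cheapest fix (a small sketch):
df = df.sort_index()             # one-off cost
df2 = df['2015-2-1':'2015-3-1']  # fast slice on the sorted DatetimeIndex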
I've never dealt with a dataset that large, but maybe you can try recasting the time column as a DatetimeIndex and then slicing it directly. Something like this:
timedata.txt (extended from your example):
time,x,y,z
2015-05-01 10:00:00,111,222,333
2015-05-01 10:00:03,112,223,334
2015-05-01 10:00:05,112,223,335
2015-05-01 10:00:08,112,223,336
2015-05-01 10:00:13,112,223,337
2015-05-01 10:00:21,112,223,338
df = pd.read_csv('timedata.txt')
df.time = pd.to_datetime(df.time)
df = df.set_index('time')
print(df['2015-05-01 10:00:02':'2015-05-01 10:00:14'])
x y z
time
2015-05-01 10:00:03 112 223 334
2015-05-01 10:00:05 112 223 335
2015-05-01 10:00:08 112 223 336
2015-05-01 10:00:13 112 223 337
Note that in the example the times used for slicing are not in the column, so this will work for the case where you only know the time interval.
If your data has a fixed time period you can create a fixed-frequency datetime index, which may provide more options (a sketch is shown below). I didn't want to assume your time period was fixed, so I constructed the example above for the more general case.
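For instance, one way to build such an index is pd.date_range. This is a sketch assuming a hypothetical fixed 3-second sampling interval; your actual sample has gaps, so it is illustrative only:
# replace the index with a fixed-frequency DatetimeIndex (assumes evenly spaced samples)
df.index = pd.date_range(start='2015-05-01 10:00:00', periods=len(df), freq='3S')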
I have test_df with the columns 'MonthAbbr' and 'PromoInterval'. Example data:
1017174 Jun Mar,Jun,Sept,Dec
1017175 Mar Mar,Jun,Sept,Dec
1017176 Feb Mar,Jun,Sept,Dec
1017177 Feb Feb,May,Aug,Nov
1017178 Jan Feb,May,Aug,Nov
1017179 Jan Mar,Jun,Sept,Dec
1017180 Jan Mar,Jun,Sept,Dec
I want to add an indicator column showing whether the month is in the promo interval: it should be 1 if MonthAbbr is contained in PromoInterval for the current row, and 0 otherwise.
Is there a more efficient way than this?
for ind in test_df.index:
    test_df.set_value(ind, 'IsPromoInThisMonth',
                      test_df.MonthAbbr.astype(str)[ind] in (test_df.PromoInterval.astype(str)[ind]))
This is a bit faster:
%%timeit
test_df['IsPromoInThisMonth'] = [x in y for x, y in zip(test_df['MonthAbbr'],
test_df['PromoInterval'])]
1000 loops, best of 3: 317 µs per loop
Than your approach:
%%timeit
for ind in test_df.index:
test_df.set_value(ind ,'IsPromoInThisMonth',
test_df.MonthAbbr.astype(str)[ind] in (test_df.PromoInterval.astype(str)[ind]))
1000 loops, best of 3: 1.44 ms per loop
UPDATE
Using a function with apply is slower than the list comprehension:
%%timeit
test_df['IsPromoInThisMonth'] = test_df.apply(lambda x: x[0] in x[1], axis=1)
1000 loops, best of 3: 804 µs per loop
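The list comprehension produces booleans; if you need the 1/0 indicator exactly as described in the question, you can cast inside the comprehension (same technique, just wrapped in int):
test_df['IsPromoInThisMonth'] = [int(x in y) for x, y in zip(test_df['MonthAbbr'],
                                                             test_df['PromoInterval'])]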