Series string replace with contents from another series (without using apply) - python

For the sake of optimization, I want to know if it's possible to do a faster string replace in one column, with the contents of the corresponding row from another column, without using apply.
Here is my dataframe:
data_dict = {'root': [r'c:/windows/'], 'file': [r'c:/windows/system32/calc.exe']}
df = pd.DataFrame.from_dict(data_dict)
"""
Result:
file root
0 c:/windows/system32/calc.exe c:/windows/
"""
Using the following apply, I can get what I'm after:
df['trunc'] = df.apply(lambda x: x['file'].replace(x['root'], ''), axis=1)
"""
Result:
file root trunc
0 c:/windows/system32/calc.exe c:/windows/ system32/calc.exe
"""
However, in the interest of making more efficient use of code, I'm wondering if there is a better way. I've tried the code below, but it doesn't seem to work the way I expected it to.
df['trunc'] = df['file'].replace(df['root'], '')
"""
Result (note that the root was NOT replaced with a blank string in the 'trunc' column):
file root trunc
0 c:/windows/system32/calc.exe c:/windows/ c:/windows/system32/calc.exe
"""
Are there any more efficient alternatives? Thanks!
EDIT - With timings from a couple of the examples below
# Expand out the data set to 1000 entries
data_dict = {'root': [r'c:/windows/']*1000, 'file': [r'c:/windows/system32/calc.exe']*1000}
df0 = pd.DataFrame.from_dict(data_dict)
Using Apply
%%timeit -n 100
df0['trunk0'] = df0.apply(lambda x: x['file'].replace(x['root'], ''), axis=1)
100 loops, best of 3: 13.9 ms per loop
Using Replace (thanks Gayatri)
%%timeit -n 100
df0['trunk1'] = df0['file'].replace(df0['root'], '', regex=True)
100 loops, best of 3: 365 ms per loop
Using Zip (thanks 0p3n5ourcE)
%%timeit -n 100
df0['trunk2'] = [file_val.replace(root_val, '') for file_val, root_val in zip(df0.file, df0.root)]
100 loops, best of 3: 600 µs per loop
Overall, looks like zip is the best option here. Thanks for all the input!

Using a similar approach as in the link:
df['trunc'] = [file_val.replace(root_val, '') for file_val, root_val in zip(df.file, df.root)]
Output:
file root trunc
0 c:/windows/system32/calc.exe c:/windows/ system32/calc.exe
Checking with timeit:
%%timeit
df['trunc'] = df.apply(lambda x: x['file'].replace(x['root'], ''), axis=1)
Result:
1000 loops, best of 3: 469 µs per loop
Using zip:
%%timeit
df['trunc'] = [file_val.replace(root_val, '') for file_val, root_val in zip(df.file, df.root)]
Result:
1000 loops, best of 3: 322 µs per loop
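As a side note, str.replace substitutes the root wherever it appears in the string, not just at the start. If you are on Python 3.9+, str.removeprefix only strips a leading match, which may be closer to the intent here; a minimal sketch under that assumption:
df['trunc'] = [file_val.removeprefix(root_val) for file_val, root_val in zip(df.file, df.root)]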

Try this:
df['file'] = df['file'].astype(str)
df['root'] = df['root'].astype(str)
df['file'].replace(df['root'],'', regex=True)
Output:
0 system32/calc.exe
Name: file, dtype: object
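Note that replace(..., regex=True) treats each root as a regular expression. The sample root 'c:/windows/' contains no special characters, but if your roots can (e.g. backslashes in Windows-style paths), it may be safer to escape them first; a minimal sketch keeping the same call shape:
import re
df['file'].replace(df['root'].map(re.escape), '', regex=True)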

Related

Filter rows based on multiple columns entries

I have a dataframe which contains millions of entries and looks something like this:
Chr   Start      Alt
1     21651521   A
1     41681521   T
1     41681521   T
...   ...        ...
X     423565     T
I am currently trying to count the number of rows that match several conditions at the same time, i.e. Chr==1, Start==41681521 and Alt==T.
Right now I am using this syntax, which works fine but seems unpythonic and is also rather slow, I think.
num_occurrence = sum((df["Chr"] == chrom) &
                     (df["Start"] == int(position)) &
                     (df["Alt"] == allele))
Does anyone have an approach which is more suitable than mine?
Any help is much appreciated!
Cheers!
Alternative 1: pd.DataFrame.query()
You could work with query (see also the illustrative examples here):
expr = "Chr=={chr} & Start=={pos} & Alt=='{alt}'"
ret = df.query(expr.format(chr=chrom, pos=int(position), alt=allele))
In my experiments, this led already to a considerable speedup.
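As a side note, query() can also reference local Python variables directly via the @ prefix, which avoids formatting the expression string by hand; a minimal sketch with the same variables:
pos = int(position)
ret = df.query("Chr == @chrom & Start == @pos & Alt == @allele")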
Optimizing this further requires additional information about the data types involved. There are several things you could try:
Alternative 2: Query sorted data
If you can afford to sort your DataFrame prior to querying, you can use pd.Series.searchsorted(). Here is a possible approach:
def query_sorted(df, chrom, position, allele):
    """
    Returns index of the matches.
    """
    assert df["Start"].is_monotonic_increasing
    i_min, i_max = df["Start"].searchsorted([position, position+1])
    df = df.iloc[i_min:i_max]
    return df[(df["Chr"] == chrom) & (df["Alt"] == allele)].index

# Usage: first sort df by column "Start", then query:
df = df.sort_values("Start")
ret_index = query_sorted(df, chrom, position, allele)
print(len(ret_index))
Alternative 3: Use hashes
Another idea would be to use hashes. Again, this requires some calculations up front, but it speeds up the query considerably. Here is an example based on pd.util.hash_pandas_object():
def query_hash(df, chrom, position, allele):
    """
    Returns a view on df
    """
    assert "hash" in df
    dummy = pd.DataFrame([[chrom, position, allele]])
    query_hash = pd.util.hash_pandas_object(dummy, index=False).squeeze()
    return df[df["hash"] == query_hash].index

# Usage: first compute hashes over the columns of interest, then query
df["hash"] = pd.util.hash_pandas_object(df[["Chr", "Start", "Alt"]],
                                        index=False)
ret_index = query_hash(df, chrom, position, allele)
print(len(ret_index))
Alternative 4: Use a multi-index
Pandas also operates with hashes when accessing rows via the index. Thus, instead of calculating hashes explicitly, as in the previous alternative, one could simply set the index of the DataFrame prior to querying. (Since setting all columns as index would result in an empty DataFrame, I first create a dummy column. For a real DataFrame with additional columns this will probably not be necessary.)
df["dummy"] = None
df = df.set_index(["Chr", "Start", "Alt"])
df = df.sort_index() # Improves performance
print(len(df.loc[(chrom, position, allele)]))
# Interestingly, chaining .loc[] is about twice as fast
print(len(df.loc[chrom].loc[position].loc[allele]))
Note that using an index where one index value maps to many records is not always a good idea. Also, this approach is slower than alternative 3, indicating that Pandas does some extra work here.
There are certainly many more ways to improve this, though the alternative approaches will depend on your specific needs.
Results
I tested with n=10M samples on a MacBook Pro (Mid 2015), running Python 3.8, Pandas 1.2.4 and IPython 7.24.1. Note that the performance evaluation depends on the problem size. The relative assessment of the methods therefore will change for different problem sizes.
# original (sum(s)): 1642.0 ms ± 19.1 ms
# original (s.sum()): 639.0 ms ± 21.9 ms
# query(): 175.0 ms ± 1.1 ms
# query_sorted(): 17.5 ms ± 60.4 µs
# query-hash(): 10.6 ms ± 62.5 µs
# multi-index: 71.5 ms ± 0.7 ms
# multi-index (seq.): 36.5 ms ± 0.6 ms
Implementation
This is how I constructed the data and compared the different approaches.
import numpy as np
import pandas as pd
# Create test data
n = int(10*1e6)
df = pd.DataFrame({"Chr": np.random.randint(1,23+1,n),
"Start": np.random.randint(100,999, n),
"Alt": np.random.choice(list("ACTG"), n)})
# Query point
chrom, position, allele = 1, 142, "A"
# Measure performance in IPython
print("original (sum(s)):")
%timeit sum((df["Chr"] == chrom) & \
(df["Start"] == int(position)) & \
(df["Alt"] == allele))
print("original (s.sum()):")
%timeit ((df["Chr"] == chrom) & \
(df["Start"] == int(position)) & \
(df["Alt"] == allele)).sum()
print("query():")
%timeit len(df.query(expr.format(chr=chrom, \
pos=position, \
alt=allele)))
print("query_sorted():")
df_sorted = df.sort_values("Start")
%timeit query_sorted(df_sorted, chrom, position, allele)
print("query-hash():")
df_hash = df.copy()
df_hash["hash"] = pd.util.hash_pandas_object(df_hash[["Chr", "Start", "Alt"]],
index=False)
%timeit query_hash(df_hash, chrom, position, allele)
print("multi-index:")
df_multi = df.copy()
df_multi["dummy"] = None
df_multi = df_multi.set_index(["Chr", "Start", "Alt"]).sort_index()
%timeit df_multi.loc[(chrom, position, allele)]
print("multi-index (seq.):")
%timeit len(df_multi.loc[chrom].loc[position].loc[allele])
Use DataFrame.all + Series.sum:
res = (df[["Chr", "Start", "Alt"]] == [chrom, int(position), allele]).all(1).sum()
For example:
import pandas as pd
# toy data
df = pd.DataFrame(data=[[1, 21651521, "A"], [1, 41681521, "T"], [1, 41681521, "T"]], columns=["Chr", "Start", "Alt"])
chrom, position, allele = 1, "21651521", "A"
res = (df[["Chr", "Start", "Alt"]] == [chrom, int(position), allele]).all(1).sum()
print(res)
Output
1
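If counts are needed for many (Chr, Start, Alt) combinations, it may pay off to precompute them all once; a minimal sketch, assuming pandas 1.1+ for DataFrame.value_counts:
counts = df.value_counts(["Chr", "Start", "Alt"])  # one pass over the data
key = (chrom, int(position), allele)
num_occurrence = counts[key] if key in counts.index else 0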


Select the max row per group - pandas performance issue

I'm selecting one max row per group and I'm using groupby/agg to return index values and select the rows using loc.
For example, to group by "Id" and then select the row with the highest "delta" value:
selected_idx = df.groupby("Id").apply(lambda df: df.delta.argmax())
selected_rows = df.loc[selected_idx, :]
However, it's so slow this way. Actually, my i7/16G RAM laptop hangs when I'm using this query on 13 million rows.
I have two questions for experts:
How can I make this query run fast in pandas? What am I doing wrong?
Why is this operation so expensive?
[Update]
Thank you so much for @unutbu's analysis!
sort_drop it is! On my i7/32G RAM machine, groupby+idxmax hung for nearly 14 hours (it never returned a thing), whereas sort_drop handled it in LESS THAN A MINUTE!
I still need to look at how pandas implements each method, but the problem is solved for now! I love StackOverflow.
The fastest option depends not only on length of the DataFrame (in this case, around 13M rows) but also on the number of groups. Below are perfplots which compare a number of ways of finding the maximum in each group:
If there are only a few (large) groups, using_idxmax may be the fastest option:
If there are many (small) groups and the DataFrame is not too large, using_sort_drop may be the fastest option:
Keep in mind, however, that while using_sort_drop, using_sort and using_rank start out looking very fast, as N = len(df) increases their speed advantage over the other options disappears quickly. For large enough N, using_idxmax becomes the fastest option, even if there are many groups.
using_sort_drop, using_sort and using_rank sort the DataFrame (or the groups within the DataFrame). Sorting is O(N * log(N)) on average, while the other methods use O(N) operations. This is why a method like using_idxmax beats using_sort_drop for very large DataFrames.
Be aware that benchmark results may vary for a number of reasons, including machine specs, OS, and software versions. So it is important to run benchmarks on your own machine, and with test data tailored to your situation.
Based on the perfplots above, using_sort_drop may be an option worth considering for your DataFrame of 13M rows, especially if it has many (small) groups. Otherwise, I would suspect using_idxmax to be the fastest option -- but again, it's important that you check benchmarks on your machine.
Here is the setup I used to make the perfplots:
import numpy as np
import pandas as pd
import perfplot

def make_df(N):
    # lots of small groups
    df = pd.DataFrame(np.random.randint(N//10+1, size=(N, 2)), columns=['Id','delta'])
    # few large groups
    # df = pd.DataFrame(np.random.randint(10, size=(N, 2)), columns=['Id','delta'])
    return df

def using_idxmax(df):
    return df.loc[df.groupby("Id")['delta'].idxmax()]

def max_mask(s):
    i = np.asarray(s).argmax()
    result = [False]*len(s)
    result[i] = True
    return result

def using_custom_mask(df):
    mask = df.groupby("Id")['delta'].transform(max_mask)
    return df.loc[mask]

def using_isin(df):
    idx = df.groupby("Id")['delta'].idxmax()
    mask = df.index.isin(idx)
    return df.loc[mask]

def using_sort(df):
    df = df.sort_values(by=['delta'], ascending=False, kind='mergesort')
    return df.groupby('Id', as_index=False).first()

def using_rank(df):
    mask = (df.groupby('Id')['delta'].rank(method='first', ascending=False) == 1)
    return df.loc[mask]

def using_sort_drop(df):
    # Thanks to jezrael
    # https://stackoverflow.com/questions/50381064/select-the-max-row-per-group-pandas-performance-issue/50389889?noredirect=1#comment87795818_50389889
    return df.sort_values(by=['delta'], ascending=False, kind='mergesort').drop_duplicates('Id')

def using_apply(df):
    selected_idx = df.groupby("Id").apply(lambda df: df.delta.argmax())
    return df.loc[selected_idx]

def check(df1, df2):
    df1 = df1.sort_values(by=['Id','delta'], kind='mergesort').reset_index(drop=True)
    df2 = df2.sort_values(by=['Id','delta'], kind='mergesort').reset_index(drop=True)
    return df1.equals(df2)

perfplot.show(
    setup=make_df,
    kernels=[using_idxmax, using_custom_mask, using_isin, using_sort,
             using_rank, using_apply, using_sort_drop],
    n_range=[2**k for k in range(2, 20)],
    logx=True,
    logy=True,
    xlabel='len(df)',
    repeat=75,
    equality_check=check)
Another way to benchmark is to use IPython %timeit:
In [55]: df = make_df(2**20)
In [56]: %timeit using_sort_drop(df)
1 loop, best of 3: 403 ms per loop
In [57]: %timeit using_rank(df)
1 loop, best of 3: 1.04 s per loop
In [58]: %timeit using_idxmax(df)
1 loop, best of 3: 15.8 s per loop
Using Numba's jit
from numba import njit
import numpy as np

@njit
def nidxmax(bins, k, weights):
    out = np.zeros(k, np.int64)
    trk = np.zeros(k)
    for i, w in enumerate(weights - (weights.min() - 1)):
        b = bins[i]
        if w > trk[b]:
            trk[b] = w
            out[b] = i
    return np.sort(out)

def with_numba_idxmax(df):
    f, u = pd.factorize(df.Id)
    return df.iloc[nidxmax(f, len(u), df.delta.values)]
Borrowing from @unutbu
def make_df(N):
    # lots of small groups
    df = pd.DataFrame(np.random.randint(N//10+1, size=(N, 2)), columns=['Id','delta'])
    # few large groups
    # df = pd.DataFrame(np.random.randint(10, size=(N, 2)), columns=['Id','delta'])
    return df
Prime jit
with_numba_idxmax(make_df(10));
Test
df = make_df(2**20)
%timeit with_numba_idxmax(df)
%timeit using_sort_drop(df)
47.4 ms ± 99.8 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
194 ms ± 451 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

Quickly read HDF 5 file in python?

I have an instrument that saves data (many traces from an analog-to-digital converter) as an HDF 5 file. How can I efficiently open this file in python? I have tried the following code, but it seems to take a very long time to extract the data.
Also, it reads the data in the wrong order: instead of reading 1,2,3, it reads 1,10,100,1000.
Any ideas?
Here is a link to the sample data file: https://drive.google.com/file/d/0B4bj1tX3AZxYVGJpZnk2cDNhMzg/edit?usp=sharing
And here is my super-slow code:
import h5py
import matplotlib.pyplot as plt
import numpy as np
f = h5py.File('sample.h5','r')
ks = f.keys()
for index, key in enumerate(ks[:10]):
    print index, key
    data = np.array(f[key].values())
    plt.plot(data.ravel())
plt.show()
As far as the order of your data:
In [10]: f.keys()[:10]
Out[10]:
[u'Acquisition.1',
u'Acquisition.10',
u'Acquisition.100',
u'Acquisition.1000',
u'Acquisition.1001',
u'Acquisition.1002',
u'Acquisition.1003',
u'Acquisition.1004',
u'Acquisition.1005',
u'Acquisition.1006']
This is the correct order for key names that aren't left-padded with zeros: the sort is lexicographic, not numeric. See Python: list.sort() doesn't seem to work for a possible solution.
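For example, a minimal sketch that sorts the keys by their numeric suffix (assuming every key has the form 'Acquisition.<n>'):
ks = sorted(f.keys(), key=lambda k: int(k.split('.')[-1]))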
Second, you're killing your performance by rebuilding the array within the loop:
In [20]: d1 = f[u'Acquisition.990'].values()[0][:]
In [21]: d2 = np.array(f[u'Acquisition.990'].values())
In [22]: np.allclose(d1,d2)
Out[22]: True
In [23]: %timeit d1 = f[u'Acquisition.990'].values()[0][:]
1000 loops, best of 3: 401 µs per loop
In [24]: %timeit d2 = np.array(f[u'Acquisition.990'].values())
1 loops, best of 3: 1.77 s per loop
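Putting the two points together, a sketch of a faster loop (assuming, as in the sample file, that each Acquisition group holds a single dataset):
ks = sorted(f.keys(), key=lambda k: int(k.split('.')[-1]))
for key in ks[:10]:
    data = f[key].values()[0][:]  # slice the dataset directly instead of np.array(group.values())
    plt.plot(data.ravel())
plt.show()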

Reading multiple JSON records into a Pandas dataframe

I'd like to know if there is a memory efficient way of reading multi record JSON file ( each line is a JSON dict) into a pandas dataframe. Below is a 2 line example with working solution, I need it for potentially very large number of records. Example use would be to process output from Hadoop Pig JSonStorage function.
import json
import pandas as pd
test='''{"a":1,"b":2}
{"a":3,"b":4}'''
#df=pd.read_json(test,orient='records') doesn't work, expects []
l=[ json.loads(l) for l in test.splitlines()]
df=pd.DataFrame(l)
Note: Line separated json is now supported in read_json (since 0.19.0):
In [31]: pd.read_json('{"a":1,"b":2}\n{"a":3,"b":4}', lines=True)
Out[31]:
a b
0 1 2
1 3 4
or with a file/filepath rather than a json string:
pd.read_json(json_file, lines=True)
It's going to depend on the size of your DataFrames which is faster, but another option is to use str.join to smash your multi-line "JSON" (note: it's not valid json) into valid json and use read_json:
In [11]: '[%s]' % ','.join(test.splitlines())
Out[11]: '[{"a":1,"b":2},{"a":3,"b":4}]'
For this tiny example this is slower; at around 100 lines the two are similar, and there are significant gains if it's larger...
In [21]: %timeit pd.read_json('[%s]' % ','.join(test.splitlines()))
1000 loops, best of 3: 977 µs per loop
In [22]: %timeit l=[ json.loads(l) for l in test.splitlines()]; df = pd.DataFrame(l)
1000 loops, best of 3: 282 µs per loop
In [23]: test_100 = '\n'.join([test] * 100)
In [24]: %timeit pd.read_json('[%s]' % ','.join(test_100.splitlines()))
1000 loops, best of 3: 1.25 ms per loop
In [25]: %timeit l = [json.loads(l) for l in test_100.splitlines()]; df = pd.DataFrame(l)
1000 loops, best of 3: 1.25 ms per loop
In [26]: test_1000 = '\n'.join([test] * 1000)
In [27]: %timeit l = [json.loads(l) for l in test_1000.splitlines()]; df = pd.DataFrame(l)
100 loops, best of 3: 9.78 ms per loop
In [28]: %timeit pd.read_json('[%s]' % ','.join(test_1000.splitlines()))
100 loops, best of 3: 3.36 ms per loop
Note: of that time, the join itself is surprisingly fast.
If you are trying to save memory, then reading the file a line at a time will be much more memory efficient:
with open('test.json') as f:
    data = pd.DataFrame(json.loads(line) for line in f)
Also, if you import simplejson as json, the compiled C extensions included with simplejson are much faster than the pure-Python json module.
As of Pandas 0.19, read_json has native support for line-delimited JSON:
pd.read_json(jsonfile, lines=True)
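If the file is too large to read in one go, lines=True can also be combined with chunksize (pandas 0.21+), which yields an iterator of smaller DataFrames; a minimal sketch:
reader = pd.read_json('test.json', lines=True, chunksize=10000)
df = pd.concat(chunk for chunk in reader)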
++++++++Update++++++++++++++
As of v0.19, Pandas supports this natively (see https://github.com/pandas-dev/pandas/pull/13351). Just run:
df=pd.read_json('test.json', lines=True)
++++++++Old Answer++++++++++
The existing answers are good, but for a little variety, here is another way to accomplish your goal that requires a simple pre-processing step outside of python so that pd.read_json() can consume the data.
Install jq https://stedolan.github.io/jq/.
Create a valid json file with cat test.json | jq -c --slurp . > valid_test.json
Create dataframe with df=pd.read_json('valid_test.json')
In ipython notebook, you can run the shell command directly from the cell interface with
!cat test.json | jq -c --slurp . > valid_test.json
df=pd.read_json('valid_test.json')
