Number of Unique values in Dask-Dataframe columns - python

I have a Dask DataFrame read from a CSV file with around 1 million records and 120 features/columns, and I would like to count the number of unique values in each column. I can clearly do it for each column separately using a for loop:
from dask import dataframe as dd
dask_df = dd.read_csv("train.csv")
for column in dask_df.columns:
    print(dask_df[column].nunique().compute())
But calling compute at each iteration is very expensive (it took around ~40 minutes on a 3-node cluster with 5 workers, each worker having 2 GB of memory and 2 vcores), so is there a way to get the number of unique values for every column of the dataframe at once? I have tried the dask_df.describe() API, but that gives unique counts only for string columns. Any help appreciated, thanks in advance!

Here's another workaround, where the number of unique values for each column is calculated all at once, giving Dask more opportunity for optimization:
import random
import pandas
import dask
import dask.dataframe as dd
df = pandas.DataFrame({
    "x": [random.randint(0, 100) for _ in range(100)],
    "y": [random.randint(0, 100) for _ in range(100)],
    "z": [random.randint(0, 100) for _ in range(100)],
})
ddf = dd.from_pandas(df, npartitions=10)
unique = {
    name: ddf[name].nunique()
    for name in ddf.columns
}
# traverse=True is default, but being explicit that we are traversing the dict for dask objects
dask.compute(unique, traverse=True)
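The same pattern can be applied to the original train.csv (a sketch; only the filename comes from the question): building the whole dict before a single dask.compute call lets Dask share the CSV reading and task scheduling across all 120 per-column reductions, instead of re-reading the file on every iteration.
dask_df = dd.read_csv("train.csv")
# one compute() for all columns; Dask reuses the parsed partitions across the reductions
unique_counts, = dask.compute({col: dask_df[col].nunique() for col in dask_df.columns})
print(unique_counts)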

I don't know if it is the fastest solution, but you can use .melt() to unpivot your dataframe columns and then .groupby() on the variable column to count the unique values in each group. This gives a significant performance improvement over your column-by-column solution:
dd.read_csv('test.csv').melt().groupby('variable')['value'].nunique().compute()
Let us generate some random integer data and save it as CSV:
import numpy as np
import pandas as pd
from dask import dataframe as dd
nrows = 10000
ncols = 120
rng = np.random.default_rng(seed=1)
random_data = rng.integers(low=0, high=nrows//2, size=(nrows, ncols))
pd.DataFrame(data=random_data).add_prefix('col_').to_csv('test.csv', index=False)
We use the following two functions for performance evaluation:
def nunique_per_column():
    dask_df = dd.read_csv('test.csv')
    counts = []
    for col in dask_df.columns:
        counts.append(dask_df[col].nunique().compute())
    return pd.Series(counts, index=dask_df.columns)

def melt_groupby_nunique():
    return dd.read_csv('test.csv').melt().groupby('variable')['value'].nunique().compute()
First check if both functions compute the same result with:
pd.testing.assert_series_equal(nunique_per_column().sort_index(),
                               melt_groupby_nunique().sort_index(),
                               check_names=False)
%timeit on the functions and the sample data yields the following output on my machine:
%timeit nunique_per_column()
17.5 s ± 216 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit melt_groupby_nunique()
1.78 s ± 576 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

@Mohamed As of Dask version 2022.01.0, dask.DataFrame.nunique() has been implemented:
import random
import pandas
import dask.dataframe as dd
df = pandas.DataFrame({
    "x": [random.randint(0, 100) for _ in range(100)],
    "y": [random.randint(0, 100) for _ in range(100)],
    "z": [random.randint(0, 100) for _ in range(100)],
})
ddf = dd.from_pandas(df, npartitions=10)
ddf.nunique().compute()

Related

Filter rows based on multiple columns entries

I have a dataframe which contains millions of entries and looks something like this:
Chr  Start     Alt
1    21651521  A
1    41681521  T
1    41681521  T
...  ...       ...
X    423565    T
I am currently trying to count the number of rows that match several conditions at the same time, i.e. Chr==1, Start==41681521 and Alt==T.
Right now I am using this syntax, which works fine, but seems unpythonic and, I think, is also rather slow.
num_occurrence = sum((df["Chr"] == chrom) &
                     (df["Start"] == int(position)) &
                     (df["Alt"] == allele))
Does anyone have an approach which is more suitable than mine?
Any help is much appreciated!
Cheers!
Alternative 1: pd.DataFrame.query()
You could work with query (see also the illustrative examples here):
expr = "Chr=={chr} & Start=={pos} & Alt=='{alt}'"
ret = df.query(expr.format(chr=chrom, pos=int(position), alt=allele))
In my experiments, this already led to a considerable speedup.
Optimizing this further requires additional information about the data types involved. There are several things you could try:
Alternative 2: Query sorted data
If you can afford to sort your DataFrame prior to querying, you can use pd.Series.searchsorted(). Here is a possible approach:
def query_sorted(df, chrom, position, allele):
    """
    Returns index of the matches.
    """
    assert df["Start"].is_monotonic_increasing
    i_min, i_max = df["Start"].searchsorted([position, position+1])
    df = df.iloc[i_min:i_max]
    return df[(df["Chr"] == chrom) & (df["Alt"] == allele)].index
# Usage: first sort df by column "Start", then query:
df = df.sort_values("Start")
ret_index = query_sorted(df, chrom, position, allele)
print(len(ret_index))
Alternative 3: Use hashes
Another idea would be to use hashes. Again, this requires some calculations up front, but it speeds up the query considerably. Here is an example based on pd.util.hash_pandas_object():
def query_hash(df, chrom, position, allele):
    """
    Returns a view on df
    """
    assert "hash" in df
    dummy = pd.DataFrame([[chrom, position, allele]])
    query_hash = pd.util.hash_pandas_object(dummy, index=False).squeeze()
    return df[df["hash"] == query_hash].index
# Usage: first compute hashes over the columns of interest, then query
df["hash"] = pd.util.hash_pandas_object(df[["Chr", "Start", "Alt"]],
                                        index=False)
ret_index = query_hash(df, chrom, position, allele)
print(len(ret_index))
Alternative 4: Use a multi-index
Pandas also operates with hashes when accessing rows via the index. Thus, instead of calculating hashes explicitly, as in the previous alternative, one could simply set the index of the DataFrame prior to querying. (Since setting all columns as index would result in an empty DataFrame, I first create a dummy column. For a real DataFrame with additional columns this will probably not be necessary.)
df["dummy"] = None
df = df.set_index(["Chr", "Start", "Alt"])
df = df.sort_index() # Improves performance
print(len(df.loc[(chrom, position, allele)]))
# Interestingly, chaining .loc[] is about twice as fast
print(len(df.loc[chrom].loc[position].loc[allele]))
Note that using an index where one index value maps to many records is not always a good idea. Also, this approach is slower than alternative 3, indicating that Pandas does some extra work here.
There are certainly many more ways to improve this, though the alternative approaches will depend on your specific needs.
Results
I tested with n=10M samples on a MacBook Pro (Mid 2015), running Python 3.8, Pandas 1.2.4 and IPython 7.24.1. Note that the performance evaluation depends on the problem size. The relative assessment of the methods therefore will change for different problem sizes.
# original (sum(s)): 1642.0 ms ± 19.1 ms
# original (s.sum()): 639.0 ms ± 21.9 ms
# query(): 175.0 ms ± 1.1 ms
# query_sorted(): 17.5 ms ± 60.4 µs
# query-hash(): 10.6 ms ± 62.5 µs
# multi-index: 71.5 ms ± 0.7 ms
# multi-index (seq.): 36.5 ms ± 0.6 ms
Implementation
This is how I constructed the data and compared the different approaches.
import numpy as np
import pandas as pd
# Create test data
n = int(10*1e6)
df = pd.DataFrame({"Chr": np.random.randint(1,23+1,n),
                   "Start": np.random.randint(100,999, n),
                   "Alt": np.random.choice(list("ACTG"), n)})
# Query point
chrom, position, allele = 1, 142, "A"
# Measure performance in IPython
print("original (sum(s)):")
%timeit sum((df["Chr"] == chrom) & \
            (df["Start"] == int(position)) & \
            (df["Alt"] == allele))
print("original (s.sum()):")
%timeit ((df["Chr"] == chrom) & \
         (df["Start"] == int(position)) & \
         (df["Alt"] == allele)).sum()
print("query():")
%timeit len(df.query(expr.format(chr=chrom, \
                                 pos=position, \
                                 alt=allele)))
print("query_sorted():")
df_sorted = df.sort_values("Start")
%timeit query_sorted(df_sorted, chrom, position, allele)
print("query-hash():")
df_hash = df.copy()
df_hash["hash"] = pd.util.hash_pandas_object(df_hash[["Chr", "Start", "Alt"]],
                                             index=False)
%timeit query_hash(df_hash, chrom, position, allele)
print("multi-index:")
df_multi = df.copy()
df_multi["dummy"] = None
df_multi = df_multi.set_index(["Chr", "Start", "Alt"]).sort_index()
%timeit df_multi.loc[(chrom, position, allele)]
print("multi-index (seq.):")
%timeit len(df_multi.loc[chrom].loc[position].loc[allele])
Use DataFrame.all + Series.sum:
res = (df[["Chr", "Start", "Alt"]] == [chrom, int(position), allele]).all(1).sum()
For example:
import pandas as pd
# toy data
df = pd.DataFrame(data=[[1, 21651521, "A"], [1, 41681521, "T"], [1, 41681521, "T"]], columns=["Chr", "Start", "Alt"])
chrom, position, allele = 1, "21651521", "A"
res = (df[["Chr", "Start", "Alt"]] == [chrom, int(position), allele]).all(1).sum()
print(res)
Output
1

Vectorizing hashing function in pandas

I have the following dataset (with different values in practice; the same rows are just repeated here).
I need to combine the columns and hash them, specifically with the hashlib library and the algorithm provided.
The problem is that it takes too long, and I have the feeling the function could be vectorized, but I am not an expert.
The function is pretty simple and I feel like it can be vectorized, but I am struggling to implement that.
I am working with millions of rows and it takes hours, even when hashing only 4 column values.
import pandas as pd
import hashlib
data = pd.DataFrame({'first_identifier':['ALP1x','RDX2b']* 100000,'second_identifier':['RED413','BLU031']* 100000})
def _mutate_hash(row):
    return hashlib.md5(row.sum().lower().encode()).hexdigest()
%timeit data['row_hash']=data.apply(_mutate_hash,axis=1)
Using a list comprehension will get you a significant speedup.
First your original:
import pandas as pd
import hashlib
n = 100000
data = pd.DataFrame({'first_identifier':['ALP1x','RDX2b']* n,'second_identifier':['RED413','BLU031']* n})
def _mutate_hash(row):
    return hashlib.md5(row.sum().lower().encode()).hexdigest()
%timeit data['row_hash']=data.apply(_mutate_hash,axis=1)
1 loop, best of 5: 26.1 s per loop
Then as a list comprehension:
data = pd.DataFrame({'first_identifier':['ALP1x','RDX2b']* n,'second_identifier':['RED413','BLU031']* n})
def list_comp(df):
    return pd.Series([_mutate_hash(row) for row in df.to_numpy()])
%timeit data['row_hash']=list_comp(data)
1 loop, best of 5: 872 ms per loop
...i.e., a speedup of ~30x.
As a check: you can verify that these two methods yield equivalent results by putting the first one in "data2" and the second one in "data3", and then confirming that they're equal:
data2, data3 = pd.DataFrame([]), pd.DataFrame([])
%timeit data2['row_hash']=data.apply(_mutate_hash,axis=1)
...
%timeit data3['row_hash']=list_comp(data)
...
data2.equals(data3)
True
The easiest performance boost comes from using vectorized string operations. If you do the string prep (lowercasing and encoding) before applying the hash function, your performance is much more reasonable.
import hashlib
import pandas as pd

data = pd.DataFrame(
    {
        "first_identifier": ["ALP1x", "RDX2b"] * 1000000,
        "second_identifier": ["RED413", "BLU031"] * 1000000,
    }
)

def _mutate_hash(row):
    return hashlib.md5(row).hexdigest()

prepped_data = data.apply(lambda col: col.str.lower().str.encode("utf8")).sum(axis=1)
data["row_hash"] = prepped_data.map(_mutate_hash)
I see ~25x speedup with that change.
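As a rough sanity check (a sketch, assuming the snippet above has already populated data["row_hash"]): recomputing a small slice with the original row-wise apply should reproduce the same hashes, since lowercasing and encoding each column before concatenation yields the same bytes as concatenating first.
# compare only the first 1,000 rows; the row-wise apply is slow on the full 2M rows
sample = data[["first_identifier", "second_identifier"]].head(1000)
row_wise = sample.apply(lambda row: hashlib.md5(row.sum().lower().encode()).hexdigest(), axis=1)
assert row_wise.equals(data["row_hash"].head(1000))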

Calculating a for loop with different indexes simultaneosuly

I have the following for function:
def calculateEMAs(df,startIndex,endIndex):
    for index,row in df.iterrows():
        for i in range (1,51):
            if(index-i > 0):
                df.loc[index,"EMA%d"%i] = abs(df.iloc[index-i]["Trade Close"] - df.iloc[index]["Trade Close"])/2 #replace this with EMA formula
    print(df)
This for loop takes a long time to calculate the values for the data frame, as it has to loop 50 times for each row (it takes approximately 62 seconds).
I tried to use a multiprocessing pool from this question. My code looks like this now:
def calculateEMAs(df,startIndex,endIndex):
    for index,row in df.iterrows():
        for i in range (startIndex,endIndex):
            if(index-i > 0):
                df.loc[index,"EMA%d"%i] = abs(df.iloc[index-i]["Trade Close"] - df.iloc[index]["Trade Close"])/2 #replace this with EMA formula
    print(df)

def main():
    dfClosePrice = getFileDataframe().to_frame()
    pool = Pool()
    time0 = time.time()
    result1 = pool.apply_async(calculateEMAs,[dfClosePrice,1,10])
    result2 = pool.apply_async(calculateEMAs,[dfClosePrice,10,20])
    result3 = pool.apply_async(calculateEMAs,[dfClosePrice,20,30])
    result4 = pool.apply_async(calculateEMAs,[dfClosePrice,30,40])
    result5 = pool.apply_async(calculateEMAs,[dfClosePrice,40,51])
    answer1 = result1.get()
    answer2 = result2.get()
    answer3 = result3.get()
    answer4 = result4.get()
    answer5 = result5.get()
    print(time.time() - time0)
    print(dfClosePrice)
I run the function asynchronously with different index ranges for the for loop. This takes 19 seconds to complete, and I can see the result of each function printed correctly, but the final value of dfClosePrice is a dataframe with only one column (Trade Close); the new columns from each async function are never added to the dataframe. How can I do it the right way?
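As an aside on why the columns disappear: multiprocessing pickles dfClosePrice for every apply_async call, so each worker mutates a private copy that never reaches the parent process. A minimal sketch of one way to get the data back (an assumption on my part, separate from the solution below) is to return the new columns from the worker and join them in main():
def calculateEMAs(df, startIndex, endIndex):
    out = pd.DataFrame(index=df.index)   # collect the new EMA columns here
    for index, row in df.iterrows():
        for i in range(startIndex, endIndex):
            if index - i > 0:
                out.loc[index, "EMA%d" % i] = abs(df.iloc[index-i]["Trade Close"] - df.iloc[index]["Trade Close"])/2
    return out                            # returned frames travel back through result.get()

# in main(), after the pool calls:
# dfClosePrice = dfClosePrice.join(answer1).join(answer2).join(answer3).join(answer4).join(answer5)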
Solution Using Numpy vectorization
Issue
Line if(index-i > 0): should be if(index-i >= 0):, otherwise we miss the difference where index - i == 0
Use 'Close' rather than 'Trade Close' (doesn't matter for performance, but avoids renaming the column after pulling the data from the web)
Code
import numpy as np
import pandas as pd
def compute_using_np(df, start_index, end_index):
    '''
    Using numpy to vectorize computation
    '''
    nrows = len(df)
    ncols = end_index - start_index
    # container for pairwise differences
    pair_wise_diff = np.empty((nrows, ncols))  #np.zeros((nrows, ncols), dtype = float)
    pair_wise_diff.fill(np.nan)
    # Get values of Trading close column as numpy 1D array
    values = df['Close'].values
    # Compute differences for different offsets
    for offset in range(start_index, end_index):
        # Using numpy to compute vectorized difference (i.e. faster computation)
        diff = np.abs(values[offset:] - values[:-offset])/2.0
        # Update result
        pair_wise_diff[offset:, offset-start_index] = diff
    # Place into DataFrame
    columns = ["EMA%d"%i for i in range(start_index, end_index)]
    df_result = pd.DataFrame(data = pair_wise_diff, index = np.arange(nrows), columns = columns)
    # Add result to df merging on index
    return df.join(df_result)
Usage
df_result = compute_using_np(df, 1, 51)
Performance
Summary
Posted Code: 37.9 s ± 143 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Numpy Code: 1.56 ms ± 27.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Result: 20K times speed up
Test Code
import pandas_datareader as dr
import pandas as pd
import numpy as np
def calculateEMAs(df, start_index, end_index):
    '''
    Posted code changed 1) use Python PEP 8 naming convention,
    2) corrected conditional
    '''
    for index,row in df.iterrows():
        for i in range (start_index, end_index):
            if(index-i >= 0):
                df.loc[index,"EMA%d"%i] = abs(df.iloc[index-i]["Close"] - df.iloc[index]["Close"])/2 #replace this with EMA formula
    return df
def compute_using_np(df, start_index, end_index):
    '''
    Using numpy to vectorize computation
    '''
    nrows = len(df)
    ncols = end_index - start_index
    # container for pairwise differences
    pair_wise_diff = np.empty((nrows, ncols))  #np.zeros((nrows, ncols), dtype = float)
    pair_wise_diff.fill(np.nan)
    # Get values of Trading close column as numpy 1D array
    values = df['Close'].values
    # Compute differences for different offsets
    for offset in range(start_index, end_index):
        # Using numpy to compute vectorized difference (i.e. faster computation)
        diff = np.abs(values[offset:] - values[:-offset])/2.0
        # Update result
        pair_wise_diff[offset:, offset-start_index] = diff
    # Place into DataFrame
    columns = ["EMA%d"%i for i in range(start_index, end_index)]
    df_result = pd.DataFrame(data = pair_wise_diff, index = np.arange(nrows), columns = columns)
    # Add result to df merging on index
    return df.join(df_result)
# Get ibm closing stock pricing (777 DataFrame rows)
df = dr.data.get_data_yahoo('ibm', start = '2017-09-01', end = '2020-10-02')
df.reset_index(level=0, inplace = True) # create index which is 0, 1, 2, ...
# Time Original post
df1 = df.copy() # Copy data since operation is inplace
%timeit calculateEMAs(df1, 1, 51) # Jupyter Notebook Magic method
# Time Numpy Version
%timeit compute_using_np(df, 1, 51) # Jupyter Notebook Magic method
# No need to copy since operation is not inplace

Select the max row per group - pandas performance issue

I'm selecting one max row per group and I'm using groupby/agg to return index values and select the rows using loc.
For example, to group by "Id" and then select the row with the highest "delta" value:
selected_idx = df.groupby("Id").apply(lambda df: df.delta.argmax())
selected_rows = df.loc[selected_idx, :]
However, it's so slow this way. Actually, my i7/16G RAM laptop hangs when I'm using this query on 13 million rows.
I have two questions for experts:
How can I make this query run fast in pandas? What am I doing wrong?
Why is this operation so expensive?
[Update]
Thank you so much for @unutbu's analysis!
sort_drop it is! On my i7/32G RAM machine, groupby+idxmax hung for nearly 14 hours (never returned a thing), whereas sort_drop handled it in LESS THAN A MINUTE!
I still need to look at how pandas implements each method, but the problem is solved for now! I love StackOverflow.
The fastest option depends not only on length of the DataFrame (in this case, around 13M rows) but also on the number of groups. Below are perfplots which compare a number of ways of finding the maximum in each group:
If there are only a few (large) groups, using_idxmax may be the fastest option:
If there are many (small) groups and the DataFrame is not too large, using_sort_drop may be the fastest option:
Keep in mind, however, that while using_sort_drop, using_sort and using_rank start out looking very fast, as N = len(df) increases their advantage over the other options disappears quickly. For large enough N, using_idxmax becomes the fastest option, even if there are many groups.
using_sort_drop, using_sort and using_rank sort the DataFrame (or the groups within the DataFrame). Sorting is O(N * log(N)) on average, while the other methods use O(N) operations. This is why a method like using_idxmax beats using_sort_drop for very large DataFrames.
Be aware that benchmark results may vary for a number of reasons, including machine specs, OS, and software versions. So it is important to run benchmarks on your own machine, and with test data tailored to your situation.
Based on the perfplots above, using_sort_drop may be an option worth considering for your DataFrame of 13M rows, especially if it has many (small) groups. Otherwise, I would suspect using_idxmax to be the fastest option -- but again, it's important that you check benchmarks on your machine.
Here is the setup I used to make the perfplots:
import numpy as np
import pandas as pd
import perfplot
def make_df(N):
    # lots of small groups
    df = pd.DataFrame(np.random.randint(N//10+1, size=(N, 2)), columns=['Id','delta'])
    # few large groups
    # df = pd.DataFrame(np.random.randint(10, size=(N, 2)), columns=['Id','delta'])
    return df

def using_idxmax(df):
    return df.loc[df.groupby("Id")['delta'].idxmax()]

def max_mask(s):
    i = np.asarray(s).argmax()
    result = [False]*len(s)
    result[i] = True
    return result

def using_custom_mask(df):
    mask = df.groupby("Id")['delta'].transform(max_mask)
    return df.loc[mask]

def using_isin(df):
    idx = df.groupby("Id")['delta'].idxmax()
    mask = df.index.isin(idx)
    return df.loc[mask]

def using_sort(df):
    df = df.sort_values(by=['delta'], ascending=False, kind='mergesort')
    return df.groupby('Id', as_index=False).first()

def using_rank(df):
    mask = (df.groupby('Id')['delta'].rank(method='first', ascending=False) == 1)
    return df.loc[mask]

def using_sort_drop(df):
    # Thanks to jezrael
    # https://stackoverflow.com/questions/50381064/select-the-max-row-per-group-pandas-performance-issue/50389889?noredirect=1#comment87795818_50389889
    return df.sort_values(by=['delta'], ascending=False, kind='mergesort').drop_duplicates('Id')

def using_apply(df):
    selected_idx = df.groupby("Id").apply(lambda df: df.delta.argmax())
    return df.loc[selected_idx]

def check(df1, df2):
    df1 = df1.sort_values(by=['Id','delta'], kind='mergesort').reset_index(drop=True)
    df2 = df2.sort_values(by=['Id','delta'], kind='mergesort').reset_index(drop=True)
    return df1.equals(df2)
perfplot.show(
    setup=make_df,
    kernels=[using_idxmax, using_custom_mask, using_isin, using_sort,
             using_rank, using_apply, using_sort_drop],
    n_range=[2**k for k in range(2, 20)],
    logx=True,
    logy=True,
    xlabel='len(df)',
    repeat=75,
    equality_check=check)
Another way to benchmark is to use IPython %timeit:
In [55]: df = make_df(2**20)
In [56]: %timeit using_sort_drop(df)
1 loop, best of 3: 403 ms per loop
In [57]: %timeit using_rank(df)
1 loop, best of 3: 1.04 s per loop
In [58]: %timeit using_idxmax(df)
1 loop, best of 3: 15.8 s per loop
Using Numba's jit
from numba import njit
import numpy as np

@njit
def nidxmax(bins, k, weights):
    out = np.zeros(k, np.int64)
    trk = np.zeros(k)
    for i, w in enumerate(weights - (weights.min() - 1)):
        b = bins[i]
        if w > trk[b]:
            trk[b] = w
            out[b] = i
    return np.sort(out)

def with_numba_idxmax(df):
    f, u = pd.factorize(df.Id)
    return df.iloc[nidxmax(f, len(u), df.delta.values)]
Borrowing from @unutbu
def make_df(N):
    # lots of small groups
    df = pd.DataFrame(np.random.randint(N//10+1, size=(N, 2)), columns=['Id','delta'])
    # few large groups
    # df = pd.DataFrame(np.random.randint(10, size=(N, 2)), columns=['Id','delta'])
    return df
Prime jit
with_numba_idxmax(make_df(10));
Test
df = make_df(2**20)
%timeit with_numba_idxmax(df)
%timeit using_sort_drop(df)
47.4 ms ± 99.8 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
194 ms ± 451 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
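A quick equivalence check (a sketch that reuses @unutbu's check() helper from the answer above): both kernels should select the same max row per group.
assert check(with_numba_idxmax(df), using_sort_drop(df))   # same rows selected by both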

Quickly read HDF 5 file in python?

I have an instrument that saves data (many traces from an analog-to-digital converter) as an HDF 5 file. How can I efficiently open this file in python? I have tried the following code, but it seems to take a very long time to extract the data.
Also, it reads the data in the wrong order: instead of reading 1,2,3, it reads 1,10,100,1000.
Any ideas?
Here is a link to the sample data file: https://drive.google.com/file/d/0B4bj1tX3AZxYVGJpZnk2cDNhMzg/edit?usp=sharing
And here is my super-slow code:
import h5py
import matplotlib.pyplot as plt
import numpy as np
f = h5py.File('sample.h5','r')
ks = f.keys()
for index,key in enumerate(ks[:10]):
    print index, key
    data = np.array(f[key].values())
    plt.plot(data.ravel())
plt.show()
As far as the order of your data:
In [10]: f.keys()[:10]
Out[10]:
[u'Acquisition.1',
u'Acquisition.10',
u'Acquisition.100',
u'Acquisition.1000',
u'Acquisition.1001',
u'Acquisition.1002',
u'Acquisition.1003',
u'Acquisition.1004',
u'Acquisition.1005',
u'Acquisition.1006']
This is the expected order for numbers that aren't left-padded with zeros: the keys are being sorted lexicographically, not numerically. See Python: list.sort() doesn't seem to work for a possible solution; a minimal sketch is below.
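For example (a sketch, assuming every key follows the "Acquisition.<n>" pattern from the sample file), sorting on the numeric suffix restores the expected order:
ordered_keys = sorted(f.keys(), key=lambda k: int(k.split('.')[-1]))
# ordered_keys[:3] -> ['Acquisition.1', 'Acquisition.2', 'Acquisition.3']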
Second, you're killing your performance by rebuilding the array within the loop:
In [20]: d1 = f[u'Acquisition.990'].values()[0][:]
In [21]: d2 = np.array(f[u'Acquisition.990'].values())
In [22]: np.allclose(d1,d2)
Out[22]: True
In [23]: %timeit d1 = f[u'Acquisition.990'].values()[0][:]
1000 loops, best of 3: 401 µs per loop
In [24]: %timeit d2 = np.array(f[u'Acquisition.990'].values())
1 loops, best of 3: 1.77 s per loop
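Putting both fixes together, a sketch of the faster read loop (assuming, as in the sample file, that each "Acquisition.*" group holds exactly one dataset):
ordered_keys = sorted(f.keys(), key=lambda k: int(k.split('.')[-1]))
for key in ordered_keys[:10]:
    dset = list(f[key].values())[0]   # the single dataset inside this group
    data = dset[:]                    # slicing reads straight into a NumPy array
    plt.plot(data.ravel())
plt.show()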
