Pandas - length of connectable pd.Intervals - python

We start with an interval axis that is divided into bins of length 5. (0,5], (5, 10], ...
There is a timestamp column that has some timestamps >= 0. By using pd.cut() the interval bin that corresponds to the timestamp is determined. (e.g. "timestamp" = 3.0 -> "time_bin" = (0,5]).
If there is a time bin that has no corresponding timestamp, it does not show up in the interval column. Thus, there can be interval gaps in the "time_bin" column, e.g., (5,10], (15,20]. (i.e., interval (10,15] is missing // note that the timestamp column is sorted)
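For illustration, here is a minimal sketch of how such a "time_bin" column could be produced with pd.cut (the edges and right=False are assumptions, chosen to match the left-closed bins in the example dataframe below):
import numpy as np
import pandas as pd

timestamps = pd.Series([0.0, 3.0, 9.0, 24.2, 30.2])
edges = np.arange(0, 40, 5)                             # 0, 5, 10, ..., 35
time_bin = pd.cut(timestamps, bins=edges, right=False)  # [0, 5), [0, 5), [5, 10), [20, 25), [30, 35)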
The goal is to obtain a column "connected_interval" that indicates whether the current row interval is connected to the previous row interval (connected meaning no interval gaps, i.e., (0,5], (5,10], (10,15] would be assigned the same integer ID), and a column "conn_interv_len" that gives, for each largest possible connected interval, its total length. The connected run (0,5], (5,10], (10,15] would be of length 15.
The initial dataframe has columns "group_id", "timestamp", "time_bin". Columns "connected_interval" & "conn_interv_len" should be computed.
Note: any solution to obtaining the length of populated connected intervals is welcome.
df = pd.DataFrame({
    "group_id": ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B'],
    "timestamp": [0.0, 3.0, 9.0, 24.2, 30.2, 0.0, 136.51, 222.0, 237.0, 252.0],
    "time_bin": [pd.Interval(0, 5, closed='left'), pd.Interval(0, 5, closed='left'),
                 pd.Interval(5, 10, closed='left'), pd.Interval(20, 25, closed='left'),
                 pd.Interval(30, 35, closed='left'), pd.Interval(0, 5, closed='left'),
                 pd.Interval(135, 140, closed='left'), pd.Interval(220, 225, closed='left'),
                 pd.Interval(235, 240, closed='left'), pd.Interval(250, 255, closed='left')],
    "connected_interval": [0, 0, 0, 1, 2, 0, 1, 2, 3, 4],
    "conn_interv_len": [10, 10, 10, 5, 5, 5, 5, 5, 5, 5],
})
input with expected output columns:
  group_id  timestamp    time_bin  connected_interval  conn_interv_len
0        A       0.00      [0, 5)                   0               10
1        A       3.00      [0, 5)                   0               10
2        A       9.00     [5, 10)                   0               10
3        A      24.20    [20, 25)                   1                5
4        A      30.20    [30, 35)                   2                5
5        B       0.00      [0, 5)                   0                5
6        B     136.51  [135, 140)                   1                5
7        B     222.00  [220, 225)                   2                5
8        B     237.00  [235, 240)                   3                5
9        B     252.00  [250, 255)                   4                5

IIUC, you can sort the intervals, drop duplicates, extract the left/right bounds, create groups based on the match/mismatch of successive left/right bounds, then merge the output back into the original:
df2 = (df[['group_id', 'time_bin']]
       # extract bounds and sort intervals
       .assign(left=df['time_bin'].array.left,
               right=df['time_bin'].array.right)
       .sort_values(by=['group_id', 'left', 'right'])
       # ensure no duplicates
       .drop_duplicates(['group_id', 'time_bin'])
       # compute connected intervals and connected length
       .assign(connected_interval=lambda d:
                   d.groupby('group_id', group_keys=False)
                    .apply(lambda g: g['left'].ne(g['right'].shift())
                                      .cumsum().sub(1)),
               conn_interv_len=lambda d:
                   (g := d.groupby(['group_id', 'connected_interval']))['right'].transform('max')
                   - g['left'].transform('min')
               )
       .drop(columns=['left', 'right'])
)

# merge to restore the dropped duplicated rows
out = df.merge(df2)
output:
  group_id  timestamp    time_bin  connected_interval  conn_interv_len
0        A       0.00      [0, 5)                   0               10
1        A       3.00      [0, 5)                   0               10
2        A       9.00     [5, 10)                   0               10
3        A      24.20    [20, 25)                   1                5
4        A      30.20    [30, 35)                   2                5
5        B       0.00      [0, 5)                   0                5
6        B     136.51  [135, 140)                   1                5
7        B     222.00  [220, 225)                   2                5
8        B     237.00  [235, 240)                   3                5
9        B     252.00  [250, 255)                   4                5

Related

Writing a DataFrame to an excel file where items in a list are put into separate cells

Consider a dataframe like pivoted below, where replicates of some data are given as lists:
d = {'Compound': ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'C'],
     'Conc': [1, 0.5, 0.1, 1, 0.5, 0.1, 2, 1, 0.5, 0.1],
     'Data': [[100, 90, 80], [50, 40, 30], [10, 9.7, 8],
              [20, 15, 10], [3, 4, 5, 6], [100, 110, 80],
              [30, 40, 50, 20], [10, 5, 9, 3], [2, 1, 2, 2], [1, 1, 0]]}

df = pd.DataFrame(data=d)
pivoted = df.pivot(index='Conc', columns='Compound', values='Data')
This df can be written to an excel file as such:
with pd.ExcelWriter('output.xlsx') as writer:
    pivoted.to_excel(writer, sheet_name='Sheet1', index_label='Conc')
How can this instead be written where replicate data are given in side-by-side cells? Desired excel file:
Then you need to pivot your data in a slightly different way, first explode the Data column, and deduplicate with groupby.cumcount:
(df.explode('Data')
   .assign(n=lambda d: d.groupby(level=0).cumcount())
   .pivot(index='Conc', columns=['Compound', 'n'], values='Data')
   .droplevel('n', axis=1).rename_axis(columns=None)
)
Output:
        A    A    A    B    B    B    B    C    C    C    C
Conc
0.1    10  9.7    8  100  110   80  NaN    1    1    0  NaN
0.5    50   40   30    3    4    5    6    2    1    2    2
1.0   100   90   80   20   15   10  NaN   10    5    9    3
2.0   NaN  NaN  NaN  NaN  NaN  NaN  NaN   30   40   50   20
Besides @mozway's answer, just for formatting, you can use:
piv = (df.explode('Data')
         .assign(col=lambda x: x.groupby(level=0).cumcount())
         .pivot(index='Conc', columns=['Compound', 'col'], values='Data')
         .rename_axis(None))
piv.columns = pd.Index([i if j == 0 else '' for i, j in piv.columns], name='Conc')
piv.to_excel('file.xlsx')

loop to return a string in a column if the values of another column are between such and such value [duplicate]

I have a data frame column with numeric values:
df['percentage'].head()
46.5
44.2
100.0
42.12
I want to see the column as bin counts:
bins = [0, 1, 5, 10, 25, 50, 100]
How can I get the result as bins with their value counts?
[0, 1] bin amount
[1, 5] etc
[5, 10] etc
...
You can use pandas.cut:
bins = [0, 1, 5, 10, 25, 50, 100]
df['binned'] = pd.cut(df['percentage'], bins)
print (df)
percentage binned
0 46.50 (25, 50]
1 44.20 (25, 50]
2 100.00 (50, 100]
3 42.12 (25, 50]
bins = [0, 1, 5, 10, 25, 50, 100]
labels = [1,2,3,4,5,6]
df['binned'] = pd.cut(df['percentage'], bins=bins, labels=labels)
print (df)
percentage binned
0 46.50 5
1 44.20 5
2 100.00 6
3 42.12 5
Or numpy.searchsorted:
bins = [0, 1, 5, 10, 25, 50, 100]
df['binned'] = np.searchsorted(bins, df['percentage'].values)
print (df)
percentage binned
0 46.50 5
1 44.20 5
2 100.00 6
3 42.12 5
...and then value_counts or groupby and aggregate size:
s = pd.cut(df['percentage'], bins=bins).value_counts()
print (s)
(25, 50] 3
(50, 100] 1
(10, 25] 0
(5, 10] 0
(1, 5] 0
(0, 1] 0
Name: percentage, dtype: int64
s = df.groupby(pd.cut(df['percentage'], bins=bins)).size()
print (s)
percentage
(0, 1] 0
(1, 5] 0
(5, 10] 0
(10, 25] 0
(25, 50] 3
(50, 100] 1
dtype: int64
By default, cut returns a Categorical.
Series methods like Series.value_counts() will therefore use all categories, even if some categories are not present in the data (see the pandas documentation on categorical operations), as in the sketch below.
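A minimal sketch of this: passing sort=False to value_counts keeps the bins in interval order instead of sorting by count, and empty bins still show up:
counts = pd.cut(df['percentage'], bins=bins).value_counts(sort=False)
print(counts)
# (0, 1]       0
# (1, 5]       0
# (5, 10]      0
# (10, 25]     0
# (25, 50]     3
# (50, 100]    1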
Using the Numba module for a speed-up
On big datasets (more than 500k rows), pd.cut can be quite slow for binning data.
I wrote my own function in Numba with just-in-time compilation, which is roughly six times faster:
import numpy as np
from numba import njit

@njit
def cut(arr):
    bins = np.empty(arr.shape[0])
    for idx, x in enumerate(arr):
        if (x >= 0) & (x < 1):
            bins[idx] = 1
        elif (x >= 1) & (x < 5):
            bins[idx] = 2
        elif (x >= 5) & (x < 10):
            bins[idx] = 3
        elif (x >= 10) & (x < 25):
            bins[idx] = 4
        elif (x >= 25) & (x < 50):
            bins[idx] = 5
        elif (x >= 50) & (x < 100):
            bins[idx] = 6
        else:
            bins[idx] = 7
    return bins
cut(df['percentage'].to_numpy())
# array([5., 5., 7., 5.])
Optional: you can also map it to bins as strings:
a = cut(df['percentage'].to_numpy())
conversion_dict = {1: 'bin1',
2: 'bin2',
3: 'bin3',
4: 'bin4',
5: 'bin5',
6: 'bin6',
7: 'bin7'}
bins = list(map(conversion_dict.get, a))
# ['bin5', 'bin5', 'bin7', 'bin5']
Speed comparison:
# Create a dataframe of 8 million rows for testing
dfbig = pd.concat([df]*2000000, ignore_index=True)
dfbig.shape
# (8000000, 1)
%%timeit
cut(dfbig['percentage'].to_numpy())
# 38 ms ± 616 µs per loop (mean ± standard deviation of 7 runs, 10 loops each)
%%timeit
bins = [0, 1, 5, 10, 25, 50, 100]
labels = [1,2,3,4,5,6]
pd.cut(dfbig['percentage'], bins=bins, labels=labels)
# 215 ms ± 9.76 ms per loop (mean ± standard deviation of 7 runs, 10 loops each)
We could also use np.select:
bins = [0, 1, 5, 10, 25, 50, 100]
df['groups'] = np.select([df['percentage'].between(i, j, inclusive='right')
                          for i, j in zip(bins, bins[1:])],
                         [1, 2, 3, 4, 5, 6])
Output:
percentage groups
0 46.50 5
1 44.20 5
2 100.00 6
3 42.12 5
Convenient and fast version using Numpy
np.digitize is a convenient and fast option:
import pandas as pd
import numpy as np
df = pd.DataFrame({'x': [1,2,3,4,5]})
df['y'] = np.digitize(df['x'], bins=[3, 5])
print(df)
returns
x y
0 1 0
1 2 0
2 3 1
3 4 1
4 5 2

Pandas find value corresponding to absolute minimum

I am trying to find the actual value that corresponds to the absolute minimum from multiple columns. For example:
df = pd.DataFrame({'A': [10, -5, -20, 50], 'B': [-5, 10, 30, 300], 'C': [15, 30, 15, 10]})
The output for this should be another column with values -5, -5, 15 and 10.
I tried df['D'] = df[['A', 'B', 'C']].abs().min(axis=1), but it returns the minimum of absolutes, thereby losing the sign.
Try with idxmin
df['D'] = df.values[df.index,df.columns.get_indexer(df[['A', 'B', 'C']].abs().idxmin(1))]
df
Out[176]:
A B C D
0 10 -5 15 -5
1 -5 10 30 -5
2 -20 30 15 15
3 50 300 10 10
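An arguably more readable variant (a sketch, assuming all three columns are numeric) picks the column position of the absolute minimum with argmin and indexes back into the original values:
import numpy as np

cols = ['A', 'B', 'C']
vals = df[cols].to_numpy()
pos = np.abs(vals).argmin(axis=1)        # column position of the absolute minimum per row
df['D'] = vals[np.arange(len(df)), pos]  # original (signed) value from that column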

Evaluate frequency, duration and values of a timeseries

I'm new to python and have a simple question for which I haven't found an answer yet.
Let's say I have a time series c(t):
t_ c_
1 40
2 41
3 4
4 5
5 7
6 20
7 20
8 8
9 90
10 99
11 10
12 5
13 8
14 8
15 19
I now want to evaluate this series with respect to how long the value c has been continuously in certain ranges and how often these time periods occur.
The result would therefore include three columns: c (binned), duration (binned), frequency. Translated to the simple example the result could look as follows:
c_ Dt_ Freq_
0-50 8 1
50-100 2 1
0-50 5 1
Can you give me some advice?
Thanks in advance,
Ulrike
//EDIT:
Thank you for the replies! My example data were somewhat flawed, so part of my question didn't come through. Here is a new data series:
series=
t c
1 1
2 1
3 10
4 10
5 10
6 1
7 1
8 50
9 50
10 50
12 1
13 1
14 1
If I apply the code proposed by Christoph below:
bins = pd.cut(series['c'], [-1, 5, 100])
same_as_prev = (bins != bins.shift())
run_ids = same_as_prev.cumsum()
result = bins.groupby(run_ids).aggregate(["first", "count"])
I receive a result like this:
first count
(-1, 5] 2
(5, 100] 3
(-1, 5] 2
(5, 100] 3
(-1, 5] 3
but what I'm more interested in is something looking like this:
c length freq
(-1, 5] 2 2
(-1, 5] 3 1
(5, 100] 3 2
How do I achieve this? And how could I plot it in a KDE plot?
Best,
Ulrike
Nicely asked question with an example :)
This is one way to do it, most likely incomplete, but it should help you a bit.
Since your data is spaced in time by a fixed increment, I do not implement the time series and use the index as time. Thus, I convert c to an array and use np.where() to find the indices of the values that fall in each bin.
import numpy as np
c = np.array([40, 41, 4, 5, 7, 20, 20, 8, 90, 99, 10, 5, 8, 8, 19])
bin1 = np.where((0 <= c) & (c <= 50))[0]
bin2 = np.where((50 < c) & (c <= 100))[0]
For bin1, the output is array([ 0, 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14], dtype=int64), which corresponds to the indices where the values of c fall in that bin.
The next step is to find the consecutive indices. According to this SO post:
from itertools import groupby
from operator import itemgetter

data = bin1
for k, g in groupby(enumerate(data), lambda ix: ix[0] - ix[1]):
    print(list(map(itemgetter(1), g)))

# Output is:
# [0, 1, 2, 3, 4, 5, 6, 7]
# [10, 11, 12, 13, 14]
Final step: place the new sub-bin in the right order and track which bins correspond to which subbin. Thus, the complete code would look like:
import numpy as np
from itertools import groupby
from operator import itemgetter

c = np.array([40, 41, 4, 5, 7, 20, 20, 8, 90, 99, 10, 5, 8, 8, 19])

bin1 = np.where((0 <= c) & (c <= 50))[0]
bin2 = np.where((50 < c) & (c <= 100))[0]

# 1 and 2 for the range names.
bins = [(bin1, 1), (bin2, 2)]

subbins = list()
for b in bins:
    data = b[0]
    name = b[1]  # 1 or 2
    for k, g in groupby(enumerate(data), lambda ix: ix[0] - ix[1]):
        subbins.append((list(map(itemgetter(1), g)), name))

subbins = sorted(subbins, key=lambda x: x[0][0])
Output: [([0, 1, 2, 3, 4, 5, 6, 7], 1), ([8, 9], 2), ([10, 11, 12, 13, 14], 1)]
Then, you just have to do the stats you want :)
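For instance, one way (a sketch built on the subbins list above) to get the duration and frequency of the runs is to count run lengths per bin name:
from collections import Counter

run_lengths = [(name, len(idx)) for idx, name in subbins]  # (bin name, run length) per consecutive run
freq = Counter(run_lengths)                                # frequency of each (bin, run length) pair
# Counter({(1, 8): 1, (2, 2): 1, (1, 5): 1})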
import pandas as pd

def bin_run_lengths(series, bins):
    binned = pd.cut(pd.Series(series), bins)
    return binned.groupby(
        (1 - (binned == binned.shift())).cumsum()
    ).aggregate(["first", "count"])
(I'm not sure where your frequency column comes in - in the problem as you describe it, it seems like it would always be set to 1.)
Binning
Binning a series is easy with pandas.cut():
https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.cut.html
import pandas as pd
pd.cut(pd.Series(range(100)), bins=[-1,0,10,20,50,100])
The bins here are given as edges that produce left-exclusive, right-inclusive intervals; the argument can also be given in other forms (see the sketch after the output below).
0 (-1.0, 0.0]
1 (0.0, 10.0]
2 (0.0, 10.0]
3 (0.0, 10.0]
4 (0.0, 10.0]
5 (0.0, 10.0]
6 (0.0, 10.0]
...
19 (10.0, 20.0]
20 (10.0, 20.0]
21 (20.0, 50.0]
22 (20.0, 50.0]
23 (20.0, 50.0]
...
29 (20.0, 50.0]
...
99 (50.0, 100.0]
Length: 100, dtype: category
Categories (5, interval[int64]): [(-1, 0] < (0, 10] < (10, 20] < (20, 50] < (50, 100]]
This converts it from a series of values to a series of intervals.
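For instance (a quick sketch), the bins argument also accepts an integer number of equal-width bins or a prebuilt IntervalIndex:
import pandas as pd

s = pd.Series(range(100))
pd.cut(s, bins=4)                                                       # four equal-width bins
pd.cut(s, bins=pd.IntervalIndex.from_breaks([-1, 0, 10, 20, 50, 100]))  # explicit intervals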
Count consecutive values
This doesn't have a native idiom in pandas, but it is fairly easy with a few common functions. The top-voted StackOverflow answer here puts it very well: Counting consecutive positive value in Python array
same_as_prev = (series != series.shift())
This yields a Boolean series that determines if the value is different from the one before.
run_ids = same_as_prev.cumsum()
This makes an int series that increments from 0 each time the value changes to a new run, and thus assigns each position in the series to a "run ID"
result = series.groupby(run_ids).aggregate(["first", "count"])
This yields a dataframe that shows the value in each run and the length of that run:
first count
0 (-1, 0] 1
1 (0, 10] 10
2 (10, 20] 10
3 (20, 50] 30
4 (50, 100] 49
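To get the freq column asked for in the question's edit, one option (a sketch, assuming result is the first/count table produced above) is to count how often each (bin, run length) pair occurs:
freq = (result.groupby(["first", "count"], observed=True)
              .size()
              .reset_index(name="freq"))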

Cumulative subtraction from first row

I have one series and one DataFrame, all integers.
s = [10,
10,
10]
m = [[0,0,0,0,3,4,5],
[0,0,0,0,1,1,1],
[10,0,0,0,0,5,5]]
I want to return a matrix containing the cumulative differences in place of the existing numbers.
Output:
n = [[10,10,10,10,7,3,-2],
[10,10,10,10,9,8,7],
[0,0,0,0,0,-5,-10]]
Calculate the cumulative sum of the data frame along each row first, and then subtract it from the Series:
import pandas as pd
s = pd.Series(s)
df = pd.DataFrame(m)
-df.cumsum(1).sub(s, axis=0)
# 0 1 2 3 4 5 6
#0 10 10 10 10 7 3 -2
#1 10 10 10 10 9 8 7
#2 0 0 0 0 0 -5 -10
You can directly compute a cumulative difference using np.subtract.accumulate:
# make a copy
>>> n = np.array(m)
# replace first column
>>> n[:, 0] = s - n[:, 0]
# subtract in-place
>>> np.subtract.accumulate(n, axis=1, out=n)
array([[ 10, 10, 10, 10, 7, 3, -2],
[ 10, 10, 10, 10, 9, 8, 7],
[ 0, 0, 0, 0, 0, -5, -10]])
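The same result can also be written directly with broadcasting in plain NumPy (a small sketch, equivalent to the pandas approach above):
import numpy as np

s = np.array([10, 10, 10])
m = np.array([[0, 0, 0, 0, 3, 4, 5],
              [0, 0, 0, 0, 1, 1, 1],
              [10, 0, 0, 0, 0, 5, 5]])

n = s[:, None] - m.cumsum(axis=1)  # subtract running row totals from the starting values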
