Filter based on min and max - python

I have a dataframe as follows
Calls Weight
R 1
A 1
S 3
S 3
Q 7
W 5
E 9
Given a min of 3 and a max of 5, I am trying to filter the data so that all values less than 3 are dropped, while all values greater than 5 are changed to the max (which is 5).
Expected output:
Calls Weight
S 3
S 3
Q 5
W 5
E 5

The transformation is straightforward: filter first, then cap the remaining weights at the max:
df = df[df.Weight >= 3]
df.loc[df.Weight > 5, 'Weight'] = 5
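For completeness, here is a sketch using Series.clip to do the capping in one step (the frame is rebuilt from the example values above):

import pandas as pd

df = pd.DataFrame({'Calls': list('RASSQWE'),
                   'Weight': [1, 1, 3, 3, 7, 5, 9]})
lo, hi = 3, 5
out = df[df['Weight'] >= lo].copy()           # drop rows below the minimum
out['Weight'] = out['Weight'].clip(upper=hi)  # cap anything above the maximum
print(out)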

Related

Index and save last N points from a list that meets conditions from dataframe Python

I have a DataFrame that contains gas concentrations and the corresponding valve number. This data was taken continuously where we switched the valves back and forth (valves=1 or 2) for a certain amount of time to get 10 cycles for each valve value (20 cycles total). A snippet of the data looks like this (I have 2,000+ points and each valve stayed on for about 90 seconds each cycle):
gas1 valveW time
246.9438 2 1
247.5367 2 2
246.7167 2 3
246.6770 2 4
245.9197 1 5
245.9518 1 6
246.9207 1 7
246.1517 1 8
246.9015 1 9
246.3712 2 10
247.0826 2 11
... ... ...
My goal is to save the last N points of each valve's cycle. For example, for the first cycle where valve=1, I want to index the last N points before the valve switches to 2, then average them to get one value representing that cycle. I then want to repeat this for the second cycle where valve=1, and so on.
I am currently converting from Matlab to Python so here is the Matlab code that I am trying to translate:
% NOAA high
n2o_noaaHigh = [];
co2_noaaHigh = [];
co_noaaHigh = [];
h2o_noaaHigh = [];
ind_noaaHigh_end = zeros(1,length(t_c));
numPoints = 40;
for i = 1:length(valveW_c)-1
    if (valveW_c(i) == 1 && valveW_c(i+1) ~= 1)
        test = (i-numPoints):i;
        ind_noaaHigh_end(test) = 1;
        n2o_noaaHigh = [n2o_noaaHigh mean(n2o_c(test))];
        co2_noaaHigh = [co2_noaaHigh mean(co2_c(test))];
        co_noaaHigh = [co_noaaHigh mean(co_c(test))];
        h2o_noaaHigh = [h2o_noaaHigh mean(h2o_c(test))];
    end
end
ind_noaaHigh_end = logical(ind_noaaHigh_end);
This is what I have so far for Python:
# NOAA high
n2o_noaaHigh = []
co2_noaaHigh = []
co_noaaHigh = []
h2o_noaaHigh = []
t_c_High = []  # time
for i in range(len(valveW_c)):
    # NOAA HIGH
    if valveW_c[i] == 1:
        t_c_High.append(t_c[i])
        n2o_noaaHigh.append(n2o_c[i])
        co2_noaaHigh.append(co2_c[i])
        co_noaaHigh.append(co_c[i])
        h2o_noaaHigh.append(h2o_c[i])
Thanks in advance!
I'm not sure if I understood correctly, but I guess this is what you are looking for:
# First we create a column to show cycles:
df['cycle'] = (df.valveW.diff() != 0).cumsum()
print(df)
gas1 valveW time cycle
0 246.9438 2 1 1
1 247.5367 2 2 1
2 246.7167 2 3 1
3 246.677 2 4 1
4 245.9197 1 5 2
5 245.9518 1 6 2
6 246.9207 1 7 2
7 246.1517 1 8 2
8 246.9015 1 9 2
9 246.3712 2 10 3
10 247.0826 2 11 3
Now you can use the groupby method to get the average of the last n points of each cycle:
n = 3 #we assume this is n
df.groupby('cycle').apply(lambda x: x.iloc[-n:, 0].mean())
Output:
cycle 0
1 246.9768
2 246.6579
3 246.7269
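As an alternative sketch (not part of the original answer), GroupBy.tail can grab the last n rows of each cycle before averaging, assuming gas1 is the column of interest:

n = 3
last_n = df.groupby('cycle').tail(n)                  # last n rows of every cycle
cycle_means = last_n.groupby('cycle')['gas1'].mean()  # one value per cycle
print(cycle_means)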
Let's call your DataFrame df; then you could do:
results = {}
for k, v in df.groupby((df['valveW'].shift() != df['valveW']).cumsum()):
    results[k] = v
    print(f'[group {k}]')
    print(v)
shift(), as its name suggests, shifts the valveW column by one row, which lets us detect where the value changes. cumsum() then assigns a unique number to each run of identical values, and we can do a groupby() on that column (which was not possible before, because grouping on valveW alone lumps all the 1s together and all the 2s together).
which gives e.g. for your code snippet (saved in results):
[group 1]
gas1 valveW time
0 246.9438 2 1
1 247.5367 2 2
2 246.7167 2 3
3 246.6770 2 4
[group 2]
gas1 valveW time
4 245.9197 1 5
5 245.9518 1 6
6 246.9207 1 7
7 246.1517 1 8
8 246.9015 1 9
[group 3]
gas1 valveW time
9 246.3712 2 10
10 247.0826 2 11
Then, to get the mean for each cycle, you could e.g. do:
df.groupby((df['valveW'].shift() != df['valveW']).cumsum()).mean()
which gives (again for your code snippet):
gas1 valveW time
valveW
1 246.96855 2.0 2.5
2 246.36908 1.0 7.0
3 246.72690 2.0 10.5
where you wouldn't care much about the time mean but the gas1 one!
Then, based on results you could e.g. do:
n = 3
mean_n_last = []
for k, v in results.items():
    if len(v) < n:
        mean_n_last.append(np.nan)
    else:
        mean_n_last.append(np.nanmean(v.iloc[len(v) - n:, 0]))
which gives [246.9768, 246.65796666666665, nan] for n = 3!
If your dataframe is sorted by time you could get the last N records for each valve like this.
N=2
valve1 = df[df['valveW']==1].iloc[-N:,:]
valve2 = df[df['valveW']==2].iloc[-N:,:]
If it isn't currently sorted you could easily sort it like this.
df = df.sort_values(by=['time'])

python pandas average number of consecutive values

I have a pandas DataFrame and I want the average number of consecutive values in a row. For example, for the following data
a b c d e f g h i j k l
p1 0 0 4 4 4 4 4 4 1 4 4 1
p2 0 4 4 0 4 4 0 1 4 4 0 1
so the average number of consecutive 4's for p1 is (6+2)/2 = 4 and for p2 is (2+2+2)/3 = 2
Is there also a way to find the min and max number of consecutive values? i.e. max for p1 is 6.
You can transpose your dataframe and use the method suggested in the post below. You will get a dataframe of counts of consecutive numbers, from which you can compute the mean, min and max.
https://stackoverflow.com/a/29643066/12452044
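As a rough sketch of that idea (my own illustration, not the linked answer verbatim), runs within a single row can be labelled with a shift/cumsum trick and then sized:

import pandas as pd

# frame rebuilt from the example above
df = pd.DataFrame([[0, 0, 4, 4, 4, 4, 4, 4, 1, 4, 4, 1],
                   [0, 4, 4, 0, 4, 4, 0, 1, 4, 4, 0, 1]],
                  index=['p1', 'p2'], columns=list('abcdefghijkl'))

row = df.loc['p1']                       # one row at a time, here p1
runs = (row != row.shift()).cumsum()     # label each run of equal values
run_lengths = row.groupby(runs).agg(['first', 'size'])
fours = run_lengths.loc[run_lengths['first'] == 4, 'size']
print(fours.mean(), fours.min(), fours.max())  # 4.0 2 6 for p1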
This will work for p1. To get p2, just replace the 0's with 1's whenever you see an 'iloc' function being used.
counts = {0: [], 1: [], 2: [], 3: [], 4: []}
counter = 1
for i in range(len(df.iloc[0]) - 1):
    num = df.iloc[0, i]
    num2 = df.iloc[0, i + 1]
    if num == num2:
        counter += 1
    else:
        counts[num].append(counter)
        counter = 1
counts[df.iloc[0, -1]].append(counter)  # record the final run as well
Then to get the average number of consecutive 4's:
print(sum(counts[4]) / len(counts[4]))
And to get the max number of consecutive 4's:
print(max(counts[4]))
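The minimum run length follows the same pattern, using the counts dictionary built above:

print(min(counts[4]))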

Rolling sum on a dynamic window

I am new to python and the last time I coded was in the mid-80's so I appreciate your patient help.
It seems .rolling(window) requires the window to be a fixed integer. I need a rolling window where the window or lookback period is dynamic and given by another column.
In the table below, I seek LookbackSum, which is the rolling sum of Data over the window specified by the Lookback column.
d={'Data':[1,1,1,2,3,2,3,2,1,2],
'Lookback':[0,1,2,2,1,3,3,2,3,1],
'LookbackSum':[1,2,3,4,5,8,10,7,8,3]}
df=pd.DataFrame(data=d)
eg:
Data Lookback LookbackSum
0 1 0 1
1 1 1 2
2 1 2 3
3 2 2 4
4 3 1 5
5 2 3 8
6 3 3 10
7 2 2 7
8 1 3 8
9 2 1 3
You can create a custom function for use with df.apply, eg:
def lookback_window(row, values, lookback, method='sum', *args, **kwargs):
    loc = values.index.get_loc(row.name)
    lb = lookback.loc[row.name]
    return getattr(values.iloc[loc - lb: loc + 1], method)(*args, **kwargs)
Then use it as:
df['new_col'] = df.apply(lookback_window, values=df['Data'], lookback=df['Lookback'], axis=1)
There may be some corner cases but as long as your indices align and are unique - it should fulfil what you're trying to do.
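One corner case worth guarding against (my addition, not part of the original answer): if Lookback ever exceeds the row's position, loc - lb goes negative and the iloc slice wraps around. A clamped variant, here called lookback_window_safe, avoids that:

def lookback_window_safe(row, values, lookback, method='sum', *args, **kwargs):
    loc = values.index.get_loc(row.name)
    lb = lookback.loc[row.name]
    start = max(loc - lb, 0)  # clamp so the slice never wraps to the end of the Series
    return getattr(values.iloc[start: loc + 1], method)(*args, **kwargs)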
Here is one with a list comprehension, which stores the index and value of the column df['Lookback'] and then gets the slice by reversing the values and slicing according to the column value:
df['LookbackSum'] = [sum(df.loc[:e, 'Data'][::-1].to_numpy()[:i+1])
                     for e, i in enumerate(df['Lookback'])]
print(df)
Data Lookback LookbackSum
0 1 0 1
1 1 1 2
2 1 2 3
3 2 2 4
4 3 1 5
5 2 3 8
6 3 3 10
7 2 2 7
8 1 3 8
9 2 1 3
An exercise in pain, if you want to try an almost fully vectorized approach. Sidenote: I don't think it's worth it here. At all.
Inspired by Divakar's answer here
Given:
import numpy as np
import pandas as pd
d={'Data':[1,1,1,2,3,2,3,2,1,2],
'Lookback':[0,1,2,2,1,3,3,2,3,1],
'LookbackSum':[1,2,3,4,5,8,10,7,8,3]}
df=pd.DataFrame(data=d)
Using the function from Divakar's answer, but slightly modified
from skimage.util.shape import view_as_windows as viewW
def strided_indexing_roll(a, r, fill_value=np.nan):
    # Concatenate with sliced to cover all rolls
    p = np.full((a.shape[0], a.shape[1]-1), fill_value)
    a_ext = np.concatenate((p, a, p), axis=1)
    # Get sliding windows; use advanced-indexing to select appropriate ones
    n = a.shape[1]
    return viewW(a_ext, (1, n))[np.arange(len(r)), -r + (n-1), 0]
Now, we just need to prepare a 2d array for the data and independently shift the rows according to our desired lookback values.
arr = df['Data'].to_numpy().reshape(1, -1).repeat(len(df), axis=0)
shifter = np.arange(len(df) - 1, -1, -1) #+ d['Lookback'] - 1
temp = strided_indexing_roll(arr, shifter, fill_value=0)
out = strided_indexing_roll(temp, (len(df) - 1 - df['Lookback'])*-1, 0).sum(-1)
Output:
array([ 1, 2, 3, 4, 5, 8, 10, 7, 8, 3], dtype=int64)
We can then just assign it back to the dataframe as needed and check.
df['out'] = out
#output:
Data Lookback LookbackSum out
0 1 0 1 1
1 1 1 2 2
2 1 2 3 3
3 2 2 4 4
4 3 1 5 5
5 2 3 8 8
6 3 3 10 10
7 2 2 7 7
8 1 3 8 8
9 2 1 3 3

1d Array Transformation: Distributing groups of different sizes into unique batches with certain conditions

Problem:
Create the most efficient function to turn one 1d array (the group_size column) into another 1d array (the batch_id column).
The conditions are:
At most n groups can be in any batch, in this example n=2.
Each batch must contain groups of the same size.
Trivial condition: minimise the number of batches.
The function will distribute these groups of different size into batches with unique identifiers, with the condition that each batch has a fixed size AND each batch contains only groups with the same size.
data = {'group_size': [1,2,3,1,2,3,4,5,1,2,1,1,1],
'batch_id': [1,4,6,1,4,6,7,8,2,5,2,3,3]}
df = pd.DataFrame(data=data)
print(df)
group_size batch_id
0 1 1
1 2 4
2 3 6
3 1 1
4 2 4
5 3 6
6 4 7
7 5 8
8 1 2
9 2 5
10 1 2
11 1 3
12 1 3
What I need:
some_function( data['group_size'] ) to give me data['batch_id']
Edit:
My Clumsy Function
def generate_array():
    out = 1
    batch_size = 2
    dictionary = {}
    for i in range(df['group_size'].max()):
        # get the mini df corresponding to the group size
        sub_df = df[df['group_size'] == i+1]
        # how many batches will we create?
        no_of_new_batches = np.ceil(sub_df.shape[0] / batch_size)
        # create new array
        a = np.repeat(np.arange(out, out+no_of_new_batches), batch_size)
        shift = len(a) - sub_df.shape[0]
        # remove last elements from array to match the size
        if len(a) != sub_df.shape[0]:
            a = a[0:-shift]
        # update batch id
        out = out + no_of_new_batches
        # create dictionary to store idx
        indexes = sub_df.index.values
        d = dict(zip(indexes, a))
        dictionary.update(d)
    array = [dictionary[i] for i in range(len(dictionary))]
    return array
generate_array()
Out[78]:
[1.0, 4.0, 6.0, 1.0, 4.0, 6.0, 7.0, 8.0, 2.0, 5.0, 2.0, 3.0, 3.0]
Here is my solution. I don't think it gives exactly the same result as your function, but it satisfies your three rules:
import numpy as np
def package(data, mxsz):
    idx = data.argsort()
    ds = data[idx]
    chng = np.empty((ds.size + 1,), bool)
    chng[0] = True
    chng[-1] = True
    chng[1:-1] = ds[1:] != ds[:-1]
    szs = np.diff(*np.where(chng))
    corr = (-szs) % mxsz
    result = np.empty_like(idx)
    result[idx] = (np.arange(idx.size) + corr.cumsum().repeat(szs)) // mxsz
    return result
data = np.random.randint(0, 4, (20,))
result = package(data, 3)
print(f'group_size {data}')
print(f'batch_id {result}')
check = np.lexsort((data, result))
print('sorted:')
print(f'group_size {data[check]}')
print(f'batch_id {result[check]}')
Sample run with n=3; the last two lines of the output are the same as the first two, only sorted for easier checking:
group_size [1 1 0 1 2 0 2 2 2 3 1 2 3 2 1 0 1 0 2 0]
batch_id [3 3 1 3 6 1 6 5 6 7 2 5 7 5 2 1 2 0 4 0]
sorted:
group_size [0 0 0 0 0 1 1 1 1 1 1 2 2 2 2 2 2 2 3 3]
batch_id [0 0 1 1 1 2 2 2 3 3 3 4 5 5 5 6 6 6 7 7]
How it works:
1) sort data
2) detect where the sorted data change, to identify groups of equal values ("groups of group sizes")
3) determine the sizes of these groups of group sizes and, for each, calculate how much is missing to reach a clean multiple of n
4) enumerate the sorted data, jumping to the next clean multiple of n at each switch to a new group of group sizes; we use (3) to do this in a vectorized fashion
5) floor divide by n to get the batch ids
6) shuffle back to original order
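For reference, a usage sketch against the question's data (the batch ids it produces may be numbered differently from the expected batch_id column, but they satisfy the three rules):

import numpy as np

group_size = np.array([1, 2, 3, 1, 2, 3, 4, 5, 1, 2, 1, 1, 1])
batch_id = package(group_size, 2) + 1  # +1 only to start numbering at 1 like the question
print(batch_id)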

How to count row values greater than a specific value in pandas?

How can I count, for each row, the number of values greater than a specific value in pandas?
For example, I have a pandas DataFrame dff and I want to count the values greater than 0 in each row.
dff = pd.DataFrame(np.random.randn(9,3),columns=['a','b','c'])
dff
a b c
0 -0.047753 -1.172751 0.428752
1 -0.763297 -0.539290 1.004502
2 -0.845018 1.780180 1.354705
3 -0.044451 0.271344 0.166762
4 -0.230092 -0.684156 -0.448916
5 -0.137938 1.403581 0.570804
6 -0.259851 0.589898 0.099670
7 0.642413 -0.762344 -0.167562
8 1.940560 -1.276856 0.361775
I am currently using an inefficient approach. How can this be done more efficiently?
dff['count'] = 0
for m in range(len(dff)):
    og = 0
    for i in dff.columns:
        if dff[i][m] > 0:
            og += 1
    dff['count'][m] = og
dff
a b c count
0 -0.047753 -1.172751 0.428752 1
1 -0.763297 -0.539290 1.004502 1
2 -0.845018 1.780180 1.354705 2
3 -0.044451 0.271344 0.166762 2
4 -0.230092 -0.684156 -0.448916 0
5 -0.137938 1.403581 0.570804 2
6 -0.259851 0.589898 0.099670 2
7 0.642413 -0.762344 -0.167562 1
8 1.940560 -1.276856 0.361775 2
You can create a boolean mask of your DataFrame that is True wherever a value is greater than your threshold (in this case 0), and then sum along the columns (axis=1).
dff.gt(0).sum(1)
0 1
1 1
2 2
3 2
4 0
5 2
6 2
7 1
8 2
dtype: int64
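Equivalently, you can spell out the comparison and the axis explicitly and assign the result back to a column, mirroring the question's loop:

dff['count'] = (dff > 0).sum(axis=1)  # per-row count of values greater than 0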
