I have a dataframe with 15 columns named 0,1,2,...,14. I would like to write a method that takes in this data and a vector of length 15, and returns the dataframe conditionally selected based on the vector that I have passed. E.g. the data passed is data_ and the vector passed is v_
I would like it to produce this:
data[(data[0] == v_[0]) & (data[1] == v_[1]) & ... & (data[14] == v_[14])]
However, I would like the method to be flexible, e.g. I could pass in a dataframe of 100 columns named 0, ..., 99 and a vector of length 99. My problem is that I do not know how to cleverly create [(data[0] == v_[0]) & (data[1] == v_[1]) & ... & (data[14] == v_[14])] programmatically to account for the "&" sign. Equally well, I would be satisfied with a method that could combine multiple NxM matrices filled with True and False into a single NxM matrix using "and" or "or".
Thank You very much!
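For the part about folding many element-wise comparisons together with "&" or "or", one option is functools.reduce over the per-column boolean masks. A minimal sketch (combine_masks is just an illustrative name, not from the answers below):
from functools import reduce
import operator

def combine_masks(data, v, op=operator.and_):
    # one boolean Series per column, folded together with & (pass op=operator.or_ for |)
    masks = [data[i] == v[i] for i in range(len(v))]
    return reduce(op, masks)

# usage: data_[combine_masks(data_, v_)] keeps the rows where every column i equals v_[i]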
You can try this:
def custom_filter(data, v):
    if len(data.columns) == len(v):
        # If data has the same number of columns
        # as v has elements
        mask = (data == v).all(axis=1)
    else:
        # If they have a different length, we'll need to subset
        # the data first, then create our mask
        # This attempts to subset the dataframe by assuming columns
        # 0 .. len(v) - 1 exist, and will throw an error otherwise
        colnames = list(range(len(v)))
        mask = (data[colnames] == v).all(axis=1)
    return data.loc[mask, :]
df = pd.DataFrame({
    "F": list("hiadsfin"),
    0: list("aaaabbbb"),
    1: list("cccdddee"),
    2: list("ffgghhij"),
    "H": list(range(1, 9))
})
v = ["a", "c", "f"]
df
F 0 1 2 H
0 h a c f 1
1 i a c f 2
2 a a c g 3
3 d a d g 4
4 s b d h 5
5 f b d h 6
6 i b e i 7
7 n b e j 8
custom_filter(df, v)
F 0 1 2 H
0 h a c f 1
1 i a c f 2
Note that with this function, if the number of columns exactly matches the length of your vector v, then you do not need to ensure the columns are labelled 0, 1, 2, ..., len(v)-1. However, if you have more columns than elements of v, you need to ensure that a subset of those columns is labelled 0, 1, 2, ..., len(v)-1. If v is longer than the number of columns in your dataframe, this will throw an error.
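For illustration, calling the function with a vector shorter than the number of columns only compares the columns labelled 0 and 1 (reusing the df defined above):
custom_filter(df, ["a", "c"])
F 0 1 2 H
0 h a c f 1
1 i a c f 2
2 a a c g 3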
This might work:
data[(data==v_.transpose())].dropna(axis=1)
Related
I have a dataframe in which some column values are delimited with '|', and I need to flatten this dataframe. Example:
name type
a l
b m
c|d|e n
For this df, I want to flatten it to:
name type
a l
b m
c n
d n
e n
To do this, I used this command:
df = df.assign(name=df.name.str.split('|')).explode('name').drop_duplicates()
Now, I want to do one more thing besides the above flatten operation:
name type co_occur
a l
b m
c n d
c n e
d n e
That is, not only split 'c|d|e' into multiple rows, but also create a new column that captures a 'co_occur' relationship, in which 'c', 'd' and 'e' co-occur with each other.
I don't see an easy way to do this by modifying:
df = df.assign(name=df.name.str.split('|')).explode('name').drop_duplicates()
I think this is what you want. Use combinations and piece everything together
from itertools import combinations
import io

import pandas as pd

data = '''name type
a l
b m
c|d|e n
j|k o
f|g|h|i p
'''
df = pd.read_csv(io.StringIO(data), sep=r'\s+', engine='python')
# hold the new dataframes as you iterate via apply()
df_hold = []
def explode_combos(x):
    combos = list(combinations(x['name'].split('|'), 2))
    # print(combos)
    # print(x['type'])
    df_hold.append(pd.DataFrame([{'name': c[0], 'type': x['type'], 'co_cur': c[1]} for c in combos]))
    return
# only apply() to those rows that need to be exploded
dft = df[df['name'].str.contains(r'\|')].apply(explode_combos, axis=1)
# concatenate the result
dfn = pd.concat(df_hold)
# add back the rows that weren't operated on (note the ~)
df_final = pd.concat([df[~df['name'].str.contains(r'\|')], dfn]).fillna('')
df_final
name type co_cur
0 a l
1 b m
0 c n d
1 c n e
2 d n e
0 j o k
0 f p g
1 f p h
2 f p i
3 g p h
4 g p i
5 h p i
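A variant of the same idea (a sketch of my own, not part of the answer above) that builds the rows with a plain loop instead of apply() plus a module-level list:
rows = []
for _, r in df.iterrows():
    parts = r['name'].split('|')
    if len(parts) == 1:
        # unsplit rows keep an empty co_cur, as in the output above
        rows.append({'name': parts[0], 'type': r['type'], 'co_cur': ''})
    else:
        # one row per ordered pair, e.g. c|d|e -> (c,d), (c,e), (d,e)
        rows.extend({'name': a, 'type': r['type'], 'co_cur': b}
                    for a, b in combinations(parts, 2))
df_final = pd.DataFrame(rows)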
I have a dataframe like this:
N1 N2
0 a b
1 b f
2 c d
3 d a
4 e b
I want to get the indexes with the repeated values between the two columns, and the value itself.
From the example, I should get something like these shortlists:
(value, idx(N1), idx(N2))
(a, 0, 3)
(b, 1, 0)
(b, 1, 4)
(d, 3, 2)
I have been able to do it with two for-loops, but for a half-million rows dataframe it took hours...
Use a numpy broadcasting comparison and then use argwhere to find the indices where the values were equal:
import numpy as np
import pandas as pd
# make a broadcasted comparison
mat = df['N2'].values == df['N1'].values[:, None]
# find the indices where the values are True
where = np.argwhere(mat)
# select the values
values = df['N1'][where[:, 0]]
# create the DataFrame
res = pd.DataFrame(data=[[val, *row] for val, row in zip(values, where)], columns=['values', 'idx_N1', 'idx_N2'])
print(res)
Output
values idx_N1 idx_N2
0 a 0 3
1 b 1 0
2 b 1 4
3 d 3 2
I have a data frame that looks like this:
Numbers Names
0 A
1 A
2 B
3 B
4 C
5 C
6 C
8 D
10 D
And my numbers (integers) need to be sequential IF the value in the column "Names" is the same for both numbers: so, for example, between 6 and 8 the numbers are not sequential, but that is fine since the column "Names" changes from C to D. However, between 8 and 10 this is a problem, since both rows have the same "Names" value but are not sequential.
I would like code that returns the missing numbers that need to be added according to the logic explained above.
import itertools as it
import pandas as pd
df = pd.read_excel("booki.xlsx")
c1 = df['Numbers'].copy()
c2 = df['Names'].copy()
for i in it.chain(range(1, len(c2)-1), range(1, len(c1)-1)):
    b = c2[i]
    c = c2[i+1]
    x = c1[i]
    n = c1[i+1]
    if c == b and n - x > 1:
        print(x+1)
It prints the missing numbers, but twice each, so for the data frame in the example it would print:
9
9
but I would like it to print only:
9
Perhaps it's some failure in the logic?
Thank you
You can use groupby('Names') and then shift to get the differences between consecutive elements within each group, then pick only the rows whose difference is not -1, and print the following number. (As for why your loop prints 9 twice: it.chain(range(1, len(c2)-1), range(1, len(c1)-1)) chains two identical ranges, since c1 and c2 have the same length, so every check runs twice.)
try this:
import pandas as pd
import numpy as np
from io import StringIO
df = pd.read_csv(StringIO("""
Numbers Names
0 A
1 A
2 B
3 B
4 C
5 C
6 C
8 D
10 D"""), sep="\s+")
differences = df.groupby('Names', as_index=False).apply(lambda g: g['Numbers'] - g['Numbers'].shift(-1)).fillna(-1).reset_index()
missing_numbers = (df[differences != -1]['Numbers'].dropna()+1).tolist()
print(missing_numbers)
Output:
[9.0]
I'm not sure itertools is needed here. Here is one solution using only pandas methods.
Group the data by the Names column using groupby
Select the min and max from the Numbers column
Define an integer range from min to max
Merge this range with the sub-dataframe
Filter to the missing values using isna
Return the filtered df
Optional: reindex the columns for prettier output with reset_index
Here is the code:
df = pd.DataFrame({"Numbers": [0, 1, 2, 3, 4, 5, 6, 8, 10, 15],
"Names": ["A", "A", "B", "B", "C", "C", "C", "D", "D", "D"]})
def select_missing(df):
# Select min and max values
min_ = df.Numbers.min()
max_ = df.Numbers.max()
# Create integer range
serie = pd.DataFrame({"Numbers": [i for i in range(min_, max_ + 1)]})
# Merge with df
m = serie.merge(df, on=['Numbers'], how='left')
# Return rows not matching the equality
return m[m.isna().any(axis=1)]
# Group the data per Names and apply "select_missing" function
out = df.groupby("Names").apply(select_missing)
print(out)
# Numbers Names
# Names
# D 1 9 NaN
# 3 11 NaN
# 4 12 NaN
# 5 13 NaN
# 6 14 NaN
out = out[["Numbers"]].reset_index(level=0)
print(out)
# Names Numbers
# 1 D 9
# 3 D 11
# 4 D 12
# 5 D 13
# 6 D 14
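If you just want the missing numbers as a flat list, one extra step (my addition, not part of the answer above):
missing = out["Numbers"].tolist()  # [9, 11, 12, 13, 14]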
I have a dataframe with sorted values labeled by ids and I want to take the difference of the value for the first element of an id with the values of the last elements of all previous ids. The code below does what I want:
import pandas as pd
a = 'a'; b = 'b'; c = 'c'
df = pd.DataFrame(data=[*zip([a, a, a, b, b, c, a], [1, 2, 3, 5, 6, 7, 8])],
columns=['id', 'value'])
print(df)
# # take the last value for a particular id
# last_value_for_id = df.loc[df.id.shift(-1) != df.id, :]
# print(last_value_for_id)
current_id = ''; prev_values = {}; diffs = {}
for t in df.itertuples(index=False):
    prev_values[t.id] = t.value
    if current_id != t.id:
        current_id = t.id
    else: continue
    for k, v in prev_values.items():
        if k == current_id: continue
        diffs[(k, current_id)] = t.value - v
print(pd.DataFrame(data=diffs.values(), columns=['diff'], index=diffs.keys()))
prints:
id value
0 a 1
1 a 2
2 a 3
3 b 5
4 b 6
5 c 7
6 a 8
diff
a b 2
c 4
b c 1
a 2
c a 1
I want to do this in a vectorized manner however. I have found a way of getting the series of last elements as in:
# take the last value for a particular id
last_value_for_id = df.loc[df.id.shift(-1) != df.id, :]
print(last_value_for_id)
which gives me:
id value
2 a 3
4 b 6
5 c 7
but can't find a way of using this to take the diffs in a vectorized manner
Depending on how many ids you have, this works with a few thousand:
import numpy as np

# enumerate the ids, in the same (sorted) order groupby will use
ids = [a, b, c]
num_ids = len(ids)
# compute first and last value per id
f = df.groupby('id').value.agg(['first', 'last'])
# lower-triangle mask (including the diagonal)
mask = np.array([[i >= j for j in range(num_ids)] for i in range(num_ids)])
# compute diff of first and last, then blank out the masked entries
diff = np.where(mask, None, f['first'].values[None, :] - f['last'].values[:, None])
diff = pd.DataFrame(diff,
                    index=ids,
                    columns=ids)
# stack
diff.stack()
output:
a b 2
c 4
b c 1
dtype: object
Edit for updated data:
For the updated data, the approach is similar once we create the f table:
# create blocks of consecutive id
blocks = df['id'].ne(df['id'].shift()).cumsum()
# groupby
groups = df.groupby(blocks)
# create first and last values
df['fv'] = groups.value.transform('first')
df['lv'] = groups.value.transform('last')
# then reuse the mask/diff steps above with this f and these ids;
# note the column name change (fv/lv instead of first/last)
f = df[['id', 'fv', 'lv']].drop_duplicates()
ids = f['id'].values
num_ids = len(ids)
Output:
a b 2
c 4
a 5
b c 1
a 2
c a 1
dtype: object
If you want to go further and drop the index (a,a), well, I'm so lazy :D.
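For completeness, a sketch of that last step: repeat the mask/diff/stack from the first version with the renamed fv/lv columns, then keep only the pairs whose two labels differ, which drops entries like (a, a):
mask = np.array([[i >= j for j in range(num_ids)] for i in range(num_ids)])
diff = np.where(mask, None, f['fv'].values[None, :] - f['lv'].values[:, None])
out = pd.DataFrame(diff, index=ids, columns=ids).stack()
# drop pairs such as (a, a) where both labels are the same id
out = out[out.index.get_level_values(0) != out.index.get_level_values(1)]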
My method
s=df.groupby(df.id.shift().ne(df.id).cumsum()).agg({'id':'first','value':['min','max']})
s.columns=s.columns.droplevel(0)
t=s['min'].values[:,None]-s['max'].values
t=t.astype(float)
Below is all reshaping, to match your output
t[np.triu_indices(t.shape[1], 0)] = np.nan
newdf=pd.DataFrame(t,index=s['first'],columns=s['first'])
newdf.values[newdf.index.values[:,None]==newdf.index.values]=np.nan
newdf=newdf.T.stack()
newdf
Out[933]:
first first
a b 2.0
c 4.0
b c 1.0
a 2.0
c a 1.0
dtype: float64
I have a df
a name
1 a/b/c
2 w/x/y/z
3 q/w/e/r/t
I want to split the name column on '/' to get this output:
id name main sub leaf
1 a/b/c a b c
2 w/x/y/z w x z
3 q/w/e/r/t q w t
i.e. the first two parts (before the first two slashes) become main and sub respectively,
and leaf should be filled with the part after the last slash.
I tried using this, but the result was incorrect:
df['name'].str.split('/', expand=True).rename(columns={0:'main',1:'sub',2:'leaf'})
Is there any way to assign the columns?
Use split with assign:
s = df['name'].str.split('/')
df = df.assign(main=s.str[0], sub=s.str[1], leaf=s.str[-1])
print (df)
a name leaf main sub
0 1 a/b/c c a b
1 2 w/x/y/z z w x
2 3 q/w/e/r/t t q w
To change the order of the columns:
s = df['name'].str.split('/')
df = df.assign(main=s.str[0], sub=s.str[1], leaf=s.str[-1])
df = df[df.columns[:-3].tolist() + ['main','sub','leaf']]
print (df)
a name main sub leaf
0 1 a/b/c a b c
1 2 w/x/y/z w x z
2 3 q/w/e/r/t q w t
Or:
s = df['name'].str.split('/')
df = (df.join(pd.DataFrame({'main':s.str[0], 'sub':s.str[1], 'leaf':s.str[-1]},
columns=['main','sub','leaf'])))
print (df)
a name main sub leaf
0 1 a/b/c a b c
1 2 w/x/y/z w x z
2 3 q/w/e/r/t q w t
Option 1
Use str.split, but don't expand the result; you should end up with a column of lists. Next, use df.assign to assign the new columns and return a new DataFrame object.
v = df['name'].str.split('/')
df.assign(
main=v.str[ 0],
sub=v.str[ 1],
leaf=v.str[-1]
)
name leaf main sub
a
1 a/b/c c a b
2 w/x/y/z z w x
3 q/w/e/r/t t q w
Details
This is what v looks like:
a
1 [a, b, c]
2 [w, x, y, z]
3 [q, w, e, r, t]
Name: name, dtype: object
This is actually a lot easier to handle, because you have greater control over elements with the .str accessor. If you expand the result, you have to snap your ragged data to a tabular format to fit into a new DataFrame object, thereby introducing Nones. At that point, indexing (finding the ith or ith-last element) becomes a chore.
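For comparison, this is roughly what the expanded form of the same data looks like; the shorter rows get padded with None, so the last element no longer sits in a fixed column:
df['name'].str.split('/', expand=True)
   0  1  2     3     4
a
1  a  b  c  None  None
2  w  x  y     z  None
3  q  w  e     r     t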
Option 2
Using direct assignment (to maintain order) -
df['main'] = v.str[ 0]
df['sub' ] = v.str[ 1]
df['leaf'] = v.str[-1]
df
name main sub leaf
a
1 a/b/c a b c
2 w/x/y/z w x z
3 q/w/e/r/t q w t
Note that this modifies the original dataframe, instead of returning a new one, so it is cheaper. However, it is more intractable if you have a large number of columns.
You might instead consider this alternative which should generalise to many more columns:
for c, i in [('main', 0), ('sub', 1), ('leaf', -1)]:
    df[c] = v.str[i]
df
name main sub leaf
a
1 a/b/c a b c
2 w/x/y/z w x z
3 q/w/e/r/t q w t
Iterate over a list of tuples. The first element in a tuple is the column name, and the second is the corresponding index to pick the result from v. You still have to assign each one separately, whether you like it or not. Using a loop would probably be a clean way of doing it.