I have the following dataframe in pandas:
C1 C2 C3
10 a b
10 a b
? c c
? ? b
10 a b
10 ? ?
I want to count the occurrences of ? in all of the columns.
My desired output is the column-wise sum of occurrences.
Use:
m = df.eq('?').sum()
pd.DataFrame([m.values], columns=m.index)
C1 C2 C3
0 2 2 1
Or better:
df.eq('?').sum().to_frame().T  # thanks @user3483203
C1 C2 C3
0 2 2 1
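For completeness, a minimal reproducible sketch (assuming the ? entries are stored as literal strings rather than NaN):
import pandas as pd

# Sample frame from the question; the '?' cells are plain strings
df = pd.DataFrame({'C1': ['10', '10', '?', '?', '10', '10'],
                   'C2': ['a', 'a', 'c', '?', 'a', '?'],
                   'C3': ['b', 'b', 'c', 'b', 'b', '?']})

print(df.eq('?').sum().to_frame().T)
#    C1  C2  C3
# 0   2   2   1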
I'm in need of some advice on the following issue:
I have a DataFrame that looks like this:
ID SEQ LEN BEG_GAP END_GAP
0 A1 AABBCCDDEEFFGG 14 2 4
1 A1 AABBCCDDEEFFGG 14 10 12
2 B1 YYUUUUAAAAMMNN 14 4 6
3 B1 YYUUUUAAAAMMNN 14 8 12
4 C1 LLKKHHUUTTYYYYYYYYAA 20 7 9
5 C1 LLKKHHUUTTYYYYYYYYAA 20 12 15
6 C1 LLKKHHUUTTYYYYYYYYAA 20 17 18
What I need to get is the SEQ split at the different BEG_GAP and END_GAP positions. I have already worked it out (thanks to a previous question) for sequences that have only one pair of gaps, but here they have multiple.
This is what the sequences should look like:
ID SEQ
0 A1 AA---CDDEE---GG
1 B1 YYUU---A-----NN
2 C1 LLKKHHU---YY----Y--A
Or in an exploded DF:
ID Seq_slice
0 A1 AA
1 A1 CDDEE
2 A1 GG
3 B1 YYUU
4 B1 A
5 B1 NN
6 C1 LLKKHHU
7 C1 YY
8 C1 Y
9 C1 A
At the moment, I'm using a piece of code (that I got thanks to a previous question) that works only if there's one gap, and it looks like this:
import pandas as pd
df = pd.read_csv(r"..\path_to_the_csv.csv")  # raw string so the backslash isn't treated as an escape
df["BEG_GAP"] = df["BEG_GAP"].astype(int)
df["END_GAP"] = df["END_GAP"].astype(int)
df['SEQ'] = df.apply(lambda x: [x.SEQ[:x.BEG_GAP], x.SEQ[x.END_GAP+1:]], axis=1)
output = df.explode('SEQ').query('SEQ!=""')
But this has the problem that it generates a bunch of sequences that don't really exist because they actually have another gap in the middle.
I.e., what it would generate:
ID Seq_slice
0 A1 AA
1 A1 CDDEEFFG #<- this one shouldn't exist! Because there's another gap in 10-12
2 A1 AABBCCDDEE #<- Also, this one shouldn't exist, it's missing the previous gap.
3 A1 GG
And so on, with the other sequences. As you can see, some slices are not being generated and some are wrong, because I don't know how to tell the code to take all the gaps into account while analyzing the sequence.
All advice is appreciated, I hope I was clear!
Let's try defining a function and using apply:
def truncate(data):
    # Each ID group carries a single SEQ and LEN
    seq = data.SEQ.iloc[0]
    ll = data.LEN.iloc[0]
    # Slice from the end of one gap to the beginning of the next,
    # padding with 0 at the front and LEN at the back
    return [seq[x:y] for x, y in zip([0] + list(data.END_GAP),
                                     list(data.BEG_GAP) + [ll])]

(df.groupby('ID').apply(truncate)
   .explode().reset_index(name='Seq_slice')
)
Output:
ID Seq_slice
0 A1 AA
1 A1 CCDDEE
2 A1 GG
3 B1 YYUU
4 B1 AA
5 B1 NN
6 C1 LLKKHHU
7 C1 TYY
8 C1 YY
9 C1 AA
In one line:
df.groupby('ID').agg({'BEG_GAP': list, 'END_GAP': list, 'SEQ': max, 'LEN': max}).apply(lambda x: [x['SEQ'][b: e] for b, e in zip([0] + x['END_GAP'], x['BEG_GAP'] + [x['LEN']])], axis=1).explode()
ID
A1 AA
A1 CCDDEE
A1 GG
B1 YYUU
B1 AA
B1 NN
C1 LLKKHHU
C1 TYY
C1 YY
C1 AA
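Both solutions assume that, within each ID, the gap rows are sorted by BEG_GAP and that the gaps do not overlap. For anyone who wants to test them, a sketch that hard-codes the sample data from the question:
import pandas as pd

df = pd.DataFrame({
    'ID': ['A1', 'A1', 'B1', 'B1', 'C1', 'C1', 'C1'],
    'SEQ': (['AABBCCDDEEFFGG'] * 2 + ['YYUUUUAAAAMMNN'] * 2
            + ['LLKKHHUUTTYYYYYYYYAA'] * 3),
    'LEN': [14, 14, 14, 14, 20, 20, 20],
    'BEG_GAP': [2, 10, 4, 8, 7, 12, 17],
    'END_GAP': [4, 12, 6, 12, 9, 15, 18],
})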
df1=
A B C D
a1 b1 c1 1
a2 b2 c2 2
a3 b3 c3 4
df2=
A B C D
a1 b1 c1 2
a2 b2 c2 1
I want to compare the values of column 'D' in both dataframes. If both dataframes had the same number of rows, I would just do this:
newDF = df1['D']-df2['D']
However, there are times when the numbers of rows differ. I want a result DataFrame like this:
resultDF=
A B C D_df1 D_df2 Diff
a1 b1 c1 1 2 -1
a2 b2 c2 2 1 1
EDIT: only if a row's A, B, C values are the same in df1 and df2 should its column D values be compared. Repeat similarly for every row.
Use merge and DataFrame.eval:
df1.merge(df2, on=['A','B','C'], suffixes=['_df1','_df2']).eval('Diff=D_df1 - D_df2')
Out[314]:
A B C D_df1 D_df2 Diff
0 a1 b1 c1 1 2 -1
1 a2 b2 c2 2 1 1
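A minimal sketch reproducing this, with the sample frames hard-coded:
import pandas as pd

df1 = pd.DataFrame({'A': ['a1', 'a2', 'a3'], 'B': ['b1', 'b2', 'b3'],
                    'C': ['c1', 'c2', 'c3'], 'D': [1, 2, 4]})
df2 = pd.DataFrame({'A': ['a1', 'a2'], 'B': ['b1', 'b2'],
                    'C': ['c1', 'c2'], 'D': [2, 1]})

# The inner merge keeps only rows whose A, B, C values appear in both
# frames, which is exactly the row-matching rule from the EDIT.
result = (df1.merge(df2, on=['A', 'B', 'C'], suffixes=['_df1', '_df2'])
             .eval('Diff = D_df1 - D_df2'))
print(result)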
I'm trying to duplicate rows of a pandas DataFrame (v.0.23.4, python v.3.7.1) based on an int value in one of the columns. I'm applying code from this question to do that, but I'm running into the following data type casting error: TypeError: Cannot cast array data from dtype('int64') to dtype('int32') according to the rule 'safe'. Basically, I'm not understanding why this code is attempting to cast to int32.
Starting with this:
dummy_dict = {'c1': ['a', 'b', 'c'],
              'c2': [0, 1, 2],
              'c3': ['textA', 'textB', 'textC']}
dummy_df = pd.DataFrame(dummy_dict)
  c1  c2     c3
0  a   0  textA
1  b   1  textB
2  c   2  textC
I'm doing this
dummy_df_test = dummy_df.reindex(dummy_df.index.repeat(dummy_df['c2']))
I want this at the end. However, I'm getting the above error instead.
c1 c2 c3
0 a 0 textA
1 b 1 textB
2 c 2 textC
3 c 2 textC
Just a workaround:
pd.concat([dummy_df[dummy_df.c2.eq(0)],dummy_df.loc[dummy_df.index.repeat(dummy_df.c2)]])
Another fantastic suggestion, courtesy of @Wen:
dummy_df.reindex(dummy_df.index.repeat(dummy_df['c2'].clip(lower=1)))
c1 c2
0 a 0
1 b 1
2 c 2
2 c 2
I believe the answer as to why it's happening can be found here:
https://github.com/numpy/numpy/issues/4384
Specifying the dtype as int32 should solve the problem as highlighted in the original comment.
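If that diagnosis is right, a hedged sketch of the fix (assuming a platform, such as Windows, where numpy's default integer is 32-bit):
# Casting the repeat counts down to int32 sidesteps the unsafe
# int64 -> int32 cast inside np.repeat on such platforms.
dummy_df_test = dummy_df.reindex(
    dummy_df.index.repeat(dummy_df['c2'].astype('int32')))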
In the first example all rows are duplicated, and in the second only the row with index 2, thanks to the concat function:
df2 = pd.concat([df]*2, ignore_index=True)
print(df2)
df3= pd.concat([df, df.iloc[[2]]])
print(df3)
c1 c2 c3
0 a 0 textA
1 b 1 textB
2 c 2 textC
c1 c2 c3
0 a 0 textA
1 b 1 textB
2 c 2 textC
3 a 0 textA
4 b 1 textB
5 c 2 textC
c1 c2 c3
0 a 0 textA
1 b 1 textB
2 c 2 textC
2 c 2 textC
If you plan to reset the index at the end:
df3 = df3.reset_index(drop=True)
I have a very large csv file with the following structure:
a1 b1 c1 a2 b2 c2 a3 b3 c3 ..... a999 b999 c999
0 5 4 2 3 2 2 6 7 9 ....................
1 2 1 4 4 6 9 3 5 9 ....................
.
.
What I want to do is group the columns in sets of N for a, b and c, and check, in each row, when the index of the maximum value (argmax) of the set changes.
So in the above example, for N = 3, a1, b1, c1 is the first set in row 0 and its argmax is 0; the 2nd set is a2, b2, c2 and the argmax is still 0; the 3rd set is a3, b3, c3, but now the argmax is 2. Ideally I am looking for a script that parses the whole csv file and returns [c3, c1]: c3 because that's where the argmax changes in row 0, and c1 because the argmax doesn't change in row 1 but c1 holds the largest value in that set.
I am doing this right now with two for loops, and it's slow and looks very ugly. Is there a better, more pandas-pythonic way of doing this? I feel there must be.
I tried to keep the code as simple as possible. You can transpose your dataframe and group by the sliced column name:
df_t = df.T.reset_index()
idx = df_t.groupby(df_t['index'].str.slice(1, 2)).idxmax()
Output:
0 1
index
1 0 2
2 3 5
3 8 8
That means that for row 0 the max for group 1 is at index 0, the max for group 2 is at index 3 (or 0 if you take it mod 3), and the max for group 3 is at index 8 (or 2 if you take it mod 3). Same reading for row 1 :)
If you need the actual column name:
df.columns[idx.values.flatten(order='F')]
Output:
['a1', 'a2', 'c3', 'c1', 'c2', 'c3']
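A hedged caveat: str.slice(1, 2) only captures a single character, so this assumes the group numbers never exceed 9. If columns like a10, b10, c10 can occur, extracting the full trailing digits is safer:
# Group on the whole numeric suffix instead of one character
idx = df_t.groupby(df_t['index'].str.extract(r'(\d+)$', expand=False)).idxmax()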
You can groupby sets of columns and use .idxmax to find the column where the maximum occurs within each set. You can find where the first letter changes (if it ever does) to get your list.
n = 3
df2 = df.groupby([x//n for x in range(len(df.columns))], axis=1).idxmax(1)
mask = df2.applymap(lambda x: x[0]) # Case of 1-letter column prefix
## If possibility of words with different length ending in digits try
# import string
# mask = df2.applymap(lambda x: x.strip(string.digits))
df2.lookup(df2.index,
           (mask.ne(mask.shift(-1, axis=1)).idxmax(1) + 1) % (len(mask.columns))).tolist()
Sample Data
print(df)
a1 b1 c1 a2 b2 c2 a3 b3 c3
0 5 4 2 3 2 2 6 7 9
1 2 1 4 4 6 9 3 5 9
2 2 1 4 10 6 9 3 5 9
3 2 1 4 1 6 9 3 10 9
n = 3
df2 = df.groupby([x//n for x in range(len(df.columns))], axis=1).idxmax(1)
print(df2)
# 0 1 2
#0 a1 a2 c3
#1 c1 c2 c3
#2 c1 a2 c3
#3 c1 c2 b3
mask = df2.applymap(lambda x: x[0])
df2.lookup(df2.index, (mask.ne(mask.shift(-1, axis=1)).idxmax(1)+1) % (len(mask.columns))).tolist()
#['c3', 'c1', 'a2', 'b3']
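A side note on portability: DataFrame.lookup was deprecated in pandas 1.2 and removed in 2.0, so on newer versions the final lookup step could be replaced with numpy indexing along these lines:
import numpy as np

# Label of the chosen group per row, as in the lookup call above
chosen = (mask.ne(mask.shift(-1, axis=1)).idxmax(1) + 1) % len(mask.columns)
# Convert the labels to column positions and index the underlying array
col_pos = df2.columns.get_indexer(chosen)
result = df2.to_numpy()[np.arange(len(df2)), col_pos].tolist()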
I have a dataframe with many columns (around 1000).
Given a set of columns (around 10), which have 0 or 1 as values, I would like to select all the rows where I have 1s in the aforementioned set of columns.
Toy example. My dataframe is something like this:
c1,c2,c3,c4,c5
'a',1,1,0,1
'b',0,1,0,0
'c',0,0,1,1
'd',0,1,0,0
'e',1,0,0,1
And I would like to get the rows where the columns c2 and c5 are equal to 1:
'a',1,1,0,1
'e',1,0,0,1
What would be the most efficient way to do it?
Thanks!
This would be more generic for multiple columns cols:
In [1277]: cols = ['c2', 'c5']
In [1278]: df[(df[cols] == 1).all(1)]
Out[1278]:
c1 c2 c3 c4 c5
0 'a' 1 1 0 1
4 'e' 1 0 0 1
Or,
In [1284]: df[np.logical_and.reduce([df[x]==1 for x in cols])]
Out[1284]:
c1 c2 c3 c4 c5
0 'a' 1 1 0 1
4 'e' 1 0 0 1
Or,
In [1279]: df.query(' and '.join(['%s==1'%x for x in cols]))
Out[1279]:
c1 c2 c3 c4 c5
0 'a' 1 1 0 1
4 'e' 1 0 0 1
Can you try doing something like this:
df.loc[(df['c2'] == 1) & (df['c5'] == 1)]
Note the parentheses: & binds more tightly than ==, so each comparison needs its own pair.
import pandas as pd

frame = pd.DataFrame([
    ['a', 1, 1, 0, 1],
    ['b', 0, 1, 0, 0],
    ['c', 0, 0, 1, 1],
    ['d', 0, 1, 0, 0],
    ['e', 1, 0, 0, 1]], columns='c1,c2,c3,c4,c5'.split(','))

print(frame.loc[(frame['c2'] == 1) & (frame['c5'] == 1)])