I have a huge matrix whose elements are numbers in the range 1 to 15. I want to transform it into a matrix whose elements are letters, such that 1 becomes "a", 2 becomes "b", and so on. Finally, I want to merge each row into a single string. As a simple example:
import pandas as pd
import numpy as np, numpy.random
numpy.random.seed(1)
A = pd.DataFrame(np.random.randint(1, 16, 10).reshape(2, 5))
A.iloc[1, 4] = np.nan
A
# 0 1 2 3 4
#0 6 12 13 9 10.0
#1 12 6 1 1 NaN
If there were no NaN in the dataset, I would use this code:
pd.DataFrame(list(map(''.join, A.applymap(lambda n: chr(n + 96)).to_numpy())))
Here, it gives this error:
TypeError: ('integer argument expected, got float', 'occurred at index 4')
The expected output is:
0
0 flmij
1 lfaa
The first row should have 5 elements and the second one should have 4 elements.
Use an if-else condition inside applymap, then sum the strings across each row:
df = pd.DataFrame(A.applymap(lambda n: chr(int(n) + 96) if pd.notnull(n) else '')
                  .values.sum(axis=1))
print (df)
0
0 flmij
1 lfaa
Details:
print (A.applymap(lambda n: chr(int(n) + 96) if pd.notnull(n) else ''))
0 1 2 3 4
0 f l m i j
1 l f a a
print (A.applymap(lambda n: chr(int(n) + 96) if pd.notnull(n) else '').values)
[['f' 'l' 'm' 'i' 'j']
['l' 'f' 'a' 'a' '']]
print (A.applymap(lambda n: chr(int(n) + 96) if pd.notnull(n) else '').values.sum(axis=1))
['flmij' 'lfaa']
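The .values.sum(axis=1) trick works because summing an object-dtype array concatenates Python strings. If you prefer to stay in pandas, a row-wise join should give the same result; a minimal sketch, reusing A from above:
letters = A.applymap(lambda n: chr(int(n) + 96) if pd.notnull(n) else '')
print(letters.apply(''.join, axis=1))
# 0    flmij
# 1     lfaa
# dtype: object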
Another solution:
print (A.stack().astype(int).add(96).apply(chr).groupby(level=0).sum())
0 flmij
1 lfaa
dtype: object
Details:
Reshape to Series:
print (A.stack())
0 0 6.0
1 12.0
2 13.0
3 9.0
4 10.0
1 0 12.0
1 6.0
2 1.0
3 1.0
dtype: float64
Convert to integers:
print (A.stack().astype(int))
0 0 6
1 12
2 13
3 9
4 10
1 0 12
1 6
2 1
3 1
dtype: int32
Add 96 (so that 1 maps to 97, the code point of "a"):
print (A.stack().astype(int).add(96))
0 0 102
1 108
2 109
3 105
4 106
1 0 108
1 102
2 97
3 97
dtype: int32
Convert to letters:
print (A.stack().astype(int).add(96).apply(chr))
0 0 f
1 l
2 m
3 i
4 j
1 0 l
1 f
2 a
3 a
dtype: object
Sum by first level of MultiIndex:
print (A.stack().astype(int).add(96).apply(chr).groupby(level=0).sum())
0 flmij
1 lfaa
dtype: object
Try this (filling NaN with 0 maps it to chr(96), the backtick, which is then stripped):
A.fillna(0, inplace=True)
A.applymap(lambda x: chr(int(x) + 96)).sum(axis=1).str.replace('`', '')
0 flmij
1 lfaa
dtype: object
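If you'd rather not mutate A with inplace=True, the same idea works on the fly; a sketch:
A.fillna(0).applymap(lambda x: chr(int(x) + 96)).sum(axis=1).str.replace('`', '')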
You could use a categorical. It's useful if you're doing more than just mapping to individual characters.
import pandas as pd
import numpy as np, numpy.random
numpy.random.seed(1)
A_int = pd.DataFrame(np.random.randint(1, 16, 10).reshape(2, 5))
A_int.iloc[1, 4] = np.nan
int_vals = list(range(1,16))
chr_vals = [chr(n+96) for n in int_vals]
A_chr = A_int.apply(axis=0, func=lambda x: pd.Categorical(x, categories=int_vals, ordered=True).rename_categories(chr_vals))
A_chr.apply(axis=1, func=lambda x: ''.join([str(i) for i in x[pd.notnull(x)]]))
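Since the categories are ordered, comparisons against a category also work, which is the sort of extra you don't get from a plain chr mapping; a quick sketch on the frame above:
print(A_chr[0] > 'f')  # compare column 0 against the category 'f'
# 0    False
# 1     True
# Name: 0, dtype: bool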
I have a large dataset and I want to sample from it, but with a condition. I need a new dataframe where the counts of the two values of a boolean column (0 and 1) are almost the same.
What I have:
df['target'].value_counts()
0 = 4000
1 = 120000
What I need:
new_df['target'].value_counts()
0 = 4000
1 = 6000
I know I can use df.sample, but I don't know how to apply the condition.
Since pandas 1.1.0, you can use groupby.sample if you need the same number of rows for each group:
df.groupby('target').sample(4000)
Demo:
df = pd.DataFrame({'x': [0] * 10 + [1] * 25})
df.groupby('x').sample(5)
x
8 0
6 0
7 0
2 0
9 0
18 1
33 1
24 1
32 1
15 1
If you need to sample conditionally based on the group value, you can do:
df.groupby('target', group_keys=False).apply(
    lambda g: g.sample(4000 if g.name == 0 else 6000)
)
Demo:
df.groupby('x', group_keys=False).apply(
    lambda g: g.sample(4 if g.name == 0 else 6)
)
x
7 0
8 0
2 0
1 0
18 1
12 1
17 1
22 1
30 1
28 1
Assuming the following input and using the values 4/6 instead of 4000/6000:
df = pd.DataFrame({'target': [0,1,1,1,0,1,1,1,0,1,1,1,0,1,1,1]})
You could groupby your target and sample to take at most N values per group:
df.groupby('target', group_keys=False).apply(lambda g: g.sample(min(len(g), 6)))
example output:
target
4 0
0 0
8 0
12 0
10 1
14 1
1 1
7 1
11 1
13 1
If you want the same size you can simply use df.groupby('target').sample(n=4)
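To hit the asker's exact target counts (keep all 4000 zeros, sample 6000 of the ones), you could also sample each class separately and concatenate; a minimal sketch, assuming the counts from the question:
minority = df[df['target'] == 0]                                  # keep every 0 row
majority = df[df['target'] == 1].sample(n=6000, random_state=0)   # downsample the 1s
new_df = pd.concat([minority, majority]).sample(frac=1, random_state=0)  # shuffle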
I have two columns which I want to compare every nth row. When it reaches the nth row it should compare them and put the result of the if statement in a new column.
When I tried the enumerate function, it always ends up in the true part of the if statement. Somehow this piece of the code is always true:
if (count % 3) == 0:
for count, factors in enumerate(df.index):
    if (count % 3) == 0:  # every 3rd row
        df['Signal'] = np.where(df['Wind Ch'] >= df['Rain Ch'], '1', '-1')
    else:
        df['Signal'] = 0
In column 'Signal' I am expecting '1' or '-1' every 3rd row and '0' on all the other rows. However, I am getting '1' or '-1' on every row.
Now I am getting:
Date Wind CH Rain CH Signal
0 5/10/2005 -1.85% -3.79% 1
1 5/11/2005 1.51% -1.66% 1
2 5/12/2005 0.37% 0.88% -1
3 5/13/2005 -0.81% 3.83% -1
4 5/14/2005 -0.28% 4.05% -1
5 5/15/2005 3.93% 1.79% 1
6 5/16/2005 6.23% 0.94% 1
7 5/17/2005 -0.08% 4.43% -1
8 5/18/2005 -2.69% 4.02% -1
9 5/19/2005 6.40% 1.33% 1
10 5/20/2005 -3.41% 2.38% -1
11 5/21/2005 3.27% 5.46% -1
12 5/22/2005 -4.40% -4.15% -1
13 5/23/2005 3.27% 4.48% -1
But I want to get:
Date Wind CH Rain CH Signal
0 5/10/2005 -1.85% -3.79% 0.0
1 5/11/2005 1.51% -1.66% 0.0
2 5/12/2005 0.37% 0.88% -1.0
3 5/13/2005 -0.81% 3.83% 0.0
4 5/14/2005 -0.28% 4.05% 0.0
5 5/15/2005 3.93% 1.79% 1.0
6 5/16/2005 6.23% 0.94% 0.0
7 5/17/2005 -0.08% 4.43% 0.0
8 5/18/2005 -2.69% 4.02% -1.0
9 5/19/2005 6.40% 1.33% 0.0
10 5/20/2005 -3.41% 2.38% 0.0
11 5/21/2005 3.27% 5.46% -1.0
12 5/22/2005 -4.40% -4.15% 0.0
13 5/23/2005 3.27% 4.48% 0.0
What am I missing here?
You can go about it like this, using np.vectorize to avoid loops:
import numpy as np
def calcSignal(x, y, i):
    return 0 if (i + 1) % 3 != 0 else 1 if x >= y else -1

func = np.vectorize(calcSignal)
df['Signal'] = func(df['Wind CH'], df['Rain CH'], df.index)
df
Date Wind CH Rain CH Signal
0 5/10/2005 -1.85% -3.79% 0
1 5/11/2005 1.51% -1.66% 0
2 5/12/2005 0.37% 0.88% -1
3 5/13/2005 -0.81% 3.83% 0
4 5/14/2005 -0.28% 4.05% 0
5 5/15/2005 3.93% 1.79% 1
6 5/16/2005 6.23% 0.94% 0
7 5/17/2005 -0.08% 4.43% 0
8 5/18/2005 -2.69% 4.02% -1
9 5/19/2005 6.40% 1.33% 0
10 5/20/2005 -3.41% 2.38% 0
11 5/21/2005 3.27% 5.46% -1
12 5/22/2005 -4.40% -4.15% 0
13 5/23/2005 3.27% 4.48% 0
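Note that np.vectorize is essentially a Python-level loop with a convenient broadcast interface. The same logic can be written fully vectorized with nested np.where; a sketch, assuming Wind CH and Rain CH hold numeric values and the index is the default RangeIndex:
df['Signal'] = np.where(
    (df.index + 1) % 3 == 0,                          # only every 3rd row gets a signal
    np.where(df['Wind CH'] >= df['Rain CH'], 1, -1),  # compare the two columns
    0,                                                # all other rows
)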
In general you don't want to loop over pandas objects. This case is no exception.
In [12]: df = pd.DataFrame({'x': [1,2,3], 'y': [10, 20, 30]})
In [13]: df
Out[13]:
x y
0 1 10
1 2 20
2 3 30
In [14]: df.loc[df.index % 2 == 0, 'x'] = 5
In [15]: df
Out[15]:
x y
0 5 10
1 2 20
2 5 30
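Applied to your problem, the same masked-assignment pattern might look like this (a sketch, again assuming numeric columns and a default RangeIndex):
df['Signal'] = 0
mask = (df.index + 1) % 3 == 0  # every 3rd row
df.loc[mask, 'Signal'] = np.where(df.loc[mask, 'Wind CH'] >= df.loc[mask, 'Rain CH'], 1, -1)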
There is no need to use the enumerate function as I see it. Also, your logic is faulty: you are rewriting the complete column in every iteration of the loop instead of the i-th row. You could simply do this:
for count in range(len(df.index)):
    if (count % 3) == 0:  # every 3rd row
        df.loc[df.index[count], 'Signal'] = '1' if df['Wind Ch'].iloc[count] >= df['Rain Ch'].iloc[count] else '-1'
    else:
        df.loc[df.index[count], 'Signal'] = 0
Here is my data:
import numpy as np
import pandas as pd
z = pd.DataFrame({'a':[1,1,1,2,2,3,3],'b':[3,4,5,6,7,8,9], 'c':[10,11,12,13,14,15,16]})
z
a b c
0 1 3 10
1 1 4 11
2 1 5 12
3 2 6 13
4 2 7 14
5 3 8 15
6 3 9 16
Question:
How can I do a calculation on the elements of each subgroup? For example, for each group, I want to extract every element in column 'c' whose corresponding element in column 'b' is strictly between 4 and 9, and sum them all.
Here is the code I wrote (it runs, but I cannot get the correct result):
gbz = z.groupby('a')
# For displaying the groups:
gbz.apply(lambda x: print(x))
list = []
def f(x):
    list_new = []
    for row in range(0, len(x)):
        if (x.iloc[row, 0] > 4 and x.iloc[row, 0] < 9):
            list_new.append(x.iloc[row, 1])
    list.append(sum(list_new))
results = gbz.apply(f)
The output result should be something like this:
a c
0 1 12
1 2 27
2 3 15
It might just be easiest to change the order of operations and filter against your criteria first; the criteria don't change after the groupby.
z.query('4 < b < 9').groupby('a', as_index=False).c.sum()
which yields
a c
0 1 12
1 2 27
2 3 15
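If the bounds live in variables, query can reference them with the @ prefix; a sketch:
lo, hi = 4, 9
z.query('@lo < b < @hi').groupby('a', as_index=False).c.sum()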
Use
In [2379]: z[z.b.between(4, 9, inclusive='neither')].groupby('a', as_index=False).c.sum()
Out[2379]:
a c
0 1 12
1 2 27
2 3 15
Or
In [2384]: z[(4 < z.b) & (z.b < 9)].groupby('a', as_index=False).c.sum()
Out[2384]:
a c
0 1 12
1 2 27
2 3 15
You could also groupby first.
z = z.groupby('a').apply(lambda x: x.loc[x['b']
        .between(4, 9, inclusive='neither'), 'c'].sum()).reset_index(name='c')
z
a c
0 1 12
1 2 27
2 3 15
Or you can use
z.groupby('a').apply(lambda x : sum(x.loc[(x['b']>4)&(x['b']<9),'c']))\
.reset_index(name='c')
Out[775]:
a c
0 1 12
1 2 27
2 3 15
What is the best way to implement the Apriori algorithm in pandas? So far I am stuck on extracting the patterns using for loops; everything from the for loop onward does not work. Is there a vectorized way to do this in pandas?
import pandas as pd
import numpy as np
trans = pd.read_table('output.txt', header=None, index_col=0)

def apriori(trans, support=4):
    ts = pd.get_dummies(trans.unstack().dropna()).groupby(level=1).sum()
    # user input
    collen, rowlen = ts.shape
    # max length of items
    tssum = ts.sum(axis=1)
    maxlen = tssum.loc[tssum.idxmax()]
    items = list(ts.columns)
    results = []
    # loop through items
    for c in range(1, maxlen):
        # generate patterns
        pattern = []
        for n in len(pattern):
            # calculate support
            pattern = ['supp'] = pattern.sum / rowlen
            # filter by support level
            Condit = pattern['supp'] > support
            pattern = pattern[Condit]
            results.append(pattern)
    return results

results = apriori(trans)
print(results)
When I run this with support 3 on this input:
a b c d e
0
11 1 1 1 0 0
666 1 0 0 1 1
10101 0 1 1 1 0
1010 1 1 1 1 0
414147 0 1 1 0 0
10101 1 1 0 1 0
1242 0 0 0 1 1
101 1 1 1 1 0
411 0 0 1 1 1
444 1 1 1 0 0
it should output something like
Pattern support
a 6
b 7
c 7
d 7
e 3
a,b 5
a,c 4
a,d 4
Assuming I understand what you're after, maybe
from itertools import combinations
def get_support(df):
    pp = []
    for cnum in range(1, len(df.columns) + 1):
        for cols in combinations(df, cnum):
            s = df[list(cols)].all(axis=1).sum()
            pp.append([",".join(cols), s])
    sdf = pd.DataFrame(pp, columns=["Pattern", "Support"])
    return sdf
would get you started:
>>> s = get_support(df)
>>> s[s.Support >= 3]
Pattern Support
0 a 6
1 b 7
2 c 7
3 d 7
4 e 3
5 a,b 5
6 a,c 4
7 a,d 4
9 b,c 6
10 b,d 4
12 c,d 4
14 d,e 3
15 a,b,c 4
16 a,b,d 3
21 b,c,d 3
[15 rows x 2 columns]
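For reference, the df assumed above can be built straight from the question's table; a sketch:
df = pd.DataFrame(
    [[1, 1, 1, 0, 0], [1, 0, 0, 1, 1], [0, 1, 1, 1, 0], [1, 1, 1, 1, 0], [0, 1, 1, 0, 0],
     [1, 1, 0, 1, 0], [0, 0, 0, 1, 1], [1, 1, 1, 1, 0], [0, 0, 1, 1, 1], [1, 1, 1, 0, 0]],
    index=[11, 666, 10101, 1010, 414147, 10101, 1242, 101, 411, 444],
    columns=list("abcde"))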
Adding support, confidence, and lift calculation:
def apriori(data, set_length=2):
    import pandas as pd
    from itertools import combinations
    df_supports = []
    dataset_size = len(data)
    for combination_number in range(1, set_length + 1):
        for cols in combinations(data.columns, combination_number):
            supports = data[list(cols)].all(axis=1).sum() * 1.0 / dataset_size
            confidenceAB = data[list(cols)].all(axis=1).sum() * 1.0 / len(data[data[cols[0]] == 1])
            confidenceBA = data[list(cols)].all(axis=1).sum() * 1.0 / len(data[data[cols[-1]] == 1])
            liftAB = confidenceAB * dataset_size / len(data[data[cols[-1]] == 1])
            liftBA = confidenceAB * dataset_size / len(data[data[cols[0]] == 1])
            df_supports.append([",".join(cols), supports, confidenceAB, confidenceBA, liftAB, liftBA])
    df_supports = pd.DataFrame(df_supports, columns=['Pattern', 'Support', 'ConfidenceAB', 'ConfidenceBA', 'liftAB', 'liftBA'])
    df_supports = df_supports.sort_values(by='Support', ascending=False)
    return df_supports
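A quick usage sketch, reusing the one-hot df from above:
rules = apriori(df, set_length=2)
print(rules.head())  # patterns sorted by support, with confidence and lift in both directions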
Suppose I have the following DataFrame:
a b
0 A 1.516733
1 A 0.035646
2 A -0.942834
3 B -0.157334
4 A 2.226809
5 A 0.768516
6 B -0.015162
7 A 0.710356
8 A 0.151429
And I need to group it given the "edge B"; that means the groups will be:
a b
0 A 1.516733
1 A 0.035646
2 A -0.942834
3 B -0.157334
4 A 2.226809
5 A 0.768516
6 B -0.015162
7 A 0.710356
8 A 0.151429
That is, any time I find a 'B' in column 'a', I want to split my DataFrame.
My current solution is:
#create the dataframe
s = pd.Series(['A','A','A','B','A','A','B','A','A'])
ss = pd.Series(np.random.randn(9))
dff = pd.DataFrame({"a":s,"b":ss})
#my solution
count = 0
ls = []
for i in s:
if i=="A":
ls.append(count)
else:
ls.append(count)
count+=1
dff['grpb']=ls
and I got the dataframe:
a b grpb
0 A 1.516733 0
1 A 0.035646 0
2 A -0.942834 0
3 B -0.157334 0
4 A 2.226809 1
5 A 0.768516 1
6 B -0.015162 1
7 A 0.710356 2
8 A 0.151429 2
Which I can then split with dff.groupby('grpb').
Is there a more efficient way to do this using pandas' functions?
Here's a one-liner (pd.rolling_median has long been removed, so this uses the modern .rolling equivalent):
[g for _, g in dff.groupby((1 * (dff['a'] == 'B')).cumsum().rolling(3, min_periods=1).median())]
[   a         b
 0  A  1.516733
 1  A  0.035646
 2  A -0.942834
 3  B -0.157334,
    a         b
 4  A  2.226809
 5  A  0.768516
 6  B -0.015162,
    a         b
 7  A  0.710356
 8  A  0.151429]
How about:
df.groupby((df.a == "B").shift(1).fillna(0).cumsum())
For example:
>>> df
a b
0 A -1.957118
1 A -0.906079
2 A -0.496355
3 B 0.552072
4 A -1.903361
5 A 1.436268
6 B 0.391087
7 A -0.907679
8 A 1.672897
>>> gg = list(df.groupby((df.a == "B").shift(1).fillna(0).cumsum()))
>>> pprint.pprint(gg)
[(0,
a b
0 A -1.957118
1 A -0.906079
2 A -0.496355
3 B 0.552072),
(1, a b
4 A -1.903361
5 A 1.436268
6 B 0.391087),
(2, a b
7 A -0.907679
8 A 1.672897)]
(I didn't bother getting rid of the indices; you could use [g for k, g in df.groupby(...)] if you liked.)
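On recent pandas you can skip the fillna step by shifting with a boolean fill value; a sketch:
df.groupby((df.a == "B").shift(1, fill_value=False).cumsum())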
An alternative is:
In [36]: dff
Out[36]:
a b
0 A 0.689785
1 A -0.374623
2 A 0.517337
3 B 1.549259
4 A 0.576892
5 A -0.833309
6 B -0.209827
7 A -0.150917
8 A -1.296696
In [37]: dff['grpb'] = np.nan
In [38]: breaks = dff[dff.a == 'B'].index
In [39]: dff.loc[breaks, 'grpb'] = range(len(breaks))
In [40]: dff.bfill().fillna(len(breaks))
Out[40]:
a b grpb
0 A 0.689785 0
1 A -0.374623 0
2 A 0.517337 0
3 B 1.549259 0
4 A 0.576892 1
5 A -0.833309 1
6 B -0.209827 1
7 A -0.150917 2
8 A -1.296696 2
Or using itertools to create 'grpb' is an option too.
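That itertools variant might look like this (a sketch: a running count of the 'B' rows seen strictly before each row):
from itertools import accumulate
edges = (dff['a'] == 'B').shift(1, fill_value=False).astype(int)
dff['grpb'] = list(accumulate(edges))  # [0, 0, 0, 0, 1, 1, 1, 2, 2] for the sample data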
def vGroup(dataFrame, edgeCondition, groupName='autoGroup'):
    groupNum = 0
    dataFrame[groupName] = ''
    # loop over each row
    for inx, row in dataFrame.iterrows():
        if edgeCondition[inx]:
            dataFrame.loc[inx, groupName] = 'edge'
            groupNum += 1
        else:
            dataFrame.loc[inx, groupName] = groupNum
    return dataFrame[groupName]

vGroup(dff, dff['a'] == 'B')