I have this DataFrame:
dd = pd.DataFrame({'a':[1,1,1,1,2,2,2,2],'feature':[10,10,20,20,10,10,20,20],'h':['h_30','h_60','h_30','h_60','h_30','h_60','h_30','h_60'],'count':[1,2,3,4,5,6,7,8]})
a feature h count
0 1 10 h_30 1
1 1 10 h_60 2
2 1 20 h_30 3
3 1 20 h_60 4
4 2 10 h_30 5
5 2 10 h_60 6
6 2 20 h_30 7
7 2 20 h_60 8
I want to pivot the unique values of the h column into columns and use the count numbers as values, like this:
a feature h_30 h_60
0 1 10 1 2
1 1 20 3 4
2 2 10 5 6
3 2 20 7 8
I tried this, but got ValueError: Length of passed values is 8, index implies 2:
dd.pivot(index = ['a','feature'],columns ='h',values = 'count' )
DataFrame.pivot does not accept a list of columns as index for pandas versions below 1.1.0 (changed in version 1.1.0: it also accepts a list of index names).
Try this:
import pandas as pd
pd.pivot_table(
    dd, index=["a", "feature"], columns="h", values="count"
).reset_index().rename_axis(None, axis=1)
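For reference, on pandas 1.1.0 or newer the original pivot call also works, since pivot accepts a list of index names from that version onward (a sketch under that version assumption):

out = dd.pivot(index=['a', 'feature'], columns='h', values='count').reset_index()
out.columns.name = None  # drop the leftover 'h' name on the columns axis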
I have a DataFrame with two columns A and B.
I want to create a new column named C to identify the continuous A with the same B value.
Here's an example
import pandas as pd
df = pd.DataFrame({'A':[1,2,3,5,6,10,11,12,13,18], 'B':[1,1,2,2,3,3,3,3,4,4]})
I found a similar question, but that method only identifies the continuous A regardless of B.
df['C'] = df['A'].diff().ne(1).cumsum().sub(1)
I have tried to groupby B and apply the function like this:
df['C'] = df.groupby('B').apply(lambda x: x['A'].diff().ne(1).cumsum().sub(1))
However, it doesn't work: TypeError: incompatible index of inserted column with frame index.
The expected output is
A B C
1 1 0
2 1 0
3 2 1
5 2 2
6 3 3
10 3 4
11 3 4
12 3 4
13 4 5
18 4 6
Let's create a sequential counter using groupby, diff and cumsum, then factorize to re-encode the counter:
df['C'] = df.groupby('B')['A'].diff().ne(1).cumsum().factorize()[0]
Result
A B C
0 1 1 0
1 2 1 0
2 3 2 1
3 5 2 2
4 6 3 3
5 10 3 4
6 11 3 4
7 12 3 4
8 13 4 5
9 18 4 6
Use DataFrameGroupBy.diff, compare not equal to 1, take the Series.cumsum, and lastly subtract 1:
df['C'] = df.groupby('B')['A'].diff().ne(1).cumsum().sub(1)
print (df)
A B C
0 1 1 0
1 2 1 0
2 3 2 1
3 5 2 2
4 6 3 3
5 10 3 4
6 11 3 4
7 12 3 4
8 13 4 5
9 18 4 6
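As for the original error: in the pandas versions that raise it, groupby.apply with a Series-returning lambda prepends the group key B to the result's index, so the result cannot be inserted back into df. A minimal sketch of dropping that level (note this gives a counter that restarts inside every B group, not the global counter shown above):

per_group = (
    df.groupby('B')
      .apply(lambda x: x['A'].diff().ne(1).cumsum().sub(1))
      .droplevel(0)  # drop the 'B' level so the index lines up with df again
)
# per_group is 0, 0, 0, 1, 0, 1, 1, 1, 0, 1 for this data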
I want to fill numbers in column flag, based on the value in column KEY.
Instead of using cumcount() to fill incremental numbers, I want to fill the same number for every two rows as long as the value in column KEY stays the same.
If the value in column KEY changes, the filled number changes as well.
Here is the example, df1 is what I want from df0.
df0 = pd.DataFrame({'KEY':['0','0','0','0','1','1','1','2','2','2','2','2','3','3','3','3','3','3','4','5','6']})
df1 = pd.DataFrame({'KEY':['0','0','0','0','1','1','1','2','2','2','2','2','3','3','3','3','3','3','4','5','6'],
'flag':['0','0','1','1','2','2','3','4','4','5','5','6','7','7','8','8','9','9','10','11','12']})
You want to get the cumcount and add one. Then use % 2 to differentiate between odd and even rows. Then take the cumulative sum and subtract 1 to start counting from zero.
You can use:
df0['flag'] = ((df0.groupby('KEY').cumcount() + 1) % 2).cumsum() - 1
df0
Out[1]:
KEY flag
0 0 0
1 0 0
2 0 1
3 0 1
4 1 2
5 1 2
6 1 3
7 2 4
8 2 4
9 2 5
10 2 5
11 2 6
12 3 7
13 3 7
14 3 8
15 3 8
16 3 9
17 3 9
18 4 10
19 5 11
20 6 12
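Note that df1 stores flag as strings; if you need an exact match with that dtype, cast the result at the end (a small sketch):

df0['flag'] = (((df0.groupby('KEY').cumcount() + 1) % 2).cumsum() - 1).astype(str)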
Say I have a list of integers which correspond to points where I want to increase an integer value by 1,
for example Int64Index([5, 10]), not necessarily evenly spaced like that, and I have a DataFrame like:
val new_col
0 0.729726564 1
1 0.067509062 1
2 0.943927114 1
3 0.037718436 1
4 0.512142908 1
5 0.767198655 2
6 0.202230787 2
7 0.343767479 2
8 0.540026305 2
9 0.256425022 2
10 0.403845023 3
11 0.444475008 3
12 0.464677745 3
I want to create new_col, which is an int that increases by one at each of the above index rows.
Edit:
import pandas as pd
import numpy as np
df = pd.DataFrame({'val': np.random.rand(14)})
df['new_col'] = 1
How do I increase the value of new_col by one at each index point (5, 10)?
I see from your comment that you refer to an "arbitrary position", so you can space them as you wish with bins.
For example:
bins = [-1,3,5,12,14] #space as you wish
labels = [1,2,3,4] #labels or in your case values that you want
df['new_col'] = pd.cut(list(df.index.values), bins=bins, labels=labels)
val new_col
0 0.509742 1
1 0.081701 1
2 0.990583 1
3 0.813398 1
4 0.905022 2
5 0.951973 2
6 0.702487 3
7 0.916432 3
8 0.647568 3
9 0.955188 3
10 0.875067 3
11 0.284496 3
12 0.393931 3
13 0.341115 4
Use numpy.split with enumerate:
import numpy as np
import pandas as pd

indices = [5, 10]
df['add_col'] = pd.concat([s + n for n, s in enumerate(np.split(df['new_col'], indices))])
print(df)
Output:
val new_col add_col
0 0.953431 1 1
1 0.929134 1 1
2 0.548343 1 1
3 0.080713 1 1
4 0.465212 1 1
5 0.290549 1 2
6 0.570886 1 2
7 0.232350 1 2
8 0.036968 1 2
9 0.455084 1 2
10 0.385177 1 3
11 0.811477 1 3
12 0.802502 1 3
13 0.001847 1 3
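Another option is to build the counter directly from the index with isin and cumsum; a sketch, reusing the setup from the question:

import numpy as np
import pandas as pd

df = pd.DataFrame({'val': np.random.rand(14)})
indices = [5, 10]
# each split point adds 1 to the running total from that row onward
df['new_col'] = df.index.isin(indices).cumsum() + 1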
I'm having trouble working out how to add the index value of a pandas dataframe to each value at that index. For example, if I have a dataframe of zeroes, the row with index 1 should have a value of 1 for all columns. The row at index 2 should have values of 2 for each column, and so on.
Can someone enlighten me please?
You can use pd.DataFrame.add with axis=0. Just remember, as below, to convert your index to a series first.
df = pd.DataFrame(np.random.randint(0, 10, (5, 5)))
print(df)
0 1 2 3 4
0 3 4 2 2 2
1 9 6 1 8 0
2 2 9 0 5 3
3 3 1 1 7 0
4 2 6 3 6 6
df = df.add(df.index.to_series(), axis=0)
print(df)
0 1 2 3 4
0 3 4 2 2 2
1 10 7 2 9 1
2 4 11 2 7 5
3 6 4 4 10 3
4 6 10 7 10 10
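The same result can also be obtained with plain numpy broadcasting; a sketch, assuming a default integer RangeIndex:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 10, (5, 5)))
# broadcasting the index as a column vector adds i to every value in row i
shifted = pd.DataFrame(df.to_numpy() + df.index.to_numpy()[:, None],
                       index=df.index, columns=df.columns)
print(shifted)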
I have two pandas DataFrames of different sizes. The two DataFrames look like:
df1 =
x y data
1 2 5
2 2 7
5 3 9
3 5 2
and the other DataFrame looks like:
df2 =
x y value
5 3 7
1 2 4
3 5 2
7 1 4
4 6 5
2 2 1
7 5 8
I am trying to merge these two DataFrames so that the final DataFrame has the matching combinations of x and y with their respective values. I expect the final DataFrame in this format:
x y data value
1 2 5 4
2 2 7 1
5 3 9 7
3 5 2 2
I tried this code, but I am not getting the expected results.
dfB.set_index('x').loc[dfA.x].reset_index()
Use merge; by default how='inner', so it can be omitted, and if you join only on the columns with the same names, the on parameter can be omitted too:
print (pd.merge(df1,df2))
x y data value
0 1 2 5 4
1 2 2 7 1
2 5 3 9 7
3 3 5 2 2
If the real data has multiple columns with the same names, use:
print (pd.merge(df1,df2, on=['x','y']))
x y data value
0 1 2 5 4
1 2 2 7 1
2 5 3 9 7
3 3 5 2 2
df1.merge(df2, on=['x', 'y'])
This will also do the job.
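If df1 may contain x, y combinations that are missing from df2 and you still want to keep those rows, pass how='left' so every row of df1 survives (a sketch; unmatched rows get NaN in value):

print(pd.merge(df1, df2, on=['x', 'y'], how='left'))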