I have some function that takes a DataFrame and an integer as arguments:
func(df, int)
The function returns a new DataFrame, e.g.:
df2 = func(df,2)
I'd like to write a loop for integers 2-10, resulting in 9 DataFrames. If I do this manually it would look like this:
df2 = func(df,2)
df3 = func(df2,3)
df4 = func(df3,4)
df5 = func(df4,5)
df6 = func(df5,6)
df7 = func(df6,7)
df8 = func(df7,8)
df9 = func(df8,9)
df10 = func(df9,10)
Is there a way to write a loop that does this?
This type of thing is what lists are for.
data_frames = [df]
for i in range(2, 11):
    data_frames.append(func(data_frames[-1], i))
It's a sign of brittle code when you see variable names like df1, df2, df3, etc. Use lists when you have a sequence of related objects to build.
To clarify, this data_frames is a list of DataFrames that can be concatenated with data_frames = pd.concat(data_frames, sort=False), resulting in one DataFrame that combines the original df with everything that results from the loop, correct?
Yup, that's right. If your goal is one final data frame, you can concatenate the entire list at the end to combine the information into a single frame.
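Putting the pieces together, a minimal sketch assuming func and df are defined as in the question:

import pandas as pd

data_frames = [df]                                # start from the original frame
for i in range(2, 11):
    data_frames.append(func(data_frames[-1], i))  # feed the newest frame back in

combined = pd.concat(data_frames, sort=False)     # optional: one final frame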
Do you mind explaining why data_frames[-1], which takes the last item of the list, returns a DataFrame? Not clear on this.
Because as you build the list, every entry is a DataFrame at all times. data_frames[-1] evaluates to the last element in the list, which in this case is the DataFrame you most recently appended.
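A minimal illustration of negative indexing on a plain list:

frames = ['first', 'second', 'third']
frames[-1]   # 'third' -- the last element, i.e. the most recent append
frames[-2]   # 'second'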
You may try using itertools.accumulate as follows:
Sample data df:

    a   b   c
0  75  18  17
1  48  56   3
import itertools

def func(x, y):
    return x + y

dfs = list(itertools.accumulate([df] + list(range(2, 11)), func))
[ a b c
0 75 18 17
1 48 56 3, a b c
0 77 20 19
1 50 58 5, a b c
0 80 23 22
1 53 61 8, a b c
0 84 27 26
1 57 65 12, a b c
0 89 32 31
1 62 70 17, a b c
0 95 38 37
1 68 76 23, a b c
0 102 45 44
1 75 83 30, a b c
0 110 53 52
1 83 91 38, a b c
0 119 62 61
1 92 100 47, a b c
0 129 72 71
1 102 110 57]
dfs is the list of result DataFrames, where each one is the previous result with the next integer from 2-10 added.
If you want to concatenate them all into one DataFrame, use pd.concat:
pd.concat(dfs)
Out[29]:
a b c
0 75 18 17
1 48 56 3
0 77 20 19
1 50 58 5
0 80 23 22
1 53 61 8
0 84 27 26
1 57 65 12
0 89 32 31
1 62 70 17
0 95 38 37
1 68 76 23
0 102 45 44
1 75 83 30
0 110 53 52
1 83 91 38
0 119 62 61
1 92 100 47
0 129 72 71
1 102 110 57
You can use exec with a formatted string:
for i in range(2, 11):
    exec("df{0} = func(df{1}, {0})".format(i, i - 1 if i > 2 else ''))
I have a dataframe like this:
arr = np.random.randint(10, 99, (4,4))
df = pd.DataFrame(arr)
df.columns = pd.MultiIndex.from_product([['X','Y'],['A','B']])
And it looks like this:
    X       Y
    A   B   A   B
0  76  78  29  24
1  34  80  83  56
2  56  44  40  30
3  16  38  45  93
For all rows where A < B in X, I want to do A - B in Y. How do I do that?
I did this to filter and select A and B from Y
df[df['X']['A'] < df['X']['B']].loc[:, ('Y', ['A', 'B'])]
     Y
     A   B
0   29  24
1   83  56
3   45  93
But I am lost on how to do A - B.
Thanks.
Assuming you want to subtract and update A with the result, you can do so by indexing as:
m = (df[('X','A')] < df[('X','B')])
df.loc[m,('Y','A')] = df.loc[m,('Y','A')] - df.loc[m,('Y','B')]
print(df)
    X       Y
    A   B   A   B
0  77  67  55  87
1  36  85  26  50
2  77  14  62  89
3  88  33  82  44
You can select MultiIndex columns by tuples, like:
np.random.seed(20)
arr = np.random.randint(10, 99, (4,4))
df = pd.DataFrame(arr)
df.columns = pd.MultiIndex.from_product([['X','Y'],['A','B']])
print (df)
    X       Y
    A   B   A   B
0  25  38  19  30
1  85  32  81  44
2  50  95  36  93
3  26  72  26  17
mask = df[('X','A')].lt(df[('X','B')])
print (mask)
0 True
1 False
2 True
3 True
dtype: bool
s = df.loc[mask, ('Y','A')].sub(df.loc[mask, ('Y','B')])
print (s)
0 -11
2 -57
3 9
dtype: int32
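If you also want to write the result back, one way (a sketch using the mask and s defined above) is an index-aligned assignment:

df.loc[mask, ('Y','A')] = s   # only the masked rows of ('Y','A') are updated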
I am looking to convert data frame df1 to df2 using Python. I have a solution that uses loops but I am wondering if there is an easier way to create df2.
df1
  Test1  Test2  2014  2015  2016  Present
1     x      a    90    85    84        0
2     x    a:b    88    79    72        1
3     y  a:b:c    75    76    81        0
4     y      b    60    62    66        0
5     y      c    68    62    66        1
df2
  Test1 Test2  2014  2015  2016  Present
1     x     a    90    85    84        0
2     x     a    88    79    72        1
3     x     b    88    79    72        1
4     y     a    75    76    81        0
5     y     b    75    76    81        0
6     y     c    75    76    81        0
7     y     b    60    62    66        0
8     y     c    68    62    66        1
Here's one way using numpy.repeat and itertools.chain:
import numpy as np
import pandas as pd
from itertools import chain

# split by delimiter and calculate length for each row
split = df['Test2'].str.split(':')
lens = split.map(len)

# repeat non-split columns
cols = ('Test1', '2014', '2015', '2016', 'Present')
d1 = {col: np.repeat(df[col], lens) for col in cols}

# chain split columns
d2 = {'Test2': list(chain.from_iterable(split))}

# combine in a single dataframe
res = pd.DataFrame({**d1, **d2})
print(res)
   2014  2015  2016  Present Test1 Test2
1    90    85    84        0     x     a
2    88    79    72        1     x     a
2    88    79    72        1     x     b
3    75    76    81        0     y     a
3    75    76    81        0     y     b
3    75    76    81        0     y     c
4    60    62    66        0     y     b
5    68    62    66        1     y     c
This will achieve what you want:
# Converting "Test2" strings into lists of values
df["Test2"] = df["Test2"].apply(lambda x: x.split(":"))
# Creating second dataframe with "Test2" values
test2 = df.apply(lambda x: pd.Series(x['Test2']),axis=1).stack().reset_index(level=1, drop=True)
test2.name = 'Test2'
# Joining both dataframes
df = df.drop('Test2', axis=1).join(test2)
print(df)
  Test1  2014  2015  2016  Present Test2
1     x    90    85    84        0     a
2     x    88    79    72        1     a
2     x    88    79    72        1     b
3     y    75    76    81        0     a
3     y    75    76    81        0     b
3     y    75    76    81        0     c
4     y    60    62    66        0     b
5     y    68    62    66        1     c
Similar questions (column already existing as a list): 1 2
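For reference, newer pandas versions (0.25+) have a built-in for this step; a minimal sketch, assuming the frame is named df as in the answers above:

df['Test2'] = df['Test2'].str.split(':')          # 'a:b:c' -> ['a', 'b', 'c']
res = df.explode('Test2').reset_index(drop=True)  # one row per list element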
Here, in my code, the correlation matrix is a dataframe and diag is a list.
When I run the following code (the CholDC part at the bottom), it raises TypeError: 'numpy.float64' object is not iterable.
What do I need to do to make this code work?
def CholDC(correl, diag):
    for column in correl:
        j = 0
        for j in correl[str(column)][j]:
            Sum = correl[str(column)][j]
            k = int(column) - 1
            if k >= 1:
                Sum = Sum - correl[str(column)][k] * correl[str(j)][k]
            else:
                Sum = Sum
            if int(column) == j:
                if Sum <= 0:
                    print("Should be PSD")
                else:
                    diag[int(column)] = np.sqrt(Sum)
            else:
                correl[str(j)][int(column)] = Sum / diag[int(column)]

diag = []
df_correl = pd.DataFrame(df_correlation)
CholDC(df_correl, diag)
To loop through the rows of a DataFrame, use iterrows(). See the example below:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(0,100, size=(10, 4)), columns=list('ABCD'))
print(df)
for index, row in df.iterrows():
    print(row['B'], row['C'])
#dataframe output
A B C D
0 53 60 63 44
1 17 12 20 55
2 85 28 76 99
3 39 75 69 30
4 2 85 21 3
5 22 5 45 33
6 78 65 22 38
7 14 99 0 67
8 18 70 53 19
9 54 25 96 7
#output from loop
60 63
12 20
28 76
75 69
85 21
5 45
65 22
99 0
70 53
25 96
So use iterrows() in your code instead of for column in correl.
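For context, the error itself most likely comes from the line for j in correl[str(column)][j]: it selects a single scalar cell and then tries to iterate over it. A minimal reproduction, assuming a small numeric frame:

import pandas as pd

corr = pd.DataFrame({'1': [1.0, 0.5], '2': [0.5, 1.0]})
cell = corr['1'][0]   # a single numpy.float64 scalar, not a sequence
for j in cell:        # TypeError: 'numpy.float64' object is not iterable
    pass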
I have an XY problem. My setup is as follows - I have a dataframe with multi-index of 2 levels. I want to split it to two dataframes, taking only a fraction of rows from each label in the first level. For example:
df = pd.DataFrame({'a':[1, 1, 1, 1, 7, 7, 10, 10, 10, 10, 10, 10, 10], 'b': np.random.randint(0, 100, 13), 'c':np.random.randint(0, 100, 13)}).set_index(['a', 'b'])
df
Out[13]:
        c
a  b
1  86  83
   1   37
   57  64
   53   5
7  4   66
   13  49
10 61   0
   32  84
   97  59
   69  98
   25  52
   17  31
   37  95
So let's say the fraction is 0.5, I want to split it to two dataframes:
        c
a  b
1  86  83
   1   37
7  4   66
10 61   0
   32  84
   97  59
   69  98

        c
a  b
1  57  64
   53   5
7  13  49
10 25  52
   17  31
   37  95
I thought about doing (df.groupby(level = 0).count() * 0.5).astype(int) to get the limit at which to "slice" each group of the dataframe. Then, if only I had a way to add a level-aware running index such as this:
        c   r
a  b
1  38  36   0
   6   47   1
   57   6   2
   55  45   3
7  7   51   0
   90  96   1
10 59  75   0
   27  16   1
   58   7   2
   79  51   3
   58  77   4
   63  48   5
   87  60   6
I could join the limits and this df and filter with a boolean condition. Any suggestions on either problem? (splitting a fraction of rows or adding a level-aware running index)
This turns out to be pretty trivial with groupby:
In [36]: df.groupby(level=0).apply(lambda x:x.head(int(x.shape[0] * 0.5))).reset_index(level=0, drop=True)
Out[36]:
        c
a  b
1  86  83
   1   37
7  4   66
10 61   0
   32  84
   97  59
Also getting the running index per group:
In [33]: df.groupby(level=0).cumcount()
Out[33]:
a   b
1   38    0
    6     1
    57    2
    55    3
7   7     0
    90    1
10  59    0
    27    1
    58    2
    79    3
    58    4
    63    5
    87    6
I need to find the quickest way to sort each row in a dataframe with millions of rows and around a hundred columns.
So something like this:
A B C D
3 4 8 1
9 2 7 2
Needs to become:
A B C D
8 4 3 1
9 7 2 2
Right now I'm applying sort to each row and building up a new dataframe row by row. I'm also doing a couple of extra, less important things to each row (which is why I'm using pandas and not numpy). Could it be quicker to instead create a list of lists and then build the new dataframe all at once? Or do I need to go to Cython?
I think I would do this in numpy:
In [11]: a = df.values
In [12]: a.sort(axis=1) # no ascending argument
In [13]: a = a[:, ::-1] # so reverse
In [14]: a
Out[14]:
array([[8, 4, 3, 1],
[9, 7, 2, 2]])
In [15]: pd.DataFrame(a, df.index, df.columns)
Out[15]:
A B C D
0 8 4 3 1
1 9 7 2 2
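The same result as a one-liner, using np.sort (which returns a sorted copy) so df itself is left untouched:

pd.DataFrame(np.sort(df.values, axis=1)[:, ::-1], df.index, df.columns)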
I had thought this might work, but it sorts the columns:
In [21]: df.sort(axis=1, ascending=False)
Out[21]:
D C B A
0 1 8 4 3
1 2 7 2 9
Ah, pandas raises:
In [22]: df.sort(df.columns, axis=1, ascending=False)
ValueError: When sorting by column, axis must be 0 (rows)
To add to the answer given by @Andy-Hayden: here is how to do this in place on the whole frame. I'm not really sure why this works, but it does, and there seems to be no way to control the sort order.
In [97]: A = pd.DataFrame(np.random.randint(0,100,(4,5)), columns=['one','two','three','four','five'])
In [98]: A
Out[98]:
one two three four five
0 22 63 72 46 49
1 43 30 69 33 25
2 93 24 21 56 39
3 3 57 52 11 74
In [99]: A.values.sort
Out[99]: <function ndarray.sort>
In [100]: A
Out[100]:
one two three four five
0 22 63 72 46 49
1 43 30 69 33 25
2 93 24 21 56 39
3 3 57 52 11 74
In [101]: A.values.sort()
In [102]: A
Out[102]:
one two three four five
0 22 46 49 63 72
1 25 30 33 43 69
2 21 24 39 56 93
3 3 11 52 57 74
In [103]: A = A.iloc[:,::-1]
In [104]: A
Out[104]:
five four three two one
0 72 63 49 46 22
1 69 43 33 30 25
2 93 56 39 24 21
3 74 57 52 11 3
I hope someone can explain the why of this, just happy that it works 8)
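A plausible explanation, offered as an assumption rather than a guarantee: when all columns share one dtype, A.values can return a view onto the frame's single underlying NumPy block, so calling ndarray.sort() rearranges that memory in place and the DataFrame sees the change. A.values.sort without parentheses (In [99]) merely looks up the method without calling it, which is why nothing changes there. With mixed dtypes, .values builds a copy and the in-place trick would silently have no effect on the frame.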
You could use DataFrame.apply. For example:
A = pd.DataFrame(np.random.randint(0,100,(4,5)), columns=['one','two','three','four','five'])
print (A)
one two three four five
0 2 75 44 53 46
1 18 51 73 80 66
2 35 91 86 44 25
3 60 97 57 33 79
A = A.apply(np.sort, axis = 1)
print(A)
one two three four five
0 2 44 46 53 75
1 18 51 66 73 80
2 25 35 44 86 91
3 33 57 60 79 97
Since you want it in descending order, you can simply multiply the dataframe with -1 and sort it.
A = pd.DataFrame(np.random.randint(0,100,(4,5)), columns=['one','two','three','four','five'])
A = A * -1
A = A.apply(np.sort, axis = 1)
A = A * -1
Instead of using pd.DataFrame constructor, an easier way to assign the sorted values back is to use double brackets:
original dataframe:
A B C D
3 4 8 1
9 2 7 2
df[['A', 'B', 'C', 'D']] = np.sort(df)[:, ::-1]
A B C D
0 8 4 3 1
1 9 7 2 2
This way you can also sort a part of the columns:
df[['B', 'C']] = np.sort(df[['B', 'C']])[:, ::-1]
A B C D
0 3 8 4 1
1 9 7 2 2
One could try this approach to preserve the integrity of the df:
import pandas as pd
import numpy as np
A = pd.DataFrame(np.random.randint(0,100,(4,5)), columns=['one','two','three','four','five'])
print (A)
print(type(A))
one two three four five
0 85 27 64 50 55
1 3 90 65 22 8
2 0 7 64 66 82
3 58 21 42 27 30
<class 'pandas.core.frame.DataFrame'>
B = A.apply(lambda x: np.sort(x), axis=1, raw=True)
print(B)
print(type(B))
one two three four five
0 27 50 55 64 85
1 3 8 22 65 90
2 0 7 64 66 82
3 21 27 30 42 58
<class 'pandas.core.frame.DataFrame'>
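A brief note on raw=True: it passes each row to the function as a plain NumPy array rather than a Series, which skips per-row Series construction and is typically faster; because np.sort returns an array of the same length, pandas can rebuild the result with the original index and column labels intact, which is what preserves the integrity of the frame here.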