I have a dataframe with more than 5000 columns; here is an example of what it looks like:
data = {'AST_0-1': [1, 2, 3],
        'AST_0-45': [4, 5, 6],
        'AST_0-135': [7, 8, 20],
        'AST_10-1': [10, 20, 32],
        'AST_10-45': [47, 56, 67],
        'AST_10-135': [48, 57, 64],
        'AST_110-1': [100, 85, 93],
        'AST_110-45': [100, 25, 37],
        'AST_110-135': [44, 55, 67]}
I want to create multiple new dataframes based on the numbers after the "-" in the column names. For example, one dataframe with all the columns that end with "1" [df1=(AST_0-1;AST_10-1;AST_110-1)], another with the columns that end with "45", and another with those that end with "135". I know I will need a loop to do that, but I am having trouble selecting the columns to then create the dataframes.
You can use str.extract on the column names to get the wanted ID, then groupby on axis=1.
Here this creates a dictionary of dataframes:
# Extract the trailing digits of each column name, e.g. "AST_0-45" -> "45"
group = df.columns.str.extract(r'(\d+)$', expand=False)
out = dict(list(df.groupby(group, axis=1)))
Output:
{'1': AST_0-1 AST_10-1 AST_110-1
0 1 10 100
1 2 20 85
2 3 32 93,
'135': AST_0-135 AST_10-135 AST_110-135
0 7 48 44
1 8 57 55
2 20 64 67,
'45': AST_0-45 AST_10-45 AST_110-45
0 4 47 100
1 5 56 25
2 6 67 37}
Accessing ID 135:
out['135']
AST_0-135 AST_10-135 AST_110-135
0 7 48 44
1 8 57 55
2 20 64 67
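Note that groupby(..., axis=1) is deprecated in pandas 2.x. A version-agnostic sketch of the same idea, assuming the data dict from the question, builds the dictionary by plain column selection instead:
import pandas as pd

df = pd.DataFrame(data)
group = df.columns.str.extract(r'(\d+)$', expand=False)
# Select the columns whose extracted suffix matches each unique key
out = {key: df.loc[:, group == key] for key in group.unique()}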
Use:
df = pd.DataFrame(data)
dfs = dict(list(df.groupby(df.columns.str.rsplit('-', n=1).str[1], axis=1)))
Output:
>>> dfs
{'1': AST_0-1 AST_10-1 AST_110-1
0 1 10 100
1 2 20 85
2 3 32 93,
'135': AST_0-135 AST_10-135 AST_110-135
0 7 48 44
1 8 57 55
2 20 64 67,
'45': AST_0-45 AST_10-45 AST_110-45
0 4 47 100
1 5 56 25
2 6 67 37}
I know it's strongly discouraged, but maybe you want to create dataframes named df1, df135, df45. In this case you can use the following (note that writing through locals() is only reliable at module scope; inside a function such assignments are not guaranteed to stick):
for name, frame in dfs.items():
    locals()[f'df{name}'] = frame
>>> df1
AST_0-1 AST_10-1 AST_110-1
0 1 10 100
1 2 20 85
2 3 32 93
>>> df135
AST_0-135 AST_10-135 AST_110-135
0 7 48 44
1 8 57 55
2 20 64 67
>>> df45
AST_0-45 AST_10-45 AST_110-45
0 4 47 100
1 5 56 25
2 6 67 37
data = {'AST_0-1': [1, 2, 3],
        'AST_0-45': [4, 5, 6],
        'AST_0-135': [7, 8, 20],
        'AST_10-1': [10, 20, 32],
        'AST_10-45': [47, 56, 67],
        'AST_10-135': [48, 57, 64],
        'AST_110-1': [100, 85, 93],
        'AST_110-45': [100, 25, 37],
        'AST_110-135': [44, 55, 67]}
import pandas as pd
df = pd.DataFrame(data)
value_list = ["1", "45", "135"]
for value in value_list:
    interest_columns = [col for col in df.columns if col.split("-")[1] == value]
    df_filtered = df[interest_columns]
    print(df_filtered)
Output:
AST_0-1 AST_10-1 AST_110-1
0 1 10 100
1 2 20 85
2 3 32 93
AST_0-45 AST_10-45 AST_110-45
0 4 47 100
1 5 56 25
2 6 67 37
AST_0-135 AST_10-135 AST_110-135
0 7 48 44
1 8 57 55
2 20 64 67
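If you'd rather not split the names by hand, df.filter with an anchored regex selects the columns in one call. A minimal sketch over the same df:
import pandas as pd

df = pd.DataFrame(data)
# "$" anchors the match to the end of the name, so "-1" matches
# AST_0-1 but not AST_0-135 (which merely contains "-1")
df_1 = df.filter(regex=r'-1$')
df_45 = df.filter(regex=r'-45$')
df_135 = df.filter(regex=r'-135$')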
I assume your problem is with the keys of the dictionary. You can get a list of the keys with data.keys() and then iterate over it.
For example:
import pandas as pd

df1 = pd.DataFrame()
df45 = pd.DataFrame()
df135 = pd.DataFrame()
for i in list(data.keys()):
    the_key = i.split('-')
    if the_key[1] == '1':
        df1[i] = data[i]
    elif the_key[1] == '45':
        df45[i] = data[i]
    elif the_key[1] == '135':
        df135[i] = data[i]
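If the suffixes are not known ahead of time, a hedged generalization of the same idea first collects the column names per suffix:
import pandas as pd
from collections import defaultdict

# Group column names by the part after the last "-"
cols_by_suffix = defaultdict(list)
for col in data:
    cols_by_suffix[col.rsplit('-', 1)[1]].append(col)

df = pd.DataFrame(data)
dfs = {suffix: df[cols] for suffix, cols in cols_by_suffix.items()}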
Here is my toy example. My question is how to create a new column called trial with values {2, 3}, where 2 and 3 come from the number part of the column names 2.0__sum_values and 3.0__sum_values.
my code is:
import pandas as pd
before_spliting = {"ID": [1, 2,3], "2.0__sum_values": [33,28,40],"2.0__mediane": [33,70,20],"2.0__root_mean_square":[33,4,30],"3.0__sum_values": [33,28,40],"3.0__mediane": [33,70,20],"3.0__root_mean_square":[33,4,30]}
before_spliting = pd.DataFrame(before_spliting)
print(before_spliting)
ID 2.0__sum_values 2.0__mediane 2.0__root_mean_square 3.0__sum_values \
0 1 33 33 33 33
1 2 28 70 4 28
2 3 40 20 30 40
3.0__mediane 3.0__root_mean_square
0 33 33
1 70 4
2 20 30
after_spliting = {"ID": [1, 1, 2, 2, 3, 3], "trial": [2, 3, 2, 3, 2, 3],
                  "sum_values": [33, 33, 28, 28, 40, 40], "mediane": [33, 33, 70, 70, 20, 20],
                  "root_mean_square": [33, 33, 4, 4, 30, 30]}
after_spliting = pd.DataFrame(after_spliting)
print(after_spliting)
ID trial sum_values mediane root_mean_square
0 1 2 33 33 33
1 1 3 33 33 33
2 2 2 28 70 4
3 2 3 28 70 4
4 3 2 40 20 30
5 3 3 40 20 30
You could try:
res = df.melt(id_vars="ID")
res[["trial", "columns"]] = res["variable"].str.split("__", expand=True)
res = (
    res
    .pivot_table(
        index=["ID", "trial"], columns="columns", values="value", aggfunc=list
    )
    .explode(sorted(set(res["columns"])))
    .reset_index()
)
Result for the following input dataframe
data = {
    "ID": [1, 2, 3],
    "2.0__sum_values": [33, 28, 40], "2.0__mediane": [43, 80, 30], "2.0__root_mean_square": [37, 4, 39],
    "3.0__sum_values": [34, 29, 41], "3.0__mediane": [44, 81, 31], "3.0__root_mean_square": [38, 5, 40]
}
df = pd.DataFrame(data)
is
columns ID trial mediane root_mean_square sum_values
0 1 2.0 43 37 33
1 1 3.0 44 38 34
2 2 2.0 80 4 28
3 2 3.0 81 5 29
4 3 2.0 30 39 40
5 3 3.0 31 40 41
Alternative solution with the same output:
res = df.melt(id_vars="ID")
res[["trial", "columns"]] = res["variable"].str.split("__", expand=True)
res = res.set_index(["ID", "trial"]).drop(columns="variable").sort_index()
res = pd.concat(
    (group[["value"]].rename(columns={"value": key})
     for key, group in res.groupby("columns")),
    axis=1
).reset_index()
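Another sketch of the same reshape, assuming every non-ID column follows the trial__stat naming pattern: split the names into a (trial, stat) MultiIndex and stack the trial level.
import pandas as pd

df = pd.DataFrame(data)
long = df.set_index("ID")
# Turn "2.0__sum_values" into the tuple ("2.0", "sum_values")
long.columns = pd.MultiIndex.from_tuples(
    [tuple(c.split("__")) for c in long.columns], names=["trial", "stat"]
)
# Stacking the "trial" level yields one row per (ID, trial) combination
res = long.stack("trial").reset_index()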
Because you are using curly braces {} (a dict literal), you cannot repeat a key, so you can build the new column with a list [] instead (d1 here stands for your dataframe):
trial = []
for i in range(len(d1)):
    trial.append([d1['2.0__sum_values'][i], d1['3.0__sum_values'][i]])
d1['trial'] = trial
Best of luck.
As the long title hints, I have an array of shape [n, m, z] and I want to turn it into a pandas DataFrame with the first column holding the [row, col] positions (the 2nd and 3rd dimensions) and the next columns (13 in my real data) holding the values from the 1st dimension, leading to a DataFrame of shape (m*z) x n. I have been reading the other examples but I haven't found any that pivot one dimension to columns.
For example, for an array of shape [3,2,4]
import numpy as np
import pandas as pd
rand_int = np.random.randint(10,90,(3,2,4))
print(rand_int)
[[[57 76 30 34]
[21 70 10 51]]
[[73 67 55 51]
[78 38 50 76]]
[[89 58 47 35]
[45 11 61 18]]]
I want it to return as
Pair Col1 Col2 Col3
[0,0] 57 73 89
[0,1] 76 67 58
[0,2] 30 55 47
...
[1,3] 51 76 18
Can anyone help?
You can loop over the m and z dimensions to retrieve the values.
import numpy as np
import pandas as pd
n = 3
m = 2
z = 4
rand_int = np.random.randint(10, 90, (n,m,z))
datas = [[[57, 76, 30, 34],
          [21, 70, 10, 51]],
         [[73, 67, 55, 51],
          [78, 38, 50, 76]],
         [[89, 58, 47, 35],
          [45, 11, 61, 18]]]
res = []
for i in range(m):
    for j in range(z):
        res.append([[i, j]] + [data[i][j] for data in datas])
df = pd.DataFrame(res, columns=['Pair', 'Col1', 'Col2', 'Col3'])
print(df)
Pair Col1 Col2 Col3
0 [0, 0] 57 73 89
1 [0, 1] 76 67 58
2 [0, 2] 30 55 47
3 [0, 3] 34 51 35
4 [1, 0] 21 78 45
5 [1, 1] 70 38 11
6 [1, 2] 10 50 61
7 [1, 3] 51 76 18
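A vectorized sketch of the same reshape, assuming the axes are ordered (n, m, z) as in the question: reshape flattens each of the n slices row-major over (m, z), which matches the loop order above.
import numpy as np
import pandas as pd

arr = np.array(datas)  # shape (n, m, z)
n, m, z = arr.shape
# Reshape to (n, m*z), then transpose so each row corresponds to a (row, col) pair
df = pd.DataFrame(arr.reshape(n, m * z).T,
                  columns=[f'Col{k + 1}' for k in range(n)])
df.insert(0, 'Pair', [[i, j] for i in range(m) for j in range(z)])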
I have a pandas dataframe as shown below, where the Coordinates column contains X and Y coordinates:
Coordinates Cluster
0 [25, 79] 2
1 [34, 51] 2
2 [22, 53] 2
3 [27, 78] 2
4 [33, 59] 2
I want to split the Coordinates column into X and Y column so that I have something like below:
X Y Cluster
0 25 79 2
1 34 51 2
2 22 53 2
3 27 78 2
4 33 59 2
How can I achieve this?
You can pop the list column and join it back as two columns:
out = df.join(pd.DataFrame(df.pop('Coordinates').tolist(), index=df.index, columns=["X", "Y"]))
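An equivalent sketch that assigns in place; passing index=df.index keeps the rows aligned even when the index is not the default RangeIndex:
import pandas as pd

df[['X', 'Y']] = pd.DataFrame(df['Coordinates'].tolist(), index=df.index)
df = df.drop(columns='Coordinates')[['X', 'Y', 'Cluster']]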
You could dump it into numpy as well:
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "Coordinates": [[25, 79], [34, 51], [22, 53], [27, 78], [33, 59]],
        "Cluster": [2, 2, 2, 2, 2],
    })
box = df.to_numpy()
pd.DataFrame(np.column_stack([np.vstack(box[:, 0]), box[:, -1]]),
             columns=["X", "Y", "Cluster"])
X Y Cluster
0 25 79 2
1 34 51 2
2 22 53 2
3 27 78 2
4 33 59 2
If Coordinates holds strings like "25,79" rather than lists, you could split on the comma instead:
df[['X', 'Y']] = df.Coordinates.str.split(',', n=1, expand=True)
I'm trying to write an ordinary loop under specific conditions.
I want to iterate over rows, checking conditions, and then iterate over columns, counting how many times the condition was met.
This count should go into a new column in my dataframe indicating the total count for each row.
I tried to use apply and applymap with no success.
I successfully wrote the following code to reach my goal, but I bet there are more efficient ways, or even built-in pandas functions, to do it.
Does anyone know how?
sample code:
import pandas as pd
df = pd.DataFrame({'1column': [11, 22, 33, 44],
                   '2column': [32, 42, 15, 35],
                   '3column': [33, 77, 26, 64],
                   '4column': [99, 11, 110, 22],
                   '5column': [20, 64, 55, 33],
                   '6column': [10, 77, 77, 10]})
check_columns = ['3column','5column', '6column' ]
df1 = df.copy()
df1['bignum_count'] = 0
for column in check_columns:
    inner_loop_count = []
    bigseries = df[column] >= 50
    for big in bigseries:
        if big:
            inner_loop_count.append(1)
        else:
            inner_loop_count.append(0)
    df1['bignum_count'] += inner_loop_count
# View the dataframe
df1
results:
1column 2column 3column 4column 5column 6column bignum_count
0 11 32 33 99 20 10 0
1 22 42 77 11 64 77 3
2 33 15 26 110 55 77 2
3 44 35 64 22 33 10 1
Index the columns of interest and check which values are greater than or equal (ge) to the threshold:
df['bignum_count'] = df[check_columns].ge(50).sum(axis=1)
print(df)
1column 2column 3column 4column 5column 6column bignum_count
0 11 32 33 99 20 10 0
1 22 42 77 11 64 77 3
2 33 15 26 110 55 77 2
3 44 35 64 22 33 10 1
Use DataFrame.ge for >= and count the True values with sum:
df['bignum_count'] = df[check_columns].ge(50).sum(axis=1)
#alternative
#df['bignum_count'] = (df[check_columns]>=50).sum(axis=1)
print(df)
1column 2column 3column 4column 5column 6column bignum_count
0 11 32 33 99 20 10 0
1 22 42 77 11 64 77 3
2 33 15 26 110 55 77 2
3 44 35 64 22 33 10 1
I have a variable, 'ImageName' which ranges from 0-1600. I want to create a new variable, 'LocationCode', based on the value of 'ImageName'.
If 'ImageName' is less than 70, I want 'LocationCode' to be 1. if 'ImageName' is between 71 and 90, I want 'LocationCode' to be 2. I have 13 different codes in all. I'm not sure how to write this in python pandas. Here's what I tried:
def spatLoc(ImageName):
    if ImageName <= 70:
        LocationCode = 1
    elif ImageName > 70 and ImageName <= 90:
        LocationCode = 2
    return LocationCode
df['test'] = df.apply(spatLoc(df['ImageName'])
but it returned an error. I'm clearly not defining things the right way but I can't figure out how to.
You can just use 2 boolean masks:
df.loc[df['ImageName'] <= 70, 'Test'] = 1
df.loc[(df['ImageName'] > 70) & (df['ImageName'] <= 90), 'Test'] = 2
By using the masks you only set the value where the boolean condition is met. For the second mask you need the & operator to combine the two conditions, and each condition must be wrapped in parentheses because of operator precedence.
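With 13 codes the masks get verbose; a hedged sketch using np.select instead, where only the first two boundaries come from the question and the remaining conditions would follow the same pattern:
import numpy as np

conditions = [
    df['ImageName'] <= 70,
    df['ImageName'].between(71, 90),
    # ... one condition per remaining code, up to 13
]
choices = [1, 2]  # the matching LocationCode values
df['LocationCode'] = np.select(conditions, choices, default=0)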
Actually, I think it would be better to define your bin values and call cut. For example:
In [20]:
df = pd.DataFrame({'ImageName': np.random.randint(0, 100, 20)})
df
Out[20]:
ImageName
0 48
1 78
2 5
3 4
4 9
5 81
6 49
7 11
8 57
9 17
10 92
11 30
12 74
13 62
14 83
15 21
16 97
17 11
18 34
19 78
In [22]:
df['group'] = pd.cut(df['ImageName'], range(0, 105, 10), right=False)
df
Out[22]:
ImageName group
0 48 [40, 50)
1 78 [70, 80)
2 5 [0, 10)
3 4 [0, 10)
4 9 [0, 10)
5 81 [80, 90)
6 49 [40, 50)
7 11 [10, 20)
8 57 [50, 60)
9 17 [10, 20)
10 92 [90, 100)
11 30 [30, 40)
12 74 [70, 80)
13 62 [60, 70)
14 83 [80, 90)
15 21 [20, 30)
16 97 [90, 100)
17 11 [10, 20)
18 34 [30, 40)
19 78 [70, 80)
Here the bin values were generated using range, but you could pass your own list of bin values. Once you have the bins, you can define a lookup dict:
In [32]:
d = dict(zip(df['group'].unique(), range(len(df['group'].unique()))))
d
Out[32]:
{'[0, 10)': 2,
'[10, 20)': 4,
'[20, 30)': 9,
'[30, 40)': 7,
'[40, 50)': 0,
'[50, 60)': 5,
'[60, 70)': 8,
'[70, 80)': 1,
'[80, 90)': 3,
'[90, 100)': 6}
You can now call map and add your new column:
In [33]:
df['test'] = df['group'].map(d)
df
Out[33]:
ImageName group test
0 48 [40, 50) 0
1 78 [70, 80) 1
2 5 [0, 10) 2
3 4 [0, 10) 2
4 9 [0, 10) 2
5 81 [80, 90) 3
6 49 [40, 50) 0
7 11 [10, 20) 4
8 57 [50, 60) 5
9 17 [10, 20) 4
10 92 [90, 100) 6
11 30 [30, 40) 7
12 74 [70, 80) 1
13 62 [60, 70) 8
14 83 [80, 90) 3
15 21 [20, 30) 9
16 97 [90, 100) 6
17 11 [10, 20) 4
18 34 [30, 40) 7
19 78 [70, 80) 1
The above can be modified to suit your needs; it just demonstrates an approach that should be fast, without the need to iterate over your df.
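To go straight to the numeric codes without the intermediate lookup dict, pd.cut also accepts labels. A sketch where only the 70 and 90 edges come from the question; 1600 is the stated maximum of ImageName, and further edges would be added for the remaining codes:
import pandas as pd

bins = [0, 70, 90, 1600]   # placeholder edges beyond the two given
labels = [1, 2, 3]         # one code per interval
df['LocationCode'] = pd.cut(df['ImageName'], bins=bins, labels=labels,
                            include_lowest=True)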
In Python, you use dictionary lookup notation to find a field within a row; the field name here is ImageName. In the spatLoc() function below, the parameter row holds the entire row, and you access an individual column by using the field name as the key.
def spatLoc(row):
    if row['ImageName'] <= 70:
        LocationCode = 1
    elif row['ImageName'] > 70 and row['ImageName'] <= 90:
        LocationCode = 2
    return LocationCode
df['test'] = df.apply(spatLoc, axis=1)