1. Background
The .xls files I have contain several parameters for multiple pollutants, measured in various aspects at different sites.
I created a simplified dataframe below as an illustration:
Some notes:
Column Site contains the monitoring sites. In this case, S1 and S2 are the only two locations.
Column Time contains the monitoring period for each site.
Species A and B represent two chemical pollutants that were detected.
Conc is a key parameter for each species (A and B) representing its concentration. Note that the concentration of species A is measured twice, in parallel.
P and Q are two different analysis experiments. Since species A has two parallel samples, it has P1, P2, P3 and Q1, Q2 as analysis results. Species B has only been analyzed by P, so P1, P2, P3 are its only parameters.
After reading some posts on manipulating pivot tables with pandas, I wanted to give it a try.
2. My target
I mocked up my target file structure manually in Excel; it looks like this:
3. My work
df = pd.ExcelFile("./test_file.xls")
df = df.parse("Sheet1")
pd.pivot_table(df, index=["Site", "Time", "Species"])
This is the result:
Update
What I'm trying to figure out is how to create two columns P and Q with sub-columns below them.
I have re-uploaded my test file here; anyone interested can download it.
The P and Q tests apply to each sample of species A respectively.
The Conc test applies to both.
Any advice would be appreciated!
IIUC
You want the same dataframe, but with a better column index.
To create the first level:
level0 = df.columns.str.extract(r'([^\d]*)', expand=False)
Then assign a MultiIndex to the columns attribute:
df.columns = pd.MultiIndex.from_arrays([level0, df.columns])
Looks like:
print df
Conc P Q
Conc P1 P2 P3 Q1 Q2
Site Time Species
S1 20141222 A 0.79 0.02 0.62 1.05 0.01 1.73
20141228 A 0.13 0.01 0.79 0.44 0.01 1.72
20150103 B 0.48 0.03 1.39 0.84 NaN NaN
20150104 A 0.36 0.02 1.13 0.31 0.01 0.94
20150109 A 0.14 0.01 0.64 0.35 0.00 1.00
20150114 B 0.47 0.08 1.16 1.40 NaN NaN
20150115 A 0.62 0.02 0.90 0.95 0.01 2.63
20150116 A 0.71 0.03 1.72 1.71 0.01 2.53
20150121 B 0.61 0.03 0.67 0.87 NaN NaN
S2 20141222 A 0.23 0.01 0.66 0.44 0.01 1.49
20141228 A 0.42 0.06 0.99 1.56 0.00 2.18
20150103 B 0.09 0.01 0.56 0.12 NaN NaN
20150104 A 0.18 0.01 0.56 0.36 0.00 0.67
20150109 A 0.50 0.03 0.74 0.71 0.00 1.11
20150114 B 0.64 0.06 1.76 0.92 NaN NaN
20150115 A 0.58 0.05 0.77 0.95 0.01 1.54
20150116 A 0.93 0.04 1.33 0.69 0.00 0.82
20150121 B 0.33 0.09 1.33 0.76 NaN NaN
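If it helps, here is a minimal, self-contained sketch of the same idea; the frame is built by hand from the first two rows of the table above rather than read from the .xls file:
import pandas as pd

# Hand-built stand-in for the parsed sheet: first two rows of the result above.
df = pd.DataFrame(
    {"Conc": [0.79, 0.13], "P1": [0.02, 0.01], "P2": [0.62, 0.79],
     "P3": [1.05, 0.44], "Q1": [0.01, 0.01], "Q2": [1.73, 1.72]},
    index=pd.MultiIndex.from_tuples(
        [("S1", 20141222, "A"), ("S1", 20141228, "A")],
        names=["Site", "Time", "Species"]))

# Strip the trailing digits to get the first level ("Conc", "P", "Q") ...
level0 = df.columns.str.extract(r'([^\d]*)', expand=False)
# ... and pair it with the original names to form the two-level column index.
df.columns = pd.MultiIndex.from_arrays([level0, df.columns])
print(df)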
Related
I am trying to fill the dataframe under a certain condition but I cannot find the appropriate solution. My actual dataframe is a bit larger, but let's say it looks like this:
   0     1     2     3     4     5
0.32  0.40  0.60  1.20  3.40  0.00
0.17  0.12  0.00  1.30  2.42  0.00
0.31  0.90  0.80  1.24  4.35  0.00
0.39  0.00  0.90  1.50  1.40  0.00
And I want to update the values so that once 0.00 appears in a row (rows 2 and 4), all values from that point to the end of the row become 0.00. Something like this:
   0     1     2     3     4     5
0.32  0.40  0.60  1.20  3.40  0.00
0.17  0.12  0.00  0.00  0.00  0.00
0.31  0.90  0.80  1.24  4.35  0.00
0.39  0.00  0.00  0.00  0.00  0.00
I have tried with
for t in range(1, T-1):
    data = np.where(df[t-1] == 0, 0, df[t])
and several other ways, but I couldn't get what I want.
Thanks!
Try as follows:
Select from df with df[df.eq(0)]. This gives us all the zeros, with NaN values everywhere else.
Now, apply df.ffill along axis=1. This carries the zeros through to the end of each row.
Finally, change the dtype to bool by chaining df.astype, turning all zeros into False and all NaN values into True.
We feed the result to df.where: for all True values we pick from df itself, and for all False values we insert 0.
df = df.where(df[df.eq(0)].ffill(axis=1).astype(bool), 0)
print(df)
0 1 2 3 4 5
0 0.32 0.40 0.6 1.20 3.40 0.0
1 0.17 0.12 0.0 0.00 0.00 0.0
2 0.31 0.90 0.8 1.24 4.35 0.0
3 0.39 0.00 0.0 0.00 0.00 0.0
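For completeness, a self-contained version of the same chain, using the sample data from the question:
import pandas as pd

# Sample data from the question; the second and fourth rows have a zero partway through.
df = pd.DataFrame([[0.32, 0.40, 0.60, 1.20, 3.40, 0.00],
                   [0.17, 0.12, 0.00, 1.30, 2.42, 0.00],
                   [0.31, 0.90, 0.80, 1.24, 4.35, 0.00],
                   [0.39, 0.00, 0.90, 1.50, 1.40, 0.00]])

# Keep only the zeros, carry them forward along each row, then cast to a
# boolean mask that is True wherever the original value should be kept.
mask = df[df.eq(0)].ffill(axis=1).astype(bool)
print(df.where(mask, 0))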
I have a pandas dataframe with multiple columns whose values increase from some value between 0 and 1 in column A up to column E, which is always 1 (representing cumulative probabilities).
ID A B C D E SIM
1: 0.49 0.64 0.86 0.97 1.00 0.98
2: 0.76 0.84 0.98 0.99 1.00 0.87
3: 0.32 0.56 0.72 0.92 1.00 0.12
The column SIM represents a column with random uniform numbers.
I wish to add a new column SIM_CAT whose value is the name of the column that forms the right boundary of the interval into which the value in column SIM falls:
ID A B C D E SIM SIM_CAT
1: 0.49 0.64 0.86 0.97 1.00 0.98 E
2: 0.76 0.84 0.98 0.99 1.00 0.87 C
3: 0.32 0.56 0.72 0.92 1.00 0.12 A
Is there a concise way to do that?
You can compare the columns with SIM and use idxmax to find the first column whose value is greater than or equal to SIM:
cols = list('ABCDE')
df['SIM_CAT'] = df[cols].ge(df.SIM, axis=0).idxmax(axis=1)
df
ID A B C D E SIM SIM_CAT
0 1: 0.49 0.64 0.86 0.97 1.0 0.98 E
1 2: 0.76 0.84 0.98 0.99 1.0 0.87 C
2 3: 0.32 0.56 0.72 0.92 1.0 0.12 A
If SIM can contain values greater than 1:
cols = list('ABCDE')
df['SIM_CAT'] = None
df.loc[df.SIM <= 1, 'SIM_CAT'] = df[cols].ge(df.SIM, axis=0).idxmax(axis=1)
df
ID A B C D E SIM SIM_CAT
0 1: 0.49 0.64 0.86 0.97 1.0 0.98 E
1 2: 0.76 0.84 0.98 0.99 1.0 0.87 C
2 3: 0.32 0.56 0.72 0.92 1.0 0.12 A
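A self-contained version, building the example frame from the question first:
import pandas as pd

# The example frame from the question (ID kept as an ordinary column).
df = pd.DataFrame({"ID": ["1:", "2:", "3:"],
                   "A": [0.49, 0.76, 0.32], "B": [0.64, 0.84, 0.56],
                   "C": [0.86, 0.98, 0.72], "D": [0.97, 0.99, 0.92],
                   "E": [1.00, 1.00, 1.00], "SIM": [0.98, 0.87, 0.12]})

cols = list('ABCDE')
# Flag the cumulative-probability columns that are >= SIM in each row,
# then take the left-most flagged column name.
df['SIM_CAT'] = df[cols].ge(df['SIM'], axis=0).idxmax(axis=1)
print(df)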
I am trying to create a list (the 3 most highly correlated selections) from a correlation matrix. Let's say I have the following matrix:
A B C D E
A 1.00 0.15 0.57 0.11 0.98
B 0.59 1.00 0.32 0.24 0.54
C 0.96 0.65 1.00 0.22 0.67
D 0.72 0.33 0.78 1.00 0.92
E 0.88 0.94 0.61 0.48 1.00
So let's say I then sort the matrix by column B to show the most correlated entries; the matrix will now look like this:
A B C D E
B 0.59 1.00 0.32 0.24 0.54
E 0.88 0.94 0.61 0.48 1.00
C 0.96 0.65 1.00 0.22 0.67
D 0.72 0.33 0.78 1.00 0.92
A 1.00 0.15 0.57 0.11 0.98
As you can see, the matrix has been sorted to show column B's most correlated counterparts. What I would then like is to return the top 3 correlated letters as a list, leaving out the top line (B), as it is obviously correlated 1:1 with itself.
So I would like top_correlated = ['E', 'C', 'D'], for example; that is what I want my list to be.
As with all my posts, I'm aware that the etiquette is to at least attempt to show some effort with regards to code, but as usual I'm completely stumped, hence this post. Any help greatly appreciated.
Instead of sorting the entire DataFrame, you can call nlargest on your column, grab the index, and slice off the first element, since it will always be the column itself.
col = 'B'
df[col].nlargest(4).index[1:].tolist()
['E', 'C', 'D']
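Put together with the matrix from the question, a minimal sketch looks like this:
import pandas as pd

# The correlation matrix from the question.
corr = pd.DataFrame(
    [[1.00, 0.15, 0.57, 0.11, 0.98],
     [0.59, 1.00, 0.32, 0.24, 0.54],
     [0.96, 0.65, 1.00, 0.22, 0.67],
     [0.72, 0.33, 0.78, 1.00, 0.92],
     [0.88, 0.94, 0.61, 0.48, 1.00]],
    index=list('ABCDE'), columns=list('ABCDE'))

col = 'B'
# nlargest(4) returns B itself (1.0) plus the next three; drop the first label.
top_correlated = corr[col].nlargest(4).index[1:].tolist()
print(top_correlated)   # ['E', 'C', 'D']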
I have a dataframe df1 where each column represents a time series of returns. I want to create a new dataframe df2 with one column for each column in df1, where each column in df2 is defined as the average of the 5 most correlated columns in df1.
import pandas as pd
import numpy as np
from string import ascii_letters
np.random.seed([3,1415])
df1 = pd.DataFrame(np.random.randn(100, 10).round(2),
columns=list(ascii_letters[26:36]))
print df1.head()
A B C D E F G H I J
0 -2.13 -1.27 -1.97 -2.26 -0.35 -0.03 0.32 0.35 0.72 0.77
1 -0.61 0.35 -0.35 -0.42 -0.91 -0.14 0.75 -1.50 0.61 0.40
2 -0.96 1.49 -0.35 -1.47 1.06 1.06 0.59 0.30 -0.77 0.83
3 1.49 0.26 -0.90 0.38 -0.52 0.05 0.95 -1.03 0.95 0.73
4 1.24 0.16 -1.34 0.16 1.26 0.78 1.34 -1.64 -0.20 0.13
I expect the head of the resulting dataframe rounded to 2 places to look like:
A B C D E F G H I J
0 -0.78 -0.70 -0.53 -0.45 -0.99 -0.10 -0.47 -0.86 -0.31 -0.64
1 -0.49 -0.11 -0.45 -0.03 -0.04 0.10 -0.26 0.11 -0.06 -0.10
2 0.03 0.13 0.54 0.33 -0.13 0.27 0.22 0.32 0.41 0.27
3 -0.22 0.13 0.19 0.58 0.63 0.24 0.34 0.51 0.32 0.22
4 -0.04 0.31 0.23 0.52 0.43 0.24 0.07 0.31 0.73 0.43
For each column in the correlation matrix, take the six largest values and ignore the first one (i.e. the column's 100% correlation with itself). Use a dictionary comprehension to do this for each column.
Use another dictionary comprehension to locate these columns in df1 and take their mean. Create a dataframe from the result, and reorder the columns to match those of df1 by appending [df1.columns].
corr = df1.corr()
most_correlated_cols = {col: corr[col].nlargest(6)[1:].index
for col in corr}
df2 = pd.DataFrame({col: df1.loc[:, most_correlated_cols[col]].mean(axis=1)
for col in df1})[df1.columns]
>>> df2.head()
A B C D E F G H I J
0 -0.782 -0.698 -0.526 -0.452 -0.994 -0.102 -0.472 -0.856 -0.310 -0.638
1 -0.486 -0.106 -0.454 -0.032 -0.042 0.100 -0.258 0.108 -0.064 -0.102
2 0.026 0.132 0.544 0.330 -0.130 0.272 0.224 0.320 0.414 0.274
3 -0.224 0.128 0.186 0.582 0.626 0.242 0.344 0.506 0.318 0.224
4 -0.044 0.310 0.230 0.518 0.428 0.238 0.068 0.306 0.734 0.432
%%timeit
corr = df1.corr()
most_correlated_cols = {
col: corr[col].nlargest(6)[1:].index
for col in corr}
df2 = pd.DataFrame({col: df1.loc[:, most_correlated_cols[col]].mean(axis=1)
for col in df1})[df1.columns]
100 loops, best of 3: 10 ms per loop
%%timeit
corr = df1.corr()
df2 = corr.apply(argsort).head(5).apply(lambda x: avg_of(x, df1))
100 loops, best of 3: 16 ms per loop
Setup
import pandas as pd
import numpy as np
from string import ascii_letters
np.random.seed([3,1415])
df1 = pd.DataFrame(np.random.randn(100, 10).round(2),
columns=list(ascii_letters[26:36]))
Solution
corr = df1.corr()
# I don't want a security's correlation with itself to be included.
# Because `corr` is symmetrical, I can assume that a series' name will be in its index.
def remove_self(x):
return x.loc[x.index != x.name]
# This utilizes `remove_self`, then sorts by correlation
# and returns the index.
def argsort(x):
return pd.Series(remove_self(x).sort_values(ascending=False).index)
# This reaches into `df` and gets all columns identified in x
# then takes the mean.
def avg_of(x, df):
return df.loc[:, x].mean(axis=1)
# Putting it all together.
df2 = corr.apply(argsort).head(5).apply(lambda x: avg_of(x, df1))
print df2.round(2).head()
A B C D E F G H I J
0 -0.78 -0.70 -0.53 -0.45 -0.99 -0.10 -0.47 -0.86 -0.31 -0.64
1 -0.49 -0.11 -0.45 -0.03 -0.04 0.10 -0.26 0.11 -0.06 -0.10
2 0.03 0.13 0.54 0.33 -0.13 0.27 0.22 0.32 0.41 0.27
3 -0.22 0.13 0.19 0.58 0.63 0.24 0.34 0.51 0.32 0.22
4 -0.04 0.31 0.23 0.52 0.43 0.24 0.07 0.31 0.73 0.43
I have the following df in pandas:
df:
DATE STOCK DATA1 DATA2 DATA3
01/01/12 ABC 0.40 0.88 0.22
04/01/12 ABC 0.50 0.49 0.13
07/01/12 ABC 0.85 0.36 0.83
10/01/12 ABC 0.28 0.12 0.39
01/01/13 ABC 0.86 0.87 0.58
04/01/13 ABC 0.95 0.39 0.87
07/01/13 ABC 0.60 0.25 0.56
10/01/13 ABC 0.15 0.28 0.69
01/01/11 XYZ 0.94 0.40 0.50
04/01/11 XYZ 0.65 0.19 0.81
07/01/11 XYZ 0.89 0.59 0.69
10/01/11 XYZ 0.12 0.09 0.18
01/01/12 XYZ 0.25 0.94 0.55
04/01/12 XYZ 0.07 0.22 0.67
07/01/12 XYZ 0.46 0.08 0.54
10/01/12 XYZ 0.04 0.03 0.94
...
I want to group by stock, sort by date, and then, for specified columns (in this case DATA1 and DATA3), get the sum of the last four entries (TTM data).
The output would look like this:
DATE STOCK DATA1 DATA2 DATA3 DATA1_TTM DATA3_TTM
01/01/12 ABC 0.40 0.88 0.22 NaN NaN
04/01/12 ABC 0.50 0.49 0.13 NaN NaN
07/01/12 ABC 0.85 0.36 0.83 NaN NaN
10/01/12 ABC 0.28 0.12 0.39 2.03 1.56
01/01/13 ABC 0.86 0.87 0.58 2.49 1.92
04/01/13 ABC 0.95 0.39 0.87 2.94 2.66
07/01/13 ABC 0.60 0.25 0.56 2.69 2.39
10/01/13 ABC 0.15 0.28 0.69 2.55 2.70
01/01/11 XYZ 0.94 0.40 0.50 NaN NaN
04/01/11 XYZ 0.65 0.19 0.81 NaN NaN
07/01/11 XYZ 0.89 0.59 0.69 NaN NaN
10/01/11 XYZ 0.12 0.09 0.18 2.59 2.18
01/01/12 XYZ 0.25 0.94 0.55 1.90 2.23
04/01/12 XYZ 0.07 0.22 0.67 1.33 2.09
07/01/12 XYZ 0.46 0.08 0.54 0.89 1.94
10/01/12 XYZ 0.04 0.03 0.94 0.82 2.70
...
My approach so far has been to sort by date, then group, then iterate through each group and, if there are 3 events older than the current event, sum them. Also, I want to check that the dates fall within 1 year. Can anyone offer a better way in Python? Thank you.
Added: As a clarification for the 1-year part, let's say the last four dates are 1/1/1993, 4/1/12, 7/1/12, 10/1/12 -- a data error. I wouldn't want to sum those four; I would want that entry to be NaN.
For this I think you can use transform and rolling_sum. Starting from your dataframe, I might do something like:
>>> df["DATE"] = pd.to_datetime(df["DATE"]) # switch to datetime to ease sorting
>>> df = df.sort(["STOCK", "DATE"])
>>> rsum_columns = "DATA1", "DATA3"
>>> grouped = df.groupby("STOCK")[rsum_columns]
>>> new_columns = grouped.transform(lambda x: pd.rolling_sum(x, 4))
>>> df[new_columns.columns + "_TTM"] = new_columns
>>> df
DATE STOCK DATA1 DATA2 DATA3 DATA1_TTM DATA3_TTM
0 2012-01-01 00:00:00 ABC 0.40 0.88 0.22 NaN NaN
1 2012-04-01 00:00:00 ABC 0.50 0.49 0.13 NaN NaN
2 2012-07-01 00:00:00 ABC 0.85 0.36 0.83 NaN NaN
3 2012-10-01 00:00:00 ABC 0.28 0.12 0.39 2.03 1.57
4 2013-01-01 00:00:00 ABC 0.86 0.87 0.58 2.49 1.93
5 2013-04-01 00:00:00 ABC 0.95 0.39 0.87 2.94 2.67
6 2013-07-01 00:00:00 ABC 0.60 0.25 0.56 2.69 2.40
7 2013-10-01 00:00:00 ABC 0.15 0.28 0.69 2.56 2.70
8 2011-01-01 00:00:00 XYZ 0.94 0.40 0.50 NaN NaN
9 2011-04-01 00:00:00 XYZ 0.65 0.19 0.81 NaN NaN
10 2011-07-01 00:00:00 XYZ 0.89 0.59 0.69 NaN NaN
11 2011-10-01 00:00:00 XYZ 0.12 0.09 0.18 2.60 2.18
12 2012-01-01 00:00:00 XYZ 0.25 0.94 0.55 1.91 2.23
13 2012-04-01 00:00:00 XYZ 0.07 0.22 0.67 1.33 2.09
14 2012-07-01 00:00:00 XYZ 0.46 0.08 0.54 0.90 1.94
15 2012-10-01 00:00:00 XYZ 0.04 0.03 0.94 0.82 2.70
[16 rows x 7 columns]
I don't know what you're asking by "Also, I want to check to see if the dates fall within 1 year", so I'll leave that alone.
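As a side note, DataFrame.sort and pd.rolling_sum have since been removed from pandas; a minimal sketch of the same idea with the current sort_values/rolling API (assuming df is laid out as in the question) would be:
import pandas as pd

df["DATE"] = pd.to_datetime(df["DATE"])    # switch to datetime to ease sorting
df = df.sort_values(["STOCK", "DATE"])     # replaces the removed df.sort

rsum_columns = ["DATA1", "DATA3"]
# Per stock, sum each column over a 4-row window (NaN until 4 rows are available).
new_columns = (df.groupby("STOCK")[rsum_columns]
                 .transform(lambda x: x.rolling(4).sum()))
df = df.join(new_columns.add_suffix("_TTM"))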