I have a CSV file (Mspec Data) which looks like this:
#Header
#
"Cycle";"Time";"ms";"mass amu";"SEM c/s"
0000000001;00:00:01;0000001452; 1,00; 620
0000000001;00:00:01;0000001452; 1,20; 4730
0000000001;00:00:01;0000001452; 1,40; 4610
... ;..:..:..;..........;.........;...........
I read it via:
df = pd.read_csv(Filename, header=30,delimiter=';',decimal= ',' )
the result looks like this:
Cycle Time ms mass amu SEM c/s
0 1 00:00:01 1452 1.0 620
1 1 00:00:01 1452 1.2 4730
2 1 00:00:01 1452 1.4 4610
... ... ... ... ... ...
3872 4 00:06:30 390971 1.0 32290
3873 4 00:06:30 390971 1.2 31510
This data contains several Mass spec scans with identical parameters. Cycle number 1 means scan 1 and so forth. I would like to calculate the mean in the last column SEM c/s for each corresponding identical mass. in the end i would like to have a new data frame containing only:
ms "mass amu" "SEM c/s(mean over all cycles)"
obviously the mean of the mass does not need to be calculated. I would like to avoid to read each cycle into a new dataframe as this would mean I have to look up the length of each Mass spectrum . The "mass range" and " resolution" is obviously different for different measurements (Solution).
I guess doing the calculation in numpy directly would be best but I am stuck?
Thank you in advance
You can use groupby(), something like this:
df.groupby(['ms', 'mass amu'])['SEM c/s'].mean()
You have different ms over all the cycles, and you want to calculate the mean of SEM over each group of same ms. I will show you a step-by-step example.
You should invoke each group and then put the mean in a dictionary to convert in DataFrame.
ms_uni = df['ms'].unique() #calculate the unique ms values
new_df_dict = { "ma":[], "SEM":[] } #later you will rename them
for un in range( len(ms_uni) ):
cms = ms_uni[un]
new_df_dict['ma'].append( cms )
new_df_dict['SEM'].append( df[ df['ms']==cms ]['SEM c/s'].mean() ) #advise: change the column name in a more safe SEM-c_s
new_df = pd.DataFrame(new_df_dict) #end of the dirty work
new_df.rename(index=str, columns={'ma':"mass amu", "SEM": "SEM c/s(mean over all cycles)"} )
Hope it will be helpful
Related
I have two pandas data frames,
First frame ip2CountryDF has 2M+ records of:
startIP, endIP, countryISO
16777216,16777471,US
16777472,16778239,CN
16778240,16779263,AU
IP Addresses in this data frame are represented as Integers for efficiency and matching purposes
Second frame inputDF has 60K+ records of:
sourceIP, eventTime, integerIP
114.119.157.43,01/Mar/2021,1920441643
193.205.128.7,01/Mar/2021,3251470343
193.205.128.7,01/Mar/2021,3251470343
193.205.128.7,01/Mar/2021,3251470343
The data I have are all from publicly available datasets
What I'm trying to do is to identify source country for each row in inputDF based on the value in ip2CountryDF.
Ideally, I will pick inputDF['integerIP'] and get ip2CountryDF['countryISO'] where the integerIP from inputDF is in the range between ip2CountryDF['startIP'] and ip2CountryDF['endIP']
So far I got the data done using a for loop, it worked on the test set (searching data for 5 entries in inputDF) but when I hit a bigger dataset my machine fans pick up and after a couple of minutes I get no results and I cancel the job (which tells me how inefficient my code is), here is the code I use (inefficient but it works):
countryList = []
for index, row in inputDF.iterrows():
integerIP = row['integerIP']
countryISO = ip2CountryDF.loc[(integerIP >= ip2CountryDF['startIP']) & (integerIP <= ip2CountryDF['endIP']),'countryISO'].iloc[0]
countryList.append(countryISO)
inputDF['countryISO'] = countryList
what I need help with, can this better be handled in a more efficient and more (panda-like) way, I was trying to use something like:
inputDF['countryISO'] = ip2CountryDF.loc[(inputDF['integerIP'] >= ip2CountryDF['startIP']) & (inputDF['integerIP'] <= ip2CountryDF['endIP']),'countryISO'].iloc[0]
Many thanks for taking the time to help me with this
You are so closer. You just lack a call to "map" function.
Load the IpToCountry.csv (for documentation purpose):
IP2COUNTRY = "https://github.com/urbanadventurer/WhatWeb/raw/master/plugins/IpToCountry.csv"
db = pd.read_csv(IP2COUNTRY, header=None, usecols=[0, 1, 4],
names=["startIP", "endIP", "countryISO"], comment="#")
>>> db
startIP endIP countryISO
0 0 16777215 ZZ
1 16777216 16777471 AU
2 16777472 16777727 CN
3 16777728 16778239 CN
4 16778240 16779263 AU
... ... ... ...
211757 4211081216 4227858431 ZZ
211758 4227858432 4244635647 ZZ
211759 4244635648 4261412863 ZZ
211760 4261412864 4278190079 ZZ
211761 4278190080 4294967295 ZZ
[211762 rows x 3 columns]
Create a function ip2country that for a decimal ip returns the corresponding iso country code:
def ip2country(ip: int):
return db.loc[(db["startIP"] <= ip) & (ip <= db["endIP"]), "countryISO"].squeeze()
df["countryISO"] = df["integerIP"].map(ip2country)
>>> df
sourceIP eventTime integerIP countryISO
0 114.119.157.43 2021-03-01 1920441643 SG
1 193.205.128.7 2021-03-01 3251470343 IT
2 193.205.128.7 2021-03-01 3251470343 IT
3 193.205.128.7 2021-03-01 3251470343 IT
Performance
For 10k ip addresses, results returned on average in 11.7s on a 2,5 GHz Quad-Core Intel Core i7.
df1 = pd.DataFrame({"integerIP": np.random.randint(db["startIP"].min(),
db["endIP"].max()+1,
size=10000)})
%timeit df1["integerIP"].map(ip2country)
11.7 s ± 489 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
:Edit: fixed a misunderstanding on my part - i am getting a nested list, not an array.
i'm working with a function in a for loop - bootstrapping some model predictions.
code looks like this:
def revenue(product):
revenue = predict * 4500
profit = revenue - 500000
return profit
and the loop i am feeding it into looks like this:
# set up a loop to select 500 random samples and train our region 2 data set
model = LinearRegression(fit_intercept = True, normalize = False)
features = r2_test.drop(['product'],axis=1)
values = []
for i in range(1000):
subsample = r2_test.sample(500,replace=False)
features = subsample.drop(['product'],axis=1)
predict = model2.predict(features)
result = (revenue(predict))
values.append(result)
so doing a 1000 loop of predictions on 500 samples from this dataframe:
id f0 f1 f2 product
0 74613 -15.001348 -8.276000 -0.005876 3.179103
1 9753 14.272088 -3.475083 0.999183 26.953261
2 93502 6.263187 -5.948386 5.001160 134.766305
3 33405 -13.081196 -11.506057 4.999415 137.945408
4 16486 12.702195 -8.147433 5.004363 134.766305
5 27901 -3.327590 -2.205276 3.003647 84.038886
6 69620 -11.142655 -10.133399 4.002382 110.992147
7 78940 4.234715 -0.001354 2.004588 53.906522
8 56159 13.355129 -0.332068 4.998647 134.766305
9 73142 1.069227 -11.025667 4.997844 137.945408
10 12663 11.777049 -5.334084 2.003033 53.906522
11 39849 16.320755 -0.562946 -0.001783 0.000000
12 61800 7.736313 -6.093374 3.982531 107.813044
13 72213 6.695604 -0.749449 -0.007630 0.000000
14 5479 -10.985487 -5.605994 2.991130 84.038886
15 6297 -0.347599 -6.275884 -0.003448 3.179103
16 88123 12.300570 2.944454 2.005541 53.906522
17 68352 8.900460 -5.632857 4.994324 134.766305
18 99029 -13.412826 -4.729495 2.998590 84.038886
19 64238 -4.373526 -8.590017 2.995379 84.038886
now, once i have my output, i want to select the top 200 predictions from each iteration, i'm using this loop:
# calculate the max value of each of the 500 iterations, then total them for the total profit
top_200 = []
for i in range(0,500):
profits = values.nlargest(200,[i],keep = 'all')
top_200.append(profits)
the problem i am running into is - when i feed values into the top_200 loop, i end up with an array of the selected 200 by column:
[ 0 1 2 3 \
628 125790.297387 -10140.964686 -361625.210913 -243132.040492
32 125429.134599 -368765.455544 -249361.525792 -497190.522207
815 124522.095794 -1793.660411 -11410.126264 114928.508488
645 123891.732231 115946.193531 104048.117460 -246350.752024
119 123063.545808 -124032.987348 -367200.191889 -131237.863430
.. ... ... ... ...
but i'd like to turn it into a dataframe, however, i haven't figured out how to do that while preserving the structure where 0 has it's 200 values, 1 has it's 200 values, etc.
i thought i could do something like:
top_200 = pd.DataFrame(top_200,columns= range(0,500))
and it gives me 500 columns, but only column 0 has anything in it and i end up with a [500,500] dataframe instead of the anticipated 200 rows by 500 columns.
i'm fairly sure there is a good way to do this, but my searching thus far has not turned anything up. I also am not sure what i am looking for is called so, i'm not sure what exactly i am looking for.
any input would be appreciated! Thanks in advance.
:Further editing:
so now that i know i'm getting a lists of lists, not an array, i thought i'd try to write to a dataframe instead:
# calculate the top 200 values of each of the 500 iterations
top_200 = pd.DataFrame(columns=['profits'])
for i in range(0,500):
top_200.loc[i] = i
profits = values.nlargest(200,[i],keep = 'all')
top_200.append(profits)
top_200.head()
but i've futzed something up here as my results are:
profits
0 0
1 1
2 2
3 3
4 4
where my expected results would be something like:
col 1 col2 col3
0 first n_largest first n_largest first n_largest
1 second n_largest second n_largest second n_largest
3 third n_largest third n_largest third n_largest
So, After doing some research based on #CygnusX 's recommended question i figured out that i was laboring under the impression that i had an array as the output, but of course top-200 = [] is a list, which, when combined with the nlargest gives me a list of lists.
Now that i understood the problem better, i converted the list of lists into a dataframe, and then transposed the data - which gave me the results i was looking for.
# calculate the max value of each of the 500 iterations, then total them for the total profit
top_200 = []
for i in range(0,500):
profits = (values.nlargest(200,[i],keep = 'all')).mean()
top_200.append(profits)
test = pd.DataFrame(top_200)
test = test.transpose()
output (screenshot, because, 500 columns.):
there is probably a more elegant way to accomplish this, like not using a list but a dataframe, but, i couldn't get the .append to work the way i wanted in a dataframe, since i wanted to preserve the list of 200 nlargest, not just have a sum or a mean. (which the append worked great for!)
I am working on GPS trajectories.
I am trying to find the mean of velocity of vehicles that belong to three different classes.
Mean of each vehicle is needed.
"Vehicle ID","Frame ID","Total Frames","Global Time","Local X","Local Y","Global X","Global Y","V_Len","V_Width","V_Class","V_Vel","V_Acc","Lane_ID","Pre_Veh","Fol_Veh","Spacing","Headway"
3033,9064,633,1118847885300,42.016,377.256,6451360.093,1873080.530,19.5,8.5,2,27.90,4.29,4,3022,0,93.16,3.34
3033,9065,633,1118847885400,42.060,380.052,6451362.114,1873078.608,19.5,8.5,2,28.43,6.63,4,3022,0,93.87,3.30
3033,9066,633,1118847885500,42.122,382.924,6451364.187,1873076.613,19.5,8.5,2,29.07,6.89,4,3022,0,94.49,3.25
3033,9067,633,1118847885600,42.200,385.882,6451366.307,1873074.553,19.5,8.5,2,29.62,4.41,4,3022,0,95.04,3.21
3033,9068,633,1118847885700,42.265,388.885,6451368.490,1873072.453,19.5,8.5,2,29.93,1.57,4,3022,0,95.57,3.19
df.sort_values(by=["Global Time"])
df["US Time"]=pd.to_datetime(df["Global Time"], unit='ms').dt.tz_localize('UTC' ).dt.tz_convert('America/Los_Angeles')
#Converting gps millisecond TS to US Local Time date format
#sorting
grouped=df.groupby('V_Class')
#find mean of all vehicles in each class
print( grouped['V_Vel'].agg([np.mean,np.std]))
for index, row in df.iterrows():
print (row["Vehicle ID"], row["V_Class"])
Actual output
V_Class mean std
1 40.487673 14.647576
2 37.376317 14.940034
3 40.953483 11.214995
Expected output
Vehicle ID V_Class mean std
3033 2 32.4 12.4
125 1 41.3 9.2
.
likewise
If you want the mean per vehicle, just group by vehicle:
df.groupby(['Vehicle ID','V_Class'])['V_Vel'].agg([np.mean, np.std])
it should give (with your sample data):
mean std
Vehicle ID V_Class
3033 2 28.99 0.834955
I have a large dataset, below the training and test datasets
train_data is from 2016-01-29 to 2017-12-31
head(train_data)
date Date_time Temp Ptot JFK AEH ART CS CP
1 2016-01-29 2016-01-29 00:00:00 30.3 1443.888 52.87707 49.36879 28.96548 6.239999 49.61212
2 2016-01-29 2016-01-29 00:15:00 30.3 1410.522 49.50248 49.58356 26.37977 5.024000 49.19649
3 2016-01-29 2016-01-29 00:30:00 30.3 1403.191 50.79809 49.04253 26.15317 5.055999 47.48126
4 2016-01-29 2016-01-29 00:45:00 30.3 1384.337 48.88359 49.14100 24.52135 5.088000 46.19261
5 2016-01-29 2016-01-29 01:00:00 30.1 1356.690 46.61842 48.80624 24.28208 5.024000 43.00352
6 2016-01-29 2016-01-29 01:15:00 30.1 1341.985 48.09687 48.87748 24.49988 4.975999 39.90505
test_data is from 2018-01-01 to 2018-07-12
tail(test_data)
date Date_time Temp Ptot JFK AEH ART CS CP
86007 2018-07-12 2018-07-12 22:30:00 64.1 1458.831 82.30099 56.93944 27.20252 2.496 54.41050
86008 2018-07-12 2018-07-12 22:45:00 64.1 1457.329 61.68535 54.28934 28.59752 3.728 54.15208
86009 2018-07-12 2018-07-12 23:00:00 63.5 1422.419 80.56367 56.40752 27.99190 3.520 53.85705
86010 2018-07-12 2018-07-12 23:15:00 63.5 1312.021 52.25757 56.40283 22.03727 2.512 53.72166
86011 2018-07-12 2018-07-12 23:30:00 63.5 1306.349 65.65347 56.20145 22.77093 3.680 52.71584
86012 2018-07-12 2018-07-12 23:45:00 63.5 1328.528 57.47283 57.73747 19.50940 2.432 52.37458
I want to make a prediction validation loop for 24 hours(Each day from 2018-01-01 to 2018-07-12) in test_data . Each day prediction is (96) values-15 minute sampling-. In other words, I have to select 96 values each time and put them in the test_data shown in the code and calculate MAPE.
Target variable: Ptot
Predictors: Temp, JFK, AEH, ...etc
I finished running the prediction as shown below
input = train_data[c("Temp","JFK","AEH","ART","CS","CP","RLF", "FH" ,"TJF" ,"GH" , "JPH","JEK", "KL",
"MH","MC","MRH", "PH","OR","RP","RC","RL","SH", "SPC","SJH","SMH","VWK","WH","Month","Day",
"Year","hour")]
target = train_data["Ptot"]
glm_model <- glm(Ptot~ ., data= c(input, target), family=gaussian)
I want to iterate through the "test_data" -create a loop- by taking each time 96 observation -96 rows- from the test table sequentially until the end of the dataset and calculate MAPE and save all of the value. I implemented this in R.
As shown below in fig. each time take 96 rows from (test_data) and put them in "test_data" in the function. It is just an explanation, not showing all 96 values :)
This is the function I have to create a loop for it
pred<- predict.glm(glm_model,test_data)
mape <- function(actual, pred){
return(100 * mean(abs((actual- pred)/actual)))
}
I will show how to make first-day prediction validation
1- select 96 values from test_data(i.e 2018-01-01)
One_day_data <- test_data[test_data$date == "2018-01-01",]
2- Put one day values in the function
pred<- predict.glm(glm_model,One_day_data )
3- This is the prediction results after running pred (96 values =one day)
print(pred)
67489 67490 67491 67492 67493 67494 67495 67496 67497 67498
1074.164 1069.527 1063.726 1082.404 1077.569 1071.265 1070.776 1073.686 1061.720 1063.554
67499 67500 67501 67502 67503 67504 67505 67506 67507 67508
1074.264 1067.393 1071.111 1076.754 1079.700 1071.244 1097.977 1089.862 1091.817 1098.025
67509 67510 67511 67512 67513 67514 67515 67516 67517 67518
1125.495 1133.786 1136.545 1138.473 1176.555 1183.483 1184.795 1186.220 1192.328 1187.582
67519 67520 67521 67522 67523 67524 67525 67526 67527 67528
1186.513 1254.844 1262.021 1258.816 1240.280 1229.237 1237.582 1250.030 1243.189 1262.266
67529 67530 67531 67532 67533 67534 67535 67536 67537 67538
1251.563 1242.417 1259.352 1269.760 1271.318 1266.984 1260.113 1247.424 1200.905 1198.161
67539 67540 67541 67542 67543 67544 67545 67546 67547 67548
1202.372 1189.016 1193.479 1194.668 1207.064 1199.772 1189.068 1176.762 1188.671 1208.944
67549 67550 67551 67552 67553 67554 67555 67556 67557 67558
1199.216 1193.544 1215.866 1209.969 1180.115 1182.482 1177.049 1196.165 1145.335 1146.028
67559 67560 67561 67562 67563 67564 67565 67566 67567 67568
1161.821 1163.816 1114.529 1112.068 1113.113 1107.496 1073.080 1082.271 1097.888 1095.782
67569 67570 67571 67572 67573 67574 67575 67576 67577 67578
1081.863 1068.071 1061.651 1072.511 1057.184 1068.474 1062.464 1061.535 1054.550 1050.287
67579 67580 67581 67582 67583 67584
1038.086 1045.610 1038.836 1030.429 1031.563 1019.997
We can get the actual value from "Ptot"
actual<- One_day_data$Ptot
[1] 1113.398 1110.637 1111.582 1110.816 1101.921 1111.091 1108.501 1112.535 1104.631 1108.284
[11] 1110.994 1106.585 1111.397 1117.406 1106.690 1101.783 1101.605 1110.183 1104.162 1111.829
[21] 1117.093 1125.493 1118.417 1127.879 1133.574 1136.395 1139.048 1141.850 1145.630 1141.288
[31] 1141.897 1140.310 1138.026 1121.849 1122.069 1120.479 1120.970 1111.594 1109.572 1116.355
[41] 1115.454 1113.911 1115.509 1113.004 1119.440 1112.878 1117.642 1100.516 1099.672 1109.223
[51] 1105.088 1107.167 1114.355 1110.620 1110.499 1110.161 1107.868 1118.085 1108.166 1106.347
[61] 1114.036 1106.968 1109.807 1113.943 1106.869 1104.390 1102.446 1110.770 1114.684 1114.142
[71] 1118.877 1128.470 1133.922 1128.420 1134.058 1142.529 1126.432 1127.824 1124.561 1130.823
[81] 1122.907 1117.422 1116.851 1114.980 1114.543 1108.584 1120.410 1120.900 1109.226 1101.367
[91] 1098.330 1110.474 1106.010 1108.451 1095.196 1096.007
4- Run Mape function and save the results (I have the actual values)
mape <- function(actual, pred){
return(100 * mean(abs((actual- pred)/actual)))
}
5- Do the same thing it for next 24 hours (i.e 2018-01-02) and so on
Incomplete Solution, it is not correct! (I think it should be done by something like this)
result_df =[]
for (i in 1:96){
test_data<- test_data[i,]
pred<- predict.glm(glm_model,test_data)
result_df$pred[i] <- pred
result_df$Actual[i+1] <- result_df$pred[i]
mape[i] <- function(actual, pred){
return(100 * mean(abs((actual- pred)/actual)))
}
}
SUMMARY: I want to store all of the values of mape by passing one day incrementally each time to pred.
NOTE: I will appreciate if you can show me the loop process in R and/or Python.
Consider building a generalized function, mape_calc, to receive a subset data frame as input and call the function in R's by. As the object-oriented wrapper to tapply, by will subset the main data frame by each distinct date, passing subsets into defined function for calculation.
Within the method, a new, one-row data frame is built to align mape with each date. Then all rows are binded together with do.call:
mape_calc <- function(sub_df) {
pred <- predict.glm(glm_model, sub_df)
actual <- sub_df$Ptot
mape <- 100 * mean(abs((actual - pred)/actual))
new_df <- data.frame(date = sub_df$date[[1]], mape = mape)
return(new_df)
}
# LIST OF ONE-ROW DATAFRAMES
df_list <- by(test_data, test_data$date, map_calc)
# FINAL DATAFRAME
final_df <- do.call(rbind, df_list)
Should you have same setup in Python pandas and numpy (possibly statsmodels for glm model), use pandas DataFrame.groupby as the counterpart to R's by. Of course adjust below pseudocode to your actual needs.
import pandas as pd
import numpy as np
import statsmodels.api as sm
...
train_data = sm.add_constant(train_data)
model_formula = 'Ptot ~ Temp + JFK + AEH + ART + CS + CP ...'
glm_model = sm.glm(formula = model_formula,
data = train_data.drop(columns=['date','Date_time']),
family = sm.families.Gaussian()).fit()
def mape_calc(dt, sub_df):
pred = glm_model.predict(sub_df.drop(columns=['date','Date_time','Ptot']))
actual = sub_df['Ptot']
mape = 100 * np.mean(np.abs((actual - pred)/actual))
new_df = pd.DataFrame({'date': dt, 'mape': mape}, index=[0])
return new_df
# LIST OF ONE-ROW DATAFRAMES
df_list = [mape_calc(i, g) for i, g in test_data.groupby('date')]
# FINAL DATAFRAME
final_df = pd.concat(df_list, ignore_index=True)
It sounds to me like you are looking for an introduction to python. Forgive me if I have misunderstood. I realize my answer is very simple.
I am happy to answer your question about how to do a loop in python. I will
give you two examples. I am going to assume that you are using "ipython" which
would allow you to type the following and test it out. I will show you a
for loop and a while loop.
I will demonstrate summing a bunch of numbers. Note that loops must be indented to work. This is a feature of python that freaks out newbies.
So ... inside an ipython environment.
In [21]: data = [1.1, 1.3, 0.5, 0.8, 0.9]
In [22]: def sum1(data):
summ=0
npts=len(data)
for i in range(npts):
summ+=data[i]
return summ
In [23]: sum1(data)
Out[23]: 4.6000000000000005
In [24]: def sum2(data):
summ=0;i=0
npts=len(data)
while i<npts:
summ+=data[i]
i+=1
return summ
#Note that in a while loop you must increment "i" on your own but a for loop
#does it for you ... just like every other language!
In [25]: sum2(data)
Out[25]: 4.6000000000000005
I ignored the question of how to get your data into an array. Python supports both lists (which is what I used in the example) and actual arrays (via numpy). If this is of interest to you, we can talk about numpy next.
There are all kinds of wonderful function for reading data files as well.
OK -- I don't know how to read "R" ... but it looks kind of C-like with elements of Matlab (which means matplotlib and numpy will work great for you!)
I can make your syntax "pythonic". That does not mean I am giving you running code.
We assume that you are interested in learning python. If you are a student asking
for someone else to do your homework, then I will be irritated. Regardless, I would very much appreciate if you would accept one of my answers as I could use some reputation on this site. I just got on it tonight even though I have been coding since 1975.
Here's how to do a function:
def mape(actual, pred):
return(100 * mean(abs((actual-pred)/actual)))
You are obviously using arrays ... you probably want numpy which will work much like I think you expect R to work.
for i in range(2,97):
test=test_data[i]
pred=predict.glm(glm_model,test)
#don't know what this dollar sign thing means
#so I didn't mess with it
result_df$pred[i] =pred
result_df$Actual[i+1] = result_df$pred[i]
I guess the dollar sign is some kind of appending thing. You can certainly append to an array in python. At this point if you want more help you need to break this into questions like ... "How do I create and fill an array in numpy?"
Good luck!
I am trying to calculate the diff_chg of S&P sectors for 4 different dates (given in start_return) :
start_return = [-30,-91,-182,-365]
for date in start_return:
diff_chg = closingprices[-1].divide(closingprices[date])
for i in sectors: #Sectors is XLK, XLY , etc
diff_chg[i] = diff_chg[sectordict[i]].mean() #finds the % chg of all sectors
diff_df = diff_chg.to_frame
My expected output is to have 4 columns in the df, each one with the returns of each sector for the given period (-30,-91, -182,-365.) .
As of now when I run this code, it returns the sum of the returns of all 4 periods in the diff_df. I would like it to create a new column in the df for each period.
my code returns:
XLK 1.859907
XLI 1.477272
XLF 1.603589
XLE 1.415377
XLB 1.526237
but I want it to return:
1mo (-30) 3mo (-61) 6mo (-182) 1yr (-365
XLK 1.086547 values here etc etc
XLI 1.0334
XLF 1.07342
XLE .97829
XLB 1.0281
Try something like this:
start_return = [-30,-91,-182,-365]
diff_chg = pd.DataFrame()
for date in start_return:
diff_chg[date] = closingprices[-1].divide(closingprices[date])
What this does is to add columns for each date in start_return to a single DataFrame created at the beginning.