How can I dynamically create a sum of a column from itself in Python?

My raw data is:
import numpy as np
import pandas as pd

def f_ST(ST, F, T):
    a = ST/F - 1 - np.log(ST/F)
    return 2*a/T

df = pd.DataFrame(range(50, 140, 5), columns=['K'])
df['f(K0)'] = df.apply(lambda x: f_ST(x.K, 100, 0.25), axis=1)
df['f(K1)'] = df['f(K0)'].shift(-1)
df['dK'] = df['K'].diff(1)
The thing I want to do is: I have a function f(k),
f(k) = (k - 100)/100 - ln(k/100)
I want to calculate w, which is built in the following steps:
get the 1-period forward value of f(k), then calculate
tmp(k) = (f1_f(k) - f(k)) / dk
w is calculated recursively as
w[0] = tmpw[0]
w[n] = tmpw[n] - (w[0] + w[1] + ... + w[n-1])
And the result looks like:
nbr date k f(k) f1_f(k) d_k tmpw w
10 2019-02-19 100 0.000000 0.009679 5.0 0.001936 0.001936
11 2019-02-19 105 0.009679 0.037519 5.0 0.005568 0.003632
12 2019-02-19 110 0.037519 0.081904 5.0 0.008877 0.003309
13 2019-02-19 115 0.081904 0.141428 5.0 ...
14 2019-02-19 120 0.141428 0.214852 5.0 ...
15 2019-02-19 125 0.214852 0.301086 5.0
16 2019-02-19 130 0.301086 0.399163 5.0
Question: could anyone help derive quick code for this (programmatically, not mathematically) without using a loop?
Thanks a lot!

I don't fully understand your question; all that notation was a bit confusing to me.
If I got what you want right, for every row you want to have an accumulated value of all previous rows; then the value of another column of this row would be calculated based on this accumulated value.
In this case I would prefer to calculate an accumulated column first and use it later.
For example (note that you need to call list(range()) instead of a bare range, since your example was throwing an error):
import pandas as pd
import numpy as np

def f_ST(ST, F, T):
    a = ST/F - 1 - np.log(ST/F)
    return 2*a/T

df = pd.DataFrame(list(range(50, 140, 5)), columns=['K'])
df['f(K0)'] = df.apply(lambda x: f_ST(x.K, 100, 0.25), axis=1)
df['f(K1)'] = df['f(K0)'].shift(-1)
df['dK'] = df['K'].diff(1)
df['accumulate'] = df['K'].shift(1).cumsum()
df['currentVal-accumulated'] = df['K'] - df['accumulate']
print(df)
prints:
K f(K0) ... accumulate currentVal-accumulated
0 50 1.545177 ... NaN NaN
1 55 1.182696 ... 50.0 5.0
2 60 0.886605 ... 105.0 -45.0
3 65 0.646263 ... 165.0 -100.0
4 70 0.453400 ... 230.0 -160.0
5 75 0.301457 ... 300.0 -225.0
6 80 0.185148 ... 375.0 -295.0
7 85 0.100151 ... 455.0 -370.0
8 90 0.042884 ... 540.0 -450.0
9 95 0.010346 ... 630.0 -535.0
10 100 0.000000 ... 725.0 -625.0
11 105 0.009679 ... 825.0 -720.0
12 110 0.037519 ... 930.0 -820.0
13 115 0.081904 ... 1040.0 -925.0
14 120 0.141428 ... 1155.0 -1035.0
15 125 0.214852 ... 1275.0 -1150.0
16 130 0.301086 ... 1400.0 -1270.0
17 135 0.399163 ... 1530.0 -1395.0
[18 rows x 6 columns]
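For what it's worth, the same accumulate-first idea appears to solve the original recurrence without a loop: since w[n] = tmpw[n] - (w[0] + w[1] + ... + w[n-1]), the cumulative sum of w equals tmpw, so w is just the first difference of tmpw with w[0] = tmpw[0]. A minimal sketch, assuming (as in the question's expected table) the series starts at K = 100:

# continuing from the df built above
sub = df[df['K'] >= 100].copy()                          # the question's table starts at K=100
sub['tmpw'] = (sub['f(K1)'] - sub['f(K0)']) / sub['dK']  # tmp(k) = (f1_f(k) - f(k)) / dk
# cumsum(w) == tmpw by the recurrence, so w = tmpw.diff(), with w[0] = tmpw[0]
sub['w'] = sub['tmpw'].diff().fillna(sub['tmpw'])
print(sub[['K', 'f(K0)', 'f(K1)', 'dK', 'tmpw', 'w']])

This reproduces the tmpw and w columns from the expected output (0.001936, 0.003632, 0.003309, ...).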

Related

Pandas: calculating mean value of multiple columns using datetime and Grouper removes columns or doesn't return correct Dataframe

As part of a larger task, I want to calculate the monthly mean values for each specific station. This is already difficult to do, but I am getting close.
The dataframe has many columns, but ultimately I only use the following information:
Date Value Station_Name
0 2006-01-03 18 2
1 2006-01-04 12 2
2 2006-01-05 11 2
3 2006-01-06 10 2
4 2006-01-09 22 2
... ... ...
3510 2006-12-23 47 45
3511 2006-12-24 46 45
3512 2006-12-26 35 45
3513 2006-12-27 35 45
3514 2006-12-30 28 45
I am running into two issues, using:
df.groupby(['Station_Name', pd.Grouper(freq='M')])['Value'].mean()
It results in something like:
Station_Name Date
2 2003-01-31 29.448387
2003-02-28 30.617857
2003-03-31 28.758065
2003-04-30 28.392593
2003-05-31 30.318519
...
45 2003-09-30 16.160000
2003-10-31 18.906452
2003-11-30 26.296667
2003-12-31 30.306667
2004-01-31 29.330000
Which I can't seem to use as a regular dataframe: the datetime is messed up, since it doesn't show the month of the mean but gives back the month's last day; the station name is an index level that appears only once per group, not on every row; and the mean value doesn't have a column name at all. The result isn't a dataframe but a pandas.core.series.Series, and converting it with the .to_frame() method doesn't give me the layout I want either. I don't get this part.
I found that, in order to return a normal dataframe, I should use
as_index=False
in the groupby method. But this results in the months not being shown:
df.groupby(['Station_Name', pd.Grouper(freq='M')], as_index=False)['Value'].mean()
Gives:
Station_Name Value
0 2 29.448387
1 2 30.617857
2 2 28.758065
3 2 28.392593
4 2 30.318519
... ... ...
142 45 16.160000
143 45 18.906452
144 45 26.296667
145 45 30.306667
146 45 29.330000
I can't just simply add the month later, as not every station has an observation in every month.
I've tried using other methods, such as
df.resample("M").mean()
But it doesn't seem possible to do this on multiple columns. It returns the mean value of everything.
Edit: This is ultimately what I would want.
Station_Name Date Value
0 2 2003-01 29.448387
1 2 2003-02 30.617857
2 2 2003-03 28.758065
3 2 2003-04 28.392593
4 2 2003-05 30.318519
... ... ...
142 45 2003-08 16.160000
143 45 2003-09 18.906452
144 45 2003-10 26.296667
145 45 2003-11 30.306667
146 45 2003-12 29.330000
OK, how about this:
df = df.groupby(['Station_Name', df['Date'].dt.to_period('M')])['Value'].mean().reset_index()
Output:
Station_Name Date Value
0 2 2006-01 14.6
1 45 2006-12 38.2
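The trick is that dt.to_period('M') truncates each date to its month, so grouping on it keeps the month label instead of the month's last day, and reset_index() turns the result back into a regular DataFrame with Station_Name, Date, and Value columns. A minimal self-contained sketch with made-up sample rows (assuming Date is already parsed as datetime64):

import pandas as pd

# hypothetical two-station sample
df = pd.DataFrame({
    'Date': pd.to_datetime(['2006-01-03', '2006-01-04', '2006-12-23', '2006-12-24']),
    'Value': [18, 12, 47, 46],
    'Station_Name': [2, 2, 45, 45],
})

# group by station and by month period; reset_index gives a flat DataFrame
out = df.groupby(['Station_Name', df['Date'].dt.to_period('M')])['Value'].mean().reset_index()
print(out)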

Using pd.to_datetime to convert "object" column into %HH:MM:SS

I am doing some exploratory data analysis using finish-time data scraped from the 2018 KONA IRONMAN. I used JSON to format the data and pandas to read it into csv. The 'swim', 'bike', 'run' columns should be formatted as %HH:MM:SS to be operable; however, I am receiving a ValueError: ('Unknown string format:', '--:--:--').
print(data.head(2))
print(kona.info())
print(kona.describe())
Name div_rank ... bike run
0 Avila, Anthony 2470 138 ... 05:27:59 04:31:56
1 Lindgren, Mikael 1050 151 ... 05:17:51 03:49:20
swim 2472 non-null object
bike 2472 non-null object
run 2472 non-null object
Name div_rank ... bike run
count 2472 2472 ... 2472 2472
unique 2472 288 ... 2030 2051
top Jara, Vicente 986 -- ... --:--:-- --:--:--
freq 1 165 ... 122 165
How should I use pd.to_datetime to properly format the 'bike', 'swim', 'run' columns so that, for future use, I can sum these columns and append a 'Total Finish Time' column? Thanks!
The reason for the error is that it can't parse a time from '--:--:--'. You'd need to convert all of those to '00:00:00', but that would imply they did the event in zero time. The other option is to convert only the times that are present, leaving a null in the places that don't have a time. Converting to datetime will also tack on a date of 1900-01-01, so I appended .dt.time so that only the time displays.
timed_events = ['bike', 'swim', 'run']
for event in timed_events:
    result[event] = pd.to_datetime(result[result[event] != '--:--:--'][event], format="%H:%M:%S").dt.time
The problem with this, though, is that I remember you wanted to sum those times, which would require some extra conversions. So I suggest using pd.to_timedelta() instead. It works the same way, in that you need to exclude the '--:--:--' values, but then you can sum the times. I also added a column for the number of events completed, so that if you want to sort by best times you can filter out anyone who hasn't competed in all three events, since they would obviously have better times, as they are missing entire events:
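As a tiny illustration of why timedeltas are the right type here (made-up times, not from the race data): they parse from 'HH:MM:SS' strings and sum cleanly, unlike datetime.time objects.

import pandas as pd

s = pd.Series(['01:10:00', '--:--:--', '00:50:00'])
t = pd.to_timedelta(s[s != '--:--:--'])  # skip the placeholder, leaving a gap at that index
print(t.sum())                           # 0 days 02:00:00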
I'll also add, regarding the comment of:
"You think providing all the code will be helpful but it does not. You
will get a quicker and more useful response if you keep the code
minimum that can replicate your issue.stackoverflow.com/help/mcve –
mad_ "
I'll give him the benefit of the doubt: he saw the whole code and didn't realize that what you provided was already the minimal code needed to replicate your issue, since no one wants to write code to generate your data to work with. Sometimes you can explicitly state that in your question,
i.e.:
Here's the code to generate my data:
CODE PART 1
import bs4
import pandas as pd
code...
But now that I have the data, here's where I'm having trouble:
df = pd.to_timedelta()...
...
Luckily I remembered helping you earlier on this, so I knew I could go back and get that code. So the code you originally had was fine.
But here's the full code I used, which stores the csv differently than you originally had. You can change that part, but the end part is what you'll need:
from bs4 import BeautifulSoup, Comment
from collections import defaultdict
import requests
import pandas as pd

sauce = 'http://m.ironman.com/triathlon/events/americas/ironman/world-championship/results.aspx'
r = requests.get(sauce)
data = r.text
soup = BeautifulSoup(data, 'html.parser')

def parse_table(soup):
    result = defaultdict(list)
    my_table = soup.find('tbody')
    for node in my_table.children:
        if isinstance(node, Comment):
            # Get content and strip comment "<!--" and "-->"
            # Wrap the rows in "table" tags as well.
            data = '<table>{}</table>'.format(node[4:-3])
            break
    table = BeautifulSoup(data, 'html.parser')
    for row in table.find_all('tr'):
        name, _, swim, bike, run, div_rank, gender_rank, overall_rank = [col.text.strip() for col in row.find_all('td')[1:]]
        result[name].append({
            'div_rank': div_rank,
            'gender_rank': gender_rank,
            'overall_rank': overall_rank,
            'swim': swim,
            'bike': bike,
            'run': run,
        })
    return result

jsonObj = parse_table(soup)
result = pd.DataFrame()
for k, v in jsonObj.items():
    temp_df = pd.DataFrame.from_dict(v)
    temp_df['name'] = k
    result = result.append(temp_df)

result = result.reset_index(drop=True)
result.to_csv('C:/data.csv', index=False)

# However you read in your csv/dataframe, use the code below on it to get those times
timed_events = ['bike', 'swim', 'run']
for event in timed_events:
    result[event] = pd.to_timedelta(result[result[event] != '--:--:--'][event])

result['total_events_participated'] = 3 - result.isnull().sum(axis=1)
result['total_times'] = result[timed_events].sum(axis=1)
Output:
print (result)
bike div_rank ... total_events_participated total_times
0 05:27:59 138 ... 3 11:20:06
1 05:17:51 151 ... 3 10:16:17
2 06:14:45 229 ... 3 14:48:28
3 05:13:56 162 ... 3 10:19:03
4 05:19:10 6 ... 3 09:51:48
5 04:32:26 25 ... 3 08:23:26
6 04:49:08 155 ... 3 10:16:16
7 04:50:10 216 ... 3 10:55:47
8 06:45:57 71 ... 3 13:50:28
9 05:24:33 178 ... 3 10:21:35
10 06:36:36 17 ... 3 14:36:59
11 NaT -- ... 0 00:00:00
12 04:55:29 100 ... 3 09:28:53
13 05:39:18 72 ... 3 11:44:40
14 04:40:41 -- ... 2 05:35:18
15 05:23:18 45 ... 3 10:55:27
16 05:15:10 3 ... 3 10:28:37
17 06:15:59 78 ... 3 11:47:24
18 NaT -- ... 0 00:00:00
19 07:11:19 69 ... 3 15:39:51
20 05:49:02 29 ... 3 10:32:36
21 06:45:48 4 ... 3 13:39:17
22 04:39:46 -- ... 2 05:48:38
23 06:03:01 3 ... 3 11:57:42
24 06:24:58 193 ... 3 13:52:57
25 05:07:42 116 ... 3 10:01:24
26 04:44:46 112 ... 3 09:29:22
27 04:46:06 55 ... 3 09:32:43
28 04:41:05 69 ... 3 09:31:32
29 05:27:55 68 ... 3 11:09:37
... ... ... ... ...
2442 NaT -- ... 0 00:00:00
2443 05:26:40 3 ... 3 11:28:53
2444 05:04:37 19 ... 3 10:27:13
2445 04:50:45 74 ... 3 09:15:14
2446 07:17:40 120 ... 3 14:46:05
2447 05:26:32 45 ... 3 10:50:48
2448 05:11:26 186 ... 3 10:26:00
2449 06:54:15 185 ... 3 14:05:16
2450 05:12:10 22 ... 3 11:21:37
2451 04:59:44 45 ... 3 09:29:43
2452 06:03:59 96 ... 3 12:12:35
2453 06:07:27 16 ... 3 12:47:11
2454 04:38:06 91 ... 3 09:52:27
2455 04:41:56 14 ... 3 08:58:46
2456 04:38:48 85 ... 3 09:18:31
2457 04:42:30 42 ... 3 09:07:29
2458 04:40:54 110 ... 3 09:32:34
2459 06:08:59 37 ... 3 12:15:23
2460 04:32:20 -- ... 2 05:31:05
2461 04:45:03 96 ... 3 09:30:06
2462 06:14:29 95 ... 3 13:38:54
2463 06:00:20 164 ... 3 12:10:03
2464 05:11:07 22 ... 3 10:32:35
2465 05:56:06 188 ... 3 13:32:48
2466 05:09:26 2 ... 3 09:54:55
2467 05:22:15 7 ... 3 10:26:14
2468 05:53:14 254 ... 3 12:34:21
2469 05:00:29 156 ... 3 10:18:29
2470 04:30:46 7 ... 3 08:38:23
2471 04:34:59 39 ... 3 09:04:13
[2472 rows x 9 columns]

Using pandas in Python I am trying to group data into price ranges

Here is the code I am running. It creates a bar plot, but I would like to group together values within $5 of each other for each bar in the graph. The bar graph currently shows all 50 values as individual bars, which makes the data nearly unreadable. Is a histogram a better option? Also, bdf is the bids and adf is the asks.
import gdax
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

s = 'sequence'
b = 'bids'
a = 'asks'

public_client = gdax.PublicClient()
o = public_client.get_product_order_book('BTC-USD', level=2)
df = pd.DataFrame(o)
bdf = pd.DataFrame(o[b], columns=['price', 'size', 'null'], dtype='float')
adf = pd.DataFrame(o[a], columns=['price', 'size', 'null'], dtype='float')  # asks, not bids
del bdf['null']
bdf.plot.bar(x='price', y='size')
plt.show()
pause = input('pause')
Here is an example of the data I receive as a DataFrame object.
price size
0 11390.99 13.686618
1 11389.40 0.002000
2 11389.00 0.090700
3 11386.53 0.060000
4 11385.26 0.010000
5 11385.20 0.453700
6 11381.33 0.006257
7 11380.06 0.011100
8 11380.00 0.001000
9 11378.61 0.729421
10 11378.60 0.159554
11 11375.00 0.012971
12 11374.00 0.297197
13 11373.82 0.005000
14 11373.72 0.661006
15 11373.39 0.001758
16 11373.00 1.000000
17 11370.00 0.082399
18 11367.22 1.002000
19 11366.90 0.010000
20 11364.67 1.000000
21 11364.65 6.900000
22 11364.37 0.002000
23 11361.23 0.250000
24 11361.22 0.058760
25 11360.89 0.001760
26 11360.00 0.026000
27 11358.82 0.900000
28 11358.30 0.020000
29 11355.83 0.002000
30 11355.15 1.000000
31 11354.72 8.900000
32 11354.41 0.250000
33 11353.00 0.002000
34 11352.88 1.313130
35 11352.19 0.510000
36 11350.00 1.650228
37 11349.90 0.477500
38 11348.41 0.001762
39 11347.43 0.900000
40 11347.18 0.874096
41 11345.42 7.800000
42 11343.21 1.700000
43 11343.02 0.001754
44 11341.73 0.900000
45 11341.62 0.002000
46 11341.00 0.024900
47 11340.00 0.400830
48 11339.77 0.002946
49 11337.00 0.050000
Is pandas the best way to manipulate this data?
Not sure if I understand correctly, but if you want to total the bid sizes in $5 steps, here is how you can do it ((df["price"]//5)*5 floors each price to the lower edge of its $5 bucket):
df["size"].groupby((df["price"]//5)*5).sum()
price
11335.0 0.052946
11340.0 3.029484
11345.0 10.053358
11350.0 12.625358
11355.0 1.922000
11360.0 8.238520
11365.0 1.012000
11370.0 2.047360
11375.0 0.901946
11380.0 0.018357
11385.0 0.616400
11390.0 13.686618
Name: size, dtype: float64
You can use cut here:
df['bin']=pd.cut(df.price,bins=3)
df.groupby('bin')['size'].sum().plot(kind='bar')
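If you want the cut bins to be exactly $5 wide rather than three equal slices, a small sketch (the bin edges are chosen to cover the sample prices above; adjust them to your data):

import numpy as np
import pandas as pd

edges = np.arange(11335, 11396, 5)          # $5-wide bin edges
df['bin'] = pd.cut(df['price'], bins=edges)
df.groupby('bin')['size'].sum().plot(kind='bar')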

Results in columns without decimal places?

I have looked through a lot of posts, but I can't implement any of the solutions in my code:
x4 = x4.set_index('grupa').T.rename_axis('DANE').reset_index().rename_axis(None,1).round()
After which I get the results DataFrame:
DANE BAKALIE NASIONA OWOCE WARZYWA
0 ilosc 5.0 94.0 61.0 623.0
1 marza_netto 7.0 120.0 69.0 668.0
2 marza_procent2 32.0 34.0 29.0 27.0
But I would like to receive:
DANE BAKALIE NASIONA OWOCE WARZYWA
0 ilosc 5 94 61 623
1 marza_netto 7 120 69 668
2 marza_procent2 32 34 29 27
I tried replace('.0', ''), int(round()), and astype(int), but either I don't get good results or I get attribute incompatibilities with the DataFrame.
If the only non-numeric column is DANE, then cast before converting the index back to a column:
x4 = (x4.set_index('grupa')
        .T
        .rename_axis('DANE')
        .astype(int)
        .reset_index()
        .rename_axis(None, 1))
A more general solution is to select all float columns and cast them:
cols = df.select_dtypes(include=['float']).columns
df[cols] = df[cols].astype(int)
print (df)
DANE BAKALIE NASIONA OWOCE WARZYWA
0 ilosc 5 94 61 623
1 marza_netto 7 120 69 668
2 marza_procent2 32 34 29 27
If there are NaN values, converting to int is not possible.
So you can either:
1. drop all rows with NaNs:
df = df.dropna()
2. or replace NaNs with some integer like 0:
df = df.fillna(0)
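Putting the two pieces together, a minimal sketch of the fill-then-cast route (assuming df is the frame above):

cols = df.select_dtypes(include=['float']).columns
df[cols] = df[cols].fillna(0).astype(int)  # fill the gaps first so the cast can't fail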
Not 100% sure I got your question, but you can use an astype(int) conversion.
df = df.set_index('DANE').astype(int).reset_index()
df
DANE BAKALIE NASIONA OWOCE WARZYWA
0 ilosc 5 94 61 623
1 marza_netto 7 120 69 668
2 marza_procent2 32 34 29 27
If you're dealing with rows that have NaNs, either drop those rows and convert, or convert to astype(object). The latter is not recommended because you lose performance.

Subtraction of two series from different parts of the dataframe

I have the following data frame:
SID AID START END
71 1 1 -11136 -11122
74 1 1 -11121 -11109
78 1 1 -11034 -11014
79 1 2 -11137 -11152
83 1 2 -11114 -11127
86 1 2 -11032 -11038
88 1 2 -11121 -11002
I want to do a subtraction of the START elements with AID==1 and AID==2, in order, such that the expected result would be:
-11136 - (-11137) = 1
-11121 - (-11114) = -7
-11034 - (-11032) = -2
NaN - (-11002) = NaN
So I extracted two groups:
values1 = group.loc[group['AID'] == 1]["START"]
values2 = group.loc[group['AID'] == 2]["START"]
with the following result:
71 -11136
74 -11121
78 -11034
Name: START, dtype: int64
79 -11137
83 -11114
86 -11032
88 -11002
Name: START, dtype: int64
and did a simple subtraction:
values1-values2
But I got all NaNs:
71 NaN
74 NaN
78 NaN
79 NaN
83 NaN
86 NaN
I noticed that if I use data from the same AID group (e.g. START-END), I get the right answer. I get the NaNs only when I "mix" AID groups. I'm just getting started with pandas, but I'm obviously missing something here. Any suggestions?
Let's try this:
(df.set_index([df.groupby(['SID', 'AID']).cumcount(), 'AID'])['START']
   .unstack()
   .add_prefix('col_')
   .eval('col_1 - col_2'))
Output:
0 1.0
1 -7.0
2 -2.0
3 NaN
dtype: float64
pandas does those operations based on labels. Since your labels ((71, 74, 78) and (79, 83, 86, 88)) don't match, it cannot find any values to subtract. One way to deal with this is to use a numpy array instead of a Series, so there are no labels to align (note that this requires the two sides to be the same length; with the extra row 88 in the second group you would need to trim it first, e.g. values2.values[:len(values1)]):
values1 - values2.values
Out:
71 1
74 -7
78 -2
Name: START, dtype: int64
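Another option that also copes with the unequal group lengths here: drop both indexes so the subtraction aligns positionally; the unmatched trailing row then comes out as NaN, matching the expected output above.

values1.reset_index(drop=True) - values2.reset_index(drop=True)
# 0    1.0
# 1   -7.0
# 2   -2.0
# 3    NaN
# Name: START, dtype: float64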
A bizarre way to go about it:
-np.diff([g.reset_index(drop=True) for n, g in df.groupby('AID').START])[0]
0 1.0
1 -7.0
2 -2.0
3 NaN
Name: START, dtype: float64
