Not sure why I cannot get my DataFrame VWAP calculations to match the TradingView version at this link: https://www.tradingview.com/support/solutions/43000502018-volume-weighted-average-price-vwap/
They provide a simple calculation method which I can duplicate in a DataFrame, but my calculations do not match. I believe it has something to do with the TradingView “Anchor Session” setting, and I am not sure how to adjust or add to my DataFrame calculations to match TradingView. I have also tried the Python Technical Analysis Library, which does not match TradingView either.
Simple Calculation
There are five steps in calculating VWAP:
Calculate the Typical Price for the period.
(High + Low + Close) / 3
Multiply the Typical Price by the period Volume.
(Typical Price x Volume)
Create a Cumulative Total of (Typical Price x Volume).
Cumulative(Typical Price x Volume)
Create a Cumulative Total of Volume.
Cumulative(Volume)
Divide the Cumulative Totals.
VWAP = Cumulative(Typical Price x Volume) / Cumulative(Volume)
Anchor Period
Indicator calculation period. This setting specifies the Anchor, i.e. how frequently the VWAP calculation will be reset. For VWAP to work properly, each VWAP period should include several bars inside of it, so e.g. setting Anchor to 'Session' and timeframe to '1D' is not useful because the indicator will be reset on every bar.
Possible values: Session, Week, Month, Quarter, Year, Decade, Century, Earnings (reset on earnings), Dividends (reset on dividends), Splits (reset on splits).
# My VWAP - DataFrame Code:
df['Typ_Price'] = (df['high'] + df['low'] + df['close']) / 3
df['Typ_PriceVol'] = df['Typ_Price'] * df['volume']
df['Cum_Vol_Price'] = df['Typ_PriceVol'].cumsum()
df['Cum_Vol'] = df['volume'].cumsum()
df['VWAP'] = df['Cum_Vol_Price'] / df['Cum_Vol']
print(df)
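The code above accumulates over the entire DataFrame, so it only matches TradingView when the data covers a single session. Below is a minimal sketch of a per-session reset (TradingView's 'Session' anchor), assuming df has a DatetimeIndex of intraday bars and the same lowercase column names as above:
# A minimal sketch, not TradingView's exact implementation: reset the
# cumulative totals at the start of each calendar day ('Session' anchor).
session = df.index.normalize()  # one group label per trading day (requires a DatetimeIndex)
df['Typ_Price'] = (df['high'] + df['low'] + df['close']) / 3
df['Typ_PriceVol'] = df['Typ_Price'] * df['volume']
df['Cum_Vol_Price'] = df.groupby(session)['Typ_PriceVol'].cumsum()
df['Cum_Vol'] = df.groupby(session)['volume'].cumsum()
df['VWAP_session'] = df['Cum_Vol_Price'] / df['Cum_Vol']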
Related
I'm trying to calculate the annualized return of Amazon stock and can't figure out the main difference between the following approaches:
df = pdr.get_data_yahoo('amzn',datetime(2015, 1, 1),datetime(2019, 12, 31))['Adj Close']
1) df.pct_change().mean()*252
Result = 0.400
2) df.resample('Y').last().pct_change().mean()
Result = 0.472
Why is there a difference of about 7%?
After reading the documentation for the functions, I'd like to go through an example of resampling time series data for a better understanding.
With the resample method, the price column of the DataFrame is grouped by a certain time span; in this case the 'Y' indicates resampling by year, and with last() we get the price value at the end of each year.
data.resample('Y').last()
Output: Step 1
Next, with pct_change() we calculate the percentage change between each row's value and the previous row's value, i.e. between the year-end prices we obtained before.
data.resample('Y').last().pct_change()
Output: Step 2
Now, in the final step, we calculate the mean percentage change over the entire time period by using the mean() method.
data.resample('Y').last().pct_change().mean()
Output: Step 3
As @itprorh66 already wrote, the main difference between the two approaches is just when the mean of the values is calculated.
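As a rough sketch of that timing difference (prices here stands for the daily 'Adj Close' Series loaded above): approach 1 averages the daily returns and scales linearly by 252 trading days, an arithmetic annualization that ignores compounding within each year, while approach 2 averages year-end-to-year-end returns, which already include that compounding.
# A rough sketch; `prices` stands for the daily 'Adj Close' Series above.
daily_returns = prices.pct_change()                        # ~252 small returns per year
yearly_returns = prices.resample('Y').last().pct_change()  # one compounded return per year

approach_1 = daily_returns.mean() * 252  # mean daily return, scaled linearly to a year
approach_2 = yearly_returns.mean()       # mean of the year-over-year returns
print(approach_1, approach_2)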
I have this Excel data with price movement and traded volume.
By using mydf[mydf.columns[5]].resample('1Min').ohlc(), I get OHLC data but don't know how to get the traded volume for each minute. I have a few problems in mind:
The tick frequency is not uniform (for one particular minute I may have, say, 100 samples and for another it may be 120, so .groupby() may not work for me).
The ohlc() function takes care of the previous issue automatically once I make column G the datetime index.
Can I have code which, based on the "G" column, sums the volume for a particular minute and then subtracts the previous minute's volume, so that I get the exact traded volume for that particular minute?
Here is the input for ohlc
and the output I get is this.
I am not interested in CE as of now.
I just want another column added to this dataframe with the volume for each minute.
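A minimal sketch of one way to add that column, assuming the index is the datetime built from column G, the price column is the one already passed to ohlc() above, and the volume column holds the cumulative traded volume ('price' and 'cum_volume' below are placeholder names, not your actual columns):
# A minimal sketch; 'price' and 'cum_volume' are placeholder column names.
bars = mydf['price'].resample('1Min').ohlc()
# Take the last cumulative-volume reading in each minute, then difference
# consecutive minutes to recover the volume traded within that minute.
bars['volume'] = mydf['cum_volume'].resample('1Min').last().diff()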
Hi, I have created a dataframe with Actual Close, High, Low, and now I have to calculate the Day-Change, 3-Days-Change, and 2-Weeks-Change for each row.
With the code below, the Day-Change field shows a blank/NaN value (the 10/27/2009 D-Chg field). How can I get Python to automatically pick the last trading date's (10/23/2009) Adj Close price for the calculation when the shifted date doesn't exist?
data["D-Chg"]=stock_store['Adj Close'] - stock_store['Adj Close'].shift(1, freq='B')
Thanks with Regards
Format your first column to datetime:
data['Mycol'] = pd.to_datetime(data['Mycol'], format='%d%b%Y:%H:%M:%S.%f')
Get the max value:
last_date = data['date'].max()
Get the most up-to-date row:
is_last = data['date'] == last_date
data[is_last]
This may be done in one step if you give your desired column to max().
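Putting those steps together, a minimal sketch (the format string and the 'Mycol'/'date' column names follow the snippets above and are assumptions about your data):
# A minimal sketch combining the steps above.
import pandas as pd

data['date'] = pd.to_datetime(data['Mycol'], format='%d%b%Y:%H:%M:%S.%f')
last_date = data['date'].max()                 # most recent date present in the data
latest_rows = data[data['date'] == last_date]  # rows for that last trading date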
I have a dataframe with different companies price history and a dividend adjustment factor in the dataframe df.
I want to calculate the adjusted close price (which considers the dividend adjustment factor) for each company.
I tried some variations of
df['Adj Close'] = df.groupby('companyName')['priceUSD']*df['divAdjFactor'].shift(1)
Picture of a snippet of my original dataframe (non-grouped) and a test view of a filtered frame where I apply the calculation as I want. In the second frame I multiplied 0.595318 by 36.48 (it just so happens that here the first two divAdjFactor values are the same, which is not always the case). I want to do this calculation on the original dataframe.
testlist = ['General Electric Company']
df_adj = df.query('companyName == @testlist')
df_adj['Adj Close'] = df_adj['priceUSD'] * df_adj['divAdjFactor'].shift(1)
You are close; you need DataFrameGroupBy.shift so that divAdjFactor is shifted within each company:
df['Adj Close'] = df['priceUSD']*df.groupby('companyName')['divAdjFactor'].shift(1)
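A small toy example with made-up numbers, just to show that the shift restarts within each company group instead of spilling over from the previous company:
# A toy example with made-up numbers demonstrating the per-group shift.
import pandas as pd

demo = pd.DataFrame({
    'companyName': ['A', 'A', 'A', 'B', 'B'],
    'priceUSD': [10.0, 11.0, 12.0, 100.0, 101.0],
    'divAdjFactor': [0.5, 0.6, 0.7, 0.9, 1.0],
})

# shift(1) within each company, so the first row of each group gets NaN
# instead of the previous company's last factor.
demo['Adj Close'] = demo['priceUSD'] * demo.groupby('companyName')['divAdjFactor'].shift(1)
print(demo)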
It's my first time working with Spark DataFrames and I am trying to figure out how to use window functions to compute the average daily return of every stock for every date.
I am trying to group by the ticker and then try to apply a rolling difference window function, but I can't find a lot of documentation on the window functions or how they work.
The data I have is date, open price, high price, low price, close price, volume traded, and ticker.
You find the daily return by subtracting yesterday's close price from today's close price and then dividing by yesterday's close price.
What I tried so far:
w = Window()
df.groupBy("ticker")
I am trying to learn how to use window and groupby together to solve my problem.
Do you mean:
from pyspark.sql.window import Window
from pyspark.sql.functions import col, lag, mean

w = Window().partitionBy("ticker").orderBy("date")
df.withColumn("percentDiff", (col("close") - lag("close", 1).over(w)) / lag("close", 1).over(w)) \
    .groupBy("date").agg(mean("percentDiff"))