I have data of some stock in text format and I want to convert it into JSON in a specific format.
The data points in the text file are separated by commas (,) and each line contains data for a 1-minute interval.
Also, some lines have extra unnecessary data at the end, so I want to make sure that after conversion only the wanted data points are present (excluding the first field and any data after the volume field).
Input data:
BANKNIFTY_F1,20150228,15:27,19904.65,19924.00,19900.40,19920.20,31225
BANKNIFTY_F1,20150228,15:28,19921.05,19941.30,19921.05,19937.00,31525
BANKNIFTY_F1,20150228,15:29,19932.45,19945.00,19930.10,19945.00,38275
BANKNIFTY_F1,20150228,15:30,19947.00,19949.40,19930.00,19943.80,43400
BANKNIFTY_F1,20150302,09:16,20150.15,20150.15,20021.50,20070.00,91775,2026525
BANKNIFTY_F1,20150302,09:17,20071.50,20085.00,20063.50,20063.50,45700,2026525
Expected output data:
[{"date":"20150228","time":"15:27","open":"19904.65","high":"19924.00","low":"19900.40","close":"19920.20","volume":"31225"},
 {"date":"20150228","time":"15:28","open":"19921.05","high":"19941.30","low":"19921.05","close":"19937.00","volume":"31525"},
 {"date":"20150228","time":"15:29","open":"19932.45","high":"19945.00","low":"19930.10","close":"19945.00","volume":"38275"},
 {"date":"20150228","time":"15:30","open":"19947.00","high":"19949.40","low":"19930.00","close":"19943.80","volume":"43400"},
 {"date":"20150302","time":"09:16","open":"20150.15","high":"20150.15","low":"20021.50","close":"20070.00","volume":"91775"},
 {"date":"20150302","time":"09:17","open":"20071.50","high":"20085.00","low":"20063.50","close":"20063.50","volume":"45700"}]
Please note that in the expected output, the extra trailing data point shown in the last two input lines is ignored.
You want to transform a CSV file to JSON. When working with CSV files in Python, think of Pandas DataFrames. So first install Pandas (pip install pandas).
Read the CSV file as a Pandas DataFrame, set the column headers to your keys, and then transform it with the built-in to_dict method. Just a few lines of code.
You will first need to drop the fields you do not need. If you only want specific columns, use the usecols parameter of pd.read_csv. Then do this:
import pandas as pd
import json

# name all nine possible fields, then keep only the seven you want;
# lines without the extra trailing field simply get NaN for it
dataframe = pd.read_csv("stockdata.txt", header=None,
                        names=["symbol", "date", "time", "open", "high", "low", "close", "volume", "extra"],
                        usecols=["date", "time", "open", "high", "low", "close", "volume"])
# this is a list of Python dictionaries, one per row
json_dictionary = dataframe.to_dict('records')
print(json_dictionary)
# optionally convert to a JSON string
json_string = json.dumps(json_dictionary)
You can also use pd.read_csv to set specific data types for your columns.
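For example, a minimal sketch (the exact dtypes are up to you; the question's expected output keeps everything as strings, in which case dtype=str would do):
# a sketch: parse prices as floats and volume as an integer,
# keeping date and time as strings
dataframe = pd.read_csv("stockdata.txt", header=None,
                        names=["symbol", "date", "time", "open", "high", "low", "close", "volume", "extra"],
                        usecols=["date", "time", "open", "high", "low", "close", "volume"],
                        dtype={"date": str, "time": str, "open": float,
                               "high": float, "low": float, "close": float, "volume": int})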
You can simply do this using file handling in Python.
import json

stocks = []
with open('stocks.txt', 'r') as data:
    for line in data:
        line = line.strip()
        ldata = line.split(',')
        temp_stock = {
            'date': ldata[1],
            'time': ldata[2],
            'open': ldata[3],
            'high': ldata[4],
            'low': ldata[5],
            'close': ldata[6],
            'volume': ldata[7]
        }
        stocks.append(temp_stock)

with open('stocks.json', 'w') as fp:
    json.dump(stocks, fp, indent=4)

from pprint import pprint
pprint(stocks)
Or else
with open('stocks.txt', 'r') as data:
    res = [{
        'date': ldata[1],
        'time': ldata[2],
        'open': ldata[3],
        'high': ldata[4],
        'low': ldata[5],
        'close': ldata[6],
        'volume': ldata[7]
    } for ldata in (line.strip().split(',') for line in data)]
Output:
[{'close': '19920.20',
  'date': '20150228',
  'high': '19924.00',
  'low': '19900.40',
  'open': '19904.65',
  'time': '15:27',
  'volume': '31225'},
 {'close': '19937.00',
  'date': '20150228',
  'high': '19941.30',
  'low': '19921.05',
  'open': '19921.05',
  'time': '15:28',
  'volume': '31525'},
 {'close': '19945.00',
  'date': '20150228',
  'high': '19945.00',
  'low': '19930.10',
  'open': '19932.45',
  'time': '15:29',
  'volume': '38275'},
 {'close': '19943.80',
  'date': '20150228',
  'high': '19949.40',
  'low': '19930.00',
  'open': '19947.00',
  'time': '15:30',
  'volume': '43400'},
 {'close': '20070.00',
  'date': '20150302',
  'high': '20150.15',
  'low': '20021.50',
  'open': '20150.15',
  'time': '09:16',
  'volume': '91775'},
 {'close': '20063.50',
  'date': '20150302',
  'high': '20085.00',
  'low': '20063.50',
  'open': '20071.50',
  'time': '09:17',
  'volume': '45700'}]
Assuming all the lines in the text file are built the same way, you could iterate over the file line by line and break each line apart in a strict way, like:
my_tokens = []
with open('stocks.txt', 'r') as f:
    for line in f:  # iterate over lines, not over f.read()
        tokens = line.strip().split(',')
        my_dict = {}
        try:
            my_dict['date'] = tokens[1]
            my_dict['time'] = tokens[2]
            my_dict['open'] = tokens[3]
            my_dict['high'] = tokens[4]
            my_dict['low'] = tokens[5]
            my_dict['close'] = tokens[6]
            my_dict['volume'] = tokens[7]
        except IndexError:  # skip lines that are too short
            continue
        my_tokens.append(my_dict)
That's not the prettiest answer but it works on your type of data (:
Related
I am trying to combine all three pandas DataFrames (data, data2, data3), sort them in chronological order by date, and remove all duplicate rows. No more than one row may share the same date; however, the date '2021-10-21 00:03:00' is present in both data2 and data3, so there should only be a single row for it in the output. What could I add to the code so that I achieve the expected output?
Code:
import pandas as pd

data = {'Unix Timesamp': [1444311600000, 1444311660000, 1444311720000],
        'date': ['2015-10-08 13:40:00', '2015-10-08 13:41:00', '2015-10-08 13:42:00'],
        'Symbol': ['BTCUSD', 'BTCUSD', 'BTCUSD'],
        'Open': [10384.54, 10389.08, 10387.15],
        'High': [10389.08, 10389.08, 10388.36],
        'Low': [10340.2, 10332.8, 10385]}
data2 = {'Unix Timesamp': [1634774460000, 1634774520000, 1634774580000],
         'date': ['2021-10-21 00:01:00', '2021-10-21 00:02:00', '2021-10-21 00:03:00'],
         'Symbol': ['BTCUSD', 'BTCUSD', 'BTCUSD'],
         'High': [4939.97, 4961.75, 4964.33],
         'Open': [4939.95, 4959.18, 4964.33]}
data3 = {'Unix Timesamp': [1634774640000, 1634774640000],
         'date': ['2021-10-21 00:03:00', '2021-10-21 00:04:00'],
         'High': [4964.33, 4867.33],
         'Symbol': ['BTCUSD', 'BTCUSD'],
         'Open': [4964.33, 4800.2]}

dataset = pd.DataFrame.from_dict(data)
dataset2 = pd.DataFrame.from_dict(data2)
dataset3 = pd.DataFrame.from_dict(data3)
dataset.drop('Low', 1).append([dataset2, dataset3], ignore_index=True).drop_duplicates()
Output:
Expected Output (The 6th row in Output should not exist):
The below code should satisfy your requirement. Make sure you include subset=['date'] within the parentheses of the .drop_duplicates() method. Example: .drop_duplicates(subset=['date'])
dataset.drop('Low',1).append([dataset2, dataset3],ignore_index=True).drop_duplicates(subset=['date'])
For more info, refer to https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.drop_duplicates.html
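As a side note, DataFrame.append was deprecated in pandas 1.4 and removed in 2.0; here is a sketch of the same pipeline with pd.concat, including the chronological sort the question asks for:
import pandas as pd

# same dataset, dataset2, dataset3 as defined above
combined = (pd.concat([dataset.drop(columns='Low'), dataset2, dataset3],
                      ignore_index=True)
            .drop_duplicates(subset=['date'])
            .sort_values('date')
            .reset_index(drop=True))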
I am working on some portfolio analysis and am trying to get a working function for pulling data for stocks, using a list of Ticker Symbols. Here is my list:
Ticker_List={'Tickers':['SPY', 'AAPL', 'TSLA', 'AMZN', 'BRK.B', 'DAL', 'EURN', 'AMD',
'NVDA', 'SPG', 'DIS', 'SBUX', 'MMP', 'USFD', 'CHEF', 'SYY',
'GOOGL', 'MSFT']}
I'm passing the list through this function like so:
Port = kit.d(Ticker_List)

def d(Ticker_List):
    x = []
    for i in Ticker_List['Tickers']:
        x.append(Closing_price_alltime(i))
    return x

def Closing_price_alltime(Ticker):
    Closedf = td_client.get_price_history(Ticker, period_type='year', period=20,
                                          frequency_type='daily', frequency=1)
    return Closedf
Which pulls data from TDAmeritrade and gives me back:
[{'candles': [{'open': 147.46875,'high': 148.21875,
'low': 146.875,'close': 147.125,
'volume': 6998100,'datetime': 960181200000},
{'open': 146.625,'high': 147.78125,
'low': 145.90625,'close': 146.46875,
'volume': 4858900,'datetime': 960267600000},
...],
'symbol': 'MSFT',
'empty': False}]
(This is just a sample of course)
Finally, I'm cleaning up with:
Port = pd.DataFrame(Port)
Port = pd.DataFrame.drop(Port, columns='empty')
Which gives the DataFrame:
candles symbol
0 [{'open': 147.46875, 'high': 148.21875, 'low': 146.875, 'close': 147.125, 'volume': 6998100, 'datetime': 960181200000}, {'open': 146.625, 'high': ...} SPY
1 [{'open': 3.33259, 'high': 3.401786, 'low': 3.203126, 'close': 3.261161, 'volume': 80917200, 'datetime': 960181200000}, {'open': 3.284599, 'high':...} AAPL
How can I get the close prices out of the nested dictionary in each row and turn them into columns, with the ticker symbols (currently in their own column) as the headers of those closing-price columns? Also, how do I extract the datetime from each nested dictionary and set it as the index?
EDIT: More info
My original method of building this DataFrame was:
SPY_close=kit.Closing_price_alltime('SPY')
AAPL_close=kit.Closing_price_alltime('AAPL')
TSLA_close=kit.Closing_price_alltime('TSLA')
AMZN_close=kit.Closing_price_alltime('AMZN')
BRKB_close=kit.Closing_price_alltime('BRK.B')
DAL_close=kit.Closing_price_alltime('DAL')
EURN_close=kit.Closing_price_alltime('EURN')
AMD_close=kit.Closing_price_alltime('AMD')
NVDA_close=kit.Closing_price_alltime('NVDA')
SPG_close=kit.Closing_price_alltime('SPG')
DIS_close=kit.Closing_price_alltime('DIS')
SBUX_close=kit.Closing_price_alltime('SBUX')
MMP_close=kit.Closing_price_alltime('MMP')
USFD_close=kit.Closing_price_alltime('USFD')
CHEF_close=kit.Closing_price_alltime('CHEF')
SYY_close=kit.Closing_price_alltime('SYY')
GOOGL_close=kit.Closing_price_alltime('GOOGL')
MSFT_close=kit.Closing_price_alltime('MSFT')
def Closing_price_alltime(Ticker):
    """
    Gets Closing Price for Past 20 Years w/ Daily Intervals
    and Formats it to correct Date and single 'Closing Price'
    column.
    """
    Raw_close = td_client.get_price_history(Ticker, period_type='year', period=20,
                                            frequency_type='daily', frequency=1)
    #Closedf = pd.DataFrame(Raw_close['candles']).set_index('datetime')
    #Closedf = pd.DataFrame.drop(Closedf, columns=['open', 'high', 'low', 'volume'])
    #Closedf.index = pd.to_datetime(Closedf.index, unit='ms')
    #Closedf.index.names = ['Date']
    #Closedf.columns = [f'{Ticker} Close']
    #Closedf = Closedf.dropna()
    return Closedf
SPY_pct=kit.pct_change(SPY_close)
AAPL_pct=kit.pct_change(AAPL_close)
TSLA_pct=kit.pct_change(TSLA_close)
AMZN_pct=kit.pct_change(AMZN_close)
BRKB_pct=kit.pct_change(BRKB_close)
DAL_pct=kit.pct_change(DAL_close)
EURN_pct=kit.pct_change(EURN_close)
AMD_pct=kit.pct_change(AMD_close)
NVDA_pct=kit.pct_change(NVDA_close)
SPG_pct=kit.pct_change(SPG_close)
DIS_pct=kit.pct_change(DIS_close)
SBUX_pct=kit.pct_change(SBUX_close)
MMP_pct=kit.pct_change(MMP_close)
USFD_pct=kit.pct_change(USFD_close)
CHEF_pct=kit.pct_change(CHEF_close)
SYY_pct=kit.pct_change(SYY_close)
GOOGL_pct=kit.pct_change(GOOGL_close)
MSFT_pct=kit.pct_change(MSFT_close)
def pct_change(Ticker_ClosingValues):
    """
    Takes Closing Values and Finds Percent Change.
    Closing Value Column must be named 'Closing Price'.
    """
    return_pct = Ticker_ClosingValues.pct_change()
    return_pct = return_pct.dropna()
    return return_pct
Portfolio_hist_rets=[SPY_pct, AAPL_pct, TSLA_pct, AMZN_pct,
BRKB_pct, DAL_pct, EURN_pct, AMD_pct,
NVDA_pct, SPG_pct, DIS_pct, SBUX_pct,
MMP_pct, USFD_pct, CHEF_pct, SYY_pct,
GOOGL_pct, MSFT_pct]
Which returned exactly what I wanted:
SPY Close AAPL Close TSLA Close AMZN Close BRK.B Close
Date
2000-06-06 05:00:00 -0.004460 0.017111 NaN -0.072248 -0.002060
2000-06-07 05:00:00 0.006934 0.039704 NaN 0.024722 0.013416
2000-06-08 05:00:00 -0.003920 -0.018123 NaN 0.001206 -0.004073
This method is obviously much less efficient than just using a for loop to create a DataFrame from a list of tickers.
In short, I'm asking what changes can be made to my new code (above my edit) to achieve the same end result as my old code (below my edit) (a well formatted and labeled DataFrame).
Closing_price_alltime return value:
d = [{'candles': [{'open': 147.46875,'high': 148.21875,
'low': 146.875,'close': 147.125,
'volume': 6998100,'datetime': 960181200000},
{'open': 146.625,'high': 147.78125,
'low': 145.90625,'close': 146.46875,
'volume': 4858900,'datetime': 960267600000}
],
'symbol': 'MSFT',
'empty': False}]
You could extract symbol, datetime and close like this.
import operator
import pandas as pd
data = operator.itemgetter('datetime','close')
symbol = d[0]['symbol']
candles = d[0]['candles']
dt, closing = zip(*map(data, candles))
# for loop equivalent to zip(*map...)
#dt = []
#closing = []
#for candle in candles:
# dt.append(candle['datetime'])
# closing.append(candle['close'])
s = pd.Series(data=closing,index=dt,name=symbol)
This will create a DataFrame of closing prices for each symbol in the list.
results = []
for ticker in Ticker_List['Tickers']:
    d = Closing_price_alltime(ticker)
    symbol = d[0]['symbol']
    candles = d[0]['candles']
    dt, closing = zip(*map(data, candles))
    results.append(pd.Series(data=closing, index=dt, name=symbol))
df = pd.concat(results, axis=1)
You can then compute returns with pandas.DataFrame.pct_change.
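For instance, a minimal sketch using the df built above:
# convert the millisecond timestamps to datetimes, then compute daily returns
df.index = pd.to_datetime(df.index, unit='ms')
returns = df.pct_change().dropna()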
This is the final function I wrote which accomplishes my goal:
import operator
import pandas as pd

def Port_consol(Ticker_List):
    """
    Consolidates Ticker Symbol Returns and Returns
    a Single Portfolio
    """
    Port = []
    Port_ = []
    for i in Ticker_List['Tickers']:
        Port.append(Closing_price_alltime(i))
    n_assets = len(Port)  # one result per ticker
    data = operator.itemgetter('datetime', 'close')
    for i in range(n_assets):
        symbol = Port[i]['symbol']
        candles = Port[i]['candles']
        dt, closing = zip(*map(data, candles))
        s = pd.Series(data=closing, index=dt, name=symbol)
        s = pd.DataFrame(s)
        s.index = pd.to_datetime(s.index, unit='ms')
        Port_.append(s)
    Portfolio = pd.concat(Port_, axis=1, sort=False)
    return Portfolio
I can now pass a list of tickers to this function; the data is pulled from TDAmeritrade's API (using the python package td-ameritrade-python-api), and a DataFrame is formed with historical closing prices for the stocks whose tickers I pass in.
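Usage is then just (a sketch of my own; pct_change gives the return series the earlier per-ticker code computed):
Portfolio = Port_consol(Ticker_List)
returns = Portfolio.pct_change().dropna()  # daily returns, one column per ticker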
I have a Dataset structured like this:
"Date","Time","Open","High","Low","Close","Up","Down","Volume"
01/03/2000,00:05,1481.50,1481.50,1481.00,1481.00,2,0,0.00
01/03/2000,00:10,1480.75,1480.75,1480.75,1480.75,1,0,1.00
01/03/2000,00:20,1480.50,1480.50,1480.50,1480.50,1,0,1.00
[...]
03/01/2018,11:05,2717.25,2718.00,2708.50,2709.25,9935,15371,25306.00
03/01/2018,11:10,2709.25,2711.75,2706.50,2709.50,8388,8234,16622.00
03/01/2018,11:15,2709.25,2711.50,2708.25,2709.50,4738,4703,9441.00
03/01/2018,11:20,2709.25,2709.50,2706.00,2707.25,3609,4685,8294.00
I read this file in this way:
rows = pd.read_csv("Datasets/myfile.txt")
I want to get this information with pandas: for each day (so, grouped day by day), the first value of "Open", the last value of "Close", the highest value of "High", the lowest value of "Low", and the sum of "Volume".
I know how to do it with a for loop, but that is very inefficient. Is it possible to do it in a few lines with Pandas?
Thanks
Use groupby and agg:
df.groupby('Date').agg({
    'Close': 'last',
    'Open': 'first',
    'High': 'max',
    'Low': 'min',
    'Volume': 'sum'
})
Output:
Close Open High Low Volume
Date
01/03/2000 1480.50 1481.50 1481.5 1480.5 2.0
03/01/2018 2707.25 2717.25 2718.0 2706.0 59663.0
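One caveat worth adding: 'first' and 'last' pick rows by position, so the rows must be time-ordered within each day. A sketch that sorts first and uses named aggregation to get the usual OHLCV column order:
import pandas as pd

df = pd.read_csv("Datasets/myfile.txt").sort_values(['Date', 'Time'])
daily = df.groupby('Date').agg(Open=('Open', 'first'),
                               High=('High', 'max'),
                               Low=('Low', 'min'),
                               Close=('Close', 'last'),
                               Volume=('Volume', 'sum'))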
The example data:
{
'date': array(['06/08/2016', '06/09/2016', '06/10/2016']),
'close': array([ 923.13, 914.25, 909.42])
}
I am trying to get the date whose close is 914.25, i.e. list['date'][1], but I don't know how to get that index from close.
Thank you.
Ideally, if you are going to run this kind of query often, you should restructure your data to fit the use case better. For instance, use a dictionary where the keys are the amounts and the values are the dates; then each lookup into the dictionary by key is O(1).
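A minimal sketch of that inversion, using the lists from the question:
d = {
    'date': ['06/08/2016', '06/09/2016', '06/10/2016'],
    'close': [923.13, 914.25, 909.42],
}
# build the reverse mapping once; each later lookup is a single dict access
by_close = dict(zip(d['close'], d['date']))
print(by_close[914.25])  # '06/09/2016'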
But, in this state of the problem, you can solve it with zip() and next():
>>> d = {
... 'date': ['06/08/2016', '06/09/2016', '06/10/2016'],
... 'close': [ 923.13, 914.25, 909.42]
... }
>>> a = 914.25
>>> next(date for date, amount in zip(d['date'], d['close']) if amount == a)
'06/09/2016'
Note that if the amount would not be found, next() would fail with a StopIteration exception. You can either handle it, or you can provide a default beforehand:
>>> a = 10.00
>>> next((date for date, amount in zip(d['date'], d['close']) if amount == a), 'Not Found')
'Not Found'
You can try this:
>>> data = { 'date': ['06/08/2016', '06/09/2016', '06/10/2016'],'close': [ 923.13, 914.25, 909.42]}
>>> data['date'][data['close'].index(914.25)]
'06/09/2016'
Thanks to index(), you are able to get the index of the required value (914.25 in this case).
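Note that list.index() raises ValueError when the value is missing, so you may want to guard the lookup:
try:
    idx = data['close'].index(914.25)
    print(data['date'][idx])
except ValueError:
    print('Not Found')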
Where is the example from? The array(...) syntax looks like the repr of NumPy arrays rather than plain Python lists.
Assuming that the Python data structure is:
{
'date': ['06/08/2016', '06/09/2016', '06/10/2016'],
'close': [923.13, 914.25, 909.42]
}
and the indexes of close always match the indexes of date, then:
In [1]: d = {
...: 'date': ['06/08/2016', '06/09/2016', '06/10/2016'],
...: 'close': [923.13, 914.25, 909.42]
...: }
You find the index of 914.25:
In [2]: d['close'].index(914.25)
Out[2]: 1
You find the corresponding date:
In [3]: d['date'][1]
Out[3]: '06/09/2016'
I am using the yahoo finance library that can be found here:
https://pypi.python.org/pypi/yahoo-finance/1.2.1
I have a text file with ticker symbols. I am going through the ticker symbols and printing historical data for the stock prices.
How can I take the closing prices and store them so that I can use them later (to calculate averages)?
here is my code:
from yahoo_finance import Share
from pprint import pprint #for easy to view historical data
import calendar
import datetime
import time
cal = calendar.TextCalendar(calendar.SUNDAY)
#cal.prmonth(today)
#using datetime below:
today = datetime.date.today() #todays date
todayH = str(today) # because in .get_historical I need to use a string
yesterday = (today.toordinal()-10) #yesterdays date mathematically
dateYes = datetime.date.fromordinal(yesterday) #yesterdays date in format we want
dateYesH = str(dateYes) # because in .get_historical I need to use a string
print 'today:', today
print dateYesH
print 'ordinal:', today.toordinal()
rand = Share('yhoo')
# print rand.get_price() (works)
#pprint(rand.get_historical(dateYesH, todayH))
#reading text file
file1 = open('TickerF.txt', 'r')
words = file1.read().split(' ')
length = len(words)
#print words
#print len(words)
#print file1.read()
file1.close()
c = 0
try:
    while c < length:
        for i in words:
            symbol = str(i)
            stock = Share(symbol)
            c = c + 1
            print i
            #print c
            pprint(stock.get_historical(dateYesH, todayH))
except:
    pass
my output is :
today: 2015-12-06
2015-11-26
ordinal: 735938
YHOO
[{'Adj_Close': '34.91',
'Close': '34.91',
'Date': '2015-12-04',
'High': '35.200001',
'Low': '34.18',
'Open': '34.34',
'Symbol': 'YHOO',
'Volume': '15502700'},
{'Adj_Close': '34.34',
'Close': '34.34',
'Date': '2015-12-03',
'High': '35.720001',
'Low': '34.099998',
'Open': '35.59',
'Symbol': 'YHOO',
'Volume': '17068000'},
{'Adj_Close': '35.650002',
'Close': '35.650002',
'Date': '2015-12-02',
'High': '36.389999',
'Low': '34.77',
'Open': '35.00',
'Symbol': 'YHOO',
'Volume': '56614000'},
{'Adj_Close': '33.709999',
'Close': '33.709999',
'Date': '2015-12-01',
'High': '33.889999',
'Low': '33.470001',
'Open': '33.869999',
'Symbol': 'YHOO',
'Volume': '10862500'},
{'Adj_Close': '33.810001',
'Close': '33.810001',
'Date': '2015-11-30',
'High': '33.830002',
'Low': '32.849998',
'Open': '33.029999',
'Symbol': 'YHOO',
'Volume': '17363600'},
{'Adj_Close': '32.939999',
'Close': '32.939999',
'Date': '2015-11-27',
'High': '33.09',
'Low': '32.439999',
'Open': '32.790001',
'Symbol': 'YHOO',
'Volume': '5313400'}]
How can I store the 'Close' values while I go through my array? My idea is to create another array to store them, but how do I make it so that the array stores only the close values and not any of the other values?
You've got lots of options. The most common would be taking that list of dictionaries and saving it as (and this is in order of my preference for the different storage formats) a pickle, json, csv, or raw_text.
I'd like to give you some unsolicited advice and steer you towards pandas. It'll make your life easier because it does a particularly good job at data analysis as well as reading and writing to file. You can get most of the benefit of using pandas just by converting that list of dictionaries to a DataFrame, but pandas also provides some of the same parsing parts that yahoo_finance provides. For instance:
from pandas.io import data
df = data.get_data_yahoo('YHOO')
will give you those same Date / Close / Adj Close / Open / High / Low / Volume going back to 2010. If you want to save/load the data to disk, you can just do
import pandas as pd

df.to_pickle('/tmp/yhoo.pkl')
df = pd.read_pickle('/tmp/yhoo.pkl')
It'll also make it easier to analyze the data. For instance, if you just want the average close price:
>>> print df.Close.mean()
25.470388733914213
I wrote an example that stores all the close prices (per date) in a list. The output is the close prices of GOOG for the first six months of 2015:
from yahoo_finance import Share
stock = Share('GOOG')
start_date = '2015-01-01'
end_date = '2015-06-30'
closes = [c['Close'] for c in stock.get_historical(start_date, end_date)]
for c in closes:
    print c
Output:
520.51001
521.52002
531.690002
535.22998
537.840027
540.47998
538.190002
536.690002
536.72998
529.26001
528.150024
527.200012
532.330017
534.609985
536.690002
526.690002
526.830017
533.330017
536.700012
540.309998
539.179993
533.98999
532.109985
539.780029
539.789978
532.320007
540.109985
542.51001
539.27002
537.359985
532.299988
533.849976
538.400024
529.619995
529.039978
535.700012
538.219971
530.700012
524.219971
530.799988
540.780029
537.900024
537.340027
549.080017
553.679993
555.369995
565.062561
547.002472
539.367458
533.972413
535.382408
524.052386
533.802391
532.532429
530.392405
539.172404
540.012477
540.782472
541.612446
537.022465
536.767432
535.532478
542.562439
548.002468
552.032502
548.342512
555.172522
558.787478
570.192597
558.81251
560.362537
557.992512
559.502513
550.842532
554.512509
547.322503
555.512505
551.182515
555.012538
568.852557
567.687558
575.332609
573.372583
573.64261
571.342601
558.402572
555.482516
543.872489
536.092424
531.912381
538.952441
542.872432
539.702422
542.842504
549.012501
542.932472
535.972405
536.942412
527.832406
531.002415
527.582391
522.762349
529.2424
528.482381
534.522445
510.662331
510.002318
518.63237
535.212448
539.952437
534.39245
518.042312
506.902294
508.082288
501.792271
500.872267
496.182251
492.552209
496.172274
502.682255
501.102268
501.962262
513.872306
524.812404
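And since the goal was to use the stored closes later (for averages), here is a short sketch of that step; note the library returns prices as strings, so convert to float first:
closes = [float(c['Close']) for c in stock.get_historical(start_date, end_date)]
average_close = sum(closes) / len(closes)
print(average_close)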