Wrong Range Rate with Pyephem - python

I am trying to calculate a satellite's range rate using Python and pyephem. Unfortunately, pyephem's result seems to be wrong.
After comparing the value with calculations made by other satellite tracking programs such as Gpredict or Ham Radio Deluxe, the difference is as large as 2 km/s. The calculated values for the azimuth and elevation angles are almost the same, though. The TLEs are fresh and the system clocks agree.
Do you see any mistake in my code, or do you have an idea what else could cause the error?
Thank you very much!
Here is my code:
import ephem
import time

# TLE Kepler elements
line1 = "ESTCUBE 1"
line2 = "1 39161U 13021C 13255.21187718 .00000558 00000-0 10331-3 0 3586"
line3 = "2 39161 98.1264 332.9982 0009258 190.0328 170.0700 14.69100578 18774"
satellite = ephem.readtle(line1, line2, line3)  # create ephem object from TLE information

while True:
    city = ephem.Observer()  # recreate observer; its date defaults to the current time
    # note: these values appear swapped for Berlin (lat 52.5186 N, lon 13.4080 E)
    city.lon, city.lat, city.elevation = '52.5186', '13.4080', 100
    satellite.compute(city)
    RangeRate = satellite.range_velocity / 1000  # range rate in km/s
    print("RangeRate: " + str(RangeRate))
    time.sleep(1)
I recorded some range rate values from the script and from Gpredict to make the error reproducible:
ESTCUBE 1
1 39161U 13021C 13255.96108453 .00000546 00000-0 10138-3 0 3602
2 39161 98.1264 333.7428 0009246 187.4393 172.6674 14.69101320 18883
date: 2013-09-13

time      pyephem script (km/s)  Gpredict (km/s)
14:07:02  -1.636                 -3.204
14:12:59  -2.154                 -4.355
14:15:15  -2.277                 -4.747
14:18:48  -2.368                 -5.291
And I added some lines to calculate the satellite's elevation and coordinates:
elevation = satellite.elevation
sat_latitude = satellite.sublat
sat_longitude = satellite.sublong
The results with timestamps are:
2013-09-13 14:58:13
RangeRate: 2.15717797852 km/s
Range: 9199834.0
Sat Elevation: 660743.6875
Sat_Latitude: -2:22:27.3
Sat_Longitude: -33:15:15.4
2013-09-13 14:58:14
RangeRate: 2.15695092773 km/s
Range: 9202106.0
Sat Elevation: 660750.9375
Sat_Latitude: -2:26:05.8
Sat_Longitude: -33:16:01.7
Another important piece of information might be that I am trying to calculate the Doppler frequency for a satellite pass, which is why I need the range rate:
f_Doppler_corrected = (c0 / (c0 + RangeRate)) * f0
Range rate describes the velocity of a moving object along the line of sight to the observer. Maybe range_velocity is something different?
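For context, this is the kind of calculation the range rate feeds into. A minimal sketch of the formula above (the frequency F0 is a placeholder, not a value from the thread):

C0 = 299792458.0  # speed of light, m/s
F0 = 437.505e6    # example downlink frequency in Hz (placeholder)

def doppler_corrected(f0, range_velocity):
    """Classical Doppler correction; range_velocity in m/s,
    positive when the satellite is receding from the observer."""
    return (C0 / (C0 + range_velocity)) * f0

# usage, after satellite.compute(observer):
# f_rx = doppler_corrected(F0, satellite.range_velocity)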

It seems that pyephem (with libastro as its backend) and Gpredict (with predict as its backend) calculate the satellite velocity differently. I am attaching detailed comparison output for an actual reference observation. It can be seen that both output the correct position, while only Gpredict outputs reasonable range rate values. The error seems to be in the satellite velocity vector. Gpredict's results look more reasonable (and the corresponding code in libastro is marked with question marks), so I will propose a fix in libastro to handle it as Gpredict does; maybe someone who understands the math behind it can add to this.
I added another tool, PyPredict (also predict based), to get some calculations here. However, its values are off, so that must be something else.
Pyephem: 3.7.5.3
Gpredict: 1.3
PyPredict: 1.1 (Git: 10/02/2015)
OS: Ubuntu x64
Python 2.7.6
Time:
Epoch timestamp: 1420086600
Timestamp in milliseconds: 1420086600000
Human time (GMT): Thu, 01 Jan 2015 04:30:00 GMT
ISS (ZARYA)
1 25544U 98067A 15096.52834639 .00016216 00000-0 24016-3 0 9993
2 25544 51.6469 82.0200 0006014 185.1879 274.8446 15.55408008936880
observation point: N0 E0 alt=0
Test 1:
Gpredict: (Time, Az, El, Slant Range, Range Velocity)
2015 01 01 04:30:00 202.31 -21.46 5638 -5.646
2015 01 01 04:40:00 157.31 -2.35 2618 -3.107
2015 01 01 04:50:00 72.68 -10.26 3731 5.262
Pyephem 3.7.5.3 (default atmospheric refraction)
(2015/1/1 04:30:00, 202:18:45.3, -21:27:43.0, 5638.0685, -5.3014228515625)
(2015/1/1 04:40:00, 157:19:08.3, -1:21:28.6, 2617.9915, -2.934402099609375)
(2015/1/1 04:50:00, 72:40:59.9, -10:15:15.1, 3730.78375, 4.92381201171875)
No atmospheric refraction
(2015/1/1 04:30:00, 202:18:45.3, -21:27:43.0, 5638.0685, -5.3014228515625)
(2015/1/1 04:40:00, 157:19:08.3, -1:21:28.6, 2617.9915, -2.934402099609375)
(2015/1/1 04:50:00, 72:40:59.9, -10:15:15.1, 3730.78375, 4.92381201171875)
Pypredict
1420086600.0
{'decayed': 0, 'elevation': -19.608647085869123, 'name': 'ISS (ZARYA)', 'norad_id': 25544, 'altitude': 426.45804846615556, 'orbit': 92208, 'longitude': 335.2203454719759, 'sunlit': 1, 'geostationary': 0, 'footprint': 4540.173580837984, 'epoch': 1420086600.0, 'doppler': 1635.3621339278857, 'visibility': 'D', 'azimuth': 194.02436209048014, 'latitude': -45.784314563471646, 'orbital_model': 'SGP4', 'orbital_phase': 73.46488929141783, 'eclipse_depth': -8.890253049060693, 'slant_range': 5311.3721164183535, 'has_aos': 1, 'orbital_velocity': 27556.552465256085}
1420087200.0
{'decayed': 0, 'elevation': -6.757496200551716, 'name': 'ISS (ZARYA)', 'norad_id': 25544, 'altitude': 419.11153234752874, 'orbit': 92208, 'longitude': 9.137628905963876, 'sunlit': 1, 'geostationary': 0, 'footprint': 4502.939901708917, 'epoch': 1420087200.0, 'doppler': 270.6901377419433, 'visibility': 'D', 'azimuth': 139.21315598291235, 'latitude': -20.925997669236732, 'orbital_model': 'SGP4', 'orbital_phase': 101.06301876416072, 'eclipse_depth': -18.410968838249545, 'slant_range': 3209.8444916123644, 'has_aos': 1, 'orbital_velocity': 27568.150821416708}
1420087800.0
{'decayed': 0, 'elevation': -16.546383900323555, 'name': 'ISS (ZARYA)', 'norad_id': 25544, 'altitude': 414.1342802649042, 'orbit': 92208, 'longitude': 31.52356804788407, 'sunlit': 1, 'geostationary': 0, 'footprint': 4477.499436144489, 'epoch': 1420087800.0000002, 'doppler': -1597.032808834609, 'visibility': 'D', 'azimuth': 76.1840387294104, 'latitude': 9.316828913183791, 'orbital_model': 'SGP4', 'orbital_phase': 128.66115193399546, 'eclipse_depth': -28.67721196244149, 'slant_range': 4773.838774518728, 'has_aos': 1, 'orbital_velocity': 27583.591664378775}
Test 2 (short time):
Gpredict: (Slant Range, Range Velocity)
2015 01 01 04:30:00 5638 -5.646
2015 01 01 04:30:10 5581 -5.648
->5.7 km/s avg
Pyephem:
(2015/1/1 04:30:00, 5638.0685, -5.3014228515625)
(2015/1/1 04:30:10, 5581.596, -5.30395361328125)
-> 5.65 km/s avg (the finite-difference rate matches Gpredict, not pyephem's own range_velocity)
Pyephem script:
import ephem

# TLE Kepler elements
line1 = "ISS (ZARYA)"
line2 = "1 25544U 98067A 15096.52834639 .00016216 00000-0 24016-3 0 9993"
line3 = "2 25544 51.6469 82.0200 0006014 185.1879 274.8446 15.55408008936880"
satellite = ephem.readtle(line1, line2, line3)  # create ephem object from TLE information

obs = ephem.Observer()
obs.lon, obs.lat, obs.elevation = '0', '0', 0

print('Pyephem Default (atmospheric refraction)')
for date in ('2015/1/1 04:30:00', '2015/1/1 04:40:00', '2015/1/1 04:50:00'):
    obs.date = date
    satellite.compute(obs)
    print(obs.date, satellite.az, satellite.alt, satellite.range/1000, satellite.range_velocity/1000)

obs.pressure = 0  # disable atmospheric refraction
print('Pyephem No atmospheric refraction')
for date in ('2015/1/1 04:30:00', '2015/1/1 04:40:00', '2015/1/1 04:50:00'):
    obs.date = date
    satellite.compute(obs)
    print(obs.date, satellite.az, satellite.alt, satellite.range/1000, satellite.range_velocity/1000)

print('10 s timing')
for date in ('2015/1/1 04:30:00', '2015/1/1 04:30:10'):
    obs.date = date
    satellite.compute(obs)
    print(obs.date, satellite.range/1000, satellite.range_velocity/1000)
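As a sanity check on range_velocity, the range rate can be approximated by finite differences over a short interval. A small sketch reusing the satellite and obs objects from the script above:

def avg_range_rate(sat, observer, date1, date2, dt_seconds):
    """Approximate range rate (km/s) as the change in slant range over dt."""
    observer.date = date1
    sat.compute(observer)
    r1 = sat.range  # metres
    observer.date = date2
    sat.compute(observer)
    r2 = sat.range
    return (r2 - r1) / 1000.0 / dt_seconds

# usage with the script above:
# print(avg_range_rate(satellite, obs, '2015/1/1 04:30:00', '2015/1/1 04:30:10', 10))
# This gives roughly -5.65 km/s, close to Gpredict's -5.646 but not to
# pyephem 3.7.5.3's instantaneous -5.30.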
PyPredict script:
import predict
import datetime

format = '%Y/%m/%d %H:%M:%S'
tle = """ISS (ZARYA)
1 25544U 98067A 15096.52834639 .00016216 00000-0 24016-3 0 9993
2 25544 51.6469 82.0200 0006014 185.1879 274.8446 15.55408008936880"""
# lat (N), long (W), alt (metres); note that predict's longitude is west-positive,
# so the 10 here appears to place the observer at 10 W, not at the N0 E0 point
# used with the other tools
qth = (0, 10, 0)

# predict expects time as an epoch float
for date_str in ('2015/1/1 04:30:00', '2015/1/1 04:40:00', '2015/1/1 04:50:00'):
    t = (datetime.datetime.strptime(date_str, format) - datetime.datetime(1970, 1, 1)).total_seconds()
    result = predict.observe(tle, qth, t)
    print t
    print result
Debug output of Gpredict and PyEphem
PyEphem (libastro):
Name = ISS (ZARYA)
current jd = 2457023.68750
current mjd = 42003.7
satellite jd = 2457119.02835
satellite mjd = 42099
SiteLat = 0
SiteLong = 6.28319
SiteAltitude = 0
se_EPOCH : 115096.52834638999775052071
se_XNO : 0.06786747737871574870
se_XINCL : 0.90140843391418457031
se_XNODEO : 1.43151903152465820312
se_EO : 0.00060139998095110059
se_OMEGAO : 3.23213863372802734375
se_XMO : 4.79694318771362304688
se_BSTAR : 0.00024016000679694116
se_XNDT20 : 0.00000000049135865048
se_orbit : 93688
dt : -137290.81880159676074981689
CrntTime = 42004.2
SatX = -3807.5
SatY = 2844.85
SatZ = -4854.26
Radius = 6793.68
SatVX = -5.72752
SatVY = -3.69533
SatVZ = 2.32194
SiteX = -6239.11
SiteY = 1324.55
SiteZ = 0
SiteVX = -0.0965879
SiteVY = -0.454963
Height = 426.426
SSPLat = -0.795946
SSPLong = 0.432494
Azimuth = 3.53102
Elevation = -0.374582
Range = 5638.07
RangeRate = -5.30142
(2015/1/1 04:30:00, 5638.0685, -5.3014228515625)
Gpredict:
time: 2457023.687500
pos obs: -6239.093574, 1324.506494, 0.000000
pos sat: -3807.793748, 2844.641722, -4854.112635
vel obs: -0.096585, -0.454962, 0.000000
vel sat: -6.088242, -3.928388, 2.468585
Gpredict (sgp_math.h):
/*------------------------------------------------------------------*/
/* Converts the satellite's position and velocity                    */
/* vectors from normalised values to km and km/sec                   */
void
Convert_Sat_State( vector_t *pos, vector_t *vel )
{
    Scale_Vector( xkmper, pos );
    Scale_Vector( xkmper*xmnpda/secday, vel );
} /* Procedure Convert_Sat_State */
PyEphem (libastro):
*SatX = ERAD*posvec.x/1000;  /* earth radii to km */
*SatY = ERAD*posvec.y/1000;
*SatZ = ERAD*posvec.z/1000;
*SatVX = 100*velvec.x;  /* ?? */
*SatVY = 100*velvec.y;
*SatVZ = 100*velvec.z;
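The two scale factors can be compared numerically. Assuming the standard SGP constants (xkmper = 6378.135 km per earth radius, xmnpda = 1440 minutes per day, secday = 86400 seconds per day), a quick check:

# Gpredict converts normalised velocity (earth radii per minute) to km/s:
xkmper, xmnpda, secday = 6378.135, 1440.0, 86400.0
gpredict_scale = xkmper * xmnpda / secday  # = 106.30225
libastro_scale = 100.0                     # the "??" factor above

# Rescaling pyephem's range rate by the ratio roughly reproduces Gpredict:
print(-5.3014 * gpredict_scale / libastro_scale)  # ~ -5.636, vs Gpredict's -5.646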

Updating to the most recent release of pyephem (I tried V3.7.6.0) seems to solve the problem. The range rate now agrees closely with the values given by other commonly used tracking software.
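To confirm which version is actually being imported, a quick check (pyephem exposes its version string):

import ephem
print(ephem.__version__)  # should report 3.7.6.0 or later for the corrected range rate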

Related

Trend Trigger Factor Indicator (TTF) in Python?

I am trying to convert the TTF indicator from TradingView Pine Script to Python (without plotting).
This is the Pine Script code I am trying to convert:
//@version=3
// Copyright (c) 2018-present, Alex Orekhov (everget)
// Trend Trigger Factor script may be freely distributed under the MIT license.
study("Trend Trigger Factor", shorttitle="TTF")
length = input(title="Lookback Length", type=integer, defval=15)
upperLevel = input(title="Upper Trigger Level", type=integer, defval=100, minval=1)
lowerLevel = input(title="Lower Trigger Level", type=integer, defval=-100, maxval=-1)
highlightBreakouts = input(title="Highlight Overbought/Oversold Breakouts ?", type=bool, defval=true)
src = input(title="Source", type=source, defval=close)
hh = highest(length)
ll = lowest(length)
buyPower = hh - nz(ll[length])
sellPower = nz(hh[length]) - ll
ttf = 200 * (buyPower - sellPower) / (buyPower + sellPower)
ttfColor = ttf > upperLevel ? #0ebb23 : ttf < lowerLevel ? #ff0000 : #f4b77d
plot(ttf, title="TTF", linewidth=2, color=ttfColor, transp=0)
transparent = color(white, 100)
maxLevelPlot = hline(200, title="Max Level", linestyle=dotted, color=transparent)
upperLevelPlot = hline(upperLevel, title="Upper Trigger Level", linestyle=dotted)
hline(0, title="Zero Level", linestyle=dotted)
lowerLevelPlot = hline(lowerLevel, title="Lower Trigger Level", linestyle=dotted)
minLevelPlot = hline(-200, title="Min Level", linestyle=dotted, color=transparent)
fill(upperLevelPlot, lowerLevelPlot, color=purple, transp=95)
upperFillColor = ttf > upperLevel and highlightBreakouts ? green : transparent
lowerFillColor = ttf < lowerLevel and highlightBreakouts ? red : transparent
fill(maxLevelPlot, upperLevelPlot, color=upperFillColor, transp=90)
fill(minLevelPlot, lowerLevelPlot, color=lowerFillColor, transp=90)
Here is what I have done so far:
import yfinance as yf

ohlc = yf.download('BTC-USD', start='2022-08-01', interval='1d')
length = 15
hh = ohlc['High'].rolling(length).max()
ll = ohlc['Low'].rolling(length).min()
buyPower = hh - ll.fillna(0)
sellPower = hh.fillna(0) - ll
ttf = 200 * (buyPower - sellPower) / (buyPower + sellPower)
I don't know what I am doing wrong, but TTF always comes out like this:
Date
2022-07-31 NaN
2022-08-01 NaN
2022-08-02 NaN
2022-08-03 NaN
2022-08-04 NaN
...
2022-11-14 0.0
2022-11-15 0.0
2022-11-16 0.0
2022-11-17 0.0
2022-11-18 0.0
Length: 111, dtype: float64
I think these two Pine Script lines are the ones I converted incorrectly:
buyPower = hh - nz(ll[length])
sellPower = nz(hh[length]) - ll
But I'm not sure, and I also don't know what their Python equivalent would be.
Any ideas, please?
Thank you in advance!
After struggling a little I found the right answer.
I was right about which part of my code above was wrong:
buyPower = hh - nz(ll[length])
sellPower = nz(hh[length]) - ll
It is not equivalent to this:
buyPower = hh - ll.fillna(0)
sellPower = hh.fillna(0) - ll
The correct Python conversion is:
buyPower = hh - ll.shift(length).fillna(0)
sellPower = hh.shift(length).fillna(0) - ll
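Putting the fix together, a self-contained sketch (assumes yfinance is installed and returns plain OHLC columns, as in the snippets above; in Pine, nz(x[length]) is x shifted back length bars with NaN replaced by 0, hence shift + fillna):

import yfinance as yf

length = 15
ohlc = yf.download('BTC-USD', start='2022-08-01', interval='1d')

hh = ohlc['High'].rolling(length).max()  # highest(length)
ll = ohlc['Low'].rolling(length).min()   # lowest(length)

# nz(ll[length]) / nz(hh[length]): the value from `length` bars ago, NaN -> 0
buyPower = hh - ll.shift(length).fillna(0)
sellPower = hh.shift(length).fillna(0) - ll

ttf = 200 * (buyPower - sellPower) / (buyPower + sellPower)
print(ttf.dropna().tail())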

How can I price a digital option with a short maturity?

I am trying to price a digital call option using the QuantLib Python package, but I get an inconsistent result. I am trying to price a deep ITM option with a 1-day maturity, and it returns 0, which doesn't make sense. Could you please help me?
import QuantLib as ql          # imports implied by the snippet, not shown in the question
from datetime import datetime

def Digital_Call(s0: float,
                 strike: float,
                 risk_free: float,
                 vol: float,
                 today: datetime,
                 maturity: datetime,
                 cash_payoff: float):
    "Output: option_npv"
    dividend_yield = 0
    riskFreeTS = ql.YieldTermStructureHandle(ql.FlatForward(today, risk_free, ql.ActualActual()))
    dividendTS = ql.YieldTermStructureHandle(ql.FlatForward(today, dividend_yield, ql.ActualActual()))
    volatility = ql.BlackVolTermStructureHandle(ql.BlackConstantVol(today, ql.NullCalendar(), vol, ql.ActualActual()))
    initialValue = ql.QuoteHandle(ql.SimpleQuote(s0))
    process = ql.BlackScholesMertonProcess(initialValue, dividendTS, riskFreeTS, volatility)
    engine = ql.AnalyticEuropeanEngine(process)
    option_type = ql.Option.Call
    option = ql.VanillaOption(ql.CashOrNothingPayoff(option_type, strike, cash_payoff), ql.EuropeanExercise(maturity))
    option.setPricingEngine(engine)
    return option
s0=150
strike = 14.36
risk_free = 0.0302373913
#risk_free = 0
vol = 0.2567723271
today = datetime(2022,9,28)
maturity = datetime(2022,9,30)  # it doesn't work with a maturity of either 1 or 2 days
cash_payoff = 30
It returns 0.
Thank You
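One thing worth checking (an assumption on my part, not something confirmed in the thread): QuantLib prices relative to its global evaluation date, which defaults to the real current date. If that date is on or after the maturity, the engine treats the option as expired and NPV() is 0, which matters a lot for 1-day maturities. A minimal check:

import QuantLib as ql

# hypothetical date matching the snippet above
today = ql.Date(28, 9, 2022)
print(ql.Settings.instance().evaluationDate)   # whatever QuantLib is currently using
ql.Settings.instance().evaluationDate = today  # pin valuation to the intended date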

max curve time error in Heston calibration quantlib python

I am running a SWIG Python 1.16 version of QuantLib, compiled from source.
I have been trying to calibrate a Heston model following this example.
I am only using the QuantLib calibration at the moment, to test it out before trying others.
I need time-dependent parameters, so I am using PiecewiseTimeDependentHestonModel.
Here is the relevant portion of my code.
Helper functions:
import QuantLib as ql          # imports implied by the snippets, not shown in the question
from datetime import datetime
from dateutil.relativedelta import relativedelta

def tenor2date(s, base_date=None, to_ql=False):
    # returns a date from a tenor and a base date
    # (parameter renamed from `ql` to avoid shadowing the QuantLib module)
    if base_date is None:
        base_date = datetime.today()
    num = float(s[:-1])
    period = s[-1].upper()
    if period == "Y":
        return_date = base_date + relativedelta(years=num)
    elif period == "M":
        return_date = base_date + relativedelta(months=num)
    elif period == "W":
        return_date = base_date + relativedelta(weeks=num)
    elif period == "D":
        return_date = base_date + relativedelta(days=num)
    else:
        return_date = base_date
    if to_ql:
        return ql.Date(return_date.strftime("%F"), "yyyy-mm-dd")
    else:
        return return_date

def setup_model(yield_ts, dividend_ts, spot, times, init_condition=(0.02, 0.2, 0.5, 0.1, 0.01)):
    theta, kappa, sigma, rho, v0 = init_condition
    model = ql.PiecewiseTimeDependentHestonModel(yield_ts, dividend_ts,
                                                 ql.QuoteHandle(ql.SimpleQuote(spot)),
                                                 v0, ql.Parameter(), ql.Parameter(),
                                                 ql.Parameter(), ql.Parameter(),
                                                 ql.TimeGrid(times))
    engine = ql.AnalyticPTDHestonEngine(model)
    return model, engine

def setup_helpers(engine, vol_surface, ref_date, spot, yield_ts, dividend_ts):
    heston_helpers = []
    grid_data = []
    for tenor in vol_surface:
        expiry_date = tenor2date(tenor, datetime(ref_date.year(), ref_date.month(), ref_date.dayOfMonth()), True)
        t = (expiry_date - ref_date)
        print(f"{tenor} : {t / 365}")
        p = ql.Period(t, ql.Days)
        for strike, vol in zip(vol_surface[tenor]["strikes"], vol_surface[tenor]["volatilities"]):
            print((strike, vol))
            helper = ql.HestonModelHelper(p, calendar, spot, strike,
                                          ql.QuoteHandle(ql.SimpleQuote(vol / 100)),
                                          yield_ts, dividend_ts)
            helper.setPricingEngine(engine)
            heston_helpers.append(helper)
            grid_data.append((expiry_date, strike))
    return heston_helpers, grid_data
Market data:
vol_surface = {'12M': {'strikes': [1.0030154025220293, 0.9840808634190958, 0.9589657270688433, 0.9408279805370683, 0.9174122318462831, 0.8963792435025802, 0.8787138822765832, 0.8538712672800733, 0.8355036501980958], 'volatilities': [6.7175, 6.5, 6.24375, 6.145, 6.195, 6.425, 6.72125, 7.21, 7.5625], 'forward': 0.919323}, '1M': {'strikes': [0.9369864196692815, 0.9324482223892986, 0.9261255003380027, 0.9213195223581382, 0.9150244003650484, 0.9088253068972495, 0.9038936313900919, 0.897245676067657, 0.8924388848562849], 'volatilities': [6.3475, 6.23375, 6.1075, 6.06, 6.09, 6.215, 6.3725, 6.63125, 6.8225], 'forward': 0.915169}, '1W': {'strikes': [0.9258809998009043, 0.9236526412979602, 0.920487656155217, 0.9180490618315417, 0.9148370595017086, 0.9116231311263782, 0.9090950947170667, 0.9057357691404444, 0.9033397443834199], 'volatilities': [6.7175, 6.63375, 6.53625, 6.5025, 6.53, 6.6425, 6.77875, 6.99625, 7.1525], 'forward': 0.914875}, '2M': {'strikes': [0.9456173410343232, 0.9392447942175677, 0.9304717860942596, 0.9238709412876663, 0.9152350197527926, 0.9068086964842931, 0.9000335970840222, 0.8908167643473346, 0.884110721680849], 'volatilities': [6.1575, 6.02625, 5.8825, 5.8325, 5.87, 6.0175, 6.1975, 6.48875, 6.7025], 'forward': 0.915506}, '3M': {'strikes': [0.9533543407827232, 0.945357456067501, 0.9343646071178692, 0.9261489737826977, 0.9154251386183144, 0.9050707394248945, 0.8966770979707913, 0.8851907303568785, 0.876803402158318], 'volatilities': [6.23, 6.09125, 5.93, 5.8725, 5.915, 6.0775, 6.28, 6.60375, 6.84], 'forward': 0.915841}, '4M': {'strikes': [0.9603950279333742, 0.9509237742916833, 0.9379657828957041, 0.928295643018581, 0.9156834006905108, 0.9036539552069216, 0.8938804229269658, 0.8804999196762403, 0.870730837142799], 'volatilities': [6.3175, 6.17125, 6.005, 5.94375, 5.985, 6.15125, 6.36, 6.69375, 6.9375], 'forward': 0.916255}, '6M': {'strikes': [0.9719887962018352, 0.9599837798239937, 0.943700651576822, 0.9316544554849711, 0.9159768970939797, 0.9013018796367052, 0.8892904835162911, 0.8727031923006017, 0.8605425787295339], 'volatilities': [6.3925, 6.22875, 6.04125, 5.9725, 6.01, 6.1875, 6.41375, 6.78625, 7.0575], 'forward': 0.916851}, '9M': {'strikes': [0.9879332225745909, 0.9724112749400833, 0.951642771321364, 0.936450663789222, 0.9167103888580063, 0.8985852649047051, 0.8835274087791912, 0.8625837214139542, 0.8472311260811375], 'volatilities': [6.54, 6.34875, 6.1325, 6.055, 6.11, 6.32, 6.5875, 7.01625, 7.32], 'forward': 0.918086}}
spotDates = [ql.Date(1,7,2019), ql.Date(8,7,2019), ql.Date(1,8,2019), ql.Date(1,9,2019), ql.Date(1,10,2019), ql.Date(1,11,2019), ql.Date(1,1,2020), ql.Date(1,4,2020), ql.Date(1,7,2020)]
spotRates = [0.9148, 0.914875, 0.915169, 0.915506, 0.915841, 0.916255, 0.916851, 0.918086, 0.919323]
udl_value = 0.9148
todaysDate = ql.Date("2019-07-01","yyyy-mm-dd")
settlementDate = ql.Date("2019-07-03","yyyy-mm-dd")
and the script itself:
from math import log

ql.Settings.instance().evaluationDate = todaysDate
dayCounter = ql.Actual365Fixed()
interpolation = ql.Linear()
compounding = ql.Compounded
compoundingFrequency = ql.Annual
times = [(x - spotDates[0]) / 365 for x in spotDates][1:]
discountFactors = [-log(x / spotRates[0]) / times[i] for i, x in enumerate(spotRates[1:])]
# note: `calendar` (used here and in setup_helpers) is not defined in the question;
# per the answer below it was apparently a business-day calendar such as ql.UnitedStates
fwdCurve = ql.ZeroCurve(spotDates, [0] + discountFactors, dayCounter, calendar, interpolation, compounding, compoundingFrequency)
fwdCurveHandle = ql.YieldTermStructureHandle(fwdCurve)
dividendCurveHandle = ql.YieldTermStructureHandle(ql.FlatForward(settlementDate, 0, dayCounter))
hestonModel, hestonEngine = setup_model(fwdCurveHandle, dividendCurveHandle, udl_value, times)
heston_helpers, grid_data = setup_helpers(hestonEngine, vol_surface, todaysDate, udl_value, fwdCurveHandle, dividendCurveHandle)
lm = ql.LevenbergMarquardt(1e-8, 1e-8, 1e-8)
hestonModel.calibrate(heston_helpers, lm, ql.EndCriteria(500, 300, 1.0e-8, 1.0e-8, 1.0e-8))
When I run the last line I get the following error message:
RuntimeError: time (1.42466) is past max curve time (1.00274)
I do not understand how it can try to price anything beyond 1Y, as both the helpers and the forward curve are defined on the same set of dates.
In case it helps someone, here is the answer I got from the QuantLib mailing list:
Specifying the maturity in days,
t = (expiry_date - ref_date)
print(f"{tenor} : {t / 365}")
p = ql.Period(t, ql.Days)
might have a counterintuitive effect here, as the specified calendar is used to calculate the real expiry date. If the calendar is e.g. ql.UnitedStates, it takes weekends and holidays into consideration:
ql.UnitedStates().advance(ql.Date(1,1,2019), ql.Period(365, ql.Days))  =>  Date(12,6,2020)
whereas
ql.NullCalendar().advance(ql.Date(1,1,2019), ql.Period(365, ql.Days))  =>  Date(1,1,2020)
Hence the interest rate curve is not long enough, which throws the error message.
So the fix is to make sure to use ql.NullCalendar() across the board.
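A minimal runnable illustration of that calendar difference (note: newer QuantLib builds require a market argument, e.g. ql.UnitedStates(ql.UnitedStates.NYSE)):

import QuantLib as ql

d = ql.Date(1, 1, 2019)
p = ql.Period(365, ql.Days)

# business-day calendar: 365 *business* days reach well into 2020
print(ql.UnitedStates().advance(d, p))  # Date(12,6,2020)
# null calendar: 365 plain calendar days
print(ql.NullCalendar().advance(d, p))  # Date(1,1,2020)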

Dataframe to Time Series when minutes are repeated

I'm working with clinical data and want to predict patients' waiting times at every minute. The (simplified) data looks something like this:
Time(minutes)  PatientSerial  RemainingTime(minutes)
420            1              5
420            2              10
420            3              8
421            1              4
421            2              9
421            3              7
Here 420 is the number of minutes since midnight (420 = 7:00 am), and my output is RemainingTime (historical data). In general, the machine learning algorithm should generate the waiting time of every patient at each minute, given that the input is clinical data generated every minute. But I'm confused about how to convert this dataframe into a time series when the same minutes are repeated.
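For what it's worth, one common way to collapse the repeated minutes into a single value per timestamp is a groupby aggregation; a sketch with pandas (column names taken from the table above):

import pandas as pd

df = pd.DataFrame({
    'Time': [420, 420, 420, 421, 421, 421],
    'PatientSerial': [1, 2, 3, 1, 2, 3],
    'RemainingTime': [5, 10, 8, 4, 9, 7],
})

# one mean waiting time per minute -> a regular time series
mean_wait = df.groupby('Time')['RemainingTime'].mean()
print(mean_wait)
# Time
# 420    7.666667
# 421    6.666667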
For clarity: this is not an answer but a question about what the result should look like (I am not able to show the view in a comment underneath the question). This may help in understanding how the question should be solved. Edit 1: a coded answer is below.
@Ted:
I would like to know whether the result you are trying to get looks like the following table:
Time (min)  MeanWait (min, single default patient)
...         ...
420         5.2
421         4.9
422         4.3
423         4.2
...         ...
820         11.39
821         11.41
822         11.41
823         11.09
824         10.7
825         10.69
...         ...
Should the end result be viewed as a PDF using Matplotlib, or in a program GUI on screen? If so, modify your question to include that.
EDIT 1:
Based on the comments, I have written the script below, which does the core job of calculating the mean patient waiting time per unit of daytime (minutes). Inline comments explain what happens where. As I reckon you can implement the file loading and output writing yourself, I did not add that, nor the matplotlib part; there are many examples here and around the web that suffice.
import datetime

# The day timescale is from 0 to 1440 minutes and then resets for day 2.
# The input textfile can have 24h (0-1440) or continuous (e.g. 0-4320 == 3 days) timescaling for the x-axis.

# Test set for data processing (day 1 and day 2 data)
datas = ['Time(minutes) RemainingTime(minutes)',
         '420 : 5',
         '420 : 10',
         '420 : 8',
         '421 : 4',
         '421 : 9',
         '421 : 7',
         '830 : 8',
         '830 : 4',
         '340 : 3',
         '340 : 5',
         '340 : 4',
         '351 : 10',
         '351 : 7',
         '420 : 9',
         '420 : 7']


def sort_data(scr):
    raw_data = {}
    day_minute_counter = 0
    current_list = []
    day_in_minutes = (24 * 60)
    elapsed_days_min = 0  # during processing this holds a value in minutes
    processed_days = 1
    data_from_exception = {}
    count_exceptions = 0
    for row in scr:
        print row
        try:
            # The following steps take into account that elapsed time is linear for a single day.
            # Each row is searched for ":", which identifies the row as holding integers or floats.
            x_value, y_value = row.split(":")
            # print 'xy_values : %s, %s' % (x_value, y_value)
            # clip trailing whitespace from both ends
            x_value = x_value.strip(' ')
            y_value = y_value.strip(' ')
            # string > integer conversion
            x_val = int(x_value)
            y_val = int(y_value)
            # set each x-axis timepoint only once
            if day_minute_counter == 0:
                print 'Start', day_minute_counter, x_val
                day_minute_counter = x_val
            # zipping: append all y-axis datapoints that belong to a single x-axis point
            if day_minute_counter == x_val:
                print 'Append', day_minute_counter, x_val
                current_list.append(y_val)
            # add x,y-axis data to the datalist
            if day_minute_counter < x_val:
                print 'Done', day_minute_counter, x_val, current_list
                raw_data[(day_minute_counter + elapsed_days_min)] = current_list
                day_minute_counter = x_val
                # new list for the next point in the "day_minute_counter"
                current_list = []
                current_list.append(y_val)
            # correct x-axis "next-day" time difference
            if day_minute_counter > x_val:
                processed_days += 1
                print 'Next Day Marker', day_minute_counter, x_val, current_list
                raw_data[(day_minute_counter + elapsed_days_min)] = current_list
                elapsed_days_min += day_in_minutes
                # reset day_minute_counter because a day has elapsed
                day_minute_counter = 0
                print 'elapsed_day in minutes : ', elapsed_days_min
        except ValueError:
            # get axis information
            count_exceptions += 1
            data_from_exception[count_exceptions] = row
            # print 'Graph info or "none integer" information collected:\n\n%s > %s\n' % (count_exceptions, row)
    # End of datablock: add the last x,y datapoints without knowing which EOF marker is being used.
    raw_data[day_minute_counter] = current_list
    print '\nRaw Data : %s\nOther info : %s\n ' % (raw_data, data_from_exception)
    return (raw_data, processed_days, data_from_exception)


def calc_mean(scr):
    days = scr[1]
    minutes = (days * 24 * 60)
    missing_datapoints = []
    result = []
    print 'Dataset spans a total of "%s" minutes.\n' % minutes
    data = scr[0]
    for x_datapoint in range(1, minutes):
        meanwait = 0.0
        totalwait = 0
        try:
            # process data from sort_data
            # print 'datapoint', x_datapoint  # shows only the absent datapoints on the x-axis
            dataset = data[x_datapoint]
            # print 'datapoint', x_datapoint  # shows only the available datapoints on the x-axis
            total_values = len(dataset)
            for value in dataset:
                totalwait += value
            meanwait = float(totalwait) / float(total_values)
            x = x_datapoint
            y = meanwait
            result.append((x, y))
            print 'Patient mean waiting time per timepoint %s : %.03f' % (x_datapoint, meanwait)
        except Exception:
            missing_datapoints.append(x_datapoint)
            # print 'Patient mean waiting time "%s" is not available.' % x_datapoint
    return result


def main():
    # open file code here and use readlines to import data into "datas"
    #
    # datas = ...
    ct = str(datetime.datetime.now())[0:23]
    print '%s --> Collecting patient waittime data from Time Series.\n' % ct
    sorted_data = sort_data(datas)  # uses the template data from this script
    print 'Processing data to obtain mean values'
    the_result = calc_mean(sorted_data)
    print '\nProcessing Finished. Here is the result :\n\n%s' % the_result
    # create a new file and store the result, or keep processing to PDF in matplotlib


if __name__ == '__main__':
    main()

(Python) Retrieve the sunrise and sunset times from Google

Hello!
I recently went to the LACMA museum of art and stumbled upon this clock. Basically, it uses a light sensor to determine the percentage of the day that has passed: sunrise would be 0.00% and sunset would be 100%. I wanted to create an easier version of this, having a program Google the sunrise and sunset times for the day and work from there. Eventually this would all be transferred to a Raspberry Pi 3 (another problem for another day), so the code would have to be in Python. Could I maybe get some help writing it?
TL;DR version
I need a Python program that googles and returns the times of the sunset and sunrise for the day. Mind helping?
It's not pretty, but it should work; just use your coordinates as the parameters. (The data actually comes from the sunrise-sunset.org API rather than Google.) From their website: "NOTE: All times are in UTC and summer time adjustments are not included in the returned data."
import requests
from datetime import datetime
from datetime import timedelta


def get_sunrise_sunset(lat, long):
    link = "http://api.sunrise-sunset.org/json?lat=%f&lng=%f&formatted=0" % (lat, long)
    f = requests.get(link)
    data = f.json()['results']  # parse the JSON instead of slicing the raw response text
    sunrise = data['sunrise'][11:19]  # "HH:MM:SS" part of the ISO 8601 timestamp
    sunset = data['sunset'][11:19]
    print("Sunrise = %s, Sunset = %s" % (sunrise, sunset))

    s1 = sunrise
    s2 = sunset
    FMT = '%H:%M:%S'
    tdelta = datetime.strptime(s2, FMT) - datetime.strptime(s1, FMT)
    daylight = timedelta(days=0, seconds=tdelta.seconds, microseconds=tdelta.microseconds)
    print('Total daylight = %s' % daylight)

    t1 = datetime.strptime(str(daylight), '%H:%M:%S')
    t2 = datetime(1900, 1, 1)
    daylight_as_minutes = (t1 - t2).total_seconds() / 60.0
    print('Daylight in minutes = %s' % daylight_as_minutes)

    sr1 = datetime.strptime(str(sunrise), '%H:%M:%S')
    sr2 = datetime(1900, 1, 1)
    sunrise_as_minutes = (sr1 - sr2).total_seconds() / 60.0
    print('Sunrise in minutes = %s' % sunrise_as_minutes)

    ss1 = datetime.strptime(str(sunset), '%H:%M:%S')
    ss2 = datetime(1900, 1, 1)
    sunset_as_minutes = (ss1 - ss2).total_seconds() / 60.0
    print('Sunset in minutes = %s' % sunset_as_minutes)


if __name__ == '__main__':
    get_sunrise_sunset(42.9633599, -86.6680863)
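To get back to the original clock idea, the percentage of daylight elapsed follows directly from those minute values. A sketch (the variable names follow the function above, which would need to return them rather than just print them; all times in UTC, per the API note):

from datetime import datetime

def percent_of_daylight(sunrise_as_minutes, sunset_as_minutes):
    """0% at sunrise, 100% at sunset, clamped outside that window (UTC)."""
    now = datetime.utcnow()
    now_as_minutes = now.hour * 60 + now.minute + now.second / 60.0
    span = sunset_as_minutes - sunrise_as_minutes
    pct = (now_as_minutes - sunrise_as_minutes) / span * 100.0
    return max(0.0, min(100.0, pct))

# e.g. with sunrise at 06:00 (360 min) and sunset at 18:00 (1080 min):
# print(percent_of_daylight(360, 1080))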
