I am using PyEphem to calculate the location of the Sun in the sky at various times.
I have an Observer point (happens to be at Stonehenge) and can use PyEphem to calculate sunrise, sunset, and the altitude angle and azimuth (degrees from N) for the Sun at any hour of the day. Brilliant, no problem.
However, what I really need is to be able to calculate the altitude angle of the Sun from a known azimuth. So I would set the same observer point (long/lat/elev/date (just yy/mm/dd, not time)) and an azimuth for the Sun, and from this input calculate the altitude of the Sun and the time it is at that azimuth.
I had hoped I would be able to just set Sun.date and Sun.az and work backwards from those values, but alas. Any thoughts on how to approach this (and if it even is approachable) with PyEphem?
The only other option I'm seeing available is to "sneak up" on the azimuth by iterating over a sequence of times until I get within a margin of error of the azimuth I desire, but that is just gross.
thanks in advance, Dave
Astronomy software predicts the location of the Sun by taking JPL predictions of where the Earth and Sun will be, which the JPL expresses as a series of polynomials that cover specific ranges of dates. Asking “when will the Sun be at azimuth z?” is asking when three different polynomials, each varying at a different rate (one for the Sun, one for the Earth-Moon barycenter revolving around the Sun, and one for the Earth revolving around the barycenter), will happen to bring the difference between the two positions to precisely a certain angle.
And, it turns out, that problem falls into the class of “gross” math problems — or, as professionals say, “non-closed-form-solution problems.” But I like your word “gross” because it catches very well how most of us feel when we discover that much of the world has to be tackled by trial-and-error instead of just giving us an answer.
Fortunately, a vast enough swath of science is “gross” in this sense that there are standard ways of asking “when will this big complicated function reach exactly value z?” If you are able to install and try out SciPy, the increasingly popular science library for Python, you will find that it has a whole collection of routines that sneak up on solutions, each using a different tactic. The other answerer has already identified one such tactic — halving the search space with each trial — but that is generally the slowest (though in some extreme cases, the safest) approach; here are some others:
http://docs.scipy.org/doc/scipy/reference/optimize.html
Create a little function that returns “how far off” the Sun's azimuth is at a time t from the azimuth you want, where the function will finally return zero when the azimuth is exactly right, like:
    def f(t):
        observer.date = t        # assumes `observer` and `sun` are the objects you already set up
        sun.compute(observer)
        return desired_az - sun.az
Then try out one of the “root finding scalar functions” from that SciPy page. The bisect() function will, just as the other answerer suggests, keep cutting the search space in half to narrow things down. But my guess is that you'll find Newton's method to be far less “gross” and far faster — try newton() or brentq(), and see what happens!
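For concreteness, here is a rough sketch of how those pieces might fit together with brentq(); the coordinates, date, target azimuth, and bracketing times below are made-up placeholders, not values from the question:

    import ephem
    from scipy.optimize import brentq

    obs = ephem.Observer()
    obs.lat, obs.lon = '51.1789', '-1.8262'      # roughly Stonehenge (placeholder)
    sun = ephem.Sun()
    desired_az = ephem.degrees('120')            # target azimuth (stored internally in radians)

    def f(t):
        obs.date = t                             # t is a PyEphem date (a float, in days)
        sun.compute(obs)
        return float(sun.az) - desired_az        # zero when the Sun sits at desired_az

    # Bracket the answer between two times on the chosen day where f() changes sign,
    # e.g. shortly after sunrise and around local noon (placeholders again).
    t0 = float(ephem.Date('2023/06/21 04:00'))
    t1 = float(ephem.Date('2023/06/21 11:00'))
    t_hit = brentq(f, t0, t1)

    obs.date = t_hit
    sun.compute(obs)
    print(ephem.Date(t_hit), sun.alt)            # the time, and the altitude at that azimuth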
Without knowing the details of the internal calculations that PyEphem is doing I don't know how easy or difficult it would be to invert those calculations to give the result you want.
Regarding the "sneaking up on it" option, however: you could pick two starting times (e.g. sunrise and noon) where the azimuth is known to be on either side of (one greater than and one less than) the desired value. Then just use a simple "halving the interval" approach to quickly find an approximate solution.
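For example, a rough sketch of that interval halving, assuming a hypothetical helper az_at(t) that returns the Sun's azimuth (in radians) at PyEphem date t for the fixed observer:

    def find_time_for_azimuth(az_at, desired_az, t_lo, t_hi, tol=1e-6):
        # t_lo and t_hi bracket the desired azimuth (e.g. sunrise and local noon);
        # tol is in days, so 1e-6 is well under a second.
        while t_hi - t_lo > tol:
            t_mid = 0.5 * (t_lo + t_hi)
            if (az_at(t_mid) - desired_az) * (az_at(t_lo) - desired_az) > 0:
                t_lo = t_mid      # same side as the lower bound: answer is in the upper half
            else:
                t_hi = t_mid
        return 0.5 * (t_lo + t_hi)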
Related
I have recently become acquainted with orbital mechanics and am trying to do some analysis on the subject. Since I don't have subject-matter expertise, I am at a crossroads trying to decide how one would determine whether a satellite has performed a maneuver/rendezvous operation, given the historical TLE data of that satellite from which we extract the orbital elements. To drill down further, I am approaching the problem like this:
1. I take my satellite of interest and collect the historical TLE data for it.
2. Once I have the data, I extract and calculate all the orbital parameters from the TLEs.
3. From the list of orbital parameters, I choose a subset of those parameters and calculate long-term standardized anomalies for each of them.
4. Once I have the anomalies, I filter out those dates where any one parameter has an anomaly greater than 1.5 or less than -1.5 (a rough sketch of steps 3-4 follows this list).
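A minimal sketch of steps 3-4, assuming the extracted elements already sit in a pandas DataFrame indexed by epoch (the file name and column names are illustrative, not real):

    import pandas as pd

    elements = pd.read_csv('tle_elements.csv', index_col='epoch', parse_dates=True)
    subset = ['inclination', 'raan', 'arg_perigee', 'mean_motion']   # hypothetical columns

    # Long-term standardized anomalies: how many standard deviations each value
    # sits from that element's long-term mean.
    z = (elements[subset] - elements[subset].mean()) / elements[subset].std()

    # Dates where any chosen element deviates by more than 1.5 standard deviations.
    suspect_dates = z.index[(z.abs() > 1.5).any(axis=1)]
    print(suspect_dates)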
But the deal is, I am not too sure of my subset. As of now, I have Inclination, RAAN, Argument of Perigee and Longitude.
Is there any other factor that I should add to or remove from this subset in order to nail this analysis the right way? Or is there another approach altogether that I could use?
What I'm interested in is finding the days on which a satellite has performed maneuvers.
You should add the semi-major and semi-minor axis sizes (min and max altitude). Those change after any burn along the trajectory or perpendicular to it, and they decrease due to friction for orbits that are too low.
Analyzing that can hint at what kind of maneuver was performed. Also, changing those is usually done on the opposite side of the orbit, so once you find a bump in the periapsis or apoapsis, the burn most likely occurred half an orbit before reaching it.
Another thing I would look at is speed. Compute the local speed as the derivative of consecutive data points (distance/time) and compare that with what Kepler's equations predict. If they do not match, it means some kind of burn, collision, or ejection occurred, and from the difference you can also infer what was done.
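For illustration, a rough sketch of that speed check, using the vis-viva relation as one concrete form of the Keplerian prediction (the constant and helper names are my own, not from the question):

    import numpy as np

    MU_EARTH = 3.986004418e14                    # Earth's gravitational parameter, m^3/s^2

    def vis_viva_speed(r, a):
        # Speed predicted at radius r (m) on an orbit with semi-major axis a (m).
        return np.sqrt(MU_EARTH * (2.0 / r - 1.0 / a))

    def observed_speed(p0, p1, dt):
        # Finite-difference speed from two consecutive position fixes (m), dt seconds apart.
        return np.linalg.norm(np.asarray(p1) - np.asarray(p0)) / dt

    # A persistent mismatch between the two around some epoch suggests a burn
    # (or bad data) near that epoch.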
For more info see:
solving Kepler's equation
Is it possible to make realistic n-body solar system simulation in matter of size and mass?
I have a database of time-stamped points which represent a path being drawn by a user in a 2-D plane. I also have a list of points which represent the goal path. These are not timestamped. I want to find how accurate the users' drawn paths are as compared to the goal path. The parameter to define accuracy is not clear and something I'm trying to decide. I don't really care about the temporal aspect of the user drawn path. I only want to compare the two paths.
I'm doing this to do the analysis for an experiment done by a behavioral lab. This is my current algorithm.
1. Find the total distance of the user-drawn path by adding up the straight-line distances between consecutive points.
2. At every 1% of the total distance along both the user path and the goal path, find the straight-line distance between the corresponding points on the two paths.
3. Average the 100 distances together to get the overall average distance between the two paths.
4. Increase the sampling frequency if I want a more accurate number (a rough sketch follows this list).
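A rough sketch of those steps, assuming each path is an (N, 2) NumPy array of points (the names are illustrative):

    import numpy as np

    def resample_by_arclength(path, n_samples=100):
        # Return n_samples points spaced evenly along the path's cumulative length.
        path = np.asarray(path, dtype=float)
        seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
        targets = np.linspace(0.0, s[-1], n_samples)
        x = np.interp(targets, s, path[:, 0])
        y = np.interp(targets, s, path[:, 1])
        return np.column_stack([x, y])

    def mean_path_distance(user_path, goal_path, n_samples=100):
        u = resample_by_arclength(user_path, n_samples)
        g = resample_by_arclength(goal_path, n_samples)
        return np.linalg.norm(u - g, axis=1).mean()        # average gap between matched points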
I'm only looking for algorithmic help, since implementing this would be quite trivial. My issue is that I'm not sure whether I'm missing something here, nor am I sure of the correctness of this algorithm, and I wanted to run it by some experienced programmers.
I'm not a programmer by trade, but this data analysis is essential for the paper the lab is working on. I'm not sure whether there is some higher-level math I should be familiar with that would make this trivial.
I'm completely language agnostic and would appreciate any pointers to existing algorithms or novel solutions which solve this problem.
Query:
I want to estimate the trajectory of a person wearing an IMU between point a and point b. I know the exact location of point a and point b in an x,y,z space and the time it takes the person to walk between the points.
Is it possible to reconstruct the trajectory of the person moving from point a to point b using the data from an IMU and the time?
This question is too broad for SO. You could write a PhD thesis answering it, and I know people who have.
However, yes, it is theoretically possible.
That said, there are a few things you'll have to deal with:
Your system is going to discretize time on some level. The result is that your estimate of position will be non-smooth. Increasing sampling rates is one way to address this, but this frequently increases the noise of the measurement.
Possible paths are non-unique. Knowing the time it takes to travel from a to b constrains the information from the IMU slightly, but you are still left with an infinite family of possible routes between the two. Since you mention that you're considering a person walking between two points with z-components, perhaps you can constrain the route using knowledge of topography and roads?
IMUs function by integrating accelerations to velocities and velocities to positions. If the accelerations have measurement errors, and they always do, then the error in your estimate of the position will grow over time. The longer you run the system for, the more the results will diverge. However, if you're able to use roads/topography as a constraint, you may be able to restart the integration from known points in space; that is, if you can detect 90 degree turns on a street grid, each turn gives you the opportunity to tie the integrator back to a feasible initial condition.
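As a toy illustration of that double integration (orientation handling and gravity removal are omitted, and all names are illustrative):

    import numpy as np

    def dead_reckon(accels, dt, p0, v0):
        # accels: (N, 3) accelerations already rotated into the world frame, sampled every dt seconds.
        positions = [np.asarray(p0, dtype=float)]
        v = np.asarray(v0, dtype=float)
        for a in np.asarray(accels, dtype=float):
            v = v + a * dt                               # acceleration -> velocity
            positions.append(positions[-1] + v * dt)     # velocity -> position
        return np.array(positions)

    # Any constant bias in `accels` grows quadratically in position error over time,
    # which is why re-anchoring the integrator at known points (corners, point b) matters so much.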
Given the above, perhaps the most important question you have to ask yourself is how much error you can tolerate in your path reconstruction. Low-error estimates are going to require better (i.e. more expensive) sensors, higher sampling rates, and higher-order integrators.
I am trying to solve a problem very similar to the one discussed in this post
I have a broadband signal, which contains a component with time-varying frequency. I need to monitor the phase of this component over time. I am able to track the frequency shifts by (a somewhat brute force method of) peak tracking in the spectrogram. I need to "clean up" the signal around this time varying peak to extract the Hilbert phase (or, alternatively, I need a method of tracking the phase that does not involve the Hilbert transform).
To summarize that previous post: varying the coefficients of a FIR/IIR filter in time causes bad things to happen (it does not just shift the passband, it also completely confuses the filter state in ways that cause surprising transients). However, there probably is some way to adjust filter coefficients in time (probably by jointly modifying the filter coefficients and the filter state in some intelligent way). This is beyond my expertise, but I'd be open to any solutions.
There are two classes of solutions that seem plausible: one is to use a resonator filter (basically a damped harmonic oscillator driven by the signal) with a time-varying frequency. This model is simple enough to avoid surprising filter transients. I will try this, but resonators have very poor attenuation in the stop band (if they can even be said to have a stop band?). This makes me nervous, as I'm not 100% sure how the resonator filters will behave.
The other suggestion was to use a filter bank and smoothly interpolate between the various band-pass filtered signals according to the frequency. This approach seems appealing, but I suspect it has some hidden caveats. I imagine that linearly mixing two band-pass filtered signals might not always do what you would expect and might cause strange artifacts. But this is not my area of expertise, so if mixing over a filter bank is considered a safe solution (one that has been analyzed and published before), I would use it.
Another potential class of solutions occurs to me, which is to just take the phase from the frequency peak in a sliding short-time Fourier transform (could be windowed, multitaper, etc). If anyone knows any prior literature on this I'd be very interested. Related, would be to take the phase at the frequency power peak from a sliding complex Morlet wavelet transform over the band of interest.
So, I guess, basically I have three classes of solutions in mind.
1. Resonator filters with time-varying frequency.
2. Using a filter bank, possibly with mixing?
3. Pulling phase from an STFT or CWT (these can be considered a subset of the filter-bank approach).
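For what it's worth, a minimal sketch of option 3 using SciPy's STFT; the parameters are placeholders, and a real version would restrict the peak search to the band of interest:

    import numpy as np
    from scipy.signal import stft

    def peak_phase_track(x, fs, nperseg=1024, noverlap=768):
        # Short-time Fourier transform, then read the frequency and phase at the
        # strongest bin of each frame.
        f, t, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
        peak_bins = np.abs(Z).argmax(axis=0)               # strongest bin per frame
        freqs = f[peak_bins]                               # tracked frequency over time
        phases = np.angle(Z[peak_bins, np.arange(Z.shape[1])])
        return t, freqs, phases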
My suspicion is that in (2) and (3) surprising things will happen to the phase from time to time, and that in (1) we may not be able to reject as much noise as we'd like. It's not clear to me that this problem even has a perfect solution (uncertainty principle in time-frequency resolution?).
Anyway, if anyone has solved this before, and... even better, if anyone knows any papers that sound directly applicable here, I would be grateful.
Not sure if this will help, but googling "monitor phase of time varying component" resulted in this: Link
Hope that helps.
I have some sampled (univariate) data - but the clock driving the sampling process is inaccurate - resulting in a random slip of (less than) 1 sample every 30. A more accurate clock at approximately 1/30 of the frequency provides reliable samples for the same data ... allowing me to establish a good estimate of the clock drift.
I am looking to interpolate the sampled data to correct for this so that I 'fit' the high-frequency data to the low-frequency data. I need to do this in 'real time', with no more than the latency of a few low-frequency samples.
I recognise that there is a wide range of interpolation algorithms - and, among those I've considered, a spline based approach looks most promising for this data.
I'm working in Python - and have found the scipy.interpolate package - though I could see no obvious way to use it to 'stretch' n samples to correct a small timing error. Am I overlooking something?
I am interested in pointers to either a suitable published algorithm, or - ideally - a Python library function to achieve this sort of transform. Is this supported by SciPy (or anything else)?
UPDATE...
I'm beginning to realise that what at first seemed a trivial problem isn't as straightforward as I first thought. I am no longer convinced that naive use of splines will suffice. I've also realised that my problem can be better described without reference to 'clock drift'... like this:
A single random variable is sampled at two different frequencies, one low and one high, with no common divisor, e.g. 5 Hz and 144 Hz. If we assume sample 0 is identical at both sample rates, sample 1 of the 5 Hz series falls between samples 28 and 29 of the 144 Hz series. I want to construct a new series, at 720 Hz say, that fits all the known data points "as smoothly as possible".
I had hoped to find an 'out of the box' solution.
Before you can ask the programming question, it seems to me you need to investigate a more fundamental scientific one.
Before you can start picking out particular equations to fit badfastclock to goodslowclock, you should investigate the nature of the drift. Let both clocks run a while, and look at their points together. Is badfastclock bad because it drifts linearly away from real time? If so, a simple quadratic equation should fit badfastclock to goodslowclock, just as a quadratic equation describes the linear acceleration of an object in gravity; i.e., if badfastclock is accelerating linearly away from real time, you can deterministically shift badfastclock toward real time. However, if you find that badfastclock is bad because it is jumping around, then smooth curves -- even complex smooth curves like splines -- won't fit. You must understand the data before trying to manipulate it.
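As a rough sketch of that diagnostic, assuming you have paired timestamps of the same events from each clock (all names are illustrative):

    import numpy as np

    def drift_fit(t_fast, t_good, deg=2):
        # Fit t_good ~ p(t_fast) with a low-order polynomial and return the residuals.
        coeffs = np.polyfit(t_fast, t_good, deg)
        residual = t_good - np.polyval(coeffs, t_fast)
        return coeffs, residual

    # Small, structureless residuals suggest a smooth, correctable drift;
    # large or jumpy residuals suggest the fast clock misbehaves non-smoothly.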
Based on your updated question, if the data is smooth in time, just place all the samples in a single time trace and interpolate on that sparse (time) grid.
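A minimal offline sketch of that suggestion using SciPy; a real-time version would refit over a short sliding window. The rates come from the updated question, the array names are illustrative, and the timestamps are assumed to already sit on a common, corrected time base:

    import numpy as np
    from scipy.interpolate import CubicSpline

    def resample_720(t_fast, x_fast, t_slow, x_slow, f_out=720.0):
        # Merge the ~144 Hz and ~5 Hz samples into one time-ordered trace ...
        t = np.concatenate([t_fast, t_slow])
        x = np.concatenate([x_fast, x_slow])
        order = np.argsort(t)
        t, x = t[order], x[order]
        t, idx = np.unique(t, return_index=True)           # CubicSpline needs strictly increasing times
        x = x[idx]
        # ... fit a smooth interpolant through all known points ...
        spline = CubicSpline(t, x)
        # ... and evaluate it on a regular 720 Hz grid.
        t_out = np.arange(t[0], t[-1], 1.0 / f_out)
        return t_out, spline(t_out)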