I'm trying to solve a system of differential equations to find the trajectory of the Sun and Jupiter, but instead of a nice trajectory I only get a few points.
Could you help? ("Soleil" means Sun.)
Here's my code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from mc_deriv import deriv
start = 0
end = 14*365
nbpas = end/10
t = np.linspace(start,end,nbpas)
M = M_Soleil + M_Jupiter
x0 = x_Jupiter - x_Soleil
y0 = y_Jupiter - y_Soleil
vx0 = vx_Jupiter - vx_Soleil
vy0 = vy_Jupiter - vy_Soleil
syst_CI = [x0,y0,vx0,vy0]
Sols=odeint(deriv,syst_CI,t,args=(M,))
x = Sols[:, 0]
y = Sols[:, 1]
vx = Sols[:, 2]
vy = Sols[:, 3]
The initialisation
x_Soleil = -7.139143380212696e-03 # (UA)
y_Soleil = -2.792019770161695e-03 # (UA)
x_Jupiter = +3.996321311604079e+00 # (UA)
y_Jupiter = +2.932561211517850e+00 # (UA)
vx_Soleil = -7.139143380212696e-03 # (UA*j^-1)
vy_Soleil = -2.792019770161695e-03 # (UA*j^-1)
vx_Jupiter = +3.996321311604079e+00 # (UA*j^-1)
vy_Jupiter = +2.932561211517850e+00 # (UA*j^-1)
M_Soleil = 2e30 # masse Soleil (kg)
M_Jupiter = 1.9e27 # masse Jupiter (kg)
r_Soleil = 696e6 # rayon Soleil (m)
And the external function (imported from mc_deriv):
def deriv(syst, t, M):
    G = 6.67e-11
    x = syst[0]
    y = syst[1]
    vx = syst[2]
    vy = syst[3]
    dxdt = vx
    dydt = vy
    dvxdt = -(G*M*x)/((x**2+y**2)**(3/2))
    dvydt = -(G*M*y)/((x**2+y**2)**(3/2))
    return dxdt, dydt, dvxdt, dvydt
The plot
plt.figure(figsize=(7, 5))
plt.title("Trajectoires Soleil-Jupiter")
#plt.xlabel("UA)")
#plt.ylabel("UA)")
plt.plot(x, y, '-', color="red")
plt.show()
The result of the plot:
Eureka, it works!!!!
Currently I see the following problems in your code that render your observations unreproducible (apart from the missing reference values):
In the initial data, the length unit is the astronomical unit and the time unit is one day. The unit of the gravitational constant is m^3 kg^-1 s^-2, so to combine them in one formula you need to convert AU into m (the factor is about 1.5e11, whose cube is to be divided out) and days into seconds (one day is 24*3600 s, whose square is to be multiplied in).
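As a minimal sketch of that conversion (the constants are the ones from the question; the AU value is rounded):
G_SI = 6.67e-11                  # m^3 kg^-1 s^-2
AU   = 1.496e11                  # metres per astronomical unit
DAY  = 24 * 3600.0               # seconds per day
G_AU = G_SI * DAY**2 / AU**3     # now in AU^3 kg^-1 day^-2
GM   = G_AU * (2e30 + 1.9e27)    # ~3.0e-4 AU^3/day^2 for the Sun-Jupiter pair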
The integration time interval should also be counted in years; at the moment you seem to think in days, a third unit, without appropriate conversion factors. [solved: un jour = one day]
From the division in the construction of the time nodes it appears as if you are using Python 2, where the exponent 3/2 evaluates to 1 by integer division; you can directly use 1.5 in the exponent, which is an exact value in the binary floating-point format. [actually Python 3, so it is the first division, end/10, that should be made explicitly integer]
In copying the initial data you made a copy-paste error: the initial positions and velocities have the same numbers, while the real numbers should form perpendicular vectors. [not solved in code; the image has correct velocities] Looking for online data that fits your position, the NASA HORIZONS system gives me for 2011-Nov-11 04:00 the Jupiter positions and velocities as
pos: 3.996662712108880E+00, 2.938301820497121E+00, -1.017177623308866E-01,
vel: -4.560191659347578E-03, 6.440946682361135E-03, 7.529386668190383E-05
The normalization to a center-of-gravity frame needs to apply conservation of momentum; the mass of Jupiter is large enough that just subtracting the velocities might give physically wrong results. [not resolved; the initial data should already be barycentric, so no corrections should be necessary]
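If the initial data are indeed barycentric, the individual trajectories can be recovered from the relative solution with the mass ratios alone; a sketch, reusing the names from the question (x, y are the Jupiter-minus-Sun coordinates returned by odeint):
x_S, y_S = -(M_Jupiter/M) * x, -(M_Jupiter/M) * y   # Sun about the barycentre
x_J, y_J = +(M_Soleil/M)  * x, +(M_Soleil/M)  * y   # Jupiter about the barycentre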
The varying accuracy of the physical constants will also introduce errors that lead away from the reference positions. The most "dirty" constants visible at the moment are the gravitational constant and the masses, followed by the uncertainty in the type of year. Only the first two digits of any (correctly) computed result are reliable.
This link provides code for an autocorrelation-based pitch detection algorithm. I am using it to detect pitches in simple guitar melodies.
In general, it produces very good results. For example, for the melody C4, C#4, D4, D#4, E4 it outputs:
262.743653536
272.144441273
290.826273006
310.431336809
327.094621169
which corresponds to the correct notes.
However, in some cases like this audio file (E4, F4, F#4, G4, G#4, A4, A#4, B4) it produces errors:
325.861452246
13381.6439242
367.518651703
391.479384923
414.604661221
218.345286173
466.503751322
244.994090035
More specifically, there are three errors here: 13381 Hz is wrongly detected instead of F4 (~350 Hz), which is a weird error, while 218 Hz instead of A4 (440 Hz) and 244 Hz instead of B4 (~493 Hz) are octave errors.
I assume the two kinds of error are caused by something different? Here is the code:
slices = segment_signal(y, sr)
for segment in slices:
    pitch = freq_from_autocorr(segment, sr)
    print(pitch)

def segment_signal(y, sr, onset_frames=None, offset=0.1):
    if onset_frames is None:
        onset_frames = remove_dense_onsets(librosa.onset.onset_detect(y=y, sr=sr))
    offset_samples = int(librosa.time_to_samples(offset, sr))
    print(onset_frames)
    slices = np.array([y[i : i + offset_samples] for i
                       in librosa.frames_to_samples(onset_frames)])
    return slices
You can see the freq_from_autocorr function in the first link above.
The only thing that I have changed is this line:
corr = corr[len(corr)/2:]
Which I have replaced with:
corr = corr[int(len(corr)/2):]
UPDATE:
I noticed that the smaller the offset I use (i.e. the smaller the signal segment used to detect each pitch), the more high-frequency (10000+ Hz) errors I get.
Specifically, I noticed that what goes differently in those cases (10000+ Hz) is the calculation of the i_peak value. In cases with no error it is in the range 50-150, whereas in the error cases it is 3-5.
The autocorrelation function in the code snippet that you linked is not particularly robust. In order to get the correct result, it needs to locate the first peak on the left hand side of the autocorrelation curve. The method that the other developer used (calling the numpy.argmax() function) does not always find the correct value.
I've implemented a slightly more robust version, using the peakutils package. I don't promise that it's perfectly robust either, but in any case it achieves a better result than the version of the freq_from_autocorr() function that you were previously using.
My example solution is listed below:
import librosa
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import fftconvolve
from pprint import pprint
import peakutils
def freq_from_autocorr(signal, fs):
    # Calculate autocorrelation (same thing as convolution, but with one input
    # reversed in time), and throw away the negative lags
    signal -= np.mean(signal)  # Remove DC offset
    corr = fftconvolve(signal, signal[::-1], mode='full')
    corr = corr[len(corr)//2:]
    # Find the first peak on the left
    i_peak = peakutils.indexes(corr, thres=0.8, min_dist=5)[0]
    i_interp = parabolic(corr, i_peak)[0]
    return fs / i_interp, corr, i_interp
def parabolic(f, x):
    """
    Quadratic interpolation for estimating the true position of an
    inter-sample maximum when nearby samples are known.

    f is a vector and x is an index for that vector.

    Returns (vx, vy), the coordinates of the vertex of a parabola that goes
    through point x and its two neighbors.

    Example:
    Defining a vector f with a local maximum at index 3 (= 6), find local
    maximum if points 2, 3, and 4 actually defined a parabola.

    In [3]: f = [2, 3, 1, 6, 4, 2, 3, 1]
    In [4]: parabolic(f, argmax(f))
    Out[4]: (3.2142857142857144, 6.1607142857142856)
    """
    xv = 1/2. * (f[x-1] - f[x+1]) / (f[x-1] - 2 * f[x] + f[x+1]) + x
    yv = f[x] - 1/4. * (f[x-1] - f[x+1]) * (xv - x)
    return (xv, yv)
# Time window after initial onset (in units of seconds)
window = 0.1
# Open the file and obtain the sampling rate
y, sr = librosa.core.load("./Vocaroo_s1A26VqpKgT0.mp3")
idx = np.arange(len(y))
# Set the window size in terms of number of samples
winsamp = int(window * sr)
# Calculate the onset frames in the usual way
onset_frames = librosa.onset.onset_detect(y=y, sr=sr)
onstm = librosa.frames_to_time(onset_frames, sr=sr)
fqlist = [] # List of estimated frequencies, one per note
crlist = [] # List of autocorrelation arrays, one array per note
iplist = [] # List of peak interpolated peak indices, one per note
for tm in onstm:
    startidx = int(tm * sr)
    freq, corr, ip = freq_from_autocorr(y[startidx:startidx+winsamp], sr)
    fqlist.append(freq)
    crlist.append(corr)
    iplist.append(ip)
pprint(fqlist)
# Choose which notes to plot (it's set to show all 8 notes in this case)
plidx = [0, 1, 2, 3, 4, 5, 6, 7]
# Plot amplitude curves of all notes in the plidx list
fgwin = plt.figure(figsize=[8, 10])
fgwin.subplots_adjust(bottom=0.0, top=0.98, hspace=0.3)
axwin = []
ii = 1
for tm in onstm[plidx]:
    axwin.append(fgwin.add_subplot(len(plidx)+1, 1, ii))
    startidx = int(tm * sr)
    axwin[-1].plot(np.arange(startidx, startidx+winsamp), y[startidx:startidx+winsamp])
    ii += 1
axwin[-1].set_xlabel('Sample ID Number', fontsize=18)
fgwin.show()
# Plot autocorrelation function of all notes in the plidx list
fgcorr = plt.figure(figsize=[8,10])
fgcorr.subplots_adjust(bottom=0.0, top=0.98, hspace=0.3)
axcorr = []
ii = 1
for cr, ip in zip([crlist[ii] for ii in plidx], [iplist[ij] for ij in plidx]):
    if ii == 1:
        shax = None
    else:
        shax = axcorr[0]
    axcorr.append(fgcorr.add_subplot(len(plidx)+1, 1, ii, sharex=shax))
    axcorr[-1].plot(np.arange(500), cr[0:500])
    # Plot the location of the leftmost peak
    axcorr[-1].axvline(ip, color='r')
    ii += 1
axcorr[-1].set_xlabel('Time Lag Index (Zoomed)', fontsize=18)
fgcorr.show()
The printed output looks like:
In [1]: %run autocorr.py
[325.81996740236065,
346.43374761017725,
367.12435233192753,
390.17291696559079,
412.9358117076161,
436.04054933498134,
465.38986619237039,
490.34120132405866]
The first figure produced by my code sample depicts the amplitude curves for the next 0.1 seconds following each detected onset time:
The second figure produced by the code shows the autocorrelation curves, as computed inside of the freq_from_autocorr() function. The vertical red lines depict the location of the first peak on the left for each curve, as estimated by the peakutils package. The method used by the other developer was getting incorrect results for some of these red lines; that's why his version of that function was occasionally returning the wrong frequencies.
My suggestion would be to test the revised version of the freq_from_autocorr() function on other recordings, see if you can find more challenging examples where even the improved version still gives incorrect results, and then get creative and try to develop an even more robust peak finding algorithm that never, ever mis-fires.
The autocorrelation method is not always right. You may want to implement a more sophisticated method like YIN:
http://audition.ens.fr/adc/pdf/2002_JASA_YIN.pdf
or MPM:
http://www.cs.otago.ac.nz/tartini/papers/A_Smarter_Way_to_Find_Pitch.pdf
Both of the above papers are good reads.
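For a flavour of what YIN does differently, here is a minimal sketch of its first two steps, the difference function and its cumulative-mean normalization; the names are mine, and a full implementation adds an absolute threshold and parabolic interpolation on top of this:
import numpy as np

def yin_cmndf(x, max_lag):
    # Difference function: d(tau) = sum over n of (x[n] - x[n+tau])^2
    d = np.array([np.sum((x[:-tau] - x[tau:])**2) for tau in range(1, max_lag)])
    # Cumulative-mean-normalized difference; suppresses the trivial dip at lag 0
    cmndf = d * np.arange(1, max_lag) / np.cumsum(d)
    return cmndf
The lag of the first dip below a small threshold (the paper suggests around 0.1) estimates the period, and fs / lag the pitch.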
I am aware of scipy.integrate.solve_bvp, but it requires that you interpolate your variables, which I do not want to do.
I have a boundary value problem of the following form:
y1'(x) = -c1*f1(x)*f2(x)*y2(x) - f3(x)
y2'(x) = f4(x)*y1(x) + f1(x)*y2(x)
y1(x=0)=0, y2(x=1)=0
I have values for x=[0, 0.0001, 0.025, 0.3, ... 0.9999999, 1] on a non-uniform grid and values for all of the variables/functions at only those values of x.
How can I solve this BVP?
This is a new function, and I don't have it on my scipy version (0.17), but I found the source in scipy/scipy/integrate/_bvp.py (github).
The relevant pull request is https://github.com/scipy/scipy/pull/6025, last April.
It is based on a paper and MATLAB implementation,
J. Kierzenka, L. F. Shampine, "A BVP Solver Based on Residual Control and the MATLAB PSE", ACM Trans. Math. Softw., Vol. 27, No. 3, pp. 299-316, 2001.
The x mesh handling appears to be:
while True:
    ....
    solve_newton
    ....
    insert_1, = np.nonzero((rms_res > tol) & (rms_res < 100 * tol))
    insert_2, = np.nonzero(rms_res >= 100 * tol)
    nodes_added = insert_1.shape[0] + 2 * insert_2.shape[0]

    if m + nodes_added > max_nodes:
        status = 1
        if verbose == 2:
            nodes_added = "({})".format(nodes_added)
            print_iteration_progress(iteration, max_rms_res, m,
                                     nodes_added)
    ...
    if nodes_added > 0:
        x = modify_mesh(x, insert_1, insert_2)
        h = np.diff(x)
        y = sol(x)
where modify_mesh adds nodes to x based on:
insert_1 : ndarray
    Intervals to each insert 1 new node in the middle.
insert_2 : ndarray
    Intervals to each insert 2 new nodes, such that divide an interval
    into 3 equal parts.
From this I deduce that:
- you can track the addition of nodes with the verbose parameter (see the sketch at the end of this answer);
- nodes are added but not removed, so the output mesh should include all of your input points;
- I assume nodes are added to improve resolution in certain segments of the problem.
This is based on reading the code, and not verified with test code. You may be the only person to be asking about this function on SO, and one of the few to have actually used it.
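As a minimal sketch of watching the mesh refinement, here is a toy linear BVP with the same boundary structure as the question; the constant coefficients are placeholders for your f1..f4, not the asker's actual functions:
import numpy as np
from scipy.integrate import solve_bvp

def rhs(x, y):
    # Stands in for y1' = -c1*f1*f2*y2 - f3, y2' = f4*y1 + f1*y2
    return np.vstack([-0.5 * y[1] - 1.0,
                      0.3 * y[0] + 0.2 * y[1]])

def bc(ya, yb):
    # y1(x=0) = 0, y2(x=1) = 0
    return np.array([ya[0], yb[1]])

x = np.array([0, 0.0001, 0.025, 0.3, 0.9, 1.0])   # non-uniform initial mesh
y_guess = np.zeros((2, x.size))
sol = solve_bvp(rhs, bc, x, y_guess, verbose=2)   # verbose=2 prints nodes added
print(sol.x.size, "nodes in the final mesh")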
I am trying to solve a simple example with the dopri5 integrator in scipy.integrate.ode. As the documentation states
This is an explicit runge-kutta method of order (4)5 due to Dormand & Prince (with stepsize control and dense output).
this should work. So here is my example:
import numpy as np
from scipy.integrate import ode
import matplotlib.pyplot as plt
def MassSpring_with_force(t, state):
    """Simple 1DOF dynamics model: m ddx(t) + k x(t) = f(t)"""
    # unpack the state vector
    x = state[0]
    xd = state[1]
    # these are our constants
    k = 2.5   # Newtons per metre
    m = 1.5   # Kilograms
    # force
    f = force(t)
    # compute acceleration xdd
    xdd = (-k*x + f) / m
    # return the two state derivatives
    return [xd, xdd]

def force(t):
    """Excitation force"""
    f0 = 1                     # force amplitude [N]
    freq = 20                  # frequency [Hz]
    omega = 2 * np.pi * freq   # angular frequency [rad/s]
    return f0 * np.sin(omega*t)
# Time range
t_start = 0
t_final = 1
# Main program
state_ode_f = ode(MassSpring_with_force)
state_ode_f.set_integrator('dopri5', rtol=1e-6, nsteps=500,
first_step=1e-6, max_step=1e-3)
state2 = [0.0, 0.0] # initial conditions
state_ode_f.set_initial_value(state2, 0)
sol = np.array([[t_start, state2[0], state2[1]]], dtype=float)
print("Time\t\t Timestep\t dx\t\t ddx\t\t state_ode_f.successful()")
while state_ode_f.t < t_final:
    state_ode_f.integrate(t_final, step=True)
    sol = np.append(sol, [[state_ode_f.t, state_ode_f.y[0], state_ode_f.y[1]]], axis=0)
    print("{0:0.8f}\t {1:0.4e} \t{2:10.3e}\t {3:0.3e}\t {4}".format(
        state_ode_f.t, sol[-1, 0] - sol[-2, 0], state_ode_f.y[0], state_ode_f.y[1],
        state_ode_f.successful()))
The result I get is:
Time Timestep dx ddx state_ode_f.successful()
0.49763822 4.9764e-01 2.475e-03 -8.258e-04 False
0.99863822 5.0100e-01 3.955e-03 -3.754e-03 False
1.00000000 1.3618e-03 3.950e-03 -3.840e-03 False
with a warning:
c:\python34\lib\site-packages\scipy\integrate\_ode.py:1018: UserWarning: dopri5: larger nmax is needed
  self.messages.get(idid, 'Unexpected idid=%s' % idid))
The result is incorrect. If I run the same code with the vode integrator, I get the expected result.
Edit
A similar issue is described here:
Using adaptive step sizes with scipy.integrate.ode
The suggested solution recommends setting nsteps=1, which solves the ODE correctly and with step-size control. However, the integrator then reports state_ode_f.successful() as False.
No, there is nothing wrong. You are telling the integrator to perform an integration step to t_final and it performs that step. Internal steps of the integrator are not reported.
The sensible thing to do is to give the desired sampling points as input to the algorithm: set for example dt=0.1 and use
state_ode_f.integrate( min(state_ode_f.t+dt, t_final) )
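A sketch of that loop, reusing the names from the question (the value of dt is an arbitrary choice):
dt = 0.1   # desired output sampling interval
while state_ode_f.successful() and state_ode_f.t < t_final:
    state_ode_f.integrate(min(state_ode_f.t + dt, t_final))
    sol = np.append(sol, [[state_ode_f.t, state_ode_f.y[0], state_ode_f.y[1]]], axis=0)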
There is no single-step method in dopri5; only vode has it defined, see the source code https://github.com/scipy/scipy/blob/v0.14.0/scipy/integrate/_ode.py#L376. This could account for the observed differences.
As you found in Using adaptive step sizes with scipy.integrate.ode, one can force single-step behavior by setting the iteration bound nsteps=1. This will produce a warning every time, so one has to suppress these specific warnings to see a sensible result.
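For example, matching the warning text quoted in the question:
import warnings
# Silence only the dopri5 step-limit warning that nsteps=1 triggers on purpose
warnings.filterwarnings("ignore", message="dopri5: larger nmax is needed")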
You should not use a parameter (which is constant over the integration interval) for a time-dependent force; evaluate f = force(t) inside MassSpring_with_force instead. Possibly you could pass the function handle of force as a parameter.
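A sketch of that variant; the extra argument is supplied through set_f_params, a method the ode class provides:
def MassSpring_with_force(t, state, force):
    k, m = 2.5, 1.5
    x, xd = state
    return [xd, (-k*x + force(t)) / m]   # force evaluated at the current time

state_ode_f = ode(MassSpring_with_force)
state_ode_f.set_f_params(force)          # pass the function handle, not a constant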
I am using the standard differential equation for SHM, a = -w^2*x, for the above simulation. I'm using Python with odeint as the solver. Despite editing it several times, I keep getting the output as a straight line instead of a sinusoidal curve. The code is:
from scipy.integrate import odeint
from pylab import *
k = 80 #Spring Constant
m = 8 #mass of block
omega = sqrt(k/m) # angular frequency
def deriv(x, t):
    return array([x[1], (-1)*(k/m)*x[0]])
t = linspace(0,3.62,100)
xinit = array([0,0])
x = odeint(deriv,xinit,t)
acc_mass = zeros(t.shape[0])
for q in range(0, t.shape[0]):
    acc_mass[q] = (-1)*(omega**2)*x[q][0]
f, springer = subplots(3, sharex = True)
springer[0].plot(t,x[:,0],'r')
springer[0].set_title('Position Variation')
springer[1].plot(t,x[:,1],'b')
springer[1].set_title('Velocity Variation')
springer[2].plot(t,acc_mass,'g')
springer[2].set_title('Acceleration Variation')
As pointed out by Warren Weckesser, the code is correct, but since the initial conditions are given as 0 for both the displacement and the velocity, the output is also identically 0. Hence, on his advice, I changed the initial conditions and got the required output, a sinusoidal curve.
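For example, a nonzero initial displacement (0.2 here is an arbitrary choice) produces the expected oscillation:
xinit = array([0.2, 0])   # start 0.2 m from equilibrium, at rest
x = odeint(deriv, xinit, t)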
Here is a complete example of SHM using odeint:
http://nbviewer.ipython.org/gist/dpsanders/d417c1ffbb76f13f678c#2D-equations
I have a TOF spectrum and I would like to implement an algorithm using python (numpy) that finds all the maxima of the spectrum and returns the corresponding x values.
I looked online and found the algorithm reported below.
The assumption here is that near a maximum the difference between the value before and the value at the maximum is bigger than a number DELTA. The problem is that my spectrum is composed of evenly distributed points, even near the maximum, so that DELTA is never exceeded and the function peakdet returns an empty array.
Do you have any idea how to overcome this problem? I would also really appreciate comments that help me understand the code better, since I am quite new to Python.
Thanks!
import sys
from numpy import NaN, Inf, arange, isscalar, asarray, array
def peakdet(v, delta, x=None):
    maxtab = []
    mintab = []
    if x is None:
        x = arange(len(v))
    v = asarray(v)
    if len(v) != len(x):
        sys.exit('Input vectors v and x must have same length')
    if not isscalar(delta):
        sys.exit('Input argument delta must be a scalar')
    if delta <= 0:
        sys.exit('Input argument delta must be positive')
    mn, mx = Inf, -Inf
    mnpos, mxpos = NaN, NaN
    lookformax = True
    for i in arange(len(v)):
        this = v[i]
        if this > mx:
            mx = this
            mxpos = x[i]
        if this < mn:
            mn = this
            mnpos = x[i]
        if lookformax:
            if this < mx-delta:
                maxtab.append((mxpos, mx))
                mn = this
                mnpos = x[i]
                lookformax = False
        else:
            if this > mn+delta:
                mintab.append((mnpos, mn))
                mx = this
                mxpos = x[i]
                lookformax = True
    return array(maxtab), array(mintab)
Below is shown part of the spectrum. I actually have more peaks than those shown here.
This, I think, could work as a starting point. I'm not a signal-processing expert, but I tried this on a generated signal Y that looks quite like yours, and on one with much more noise:
from scipy.signal import convolve
import numpy as np
from matplotlib import pyplot as plt
#Obtaining derivative
kernel = [1, 0, -1]
dY = convolve(Y, kernel, 'valid')
#Checking for sign-flipping
S = np.sign(dY)
ddS = convolve(S, kernel, 'valid')
#These candidates are basically all negative slope positions
#Add one since using 'valid' shrinks the arrays
candidates = np.where(dY < 0)[0] + (len(kernel) - 1)
#Here they are filtered on actually being the final such position in a run of
#negative slopes
peaks = sorted(set(candidates).intersection(np.where(ddS == 2)[0] + 1))
plt.plot(Y)
#If you need a simple filter on peak size you could use:
alpha = -0.0025
peaks = np.array(peaks)[Y[peaks] < alpha]
plt.scatter(peaks, Y[peaks], marker='x', color='g', s=40)
The sample outcomes:
For the noisy one, I filtered peaks with alpha:
If the alpha needs more sophistication, you could try dynamically setting alpha from the peaks discovered, using e.g. assumptions about them being a mixed Gaussian (my favourite being the Otsu threshold, which exists in cv and skimage) or some sort of clustering (k-means could work).
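A sketch of the Otsu variant, assuming a recent scikit-image and that peaks still holds the unfiltered candidate indices from above (threshold_otsu accepts any array, not just images):
from skimage.filters import threshold_otsu
depths = -Y[np.array(peaks)]      # candidate peak sizes (the peaks are dips)
alpha = -threshold_otsu(depths)   # data-driven split between peaks and noise
peaks = np.array(peaks)[Y[np.array(peaks)] < alpha]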
And for reference, this I used to generate the signal:
Y = np.zeros(1000)

def peaker(Y, alpha=0.01, df=2, loc=-0.005, size=-.0015, threshold=0.001, decay=0.5):
    peaking = False
    for i, v in enumerate(Y):
        if not peaking:
            peaking = np.random.random() < alpha
            if peaking:
                Y[i] = loc + size * np.random.chisquare(df=2)
                continue
        elif Y[i - 1] < threshold:
            peaking = False
        if i > 0:
            Y[i] = Y[i - 1] * decay

peaker(Y)
EDIT: Support for degrading base-line
I simulated a slanting base-line by doing this:
Z = np.log2(np.arange(Y.size) + 100) * 0.001
Y = Y + Z[::-1] - Z[-1]
Then to detect with a fixed alpha (note that I changed sign on alpha):
from scipy.signal import medfilt
alpha = 0.0025
Ybase = medfilt(Y, 51) # 51 should be large in comparison to your peak X-axis lengths and an odd number.
peaks = np.array(peaks)[Ybase[peaks] - Y[peaks] > alpha]
Resulting in the following outcome (the base-line is plotted as dashed black line):
EDIT 2: Simplification and a comment
I simplified the code to use one kernel for both convolves, as #skymandr commented. This also removed the magic number in adjusting for the shrinkage, so that any size of kernel should do.
As for the choice of "valid" as the option to convolve: it would probably have worked just as well with "same", but I chose "valid" so I didn't have to think about the edge conditions and whether the algorithm could detect spurious peaks there.
As of SciPy version 1.1, you can also use find_peaks:
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import find_peaks
np.random.seed(0)
Y = np.zeros(1000)
# insert #deinonychusaur's peaker function here
peaker(Y)
# make data noisy
Y = Y + 10e-4 * np.random.randn(len(Y))
# find_peaks gets the maxima, so we multiply our signal by -1
Y *= -1
# get the actual peaks
peaks, _ = find_peaks(Y, height=0.002)
# multiply back for plotting purposes
Y *= -1
plt.plot(Y)
plt.plot(peaks, Y[peaks], "x")
plt.show()
This will plot (note that we use height=0.002 which will only find peaks higher than 0.002):
In addition to height, we can also set the minimal distance between two peaks. If you use distance=100, the plot then looks as follows:
You can use
peaks, _ = find_peaks(Y, height=0.002, distance=100)
in the code above.
After looking at the answers and suggestions I decided to offer a solution I often use, because it is straightforward and easy to tweak.
It uses a sliding window and counts how many times a local peak appears as a maximum as the window shifts along the x-axis. As #DrV suggested, no universal definition of "local maximum" exists, meaning that some tuning parameters are unavoidable. This function uses "window size" and "frequency" (the peak_threshold argument) to fine-tune the outcome. Window size is measured in the number of data points of the independent variable (x), and frequency controls how sensitive the peak detection should be (also expressed as a number of data points; lower values of frequency produce more peaks and vice versa). The main function is here:
def peak_finder(x0, y0, window_size, peak_threshold):
    # extend x, y using window size
    y = numpy.concatenate([y0, numpy.repeat(y0[-1], window_size)])
    x = numpy.concatenate([x0, numpy.arange(x0[-1], x0[-1]+window_size)])
    local_max = numpy.zeros(len(x0))
    for ii in range(len(x0)):
        local_max[ii] = x[y[ii:(ii + window_size)].argmax() + ii]
    u, c = numpy.unique(local_max, return_counts=True)
    i_return = numpy.where(c >= peak_threshold)[0]
    return list(zip(u[i_return], c[i_return]))
along with a snippet used to produce the figure shown below:
import numpy
from matplotlib import pyplot
def plot_case(axx, w_f):
    p = peak_finder(numpy.arange(0, len(Y)), -Y, w_f[0], w_f[1])
    r = .9*min(Y)/10
    axx.plot(Y)
    for ip in p:
        axx.text(ip[0], r + Y[int(ip[0])], int(ip[0]),
                 rotation=90, horizontalalignment='center')
    yL = pyplot.gca().get_ylim()
    axx.set_ylim([1.15*min(Y), yL[1]])
    axx.set_xlim([-50, 1100])
    axx.set_title(f'window: {w_f[0]}, count: {w_f[1]}', loc='left', fontsize=10)
    return None
window_frequency = {1:(15, 15), 2:(100, 100), 3:(100, 5)}
f, ax = pyplot.subplots(1, 3, sharey='row', figsize=(9, 4),
                        gridspec_kw={'hspace': 0, 'wspace': 0, 'left': .08,
                                     'right': .99, 'top': .93, 'bottom': .06})
for k, v in window_frequency.items():
    plot_case(ax[k-1], v)
pyplot.show()
The three cases show parameter values that render (from the left to the right panel): (1) too many, (2) too few, and (3) an intermediate number of peaks.
To generate Y data, I used the function #deinonychusaur gave above, and added some noise to it from #Cleb's answer.
I hope some might find this useful, but its efficiency primarily depends on the actual peak shapes and distances.
Finding a minimum or a maximum is not that simple, because there is no universal definition for "local maximum".
Your code seems to look for a maximum and then accepts it as such once the signal falls below that maximum by more than some delta value. After that it starts to look for a minimum with similar criteria. It does not really matter whether your data falls or rises slowly: the maximum is recorded when it is reached, and appended to the list of maxima once the level falls below the hysteresis threshold.
This is a possible way to find local minima and maxima, but it has several shortcomings. One of them is that the method is not symmetric: if the same data is run backwards, the results are not necessarily the same.
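A quick way to see whether this asymmetry matters for your data is to run the peakdet from the question both forwards and backwards and compare the reported positions; a sketch:
maxtab_fwd, _ = peakdet(v, delta)
maxtab_rev, _ = peakdet(v[::-1], delta)
# map the reversed indices back to the original orientation
pos_fwd = sorted(maxtab_fwd[:, 0])
pos_rev = sorted(len(v) - 1 - maxtab_rev[:, 0])
# if pos_fwd and pos_rev differ, the asymmetry shows up on your data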
Unfortunately, I cannot help much more, because the correct method really depends on the data you are looking at, its shape and its noisiness. If you have some samples, then we might be able to come up with some suggestions.