MUSIC Algorithm Spectrum Python Implementation

I am working on a small radar project that can measure the Doppler shift created by the heart and chest. Since I know the number of sources in advance, I decided to choose the MUSIC algorithm for spectral analysis. I am acquiring data and sending it to Python for analysis. However, my Python code reports the same power at ALL frequencies for a signal that mixes two sinusoids at 1 Hz and 2 Hz. My code is below with a sample output:
from scipy import signal
import numpy as np
from numpy import linalg as LA
import matplotlib.pyplot as plt
import cmath
import scipy
N = 5
z = np.linspace(0,2*np.pi, num=N)
x = np.sin(2*np.pi * z) + np.sin(1 * np.pi * z) + np.random.random(N) * 0.3 # sample signal
conj = np.conj(x)
l = len(conj)
sRate = 25  # sampling rate
p = 2  # number of sources / principal components
flipped = conj[::-1]
acf = signal.convolve(x, flipped, 'full')
a1 = scipy.linalg.toeplitz(c=np.asarray(acf), r=np.asarray(acf))  # autocorrelation matrix that will be decomposed into eigenvectors
eigenValues,eigenVectors = LA.eig(a1)
idx = eigenValues.argsort()[::-1]
eigenValues = eigenValues[idx]  # sorting the eigenvalues and eigenvectors from greatest to least eigenvalue
eigenVectors = eigenVectors[:,idx]
signal_eigen = eigenVectors[0:p]  # these vectors make up the signal subspace, split off using the number of principal components, 2
noise_eigen = eigenVectors[p:len(eigenVectors)]  # noise subspace
for f in range(0, sRate):
    sum1 = 0
    frequencyVector = np.zeros(len(noise_eigen[0]), dtype=np.complex_)
    for i in range(0, len(noise_eigen[0])):
        # create a frequency vector e^(j*2*pi*i*f) and take the conjugate of each component
        frequencyVector[i] = np.conjugate(complex(np.cos(2 * np.pi * i * f), np.sin(2 * np.pi * i * f)))
    for u in range(0, len(noise_eigen)):
        # sum the squared magnitude of the dot product of each noise eigenvector and the frequency vector
        sum1 += (abs(np.dot(np.asarray(frequencyVector).transpose(), np.asarray(noise_eigen[u]))))**2
    print(1/sum1)
    print("\n")
"""
(OUTPUT OF THE ABOVE CODE)
0.120681885992
0
0.120681885992
1
0.120681885992
2
0.120681885992
3
0.120681885992
4
0.120681885992
5
0.120681885992
6
0.120681885992
7
0.120681885992
8
0.120681885992
9
0.120681885992
10
0.120681885992
11
0.120681885992
12
0.120681885992
13
0.120681885992
14
0.120681885992
15
0.120681885992
16
0.120681885992
17
0.120681885992
18
0.120681885992
19
0.120681885992
20
0.120681885992
21
0.120681885992
22
0.120681885992
23
0.120681885992
24
Process finished with exit code 0
"""
Here is the formula for the MUSIC Algorithm:
https://drive.google.com/file/d/0B5EG2FEWlIZwYmkteUludHNXS0k/view?usp=sharing
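In case the link is unavailable: the MUSIC pseudospectrum is commonly written as (a standard form of the algorithm, with M the dimension of the autocorrelation matrix and p the number of sources)

$$P_{\text{MUSIC}}(f) = \frac{1}{\mathbf{e}(f)^H \mathbf{E}_N \mathbf{E}_N^H \mathbf{e}(f)} = \frac{1}{\sum_{u=1}^{M-p} \left|\mathbf{e}(f)^H \mathbf{v}_u\right|^2}, \qquad \mathbf{e}(f) = \begin{bmatrix} 1 & e^{j2\pi f} & \cdots & e^{j2\pi(M-1)f} \end{bmatrix}^T$$

where the v_u are the noise-subspace eigenvectors; the sum over noise eigenvectors in the code is trying to compute the denominator.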

Mathematically, the problem is that i and f are both integers. Thus, 2*π*i*f is an integral multiple of 2π. Allowing for a tiny bit of round-off error, this gives you a cosine very close to 1.0 and a sine very close to 0.0. These values yield virtually no variation in frequencyVector from one iteration to the next.
I also see a problem in that you set up your signal_eigen matrix, but never use it. Isn't the signal itself required by this algorithm? As a result, all you're doing is sampling the noise at intervals of 2πi.
Let's try chopping up one cycle into sRate evenly-spaced sampling points. This results in spikes at 0.24 and 0.76 (out of the range 0.0 - 0.99). Does this match your intuition about how this should work?
signal_eigen = eigenVectors[0:p]
noise_eigen = eigenVectors[p:len(eigenVectors)]  # noise subspace
print("Signal\n", signal_eigen)
print("Noise\n", noise_eigen)

for f_int in range(0, sRate * p + 1):
    sum1 = 0
    frequencyVector = np.zeros(len(noise_eigen[0]), dtype=np.complex_)
    f = float(f_int) / sRate
    for i in range(0, len(noise_eigen[0])):
        # create a frequency vector e^(j*2*pi*i*f) and take the conjugate of each component
        frequencyVector[i] = np.conjugate(complex(np.cos(2 * np.pi * i * f), np.sin(2 * np.pi * i * f)))
        # print(f, i, np.pi, np.cos(2 * np.pi * i * f))
    # print(frequencyVector)
    for u in range(0, len(noise_eigen)):
        # sum the squared dot product of each noise eigenvector and frequency vector.
        sum1 += (abs(np.dot(np.asarray(frequencyVector).transpose(), np.asarray(noise_eigen[u]))))**2
    print(f, 1/sum1)
Output
Signal
[[ -3.25974386e-01 3.26744322e-01 -5.24205744e-16 -1.84108176e-01
-7.07106781e-01 -6.86652798e-17 2.71561652e-01 3.78607948e-16
4.23482344e-01]
[ 3.40976541e-01 5.42419088e-02 -5.00000000e-01 -3.62655793e-01
-1.06880232e-16 3.53553391e-01 -3.89304223e-01 -3.53553391e-01
3.12595284e-01]]
Noise
[[ -3.06261935e-01 -5.16768248e-01 7.82012443e-16 -3.72989138e-01
-3.12515753e-16 -5.00000000e-01 5.19589478e-03 -5.00000000e-01
-2.51205535e-03]
[ 3.21775774e-01 8.19916352e-02 5.00000000e-01 -3.70053622e-01
1.44550753e-16 3.53553391e-01 4.33613344e-01 -3.53553391e-01
-2.54514258e-01]
[ -4.00349040e-01 4.82750272e-01 -8.71533036e-16 -3.42123880e-01
-2.68725150e-16 2.42479504e-16 -4.16290671e-01 -4.89739378e-16
-5.62428795e-01]
[ 3.21775774e-01 8.19916352e-02 -5.00000000e-01 -3.70053622e-01
-2.80456498e-16 -3.53553391e-01 4.33613344e-01 3.53553391e-01
-2.54514258e-01]
[ -3.06261935e-01 -5.16768248e-01 1.08027782e-15 -3.72989138e-01
-1.25036869e-16 5.00000000e-01 5.19589478e-03 5.00000000e-01
-2.51205535e-03]
[ 3.40976541e-01 5.42419088e-02 5.00000000e-01 -3.62655793e-01
-2.64414807e-16 -3.53553391e-01 -3.89304223e-01 3.53553391e-01
3.12595284e-01]
[ -3.25974386e-01 3.26744322e-01 -4.97151703e-16 -1.84108176e-01
7.07106781e-01 -1.62796158e-16 2.71561652e-01 2.06561854e-16
4.23482344e-01]]
0.0 0.115397176866
0.04 0.12355071192
0.08 0.135377011677
0.12 0.136669716901
0.16 0.148772917566
0.2 0.195742574649
0.24 0.237792763699
0.28 0.181921271171
0.32 0.12959840172
0.36 0.121070836044
0.4 0.139075881122
0.44 0.139216853056
0.48 0.117815494324
0.52 0.117815494324
0.56 0.139216853056
0.6 0.139075881122
0.64 0.121070836044
0.68 0.12959840172
0.72 0.181921271171
0.76 0.237792763699
0.8 0.195742574649
0.84 0.148772917566
0.88 0.136669716901
0.92 0.135377011677
0.96 0.12355071192
I'm also unsure of the correct implementation; having more of the paper for formula context would help. I'm not certain about the range and sampling of the f values. When I worked on FFT software, f was swept over the wave form in small increments, typically 2π/sRate.
I'm not getting those distinctive spikes now -- not sure what I did before. I made a small parametrized change, adding a num_slice variable:
num_slice = sRate * N
for f_int in range(0, num_slice + 1):
    sum1 = 0
    frequencyVector = np.zeros(len(noise_eigen[0]), dtype=np.complex_)
    f = float(f_int) / num_slice
You can compute it however you like, of course, but the ensuing loop runs through just the one cycle. Here's my output:
0.0 0.136398199883
0.008 0.136583829848
0.016 0.13711117893
0.024 0.137893463111
0.032 0.138792904453
0.04 0.139633157335
0.048 0.140219450839
0.056 0.140365986349
0.064 0.139926689416
0.072 0.138822121693
0.08 0.137054535152
0.088 0.13470609994
0.096 0.131921188389
0.104 0.128879079596
0.112 0.125765649854
0.12 0.122750994163
0.128 0.119976226317
0.136 0.117549199221
0.144 0.115546862203
0.152 0.114021482029
0.16 0.113008398728
0.168 0.112533730494
0.176 0.112621097254
0.184 0.113296863522
0.192 0.114593615279
0.2 0.116551634665
0.208 0.119218062482
0.216 0.12264326497
0.224 0.126873674308
0.232 0.131940131305
0.24 0.137840727381
0.248 0.144517728837
0.256 0.151830000359
0.264 0.159526062508
0.272 0.167228413981
0.28 0.174444818009
0.288 0.180621604818
0.296 0.185241411664
0.304 0.187943197745
0.312 0.188619481273
0.32 0.187445977812
0.328 0.184829467764
0.336 0.181300320748
0.344 0.177396490666
0.352 0.173576190425
0.36 0.170171993077
0.368 0.167379359825
0.376 0.165265454514
0.384 0.163786582966
0.392 0.16280869726
0.4 0.162130870823
0.408 0.161514399035
0.416 0.160719375729
0.424 0.159546457646
0.432 0.157875982968
0.44 0.155693319037
0.448 0.153091632029
0.456 0.150251065569
0.464 0.147402137481
0.472 0.144785618099
0.48 0.14261932062
0.488 0.141076562538
0.496 0.140275496354
0.504 0.140275496354
0.512 0.141076562538
0.52 0.14261932062
0.528 0.144785618099
0.536 0.147402137481
0.544 0.150251065569
0.552 0.153091632029
0.56 0.155693319037
0.568 0.157875982968
0.576 0.159546457646
0.584 0.160719375729
0.592 0.161514399035
0.6 0.162130870823
0.608 0.16280869726
0.616 0.163786582966
0.624 0.165265454514
0.632 0.167379359825
0.64 0.170171993077
0.648 0.173576190425
0.656 0.177396490666
0.664 0.181300320748
0.672 0.184829467764
0.68 0.187445977812
0.688 0.188619481273
0.696 0.187943197745
0.704 0.185241411664
0.712 0.180621604818
0.72 0.174444818009
0.728 0.167228413981
0.736 0.159526062508
0.744 0.151830000359
0.752 0.144517728837
0.76 0.137840727381
0.768 0.131940131305
0.776 0.126873674308
0.784 0.12264326497
0.792 0.119218062482
0.8 0.116551634665
0.808 0.114593615279
0.816 0.113296863522
0.824 0.112621097254
0.832 0.112533730494
0.84 0.113008398728
0.848 0.114021482029
0.856 0.115546862203
0.864 0.117549199221
0.872 0.119976226317
0.88 0.122750994163
0.888 0.125765649854
0.896 0.128879079596
0.904 0.131921188389
0.912 0.13470609994
0.92 0.137054535152
0.928 0.138822121693
0.936 0.139926689416
0.944 0.140365986349
0.952 0.140219450839
0.96 0.139633157335
0.968 0.138792904453
0.976 0.137893463111
0.984 0.13711117893
0.992 0.136583829848
1.0 0.136398199883
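One further thing worth checking (my observation, not something established in the thread above): numpy's LA.eig returns eigenvectors as the columns of eigenVectors, so slicing rows with eigenVectors[0:p] and eigenVectors[p:] does not actually split the signal and noise subspaces. A minimal sketch of the column-based split:

signal_eigen = eigenVectors[:, 0:p]  # signal subspace: first p columns (largest eigenvalues)
noise_eigen = eigenVectors[:, p:]    # noise subspace: remaining columns
# each noise eigenvector is then a column, e.g. noise_eigen[:, u],
# so the inner sum should iterate u over noise_eigen.shape[1]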

Related

Get the Transformation Matrix From the SciPy Procrustes Implementation

The Procrustes library has an example that demonstrates how to get the Transformation Matrix of two matrices by solving the Procrustes problem. The library seems to be old and doesn't work in Python 3.
I was wondering if there's any way to use the SciPy implementation of the Procrustes problem to solve the exact problem discussed in the library's example.
Another StackOverflow question seems to ask for exactly what I'm asking here, but I can't get it to give me the proper Transformation Matrix that would transform the Source Matrix to nearly match the Target Matrix.
In summary, I'd like to be able to implement this example using the SciPy library.
You could use scipy.linalg.orthogonal_procrustes. Here's a demonstration. Note that the function generateAB only exists to generate the arrays A and B for the demo. The key steps of the calculation are to center A and B, and then call orthogonal_procrustes.
import numpy as np
from scipy.stats import ortho_group
from scipy.linalg import orthogonal_procrustes

def generateAB(shape, noise=0, rng=None):
    # Generate A and B for the example.
    if rng is None:
        rng = np.random.default_rng()
    m, n = shape
    # Random matrix A
    A = 3 + 2*rng.random(shape)
    Am = A.mean(axis=0, keepdims=True)
    # Random orthogonal matrix T
    T = ortho_group.rvs(n, random_state=rng)
    # Target matrix B
    B = ((A - Am) @ T + rng.normal(scale=noise, size=A.shape)
         + 3*rng.random((1, n)))
    # Include T in the return, but in a real problem, T would not be known.
    return A, B, T
# For reproducibility, use a seeded RNG.
rng = np.random.default_rng(0x1ce1cebab1e)
A, B, T = generateAB((7, 5), noise=0.01, rng=rng)

# Find Q. Note that `orthogonal_procrustes` does not include
# dilation or translation. To handle translation, we center
# A and B by subtracting the means of the points.
A0 = A - A.mean(axis=0, keepdims=True)
B0 = B - B.mean(axis=0, keepdims=True)
Q, scale = orthogonal_procrustes(A0, B0)

with np.printoptions(precision=3, suppress=True):
    print('T (used to generate B from A):')
    print(T)
    print('Q (computed by orthogonal_procrustes):')
    print(Q)
    print('\nCompare A0 @ Q with B0.')
    print('A0 @ Q:')
    print(A0 @ Q)
    print('B0 (should be close to A0 @ Q if the noise parameter was small):')
    print(B0)
Output:
T (used to generate B from A):
[[-0.873 0.017 0.202 -0.44 -0.054]
[-0.129 0.606 -0.763 -0.047 -0.18 ]
[ 0.055 -0.708 -0.567 -0.408 0.088]
[ 0.024 0.24 -0.028 -0.168 0.955]
[ 0.466 0.272 0.235 -0.78 -0.21 ]]
Q (computed by orthogonal_procrustes):
[[-0.871 0.022 0.203 -0.443 -0.052]
[-0.129 0.604 -0.765 -0.046 -0.178]
[ 0.053 -0.709 -0.565 -0.409 0.087]
[ 0.027 0.239 -0.029 -0.166 0.956]
[ 0.47 0.273 0.233 -0.779 -0.21 ]]
Compare A0 @ Q with B0.
A0 @ Q:
[[-0.622 0.224 0.946 1.038 0.578]
[ 0.263 0.143 -0.031 -0.949 0.492]
[-0.49 0.758 0.473 -0.221 -0.755]
[ 0.205 -0.74 0.065 -0.192 -0.551]
[-0.295 -0.434 -1.103 0.444 0.547]
[ 0.585 -0.378 -0.645 -0.233 0.651]
[ 0.354 0.427 0.296 0.113 -0.963]]
B0 (should be close to A0 @ Q if the noise parameter was small):
[[-0.627 0.226 0.949 1.032 0.576]
[ 0.268 0.135 -0.028 -0.95 0.492]
[-0.493 0.765 0.475 -0.201 -0.75 ]
[ 0.214 -0.743 0.071 -0.196 -0.55 ]
[-0.304 -0.433 -1.115 0.451 0.551]
[ 0.589 -0.375 -0.645 -0.235 0.651]
[ 0.354 0.426 0.292 0.1 -0.969]]
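To actually map source points onto the target frame, undo the centering after applying Q (a short sketch reusing the arrays above; B_est is my name, not part of the answer):

# reconstruct B from A: rotate the centered A, then shift by B's mean
B_est = (A - A.mean(axis=0, keepdims=True)) @ Q + B.mean(axis=0, keepdims=True)
print(np.allclose(B_est, B, atol=0.1))  # close up to the added noise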

How to assign new observations to cluster using distance matrix and kmedoids?

I have a dataframe that holds the Word Mover's Distance between each document in my dataframe. I am running kmediods on this to generate clusters.
   1     2     3     4     5
1  0.00  0.05  0.07  0.04  0.05
2  0.05  0.00  0.06  0.04  0.05
3  0.07  0.06  0.00  0.06  0.06
4  0.04  0.04  0.06  0.00  0.04
5  0.05  0.05  0.06  0.04  0.00
kmed = KMedoids(n_clusters=3, random_state=123, method='pam').fit(distance)
After running on this initial matrix and generating clusters, I want to add new points to be clustered. After adding a new document to the distance matrix I end up with:
   1     2     3     4     5     6
1  0.00  0.05  0.07  0.04  0.05  0.12
2  0.05  0.00  0.06  0.04  0.05  0.21
3  0.07  0.06  0.00  0.06  0.06  0.01
4  0.04  0.04  0.06  0.00  0.04  0.05
5  0.05  0.05  0.06  0.04  0.00  0.12
6  0.12  0.21  0.01  0.05  0.12  0.00
I have tried using kmed.predict on the new row.
kmed.predict(new_distance.loc[-1: ])
However, this gives me an error of incompatible dimensions X.shape[1] == 6 while Y.shape[1] == 5.
How can I use this distance of the new document to determine which cluster it should be a part of? Is this even possible, or do I have to recompute clusters every time? Thanks!
The source code for k-medoids says the following:
def transform(self, X):
"""Transforms X to cluster-distance space.
Parameters
----------
X : {array-like, sparse matrix}, shape (n_query, n_features), \
or (n_query, n_indexed) if metric == 'precomputed'
Data to transform.
"""
I assume that you use the precomputed metric (because you compute the distances outside the classifier), so in your case n_query is the number of new documents and n_indexed is the number of documents on which the fit method was called.
In your particular case, when you fit the model on 5 documents and then want to classify the 6th one, the X for classification should have shape (1, 5), which can be obtained as
kmed.predict(new_distance.iloc[-1:, :-1])
Here is my trial; we must recompute the distance between the new point and the old ones each time.
import pandas as pd
from sklearn_extra.cluster import KMedoids
from sklearn.metrics import pairwise_distances
import numpy as np

# dummy data for the trial
df = pd.DataFrame({0: [0, 1], 1: [1, 2]})
# calculate the pairwise distance matrix
distance = pairwise_distances(df.values, df.values)
# fit the model on the precomputed distances
kmed = KMedoids(n_clusters=2, random_state=123, method='pam', metric='precomputed').fit(distance)

new_point = [2, 3]
# calculate the distance between the new point and the initial dataset
distance = pairwise_distances(np.array(new_point).reshape(1, -1), df.values)
print(distance)
# predict from the new point's distances to the fitted points
print(kmed.predict(distance[0][:2].reshape(1, -1)))
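Since predict with a precomputed metric simply reports the nearest medoid, you can also assign the new document by hand from its distance row (a sketch for the original 5-document example; new_row is a hypothetical array holding the new document's distances to the five fitted documents):

import numpy as np
# hypothetical: row 6 of the distance matrix, without the self-distance
new_row = np.array([0.12, 0.21, 0.01, 0.05, 0.12])
# cluster label = position of the nearest medoid among the fitted documents
label = np.argmin(new_row[kmed.medoid_indices_])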

Obtaining the last value that equals or is nearest in the dataframe column

I have an issue in my code; I'm computing cut points.
First, this is my DataFrame column:
In [23]: df['bad%']
0 0.025
1 0.007
2 0.006
3 0.006
4 0.006
5 0.006
6 0.007
7 0.007
8 0.007
9 0.006
10 0.006
11 0.009
12 0.009
13 0.009
14 0.008
15 0.008
16 0.008
17 0.012
18 0.012
19 0.05
20 0.05
21 0.05
22 0.05
23 0.05
24 0.05
25 0.05
26 0.05
27 0.062
28 0.062
29 0.061
...
5143 0.166
5144 0.166
5145 0.166
5146 0.167
5147 0.167
5148 0.167
5149 0.167
5150 0.167
5151 0.05
5152 0.167
5153 0.167
5154 0.167
5155 0.167
5156 0.051
5157 0.052
5158 0.161
5159 0.149
5160 0.168
5161 0.168
5162 0.168
5163 0.168
5164 0.168
5165 0.168
5166 0.168
5167 0.168
5168 0.049
5169 0.168
5170 0.168
5171 0.168
5172 0.168
Name: bad%, Length: 5173, dtype: float64
I used this code to detect the value equal to or nearest to 0.05 (the value entered on the console):
a = 0.05  # variable "a" introduced via the console, in this case "0.05"
error = 100  # margin of error
valuesA = []  # array to save data
pointCut = 0  # identifies the cut point
for index, row in df.iterrows():
    if abs(row['bad%'] - a) <= error:
        valuesA = row
        error = abs(row['bad%'] - a)
        pointCut = index
This code returns the value "0.05" at index 5151. At first glance that looks good, because the "0.05" at index "5151" is the last exact "0.05".
Out [27]:
5151 0.05
But my objective is to obtain THE LAST VALUE IN THE COLUMN equal to or nearest to "0.05"; in this case that value is "0.049" at index "5168", and that is the value I need to obtain.
Is there an algorithm that permits this? Any solution or recommendation?
Thanks in advance.
Solutions if at least one value exists:
Use [::-1] to search the values from the back and idxmax to get the last matched index value:
a = 0.05
s = df['bad%']
b = s[[(s[::-1] <= a).idxmax()]]
print (b)
5168 0.049
Or:
b = s[(s <= a)].iloc[[-1]]
print (b)
5168 0.049
Name: bad%, dtype: float64
A solution that also works if the value does not exist, in which case an empty Series results (m2 keeps only the last True of m1, by cumulatively summing from the back and keeping positions where the running count equals 1):
a = 0.05
s = df['bad%']
m1 = (s <= a)
m2 = m1[::-1].cumsum().eq(1)
b = s[m1 & m2]
print (b)
5168 0.049
Name: bad%, dtype: float64
Sample data:
df = pd.DataFrame({'bad%': {5146: 0.16699999999999998, 5147: 0.16699999999999998, 5148: 0.16699999999999998, 5149: 0.049, 5150: 0.16699999999999998, 5151: 0.05, 5152: 0.16699999999999998, 5167: 0.168, 5168: 0.049, 5169: 0.168}})
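Note that the masks above handle "less than or equal to a". If you literally want the last value nearest to a, on either side of it, a small sketch:

a = 0.05
s = df['bad%']
# distance of every value from a; [::-1] makes idxmin return the last minimizer
idx = (s - a).abs()[::-1].idxmin()
b = s.loc[[idx]]
print(b)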

linear fit with predefined error in response variable

I have the following dataset (replication):
ordinal_var fraction error_on_fraction
1 1.2 0.1
2 0.87 0.23
4 1.12 0.11
5 0.75 0.06
5 0.66 0.15
6 0.98 0.08
7 1.34 0.05
7 2.86 0.12
Now I want to do linear regression analysis (preferably in R, but Python is also fine) where I pass the error in y for each point within the formula. So in R this would be something like (for better understanding of the question):
lm(fraction +- error_on_fraction ~ ordinal_var, data = dataset)
Of course I tried to find out how to do it myself first, but I couldn't find an answer.
For a previous analysis with errors on both x and y I just used the scipy.odr library, but I can't find how to do it with an error in only the y (response) variable.
Any help would be much appreciated!
We can use a simple weighted least squares model.
Sample data
Let's read in your sample data.
df <- read.table(text =
"ordinal_var fraction error_on_fraction
1 1.2 0.1
2 0.87 0.23
4 1.12 0.11
5 0.75 0.06
5 0.66 0.15
6 0.98 0.08
7 1.34 0.05
7 2.86 0.12", header = T)
Weighted least squares model
We fit a weighted linear model of the form fraction ~ ordered(ordinal_var), where the weights are given by 1 / error_on_fraction.
fit <- lm(
  fraction ~ ordered(ordinal_var),
  weights = 1 / error_on_fraction,
  data = df)
summary(fit)
#
#Call:
#lm(formula = fraction ~ ordered(ordinal_var), data = df, weights = 1/error_on_fraction)
#
#Weighted Residuals:
# 1 2 3 4 5 6 7
# 2.220e-16 -1.851e-16 -1.753e-17 1.050e-01 -1.660e-01 1.810e-17 -1.999e+00
# 8
# 3.097e+00
#
#Coefficients:
# Estimate Std. Error t value Pr(>|t|)
#(Intercept) 1.1136 0.3365 3.309 0.0804 .
#ordered(ordinal_var).L 0.3430 0.7847 0.437 0.7047
#ordered(ordinal_var).Q 0.6228 0.7057 0.883 0.4706
#ordered(ordinal_var).C 0.2794 0.8920 0.313 0.7838
#ordered(ordinal_var)^4 0.2127 0.9278 0.229 0.8400
#ordered(ordinal_var)^5 -0.2469 0.7916 -0.312 0.7846
#---
#Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
#Residual standard error: 2.61 on 2 degrees of freedom
#Multiple R-squared: 0.5427, Adjusted R-squared: -0.6004
#F-statistic: 0.4748 on 5 and 2 DF, p-value: 0.783
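Since the question says Python is also fine, here is a minimal equivalent sketch with statsmodels, treating ordinal_var as numeric for simplicity. (Note: if error_on_fraction is a standard deviation, the conventional weighted-least-squares weight is 1/error^2 rather than 1/error.)

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "ordinal_var":       [1, 2, 4, 5, 5, 6, 7, 7],
    "fraction":          [1.2, 0.87, 1.12, 0.75, 0.66, 0.98, 1.34, 2.86],
    "error_on_fraction": [0.1, 0.23, 0.11, 0.06, 0.15, 0.08, 0.05, 0.12],
})
# weight each observation by the inverse variance of its response
fit = smf.wls("fraction ~ ordinal_var", data=df,
              weights=1.0 / df["error_on_fraction"]**2).fit()
print(fit.summary())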

how to set a variable dynamically - python, pandas

>>>import pandas as pd
>>>import numpy as np
>>>from pandas import Series, DataFrame
>>>rawData = pd.read_csv('wow.txt')
>>>rawData
time mean
0 0.005 0
1 0.010 258.64
2 0.015 258.43
3 0.020 253.72
4 0.025 0
5 0.030 0
6 0.035 253.84
7 0.040 254.17
8 0.045 0
9 0.050 0
10 0.055 0
11 0.060 254.73
12 0.065 254.90
.
.
.
489 4.180 167.46
I want to apply the formula below and get 'y' when I enter an 'x' value dynamically, in order to plot a graph.
y = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
If the 'mean' value is 0 (for example indices 4, 5 and 8, 9, 10):
1) Ask the question "Do you want to interpolate?"
2) If yes, enter the 'x' value.
3) Calculate using the formula (repeat 1-3 until the answer is no).
4) If the answer is no, finish the program.
time(x-axis) mean(y-axis)
0 0.005 0
1 0.010 258.64
2 0.015 258.43
3 0.020 <--x0 253.72 <-- y0
4 0.025 0
5 0.030 0
6 0.035 <--x1 253.84 <-- y1
7 0.040 <--x0 254.17 <-- y0
8 0.045 0
9 0.050 0
10 0.055 0
11 0.060 <--x1 254.73 <-- y1
12 0.065 254.90
.
.
.
489 4.180 167.46
The variables x0, x1, y0, y1 are determined by the non-zero values located immediately on either side of a run of '0' values.
How can I get a variable dynamically and calculate this?
Do you have any good ideas for designing the program?
for i in df.index:
    if df['mean'][i] == 0:  # df.mean is a method, so use df['mean'] for the column
        answer = input("Do you want to interpolate? ")
        if answer == "Y":
            # x1, y1: first non-zero point after i (x0, y0 were set on the last non-zero row)
            nxt = df.index[(df.index > i) & (df['mean'] > 0).values][0]
            x1, y1 = df['time'][nxt], df['mean'][nxt]
            x = df['time'][i]
            df.loc[i, 'mean'] = y0 + (y1 - y0) * (x - x0) / (x1 - x0)  # interpolate eqn
        else:
            break  # finish the program
    else:
        x0 = df['time'][i]
        y0 = df['mean'][i]
Excuse typos, working on mobile phone.
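As an aside (not part of the original answer): pandas can do this linear interpolation in one step if the zeros are treated as missing values, interpolating against the actual 'time' values:

import numpy as np
# treat zeros as missing, then interpolate linearly against 'time'
s = df.set_index('time')['mean'].replace(0, np.nan)
df['mean'] = s.interpolate(method='index').values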
