How to plot on a smaller scale - Python

I am using matplotlib and I'm running into problems when trying to plot large vectors;
I sometimes get a "MemoryError".
My question is whether there is any way to reduce the scale of the values that I need to plot.
In this example I'm plotting a vector of size 2647296!
Is there any way to plot the same values on a smaller scale?

It is very unlikely that your display has enough resolution to show 2.6 million data points in a plot. A simple way to plot less data is to sample, e.g., every 1000th point: plot(x[::1000]). If that loses too much, and it is important to see, e.g., the extremal values, you can split the long vector into suitably many parts, take the minimum and maximum of each part, and plot those:
import matplotlib.pyplot as plt

tmp = x[:len(x) - len(x) % 1000]  # drop a few points so the length is a multiple of 1000
tmp = tmp.reshape((-1, 1000))     # split into consecutive pieces of 1000 points each
# (equivalently, reshape((1000, -1)) splits into 1000 pieces)
plt.figure()                      # plot the minimum and maximum in the same figure
plt.plot(tmp.min(axis=1))
plt.plot(tmp.max(axis=1))
plt.show()

You can use a min/max for each block of data to subsample the signal.
Window size would have to be determined based on how accurately you want to display your signal and/or how large the window is compared to the signal length.
Example code:
from scipy.io import wavfile
import matplotlib.pyplot as plt

def value_for_window_min_max(data, start, stop):
    """Return whichever of the window's min/max has the larger magnitude."""
    window_min = data[start]
    window_max = data[start]
    for i in range(start, stop):
        if data[i] < window_min:
            window_min = data[i]
        if data[i] > window_max:
            window_max = data[i]
    if abs(window_min) > abs(window_max):
        return window_min
    else:
        return window_max

# This will only work properly if window_size divides evenly into len(data)
def subsample_data(data, window_size):
    print(len(data))
    print(len(data) // window_size)
    out_data = []
    for i in range(len(data) // window_size):
        out_data.append(value_for_window_min_max(data, i * window_size, (i + 1) * window_size))
    return out_data

sample_rate, data = wavfile.read('<path_to_wav_file>')
sub_amt = 10
sub_data = subsample_data(data, sub_amt)
print(len(data))
print(len(sub_data))

fig = plt.figure(figsize=(8, 6), dpi=100)
fig.add_subplot(211)
plt.plot(data)
plt.title('Original')
plt.xlim([0, len(data)])
fig.add_subplot(212)
plt.plot(sub_data)
plt.xlim([0, len(sub_data)])
plt.title('Subsampled by %d' % sub_amt)
plt.show()
Output: [plot of the original signal (top) and the subsampled signal (bottom)]

Related

How to find peaks in a noisy signal or estimate their number?

I have a series of signals; the sample data looks like this:
We can see that there are 5 peaks there. I can assume that there won't be more than 1 peak every 10 samples; usually there is one peak every 20 to 40 samples.
I was trying to fit a polynomial and then use scipy.signal.find_peaks, and it kind of works, but I have to choose a different number of spline knots to approximate each series correctly, and the number of knots correlates with the number of peaks, so I sort of ended up where I began - but now I'd only need a rough idea of the number of peaks.
Then I tried dividing the signal into parts:
window = 10  # the smallest range potentially containing a whole peak
parts = np.array_split(data, len(data)//window)  # divide the data set into parts
lengths = []
d = np.nan
for i in parts:
    d = abs(i.max() - i.min())
    lengths.append(d)  # differences between the max and min values in each part
av = sum(lengths)/len(lengths)
for i in lengths:
    if i < some_tolerance_fraction*av:
        window = window + 1  # make the parts bigger for the next check
        break
The idea was that the difference between the min and max values in these parts should be smaller than the height of an actual peak I'm looking for, unless the parts are large enough to contain a whole peak - then the differences should be similar in each part, and their average should also be similar to the actual height of the peak.
But this doesn't work at all, and possibly doesn't even make sense - depending on the tolerance, it either keeps enlarging the window or never enlarges it at all.
this is the array from the image:
array([254256., 254390., 251546., 250561., 250603., 250128., 251000.,
252612., 253552., 253776., 252843., 251800., 250808., 250569.,
249804., 247755., 247685., 247111., 242320., 242580., 243462.,
240383., 239689., 240730., 239508., 239604., 238544., 240174.,
240806., 240218., 239956., 241325., 241343., 241532., 240696.,
242064., 241830., 237569., 237392., 236353., 234819., 234430.,
233890., 233215., 233745., 232159., 231778., 230307., 228754.,
225823., 225139., 223737., 222078., 221188., 220669., 221944.,
223928., 224996., 223405., 223018., 224966., 226590., 226166.,
226012., 226192., 224900., 224439., 223179., 222375., 221509.,
220734., 219686., 218656., 217792., 215934., 214829., 213673.,
212837., 211604., 210748., 210216., 209974., 209659., 209707.,
210131., 210663., 212113., 213078., 214476., 215087., 216220.,
216831., 217286., 217373., 217030., 216491., 215642., 214249.,
213273., 212148., 210846., 209570., 208202., 207165., 206677.,
205703., 203837., 202620., 201530., 198812., 197654., 196506.,
194163., 193736., 193945., 193785., 193417., 193044., 193768.,
194690., 195739., 198592., 199237., 199932., 200142., 199859.,
199593., 199337., 198403., 197500., 195988., 195114., 194278.,
193837., 193861.])
I would use scipy's find_peaks, but first filter the signal with a moving average:
import numpy as np
import matplotlib.pyplot as plt
arr = np.array([254256., 254390., 251546., 250561., 250603., 250128., 251000.,
252612., 253552., 253776., 252843., 251800., 250808., 250569.,
249804., 247755., 247685., 247111., 242320., 242580., 243462.,
240383., 239689., 240730., 239508., 239604., 238544., 240174.,
240806., 240218., 239956., 241325., 241343., 241532., 240696.,
242064., 241830., 237569., 237392., 236353., 234819., 234430.,
233890., 233215., 233745., 232159., 231778., 230307., 228754.,
225823., 225139., 223737., 222078., 221188., 220669., 221944.,
223928., 224996., 223405., 223018., 224966., 226590., 226166.,
226012., 226192., 224900., 224439., 223179., 222375., 221509.,
220734., 219686., 218656., 217792., 215934., 214829., 213673.,
212837., 211604., 210748., 210216., 209974., 209659., 209707.,
210131., 210663., 212113., 213078., 214476., 215087., 216220.,
216831., 217286., 217373., 217030., 216491., 215642., 214249.,
213273., 212148., 210846., 209570., 208202., 207165., 206677.,
205703., 203837., 202620., 201530., 198812., 197654., 196506.,
194163., 193736., 193945., 193785., 193417., 193044., 193768.,
194690., 195739., 198592., 199237., 199932., 200142., 199859.,
199593., 199337., 198403., 197500., 195988., 195114., 194278.,
193837., 193861.])
def moving_average(x, w):
    """calculate the moving average with window size w"""
    return np.convolve(x, np.ones(w), 'valid') / w

# moving average with window size 5
n = 5
arr_f = moving_average(arr, n)
# pad the front so the filtered signal lines up with the original in the same plot
arr_f_ext = np.hstack([np.ones(n//2)*arr_f[0], arr_f])
plt.figure()
plt.plot(arr, 'o')
plt.plot(arr_f_ext)
This will show: [plot of the data points with the moving-average curve overlaid]
Then find peaks:
from scipy.signal import find_peaks

# n//2 is the offset of the averaged signal (2 in this example)
peaks = find_peaks(arr_f)[0] + n//2
plt.plot(peaks, arr[peaks], 'xr', ms=10)
which will show: [the same plot with red crosses marking the detected peaks]
Note that:
1) the filtered signal will have a delay of n/2 samples (rounding down), so add n//2 to the peaks found in the filtered signal;
2) the filtered signal does not have the same values as the original, only the same behaviour, so to extract the peak values, index into the original signal.
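Since the question also states that there is at most one peak every 10 samples, that spacing can be encoded directly through find_peaks' distance parameter; a minimal sketch (using the same arr_f and n as above):
from scipy.signal import find_peaks

# distance=10 enforces a minimum spacing of 10 samples between detected peaks,
# matching the "no more than 1 peak every 10 samples" assumption in the question
peaks, _ = find_peaks(arr_f, distance=10)
peaks = peaks + n // 2  # same n//2 delay correction as above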
My informal definition of a peak is a point surrounded by two segments, one ascending and one descending. It's pretty easy to implement by iterating over the array and comparing two neighbouring segments.
If they are both in the same direction, we merge the 2 segments by deleting the middle point.
To determine whether they are in the same direction, I used multiplication: the product is positive if the 2 segments are in the same direction.
At the end, every remaining point will be a peak (we cannot make the determination for the first and last two points).
i = 0  # position cursor at beginning
while i <= len(t) - 3:
    if (t[i] - t[i+1]) * (t[i+1] - t[i+2]) >= 0:
        # Same direction: join the 2 segments by removing the middle point.
        # This test also includes the case of a horizontal segment
        # formed by the first 2 points; we remove the second.
        del t[i+1]
    else:
        # Different directions: delete nothing, move the cursor by 1.
        i += 1
See the plot: you can see the reduction from 135 to 34 points.
Each blue mark is a peak.
Some of these peaks are non-significant, and some more filtering is required. The best method depends on your application: you may filter on the vertical distance between 2 adjacent peaks, or on the horizontal distance between 2 adjacent peaks. For the latter case, we need the x value of each peak, so I rewrote the program using x-y data points.
t0 = [254256, 254390, 251546, 250561, 250603, 250128, 251000,
252612, 253552, 253776, 252843, 251800, 250808, 250569,
249804, 247755, 247685, 247111, 242320, 242580, 243462,
240383, 239689, 240730, 239508, 239604, 238544, 240174,
240806, 240218, 239956, 241325, 241343, 241532, 240696,
242064, 241830, 237569, 237392, 236353, 234819, 234430,
233890, 233215, 233745, 232159, 231778, 230307, 228754,
225823, 225139, 223737, 222078, 221188, 220669, 221944,
223928, 224996, 223405, 223018, 224966, 226590, 226166,
226012, 226192, 224900, 224439, 223179, 222375, 221509,
220734, 219686, 218656, 217792, 215934, 214829, 213673,
212837, 211604, 210748, 210216, 209974, 209659, 209707,
210131, 210663, 212113, 213078, 214476, 215087, 216220,
216831, 217286, 217373, 217030, 216491, 215642, 214249,
213273, 212148, 210846, 209570, 208202, 207165, 206677,
205703, 203837, 202620, 201530, 198812, 197654, 196506,
194163, 193736, 193945, 193785, 193417, 193044, 193768,
194690, 195739, 198592, 199237, 199932, 200142, 199859,
199593, 199337, 198403, 197500, 195988, 195114, 194278,
193837, 193861]
def graph(t1, t2):
    import matplotlib.pyplot as plt
    plt.figure()
    plt.plot([p[0] for p in t1], [p[1] for p in t1], color='r', label="raw data")
    plt.plot([p[0] for p in t2], [p[1] for p in t2], marker='.', color='b', label="reduced data")
    plt.title('Peak identification')
    plt.legend()
    plt.show()

def reduce(t):
    i = 0  # position cursor at beginning
    while i < len(t) - 2:
        if (t[i][1] - t[i+1][1]) * (t[i+1][1] - t[i+2][1]) >= 0:
            # Same direction: join the 2 segments by removing the middle point.
            # This test also includes the case of a horizontal segment
            # formed by the first 2 points; we remove the second.
            del t[i+1]
        else:
            # Different directions: delete nothing, move the cursor by 1.
            i += 1

t1 = [(i, y) for i, y in enumerate(t0)]  # add x to every data point
t = t1.copy()
reduce(t)
graph(t1, t)
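For instance, a minimal sketch of one reading of the vertical-distance filter mentioned above (the threshold value is an arbitrary assumption, not part of the code above):
min_rise = 2000  # hypothetical threshold; tune it for your data
filtered = [t[0]]
for point in t[1:]:
    # keep a point only if it is vertically far enough from the last kept point
    if abs(point[1] - filtered[-1][1]) >= min_rise:
        filtered.append(point)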
Have fun!

How to split dataframe according to intersection point in Python?

I am working on a project that aims to show the difference between good form and bad form of an exercise. To do this, we collected acceleration data with a wrist-based accelerometer. The image above shows 2 sets of a fitness exercise (bench press); each set has 10 repetitions, and the image below shows the 10 repetitions of 1 set. I have a raw data set which consists of 10 sets of an exercise. What I want to do is split the raw data into 10 parts, each containing the portion between 2 black lines in the image above, so I can analyze the data easily. My supervisor gave me a starting point: choose a cutpoint in each set, find the first interruption time, start cutting 3 seconds before that time, count to 10, and finish cutting.
This is an idea that I don't know how to apply. At least, if you can tell me how to cut a dataframe according to a cutpoint, I would be grateful.
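For the literal "cut a dataframe at a cutpoint" part, here is a minimal sketch, assuming a DataFrame with a DatetimeIndex and a known cutpoint timestamp; the function name and the 10-second window are hypothetical, and the 3-second lead-in follows the supervisor's description:
import pandas as pd

def cut_at(frame, cutpoint,
           lead=pd.Timedelta(seconds=3), length=pd.Timedelta(seconds=10)):
    """Slice a DatetimeIndex-ed frame, starting 3 s before the cutpoint."""
    start = cutpoint - lead
    return frame.loc[start:start + length]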
Well, I found another way to detect the periodic parts of my accelerometer data. Here is my code:
import numpy as np
from peakdetect import peakdetect
import datetime as dt
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from matplotlib import style
import pandas as pd

style.use('ggplot')

def get_periodic(path):
    periodics = []
    # DataFrame.from_csv was removed from pandas; read_csv with these options is equivalent
    data_frame = pd.read_csv(path, index_col=0, parse_dates=True)
    data_frame.columns = ['z', 'y', 'x']
    if '1' in path:
        if 'bench' in path:
            bench_press_1_week = data_frame.between_time('11:24', '11:52')
            peak_indexes = get_peaks(bench_press_1_week.y, lookahead=3000)
            for i in range(len(peak_indexes)):
                time_indexes = bench_press_1_week.index.tolist()
                start_time = time_indexes[0]
                periodic_start = start_time.to_pydatetime() + dt.timedelta(0, peak_indexes[i] / 100)
                periodic_end = periodic_start + dt.timedelta(0, 60)
                periodic = bench_press_1_week.between_time(periodic_start.time(), periodic_end.time())
                periodics.append(periodic)
    return periodics

def get_peaks(data, lookahead):
    peak_indexes = []
    correlation = np.correlate(data, data, mode='full')
    realcorr = correlation[correlation.size // 2:]
    maxpeaks, minpeaks = peakdetect(realcorr, lookahead=lookahead)
    for i in range(len(maxpeaks)):
        peak_indexes.append(maxpeaks[i][0])
    return peak_indexes

def show_segment_plot(data, periodic_area, exercise_name):
    plt.figure(8)
    gs = gridspec.GridSpec(7, 2)
    ax = plt.subplot(gs[:2, :])
    plt.title(exercise_name)
    ax.plot(data)
    k = 0
    for i in range(2, 7):
        for j in range(0, 2):
            ax = plt.subplot(gs[i, j])
            title = "{}. Set".format(k + 1)
            plt.title(title)
            ax.plot(periodic_area[k])
            k = k + 1
    plt.show()
Firstly, this question gave me another perspective on my problem. The image below shows the raw accelerometer data of a bench press with 10 sets. It has 3 axes (x, y, z), and its major axis is y (blue in the image).
I used the autocorrelation function to detect the periodic parts; in the image above, every peak represents 1 set of the exercise. With this peak detection algorithm I found each peak's x-axis value:
In[196]: maxpeaks
Out[196]:
[[16204, 32910.14013671875],
[32281, 28726.95849609375],
[48515, 24583.898681640625],
[64436, 22088.130859375],
[80335, 19582.248291015625],
[96699, 16436.567626953125],
[113081, 12100.027587890625],
[129027, 8098.98486328125],
[145184, 5387.788818359375]]
Basically, each x value represents samples. My sampling frequency was 100 Hz, so 16204/100 = 162.04 seconds. To find the time of a periodic part, I added 162.04 s to the start time. Each bench press took approximately 1 min; in this example the exercise's start time was 11:24, so the first periodic part starts at about 11:26 and ends 1 min later. There is some lag, but this is the best solution I found.
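A minimal sketch of that sample-index-to-clock-time conversion (fs is the 100 Hz sampling rate quoted above; the date is hypothetical):
import datetime as dt

fs = 100  # sampling frequency in Hz
start_time = dt.datetime(2017, 1, 1, 11, 24)  # hypothetical date, 11:24 start

peak_sample = 16204                              # first autocorrelation peak
offset = dt.timedelta(seconds=peak_sample / fs)  # 162.04 s
periodic_start = start_time + offset             # about 11:26:42
periodic_end = periodic_start + dt.timedelta(minutes=1)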

How to average a signal to remove noise with Python

I am working on a small project in the lab with an Arduino Mega 2560 board. I want to average the signal (voltage) of the positive-slope portion (rise) of a triangle wave to try to remove as much noise as possible. My frequency is 20 Hz, and I am working with a data rate of 115200 bits/second (the fastest recommended by Arduino for data transfer to a computer).
The raw signal looks like this:
My data is stored in a text file, with each line corresponding to a data point. Since I have thousands of data points, I expect that some averaging would smooth out my signal and make it a close-to-perfect straight line in this case. However, other experimental conditions might lead to a signal where I could have features along the positive-slope portion of the triangle wave, such as a negative peak, and I absolutely need to be able to see this feature in my averaged signal.
I am a Python beginner, so I might not have the ideal approach, and my code might look bad to most of you, but I would still like to get your hints/ideas on how to improve my signal-processing code to achieve better noise removal by averaging the signal.
#!/usr/bin/python
import matplotlib.pyplot as plt
import math

# *** OPEN AND PLOT THE RAW DATA ***
data_filename = "My_File_Name"
filepath = "My_File_Path" + data_filename + ".txt"

# Open the raw data: one value per line, stripped of '\n' and converted to float
with open(filepath, "r") as f:
    rawdata = [float(line.strip()) for line in f]

# Plot the raw data
plt.plot(rawdata, 'r-')
plt.ylabel('Lightpower (V)')
plt.show()

# *** FIND THE LOCAL MAXIMA AND MINIMA ***
# Number of data points in each range
datarange = 15  # This number can be changed for better processing
max_i_range = int(math.floor(len(rawdata) / datarange)) - 3

# Declare empty lists for the maxima and minima
min_list = []
max_list = []
min_list_index = []
max_list_index = []

for i in range(max_i_range):
    delimiter0 = i * datarange
    delimiter1 = (i + 1) * datarange
    delimiter2 = (i + 2) * datarange
    delimiter3 = (i + 3) * datarange
    averagerange1 = sum(rawdata[delimiter0:delimiter1]) / datarange
    averagerange2 = sum(rawdata[delimiter1:delimiter2]) / datarange
    averagerange3 = sum(rawdata[delimiter2:delimiter3]) / datarange
    # Check whether there is a minimum in range 2
    if averagerange1 > averagerange2 and averagerange2 < averagerange3:
        min_value = min(rawdata[delimiter1:delimiter2])
        min_list.append(min_value)
        # Index of the first occurrence of the minimum
        min_index = delimiter1 + rawdata[delimiter1:delimiter2].index(min_value)
        min_list_index.append(min_index)
    # Check whether there is a maximum in range 2
    if averagerange1 < averagerange2 and averagerange2 > averagerange3:
        max_value = max(rawdata[delimiter1:delimiter2])
        max_list.append(max_value)
        # Index of the first occurrence of the maximum
        max_index = delimiter1 + rawdata[delimiter1:delimiter2].index(max_value)
        max_list_index.append(max_index)

# *** PROCESS EACH RISE PATTERN ***
# One rise pattern goes from a min to a max
numb_of_rise_pattern = 50  # Average over 50 rise patterns; increase or lower as needed
max_min_diff_total = 0
for i in range(numb_of_rise_pattern):
    max_min_diff_total += max_list_index[i] - min_list_index[i]

# Average number of points in a rise pattern (integer, for use as a range bound)
max_min_diff_avg = abs(max_min_diff_total // numb_of_rise_pattern)

# Average value at each position along the rise patterns
avg_position_value_list = []
for i in range(max_min_diff_avg):
    sum_position_value = 0
    for j in range(numb_of_rise_pattern):
        sum_position_value += rawdata[min_list_index[j] + i]
    avg_position_value = sum_position_value / numb_of_rise_pattern
    avg_position_value_list.append(avg_position_value)

# Plot the processed signal
plt.plot(avg_position_value_list, 'r-')
plt.title(data_filename)
plt.ylabel('Lightpower (V)')
plt.show()
At the end, the processed signal looks like this:
I would expect a straighter line, but I could be wrong. I believe there are probably a lot of flaws in my code, and there are certainly better ways to achieve what I want. I have included a link to a text file with some raw data if any of you want to have fun with it.
http://www108.zippyshare.com/v/2iba0XMD/file.html
Simpler might be to use a smoothing function, such as a moving-window average. This is pretty simple to implement using the rolling function of a pandas.Series. (Only the first 500 points are shown.) Tweak the numerical argument (the window size) to get different amounts of smoothing.
import pandas as pd
import matplotlib.pyplot as plt

# Plot the raw data
ts = rawdata[0:500]
plt.plot(ts, 'r-')
plt.ylabel('Lightpower (V)')
# previous pandas API:
# smooth_data = pd.rolling_mean(rawdata[0:500], 5).plot(style='k')
# newer pandas versions require:
smooth_data = pd.Series(ts).rolling(window=7).mean().plot(style='k')
plt.show()
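One small caveat: with window=7 the first six values of the rolling mean are NaN. If that matters, min_periods lets the window shrink at the start:
smooth_data = pd.Series(ts).rolling(window=7, min_periods=1).mean().plot(style='k')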
Moving Average
A moving average is, basically, a low-pass filter. So, we could also implement a low-pass filter with functions from SciPy as follows:
import scipy.signal as signal

# First, design the Butterworth filter
N = 3     # Filter order
Wn = 0.1  # Cutoff frequency (normalized)
B, A = signal.butter(N, Wn, output='ba')

# Apply the filter forwards and backwards (zero phase shift)
smooth_data = signal.filtfilt(B, A, rawdata[0:500])
plt.plot(ts, 'r-')
plt.plot(smooth_data[0:500], 'b-')
plt.show()
Low-Pass Filter
The Butterworth filter method is from OceanPython.org, BTW.
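A note on Wn, in case you want to tie it to a physical cutoff: for SciPy's digital filters it is normalized so that 1.0 corresponds to the Nyquist frequency (half the sampling rate). A minimal sketch, with fs and cutoff_hz as hypothetical values for your setup:
import scipy.signal as signal

fs = 1000.0                  # hypothetical sampling rate, in Hz
cutoff_hz = 50.0             # hypothetical desired cutoff, in Hz
Wn = cutoff_hz / (fs / 2.0)  # normalize by the Nyquist frequency
B, A = signal.butter(3, Wn, output='ba')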

Get bin coordinates with hexbin in matplotlib

I use matplotlib's hexbin method to compute 2D histograms on my data.
But I would like to get the coordinates of the centers of the hexagons in order to further process the results.
I got the values using get_array() method on the result, but I cannot figure out how to get the bins coordinates.
I tried to compute them from the number of bins and the extent of my data, but I don't know the exact number of bins in each direction. gridsize=(10,2) should do the trick, but it does not seem to work.
Any idea?
I think this works.
import numpy as np
import matplotlib.pyplot as plt

def generate_data(n):
    """Make random, correlated x & y arrays"""
    points = np.random.multivariate_normal(mean=(0, 0),
                                           cov=[[0.4, 9], [9, 10]], size=int(n))
    return points

if __name__ == '__main__':
    color_map = plt.cm.Spectral_r
    n = 1e4
    points = generate_data(n)
    xbnds = np.array([-20.0, 20.0])
    ybnds = np.array([-20.0, 20.0])
    extent = [xbnds[0], xbnds[1], ybnds[0], ybnds[1]]

    fig = plt.figure(figsize=(10, 9))
    ax = fig.add_subplot(111)
    x, y = points.T
    # Set gridsize just to make the hexagons visually large
    image = plt.hexbin(x, y, cmap=color_map, gridsize=20, extent=extent, mincnt=1, bins='log')
    # Note that mincnt=1 only draws cells containing at least one point
    counts = image.get_array()
    ncnts = np.count_nonzero(np.power(10, counts))
    verts = image.get_offsets()
    for offc in range(verts.shape[0]):
        binx, biny = verts[offc][0], verts[offc][1]
        if counts[offc]:
            plt.plot(binx, biny, 'k.', zorder=100)
    ax.set_xlim(xbnds)
    ax.set_ylim(ybnds)
    plt.grid(True)
    cb = plt.colorbar(image, spacing='uniform', extend='max')
    plt.show()
I would love to confirm that the code by Hooked using get_offsets() works, but I tried several iterations of the code mentioned above to retrieve the center positions and, as Dave mentioned, get_offsets() remains empty. The workaround that I found is to use the non-empty image.get_paths() option. My code takes the mean of each path's vertices to find the center, which makes it just a smidge longer, but it does work.
The get_paths() option returns a set of embedded x,y coordinates that can be looped over and then averaged to return the center position of each hexagon.
The code that I have is as follows:
counts = image.get_array()   # counts in each hexagon, works great
verts = image.get_offsets()  # empty, don't use this
b = image.get_paths()        # this does work; gives Path objects that can be plotted
for x in range(len(b)):
    xav = np.mean(b[x].vertices[0:6, 0])  # center in x (RA)
    yav = np.mean(b[x].vertices[0:6, 1])  # center in y (DEC)
    plt.plot(xav, yav, 'k.', zorder=100)
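Equivalently (a compact restatement, not from the original answer), the centers can be collected in one pass, since each Path stores the hexagon's six corners plus a repeat of the first vertex:
# average the six distinct corners of each hexagon to get its center
centers = np.array([p.vertices[0:6].mean(axis=0) for p in image.get_paths()])
plt.plot(centers[:, 0], centers[:, 1], 'k.', zorder=100)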
I had this same problem. I think what needs to be developed is a framework with a HexagonalGrid object which can then be applied to many different data sets (and it would be awesome to do it for N dimensions). This is possible, and it surprises me that neither SciPy nor NumPy has anything for it (furthermore, there seems to be nothing else like it except perhaps binify).
That said, I assume you want to use hexbinning to compare multiple binned data sets. That requires some common base. I got this to work using matplotlib's hexbin in the following way:
import numpy as np
import matplotlib.pyplot as plt

def get_data(mean, cov, n=1e3):
    """
    Quick fake data builder
    """
    np.random.seed(101)
    points = np.random.multivariate_normal(mean=mean, cov=cov, size=int(n))
    x, y = points.T
    return x, y

def get_centers(hexbin_output):
    """
    About 40% faster than the previous post, only because you're not
    calculating the min/max every time
    """
    paths = hexbin_output.get_paths()
    v = paths[0].vertices[:-1]  # the path repeats the first vertex at the end; drop it
    vx, vy = v.T
    idx = [3, 0, 5, 2]  # index for [xmin, xmax, ymin, ymax]
    xmin, xmax, ymin, ymax = vx[idx[0]], vx[idx[1]], vy[idx[2]], vy[idx[3]]
    half_width_x = abs(xmax - xmin) / 2.0
    half_width_y = abs(ymax - ymin) / 2.0
    centers = []
    for i in range(len(paths)):
        cx = paths[i].vertices[idx[0], 0] + half_width_x
        cy = paths[i].vertices[idx[2], 1] + half_width_y
        centers.append((cx, cy))
    return np.asarray(centers)

# important parts ==>
class Hexagonal2DGrid(object):
    """
    Used to fix the gridsize, extent, and bins
    """
    def __init__(self, gridsize, extent, bins=None):
        self.gridsize = gridsize
        self.extent = extent
        self.bins = bins

def hexbin(x, y, hexgrid):
    """
    To hexagonally bin the data in 2 dimensions
    """
    fig = plt.figure()
    ax = fig.add_subplot(111)
    # Note mincnt=0 so that it will return a value for every cell in the
    # hexgrid, not just those with count > mincnt.
    # Basically you fix the gridsize, extent, and bins to keep them the same,
    # so the resulting count array is the same.
    hexbin = plt.hexbin(x, y, mincnt=0,
                        gridsize=hexgrid.gridsize,
                        extent=hexgrid.extent,
                        bins=hexgrid.bins)
    # you could close the figure if you don't want it
    # plt.close(fig.number)
    counts = hexbin.get_array().copy()
    return counts, hexbin

# Example ===>
if __name__ == "__main__":
    hexgrid = Hexagonal2DGrid((21, 5), [-70, 70, -20, 20])
    x_data, y_data = get_data((0, 0), [[-40, 95], [90, 10]])
    x_model, y_model = get_data((0, 10), [[100, 30], [3, 30]])
    counts_data, hexbin_data = hexbin(x_data, y_data, hexgrid)
    counts_model, hexbin_model = hexbin(x_model, y_model, hexgrid)
    # if you want the centers, they will be the same for both
    centers = get_centers(hexbin_data)
    # If you want to ignore the cells with zeros, use the following mask.
    # But if you want zeros for some bins and not others, I'm not sure of an
    # elegant way to do this without using the centers.
    nonzero = counts_data != 0
    # now you can compare the two data sets
    variance_data = counts_data[nonzero]
    square_diffs = (counts_data[nonzero] - counts_model[nonzero]) ** 2
    chi2 = np.sum(square_diffs / variance_data)
    print(" chi2={}".format(chi2))

Remove data points below a curve with python

I need to compare some theoretical data with real data in Python.
The theoretical data comes from solving an equation.
To improve the comparison, I would like to remove data points that fall far from the theoretical curve. I mean, I want to remove the points below and above the red dashed lines in the figure (made with matplotlib).
Both the theoretical curves and the data points are arrays of different length.
I can try to remove the points in a rough, by-eye way; for example, the first upper point can be detected using:
data2[(data2.redshift<0.4)&data2.dmodulus>1]
rec.array([('1997o', 0.374, 1.0203223485103787, 0.44354759972859786)], dtype=[('SN_name', '|S10'), ('redshift', '<f8'), ('dmodulus', '<f8'), ('dmodulus_error', '<f8')])
But I would like to use a less rough way.
So, can anyone help me find an easy way of removing the problematic points?
Thank you!
This might be overkill and is based on your comment:
"Both the theoretical curves and the data points are arrays of different length."
I would do the following:
1) Truncate the data set so that its x values lie within the max and min values of the theoretical set.
2) Interpolate the theoretical curve using scipy.interpolate.interp1d and the above truncated data x values. The reason for step 1) is to satisfy the constraints of interp1d.
3) Use numpy.where to find data y values that are outside the range of acceptable theory values.
4) DON'T discard these values, as was suggested in comments and other answers. If you want clarity, point them out by plotting the 'inliers' in one color and the 'outliers' in another.
Here's a script that is close to what you are looking for, I think. Hopefully it will help you accomplish what you want:
import numpy as np
import scipy.interpolate as interpolate
import matplotlib.pyplot as plt

# make up data
def makeUpData():
    '''Make many more data points (x, y, yerr) than theory (x, y),
    with theory yerr corresponding to a constant "sigma" in y
    about the x, y value'''
    NX = 150
    dataX = (np.random.rand(NX)*1.1)**2
    dataY = (1.5*dataX + np.random.rand(NX)**2)*dataX
    dataErr = np.random.rand(NX)*dataX*1.3
    theoryX = np.arange(0, 1, 0.1)
    theoryY = theoryX*theoryX*1.5
    theoryErr = 0.5
    return dataX, dataY, dataErr, theoryX, theoryY, theoryErr

def makeSameXrange(theoryX, dataX, dataY):
    '''
    Truncate the dataX and dataY ranges so that dataX min and max are within
    the max and min of theoryX.
    '''
    minT, maxT = theoryX.min(), theoryX.max()
    goodIdxMax = np.where(dataX < maxT)
    goodIdxMin = np.where(dataX[goodIdxMax] > minT)
    return (dataX[goodIdxMax])[goodIdxMin], (dataY[goodIdxMax])[goodIdxMin]

# take 'theory' and get values at every 'data' x point
def theoryYatDataX(theoryX, theoryY, dataX):
    '''For every dataX point, find the interpolated theoryY value. theoryX is
    needed for the interpolation.'''
    f = interpolate.interp1d(theoryX, theoryY)
    return f(dataX[np.where(dataX < np.max(theoryX))])

# collect valid points
def findInlierSet(dataX, dataY, interpTheoryY, theoryErr):
    '''Find where theoryY-theoryErr < dataY < theoryY+theoryErr and return
    the valid indices.'''
    withinUpper = np.where(dataY < (interpTheoryY + theoryErr))
    withinLower = np.where(dataY[withinUpper]
                           > (interpTheoryY[withinUpper] - theoryErr))
    return (dataX[withinUpper])[withinLower], (dataY[withinUpper])[withinLower]

def findOutlierSet(dataX, dataY, interpTheoryY, theoryErr):
    '''Find where dataY is outside theoryY-theoryErr .. theoryY+theoryErr and
    return the valid indices.'''
    withinUpper = np.where(dataY > (interpTheoryY + theoryErr))
    withinLower = np.where(dataY < (interpTheoryY - theoryErr))
    return (dataX[withinUpper], dataY[withinUpper],
            dataX[withinLower], dataY[withinLower])

if __name__ == "__main__":
    dataX, dataY, dataErr, theoryX, theoryY, theoryErr = makeUpData()
    TruncDataX, TruncDataY = makeSameXrange(theoryX, dataX, dataY)
    interpTheoryY = theoryYatDataX(theoryX, theoryY, TruncDataX)
    inDataX, inDataY = findInlierSet(TruncDataX, TruncDataY, interpTheoryY,
                                     theoryErr)
    outUpX, outUpY, outDownX, outDownY = findOutlierSet(TruncDataX,
                                                        TruncDataY,
                                                        interpTheoryY,
                                                        theoryErr)
    fig = plt.figure()
    ax = fig.add_subplot(211)
    ax.errorbar(dataX, dataY, dataErr, fmt='.', color='k')
    ax.plot(theoryX, theoryY, 'r-')
    ax.plot(theoryX, theoryY + theoryErr, 'r--')
    ax.plot(theoryX, theoryY - theoryErr, 'r--')
    ax.set_xlim(0, 1.4)
    ax.set_ylim(-.5, 3)
    ax = fig.add_subplot(212)
    ax.plot(inDataX, inDataY, 'ko')
    ax.plot(outUpX, outUpY, 'bo')
    ax.plot(outDownX, outDownY, 'ro')
    ax.plot(theoryX, theoryY, 'r-')
    ax.plot(theoryX, theoryY + theoryErr, 'r--')
    ax.plot(theoryX, theoryY - theoryErr, 'r--')
    ax.set_xlim(0, 1.4)
    ax.set_ylim(-.5, 3)
    fig.savefig('findInliers.png')
This figure is the result:
At the end I used some of Yann's code:
def theoryYatDataX(theoryX, theoryY, dataX):
    '''For every dataX point, find the interpolated theoryY value. theoryX is
    needed for the interpolation.'''
    f = interpolate.interp1d(theoryX, theoryY)
    return f(dataX[np.where(dataX < np.max(theoryX))])

def findOutlierSet(data, interpTheoryY, theoryErr):
    '''Find where data.dmodulus is outside theoryY-theoryErr .. theoryY+theoryErr
    and return the inlier and outlier sets.'''
    up = np.where(data.dmodulus > (interpTheoryY + theoryErr))
    low = np.where(data.dmodulus < (interpTheoryY - theoryErr))
    # join all the indices together in a flat array
    out = np.hstack([up, low]).ravel()
    index = np.ones(len(data), dtype=bool)
    index[out] = False
    datain = data[index]
    dataout = data[out]
    return datain, dataout

def selectdata(data, theoryX, theoryY):
    """
    Data selection: z < 1 and +-0.5 LFLRW separation
    """
    # Select data with redshift z < 1
    data1 = data[data.redshift < 1]
    # From modulus to light distance:
    data1.dmodulus, data1.dmodulus_error = modulus2distance(data1.dmodulus, data1.dmodulus_error)
    # order the data by redshift
    data1.sort(order='redshift')
    # Outliers: distance to the LFLRW curve bigger than +-0.5
    theoryErr = 0.5
    # Interpolate the theory curve to get the same x points as the data
    interpy = theoryYatDataX(theoryX, theoryY, data1.redshift)
    datain, dataout = findOutlierSet(data1, interpy, theoryErr)
    return datain, dataout
Using those functions I can finally obtain:
Thank you all for your help.
Just look at the difference between the red curve and the points; if it is bigger than the difference between the red curve and the dashed red curve, remove the point:
diff = np.abs(points - red_curve)
keep = diff < np.abs(dashed_curve - red_curve)
filtered = points[keep]
But please take the comment from NickLH seriously: your data looks pretty good without any filtering, and your "outliers" all have a very big error and won't affect the fit much.
Either you could use numpy.where() to identify which x-y pairs meet your plotting criteria, or perhaps enumerate to do pretty much the same thing. Example:
x_list = [1, 2, 3, 4, 5, 6]
y_list = ['f', 'o', 'o', 'b', 'a', 'r']
result = [y_list[i] for i, x in enumerate(x_list) if 2 <= x < 5]
print(result)
I'm sure you could change the conditions so that '2' and '5' in the above example are replaced by functions of your curves.
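For instance, a minimal sketch with hypothetical lower/upper curve functions standing in for the dashed lines:
import numpy as np

def lower(x):  # hypothetical lower dashed curve
    return 1.5 * x**2 - 0.5

def upper(x):  # hypothetical upper dashed curve
    return 1.5 * x**2 + 0.5

x = np.array([0.1, 0.2, 0.3, 0.4])
y = np.array([0.0, 0.1, 0.6, 0.2])
keep = (y >= lower(x)) & (y <= upper(x))  # points between the two curves
x_in, y_in = x[keep], y[keep]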
