I am trying to create a 3D surface plot like the one in this example:
https://plotly.com/python/3d-surface-plots/
But the problem is that I only have limited data: I have the peak locations and the peak heights, but the rest of the data is missing. In the example, the z-data needs 25 × 25 values (625 data points) to generate a valid surface plot.
My data looks something like this:
So my question is: is it possible to use some polynomial function, with the peak location values as constraints, to generate the Z-data from the information I have?
Open to any discussion. Any form of suggestion is appreciated.
Though I don't like this form of interpolation, which is pretty artificial, you can use the following trick:
F(P) = (Σ Fk / d(P, Pk)) / (Σ 1 / d(P, Pk))
P is the point where you interpolate and Pk are the known peak positions. d is the Euclidean distance. (This gives sharp peaks; the squared distance gives smooth ones.)
Unfortunately, far from the peaks this formula tends to the average of the Fk, giving a horizontal surface that sits above some of the Fk, so those appear as downward peaks. You can work around this by adding fake peaks of negative height around your data set to lower the average.
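A minimal sketch of this idea, assuming the peaks are given as arrays of x/y positions and heights (the peak values and grid extent below are made up) and plotting with Plotly as in the linked example:

import numpy as np
import plotly.graph_objects as go

# Hypothetical peak data: (x, y) locations and heights of the known peaks
px = np.array([2.0, 7.0, 5.0])
py = np.array([3.0, 8.0, 1.0])
pz = np.array([1.5, 3.0, 2.2])

# Regular 25 x 25 grid on which to generate the missing Z-data
x = np.linspace(0.0, 10.0, 25)
y = np.linspace(0.0, 10.0, 25)
X, Y = np.meshgrid(x, y)

# F(P) = (sum_k Fk / d(P, Pk)) / (sum_k 1 / d(P, Pk))
num = np.zeros_like(X)
den = np.zeros_like(X)
for xk, yk, zk in zip(px, py, pz):
    d = np.hypot(X - xk, Y - yk) + 1e-9   # small epsilon avoids division by zero at a peak
    num += zk / d                         # use d**2 instead of d for smoother peaks
    den += 1.0 / d
Z = num / den

fig = go.Figure(data=[go.Surface(x=X, y=Y, z=Z)])
fig.show()

The fake negative peaks mentioned above can simply be appended to px, py and pz.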
I'm having trouble using the scipy interpolation methods to generate a nice smooth curve from the data points given. I've tried the standard 1D interpolation and Rbf interpolation with all of its options (cubic, gaussian, multiquadric, etc.).
In the image provided, the blue line is the original data. I'm looking to first smooth the sharp edges, and then have dynamically editable points from which to recalculate the curve. Each time a single point is edited, a new spline of some sort should be recalculated automatically to transition smoothly between the points.
It kind of works when the points are within a particular range of each other as below.
But if the points end up too far apart, or too close together, I end up with issues like the following.
Key points are:
The curve MUST be flat between the first two points
The curve must NOT go below point 1 or 2 (i.e. derivative can't be negative)
~15 points (not shown) between points 2 and 3 are also editable and the line between is not necessarily linear. Full control over each of these points is a must, as is the curve going through each of them.
I'm happy to break it down into smaller curves that I then join/convolve, but I just need to ensure a >0 gradient.
sample data:
x = [0, 37, 50, 105, 115, 120]
y = [0.00965, 0.00965, 0.047850827205882, 0.35600416666667, 0.38074375, 0.38074375]
As an example, try moving point 2 (x=37) to an extreme value, say 10 (keep y the same). Just ensure that all points from x=0 to x=10 (or any other variation) have identical y values of 0.00965.
Any assistance is greatly appreciated.
UPDATE
Attempted pchip method suggested in comments with the results below:
pchip method, better and worse...
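For reference, the pchip attempt was roughly along these lines (a minimal sketch assuming scipy.interpolate.PchipInterpolator on the sample data above; the plotting details are guesses):

import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import PchipInterpolator

x = [0, 37, 50, 105, 115, 120]
y = [0.00965, 0.00965, 0.047850827205882, 0.35600416666667, 0.38074375, 0.38074375]

# PCHIP is monotonicity-preserving and does not overshoot the data,
# so the flat segment between the first two points stays flat
pchip = PchipInterpolator(x, y)
xs = np.linspace(min(x), max(x), 500)

plt.plot(x, y, 'o', label='control points')
plt.plot(xs, pchip(xs), label='pchip')
plt.legend()
plt.show()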
Solved!
While I'm not sure this is exactly what happens, it is as if the spline tools for creating Bezier curves treat the control points as points the calculated curve must pass through, which is not what I want. I couldn't figure out how to turn this behaviour off, so I found the formula for a cubic Bezier curve (cubic is what I need) and calculated my own points. I then only had to do a little adjustment to make the points fit the required integer x values; in my case, near enough is good enough. Otherwise, I would have needed to interpolate linearly between the two points on either side of the desired x value to determine the exact value.
For those interested, cubic needs 4 points - start, end, and 2 control points. The rule is:
B(t) = (1-t)^3 * P0 + 3 * (1-t)^2 * t * P1 + 3 * (1-t) * t^2 * P2 + t^3 * P3
Calculate for x and y separately, using a list of values for t between 0 and 1. If you need to match gradients, just make sure that the control points P1 and P2 are only moved along the same gradient as the preceding/following sections.
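A rough sketch of what I did (the control point values here are placeholders, not my real data):

import numpy as np

def cubic_bezier(p0, p1, p2, p3, num=100):
    """Evaluate a cubic Bezier curve; x and y are calculated separately.

    p0/p3 are the start/end points, p1/p2 the control points the curve
    is pulled towards but does not pass through.
    """
    t = np.linspace(0.0, 1.0, num)
    basis = np.array([(1 - t)**3, 3 * (1 - t)**2 * t, 3 * (1 - t) * t**2, t**3])
    return basis.T @ np.array([p0, p1, p2, p3])   # shape (num, 2): columns are x and y

# Placeholder start/end/control points for the section between points 2 and 3
curve = cubic_bezier((37, 0.00965), (60, 0.00965), (85, 0.356), (105, 0.35600416666667))
x_curve, y_curve = curve[:, 0], curve[:, 1]

# Snap back to the required integer x values by linear interpolation
y_at_int_x = np.interp(np.arange(37, 106), x_curve, y_curve)

Keeping P1 at the same y value as P0 makes the curve leave the start point horizontally, which is what keeps the flat section flat and the gradient non-negative there.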
Perfect result
I am trying to analyze the shift and deformation of a block of concrete under a pressure test.
I have two measurements: one of a height (vector h) and one of the diameter (vector d) at that specific height. I have these measurements for different pressures deforming the block. I will denote the height and diameter for the different pressures with the index i (h_1 to h_n, and d_i respectively). len(h_i) is not necessarily equal to len(h_j).
I want to find a way to fit (deform and shift) the graph
G_1 = (h_1, d_1) to the graph
G_2 = (h_2, d_2)
I thought of minimizing the squared error, as is done in functional fitting, but I have some problems:
I do not know how to introduce the shift in the height/ x-direction
i.e. f(h_(i+1))=f(h_i - h_shift)
I do not know how to introduce the compression in the height/ x-direction
i.e. f(h_(i+1))=f(a*h_i )
I am not sure how I can introduce the "normal" fitting, i.e. the deformation of the diameter depending on the height (say I want to add a deformation of the form d_(i+1) = d_i + a*h² + b*h + c)
I want to stress again that I am not trying to manipulate a function to fit data points, but trying to manipulate one set of points to fit a different set of points.
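To make this concrete, here is a rough sketch of the kind of transform I have in mind (the parameter names h_shift, a and the quadratic coefficients are placeholders, and the data below is synthetic); my question is essentially whether this is a sensible way to set it up:

import numpy as np
from scipy.optimize import minimize

def transform(params, h, d):
    """Shift/compress the heights and deform the diameters of a graph (h, d)."""
    h_shift, a, c2, c1, c0 = params
    h_new = a * (h - h_shift)              # shift + compression in the h-direction
    d_new = d + c2 * h**2 + c1 * h + c0    # quadratic deformation of the diameter
    return h_new, d_new

def cost(params, h1, d1, h2, d2):
    """Squared error between the transformed G_1 and G_2, compared at the heights of G_2."""
    h_t, d_t = transform(params, h1, d1)
    d_on_h2 = np.interp(h2, h_t, d_t)      # h_t must be increasing for np.interp
    return np.sum((d_on_h2 - d2)**2)

# Synthetic stand-ins for the measured vectors: G_2 is G_1 shifted by 1 and compressed by 0.9
h1 = np.linspace(0.0, 10.0, 50)
d1 = 5.0 + 0.3 * h1 + 0.05 * np.sin(2.0 * h1)
h2 = 0.9 * (h1 - 1.0)
d2 = d1.copy()

res = minimize(cost, x0=[0.0, 1.0, 0.0, 0.0, 0.0], args=(h1, d1, h2, d2))
print(res.x)   # should recover roughly h_shift = 1, a = 0.9 for this synthetic example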
UPDATE: I have stored illustrations here on Google Drive
Note that there are two different kinds of shift: one small one (sample_fitting_1.png) and one large one (sample_fitting_2.png).
I am trying not to lose the fine structure of the data, as I would by fitting a curve through it.
The goal is to shift and deform one of the graphs onto the other by manipulating it as described above, in both the x- and y-directions.
Thanks in advance
Stephan
I am trying to look at astronomical spectra using Python, and I'm using numpy.correlate to try and find a radial velocity shift. I'm comparing each spectrum I have to one template spectrum. The problem that I'm encountering is that, no matter which spectra I use, numpy.correlate states that the maximal value of the correlation function occurs with a shift of zero pixels, i.e. the spectra already line up, which is very clearly not true. Here is some of the relevant code:
# cross-correlate the template spectrum against the observed spectrum
corr = np.correlate(temp_data, imag_data, mode='same')
# plot the correlation alongside the (scaled) template and observed spectra
ax1.plot(delta_data, corr, c='g')
ax1.plot(delta_data, 100*temp_data, c='b')
ax1.plot(delta_data, 100*imag_data, c='r')
The output of this code is shown here:
What I Have
Note that the cross-correlation function peaks at an offset of zero pixels despite the template (blue) and observed (red) spectra clearly showing an offset. What I would expect to see would be something a bit like (albeit not exactly like; this is merely the closest representation I could produce):
What I Want
Here I have introduced an artificial offset of 50 pixels in the template data, and they more or less line up now. What I would like, for a case like this, is for a peak to appear at an offset of 50 pixels rather than at zero (I don't care if the spectra at the bottom appear lined up; that is merely for visual representation). However, despite several hours of work and research online, I can't find anyone who even describes this problem, let alone a solution. I've attempted to use SciPy's correlate and MatLib's xcorr, and both show this same thing (although I'm led to believe that they are essentially the same function).
Why is the cross-correlation not acting the way I expect, and how do I get it to act in a useful way?
The issue you're experiencing is probably that your spectra are not zero-centered; their RMS value looks to be about 100 in whichever units you're plotting, instead of 0. The reason this is an issue is that numpy.correlate works by "sliding" imag_data over temp_data to get their dot product at each possible offset between the two series. (See the Wikipedia article on cross-correlation to understand the operation itself.) When using mode='same' to produce an output that is the same length as your first input (temp_data), NumPy has to "pad" a bunch of dummy values, zeroes, onto the ends of imag_data in order to be able to calculate the dot products of all the shifted versions of imag_data. At any non-zero offset between the spectra, some of the values in temp_data are being multiplied by those dummy zero-padding values instead of the values in imag_data. If the values in the spectra were centered around zero (RMS = 0), then this zero-padding would not affect the dot product, but because these spectra have RMS values around 100 units, the dot product (our correlation) is largest when we lay the two spectra on top of one another with no offset.
Notice that your cross-correlation result looks like a triangular pulse, which is what you might expect from the cross-correlation of two square pulses (cf. the convolution of a rectangular pulse with itself). That's because your spectra, once padded, look like a step function from zero up to a pulse of slightly noisy values around 100. You can try correlating with mode='full' to see the entire response of the two spectra you're correlating, or notice that with mode='valid' you should only get one value in return, since your two spectra are exactly the same length, so there is only one offset (zero!) where you can entirely line them up.
To sidestep this issue, you can either subtract the RMS value from the spectra so that they are zero-centered, or manually pad the beginning and end of imag_data with (len(temp_data)/2 - 1) dummy values equal to np.sqrt(np.mean(imag_data**2)).
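As a concrete sketch of the first option (the spectra here are synthetic, with an absorption line shifted by 50 pixels):

import numpy as np

np.random.seed(0)

# Synthetic spectra: continuum near 100 with an absorption line shifted by 50 pixels
pix = np.arange(1000)
temp_data = 100.0 - 30.0 * np.exp(-(pix - 400)**2 / 50.0) + np.random.randn(1000)
imag_data = 100.0 - 30.0 * np.exp(-(pix - 450)**2 / 50.0) + np.random.randn(1000)

# Zero-centre each spectrum, then correlate over all possible lags
temp_c = temp_data - temp_data.mean()
imag_c = imag_data - imag_data.mean()
corr = np.correlate(temp_c, imag_c, mode='full')

# Lag k means the template feature sits k pixels to the right of the observed one
lags = np.arange(-len(imag_c) + 1, len(temp_c))
print("best-fit shift:", lags[np.argmax(corr)], "pixels")   # about -50 here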
Edit:
In response to your questions in the comments, I thought I'd include a graphic to make the point I'm trying to describe a little clearer.
Say we have two vectors of values, not entirely unlike your spectra, each with some large non-zero mean.
import numpy as np

# Generate two noisy, but correlated series
t = np.linspace(0, 250, 250)   # time domain from 0 to 250 steps
# signal_model = narrow_peak + gaussian_noise + constant
f = 10*np.exp(-((t-90)**2)/8) + np.random.randn(250) + 40
g = 10*np.exp(-((t-180)**2)/8) + np.random.randn(250) + 40
f has a spike around t=90, and g has a spike around t=180. So we expect the correlation of g and f to have a spike around a lag of 90 timesteps (in the case of spectra, frequency bins instead of timesteps.)
But in order to get an output that is the same shape as our inputs, as in np.correlate(g,f,mode='same'), we have to "pad" f on either side with half its length in dummy values: np.correlate pads with zeroes. If we don't pad f (as in np.correlate(g,f,mode='valid')), we will only get one value in return (the correlation with zero offset), because f and g are the same length, and there is no room to shift one of the signals relative to the other.
When you calculate the correlation of g and f after that padding, you find that it peaks when the non-zero portions of the signals align completely, that is, when there is no offset between the original f and g. This is because the RMS value of the signals is so much higher than zero: the size of the overlap of f and g depends much more strongly on the number of elements overlapping at this high RMS level than on the relatively small fluctuations each function has around it. We can remove this large contribution to the correlation by subtracting the RMS level from each series. In the graph below, the gray line on the right shows the cross-correlation of the two series before zero-centering, and the teal line shows the cross-correlation after. The gray line is, like your first attempt, triangular with the overlap of the two non-zero signals. The teal line better reflects the correlation between the fluctuations of the two signals, as we desired.
import matplotlib.pyplot as plt

xcorr = np.correlate(g, f, 'same')
xcorr_rms = np.correlate(g - 40, f - 40, 'same')

fig, axes = plt.subplots(5, 2, figsize=(18, 18), gridspec_kw={'width_ratios': [5, 2]})
for n, axis in enumerate(axes):
    offset = (0, 75, 125, 215, 250)[n]
    # pad f and g out to length 500 so the shifted overlap can be drawn
    fp = np.pad(f, [offset, 250 - offset], mode='constant', constant_values=0.)
    gp = np.pad(g, [125, 125], mode='constant', constant_values=0.)
    axis[0].plot(fp, color='purple', lw=1.65)
    axis[0].plot(gp, color='orange', lw=1.65)
    axis[0].axvspan(max(125, offset), min(375, offset + 250), color='blue', alpha=0.06)
    axis[0].axvspan(0, max(125, offset), color='brown', alpha=0.03)
    axis[0].axvspan(min(375, offset + 250), 500, color='brown', alpha=0.03)
    if n == 0:
        axis[0].legend(['f', 'g'])
    axis[0].set_title('offset={}'.format(offset - 125))
    axis[1].plot(xcorr / (40 * 40), color='gray')
    axis[1].plot(xcorr_rms, color='teal')
    axis[1].axvline(offset, -100, 350, color='maroon', lw=5, alpha=0.5)
    if n == 0:
        axis[1].legend([r"$g \star f$", r"$g' \star f'$", "offset"], loc='upper left')
plt.show()
I have two dimensional discrete spatial data. I would like to make an approximation of the spatial boundaries of this data so that I can produce a plot with another dataset on top of it.
Ideally, this would be an ordered set of (x,y) points that matplotlib can plot with the plt.Polygon() patch.
My initial attempt is very inelegant: I place a fine grid over the data, and where data is found in a cell, a square matplotlib patch is created of that cell. The resolution of the boundary thus depends on the sampling frequency of the grid. Here is an example, where the grey region are the cells containing data, black where no data exists.
1st attempt http://astro.dur.ac.uk/~dmurphy/data_limits.png
OK, problem solved - why am I still here? Well.... I'd like a more "elegant" solution, or at least one that is faster (i.e. I don't want to get on with "real" work, I'd like to have some fun with this!). The best way I can think of is a ray-tracing approach, e.g.:
1. From xmin to xmax, at y = ymin, check whether the data boundary is crossed in intervals of dx.
2. At y = ymin + dy, repeat step 1.
3. Repeat steps 1-2, but now sampling in y (i.e. scanning at fixed x).
An alternative is defining a centre, and sampling in r-theta space - ie radial spokes in dtheta increments.
Both would produce a set of (x,y) points, but then how do I order/link neighbouring points to create the boundary?
A nearest neighbour approach is not appropriate as, for example (to borrow from Geography), an isthmus (think of Panama connecting N&S America) could then close off and isolate regions. This also might not deal very well with the holes seen in the data, which I would like to represent as a different plt.Polygon.
The solution perhaps comes from solving an area maximisation problem: for a set of points defining the data limits, what is the maximum contiguous area contained within those points? To form the enclosed area, what are the neighbouring points for the nth point? How will the holes be treated in this scheme - is this erring into topology now?
Apologies, much of this is me thinking out loud. I'd be grateful for some hints, suggestions or solutions. I suspect this is an oft-studied problem with many solution techniques, but I'm looking for something simple to code and quick to run... I guess everyone is, really!
~~~~~~~~~~~~~~~~~~~~~~~~~
OK, here's attempt #2 using Mark's idea of convex hulls:
alt text http://astro.dur.ac.uk/~dmurphy/data_limitsv2.png
For this I used qconvex from the qhull package, getting it to return the extreme vertices. For those interested:
cat [data] | qconvex Fx > out
The sampling of the perimeter seems quite low, and although I haven't played much with the settings, I'm not convinced I can improve the fidelity.
I think what you are looking for is the convex hull of the data. That will give a set of points that, when connected, will have all of your points on or inside the resulting boundary.
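A minimal sketch of that, assuming scipy.spatial.ConvexHull is available (the point cloud below is synthetic):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
from scipy.spatial import ConvexHull

# Synthetic 2D point cloud standing in for the spatial data
rng = np.random.default_rng(0)
points = rng.normal(size=(500, 2))

hull = ConvexHull(points)
# hull.vertices are the indices of the hull points in counter-clockwise order,
# so they can be fed straight into a Polygon patch
boundary = points[hull.vertices]

fig, ax = plt.subplots()
ax.scatter(points[:, 0], points[:, 1], s=5, color='grey')
ax.add_patch(Polygon(boundary, closed=True, fill=False, edgecolor='black'))
plt.show()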
I may have missed something, but what's the motivation for not simply determining the maximum and minimum x and y values? Unless you have an enormous amount of data, you could simply iterate through your points and determine the minimum and maximum levels fairly quickly.
This isn't the most efficient example, but if your data set is small this won't be particularly slow:
import random

# Random sample point set standing in for the real data
data = [(random.randint(-100, 100), random.randint(-100, 100)) for i in range(1000)]

# Bounding box of the point set
x_min = min(point[0] for point in data)
x_max = max(point[0] for point in data)
y_min = min(point[1] for point in data)
y_max = max(point[1] for point in data)