I am trying to smooth the scatter plot shown below using SciPy's B-spline representation of a 1-D curve. The data is available here.
The code I used is:
import matplotlib.pyplot as plt
import numpy as np
from scipy import interpolate
data = np.genfromtxt("spline_data.dat", delimiter = '\t')
x = 1000 / data[:, 0]
y = data[:, 1]
x_int = np.linspace(x[0], x[-1], 100)
tck = interpolate.splrep(x, y, k = 3, s = 1)
y_int = interpolate.splev(x_int, tck, der = 0)
fig = plt.figure(figsize = (5.15,5.15))
plt.subplot(111)
plt.plot(x, y, marker = 'o', linestyle='')
plt.plot(x_int, y_int, linestyle = '-', linewidth = 0.75, color='k')
plt.xlabel("X")
plt.ylabel("Y")
plt.show()
I tried changing the order of the spline and the smoothing condition, but I am not getting a smooth plot.
B-spline interpolation should be able to smooth the data, so what is wrong? Is there an alternative method to smooth this data?
Use a larger smoothing parameter. For example, s=1000:
tck = interpolate.splrep(x, y, k=3, s=1000)
This produces a much smoother fit:
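As a rule of thumb from the splrep documentation: if the weights represent the inverse standard deviation of the noise in y, a good s usually lies in the range (m - sqrt(2m), m + sqrt(2m)), where m is the number of data points. A minimal sketch, continuing from the code above and assuming unit weights:
m = len(x)
# heuristic upper end of the recommended range for s
s_hi = m + np.sqrt(2 * m)
tck = interpolate.splrep(x, y, k=3, s=s_hi)
y_int = interpolate.splev(x_int, tck, der=0)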
Assuming we are dealing with noisy observations of some phenomenon, Gaussian Process Regression might also be a good choice. Knowledge about the variance of the noise can be encoded in the nugget parameter, and the other hyperparameters can be found by maximum-likelihood estimation. Here's a simple example of how it could be applied:
import matplotlib.pyplot as plt
import numpy as np
from sklearn.gaussian_process import GaussianProcess  # scikit-learn < 0.20 API
data = np.genfromtxt("spline_data.dat", delimiter='\t')
x = 1000 / data[:, 0]
y = data[:, 1]
x_pred = np.linspace(x[0], x[-1], 100)
# <GP regression>
gp = GaussianProcess(theta0=1, thetaL=0.00001, thetaU=1000, nugget=0.000001)
gp.fit(np.atleast_2d(x).T, y)
y_pred = gp.predict(np.atleast_2d(x_pred).T)
# </GP regression>
fig = plt.figure(figsize=(5.15, 5.15))
plt.subplot(111)
plt.plot(x, y, marker='o', linestyle='')
plt.plot(x_pred, y_pred, linestyle='-', linewidth=0.75, color='k')
plt.xlabel("X")
plt.ylabel("Y")
plt.show()
which will give:
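Note that GaussianProcess was deprecated in scikit-learn 0.18 and removed in 0.20. A rough modern equivalent is sketched below with GaussianProcessRegressor, where a WhiteKernel plays the role of the old nugget:
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
# RBF models the smooth signal; WhiteKernel absorbs the observation noise.
# Hyperparameters are tuned by maximum likelihood inside fit().
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-6)
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5)
gp.fit(np.atleast_2d(x).T, y)
y_pred = gp.predict(np.atleast_2d(x_pred).T)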
In your specific case, you could also try changing the last argument of the np.linspace function to a smaller number, np.linspace(x[0], x[-1], 10), for example.
Demo code:
import matplotlib.pyplot as plt
import numpy as np
from scipy import interpolate
data = np.random.rand(100,2)
tempx = list(data[:, 0])
tempy = list(data[:, 1])
# scale each random value and add its list index to spread the points out
x = np.array(sorted([point*10 + tempx.index(point) for point in tempx]))
y = np.array([point*10 + tempy.index(point) for point in tempy])
x_int = np.linspace(x[0], x[-1], 10)
tck = interpolate.splrep(x, y, k = 3, s = 1)
y_int = interpolate.splev(x_int, tck, der = 0)
fig = plt.figure(figsize = (5.15,5.15))
plt.subplot(111)
plt.plot(x, y, marker = 'o', linestyle='')
plt.plot(x_int, y_int, linestyle = '-', linewidth = 0.75, color='k')
plt.xlabel("X")
plt.ylabel("Y")
plt.show()
You could also smooth the data with a rolling mean in pandas. Note that pd.rolling_mean was removed in pandas 0.18; use the .rolling() method instead:
import pandas as pd
data = [...(your data here)...]
smoothedData = pd.Series(data).rolling(5).mean()
The argument of rolling() is the moving-average (rolling-mean) window size. Passing center=True gives a similar effect to the old trick of reversing the data, taking a rolling mean in that direction, and combining it with the forward rolling mean. Another option is exponentially weighted moving averages:
Pandas: Exponential smoothing function for column
or using bandpass filters:
fft bandpass filter in python
http://docs.scipy.org/doc/scipy/reference/signal.html
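For illustration, here is a minimal sketch of those last two options with current APIs, assuming y is a 1-D array of evenly spaced samples:
import numpy as np
import pandas as pd
from scipy import signal
y = np.random.randn(500).cumsum()  # placeholder signal
# exponentially weighted moving average (pandas)
y_ewm = pd.Series(y).ewm(span=10).mean()
# zero-phase Butterworth low-pass filter (scipy.signal);
# the cutoff 0.1 is a fraction of the Nyquist frequency
b, a = signal.butter(4, 0.1)
y_filt = signal.filtfilt(b, a, y)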
I wrote some test code bringing together things I saw on Stack Overflow in different topics, and tried to assemble them into what I need: a filled curve with a gradient.
After validating this test code, I will make a subplot (4 plots for 4 weeks) with the same min/max for all plots (it's power-consumption data).
My code:
from matplotlib import pyplot as plt
import numpy as np
# random x
x = range(100)
# smooth random y
y = 0
result = []
for _ in x:
    result.append(y)
    y += np.random.normal(loc=0, scale=1)#, size=len(x))
y = result
y = list(map(abs, y))
# creation of z for contour
z1 = min(y)
z3 = max(y)/(len(x)+1)
z2 = max(y)-z3
z = [[z] * len(x) for z in np.arange(z1,z2,z3)]
num_bars = len(x) # more bars = smoother gradient
# plt.contourf(x, y, z, num_bars, cmap='greys')
plt.contourf(x, y, z, num_bars, cmap='cool', levels=101)
background_color = 'w'
plt.fill_between(
    x,
    y,
    y2=max(y),
    color=background_color
)
But every time I run the code, the result displays a different gradient scale, which is neither smooth nor even straight.
And sometimes the code raises an error: TypeError: Length of y (100) must match number of rows in z (101)
I've been on this for too long, going in circles, and can't figure out where I'm wrong...
I finally found something particularly cool, showing how to:
fill the positive and negative parts of the curve with gradients in different colors (thanks to JohanC in this topic)
use an x axis with datetimes (thanks to Ffisegydd in this topic)
Here is the code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import matplotlib.dates as mdates
np.random.seed(2022)
st_date = '2022-11-01 00:00:00'
st_date = pd.to_datetime(st_date)
en_date = st_date + pd.DateOffset(days=7)
x = pd.date_range(start=st_date,end=en_date,freq='30min')
x = mdates.date2num(x)
y = np.random.normal(0.01, 1, len(x)).cumsum()
fig, ax = plt.subplots(figsize=(18, 5))
ax.plot(x, y, color='grey')
########################
# positives fill
#######################
grad1 = ax.imshow(
    np.linspace(0, 1, 256).reshape(-1, 1),
    cmap='Blues',
    vmin=-0.5,
    aspect='auto',
    extent=[x.min(), x.max(), 0, y.max()],
    # extent=[x[0], x[1], 0, y.max()],
    origin='lower'
)
poly_pos = ax.fill_between(x, y.min(), y, alpha=0.1)
grad1.set_clip_path(
    poly_pos.get_paths()[0],
    transform=ax.transData
)
poly_pos.remove()
########################
# negatives fill
#######################
grad2 = ax.imshow(
    np.linspace(0, 1, 256).reshape(-1, 1),
    cmap='Reds',
    vmin=-0.5,
    aspect='auto',
    extent=[x.min(), x.max(), y.min(), 0],
    origin='upper'
)
poly_neg = ax.fill_between(x, y, y.max(), alpha=0.1)
grad2.set_clip_path(
    poly_neg.get_paths()[0],
    transform=ax.transData
)
poly_neg.remove()
########################
# decorations and formatting plot
########################
ax.xaxis_date()
date_format = mdates.DateFormatter('%d-%b %H:%M')
ax.xaxis.set_major_formatter(date_format)
fig.autofmt_xdate()
ax.grid(True)
I have a table of measured values for a quantity that depends on two parameters.
So say I have a function fuelConsumption(speed, temperature), for which data on a mesh are known.
Now I want to interpolate the expected fuelConsumption for a lot of measured data points (speed, temperature) from a pandas.DataFrame (and return a vector with the values for each data point).
I am currently using SciPy's interpolate.interp2d for interpolation, but when passing the parameters as two vectors [s1,s2] and [t1,t2] (only two ordered values for simplicity) it will construct a mesh and return:
[[f(s1,t1), f(s2,t1)], [f(s1,t2), f(s2,t2)]]
The result I am hoping to get is:
[f(s1,t1), f(s2, t2)]
How can I interpolate to get the output I want?
From scipy v0.14 onwards you can use scipy.interpolate.RectBivariateSpline with grid=False:
import numpy as np
from scipy.interpolate import RectBivariateSpline
from matplotlib import pyplot as plt
x, y = np.ogrid[-1:1:10j,-1:1:10j]
z = (x + y)*np.exp(-6.0 * (x * x + y * y))
spl = RectBivariateSpline(x, y, z)
xi = np.linspace(-1, 1, 50)
yi = np.linspace(-1, 1, 50)
zi = spl(xi, yi, grid=False)
fig, ax = plt.subplots(1, 1)
ax.imshow(z, cmap=plt.cm.coolwarm, origin='lower', extent=(-1, 1, -1, 1))
ax.scatter(xi, yi, s=60, c=zi, cmap=plt.cm.coolwarm)
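On recent SciPy versions, where interp2d has been deprecated and removed, scipy.interpolate.RegularGridInterpolator gives the same pointwise evaluation. A minimal sketch, with made-up speed/temperature axes standing in for your table:
import numpy as np
from scipy.interpolate import RegularGridInterpolator
speed = np.linspace(0, 200, 10)         # hypothetical grid axes
temperature = np.linspace(-20, 40, 10)
table = np.random.rand(10, 10)          # placeholder for the fuelConsumption mesh
interp = RegularGridInterpolator((speed, temperature), table)
points = np.array([[50.0, 5.0], [120.0, 25.0]])  # rows are (s_i, t_i) pairs
values = interp(points)  # returns [f(s1, t1), f(s2, t2)]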
I'd like to make a scatter plot where each point is colored by the spatial density of nearby points.
I've come across a very similar question, which shows an example of this using R:
R Scatter Plot: symbol color represents number of overlapping points
What's the best way to accomplish something similar in python using matplotlib?
In addition to hist2d or hexbin as #askewchan suggested, you can use the same method that the accepted answer in the question you linked to uses.
If you want to do that:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde
# Generate fake data
x = np.random.normal(size=1000)
y = x * 3 + np.random.normal(size=1000)
# Calculate the point density
xy = np.vstack([x,y])
z = gaussian_kde(xy)(xy)
fig, ax = plt.subplots()
ax.scatter(x, y, c=z, s=100)
plt.show()
If you'd like the points to be plotted in order of density so that the densest points are always on top (similar to the linked example), just sort them by the z-values. I'm also going to use a smaller marker size here as it looks a bit better:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde
# Generate fake data
x = np.random.normal(size=1000)
y = x * 3 + np.random.normal(size=1000)
# Calculate the point density
xy = np.vstack([x,y])
z = gaussian_kde(xy)(xy)
# Sort the points by density, so that the densest points are plotted last
idx = z.argsort()
x, y, z = x[idx], y[idx], z[idx]
fig, ax = plt.subplots()
ax.scatter(x, y, c=z, s=50)
plt.show()
Plotting >100k data points?
The accepted answer, using gaussian_kde(), will take a lot of time. On my machine, 100k rows took about 11 minutes. Here I will add two alternative methods (mpl-scatter-density and datashader) and compare the given answers on the same dataset.
In the following, I used a test data set of 100k rows:
import matplotlib.pyplot as plt
import numpy as np
# Fake data for testing
x = np.random.normal(size=100000)
y = x * 3 + np.random.normal(size=100000)
Output & computation time comparison
Below is a comparison of different methods.
1: mpl-scatter-density
Installation
pip install mpl-scatter-density
Example code
import mpl_scatter_density # adds projection='scatter_density'
from matplotlib.colors import LinearSegmentedColormap
# "Viridis-like" colormap with white background
white_viridis = LinearSegmentedColormap.from_list('white_viridis', [
    (0, '#ffffff'),
    (1e-20, '#440053'),
    (0.2, '#404388'),
    (0.4, '#2a788e'),
    (0.6, '#21a784'),
    (0.8, '#78d151'),
    (1, '#fde624'),
], N=256)
def using_mpl_scatter_density(fig, x, y):
    ax = fig.add_subplot(1, 1, 1, projection='scatter_density')
    density = ax.scatter_density(x, y, cmap=white_viridis)
    fig.colorbar(density, label='Number of points per pixel')
fig = plt.figure()
using_mpl_scatter_density(fig, x, y)
plt.show()
Drawing this took 0.05 seconds:
And the zoom-in looks quite nice:
2: datashader
Datashader is an interesting project. It added support for matplotlib in version 0.12.
Installation
pip install datashader
Code (source & parameter listing for dsshow):
import datashader as ds
from datashader.mpl_ext import dsshow
import pandas as pd
def using_datashader(ax, x, y):
    df = pd.DataFrame(dict(x=x, y=y))
    dsartist = dsshow(
        df,
        ds.Point("x", "y"),
        ds.count(),
        vmin=0,
        vmax=35,
        norm="linear",
        aspect="auto",
        ax=ax,
    )
    plt.colorbar(dsartist)
fig, ax = plt.subplots()
using_datashader(ax, x, y)
plt.show()
It took 0.83 s to draw this:
It is also possible to colorize by a third variable; the third parameter of dsshow controls the coloring. See more examples here and the source for dsshow here.
3: scatter_with_gaussian_kde
def scatter_with_gaussian_kde(ax, x, y):
    # https://stackoverflow.com/a/20107592/3015186
    # Answer by Joe Kington
    xy = np.vstack([x, y])
    z = gaussian_kde(xy)(xy)
    ax.scatter(x, y, c=z, s=100, edgecolors='none')
It took 11 minutes to draw this:
4: using_hist2d
import matplotlib.pyplot as plt
def using_hist2d(ax, x, y, bins=(50, 50)):
    # https://stackoverflow.com/a/20105673/3015186
    # Answer by askewchan
    ax.hist2d(x, y, bins, cmap=plt.cm.jet)
It took 0.021 s to draw this with bins=(50,50):
It took 0.173 s to draw this with bins=(1000,1000):
Cons: the zoomed-in data does not look as good as with mpl-scatter-density or datashader. Also, you have to determine the number of bins yourself.
5: density_scatter
The code is as in the answer by Guillaume.
It took 0.073 s to draw this with bins=(50,50):
It took 0.368 s to draw this with bins=(1000,1000):
Also, if the number of points makes the KDE calculation too slow, the color can be interpolated from np.histogram2d [update in response to comments: if you wish to show the colorbar, use plt.scatter() instead of ax.scatter(), followed by plt.colorbar()]:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.colors import Normalize
from scipy.interpolate import interpn
def density_scatter(x, y, ax=None, sort=True, bins=20, **kwargs):
    """
    Scatter plot colored by 2d histogram
    """
    if ax is None:
        fig, ax = plt.subplots()
    data, x_e, y_e = np.histogram2d(x, y, bins=bins, density=True)
    z = interpn((0.5*(x_e[1:] + x_e[:-1]), 0.5*(y_e[1:] + y_e[:-1])), data,
                np.vstack([x, y]).T, method="splinef2d", bounds_error=False)
    # To be sure to plot all data
    z[np.where(np.isnan(z))] = 0.0
    # Sort the points by density, so that the densest points are plotted last
    if sort:
        idx = z.argsort()
        x, y, z = x[idx], y[idx], z[idx]
    ax.scatter(x, y, c=z, **kwargs)
    norm = Normalize(vmin=np.min(z), vmax=np.max(z))
    # use the axes' figure so the colorbar also works when an ax is passed in
    cbar = ax.figure.colorbar(cm.ScalarMappable(norm=norm), ax=ax)
    cbar.ax.set_ylabel('Density')
    return ax

if __name__ == "__main__":
    x = np.random.normal(size=100000)
    y = x * 3 + np.random.normal(size=100000)
    density_scatter(x, y, bins=[30, 30])
You could make a histogram:
import numpy as np
import matplotlib.pyplot as plt
# fake data:
a = np.random.normal(size=1000)
b = a*3 + np.random.normal(size=1000)
plt.hist2d(a, b, (50, 50), cmap=plt.cm.jet)
plt.colorbar()
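hexbin, mentioned above as an alternative, works the same way with hexagonal bins:
plt.hexbin(a, b, gridsize=50, cmap=plt.cm.jet)
plt.colorbar()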
I have a question that I have been fighting with for days now.
How do I calculate the (95%) confidence band of a fit?
Fitting curves to data is the everyday job of every physicist, so I think this should be implemented somewhere, but I can't find an implementation, nor do I know how to do it mathematically.
The only thing I found is seaborn, which does a nice job for linear least squares.
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
import pandas as pd
x = np.linspace(0,10)
y = 3*np.random.randn(50) + x
data = {'x':x, 'y':y}
frame = pd.DataFrame(data, columns=['x', 'y'])
sns.lmplot(x='x', y='y', data=frame, ci=95)
plt.savefig("confidence_band.pdf")
But this is just linear least squares. When I want to fit e.g. a saturation curve like y = a(1 - e^(b*x)), I'm screwed.
Sure, I can calculate the t-distribution from the standard error of a least-squares method like scipy.optimize.curve_fit, but that is not what I'm searching for.
Thanks for any help!!
You can achieve this easily using the statsmodels module.
Also see this example and this answer.
Here is an answer for your question:
import numpy as np
from matplotlib import pyplot as plt
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import summary_table
x = np.linspace(0,10)
y = 3*np.random.randn(50) + x
X = sm.add_constant(x)
res = sm.OLS(y, X).fit()
st, data, ss2 = summary_table(res, alpha=0.05)
fittedvalues = data[:,2]
predict_mean_se = data[:,3]
predict_mean_ci_low, predict_mean_ci_upp = data[:,4:6].T
predict_ci_low, predict_ci_upp = data[:,6:8].T
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="data")
ax.plot(x, fittedvalues, 'r-', label='OLS')
ax.plot(x, predict_ci_low, 'b--')
ax.plot(x, predict_ci_upp, 'b--')
ax.plot(x, predict_mean_ci_low, 'g--')
ax.plot(x, predict_mean_ci_upp, 'g--')
ax.legend(loc='best');
plt.show()
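On newer statsmodels versions, the same intervals are available through get_prediction. A short sketch, reusing the fitted res from above:
pred = res.get_prediction(X)
frame = pred.summary_frame(alpha=0.05)  # mean and observation CIs as columns
mean_ci_low = frame['mean_ci_lower']
mean_ci_upp = frame['mean_ci_upper']
obs_ci_low = frame['obs_ci_lower']
obs_ci_upp = frame['obs_ci_upper']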
kmpfit's confidence_band() calculates the confidence band for non-linear least squares. Here for your saturation curve:
from pylab import *
from kapteyn import kmpfit
def model(p, x):
    a, b = p
    return a*(1-np.exp(b*x))
x = np.linspace(0, 10, 100)
y = .1*np.random.randn(x.size) + model([1, -.4], x)
fit = kmpfit.simplefit(model, [.1, -.1], x, y)
a, b = fit.params
dfdp = [1-np.exp(b*x), -a*x*np.exp(b*x)]
yhat, upper, lower = fit.confidence_band(x, dfdp, 0.95, model)
scatter(x, y, marker='.', color='#0000ba')
for i, l in enumerate((upper, lower, yhat)):
    plot(x, l, c='g' if i == 2 else 'r', lw=2)
savefig('kmpfit confidence bands.png', bbox_inches='tight')
The dfdp are the partial derivatives ∂f/∂p of the model f = a*(1 - e^(b*x)) with respect to each parameter p (i.e., a and b); see my answer to a similar question for background links. And here is the output:
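If you would rather stay within SciPy, a similar band can be approximated from scipy.optimize.curve_fit via the delta method, using the parameter covariance it returns. A sketch, assuming independent Gaussian noise:
import numpy as np
from scipy import optimize, stats

def model(x, a, b):
    return a * (1 - np.exp(b * x))

x = np.linspace(0, 10, 100)
y = .1 * np.random.randn(x.size) + model(x, 1, -.4)
popt, pcov = optimize.curve_fit(model, x, y, p0=[.1, -.1])
a, b = popt
# Jacobian of the model w.r.t. (a, b), evaluated at the fit
J = np.column_stack([1 - np.exp(b * x), -a * x * np.exp(b * x)])
var_f = np.sum((J @ pcov) * J, axis=1)         # diag(J pcov J^T)
tval = stats.t.ppf(0.975, x.size - len(popt))  # 95% two-sided
yhat = model(x, *popt)
upper = yhat + tval * np.sqrt(var_f)
lower = yhat - tval * np.sqrt(var_f)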
I've looked at the documentation, but I can't seem to figure out if this is possible -
I have a dataset with x and y values and discrete z values. Multiple (x, y) pairs share the same z value. What I want to do is: when I mouse over one point with a particular z value, the alpha of all the points with the same z value goes to 1. That is, if all the alpha values are initially 0.5, I'd like only the points with the same z value to go to 1.
Here's a minimal working example to illustrate what I'm talking about:
#! /usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
x = np.random.randn(100)
y = np.random.randn(100)
z = np.arange(0, 10, 1)
z = np.repeat(z, 10)
im = plt.scatter(x, y, c=z, alpha = 0.5)
plt.colorbar(im)
plt.show()
You can probably fake what you want to achieve using a second plot:
import numpy as np
import matplotlib.pyplot as plt
# structured array: an integer label "Z" and a 2-D point "P" per entry
Z = np.zeros(1000, dtype=[("Z", int), ("P", float, 2)])
Z["P"] = np.random.uniform(0.0, 1.0, (len(Z), 2))
Z["Z"] = np.random.randint(0, 50, len(Z))
def on_pick(event):
    z = Z[event.ind[0]]['Z']
    P = Z[np.where(Z["Z"] == z)]["P"]
    selection_plot.set_data(P[:, 0], P[:, 1])
    plt.draw()
fig = plt.figure(figsize=(10,10), facecolor='white')
fig.canvas.mpl_connect('pick_event', on_pick)
ax = plt.subplot(111, aspect=1)
ax.plot(Z['P'][:,0], Z['P'][:,1], 'o', color='k', alpha=0.1, picker=5)
selection_plot, = ax.plot([],[], 'o', color='black', alpha=1.0, zorder=10)
plt.show()
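A different sketch, assuming matplotlib >= 3.4 (where collections accept per-point alpha arrays): instead of maintaining a second overlay plot, you can rewrite the alpha values of the original scatter itself on each pick:
import numpy as np
import matplotlib.pyplot as plt

x = np.random.randn(100)
y = np.random.randn(100)
z = np.repeat(np.arange(10), 10)

fig, ax = plt.subplots()
sc = ax.scatter(x, y, c=z, alpha=0.5, picker=True, pickradius=5)

def on_pick(event):
    if event.artist is not sc:
        return
    picked_z = z[event.ind[0]]
    # highlight every point sharing the picked z value
    sc.set_alpha(np.where(z == picked_z, 1.0, 0.5))
    fig.canvas.draw_idle()

fig.canvas.mpl_connect('pick_event', on_pick)
plt.show()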