I want to plot a graph of this function:
y = 2[1-e^(-x+1)]^2-2
When I plotted a linear function, I used this code:
import matplotlib.pyplot as plt
import numpy as np
x = np.array(...)
y = np.array(...)
z = np.polyfit(x, y, 2)
p = np.poly1d(z)
xp = np.linspace(...)
_ = plt.plot(x, y, '.', xp, p(xp), '-')
plt.ylim(0, 200)
plt.show()
When the function is non-linear, this approach does not work,
because it is hard to find each x, y value by hand.
How can I plot a non-linear function?
I hate to be the one to break this news to you, but polynomials of order greater than one are technically nonlinear too.
When you plot in matplotlib, you're really supplying discrete x and y values at a resolution sufficient to be visually pleasing. In this case, you've chosen xp to determine the points you plot for the parabola. You then call p(xp) to generate an array of y-values at those locations.
There's nothing stopping you from generating the y-values for your formula of interest using plain numpy functions:
y = 2 * (1 - np.exp(1 - xp))**2 - 2
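For completeness, here is a minimal self-contained sketch of that idea; the x-range (0 to 5) and the number of sample points are arbitrary choices for illustration, not something from the question:
import numpy as np
import matplotlib.pyplot as plt

# Dense sampling of x; the range and resolution are arbitrary illustration choices.
xp = np.linspace(0, 5, 200)
# The function from the question: y = 2*[1 - e^(-x+1)]^2 - 2
y = 2 * (1 - np.exp(1 - xp))**2 - 2

plt.plot(xp, y, '-')
plt.show()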
This was very hard to write a title for.
By introducing a bug I discovered an interesting feature of matplotlib and I would like to understand how it works:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
from scipy.spatial import distance
n = 100
X = np.linspace(0, 10, 100).reshape(-1, 1)
D = distance.pdist(X, 'euclidean')
D = distance.squareform(D)
S = np.exp(-D)
mvn = multivariate_normal(np.ones(100), S)
y = [mvn.rvs(1) for i in range(100)]
plt.plot(X, y)
plt.show()
Now this seems to plot one line X, y[:, i] for each i in range(100).
Or does it not?
Because it doesn't work with:
y = [mvn.rvs(1) for i in range(100)]
However this works again as expected:
y = np.array([mvn.rvs(1) for i in range(5)])
plt.plot(X, y.T)
plt.show()
What does matplotlib plot here?
Is it X, y[i, :] instead?
So the correct syntax would be Sample Space, [samples]?
Thank you very much for your insight. I only wanted to plot one of these and stumbled onto this cool discovery by forgetting that rvs(1) is enough.
From the docs
for plt.plot(x, y)
If x and/or y are 2D arrays a separate data set will be drawn for every column. If both x and y are 2D, they must have the same shape. If only one of them is 2D with shape (N, m) the other must have length N and will be used for every data set m.
It's basically a convenience so that you don't have to loop manually over your data.
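As a rough illustration of that rule (the shapes below are made up for the example, not taken from your code):
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)   # 1D, length N = 100
Y = np.random.rand(100, 5)    # 2D, shape (N, m) = (100, 5)

# A single call draws m = 5 separate lines, one per column of Y,
# each plotted against the same 1D x.
plt.plot(x, Y)
plt.show()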
I have a three-column array containing two parameters, which are the x and y axes, and the chi-square of these two parameters. I should make a meshgrid of the two parameters and then plot the 1-sigma, 2-sigma, and 3-sigma contours based on the chi-square values. How can I do this in matplotlib?
Here is my code:
x (the second column in the "1.txt" file) and y (the third column) should be arranged from min to max in order to form the x and y axes; I thought this could be done using meshgrid. z (the first column in the "1.txt" file) is the chi-square.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.mlab import griddata
x = np.genfromtxt('1.txt', usecols=(1))
y = np.genfromtxt('1.txt', usecols=(2))
z = np.genfromtxt('1.txt', usecols=(0))
plt.figure()
X, Y = np.meshgrid(x,y)
Z= griddata(x,y,z,X,Y)
contour=plt.contour(X,Y,Z)
plt.show()
This code fails with the following error:
"RuntimeError: To use interp='nn' (Natural Neighbor interpolation) in griddata, natgrid must be installed. Either install it from http://github.com/matplotlib/natgrid or use interp='linear' instead."
When I use interp='linear', the code runs for a long time without producing any result. Is there any way to solve this problem?
It looks like you are creating a "grid" of all values in your columns. Instead, you would want to create a regular grid of numbers in increasing order, e.g. using 100 values between the minimum and maximum of the data:
X = np.linspace(x.min(), x.max(), 100)
Y = np.linspace(y.min(), y.max(), 100)
Z = griddata(x, y, z, X, Y, interp='linear')
contour=plt.contour(X, Y, Z)
Also see this example.
Note however that matplotlib.mlab.griddata has been removed in newer versions of matplotlib; similar functionality is available via from scipy.interpolate import griddata, as shown in the new example, which also has a newer option with axes.tricontour.
Consider directly plotting a triangulated contour using your original values x, y, z:
plt.tricontour(x, y, z)
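If you do go the scipy route, a minimal sketch (assuming the columns load exactly as in the question) could look like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Load the three columns as in the question: chi-square, x, y.
z = np.genfromtxt('1.txt', usecols=(0,))
x = np.genfromtxt('1.txt', usecols=(1,))
y = np.genfromtxt('1.txt', usecols=(2,))

# Regular 100x100 grid spanning the data range.
X, Y = np.meshgrid(np.linspace(x.min(), x.max(), 100),
                   np.linspace(y.min(), y.max(), 100))

# Interpolate the scattered chi-square values onto the regular grid.
Z = griddata((x, y), z, (X, Y), method='linear')

plt.contour(X, Y, Z)
plt.show()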
I have an equation z=0.12861723162963065X + 0.0014024845304814665Y + 1.0964608113924048
I need to plot a 3D plane for this equation in python using matplotlib. I have already tried following this post -- Given general 3D plane equation, how can I plot this in python matplotlib?
However, I am unable to set the x, y, and z limits for this plane.
Can someone show me the correct way to convert this equation into a 3D plane? Thanks.
You have it easy since your equation gives the value of z for any values of x and y.
So choose any limits you like for x and y. You could even use the ones in the web page you linked to. Just calculate the z values according to your equation. Here is code modified slightly from the linked page:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
x = np.linspace(-1,1,10)
y = np.linspace(-1,1,10)
X,Y = np.meshgrid(x,y)
Z=0.12861723162963065*X + 0.0014024845304814665*Y + 1.0964608113924048
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection='3d') no longer works in newer matplotlib
surf = ax.plot_surface(X, Y, Z)
And here is the result:
That is not the greatest graph, but now you can modify some of the parameters to get just what you want.
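If you also want explicit axis limits (the part the question asks about), you can append something like the following after creating the surface; the particular ranges here are only an example, not taken from the original post:
# Example limits only; pick ranges that suit your data.
ax.set_xlim(-1, 1)
ax.set_ylim(-1, 1)
ax.set_zlim(0.9, 1.3)
plt.show()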
I've been playing with scikit-learn's GMM function. To start with, I've just created a distribution along the line x = y.
from sklearn import mixture
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
line_model = mixture.GMM(n_components = 99)
#Create evenly distributed points between 0 and 1.
xs = np.linspace(0, 1, 100)
ys = np.linspace(0, 1, 100)
#Create a distribution that's centred along y=x
line_model.fit(zip(xs,ys))
plt.plot(xs, ys)
plt.show()
This produces the expected distribution:
Next I fit a GMM to it, and plot the results:
#Create the x,y mesh that will be used to make a 3D plot
x_y_grid = []
for x in xs:
    for y in ys:
        x_y_grid.append([x,y])
#Calculate a probability for each point in the x,y grid.
x_y_z_grid = []
for x,y in x_y_grid:
    z = line_model.score([[x,y]])
    x_y_z_grid.append([x,y,z])
x_y_z_grid = np.array(x_y_z_grid)
#Plot probabilities on the Z axis.
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(x_y_z_grid[:,0], x_y_z_grid[:,1], 2.72**x_y_z_grid[:,2])
plt.show()
The resulting probability distribution has some weird tails along x=0 and x=1 and also extra probability in the corners (x=1, y=1 and x=0,y=0).
Using n_components=5 also shows this behaviour:
Is this something inherent with GMMs, or is there an issue with the implementation, or am I doing something wrong?
Edit: getting scores from the model seems to get rid of this behaviour -- should this be?
I'm training both models on the same dataset (x=y from x=0 to x=1). Simply checking the probability via the score method of the GMM seems to eliminate this boundary effect. Why is this? I've attached the plots and code below.
# Creates a line of 'observations' between (x_small_start, x_small_end)
# and (y_small_start, y_small_end). This is the data both gmms are trained on.
x_small_start = 0
x_small_end = 1
y_small_start = 0
y_small_end = 1
# These are the range of values that will be plotted
x_big_start = -1
x_big_end = 2
y_big_start = -1
y_big_end = 2
shorter_eval_range_gmm = mixture.GMM(n_components = 5)
longer_eval_range_gmm = mixture.GMM(n_components = 5)
x_small = np.linspace(x_small_start, x_small_end, 100)
y_small = np.linspace(y_small_start, y_small_end, 100)
x_big = np.linspace(x_big_start, x_big_end, 100)
y_big = np.linspace(y_big_start, y_big_end, 100)
#Train both gmms on a distribution that's centered along y=x
shorter_eval_range_gmm.fit(zip(x_small,y_small))
longer_eval_range_gmm.fit(zip(x_small,y_small))
#Create the x,y meshes that will be used to make a 3D plot
x_y_evals_grid_big = []
for x in x_big:
    for y in y_big:
        x_y_evals_grid_big.append([x,y])
x_y_evals_grid_small = []
for x in x_small:
    for y in y_small:
        x_y_evals_grid_small.append([x,y])
#Calculate a probability for each point in the x,y grid.
x_y_z_plot_grid_big = []
for x,y in x_y_evals_grid_big:
    z = longer_eval_range_gmm.score([[x, y]])
    x_y_z_plot_grid_big.append([x, y, z])
x_y_z_plot_grid_big = np.array(x_y_z_plot_grid_big)
x_y_z_plot_grid_small = []
for x,y in x_y_evals_grid_small:
    z = shorter_eval_range_gmm.score([[x, y]])
    x_y_z_plot_grid_small.append([x, y, z])
x_y_z_plot_grid_small = np.array(x_y_z_plot_grid_small)
#Plot probabilities on the Z axis.
fig = plt.figure()
fig.suptitle("Probability of different x,y pairs")
ax1 = fig.add_subplot(1, 2, 1, projection='3d')
ax1.plot(x_y_z_plot_grid_big[:,0], x_y_z_plot_grid_big[:,1], np.exp(x_y_z_plot_grid_big[:,2]))
ax1.set_xlabel('X Label')
ax1.set_ylabel('Y Label')
ax1.set_zlabel('Probability')
ax2 = fig.add_subplot(1, 2, 2, projection='3d')
ax2.plot(x_y_z_plot_grid_small[:,0], x_y_z_plot_grid_small[:,1], np.exp(x_y_z_plot_grid_small[:,2]))
ax2.set_xlabel('X Label')
ax2.set_ylabel('Y Label')
ax2.set_zlabel('Probability')
plt.show()
The problem is not with the fit but with the visualisation you're using. A hint is the straight line connecting (0,1,5) to (0,1,0), which is really just a rendering of the segment between two consecutive points (a consequence of the order in which the points are read). Although the two points at its extrema are in your data, no other point on this line actually is.
Personally, I think it is a rather bad idea to use 3d plots (wires) to represent a surface for the reason mentioned above, and I would recommend surface plots or contour plots instead.
Try this:
from sklearn import mixture
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
line_model = mixture.GMM(n_components = 99)
#Create evenly distributed points between 0 and 1.
xs = np.atleast_2d(np.linspace(0, 1, 100)).T
ys = np.atleast_2d(np.linspace(0, 1, 100)).T
#Create a distribution that's centred along y=x
line_model.fit(np.concatenate([xs, ys], axis=1))
plt.scatter(xs, ys)
plt.show()
#Create the x,y mesh that will be used to make a 3D plot
X, Y = np.meshgrid(xs, ys)
x_y_grid = np.c_[X.ravel(), Y.ravel()]
#Calculate a probability for each point in the x,y grid.
z = line_model.score(x_y_grid)
z = z.reshape(X.shape)
#Plot probabilities on the Z axis.
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, z)
plt.show()
From an academic point of view I am quite uncomfortable with the goal of fitting a 1D line in a 2D space with a 2D mixture model. Manifold learning with GMMs requires at least the normal direction to have zero variance, thus reducing it to a Dirac distribution. Numerically and analytically this is unstable and should be avoided (there seems to be some stabilising trick in the GMM fit, since the variance of the model is rather large in the direction of the normal to the straight line).
It is also recommended to use plt.scatter rather than plt.plot when drawing data, since there is no reason to connect the dots when you're fitting their joint distribution.
Hope this helps to shed some light on your problem.
EDIT:
This is not correct. After talking with Ronald P., I realise you can't get Gibbs effects because the Gaussians cannot compensate for each other by "going negative", as probability is strictly > 0. This seems to be a simple plotting issue... see his answer instead! Either way, I would recommend using 2D data to test GMMs, rather than a 1D line.
The GMM is fitting to the data you gave it - specifically:
xs = np.linspace(0, 1, 100)
ys = np.linspace(0, 1, 100)
Because the data ends at 0 and 1, the GMM is attempting to model that fact: -.01 and 1.01 are technically outside the trained data range and should be scored with very low probabilities. In doing so it ends up creating a gaussian with smaller spread (smaller covariance/higher precision) to cover the ends of the data and model the fact that the data stops.
I would expect that adding enough gaussians would lead to a pseudo-Gibbs-phenomenon effect, and you can kind of see that happening in the change from 5 to 99 components. To model the edges exactly, you would need an infinite mixture model. This is analogous to infinite frequency components - in a GMM you are likewise representing a "signal" with a set of basis functions (in this case, gaussians)!
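As a rough, hedged illustration of that boundary effect with the current scikit-learn API (GaussianMixture; the mixture.GMM class used above has since been removed), you can compare per-sample log-likelihoods inside and just outside the training range:
import numpy as np
from sklearn.mixture import GaussianMixture

# Points along y = x between 0 and 1, as in the question.
xs = np.linspace(0, 1, 100)
data = np.column_stack([xs, xs])

gmm = GaussianMixture(n_components=5).fit(data)

# Log-likelihood of a point inside the trained range vs. just outside it;
# the outside point typically scores much lower.
print(gmm.score_samples([[0.5, 0.5]]))
print(gmm.score_samples([[1.01, 1.01]]))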
I have captured 3D measurement data on a sphere (this is an antenna radiation pattern, so the measurement antenna captured the radiation intensity from each phi,theta direction and logged this value as a function of phi,theta).
I am having great difficulty getting the data represented.
I have tried multiple options. This is the last one I am now trying:
import numpy as np
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
nElevationPoints = 16
nAzimuthPoints = 40
stepSizeRad = 0.05 * np.pi
def r(phi,theta):
    radius = 1
    return radius
phi = np.arange(0,nAzimuthPoints*stepSizeRad,stepSizeRad)
theta = np.arange(0,nElevationPoints*stepSizeRad,stepSizeRad)
x = (r(phi,theta)*np.outer(r(phi,theta)*np.cos(phi), np.sin(theta)))
y = (-r(phi,theta)*np.outer(np.sin(phi), np.sin(theta)))
z = (r(phi,theta)*np.outer(np.ones(np.size(phi)), np.cos(theta)))
fig = plt.figure(1)
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(x, y, z, rstride=4, cstride=4, color='b')
plt.ioff()
plt.show()
This code in itself is working, and it plots a sphere. Now the thing is that, in accordance with the measurement data, I actually need the radius not to be a constant 1 but to correspond to the measured radiation intensity. So it needs to be a function of phi and theta.
However, as soon as I change the "r" function to anything containing the phi or theta parameter, I get an error about operands that could not be broadcast.
If there's any workaround that loops through phi and theta, that would be perfectly fine as well.
But I'm stuck now, so I'd appreciate any help :-)
BTW, the reason I went for the above approach is that I couldn't make sense of how x, y, z should be defined in order to be acceptable to the plot_surface function.
I did manage to generate a scatter plot by calculating the actual positions (x, y, z) from the phi, theta, intensity data, but this is only a representation by individual points and doesn't produce a clearly visible antenna radiation pattern. For that I assume a contour plot would be better, but then again I am stuck either at the "r" function call or at understanding how x, y, z should be formatted (the documentation says x, y, z need to be 2D arrays, but this is beyond my comprehension, as x, y, z are usually one-dimensional arrays in themselves).
Anyway, looking forward to any help anyone may be willing to give.
-- EDIT --
With @M4rtini's suggested changes I come to the following:
import numpy as np
from mayavi import mlab
def r(phi,theta):
    r = np.sin(phi)**2
    return r
phi, theta = np.mgrid[0:2*np.pi:201j, 0:np.pi:101j]
x = r(phi,theta)*np.sin(phi)*np.cos(theta)
y = r(phi,theta)*np.sin(phi)*np.sin(theta)
z = r(phi,theta)*np.cos(phi)
intensity = phi * theta
obj = mlab.mesh(x, y, z, scalars=intensity, colormap='jet')
obj.enable_contours = True
obj.contour.filled_contours = True
obj.contour.number_of_contours = 20
mlab.show()
This works, thanks @M4rtini, and I am now able to have a phi, theta dependent "r" function.
However, note that the example now forces phi and theta to have the same shape (due to the mgrid function). This is not the case in my measurement. When declaring phi and theta separately and with different dimensions, it still doesn't work. So I will now have a look into interpolating the measurements.
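One hedged sketch of that interpolation step, assuming the measurements arrive as three flat arrays phi_meas, theta_meas, intensity_meas of equal length (these names and the random placeholder data are hypothetical), could use scipy.interpolate.griddata to resample onto the regular mgrid used above:
import numpy as np
from scipy.interpolate import griddata

# Placeholder scattered measurements: one intensity per measured direction.
phi_meas = np.random.uniform(0, 2 * np.pi, 640)
theta_meas = np.random.uniform(0, np.pi, 640)
intensity_meas = np.random.rand(640)

# Regular grid matching the mgrid used in the mesh example above.
phi_grid, theta_grid = np.mgrid[0:2*np.pi:201j, 0:np.pi:101j]

# Linear interpolation of the scattered values onto the grid; points outside
# the convex hull of the measurements come back as NaN.
r_grid = griddata((phi_meas, theta_meas), intensity_meas,
                  (phi_grid, theta_grid), method='linear')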
This might not be the exact answer you were looking for, but if you can accept mapping the intensity values to a colour, this should work.
Actually, you could probably calculate a specific r here as well, but I did not test that.
I'm using mayavi since it is, in my opinion, far superior to matplotlib for 3D.
import numpy as np
from mayavi import mlab
r = 1.0
phi, theta = np.mgrid[0:np.pi:200j, 0:2*np.pi:101j]
x = r*np.sin(phi)*np.cos(theta)
y = r*np.sin(phi)*np.sin(theta)
z = r*np.cos(phi)
intensity = phi * theta
obj = mlab.mesh(x, y, z, scalars=intensity, colormap='jet')
obj.enable_contours = True
obj.contour.filled_contours = True
obj.contour.number_of_contours = 20
mlab.show()
This is the output of the example script. It opens in an interactive GUI, so you can rotate, translate, and scale as you please, and even interactively manipulate the data and the representation options.