There is a for loop in my code, and on every step it generates a new tpr (as X) and fpr (as Y), like this:
0.05263157894736842 0.1896551724137931
0.06578947368421052 0.19540229885057472
0.07894736842105263 0.22988505747126436
0.07894736842105263 0.25862068965517243
0.07894736842105263 0.28735632183908044
I want to collect all these points and get a full plot, but it doesn't work. My code is attached below:
for i in range(-30, 20):
    predicted = (np.sign(t + i*1e-4) + 1) / 2.
    vals, cm = re.get_CM_vals(y_test, predicted)
    tpr = re.TPR_CM(cm)
    fpr = re.FPR_CM(cm)
    #print(tpr, fpr)
    plt.plot(fpr, tpr, 'b.-', linewidth=1)
plt.show()
Besides, I want the right-angle (step-like) line between points, like that. Is there a function in matplotlib for this?
Using your current code, I suggest appending the x values to one list and the y values to another. You could also use something like ArrayName = [[],[]] and append the x and y values to ArrayName[0] and ArrayName[1], respectively. Not only would this actually work, it would also be slightly faster, since plt.plot and plt.scatter are faster when given all the points at once instead of being called once per loop iteration.
If you don't want the points connected with lines, I still suggest collecting them in a list first, since that would be faster. (It wouldn't be much faster in this case, but it's a good habit to have.)
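A minimal sketch of that idea, reusing the names from your question (re, t, and y_test are assumed to exist as in your code); the drawstyle='steps-post' argument also gives the right-angle segments between points that you asked about:

import numpy as np
import matplotlib.pyplot as plt

# collect the whole curve first, then plot once
fprs, tprs = [], []
for i in range(-30, 20):
    predicted = (np.sign(t + i * 1e-4) + 1) / 2.
    vals, cm = re.get_CM_vals(y_test, predicted)   # re, t, y_test come from the question
    tprs.append(re.TPR_CM(cm))
    fprs.append(re.FPR_CM(cm))

# drawstyle='steps-post' draws horizontal/vertical (right-angle) segments between points
plt.plot(fprs, tprs, 'b.-', linewidth=1, drawstyle='steps-post')
plt.show()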
Hi, I have a numpy array with shape (1000, 3) that I wish to make a contour plot of. The first two columns represent x and y values, and the third is the associated density value at the point denoted by the x and y values. These are NOT evenly spaced, as the x and y values were generated by MCMC sampling methods. I wish to plot the x and y values and demarcate points which have density at a certain level.
I have tried calling the contour function but it does not appear to work.
Presuming I have a data object such that np.shape(data) gives (1000, 3):
plt.figure()
plt.contour(data[:,0], data[:,1], data[:,2])
plt.show()
This does not seem to work and gives the following error:
TypeError: Input z must be a 2D array
I understand that z, the third column, needs to be some sort of meshgrid, but all the examples I have seen seem to rely on constructing one from evenly spaced x and y values, which I do not have. Help on how I can resolve this would be appreciated.
EDIT: I have figured it out. You need to use the method for unevenly spaced points, as described here:
https://matplotlib.org/devdocs/gallery/images_contours_and_fields/griddata_demo.html#sphx-glr-gallery-images-contours-and-fields-griddata-demo-py
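For anyone landing here later, a minimal sketch of the unevenly-spaced approach from that demo, using plt.tricontour, which triangulates the scattered points itself (the data below is synthetic, just to make the snippet self-contained):

import numpy as np
import matplotlib.pyplot as plt

# synthetic stand-in for the (1000, 3) array of x, y, density samples
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 1000)
y = rng.uniform(-2, 2, 1000)
density = np.exp(-(x**2 + y**2))
data = np.column_stack((x, y, density))

# tricontour triangulates the scattered points internally,
# so no meshgrid or evenly spaced samples are needed
plt.tricontour(data[:, 0], data[:, 1], data[:, 2], levels=10)
plt.plot(data[:, 0], data[:, 1], 'k.', markersize=2, alpha=0.3)
plt.show()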
I'm wondering if there is a way to use matplotlib and numpy to plot a heatmap of three lists. My grid is not regular; it is oddly shaped, so this does not work for me: Plotting a Heat Map X,Y,Intensity From Three Lists. When I try that, I get ValueError: cannot reshape array of size 1906 into shape (1847,127). My lists contain repeated X and Y values and are not in any way rectangular. I was wondering whether there is a way to plot my data so it looks like the meshgrid/imshow grids you get when you have rectangular data. Basically, I just want to plot a bunch of given intensities at given X, Y values and have them show up as a heatmap.
Thanks!
edit: Here is the kind of data that I am working with. I'm trying to use the 4th column as the intensity and the 1st and 2nd columns as the x and y respectively.
https://pastebin.com/XCcwRiJn
Thanks ymmx, I'm new to Stack Overflow so I don't know how to mark your answer as the accepted one. Here's what I did to make it work using a 2D interpolation:
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

myData = np.genfromtxt(fileName, delimiter=",")
X = myData[:, 0]
Y = myData[:, 1]
intensity = myData[:, 3]

# interpolate the scattered samples onto a regular grid
XY = np.column_stack((Y, X))
grid_x, grid_y = np.mgrid[-.2:.2:100j, 0:.05:200j]
grid1 = griddata(XY, intensity, (grid_x, grid_y), method='cubic')

plt.imshow(grid1.T, extent=(0, .5, 0, .05), origin='lower', cmap='jet')
plt.show()
If someone has this same problem, you have to adjust your mgrid and extent values to fit your data.
Thanks for the help!
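If it helps anyone adapting this, here is a small sketch (same variable names as above; the 200j grid resolution is an arbitrary choice) of deriving the mgrid bounds and imshow extent from the data instead of hard-coding them:

# derive the grid bounds from the scattered data itself
x_min, x_max = X.min(), X.max()
y_min, y_max = Y.min(), Y.max()

grid_x, grid_y = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
grid1 = griddata(np.column_stack((X, Y)), intensity, (grid_x, grid_y), method='cubic')

plt.imshow(grid1.T, extent=(x_min, x_max, y_min, y_max), origin='lower', cmap='jet')
plt.show()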
Is it possible in mayavi to specify individually both the size and the colors of every point?
That API is cumbersome to me.
points3d(x, y, z...)
points3d(x, y, z, s, ...)
points3d(x, y, z, f, ...)
x, y and z are numpy arrays, or lists, all of the same shape, giving the positions of the points.
If only 3 arrays x, y, z are given, all the points are drawn with the same size and color.
In addition, you can pass a fourth array s of the same shape as x, y, and z giving an associated scalar value for each point, or a function f(x, y, z) returning the scalar value. This scalar value can be used to modulate the color and the size of the points.
So in this case the scalar controls both the size and the color, and it's not possible to disentangle them. I want a way to specify the size as one (N,1) array and the color as another (N,1) array, individually.
Seems complicated?
Each VTK source has a dataset for both scalars and vectors.
The trick I use in my program to get the color and size to differ is to bypass the Mayavi source and, directly in the VTK source, use scalars for the color and vectors for the size (it probably works the other way around as well).
nodes = points3d(x, y, z)
nodes.glyph.scale_mode = 'scale_by_vector'

# vectors: a 3x5000 array built from random per-point values (controls the size)
nodes.mlab_source.dataset.point_data.vectors = np.tile(np.random.random((5000,)), (3, 1))
# scalars: one random value per point (controls the color)
nodes.mlab_source.dataset.point_data.scalars = np.random.random((5000,))
You may need to transpose the 5000x3 vector data or otherwise shift the matrix dimensions somehow.
I agree that the API that Mayavi provides here is unpleasant. The Mayavi documentation suggests the following hack (which I have paraphrased slightly) to independently adjust the size and color of points.
pts = mayavi.mlab.quiver3d(x, y, z, sx, sy, sz, scalars=c, mode="sphere", scale_factor=f)
pts.glyph.color_mode = "color_by_scalar"
pts.glyph.glyph_source.glyph_source.center = [0,0,0]
This will display the x,y,z points as spheres, even though you're calling mayavi.mlab.quiver3d. Mayavi will use the norm of the sx,sy,sz vectors to determine the size of the points, and will use the scalar values in c to index into a color map. You can optionally supply a constant size scaling factor, which is applied to all the points.
This is certainly not the most self-documenting code you'll ever write, but it works.
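For completeness, a small self-contained sketch of this hack (the data is random; the per-point sizes go into the first vector component so that the vector norm equals the desired size):

import numpy as np
from mayavi import mlab

n = 200
x, y, z = np.random.random((3, n))
sizes = np.random.uniform(0.05, 0.2, n)   # one size per point (used via the vector norm)
colors = np.random.random(n)              # one scalar per point, mapped through the colormap

pts = mlab.quiver3d(x, y, z, sizes, np.zeros(n), np.zeros(n),
                    scalars=colors, mode="sphere", scale_factor=1.0)
pts.glyph.color_mode = "color_by_scalar"
pts.glyph.glyph_source.glyph_source.center = [0, 0, 0]
mlab.show()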
I also agree that the API is ugly. I just made a simple and complete example using @aestrivex's idea:
from mayavi.mlab import *
import numpy as np
K = 10
xx = np.arange(0, K, 1)
yy = np.arange(0, K, 1)
x, y = np.meshgrid(xx, yy)
x, y = x.flatten(), y.flatten()
z = np.zeros(K*K)
colors = 1.0 * (x + y)/(max(x)+max(y))
nodes = points3d(x, y, z, scale_factor=0.5)
nodes.glyph.scale_mode = 'scale_by_vector'
nodes.mlab_source.dataset.point_data.scalars = colors
show()
which produces a flat grid of points colored by the scalars.
If, as in my case, anyone is trying to update, for example, the scale of a single point or the color of a point on a click or keypress event, you may need to add the line marked below to make sure the scalars update even after the figure is already displayed. I include the complete example of my function that modifies the size of a point when it is clicked, as it might be helpful to some people:
def picker(picker):
    if picker.actor in glyphs.actor.actors:
        point_id = picker.point_id // glyph_points.shape[0]
        # if no point has been selected, point_id is -1
        if point_id != -1:
            glyphs.mlab_source.dataset.point_data.scalars[point_id] = 10
            # the following line is necessary for the live update
            glyphs.mlab_source.dataset.modified()

# you would typically use this function the following way in your main:
figure = mlab.gcf()
mlab.clf()

# define your points
pts = ...
# define scalars or they will default to None
s = len(pts) * [1]

glyphs = mlab.points3d(pts[:, 0], pts[:, 1], pts[:, 2], s, scale_factor=1, mode='cube')
glyph_points = glyphs.glyph.glyph_source.glyph_source.output.points.to_array()

picker = figure.on_mouse_pick(picker, button='Left')
picker.tolerance = 0.01

mlab.show()
Inspired by this example: https://docs.enthought.com/mayavi/mayavi/auto/example_select_red_balls.html
I checked the available interpolation methods in scipy, but could not find a proper solution for my case.
Assume I have 100 points whose coordinates are random; their x and y positions are:
x=np.random.rand(100)*100
y=np.random.rand(100)*100
z = f(x,y) #the point value calculated by certain function
Now I want to get the point values z at new, evenly sampled coordinates (xnew and ynew):
xnew = range(100)
ynew = range(100)
How should I do this using bilinear sampling?
I know it is possible to do it point by point, e.g., find the 4 nearest random points and interpolate, but there has to be an easier existing function for this. Thanks a lot!
Use scipy.interpolate.griddata. It does exactly what you need:
import numpy
import scipy.interpolate

# griddata expects the interpolation points as an (m, 2) array of (x, y) pairs
interpolants = numpy.column_stack((xnew, ynew))
# defaults to linear interpolation
znew = scipy.interpolate.griddata((x, y), z, interpolants)
http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html#scipy.interpolate.griddata
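Note that the snippet above interpolates at the paired coordinates (xnew[0], ynew[0]), (xnew[1], ynew[1]), and so on. If what you want is the value at every cell of the full 100x100 grid, build the grid with meshgrid first; a minimal sketch (the test function here is made up):

import numpy as np
from scipy.interpolate import griddata

# scattered samples, as in the question
x = np.random.rand(100) * 100
y = np.random.rand(100) * 100
z = np.sin(x / 10.0) * np.cos(y / 10.0)   # stand-in for f(x, y)

# evaluate on the full 100 x 100 regular grid
xnew = np.arange(100)
ynew = np.arange(100)
Xnew, Ynew = np.meshgrid(xnew, ynew)
znew = griddata((x, y), z, (Xnew, Ynew), method='linear')
# points outside the convex hull of the samples come back as NaN;
# use method='nearest' or the fill_value argument if that matters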