I can create a plot as follows:
import matplotlib.pyplot as plt
image = [[0.0, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.0, 0.1, 0.0 ,0.0],
[0.0, 0.0, 0.3, 0.0 ,0.0],
[0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0]]
print(image)
plt.imshow(image, cmap="plasma", interpolation='nearest')
plt.colorbar()
plt.xlabel("axis x")
plt.ylabel("axis y")
plt.show()
But how can I change the axis itself, i.e. transform e.g. the x-axis to a different range? For example, the value 0 on the plot that the code above generates corresponds to a value of -4.8573, and the value 4 corresponds to a value of 12.443.
I would then like an axis with ticks at -5, 0, 10, 15 or so. How can I achieve this?
The real axis value can be calculated by
x_real = a + x * b
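For instance, with the two correspondences above (pixel 0 -> -4.8573 and pixel 4 -> 12.443), a and b follow directly; a minimal sketch (the variable names are just for illustration):
a = -4.8573                    # real value at pixel index 0
b = (12.443 - (-4.8573)) / 4   # real units per pixel, about 4.325
x_real = [a + x * b for x in range(5)]
print(x_real)                  # approximately [-4.857, -0.532, 3.793, 8.118, 12.443]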
To rescale the x-axis range, you can use
plt.xticks(ticks, labels)
ticks: The list of old xtick locations.
labels: The labels to place at the given ticks locations.
So you just need to add the following line before plt.show():
plt.xticks(range(0, 5), range(-5, 16, 5))
# range(0, 5): current range
# range(-5, 16, 5): new range
# [0, 1, 2, 3, 4] -> [-5, 0, 5, 10, 15]
import matplotlib.pyplot as plt
import numpy as np
image = [[0.0, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.0, 0.1, 0.0 ,0.0],
[0.0, 0.0, 0.3, 0.0 ,0.0],
[0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0]]
print(image)
plt.imshow(image, cmap="plasma", interpolation='nearest')
plt.colorbar()
plt.xlabel("axis x")
plt.ylabel("axis y")
plt.xticks(range(0, 5), range(-5, 16, 5))
plt.show()
This produces the plot with the relabeled x-axis.
To compute the tick positions and labels automatically, you could do something like this:
import matplotlib.pyplot as plt
import math
import numpy as np
n=5
image = [[0.0, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.0, 0.1, 0.0 ,0.0],
[0.0, 0.0, 0.3, 0.0 ,0.0],
[0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0]]
print(image)
plt.imshow(image, cmap="plasma", interpolation='nearest')
plt.colorbar()
x = [37.59390426045407, 38.00530354847739, 38.28412244348653, 38.74871247986305, 38.73175910429809, 38.869008864244016, 39.188234404976555, 39.92835838352555, 40.881394113153334, 41.686136269465884]
y = [0.1305391767832006, 0.13764519613447768, 0.14573326951792354, 0.15090729309032114, 0.16355823707239897, 0.17327106424274763, 0.17749746339532224, 0.17310384614773594, 0.16545780437882962, 0.1604752704890856]
def ceil_power_of_10(n):
    # round a positive step up to the nearest power of ten
    exp = math.log(n, 10)
    exp = math.ceil(exp)
    return 10**exp
x0 = min(x)
x1 = max(x)
y0 = min(y)
y1 = max(y)
# Fill the 2D array
dx = (x1 - x0)/n
dy = (y1 - y0)/n
dx_steps = ceil_power_of_10(dx)
dy_steps = ceil_power_of_10(dy)
dx_steps_alpha = round(((math.ceil(x1/dx_steps)*dx_steps) - (math.floor(x0/dx_steps)*dx_steps)) / dx_steps)
dy_steps_alpha = round(((math.ceil(y1/dy_steps)*dy_steps) - (math.floor(y0/dy_steps)*dy_steps) ) / dy_steps)
x_new = np.linspace((math.floor(x0/dx_steps)*dx_steps), (math.ceil(x1/dx_steps)*dx_steps), dx_steps_alpha, endpoint=False)
y_new = np.linspace((math.floor(y0/dy_steps)*dy_steps), (math.ceil(y1/dy_steps)*dy_steps), dy_steps_alpha, endpoint=False)
labels_x = x_new
# round each label to the chosen x step size (mirrors the y handling below)
labels_x = [round(x/dx_steps) * dx_steps for x in labels_x]
positions_x = list(range(0, len(labels_x)))
labels_y = y_new
labels_y = [round(y/dy_steps) * dy_steps for y in labels_y]
positions_y = list(range(0, len(labels_y)))
# Finally, draw the image with the computed tick positions and labels
plt.imshow(image, cmap="plasma", interpolation='nearest')
plt.xticks(positions_x, labels_x)
plt.yticks(positions_y, labels_y)
plt.xlabel("axis x")
plt.ylabel("axis y")
plt.show()
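For the sample x and y above, the per-cell spans come out to roughly 0.82 and 0.0094, so ceil_power_of_10 rounds them up to tick spacings of 1 and 0.01 respectively.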
You mean like this?
import matplotlib.pyplot as plt
image = [[0.0, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.0, 0.1, 0.0 ,0.0],
[0.0, 0.0, 0.3, 0.0 ,0.0],
[0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0]]
print(image)
plt.imshow(image, cmap="plasma", interpolation='nearest')
plt.colorbar()
positions = [0,1,2,3,4]
labels = [-5, 0, 10, 15, 20]
plt.xticks(positions, labels)
plt.yticks(positions, labels)
plt.xlabel("axis x")
plt.ylabel("axis y")
plt.show()
If I understood correctly, this should help
import matplotlib.pyplot as plt
image = [[0.0, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.0, 0.1, 0.0 ,0.0],
[0.0, 0.0, 0.3, 0.0 ,0.0],
[0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0]]
print(image)
plt.imshow(image, cmap="plasma", interpolation='nearest')
plt.colorbar()
plt.xlabel("axis x")
plt.ylabel("axis y")
# set the x limits
plt.xlim([-5,15])
# set the y limits
plt.ylim([-5,15])
plt.show()
You should compute the axes' min, max, and step values:
xmin = -4.8573
xmax = 12.443
dx = (xmax - xmin)/(np.shape(image)[0] - 1)
ymin = -5
ymax = 15
dy = (ymax - ymin)/(np.shape(image)[1] - 1)
Then pass those values to the extent parameter of imshow. Since extent specifies the outer edges of the image, each limit is padded by half a cell so that the tick marks fall at the cell centers:
img = ax.imshow(image, cmap="plasma", interpolation='nearest', extent = [xmin - dx/2, xmax + dx/2, ymin - dy/2, ymax + dy/2])
Finally, set up the axes ticks:
ax.set_xticks(np.linspace(xmin, xmax, (np.shape(image)[0])))
ax.set_yticks(np.linspace(ymin, ymax, (np.shape(image)[1])))
Complete Code
import matplotlib.pyplot as plt
import numpy as np
image = [[0.0, 0.0, 0.0, 0.0, 0.0],
[0.2, 0.0, 0.1, 0.0 ,0.0],
[0.0, 0.0, 0.3, 0.0 ,0.0],
[0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0]]
print(image)
xmin = -4.8573
xmax = 12.443
dx = (xmax - xmin)/(np.shape(image)[0] - 1)
ymin = -5
ymax = 15
dy = (ymax - ymin)/(np.shape(image)[1] - 1)
fig, ax = plt.subplots()
img = ax.imshow(image, cmap="plasma", interpolation='nearest', extent = [xmin - dx/2, xmax + dx/2, ymin - dy/2, ymax + dy/2])
plt.colorbar(img)
ax.set_xlabel("axis x")
ax.set_ylabel("axis y")
ax.set_xticks(np.linspace(xmin, xmax, (np.shape(image)[0])))
ax.set_yticks(np.linspace(ymin, ymax, (np.shape(image)[1])))
plt.show()
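Because extent puts the axes in real data coordinates, you could also omit the set_xticks/set_yticks calls and let matplotlib choose the tick locations automatically.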
We have the following numpy array:
b = np.array([[0.3, -0.2, 0.4, 0.5, -0.8, 1.0, 0.0, 0.0],
[0.6, 0.2, 0.7, 0.91, 0.67, 0.0, 1.0, 0.0],
[0.5, 0.1, 0.7, 0.0, 0.6, 0.0, 0.0, 1.0]])
We can see that the right side of this array (the last 3 columns) is a diagonal matrix. How can I get the column where a 1 first occurs in this diagonal matrix, i.e. column 5? I tried the following, which gives the correct answer:
first_occurence = np.argmax(b == 1, axis=1)[0]
But if we have the following array, this does not work, giving me 0 as the answer (it should be 6):
b = np.array([[0.3, -0.2, 0.4, 0.5, -0.8, 0.0, 0.0, 0.0],
[0.6, 0.2, 0.7, 0.91, 0.67, 0.0, 1.0, 0.0],
[0.5, 0.1, 0.7, 0.0, 0.6, 0.0, 0.0, 1.0]])
You can do this:
firsts = np.argmax(b == 1, axis=1)
first_occurence = min(firsts[firsts != 0])
The firsts[firsts != 0] argument to min() filters out rows for which b does not contain a 1, and min() then finds the column you're looking for.
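A quick check against the two arrays from the question (the helper name is mine, just for the demonstration):
import numpy as np

def first_one_column(b):
    # column index of the first 1 per row; rows without a 1 give 0 and are filtered out
    firsts = np.argmax(b == 1, axis=1)
    return min(firsts[firsts != 0])

b1 = np.array([[0.3, -0.2, 0.4, 0.5, -0.8, 1.0, 0.0, 0.0],
               [0.6, 0.2, 0.7, 0.91, 0.67, 0.0, 1.0, 0.0],
               [0.5, 0.1, 0.7, 0.0, 0.6, 0.0, 0.0, 1.0]])
b2 = np.array([[0.3, -0.2, 0.4, 0.5, -0.8, 0.0, 0.0, 0.0],
               [0.6, 0.2, 0.7, 0.91, 0.67, 0.0, 1.0, 0.0],
               [0.5, 0.1, 0.7, 0.0, 0.6, 0.0, 0.0, 1.0]])

print(first_one_column(b1))   # 5
print(first_one_column(b2))   # 6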
UPDATE:
Assumptions, based on OP's clarifications:
the input contains a submatrix that is an identity matrix of order between 1 and b.shape[0] for input matrix b
the rightmost column of this identity matrix is to be found within the rightmost column of the input matrix
the top row of this identity matrix is between 0 and b.shape[0] - 1.
Here is a way to identify the column in the input matrix which contains the leftmost column in the embedded identity matrix:
def foo(b):
    rows = b.shape[0]
    # leftmost column at which an identity matrix of full order could start
    left = b.shape[1] - rows
    for tops in range(rows):
        # try the largest possible identity matrix first, then smaller ones further right
        order = rows - tops
        eye = np.eye(order)
        for top in range(tops + 1):
            # slide the candidate window down the rows
            if np.allclose(b[top:top + order, left:left + order], eye):
                return left
        left += 1
Test code:
b1 = np.array([
[0.3, -0.2, 0.4, 0.5, -0.8, 1.0, 0.0, 0.0],
[0.6, 0.2, 0.7, 0.91, 0.67, 0.0, 1.0, 0.0],
[0.5, 0.1, 0.7, 0.0, 0.6, 0.0, 0.0, 1.0]])
b2 = np.array([
[0.3, -0.2, 0.4, 0.5, -0.8, 0.0, 0.0, 0.0],
[0.6, 0.2, 0.7, 0.91, 0.67, 0.0, 1.0, 0.0],
[0.5, 0.1, 0.7, 0.0, 0.6, 0.0, 0.0, 1.0]])
b3 = np.array([
[0, -0.2, 0.4, 0.5, -0.8, 1.0, 0.0, 0.0],
[0.6, 1, 0.7, 0.91, 0.67, 0.0, 1.0, 0.0],
[0.5, 0.1, 0.7, 0.0, 0.6, 0.0, 0.0, 1.0]])
b4 = np.array([
[0.3, -0.2, 0.4, 0.5, -0.8, 0.0, 1.0, 0.0],
[0.6, 0.2, 0.7, 0.91, 0.67, 0.0, 0.0, 1.0],
[0.5, 0.1, 0.7, 0.0, 0.6, 0.0, 0.0, 1.0]])
print( foo(b1) )
print( foo(b2) )
print( foo(b3) )
print( foo(b4) )
Output:
5
6
5
6
I am working with some 3D (volumetric) data using Python, and for every tetrahedron I have not only the vertices' coordinates but also a fourth dimension, the value of some parameter for that tetrahedron's volume.
For example:
# node coordinates that define a tetrahedron volume:
x = [0.0, 1.0, 0.0, 0.0]
y = [0.0, 0.0, 1.0, 0.0]
z = [0.0, 0.0, 0.0, 1.0]
# Scalar value of the potential for the given volume:
c = 100.0
I would like to plot a 3D volume (given by the node coordinates) filled with a solid color representing the given value c.
How could I do that in Python 3.6 using its plotting libraries?
You can use mayavi.mlab.triangular_mesh():
import numpy as np
from mayavi import mlab
from itertools import combinations, chain
x = [0.0, 1.0, 0.0, 0.0, 2.0, 3.0, 0.0, 0.0]
y = [0.0, 0.0, 1.0, 0.0, 2.0, 0.0, 3.0, 0.0]
z = [0.0, 0.0, 0.0, 1.0, 2.0, 0.0, 0.0, 3.0]
c = [20, 30]
# the 4 triangular faces of each tetrahedron (its 4 vertex indices taken 3 at a time)
triangles = list(chain.from_iterable(combinations(range(s, s+4), 3) for s in range(0, len(x), 4)))
# repeat each tetrahedron's scalar once per vertex
c = np.repeat(c, 4)
mlab.triangular_mesh(x, y, z, triangles, scalars=c)
mlab.show()
I have a numpy array like this:
a = [[0.04393, 0.0, 0.0], [0.04393, 0.005, 0.0], [0.04393, 0.01, 0.0],[0.04393, 0.015, 0.0]]
And I want to format it like this:
b = [((0.04393, 0.0, 0.0), ), ((0.04393, 0.005, 0.0), ), ((0.04393,
0.01, 0.0), ), ((0.04393, 0.015, 0.0), )]
How can I do it?
This will do:
a = [[0.04393, 0.0, 0.0], [0.04393, 0.005, 0.0], [0.04393, 0.01, 0.0],[0.04393, 0.015, 0.0]]
b = [ (tuple(a1),) for a1 in a]
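A quick check, continuing the snippet above, that this reproduces the requested format (the same comprehension also works if a is an actual numpy array, since iterating over it yields its rows):
print(b)
# [((0.04393, 0.0, 0.0),), ((0.04393, 0.005, 0.0),), ((0.04393, 0.01, 0.0),), ((0.04393, 0.015, 0.0),)]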
The following question makes use of vtk python, but what I am attempting to do should not require any knowledge of vtk, because I have converted the data I wish to plot into the numpy arrays described below. If anyone does know of an improvement to the way I go about actually processing the vtk data into numpy, please let me know!
I have some data that I have extracted using vtk python. The data consists of a 3D unstructured grid and has several 'blocks'. The block I am interested in is block0. The data is contained at each cell rather than at each point. I wish to plot a contourf plot of a scalar variable on this grid using matplotlib. In essence my problem comes down to the following:
Given a set of cell faces with known vertices in space and a known scalar field variable, create a contour plot as one would get if one had created a numpy.meshgrid and used plt.contourf/plt.pcolormesh etc. Basically I post process my vtk data like so:
numCells = block0.GetCells().GetNumberOfCells()
# Array of the 8 vertices that make up a cell in 3D
cellPtsArray = np.zeros((numCells,8,3))
# Array of the 4 vertices that make up a cell face
facePtsArray = np.zeros((numCells,4,3))
#Array to store scalar field value from each cell
valueArray = np.zeros((numCells,1))
for i in xrange(numCells):
cell = block0.GetCell(i)
numCellPts = cell.GetNumberOfPoints()
for j in xrange(numCellPts):
cellPtsArray[i,j,:] = block0.GetPoint(cell.GetPointId(j))
valueArray[i] = block0.GetCellData().GetArray(3).GetValue(i)
xyFacePts = cell.GetFaceArray(3)
facePtsArray[i,:,:] = cellPtsArray[i,xyFacePts,:]
Now I wish to create a contour plot of this data (fill each cell in space according to an appropriate colormap of the scalar field variable). Is there a good built-in function in matplotlib to do this? Note that I cannot use any form of automatic triangulation: the connectivity of the mesh is already specified by facePtsArray, since the connections between the points of a cell have been ordered correctly (see my plot below).
Here is some test data:
import numpy as np
import matplotlib.pyplot as plt
# An example of the array containing the mesh information: In this case the
# dimensionality is (9,4,3) denoting 9 adjacent cells, each with 4 vertices and
# each vertex having (x,y,z) coordinates.
facePtsArray = np.asarray([[[0.0, 0.0, 0.0 ],
[1.0, 0.0, 0.0 ],
[1.0, 0.5, 0.0 ],
[0.0, 0.5, 0.0 ]],
[[0.0, 0.5, 0.0 ],
[1.0, 0.5, 0.0 ],
[1.0, 1.0, 0.0 ],
[0.0, 1.0, 0.0 ]],
[[0.0, 1.0, 0.0 ],
[1.0, 1.0, 0.0 ],
[1.0, 1.5, 0.0 ],
[0.0, 1.5, 0.0 ]],
[[1.0, 0.0, 0.0 ],
[2.0, -0.25, 0.0],
[2.0, 0.25, 0.0],
[1.0, 0.5, 0.0]],
[[1.0, 0.5, 0.0],
[2.0, 0.25, 0.0],
[2.0, 0.75, 0.0],
[1.0, 1.0, 0.0]],
[[1.0, 1.0, 0.0],
[2.0, 0.75, 0.0],
[2.0, 1.25, 0.0],
[1.0, 1.5, 0.0]],
[[2.0, -0.25, 0.0],
[2.5, -0.75, 0.0],
[2.5, -0.25, 0.0 ],
[2.0, 0.25, 0.0]],
[[2.0, 0.25, 0.0],
[2.5, -0.25,0.0],
[2.5, 0.25, 0.0],
[2.0, 0.75, 0.0]],
[[2.0, 0.75, 0.0],
[2.5, 0.25, 0.0],
[2.5, 0.75, 0.0],
[2.0, 1.25, 0.0]]])
valueArray = np.random.rand(9) # Scalar field values for each cell
plt.figure()
for i in range(9):
    plt.plot(facePtsArray[i,:,0], facePtsArray[i,:,1], 'ko-')
plt.show()
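One possible approach (a sketch, not from the original post): since each face is already a closed polygon with correctly ordered vertices, matplotlib's PolyCollection can fill every cell with a color taken from its scalar value, with no triangulation involved. Reusing facePtsArray and valueArray from the test data above and assuming the cells lie in the z = 0 plane:
from matplotlib.collections import PolyCollection

# one polygon per cell, built from the (x, y) coordinates of its ordered vertices
polys = PolyCollection(facePtsArray[:, :, :2], cmap="plasma", edgecolors="k")
polys.set_array(valueArray)   # one scalar per cell drives the face colors

fig, ax = plt.subplots()
ax.add_collection(polys)
ax.autoscale()                # fit the axes to the polygon extents
fig.colorbar(polys, ax=ax)
plt.show()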
How can I change the color of a histogram after I draw it? (using hist)
z = hist([1,2,3])
z.set_color(???)  <-- something like this
Also, how can I check what color the histogram is?
z = hist([1,2,3])
color = z.get_color(???)  <-- also something like this
Thank you.
Such functions exist. You just need to store the patches returned by hist and access the facecolor of each of them:
import matplotlib.pyplot as plt
n, bins, patches = plt.hist([1,2,3])
for p in patches:
    print(p.get_facecolor())
    p.set_facecolor((1.0, 0.0, 0.0, 1.0))
Output:
(0.0, 0.5, 0.0, 1.0)
(0.0, 0.5, 0.0, 1.0)
(0.0, 0.5, 0.0, 1.0)
(0.0, 0.5, 0.0, 1.0)
(0.0, 0.5, 0.0, 1.0)
(0.0, 0.5, 0.0, 1.0)
(0.0, 0.5, 0.0, 1.0)
(0.0, 0.5, 0.0, 1.0)
(0.0, 0.5, 0.0, 1.0)
(0.0, 0.5, 0.0, 1.0)
Note that you get one patch per bin. By default hist plots 10 bins. You might want to define it differently using plt.hist([1,2,3], bins=3).
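Putting the two together with three bins, one bar per data value (the red RGBA tuple is just an example):
import matplotlib.pyplot as plt

n, bins, patches = plt.hist([1, 2, 3], bins=3)
print(patches[0].get_facecolor())          # current face color of the first bar, as RGBA
for p in patches:
    p.set_facecolor((1.0, 0.0, 0.0, 1.0))  # recolor every bar red
plt.show()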