I'm trying to plot an image using matplotlib, and it comes out rotated 90 degrees clockwise, which seems to be a common problem. So I need to rotate it back 90 degrees counterclockwise to show my actual image. However, when I try to plot the transpose of the data, I get an error message that says "invalid dimensions for image data". I also tried setting origin to 'lower', because that also seems to be a way to fix such problems, but that only flipped the image across the x axis. How do I fix this? Here is my original code:
import numpy as np
import nibabel as nib
from dipy.reconst.dti import color_fa
cfa = color_fa(FA, tenfit.evecs)
cfa_img = nib.Nifti1Image(cfa.astype(np.float32), img.get_affine())
data_cfa = cfa_img.get_data()
import matplotlib.pyplot as plt
plt.figure('color_fa')
plt.imshow(data_cfa[:,:,6,:])
plt.show()
So it shows slice 6 of an image which is 192x192.
And when I change the imshow line to
plt.imshow(data_cfa[:,:,6,:].T)
I get that error message.
I'm new to python and matplotlib, so any help would be greatly appreciated!
The problem is that you are trying to transpose an image with three dimensions. The dimensions of your image are N x M x 3, and you would like to have an M x N x 3 array (rotate but keep the color planes intact).
With the .T method you'll unfortunately get an array with dimensions 3 x M x N, which is not what you want. This is the source of the error.
Instead of .T, use .transpose(1,0,2). This will transpose the first two axes but leave the third intact. Now the image should be rotated as you wanted it:
plt.imshow(data_cfa[:,:,6,:].transpose(1,0,2))
See the documentation for np.transpose: http://docs.scipy.org/doc/numpy/reference/generated/numpy.transpose.html
(If your image were an N x M grayscale image, then the .T trick would have been the right one.)
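To see the difference in shapes, a quick check with a dummy array matching your 192x192 slices:

import numpy as np

a = np.zeros((192, 192, 3))            # an N x M x 3 RGB image
print(a.T.shape)                       # (3, 192, 192) -> "invalid dimensions" for imshow
print(a.transpose(1, 0, 2).shape)      # (192, 192, 3) -> still a valid RGB image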
Try scipy.ndimage.rotate (see the example code below). Unlike .T, which swaps all the axes (see the answer from @DrV), ndimage.rotate is designed to leave the color information alone. It can also handle an arbitrary rotation (e.g. 31.4 degrees). For a broader range of examples, look at http://scipy-lectures.github.io/advanced/image_processing/#geometrical-transformations.
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage

data_cfa = np.random.rand(10, 10, 7, 3)
plt.subplot(211)
plt.imshow(data_cfa[:,:,6,:])
plt.title('Original')
plt.subplot(212)
plt.imshow(ndimage.rotate(data_cfa[:,:,6,:], 90))
plt.title('Rotated by 90')
plt.show()
I am plotting a contour plot in Python 3 with matplotlib, and I am getting a strange result. At first, I was using plt.contourf, and noticed there was a strange north-south linear artifact in the data that I knew shouldn't be there (I used simulated data). So I changed plt.contourf to plt.contour, and the problem seems to be that some of the edge contours are deformed for some reason (see picture).
Unfortunately, it is hard for me to paste a simple version of my code because this is part of a large GUI-based app. Here is what I am doing, though.
# grid the x, y, z data so it can be used in the contouring
# (this is matplotlib's griddata, not scipy.interpolate.griddata)
self.beta_zi = griddata(self.output_df['x'].values,
                        self.output_df['y'].values,
                        self.output_df['Beta'].values,
                        self.cont_grid_x,
                        self.cont_grid_y,
                        interp='linear')
# call to the contour itself
self.beta_contour = self.beta_cont_ax.contour(self.cont_grid_x, self.cont_grid_y,
                                              self.beta_zi,
                                              levels=np.linspace(start=0, stop=1, num=11, endpoint=True),
                                              cmap=cm.get_cmap(self.user_beta_cmap.get()))
This seems like a simple problem based on the edges. Has anyone seen this before who can help? I am using a Tk backend, which works better with the tkinter-based GUI I wrote.
UPDATE: I also tried changing to scipy.interpolate.griddata because matplotlib's griddata is deprecated, but the problem persists, so it must be with the actual contour plotting function.
I found that the problem had to do with how I was interpreting the inputs of contour and griddata.
plt.contour and matplotlib's griddata take
x = x location of sample data
y = y location of sample data
z = height or z value of sample data
xi = locations of x tick marks on grid
yi = locations of y tick marks on grid
Typically xi and yi are given as all the locations of each grid node, which is what I was supplying, but in this case you only need the unique tick marks on each axis.
Thanks to this post I figured it out.
Matplotlib contour from xyz data: griddata invalid index
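For illustration, here is a minimal self-contained sketch of that fix using scipy.interpolate.griddata (the data and variable names below are made up, not the original GUI code): contour receives the 1-D arrays of unique axis ticks, while griddata builds the 2-D zi on that grid.

import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# hypothetical scattered samples standing in for the x, y, Beta columns
x = np.random.rand(200)
y = np.random.rand(200)
z = np.sin(3 * x) * np.cos(3 * y)

# 1-D unique tick locations for each axis -- not the full 2-D grid of nodes
xi = np.linspace(x.min(), x.max(), 50)
yi = np.linspace(y.min(), y.max(), 50)

# interpolate the scattered z values onto the regular grid
zi = griddata((x, y), z, (xi[None, :], yi[:, None]), method='linear')

plt.contour(xi, yi, zi, levels=np.linspace(-1, 1, 11))
plt.show()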
I need to write a python function or class with the following Input/Output
Input:
The position of the X-rays source (still not sure why it's needed)
The position of the board (still not sure why it's needed)
A three dimensional CT-Scan
Output:
A 2D X-ray Scan (simulate an X-Ray Scan which is a scan that goes through the whole body)
A few important remarks to what I'm trying to achieve:
You don’t need additional information from the real world or any advanced knowledge.
You can add any input parameter that you see fit.
If your method produces artifacts, you are expected to fix them.
Please explain every step of your method.
What I've done until now: (.py file added)
I've read the .dicom files, which are located in "Case2" folder.
These .dicom files can be downloaded from my Google Drive:
https://drive.google.com/file/d/1lHoMJgj_8Dt62JaR2mMlK9FDnfkesH5F/view?usp=sharing
I've sorted the files by their position.
Finally, I've created a 3D array and added all the images to it in order to plot the results (you can see them in the added image), which are slices of the CT scans. (reference: https://pydicom.github.io/pydicom/stable/auto_examples/image_processing/reslice.html#sphx-glr-auto-examples-image-processing-reslice-py)
Here's the full code:
import pydicom as dicom
import os
import matplotlib.pyplot as plt
import sys
import glob
import numpy as np
path = "./Case2"
ct_images = os.listdir(path)
slices = [dicom.read_file(path + '/' + s, force=True) for s in ct_images]
slices[0].ImagePositionPatient[2]
slices = sorted(slices, key = lambda x: x.ImagePositionPatient[2])
#print(slices)
# Read a dicom file with a ctx manager
with dicom.dcmread(path + '/' + ct_images[0]) as ds:
    # plt.imshow(ds.pixel_array, cmap=plt.cm.bone)
    print(ds)
    # plt.show()
fig = plt.figure()
for num, each_slice in enumerate(slices[:12]):
    y = fig.add_subplot(3, 4, num + 1)
    # print(each_slice)
    y.imshow(each_slice.pixel_array)
plt.show()
for i in range(len(ct_images)):
    with dicom.dcmread(path + '/' + ct_images[i], force=True) as ds:
        plt.imshow(ds.pixel_array, cmap=plt.cm.bone)
        plt.show()
# pixel aspects, assuming all slices are the same
ps = slices[0].PixelSpacing
ss = slices[0].SliceThickness
ax_aspect = ps[1]/ps[0]
sag_aspect = ps[1]/ss
cor_aspect = ss/ps[0]
# create 3D array
img_shape = list(slices[0].pixel_array.shape)
img_shape.append(len(slices))
img3d = np.zeros(img_shape)
# fill 3D array with the images from the files
for i, s in enumerate(slices):
    img2d = s.pixel_array
    img3d[:, :, i] = img2d
# plot 3 orthogonal slices
a1 = plt.subplot(2, 2, 1)
plt.imshow(img3d[:, :, img_shape[2]//2])
a1.set_aspect(ax_aspect)
a2 = plt.subplot(2, 2, 2)
plt.imshow(img3d[:, img_shape[1]//2, :])
a2.set_aspect(sag_aspect)
a3 = plt.subplot(2, 2, 3)
plt.imshow(img3d[img_shape[0]//2, :, :].T)
a3.set_aspect(cor_aspect)
plt.show()
The result isn't what I wanted because:
These are slices of the CT scans. I need to simulate an X-ray scan, which is a scan that goes through the whole body.
Would love your help to simulate an X-Ray scan that goes through the body.
I've read that it could be done in the following way: "A normal 2D X-ray image is a sum projection through the volume. Send parallel rays through the volume and add up the densities." I'm not sure how that is accomplished in code.
References that may help: https://pydicom.github.io/pydicom/stable/index.html
EDIT: as further answers noted, this solution yields a parallel projection, not a perspective projection.
From what I understand of the definition of "a normal 2D X-ray image", this can be done by summing, for each pixel of the projection, the densities of all the slices along the given direction.
With your 3D volume, this means performing a sum over a given axis, which can be done with ndarray.sum(axis) in numpy.
# plot 3 orthogonal slices
a1 = plt.subplot(2, 2, 1)
plt.imshow(img3d.sum(2), cmap=plt.cm.bone)
a1.set_aspect(ax_aspect)
a2 = plt.subplot(2, 2, 2)
plt.imshow(img3d.sum(1), cmap=plt.cm.bone)
a2.set_aspect(sag_aspect)
a3 = plt.subplot(2, 2, 3)
plt.imshow(img3d.sum(0).T, cmap=plt.cm.bone)
a3.set_aspect(cor_aspect)
plt.show()
This yields the following result:
Which, to me, looks like an X-ray image.
EDIT: the result is a bit too "bright", so you may want to apply gamma correction. With matplotlib, import matplotlib.colors as colors and pass colors.PowerNorm(gamma_value) as the norm parameter to plt.imshow:
plt.imshow(img3d.sum(0).T, norm=colors.PowerNorm(gamma=3), cmap=plt.cm.bone)
Result:
The way I understand the task, you are expected to write a ray-tracer that follows the X-rays from the source (that's why you need its position) to the projection plane (that's why you need its position too).
Sum up the values as you go and map them to the allowed grey values at the end.
Take a look at line drawing algorithms to see how you can do this.
It is really no black magic; I did this kind of stuff more than 30 years ago. Damn, I'm old...
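As a rough illustration of that idea (not the original poster's task code), here is a minimal sketch that sums voxel values along a straight line through the volume, using uniform sampling in place of a classic line-drawing algorithm:

import numpy as np

def cast_ray(volume, src, dst, n_samples=512):
    """Sum densities along the segment src -> dst (both in voxel coordinates)."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    pts = (1.0 - t) * np.asarray(src, float) + t * np.asarray(dst, float)
    idx = np.rint(pts).astype(int)                 # nearest-neighbour voxel indices
    inside = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=1)
    return volume[tuple(idx[inside].T)].sum()

For a perspective projection you would call this once per projection-plane point, with src fixed at the X-ray source position, and map the accumulated sums to grey values at the end.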
What you want is a perspective projection instead of a parallel projection. In order to obtain this, you need to know which values to sum for each point on the projection plane. There are multiple considerations to keep in mind:
We are talking about voxels, so you need a method to determine whether a certain point in space belongs to a certain voxel in your volume.
A line between two points is straight, but because voxels are a discrete representation of space, different methods of determining the above can lead to different (mostly minor) results. This difference will ultimately also lead to slightly different images depending on the algorithms used. This is expected.
Let's say you have a CT scan volume comprising 256 slices of 512x512 pixels. This gives you a volume of 512x512x256 voxels. For each of these voxels you need to know its position in x,y,z coordinates. You can do this as follows:
- Use the ImagePositionPatient attribute to find out the x,y,z coordinate of the upper left hand corner pixel in mm for a given slice.
- Use the PixelSpacing attribute to calculate the x,y,z coordinates of the other pixels in your slice. Repeat for all slices
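A minimal sketch of that calculation for a single slice, assuming axis-aligned slices (i.e. ignoring ImageOrientationPatient; the helper name is mine):

import numpy as np

def voxel_positions(ds):
    # x, y, z coordinates (in mm) of every pixel centre in one axis-aligned slice
    x0, y0, z0 = (float(v) for v in ds.ImagePositionPatient)
    row_sp, col_sp = (float(v) for v in ds.PixelSpacing)  # spacing between rows, between columns
    rows, cols = ds.pixel_array.shape
    jj, ii = np.meshgrid(np.arange(cols), np.arange(rows))
    x = x0 + jj * col_sp       # columns step along x
    y = y0 + ii * row_sp       # rows step along y
    z = np.full(x.shape, z0)   # constant within the slice
    return x, y, z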
Edit: I just found a counterexample to the method below; the rest is still helpful. Will update.
Now, to find out for a given point (Xa, Ya, Za) which voxel values need to be summed if the source is at (Xb, Yb, Zb):
Find the voxel that belongs to (Xa, Ya, Za). Keep its pixel/voxel data.
Calculate (you can do this with NumPy) the distance between voxel (Xa, Ya, Za) and (Xb, Yb, Zb). There is an optimization possible here :)
For all directly surrounding voxels (that will be 3x3x3-1 = 26 voxels) also calculate this distance. This can also be optimized :)
Take the voxel with the shortest distance as the starting point for the next iteration of the above. Add its pixel/voxel data.
Repeat until you are out of bounds of your CT volume.
In order to obtain a projection repeat these steps for all points on your projection plane and visualize the result. Good luck with your assignment! :)
I would like to create a visualization like the upper part of this image. Essentially, a heatmap where each point in time has a fixed number of components but these components are anchored to the y axis by means of labels (that I can supply) rather than by their first index in the heatmap's matrix.
I am aware of pcolormesh, but that does not seem to give me the y-axis functionality I seek.
Lastly, I am also open to solutions in R, although a Python option would be much preferable.
I am not completely sure if I understand your meaning correctly, but by looking at the picture you have linked, you might be best off with a roll-your-own solution.
First, you need to create an array with the heatmap values so that you have one row for each label and one column for each time slot. You fill the array with NaNs and then write whatever heatmap values you have to the correct positions.
Then you need to trick imshow a bit to scale and show the image in the correct way.
For example:
import numpy as np
import matplotlib.pyplot as plt

# create some masked data
a = np.cumsum(np.random.random((20, 200)), axis=0)
X, Y = np.meshgrid(np.arange(a.shape[1]), np.arange(a.shape[0]))
a[Y < 15 * np.sin(X / 50.)] = np.nan
a[Y > 10 + 15 * np.sin(X / 50.)] = np.nan

# draw the image along with some curves
plt.imshow(a, interpolation='nearest', origin='lower', extent=[-2, 2, 0, 3])
xd = np.linspace(-2, 2, 200)
yd = 1 + .1 * np.cumsum(np.random.random(200) - .5)
plt.plot(xd, yd, 'w', linewidth=3)
plt.plot(xd, yd, 'k', linewidth=1)
plt.axis('auto')  # 'normal' is a deprecated alias for 'auto'
plt.show()
Gives:
I have a series of x,y coordinates and associated heading angles for multiple aircraft. I can plot the paths flown, and I would like to use a special marker to mark a particular location along the path that also shows the aircraft's heading when it was at that location.
Using matplotlib.pyplot I've used an arrowhead with no base to do this, but having to define the head and tail locations ended up with inconsistent arrowhead lengths when plotting multiple aircraft. I also used a custom three-sided symbol with the tuple (numsides, style, angle) as well as the wedge and bigvee symbols, but they never look very good.
In Custom arrow style for matplotlib, pyplot.annotate, Saullo Castro showed a nice custom arrow (arrow1), and I'm wondering whether it can be used or converted in such a way as to simply plot it at a given x,y and have its orientation defined by a heading angle.
I can plot the custom arrow with the following. Any ideas on how to rotate it to reflect a heading?
a1 = np.array([[0,0],[0,1],[-1,2],[3,0],[-1,-2],[0,-1],[0,0]], dtype=float)
polB = patches.Polygon(a1, closed=True, facecolor='grey')
ax.add_patch(polB)
Thanks in advance.
So I made the polygon a little simpler and also found that the rotation could be done by using mpl.transforms.Affine2D().rotate_deg_around():
a2 = np.array([[newX,newY+2],[newX+1,newY-1],[newX,newY],[newX-1,newY-1],[newX,newY+2]], dtype=float)
polB = patches.Polygon(a2, closed=True, facecolor='gold')
t2 = mpl.transforms.Affine2D().rotate_deg_around(newX,newY,heading) + newax.transData
polB.set_transform(t2)
newax.add_patch(polB)
I first tried to overlay the polygon on a line plotted from the x,y coordinates. However, the scales of the x and y axes were not equal (nor did I want them to be), so the polygon ended up looking all warped and stretched when rotated. I got around this by first adding a new axis with equal x/y scaling:
newax = fig.add_axes(ax.get_position(), frameon=False)
newax.set_xlim(-20,20)
newax.set_ylim(-20,20)
I could at least then rotate all I wanted without the warp issue. But then I needed to figure out how to connect the two axes, so that I could plot the polygon on the new axis at a point referenced from the original axis. The way I did this was with transformations: take the data coordinates on the original axis, convert them to display coordinates, and then invert those back into data coordinates, except this time on the new axis:
inTrans = ax.transData.transform((x, y))
inv = newax.transData.inverted()
newTrans = inv.transform((inTrans[0], inTrans[1]))
newX = newTrans[0]
newY = newTrans[1]
It felt a little like some sort of Rube Goldberg machine to do it this way, but it did what I wanted.
In the end, I decided I didn't like this approach and went with keeping it simpler and using a fancy arrowhead instead of a polygon. Such is life...
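For reference, here are the pieces above combined into one self-contained sketch (the path, point, and heading values are made up):

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.patches as patches

fig, ax = plt.subplots()
ax.plot([0, 50], [0, 5000])              # hypothetical flight path, unequal axis scales
x, y, heading = 25.0, 2500.0, 45.0       # marker location and heading in degrees

# overlay axis with equal x/y data ranges so the rotated marker is not distorted
newax = fig.add_axes(ax.get_position(), frameon=False)
newax.set_xlim(-20, 20)
newax.set_ylim(-20, 20)
newax.set_xticks([])
newax.set_yticks([])

# map the point from ax's data coordinates into newax's data coordinates
disp = ax.transData.transform((x, y))
newX, newY = newax.transData.inverted().transform(disp)

# build the arrow polygon at the mapped point, then rotate it by the heading
a2 = np.array([[newX, newY + 2], [newX + 1, newY - 1], [newX, newY],
               [newX - 1, newY - 1], [newX, newY + 2]], dtype=float)
polB = patches.Polygon(a2, closed=True, facecolor='gold')
t2 = mpl.transforms.Affine2D().rotate_deg_around(newX, newY, heading) + newax.transData
polB.set_transform(t2)
newax.add_patch(polB)
plt.show()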
How do I invert a color mapped image?
I have a 2D image which plots data on a colormap. I'd like to read the image in and 'reverse' the color map, that is, look up a specific RGB value, and turn it into a float.
For example:
using this image: http://matplotlib.sourceforge.net/_images/mri_demo.png
I should be able to get a 440x360 matrix of floats, knowing the colormap was cm.jet
from pylab import imread
import matplotlib.cm as cm
a=imread('mri_demo.png')
b=colormap2float(a,cm.jet) #<-tricky part
There may be better ways to do this; I'm not sure.
If you read help(cm.jet) you will see the algorithm used to map values in the interval [0,1] to RGB 3-tuples. You could, with a little paper and pencil, work out formulas to invert the piecewise-linear functions which define the mapping.
However, there are a number of issues which make the paper and pencil solution somewhat unappealing:
- It's a lot of laborious algebra, and the solution is specific to cm.jet. You'd have to do all this work again if you change the color map. How to automate the solving of these algebraic equations is interesting, but not a problem I know how to solve.
- In general, the color map may not be invertible (more than one value may be mapped to the same color). In the case of cm.jet, values between 0.11 and 0.125 are all mapped to the RGB 3-tuple (0,0,1), for example. So if your image contains a pure blue pixel, there is really no way to tell if it came from a value of 0.11 or a value of, say, 0.125.
- The mapping from [0,1] to 3-tuples is a curve in 3-space. The colors in your image may not lie perfectly on this curve. There might be round-off error, for example. So any practical solution has to be able to interpolate or somehow project points in 3-space onto the curve.
Due to the non-uniqueness issue, and the projection/interpolation issue, there can be many possible solutions to the problem you pose. Below is just one possibility.
Here is one way to resolve the uniqueness and projection/interpolation issues:
Create a gradient which acts as a "code book". The gradient is an array of RGBA 4-tuples in the cm.jet color map. The colors of the gradient correspond to values from 0 to 1. Use scipy's vector quantization function scipy.cluster.vq.vq to map all the colors in your image, mri_demo.png, onto the nearest color in gradient.
Since a color map may use the same color for many values, the gradient may contain duplicate colors. I leave it up to scipy.cluster.vq.vq to decide which (possibly) non-unique code book index to associate with a particular color.
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import scipy.cluster.vq as scv

def colormap2arr(arr, cmap):
    # http://stackoverflow.com/questions/3720840/how-to-reverse-color-map-image-to-scalar-values/3722674#3722674
    gradient = cmap(np.linspace(0.0, 1.0, 100))
    # Reshape arr to something like (240*240, 4), all the 4-tuples in a long list...
    arr2 = arr.reshape((arr.shape[0]*arr.shape[1], arr.shape[2]))
    # Use vector quantization to shift the values in arr2 to the nearest point in
    # the code book (gradient).
    code, dist = scv.vq(arr2, gradient)
    # code is an array of length arr2 (240*240), holding the code book index for
    # each observation. (arr2 are the "observations".)
    # Scale the values so they are from 0 to 1.
    values = code.astype('float') / gradient.shape[0]
    # Reshape values back to (240, 240)
    values = values.reshape(arr.shape[0], arr.shape[1])
    values = values[::-1]
    return values

arr = plt.imread('mri_demo.png')
values = colormap2arr(arr, cm.jet)
# Proof that it works:
plt.imshow(values, interpolation='bilinear', cmap=cm.jet,
           origin='lower', extent=[-3, 3, -3, 3])
plt.show()
The image you see should be close to reproducing mri_demo.png:
(The original mri_demo.png had a white border. Since white is not a color in cm.jet, note that scipy.cluster.vq.vq maps white to the closest point in the gradient code book, which happens to be a pale green color.)
Here is a simpler approach that works for many colormaps, e.g. viridis, though not for LinearSegmentedColormaps such as 'jet'.
The colormaps are stored as lists of [r,g,b] values. For lots of colormaps, this map has exactly 256 entries. A value between 0 and 1 is looked up using its nearest neighbor in the color list. So, you can't get the exact value back, only an approximation.
Some code to illustrate the concepts:
from matplotlib import pyplot as plt

def find_value_in_colormap(tup, cmap):
    # for a cmap like viridis, the result of the colormap lookup is a tuple (r, g, b, a), with a always being 1
    # but the colors array is stored as a list [r, g, b]
    # for some colormaps, the situation is reversed: the lookup returns a list, while the colors array contains tuples
    tup = list(tup)[:3]
    colors = cmap.colors
    if tup in colors:
        ind = colors.index(tup)
    elif tuple(tup) in colors:
        ind = colors.index(tuple(tup))
    else:  # tup was not generated by this colormap
        return None
    return (ind + 0.5) / len(colors)

val = 0.3
tup = plt.cm.viridis(val)
print(find_value_in_colormap(tup, plt.cm.viridis))
This prints the approximate value:
0.298828125
which is the value corresponding to the color triple.
To illustrate what happens, here is a visualization of the function looking up a color for a value, followed by getting the value corresponding to that color.
from matplotlib import pyplot as plt
import numpy as np

x = np.linspace(-0.1, 1.1, 10000)
y = [find_value_in_colormap(plt.cm.viridis(v), plt.cm.viridis) for v in x]

fig, axes = plt.subplots(ncols=3, figsize=(12, 4))
for ax in axes.ravel():
    ax.plot(x, x, label='identity: y = x')
    ax.plot(x, y, label='lookup, then reverse')
    ax.legend(loc='best')
axes[0].set_title('overall view')
axes[1].set_title('zoom near x=0')
axes[1].set_xlim(-0.02, 0.02)
axes[1].set_ylim(-0.02, 0.02)
axes[2].set_title('zoom near x=1')
axes[2].set_xlim(0.98, 1.02)
axes[2].set_ylim(0.98, 1.02)
plt.show()
For a colormap with only a few colors, a plot can show the exact position where one color changes to the next. The plot is colored corresponding to the x-values.
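For instance, here is a minimal sketch using a hypothetical 8-color map built from viridis samples (it reuses find_value_in_colormap from above):

import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap

# a discrete map with only 8 colors, sampled from viridis; only (r, g, b) is kept,
# to match the 3-element entries that find_value_in_colormap compares against
few = ListedColormap([plt.cm.viridis(v)[:3] for v in np.linspace(0, 1, 8)])

x = np.linspace(0, 1, 1000)
y = [find_value_in_colormap(few(v), few) for v in x]
plt.scatter(x, y, c=x, cmap=few, s=4)   # color each point by its input value
plt.xlabel('input value')
plt.ylabel('recovered value')
plt.show()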
Hi unutbu,
Thanks for your reply. I understand the process you explain and have reproduced it. It works very well; I use it to reverse IR camera shots into temperature grids, since a picture can easily be reworked/reshaped to fulfill my purpose using GIMP.
I'm able to create grids of scalars from camera shots, which is really useful in my tasks.
I use a palette file that I create using GIMP + Sample a Gradient Along a Path.
I pick the color bar of my original picture, convert it to a palette, then export it as a hex color sequence.
I read this palette file to create a colormap, normalized by a temperature sample, to be used as the code book.
I read the original image and use vector quantization to reverse colors into values.
I slightly improved the pythonic style of the code by using the code book indices as an index filter into the temperature sample array, and applied some filter passes to smooth my results.
from numpy import linspace, savetxt
from matplotlib.colors import Normalize, LinearSegmentedColormap
from scipy.cluster.vq import vq

# sample the values to find, spanning the colorbar extremes
vmin = -20.
vmax = 120.
precision = 1.
resolution = int(1 + (vmax - vmin) / precision)
sample = linspace(vmin, vmax, resolution)

# create the code book from the sample
# (hex_color_list holds the palette colors read from the GIMP palette file)
cmap = LinearSegmentedColormap.from_list('Custom', hex_color_list)
norm = Normalize()
code_book = cmap(norm(sample))

# quantize colors; flat_image is the image reshaped to an (n_pixels, 4) RGBA array
indices = vq(flat_image, code_book)[0]

# filter sample from quantization results (improved)
values = sample[indices]

savetxt(image_file_name[:-3] + '.csv', values, delimiter=',', fmt='%-8.1f')
The results are finally exported in .csv
The most important thing is to create a well-representative palette file to obtain good precision. I started to obtain a good gradient (code book) using 12 colors or more.
This process is useful since sometimes camera shots cannot be translated to gray-scale easily and linearly.
Thanks to all contributors unutbu, Rob A, scipy community ;)
The LinearSegmentedColormap doesn't give me the same interpolation as doing it manually in my tests, so I prefer to use my own.
As an advantage, matplotlib is no longer required, since I integrated my code within an existing software:
import numpy as np

def codeBook(color_list, N=256):
    """
    Return N colors interpolated from an RGB color list.
    !!! workaround for the matplotlib colormap, to avoid the dependency !!!
    """
    # separate the r, g, b channels
    rgb = np.array(color_list).T
    # normalized data point sets
    new_x = np.linspace(0., 1., N)
    x = np.linspace(0., 1., len(color_list))
    # interpolate each color channel
    rgb = [np.interp(new_x, x, channel) for channel in rgb]
    # round the elements of the array to the nearest integer
    return np.rint(np.column_stack(rgb)).astype('int')
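A hypothetical usage example (the anchor colors are made up), interpolating a three-color list up to a 256-entry code book:

anchors = [[0, 0, 255], [0, 255, 0], [255, 0, 0]]   # blue -> green -> red, 0-255 RGB
book = codeBook(anchors, N=256)
print(book.shape)          # (256, 3)
print(book[0], book[-1])   # the end rows match the first and last anchors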