In my PhD project I analyze 3D microCT datasets of lung tissue samples. One topic is the simulation of atelectasis by warping the image using ITK Python. To achieve that (with the WarpImageFilter or the ResampleImageFilter in ITK) I have to create a displacement vector field, which means converting a 3D numpy array into an ITK image using the GetImageFromArray function. The resulting output should be in a format that the ResampleImageFilter or WarpImageFilter can work with:
Here's my code:
import itk
import numpy as np

array1 = []
for i in range(-5, 5):
    for j in range(-5, 5):
        for k in range(-5, 5):
            if i == 0 and j == 0 and k == 0:
                array1.append([0, 0, 0])
            else:
                x = float(i) / float(i**2 + j**2 + k**2)
                y = float(j) / float(i**2 + j**2 + k**2)
                z = float(k) / float(i**2 + j**2 + k**2)
                array1.append([x, y, z])

displacementFieldFileName = itk.image_from_array(np.reshape(array1, (10, 10, 10, 3)), is_vector=True)
The last line converts the numpy array into the 3D ITK vector image format needed by the filters mentioned above. However, I receive the following error message:
Traceback (most recent call last):
  File "Test_Displacement.py", line 39, in <module>
    displacementFieldFileName = itk.image_from_array(np.reshape(array1, (10,10,10,3)), is_vector = True)
  File "/XXXX/YYYY/.local/lib/python2.7/site-packages/itkExtras.py", line 297, in GetImageFromArray
    return _GetImageFromArray(arr, "GetImageFromArray", is_vector)
  File "/XXXX/YYYY/.local/lib/python2.7/site-packages/itkExtras.py", line 291, in _GetImageFromArray
    templatedFunction = getattr(itk.PyBuffer[ImageType], function)
  File "/XXXX/YYYY/.local/lib/python2.7/site-packages/itkTemplate.py", line 340, in __getitem__
    raise TemplateTypeError(self, tuple(cleanParameters))
itkTemplate.TemplateTypeError: itk.PyBuffer is not wrapped for input type itk.Image[itk.Vector[itk.D,3],3].
A similar topic can be found here:
https://discourse.itk.org/t/importing-image-from-array-and-axis-reorder/1192
I already tried using dtype=np.float32 and .astype(np.float32) to specify the float data type but this leads to another error:
File "Test_Displacement.py", line 59, in <module>
fieldReader.SetFileName(displacementFieldFileName)
TypeError: in method 'itkImageFileReaderIF3_SetFileName', argument 2 of type 'std::string const &'
How can the displacement field be created properly? Any help will be highly appreciated!
Alex
It seems like it's asking for:
itk.Image[itk.Vector[itk.D,3],3]
Not a numpy array. Or maybe your numpy array has the wrong dimensionality.
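One thing that may help (a sketch, untested against your setup): cast the array to float32, which maps to the wrapped type itk.Image[itk.Vector[itk.F,3],3], and then pass the resulting image object directly to the warp filter. Your second traceback shows the image being handed to fieldReader.SetFileName, which expects a path string, so the file reader should be skipped entirely.
import itk
import numpy as np

# Reusing array1 from the snippet above. float32 maps to itk.F, for which
# PyBuffer is wrapped; the default float64 maps to itk.D, which is not.
field_array = np.reshape(array1, (10, 10, 10, 3)).astype(np.float32)
displacement_field = itk.image_from_array(field_array, is_vector=True)

# displacement_field is already an itk.Image, so no file reader is needed.
# Hypothetical filter wiring (adapt to your pipeline):
# warpFilter.SetDisplacementField(displacement_field)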
I have this function here designed to create uniform particles over given x and y ranges, which are going to be 1x2 matrices. However, when I try and run it, I get the error below. I feel that there is a slicker way to assign the x and y values into my particles matrix. How can I solve this?
import numpy as np
from numpy.random import uniform

def create_uniform_particles(x_range, y_range, N):
    particles = np.empty((N, 2))
    new_x = uniform(x_range[0], x_range[1], size=(N, 1))
    new_y = uniform(y_range[0], y_range[1], size=(N, 1))
    for i in range(N):
        particles[i][0] = new_x[i]
        particles[i][1] = new_y[i]
    return particles
#Error:
Traceback (most recent call last):
  File "/Users/scottdayton/PycharmProjects/Uncertainty Research/particle.py", line 83, in <module>
    particle_filter(init, sigma, obs, n, trans, sigma0)
  File "/Users/scottdayton/PycharmProjects/Uncertainty Research/particle.py", line 49, in particle_filter
    particles = create_uniform_particles(new_x_range, new_y_range, n)
  File "/Users/scottdayton/PycharmProjects/Uncertainty Research/particle.py", line 8, in create_uniform_particles
    new_x = uniform(x_range[0], x_range[1], size=(N,1))
IndexError: too many indices for array
Your code for this function appears to be correct (at least, it works for me without any modifications) when I do:
create_uniform_particles([0,1],[2,3],5)
I recommend verifying that the variables one level above create_uniform_particles (wherever you set up new_x_range and new_y_range) have the shapes you were expecting. Since the function works for correctly passed inputs, the problem is probably happening somewhere around there; see the sketch below for one way to check.
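As a concrete illustration (a sketch): a 0-d array reproduces exactly this IndexError, so printing the shapes just before the call should expose the culprit.
import numpy as np

x_range = np.array(0.5)  # a 0-d array, e.g. from an accidental reduction
print(x_range.shape)     # prints (), not the expected (2,)
x_range[0]               # IndexError: too many indices for array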
In terms of assigning the x's and y's, you can use hstack to concatenate the new_x and new_y vectors into one array. Give the version below a try if you like it better. As a side note, the alternative to hstack is vstack, which concatenates the vectors one after the other instead of "next to" each other in your case (see the shape comparison after the code).
import numpy as np
from numpy.random import uniform

def create_uniform_particles(x_range, y_range, N):
    new_x = uniform(x_range[0], x_range[1], size=(N, 1))
    new_y = uniform(y_range[0], y_range[1], size=(N, 1))
    return np.hstack([new_x, new_y])

create_uniform_particles([0, 1], [2, 3], 5)
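For example, the two stacking choices give these shapes (a quick sketch with N = 5):
import numpy as np
from numpy.random import uniform

new_x = uniform(0, 1, size=(5, 1))
new_y = uniform(2, 3, size=(5, 1))
print(np.hstack([new_x, new_y]).shape)  # (5, 2), one (x, y) row per particle
print(np.vstack([new_x, new_y]).shape)  # (10, 1), all x's stacked above all y's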
I'm trying to rewrite the Zhao-Koch steganography method from Matlab into Python and I am stuck right at the start.
The first two steps, as they are in Matlab:
Step 1:
A = imread(casepath); % read the steganography case image and acquire its RGB values; in my case it's a 400x400 PNG image, so this gives a 400x400x3 array
Step 2:
D = dct2(A(:,:,3)); % apply the 2D DCT to the blue values of the image
Python code analog:
from scipy import misc
from numpy import empty, arange, exp, real, imag, pi
from numpy.fft import rfft, irfft

arr = misc.imread('casepath')  # 400x480x3 array (Step 1)
arr[20, 30, 2]  # getting a blue pixel value

def dct(y):  # basic DCT built from numpy
    N = len(y)
    y2 = empty(2*N, float)
    y2[:N] = y[:]
    y2[N:] = y[::-1]
    c = rfft(y2)
    phi = exp(-1j*pi*arange(N)/(2*N))
    return real(phi*c[:N])

def dct2(y):  # 2D DCT built from numpy using the previous DCT function
    M = y.shape[0]
    N = y.shape[1]
    a = empty([M, N], float)
    b = empty([M, N], float)
    for i in range(M):
        a[i, :] = dct(y[i, :])
    for j in range(N):
        b[:, j] = dct(a[:, j])
    return b

D = dct2(arr)  # Step 2 analogue
However, when I try to execute the code I get the following error:
Traceback (most recent call last):
  File "path to .py file", line 31, in <module>
    D = dct2(arr)
  File "path to .py file", line 25, in dct2
    a[i,:] = dct(y[i,:])
  File "path to .py file", line 10, in dct
    y2[:N] = y[:]
ValueError: could not broadcast input array from shape (400,3) into shape (400)
Perhaps someone could kindly explain to me what I am doing wrong?
Additional info:
OS: Windows 10 Pro 64-bit
Python: 2.7.12
scipy: 0.18.1
numpy: 1.11.2
pillow: 3.4.1
Your code works fine, but it is designed to only accept a 2D array, just like dct2() in Matlab. Since your arr is a 3D array, you want to do
D = dct2(arr[...,2])
As mentioned in my comment, instead of reinventing the wheel, use the (fast) built-in dct() from the scipy package.
The code from the link in my comment effectively provides you this:
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block.T, norm='ortho').T, norm='ortho')

def idct2(block):
    return idct(idct(block.T, norm='ortho').T, norm='ortho')
But again, I must stress that you have to call this function for each colour plane individually. Scipy's dct() will happily accept any N-dimensional array and will apply the transform along the last axis. Since the last axis holds your colour planes, not the rows and columns of your pixels, you'll get the wrong result. Yes, there is a way to address this with the axis input parameter, but I won't unnecessarily overcomplicate this answer.
Regarding the various DCT implementations involved here, your version and scipy's implementation give the same result if you omit the norm='ortho' parameter from the snippet above. But with that parameter included, scipy's transform will agree with Matlab's.
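Putting it together for your Step 2 (a sketch, using the placeholder path from your code):
import numpy as np
from scipy import misc
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block.T, norm='ortho').T, norm='ortho')

def idct2(block):
    return idct(idct(block.T, norm='ortho').T, norm='ortho')

arr = misc.imread('casepath')       # RGB image array (Step 1)
blue = arr[..., 2].astype(float)    # one colour plane, like A(:,:,3) in Matlab
D = dct2(blue)                      # Step 2, agrees with Matlab's dct2
print(np.allclose(blue, idct2(D)))  # round trip, should print True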
For the following code
# Numerical operation
SN_map_final = (new_SN_map - mean_SN) / sigma_SN
# Plot figure
fig12 = plt.figure(12)
fig_SN_final = plt.imshow(SN_map_final, interpolation='nearest')
plt.colorbar()
fig12 = plt.savefig(outname12)
with new_SN_map being a 1D array and mean_SN and sigma_SN being constants, I get the following error.
Traceback (most recent call last):
  File "c:\Users\Valentin\Desktop\Stage M2\density_map_simple.py", line 546, in <module>
    fig_SN_final = plt.imshow(SN_map_final, interpolation='nearest')
  File "c:\users\valentin\appdata\local\enthought\canopy\user\lib\site-packages\matplotlib\pyplot.py", line 3022, in imshow
    **kwargs)
  File "c:\users\valentin\appdata\local\enthought\canopy\user\lib\site-packages\matplotlib\__init__.py", line 1812, in inner
    return func(ax, *args, **kwargs)
  File "c:\users\valentin\appdata\local\enthought\canopy\user\lib\site-packages\matplotlib\axes\_axes.py", line 4947, in imshow
    im.set_data(X)
  File "c:\users\valentin\appdata\local\enthought\canopy\user\lib\site-packages\matplotlib\image.py", line 453, in set_data
    raise TypeError("Invalid dimensions for image data")
TypeError: Invalid dimensions for image data
What is the source of this error? I thought my numerical operations were allowed.
There is a (somewhat) related question on StackOverflow:
Showing an image with pylab.imshow()
Here the problem was that an array of shape (nx,ny,1) is still considered a 3D array, and must be squeezed or sliced into a 2D array.
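For example (a small sketch of that case):
import numpy as np
import matplotlib.pyplot as plt

a = np.random.rand(4, 5, 1)                       # still 3D as far as imshow is concerned
plt.imshow(a.squeeze(), interpolation='nearest')  # squeezed down to shape (4, 5)
plt.show()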
More generally, the reason for the exception
TypeError: Invalid dimensions for image data
is this: matplotlib.pyplot.imshow() needs a 2D array, or a 3D array with the third dimension of length 3 or 4!
You can easily check this with the function below (imshow performs these checks itself; this function is only meant to give a more specific message when the input is not valid):
from __future__ import print_function
import numpy as np

def valid_imshow_data(data):
    data = np.asarray(data)
    if data.ndim == 2:
        return True
    elif data.ndim == 3:
        if 3 <= data.shape[2] <= 4:
            return True
        else:
            print('The "data" has 3 dimensions but the last dimension '
                  'must have a length of 3 (RGB) or 4 (RGBA), not "{}".'
                  ''.format(data.shape[2]))
            return False
    else:
        print('To visualize an image the data must be 2 dimensional or '
              '3 dimensional, not "{}".'
              ''.format(data.ndim))
        return False
In your case:
>>> new_SN_map = np.array([1,2,3])
>>> valid_imshow_data(new_SN_map)
To visualize an image the data must be 2 dimensional or 3 dimensional, not "1".
False
The np.asarray call is what matplotlib.pyplot.imshow does internally, so it's generally best you do it too. If you already have a numpy array it's redundant, but if not (for example with a list) it's necessary.
In your specific case you got a 1D array, so you need to add a dimension with np.expand_dims()
import numpy as np
import matplotlib.pyplot as plt

a = np.array([1, 2, 3, 4, 5])
a = np.expand_dims(a, axis=0)  # or axis=1
plt.imshow(a)
plt.show()
or just use something that accepts 1D arrays like plot:
a = np.array([1,2,3,4,5])
plt.plot(a)
plt.show()
I want to perturb a set of points assuming a normal distribution. I am using scipy.stats.truncnorm, as I need to ensure that the perturbed points are always positive. Here is an MWE:
import numpy as np
from scipy.stats import truncnorm
# Generate points to perturb
N = 100000
z = np.random.rand(N)
sigmaz = (z+1.0)*0.03
# Set limits for truncnorm
a = (0.0-z)/sigmaz
b = np.ones_like(z)*np.inf
# Set size -- want to sample once for each point
size = tuple(np.ones(len(z)))
print truncnorm.rvs(a=a,b=b,loc=z,scale=sigmaz,size=size)
However, I am getting the following error:
Traceback (most recent call last):
  File "./test.py", line 17, in <module>
    print truncnorm.rvs(a=a,b=b,loc=z,scale=sigmaz,size=size)
  File "/share/modules/install_dir/anaconda/lib/python2.7/site-packages/scipy/stats/_distn_infrastructure.py", line 818, in rvs
    cond = logical_and(self._argcheck(*args), (scale >= 0))
  File "/share/modules/install_dir/anaconda/lib/python2.7/site-packages/scipy/stats/_continuous_distns.py", line 3796, in _argcheck
    if self.a > 0:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
So does anybody know how to get around this error and specify arrays for the mean and sigma, each with its own values for the bounds a and b?
Or does anybody know of another way to do this in python that avoids manual loops?
Thanks a lot for any help you can provide!
This is a known bug: the truncated normal distribution does not accept array-like bounds a and b. Most distributions accept array-like parameters, but this one does not.
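As a possible workaround until that is fixed (a sketch, not part of scipy's truncnorm API): sample through the inverse CDF of the standard normal, which vectorizes over per-point bounds.
import numpy as np
from scipy.stats import norm

N = 100000
z = np.random.rand(N)
sigmaz = (z + 1.0) * 0.03
a = (0.0 - z) / sigmaz       # per-point lower bound in standard units

# Draw u uniformly between cdf(a) and cdf(+inf) = 1, then invert the CDF.
u = np.random.uniform(norm.cdf(a), 1.0, size=N)
samples = z + sigmaz * norm.ppf(u)
print(samples.min() >= 0.0)  # the perturbed points stay positive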
I process rather large matrices in Python/Scipy. I need to extract rows from a large matrix (loaded as a coo_matrix) and use them as diagonal elements. Currently I do that in the following fashion:
import profile
import numpy as np
from scipy import sparse

def computation(A):
    for i in range(A.shape[0]):
        diag_elems = np.array(A[i, :].todense())
        ith_diag = sparse.spdiags(diag_elems, 0, A.shape[1], A.shape[1], format="csc")
        # ...

# create some random matrix
A = (sparse.rand(1000, 100000, 0.02, format="csc") * 5).astype(np.ubyte)

# get timings
profile.run('computation(A)')
What I see from the profile output is that most of the time is consumed by get_csr_submatrix function while extracting diag_elems. That makes me think that I use either inefficient sparse representation of initial data or wrong way of extracting row from a sparse matrix. Can you suggest a better way to extract a row from a sparse matrix and represent it in a diagonal form?
EDIT
The following variant removes the bottleneck from the row extraction (note that simply changing 'csc' to 'csr' is not sufficient; A[i,:] must be replaced with A.getrow(i) as well). However, the main question remains: how to avoid the materialization (.todense()) and create the diagonal matrix directly from the sparse representation of the row.
import profile
import numpy as np
from scipy import sparse

def computation(A):
    for i in range(A.shape[0]):
        diag_elems = np.array(A.getrow(i).todense())
        ith_diag = sparse.spdiags(diag_elems, 0, A.shape[1], A.shape[1], format="csc")
        # ...

# create some random matrix
A = (sparse.rand(1000, 100000, 0.02, format="csr") * 5).astype(np.ubyte)

# get timings
profile.run('computation(A)')
If I create a DIAgonal matrix from the 1-row CSR matrix directly, as follows:
diag_elems = A.getrow(i)
ith_diag = sparse.spdiags(diag_elems,0,A.shape[1],A.shape[1])
then I can neither specify the format="csc" argument nor convert ith_diag to CSC format:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.6/profile.py", line 70, in run
    prof = prof.run(statement)
  File "/usr/local/lib/python2.6/profile.py", line 456, in run
    return self.runctx(cmd, dict, dict)
  File "/usr/local/lib/python2.6/profile.py", line 462, in runctx
    exec cmd in globals, locals
  File "<string>", line 1, in <module>
  File "<stdin>", line 4, in computation
  File "/usr/local/lib/python2.6/site-packages/scipy/sparse/construct.py", line 56, in spdiags
    return dia_matrix((data, diags), shape=(m,n)).asformat(format)
  File "/usr/local/lib/python2.6/site-packages/scipy/sparse/base.py", line 211, in asformat
    return getattr(self,'to' + format)()
  File "/usr/local/lib/python2.6/site-packages/scipy/sparse/dia.py", line 173, in tocsc
    return self.tocoo().tocsc()
  File "/usr/local/lib/python2.6/site-packages/scipy/sparse/coo.py", line 263, in tocsc
    data = np.empty(self.nnz, dtype=upcast(self.dtype))
  File "/usr/local/lib/python2.6/site-packages/scipy/sparse/sputils.py", line 47, in upcast
    raise TypeError,'no supported conversion for types: %s' % args
TypeError: no supported conversion for types: object
Here's what I came up with:
def computation(A):
    for i in range(A.shape[0]):
        idx_begin = A.indptr[i]
        idx_end = A.indptr[i+1]
        row_nnz = idx_end - idx_begin
        diag_elems = A.data[idx_begin:idx_end]
        diag_indices = A.indices[idx_begin:idx_end]
        ith_diag = sparse.csc_matrix((diag_elems, (diag_indices, diag_indices)),
                                     shape=(A.shape[1], A.shape[1]))
        ith_diag.eliminate_zeros()
Python's profiler said 1.464 seconds versus 5.574 seconds before. The code takes advantage of the underlying dense arrays (indptr, indices, data) that define sparse matrices. Here's my crash course: A.indptr[i]:A.indptr[i+1] defines which elements in the dense arrays correspond to the non-zero values in row i. A.data is a dense 1D array of the non-zero values of A, and A.indices gives the column each of those values goes in.
I would do some more testing to make very certain this does the same thing as before. I only checked a few cases.
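Along those lines, here is one quick consistency check (a sketch comparing the CSR-slicing version against the original todense() route for a single row):
import numpy as np
from scipy import sparse

A = (sparse.rand(1000, 100000, 0.02, format="csr") * 5).astype(np.ubyte)
i = 0

# fast version: slice the underlying CSR arrays directly
elems = A.data[A.indptr[i]:A.indptr[i + 1]]
cols = A.indices[A.indptr[i]:A.indptr[i + 1]]
fast = sparse.csc_matrix((elems, (cols, cols)), shape=(A.shape[1], A.shape[1]))
fast.eliminate_zeros()

# slow reference version via materialization
dense_row = np.array(A.getrow(i).todense())
slow = sparse.spdiags(dense_row, 0, A.shape[1], A.shape[1], format="csc")

print(np.array_equal(fast.diagonal(), slow.diagonal()))  # expect True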