I'm trying to recreate the MATLAB function
max(array, [], 3)
which can take my 300x300 px image stack of N images (I say "image" because I'm processing images; really this is just a big double array), 300x300xN, and produce a 300x300 array. What I think is happening in this function, if it were implemented naively, is that it walks over each (x, y) point, takes the maximum value at that point across the z-axis, and then normalizes with the maximum and minimum values of the entire array.
I've tried recreating this in python with
# Shape of dataset: (300, 300, 181)
# Type of dataset: <type 'numpy.ndarray'>
for x in range(numpy.size(self.dataset, 0)):
    for y in range(numpy.size(self.dataset, 1)):
        print "Point is", x, y
        # more would go here to find the maximum (x,y) value over the Z axis in self.dataset
A very simple X,Y iterator -- but not only does my IDE crash after a few milliseconds of running this code, it also feels gross and inefficient.
Is there something I'm missing? I'm new to Python, and therefore the answer here isn't clear to me. Is there an existing function that does this operation?
import numpy as np
import matplotlib.pyplot as plt
from skimage import io
path = "test.tif"
IM = io.imread(path)            # the tif is read with the frames stacked along axis 0
IM_MAX = np.max(IM, axis=0)     # maximum-intensity projection across the stack
plt.imshow(IM_MAX)
plt.show()
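Note that in the question the stack has shape (300, 300, N), i.e. the frames sit along the last axis, so there the reduction would run over axis=2. A minimal sketch of that case, using a random placeholder array in place of self.dataset (with the min/max normalization mentioned in the question added as an optional, separate step):
import numpy as np

dataset = np.random.rand(300, 300, 181)                   # placeholder for the (300, 300, N) stack
mip = dataset.max(axis=2)                                 # maximum over the z-axis, like MATLAB's max(array, [], 3)
mip_norm = (mip - mip.min()) / (mip.max() - mip.min())    # optional normalization to [0, 1]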
Related
I have a gray scale image that I want to rotate. However, I need to do optimization on it. Therefore, I cannot use pillow or opencv.
I want to reshape this image with numpy.reshape into a one-dimensional vector (using the default C-style reshape).
And thereafter, I want to rotate this image around a point using matrix multiplication and addition, i.e. it should be something like
rotated_image_vector = A @ vector + b # (or the equivalent in homogeneous coordinates).
After this operation I want to reshape the outcome back to two dimensions and have the rotated image.
It would be best if it also used linear interpolation for the pixels that do not map exactly onto another pixel.
The mathematical theory says this is possible, and I believe there is a very elegant solution to this problem, but I do not see how to construct this matrix. Has anyone had this problem before, or does anyone see an immediate solution?
Thanks a lot,
Eike
I like your approach, but there is a slight misconception in it. What you want to transform are not the pixel values themselves but the coordinates. So you don't reshape your image; instead you call np.indices on it to obtain coordinates for each pixel. For those, a rotation around a point looks like
rotation_matrix @ (coordinates - fixed_point) + fixed_point
except that I have to transpose a bit to get the dimensions to align. The code below is a slight adaptation of my code in this answer.
As an example I am going to use the Wikipedia-logo-v2 by Nohat. It is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.
First I read in the picture, swap the x and y axes so I don't go mad, and rotate the coordinates as described above.
import numpy as np
import matplotlib.pyplot as plt
import itertools
image = plt.imread('wikipedia.jpg')
image = np.swapaxes(image, 0, 1)/255
fixed_point = np.array(image.shape[:2], dtype='float')/2                  # rotate around the image centre
points = np.moveaxis(np.indices(image.shape[:2]), 0, -1).reshape(-1, 2)   # (x, y) coordinates of every pixel
a = 2*np.pi/8                                                             # rotation angle: 45 degrees
A = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])           # 2D rotation matrix
rotated_coordinates = (A@(points-fixed_point.reshape(1, 2)).T).T + fixed_point.reshape(1, 2)
Now I set up a little class to interpolate between the pixels that do not map exactly onto another pixel. Finally I swap the axes back and plot the result.
class Image_knn():
    def fit(self, image):
        self.image = image.astype('float')

    def predict(self, x, y):
        # bilinear interpolation: weight the four neighbouring pixels by their distance
        image = self.image
        weights_x = [(1-(x % 1)).reshape(*x.shape, 1), (x % 1).reshape(*x.shape, 1)]
        weights_y = [(1-(y % 1)).reshape(*x.shape, 1), (y % 1).reshape(*x.shape, 1)]
        start_x = np.floor(x)
        start_y = np.floor(y)
        return sum([image[np.clip(start_x + dx, 0, image.shape[0]-1).astype('int'),
                          np.clip(start_y + dy, 0, image.shape[1]-1).astype('int')]
                    * weights_x[dx] * weights_y[dy]
                    for dx, dy in itertools.product(range(2), range(2))])
image_model = Image_knn()
image_model.fit(image)
transformed_image = image_model.predict(*rotated_coordinates.T).reshape(*image.shape)
plt.imshow(np.swapaxes(transformed_image,0,1))
plt.show()
And I get a result like this
Possible Issue
The artifact in the bottom left that looks like something one needs to wipe off the screen comes from the following problem: when we rotate, it can happen that we don't have enough pixels to paint the lower left. What image_knn does by default is clip the coordinates to an area where we do have information, so when we ask it for pixels coming from outside the image, it returns the pixels at the boundary of the image. That looks fine if there is a plain background, but if an object touches the edge of the picture it looks odd, like here. Just something to keep in mind when using this.
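If the clipped boundary pixels are not wanted, one option (not part of the answer above, just a sketch reusing the variables from the code) is to mask the destination pixels whose rotated coordinates fall outside the source image and paint them with a constant background instead:
x_rot, y_rot = rotated_coordinates.T
inside = ((x_rot >= 0) & (x_rot <= image.shape[0]-1) &
          (y_rot >= 0) & (y_rot <= image.shape[1]-1))
# transformed_image is C-contiguous, so this flat view writes through to it
transformed_image.reshape(-1, image.shape[2])[~inside] = 0   # black background
plt.imshow(np.swapaxes(transformed_image, 0, 1))
plt.show()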
Thank you for your answer!
But actually it is not a misconception: the rotation can indeed be represented by a matrix multiplication with the reshaped vector.
I used your code to generate such a matrix (it's surely not the most efficient way, but it works; most likely you see a more efficient implementation immediately XD. You see, I really need it as a matrix multiplication :-D).
What I basically did was generate the representation matrix of the linear transformation by computing how each of the 100*100 basis images (i.e. an image that is zero everywhere except for a single one) is mapped by your transformation.
import numpy as np
import matplotlib.pyplot as plt
import itertools

angle = 2*np.pi/6
image_expl = plt.imread('wikipedia.jpg')
image_expl = image_expl[:,:,0]            # use a single channel so the image is 2D
plt.imshow(image_expl)
plt.title("Image")
plt.show()

image_shape = image_expl.shape
pixel_number = image_shape[0]*image_shape[1]

rot_mat = np.zeros((pixel_number, pixel_number))
for i in range(pixel_number):
    # i-th basis image: zero everywhere except a single one at pixel i
    vector = np.zeros(pixel_number)
    vector[i] = 1
    image = vector.reshape(*image_shape)

    # rotate the coordinates of every pixel around the image centre
    fixed_point = np.array(image.shape, dtype='float')/2
    points = np.moveaxis(np.indices(image.shape), 0, -1).reshape(-1, 2)
    a = -angle
    A = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    rotated_coordinates = (A@(points-fixed_point.reshape(1, 2)).T).T + fixed_point.reshape(1, 2)

    # bilinear interpolation, inlined from the answer above
    x, y = rotated_coordinates.T
    image = image.astype('float')
    weights_x = [(1-(x % 1)).reshape(*x.shape), (x % 1).reshape(*x.shape)]
    weights_y = [(1-(y % 1)).reshape(*x.shape), (y % 1).reshape(*x.shape)]
    start_x = np.floor(x)
    start_y = np.floor(y)
    transformed_image_returned = sum([image[np.clip(start_x + dx, 0, image.shape[0]-1).astype('int'),
                                            np.clip(start_y + dy, 0, image.shape[1]-1).astype('int')]
                                      * weights_x[dx] * weights_y[dy]
                                      for dx, dy in itertools.product(range(2), range(2))])

    # the transformed basis image becomes the i-th column of the representation matrix
    rot_mat[:, i] = transformed_image_returned
    if i % 100 == 0: print(int(100*i/pixel_number), "% finished")

plt.imshow((rot_mat @ image_expl.reshape(-1)).reshape(image_shape))
plt.show()
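A side note on memory: for a 100x100 image, rot_mat is a dense 10000x10000 float64 array (about 800 MB), yet each column only has a handful of non-zero entries (the bilinear weights of the few destination pixels that sample that source pixel). If that becomes a problem, converting to a scipy sparse matrix is a cheap follow-up; a small sketch reusing the variables above:
from scipy import sparse

rot_mat_sparse = sparse.csr_matrix(rot_mat)   # keeps only the non-zero bilinear weights
rotated = (rot_mat_sparse @ image_expl.reshape(-1)).reshape(image_shape)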
Thank you again :-)
I have a 1D array containing data that looks like this (48000 points), spaced by one wavenumber (R = 1 cm-1). The shape of the x and y arrays is (48000, 1), and I want to rebin both in a similar way:
xarr=[50000,9999,9998,....,2000]
yarr=[0.1,0.02,0.8,0.5....0.1]
I wish to decrease the resolution, let's say to R = 10 cm-1, so I want ten times fewer points (4800), going from 50000 to 2000, and I want to do the same for the y array.
How to start?
I tried taking the natural log of the wavelength scale, then re-binning onto a new log-wavelength scale generated using np.linspace():
xi=np.log(xarr[0])
xf=np.log(xarr[-1])
xnew=np.linspace(xi, xf, num=4800)
Now I need to recast the y array onto this xnew array. I am thinking of using rebin, a 2D rebin, but I'm not sure how to use it. Any suggestions?
import numpy as np
arr1=[2,3,65,3,5...,32,2]
series=np.array(arr1)
print(series[:3])
I tried this and it seems to work!
import numpy as np
import scipy.stats as stats
#irregular x and y arrays
yirr= np.random.randint(1,101,10)
xirr=np.arange(10)
nbins=5
bin_means, bin_edges, binnumber = stats.binned_statistic(xirr,yirr, 'mean', bins=nbins)
yreg=bin_means # <== regularized yarr
xi=xirr[0]
xf=xirr[-1]
xreg=np.linspace(xi, xf, num=nbins)
print('yreg',yreg)
print('xreg',xreg) # <== regularized xarr
If anyone can find an improvement or see a problem with this, please post!
I'll try it on my logarithmically scaled data now
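For the ten-fold reduction described above (48000 points down to 4800), a plain reshape-and-mean gives the same kind of rebinning and is a handy cross-check; a sketch with placeholder arrays standing in for the real data (the (48000, 1) arrays from the question would need a .ravel() first):
import numpy as np

xarr = np.linspace(50000, 2000, 48000)    # placeholder wavenumber axis
yarr = np.random.rand(48000)              # placeholder signal

factor = 10
xreg = xarr.reshape(-1, factor).mean(axis=1)   # 4800 binned wavenumbers
yreg = yarr.reshape(-1, factor).mean(axis=1)   # 4800 binned intensities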
I am trying to convert a set of 3D points into a heightmap (a 2D image that shows the largest displacement of the points from the floor).
The only way I can come up with is writing a for loop that iterates through all points and updates the heightmap, but this method is quite slow.
import numpy as np
heightmap_resolution = 0.02
# generate some random 3D points
points = np.array([[x,y,z] for x in np.random.uniform(0,2,100) for y in np.random.uniform(0,2,100) for z in np.random.uniform(0,2,100)])
heightmap = np.zeros((int(np.max(points[:,1])/heightmap_resolution) + 1,
int(np.max(points[:,0])/heightmap_resolution) + 1))
for point in points:
    y = int(point[1]/heightmap_resolution)
    x = int(point[0]/heightmap_resolution)
    if point[2] > heightmap[y][x]:
        heightmap[y][x] = point[2]
I wonder if there is a better way of doing this. Any improvement is greatly appreciated!
The intuition:
If you find yourself using a for loop with numpy, you probably need to check again whether numpy has an operation for it. I saw you wanted to compare items to get the max, and I wasn't sure whether the structure was important, so I changed it.
The second point is that heightmap pre-allocates a lot of memory you aren't going to use. Try using a dictionary with a tuple (x, y) as the key, or a DataFrame as below.
import numpy as np
import pandas as pd
heightmap_resolution = 0.02
# generate some random 3D points
points = np.array([[x,y,z] for x in np.random.uniform(0,2,100) for y in np.random.uniform(0,2,100) for z in np.random.uniform(0,2,100)])
points_df = pd.DataFrame(points, columns = ['x','y','z'])
#didn't know if you wanted to keep the x and y columns so I made new ones.
points_df['x_normalized'] = (points_df['x']/heightmap_resolution).astype(int)
points_df['y_normalized'] = (points_df['y']/heightmap_resolution).astype(int)
heightmap = points_df.groupby(['x_normalized','y_normalized'])['z'].max()   # largest z per (x, y) cell
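If keeping the dense 2D heightmap array is important, another option (not from the answer above) is numpy's unbuffered np.maximum.at, which performs the scatter-max from the original loop without Python-level iteration; a sketch with a placeholder point cloud:
import numpy as np

heightmap_resolution = 0.02
points = np.random.uniform(0, 2, size=(100000, 3))   # placeholder (x, y, z) points

x_idx = (points[:, 0] / heightmap_resolution).astype(int)
y_idx = (points[:, 1] / heightmap_resolution).astype(int)
heightmap = np.zeros((y_idx.max() + 1, x_idx.max() + 1))

# for repeated (y, x) cells keep the largest z
np.maximum.at(heightmap, (y_idx, x_idx), points[:, 2])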
I made a 3D array which consists of numbers (0~4). What I want is to save the 3D array as a stack of 2D images (if possible, as a *.tiff file). What am I supposed to do?
import numpy as np
a = np.random.randint(0,5, size=(100,100,100))
a = a.astype('int8')
Actually, I made it work. This is my code.
With this code, I don't need to stack a series of 2D images (arrays).
I just make a 3D array and save it; that is all I did here.
import numpy as np
from skimage.external import tifffile as tif   # in newer scikit-image versions, import the standalone tifffile package directly
a = np.random.randint(0,5, size=(100,100,100))
a = a.astype('int8')
tif.imsave('a.tif', a, bigtiff=True)
This should work. I haven't tested it but I have separated color images into RGB slices using this method and it should work pretty much the same way here, assuming you don't want to do anything with those pixel values first. (They will be very close to the same color in an image).
import imageio
import numpy as np
a = np.random.randint(0,5, size=(100,100,100))
a = a.astype('int8')
for i in range(100):
    newimage = a[:, :, i]
    imageio.imwrite("path/to/image%d.tiff" % i, newimage)
What exactly do you mean by "stack"? As you refer to tiff as the output format, I assume you want your data in one file as a multiframe tiff.
This can easily be done with imageio's mimwrite() function:
# import numpy as np
# a = np.random.randint(0,5, size=(100,100,100))
# a = a.astype('int8')
import imageio
imageio.mimwrite("image.tiff", a)
Note that this function expects the counter for your frames as the first axis, with x and y following. See also its documentation.
However, if I'm wrong and you want to have n (e.g. 100) separate tif-files, you can also use the normal imwrite() function in a loop:
n = len(a)
for i in range(n):
    imageio.imwrite(f'image_{i:03}.tiff', a[i])
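To check that the stack round-trips, it can be read back with imageio's mimread (a quick sanity check, assuming the multiframe file written by mimwrite above):
import numpy as np
import imageio

stack = np.asarray(imageio.mimread("image.tiff"))   # list of frames -> array of shape (100, 100, 100)
print(stack.shape, stack.dtype)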
I am looking for a function that can be used to rebin an ndarray and that satisfies the following:
The result can have arbitrary dimensions, either upscaling or downscaling.
After the rebinning, the summation should be the same as before.
It should not change the overall image shape. In other words, it should be reversible in the case of upscaling.
The second constraint is not just summation normalization or something; the rebinning algorithm itself should calculate the fraction by which the original array elements overlap the resulting array elements.
The third condition can be tested in this way:
# image is ndarray with shape of 20x20
func(image, func(image, [40,40]),[20,20])==image # if func works as intended
So far I am aware of only two functions, which are
ndarray.resize: I don't fully understand what it does, but basically it is not what I am looking for.
scipy.misc.imresize: it interpolates the values of each element, which is not so good for my purpose.
But they do not satisfy the conditions I mentioned. As an example, I attached code to illustrate the behaviour of scipy.misc.imresize.
import numpy as np
from scipy.special import erf
import matplotlib.pyplot as plt
from scipy.misc import imresize
def gaussian(size, center, width, a):
    xcoord = np.arange(size[0])[:,np.newaxis] + np.zeros(size[1])[np.newaxis,:]
    ycoord = np.zeros(size[0])[:,np.newaxis] + np.arange(size[1])[np.newaxis,:]
    return a*((erf((xcoord+1-center[0])/(width[0]*np.sqrt(2)))-erf((xcoord-center[0])/(width[0]*np.sqrt(2))))*
              (erf((ycoord+1-center[1])/(width[1]*np.sqrt(2)))-erf((ycoord-center[1])/(width[1]*np.sqrt(2)))))
size=np.asarray([20,20])
c=[[0.1,0.2],[0.4,0.6],[0.8,0.4]]
c=[np.asarray(x) for x in c]
s=[[0.02,0.02],[0.05,0.05],[0.03,0.01]]
s=[np.asarray(x) for x in s]
im = gaussian(size, c[0]*size, s[0]*size, 1) \
+gaussian(size, c[1]*size, s[1]*size, 3) \
+gaussian(size, c[2]*size, s[2]*size, 2)
sciim=imresize(imresize(im,[40,40]),[20,20])
plt.imshow(im/np.sum(im)-sciim/np.sum(sciim))
plt.show()
So, is there any function, preferably a built-in function in some package, that satisfies my requirements?
For other languages, I know that frebin in IDL works as I described. Of course I could re-write the function, or perhaps someone already did, but I wonder whether there is an existing solution.
frebin implements pixel duplication when the expansion is by an integer factor (like the 2x increase in your toy problem). If you want similar reversibility in such cases, try this:
import numpy as np
import scipy.misc

def py_frebin(im, shape):
    # pixel duplication is only exactly reversible when one shape is an
    # integer multiple of the other, so fall back to Lanczos otherwise
    if np.all(np.mod(shape, im.shape) == 0) or np.all(np.mod(im.shape, shape) == 0):
        interp = 'nearest'
    else:
        interp = 'lanczos'
    im2 = scipy.misc.imresize(im, shape, interp=interp, mode='F')
    im2 *= im.sum() / im2.sum()      # rescale so the total sum is preserved
    return im2
It should be better than frebin for non-integer expansions (frebin seems to use interp = 'bilinear', which is less reversible), and similar for integer expansions.
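For the integer-factor case in the toy test above, a sum-preserving and exactly reversible rebin can also be written directly with numpy, independent of imresize; a sketch (upbin spreads each pixel over a block, downbin sums the block back):
import numpy as np

def upbin(im, factor):
    # spread each pixel's value evenly over a factor x factor block (total sum preserved)
    return np.kron(im, np.ones((factor, factor))) / factor**2

def downbin(im, factor):
    # sum each factor x factor block back into a single pixel (total sum preserved)
    h, w = im.shape
    return im.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

image = np.random.rand(20, 20)
assert np.allclose(downbin(upbin(image, 2), 2), image)   # reversible, as required by the test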