I'm swapping values of a multidimensional numpy array in Python. But the code is too slow. Another thread says:
Typically, you avoid iterating through them directly. ... there's a good chance that it's easy to vectorize.
So, do you know a way to optimize the following code?
import PIL.Image
import numpy
pil_image = PIL.Image.open('Image.jpg').convert('RGB')
cv_image = numpy.array(pil_image)
# Convert RGB to BGR
for y in range(len(cv_image)):
    for x in range(len(cv_image[y])):
        (cv_image[y][x][0], cv_image[y][x][2]) = (cv_image[y][x][2],
                                                  cv_image[y][x][0])
For a 509x359 image this takes more than one second, which is way too much. It should perform its task in no time.
How about this single operation, reversing the array along the last axis?
cv_image = cv_image[:,:,::-1]
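Note that this produces a reversed view that shares memory with the original array rather than a copy; if a contiguous buffer is needed afterwards (for example before passing it to another library), a copy can be forced explicitly, as in this small sketch:
bgr_view = cv_image[:, :, ::-1]                  # a view with negative strides, no data copied
bgr_copy = numpy.ascontiguousarray(bgr_view)     # contiguous copy, only if one is needed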
I made a 3D array consisting of numbers (0-4). What I want is to save the 3D array as a stack of 2D images (if possible, as a *.tiff file). What am I supposed to do?
import numpy as np
a = np.random.randint(0,5, size=(100,100,100))
a = a.astype('int8')
Actually, I figured it out. This is my code.
With this code, I don't need to stack a series of 2D images (arrays).
I just make the 3D array and save it directly. That is all I did here.
import numpy as np
from skimage.external import tifffile as tif
a = np.random.randint(0,5, size=(100,100,100))
a = a.astype('int8')
tif.imsave('a.tif', a, bigtiff=True)
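As a side note, the skimage.external module has been removed from newer scikit-image releases; the standalone tifffile package can be used directly instead, as in this small sketch (assuming tifffile is installed):
import numpy as np
import tifffile

a = np.random.randint(0, 5, size=(100, 100, 100)).astype('int8')
tifffile.imwrite('a.tif', a, bigtiff=True)   # one multi-page TIFF, first axis is the frame index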
This should work. I haven't tested it, but I have separated color images into RGB slices using this method, and it should work pretty much the same way here, assuming you don't want to do anything with those pixel values first. (They will be very close to the same color in an image.)
import imageio
import numpy as np
a = np.random.randint(0,5, size=(100,100,100))
a = a.astype('int8')
for i in range(100):
    newimage = a[:, :, i]
    imageio.imwrite("path/to/image%d.tiff" % i, newimage)
What exactly do you mean by "stack"? As you refer to TIFF as the output format, I assume you want your data in one file as a multi-frame TIFF.
This can easily be done with imageio's mimwrite() function:
# import numpy as np
# a = np.random.randint(0,5, size=(100,100,100))
# a = a.astype('int8')
import imageio
imageio.mimwrite("image.tiff", a)
Note that this function relies on having the counter for your several frames as the first axis, with x and y following. See also its documentation.
However, if I'm wrong and you want to have n (e.g. 100) separate tif-files, you can also use the normal imwrite() function in a loop:
n = len(a)
for i in range(n):
    imageio.imwrite(f'image_{i:03}.tiff', a[i])
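As a quick sanity check for that variant, the separate files can be read back and stacked into a single array again (a small sketch, assuming the file names used above):
import imageio
import numpy as np

restored = np.stack([imageio.imread(f'image_{i:03}.tiff') for i in range(n)])
print(restored.shape)   # should be (100, 100, 100), matching a.shape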
Code:
from PIL import Image
import numpy as np
img = Image.open('test.tif')
imarray = np.zeros(shape=(34, 23, 18))
for i in range(34):            # there are 34 images in the .tif file
    for j in range(18):        # each slice has size 18x23
        for k in range(23):
            try:
                img.seek(i)
                imarray[i, k, j] = img.getpixel((k, j))
            except EOFError:
                break
The purpose of this code is to accept .tif greyscale stacks. I want to be able to work with them as numpy arrays, so storing the original pixel values is essential.
This code successfully copies each slice to the np.array "imarray." However, it changes the values. For example, I printed all of the "img.getpixel" values for a given slice, and the values (type int) ranged between 2000 and 65500. However, the values in imarray (type float64) did not exceed 2800. I tried casting, i.e.:
imarray[0,j,i] = np.float64(img.getpixel((j,i)))
But it did not help. How can I revise this code so that my input data (the img.getpixel values) is not changed? If there are better alternatives to this approach, I'm happy to hear them.
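One alternative worth trying, sketched below, is to skip the per-pixel getpixel loop entirely and read the whole stack as an array in one go, assuming the tifffile package is installed; the shape/range printout is just a quick check:
import numpy as np
import tifffile

stack = tifffile.imread('test.tif')                  # whole stack as one array, shape (frames, rows, cols)
imarray = stack.astype(np.float64)
print(imarray.shape, imarray.min(), imarray.max())   # check that the full 16-bit range survived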
I am looking for a function that can be used to rebin an ndarray and that satisfies the following:
The result can have arbitrary dimensions, either upscaled or downscaled.
After the rebinning, the total sum should be the same as before.
It should not change the overall image shape. In other words, it should be reversible in the case of upscaling.
The second condition is not just sum normalization after the fact; the rebinning algorithm itself should compute the fraction by which each original array element overlaps each resulting array element.
The third condition can be tested in this way:
# image is an ndarray with shape (20, 20)
func(image, func(image, [40, 40]), [20, 20]) == image   # holds if func works as intended
So far I am aware of only two functions, which are
ndarray.resize: I don't fully understand what it does, but it is basically not what I am looking for.
scipy.misc.imresize: It interpolates the values of each element, which is not good for my purpose.
But they do not satisfy the conditions I mentioned. As an example, I attached code to demonstrate the behaviour of scipy.misc.imresize.
import numpy as np
from scipy.special import erf
import matplotlib.pyplot as plt
from scipy.misc import imresize
def gaussian(size, center, width, a):
    xcoord = np.arange(size[0])[:, np.newaxis] + np.zeros(size[1])[np.newaxis, :]
    ycoord = np.zeros(size[0])[:, np.newaxis] + np.arange(size[1])[np.newaxis, :]
    return a*((erf((xcoord + 1 - center[0])/(width[0]*np.sqrt(2))) - erf((xcoord - center[0])/(width[0]*np.sqrt(2))))*
              (erf((ycoord + 1 - center[1])/(width[1]*np.sqrt(2))) - erf((ycoord - center[1])/(width[1]*np.sqrt(2)))))
size=np.asarray([20,20])
c=[[0.1,0.2],[0.4,0.6],[0.8,0.4]]
c=[np.asarray(x) for x in c]
s=[[0.02,0.02],[0.05,0.05],[0.03,0.01]]
s=[np.asarray(x) for x in s]
im = gaussian(size, c[0]*size, s[0]*size, 1) \
     + gaussian(size, c[1]*size, s[1]*size, 3) \
     + gaussian(size, c[2]*size, s[2]*size, 2)
sciim=imresize(imresize(im,[40,40]),[20,20])
plt.imshow(im/np.sum(im)-sciim/np.sum(sciim))
plt.show()
So, is there any function, preferably a built-in function of some package, that satisfies my requirements?
For other languages, I know that frebin in IDL works as I described. Of course I could rewrite the function, or perhaps someone already did, but I wonder whether there is an existing solution.
frebin implements pixel duplication when the expansion is by an integer factor (like the 2x increase in your toy problem). If you want similar reversibility in such cases, try this:
import numpy as np
import scipy.misc

def py_frebin(im, shape):
    shape = np.asarray(shape)
    # pixel duplication (nearest) when the shapes are related by integer factors
    if np.all(shape % im.shape == 0) or np.all(np.asarray(im.shape) % shape == 0):
        interp = 'nearest'
    else:
        interp = 'lanczos'
    im2 = scipy.misc.imresize(im, shape, interp=interp, mode='F')
    im2 *= im.sum() / im2.sum()   # rescale so the total sum is preserved
    return im2
This should be better than frebin for non-integer expansions (frebin seems to use bilinear interpolation there, which is less reversible) and similar for integer expansions.
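For the special case where the two shapes are related by integer factors, a sum-preserving rebin can also be written with plain repeat/reshape arithmetic, which passes the reversibility test from the question exactly. A minimal sketch (rebin_integer is just an illustrative name, not a library function):
import numpy as np

def rebin_integer(arr, shape):
    ny, nx = shape
    if ny % arr.shape[0] == 0 and nx % arr.shape[1] == 0:
        # upscaling: spread each value evenly over its sub-pixels, so the total sum is unchanged
        fy, fx = ny // arr.shape[0], nx // arr.shape[1]
        return np.repeat(np.repeat(arr, fy, axis=0), fx, axis=1) / (fy * fx)
    if arr.shape[0] % ny == 0 and arr.shape[1] % nx == 0:
        # downscaling: sum the sub-pixels that fall into each output pixel
        fy, fx = arr.shape[0] // ny, arr.shape[1] // nx
        return arr.reshape(ny, fy, nx, fx).sum(axis=(1, 3))
    raise ValueError('only integer scale factors are handled in this sketch')

# round trip: np.allclose(rebin_integer(rebin_integer(im, (40, 40)), (20, 20)), im) is True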
Say I have a 2D image in Python stored in a numpy array called npimage (1024 x 1024 pixels).
I would like to define a function ShowImage that takes as a parameter a slice of the numpy array:
def ShowImage(npimage, SliceNumpy):
    imshow(npimage[SliceNumpy])
such that it can plot a given part of the image, let's say:
ShowImage(npimage,[200:800,300:500])
would plot the image for rows between 200 and 800 and columns between 300 and 500, i.e.
imshow(npimage[200:800,300:500])
Is it possible to do that in Python? For the moment, passing something like [200:800,300:500] as a parameter to a function results in an error.
Thanks for any help or link.
Greg
It's not possible because [...] is a syntax error when not used directly as a slice, but you could do one of the following:
Pass only the relevant sliced image rather than a separate argument: ShowImage(npimage[200:800,300:500]) (no comma)
Or pass a tuple of slices as the argument: ShowImage(npimage, (slice(200, 800), slice(300, 500))). Those can be used for slicing inside the function because they are just another way of defining this slice:
npimage[(slice(200,800),slice(300, 500))] == npimage[200:800,300:500]
A possible solution for the second option could be:
import matplotlib.pyplot as plt
def ShowImage(npimage, SliceNumpy):
    plt.imshow(npimage[SliceNumpy])
    plt.show()
ShowImage(npimage, (slice(200,800),slice(300, 500)))
# plots the relevant slice of the array.
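numpy also provides a small helper, numpy.s_, that builds exactly such a tuple of slice objects from the usual colon syntax, so the call above could equally be written as:
import numpy as np
ShowImage(npimage, np.s_[200:800, 300:500])   # np.s_[...] is (slice(200, 800), slice(300, 500))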
I need to write a matrix convolution without using any built-in functions to help. I am taking an image and turning it to greyscale, and then I'm supposed to pass a filter matrix over it. One of the filter matrices I have to use is:
[[-1, 0, 1],
 [-1, 0, 1],
 [-1, 0, 1]]
I understand how convolutions work; I just don't understand how to apply one in code. Here is the code I am using to get my greyscale array:
import numpy
from scipy import misc
mylist = []
for i in myfile:                      # myfile is presumably an open file of image names
    mylist.append(i)
for i in mylist:
    q = i
    print(q)
    image = misc.imread(q[0:-1])      # strip the trailing newline from the file name
    threshold()
image = misc.imread('image1.png')
def averageArr(pixel):  # weighted (luminosity) greyscale conversion
    return 0.299*pixel[:,:,0] + 0.587*pixel[:,:,1] + 0.114*pixel[:,:,2]

def threshold():
    picture = averageArr(image)
    for i in range(0, picture.shape[0]):      # begin thresholding
        for j in range(0, picture.shape[1]):
            mylist.append(picture[i, j])      # collect the greyscale values
    misc.imsave('image1.png', picture)        # save the image file
I take the values from the function and add them to a list, and then I am supposed to iterate over the list, but I'm not sure how to go about doing that. I can use scipy and numpy to read and arrange the matrix, but the actual convolution function has to be written by hand.
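Since the convolution itself has to be written by hand, a minimal sketch of the sliding-window loop could look like the following (convolve2d here is just an illustrative name, it reuses averageArr and image from the code above, and border pixels are simply left at zero):
import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2                  # half-sizes of the kernel
    flipped = kernel[::-1, ::-1]               # a true convolution flips the kernel
    out = np.zeros(image.shape, dtype=float)
    for i in range(ph, image.shape[0] - ph):
        for j in range(pw, image.shape[1] - pw):
            window = image[i - ph:i + ph + 1, j - pw:j + pw + 1]
            out[i, j] = np.sum(window * flipped)
    return out

kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]])
edges = convolve2d(averageArr(image), kernel)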