I'm using PIL to load images and then transform them into NumPy arrays. Then I have to create a new image based on a list of images, so I append all these arrays to a list and transform the list back into an array; the resulting array of images has 4 dimensions (n_images, height, width, rgb_channels). I'm using this code:
def gallery(array, ncols=4):
    nindex, height, width, intensity = array.shape
    nrows = nindex // ncols
    # want result.shape = (height*nrows, width*ncols, intensity)
    result = (array.reshape(nrows, ncols, height, width, intensity)
              .swapaxes(1,2)
              .reshape(height*nrows, width*ncols, intensity))
    return result
def make_array(dim_x):
    for i in range(dim_x):
        print('series', i)
        series = []
        for j in range(TIME_STEP-1):
            print('photo', j)
            aux = np.asarray(Image.open(dirpath+'/images/pre_images /series_{0}_Xquakemap_{1}.jpg'.format(i,j)).convert('RGB'))
            print(np.shape(aux))
            series.append(aux)
            print(np.shape(series))
        im = Image.fromarray(gallery(np.array(series)))
        im.save(dirpath+'/images/gallery/series_{0}_Xquakemap.jpg'.format(i))
        im_shape = (im.size)

make_array(n_photos)
# n_photos is the total of photos in the dirpath
The problem is that when the append to the series list happens, the shape of the image (the NumPy array added) gets lost, so trying to reshape the array in the gallery function fails. A snippet of the output for the code above is this one:
...
series 2
photo 0
(585, 619, 3)
(1, 585, 619, 3)
photo 1
(587, 621, 3)
(2,)
photo 2
(587, 621, 3)
(3,)
photo 3
(587, 621, 3)
(4,)
...
As you can see, when appending the second photo the list loses a dimension. This is weird because the code works for the first two iterations, which use essentially the same images. I tried using np.stack() but the error persists.
I also found this issue on GitHub, but I think it doesn't apply to this case even though the behavior is similar.
Working on Ubuntu 18, Python 3.7.3 and Numpy 1.16.2.
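For reference, the shapes printed above differ between photos ((585, 619, 3) vs. (587, 621, 3)). The following minimal sketch (dummy arrays with those shapes, not the original images) shows what NumPy does when a list of differently-shaped arrays is turned into an array:
import numpy as np

a = np.zeros((585, 619, 3), dtype=np.uint8)  # dummy stand-in for photo 0
b = np.zeros((587, 621, 3), dtype=np.uint8)  # dummy stand-in for photo 1

series = [a]
print(np.shape(series))  # (1, 585, 619, 3)

series.append(b)
# On NumPy 1.16 this silently becomes a 1-D object array, i.e. shape (2,);
# newer NumPy versions raise an error instead of doing this.
print(np.shape(series))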
Edit: added what @kwinkunks asked.
In the second function, I think you need to move series = [] to before the outer loop.
Here's my reproduction of the problem:
import numpy as np
from PIL import Image

TIME_STEP = 3

def gallery(array, ncols=4):
    """Stitch images together."""
    nindex, height, width, intensity = array.shape
    nrows = nindex // ncols
    result = array.reshape(nrows, ncols, height, width, intensity)
    result = result.swapaxes(1,2)
    result = result.reshape(height*nrows, width*ncols, intensity)
    return result

def make_array(dim_x):
    """Make an image from a list of arrays."""
    series = []  # <<<<<<<<<<< This is the line you need to check.
    for i in range(dim_x):
        for j in range(TIME_STEP - 1):
            aux = np.ones((100, 100, 3)) * np.random.randint(0, 256, 3)
            series.append(aux.astype(np.uint8))
    im = Image.fromarray(gallery(np.array(series)))
    return im

make_array(4)
This results in:
I have several 3D images of shape (32,32,32) and I want to create 2D images from them. I want to do that by getting each slice in the z-axis and putting each of them in a square array in order, something like this:
Because I want the 2D image to be square I need to fill the missing slices with zeros (Black in the example).
This is what I did:
# I created an array of the desired dimensions
grid = np.zeros((6*32,6*32))
# Then, I assigned to each section of the grid the values of every slice of the 3d_image:
grid[0:32, 0:32] = 3d_image[:,:,0]
grid[0:32, 32:64] = 3d_image[:,:,1]
grid[0:32, 64:96] = 3d_image[:,:,2]
grid[0:32, 96:128] = 3d_image[:,:,3]
grid[0:32, 128:160] = 3d_image[:,:,4]
grid[0:32, 160:192] = 3d_image[:,:,5]
grid[32:64, 0:32] = 3d_image[:,:,6]
grid[32:64, 32:64] = 3d_image[:,:,7]
grid[32:64, 64:96] = 3d_image[:,:,8]
grid[32:64, 96:128] = 3d_image[:,:,9]
grid[32:64, 128:160] = 3d_image[:,:,10]
grid[32:64, 160:192] = 3d_image[:,:,11]
grid[64:96, 0:32] = 3d_image[:,:,12]
grid[64:96, 32:64] = 3d_image[:,:,13]
...
grid[160:192, 160:192] = 3d_image[:,:,31]
And it worked! But I want to automate it, so I tried this:
d = [0, 32, 64, 96, 128, 160]
for j in range(6):
    for i in d:
        grid[0:32, i:i+32] = 3d_image[:,:,j]
But it didn't work, the slice index for 3d_image (j) is not changing, and I don't know how to change the index range for grid after every 6th slice.
Could you help me?
Assuming that img is an array of shape (32, 32, 32), this should work:
N = 32
# pad with 4 blank slices so there are 36 = 6*6 tiles in total
a = np.vstack([img, np.zeros((4, N, N), dtype=img.dtype)])
# rearrange the 36 slices into a 6x6 mosaic of 32x32 tiles
grid = a.transpose(1, 0, 2).reshape(N, -1, 6*N).transpose(1, 0, 2).reshape(6*N, -1)
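A quick shape check of that snippet, as a sketch with a random test volume (img here is just a hypothetical stand-in for the real data):
import numpy as np

N = 32
img = np.random.rand(N, N, N)  # hypothetical 32x32x32 test volume

a = np.vstack([img, np.zeros((4, N, N), dtype=img.dtype)])  # 36 slices total
grid = a.transpose(1, 0, 2).reshape(N, -1, 6*N).transpose(1, 0, 2).reshape(6*N, -1)
print(a.shape, grid.shape)  # (36, 32, 32) (192, 192)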
Here's an automated way to do it. Let's say your array with shape (32, 32, 32) is called n. Note that this method relies on all 3 dimensions having the same size.
num_layers = n.shape[0]
# num_across = how many images will go in 1 row or column in the final array.
num_across = int(np.ceil(np.sqrt(num_layers)))
# new_shape = how many numbers go in a row in the final array.
new_shape = num_across * num_layers
final_im = np.zeros((new_shape**2)).reshape(new_shape, new_shape)
for i in range(num_layers):
    # Get what number row and column the image goes in (e.g. in the example,
    # the image labelled 28 is in the 4th (3rd with 0-indexing) column and 5th
    # (4th with 0-indexing) row).
    col_num = i % num_across
    row_num = i // num_across
    # Put the image in the appropriate part of the final image.
    final_im[row_num*num_layers:row_num*num_layers + num_layers, col_num*num_layers:col_num*num_layers + num_layers] = n[i]
final_im now contains what you want. Below is a representation where each image is a different color and the "black" areas are purple because matplotlib colormaps are funny like that:
Anyway, you can tell that the images go where they're supposed to and you get your empty space along the bottom.
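(For reference, a representation like that can be produced with matplotlib; a minimal sketch, assuming final_im was built as above:)
import matplotlib.pyplot as plt

# With the default colormap, the zero-filled (empty) tiles render as purple
# rather than black, as mentioned above.
plt.imshow(final_im)
plt.axis('off')
plt.show()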
So I'm trying to do an FFT, and here is my code:
def fftImage(img_gray, rows, cols):
    rPadded = cv2.getOptimalDFTSize(rows)
    cPadded = cv2.getOptimalDFTSize(cols)
    imgPadded = np.zeros((rPadded, cPadded), dtype=np.float32)
    imgPadded[:rows, :cols] = img_gray
    img_fft = cv2.dft(imgPadded, flags=cv2.DFT_COMPLEX_OUTPUT)
    return img_fft
img_gray is obtained using cv2.imread
img_gray = cv2.imread("/content/hinh1.jpg")
# 1.fast Fourier transform
rows, cols = img_gray.shape[:2]
img_fft = stdFftImage(img_gray, rows, cols)
def stdFftImage(img_gray, rows, cols):
    fimg = np.copy(img_gray)
    fimg = fimg.astype(np.float32)  # Notice the type conversion here
    # 1. Multiply the image matrix by (-1)^(r+c) for centralization
    for r in range(rows):
        for c in range(cols):
            if (r+c) % 2:
                fimg[r][c] = -1 * img_gray[r][c]
    img_fft = fftImage(fimg, rows, cols)
    return img_fft
And the error is
----> img_fft = stdFftImage(img_gray, rows, cols)
---> imgPadded[:rows, :cols] = img_gray
ValueError: could not broadcast input array from shape (480,640,3) into shape (480,640)
So how do I fix this simple error? Thanks, I'm a newbie.
The number of elements in your input array must equal the number of elements in the output array. For example, if you are reshaping a 3-dimensional picture of shape (480, 640, 3) into a 2-dimensional picture, your output shape could be (1440, 640) or something like that.
In other words, 480*640*3 must equal a*b, where here a and b can be 1440 and 640, respectively.
I hope I could help you.
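A minimal sketch of that element-count arithmetic (the array below is just a dummy with the shape from the error message, not your actual image):
import numpy as np

img = np.zeros((480, 640, 3), dtype=np.float32)  # dummy stand-in for the loaded image

# 480 * 640 * 3 == 921600 elements; any 2D shape with the same product works,
# e.g. (1440, 640), since 1440 * 640 == 921600.
flat = img.reshape(1440, 640)
print(img.size, flat.shape)  # 921600 (1440, 640)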
I'm struggling to create a data generator in PyTorch to extract 2D images from many 3D cubes saved in .dat format.
There is a total of 200 3D cubes, each having a 128*128*128 shape. Now I want to extract 2D images from all of these cubes along the length and breadth.
For example, a is a cube having size 128*128*128.
So I want to extract all 2D images along the length, i.e., [:, i, :], which will get me 128 2D images along the length, and similarly I want to extract along the width, i.e., [:, :, i], which will give me 128 2D images along the width. So I get a total of 256 2D images from 1 3D cube, and I want to repeat this whole process for all 200 cubes, thereby giving me 51200 2D images.
So far I've tried a very basic implementation which is working fine but takes approximately 10 minutes to run. I want you guys to help me create a more optimal implementation keeping in mind time and space complexity. Right now my approach has a time complexity of O(n²); can we decrease it further to reduce the running time?
I'm providing the current implementation below:
from os.path import join as pjoin
import torch
import numpy as np
import os
from tqdm import tqdm
from torch.utils import data

class DataGenerator(data.Dataset):
    def __init__(self, is_transform=True, augmentations=None):
        self.is_transform = is_transform
        self.augmentations = augmentations
        self.dim = (128, 128, 128)
        seismicSections = []  # Input
        faultSections = []    # Ground Truth
        for fileName in tqdm(os.listdir(pjoin('train', 'seis')), total=len(os.listdir(pjoin('train', 'seis')))):
            # .dat file contains the unrolled cube, so we need to reshape it
            unrolledVolSeismic = np.fromfile(pjoin('train', 'seis', fileName), dtype=np.single)
            # transpose the axes to get the height axis at axis 0, length at axis 1, and width at axis 2
            reshapedVolSeismic = np.transpose(unrolledVolSeismic.reshape(self.dim))
            unrolledVolFault = np.fromfile(pjoin('train', 'fault', fileName), dtype=np.single)
            reshapedVolFault = np.transpose(unrolledVolFault.reshape(self.dim))
            for idx in range(reshapedVolSeismic.shape[2]):
                seismicSections.append(reshapedVolSeismic[:, :, idx])
                faultSections.append(reshapedVolFault[:, :, idx])
            for idx in range(reshapedVolSeismic.shape[1]):
                seismicSections.append(reshapedVolSeismic[:, idx, :])
                faultSections.append(reshapedVolFault[:, idx, :])
        self.seismicSections = seismicSections
        self.faultSections = faultSections

    def __len__(self):
        return len(self.seismicSections)

    def __getitem__(self, index):
        X = self.seismicSections[index]
        Y = self.faultSections[index]
        return X, Y
Please Help!!!
Why not store only the 3D data in memory and let the __getitem__ method "slice" it on the fly?
import torch
from torch.utils.data import Dataset

class CachedVolumeDataset(Dataset):
    def __init__(self, ...):
        super().__init__()
        self._volumes_x = ...  # a list of 200 128x128x128 volumes
        self._volumes_y = ...  # a list of 200 128x128x128 volumes

    def __len__(self):
        return len(self._volumes_x) * (128 + 128)

    def __getitem__(self, index):
        # extract volume index from general index:
        vidx = index // (128 + 128)
        # extract slice index
        sidx = index % (128 + 128)
        if sidx < 128:
            # first dim
            x = self._volumes_x[vidx][:, :, sidx]
            y = self._volumes_y[vidx][:, :, sidx]
        else:
            sidx -= 128
            # second dim
            x = self._volumes_x[vidx][:, sidx, :]
            y = self._volumes_y[vidx][:, sidx, :]
        return torch.squeeze(x), torch.squeeze(y)
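For what it's worth, here is how that index arithmetic maps a flat dataset index to a (cube, slice) pair; this is just the arithmetic from the sketch above, independent of the actual data:
# Each volume contributes 128 + 128 = 256 slices, so a flat dataset index
# decomposes into (volume index, slice index) like this:
slices_per_volume = 128 + 128
for index in (0, 127, 128, 255, 256, 511):
    vidx = index // slices_per_volume   # which 3D cube
    sidx = index % slices_per_volume    # which slice within that cube
    if sidx < 128:
        print(index, "-> volume", vidx, ", slice [:, :, %d]" % sidx)
    else:
        print(index, "-> volume", vidx, ", slice [:, %d, :]" % (sidx - 128))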
I'm using OpenCV to read images into numpy.array, and they have the following shape.
import cv2
import os
import numpy

def readImages(path):
    imgs = []
    for file in os.listdir(path):
        if file.endswith('.png'):
            img = cv2.imread(file)
            imgs.append(img)
    imgs = numpy.array(imgs)
    return (imgs)
imgs = readImages(...)
print imgs.shape # (100, 718, 686, 3)
Each image has 718x686 pixels and 3 channels. There are 100 images.
I don't want to work on 718x686; I'd like to combine the pixel dimensions into a single dimension. That is, the shape should look like (100, 492548, 3). Is there any way, either in OpenCV (or any other library) or NumPy, that allows me to do that?
Without modifying your reading function:
imgs = readImages(...)
print imgs.shape # (100, 718, 686, 3)
# flatten axes -2 and -3, using -1 to autocalculate the size
pixel_lists = imgs.reshape(imgs.shape[:-3] + (-1, 3))
print pixel_lists.shape # (100, 492548, 3)
In case anyone wants it, here's a general way of doing this:
import functools
import numpy as np

def combine_dims(a, i=0, n=1):
    """
    Combines dimensions of numpy array `a`,
    starting at index `i`,
    and combining `n` dimensions
    """
    s = list(a.shape)
    combined = functools.reduce(lambda x, y: x*y, s[i:i+n+1])
    return np.reshape(a, s[:i] + [combined] + s[i+n+1:])
With this function you could use it like this:
imgs = combine_dims(imgs, 1) # combines dimension 1 and 2
# imgs.shape = (100, 718*686, 3)
def combine_dims(a, start=0, count=2):
    """ Reshapes numpy array a by combining count dimensions,
    starting at dimension index start """
    s = a.shape
    return numpy.reshape(a, s[:start] + (-1,) + s[start+count:])
This function does what you need in a more general way.
imgs = combine_dims(imgs, 1) # combines dimension 1 and 2
# imgs.shape == (100, 718*686, 3)
It works by using numpy.reshape, which turns an array of one shape into an array with the same data but viewed as another shape. The target shape is just the initial shape, but with the dimensions to be combined replaced by -1. numpy uses -1 as a flag to indicate that it should work out itself how big that dimension should be (based on the total number of elements.)
This code is essentially a simplified version of Multihunter's answer, but my edit was rejected and hinted that it should be a separate answer. So there you go.
import cv2
import os
import numpy as np

def readImages(path):
    imgs = np.empty((0, 492548, 3))
    for file in os.listdir(path):
        if file.endswith('.png'):
            img = cv2.imread(file)
            img = img.reshape((1, 492548, 3))
            imgs = np.append(imgs, img, axis=0)
    return (imgs)
imgs = readImages(...)
print imgs.shape # (100, 492548, 3)
The trick was to reshape and append to a numpy array. It's not good practice to hardcode the length of the vector (492548) so if I were you I'd also add a line that calculates this number and puts it in a variable, for use in the rest of the script.
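For example, something along these lines would compute that length from the first image instead of hardcoding it (a rough sketch; first_file is a hypothetical path, and all images are assumed to have the same height and width):
img = cv2.imread(first_file)             # first_file: hypothetical path to one of the .png files
flat_len = img.shape[0] * img.shape[1]   # e.g. 718 * 686 == 492548
imgs = np.empty((0, flat_len, 3))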
How do I concatenate two matrices into one matrix? The resulting matrix should have the same height as the two input matrices, and its width will equal the sum of the width of the two input matrices.
I am looking for a pre-existing method that will perform the equivalent of this code:
def concatenate(mat0, mat1):
    # Assume that mat0 and mat1 have the same height
    res = cv.CreateMat(mat0.height, mat0.width + mat1.width, mat0.type)
    for x in xrange(res.height):
        for y in xrange(mat0.width):
            cv.Set2D(res, x, y, mat0[x, y])
        for y in xrange(mat1.width):
            cv.Set2D(res, x, y + mat0.width, mat1[x, y])
    return res
If you are using OpenCV (you get NumPy support with it), you can use the NumPy function np.hstack((img1, img2)) to do this.
e.g.:
import cv2
import numpy as np
# Load two images of same size
img1 = cv2.imread('img1.jpg')
img2 = cv2.imread('img2.jpg')
both = np.hstack((img1,img2))
You should use OpenCV. The legacy interface uses cvmat, but NumPy arrays are really easy to work with.
As suggested by @abid-rahman-k, you can use hstack (which I didn't know about), so I had used this:
h1, w1 = img.shape[:2]
h2, w2 = img1.shape[:2]
nWidth = w1 + w2
nHeight = max(h1, h2)
hdif = (h1 - h2) // 2  # integer division so it can be used as an index (assumes h1 >= h2)
newimg = np.zeros((nHeight, nWidth, 3), np.uint8)
newimg[hdif:hdif+h2, :w2] = img1
newimg[:h1, w2:w1+w2] = img
But if you want to work with Legacy code, this should help
Let's assume that height of img0 is greater than height of image
nW = img0.width+image.width
nH = img0.height
newCanvas = cv.CreateImage((nW,nH), cv.IPL_DEPTH_8U, 3)
cv.SetZero(newCanvas)
yc = (img0.height-image.height)/2
cv.SetImageROI(newCanvas,(0,yc,image.width,image.height))
cv.Copy(image, newCanvas)
cv.ResetImageROI(newCanvas)
cv.SetImageROI(newCanvas,(image.width,0,img0.width,img0.height))
cv.Copy(img0,newCanvas)
cv.ResetImageROI(newCanvas)
OpenCV has in-built functions for concatenating images vertically/horizontally:
cv2.vconcat()
cv2.hconcat()
Note: While concatenating, the images must have matching dimensions (the same width for vconcat, the same height for hconcat) and the same type, or else you will come across an error message similar to: error: (-215:Assertion failed)....
Code:
img = cv2.imread('flower.jpg', 1)
# concatenate images vertically
vertical_concat = cv2.vconcat([img, img])
# concatenate images horizontally
horizontal_concat = cv2.hconcat([img, img])
I know this question is old, but I stumbled across it because I was looking to concatenate arrays that are two-dimensional (not just concatenate them along one dimension).
np.hstack will not do this.
Assuming you have two 640x480 images that are simply two-dimensional, use dstack.
a = cv2.imread('imgA.jpg')
b = cv2.imread('imgB.jpg')
a.shape # prints (480,640)
b.shape # prints (480,640)
imgBoth = np.dstack((a,b))
imgBoth.shape # prints (480,640,2)
imgBothH = np.hstack((a,b))
imgBothH.shape # prints (480,1280)
# = not what I wanted, first dimension not preserverd