DICOM image resizing before converting to numpy array - Python

I have thousands of dicom images in a folder. I read them with pydicom like this
import os
import numpy as np
import dicom
folder = "/images"
imgs = [dicom.read_file(folder + '/' + s) for s in os.listdir(folder)]
I then want to stack all images as a numpy array, like this:
data = np.stack([i.pixel_array for i in imgs])
However, the images are of different sizes and therefore cannot be stacked.
How can I add a step that resizes all images to 1000x1000?

If you store them as a list of numpy arrays then they can be different sizes. Otherwise, use scipy's zoom function:
import os
import numpy as np
import dicom
import scipy.ndimage
xsize = 1000; ysize = 1000
folder = "/images"
files = os.listdir(folder)
data = np.zeros((xsize, ysize, len(files)))
for i, s in enumerate(files):
    img = np.array(dicom.read_file(folder + '/' + s).pixel_array)
    # zoom factors that scale this image to xsize x ysize
    xscale = xsize / img.shape[0]
    yscale = ysize / img.shape[1]
    data[:, :, i] = scipy.ndimage.zoom(img, [xscale, yscale])
You could save the images as a list and stack them, but it seems easier to pre-allocate a numpy array of size 1000 by 1000 by len(os.listdir(folder)). I haven't got dicom or the test files so I cannot check your case, but the idea certainly works (I've used it before to scale images to the right size). Also check that the scale is correct for your case.
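If you do want the list-and-stack route instead, a minimal sketch (assuming, as above, the legacy dicom module and that every image is zoomed to the same 1000x1000 shape so np.stack can combine them) could be:
import os
import numpy as np
import dicom
import scipy.ndimage
folder = "/images"
resized = []
for s in os.listdir(folder):
    img = dicom.read_file(folder + '/' + s).pixel_array
    # zoom each image to 1000x1000 before collecting it
    resized.append(scipy.ndimage.zoom(img, [1000 / img.shape[0], 1000 / img.shape[1]]))
data = np.stack(resized)  # shape: (num_images, 1000, 1000)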

Related

Using patchify to create patches from images with different shapes

I use patchify to generate patches from images. The folder I take the data from contains images of two different shapes (1536x2048 and 2048x1536).
If I use only one shape (no matter whether 1536x2048 or 2048x1536), I get the expected number of patches.
But if I combine both shapes, I get some additional images, which are just duplicates of patches.
Why does my code not work when I use two different shapes, even though both shapes should divide evenly into 512x512 patches along both axes?
The core of my code comes from the following question (before this code, I just create lists with the corresponding information about the images I'm using):
Problem when using patchify library to create patches
import numpy as np
import cv2
from PIL import Image
import os
from patchify import patchify
List = []
destinationFile = "C:/.../Output/Images/"
for root, Lists, files in os.walk("C:/.../Input/Images/"):
    for name in files:
        if name.endswith(".png"):
            List.append(os.path.join(root, name))
for filename in List:
    img_no_ndarray = Image.open(filename)
    img = np.array(img_no_ndarray)
    patches_img = patchify(img, (512, 512, 3), step=512)
    for i in range(patches_img.shape[0]):
        for j in range(patches_img.shape[1]):
            single_patch_img = patches_img[i, j, 0, :, :, :]
            if not cv2.imwrite(destinationFile + str(i) + "_" + str(j) + "_" + name, single_patch_img):
                raise Exception("Could not write the image")
Thanks
Two thoughts: first, if it works when all inputs have the same dimensions, have you tried adding a reshaping step to match all input shapes before running them through patchify? (A sketch of such a step is below.)
Second, you might find that this line is better for avoiding duplications/overwriting of your output patches:
cv2.imwrite(r'C:/destinationfilepath/image_{}{}.png'.format(str(i).zfill(4),str(j).zfill(4)), single_patch_img)
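For the reshaping step, a minimal sketch (assuming the two shapes differ only by orientation, so swapping the first two axes is enough to normalize them) could be:
import numpy as np
def normalize_orientation(img):
    # Bring every image to landscape (1536x2048x3) by transposing
    # the first two axes of portrait (2048x1536x3) inputs.
    if img.shape[0] > img.shape[1]:
        img = np.transpose(img, (1, 0, 2))
    return img
Note that the transpose mirrors the portrait images, so only use it if the patch content does not depend on orientation.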

How can I reproduce an image out of randomly shuffled pixels?

(my output, my input) Hi, I am using this Python code to generate a shuffled-pixel image. Is there any way to reverse this process? For example, I give this code's output photo to the program and it reproduces the original photo again.
I am trying to generate a static-style image and reverse it back into the original image, and I am open to any other ideas for replacing this code:
from PIL import Image
import numpy as np
orig = Image.open('lena.jpg')
orig_px = orig.getdata()
orig_px = np.reshape(orig_px, (orig.height * orig.width, 3))
np.random.shuffle(orig_px)
orig_px = np.reshape(orig_px, (orig.height, orig.width, 3))
res = Image.fromarray(orig_px.astype('uint8'))
res.save('out.jpg')
Firstly, bear in mind that JPEG is lossy - so you will never get back what you write with JPEG - it changes your data! So, use PNG if you want to read back losslessly exactly what you started with.
You can do what you ask like this:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
def shuffleImage(im, seed=42):
    # Get pixels and put in Numpy array for easy shuffling
    pix = np.array(im.getdata())
    # Generate an array of shuffled indices
    # Seed random number generation to ensure same result
    np.random.seed(seed)
    indices = np.random.permutation(len(pix))
    # Shuffle the pixels and recreate image
    shuffled = pix[indices].astype(np.uint8)
    # Numpy arrays are indexed (rows, cols), i.e. (height, width)
    return Image.fromarray(shuffled.reshape(im.height, im.width, 3))
def unshuffleImage(im, seed=42):
    # Get shuffled pixels in Numpy array
    shuffled = np.array(im.getdata())
    nPix = len(shuffled)
    # Generate unshuffler
    np.random.seed(seed)
    indices = np.random.permutation(nPix)
    unshuffler = np.zeros(nPix, np.uint32)
    unshuffler[indices] = np.arange(nPix)
    unshuffledPix = shuffled[unshuffler].astype(np.uint8)
    return Image.fromarray(unshuffledPix.reshape(im.height, im.width, 3))
# Load image and ensure RGB, i.e. not palette image
orig = Image.open('lena.png').convert('RGB')
result = shuffleImage(orig)
result.save('shuffled.png')
unshuffled = unshuffleImage(result)
unshuffled.save('unshuffled.png')
Which turns Lena into this:
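As a side note, the two lines that build the unshuffler are equivalent to inverting the permutation with argsort; assuming the same seeding as above, this one-liner does the same job:
unshuffler = np.argsort(indices)  # argsort of a permutation is its inverse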
It's impossible to do that reliably as far as I know. Theoretically you could brute force it by shuffling the pixels over and over and feeding the result into Amazon Rekognition, but you would end up with a huge AWS bill and probably only something that is approximately the original picture.

Stitching multiple PNGs into an h5 image with h5py

I created a model in Blender. From it I took 2D slices through the y-plane of the model, leading to the following:
600 png files, each corresponding to a y-location, i.e. y=0, y=0.1, etc.
Each png file has a resolution of 500 x 600.
I am now trying to merge the 600 pngs into a h5 file using Python before loading the .h5 into some software. I find that each individual png file is read fine and looks great. However, when I look at the final 3D image there is some stretching of the image, and I'm not sure how this is being created.
The images are resized (from 600x600 to 500x600), but I have checked and this is not the cause of the stretching. I would like to know why I am introducing such stretching in the other planes (not the y-plane).
Here is my code; please note that it is a work in progress, hence why I append the dataset to a list (this is to be used in later code):
from PIL import Image
import sys
import os
import fnmatch
import h5py
import numpy as np
import cv2
from datetime import datetime
dir_path = os.path.dirname(os.path.realpath(__file__))
sys.path.append(dir_path + '//..//..')
Xlen = 500
Ylen = 600
Zlen = 600
directory = dir_path + "/LowPolyA21/"
for filename in os.listdir(directory):
    if fnmatch.fnmatch(filename, '*.png'):
        image = Image.open(directory + filename)
        new_image = image.resize((Zlen, Xlen))  # PIL resize takes (width, height)
        new_image.save(directory + filename)
dataset = np.zeros((Xlen, Zlen, Ylen), float)
# traverse all the pictures under the specified address
cnt_num = 0
img_list = sorted(os.listdir(directory))
os.chdir(directory)
for img in img_list:
    if img.endswith(".png"):
        gray_img = cv2.imread(img, 0)
        dataset[:, :, cnt_num] = gray_img
        cnt_num += 1
dataset[dataset == 0] = -1
dataset = dataset.swapaxes(1, 2)
datasetlist = []
datasetlist.append(dataset)
i = 0  # index of the (currently only) dataset in the list
dz_dy_dz = (0.001, 0.001, 0.001)
# clip all values above 1 down to 1
for j in range(Xlen):
    for k in range(Ylen):
        for l in range(Zlen):
            if datasetlist[i][j, k, l] > 1:
                datasetlist[i][j, k, l] = 1
now = datetime.now()
timestamp = now.strftime("%d%m%Y_%H%M%S%f")
out_h5_path = 'voxelA_' + timestamp + '_flipped'
out_h5_path2 = 'voxelA_' + timestamp + '_flipped.h5'
with h5py.File(out_h5_path2, 'w') as f:
    f.attrs['dx_dy_dz'] = dz_dy_dz
    f['data'] = datasetlist[i]  # write the data under the file's 'data' key
Example of image without stretching (in y-plane)
Example of image with stretching (in x-plane)
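One thing worth checking, offered as a guess rather than a confirmed diagnosis: PIL's Image.resize takes (width, height) while numpy/OpenCV arrays are indexed (rows, cols), i.e. (height, width), so a mismatch between the resize target and the slot shape in the dataset can silently swap or stretch a plane. A minimal sanity check might be:
import cv2
# 'slice_0000.png' is a hypothetical filename standing in for one of the 600 slices
gray_img = cv2.imread("slice_0000.png", 0)
# After resize((Zlen, Xlen)) the array should be (Xlen, Zlen) = (500, 600);
# anything else means the axes got swapped somewhere.
print(gray_img.shape)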

Python Image of images

I have a folder with 230400 images, each representing one pixel in a 480 x 480 image.
How can I use Python to make a single image out of all these images?
I tried to create a npy array, but I believe it resulted in a 3D array instead of a 2D array:
import cv2
import glob
import numpy as np
data = []
files = glob.glob("./data/*.PNG")
for myFile in files:
    print(myFile)
    image = cv2.imread(myFile)
    data.append(image)
print('shape:', np.array(data).shape)
np.save('data', data)
Output: shape: (230400, 100, 100, 3)
How do I create a 2d array of images? And how do I convert it to an image?
Start by creating an empty numpy image with the size of your output image. For each output pixel, load the corresponding input image and copy one of its pixels in.
import numpy as np
import cv2
import glob
image_x = 480
image_y = 480
# Sort so that pixel order is deterministic (glob order is arbitrary)
files = sorted(glob.glob("./data/*.PNG"))
output = np.zeros((image_x, image_y, 3), dtype=np.uint8)
for i in range(image_x):
    for j in range(image_y):
        pixel = cv2.imread(files[image_x * i + j])
        output[i, j] = pixel[0, 0]
Note: This is neither fast nor nice, but explicit.
For saving, use cv2.imwrite on the resulting array as in:
cv2.imwrite('output.png', output)
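A vectorized sketch of the same idea, assuming (as above) 230400 input files whose top-left pixel is the one you want, and sorted filenames so the pixel order is deterministic:
import numpy as np
import cv2
import glob
files = sorted(glob.glob("./data/*.PNG"))
# Take pixel (0, 0) of every image, then fold the flat list into 480x480
pixels = np.array([cv2.imread(f)[0, 0] for f in files], dtype=np.uint8)
output = pixels.reshape(480, 480, 3)
cv2.imwrite('output.png', output)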

ndarray object to array conversion, where object contains variable size images

I want to import images of different sizes into an array in the format (number of images, row_size, col_size). I am using the following code to do this:
import os, cv2, numpy as np
PATH = os.getcwd()
data_path = PATH + '/data'
data_dir_list = os.listdir(data_path)
img_data_list = []
for dataset in data_dir_list:
    img_list = os.listdir(data_path + '/' + dataset)
    print('Loaded the images of dataset-' + '{}\n'.format(dataset))
    for img in img_list:
        input_img = cv2.imread(data_path + '/' + dataset + '/' + img, 0)
        img_data_list.append(input_img)
img_data = np.array(img_data_list)
When I work with same-size images I get a proper result in img_data, for example size=(1000, 50, 50) and type=uint8. But when I work with different-size images (all 1000 images have different sizes like 34x45, 25x43, ...), img_data comes out with size=(1000,), type=object, value=ndarray object of numpy module. I am working on deep learning in Keras and Python.
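numpy cannot build a rectangular 3-D array from images of unequal shape, so it falls back to a 1-D object array. One common workaround, sketched here with an assumed common size of 50x50 (pick whatever your model expects), is to resize every image to that shape before stacking:
import cv2
import numpy as np
target = (50, 50)  # assumed target size (width, height)
resized = [cv2.resize(img, target) for img in img_data_list]
img_data = np.array(resized)  # now shape (1000, 50, 50), dtype uint8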
