I am creating tiff stacks of different sizes based on the example found here:
http://www.bioimgtutorials.com/2016/08/03/creating-a-z-stack-in-python/
A sample of the tif files can be downloaded here:
nucleus
I have a folder with 5 tiff files inside.
I want to stack them to be able to open them in imageJ so that they look like this:
And this works with the following code:
from skimage import io
import numpy as np
import os
dir = 'C:/Users/Mich/Desktop/tiff stack/'
listfiles =[]
for img_files in os.listdir(dir):
    if img_files.endswith(".tif"):
        listfiles.append(img_files)
first_image = io.imread(dir+listfiles[0])
io.imshow(first_image)
first_image.shape
stack = np.zeros((5,first_image.shape[0],first_image.shape[1]),np.uint8)
for n in range(0,5):
    stack[n,:,:] = io.imread(dir+listfiles[n])
path_results = 'C:/Users/Mich/Desktop/'
io.imsave(path_results+'Stack.tif' ,stack)
The problem comes when I want to stack only the first 4 or the first 3.
Example with 4 tiff images:
stack=np.zeros((4,first_image.shape[0],first_image.shape[1]),np.uint8)
for n in range(0,4):
    stack[n,:,:] = io.imread(dir+listfiles[n])
Then I obtain this kind of result:
and while trying to stack the first 3 images of the folder, they get combined!
stack=np.zeros((3,first_image.shape[0],first_image.shape[1]),np.uint8)
for n in range(0,3):
    stack[n,:,:] = io.imread(dir+listfiles[n])
Where am I wrong in the code, so that it doesn't just put the individual tiffs into a multidimensional stack of size 3, 4 or 5?
Specify the color space of the image data (photometric='minisblack'), otherwise the tifffile plugin will guess it from the shape of the input array.
This is a shorter version using tifffile directly:
import glob
import tifffile
with tifffile.TiffWriter('Stack.tif') as stack:
    for filename in glob.glob('nucleus/*.tif'):
        stack.save(
            tifffile.imread(filename),
            photometric='minisblack',
            contiguous=True
        )
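Equivalently, for an in-memory stack like the one in the question, a single imwrite call works. A sketch with a dummy array in place of the real data: without the photometric keyword, tifffile can guess that a (3, height, width) array is a planar RGB image rather than 3 grayscale pages, which would match the "combined" result described in the question.

```python
import numpy as np
import tifffile

# Dummy 3-slice stack standing in for the three images from the question.
stack = np.zeros((3, 64, 64), np.uint8)

# Declaring the color space explicitly keeps each slice a separate
# grayscale page instead of letting tifffile infer a color image.
tifffile.imwrite('Stack.tif', stack, photometric='minisblack')
```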
I have tiff files extracted from Google Earth Engine for the same boundary location. I open these files using rasterio in Python and then convert them into numpy arrays, but despite the arrays covering the same area, they are misaligned. How can I fix this? I'm using the code below to read the tiff files and save them:
import rasterio as rs
from rasterio.plot import reshape_as_image
from skimage import exposure
import numpy as np
import cv2
raster_file = reshape_as_image(rs.open(file_path).read())
mask = np.ones_like(raster_file)
mask[np.isnan(raster_file)] = 0
img_fixed = exposure.equalize_hist(raster_file,mask=mask)
img_fixed *= 255
img_fixed = img_fixed.astype('uint8')
png_file_path = file_path[:-4] + ".png"
cv2.imwrite(png_file_path, img_fixed)
One of the images is Sentinel-1 ASCENDING VH 2022,
and the other, from the same satellite, is DESCENDING VV 2018.
When I use data from the same year I don't have this problem and the images are exactly aligned, but when they're from different years they're not.
I appreciate any help :)
It seems that the problem is the different CRS of the tif files. The solutions are either to download the tif files using the same CRS (Google Earth Engine's export has an argument for this) or to simply convert the CRS of the existing tif files. Rasterio can reproject files into a different CRS. Here is an example:
I have a problem and don't know how to solve it:
I'm learning how to analyze DICOM files with Python, and
I got a patient exam: one single patient and one single exam, consisting of 200 DICOM files of size 512x512, each file representing a different slice of the patient. I want to turn them into a single .npy archive so I can use it in another tutorial that I found online.
Many tutorials first convert them to jpg or png using opencv, but I don't want this, since I'm not interested in a friendly, viewable image right now; I need the array. Also, that step ruins the image quality.
I already know that using:
medical_image = pydicom.read_file(file_path)
image = medical_image.pixel_array
I can grab the path, turn one slice into a pixel array and then use it, but the thing is, it doesn't work in a for loop.
The for loop I tried was basically this:
image = [] # to create an empty list
for f in glob.iglob('file_path'):
    img = pydicom.dcmread(f)
    image.append(img)
It results in a list with all the files. Up to here it goes well, but it seems this isn't the right way, because I can't find the supposed next steps anywhere, not even answers to the errors that I get in this part (so I concluded it was wrong).
The following code snippet reads DICOM files from a folder dir_path and stores them in a list. Actually, the list does not hold the raw DICOM files, but is filled with NumPy arrays of Hounsfield units (obtained with the apply_modality_lut function).
import os
from pathlib import Path
import pydicom
from pydicom.pixel_data_handlers import apply_modality_lut
dir_path = r"path\to\dicom\files"
dicom_set = []
for root, _, filenames in os.walk(dir_path):
    for filename in filenames:
        dcm_path = Path(root, filename)
        if dcm_path.suffix == ".dcm":
            try:
                dicom = pydicom.dcmread(dcm_path, force=True)
            except IOError:
                print(f"Can't import {dcm_path.stem}")
            else:
                hu = apply_modality_lut(dicom.pixel_array, dicom)
                dicom_set.append(hu)
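Since the question asks for a single .npy archive, the list of Hounsfield-unit slices can then be stacked and saved. A sketch with dummy arrays standing in for the dicom_set built above; it assumes all slices share the same shape:

```python
import numpy as np

# Dummy stand-in for the dicom_set list: 200 slices of 512x512.
dicom_set = [np.zeros((512, 512), dtype=np.int16) for _ in range(200)]

volume = np.stack(dicom_set)   # shape (200, 512, 512)
np.save('volume.npy', volume)  # reload later with np.load('volume.npy')
```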
You were well on your way. You just have to build up a volume from the individual slices that you read in. This code snippet will create a pixelVolume of dimension 512x512x200 if your data is as advertised.
import glob
import numpy
import pydicom
images = []  # to create an empty list
# Read all of the DICOM images from file_path into list "images"
for f in glob.iglob('file_path'):
    image = pydicom.dcmread(f)
    images.append(image)
# Use the first image to determine the number of rows and columns
repImage = images[0]
rows = int(repImage.Rows)
cols = int(repImage.Columns)
slices = len(images)
# This tuple represents the dimensions of the pixel volume
volumeDims = (rows, cols, slices)
# Allocate storage for the pixel volume
pixelVolume = numpy.zeros(volumeDims, dtype=repImage.pixel_array.dtype)
# Fill in the pixel volume one slice at a time
for i, image in enumerate(images):
    pixelVolume[:, :, i] = image.pixel_array
# Use pixelVolume to do something interesting
I don't know if you are a DICOM expert or a DICOM novice, but I am just accepting your claim that your 200 images make sense when interpreted as a volume. There are many ways that this may fail. The slices may not be in expected order. There may be multiple series in your study. But I am guessing you have a "nice" DICOM dataset, maybe used for tutorials, and that this code will help you take a step forward.
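If slice order turns out to be one of those failure modes, the datasets can be sorted on the z component of ImagePositionPatient before the volume is filled in. A small sketch; it assumes that tag is present on every slice:

```python
def sort_slices(slices):
    """Return DICOM datasets ordered along the z axis (ImagePositionPatient[2])."""
    return sorted(slices, key=lambda ds: float(ds.ImagePositionPatient[2]))
```

Call it as `images = sort_slices(images)` before building pixelVolume.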
I'm trying to write a Python function that combines multiple TIFF files into a single TIFF file. I have a folder with a large number of TIFF files, and I'm trying to join them into one file. I have to load the data as a numpy array, and it should be populated using memory-mapped IO.
Untested example, that should give you an idea:
from pathlib import Path
import numpy as np
import tifffile
my_path = Path(r'path/to/tiffs')
out_path = Path('output.tiff')
tiffs = list(my_path.glob('*.tiff'))
x, y = (512, 512)  # either hardcode or read from the first tiff
stack = np.zeros((len(tiffs), x, y))
for i, image in enumerate(tiffs):
    stack[i, :, :] = tifffile.imread(image)
tifffile.imwrite(out_path, stack)
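Since the question also mentions memory-mapped IO, tifffile can create the output TIFF as a memory-mapped array, so the whole stack never has to be held in RAM at once. A sketch in the same spirit; the first loop only fabricates a few small input slices so the example is self-contained, and all filenames and shapes are made up:

```python
import numpy as np
import tifffile

# Demo setup: write a few small single-slice TIFFs to stand in for the
# real input folder.
for i in range(3):
    tifffile.imwrite(f'slice_{i}.tiff', np.full((512, 512), i, dtype='uint8'))
tiffs = [f'slice_{i}.tiff' for i in range(3)]

# memmap creates an empty TIFF on disk and returns a writable numpy.memmap
# view into it, so only one slice needs to be in memory at a time.
stack = tifffile.memmap('output.tiff', shape=(len(tiffs), 512, 512),
                        dtype='uint8')
for i, path in enumerate(tiffs):
    stack[i] = tifffile.imread(path)
    stack.flush()  # push the current slice out to disk
```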
How can I load the pixels of multiple images in a directory into a numpy array? I have loaded a single image into a numpy array, but I can't figure out how to load multiple images from a directory. Here is what I have done so far:
image = Image.open('bn4.bmp')
nparray=np.array(image)
This loads a 32x32 matrix. I want to load 100 of the images into a 100x32x32 numpy array. How can I do that? I know that the structure would look something like this:
for filename in listdir("BengaliBMPConvert"):
    if filename.endswith(".bmp"):
        -----------------
    else:
        continue
But I can't figure out how to load the images into a numpy array.
Getting a list of BMP files
To get a list of BMP files from the directory BengaliBMPConvert, use:
import glob
filelist = glob.glob('BengaliBMPConvert/*.bmp')
On the other hand, if you know the file names already, just put them in a sequence:
filelist = 'file1.bmp', 'file2.bmp', 'file3.bmp'
Combining all the images into one numpy array
To combine all the images into one array:
x = np.array([np.array(Image.open(fname)) for fname in filelist])
Pickling a numpy array
To save a numpy array to file using pickle:
import pickle
pickle.dump( x, filehandle, protocol=2 )
where x is the numpy array to be saved, filehandle is the handle for the pickle file, such as open('filename.p', 'wb'), and protocol=2 tells pickle to use its current format rather than some ancient out-of-date format.
Alternatively, numpy arrays can be pickled using methods supplied by numpy (hat tip: tegan). To dump array x in file file.npy, use:
x.dump('file.npy')
To load array x back in from file (note that NumPy 1.16.3 and later require allow_pickle=True to read arrays saved this way):
x = np.load('file.npy', allow_pickle=True)
For more information, see the numpy docs for dump and load.
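For plain numeric arrays, np.save and np.load are the more conventional pair and avoid pickle entirely:

```python
import numpy as np

x = np.arange(12).reshape(3, 4)
np.save('file.npy', x)   # writes the array in NumPy's .npy format
y = np.load('file.npy')  # no allow_pickle needed for plain numeric arrays
assert (x == y).all()
```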
Use OpenCV's imread() function together with os.listdir(), like
import numpy as np
import cv2
import os
instances = []
# Load in the images
for filepath in os.listdir('images/'):
    instances.append(cv2.imread('images/{0}'.format(filepath), 0))
print(type(instances[0]))
<class 'numpy.ndarray'>
This gives you a list (instances) in which all the greyscale values of the images are stored. For colour images, simply pass 1 instead of 0 as the second argument to cv2.imread.
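If a single (num_images, height, width) array is wanted rather than a list, the list can be stacked afterwards, assuming all images share the same size. A sketch with dummy data standing in for the loaded images:

```python
import numpy as np

# Dummy stand-in for the `instances` list built above.
instances = [np.zeros((32, 32), dtype=np.uint8) for _ in range(100)]

x = np.stack(instances)
print(x.shape)  # (100, 32, 32)
```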
I would just like to share two resources: one for splitting a dataset into train, test and validation sets (split_folder),
and one for creating numpy arrays out of images residing in the respective folders (a code snippet from a Medium post by muskulpesent).
I am trying to use SVMs to classify a set of images I have on my computer into 3 categories.
I am just facing the problem of how to load the data; the following example uses a dataset that is already saved:
http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html
In my case, all the images are saved in png format in a folder on my PC.
You can load data as numpy arrays using Pillow, in this way:
from PIL import Image
import numpy as np
data = np.array(Image.open('yourimg.png')) # .astype(float) if necessary
couple it with os.listdir to read multiple files, e.g.
import os
for file in os.listdir('your_dir/'):
    img = Image.open(os.path.join('your_dir/', file))
    data = np.array(img)
    your_model.train(data)
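To actually train an SVM on such data (as in the linked digits example), each image has to be flattened into one feature row. A sketch with random stand-in data; the image count, sizes and labels are made up for illustration:

```python
import numpy as np
from sklearn import svm

# Dummy stand-in for images loaded with PIL: 30 grayscale 32x32 images
# spread over 3 categories.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(30, 32, 32))
labels = np.repeat([0, 1, 2], 10)

# scikit-learn expects a 2D (n_samples, n_features) matrix.
X = images.reshape(len(images), -1)
clf = svm.SVC(gamma=0.001)
clf.fit(X, labels)
print(clf.predict(X[:3]).shape)  # (3,)
```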