Image segmentation using corresponding masks in Python

I have corresponding masks to the images that I want to segment.
I put the images in one folder and their corresponding masks in another folder.
I'm trying to apply those masks (multiply them by the images) using two for loops in Python to get the segmented images.
I'm using the code below:
import os
import numpy as np
import matplotlib.image
from skimage import io

def ImageSegmentation():
    SegmentedImages = []
    for img_path in os.listdir('C:/Users/mab/Desktop/images/'):
        img = io.imread('C:/Users/mab/Desktop/images/' + img_path)
        for img_path2 in os.listdir('C:/Users/mab/Desktop/masks/'):
            Mask = io.imread('C:/Users/mab/Desktop/masks/' + img_path2)
            [indx, indy] = np.where(Mask == 0)
            Color_Masked = img.copy()
            Color_Masked[indx, indy] = 0
            matplotlib.image.imsave('C:/Users/mab/Desktop/SegmentedImages/' + img_path2, Color_Masked)
            SegmentedImages.append(Color_Masked)
    return np.vstack(SegmentedImages)
This code works when I try it for a single image and a single mask (without the folders and loops).
However, when I try to loop over the images and masks I have in the two folders, I get output images that are segmented by the wrong mask (not their corresponding mask).
I can't segment each image individually without looping because I have more than 500 images and their masks.
I don't know what I'm missing or have placed wrong in this code, and how I can fix it. Also, is there an easier way to get the segmented images?

Unless I have grossly misunderstood, you just need something like this:
import glob
filelist = glob.glob('C:/Users/mab/Desktop/images/*.png')
for i in filelist:
    mask = i.replace("images", "masks")
    print(i, mask)
On my iMac, that sort of thing produces:
/Users/mark/StackOverflow/images/b.png /Users/mark/StackOverflow/masks/b.png
/Users/mark/StackOverflow/images/a.png /Users/mark/StackOverflow/masks/a.png
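If that pairing holds, the whole job can be done with a single loop. Here is a minimal sketch, assuming the masks share their images' filenames, the masks use 0 for background (as in your question), and the SegmentedImages folder already exists:

import glob
from skimage import io

for img_path in glob.glob('C:/Users/mab/Desktop/images/*.png'):
    mask_path = img_path.replace("images", "masks")
    img = io.imread(img_path)
    mask = io.imread(mask_path)
    # Black out every pixel where the mask is 0, as in the question
    segmented = img.copy()
    segmented[mask == 0] = 0
    io.imsave(img_path.replace("images", "SegmentedImages"), segmented)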

Related

What is the problem with my code? Image manipulation using numpy and trying to apply filters to images in a folder

I'm very new to programming. In this code I want to apply several filters to images from the "dataFromVid" directory. Instead, I'm getting a ValueError at line 11 (the np.hstack call).
This is the code :
import os
import cv2
import numpy as np
from google.colab.patches import cv2_imshow  # assuming Colab, given the /content/ path

directory = "/content/dataFromVid/"
for filename in os.listdir(directory):
    if filename.endswith(".jpg"):  # Check for image files
        # Read the image
        img = cv2.imread(directory + filename)
        # Apply the filters (defined elsewhere in my code)
        gray = grayscale_filtre(img)
        monochrome = monochrome_filtre(img, 100)
        borderDetection = detectEdge(img)
        stacked = np.hstack((img, gray, monochrome, borderDetection))
        # Show the stacked image
        cv2_imshow(stacked)
        # Save the grayscale image
        cv2.imwrite(directory + "gray_" + filename, gray)
This is the error message: [screenshot of the ValueError traceback]
I think it has something to do with the color channels, as I'm trying to put a grayscale filter onto a color image as my first step. But again, I'm a beginner, so I'm not too sure. Thanks for any help or comments :))
The ValueError you are getting is most likely because the input arrays to np.hstack do not all have compatible shapes. np.hstack concatenates along the horizontal axis, so every array must match on every other axis, including the number of dimensions: a grayscale image is 2-D (height x width) while a color image is 3-D (height x width x 3), so they cannot be stacked even when the heights agree.
To fix this, give all four arrays the same dimensionality (and, if the sizes differ, resize them, e.g. with cv2.resize) before stacking.
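For instance, a hedged sketch of the channel fix (assuming gray is the only single-channel result; any other 2-D output would need the same treatment):

import cv2
import numpy as np

# Promote the 2-D grayscale result back to 3 channels so that all
# four arrays have the same dimensionality before stacking
gray_3ch = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
stacked = np.hstack((img, gray_3ch, monochrome, borderDetection))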

How do I use cv2.xphoto.inpaint?

I have two images that I have uniquely named image and mask.
I use the following code to perform standard OpenCV inpainting:
image = cv2.imread('image.jpg')
mask = cv2.imread('mask.jpg')
dst = cv2.inpaint(image, mask, 3, cv2.INPAINT_NS)
I receive the results, and they look decent; however, I realized that there is another function of OpenCV called cv2.xphoto.inpaint (link: https://docs.opencv.org/4.x/dc/d2f/tutorial_xphoto_inpainting.html and https://docs.opencv.org/4.x/de/daa/group__xphoto.html)
Is the proper usage of this code as follows?
new_dst = cv2.xphoto.inpaint(image,mask, dst, cv2.xphoto.INPAINT_FSR_FAST)
If so, why does new_dst remain empty after running the previous line?
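For what it's worth, the xphoto docs linked above declare dst as an argument the caller provides, so in Python the function fills a preallocated array and returns None; that would explain why new_dst stays empty if it was never allocated. A hedged sketch, assuming an 8-bit BGR image and noting that the docs describe the FSR mask with the opposite convention to cv2.inpaint (zero pixels mark the area to be inpainted):

import cv2
import numpy as np

image = cv2.imread('image.jpg')
# xphoto wants a single-channel 8-bit mask
mask = cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)
# If your mask marks damage with non-zero values (the cv2.inpaint
# convention), invert it first: mask = cv2.bitwise_not(mask)

# Preallocate dst; xphoto.inpaint writes into it and returns None
new_dst = np.zeros_like(image)
cv2.xphoto.inpaint(image, mask, new_dst, cv2.xphoto.INPAINT_FSR_FAST)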

How to create a list of DICOM files and convert it to a single numpy array .npy?

I have a problem and don't know how to solve it:
I'm learning how to analyze DICOM files with Python.
I have one exam from a single patient: 200 DICOM files, each 512x512, each file representing a different slice of the scan. I want to turn them into a single .npy file so I can use it in another tutorial that I found online.
Many tutorials convert them to jpg or png using OpenCV first, but I don't want this, since I'm not interested in a viewable image right now; I need the array. That step also degrades the image quality.
I already know that using:
medical_image = pydicom.read_file(file_path)
image = medical_image.pixel_array
I can grab the path, turn one slice into a pixel array and then use it, but the thing is, it doesn't work in a for loop.
The for loop I tried was basically this:
import glob
import pydicom

image = []  # to create an empty list
for f in glob.iglob('file_path'):
    img = pydicom.dcmread(f)
    image.append(img)
It results in a list with all the files. Up to here it goes well, but it seems it's not the right way, because although I can use the list, I can't find the supposed next steps anywhere, not even answers to the errors that I get in this part (so I concluded it was wrong).
The following code snippet reads DICOM files from a folder dir_path and stores them in a list. Strictly speaking, the list does not contain the raw DICOM datasets; it is filled with NumPy arrays of Hounsfield units (obtained via the apply_modality_lut function).
import os
from pathlib import Path
import pydicom
from pydicom.pixel_data_handlers import apply_modality_lut

dir_path = r"path\to\dicom\files"
dicom_set = []
for root, _, filenames in os.walk(dir_path):
    for filename in filenames:
        dcm_path = Path(root, filename)
        if dcm_path.suffix == ".dcm":
            try:
                dicom = pydicom.dcmread(dcm_path, force=True)
            except IOError:
                print(f"Can't import {dcm_path.stem}")
            else:
                hu = apply_modality_lut(dicom.pixel_array, dicom)
                dicom_set.append(hu)
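To get from that list to the single .npy file asked about, one could stack the slices and save the result (a sketch assuming every slice has the same shape; the output filename is just an example):

import numpy as np

volume = np.stack(dicom_set)   # shape (num_slices, 512, 512)
np.save('volume.npy', volume)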
You were well on your way. You just have to build up a volume from the individual slices that you read in. This code snippet will create a pixelVolume of dimension 512x512x200 if your data is as advertised.
import glob
import numpy
import pydicom

images = []  # to create an empty list
# Read all of the DICOM images from file_path into list "images"
for f in glob.iglob('file_path'):
    image = pydicom.dcmread(f)
    images.append(image)

# Use the first image to determine the number of rows and columns
repImage = images[0]
rows = int(repImage.Rows)
cols = int(repImage.Columns)
slices = len(images)

# This tuple represents the dimensions of the pixel volume
volumeDims = (rows, cols, slices)

# Allocate storage for the pixel volume
pixelVolume = numpy.zeros(volumeDims, dtype=repImage.pixel_array.dtype)

# Fill in the pixel volume one slice at a time
for i, image in enumerate(images):
    pixelVolume[:, :, i] = image.pixel_array

# Use pixelVolume to do something interesting
I don't know if you are a DICOM expert or a DICOM novice, but I am just accepting your claim that your 200 images make sense when interpreted as a volume. There are many ways that this may fail. The slices may not be in expected order. There may be multiple series in your study. But I am guessing you have a "nice" DICOM dataset, maybe used for tutorials, and that this code will help you take a step forward.
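If slice order does turn out to be a problem, a common approach is to sort the datasets by a DICOM tag before filling the volume, for instance (assuming your files carry the InstanceNumber tag):

# Sort slices by their InstanceNumber tag before building the volume
images.sort(key=lambda ds: int(ds.InstanceNumber))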

How do I generate a small image randomly in different parts of the big image?

Let's assume there are two images: one is called the small image and the other the big image. I want to place the small image at a random position inside a different part of the big image, one placement at a time, every time I run.
So, currently I have this image; let's call it the big image.
I also have a smaller image:
def mask_generation(blob_index, image_index):
    experimental_image = markup_images[image_index]
    h, w = cropped_images[blob_index].shape[:2]
    y = np.random.randint(experimental_image.shape[0] - h)
    x = np.random.randint(experimental_image.shape[1] - w)
    experimental_image[y:y+h, x:x+w] = cropped_images[blob_index]
    return experimental_image
I have created the above function to place the small image in the big image every time I call it. Note: blob_index is the index I use to pick a specific small image, since I have a collection of them, and image_index picks a specific big image. The big images are stored in the list called markup_images and the small images in the list called cropped_images.
However, when I run this, I do get a small image placed at a random position, but the previously placed small image never gets removed, and I am not too sure how to proceed.
Example: after running once, one small image appears; after running twice, the previous small image is still there alongside the new one (screenshots omitted).
How do I fix this? Any help will be appreciated. Thank you.
I tried the above code, but it didn't work the way I wanted it to.
I suppose you only want the small image that was generated in the current call.
The problem comes from modifying the arguments your function is called with.
When you call your function multiple times with the same big image,

markup_images = ...
result_1 = mask_generation(blob_index=0, image_index=0)
result_2 = mask_generation(blob_index=1, image_index=0)

result_2 contains both small images.
This is due to writing to the original image in

experimental_image[y:y+h, x:x+w] = cropped_images[blob_index]

This stamps the small image into the original image in your list of images, so the next time you fetch that image, the small image is already there.
To fix:
Do not alter your images: first copy the image, then add the small image inside your function (see the sketch below).
Probably even better: pass your function the big and small images themselves, and make sure it always receives copies.
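A minimal sketch of the first option, copying inside the function so the original in markup_images is never touched:

def mask_generation(blob_index, image_index):
    # Work on a copy so the stored big image stays clean
    experimental_image = markup_images[image_index].copy()
    h, w = cropped_images[blob_index].shape[:2]
    y = np.random.randint(experimental_image.shape[0] - h)
    x = np.random.randint(experimental_image.shape[1] - w)
    experimental_image[y:y+h, x:x+w] = cropped_images[blob_index]
    return experimental_image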

OpenCV with Python - reading images in a loop

I'm using OpenCV's imread function to read my images into Python, for further processing as a NumPy array later in the pipeline. I know that OpenCV uses BGR instead of RGB, and I have accounted for it where required. But one thing that stumps me is why I get differing outputs in the following two scenarios.
Reading an image directly into a single variable works fine. The plotted image (using matplotlib.pyplot) reproduces my .tiff/.png input correctly:
img_train = cv2.imread('image.png')
plt.imshow(img_train)
plt.show()
When I use cv2.imread in a loop (for reading from a directory of such images - which is my ultimate goal here), I create an array as follows:
from os import listdir
from os.path import isfile, join
import cv2
import numpy as np
import matplotlib.pyplot as plt

files = [f for f in listdir(mypath) if isfile(join(mypath, f))]
img_train = np.empty([len(files), height, width, channel])
for n in range(0, len(files)):
    img_train[n] = cv2.imread(join(mypath, files[n]))
    plt.imshow(img_train[n])
    plt.show()
When I try to cross check and plot the image obtained thus, I get a very different output. Why so? How do I rectify this so that it looks more like my input, like in the first case? Am I reading the arrays correctly in the second case, or is it flawed?
Otherwise, is it something that stems from Matplotlib's plotting function? I do not know how to cross check for this case, though.
Any advice appreciated.
Extremely trivial solution.
np.empty creates an array of dtype float64 by default.
Changing this to uint8, the dtype OpenCV returns in the first case, worked fine. (Matplotlib assumes float image data lies in the range 0.0 to 1.0 and clips anything above, which is why the 0-255 float values plotted so differently.)
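In other words, a one-line change to the allocation from the question:

img_train = np.empty([len(files), height, width, channel], dtype=np.uint8)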
