Is there a function in PIL/Pillow that, for a grayscale image, will separate the image into sub-images containing the components that make up the original image? For example, a PNG grayscale image containing a set of blocks. In my case, the shapes always have high contrast against the background.
I don't want to use OpenCV; I just need some general blob detection, and I was hoping Pillow/PIL might already have something that does that.
Yes, it is possible. You can use edge detection algorithms in PIL.
Sample code:
from PIL import Image, ImageFilter

image = Image.open('/tmp/sample.png').convert('RGB')
image = image.filter(ImageFilter.FIND_EDGES)  # built-in edge-detection kernel
image.save('/tmp/output.png')
sample.png:
output.png:
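If you need to actually split the image into one sub-image per component (rather than just highlighting edges), a rough sketch without OpenCV, using scipy.ndimage alongside PIL and assuming bright shapes on a dark background with a threshold of 128, could look like this:

import numpy as np
from PIL import Image
from scipy import ndimage as ndi

img = np.array(Image.open('/tmp/sample.png').convert('L'))
mask = img > 128                                 # assumed threshold for the bright blocks
labeled, n_components = ndi.label(mask)          # connected-component labelling
for i, bbox in enumerate(ndi.find_objects(labeled), start=1):
    Image.fromarray(img[bbox]).save(f'/tmp/component_{i}.png')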
Not using PIL, but worth a look I think:
I start with a list of image files that I've imported as a list of NumPy arrays, and I create a list of boolean versions, thresholding at > 0.
from skimage.measure import label, regionprops
import numpy as np
bool_array_list = []
for image in image_files:
    bool_array = np.copy(image)
    bool_array[np.where(bool_array > 0)] = 1
    bool_array_list.append(bool_array)

img_region_list = []
Then I use label to identify the different areas, using 8-directional connectivity, and regionprops gives me a bunch of metrics, such as size and location.
for item in bool_array_list:
    tmp_region_list = regionprops(label(item, connectivity=2))
    img_region_list.append(tmp_region_list)
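From there, each region object exposes the size and location directly; a rough usage sketch (continuing from the lists built above, and assuming the images are 2D arrays):

for image, regions in zip(image_files, img_region_list):
    for region in regions:
        minr, minc, maxr, maxc = region.bbox      # bounding box of the component
        print(region.label, region.area, region.centroid)
        sub_image = image[minr:maxr, minc:maxc]   # crop the component out of the original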
I'm trying to use blob_log or blob_dog for blob detection in a 3D image using skimage. I'm using napari and the binary blobs (3D) sample image (this won't be the image I use later; it just has clear-cut blobs). However, I'm having trouble applying the detected blobs to the image / adding them to the viewer.
skimage has a 2D example that uses matplotlib to add circles to the image, but I would like to identify blobs in the 3D image and create either a binary image (essentially a mask) or labels.
This is what I have, but I'm not sure where to go from here:
from skimage.data import binary_blobs as BBlobs
import pandas as pd
import imageio as io
import numpy as np
import napari
from skimage import filters, morphology, measure, exposure, segmentation, restoration, feature
import skimage.io as skio
from scipy import ndimage as ndi
def add_to_viewer(layer_name, name):
    viewer.add_image(
        layer_name,
        name=name,
        scale=spacing
    )

bblobs = BBlobs(n_dim=3)
add_to_viewer(bblobs, 'image')

blobs = feature.blob_dog(bblobs)

for blob in blobs:
    z, y, x, area = blob
This is skimage's blob feature detection example.
Any help would be appreciated.
What are you trying to do afterwards? Do you need the blob sizes or only the positions? The answer depends a lot on that. Here are three answers:
1. Just visualise the blobs as points:
viewer.add_points(
    blobs[:, :-1], size=blobs[:, -1], name='points', scale=spacing
)
2. Ignore the size (if, for example, you are doing a watershed later, it doesn't matter) and create a labels volume with one label per coordinate:
from skimage.util import label_points
labels = label_points(blobs[:, :-1], bblobs.shape)
viewer.add_labels(labels, scale=spacing)
Note that label_points relies on the current main branch (unreleased) of scikit-image, but you can just copy the source code in the meantime. scikit-image 0.19 should be released soon after this post with the function.
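If you would rather not track the development branch, a minimal stand-in with the same behaviour (one integer label per rounded coordinate, zeros elsewhere) is easy to write; this is a sketch, not the scikit-image source:

import numpy as np

def label_points_stub(coords, shape):
    # hypothetical stand-in for label_points: label i at the i-th coordinate, background 0
    labels = np.zeros(shape, dtype=np.int64)
    idx = tuple(np.round(coords).astype(int).T)
    labels[idx] = np.arange(1, len(coords) + 1)
    return labels

labels = label_points_stub(blobs[:, :-1], bblobs.shape)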
3. Make a labels layer and use napari's Labels layer API directly to paint a blob at each point, including the size from blob detection:
labels_layer = viewer.add_labels(
    np.zeros_like(bblobs, dtype=np.int32), name='blobs', scale=spacing
)

for i, blob in enumerate(blobs, start=1):
    labels_layer.selected_label = i
    labels_layer.brush_size = blob[-1]
    labels_layer.paint(blob[:-1], refresh=False)

labels_layer.refresh()
One small caveat in scenarios 1 and 3 is that I think the blob sizes are "sigmas", meaning that most of the blob lies within 2 sigma of the centre, so you might need to multiply all the sizes by 2 to get a nice display.
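If needed, that adjustment is a one-line tweak to the output of blob_dog, applied before any of the snippets above (a sketch):

blobs[:, -1] = blobs[:, -1] * 2  # treat the sigma column as an approximate radius for display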
I'm currently trying to count the number of shrimp in a given image. I'm using this test image:
The code I have used so far is the following:
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Load the image
path = r'C:\Users\...'  # the path to the image
original = cv2.imread(path, cv2.COLOR_BGR2RGB)
img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

# Histogram to guide the binarization
hist = cv2.calcHist([img], [0], None, [256], [0, 256])

# Apply the threshold
ret, thresh = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY_INV)
From this point I have tried different morphological transformations such as erode, dilate, open, and close, but they don't seem to separate the objects the way I want.
I've read that I can apply a watershed transformation to separate touching elements, but I don't have experience with this (it's the point I'm working on at the moment).
After that I am planning to use a Simple Blob Detector to count the blobs, but I don't know if these steps are correct.
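Roughly, the watershed step I have in mind looks like this (an untested sketch, continuing from thresh above, with the kernel size and the 0.5 distance-threshold factor picked arbitrarily):

# distance transform + watershed to split touching shrimp
kernel = np.ones((3, 3), np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)

sure_bg = cv2.dilate(opening, kernel, iterations=3)            # definite background
dist = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
ret, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)   # definite foreground
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg, sure_fg)                       # region still in doubt

ret, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1          # shift so the background is 1, not 0
markers[unknown == 255] = 0    # mark the unknown region with 0
markers = cv2.watershed(original, markers)

num_objects = markers.max() - 1  # labels 2..N are the separated objects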
Any help is very welcome!
I'm writing a script in Python for my image processing class, which should read a directory of images, display them, and then (eventually) perform Otsu thresholding on them. I can get a reference image to display properly, including Otsu thresholding; however, I run into trouble when I attempt to display the remaining images in the directory. I am not sure that my images are being read from the directory correctly, as I am trying to store them in an array; however, the output window displays grey squares whose dimensions correspond to the actual image resolutions, which suggests that they are being at least partly read correctly.
I've already tried isolating the image-loading and display code into a separate file and running it. I was concerned that the successful processing of my sample image (which included a black/white binarization) was somehow affecting the image display later. This was not the case, as running the separate script produced the same grey-square output.
Update:
I've managed to tweak the script below (not yet updated) to run almost correctly. By writing the full file path directly for each file, I can get the output to display correctly. As best I can tell, there is some issue with loading the images into an array; a potential workaround for future testing is importing the file locations as a string array and iterating over that, rather than loading the images into an array directly (a rough sketch of this follows the script).
import cv2 as cv
import numpy as np
from PIL import Image
import glob
from matplotlib import pyplot as plot
import time
image=cv.imread('Fig ref.jpg')
image2=cv.cvtColor(image, cv.COLOR_RGB2GRAY)
cv.imshow('Image', image)
# global thresholding
ret1,th1 = cv.threshold(image2,127,255,cv.THRESH_BINARY)
# Otsu's thresholding
ret2,th2 = cv.threshold(image2,0,255,cv.THRESH_BINARY+cv.THRESH_OTSU)
# Otsu's thresholding after Gaussian filtering
blur = cv.GaussianBlur(image2,(5,5),0)
ret3,th3 = cv.threshold(blur,0,255,cv.THRESH_BINARY+cv.THRESH_OTSU)
# plot all the images and their histograms
images = [image2, 0, th1,
image2, 0, th2,
blur, 0, th3]
titles = ['Original Noisy Image','Histogram','Global Thresholding (v=127)',
'Original Noisy Image','Histogram',"Otsu's Thresholding",
'Gaussian filtered Image','Histogram',"Otsu's Thresholding"]
for i in range(3):
    plot.subplot(3,3,i*3+1),plot.imshow(images[i*3],'gray')
    plot.title(titles[i*3]), plot.xticks([]), plot.yticks([])
    plot.subplot(3,3,i*3+2),plot.hist(images[i*3].ravel(),256)
    plot.title(titles[i*3+1]), plot.xticks([]), plot.yticks([])
    plot.subplot(3,3,i*3+3),plot.imshow(images[i*3+2],'gray')
    plot.title(titles[i*3+2]), plot.xticks([]), plot.yticks([])
plot.show()
imageFolderPath = 'D:\Google Drive\Engineering\Senior Year\Image processing\Image processing group work'
imagePath = glob.glob(imageFolderPath + '/*.JPG')
im_array = np.array( [np.array(Image.open(img).convert('RGB')) for img in imagePath] )
temp=cv.imread("D:\Google Drive\Engineering\Senior Year\Image processing\Image processing group work\Fig ref.jpg")
cv.imshow('image', temp)
time.sleep(15)
for i in range(9):
    cv.imshow('Image', im_array[i])
    time.sleep(2)
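The workaround I mentioned in the update would look roughly like this (an untested sketch: keep the file paths as strings, load each image only when it is displayed, and use cv.waitKey instead of time.sleep so the window actually refreshes):

import glob
import cv2 as cv

image_paths = glob.glob(imageFolderPath + '/*.JPG')  # store the paths, not the pixel data
for path in image_paths:
    img = cv.imread(path)
    cv.imshow('Image', img)
    cv.waitKey(2000)  # display each image for 2 seconds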
In plot.subplot(3,3,i*3+3), plot.imshow(images[i*3+2],'gray'), the second argument of imshow tells matplotlib to use the gray color map. Get rid of it and you will get color displays.
I want to create an RGB image made from a random array of pixel values in Python with an OpenCV/NumPy setup.
I'm able to create a gray image, which looks amazingly live, with this code:
import numpy as np
import cv2
pic_array = np.random.randint(255, size=(900, 800))
pic_array_8bit = pic_array.astype(np.uint8)
pic_g = cv2.imwrite("pic-from-random-array.png", pic_array_8bit)
But I want to make it in color as well. I've tried converting with cv2.cvtColor(), but I couldn't make it work.
The issue might be in the array definition or a missed step. I couldn't find a similar situation... Any help on how to make a random RGB image in color would be great.
Thanks!
An RGB image is composed of three grayscale images (channels). You can make all three at once like this:
rgb = np.random.randint(255, size=(900, 800, 3), dtype=np.uint8)
cv2.imshow('RGB', rgb)
cv2.waitKey(0)
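Equivalently, you can build the image from three separate single-channel arrays and stack them; a quick sketch (note that OpenCV's imshow/imwrite expect BGR channel order):

import numpy as np
import cv2

b = np.random.randint(255, size=(900, 800), dtype=np.uint8)
g = np.random.randint(255, size=(900, 800), dtype=np.uint8)
r = np.random.randint(255, size=(900, 800), dtype=np.uint8)

bgr = cv2.merge([b, g, r])  # stack the three grayscale planes into one color image
cv2.imwrite("pic-from-random-array.png", bgr)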
First, define random image data consisting of 3 channels using NumPy, as shown below:
import numpy as np
data = np.random.randint(0, 255, size=(900, 800, 3), dtype=np.uint8)
Now use the Python Imaging Library (PIL), as shown below:
from PIL import Image
img = Image.fromarray(data, 'RGB')
img.show()
You can also save the image easily using the save function:
img.save('image.png')
I tried almost all the filters in PIL, but failed.
Is there any function in NumPy or SciPy to remove the noise?
Like bwareaopen() in Matlab?
e.g.:
PS: If there is a way to fill the letters in with black, I would be grateful.
Numpy/Scipy can do morphological operations just as well as Matlab can.
See scipy.ndimage.morphology, containing, among other things, binary_opening(), the equivalent of Matlab's bwareaopen().
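A minimal usage sketch (the input filename is hypothetical, and the threshold of 128 and 3x3 structuring element are arbitrary choices):

import numpy as np
from PIL import Image
from scipy import ndimage

# 'noisy.png' is a placeholder input; threshold it to get a boolean foreground mask
binary = np.asarray(Image.open('noisy.png').convert('L')) > 128
cleaned = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
Image.fromarray((cleaned * 255).astype(np.uint8)).save('cleaned.png')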
NumPy/SciPy solution: scipy.ndimage.morphology.binary_opening. A more powerful solution: use scikit-image.
from skimage import morphology
cleaned = morphology.remove_small_objects(YOUR_IMAGE, min_size=64, connectivity=2)
See http://scikit-image.org/docs/0.9.x/api/skimage.morphology.html#remove-small-objects
I don't think this is exactly what you want, but it works (it uses OpenCV, which uses NumPy):
import cv2
# load image
fname = 'Myimage.jpg'
im = cv2.imread(fname, cv2.COLOR_RGB2GRAY)
# blur image
im = cv2.blur(im, (4, 4))
# apply a threshold
im = cv2.threshold(im, 175, 250, cv2.THRESH_BINARY)
im = im[1]
# show image
cv2.imshow('', im)
cv2.waitKey(0)
Output (image in a window):
You can save the image using cv2.imwrite
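For instance, continuing from the snippet above (the output filename here is just a placeholder):

cv2.imwrite('output.png', im)  # write the thresholded image to disk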