In my current data analysis I have some segmented images, like the example below.
My problem is that I would like to fit a polynomial or spline (something one-dimensional) to
a certain area (red) in the segmented image (the result would be the black line).
Usually I would use something like orthogonal distance regression; the problem is that this
needs some kind of fit function, which I don't have in this case.
So what would be the best approach to do this with Python/NumPy?
Is there maybe some standard algorithm for this kind of problem?
UPDATE:
It seems my drawing skills are probably not the best; the red area in the picture could also have some random noise and does not have to be completely connected (there could be small gaps due to noise).
UPDATE2:
The overall target would be to have a parametrized curve p(t) which returns the position, i.e. p(t) => (x, y) for t in [0, 1], where t=0 is the start of the black line and t=1 is its end.
I used scipy.ndimage and this gist as a template. This gets you almost there; you'll have to find a reasonable way to parameterize the curve from the mostly skeletonized image.
from scipy.misc import imread
import scipy.ndimage as ndimage
# Load the image
raw = imread("bG2W9mM.png")
# Convert the image to greyscale, using the red channel
grey = raw[:,:,0]
# Simple thresholding of the image
threshold = grey>200
radius = 10
distance_img = ndimage.distance_transform_edt(threshold)
morph_laplace_img = ndimage.morphological_laplace(distance_img, (radius, radius))
skeleton = morph_laplace_img < morph_laplace_img.min()/2
import matplotlib.cm as cm
from pylab import *
subplot(221); imshow(raw)
subplot(222); imshow(grey, cmap=cm.Greys_r)
subplot(223); imshow(threshold, cmap=cm.Greys_r)
subplot(224); imshow(skeleton, cmap=cm.Greys_r)
show()
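As a hedged follow-up (not part of the original answer): one simple way to get the p(t) parametrization asked for in UPDATE2 is to order the skeleton pixels and fit a smoothing spline with scipy.interpolate.splprep. This sketch assumes the skeleton is roughly monotone along the x axis; strongly curved or branching skeletons would need a proper path ordering first.
import numpy as np
from scipy.interpolate import splprep, splev
# pixel coordinates of the skeleton, crudely ordered along x
ys, xs = np.nonzero(skeleton)
order = np.argsort(xs)
xs, ys = xs[order].astype(float), ys[order].astype(float)
# smoothing spline through the ordered points
tck, _ = splprep([xs, ys], s=len(xs))
def p(t):
    """Parametrized curve: t in [0, 1] -> (x, y)."""
    x, y = splev(t, tck)
    return x, y
print(p(0.0), p(0.5), p(1.0))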
You may find other answers that reference skeletonization useful; an example of that is here:
Problems during Skeletonization image for extracting contours
I'm trying to find these two horizontal lines with the HoughLines transform. As you can see, the picture is very noisy! Currently my workflow looks like this:
crop the image
blur it
lower the noise (for that, I invert the image, and then subtract the blurred image from the inverted one)
open it and dilate it with a "horizontal kernel" (kernel_1 = np.ones((10,1), np.uint8))
threshold
HoughLines
The results are not as good as expected... Is there a better strategy, knowing that I will always search for horizontal lines (hence, abs(theta) will always be close to 0 or pi)?
The issue is the noise and the faint signal. You can subdue the noise with averaging/integration while maintaining the signal, because it's replicated along a dimension (the signal is a line).
Your approach using a very wide but narrow kernel can be extended to simply integrating along the whole image:
rotate the image so the suspected line is aligned with an axis (let's say horizontal)
sum up all pixels of one scanline (horizontal line), np.sum(axis=1) or mean; either way, mind the data type. Working with floats is convenient.
work with the 1-dimensional series of values
This will not tell you how long the line is, only that it's there and potentially spanning the whole width (a sketch of these steps follows below).
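A minimal sketch of those steps, assuming the lines are already close to horizontal so the rotation angle is essentially zero (the angle value and the number of reported rows are placeholders, not part of the original answer):
import numpy as np
import cv2 as cv
import scipy.ndimage
img = cv.imread("0gczo.png", cv.IMREAD_GRAYSCALE).astype(np.float32) / 255
angle_deg = 0.0  # assumed: the suspected lines are already near-horizontal
rotated = scipy.ndimage.rotate(img, angle_deg, reshape=False, order=1)
profile = rotated.mean(axis=1)  # one value per scanline
# dark lines show up as dips in the profile; report the lowest few rows
print(np.argsort(profile)[:5])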
Edit: since my answer got a reaction, I'll elaborate as well:
I think you can lowpass that to get the "gray" baseline, then subtract ("difference of gaussians"). that should give you a nice signal.
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt
import scipy.ndimage
im = cv.imread("0gczo.png", cv.IMREAD_GRAYSCALE) / np.float32(255)
relief = im.mean(axis=1)
smoothed = scipy.ndimage.gaussian_filter(relief, sigma=2.0)
baseline = scipy.ndimage.gaussian_filter(relief, sigma=10.0)
difference = smoothed - baseline
std = np.std(difference)
level = 2
outliers = (difference <= std * -level)
plt.plot(difference)
plt.hlines([std * +level, std * -level], xmin=0, xmax=len(relief))
plt.plot(std * -level + outliers * std)
plt.show()
# where those peaks are:
edgemap = np.diff(outliers.astype(np.int8))
(edges,) = edgemap.nonzero()
print(edges) # [392 398 421 427]
print(edgemap[edges]) # [ 1 -1 1 -1]
Much the same as Christoph's answer, but I just wanted to share a processed image, which I can't do in the comments.
I just took the mean across the rows with np.mean(axis=1) and normalised the result. Hopefully you can see the two dark bands corresponding to your lines.
OK, here is the situation: I want to make a watershed of this binary vessels image.
Binary vessels.
I want to use these colored vessels as seed points for the algorithm.
Seed points
It seems that when I use the raw colored image, the watershed does not go further than the colored image.
The goal is to have this image.
Filled binary vessels
The code used is this one:
distances = distance_transform_edt(vessels)
segmentation = watershed(-distances, markers, mask=vessels)
The only solution that I found was to erode the markers data (the first colored image).
Do you guys have any idea why the watershed does this? We even tried the same code on other computers and it works fine without erosion.
Edit:
Here is an image of the distances. When I take the negative, every 1 becomes -1, so the highest values in the image become 0.
Welcome to the scikit-image thread of SO! Below is a small reproducible example showing that the watershed behaves nicely even with touching markers.
import matplotlib.pyplot as plt
import numpy as np
from skimage import segmentation
from scipy import ndimage
img = np.zeros((20, 20), dtype=bool)
img[3:-3, 3:-3] = True
distance = ndimage.distance_transform_edt(img)
markers = np.zeros_like(img, dtype=np.uint8)
markers[7:-7, 5:10] = 1
markers[7:-7, 10:15] = 2
ws = segmentation.watershed(-distance, markers, mask=img)
fig, ax = plt.subplots(1, 3)
ax[0].imshow(img)
ax[1].imshow(markers)
ax[2].imshow(ws)
plt.show()
Could it happen that the non-labeled vessel pixels in your markers array are not set to 0 but 1 instead? The watershed only labels 0-valued pixels.
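If that is the case, a hedged sketch of a fix (assuming none of your real seeds actually use the value 1; this is not from the original answer) would be to reset those pixels to 0 before calling the watershed:
import numpy as np
# hypothetical fix: if unlabeled vessel pixels ended up as 1, reset them to 0
markers = np.asarray(markers).copy()
markers[markers == 1] = 0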
A reproducible standalone script could help, the different images you linked to had different dimensions so it was hard to work from them.
Finally, you might be interested in trying the random walker algorithm, which can produce really good results for images such as yours (no strong gradients between the regions you want to separate).
Summary of Question
I am detecting object silhouettes in front of a light source. To simplify the background and remove noise, I require masking everything that isn't the light source. How can I tell when the object would be on the edge of the masked area?
Assumptions
Assume the silhouettes are featureless (monochrome black and white for edge detection) and ambiguous in shape (a square in image 1 may be a circle in image 2).
Detailed Explanation of the Problem with "High Quality" Figures
Consider a silhouette in front of a light source. It is distinct and we can tell it is nested within the outer contour. Figure 1 depicts a simplified case.
We can treat our outer circle as a mask in this case, and easily ignore everything NOT within the contour. Figure 2 depicts the simplified case with some edge detection.
Everything works lovely until the silhouette moves to the edge of the light source. Suddenly we run into problems. Figure 3 is an example of a shape on the edge.
The silhouette is indistinguishable from the black of the background/masked area. OpenCV simply assumes that the contour of our light source is suddenly funny-shaped and that there is no other object to be detected.
The Question Restated
What tools can I use to detect that there has been some sort of interruption of the edge shape? Is there a good/computationally cheap way of determining if our silhouette is intersecting with another?
Graveyard of What I Know Does NOT Work
Assuming a static or simple silhouette shape. The figures are cartoons representing a more complicated real problem.
Assuming a perfectly round light source. HoughCircles does not work.
You can use the cv2.logPolar function to unwrap the circle/oval shape.
After that, np.argmax can be used to find the curve. Try smoothing out the curve using SciPy's signal.savgol_filter(). When the object blocks the light source, there will be a big difference between the smoothed line and the argmax data:
This is the code that I used:
import numpy as np
import cv2
# Read the image
img = cv2.imread('/home/stephen/Desktop/JkgJw.png', 0)
# Find the log_polar image
log_polar = cv2.logPolar(img, (img.shape[0]/2, img.shape[1]/2), 40, cv2.WARP_FILL_OUTLIERS)
# Create a background to draw on
bg = np.zeros_like(log_polar)
# Iterate through each row in the image and get the points on the edge
h,w = img.shape
points = []
for col in range(h-1):
    col_slice = log_polar[col:col+1, :]
    curve = np.argmax(255 - col_slice)
    cv2.circle(bg, (curve, col), 0, 255, 1)
    points.append((curve, col))
cv2.imshow('log_polar', log_polar)
cv2.waitKey(0)
cv2.destroyAllWindows()
import scipy
from scipy import signal
x,y = zip(*points)
x_smooth = signal.savgol_filter(x,123,2)
import matplotlib.pyplot as plt
plt.plot(x)
plt.plot(x_smooth)
plt.show()
I am trying to segment some microscopy bright-field images showing some E. coli bacteria.
The picture I am working with resembles this one (even if this one is obtained with phase contrast):
My problem is that after running my segmentation function (OtsuMask below) I cannot distinguish dividing bacteria (you can try my code below on the sample image). This means that I get one single labeled region for a couple of bacteria which are joined at their ends, instead of two different labeled regions.
The boundary between two dividing bacteria is too narrow to be highlighted by the morphological operations I perform on the thresholded image, but I guess there must be a way to achieve my goal.
Any ideas/suggestions?
import scipy as sp
import numpy as np
from scipy import optimize
import mahotas as mht
from scipy import ndimage
import pylab as plt
def OtsuMask(img, dilation_size=2, erosion_size=1, remove_size=500):
    img_thres = np.asarray(img)
    s = np.shape(img)
    # initial guess for the background plane: x-slope, y-slope, offset
    p0 = np.zeros(3)
    p0[0] = (img[0, 0] - img[0, -1]) / 512.
    p0[1] = (img[1, 0] - img[1, -1]) / 512.
    p0[2] = img.mean()
    [x, y] = np.meshgrid(np.arange(s[1]), np.arange(s[0]))
    p = fitplane(img, p0)
    img = img - myplane(p, x, y)
    m = img.min()
    img = img - m
    img = abs(img)
    img = img.astype(np.uint16)
    """perform thresholding with Otsu"""
    T = mht.thresholding.otsu(img, 2)
    print(T)
    img_thres = img
    img_thres[img < T * 0.9] = 0
    img_thres[img > T * 0.9] = 1
    img_thres = -img_thres + 1
    """morphological operations"""
    diskD = createDisk(dilation_size)
    diskE = createDisk(erosion_size)
    img_thres = ndimage.morphology.binary_dilation(img_thres, diskD)
    labeled_im, N = mht.label(img_thres)
    label_sizes = mht.labeled.labeled_size(labeled_im)
    labeled_im = mht.labeled.remove_regions(labeled_im, np.where(label_sizes < remove_size))
    plt.figure()
    plt.imshow(labeled_im)
    return labeled_im

def myplane(p, x, y):
    return p[0] * x + p[1] * y + p[2]

def res(p, data, x, y):
    a = data - myplane(p, x, y)
    return np.sum(np.abs(a**2))

def fitplane(data, p0):
    s = np.shape(data)
    [x, y] = np.meshgrid(np.arange(s[1]), np.arange(s[0]))
    print(np.shape(x), np.shape(y))
    p = optimize.fmin(res, p0, args=(data, x, y))
    print(p)
    return p

def createDisk(size):
    x, y = np.meshgrid(np.arange(-size, size), np.arange(-size, size))
    diskMask = ((x + .5)**2 + (y + .5)**2 < size**2)
    return diskMask
The first part of the code in OtsuMask consists of a plane fitting and subtraction.
A similar approach to the one described in this related stackoverflow answer can be used here.
It goes basically like this:
threshold your image, as you have done
apply a distance transform on the thresholded image
threshold the distance transform, so that only a small 'seed' part of each bacterium remains
label these seeds, giving each one a different shade of gray
(also add a labeled seed for the background)
execute the watershed algorithm with these seeds and the distance-transformed image, to get the separated contours of your bacteria
Check out the linked answer for some pictures that will make this much clearer.
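A rough sketch of those steps, assuming a greyscale image img with dark bacteria and using placeholder threshold values (this is not the linked answer's exact code):
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed
binary = img < 50                               # step 1: fixed threshold, bacteria are dark
dist = ndimage.distance_transform_edt(binary)   # step 2: distance transform
seeds = dist > 0.6 * dist.max()                 # step 3: keep only a small core per bacterium
markers, _ = ndimage.label(seeds)               # step 4: label the seeds
markers[~binary] = markers.max() + 1            # step 5: add a seed for the background
labels = watershed(-dist, markers)              # step 6: watershed on the negated distance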
A few thoughts:
Otsu may not be a good choice, as you may even use a fixed threshold (your bacteria are black).
Thresholding the image with any method will remove a lot of useful information.
I do not have a complete recipe for you, but even this very simple thing seems to give a lot of interesting information:
import matplotlib.pyplot as plt
import cv2
# cv2 is only used to read the image into an array, use only green channel
bact = cv2.imread("/tmp/bacteria.png")[:,:,1]
# draw a contour image with fixed threshold 50
fig = plt.figure()
ax = fig.add_subplot(111)
ax.contourf(bact, levels=[0, 50], colors='k')
This gives:
This suggests that if you use contour-tracing techniques with fixed contour levels, you will get quite nice-looking starting points for dilation and erosion. So, two differences compared with simple thresholding:
Contouring uses much more of the grayscale information than simple black/white thresholding.
The fixed threshold seems to work well with these images, and if illumination correction is needed, Otsu is not the best choice.
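For the contour-tracing idea, a small hedged sketch (not part of the answer) using scikit-image on the same bact array from the snippet above:
from skimage import measure
# trace iso-contours at the fixed level 50; each contour is an (N, 2) array of (row, col) points
contours = measure.find_contours(bact, level=50)
print(len(contours), "contours found")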
At one point, skimage watershed segmentation was more useful for me than any of the OpenCV samples. It uses some code borrowed from the CellProfiler project (a Python-based tool for sophisticated cell image analysis). Hint: use the Euclidean distance transform from OpenCV; it's faster than the scipy implementation. Also, the peak_local_max function has a distance parameter, which is useful for precisely distinguishing single cells. I think this function is more robust in finding cell peaks than a crude threshold (because the intensity of cells may vary).
There is also a scipy watershed implementation, but it has weird behavior.
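A minimal sketch of that combination, assuming a binary uint8 image named binary with the cells as foreground (the variable names and the min_distance value are placeholders, not from the answer):
import numpy as np
import cv2
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)   # OpenCV EDT, faster than scipy's
coords = peak_local_max(dist, min_distance=10)         # one peak per cell, tune to cell size
seed_mask = np.zeros(dist.shape, dtype=bool)
seed_mask[tuple(coords.T)] = True
markers, _ = ndimage.label(seed_mask)
labels = watershed(-dist, markers, mask=binary.astype(bool))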
I'm new to Python and stuck.
I want to make a python script that allows me to separate adjacent particles on an image like this:
into separate regions like this:
I was suggested to use the watershed method, which, as far as I understand it, would give me something like this:
EDIT: I actually found out that this is the distance transform and not the watershed.
I could then use a threshold to separate them. I followed this OpenCV watershed guide, but it only worked to cut out the particles; I was not able to "transform" the code to do what I want.
I then took another approach. I tried to use the OpenCV contours, which gave me good contours of the particles. I have then been looking intensively for an easy way to perform a polygon offset in order to shrink the edge like this:
Using the centers from the offset contours (polygons) should give me the number of particles, but I just haven't been able to find a simple way to do edge offset / polygon shrinking with Python.
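For the edge-offset idea itself, one possible sketch (not from the question or the answer below; the filename and kernel size are assumptions, and erosion is only an approximation of a true polygon offset) is to shrink the binary particles morphologically and count the remaining blobs:
import cv2
import numpy as np
binary = cv2.imread("particles.png", cv2.IMREAD_GRAYSCALE)   # hypothetical filename
_, binary = cv2.threshold(binary, 128, 255, cv2.THRESH_BINARY)
kernel = np.ones((9, 9), np.uint8)
shrunk = cv2.erode(binary, kernel)                           # shrinks each particle inward
n_labels, _ = cv2.connectedComponents(shrunk)
print(n_labels - 1)                                          # subtract the background label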
Here is a script using NumPy, SciPy and scikit-image (aka skimage). It makes use of local maxima extraction and watershedding plus labeling (i.e. connected components extraction).
import numpy as np
import scipy.misc
import scipy.ndimage
import skimage.feature
import skimage.morphology
# parameters
THRESHOLD = 128
# read image
im = scipy.misc.imread("JPh65.png")
# convert to gray image
im = im.mean(axis=-1)
# find peaks
peak = skimage.feature.peak_local_max(im, threshold_rel=0.9, min_distance=10)
# make an image with peaks at 1
peak_im = np.zeros_like(im)
for p in peak:
peak_im[p[0], p[1]] = 1
# label peaks
peak_label, _ = scipy.ndimage.label(peak_im)
# propagate peak labels with watershed
labels = skimage.morphology.watershed(255 - im, peak_label)
# limit watershed labels to area where the image is intense enough
result = labels * (im > THRESHOLD)