finding kill zones in antibiotic assays with low contrast between zones [closed] - python

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
This is an antibiotic assay on a petri dish:
I'm working on a project where I'm automating the reading of antibiotic assays of the petri dish kind. In these tests, bacteria from a patient are spread over the petri dish and allowed to grow into a "bacterial lawn." Once the bacteria cover the entire surface of the dish, pills, either of different medicines or of the same medicine at different concentrations, are placed on the dish, and after 24 hours the kill zones, if they exist, are measured. A kill zone is a region where the medicine has killed off the bacteria for some radius around the pill. The radius of the kill zone determines whether the bacteria are susceptible or resistant to the medication and to the different concentrations.
I've been using a training set of 20 images in a folder to generalize my results, and the opencv, skimage, numpy, scipy and matplotlib Python libraries to build this program. So far I've been able to accurately identify the petri dish rim with a Hough circle finder run on a morphological gradient transform of the image. I've also been able to use SURF to identify the positions of the pills in the image.
My question:
The issue is that there isn't enough contrast between the kill zone edge and the bacterial lawn for the Hough circle finder to pick out these circular zones. Could anyone suggest a method to accurately identify the kill zones?
import os
import numpy as np
import scipy
import cv2
from scipy import signal
from matplotlib import pyplot as plt
import matplotlib.patches as patches
import matplotlib.cbook as cbook
from skimage import io, exposure, segmentation, img_as_float
from skimage.util import img_as_ubyte
from skimage.color import rgb2gray, label2rgb
from skimage.filters import sobel
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.segmentation import slic
from skimage.feature import peak_local_max
from skimage.future import graph
def HoughCircleFinder(filtered_image, image):
    output = image.copy()
    gray = img_as_ubyte(filtered_image)
    # gray = cv2.cvtColor(img_as_ubyte(image), cv2.COLOR_BGR2GRAY)
    # detect circles in the image
    circles = cv2.HoughCircles(gray, cv2.cv.CV_HOUGH_GRADIENT, 5,
                               minDist=int(0.45 * max(image.shape)),
                               minRadius=int(0.25 * max(image.shape)),
                               maxRadius=int(0.6 * max(image.shape)))
    # ensure at least some circles were found
    if circles is not None:
        # convert the (x, y) coordinates and radius of the circles to integers
        circles = np.round(circles[0, :]).astype("int")
        # loop over the (x, y) coordinates and radius of the circles
        for (x, y, r) in circles:
            print x, y, r
            # filter out circles whose centers go out of view of the image
            if not (image.shape[0] * 0.25) < x < (image.shape[0] * 0.75):
                continue
            elif not (image.shape[0] * 0.25) < y < (image.shape[0] * 0.75):
                continue
            else:
                # draw the circle in the output image, then draw a rectangle
                # corresponding to the center of the circle
                # output = CropCircle(output, x, y, r)
                cv2.circle(output, (x, y), r, (0, 255, 0), 4)
                cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)
    return output
def MorphologicalGradient(img):
    kernel = np.ones((7, 3), np.uint8)
    # kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    gradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)
    return gradient
def SURF(img):
    featurels = []
    surf = cv2.SURF(7000)
    kp, des = surf.detectAndCompute(img, None)
    for i in kp:
        # print round(i.pt[0])
        featurels.append(i.pt)
    surf.hessianThreshold = 50000
    img2 = cv2.drawKeypoints(img, kp, None, (255, 0, 0), 4)
    return img2, featurels
# make list of images from file
pic_list = [os.path.join("/Users/sethcommichaux/Desktop/PetriKillZone/PetriDishes/",pic) for pic in os.listdir("/Users/sethcommichaux/Desktop/PetriKillZone/PetriDishes/")]
# for loop for processing images and getting useful data
for image in pic_list[1:]:
    print image
    image1 = cv2.imread(image, 0)  # grayscale image
    image = cv2.imread(image)      # color image
    print "number of pixels in image: ", image.size
    print "image shape (if grayscale will be 2 tuple, if color 3 or more): ", image.shape
    image = HoughCircleFinder(MorphologicalGradient(image1), image)
    print 'image data type: ', image.dtype
    plt.figure()
    io.imshow(image)
    plt.show()
    # part that finds pills
    image, features = SURF(image)
    plt.figure()
    io.imshow(image)
    plt.show()
I have used the SURF keypoints to identify the locations of the pills, but I'm at a loss for how to find the kill zones. Below is code for plotting intensity profiles along the x-axis going out from each pill.
for row in features:
    # NB: PeakFind is a helper defined elsewhere and not shown here
    PeakFind(image[round(row[1])])
    print image[round(row[1])]
    plt.figure()
    plt.plot(range(len(image[round(row[1])])), image[round(row[1])])
    plt.show()
    # plt.figure()
    # x = plt.hist(image[round(row[0])])
print features
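One possible way around the low contrast might be to skip edge maps entirely and read the boundary off a radial intensity profile around each pill centre that SURF already found: averaging the intensity around a whole circle amplifies a faint edge relative to pixel noise, which is exactly what a Hough accumulator on a noisy edge map struggles with. A minimal sketch, assuming `image1` (the grayscale image) and `features` (the pill centres) from the code above; `max_r`, the 90 sample angles, and `sigma=3` are made-up tuning values:

import numpy as np
from scipy.ndimage import gaussian_filter1d

def kill_zone_radius(gray, cx, cy, max_r=200):
    """Estimate the kill zone radius as the sharpest change in the
    mean radial intensity profile around one pill centre."""
    h, w = gray.shape
    profile = []
    for r in range(2, max_r):
        # sample 90 points on the circle of radius r around (cx, cy)
        theta = np.linspace(0, 2 * np.pi, 90, endpoint=False)
        xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, w - 1)
        ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, h - 1)
        profile.append(gray[ys, xs].mean())
    # smooth the profile, then take the radius of the steepest change
    profile = gaussian_filter1d(np.asarray(profile), sigma=3)
    return 2 + int(np.argmax(np.abs(np.diff(profile))))

# e.g. one radius estimate per pill found by SURF:
# for (px, py) in features:
#     print kill_zone_radius(image1, px, py)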

Related

Determining if an object is up or down

I am trying to determine if this object is upward or downward.
Object in question: (shot with phone)
Problem: I cannot determine if the part is flipped right side up or right side down
The ones with the back light are cropped using cv2.minAreaRect so this is the true resolution of what the camera sees.
Work area:
Camera resolution: 2592 (H) × 1944 (V)
Camera is 15-18 inches above the tray (can be moved)
2 trays side by side, both 6x9 inches, with a back light
[image] This image has no local lighting and the hump is down. This finds contours, XY, and rotation.
[image] In this image I added local lighting; the hump is down. The idea was to threshold the image and detect the ridge, but since the local lighting is fixed, orientation affects the light reaching the ridge, leading to inconsistent data.
[image] This image was taken with no back light, using my phone. Hump is down (reference image).
[image] The same, but the hump is up and only a back light.
[image] Hump is up, back light and local light.
[image] Hump is up (reference image).
There is no general solution, but I guess if you do a Hough circle transform to detect the circle position and compare it to the middle of the image, you might have a solution specific to this case:
# Import libraries
import numpy as np
import matplotlib.pyplot as plt
from skimage.io import imread
from skimage.color import rgb2gray, gray2rgb
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import circle_perimeter

# Read image
img = imread('21.png')
# RGB to gray
raw = rgb2gray(img)
# Edge detector
edges = canny(raw)
# Radii to search over
hough_radii = np.arange(5, 25, 2)
hough_res = hough_circle(edges, hough_radii)
# Select the most prominent circle
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii,
                                           total_num_peaks=1)
# Check whether the shape is up or down
if cy > raw.shape[0] // 2:
    pos = "shape is down"
else:
    pos = "shape is up"
# Draw the circle and the image mid-line
fig, ax = plt.subplots(ncols=2, nrows=1, figsize=(10, 4))
image = gray2rgb(raw)
for center_y, center_x, radius in zip(cy, cx, radii):
    circy, circx = circle_perimeter(center_y, center_x, radius,
                                    shape=image.shape)
    image[circy, circx] = (220, 20, 20)
image[image.shape[0] // 2, ...] = (255, 0, 0)
ax[0].imshow(img)
ax[0].set_title('Original')
ax[0].axis('off')
ax[1].imshow(image)
ax[1].set_title(pos)
ax[1].axis('off')
plt.show()

How to represent a binary image as a graph with the axis being height and width dimensions and the data being the pixels

I am trying to use Python along with opencv, numpy and matplotlib to do some computer vision for a robot which will use a railing to navigate. I am currently extremely stuck and have run out of places to look. My current code is:
import cv2
import numpy as np
import matplotlib.pyplot as plt

image = cv2.imread('railings.jpg')
railing_image = np.copy(image)
resized_image = cv2.resize(railing_image, (881, 565))
gray = cv2.cvtColor(resized_image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
canny = cv2.Canny(blur, 85, 255)
cv2.imshow('test', canny)
image_array = np.array(canny)
ncols, nrows = image_array.shape
count = 0
scan = np.array
for x in range(0, image_array.shape[1]):
    for y in range(0, image_array.shape[0]):
        if image_array[y, x] == 0:
            count += 1
            scan = [scan, count]
print(scan)
plt.plot([0, count])
plt.axis([0, nrows, 0, ncols])
plt.show()
cv2.waitKey(0)
I am using a Canny edge image, which is stored as an array of 1s and 0s; the image I need represented is
The final result should look something like the following image.
I've tried using a histogram function but I've only managed to get that to output essentially a count of the number of times a 1 or 0 appears.
If anyone could help me, or point me in the right direction towards producing a graph that represents the image pixels within height and width dimensions, that would be appreciated.
Thank you
I'm not sure how general this is, but you could just use numpy argmax to get the location of the maximum (like this) in your case. You should avoid loops, as they will be very slow; it is better to use numpy functions. I've imported your image and used the cutoff criterion that a value of 200 or more in the yellow channel is railing:
import cv2
import numpy as np
import matplotlib.pyplot as plt
#This loads the canny image you uploaded
image = cv2.imread('uojHJ.jpg')
#Trim off the top taskbar
trimimage = image[100:, :,0]
#Use argmax with 200 cutoff colour in one channel
maxindex = np.argmax(trimimage[:,:]>200, axis=0)
#Plot graph
plt.plot(trimimage.shape[0] - maxindex)
plt.show()
Where this looks as follows:

How to mark the center and calculate the diameter of an image using opencv - python?

I am working on recognizing the center of a region of interest and rendering it on the image. I'm using cv2.findContours to delimit the edges of the region of interest, and cv2.minEnclosingCircle(cnt) to circumscribe it. With the code below I can identify the center of each ROI, but I am not able to mark on the output image the circle I want to measure, and I also want to mark with a small point the exact location where the algorithm identified the center.
import cv2
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.offsetbox import AnchoredText

thresh = cv2.imread('IMD044.png', 0)
_, contours, hierarchy = cv2.findContours(thresh, 2, 1)
print (len(contours))
cnt = contours
for i in range(len(cnt)):
    (x, y), radius = cv2.minEnclosingCircle(cnt[i])
    center = (int(x), int(y))
    radius = int(radius)
    cv2.circle(thresh, center, radius, (0, 255, 0), 2)
    print ('Circle: ' + str(i) + ' - Center: ' + str(center) + ' - Radius: ' + str(radius))
    plt.text(x - 15, y + 10, '+', fontsize=25, color='red')
    plt.text(10, -10, 'Centro: ' + str(center), fontsize=11, color='red')
    plt.text(340, -10, 'Diametro: ' + str((radius * 2) / 100) + 'mm', fontsize=11, color='red')
    plt.Circle(x, y, color='red', fill=False)
plt.imshow(thresh, cmap='gray')
plt.show()
I used the OpenCV documentation to demarcate the contours and get the regions, but the green circle mark does not appear.
Output:
Expected output:
Update: I was able to add the information; now I only need to add the circle and check whether the diameter is correct.
You are using a single-channel image, but trying to draw in 3-channel color. Add the following:
...
thresh = cv2.imread('IMD016.png',0)
_, contours,hierarchy = cv2.findContours(thresh,2,1)
thresh = cv2.cvtColor(thresh, cv2.COLOR_GRAY2RGB)
print (len(contours))
...
Also, be extra careful when mixing OpenCV and pyplot drawing routines. OpenCV uses BGR by default, while pyplot uses RGB. In addition, cv2 drawing routines don't add extra channels, while the plt ones do. Just keep that in mind.
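For example, a minimal sketch of the safe pattern ('example.png' is a placeholder filename):

import cv2
from matplotlib import pyplot as plt

img_bgr = cv2.imread('example.png')                  # OpenCV loads images as BGR
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)   # reorder channels for pyplot
plt.imshow(img_rgb)                                  # pyplot assumes RGB
plt.show()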

Image processing - fill in hollow circles

I have a binary black-and-white image that looks like this
I want to fill in those white circles to be solid white disks. How can I do this in Python, preferably using skimage?
You can detect circles with skimage's methods hough_circle and hough_circle_peaks and then draw over them to "fill" them.
In the following example, most of the code is doing "hierarchy" computation for the best-fitting circles, to avoid drawing circles that are inside one another:
# skimage version 0.14.0
import math
import numpy as np
import matplotlib.pyplot as plt
from skimage import color
from skimage.io import imread
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.feature import canny
from skimage.draw import circle
from skimage.util import img_as_ubyte

INPUT_IMAGE = 'circles.png'  # input image name
BEST_COUNT = 6        # how many circles to draw
MIN_RADIUS = 20       # min radius should be bigger than noise
MAX_RADIUS = 60       # max radius of circles to be detected (in pixels)
LARGER_THRESH = 1.2   # circle is significantly larger than another if its radius is at least this much bigger
OVERLAP_THRESH = 0.1  # circles are considered overlapping if this part of the smaller circle overlaps

def circle_overlap_percent(centers_distance, radius1, radius2):
    '''
    Calculating the percentage area overlap between circles
    See Gist for comments:
    https://gist.github.com/amakukha/5019bfd4694304d85c617df0ca123854
    '''
    R, r = max(radius1, radius2), min(radius1, radius2)
    if centers_distance >= R + r:
        return 0.0
    elif R >= centers_distance + r:
        return 1.0
    R2, r2 = R**2, r**2
    x1 = (centers_distance**2 - R2 + r2) / (2 * centers_distance)
    x2 = abs(centers_distance - x1)
    y = math.sqrt(R2 - x1**2)
    a1 = R2 * math.atan2(y, x1) - x1 * y
    if x1 <= centers_distance:
        a2 = r2 * math.atan2(y, x2) - x2 * y
    else:
        a2 = math.pi * r2 - (r2 * math.atan2(y, x2) - x2 * y)
    overlap_area = a1 + a2
    return overlap_area / (math.pi * r2)

def circle_overlap(c1, c2):
    d = math.sqrt((c1[0] - c2[0])**2 + (c1[1] - c2[1])**2)
    return circle_overlap_percent(d, c1[2], c2[2])

def inner_circle(cs, c, thresh):
    '''Is circle `c` "inside" one of the `cs` circles?'''
    for dc in cs:
        # if new circle is larger than existing -> it's not inside
        if c[2] > dc[2] * LARGER_THRESH:
            continue
        # if new circle is smaller than existing one...
        if circle_overlap(dc, c) > thresh:
            # ...and there is a significant overlap -> it's an inner circle
            return True
    return False

# Load picture and detect edges
image = imread(INPUT_IMAGE, 1)
image = img_as_ubyte(image)
edges = canny(image, sigma=3, low_threshold=10, high_threshold=50)

# Detect circles of specific radii
hough_radii = np.arange(MIN_RADIUS, MAX_RADIUS, 2)
hough_res = hough_circle(edges, hough_radii)

# Select the most prominent circles (in order from best to worst)
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii)

# Determine BEST_COUNT circles to be drawn
drawn_circles = []
for crcl in zip(cy, cx, radii):
    # Do not draw circles if they are mostly inside better fitting ones
    if not inner_circle(drawn_circles, crcl, OVERLAP_THRESH):
        # A good circle found: exclude smaller circles it covers
        i = 0
        while i < len(drawn_circles):
            if circle_overlap(crcl, drawn_circles[i]) > OVERLAP_THRESH:
                t = drawn_circles.pop(i)
            else:
                i += 1
        # Remember the new circle
        drawn_circles.append(crcl)
    # Stop after having found more circles than needed
    if len(drawn_circles) > BEST_COUNT:
        break
drawn_circles = drawn_circles[:BEST_COUNT]

# Actually draw circles
colors = [(250, 0, 0), (0, 250, 0), (0, 0, 250)]
colors += [(200, 200, 0), (0, 200, 200), (200, 0, 200)]
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
image = color.gray2rgb(image)
for center_y, center_x, radius in drawn_circles:
    circy, circx = circle(center_y, center_x, radius, image.shape)
    color = colors.pop(0)
    image[circy, circx] = color
    colors.append(color)
ax.imshow(image, cmap=plt.cm.gray)
plt.show()
Result:
Do a morphological closing (explanation) to fill those tiny gaps and complete the circles. Then fill the resulting binary image.
Code:
from skimage import io
from skimage.morphology import binary_closing, disk
import scipy.ndimage as nd
import matplotlib.pyplot as plt
# Read image, binarize
I = io.imread("FillHoles.png")
bwI = I[:, :, 1] > 0
fig=plt.figure(figsize=(24, 8))
# Original image
fig.add_subplot(1,3,1)
plt.imshow(bwI, cmap='gray')
# Dilate -> Erode. You might not want to use a disk in this case,
# more asymmetric structuring elements might work better
strel = disk(4)
I_closed = binary_closing(bwI, strel)
# Closed image
fig.add_subplot(1,3,2)
plt.imshow(I_closed, cmap='gray')
I_closed_filled = nd.morphology.binary_fill_holes(I_closed)
# Filled image
fig.add_subplot(1,3,3)
plt.imshow(I_closed_filled, cmap='gray')
Result:
Note how the segmentation trash has melded into your object on the lower right, and the small cape on the lower part of the middle object has been closed. You might want to continue with a morphological erosion or opening after this.
EDIT: Long response to comments below
The disk(4) was just the example I used to produce the results seen in the image. You will need to find a suitable value yourself. Too big of a value will lead to small objects being melded into bigger objects near them, like on the right side cluster in the image. It will also close gaps between objects, whether you want it or not. Too small of a value will lead to the algorithm failing to complete the circles, so the filling operation will then fail.
Morphological erosion will erase a structuring element sized zone from the borders of the objects. Morphological opening is the inverse operation of closing, so instead of dilate->erode it will do erode->dilate. The net effect of opening is that all objects and capes smaller than the structuring element will vanish. If you do it after filling then the large objects will stay relatively the same. Ideally it should remove a lot of the segmentation artifacts caused by the morphological closing I used in the code example, which might or might not be pertinent to you based on your application.
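For instance, a minimal sketch of that clean-up step, continuing from `I_closed_filled` above (the disk radius of 2 is an arbitrary example value that needs tuning, just like the closing element):

from skimage.morphology import binary_opening, disk

# erode -> dilate: removes capes and objects smaller than the structuring
# element while leaving the larger filled disks essentially unchanged
I_cleaned = binary_opening(I_closed_filled, disk(2))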
I don't know skimage, but if you used OpenCV, I would do a Hough transform for circles and then just draw over them.
The Hough transform is robust: if there are some small holes in the circles, that is no problem.
Something like:
circles = cv2.HoughCircles(gray, cv2.cv.CV_HOUGH_GRADIENT, 1.2, 100)
# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    # you can check size etc. here
    for (x, y, r) in circles:
        # draw the circle in the output image
        # you can fill here
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
    # show the output image
    cv2.imshow("output", np.hstack([image, output]))
    cv2.waitKey(0)
See more info here: https://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/
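As a follow-up to the "you can fill here" comment: cv2.circle draws a solid disk when given a negative thickness, so filling the detected circles is a one-argument change:

cv2.circle(output, (x, y), r, (0, 255, 0), -1)  # thickness=-1 fills the circle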

Partial human detection in OpenCV 3

I am working on a human detection program using OpenCV in Python. I saw this very good example and ran it on the samples it came with. It can detect people regardless of which way they are facing, and it handles overlapping people and motion blur reasonably well.
However, when I was running it on some images I had (mostly knee up, waist up, and chest up photos of people), I found out that the software doesn't quite detect people.
You can get the photos from this link. This is the code I am using:
# import the necessary packages
from __future__ import print_function
from imutils.object_detection import non_max_suppression
from imutils import paths
import numpy as np
import argparse
import imutils
import cv2

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--images", required=True, help="path to images directory")
args = vars(ap.parse_args())

# initialize the HOG descriptor/person detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# loop over the image paths
imagePaths = list(paths.list_images(args["images"]))
for imagePath in imagePaths:
    # load the image and resize it to (1) reduce detection time
    # and (2) improve detection accuracy
    image = cv2.imread(imagePath)
    image = imutils.resize(image, width=min(400, image.shape[1]))
    orig = image.copy()
    # detect people in the image
    (rects, weights) = hog.detectMultiScale(image, winStride=(4, 4),
                                            padding=(8, 8), scale=1.05)
    # draw the original bounding boxes
    for (x, y, w, h) in rects:
        cv2.rectangle(orig, (x, y), (x + w, y + h), (0, 0, 255), 2)
    # apply non-maxima suppression to the bounding boxes using a
    # fairly large overlap threshold to try to maintain overlapping
    # boxes that are still people
    rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    pick = non_max_suppression(rects, probs=None, overlapThresh=0.65)
    # draw the final bounding boxes
    for (xA, yA, xB, yB) in pick:
        cv2.rectangle(image, (xA, yA), (xB, yB), (0, 255, 0), 2)
    # show some information on the number of bounding boxes
    filename = imagePath[imagePath.rfind("/") + 1:]
    print("[INFO] {}: {} original boxes, {} after suppression".format(
        filename, len(rects), len(pick)))
    # show the output images
    cv2.imshow("Before NMS", orig)
    cv2.imshow("After NMS", image)
    cv2.waitKey(0)
It is pretty straightforward. It goes through the images, finds the people in them, then draws bounding rectangles. If rectangles overlap, they are joined together to prevent false positives and to avoid detecting a single person as more than one.
However, as I mentioned above, the code fails to recognize people if parts of their feet aren't present.
Is there a way to make OpenCV recognize people who may have only part of their body (knee up, waist up, chest up) present in a video? In my use-case scenarios, I don't think it is critical to look for arms and legs; as long as the torso and head are present, I should be able to detect the person.
I found the Haar upper-body cascade. Though it may not always work (I'll post a new question regarding this), it's a good start.
Here's the code:
import numpy as np
import cv2

img = cv2.imread('path/to/img.jpg', 0)
upperBody_cascade = cv2.CascadeClassifier('../path/to/haarcascade_upperbody.xml')

arrUpperBody = upperBody_cascade.detectMultiScale(img)
if len(arrUpperBody) > 0:
    for (x, y, w, h) in arrUpperBody:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    print 'body found'

cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
But it's not as refined as the solution I lifted off of pyimagesearch.
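If the cascade misses people or fires on background, the usual knobs to try are the standard detectMultiScale parameters; the values below are illustrative starting points, not tested ones:

# scaleFactor: image pyramid step; minNeighbors: overlapping detections
# required to keep a box; minSize: discard implausibly small detections
arrUpperBody = upperBody_cascade.detectMultiScale(
    img, scaleFactor=1.1, minNeighbors=5, minSize=(60, 60))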
