Detecting start and end point of line in image (numpy array) - python

I have an image like the following:
What I would like is to get the coordinates of the start and end point of each segment. My idea was to exploit the fact that each extreme point should have just one neighbouring point belonging to the segment, while every other point should have at least two. Unfortunately the line is more than one pixel thick, so this reasoning does not hold.

Here's a fairly simple way to do it:
load the image and discard the superfluous alpha channel
skeletonise
filter looking for 3x3 neighbourhoods that have the central pixel set and just one other
#!/usr/bin/env python3
import numpy as np
from PIL import Image
from scipy.ndimage import generic_filter
from skimage.morphology import medial_axis

# Line ends filter
def lineEnds(P):
    """Central pixel and just one other must be set to be a line end"""
    return 255 * ((P[4]==255) and np.sum(P)==510)

# Open image and make into Numpy array
im = Image.open('lines.png').convert('L')
im = np.array(im)

# Skeletonize
skel = (medial_axis(im)*255).astype(np.uint8)

# Find line ends
result = generic_filter(skel, lineEnds, (3, 3))

# Save result
Image.fromarray(result).save('result.png')
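If you want the endpoint coordinates directly from the NumPy result rather than from the saved image, a small addition of mine (note that np.argwhere returns (row, column), i.e. (y, x) pairs):

# Coordinates of the detected line ends, as (row, column) pairs
coords = np.argwhere(result == 255)
print(coords)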
Note that you can obtain exactly the same result, for far less effort, with ImageMagick from the command-line like this:
convert lines.png -alpha off -morphology HMT LineEnds result.png
Or, if you want them as numbers rather than an image:
convert result.png txt: | grep "gray(255)"
Sample Output
134,78: (65535) #FFFFFF gray(255) <--- line end at coordinates 134,78
106,106: (65535) #FFFFFF gray(255) <--- line end at coordinates 106,106
116,139: (65535) #FFFFFF gray(255) <--- line end at coordinates 116,139
196,140: (65535) #FFFFFF gray(255) <--- line end at coordinates 196,140
Another way of doing it is to use scipy.ndimage.morphology.binary_hit_or_miss and set up your "Hits" as the white pixels in the below image and your "Misses" as the black pixels:
The diagram is from Anthony Thyssen's excellent material here.
In a similar vein to the above, you could equally use the "Hits" and "Misses" kernels above with OpenCV as described here:
morphologyEx(input_image, output_image, MORPH_HITMISS, kernel);
I suspect this would be the fastest method.
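For completeness, here is a rough Python sketch of that OpenCV route (my own, not taken from the answer above): the kernel uses 1 for "hit", -1 for "miss" and 0 for "don't care", and you would need its rotations, plus the diagonal variants, to catch ends in every orientation.

import cv2
import numpy as np

skel = cv2.imread('skeleton.png', cv2.IMREAD_GRAYSCALE)   # assumed to be an already skeletonised 0/255 image

# One "line end" kernel: centre pixel set, a single neighbour set, everything else background
kernel = np.array([[-1, -1, -1],
                   [ 1,  1, -1],
                   [-1, -1, -1]], dtype="int")

ends = np.zeros_like(skel)
for _ in range(4):
    ends = cv2.bitwise_or(ends, cv2.morphologyEx(skel, cv2.MORPH_HITMISS, kernel))
    kernel = np.rot90(kernel).copy()   # next orientation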
Keywords: Python, image, image processing, line ends, line-ends, morphology, Hit or Miss, HMT, ImageMagick, filter.

The method you mentioned should work well, you just need to apply a morphological operation first to reduce the width of the lines to one pixel. You can use scikit-image for that:
from skimage.morphology import medial_axis
import cv2
# read the lines image
img = cv2.imread('/tmp/tPVCc.png', 0)
# get the skeleton
skel = medial_axis(img)
# skel is a boolean matrix, multiply by 255 to get a black and white image
cv2.imwrite('/tmp/res.png', skel*255)
See this page on the skeletonization methods in skimage.
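For what it's worth, skimage.morphology.skeletonize can be swapped in the same way if you prefer it over medial_axis (a small sketch of mine, with an output path chosen just for illustration):

from skimage.morphology import skeletonize
skel = skeletonize(img > 0)                     # binarise first, then thin to one pixel
cv2.imwrite('/tmp/res_skeletonize.png', skel.astype('uint8') * 255)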

I would tackle this with a watershed-style algorithm. The method described below handles only a single (multi-segment) line, so you would need to split your image into images of separate lines first.
Toy example:
0000000
0111110
0111110
0110000
0000000
Where 0 denotes black and 1 denotes white.
My implementation of the solution:
import numpy as np

img = np.array([[0,0,0,0,0,0,0],
                [0,255,255,255,255,255,0],
                [0,255,255,255,255,255,0],
                [0,255,255,0,0,0,0],
                [0,0,0,0,0,0,0]],dtype='uint8')

def flood(arr,value):
    flooded = arr.copy()
    for y in range(1,arr.shape[0]-1):
        for x in range(1,arr.shape[1]-1):
            if arr[y][x]==255:
                if arr[y-1][x]==value:
                    flooded[y][x] = value
                elif arr[y+1][x]==value:
                    flooded[y][x] = value
                elif arr[y][x-1]==value:
                    flooded[y][x] = value
                elif arr[y][x+1]==value:
                    flooded[y][x] = value
    return flooded

ends = np.zeros(img.shape,dtype='uint64')
for y in range(1,img.shape[0]-1):
    for x in range(1,img.shape[1]-1):
        if img[y][x]==255:
            temp = img.copy()
            temp[y][x] = 127
            count = 0
            while 255 in temp:
                temp = flood(temp,127)
                count += 1
            ends[y][x] = count
print(ends)
Output:
[[0 0 0 0 0 0 0]
 [0 5 4 4 5 6 0]
 [0 5 4 3 4 5 0]
 [0 6 5 0 0 0 0]
 [0 0 0 0 0 0 0]]
Now ends are denoted by positions of maximal values in above array (6 in this case).
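To pull those coordinates out programmatically, something like this should work (my addition):

ys, xs = np.where(ends == ends.max())
print(list(zip(xs, ys)))   # the two line ends of the toy example, as (x, y) pairs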
Explanation: I am examining all white pixels as possible ends. For each such pixel I "flood" the image - I place a special value (127, different from both 0 and 255) and then propagate it - in every step, every 255 pixel that is a neighbor (in the von Neumann sense) of a special-value pixel becomes a special value itself. I count the steps required to remove all 255s. If you start flooding (at constant velocity) from an end, it takes longer than if the source is at any other location, so the maximal flooding times mark the ends of your line.
I must admit that I have not tested this deeply, so I can't guarantee it works correctly in special cases, for example a self-intersecting line. I am also aware of the roughness of my solution, especially in how neighbors are detected and the special value is propagated, so feel free to improve it. I assumed that all border pixels are black (no line touches the "frame" of your image).

Related

Recognizing black and white images with OpenCV

I have this set of images :
The leftmost one is the reference image.
I want a value telling me how close any of the other images is to the leftmost one.
I experimented with matchShapes(), calling it for each contour and averaging the values, but I didn't get a useful result (the rightmost one scored too high, for example).
I would also want the matching to work only in the correct orientation.
If they're purely black and white images it would probably be easier to just AND the two pictures together and sum up the total pixels left in the result.
Something like this:
import cv2
import numpy as np

x = np.zeros((100,100))
y = np.zeros((100,100))

for i in range(25,75):
    x[i][i] = 255
    y[i][100-i] = 255

cv2.imshow('x', x)
cv2.imshow('y', y)

z = cv2.bitwise_and(x,y)

sum = 0
for i in range(0,z.shape[0]):
    for j in range(0,z.shape[1]):
        if z[i][j] == 255:
            sum += 1

print(f"Similarity Score: {sum}")
cv2.imshow('z',z)
cv2.waitKey(0)
There probably exists some better library to perform this all in one line but if performance isn't much of a concern perhaps this could work.
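For what it's worth, the counting loop above can be collapsed into a single call (a hedged one-liner, using the same x and y as above):

score = int(np.count_nonzero(cv2.bitwise_and(x, y)))
print(f"Similarity Score: {score}")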
The hard part was rejecting images that were too different: with the methods proposed here, I kept getting very similar scores for images that, to my eye, were too different to correspond.
In the end, I did a multistep process:
First I got the contours of the test image like so:
testContours, _ = cv.findContours(testImage, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
Then, if the contour counts of the test image and the original image are not the same, I abort.
If they have the same contour count, I then calculate the average between all shape distances of the contours :
distances = []
sd = cv2.createShapeContextDistanceExtractor()
for i in range(len(testContours)):
    d2 = sd.computeDistance(testContours[i], originalContours[i])
    distances.append(d2)

value = sum(distances) / len(distances)
Then, I count the number of white pixels after AND-ing the two images, divided by the number of white pixels in the source image (in case the contours match but are not placed correctly):
exactly_placed_ratio = cv.countNonZero(cv.bitwise_and(testImage, originalImage)) / cv.countNonZero(originalImage)
In the end I have two values, I can use the first one to check if the shapes are close enough, and the second one to check if they are in the right position relative to the whole image.
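Putting those three steps together, roughly (a sketch of mine: the helper name is made up, and it assumes the contours of the two images come back in corresponding order, as the snippet above also does):

import cv2 as cv

def compare_images(testImage, originalImage):
    testContours, _ = cv.findContours(testImage, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
    originalContours, _ = cv.findContours(originalImage, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
    # different contour count: abort
    if len(testContours) != len(originalContours):
        return None
    # average shape-context distance over corresponding contours
    sd = cv.createShapeContextDistanceExtractor()
    distances = [sd.computeDistance(t, o) for t, o in zip(testContours, originalContours)]
    value = sum(distances) / len(distances)
    # fraction of the original's white pixels that line up exactly after AND-ing
    exactly_placed_ratio = cv.countNonZero(cv.bitwise_and(testImage, originalImage)) / cv.countNonZero(originalImage)
    return value, exactly_placed_ratio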

How to randomly generate scratch like lines with opencv automatically

I am trying to generate synthetic images for my deep learning model. I need to draw scratches on a black surface. I already have a little script that can generate random white scratch-like lines, but only horizontally. I need the scratches to also be vertical and curved. On top of that, it would also be very helpful if the thickness of the scratches were random too, so I get both thick and thin scratches.
This is my code so far:
import cv2
import numpy as np
import random

height = 384
width = 384
blank_image = np.zeros((height, width, 3), np.uint8)

num_scratches = random.randint(0,5)
for _ in range(num_scratches):
    row_random = random.randint(20,370)
    blank_image[row_random:(row_random+1), row_random:(row_random+random.randint(25,75))] = (255,255,255)

cv2.imshow("synthetic", blank_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
This is one example result outcome:
How do I have to edit my script so I can get more diverse looking scratches?
The scratches should somehow look like this for example (Done with paint):
need the scratches to also be vertical
Your method might be adapted as follows:
import numpy as np  # cv2 reads images into np.array

img = np.zeros((5,5),dtype='uint8')  # same as loading a 5 x 5 px black rectangle
img[1:4,2:3] = 255
print(img)
Output:
[[  0   0   0   0   0]
 [  0   0 255   0   0]
 [  0   0 255   0   0]
 [  0   0 255   0   0]
 [  0   0   0   0   0]]
Explanation: I set to 255 all elements (pixels) whose y-coordinate is between 1 (inclusive) and 4 (exclusive) and whose x-coordinate is between 2 (inclusive) and 3 (exclusive).
Nonetheless, cv2 provides a function for drawing lines, namely cv2.line, which is handier to use: it accepts the img to draw on, a start point, an end point, a color and a thickness. The docs give the following example:
# Draw a diagonal blue line with thickness of 5 px
img = cv2.line(img,(0,0),(511,511),(255,0,0),5)
If you are working in grayscale, use a single value rather than a 3-tuple as the color.
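Putting the two ideas together, here is a rough sketch of how the original script could be extended: straight scratches in random directions via cv2.line, crooked ones via cv2.polylines over a short random walk, and a random thickness for both. The ranges and probabilities are guesses to tune, not anything definitive.

import cv2
import numpy as np
import random

height, width = 384, 384
blank_image = np.zeros((height, width, 3), np.uint8)

num_scratches = random.randint(0, 5)
for _ in range(num_scratches):
    thickness = random.randint(1, 3)                      # random thickness
    if random.random() < 0.5:
        # straight scratch in a random direction
        p1 = (random.randint(20, 364), random.randint(20, 364))
        p2 = (random.randint(20, 364), random.randint(20, 364))
        cv2.line(blank_image, p1, p2, (255, 255, 255), thickness)
    else:
        # curved scratch: a short random walk rendered as an open polyline
        x, y = random.randint(20, 364), random.randint(20, 364)
        pts = [(x, y)]
        for _ in range(random.randint(3, 8)):
            x = int(np.clip(x + random.randint(-30, 30), 0, width - 1))
            y = int(np.clip(y + random.randint(-30, 30), 0, height - 1))
            pts.append((x, y))
        cv2.polylines(blank_image, [np.array(pts, np.int32)], False, (255, 255, 255), thickness)

cv2.imshow("synthetic", blank_image)
cv2.waitKey(0)
cv2.destroyAllWindows()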

How can I segment these Lemons separately? [duplicate]

I'm writing for Android with OpenCV. I'm segmenting an image similar to below using marker-controlled watershed, without the user manually marking the image. I'm planning to use the regional maxima as markers.
minMaxLoc() would give me the value, but how can I restrict it to the blobs which is what I'm interested in? Can I utilize the results from findContours() or cvBlob blobs to restrict the ROI and apply maxima to each blob?
First of all: the function minMaxLoc finds only the global minimum and global maximum for a given input, so it is mostly useless for determining regional minima and/or regional maxima. But your idea is right: extracting markers based on regional minima/maxima in order to perform a marker-based Watershed Transform is totally fine. Let me try to clarify what the Watershed Transform is and how you should correctly use the implementation present in OpenCV.
A decent number of papers that deal with watershed describe it similarly to what follows (I might miss some detail; if you are unsure, ask). Consider the surface of some region you know; it contains valleys and peaks (among other details that are irrelevant for us here). Suppose below this surface all you have is water, colored water. Now, make holes in each valley of your surface and then the water starts to fill the whole area. At some point, differently colored waters will meet, and when this happens, you construct a dam so that they don't touch each other. In the end you have a collection of dams, which is the watershed separating all the different colored waters.
Now, if you make too many holes in that surface, you end up with too many regions: over-segmentation. If you make too few you get an under-segmentation. So, virtually any paper that suggests using watershed actually presents techniques to avoid these problems for the application the paper is dealing with.
I wrote all this (which is possibly too naïve for anyone that knows what the Watershed Transform is) because it reflects directly on how you should use watershed implementations (which the current accepted answer is doing in a completely wrong manner). Let us start on the OpenCV example now, using the Python bindings.
The image presented in the question is composed of many objects that are mostly too close and in some instances overlapping. The usefulness of watershed here is to separate correctly these objects, not to group them into a single component. So you need at least one marker for each object and good markers for the background. As an example, first binarize the input image by Otsu and perform a morphological opening for removing small objects. The result of this step is shown below in the left image. Now with the binary image consider applying the distance transform to it, result at right.
With the distance transform result, we can consider some threshold such that we consider only the regions most distant to the background (left image below). Doing this, we can obtain a marker for each object by labeling the different regions after the earlier threshold. Now, we can also consider the border of a dilated version of the left image above to compose our marker. The complete marker is shown below at right (some markers are too dark to be seen, but each white region in the left image is represented at the right image).
This marker we have here makes a lot of sense. Each colored water == one marker will start to fill the region, and the watershed transformation will construct dams to impede that the different "colors" merge. If we do the transform, we get the image at left. Considering only the dams by composing them with the original image, we get the result at right.
import sys
import cv2
import numpy
from scipy.ndimage import label

def segment_on_dt(a, img):
    border = cv2.dilate(img, None, iterations=5)
    border = border - cv2.erode(border, None)

    dt = cv2.distanceTransform(img, 2, 3)
    dt = ((dt - dt.min()) / (dt.max() - dt.min()) * 255).astype(numpy.uint8)
    _, dt = cv2.threshold(dt, 180, 255, cv2.THRESH_BINARY)
    lbl, ncc = label(dt)

    lbl = lbl * (255 / (ncc + 1))
    # Completing the markers now.
    lbl[border == 255] = 255

    lbl = lbl.astype(numpy.int32)
    cv2.watershed(a, lbl)

    lbl[lbl == -1] = 0
    lbl = lbl.astype(numpy.uint8)
    return 255 - lbl

img = cv2.imread(sys.argv[1])

# Pre-processing.
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, img_bin = cv2.threshold(img_gray, 0, 255,
        cv2.THRESH_OTSU)
img_bin = cv2.morphologyEx(img_bin, cv2.MORPH_OPEN,
        numpy.ones((3, 3), dtype=int))

result = segment_on_dt(img, img_bin)
cv2.imwrite(sys.argv[2], result)

result[result != 255] = 0
result = cv2.dilate(result, None)
img[result == 255] = (0, 0, 255)
cv2.imwrite(sys.argv[3], img)
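For reference, the script takes its paths from the command line: sys.argv[1] is the input image, sys.argv[2] is where the watershed result is written, and sys.argv[3] is where the overlay with the dams drawn in red is written, so a call would look something like python watershed.py lemons.jpg result.png overlay.png (the script name is just an example).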
I would like to explain a simple piece of code on how to use watershed here. I am using OpenCV-Python, but I hope you won't have any difficulty understanding it.
In this code, I will be using watershed as a tool for foreground-background extraction. (This example is the python counterpart of the C++ code in OpenCV cookbook). This is a simple case to understand watershed. Apart from that, you can use watershed to count the number of objects in this image. That will be a slightly advanced version of this code.
1 - First we load our image, convert it to grayscale, and threshold it with a suitable value. I took Otsu's binarization, so it would find the best threshold value.
import cv2
import numpy as np
img = cv2.imread('sofwatershed.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
Below is the result I got:
(even this result is good, because there is great contrast between the foreground and background)
2 - Now we have to create the marker. The marker is an image of the same size as the original image, of type 32SC1 (32-bit signed, single channel).
There will be some regions in the original image where you are simply sure that part belongs to the foreground; mark such regions with 255 in the marker image. Regions where you are sure to be background are marked with 128. Regions you are not sure about are marked with 0. That is what we are going to do next.
A - Foreground region: we already have a thresholded image where the pills are white. We erode it a little, so that we are sure the remaining region belongs to the foreground.
fg = cv2.erode(thresh,None,iterations = 2)
fg :
B - Background region: here we dilate the thresholded image so that the background region is reduced. But we are sure the remaining black region is 100% background. We set it to 128.
bgt = cv2.dilate(thresh,None,iterations = 3)
ret,bg = cv2.threshold(bgt,1,128,1)
Now we get bg as follows :
C - Now we add both fg and bg :
marker = cv2.add(fg,bg)
Below is what we get :
Now we can clearly understand from the above image that the white region is 100% foreground, the gray region is 100% background, and the black region is what we are not sure about.
Then we convert it into 32SC1 :
marker32 = np.int32(marker)
3 - Finally we apply watershed and convert result back into uint8 image:
cv2.watershed(img,marker32)
m = cv2.convertScaleAbs(marker32)
m :
4 - We threshold it properly to get the mask and perform bitwise_and with the input image:
ret,thresh = cv2.threshold(m,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
res = cv2.bitwise_and(img,img,mask = thresh)
res :
Hope it helps!!!
ARK
Foreword
I'm chiming in mostly because I found both the watershed tutorial in the OpenCV documentation (and C++ example) as well as mmgp's answer above to be quite confusing. I revisited a watershed approach multiple times to ultimately give up out of frustration. I finally realized I needed to at least give this approach a try and see it in action. This is what I've come up with after sorting out all of the tutorials I've come across.
Aside from being a computer vision novice, most of my trouble probably had to do with my requirement to use the OpenCVSharp library rather than Python. C# doesn't have baked-in high-power array operators like those found in NumPy (though I realize this has been ported via IronPython), so I struggled quite a bit in both understanding and implementing these operations in C#. Also, for the record, I really despise the nuances of, and inconsistencies in most of these function calls. OpenCVSharp is one of the most fragile libraries I've ever worked with. But hey, it's a port, so what was I expecting? Best of all, though -- it's free.
Without further ado, let's talk about my OpenCVSharp implementation of the watershed, and hopefully clarify some of the stickier points of watershed implementation in general.
Application
First of all, make sure watershed is what you want and understand its use. I am using stained cell plates, like this one:
It took me a good while to figure out I couldn't just make one watershed call to differentiate every cell in the field. On the contrary, I first had to isolate a portion of the field, then call watershed on that small portion. I isolated my region of interest (ROI) via a number of filters, which I will explain briefly here:
Start with source image (left, cropped for demonstration purposes)
Isolate the red channel (left middle)
Apply adaptive threshold (right middle)
Find contours then eliminate those with small areas (right)
Once we have cleaned the contours resulting from the above thresholding operations, it is time to find candidates for watershed. In my case, I simply iterated through all contours greater than a certain area.
Code
Say we've isolated this contour from the above field as our ROI:
Let's take a look at how we'll code up a watershed.
We'll start with a blank mat and draw only the contour defining our ROI:
var isolatedContour = new Mat(source.Size(), MatType.CV_8UC1, new Scalar(0, 0, 0));
Cv2.DrawContours(isolatedContour, new List<List<Point>> { contour }, -1, new Scalar(255, 255, 255), -1);
In order for the watershed call to work, it will need a couple of "hints" about the ROI. If you're a complete beginner like me, I recommend checking out the CMM watershed page for a quick primer. Suffice to say we're going to create hints about the ROI on the left by creating the shape on the right:
To create the white part (or "background") of this "hint" shape, we'll just Dilate the isolated shape like so:
var kernel = Cv2.GetStructuringElement(MorphShapes.Ellipse, new Size(2, 2));
var background = new Mat();
Cv2.Dilate(isolatedContour, background, kernel, iterations: 8);
To create the black part in the middle (or "foreground"), we'll use a distance transform followed by threshold, which takes us from the shape on the left to the shape on the right:
This takes a few steps, and you may need to play around with the lower bound of your threshold to get results that work for you:
var foreground = new Mat(source.Size(), MatType.CV_8UC1);
Cv2.DistanceTransform(isolatedContour, foreground, DistanceTypes.L2, DistanceMaskSize.Mask5);
Cv2.Normalize(foreground, foreground, 0, 1, NormTypes.MinMax); //Remember to normalize!
foreground.ConvertTo(foreground, MatType.CV_8UC1, 255, 0);
Cv2.Threshold(foreground, foreground, 150, 255, ThresholdTypes.Binary);
Then we'll subtract these two mats to get the final result of our "hint" shape:
var unknown = new Mat(); //this variable is also named "border" in some examples
Cv2.Subtract(background, foreground, unknown);
Again, if we Cv2.ImShow unknown, it would look like this:
Nice! This was easy for me to wrap my head around. The next part, however, got me quite puzzled. Let's look at turning our "hint" into something the Watershed function can use. For this we need to use ConnectedComponents, which is basically a big matrix of pixels grouped by the virtue of their index. For example, if we had a mat with the letters "HI", ConnectedComponents might return this matrix:
0 0 0 0 0 0 0 0 0
0 1 0 1 0 2 2 2 0
0 1 0 1 0 0 2 0 0
0 1 1 1 0 0 2 0 0
0 1 0 1 0 0 2 0 0
0 1 0 1 0 2 2 2 0
0 0 0 0 0 0 0 0 0
So, 0 is the background, 1 is the letter "H", and 2 is the letter "I". (If you get to this point and want to visualize your matrix, I recommend checking out this instructive answer.) Now, here's how we'll utilize ConnectedComponents to create the markers (or labels) for watershed:
var labels = new Mat(); //also called "markers" in some examples
Cv2.ConnectedComponents(foreground, labels);
labels = labels + 1;

//this is a much more verbose port of numpy's: labels[unknown==255] = 0
for (int x = 0; x < labels.Width; x++)
{
    for (int y = 0; y < labels.Height; y++)
    {
        //You may be able to just send "int" in rather than "char" here:
        var labelPixel = (int)labels.At<char>(y, x);   //note: x and y are inexplicably
        var borderPixel = (int)unknown.At<char>(y, x); //and infuriatingly reversed
        if (borderPixel == 255)
            labels.Set(y, x, 0);
    }
}
Note that the Watershed function requires the border area to be marked by 0. So, we've set any border pixels to 0 in the label/marker array.
At this point, we should be all set to call Watershed. However, in my particular application, it is useful just to visualize a small portion of the entire source image during this call. This may be optional for you, but I first just mask off a small bit of the source by dilating it:
var mask = new Mat();
Cv2.Dilate(isolatedContour, mask, new Mat(), iterations: 20);
var sourceCrop = new Mat(source.Size(), source.Type(), new Scalar(0, 0, 0));
source.CopyTo(sourceCrop, mask);
And then make the magic call:
Cv2.Watershed(sourceCrop, labels);
Results
The above Watershed call will modify labels in place. Think back to the matrix resulting from ConnectedComponents: the difference here is that, if watershed found any dams between watersheds, they will be marked as -1 in that matrix. As with the ConnectedComponents result, the different watersheds are marked with incrementing numbers. For my purposes, I wanted to store these in separate contours, so I created this loop to split them up:
var watershedContours = new List<Tuple<int, List<Point>>>();
for (int x = 0; x < labels.Width; x++)
{
    for (int y = 0; y < labels.Height; y++)
    {
        var labelPixel = labels.At<Int32>(y, x); //note: x, y switched
        var connected = watershedContours.Where(t => t.Item1 == labelPixel).FirstOrDefault();
        if (connected == null)
        {
            connected = new Tuple<int, List<Point>>(labelPixel, new List<Point>());
            watershedContours.Add(connected);
        }
        connected.Item2.Add(new Point(x, y));
        if (labelPixel == -1)
            sourceCrop.Set(y, x, new Vec3b(0, 255, 255));
    }
}
Then, I wanted to print these contours with random colors, so I created the following mat:
var watershed = new Mat(source.Size(), MatType.CV_8UC3, new Scalar(0, 0, 0));
foreach (var component in watershedContours)
{
    if (component.Item2.Count < (labels.Width * labels.Height) / 4 && component.Item1 >= 0)
    {
        var color = GetRandomColor();
        foreach (var point in component.Item2)
            watershed.Set(point.Y, point.X, color);
    }
}
Which yields the following when shown:
If we draw on the source image the dams that were marked by a -1 earlier, we get this:
Edits:
I forgot to note: make sure you're cleaning up your mats after you're done with them. They WILL stay in memory and OpenCVSharp may present with some unintelligible error message. I should really be using using above, but mat.Release() is an option as well.
Also, mmgp's answer above includes this line: dt = ((dt - dt.min()) / (dt.max() - dt.min()) * 255).astype(numpy.uint8), which is a histogram stretching step applied to the results of the distance transform. I omitted this step for a number of reasons (mostly because I didn't think the histograms I saw were too narrow to begin with), but your mileage may vary.

How to remove a rough line artifact from image after binarization

I am stuck on a problem where I want to differentiate between an object and the background (a semi-transparent white sheet with a backlight), i.e. a fixed rough line introduced in the background that is merged with the object.
My algorithm right now is: I take the image from the camera, smooth it with a Gaussian blur, extract the Value component from HSV, and apply local binarization using the Wolf method to get the binarized image, after which I use OpenCV's connected component algorithm to remove some small artifacts that are not connected to the object, as seen here. Now only this line artifact remains, merged with the object, but I want only the object, as seen in this image.
Please note that there are 2 lines in the binary image, so using 8-connected logic to detect lines that do not form a loop is not possible; that is what I think and what I also tried. Here is the code for that:
size = np.size(thresh_img)
skel = np.zeros(thresh_img.shape,np.uint8)

element = cv2.getStructuringElement(cv2.MORPH_RECT,(3,3))
done = False

while( not done):
    eroded = cv2.erode(thresh_img,element)
    temp = cv2.dilate(eroded,element)
    temp = cv2.subtract(thresh_img,temp)
    skel = cv2.bitwise_or(skel,temp)
    thresh_img = eroded.copy()

    zeros = size - cv2.countNonZero(thresh_img)
    if zeros==size:
        done = True

# set max pixel value to 1
s = np.uint8(skel > 0)

count = 0
i = 0
while count != np.sum(s):
    # non-zero pixel count
    count = np.sum(s)
    # examine 3x3 neighborhood of each pixel
    filt = cv2.boxFilter(s, -1, (3, 3), normalize=False)
    # if the center pixel of 3x3 neighborhood is zero, we are not interested in it
    s = s*filt
    # now we have pixels where the center pixel of 3x3 neighborhood is non-zero
    # if a pixels' 8-connectivity is less than 2 we can remove it
    # threshold is 3 here because the boxfilter also counted the center pixel
    s[s < 1] = 0
    # set max pixel value to 1
    s[s > 0] = 1
    i = i + 1
Any help in the form of code would be highly appreciated thanks.
Since you are already using connectedComponents, the best approach is to exclude not only the components that are small, but also the ones that touch the borders of the image.
You can know which ones are to be discarded using connectedComponentsWithStats() that gives you also information about the bounding box of each component.
Alternatively, and very similarly, you can switch from connectedComponents() to findContours(), which gives you the components directly, so you can discard the external ones and the small ones to retrieve the part you are interested in.
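A rough sketch of the first suggestion (mine, not tested on your data): thresh_img stands for your binarised image, and the area threshold is a guess to tune.

import cv2
import numpy as np

num, labels, stats, _ = cv2.connectedComponentsWithStats(thresh_img, connectivity=8)
h, w = thresh_img.shape
cleaned = np.zeros_like(thresh_img)
for i in range(1, num):                                   # label 0 is the background
    x, y, bw, bh, area = stats[i]
    touches_border = x == 0 or y == 0 or x + bw >= w or y + bh >= h
    if area > 50 and not touches_border:                  # keep large, interior components only
        cleaned[labels == i] = 255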

How to detect colored text from 6 meters away?

I am using python, PIL, opencv and numpy to detect single-color texts (i.e. one is red, one is green). I want to detect these colored texts from up to 6 meters away during a live stream. I have used color detection methods but they did not work beyond 30-50 cm; the camera had to be close to the colors. As a second method to detect these texts, I used the ctpn method. Although it detects the texts, it does not provide their coordinates, and I need the coordinate points of the texts as well. I also tried the OCR method in Matlab to automatically detect text in a natural image, but it failed since it also picks up other small objects as text. I am stuck on what to do.
Let's say, for example, there are two different texts in an image captured 6 meters away. One text is green, the other one is red. The width of these texts is approximately 40-50 cm. In addition, they are only two different words, not long texts. How can I detect them and specify their locations as (x1,y1) and (x2,y2)? Is that possible? I would be grateful for any successful hint.
import numpy as np
from PIL import Image

# Open image and make RGB and HSV versions
RGBim = Image.open("AdjustedNewMaze3.jpg").convert('RGB')
HSVim = RGBim.convert('HSV')

# Make numpy versions
RGBna = np.array(RGBim)
HSVna = np.array(HSVim)

# Extract Hue
H = HSVna[:,:,0]

# Find all green pixels, i.e. where 100 < Hue < 140
lo,hi = 100,140
# Rescale to 0-255, rather than 0-360 because we are using uint8
lo = int((lo * 255) / 360)
hi = int((hi * 255) / 360)
green = np.where((H>lo) & (H<hi))

# Make all green pixels black in original image
RGBna[green] = [0,0,0]

def find_nearest(array, value):
    array = np.asarray(array)
    idx = (np.abs(array - value)).argmin()
    return array[idx]

value = 120 & 125
green = find_nearest(RGBna, value)
print(green)

count = green[0].size
print("Pixels matched: {}".format(count))
Image.fromarray(green).save('resultgreen.png')
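For the colour-plus-location part of the question, one possible direction (a hedged sketch of mine, not the OP's code): threshold each colour with cv2.inRange in HSV and take the bounding box of the largest blob as the text location. The HSV ranges below are rough guesses that will need tuning, and note that red in OpenCV also wraps around the top of the hue range (H of roughly 170-180).

import cv2
import numpy as np

img = cv2.imread("AdjustedNewMaze3.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Rough (lower, upper) HSV bounds per colour; OpenCV hue runs 0-179
ranges = {
    "green": (np.array([40, 80, 80]), np.array([90, 255, 255])),
    "red":   (np.array([0, 80, 80]),  np.array([10, 255, 255])),
}

for name, (lo, hi) in ranges.items():
    mask = cv2.inRange(hsv, lo, hi)
    # close small gaps so the letters merge into one blob
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(c)
        print(f"{name} text roughly from ({x},{y}) to ({x+w},{y+h})")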
