Detect Fluid Pathlines in Images - python

I have several thousand images of fluid pathlines -- below is a simple example --
and I would like to detect them automatically: length and position.
For the position a defined point would be sufficient (e.g. left end).
I don't need the full shape information.
This is a pretty common task, but I have not found a reliable method.
How could I do this?
My choice would be Python, but it's not a necessity as long as I can export the results.

The question "Counting curves, angles and straights in a binary image in openCV and python" pretty much answers yours.
I tried it on your image and it works.
I used:
ret, thresh = cv2.threshold(gray, 90, 255, cv2.THRESH_BINARY_INV)
and commented out these two lines:
pts = remove_straight(pts) # remove almost straight angles
pts = max_corner(pts) # remove nearby points with greater angle
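For completeness, here is a minimal sketch of that threshold-plus-contours route, independent of the linked answer. The threshold value of 90 is taken from above; the filename and the OpenCV 4.x findContours signature are assumptions. For each detected pathline it reports an approximate length and the left end as the position:

import cv2

img = cv2.imread('pathlines.png')  # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 90, 255, cv2.THRESH_BINARY_INV)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
for cnt in contours:
    # a thin line traced as a closed contour runs out and back, so half the
    # perimeter is a rough estimate of the pathline length
    length = cv2.arcLength(cnt, True) / 2
    left_end = tuple(cnt[cnt[:, :, 0].argmin()][0])  # leftmost point, (x, y)
    print(length, left_end)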

OpenCV in Python complains when I try to use 64F in Sobel blur

I am trying to make an object detection tool (given a sample) using contours.
I have had some progress; however, when the object is in front of another object with a complicated structure (a hand or a face, for example), or the object and its background merge in color, it stops detecting the edges and thus doesn't give a good contour.
After reading through the algorithm's documentation, I discovered that it works on the basis that edges are detected by differences in color intensity; for example, if the object is black and the background is black, it will not detect it.
So now I am trying to apply some effects and blurs to try and make it work.
I am currently trying to get a combined Sobel blur (in both axes) in the hope that, given enough light, it will define the edges, since the product will be used on phones that have a flash.
So when I tried to do it:
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
frame = cv2.GaussianBlur(frame, (5, 5), 10)
frameX = cv2.Sobel(frame, cv2.CV_64F, 1, 0)
frameY = cv2.Sobel(frame, cv2.CV_64F, 0, 1)
frame = cv2.bitwise_or(frameX, frameY)
I get an error saying that cv2.findContours supports only CV_8UC1 images when the mode is not CV_RETR_FLOODFILL.
Here is the line that triggers the error:
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
I started messing around with this only a week ago, and I'm surprised how easy it is to get results, but some of the error messages are ridiculous.
Edit: I did try to swap the mode to CV_RETR_FLOODFILL, but that did not fix the problem; then it didn't work at all.
The reason is that the findContours function expects a binary image (an image consisting of 0s and 1s) whose type is 8-bit integer (uint8). The developers might have done this to reduce memory usage, since there is no point in storing binary values with 64 bits instead of 8. Convert frame to uint8 by just using:
frame = np.uint8(frame)
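For context, here is a sketch of the whole fix applied to the code from the question (the filename and the threshold value of 50 are placeholders). One caveat: a bare np.uint8 cast wraps the negative values that Sobel produces, so cv2.convertScaleAbs, which takes the absolute value and saturates to uint8, is the safer variant:

import cv2

frame = cv2.imread('input.png')  # hypothetical filename
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
frame = cv2.GaussianBlur(frame, (5, 5), 10)
frameX = cv2.convertScaleAbs(cv2.Sobel(frame, cv2.CV_64F, 1, 0))
frameY = cv2.convertScaleAbs(cv2.Sobel(frame, cv2.CV_64F, 0, 1))
frame = cv2.bitwise_or(frameX, frameY)  # now CV_8UC1, as findContours expects
thresh = cv2.threshold(frame, 50, 255, cv2.THRESH_BINARY)[1]
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE,
                                       cv2.CHAIN_APPROX_NONE)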

How can I remove lines (detected by HoughLines) from the image?

I followed (and modified) the method from the best-rated answer of this post.
My image is a little bit different. I used HoughLinesP and managed to detect the majority of red lines.
I was wondering: is there a way to remove the detected lines from the image without damaging the other, intersecting black lines? I am interested in the black lines only. Is there a smarter way to isolate the black lines without losing too many pixels and segments?
If you want to isolate just the black lines, a simple Otsu's threshold and a bitwise-and should do it:
import cv2

image = cv2.imread('3.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Otsu's method picks the dark/light split automatically (the value 0 is ignored)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
# keep only the pixels selected by the mask, then paint everything else white
result = cv2.bitwise_and(image, image, mask=thresh)
result[thresh == 0] = (255, 255, 255)
cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.waitKey()
This looks like a signal-separation problem.
I don't know whether this will work, but it's a hunch; give it a shot and see. Treat your image as a convolution of the measuring strip and the ECG trace. If you process it in the Fourier domain, you may be able to disentangle the two signals:
1. Take the Fourier transform (FFT) of the image (scipy has FFT functionality). Call the original image f and its FFT-image F.
2. Take an image of just the measuring strip (with no ECG pattern measured on it) and compute its FFT as well. Call this image g and its FFT-image G.
3. Compute the inverse FFT of F/G and see if that clears up the background effect.
If this does not work, please leave a note in the comment section.
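A minimal sketch of that idea, assuming the two scans are the same size ('ecg.png' and 'strip.png' are hypothetical filenames). Division in the Fourier domain is a naive deconvolution, so a small epsilon guards against near-zero frequencies in G:

import cv2
import numpy as np
from scipy.fft import fft2, ifft2

f = cv2.imread('ecg.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)
g = cv2.imread('strip.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)

F = fft2(f)
G = fft2(g)

eps = 1e-3  # regularisation: avoids blowing up frequencies where G is tiny
restored = np.real(ifft2(F / (G + eps)))

# rescale to 0-255 for display
restored = cv2.normalize(restored, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow('restored', restored)
cv2.waitKey()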

Python OpenCV - perspective transformation issues

I'm writing a script to process an image and extract a PDF417 2D barcode, which will then be decoded. I'm extracting the ROI without problems, but when I try to correct the perspective using cv2.warpPerspective, the result is not as expected.
The following is the extracted barcode; the red dots are the detected corners:
This is the resulting image:
This is the code I'm using for the transformation (the values are found by the script, but for the previous images are as follow):
box_or = np.float32([[23, 30],[395, 23],[26, 2141],[389, 2142]])
box_fix = np.float32([[0,0],[415,0],[0,2159],[415,2159]])
M = cv2.getPerspectiveTransform(box_or,box_fix)
warped = cv2.warpPerspective(img,M,(cols,rows))
I've checked and I can't find anything wrong with the code, yet the transformation is definitely wrong. The amount of perspective distortion in the extracted ROI is minimal, but it may affect the decoding process.
So, is there a way to get rid of the perspective distortion? Am I doing something wrong? Is this a known bug or something? Any help is very much welcome.
BTW, I'm using OpenCV 3.3.0
It looks like you're giving the image coordinates as (y, x); I know the interpretation of coordinates varies within OpenCV.
In the homography example code, the coordinates are provided as (x, y), at least judging by the use of h and w in this snippet:
h, w = img1.shape
pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
dst = cv2.perspectiveTransform(pts, M)
So try providing the coordinates as (x,y) to both getPerspectiveTransform and warpPerspective.
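To illustrate that convention, here is a small self-contained sketch (all coordinates are made up): it warps a synthetic quadrilateral to an axis-aligned rectangle, passing every point as (x, y) and the warpPerspective output size as (width, height):

import cv2
import numpy as np

img = np.zeros((500, 400), np.uint8)  # 500 rows (height), 400 columns (width)
# corners as (x, y): top-left, top-right, bottom-left, bottom-right
src = np.float32([[50, 40], [350, 60], [40, 460], [360, 440]])
cv2.fillConvexPoly(img, np.int32([src[0], src[1], src[3], src[2]]), 255)

w, h = 300, 420
dst = np.float32([[0, 0], [w, 0], [0, h], [w, h]])
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (w, h))  # dsize is (width, height)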

How to keep track of different contours in opencv python

I am trying to track objects with OpenCV in Python from recorded video. I want to give a unique object number to each visible object when it appears. For example, one object enters the screen and gets number 1; then a second joins the first and gets number 2; then the first object leaves the screen, but the second is still visible and still keeps number 2, not 1 (despite now being the only object). I can't find any information on how to do this online. Any help (including code) is appreciated.
The code I have written so far for getting the contours and drawing object numbers:
import cv2

cap = cv2.VideoCapture("video.mov")
while True:
    flag, frame = cap.read()
    if not flag:
        break
    # (contour extraction elided here; contours would come from e.g. cv2.findContours)
    cv2.drawContours(frame, contours, -1, (255, 0, 0), 1)
    for i in range(len(contours)):
        cnt = contours[i]
        cnt_nr = i + 1
        x, y, w, h = cv2.boundingRect(cnt)
        # putText needs integer coordinates; label near the center of the bounding box
        cv2.putText(frame, str(cnt_nr), (x + w // 2, y + h // 2),
                    cv2.FONT_HERSHEY_PLAIN, 1.8, (0, 0, 0))
    cv2.imshow("Tracked frame", frame)
    k = cv2.waitKey(0)
    if k == 27:
        cv2.destroyAllWindows()
        break
What kind of objects are you trying to track? If they are easy to distinguish, you can try to collect some features of each object and check whether an object with similar features appeared earlier. It's hard to tell which features will be best in your situation, but you may try the following (a rough sketch appears at the end of this answer):
- contour size, area and length (or a ratio such as area/length)
- the convex hull of the object and its length (as above, you may try a ratio as well)
- object colour (average colour); if the lighting can change, consider using only the H channel from the HSV colour space
- something more complicated: the "sum" of edges inside the object (run an edge detector on the object and just sum the result image)
Another solution is to use a more powerful tool designed for such a task: an object tracker. In one of my projects I'm using the TLD tracker and it works fine; another option is the CMT tracker, which might suit you better because it's written in Python. Note that for tracking multiple objects you will need multiple tracker instances (or a tracker that is capable of tracking multiple different objects).
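As a minimal sketch of the feature-matching idea from the list above (the feature choice and the distance threshold are illustrative): each new contour gets a small feature vector and inherits the number of the closest previously seen object, or a fresh number if nothing is close enough:

import cv2
import numpy as np

known = {}    # object number -> last seen feature vector
next_nr = 1

def features(cnt, frame):
    area = cv2.contourArea(cnt)
    length = cv2.arcLength(cnt, True)
    x, y, w, h = cv2.boundingRect(cnt)
    mean_col = cv2.mean(frame[y:y + h, x:x + w])[:3]  # average colour of the patch
    return np.array([area, length, *mean_col])

def assign_number(cnt, frame, max_dist=50.0):
    global next_nr
    f = features(cnt, frame)
    if known:
        nr, ref = min(known.items(), key=lambda kv: np.linalg.norm(kv[1] - f))
        if np.linalg.norm(ref - f) < max_dist:
            known[nr] = f    # refresh the stored features
            return nr
    known[next_nr] = f       # nothing similar seen before: hand out a new number
    next_nr += 1
    return next_nr - 1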

Categorize different images

I have a number of images from Chinese genealogies, and I would like to be able to categorize them programmatically. Generally speaking, one type of image contains primarily line-by-line text, while the other type may be in a grid or chart format.
Example photos
'Desired' type: http://www.flickr.com/photos/63588871#N05/8138563082/
'Other' type: http://www.flickr.com/photos/63588871#N05/8138561342/in/photostream/
Question: Is there a (relatively) simple way to do this? I have experience with Python but little knowledge of image processing. Pointers to other resources are appreciated as well.
Thanks!
Assuming that at least some of the grid lines are exactly or almost exactly vertical, a fairly simple approach might work.
I used PIL to find all the columns in the image where more than half of the pixels were darker than some threshold value.
Code
from PIL import Image, ImageDraw  # Pillow modules

withlines = Image.open('withgrid.jpg')
nolines = Image.open('nogrid.jpg')

def findlines(image):
    w, h = image.size
    s = w * h
    im = image.point(lambda i: 255 * (i < 60))  # threshold
    d = im.getdata()  # faster than per-pixel operations
    linecolumns = []
    for col in range(w):
        black = sum(d[x] for x in range(col, s, w)) // 255
        if black > 450:
            linecolumns += [col]
    # return an image showing the detected lines
    im2 = image.convert('RGB')
    draw = ImageDraw.Draw(im2)
    for col in linecolumns:
        draw.line((col, 0, col, h - 1), fill='#f00', width=1)
    return im2

findlines(withlines).show()
findlines(nolines).show()
Results
(image omitted: the detected vertical lines drawn in red for illustration)
As you can see, four of the grid lines are detected, and with some processing to ignore the left and right sides and the center of the book, there should be no false positives on the desired type.
This means you could use the above code to detect black columns and discard those that are near the edges or the center; if any black columns remain, classify the image as the "other", undesired type of picture.
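For instance, assuming findlines is modified to return linecolumns instead of the overlay image, the rule could look like this sketch (the 40-pixel margin is illustrative):

def is_grid_page(linecolumns, width, margin=40):
    center = width // 2
    # keep only columns away from the page edges and the center of the book
    inner = [c for c in linecolumns
             if margin < c < width - margin and abs(c - center) > margin]
    return len(inner) > 0  # True -> classify as the 'other' (grid) type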
AFAIK, there is no easy way to solve this. You will need a decent amount of image processing and some basic machine learning to classify these kinds of images (and even then it probably won't be 100% successful).
Another note:
While this could be solved using machine-learning techniques alone, I would advise you to start with some image-processing techniques first and try to convert your images into a form that shows a decent difference between the two types. A good starting point is the FFT. After that, have a look at some digital image-processing techniques; when you feel comfortable that you have a decent understanding of those, read up on pattern recognition.
This is only one suggested approach, though; there are more ways to achieve this.
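As one hedged illustration of producing such a "decent difference" with the FFT (building on the grid-line observation from the answer above): vertical lines concentrate spectral energy along the horizontal-frequency axis, so a crude score could compare the two axes. The cutoff would need tuning on real scans:

import numpy as np
from PIL import Image

def grid_score(path):
    im = np.asarray(Image.open(path).convert('L'), dtype=np.float64)
    spec = np.abs(np.fft.fftshift(np.fft.fft2(im)))
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    horiz = spec[cy, :].sum()  # energy of horizontal frequencies (vertical structure)
    vert = spec[:, cx].sum()   # energy of vertical frequencies (horizontal structure)
    return horiz / vert

# e.g. classify an image as the grid/chart type when grid_score(path) > 1.5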
