I followed (and modified) the method from the best-rated answer of this post.
My image is a little bit different. I used HoughLinesP and managed to detect the majority of the red lines.
I was wondering: is there a way to remove the detected lines from the image without damaging the black lines that intersect them? I am interested in the black lines only. Is there a smarter way to isolate the black lines without losing too many pixels and segments?
If you want to isolate just the black lines, a simple Otsu's threshold and a bitwise-and should do it:
import cv2

image = cv2.imread('3.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Otsu ignores the supplied threshold value (0) and picks one automatically
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Keep only the masked pixels and paint everything else white
result = cv2.bitwise_and(image, image, mask=thresh)
result[thresh == 0] = (255, 255, 255)

cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.waitKey()
This looks like a signal-separation/processing problem.
I don't know whether this will work; it is just a hunch, so give it a shot. Think of your image as the convolution of the measuring strip and the ECG trace. If you process it in the Fourier domain, perhaps you can disentangle the two signals.
1. Take the Fourier transform (FFT) of the image (scipy has FFT functionality). Call the original image f and its FFT image F.
2. Take an image of just the measuring strip, with no measured ECG pattern on it, and evaluate the FFT for this one as well. Call this image g and its FFT image G.
3. Calculate the inverse FFT of (F/G) and see if that clears up the background effect, as in the sketch below.
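A rough sketch of this frequency-domain division, assuming grayscale images; the filenames are placeholders, and the small epsilon guards against the numerically fragile division by near-zero frequencies:

import cv2
import numpy as np

f = cv2.imread('ecg.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float64)          # placeholder filename
g = cv2.imread('blank_strip.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float64)  # placeholder filename

F = np.fft.fft2(f)
G = np.fft.fft2(g, s=f.shape)  # pad/crop g to the size of f

eps = 1e-8  # regularization against division by near-zero frequencies
deconv = np.real(np.fft.ifft2(F / (G + eps)))

# Rescale to 0-255 for display
deconv = cv2.normalize(deconv, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow('deconvolved', deconv)
cv2.waitKey()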
In case this does not work, please leave a note in the comment section.
I have several thousand images of fluid pathlines -- below is a simple example --
and I would like to detect them automatically: length and position.
For the position, a defined point would be sufficient (e.g. the left end).
I don't need the full shape information.
This is a pretty common task but I did not find a reliable method.
How could I do this?
My choice would be Python, but it's not a requirement as long as I can export the results.
Counting curves, angles and straights in a binary image in openCV and python pretty much answers your question.
I tried it on your image and it works.
I used:
ret, thresh = cv2.threshold(gray, 90, 255, cv2.THRESH_BINARY_INV)
and commented out these two lines:
pts = remove_straight(pts) # remove almost straight angles
pts = max_corner(pts) # remove nearby points with greater angle
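A minimal sketch of measuring length and position from the thresholded image; the filename is a placeholder, and halving the contour perimeter to estimate an open curve's length is only an approximation (the contour of a thin line traces around it, covering it twice):

import cv2

img = cv2.imread('pathlines.png')  # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray, 90, 255, cv2.THRESH_BINARY_INV)

contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for c in contours:
    length = cv2.arcLength(c, True) / 2          # perimeter of a thin line is roughly twice its length
    left_end = tuple(c[c[:, :, 0].argmin()][0])  # point with the smallest x coordinate
    print(length, left_end)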
I am trying to make an object detection tool (given a sample) using Contours.
I have made some progress; however, when the object is in front of another object with a complicated structure (a hand or a face, for example), or when the object and its background are similar in color, it stops detecting the edges and thus doesn't give a good contour.
After reading through the algorithm's documentation, I discovered that it works on the basis that edges are detected by differences in color intensity; for example, if the object is black and the background is black, it will not be detected.
So now I am trying to apply some effects and blurs to make it work.
I am currently trying to get a combined Sobel blur (in both axes), hoping that given enough light it will define the edges, since the product will be used on phones that have a flash.
So when I tried to do it:
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
frame = cv2.GaussianBlur(frame, (5, 5), 10)
frameX = cv2.Sobel(frame, cv2.CV_64F, 1, 0)
frameY = cv2.Sobel(frame, cv2.CV_64F, 0, 1)
frame = cv2.bitwise_or(frameX, frameY)
I get an error saying the cv2.findContours supports only CV_8UC1 images when the mode is not CV_RETR_FLOODFILL.
Here is the line that triggers the error:
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
I started messing around with this only a week ago, and I'm surprised how easy it is to get results, but some of the error messages are ridiculous.
Edit: I did try swapping the mode to CV_RETR_FLOODFILL, but that did not fix the problem; then it didn't work at all.
The reason is that the findContours function expects a binary image (an image consisting of only two values, e.g. 0 and 255) whose type is 8-bit unsigned integer (uint8). The developers might have done this to reduce memory usage, since there is no point in storing binary values with 64 bits instead of 8. Convert frame to uint8 by just using
import numpy as np

frame = np.uint8(frame)
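A minimal sketch putting that fix in context; the filename and threshold value are placeholders, and the two Sobel responses are combined with cv2.magnitude instead of bitwise_or, since OR-ing float bit patterns is not meaningful:

import cv2
import numpy as np

frame = cv2.imread('object.jpg')  # placeholder filename
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
frame = cv2.GaussianBlur(frame, (5, 5), 10)

frameX = cv2.Sobel(frame, cv2.CV_64F, 1, 0)
frameY = cv2.Sobel(frame, cv2.CV_64F, 0, 1)
grad = cv2.magnitude(frameX, frameY)  # gradient strength from both axes

# findContours needs a single-channel uint8 image
grad = np.uint8(np.clip(grad, 0, 255))
ret, thresh = cv2.threshold(grad, 50, 255, cv2.THRESH_BINARY)  # placeholder threshold
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)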
I am trying to detect lines in a certain image. I run it through a skeletonization process before applying the cv2.HoughLinesP. I used the skeletonization code here.
No matter what I try, I keep getting results similar to what is described here, i.e. 'only fragments of a line...'
As suggested by Jiby, I used the named notation for the parameters and also a high rho and theta, but to no avail.
Here is my code:
lines = cv2.HoughLinesP(skel, rho=5, theta=np.deg2rad(10), threshold=0, minLineLength=0, maxLineGap=0)
Prior to this, I threshold an RGB image to extract most of my 'blue' hollow rectangle. Then I convert it to grayscale, which I feed to the skeletonizer.
Please advise.
I am testing the cv2.threshold() function with different values, but each time I get unexpected results. This simply means I do not understand the effect of the parameter:
maxval
Could someone clarify this for me?
For example, I want to draw the contours of this star by following the white color:
Here is what I got:
From this code:
import cv2

im = cv2.imread('image.jpg')  # read picture
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)  # BGR to grayscale
ret, thresh = cv2.threshold(imgray, 200, 255, cv2.THRESH_BINARY_INV)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(im, contours, -1, (0, 255, 0), 3)
cv2.imshow("Contour", im)
cv2.waitKey(0)
cv2.destroyAllWindows()
Each time I change the value of maxval I get a strange result that I cannot understand. How can I correctly draw the contour of this star using this function?
Thank you in advance.
maxValue is simply the value assigned to pixels that pass the threshold test; with THRESH_BINARY, the output is maxValue where src > thresh and 0 elsewhere. You may want to experiment with a very simple image that clearly lets you understand the various parameters. The interesting thing about the image attached below is that the grayscale value of each number shown in the image equals the number itself; e.g. 200 is written with grayscale value 200. Here is example Python code you can use.
import cv2

# Read image as grayscale
src = cv2.imread("threshold.png", cv2.IMREAD_GRAYSCALE)

# Set threshold and maxValue
thresh = 127
maxValue = 255

# Basic threshold example
th, dst = cv2.threshold(src, thresh, maxValue, cv2.THRESH_BINARY)

# Find contours
contours, hierarchy = cv2.findContours(dst, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# Draw contours in white
cv2.drawContours(dst, contours, -1, (255, 255, 255), 3)
cv2.imshow("Contour", dst)
cv2.waitKey(0)
I have copied the following image from an OpenCV threshold tutorial I wrote recently. It explains the various parameters with an example image and Python and C++ code. Hope this helps.
Input Image
Result Image
You can convert to HSV with COLOR_BGR2HSV and then select the color you want; making the contour will then be quite easy. Try it and let me know.
In the black-and-white conversion, yellow and white end up with nearly the same value, which is why your current approach is not working.
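A minimal sketch of that idea, assuming a white star; the filename and HSV range are placeholders to tune for your image:

import cv2
import numpy as np

img = cv2.imread('star.jpg')  # placeholder filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Placeholder range for white-ish pixels: any hue, low saturation, high value
lower = np.array([0, 0, 200])
upper = np.array([179, 40, 255])
mask = cv2.inRange(hsv, lower, upper)

contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, (0, 255, 0), 3)
cv2.imshow("Contour", img)
cv2.waitKey(0)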
For better accuracy when finding contours, apply a threshold to the image first, since binary images tend to give higher accuracy, and then use the contour method. Hope this helps!
It was my understanding that when converting an image from BGR to LAB, the L component was supposed to represent the grayscale component of the image. However, when I convert from BGR to grayscale, the expected values don't match. For example,
img1 = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
print(img1[0][0])
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print(img2[0][0])
The first pixel in my image produces [168 133 162] in LAB, while the grayscale conversion produces 159. I was under the impression that they should be equivalent somehow (which is reinforced by the fact that there is no COLOR_LAB2GRAY constant).
Can someone clarify and explain why this is the case? Is my understanding of LAB incorrect, or am I just misusing something in my code?
If they are indeed different, which is the better one to use? The rest of my application manipulates images in the LAB model, so I am tempted to use the L component as my grayscale baseline, but some areas look lighter than they should, unlike in the BGR2GRAY scenario. Thoughts?
The conversion from RGB to grayscale is a linear weighted sum:
gray = 0.299*R + 0.587*G + 0.114*B
But the conversion from RGB to the L channel of LAB is different: it is a non-linear function.
The exact conversion can be found here.
And the non-linearity of the LAB conversion explains the last part of your question: it is why some areas look lighter in the L channel than in the grayscale conversion.
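A quick way to see the difference on a single synthetic pixel:

import cv2
import numpy as np

# One mid-gray pixel: B = G = R = 100
px = np.uint8([[[100, 100, 100]]])

print(cv2.cvtColor(px, cv2.COLOR_BGR2GRAY)[0][0])    # 100: the linear weights sum to 1
print(cv2.cvtColor(px, cv2.COLOR_BGR2LAB)[0][0][0])  # a different value: L is a non-linear (cube-root-based) mapping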