I am testing the cv2.threshold() function with different values, but each time I get unexpected results. This simply means I do not understand the effect of the parameter:
maxval
Could someone clarify this for me?
For example, I want to draw the contours of this star by following the white color:
Here is what I got:
From this code:
import cv2

im = cv2.imread('image.jpg')                   # read picture
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)  # BGR to grayscale
# inverted binary threshold: pixels <= 200 become 255, pixels above become 0
ret, thresh = cv2.threshold(imgray, 200, 255, cv2.THRESH_BINARY_INV)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(im, contours, -1, (0, 255, 0), 3)  # draw all contours in green
cv2.imshow("Contour", im)
cv2.waitKey(0)
cv2.destroyAllWindows()
Each time I change the value of maxval I get a strange result that I cannot understand. How can I draw the contour of this star correctly using this function, then?
Thank you in advance.
You may want to experiment with a very simple image that clearly lets you understand the various parameters. The interesting thing about the image attached below is that the grayscale value of each number shown in the image is equal to the number itself; e.g. 200 is written with grayscale value 200. Here is example Python code you can use.
import cv2

# Read image as grayscale
src = cv2.imread("threshold.png", cv2.IMREAD_GRAYSCALE)

# Set threshold and maxValue
thresh = 127
maxValue = 255

# Basic threshold example: pixels above thresh become maxValue, the rest become 0
th, dst = cv2.threshold(src, thresh, maxValue, cv2.THRESH_BINARY)

# Find contours
contours, hierarchy = cv2.findContours(dst, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# Draw contours in white
cv2.drawContours(dst, contours, -1, (255, 255, 255), 3)
cv2.imshow("Contour", dst)
cv2.waitKey(0)
I have copied the following image from an OpenCV Threshold Tutorial I wrote recently. It explains the various parameters with an example image and Python and C++ code. Hope this helps.
Input Image
Result Image
Well, here you can use COLOR_BGR2HSV and then choose a color; making the contour will then be quite easy. Try it and let me know.
In a plain black-and-white conversion, yellow and white end up with nearly the same gray value; that's why this is not working.
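A minimal sketch of that HSV idea (the inRange bounds for "white" below are assumptions; tune them for your image):
import cv2
import numpy as np

im = cv2.imread('image.jpg')
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)

# Assumed range for "white": any hue, low saturation, high value
lower = np.array([0, 0, 200])
upper = np.array([180, 40, 255])
mask = cv2.inRange(hsv, lower, upper)

# OpenCV 4 signature; OpenCV 3 returns an extra image first
contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(im, contours, -1, (0, 255, 0), 3)
cv2.imshow("Contour", im)
cv2.waitKey(0)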
For better accuracy when finding contours, you may first apply a threshold to the image, since binary images tend to give higher accuracy, and then use the contours method. Hope this will help!
I followed (and modified) the method from the best-rated answer of this post.
My image is a little bit different. I used HoughLinesP and managed to detect the majority of red lines.
I was wondering, is there a way to remove the detected lines from the image without damaging the black lines that intersect them? I am interested in the black lines only. Is there a smarter way to isolate the black lines without losing too many pixels and segments?
If you want to isolate just the black lines, a simple Otsu threshold and a bitwise-and should do it:
import cv2

image = cv2.imread('3.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Otsu picks the threshold automatically; INV makes the dark lines white in the mask
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Keep only the masked pixels, then paint the background white
result = cv2.bitwise_and(image, image, mask=thresh)
result[thresh == 0] = (255, 255, 255)

cv2.imshow('thresh', thresh)
cv2.imshow('result', result)
cv2.waitKey()
This looks like a signal-separation/processing problem.
I don't know if this will work or not; it is just a hunch, so give it a shot and see. Treat your image as a convolution of the measuring strip and the ECG trace. If you process it in the Fourier domain, perhaps you can disentangle these two types of signals.
Take the Fourier transform (FFT) of the image (scipy has FFT functionality). Call the original image f and the FFT image F.
Take an image of just the measuring strip (with no measured ECG pattern on it) and evaluate the FFT for this one as well. Call this image g and its FFT image G.
Calculate the inverse FFT of F/G and see if that clears up the background effect.
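A rough sketch of that deconvolution idea (the filenames are placeholders, the two images are assumed to be the same size, and the small epsilon is an assumption to keep the division stable):
import cv2
import numpy as np

# Placeholder filenames: the full scan and a blank strip with no ECG trace
f = cv2.imread('ecg_scan.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)
g = cv2.imread('blank_strip.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)

F = np.fft.fft2(f)
G = np.fft.fft2(g)

# Assumed epsilon avoids division by zero where G has tiny magnitudes
eps = 1e-6
h = np.fft.ifft2(F / (G + eps)).real

# Rescale to 0-255 for display
h = cv2.normalize(h, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow('deconvolved', h)
cv2.waitKey(0)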
In case this does not work, please leave a note in the comment section.
I have an input image and would like to draw ellipses on all the contours that are found. I'm using python-opencv and getting the following error. Can anyone help me with this? I do know how to draw an ellipse, but I'm unsure how to draw one over every detected object in an image. I am new to this area, so please excuse the silly question.
OpenCV Error: Incorrect size of input array (There should be at least 5 points to fit the ellipse) in cv::fitEllipse, file C:\bld\opencv_1498171314629\work\opencv-3.2.0\modules\imgproc\src\shapedescr.cpp, line 358
Traceback (most recent call last):
  File "D:/project/test.py", line 41, in <module>
    ellipse = cv2.fitEllipse(contours[0])
cv2.error: C:\bld\opencv_1498171314629\work\opencv-3.2.0\modules\imgproc\src\shapedescr.cpp:358: error: (-201) There should be at least 5 points to fit the ellipse in function cv::fitEllipse
With ellipse = cv2.fitEllipse(contours[0]) you are fitting only the first contour, which has fewer than 5 points... you should iterate through all the contours and fit only the ones with at least 5 points...
Try something like this:
import numpy as np
import cv2

image = cv2.imread("cell.jpg")
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
grey_hist = cv2.calcHist([grey], [0], None, [256], [0, 256])
eq = cv2.equalizeHist(grey)
blurredA1 = cv2.blur(eq, (3, 3))
(T, thresh) = cv2.threshold(blurredA1, 190, 255, cv2.THRESH_BINARY)
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
if len(contours) != 0:
    for i in range(len(contours)):
        if len(contours[i]) >= 5:
            # thresh is single-channel, so only the first value of the color is used
            cv2.drawContours(thresh, contours, i, (150, 10, 255), 3)
            ellipse = cv2.fitEllipse(contours[i])
            cv2.ellipse(thresh, ellipse, (150, 10, 255), 2)
        else:
            # optional: "delete" the small contours by filling them with black
            cv2.drawContours(thresh, contours, i, (0, 0, 0), -1)
cv2.imshow("Perfectlyfittedellipses", thresh)
cv2.waitKey(0)
As you can see, I iterate over all of the contours and check that each has at least 5 points before fitting. I also added something to black out all the contours that are not big enough.
I do not know what your input image looks like, so this might or might not work. When you use cv2.RETR_EXTERNAL in the findContours function, it only returns the external contours. Instead, use cv2.RETR_TREE.
This retrieves all of the contours and reconstructs a full hierarchy of nested contours. Refer here for documentation. Your code should change as follows.
im2, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
Hope this helps. If it does not work, it would be great if you could upload your input image so that we can work on it!
Also, as for putting an ellipse on every contour found, like api55 mentioned, you are trying to fit an ellipse only on the first contour. Hope his answer helps you with that. If you want to put an ellipse on the largest contour, you can sort the found contours by area and then fit an ellipse to the largest one, or to every contour bigger than a certain size.
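A small sketch of that sorting idea (it assumes the contours and image variables from the snippet above, and keeps the minimum-point check so fitEllipse does not fail):
# Sort contours by area, largest first
contours = sorted(contours, key=cv2.contourArea, reverse=True)

largest = contours[0]
if len(largest) >= 5:  # fitEllipse needs at least 5 points
    ellipse = cv2.fitEllipse(largest)
    cv2.ellipse(image, ellipse, (0, 255, 0), 2)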
I am trying to detect lines in a certain image. I run it through a skeletonization process before applying the cv2.HoughLinesP. I used the skeletonization code here.
No matter what I try I keep getting results similar to what is described here i.e. 'only fragments of a line..'
As suggested by Jiby, I use named notation for the parameters and also a high rho and theta, but to no avail.
Here is my code:
lines = cv2.HoughLinesP(skel, rho=5, theta=np.deg2rad(10), threshold=0, minLineLength=0, maxLineGap=0)
Prior to this I threshold an RGB image to extract most of my 'blue' hollow rectangle. Then I convert it to grayscale, which I then feed to the skeletonizer.
Please advise.
I have a .png image that contains three grayscale values. It contains black (0), white (255) and gray (128) blobs. I want to resize this image to a smaller size while preserving only these three grayscale values.
Currently, I am using scipy.misc.imresize to do it but I noticed that when I reduce the size, the edges get blurred and now contains more than 3 grayscale values.
Does anyone know how to do this in python?
From the docs for imresize, note the interp keyword argument:
interp : str, optional
Interpolation to use for re-sizing
(‘nearest’, ‘lanczos’, ‘bilinear’, ‘bicubic’ or ‘cubic’).
The default is bilinear filtering; switch to nearest and it will instead use the exact color of the nearest existing pixel, which will preserve your precise grayscale values rather than trying to linearly interpolate between them.
I believe that PIL.Image.resize does exactly what you want. Take a look at the docs.
Basically what you need is:
from PIL import Image

im = Image.open('old.png')
# The Image.NEAREST is the default, I'm just being explicit
# (integer division keeps the new size whole in Python 3 as well)
im = im.resize((im.size[0] // 2, im.size[1] // 2), Image.NEAREST)
im.save('new.png')
Actually you can pretty much do that with scipy.misc.imresize.
Take a look at its docs.
The interp parameter is what you need: if you set it to 'nearest', the image colors won't be affected.
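A minimal sketch (note that scipy.misc.imresize was removed in newer SciPy releases, so this assumes an older SciPy where it still exists):
import scipy.misc

img = scipy.misc.imread('old.png')  # grayscale array containing only 0, 128 and 255
# 'nearest' copies the value of the closest pixel, so no new gray levels appear
small = scipy.misc.imresize(img, 0.5, interp='nearest')
scipy.misc.imsave('new.png', small)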
It was my understanding that when converting an image from BGR to LAB, that the L-component was supposed to represent the grayscale component of the image. However, when I convert from BGR to Grayscale, the expected values don't match. For example,
img1 = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
print img1[0][0]
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print img2[0][0]
The first pixel in my image in LAB produces [168 133 162] while the second produces 159. I was under the impression that they should be equivalent somehow (which is reinforced by the fact that there is no COLOR_LAB2GRAY constant).
Can someone clarify and explain why this is the case? Is my understanding of LAB incorrect, or am I just misusing something in my code?
If they are indeed different, then which is the better one to use? The rest of my application manipulates images in the LAB model, so I am tempted to use the L-component as my grayscale baseline, but some areas look lighter than they should be... unlike in the BGR2GRAY scenario. Thoughts?
The grayscale conversion used by cv2.COLOR_BGR2GRAY is a linear weighted sum of the channels:
gray = 0.299*R + 0.587*G + 0.114*B
But the conversion from RGB to the L channel of LAB differs: it is a non-linear function of the channels.
The exact conversion can be found here.
And the non-linearity of the LAB conversion explains the last part of your question.
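A quick sketch to see the difference numerically (the pixel value below is made up purely for illustration):
import cv2
import numpy as np

px = np.uint8([[[60, 120, 180]]])  # one made-up pixel: B=60, G=120, R=180

gray = cv2.cvtColor(px, cv2.COLOR_BGR2GRAY)[0][0]
lab = cv2.cvtColor(px, cv2.COLOR_BGR2LAB)[0][0]

print(gray)    # linear sum: 0.299*180 + 0.587*120 + 0.114*60, about 131
print(lab[0])  # L channel: non-linear and scaled to 0-255, generally not equal to gray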