Show BRISK keypoints with fewer keypoints in Python

I have the following code in Python:
import cv2
import numpy as np

def save_keypoints(image_path, type_image):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    kp, descriptors = cv2.BRISK_create(10).detectAndCompute(gray, None)
    mg = cv2.drawKeypoints(gray, kp, None,
                           flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imwrite('brisk_keypoints-' + type_image + '.jpg', mg)

if __name__ == "__main__":
    save_keypoints("original.bmp", "original")
    save_keypoints("fake600.bmp", "fake600")
    save_keypoints("fake1200.bmp", "fake1200")
    save_keypoints("fake2400.bmp", "fake2400")
Basically, the code saves an image with the detected BRISK keypoints drawn on it. However, here are the results of applying this code to four images:
Although the images are different (I can easily discriminate them using these BRISK descriptors in a bag-of-visual-words approach), the keypoints detected in all four images look visually the same, or maybe the high number of concentric circles is confusing the viewer. How can I reduce the number of keypoints shown so that I can see how these images differ through these descriptors?

The ideal answer would be, as #Silencer suggested, to filter the keypoints. There are several ways you can achieve that. If you debug, you can see what information each detected KeyPoint contains. The information should look something like this. Using it, you can sort the keypoints either by response (I'd suggest starting with that) or by their coordinates. The response is basically how good the keypoint is; roughly speaking, how strong the corner-ness of that particular keypoint is.
For example:
Based on index:
keypoints = detector.detect(frame)  # list of cv2.KeyPoint objects
x = keypoints[i].pt[0]  # i is the index of the keypoint whose position you want
y = keypoints[i].pt[1]
You can use this in a lambda expression (rather than a loop) or in a NumPy function for fast optimization. Similarly, for the response you can do:
res = keypoints[i].response
I have seen responses ranging from about 31 to 320 for BRISK, but you have to find the best value for your images.
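For example, a minimal sketch that keeps only the strongest keypoints by response before drawing them (the count of 100 is an arbitrary starting point you would tune for your images):
import cv2

def save_top_keypoints(image_path, type_image, n_best=100):
    # detect BRISK keypoints exactly as in the original code
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    kp, descriptors = cv2.BRISK_create(10).detectAndCompute(gray, None)

    # sort by response (strongest corners first) and keep only the n_best
    kp = sorted(kp, key=lambda k: k.response, reverse=True)[:n_best]

    out = cv2.drawKeypoints(gray, kp, None,
                            flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imwrite('brisk_top_keypoints-' + type_image + '.jpg', out)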
Hope it helps!

Related

Python OpenCV - perspective transformation issues

I'm writing a script to process an image and extract a PDF417 2D barcode, which will then be decoded. I'm extracting the ROI without problems, but when I try to correct the perspective using cv2.warpPerspective, the result is not as expected.
The following is the extracted barcode, the red dots are the detected corners:
This is the resulting image:
This is the code I'm using for the transformation (the values are found by the script; for the previous images they are as follows):
box_or = np.float32([[23, 30],[395, 23],[26, 2141],[389, 2142]])
box_fix = np.float32([[0,0],[415,0],[0,2159],[415,2159]])
M = cv2.getPerspectiveTransform(box_or,box_fix)
warped = cv2.warpPerspective(img,M,(cols,rows))
I've checked and I can't find anything wrong with the code, yet the transformation is definitely wrong. The amount of perspective distortion in the extracted ROI is minimal, but it may still affect the decoding process.
So, is there a way to get rid of the perspective distortion? Am I doing something wrong? Is this a known bug or something? Any help is very much welcome.
BTW, I'm using OpenCV 3.3.0
It looks like you're giving the image coordinates as (y, x). I know the interpretation of coordinates varies within OpenCV.
In the homography example code they provide the coordinates as (x,y) - at least based on their use of 'h' and 'w' in this snippet:
h,w = img1.shape
pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
dst = cv2.perspectiveTransform(pts,M)
So try providing the coordinates as (x,y) to both getPerspectiveTransform and warpPerspective.
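If the corner detector does return (row, column) pairs, a minimal sketch of the fix could look like this (the corner values are hypothetical, just the ones above with their axes swapped):
import cv2
import numpy as np

# hypothetical (row, column) corners as they might come from the detector
points_yx = np.float32([[30, 23], [23, 395], [2141, 26], [2142, 389]])

# swap to (x, y) = (column, row), which is what getPerspectiveTransform expects
points_xy = points_yx[:, ::-1].copy()

box_fix = np.float32([[0, 0], [415, 0], [0, 2159], [415, 2159]])
M = cv2.getPerspectiveTransform(points_xy, box_fix)
# the dsize argument of warpPerspective is (width, height):
# warped = cv2.warpPerspective(img, M, (416, 2160))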

cv2.HoughLinesP on a skeletonized image

I am trying to detect lines in a certain image. I run it through a skeletonization process before applying cv2.HoughLinesP. I used the skeletonization code here.
No matter what I try, I keep getting results similar to what is described here, i.e. 'only fragments of a line...'
As suggested by Jiby, I use the named notation for the parameters and also high rho and theta, but to no avail.
Here is my code:
lines = cv2.HoughLinesP(skel, rho=5, theta=np.deg2rad(10), threshold=0, minLineLength=0, maxLineGap=0)
Prior to this, I threshold an RGB image to extract most of my 'blue' hollow rectangle. Then I convert it to grayscale, which I then feed to the skeletonizer.
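For reference, a rough sketch of that pipeline (the blue-threshold bounds and file name are placeholders, not my actual values):
import cv2
import numpy as np

img = cv2.imread('rectangle.png')  # placeholder file name

# keep mostly the 'blue' hollow rectangle (OpenCV images are BGR;
# these bounds are placeholders, not my actual values)
mask = cv2.inRange(img, (150, 0, 0), (255, 100, 100))

# the single-channel mask then goes through the skeletonization step
skel = mask  # placeholder: replace with the actual skeletonized image

lines = cv2.HoughLinesP(skel, rho=5, theta=np.deg2rad(10), threshold=0,
                        minLineLength=0, maxLineGap=0)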
Please advise.

exact position of match with OpenCV ORB matcher

I have built a simple algorithm for visual mark detection with OpenCV in Python that uses the ORB detector as its second step. I use ORB with the BFMatcher; the code is borrowed from this project: https://rdmilligan.wordpress.com/2015/03/01/road-sign-detection-using-opencv-orb/
The detection part in the code looks like this:
# find the keypoints and descriptors for object
kp_o, des_o = orb.detectAndCompute(obj,None)
if len(kp_o) == 0 or des_o is None: continue  # skip if nothing was detected
# match descriptors
matches = bf.match(des_r,des_o)
Then there is a check on the number of feature matches, so it can tell if there is a match between the template image and the query. The question is: if yes, how do I get exact position and rotation angle of the found match?
The position is already known at this step; it is stored in the variables x and y. To find the rotation, blur both the template and the source, then either generate 360 rotated representations of the blurred template and pick the one with the smallest difference from the region of interest, or convert both images to polar coordinates and shift one of them until you get the best match (the shift is the angle you want to rotate by).
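A rough sketch of the first option (the brute-force rotation search), assuming template and roi are same-sized grayscale patches; the blur kernel size is an arbitrary choice:
import cv2
import numpy as np

def estimate_rotation(template, roi):
    # blur both patches to suppress fine detail before comparing
    template = cv2.GaussianBlur(template, (5, 5), 0)
    roi = cv2.GaussianBlur(roi, (5, 5), 0)

    h, w = template.shape[:2]
    center = (w / 2, h / 2)

    best_angle, best_score = 0, float('inf')
    for angle in range(360):
        # rotate the blurred template by the candidate angle
        M = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(template, M, (w, h))
        # sum of absolute differences against the region of interest
        score = np.sum(np.abs(rotated.astype(np.int32) - roi.astype(np.int32)))
        if score < best_score:
            best_angle, best_score = angle, score
    return best_angle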

draw contour with cv2.threshold() function

I am testing the cv2.threshold() function with different values, but each time I get unexpected results. This simply means I do not understand the effect of the parameter:
maxval
Could someone clarify this for me?
For example, I want to draw the contours of this star following the white color:
Here is what I got:
From this code:
import cv2

im = cv2.imread('image.jpg')  # read picture
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)  # BGR to grayscale
ret, thresh = cv2.threshold(imgray, 200, 255, cv2.THRESH_BINARY_INV)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(im, contours, -1, (0, 255, 0), 3)
cv2.imshow("Contour", im)
cv2.waitKey(0)
cv2.destroyAllWindows()
Each time I change the value of maxval I get a strange result that I cannot understand. How can I draw the contour of this star correctly using this function, then?
Thank you in advance.
You may want to experiment with a very simple image that clearly lets you understand the various parameters. The interesting thing about the image attached below is that the grayscale value of each number shown in the image is equal to the number itself; e.g. 200 is written with grayscale value 200. Here is example Python code you can use.
import cv2

# Read image as grayscale
src = cv2.imread("threshold.png", cv2.IMREAD_GRAYSCALE)

# Set threshold and maxValue
thresh = 127
maxValue = 255

# Basic threshold example
th, dst = cv2.threshold(src, thresh, maxValue, cv2.THRESH_BINARY)

# Find contours
contours, hierarchy = cv2.findContours(dst, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

# Draw contours
cv2.drawContours(dst, contours, -1, (255, 255, 255), 3)
cv2.imshow("Contour", dst)
cv2.waitKey(0)
I have copied the following image from an OpenCV threshold tutorial I wrote recently. It explains the various parameters with an example image and Python and C++ code. Hope this helps.
Input Image
Result Image
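To isolate the effect of maxValue specifically, here is a small sketch comparing two calls (a minimal example, with values chosen arbitrarily). With THRESH_BINARY, pixels above thresh are set to maxValue and the rest to 0, so maxValue only controls how bright the "on" pixels are.
import cv2

src = cv2.imread("threshold.png", cv2.IMREAD_GRAYSCALE)

# same threshold, different maxValue: only the brightness of the
# pixels above the threshold changes
_, dst_255 = cv2.threshold(src, 127, 255, cv2.THRESH_BINARY)
_, dst_128 = cv2.threshold(src, 127, 128, cv2.THRESH_BINARY)

cv2.imshow("maxValue = 255", dst_255)
cv2.imshow("maxValue = 128", dst_128)
cv2.waitKey(0)
cv2.destroyAllWindows()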
Well, here you can use COLOR_BGR2HSV and then choose a color range; making the contour will then be quite easy. Try it and let me know.
In the black-and-white conversion, yellow and white end up with the same value, which is why this is not working.
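A rough sketch of that HSV idea (the bounds below are placeholder values for 'white', not something I have tuned):
import cv2
import numpy as np

im = cv2.imread('image.jpg')
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)

# keep low-saturation, bright pixels (roughly 'white'); bounds are assumptions
mask = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([180, 60, 255]))

contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(im, contours, -1, (0, 255, 0), 3)
cv2.imshow("Contour", im)
cv2.waitKey(0)
cv2.destroyAllWindows()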
For better accuracy when finding contours, you may apply a threshold to the image first, since binary images tend to give higher accuracy, and then use the contours method. Hope this will help!

Categorize different images

I have a number of images from Chinese genealogies, and I would like to be able to programmatically categorize them. Generally speaking, one type of image has primarily line-by-line text, while the other type may be in a grid or chart format.
Example photos
'Desired' type: http://www.flickr.com/photos/63588871#N05/8138563082/
'Other' type: http://www.flickr.com/photos/63588871#N05/8138561342/in/photostream/
Question: Is there a (relatively) simple way to do this? I have experience with Python, but little knowledge of image processing. Direction to other resources is appreciated as well.
Thanks!
Assuming that at least some of the grid lines are exactly or almost exactly vertical, a fairly simple approach might work.
I used PIL to find all the columns in the image where more than half of the pixels were darker than some threshold value.
Code
from PIL import Image, ImageDraw  # PIL (Pillow) modules

withlines = Image.open('withgrid.jpg')
nolines = Image.open('nogrid.jpg')

def findlines(image):
    w, h = image.size
    s = w * h
    im = image.point(lambda i: 255 * (i < 60))  # threshold
    d = im.getdata()  # faster than per-pixel operations
    linecolumns = []
    for col in range(w):
        black = sum(d[x] for x in range(col, s, w)) // 255
        if black > 450:
            linecolumns += [col]
    # return an image showing the detected lines
    im2 = image.convert('RGB')
    draw = ImageDraw.Draw(im2)
    for col in linecolumns:
        draw.line((col, 0, col, h - 1), fill='#f00', width=1)
    return im2

findlines(withlines).show()
findlines(nolines).show()
Results
showing detected vertical lines in red for illustration
As you can see, four of the grid lines are detected, and, with some processing to ignore the left and right sides and the center of the book, there should be no false positives on the desired type.
This means you could use the above code to detect black columns and discard those that are near the edges or the center. If any black columns remain, classify the image as the "other", undesired class of pictures.
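For example, a sketch of that classification step, assuming findlines is adapted to also return its linecolumns list (the margin fraction is an arbitrary choice):
def classify(linecolumns, w, margin=0.1):
    # ignore columns near the left/right edges and near the center fold;
    # margin is an assumed fraction of the page width
    center = w / 2
    interior = [c for c in linecolumns
                if margin * w < c < (1 - margin) * w
                and abs(c - center) > margin * w]
    return 'other' if interior else 'desired'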
AFAIK, there is no easy way to solve this. You will need a decent amount of image processing and some basic machine learning to classify these kinds of images (and even then it probably won't be 100% successful).
Another note:
While this can be solved using only machine learning techniques, I would advise you to start by looking into image processing techniques and try to convert your images into a form that shows a clear difference between the two types. For that, you could start by reading about the FFT. After that, have a look at some digital image processing techniques. When you feel comfortable that you have a decent understanding of these, you can read up on pattern recognition.
This is only one suggested approach though, there are more ways to achieve this.
