I am new to the computer vision area and I have been given this task:
I need to recognize a set of images with a camera as soon as they enter the camera's view. These images would be scanned beforehand and stored in some sort of database (perhaps as a collection of key-points for each image).
I've been doing some research and found that SIFT may do the trick, but I don't know how to use it properly. I need to do this with Python and OpenCV.
Note: I already found examples that extract the key-points of an image using SIFT, but the code is very confusing to someone who does not know the language. Any help is appreciated.
Here is a good page for you to get started and learn the basics along the way.
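To complement that, here is a minimal sketch of SIFT matching with OpenCV. The file names are placeholders, and depending on your OpenCV build, SIFT may live under cv2.xfeatures2d.SIFT_create instead of cv2.SIFT_create:

import cv2

sift = cv2.SIFT_create()

# Pre-compute and store descriptors for each database image (this is the
# "key-points collection" mentioned in the question).
db_img = cv2.imread("stored_image.png", cv2.IMREAD_GRAYSCALE)
db_kp, db_desc = sift.detectAndCompute(db_img, None)

# For each camera frame, extract descriptors and match them against the database.
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
kp, desc = sift.detectAndCompute(frame, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(desc, db_desc, k=2)

# Lowe's ratio test keeps only distinctive matches; many survivors suggest
# the frame shows the stored image.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print("%d good matches" % len(good))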
So I have been asked to motion-deblur a frame captured from a video. I am new to deblurring filters, so I need some help. The video does not contain any noise, just a vertical motion blur. I am not allowed to use skimage or any other library except cv2. Even just knowing which technique or function to use would be a great help. Thanks!
You can use OpenCV's motion deblur filter (a Wiener deconvolution) if you specifically want to use OpenCV.
Following is a link to a demo of it, which is fairly easy to understand:
http://amroamroamro.github.io/mexopencv/opencv/weiner_deconvolution_demo_gui.html
If other libraries were allowed, skimage would also work; its restoration module has deconvolution functions that can help with deblurring images.
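Since the question allows only cv2, here is a rough sketch of Wiener deconvolution for a vertical motion blur using cv2 plus numpy (which cv2 already depends on). The blur length, kernel size, and noise-to-signal ratio are guesses that would need tuning for the actual footage:

import cv2
import numpy as np

def vertical_motion_psf(length, size):
    # Point spread function for vertical motion blur: a normalized vertical line.
    psf = np.zeros((size, size), np.float32)
    psf[(size - length) // 2:(size + length) // 2, size // 2] = 1.0
    return psf / psf.sum()

def wiener_deblur(img, psf, nsr=0.01):
    # Wiener deconvolution in the frequency domain:
    # F_hat = G * conj(H) / (|H|^2 + NSR), with a constant NSR estimate.
    img = np.float32(img) / 255.0
    psf_pad = np.zeros_like(img)
    ph, pw = psf.shape
    psf_pad[:ph, :pw] = psf
    psf_pad = np.roll(psf_pad, (-ph // 2, -pw // 2), axis=(0, 1))  # centre the PSF
    H = np.fft.fft2(psf_pad)
    restored = np.fft.ifft2(np.fft.fft2(img) * np.conj(H) / (np.abs(H) ** 2 + nsr))
    return np.uint8(np.clip(np.real(restored) * 255, 0, 255))

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
deblurred = wiener_deblur(frame, vertical_motion_psf(length=15, size=31))
cv2.imwrite("deblurred.png", deblurred)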
I think that for this kind of problem you should look at recent deep learning techniques; they outperform the classical approaches. I recommend looking on GitHub for a repository that already provides a trained network able to handle the same kind of blur you have.
I never tried it, but this could be a nice candidate.
Good day. I have a set of geotagged photos. I want to build a system that approximates the location of a query image based on how similar it is to the geotagged photos. I will be using Python and OpenCV to accomplish this task. However, the problem is that most of the geotagged photos have people in them (I'm only after the background scenery).
I found some face detection algorithms that I can use to detect people in photos. However, what I need is to detect each person's whole body in the image and keep only the background.
OpenCV has algorithms for removing the background (I was hoping to invert the output and keep the background instead), but these are only applicable to videos (subtracting moving parts from a static scene). Can you recommend any solution to this problem (where to start / related studies / algorithms)? I appreciate any help. Thanks!
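One possible starting point: OpenCV ships a HOG-based pedestrian detector that finds whole bodies rather than just faces. A minimal, untuned sketch, with placeholder file names and detection parameters:

import cv2

# Built-in HOG pedestrian detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("geotagged_photo.jpg")  # placeholder file name
rects, weights = hog.detectMultiScale(img, winStride=(8, 8), padding=(8, 8), scale=1.05)

# Black out each detected person so only the background remains.
for (x, y, w, h) in rects:
    img[y:y + h, x:x + w] = 0

cv2.imwrite("background_only.jpg", img)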
As an exercise, I'm attempting to break the following CAPTCHA:
It doesn't seem like it would be too difficult to break, as the edges seem fairly solid and the noise should be relatively easy to remove. The problem is that I have very little experience with image manipulation. Currently I'm using Python with the Pillow library to manipulate the CAPTCHA image, after which it is passed to Tesseract for OCR.
In the following code I attempt to bring out the edges by sharpening the image, and then convert the image to black and white:
from PIL import Image, ImageFilter

try:
    img = Image.open("Captcha.jpg")
except IOError:
    print("Can't load captcha.")
    exit()

# Bring out the edges by sharpening, then threshold to black and white.
out = img.filter(ImageFilter.SHARPEN)
out = out.convert("L")
out = out.point(lambda x: 0 if x < 136 else 255, "1")

# Scale up so Tesseract has more pixels to work with.
width, height = out.size
out = out.resize((width * 5, height * 5), Image.NEAREST)
out.save("captcha_modified.png")
At this point I see the following:
However, Tesseract is still unable to read the characters. As an experiment, I used good ol' MS Paint to manually modify the image to the point where it could be read by Tesseract:
So if I can get the image to that point programmatically, I think Tesseract will do a fairly good job at detecting the characters. My current thinking is that I need to enhance the edges and reduce the noise in the image. Also, I imagine it would be easier for Tesseract to detect the letters if they were filled in rather than outlined, but I have no idea how I'd do that.
Any suggestions on how to go about this? Is there a better way to process the images?
I am short on time, so this answer may not be incredibly useful, but it covers exactly my own two approaches. There isn't much code here, just a few method recommendations. It is a good idea to use code rather than MS Paint: with code it's actually quite easy to break a CAPTCHA and achieve above a 50% success rate. Behavioral recognition may be a better security mechanism, or at least a useful additional one.
A. The edge detection method you use:
Edge detection really isn't necessary here. Just use the getpixel((x, y)) function and fill in the area between the bounding lines of each letter: start filling at edge crossings 1, 3, 5, etc. and stop filling after crossings 2, 4, 6, etc. Luckily, you chose an easy CAPTCHA, so edge detection is a decent solution without decluttering, rotating, and re-alignment.
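A rough Pillow sketch of that even-odd scanline fill, assuming a clean binary image with black outlines on a white background (real CAPTCHA output is messier, so treat this as the idea only):

from PIL import Image

img = Image.open("captcha_modified.png").convert("1")
px = img.load()
width, height = img.size

for y in range(height):
    inside = False
    prev = 255
    for x in range(width):
        cur = px[x, y]
        # Toggle the fill state on each white-to-black edge crossing,
        # so pixels between crossings 1-2, 3-4, ... get filled.
        if cur == 0 and prev != 0:
            inside = not inside
        if inside:
            px[x, y] = 0
        prev = cur

img.save("captcha_filled.png")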
B. The manipulation method:
Another method I use utilizes OpenCV and Pillow as well. I am really busy, but I am posting a blog article on this later at druid5.wordpress.com/ which will contain code examples of this method. Since it isn't illegal to get through CAPTCHAs (at least, so I am told), I use the method I will post to collect data all the time. Mostly it is: contrast and detail enhancement with Pillow, some basic clutter removal with statistics, re-alignment with a basic DFS, and rotation (doable with OpenCV, or easily with a kernel). Tesseract is a good open-source choice, but it isn't too hard to create an OCR with OpenCV either.
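For the contrast-and-detail step, a minimal Pillow sketch; the enhancement factor is a placeholder to tune per CAPTCHA style:

from PIL import Image, ImageEnhance, ImageFilter

img = Image.open("Captcha.jpg").convert("L")
img = ImageEnhance.Contrast(img).enhance(2.0)  # placeholder factor
img = img.filter(ImageFilter.DETAIL)
img.save("captcha_enhanced.png")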
This exercise is a decent introduction to OpenCV, PIL (Pillow), image manipulation with math, and some other things that help with everything from robotics to AI.
Using flow control to find the failed conditions and try different routes may be necessary, but the aim should always be a generic solution.
I am working on a project where I need to program a Raspberry Pi to grab an image from a webcam, search that image for a box, and identify which box it is by its size ratio. The boxes will be a unique colour relative to the rest of the environment. It would also be good to identify the distance and angle to the box.
Everything I've seen seems to indicate that this should be possible, but after several days of searching I have yet to find anything that really helps me do this. This project is my first experience using Python, so I'm pretty newbish. Any help, even with small portions of this, would be greatly appreciated.
Here's my working code so far. It's not much; all it does is grab an image from a webcam:
import imgproc
from imgproc import *

# Open the camera and a viewer window at 160x120 resolution.
camera = Camera(160, 120)
viewer = Viewer(160, 120)

# Grab and display frames forever.
while True:
    img = camera.grabImage()
    viewer.displayImage(img)
This is not a complete solution, but here are some good ideas on how to get started :)
First off, there are Python bindings for OpenCV, an open-source, free computer vision library originally written in C: http://opencv.willowgarage.com/documentation/python/index.html
The first thing you do when solving a computer vision problem is pre-processing. In particular, knowing that the box is a different colour helps a LOT: it means we can threshold by colour and create an image that is black where the box is not and white where the box is, using a technique such as the one in http://aishack.in/tutorials/thresholding/ .
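A minimal colour-thresholding sketch with OpenCV; the HSV bounds below assume a red box and are placeholders to tune for the actual box colour:

import cv2
import numpy as np

img = cv2.imread("frame.png")  # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Placeholder HSV range for a red-ish box; adjust for your box's colour.
lower = np.array([0, 120, 70])
upper = np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)  # white where the box is, black elsewhere

cv2.imwrite("mask.png", mask)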
Then you'd follow a process similar to the Sudoku grabber/solver described in this blog: do blob extraction ( http://en.wikipedia.org/wiki/Blob_extraction ), then a Hough transform to get lines, and then compare the lines' distances to each other to determine the box's ratio. http://aishack.in/tutorials/sudoku-grabber-with-opencv-plot/
Pretty much, just read about people's OpenCV Sudoku solvers until you get the gist of how it's done; there are a lot of good tutorials, and it's a simple illustration of how computer vision projects go: https://www.google.com.au/search?q=sudoku+opencv&aq=f&oq=sudoku+opencv&aqs=chrome.0.57j60l3j0l2.1506&sourceid=chrome&ie=UTF-8
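As a simpler alternative to the Hough-line step, contours on the binary mask from the thresholding sketch above give the box's rectangle, size ratio, and angle directly (this assumes the OpenCV 4 findContours signature, which returns two values):

import cv2

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    box = max(contours, key=cv2.contourArea)  # assume the largest blob is the box
    (cx, cy), (w, h), angle = cv2.minAreaRect(box)
    if min(w, h) > 0:
        ratio = max(w, h) / min(w, h)  # the size ratio that identifies the box
        print("centre (%.0f, %.0f), aspect ratio %.2f, angle %.1f deg" % (cx, cy, ratio, angle))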
You may want to try installing SimpleCV from the GitHub repo. Using SimpleCV you should be able to get the blob's colour using the Image.hueDistance command. If you use the findBlobs command to find your boxes, each blob will have its aspect ratio as a property. We just posted our full PyCon tutorial about SimpleCV here. You can view just the slides here. We've heard that there are some issues installing PyGame (a SimpleCV dependency) on the Raspberry Pi. This walkthrough might address those issues.
I am currently using OpenCV, Image, and Pygame to capture frames from an HD webcam. What I want to achieve is to check whether the captured frames are in focus. This is essentially focus testing, but I couldn't come up with a solution in Python. IMHO the problem can be reduced to determining a blurriness measure for each frame and making a decision from it. What is the name of the algorithm I am looking for? Does anyone have experience implementing similar algorithms in Python?
I would appreciate any guidance.
If you want to check whether an image is blurry, this question, Is there a way to detect if an image is blurry?, might be able to help you; it is pretty interesting.
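For reference, a common focus measure discussed in that question is the variance of the Laplacian: sharp frames have strong edges, so the variance is high. A minimal sketch, with a placeholder threshold to calibrate against known-sharp frames:

import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
focus = cv2.Laplacian(frame, cv2.CV_64F).var()

# The threshold is a placeholder; calibrate it on frames you know are sharp.
if focus < 100.0:
    print("frame looks blurry (focus measure %.1f)" % focus)
else:
    print("frame looks sharp (focus measure %.1f)" % focus)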