Detecting a table in an image - python

I'm fairly new to image processing, so I'm looking for some help on where to start, or some input on how to do this if you know the answer.
What I would like to be able to do is take an image with a table in it and detect the table. I'd even be happy to settle for detecting many planar surfaces, as I can sift through that part easily. What I get right now with OpenCV when I do some contour detection is:
Original,
Contour Detected
As you can see this may have potential with some sophistication, but it misses the bulk of the table. I'm currently working in Python for this as well.

Related

How to Save ONLY ONE face picture using OpenCV?

What am I trying to do?
Listen to camera video
Detect faces
Save only faces to folder
The problem:
I have done all of the above, EXCEPT that it gives me a bunch of images of the same person's face, because it checks every frame it captures and reports the same person as a new face.
I want the script to understand that it's the same person and skip it. (Of course that depends on overall accuracy, but that's okay, as long as it's not giving me 60 files of the same face per second.)
So I was thinking of somehow using face_verify from the same library, but I couldn't get it working quickly and decided to ask first instead of wasting time on something that most likely won't work.
Any suggestions? Hopefully I've described it well; I also didn't find any duplicates of this question.
Thanks in advance
It depends on what you are using, face-detection or face-recognition.
Face detection will just detect faces and return a bounding box for each frame. In this case you can use any simple tracking algorithm (like SORT), which will assign a tracking ID to each bounding box in each frame. That way you can ensure you save only one face per tracking ID.
With face recognition it's even easier, as each detected face has an associated label (similar to a tracking ID in the approach above), so you can simply save face images based on labels (one image per label).
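The "one image per tracking ID" bookkeeping is just a set of IDs you have already saved. A minimal sketch (the (track_id, face_image) pair format is an assumption about whatever detector + tracker you run per frame):

```python
def faces_to_save(tracked_faces, saved_ids):
    """Return only the faces whose tracking ID hasn't been saved yet.

    tracked_faces: list of (track_id, face_image) pairs for one frame.
    saved_ids: set of IDs already written to disk (mutated in place).
    """
    new = []
    for track_id, face in tracked_faces:
        if track_id not in saved_ids:
            saved_ids.add(track_id)
            new.append((track_id, face))
    return new

# Frame 1 sees persons 1 and 2; frame 2 sees person 1 again plus person 3.
saved = set()
first = faces_to_save([(1, "img_a"), (2, "img_b")], saved)
second = faces_to_save([(1, "img_c"), (3, "img_d")], saved)
# first keeps both faces; second keeps only person 3.
```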

Object Detection using opencv python

fig:Shoe in the red circle is to be detected
I am trying to create a Python script using cv2 that can recognize the baller's shoe and determine whether the shoe is beyond, on, or before the white line (refer to the image).
I have no idea what kind of approach to use or what kinds of algorithms might be helpful. I need some guidance, please help!
(Image is attached)
I realize this would work better as a comment because it isn't a full answer, but I don't have enough rep yet to leave comments, haha.
You may be interested in OpenCV's Canny Edge detection algorithm:
http://docs.opencv.org/trunk/da/d22/tutorial_py_canny.html
This will allow you to find shapes within your image.
Also, you can find similarly colored blobs using SimpleBlobDetector:
https://www.learnopencv.com/blob-detection-using-opencv-python-c/
This should make it fairly easy to detect the white line.
In order to detect a more complex object like the shoe, you'll probably have to make something like an object detection cascade file and use a CascadeClassifier to find it:
http://docs.opencv.org/2.4/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html#cascade-classifier
http://johnallen.github.io/opencv-object-detection-tutorial/
Basically, you take a bunch of pictures to "teach" it what the object looks like, and output that info to a file that a CascadeClassifier can use to detect objects in input images. It may be hard to distinguish between different brands of shoe, though, if you need it to be that specific. Also, you may need to adjust the input images (saturation, brightness, etc.) before trying to detect objects in order to get good results.

Algorithm for extracting background from image

Good day. I have a set of geotagged photos. I want to build a system that approximates the location of a query image based on how similar it is to the geotagged photos. I will be using Python and OpenCV to accomplish this task. However, the problem is that most of the geotagged photos have people in them (I'm only after the background scenery).
I found some face detection algorithms that I can use to detect people in photos. However, what I need is to detect the whole bodies of the people in the images and leave only the background.
OpenCV has algorithms that can be used to remove the background (I was hoping to invert the output and keep the background instead). However, these are only applicable to videos (subtracting moving parts from a static scene). Can you recommend any solution to this problem (where to start / related studies / algorithms)? I appreciate any help. Thanks!

CAPTCHAs Image Manipulation using Pillow

As an exercise, I'm attempting to break the following CAPTCHA:
It doesn't seem like it would be too difficult to break, as the edges seem fairly solid and the noise should be relatively easy to remove. The problem is, I have very little experience with image manipulation. Currently I'm using Python with the Pillow library to manipulate the CAPTCHA image, after which it is passed to Tesseract for OCR.
In the following code I attempt to bring out the edges by sharpening the image, and then convert the image to black and white:
from PIL import Image, ImageFilter

try:
    img = Image.open("Captcha.jpg")
except IOError:
    print("Can't load captcha.")
    exit()

# Bring out the edges by sharpening.
out = img.filter(ImageFilter.SHARPEN)
# Convert to greyscale, then to pure black and white with a fixed threshold.
out = out.convert("L")
out = out.point(lambda x: 0 if x < 136 else 255, "1")
# Upscale so there are more pixels for Tesseract to work with.
width, height = out.size
out = out.resize((width * 5, height * 5), Image.NEAREST)
out.save("captcha_modified.png")
At this point I see the following:
However, Tesseract is still unable to read the characters. As an experiment, I used good ol' mspaint to manually modify the image to a point to where it could be read by Tesseract:
So if I can get the image to that point, I think Tesseract will do a fairly good job of detecting the characters. My current thinking is that I need to enhance the edges and reduce the noise in the image. Also, I imagine it would be easier for Tesseract to detect the letters if they were filled in rather than outlined, but I have no idea how I'd do that.
Any suggestions on how to go about this? Is there a better way to process the images?
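Edit: one PIL-only idea for the filling-in problem would be to flood-fill the background from a corner with a marker value, then blacken everything the fill never reached. A sketch on a synthetic outline (the rectangle stands in for a thresholded letter):

```python
from PIL import Image, ImageDraw

# Synthetic stand-in for a thresholded captcha glyph: black outline on white.
img = Image.new("L", (60, 40), 255)
draw = ImageDraw.Draw(img)
draw.rectangle([10, 10, 50, 30], outline=0)  # hollow "letter"

# Flood-fill the true background with a marker grey, starting from a corner.
ImageDraw.floodfill(img, (0, 0), 128)

# Pixels still white were enclosed by an outline: paint them black (filling
# the letter), then restore the marker grey back to white.
img = img.point(lambda p: 0 if p == 255 else (255 if p == 128 else p))
```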
I am short on time, so this answer may not be incredibly useful, but it covers my own two algorithms exactly. There isn't much code, just a few method recommendations. It is a good idea to use code rather than MS Paint. With code it's actually really easy to break a captcha and achieve above a 50% success rate. Behavioral recognition may be a better security mechanism, or an additional one.
A. The edge detection method you use:
Edge detection really isn't necessary. In this case, just use the getpixel((x, y)) function and fill in the area between the bounding lines: start filling at outline crossings 1, 3, 5, etc. and stop filling after crossings 2, 4, 6, etc. Luckily, you chose an easy captcha, so edge detection is a decent solution without decluttering, rotation, and re-alignment.
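That odd/even rule is an even-odd scanline fill. A minimal sketch over one row of pixels (0 = background, 1 = outline):

```python
def fill_row(row):
    """Even-odd fill of one scanline: pixels between the 1st/2nd, 3rd/4th, ...
    outline crossings are painted in (set to 1)."""
    out = list(row)
    inside = False
    for i, px in enumerate(row):
        if px == 1:
            # Toggle only on the leading edge of a run of outline pixels.
            if i == 0 or row[i - 1] != 1:
                inside = not inside
        elif inside:
            out[i] = 1  # between an odd and an even crossing
    return out

# A hollow glyph cross-section: two outline pixels with an empty interior.
print(fill_row([0, 1, 0, 0, 1, 0]))  # → [0, 1, 1, 1, 1, 0]
```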
B. The manipulation method:
Another method I use utilizes OpenCV and Pillow as well. I am really busy, but I will post a blog article with code examples of this method later at druid5.wordpress.com/. Since it isn't illegal to get through captchas (at least I am told), I use the method I will post to collect data all the time. Mostly: contrast and detail from Pillow, some basic clutter removal with stats, re-alignment with a basic DFS, and rotation (doable with OpenCV, or easily with a kernel). Tesseract is a good choice for open source, but it isn't too hard to create an OCR with OpenCV either.
This exercise is a decent introduction to OpenCV, PIL (Pillow), image manipulation with math, and some other things that help with everything from robotics to AI.
Using flow control to find the failed conditions and try different routes may be necessary, but the aim should always be a generic solution.
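The Pillow part of that pipeline (contrast, then detail) is only a couple of calls; the enhancement factor of 2.0 here is an arbitrary assumption:

```python
from PIL import Image, ImageEnhance, ImageFilter

# Stand-in for a captcha image (in practice: Image.open("captcha.png")).
img = Image.new("RGB", (120, 40), (200, 200, 200))

# Boost contrast, then apply Pillow's DETAIL convolution filter.
img = ImageEnhance.Contrast(img).enhance(2.0)
img = img.filter(ImageFilter.DETAIL)
```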

Best Method for Identifying Broken Ellipses

I've been trying to identify the ellipses in these pictures for a long time now for a project I'm working on. At the moment I'm trying a new method with a bit of success: I blur the image, then subtract the original from the blurred copy. After that I threshold the result, which is how I get this: http://imgur.com/gIkv30A
I've been trying a few methods but have had pretty much no success with any of them. I can't get any more of the noise removed without compromising the quality of the ellipses I have found, but the ellipses I want to find seem to be decently defined.
If anyone has an idea on where I can go now I'd love to hear it.
Thanks,
Andy
edit:
Original Image: http://imgur.com/3ttIFiz
The main method I've tried so far applies an adaptive threshold to the image and then fits an ellipse around each of the contours I find after that. It works quite well on one set of images, but performs very poorly on this set. I can see my current method working well on both if I get it right.
How well it works with old images: http://imgur.com/eUYiYNa
How well it works with the new (more relevant to the program) images: http://imgur.com/1UXxXAp
