Given an image of puzzle pieces on some background (which background depends on the difficulty of the task), recognize how many pieces are in the image and classify each piece (for each piece, tell how many peninsulas and bays it has).
The background can be red, or colored.
I managed to solve (or almost solve) the easy version of this problem (red background) using OpenCV: Otsu thresholding -> dilation -> some smoothing convolutions -> findContours, then passing the contours to a classifier.
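Roughly, that pipeline looks like the sketch below (a simplified illustration; the file name, kernel sizes, and thresholds are placeholders, not my exact code):

```python
import cv2
import numpy as np

img = cv2.imread("puzzles_red_background.png")  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu thresholding separates pieces from a roughly uniform background
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Dilate to close small gaps, then smooth the mask to remove speckle noise
kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(mask, kernel, iterations=1)
mask = cv2.GaussianBlur(mask, (5, 5), 0)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Each external contour is one puzzle-piece candidate to hand to a classifier
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"found {len(contours)} candidate pieces")
```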
But I have serious difficulties when trying to solve the complicated version. This is the best I have achieved so far, using Otsu thresholding and erode + dilate. It appears that this thresholding method does not work so well on the harder backgrounds.
My dataset is very small (fewer than 10 images), so I guess it's not possible to train a deep learning segmentation model from scratch. But maybe I can use some pre-trained models?
This is my first CV problem, so I don't have much knowledge in this area. I'm rather out of ideas and would appreciate any help.
Thanks!
I think you need to use shape-based matching. For instance, you compute gradients for each shape at every rotation angle and scale you will use, sampled with some small step, and then train a neural network detector.
For an implementation example, check: https://github.com/meiqua/shape_based_matching
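A very simplified sketch of the idea, using plain OpenCV template matching over gradient magnitudes rather than the optimized matching in that repo (the image paths, angle step, and scale steps are placeholders):

```python
import cv2
import numpy as np

def gradient_magnitude(gray):
    # Sobel gradients give an edge-strength map that is less sensitive to color
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return cv2.magnitude(gx, gy)

scene = gradient_magnitude(cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE))
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

best = (-1.0, None)  # (score, (angle, scale, location))
for angle in range(0, 360, 10):              # small angular step
    for scale in (0.8, 0.9, 1.0, 1.1, 1.2):  # small scale step
        h, w = template.shape
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
        warped = cv2.warpAffine(template, M, (w, h))
        result = cv2.matchTemplate(scene, gradient_magnitude(warped),
                                   cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(result)
        if score > best[0]:
            best = (score, (angle, scale, loc))

print("best match:", best)
```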
Related
I am trying to figure out the most effective way of doing image segmentation without using deep learning, which has a steep learning curve.
In my efforts I have come across the watershed algorithm, which seems promising, but as far as I can understand it requires a black background to detect the edges.
This is my first problem, because my images have a gray background (a static shot from a top-down camera), which could be solved by recoloring the background to something darker.
My second problem is that I want to detect various objects, some of which are black.
These problems dash my hopes of using the watershed algorithm for any kind of automatic image segmentation, because each image would need special handling based on the color composition of the objects I want to segment.
Does anyone know of a solution to this or a method similar to the watershed algorithm?
Edit:
Reference image
Best regards
Martin
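For what it's worth, marker-based watershed in OpenCV does not strictly require a black background: the markers can come from Otsu thresholding plus a distance transform. A rough, untested sketch (the file name and the distance-transform threshold are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("top_down_shot.png")  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu gives a foreground/background split regardless of the absolute colors;
# invert the result if the objects come out darker than the background.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Sure background = dilated mask, sure foreground = distance-transform peaks
kernel = np.ones((3, 3), np.uint8)
sure_bg = cv2.dilate(binary, kernel, iterations=3)
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)

# Label the sure-foreground blobs and let watershed grow them outwards
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1          # background label becomes 1 instead of 0
markers[unknown == 255] = 0    # unknown region stays 0 for watershed to fill
markers = cv2.watershed(img, markers)
print("segments found:", markers.max() - 1)
```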
I am new to Python and OpenCV. I am analysing images of clouds, and I need to remove the buildings so that the subsequent analysis will have less noise. I tried using Canny edge detection and then filling in the contours, but I did not get very far. I also tried thresholding by pixel colours, but I cannot reliably exclude just the buildings without also removing parts of the image containing the clouds.
Is there a way I can efficiently and accurately remove the buildings and keep all of the clouds/sky? Thanks for the tips in advance.
You could use a computer vision model that finds the buildings. There may be some open-source ones out there; the only one I can think of at the moment is this semantic segmentation model. There should be details on how to run it, but there could definitely be others.
https://github.com/CSAILVision/semantic-segmentation-pytorch
I think one of its classes is buildings, so you could theoretically run the model, get the extent of each building, and take it out.
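Once a segmentation model gives you a per-pixel class map, removing the buildings is just masking. In the sketch below, the `segment` function and the building class index are hypothetical placeholders; the actual loading and inference code depends on the repo you use:

```python
import cv2
import numpy as np

BUILDING_CLASS = 1  # placeholder: look up the real index in the model's label list

def segment(image):
    """Hypothetical stand-in for a pretrained semantic-segmentation model that
    returns an HxW array of class indices for the input image."""
    raise NotImplementedError

img = cv2.imread("clouds.png")  # placeholder path
labels = segment(img)

# Inpaint (or simply zero out) every pixel the model labels as building
mask = (labels == BUILDING_CLASS).astype(np.uint8) * 255
cleaned = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("clouds_no_buildings.png", cleaned)
```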
I was wondering if there was a simple Python toolkit for region-based image segmentation. I have a grayscale image, and my goal is to efficiently find a complete segmentation such that the pixel values in each region are similar (presumably the definition of "similar" will be determined by some tolerance parameter). I am looking for an instance segmentation where every pixel belongs to exactly one region.
I have looked at the scikit-image segmentation module (https://scikit-image.org/docs/dev/api/skimage.segmentation.html), but the tools there didn't seem to do what I was looking for. For instance, skimage.segmentation.watershed looked attractive, but gave poor results using markers=None.
The flood fill algorithm from scikit-image seems close to what you want; it has a tolerance parameter as well.
For more fine-grained control you can check out OpenCV.
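A minimal example of the scikit-image flood fill, assuming a grayscale image and a hand-picked seed point (both placeholders):

```python
import numpy as np
from skimage import io
from skimage.segmentation import flood

image = io.imread("grayscale_image.png", as_gray=True)  # placeholder path
seed = (50, 100)  # (row, col) inside the region you want to grow

# All connected pixels within +/- tolerance of the seed value end up in the mask
mask = flood(image, seed, tolerance=0.05)
print("region size in pixels:", np.count_nonzero(mask))
```

To get a complete partition, you could repeat the flood from each not-yet-labeled pixel and assign a new region id each time.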
I'm currently working with SqueezeDet for detection purposes. I trained the network on synthetic data and it performs reasonably well (detection results).
For my project I would like to be able to visualize which parts of the input were more relevant for the detection process. So in the case of detecting a pedestrian, I'd assume that its pixels would be more important than, for example, the surroundings. I tried a couple of different methods, but none of them is fully satisfactory.
I did my own research and couldn't really find any papers that talk about visualization for object detection. So I implemented VisualBackProp, but the results don't look all too promising. If instead I compute the relevance, things look slightly better, but still not as expected.
I started thinking that perhaps the issues might be related to the complexity of my outputs, compared with a network that only deals with classification or, as in the VisualBackProp paper, just the prediction of a steering angle.
I was wondering if anyone has an idea of what visualization technique might best suit the detection task.
You could try just augmenting different areas of the image and see how it affects the detection confidence. For example, you could put the area containing the pedestrian on just a black background instead of the natural background to see how much the surroundings actually affect things. You could also add moderate to severe noise to select areas of the image and observe which areas correspond to the biggest change in detection confidence.
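A rough occlusion-sensitivity sketch of that first suggestion; `detector_confidence` is a hypothetical wrapper around your trained model that returns the confidence of the detection you care about for a given input image:

```python
import numpy as np

def occlusion_map(image, detector_confidence, patch=16, stride=8):
    """Slide a gray patch over the image and record how much the detection
    confidence drops when each region is occluded."""
    base = detector_confidence(image)
    h, w = image.shape[:2]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 128  # neutral gray block
            heat[i, j] = base - detector_confidence(occluded)
    return heat  # large values = regions the detector relies on
```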
More directly, mathematically you seem to be interested in the gradient of detection confidence WRT pixel data. Depending on what deep learning platform you are using, if you run a single training iteration you may be able to obtain the gradients in the data layer (dL/dx) which will directly show these. This will only represent the effect of small changes to the pixel data - if you are aiming for more macroscopic insights than that, I think my first suggestion is probably your only option.
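For the gradient-based route, something like the following, shown with PyTorch purely for illustration (the `model` call and the assumption that a scalar confidence can be pulled out of the detector's output are placeholders; with other frameworks the equivalent gradient op plays the same role):

```python
import torch

def saliency(model, image_tensor):
    """Gradient of the detection confidence w.r.t. the input pixels (dL/dx)."""
    x = image_tensor.clone().requires_grad_(True)
    confidence = model(x)   # assume the model returns a scalar confidence here
    confidence.backward()
    return x.grad.abs()     # |dL/dx| highlights the most influential pixels
```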
I'm writing an OCR application to read characters from a screenshot image. Currently, I'm focusing only on digits. I'm partially basing my approach on this blog post: http://blog.damiles.com/2008/11/basic-ocr-in-opencv/.
I can successfully extract each individual character using some clever thresholding. Where things get a bit tricky is matching the characters. Even with fixed font face and size, there are some variables such as background color and kerning that cause the same digit to appear in slightly different shapes. For example, the below image is segmented into 3 parts:
Top: a target digit that I successfully extracted from a screenshot
Middle: the template: a digit from my training set
Bottom: the error (absolute difference) between the top and middle images
The parts have all been scaled (the distance between the two green horizontal lines represents one pixel).
You can see that despite both the top and middle images clearly representing a 2, the error between them is quite high. This causes false positives when matching other digits -- for example, it's not hard to see how a well-placed 7 can match the target digit in the image above better than the middle image can.
Currently, I'm handling this by having a heap of training images for each digit, and matching the target digit against those images, one-by-one. I tried taking the average image of the training set, but that doesn't resolve the problem (false positives on other digits).
I'm a bit reluctant to perform matching using a shifted template (it'd be essentially the same as what I'm doing now). Is there a better way to compare the two images than simple absolute difference? I was thinking of maybe something like the EMD (earth movers distance, http://en.wikipedia.org/wiki/Earth_mover's_distance) in 2D: basically, I need a comparison method that isn't as sensitive to global shifting and small local changes (pixels next to a white pixel becoming white, or pixels next to a black pixel becoming black), but is sensitive to global changes (black pixels that are nowhere near white pixels become black, and vice versa).
Can anybody suggest a more effective matching method than absolute difference?
I'm doing all this in OpenCV using the C-style Python wrappers (import cv).
I would look into using Haar cascades. I've used them for face detection/head tracking, and it seems like you could build up a pretty good set of cascades with enough '2's, '3's, '4's, and so on.
http://alereimondo.no-ip.org/OpenCV/34
http://en.wikipedia.org/wiki/Haar-like_features
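Assuming you have already trained one cascade per digit with OpenCV's cascade training tools, running it looks roughly like this with the modern cv2 bindings (the .xml and image paths are placeholders):

```python
import cv2

cascade = cv2.CascadeClassifier("cascade_digit_2.xml")  # placeholder cascade file
gray = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)

# Returns a list of (x, y, w, h) boxes where the cascade fired
hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
for (x, y, w, h) in hits:
    cv2.rectangle(gray, (x, y), (x + w, y + h), 255, 1)
```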
OCR on noisy images is not easy, so simple approaches do not work well.
So I would recommend using HOG to extract features and an SVM to classify. HOG seems to be one of the most powerful ways to describe shapes.
The whole processing pipeline is implemented in OpenCV; however, I do not know the function names in the Python wrappers. You should be able to train with the latest haartraining.cpp - it actually supports more than Haar features: HOG and LBP as well.
And I think the latest code (from trunk) is much improved over the official release (2.3.1).
HOG usually needs just a fraction of the training data used by other recognition methods; however, if you want to classify shapes that are partially occluded (or missing), you should make sure to include some such shapes in training.
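As a rough illustration of the HOG + SVM idea in Python, here is a sketch using cv2.HOGDescriptor for the features and scikit-learn for the SVM, trained on scikit-learn's small bundled digit images as a stand-in for your own crops (the window, block, and cell sizes are assumptions):

```python
import cv2
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 20x20 window, 10x10 blocks, 5x5 stride and cells, 9 orientation bins (assumptions)
hog = cv2.HOGDescriptor((20, 20), (10, 10), (5, 5), (5, 5), 9)

def hog_features(img8x8):
    # The bundled digits are 8x8 with values 0-16; rescale to a 20x20 uint8 image
    img = cv2.resize((img8x8 * (255.0 / 16)).astype(np.uint8), (20, 20))
    return hog.compute(img).ravel()

digits = load_digits()  # stand-in for your own segmented digit crops
X = np.array([hog_features(d) for d in digits.images])
X_train, X_test, y_train, y_test = train_test_split(X, digits.target, random_state=0)

clf = SVC(kernel="rbf", C=10).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```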
I can tell you from my experience, and from reading several papers on character classification, that a good way to start is by reading about Principal Component Analysis (PCA), Fisher's Linear Discriminant Analysis (LDA), and Support Vector Machines (SVMs). These are classification methods that are extremely useful for OCR, and it turns out that OpenCV already includes excellent implementations of PCA and SVMs. I haven't seen any OpenCV code examples for OCR, but you can adapt a modified version of face classification code to perform character classification. An excellent resource for face recognition code for OpenCV is this website.
Another Python library I recommend is "scikits.learn" (now scikit-learn). It is very easy to send cvArrays to scikit-learn and run machine learning algorithms on your data. A basic example of OCR using an SVM is here.
Another more complicated example using manifold learning for handwritten character recognition is here.
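As a quick illustration of the PCA + SVM combination with scikit-learn, using its bundled digit images as a stand-in for your own extracted characters (the number of components and SVM parameters are assumptions):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Project onto the top principal components, then classify with an RBF-kernel SVM
clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", gamma="scale"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```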