Perhaps it's a very newbie question but I have images from a room like this
where I am interested in the grey border at the bottom of every wall. Since I know its dimensions, I can estimate the distance to it. However, I have not yet managed to extract it correctly with OpenCV. I can make two masks, one for the green wall and one for the grey border, and add them together. But is there any way to require that they connect? That would make the script much more robust, instead of something like this draft, which simply follows the two masks without any 'connection' rules.
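One way to encode such a "connection" rule is to keep only the grey components that actually touch the green-wall mask, e.g. by dilating the wall mask one pixel and testing each grey component for overlap. A minimal sketch (the HSV ranges and file name are placeholders for whatever your own masks already use):

```python
import cv2
import numpy as np

img = cv2.imread("room.jpg")  # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Placeholder ranges -- substitute whatever your two masks already use.
wall_mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))  # green wall
grey_mask = cv2.inRange(hsv, (0, 0, 80), (180, 40, 180))    # grey border

# Grow the wall mask by one pixel so "touching" components overlap it.
wall_dilated = cv2.dilate(wall_mask, np.ones((3, 3), np.uint8))

# Keep only the grey components that are connected to the wall.
n, labels = cv2.connectedComponents(grey_mask)
connected = np.zeros_like(grey_mask)
for label in range(1, n):
    component = labels == label
    if (wall_dilated[component] > 0).any():
        connected[component] = 255

cv2.imwrite("border_connected_to_wall.png", connected)
```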
Thanks in advance!
Forgive me, but I'm new to OpenCV.
I would like to delete the common background in 3 images, where there is a landscape and a man.
I tried some subtraction code but couldn't solve the problem.
I would like to output each image with only the man and without the landscape.
Are there algorithms in OpenCV that do this (fully automatically, so no markers or other manual operations)?
I tried the Python code from CV - Extract differences between two images,
but it doesn't work, because in my case I don't have an image with only the background (without the man).
I think a good solution would be to compare all the images and save the "points" (pixels) that stay the same across them.
That way I can extrapolate a background (call it "Result.jpg") and finally analyze each image and cut out the portions that are also present in "Result.jpg".
Do you think this is a good idea? Do you have any simpler ideas?
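A rough sketch of that idea, assuming the camera is static and the man stands in a different spot in each shot (file names are made up): take the per-pixel median of the three images as the background, then keep whatever differs from it.

```python
import cv2
import numpy as np

# Made-up file names -- replace with your three aligned images.
imgs = [cv2.imread(f) for f in ("img1.jpg", "img2.jpg", "img3.jpg")]

# Per-pixel median: with three frames this keeps the value that at least
# two frames agree on, i.e. the background if the man moves between shots.
background = np.median(np.stack(imgs), axis=0).astype(np.uint8)

for i, img in enumerate(imgs):
    diff = cv2.absdiff(img, background)
    # Pixels that differ noticeably from the background belong to the man.
    mask = (diff.max(axis=2) > 30).astype(np.uint8) * 255  # threshold is a guess
    man_only = cv2.bitwise_and(img, img, mask=mask)
    cv2.imwrite(f"man_{i}.png", man_only)
```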
Without semantic segmentation, you can't do that.
Because all you can compute is where two images differ, and this does not give you the silhouette of the person, but an overlap of two silhouettes. You'll never know the exact outline.
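You can see the problem with a tiny experiment (file names are hypothetical):

```python
import cv2

a = cv2.imread("shot1.jpg")  # two aligned shots of the same scene
b = cv2.imread("shot2.jpg")

diff = cv2.cvtColor(cv2.absdiff(a, b), cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
# The mask shows the man in BOTH positions merged together,
# not a single clean silhouette.
cv2.imwrite("both_silhouettes.png", mask)
```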
I have been working on a python program using opencv that will help the user solve the Rubik's Cube. The most important & complicated part is identifying the cube to read the value of each of its sides.
I have had a decent amount of luck so far but am wanting to change the processing pipeline a little. I think it would make sense to isolate the cube from its background before trying to detect the (rounded) square stickers and read their colors.
Attached is an example of the sort of frame we would be dealing with. I'm not sure what the best method for isolating the cube from the background would be. I have tried background subtraction, which seemed somewhat promising (though the mask was very grainy & blotchy). But I am also wondering if it would make more sense to use something like object detection.
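If the graininess of that mask is the main obstacle, a morphological clean-up pass plus keeping the largest blob often helps. A sketch, assuming the mask from background subtraction is a binary uint8 image (the kernel size and file name are guesses to tune):

```python
import cv2
import numpy as np

# Assumed: the grainy binary mask produced by background subtraction.
mask = cv2.imread("cube_mask_raw.png", cv2.IMREAD_GRAYSCALE)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
# Opening removes small speckles; closing fills small holes in the cube blob.
clean = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel)

# Keep only the largest connected component, assuming that is the cube.
n, labels, stats, _ = cv2.connectedComponentsWithStats(clean)
if n > 1:
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    clean = np.where(labels == largest, 255, 0).astype(np.uint8)

cv2.imwrite("cube_mask_clean.png", clean)
```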
I have considered just making a "dumb" crop which makes the user align the cube within a reticle. However, I would prefer a more elegant solution and don't mind spending the additional time that entails.
Edit: maybe the mild bokeh could be used to identify the background, or is it too minuscule to detect consistently?
Thanks for any help!
I am quite new to OpenCV and DIP in general, so I need a bit of help with stitching two images. The background of the problem: two joined pieces of plastic were pulled apart, tearing the adhesive/glue between them. This is the image of the "glue" on the base:
and this is the image of "glue" on the other attached face:
As the backgrounds of the two images are not the same, I read that it's not possible to do ordinary stitching (because the features differ). And these two pieces are like jigsaw pieces which need to be rotated, so the problem is not as straightforward as panorama stitching.
How do I join such images together?
I was thinking of finding the white contours, then keeping one image fixed, rotating the other and computing the area of the merged contours, while storing the angle I rotate by. The area would be smallest when there is a perfect match.
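That brute-force search is easy to sketch in OpenCV, assuming the two "glue" regions are already available as same-sized binary masks (file names and the angle step are made up):

```python
import cv2
import numpy as np

fixed = cv2.imread("glue_base.png", cv2.IMREAD_GRAYSCALE)   # assumed binary masks
moving = cv2.imread("glue_face.png", cv2.IMREAD_GRAYSCALE)

h, w = moving.shape
center = (w / 2, h / 2)

best_angle, best_area = None, None
for angle in np.arange(0, 360, 0.5):  # angle step is a guess
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(moving, M, (w, h))
    merged = cv2.bitwise_or(fixed, rotated)
    area = cv2.countNonZero(merged)   # smaller merged area = better overlap
    if best_area is None or area < best_area:
        best_angle, best_area = angle, area

print(f"best angle: {best_angle} deg, merged area: {best_area}")
```

Note this only searches rotation about the image centre; in practice you would probably also need a translation search, or align the shape centroids first.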
This is not a complete answer, (no-one said answers have to be complete), but it may inspire you or someone else to work out a method.
I flipped one of your images vertically and flopped it horizontally, then put them both into Photoshop on two separate layers. I then set the "Blending Mode" to Difference, which is always a great way to align images - because they normally go black when images are aligned and there is no difference.
I then moved one layer around on the other. I guess you will need to do something similar to solve your problem - you just need to find something that your code can maximise or minimise.
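The OpenCV equivalent of that Photoshop "Difference" trick is cv2.absdiff: slide one image over the other and minimise the total difference. A sketch under the same flip/flop assumption as above (file names and the search window are made up, and a real solution would add rotation):

```python
import cv2
import numpy as np

a = cv2.imread("piece_a.png", cv2.IMREAD_GRAYSCALE)  # made-up file names
b = cv2.imread("piece_b.png", cv2.IMREAD_GRAYSCALE)
b = cv2.flip(b, -1)  # flip vertically and flop horizontally, as above

best = None
for dy in range(-20, 21):        # search window is a guess
    for dx in range(-20, 21):
        M = np.float32([[1, 0, dx], [0, 1, dy]])
        shifted = cv2.warpAffine(b, M, (b.shape[1], b.shape[0]))
        score = int(cv2.absdiff(a, shifted).sum())  # "blackness" of the blend
        if best is None or score < best[0]:
            best = (score, dx, dy)

print("best offset:", best[1:], "difference:", best[0])
```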
My question is rather about the feasibility of a task.
Note that I have read the solution to this question; however, as you can guess, I am not dealing with rectangles and cameras here.
Situation:
I need to save a lot of pictures in a folder, all of them obeying these rules:
In each picture, there is ONLY one object.
The object can be anything (car, horse, human hand ...)
The size and the format of the picture belong to a certain set.
The background of the object is ALWAYS white.
The color of the object itself can be anything else (including, why not, areas of white pixels)
Goal:
I want to detect if the object of each image is CENTERED.
Development environment:
Python
OpenCV
Do you think this is feasible ?
I hope my question is not too broad. I just ask if this can be done automatically without human intervention on the pictures. I have thousands of them. The program will save in a separate folder pictures in which the object is not centered.
EDIT:
Following the comments and answer above: for me, an object is centered if, when I draw a square or rectangle around it, the left and right edges of the square/rectangle are equally distant from the left and right sides of the image, and the top and bottom of the object are equally distant from the top and bottom of the picture.
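With that definition, the check reduces to thresholding away the white background, taking the bounding box of what remains, and comparing the margins. A minimal sketch (the near-white threshold and pixel tolerance are assumptions to tune):

```python
import cv2
import numpy as np

def is_centered(path, white_thresh=240, tol=5):
    """Return True if the non-white object is centered within tol pixels."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # The object is everything darker than the near-white background.
    mask = img < white_thresh
    xs = np.where(mask.any(axis=0))[0]   # columns containing the object
    ys = np.where(mask.any(axis=1))[0]   # rows containing the object
    if len(xs) == 0:
        return False  # no object found
    left, right = xs[0], img.shape[1] - 1 - xs[-1]
    top, bottom = ys[0], img.shape[0] - 1 - ys[-1]
    return abs(left - right) <= tol and abs(top - bottom) <= tol
```

White areas inside the object are fine as long as its outline is not white, since only the bounding box matters.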
Yep, this is very feasible. However, depending on the type of objects the images contain, there are different ways to accomplish this. Assuming the objects in the images all have a uniform color, you can easily run a color detection algorithm, find the centre point of the object in pixels, and determine its position using the image resolution as the reference.
As the background is always white, as specified, this is probably your best method, since you can just extract all the non-white (or off-white) objects within the image.
If you do decide to go with this approach, I should be able to point you to some relevant code.
Although it's written in C++, more information on this can be found in the link below.
http://opencv-srf.blogspot.co.uk/2010/09/object-detection-using-color-seperation.html
The link covers object detection in video, but since a video is just a series of images, the same concept can be used on still images.
I've been trying to identify ellipses in these pictures for a long time now for a project I'm working on. At the moment I'm trying a new method with a bit of success. I blur the image then subtract the original from it. After that I threshold that image which is how I get this: http://imgur.com/gIkv30A
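For reference, that blur/subtract/threshold step looks something like this (the kernel size, threshold, and file name are placeholders for whatever values were actually used):

```python
import cv2

img = cv2.imread("original.png", cv2.IMREAD_GRAYSCALE)  # placeholder name
blurred = cv2.GaussianBlur(img, (21, 21), 0)            # kernel size is a guess
# Subtracting the original from the blur leaves the high-frequency detail.
diff = cv2.subtract(blurred, img)
_, thresh = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)  # threshold is a guess
cv2.imwrite("thresholded.png", thresh)
```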
I've been trying a few methods but have had pretty much no success with any of them. I can't get any more of the noise removed without compromising the quality of the ellipses I have found, but the ellipses I want to find seem to be decently defined.
If anyone has an idea on where I can go now I'd love to hear it.
Thanks,
Andy
edit:
Original Image: http://imgur.com/3ttIFiz
The main method I've tried so far uses an adaptive threshold on the image, then fits an ellipse around each of the contours I find after that (a rough sketch is below). It works quite well on one set of images, but performs very poorly on this set. I can see my current method working well on both if I get it right.
How well it works with old images: http://imgur.com/eUYiYNa
How well it works with the new (more relevant to the program) images: http://imgur.com/1UXxXAp
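For context, the adaptive-threshold-plus-fitEllipse pipeline described in the edit is roughly this (the block size, constant, and file name are guesses, not the values actually used):

```python
import cv2

img = cv2.imread("original.png", cv2.IMREAD_GRAYSCALE)  # placeholder name
thresh = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 11, 2)  # params are guesses
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for c in contours:
    if len(c) >= 5:  # cv2.fitEllipse needs at least 5 points
        cv2.ellipse(out, cv2.fitEllipse(c), (0, 255, 0), 1)
cv2.imwrite("ellipses.png", out)
```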