Interpolating blacked-out ROIs in an image with Python

I have been browsing the internet and Stack Overflow in order to find a solution to my problem, but to no avail.
So here is my problem:
Problem
I have a series of images with specific ROIs where I detect a signal change. In order to extract the signal I need to subtract the background of the image from the actual signal. Unfortunately I can't just subtract the images, as this doesn't remove the background noise sufficiently.
Solution Idea
What I want to do is cut out (black out) my ROIs and then do an interpolation across the entire "reduced" image. Then I want to fill in the blacked-out ROIs again via interpolation. This way I can get an idea of what the background below my signal is actually doing. I have been playing around with griddata and RectBivariateSpline, but I haven't found a way that works.
So far I have been doing this in MATLAB with the function scatteredInterpolant, but I would like to do it in python.
Below is an image series that illustrates the concept. One can see that the third image is slightly blurry in the previously blacked-out ROIs.
Image processing concept
So, does Python provide a solution or method similar to MATLAB's scatteredInterpolant, or how could I best tackle this problem?
Thank you.
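A close Python counterpart is scipy.interpolate.griddata, which, like scatteredInterpolant, interpolates from scattered sample points. Below is a minimal sketch of the black-out-and-refill idea; the function name, ROI coordinates, and the choice of cubic interpolation are my own, not from any particular recipe:

```python
import numpy as np
from scipy.interpolate import griddata

def fill_masked(image, mask):
    """Re-fill the pixels where mask is True by interpolating from the rest."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    known = ~mask                        # pixels outside the blacked-out ROIs
    return griddata(
        (yy[known], xx[known]),          # scattered coordinates of known pixels
        image[known],                    # their intensities
        (yy, xx),                        # evaluate on the full pixel grid
        method='cubic',                  # 'linear' is faster and often enough
    )

# Usage: black out a rectangular ROI, then reconstruct the background under it.
# img = your_image.astype(float)
# mask = np.zeros(img.shape, dtype=bool)
# mask[120:180, 200:260] = True         # hypothetical ROI coordinates
# background = fill_masked(img, mask)
```

Passing every background pixel to griddata can be slow on large images; restricting the known points to a band around each ROI speeds it up considerably. OpenCV's cv2.inpaint is an alternative that fills masked regions under a different model.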

Related

Eliminate the background (the common points) of 3 images - OpenCV

Forgive me, but I'm new to OpenCV.
I would like to remove the common background in 3 images, where there is a landscape and a man.
I tried some subtraction code, but I couldn't solve the problem.
I would like to output each image with only the man and without the landscape.
Are there algorithms in OpenCV that do this (fully automatically, so with no manual steps such as markers)?
I tried the Python code from CV - Extract differences between two images, but it doesn't work, because in my case I don't have an image with only the background (without the man).
I think a good solution would be to compare all the images and keep those points that are the same across the images.
That way I can extrapolate a background (call it "Result.jpg") and finally analyze each image and cut out those portions that are also present in "Result.jpg".
Does this sound like a good idea? Do you have simpler ideas?
Without semantic segmentation, you can't do that.
All you can compute is where two images differ, and this does not give you the silhouette of the person, but an overlap of two silhouettes. You'll never know the exact outline.
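To illustrate the point, a minimal sketch of what frame differencing gives you (file names are placeholders): the changed region is the union of the two silhouettes, not the outline of the man in either frame.

```python
import cv2

# Two frames where the man stands in different places (placeholder paths).
img1 = cv2.imread('frame1.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('frame2.jpg', cv2.IMREAD_GRAYSCALE)

# Pixel-wise absolute difference: non-zero wherever *either* frame differs,
# i.e. both silhouettes overlapped, with no way to separate them.
diff = cv2.absdiff(img1, img2)
_, changed = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
cv2.imwrite('changed.png', changed)
```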

How can I compute an orthographic projection image from a perspective projection image?

My goal is to take an image captured by a camera and transform it into an orthographic image, without the effects of perspective.
I have a few objects of known size on a surface. I have a camera placed above and directed at those objects, as exemplified in the scene. The camera is capturing images as in image captured by the camera. I want to get an orthographic image of the environment as in orthographical image I want to get.
I have read a few posts, but did not really understand their relevance to my problem, as I am not an expert on these transforms. The answer from this question made me think it is possible, although I did not get how.
I would appreciate a clear explanation or pointing a clear tutorial, using Python or Lua if possible.
Any help is appreciated.
This is not possible without distorting the image. A straightforward explanation is that perspective causes some parts of the scene not to be visible; for example, the white line in the marked area is not visible, and there could be something small there that we are not able to observe. For those parts, the algorithm would have to produce some kind of prediction based on heuristics.
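For the parts of the scene that lie on a single plane (the surface your objects sit on), though, the standard tool is a planar homography: map four known points on that plane to where they should land in the top-down view. A sketch, where the pixel coordinates and output size are made-up values you would derive from your known-size objects:

```python
import cv2
import numpy as np

img = cv2.imread('perspective.jpg')      # placeholder path

# Four corners of a known-size object on the surface, as seen in the photo
# (hypothetical pixel coordinates), and where they belong in a metric,
# top-down view (here: a 400x400 px square).
src = np.float32([[420, 310], [880, 305], [950, 700], [350, 710]])
dst = np.float32([[0, 0], [400, 0], [400, 400], [0, 400]])

H = cv2.getPerspectiveTransform(src, dst)
topdown = cv2.warpPerspective(img, H, (400, 400))
cv2.imwrite('orthographic.jpg', topdown)
```

This rectifies only points on that plane; anything with height above the surface gets smeared, which is exactly the distortion described above.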

Detect if a camera angle changed

I have a fixed camera and I need to check whether its position or orientation has changed. I am trying to use OpenCV for this (calculating differences between a reference image and a new one), but I am pretty new to OpenCV (and image processing in general), and I am not really sure which algorithm would be best to use, or how to interpret the results to decide whether the camera has been moved or rotated. Any ideas?
Please help.
One way to do it would be to register the two frames to each other using affine image registration from OpenCV. From this you can extract the rotation and displacement difference between the two frames. Unfortunately this will only work well for in-plane rotations, but I still think it is your best bet.
If you post some sample code and data I would be happy to take a look.
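A sketch of that idea using ORB feature matching plus cv2.estimateAffinePartial2D; the paths and decision thresholds are assumptions you would tune on footage of the camera standing still:

```python
import cv2
import numpy as np

ref = cv2.imread('reference.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder paths
cur = cv2.imread('current.jpg', cv2.IMREAD_GRAYSCALE)

# Match ORB features between the reference and the current frame.
orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(cur, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Robustly estimate an in-plane rotation + translation between the frames.
M, inliers = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC)
angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))   # rotation in degrees
shift = np.hypot(M[0, 2], M[1, 2])                 # translation in pixels

moved = abs(angle) > 0.5 or shift > 2.0            # hypothetical thresholds
print(f'rotation={angle:.2f} deg, shift={shift:.1f} px, moved={moved}')
```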
You can also use Canny or HoughLinesP to find lines, extract the corresponding lines from the two frames, and compare them. This may be effective against a simple, static background. If there are objects in your picture, try SIFT or another feature extractor; you can use the matched features to find the relationship between the two frames.
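A sketch of the line-based variant; comparing the detected segments against those from the reference frame is the application-specific step, and the Canny and Hough parameters below are illustrative guesses:

```python
import cv2
import numpy as np

frame = cv2.imread('frame.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder path

# Edge map, then probabilistic Hough transform for straight line segments.
edges = cv2.Canny(frame, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)

# Each entry is [x1, y1, x2, y2]; compare angles/positions of the most
# prominent segments between the reference and the current frame.
if lines is not None:
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1))
              for x1, y1, x2, y2 in lines[:, 0]]
    print(sorted(angles)[:5])
```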

How to get background from cv2.BackgroundSubtractorMOG2?

Is there any way to obtain background from cv2.BackgroundSubtractorMOG2 in python?
In other words, is there any technique to compute an image based on the last n frames of a video, which can then be used as the background?
Such a technique would be pretty complicated, but you might want to look at some keywords: image stitching, gradient-based methods, PatchMatch, image filling. MATLAB, for example, has a function that tries to interpolate missing values from nearby pixels. You could extend this method to work in 3D (it shouldn't be so difficult in the linear case).
More generally, it is an ill-posed problem, since there is no way to know what goes in the missing region.
To address your question specifically, you might first take the difference between the original frame and the extracted foreground, which should reveal the background. Then use ROI fill-in or a similar method. There are likely some examples you can find on the web, such as this.
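A sketch of that idea: run MOG2 over the video and average each pixel only over the frames in which the subtractor labels it as background (the path and loop structure are illustrative). Note that recent OpenCV versions also expose subtractor.getBackgroundImage(), which returns the model's internal background estimate directly.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture('video.mp4')                  # placeholder path
subtractor = cv2.createBackgroundSubtractorMOG2()

acc = None
count = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)                     # 0 = background pixel
    bg = (fg == 0)
    if acc is None:
        acc = np.zeros(frame.shape, np.float64)
        count = np.zeros(frame.shape[:2], np.float64)
    acc[bg] += frame[bg]                             # sum background pixels...
    count[bg] += 1                                   # ...and how often seen

# Per-pixel mean over the frames where each pixel was background.
background = (acc / np.maximum(count, 1)[..., None]).astype(np.uint8)
cv2.imwrite('background.png', background)
```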

Best Method for Identifying Broken Ellipses

I've been trying to identify ellipses in these pictures for a long time now, for a project I'm working on. At the moment I'm trying a new method with a bit of success: I blur the image, then subtract the original from it. After that I threshold that image, which is how I get this: http://imgur.com/gIkv30A
I've been trying a few methods but have had pretty much no success with any of them. I can't get any more of the noise removed without compromising the quality of the ellipses I have found, but the ellipses I want to find seem to be decently defined.
If anyone has an idea on where I can go now I'd love to hear it.
Thanks,
Andy
Edit:
Original Image: http://imgur.com/3ttIFiz
The main method I've tried so far uses an adaptive threshold on the image and then fits an ellipse around each of the contours I find after that. It works quite well on one set of images, but performs very poorly on this set. I can see my current method working well on both if I get it right.
How well it works with old images: http://imgur.com/eUYiYNa
How well it works with the new (more relevant to the program) images: http://imgur.com/1UXxXAp
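For reference, a minimal sketch of the adaptive-threshold-plus-fitEllipse pipeline described in the edit; the block size, area cut-off, and paths are guesses, not tuned values:

```python
import cv2

img = cv2.imread('ellipses.png', cv2.IMREAD_GRAYSCALE)   # placeholder path

# Adaptive threshold, then fit an ellipse to every sufficiently large contour.
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 51, 5)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4 signature

out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for c in contours:
    if len(c) >= 5 and cv2.contourArea(c) > 100:   # fitEllipse needs 5+ points
        cv2.ellipse(out, cv2.fitEllipse(c), (0, 255, 0), 2)
cv2.imwrite('fitted.png', out)
```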
