Detect if a camera angle changed - python

I have a fixed camera and I need to check if its position or orientation has changed. I am trying to use OpenCV for this (calculating differences between a reference image and a new one), but I am pretty new to OpenCV (and image processing in general) and I am not really sure what specific algorithm would be best, or how to interpret the results to find out whether the camera has been moved or rotated. Any ideas?
Please help.

One way to do it would be to register the two frames to each other using affine image registration from OpenCV. From this you can extract the rotation and displacement difference between the two frames. Unfortunately this will only work well for in-plane rotations, but I still think it is your best bet.
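A minimal sketch of that idea using OpenCV's ECC registration (cv2.findTransformECC) with a Euclidean motion model; the file names and the movement thresholds below are placeholders, not part of the original answer:

```python
import cv2
import numpy as np

# Load the reference frame and the current frame as grayscale (placeholder paths).
ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)

# Estimate a Euclidean (rotation + translation) warp that aligns cur to ref.
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
_, warp = cv2.findTransformECC(ref, cur, warp, cv2.MOTION_EUCLIDEAN, criteria, None, 5)

# Extract the in-plane rotation angle and translation from the 2x3 warp matrix.
angle_deg = np.degrees(np.arctan2(warp[1, 0], warp[0, 0]))
dx, dy = warp[0, 2], warp[1, 2]

# Flag the camera as moved if the estimated motion exceeds arbitrary thresholds.
moved = abs(angle_deg) > 0.5 or np.hypot(dx, dy) > 2.0
print(f"rotation={angle_deg:.2f} deg, shift=({dx:.1f}, {dy:.1f}) px, moved={moved}")
```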
If you post some sample code and data I would be happy to take a look.

You can use Canny or HoughLinesP to find lines; from these you can pick a couple of corresponding lines and compare them. This may be effective against a simple background. If there are objects in your picture, try SIFT or another feature extractor; the matched features give you the relationship between the two frames.
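A rough sketch of the line-based comparison: compute the dominant line angle with Canny + HoughLinesP in the reference frame and in a new frame, then compare them. The file names and thresholds are placeholders:

```python
import cv2
import numpy as np

def dominant_angle(gray):
    """Return the median angle (degrees) of the strongest lines in an image."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    if lines is None:
        return None
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1))
              for x1, y1, x2, y2 in lines[:, 0]]
    return float(np.median(angles))

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
cur = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)

a_ref, a_cur = dominant_angle(ref), dominant_angle(cur)
if a_ref is not None and a_cur is not None:
    print("dominant line angle change:", abs(a_cur - a_ref), "degrees")
```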

Related

How can I compute orthographical projection image from perspective projection image?

My goal is to take an image captured by a camera and transform it into an orthographic image, without the effects of perspective.
I have a few objects of known size on a surface. I have a camera placed above and directed at those objects. The camera captures images of the scene in perspective, and I want to get an orthographic image of the same environment.
I have read a few posts, but did not really understand their relevance to my problem, as I am not an expert on these transforms. The answer to this question made me think it is possible, although I did not get how.
I would appreciate a clear explanation or a pointer to a clear tutorial, using Python or Lua if possible.
Any help is appreciated.
This is not possible without distorting the image. A straightforward explanation is that perspective causes some parts of the scene not to be visible; for example, the white line in the marked area is not visible, and there could be something small that we are not able to observe. For those parts, the algorithm would have to produce some kind of prediction based on heuristics.
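For the parts of the ground plane that are visible, the usual approximation is a planar homography warp; it does not recover the occluded areas described above. A minimal sketch, assuming four known ground-plane points in the image and their target positions in the top-down view (all coordinates and sizes are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("perspective_view.png")  # placeholder path

# Four points on the ground plane as seen in the image (placeholder coordinates)...
src = np.float32([[420, 310], [880, 330], [980, 700], [300, 680]])
# ...and where those points should sit in the orthographic (top-down) output,
# spaced according to the known real-world size of the objects.
dst = np.float32([[0, 0], [500, 0], [500, 500], [0, 500]])

# Homography that removes the perspective of the ground plane.
H = cv2.getPerspectiveTransform(src, dst)
top_down = cv2.warpPerspective(img, H, (500, 500))
cv2.imwrite("orthographic.png", top_down)
```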

How do you detect if there is motion between frames using opencv without simply subtracting the frames?

I have a camera in a fixed position looking at a target, and I want to detect whether someone walks in front of the target. The lighting in the scene can change, so subtracting the new frame from the previous frame would detect motion even though none has actually occurred. I have thought of comparing the number of contours between the two frames (obtained by calling findContours() on a binary edge image from Canny and taking its size()), since a big change there could denote movement while being less sensitive to lighting changes. I am quite new to OpenCV and my implementations have not been successful so far. Is there a way I could make this work, or will I have to just subtract the frames? I don't need to track the person, just detect whether they are in the scene.
I am a bit rusty but there are various ways to do this.
SIFT and SURF are very expensive operations, so I don't think you would want to use them.
There are several 'background removal' methods.
Average removal: in this one you take the average of N frames and consider it the background. This is vulnerable to many things: light changes, shadows, a moving object staying in one location for a long time, etc.
Gaussian Mixture Model: a bit more advanced than the running average, but still vulnerable to a lot of things (a sketch using OpenCV's implementation follows this list).
IncPCP (incremental principal component pursuit): I can't remember the algorithm in full, but the basic idea was to convert each frame to a sparse form and then extract the moving objects from the sparse matrix.
Optical flow: you find the change across the temporal domain of a video. For example, you compare frame 2 with frame 1 block by block and determine the direction of change.
CNN-based methods: I know there are a bunch of them, but I didn't really follow them, so you might have to do some research. As far as I know, they are often better than the methods above.
Note that at 30 fps your code should complete in about 33 ms per frame for it to run in real time. You can find a lot of code available for this task.
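For reference, a minimal sketch of the Gaussian Mixture Model approach mentioned above, using OpenCV's MOG2 background subtractor; the video source and the foreground-area threshold are placeholders:

```python
import cv2

cap = cv2.VideoCapture("camera.mp4")          # placeholder source (or a device index)
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                             detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = backsub.apply(frame)                               # 255 = foreground, 127 = shadow
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels
    if cv2.countNonZero(fg) > 0.01 * fg.size:               # placeholder area ratio
        print("motion detected")

cap.release()
```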
There are a handful of ways you could do this.
The first that comes to mind is doing a 2D FFT on the incoming images. Color shouldn't affect the FFT too much, but an object moving or entering/exiting the frame will.
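A rough sketch of that FFT idea, comparing the magnitude spectra of two frames with NumPy; the file names and the change threshold are placeholders:

```python
import cv2
import numpy as np

def magnitude_spectrum(gray):
    """Log-magnitude of the centred 2D FFT of a grayscale frame."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    return np.log1p(np.abs(f))

prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
curr = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

m_prev, m_curr = magnitude_spectrum(prev), magnitude_spectrum(curr)
# Mean absolute change of the spectrum, normalised by the reference spectrum.
change = np.mean(np.abs(m_curr - m_prev)) / np.mean(m_prev)
print("spectral change:", change, "-> motion" if change > 0.02 else "-> static")
```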
The second is to use SIFT or SURF to generate a list of features in an image. You can insert these points into a map, sorted however you like, then do a set_difference between the last image you took and the current one. You could also use the FLANN functionality to compare the generated features.

Interpolating blacked out ROI in an image with python

I have been browsing the internet and stack overflow in order to find a solution to my problem, but to no avail.
So here is my problem:
Problem
I have a series of images with specific ROIs where I detect a signal change. In order to extract the signal I need to subtract the background of the image from the actual signal. Unfortunately I can't just subtract the images, as this doesn't remove the background noise sufficiently.
Solution Idea
What I want to do is cut out (black out) my ROIs and then interpolate across the entire "reduced" image, so that the blacked-out ROIs are filled in again via interpolation. This way I can get an idea of what the background below my signal is actually doing. I have been playing around with griddata and RectBivariateSpline, but I haven't found a way that works.
So far I have been doing this in MATLAB with the function scatteredInterpolant, but I would like to do it in Python.
Below is an image series that describes the concept. One can see the third image being slightly blurry in the previously blacked-out ROIs.
Image processing concept
So, does Python provide a solution or approach similar to MATLAB's scatteredInterpolant, or how could I best tackle this problem?
Thank you.
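For what it's worth, a minimal sketch of the blacking-out-and-interpolating idea with scipy.interpolate.griddata, which is the closest SciPy analogue to scatteredInterpolant; the image and mask arrays are placeholders:

```python
import numpy as np
from scipy.interpolate import griddata

# image: 2D array with the signal; roi_mask: boolean array, True inside the ROIs.
image = np.load("frame.npy")        # placeholder
roi_mask = np.load("roi_mask.npy")  # placeholder

# Coordinates and values of the pixels outside the ROIs (the "reduced" image).
yy, xx = np.nonzero(~roi_mask)
values = image[~roi_mask]

# Coordinates of the blacked-out pixels that should be filled in.
roi_y, roi_x = np.nonzero(roi_mask)

# Interpolate the background into the ROIs from the surrounding pixels.
# Note: points outside the convex hull of the known pixels come back as NaN.
filled = image.copy()
filled[roi_mask] = griddata((yy, xx), values, (roi_y, roi_x), method="linear")

# The background estimate can then be subtracted from the original to isolate the signal.
signal = image - filled
```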

opencv-python object detection

I'm a beginner in OpenCV using Python. I have many 16-bit grayscale images and need to detect the same object every time in the different images. I tried template matching in OpenCV Python, but I needed to use different templates for different images, which is not desirable. Can anyone suggest an algorithm in Python to do this efficiently?
Your question is way too general. Feature matching is a very vast field.
The type of algorithm to be used totally depends on the object you want to detect, its environment etc.
So if your object won't change its size or angle in the image, then use template matching (a sketch follows this list).
If the object will change its size and orientation, you can use SIFT or SURF.
If your object has unique color features that differ from its background, you can use the HSV method.
If you have to classify a group of images as your object, for example all cricket bats should be detected, then you can train with a number of positive images to tell the computer what the object looks like and negative images to tell it what it doesn't; this can be done with Haar training.
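For the template matching case above, a minimal sketch with cv2.matchTemplate; the 16-bit inputs are converted to float32 since matchTemplate only accepts 8-bit or 32-bit float images, and the file names and score threshold are placeholders:

```python
import cv2

# 16-bit grayscale images load unchanged with IMREAD_UNCHANGED (placeholder paths).
image = cv2.imread("scene.tif", cv2.IMREAD_UNCHANGED)
template = cv2.imread("template.tif", cv2.IMREAD_UNCHANGED)

# Normalised cross-correlation is fairly robust to global brightness changes.
result = cv2.matchTemplate(image.astype("float32"), template.astype("float32"),
                           cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

h, w = template.shape[:2]
if max_val > 0.7:   # placeholder confidence threshold
    print("object at", max_loc, "to", (max_loc[0] + w, max_loc[1] + h))
```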
You can try the sliding window method if your object is the same in all samples.
One way to do this is to look for known colors, shapes, and sizes.
You could start by performing an HSV threshold on your image, by converting your image to HSV colorspace and then calling
cv2.inRange(source, (minHue, minSat, minVal), (maxHue, maxSat, maxVal))
Next, you could use cv2.findContours to find all the areas in your image that meet your color requirements. Then, you could use methods such as boundingRect and contourArea to find specific attributes of the object that you want.
What you will end up with is essentially a 'pipeline' that can take a frame, and look for a shape that fits the criteria you have set. Depending on the complexity of what you want to do (you didn't say what you're looking for), this may or may not work, but I have used it with reasonable success.
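Something like the following sketch for such a pipeline, assuming an 8-bit color input; the HSV bounds and the filtering criteria are placeholders:

```python
import cv2

frame = cv2.imread("frame.png")                    # placeholder path, 8-bit BGR
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Keep only the pixels inside the colour range of interest (placeholder bounds).
mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))

# Find connected regions in the mask; [-2] works across OpenCV 3.x and 4.x return styles.
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

for c in contours:
    area = cv2.contourArea(c)
    x, y, w, h = cv2.boundingRect(c)
    # Filter by whatever attributes describe your object (placeholder criteria).
    if area > 500 and 0.5 < w / h < 2.0:
        print("candidate object at", (x, y, w, h), "area", area)
```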
GRIP is an application that allows you to threshold things in a visual way, and it will also generate Python code for you if you want. I don't really recommend using the generated code as-is because I've run into some problems that way. Here's the link to GRIP: https://github.com/WPIRoboticsProjects/GRIP
If the object you want to detect has a different size in every image and also varies slightly in shape, then I recommend you use a Haar cascade for that object. If the object is very general, you can easily find a Haar cascade for it online. Otherwise it is not very difficult to make Haar cascades (it can be a little time consuming though).
You can use this tutorial by sentdex to make a Haar cascade, or, if you want to know how to use Haar cascades, there is a linked guide for that as well.
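Once you have a trained cascade XML, applying it is short; a sketch with cv2.CascadeClassifier, where the cascade file, image path, and parameters are placeholders:

```python
import cv2

# Load a trained cascade (placeholder file name) and a test image.
cascade = cv2.CascadeClassifier("my_object_cascade.xml")
gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# detectMultiScale scans the image at several scales; tune these parameters for your object.
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                      minSize=(30, 30))
for (x, y, w, h) in detections:
    print("object at", (x, y, w, h))
```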

Eliminate unwanted keypoints

I would like to eliminate the keypoints detected around the frame of an image (an artwork in a museum gallery). In other words, I want to separate the actual artwork from its frame. Each artwork has a different type of frame.
[Image: keypoints detected using SIFT]
I have already written a Python wrapper for David Lowe's SIFT implementation to detect keypoints as well as to compute descriptors.
However, my question is: what is the best approach to solve this problem? Any of the following, or something else?
Using Hough transformation (using Python Image Library)
Template matching
Your help is highly appreciated. Thanks again.
I'd go with Hough transform and try to detect lines which form a quadrilateral.
You might get into trouble if the painting itself contains a square or something similar. I'd look for some assumptions like an acceptable aspect ratio and an acceptable size. Also find the outermost quadrilateral and work your way towards the center of the image, picking up inner quadrilaterals if applicable. This would give you the frame and its thickness, so you can disregard any keypoints on or beyond the frame.
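A sketch of one way to get that outer quadrilateral; it uses contours and approxPolyDP rather than assembling Hough lines, but serves the same purpose of masking out keypoints on or outside the frame. The paths and thresholds are placeholders:

```python
import cv2

gray = cv2.imread("artwork.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
edges = cv2.Canny(gray, 50, 150)

# Find the largest roughly four-sided contour, assumed here to be the picture frame.
contours = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
frame_quad = None
for c in sorted(contours, key=cv2.contourArea, reverse=True):
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4 and cv2.contourArea(approx) > 0.1 * gray.size:
        frame_quad = approx
        break

# Keep only keypoints strictly inside the detected quadrilateral.
if frame_quad is not None:
    keypoints = []  # your SIFT keypoints would go here
    inside = [kp for kp in keypoints
              if cv2.pointPolygonTest(frame_quad, kp.pt, False) > 0]
```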
For each artwork, do you have a clean, properly framed reference image?
If so, another solution to remove the background clutter is:
to use the ratio test algorithm to compute keypoint correspondences between your frame and the reference image,
to perform a geometric consistency check to filter out false matches.
In addition, the geometric check will provide you with the homography matrix, which you can use to warp your input frame or, alternatively, to project the corners of the reference image.
That way you will natively obtain the artwork area within your frame.
Here's an example of how you can do that with opensift's match tool.
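The same pipeline can also be put together with OpenCV's own SIFT and matcher. A sketch, assuming a build of opencv-python where cv2.SIFT_create is available; the image paths and ratio/RANSAC thresholds are placeholders:

```python
import cv2
import numpy as np

ref = cv2.imread("reference_artwork.png", cv2.IMREAD_GRAYSCALE)   # clean reference image
frame = cv2.imread("gallery_photo.png", cv2.IMREAD_GRAYSCALE)     # placeholder paths

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_frm, des_frm = sift.detectAndCompute(frame, None)

# Lowe's ratio test to keep only distinctive correspondences.
matches = cv2.BFMatcher().knnMatch(des_ref, des_frm, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

if len(good) >= 4:
    # Geometric consistency check: robust homography estimation with RANSAC.
    src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project the reference corners into the photo to outline the artwork area.
    h, w = ref.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    artwork_outline = cv2.perspectiveTransform(corners, H)
```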
