Detecting shift in an image using OpenCV - Python

I'm looking for some advice on what to use for this problem.
I have a camera feed of a flat surface. I want to be able to mark a point on that surface, and then to automatically track the movement of that point in the frame. In other words, I want to track the shift and rotation of the image compared to the initial state.
I think this should be possible using OpenCV or something similar, but I'm unable to find the right tools.
Could someone point me in the right direction?
Preferably something in Python, but other approaches are welcome too.
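A rough sketch of one common approach (not necessarily the best one): match ORB features between a reference frame and the current frame, then fit a partial affine transform with RANSAC. The resulting matrix gives the shift and rotation, and it can also map your marked point into the new frame. The file names and parameter values below are placeholders, and for a pure translation cv2.phaseCorrelate would also work.

    # Sketch: estimate shift and rotation of the current frame relative to a
    # reference frame. "reference.png" / "current.png" are placeholder names.
    import cv2
    import numpy as np

    ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    cur = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(ref, None)
    kp2, des2 = orb.detectAndCompute(cur, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Rotation + uniform scale + translation, robust to outlier matches.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

    angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))  # rotation in degrees
    shift = (M[0, 2], M[1, 2])                        # translation in pixels
    print(f"rotation: {angle:.2f} deg, shift: {shift}")

    # The same matrix maps your marked point into the current frame:
    point = np.array([[100.0, 150.0, 1.0]])           # example point (x, y, 1)
    moved = (M @ point.T).T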

Related

How to track a real track image, for race level design?

I am creating a game using pygame and Python.
I have developed a square that goes up and down, following a path.
How do I set a particular path for the square?
My question is: how do I take this image and convert it into a path equation that the square follows?
Any other idea for using this or any other track from an image is appreciated.
I tried imread() in MATLAB and many other things. I don't think grayscale applies here. I even tried superimposing it on a grid and plotting points along the track.
I am out of ideas, and from all the videos I have seen I think I am missing something very basic here. Any help?
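One rough way to approach this, as a sketch rather than a finished solution: extract the track outline from the image with OpenCV and use its contour points as waypoints for the pygame square. The file name and threshold below are assumptions (a dark track drawn on a light background), and OpenCV 4 is assumed for the findContours return values.

    # Sketch: turn a track image into a list of (x, y) waypoints.
    import cv2

    img = cv2.imread("track.png", cv2.IMREAD_GRAYSCALE)        # placeholder name
    _, mask = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    track = max(contours, key=cv2.contourArea)                  # largest contour = the track

    # Simplify the contour so the path has fewer waypoints.
    approx = cv2.approxPolyDP(track, 2.0, True)
    waypoints = [tuple(p[0]) for p in approx]                   # [(x1, y1), (x2, y2), ...]

    # In the pygame loop you would move the square toward waypoints[i],
    # advancing i whenever the square gets close enough to the current waypoint.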

How to detect the direction of a finger pointing

I am using Python with OpenCV and pyautogui to play Tetris (via a website) with your hands. To play the game there are four directions (up/down/left/right) that I want to detect and map to pyautogui inputs. I have tried many ways to detect the direction the finger is pointing, but nothing works consistently. Any help with the current solution I am working on, or another idea, would be great.
I am currently trying to use findContours with convexHull to detect the perimeter of the hand and find the vertex with the smallest angle, which in turn should tell me which way the finger is pointing. But I don't know how to extract that information.
I have tried using a boundingRect to find which side is longer (w or h), then splitting the rectangle in half and seeing which half has more pixels (from the binary of the contour), but it wasn't consistent enough.
I have also tried using HoughLines and HoughLinesP to find a vanishing point, but I couldn't figure out how to get that to work.
Basically I am at a loss.
Edit: I'm trying to avoid MediaPipe so I can show that I understand how to use the elements of OpenCV.
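A rough sketch of one heuristic that builds on findContours/convexHull: take the convex-hull point farthest from the contour centroid as the fingertip, then compare it to the centroid to pick one of the four directions. "mask" is assumed to be your binary hand image (e.g. from skin thresholding); this is not guaranteed to be robust.

    # Sketch: pointing direction from a binary hand mask.
    import cv2
    import numpy as np

    def pointing_direction(mask):
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        hand = max(contours, key=cv2.contourArea)

        m = cv2.moments(hand)
        if m["m00"] == 0:
            return None
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # contour centroid

        hull = cv2.convexHull(hand).reshape(-1, 2)
        dists = np.hypot(hull[:, 0] - cx, hull[:, 1] - cy)
        tip = hull[np.argmax(dists)]                        # farthest hull point = fingertip

        dx, dy = tip[0] - cx, tip[1] - cy
        if abs(dx) > abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"                   # image y grows downward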

Best OpenCV algorithm for detecting a fast-moving ball?

I am new to OpenCV. I am working on a project that involves tracking and detecting a spinning roulette ball. Here is the video I want to use: https://www.youtube.com/watch?v=IzZNaVQ3FnA&list=LL_a67IPXKsmu48W4swCQpMQ&index=7&t=0s
I want to get the time the ball takes for one revolution, but the ball is quite fast and hard to detect. I am not sure how to overcome this.
What would be the best algorithm for doing this?
By subtracting successive images, you will isolate the ball as a (slightly curved) line segment. Both its length and its angular position are cues for the speed.
Anyway, these parameters are a little tricky to extract for a side view, as the ellipse has to be "unprojected" to a top view, to see the original circle. You need to know the relative position of the wheel and the viewer, which you most probably don't know.
An approximate solution is obtained by stretching the ellipse in the direction of the small axis.
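A minimal sketch of the frame-differencing idea above, assuming the video has been downloaded locally as "roulette.mp4" (placeholder name); the threshold and dilation settings are guesses you would need to tune.

    # Sketch: isolate the fast-moving ball by differencing successive frames.
    import cv2

    cap = cv2.VideoCapture("roulette.mp4")
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        diff = cv2.absdiff(gray, prev)                       # moving pixels = the ball streak
        _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        motion = cv2.dilate(motion, None, iterations=2)      # join the streak into one blob

        contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            ball = max(contours, key=cv2.contourArea)
            (x, y), r = cv2.minEnclosingCircle(ball)
            cv2.circle(frame, (int(x), int(y)), int(r), (0, 255, 0), 2)

        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:                      # Esc to quit
            break
        prev = gray

    cap.release()
    cv2.destroyAllWindows()

Logging the detected (x, y) over time and watching when the angular position wraps around would give you the revolution time.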

How to calculate spin from a set of translating images in Python

I am wondering if this can be done:
I have a set of images that start out looking forward; as the camera moves forward it also spins horizontally through a full 360 degrees.
So each image has a slightly different view as the camera rotates while moving forward.
The question is: can I accurately calculate the spin of the camera?
A follow-up question: can I also calculate the direction the image is moving along with the spin?
The idea would be to use a few points that you track across the transformation; from those points you can find the angle of rotation between each frame.
You might want to have a look at this page, which explains the maths:
http://nghiaho.com/?page_id=671
If you don't need to stick to Python, you could use MATLAB:
http://uk.mathworks.com/help/vision/examples/find-image-rotation-and-scale-using-automated-feature-matching.html?requestedDomain=uk.mathworks.com
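A rough Python sketch of the point-tracking idea, with one added assumption that is not from the links above: converting the median horizontal shift into a yaw angle via an assumed focal length in pixels (replace FOCAL_PX with your own calibration).

    # Sketch: per-frame horizontal spin (yaw) from Lucas-Kanade point tracking.
    import cv2
    import numpy as np

    FOCAL_PX = 800.0                                        # assumed focal length in pixels

    def yaw_between(prev_gray, next_gray):
        # Pick strong corners in the first frame and track them into the next.
        p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                     qualityLevel=0.01, minDistance=7)
        p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)
        ok = status.flatten() == 1

        # Median horizontal motion of the tracked points, converted to an angle.
        shift_x = np.median(p1[ok, 0, 0] - p0[ok, 0, 0])
        return np.degrees(np.arctan2(shift_x, FOCAL_PX))

    # Summing yaw_between() over consecutive frame pairs approximates the total
    # spin; the sign of each per-frame yaw tells you which way the camera turns.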

How to use gluUnProject?

I am working on an OpenGL project where I need to be able to click on things in 3D space. As far as I can tell, gluUnProject() will do that job. But I have heard that unexpected things might happen and that the accuracy can be thrown off. It could just be that those people used it wrong, or something else. Is there anything unusual I should know about gluUnProject()?
I once asked a question that covers what you seem to be looking for.
But basically, gluUnProject() lets you convert 2D screen coordinates (typically mouse coordinates) into 3D world-space coordinates.
You can then calculate two points: the first on the near plane and the second on the far plane. That gives you a line (a ray) which you can use for collision detection and picking.
This approach comes from another post, which describes in more detail what you seem to be seeking.
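A minimal PyOpenGL sketch of that near/far-plane ray, assuming there is a current GL context and that the mouse position is in window coordinates with the origin at the top-left (hence the y flip).

    # Sketch: build a pick ray from a mouse click using gluUnProject.
    from OpenGL.GL import (glGetDoublev, glGetIntegerv,
                           GL_MODELVIEW_MATRIX, GL_PROJECTION_MATRIX, GL_VIEWPORT)
    from OpenGL.GLU import gluUnProject

    def mouse_ray(mouse_x, mouse_y):
        model = glGetDoublev(GL_MODELVIEW_MATRIX)
        proj = glGetDoublev(GL_PROJECTION_MATRIX)
        view = glGetIntegerv(GL_VIEWPORT)

        win_y = view[3] - mouse_y                     # flip y: GL origin is bottom-left

        near = gluUnProject(mouse_x, win_y, 0.0, model, proj, view)  # point on near plane
        far = gluUnProject(mouse_x, win_y, 1.0, model, proj, view)   # point on far plane
        return near, far                              # the segment near->far is the pick ray

Intersecting that near-to-far segment with your scene geometry tells you what was clicked.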
