I am new to OpenCV. I am working on a project that involves tracking and detecting a spinning roulette ball. Here is the video I want to use: https://www.youtube.com/watch?v=IzZNaVQ3FnA&list=LL_a67IPXKsmu48W4swCQpMQ&index=7&t=0s
I want to measure the time the ball takes for one revolution, but the ball is quite fast and hard to detect. I am not sure how to overcome this.
What would be the best algorithm for doing this?
By subtracting successive images, you will isolate the ball as a (slightly curved) line segment. Both its length and its angular position are cues for the speed.
However, these parameters are a little tricky to extract from a side view, as the ellipse has to be "unprojected" to a top view to recover the original circle. That requires knowing the relative position of the wheel and the viewer, which you most probably don't.
An approximate solution is obtained by stretching the ellipse in the direction of its minor axis.
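A minimal sketch of the differencing step, assuming a fixed camera and a local copy of the video (the filename is a placeholder):

```python
import cv2

# Assumption: a fixed camera and a local copy of the video.
cap = cv2.VideoCapture("roulette.mp4")  # hypothetical filename

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Subtracting successive frames leaves only the moving ball,
    # smeared into a short (slightly curved) streak.
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        streak = max(contours, key=cv2.contourArea)
        # The streak's length and orientation are the speed cues
        # mentioned above.
        (cx, cy), (w, h), angle = cv2.minAreaRect(streak)
        print(f"center=({cx:.0f},{cy:.0f}) "
              f"length={max(w, h):.1f} angle={angle:.1f}")

    prev_gray = gray

cap.release()
```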
Related
I'm looking for some advice on what to use for this problem.
I have a camera feed of a flat surface. I want to be able to mark a point on that surface and then automatically track the movement of that point in the frame. In other words, I want to track the shift and rotation of the image relative to the initial state.
I think this should be possible using OpenCV or something similar, but I'm unable to find the right tools.
Could someone point me in the right direction?
Preferably something in Python, but other methods are welcome too.
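One possible starting point (a sketch, not a definitive answer): match ORB features between the initial frame and the current frame, then recover the shift and rotation with cv2.estimateAffinePartial2D. The filenames below are placeholders.

```python
import cv2
import numpy as np

# Placeholder filenames: the reference (initial) frame and a later frame.
ref = cv2.imread("ref.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("cur.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(cur, None)

# Brute-force Hamming matching is the usual pairing for ORB descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# A partial affine (rotation + translation + uniform scale) is enough
# for a flat surface viewed head-on; RANSAC rejects bad matches.
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
shift = (M[0, 2], M[1, 2])
print(f"rotation: {angle:.2f} deg, shift: {shift}")
```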
In my setup I have a depth camera looking down on a box. There is one object which should be moved (red rectangle) and obstacles (black things). I need to find a free direction in which to move the object a certain distance (let's say 1 m). I have the point cloud of the scene and the transformation between the camera and the ground plane. My idea was to reduce the point cloud to two dimensions, build some sort of occupancy map, and try to trace a free line pixel by pixel from the object's center, stepping clockwise in, say, 5-degree increments. However, I feel that this is too complicated an approach for such a task. Is there any simpler solution? Otherwise, how could I take the object's size into account? Just add half of the object's biggest dimension to each obstacle? But in that case it would consume a lot of safe space as well, because the object is not symmetrical. I use Python, so any library suggestion would also be very helpful. Thanks!
Setup:
https://i.stack.imgur.com/WjxFL.png
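For what it's worth, the pixel-by-pixel ray idea can be simplified with the standard configuration-space trick: inflate the obstacles in the occupancy grid by the object's actual footprint (e.g., with cv2.dilate using a kernel shaped like the object), which avoids the over-conservative "biggest dimension" padding; a direction is then free exactly when the straight line of grid cells along it is free. A rough sketch, assuming the 2D occupancy grid already exists (the grid, obstacle, and resolution below are placeholders):

```python
import cv2
import numpy as np

# Assumptions: `occupancy` is a uint8 grid (255 = obstacle, 0 = free)
# built from the flattened point cloud, and `px_per_m` is its resolution.
occupancy = np.zeros((400, 400), np.uint8)   # placeholder grid
occupancy[150:250, 260:300] = 255            # placeholder obstacle
px_per_m = 100
center = (200, 200)                          # object's center in grid cells

# Inflate obstacles by (roughly) the object's footprint instead of a
# single worst-case radius, so an asymmetric object wastes less space.
footprint = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (60, 30))
inflated = cv2.dilate(occupancy, footprint)

def direction_is_free(angle_deg, dist_m=1.0):
    """Check every grid cell along a ray of length dist_m."""
    steps = int(dist_m * px_per_m)
    dx = np.cos(np.radians(angle_deg))
    dy = np.sin(np.radians(angle_deg))
    for s in range(steps):
        x = int(center[0] + s * dx)
        y = int(center[1] + s * dy)
        if not (0 <= x < inflated.shape[1] and 0 <= y < inflated.shape[0]):
            return False
        if inflated[y, x]:
            return False
    return True

free = [a for a in range(0, 360, 5) if direction_is_free(a)]
print("free directions (deg):", free)
```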
I'm currently working on my first assignment in image processing (using OpenCV in Python, but I'm open to any libraries and languages). My assignment is to calculate a precise score (to tenths of a point) of one to several shooting holes in an image uploaded by a user. The issue is that the image uploaded by the user can be taken on different backgrounds (although the background will never match the mean colors of the rest of the target). Because of this, I have ruled out most of the solutions found on the internet and most of the solutions I could come up with.
Summary of my problem
Bullet holes identification:
bullet holes can be on different backgrounds
bullet holes can overlap
single bullet holes will always be of similar size (only one caliber is used on all of the scored shooting targets)
I'm able to calculate a very precise radius of the shooting hole
Shooting targets:
there are two types of shooting targets that my app is going to score (images provided below)
photos of the shooting targets can be taken in different lighting conditions
Shooting target 1 example:
Shooting target 2 example:
Shooting target examples to find bullet holes in:
shooting target example 1
shooting target example 2
shooting target example 3
shooting target example 4
shooting target example 5
What I tried so far:
Color segmentation
ruled out due to the varying backgrounds mentioned above
Difference matching
to be able to actually compare the target images (empty and fired on), I wrote an algorithm that crops the image to the target's largest outer circle (its radius plus the bullet size in pixels)
after that, I tried probably every image-comparison method I could find on the internet
for example: brute force matching, histogram comparisons, feature matching and many more
I failed here mostly because the colors of the two compared images were slightly different, and also because one image was sometimes taken at a slight angle, so the circles didn't overlap and were counted as differences (see the alignment sketch after this list)
Hough circles algorithm
since I know the radius (in pixels) of the shots on the target, I thought I could simply detect them with this algorithm (see the radius-constrained sketch after this list)
after several hours/days of tuning the parameters of the HoughCircles function, I concluded it would never work on all of the uploaded images without changing the parameters per image
Edge detection and finding contours of the bullet holes
I tried two edge-detection methods (Canny and Sobel) combined with image-smoothing algorithms (blurring, bilateral filtering, morphological operations, etc.)
after that, I tried to find all of the contours in the edge-detected image and filter out the target's own circles, which share a similar center point
this seemed like the solution at first, but on several test images it wouldn't work properly :/
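Regarding the difference-matching failures: the slight viewing angle can be removed before comparing by registering the photo onto the empty reference target with a RANSAC homography, so the rings line up first. A hedged sketch (filenames are placeholders, and it assumes enough features match between the two photos of the same target):

```python
import cv2
import numpy as np

# Placeholder filenames: the empty reference target and the fired-on photo.
ref = cv2.imread("target_empty.png", cv2.IMREAD_GRAYSCALE)
shot = cv2.imread("target_shot.png", cv2.IMREAD_GRAYSCALE)

# Match features between the two photos of the same target.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(shot, None)
kp2, des2 = orb.detectAndCompute(ref, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# A RANSAC homography removes the slight camera angle, so the rings of
# both images line up before any differencing is attempted.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(shot, H, (ref.shape[1], ref.shape[0]))

diff = cv2.absdiff(aligned, ref)
```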
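And on the Hough side: since the caliber is fixed, constraining minRadius/maxRadius to a narrow band around the known hole radius (after normalizing the image scale via the crop described above) shrinks the parameter space considerably. A sketch, with hole_radius_px standing in for the value computed from the crop:

```python
import cv2
import numpy as np

img = cv2.imread("target_shot.png")        # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)

hole_radius_px = 18  # assumed: derived from the crop's known scale

# Constraining the radius to a tight band around the known caliber
# makes the accumulator far less sensitive to the other parameters.
circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT, dp=1,
    minDist=hole_radius_px,                # overlapping holes may merge
    param1=100, param2=20,
    minRadius=int(hole_radius_px * 0.8),
    maxRadius=int(hole_radius_px * 1.2),
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)
```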
At this point, I have run out of ideas and have therefore come here for any kind of advice or idea that would push me further. Is it possible that there simply isn't a solution to such a complicated shooting-target recognition problem, or am I just too inexperienced to come up with one?
Thank you in advance for any help.
Edit: I know I could simply put a single-color paper behind the shooting target and find the bullets that way. This is not how I want the app to work, though, and therefore it's not a valid solution to my problem.
I have a Python program where people can draw simple line drawings using a touch screen. The images are documented in two ways. First, they are saved as actual image files. Second, I record 4 pieces of information at every refresh: the time point, whether contact was being made with the screen at the time (1 or 0), the x coordinate, and the y coordinate.
What I'd like to do is get some measure of how similar a given drawing is to any other drawing. I've tried a few things, including simple Euclidean distance and per-pixel similarity, and I've looked at the Fréchet distance. None of these gives what I'm looking for.
The issues are that each drawing might have a different number of points, one segment does not always immediately connect to the next, and the order of the points is irrelevant. For instance, if you and I both draw something as simple as an ice cream cone, I might draw the ice cream first, and you might draw the cone first. We may get an identical end result, but many of the most intuitive metrics would be totally thrown off.
Any ideas anyone has would be greatly appreciated.
If you care about how similar one drawing is to another, there's no need to collect data at every refresh; just collect it once the drawer is done drawing.
Then you can use Fourier analysis to break the images down into the frequency domain and run cross-correlations on that,
or some kind of 2D cross-correlation directly on the images, I guess.
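A minimal sketch of the second suggestion, assuming the two drawings were saved as same-sized grayscale images (filenames are placeholders). The peak of the normalized cross-correlation is invariant to translation and, notably, to stroke order:

```python
import cv2
import numpy as np

# Placeholder filenames; both drawings saved at the same resolution.
a = cv2.imread("drawing_a.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
b = cv2.imread("drawing_b.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Zero-mean the images so the empty background doesn't dominate the score.
a -= a.mean()
b -= b.mean()

# 2D cross-correlation via FFT: correlation in the spatial domain is
# multiplication by the complex conjugate in the frequency domain.
corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real

# Peak of the normalized correlation is a similarity score in [-1, 1].
score = corr.max() / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"similarity: {score:.3f}")
```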
I am wondering if this can be done:
I have a set of images that start out looking forward; as the sequence moves forward, the camera spins horizontally through a full 360 degrees.
So each image has a slightly different view as it spins around going forward.
The question is: can I accurately calculate how much the camera has spun?
A follow-up question: can I calculate the direction the image is moving as it spins?
The idea would be to track a few points across the transformation; from those points you can find the angle of rotation between each pair of frames.
You might want to have a look at this page, which explains the maths:
http://nghiaho.com/?page_id=671
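A sketch of that idea in Python, assuming two consecutive frames saved as images (filenames are placeholders): track corners with Lucas-Kanade optical flow, then fit a rotation + translation between the point sets, which is the same rigid-transform maths as in the linked page.

```python
import cv2
import numpy as np

# Placeholder filenames: two consecutive frames from the sequence.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Pick a few strong corners in the first frame and track them with
# pyramidal Lucas-Kanade optical flow.
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                   qualityLevel=0.01, minDistance=10)
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)

good_prev = pts_prev[status.ravel() == 1]
good_curr = pts_curr[status.ravel() == 1]

# Fit rotation + translation between the tracked point sets;
# RANSAC discards points that were tracked badly.
M, _ = cv2.estimateAffinePartial2D(good_prev, good_curr,
                                   method=cv2.RANSAC)

angle_per_frame = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
print(f"rotation between frames: {angle_per_frame:.2f} deg")
```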
If you don't need to stick to Python, you could use MATLAB:
http://uk.mathworks.com/help/vision/examples/find-image-rotation-and-scale-using-automated-feature-matching.html?requestedDomain=uk.mathworks.com