I am working on a project to figure out the difference between two objects and tag them with the proper model code.
I need suggestions on how to tackle such a problem with image processing using OpenCV; the images are below.
So far I have tried calculating the black-pixel difference between the two images after binary thresholding, and I have also counted the number of holes present on the gasket.
I also tried using feature points, but it didn't work well.
What else can be done to improve the detection?
Thank you
The holes are excellent features that can be robustly detected by blob analysis.
In the first place, locate the large circle and determine its center and radius. The radius might be a first discriminant feature.
Next, establish the configuration of the screw holes around the center. You can use the distance to the center, the number of holes and the angles they define around the center.
If this is still not enough, you can register the gaskets and compare them to the models by matching the screw holes, adjusting the rotation, then comparing pixel-wise with a similarity measure such as SAD or SSD.
I have two recordings from security cameras located on opposite corners of the room. Each camera has a dead spot (image attached).
I am looking for a technique in Python that allows creating a map of the objects in the room based on these two cameras' views.
Can you please suggest something?
Problem definition
Given two images from the same scene, stitch these images side-by-side by finding common parts (features) that overlap with each other.
Approach
In theory, the steps followed to solve the problem are as follows:
Detect feature points using Harris corner detection.
Create key-point descriptors by creating patches around them
Estimate key-point matches based on the Euclidean distance
Perform RANSAC to find the affine transformation matrix
Warp and apply geometric transformation
For the above steps you can refer to the OpenCV tutorials; other libraries may also come in handy for specific steps, e.g. the sklearn.feature_extraction module for building the descriptors.
Also, you can find many image-stitching projects on GitHub in case you don't want to reinvent the wheel.
I am new to Python and OpenCV. I am analysing images of clouds and need to remove the buildings so that the subsequent analysis has less noise. I tried Canny edge detection and then filling in the contours, but did not get very far. I also tried thresholding by pixel colour, but cannot reliably exclude just the buildings without also removing parts of the image containing clouds.
Is there a way I can efficiently and accurately remove the buildings and keep all of the clouds/sky? Thanks for the tips in advance.
You could use a computer vision model that finds the buildings; there may be some open-source ones out there. The only one I can think of at the moment is the semantic segmentation model below. There should be details on how to run it, and there could well be others.
https://github.com/CSAILVision/semantic-segmentation-pytorch
I think one of its classes is buildings, so you could run the model, take the predicted building region, and mask it out.
I'm currently working on a project to measure the surface area of plant leaves. So far I've successfully implemented an R-CNN model to segment individual leaves, and I've also generated a depth map using stereo computer vision, which allows me to calculate the distance between any two points.
Now I'm stuck trying to connect everything together in order to calculate the area of a leaf/polygon.
**I have the original RGB images, binary masks containing the leaves, and the depth information for every pixel.
Can someone please point me in the right direction?**
I reckon the right way would be to use Delaunay triangulation on the polygons in the binary masks and then calculate the surface area using the distances between the three points of each triangle. I haven't been able to find anything quite like my problem implemented in Python.
Thanks so much for your help in advance. I'll upload a picture of an RGB image with the masks plotted.
leaf instance segmentation
Count the pixels inside the outlines (by polygon filling) or use the shoelace formula.
I'm currently working on my first assignment in image processing (using OpenCV in Python, but I'm open to any libraries and languages). My assignment is to calculate a precise score (to tenths of a point) for one or several shooting holes in an image uploaded by a user. The issue is that the uploaded image can be taken on different backgrounds (although the background will never match the mean colors of the target itself). Because of this, I have ruled out most of the solutions found on the internet and most of the solutions I could come up with.
Summary of my problem
Bullet holes identification:
bullet holes can be on different backgrounds
bullet holes can overlap
single bullet holes will always be of similar size (there is only one type of caliber used on all of the calculated shooting targets)
I'm able to calculate a very precise radius of the shooting hole
Shooting targets:
there are two types of shooting targets that my app is going to calculate (images provided below)
photos of the shooting targets can be taken in different lighting conditions
Shooting target 1 example:
Shooting target 2 example:
Shooting target examples to find bullet holes in:
shooting target example 1
shooting target example 2
shooting target example 3
shooting target example 4
shooting target example 5
What I tried so far:
Color segmentation
ruled out due to the varying backgrounds mentioned above
Difference matching
to be able to compare the target images (empty and fired on), I wrote an algorithm that crops the target at its largest outer circle (its radius plus the bullet size in pixels)
after that, I tried probably every image-comparison method found on the internet
for example: brute-force matching, histogram comparison, feature matching and many more
I failed here mostly because the colors of the two compared images differed slightly, and because one of the images was sometimes taken at a slight angle, so the circles didn't overlap and were counted as differences
Hough circles algorithm
since I know the radius (in pixels) of the shots on the target I thought I could simply detect them using this algorithm
after several hours/days of playing with parameters of HoughCircles function, I figured it would never work on all of the uploaded images without changing the parameters based on the uploaded image
Edge detection and finding contours of the bullet holes
I tried two edge-detection methods (Canny and Sobel) while experimenting with image-smoothing algorithms (blurring, bilateral filtering, morphological operations, etc.)
after that, I have tried to find all of the contours in the edge detected image and filter out the circles of the target with a similar center point
this seemed like the solution at first, but on several test images it wouldn't work properly :/
At this point, I have run out of ideas and therefore came here for any kind of advice or idea that would push me further. Is it possible that there simply isn't a solution to such complicated shooting-target recognition, or am I just too inexperienced to come up with one?
Thank you in advance for any help.
Edit: I know I could simply put a single-color paper behind the shooting target and find the bullet holes that way. That is not how I want the app to work, though, so it's not a valid solution to my problem.
I have an image captured by an Android camera. Is it possible to calculate the depth of an object in the image? The image contains only an object and the background. Any suggestions, explanations or links you think can help will be appreciated.
OpenCV is the library you need.
I did some depth identification of water levels against a pure white background a few days ago. Generally, if you want to identify depth, you can convert the problem into identifying the edge where the colors change. In this case, you can convert the color pictures to grayscale and detect the white-black-grey interface. OpenCV can do this at high speed.
Hope it helps. Let me know if you need further help.
Edits:
If you want to find actual depths, you need to project the coordinate system of your pictures to the real world, or vice versa. To do this, you have to know a fixed location as your reference and the relationship between pixels and real distances.
What I did was find the fixed location and set it as zero. Afterwards, I measured the real length of an object in the picture and counted how many pixels it spanned, which gave me the relationship between pixels and real distances.
Note that these procedures may introduce identification errors. I did it very carefully, and the error was acceptable in my case.
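In code, the calibration amounts to one ratio (all numbers below are invented for illustration):

```python
# Pixel-to-real-distance calibration from one reference object.
known_length_cm = 20.0    # real length of a reference object in the scene
ref_pixels = 400.0        # its measured length in pixels
cm_per_pixel = known_length_cm / ref_pixels     # 0.05 cm per pixel

feature_pixels = 250.0    # pixel distance from the fixed zero reference
depth_cm = feature_pixels * cm_per_pixel
print(depth_cm)   # 12.5
```

This assumes the measured feature lies roughly in the same plane and at the same distance from the camera as the reference object; otherwise perspective distorts the ratio.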
With only one image, accurate depth estimation is nearly impossible. However, there are various methods of estimating depth under certain assumptions or when the camera calibration matrix is available. As @WenlongLiu mentioned, OpenCV is a very good place to start.