I have an image and some ROIs (as separate files) cropped from it. I want to find the coordinates of the corner points of these ROIs in the original image.
Is there a simple way to do this besides checking all pixel values in the original image?
Example:
Entire Image
Cropped ROI
I need to find the coordinates of the car in the original image.
If the cropped image isn't rotated or scaled, then you can do template matching, with your ROI image as the template. Check this; it will draw a rectangle over your original image, from which you can get the coordinates.
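A minimal template-matching sketch (the filenames are placeholders; TM_CCOEFF_NORMED is one of several matching modes OpenCV offers):

import cv2

img = cv2.imread('original.jpg')          # hypothetical filenames
template = cv2.imread('roi.jpg')
h, w = template.shape[:2]

# Slide the template over the image and score every position.
res = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)   # best score and its location
x, y = max_loc
# Top-left and bottom-right corners of the matched region.
print('corners:', (x, y), (x + w, y + h))
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)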
If the crops are exact (i.e. no changes from the original pixels were introduced after cropping, for example as a result of JPG compression when saving), then you could use integral images to speed up the search.
Compute the sum of the pixel values of each cropped image, then the integral image of the original one, and use the integral image to search for a window the size of the crop with an equal sum. This algorithm is, of course, linear in the size of the original image.
Note that the sum is a "weak" signature, and you may find multiple matches, but you can then verify these candidate matches directly at the pixel level.
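A minimal sketch of that idea, assuming grayscale images and placeholder filenames:

import cv2
import numpy as np

img = cv2.imread('original.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical filenames
crop = cv2.imread('roi.jpg', cv2.IMREAD_GRAYSCALE)
h, w = crop.shape
target = int(crop.sum())

# Integral image: ii[y, x] holds the sum of img[:y, :x], so its shape is (H+1, W+1).
ii = cv2.integral(img)

# Sum of every h-by-w window, computed with four lookups per window.
sums = ii[h:, w:] - ii[:-h, w:] - ii[h:, :-w] + ii[:-h, :-w]
for y, x in zip(*np.where(sums == target)):
    # The sum is a weak signature, so verify each candidate at the pixel level.
    if np.array_equal(img[y:y+h, x:x+w], crop):
        print('top-left corner:', (x, y))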
I have a stock of photos of a spinning disk at varying angles. I wish to find the edge of the top of the disk. The top is colored a distinct black compared to the rest of the photo.
Photo A
Photo B
I first tried Canny edge detection, which does a decent job but also identifies the bottom half of the disk, which I wish to avoid.
My next idea was to use the distinct black color: divide the photo into domains characterized by color/intensity, choose the largest or blackest domain (or some other criterion) to isolate the black circle, and only then apply the Canny edge detector.
Is there any existing function that can divide a grayscale image into domains? I'm transferring from MATLAB to Python, so I'm new to its syntax and functions.
Thanks
The Canny disaster goes on!
People experimenting with image processing keep insisting on edge detection even when the scene segments beautifully. With a careful choice of binarization threshold, you can extract the ellipse as a single piece.
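A minimal sketch of that approach, with a placeholder filename and Otsu picking the threshold automatically:

import cv2

img = cv2.imread('disk.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical filename
# Invert so the dark disk top becomes foreground; Otsu chooses the threshold.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Keep the largest dark region and fit an ellipse to its boundary.
top = max(contours, key=cv2.contourArea)
ellipse = cv2.fitEllipse(top)   # ((cx, cy), (major, minor), angle)
print(ellipse)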
I have an analog speedometer image, with the needle pointing at the current speed. I am trying to find a way to read the speed the needle points to. I tried using HoughCircles() from OpenCV, but it throws an error because the image contains only the speedometer, which is a semicircle. Any resources to help me move forward will be appreciated.
Assuming the needle is a different colour from the rest of the speedometer OR its size is distinctly larger than the rest of the elements on the speedometer (which is often the case), I'd do something like the following.
Convert the image to grayscale.
Apply colour thresholding (or size-based thresholding) to detect the pixel area representing the needle.
Use HoughLines() or HoughLinesP() functions in OpenCV to fit a line to the shape you detected in Step 2.
Now it's a matter of measuring the angle of the line you generated in Step 3 (an example is provided here: How can I determine the angle of a line found by the HoughLines function in OpenCV?).
You can then map the angle of the line to the speed through a simple equation (I'd need to see an image of the speedometer to work that out). A rough sketch of these steps follows.
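In the sketch below, the threshold value, the Hough parameters, and the angle-to-speed mapping are all placeholder assumptions to tune against a real speedometer image:

import math
import cv2
import numpy as np

img = cv2.imread('speedometer.jpg')   # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Step 2: isolate dark needle pixels (threshold value is a guess; tune it).
_, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
# Step 3: fit line segments to the thresholded shape.
lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)
# Take the longest segment as the needle.
x1, y1, x2, y2 = max(lines[:, 0],
                     key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
# Step 4: measure the line's angle.
angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
# Step 5: map the angle to a speed; the angle range and 0-240 scale
# below are made-up values for illustration.
speed = np.interp(angle, [-180, 0], [0, 240])
print(angle, speed)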
Let me know how it went.
If I have a large image and I resample it to a smaller size, how can I apply the same transformation to coordinates in the larger image? Specifically, after resampling, what are the new coordinates of points from the larger image, and how can I get them in the new coordinate system? It seems that multiple coordinates in the larger image should map to the same coordinate in the smaller image, but I have no idea how to compute the transformed coordinates.
It depends on the library you are using. ITK and SimpleITK have two coordinate systems: physical coordinates, which are independent of image details, and index coordinates, which depend on the image's size and buffered region. Physical coordinates will be the same in both the high-resolution ("big") and low-resolution ("small") image. To get index coordinates from a physical point p, you use imageBig->TransformPhysicalPointToIndex(p) or imageSmall->TransformPhysicalPointToIndex(p).
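For illustration, a minimal SimpleITK sketch of that round trip (filenames and the index are placeholders):

import SimpleITK as sitk

big = sitk.ReadImage('big.nii')      # hypothetical filenames
small = sitk.ReadImage('small.nii')

# Index in the big image -> physical point -> index in the small image.
p = big.TransformIndexToPhysicalPoint((120, 45))
idx_small = small.TransformPhysicalPointToIndex(p)
print(idx_small)

If you are working with plain arrays instead, the mapping is just a scale by the size ratio, e.g. x_small = round(x_big * w_small / w_big); the rounding is also why several coordinates in the big image collapse onto one coordinate in the small image.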
I know a couple of standard ways of detecting a modified image, such as:
Luminance gradient
Copy-move detection
Metadata extraction
Histogram analysis
ELA (error level analysis)
Quantization matrix analysis
Thumbnail analysis
Are there any other standard ways of detecting a modified image?
Tried out:
Reading the EXIF of the image to check the created and modified dates for signs of modification. I also had some rules for validating the EXIF camera make and maker note, along with checking the software used, such as Photoshop, Shotwell, etc.
Was able to segment the image and use SLIC (simple linear iterative clustering) to find similar cluster regions in an image.
Finding the largest contour with the least pixel inconsistency in the luminance gradient and marking it as a potentially modified region.
Largest contour with ELA as a potentially modified region.
Checking for inconsistencies in the histogram and marking the image as potentially edited.
Here are my questions:
Are there any standard checks to verify an image by its metadata, such as the created and modified dates, the camera make, or the maker note? These details are not consistent across images.
Would finding the contour with the least pixel inconsistency in the luminance gradient always give me a modified image?
If the histogram has fluctuations at regular intervals, could the image be considered modified?
How could I use quantization matrices to find image anomalies?
What is the best way to compare the thumbnail to the original image to check for inconsistencies?
This question needs a fairly detailed answer, so I will give some references on the subject and address each part of your question:
You need to use EXIF data to verify the image's metadata (a minimal sketch follows this list).
For anomaly detection in images, see here.
To compare the thumbnail image to the original image, read this; it shows how to compare two images using Python.
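As a starting point for the metadata check, a minimal EXIF-reading sketch with Pillow (the filename is a placeholder; which tags are present depends on the camera):

from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open('photo.jpg')   # hypothetical filename
exif = img.getexif()
for tag_id, value in exif.items():
    tag = TAGS.get(tag_id, tag_id)   # map numeric tag IDs to readable names
    print(tag, value)
# Compare the DateTime/DateTimeOriginal tags and inspect the Software tag
# for editors such as Photoshop or Shotwell.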
References:
ccse.kfupm.edu.sa
github.com/redaelli
github.com/Ghirensics
www.amazon.com/Learning
books.google.com.tw
hal.inria.fr/
I am currently trying to contour a human body in an image, but I am stuck right now.
I have watched various video lectures on contouring, but they dealt with objects like rectangles, circles, and other simple shapes.
Can someone guide me on contouring a human body? This picture shows an example of the contour I am looking for.
You have to understand that detecting a human body is not simple, because it is hard to differentiate the background from the body. That being said, if you have a simple background like the uploaded image, you can apply a number of image transformations (binary threshold, Otsu... see the OpenCV documentation) to make your ROI "stand out" so you can detect it with cv2.findContours(), the same as drawing contours for circles, squares, etc. You can also apply cv2.Canny() (Canny edge detection), which detects a wide range of edges in the image, and then search for contours. Here is an example for your image (the results could be better if the image didn't already have a red contour surrounding the body). The steps are described in comments in the code. Note that this is very basic stuff and would not work in most cases, as human detection is a very difficult and broad problem.
Example:
import cv2
# Read the image and convert it to grayscale.
img = cv2.imread('human.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Search for edges in the grayscale image with cv2.Canny().
edges = cv2.Canny(gray, 150, 200)
# Search for contours in the edge image with cv2.findContours().
# (OpenCV 4 returns two values; OpenCV 3 returned an extra leading value.)
contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
# Filter out contours that are not of interest by applying a size criterion.
for cnt in contours:
    size = cv2.contourArea(cnt)
    if size > 100:
        cv2.drawContours(img, [cnt], -1, (255, 0, 0), 3)
# Display the image.
cv2.imshow('img', img)
cv2.waitKey(0)
Result:
Here is another useful link in the OpenCV documentation regarding this subject: Background Subtraction. Hope it helps a bit and gives you an idea of how to proceed. Cheers!
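If you go the background-subtraction route, a minimal sketch using OpenCV's built-in MOG2 subtractor (the video path is a placeholder) could look like this:

import cv2

cap = cv2.VideoCapture('people.mp4')          # hypothetical video file
subtractor = cv2.createBackgroundSubtractorMOG2()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)            # foreground mask for this frame
    cv2.imshow('foreground', mask)
    if cv2.waitKey(30) & 0xFF == 27:          # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()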