How can I visualize the image gradient of this picture in Python, like the picture above?
Use the Python Imaging Library PIL (http://www.pythonware.com/products/pil/).
Load the image using Image.open and read the color of each pixel into a 2D list using Image.getpixel (http://effbot.org/imagingbook/image.htm). Any one of the three RGB values will do, because the image is grayscale, so the R, G, and B values of each pixel are equal.
Calculate the gradient for each pixel except those at the edges:
grad[x][y] = [(list[x+1][y]-list[x-1][y])/2.0, (list[x][y+1]-list[x][y-1])/2.0]
(Note that the gradient is a 2D vector. It has an x and a y value.)
Create a quiver plot, e.g. with Matplotlib's pyplot (https://www.getdatajoy.com/examples/python-plots/vector-fields).
Also check this document, where NumPy's gradient function is used: http://elektromagnetisme.no/2011/09/12/calculating-the-gradient-in-python/
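Below is a minimal sketch of the steps above, assuming the picture is saved as "image.png" (a placeholder file name). It uses NumPy's gradient function instead of the manual central-difference formula; away from the edges the two give the same result.

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# Load the image as grayscale and convert to a float array
img = np.asarray(Image.open("image.png").convert("L"), dtype=float)

# np.gradient returns the derivative along each axis: rows (y) first, columns (x) second
gy, gx = np.gradient(img)

# A quiver plot at full resolution is unreadable, so sample every 10th pixel
step = 10
y, x = np.mgrid[0:img.shape[0]:step, 0:img.shape[1]:step]

plt.imshow(img, cmap="gray")
plt.quiver(x, y, gx[::step, ::step], gy[::step, ::step], color="red")
plt.show()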
I have an analog speedometer image, with the needle pointing to the current speed. I am trying to find a way to read the speed the needle is pointing to. I tried using HoughCircles() from OpenCV, but it throws an error because the image contains only the speedometer, which is a semi-circle. Any resources to help me move forward would be appreciated.
Assuming the needle has a different colour from the rest of the speedometer OR is distinctly larger than the other elements on the dial (which is often the case), I would do something like the following.
Convert the image to grayscale.
Apply colour thresholding (or size-based thresholding) to detect the pixel area representing the needle.
Use the HoughLines() or HoughLinesP() function in OpenCV to fit a line to the shape you detected in Step 2.
Measure the angle of the line you fitted in Step 3 (an example is provided here: How can I determine the angle a line found by HoughLines function using OpenCV?).
You can then map the angle of the line to the speed through a simple equation (you would need to see an image of the speedometer to derive it). A rough sketch of these steps is shown below.
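This sketch assumes a red-ish needle and thresholds in HSV directly rather than in grayscale; the file name, the HSV range, and the calibration constants a and b are placeholders that depend on the actual speedometer image.

import cv2
import numpy as np

img = cv2.imread("speedometer.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Step 2: colour threshold for a red needle (placeholder HSV range)
mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))

# Step 3: fit line segments to the thresholded pixels
lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=50,
                        minLineLength=50, maxLineGap=10)

if lines is not None:
    # Take the longest segment as the needle
    x1, y1, x2, y2 = max(lines[:, 0], key=lambda l: (l[2] - l[0]) ** 2 + (l[3] - l[1]) ** 2)

    # Step 4: angle of the needle in degrees
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))

    # Step 5: linear map from angle to speed; a and b are made-up constants that
    # must be calibrated from two known needle positions on the real dial
    a, b = 1.0, 0.0
    speed = a * angle + b
    print(angle, speed)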
Let me know how it goes.
If I have a large image and I resample it to a smaller size, how can I apply the same transformation to coordinates in the larger image? Specifically, after resampling, what are the new coordinates of points from the larger image, and how do I compute them in the new coordinate system? It seems that multiple coordinates in the larger image should map to the same coordinate in the smaller image, but I have no idea how to go about getting the transformed coordinates.
It depends on the library you are using. ITK and SimpleITK have two sets of coordinates: physical coordinates, which are independent of image details, and index coordinates, which depend on the image's size and buffered region. Physical coordinates will be the same in both the high-resolution ("big") and low-resolution ("small") image. To get index coordinates from a physical coordinate point p you use imageBig->TransformPhysicalPointToIndex(p) or imageSmall->TransformPhysicalPointToIndex(p).
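A minimal sketch in Python with SimpleITK, where "big.nii" and the downsampling factor of 2 are placeholders:

import SimpleITK as sitk

imageBig = sitk.ReadImage("big.nii")

# Downsample by 2: keep origin and direction, halve the size, double the spacing,
# so both images cover the same physical extent
new_size = [s // 2 for s in imageBig.GetSize()]
new_spacing = [sp * 2.0 for sp in imageBig.GetSpacing()]
imageSmall = sitk.Resample(imageBig, new_size, sitk.Transform(),
                           sitk.sitkLinear, imageBig.GetOrigin(),
                           new_spacing, imageBig.GetDirection(),
                           0.0, imageBig.GetPixelID())

# Map an index in the big image to the corresponding index in the small one
# via the shared physical coordinates
index_big = [s // 2 for s in imageBig.GetSize()]   # an example index (the centre)
p = imageBig.TransformIndexToPhysicalPoint(index_big)
index_small = imageSmall.TransformPhysicalPointToIndex(p)
print(index_big, "->", index_small)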
From a given image I am able to create a binary mask that detects certain objects. How can I draw rectangles around those detected objects and draw them on the original image? Is it also possible to obtain the coordinates of those rectangles so I can plot them on the original image?
As you haven't provided code, I will answer without code as well.
You should use findContours. There is an OpenCV tutorial that covers this exact task: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.html
cv2.findContours returns a list of contours; for each contour cnt in that list you will need to:
x, y, w, h = cv2.boundingRect(cnt)
img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
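Putting it together, here is a minimal sketch that finds the boxes on the mask, keeps their coordinates, and draws them on the original image; the file names are placeholders and the mask is assumed to be a single-channel binary image.

import cv2

original = cv2.imread("original.jpg")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# OpenCV 4.x returns (contours, hierarchy); OpenCV 3.x returns an extra image first
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

boxes = []
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    boxes.append((x, y, w, h))  # keep the coordinates for later use
    cv2.rectangle(original, (x, y), (x + w, y + h), (0, 255, 0), 2)

print(boxes)
cv2.imwrite("with_boxes.jpg", original)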
I would like to compare a frame of a video with another image, but I don't know how to do it in Python. Can someone help me, please?
You can use various metrics; look them up to see how they are calculated and when you should use each. In Python this can be achieved easily with scikit-image. (Note that in scikit-image 0.16 and later these compare_* functions were moved to skimage.metrics under new names.)
import cv2
from skimage.measure import compare_mse, compare_nrmse, compare_ssim, compare_psnr
img1 = cv2.imread('img1.jpg')
img2 = cv2.imread('img2.jpg')
# mean squared error
compare_mse(img1, img2)
# normalized root-mean-square
compare_nrmse(img1, img2)
# peak signal-to-noise ratio
compare_psnr(img1, img2)
# structural similarity index
compare_ssim(img1, img2, multichannel=True)
The images must have the same size.
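As a usage example, here is a short sketch that grabs one frame from a video and compares it against a reference image; "video.mp4" and "reference.jpg" are placeholder file names, and the frame and the image are assumed to have the same dimensions.

import cv2
from skimage.measure import compare_ssim

cap = cv2.VideoCapture("video.mp4")
ok, frame = cap.read()  # grab the first frame
cap.release()

reference = cv2.imread("reference.jpg")

if ok and frame.shape == reference.shape:
    score = compare_ssim(frame, reference, multichannel=True)
    print("SSIM:", score)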
I would like to convert my RGB image to a grayscale image, not by using a conversion function but by working with the red, green, and blue values directly. For example, if my image is totally blue, it will be converted to white if I take its blue component, and it will be black if I take its red component. This will be done in Python via OpenCV.
Thanks in advance.
The conversion function you are referring to does the same thing: it weights the R, G, and B channel values of each pixel and takes the sum. Since OpenCV uses the BGR colorspace when reading images, your conversion function will be something like this:
def rgbToGray(img):
    # OpenCV loads images as BGR, so channel 0 is blue, 1 is green, 2 is red
    grayImg = 0.0722 * img[:, :, 0] + 0.7152 * img[:, :, 1] + 0.2126 * img[:, :, 2]
    return grayImg
The specific weights used here are the luma coefficients from the ITU-R BT.709 standard used for HDTV (https://en.wikipedia.org/wiki/Grayscale).
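As a quick sanity check of the rgbToGray function above ("image.jpg" is a placeholder file name), note that cv2.cvtColor uses the BT.601 weights (0.299, 0.587, 0.114), so its output will differ slightly from this BT.709 version.

import cv2
import numpy as np

img = cv2.imread("image.jpg")

gray709 = rgbToGray(img).astype(np.uint8)        # the function defined above
gray601 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # OpenCV's built-in conversion

cv2.imwrite("gray_bt709.jpg", gray709)
cv2.imwrite("gray_bt601.jpg", gray601)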