Compare a frame of video with another image in Python? [closed]

I would like to compare a frame of a video with another image, but I don't know how to do it with Python.
Can someone help me, please?

You can use various metrics; look them up to see how they are calculated and when you should use them. In Python this can be achieved easily with scikit-image.
import cv2
# In scikit-image >= 0.18 the old skimage.measure.compare_* functions were removed;
# the same metrics now live in skimage.metrics under the names below.
from skimage.metrics import mean_squared_error, normalized_root_mse, structural_similarity, peak_signal_noise_ratio
img1 = cv2.imread('img1.jpg')
img2 = cv2.imread('img2.jpg')
# mean squared error
mean_squared_error(img1, img2)
# normalized root mean-squared error
normalized_root_mse(img1, img2)
# peak signal-to-noise ratio
peak_signal_noise_ratio(img1, img2)
# structural similarity index (channel_axis=-1 for colour images; older versions used multichannel=True)
structural_similarity(img1, img2, channel_axis=-1)
The images must have the same size.
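For the video part of the question, a minimal sketch (the file names video.mp4 and img1.jpg are assumptions) could grab a frame with cv2.VideoCapture, resize it to match the reference image, and then apply one of the metrics above:
import cv2
from skimage.metrics import structural_similarity

# Hypothetical file names, just for illustration
reference = cv2.imread('img1.jpg')
cap = cv2.VideoCapture('video.mp4')
ret, frame = cap.read()   # read the first frame
cap.release()

if ret:
    # resize the frame so both images have the same size
    frame = cv2.resize(frame, (reference.shape[1], reference.shape[0]))
    # channel_axis=-1 for colour images (scikit-image >= 0.19)
    print(structural_similarity(reference, frame, channel_axis=-1))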

Related

Create a rectangle around a binary image mask [closed]

From a given image I'm able to create a binary mask that detects certain objects. How can I draw rectangles around those detected objects and draw them onto the original image? If possible, I'd also like to obtain the coordinates of those rectangles so I can plot them on the original image.
As you haven't provided code, I will answer without code as well.
You should use findContours. There is an OpenCV tutorial that helps you with this exact task: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.html
cv2.findContours returns a list of contours; for each contour cnt you will need to:
x,y,w,h = cv2.boundingRect(cnt)
img = cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),2)
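A rough end-to-end sketch (the file name and threshold are assumptions), collecting the rectangle coordinates and drawing them on the original image, could look like this:
import cv2

# Hypothetical file name; replace with your own image / mask
img = cv2.imread('objects.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# OpenCV 4.x returns (contours, hierarchy); OpenCV 3.x returns an extra image first
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

boxes = []
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)   # rectangle coordinates
    boxes.append((x, y, w, h))
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

print(boxes)   # coordinates for plotting on the original image later
cv2.imshow('boxes', img)
cv2.waitKey()
cv2.destroyAllWindows()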

Differentiate dark and light areas in image [closed]

My problem is that I want to differentiate the light and dark areas in the following image to generate a binary mask.
https://i.stack.imgur.com/7ZRKB.jpg
An approximation to the output can be this:
https://i.stack.imgur.com/2UuJb.jpg
I've tried a lot of things, but the results still have some noise or lose a lot of data, like in this image:
https://i.stack.imgur.com/hUyjY.png
I've used python with opencv and numpy, gaussian filters, opening, closing, etc...
Does anybody have an idea how to do this?
Thanks in advance!
I first reduced the size of the image using pyrDown, then used CLAHE to equalize the histogram. I applied a median blur, as this creates patches, and then used opening 3 times. After that it was a simple THRESH_BINARY_INV threshold. If you want to get back the original image size, use cv2.pyrUp on the image. By playing with the parameters you can get better results.
import cv2
image = cv2.imread("7ZRKB.jpg",0)
image = cv2.pyrDown(image)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(16,16))
image = clahe.apply(image)
image = cv2.medianBlur(image, 7)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT,(5,5))
image = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel, iterations=3)
ret,image = cv2.threshold(image, 150, 255, cv2.THRESH_BINARY_INV)
cv2.imshow("image",image)
cv2.waitKey()
cv2.destroyAllWindows()
Result:

How to convert RGB to grayscale by using color dimensions [closed]

I would like to transform my RGB image to a grayscale image, not by using a converting function but by working with the red, green, and blue values directly. For example, if my image is totally blue, it will be converted to white if I take the blue component, and it will be black if I take the red component. It will be done in Python via OpenCV.
Thanks in advance.
The converting function that you are referring to does the same: it weights the R, G and B channel values of each pixel and takes the sum. Since OpenCV uses the BGR colorspace when reading images, your conversion function will be something like this:
def rgbToGray(img):
    # img is in BGR order, so channel 0 is blue, 1 is green, 2 is red
    grayImg = 0.0722*img[:, :, 0] + 0.7152*img[:, :, 1] + 0.2126*img[:, :, 2]
    return grayImg
The specific weights mentioned here are taken from the ITU-R BT.709 standard used for HDTV, developed by the ATSC (https://en.wikipedia.org/wiki/Grayscale)
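For example (the file names are assumptions), applied to an image loaded with OpenCV:
import cv2
import numpy as np

# Hypothetical file names, just for illustration
img = cv2.imread('input.jpg')
gray = rgbToGray(img)                      # float image with values in [0, 255]
cv2.imwrite('gray.jpg', gray.astype(np.uint8))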

How to visualize Image gradient in Python? [closed]

How to visualize the image gradient of this picture in Python, like the pic above?
Use the Python Imaging Library PIL (http://www.pythonware.com/products/pil/).
Load the image using Image.open and read the colors of each pixel into a 2D list using Image.getpixel (http://effbot.org/imagingbook/image.htm). One of the three RGB values will do because the image is grayscale, so the R, G, and B values of each pixel are equal to each other.
Calculate the gradient for each pixel except of those at the edge:
grad[x][y] = [(list[x+1][y]-list[x-1][y])/2.0, (list[x][y+1]-list[x][y-1])/2.0]
(Note that the gradient is a 2D vector. It has an x and a y value.)
Create a quiver plot e.g. with MatPlotLib's PyPlot (https://www.getdatajoy.com/examples/python-plots/vector-fields)
Also check this document where a gradient function from numpy is used: http://elektromagnetisme.no/2011/09/12/calculating-the-gradient-in-python/
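A minimal sketch along those lines, using numpy.gradient and Matplotlib's quiver (the file name is an assumption and the image is assumed to be grayscale):
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# Hypothetical file name, just for illustration
img = np.asarray(Image.open('picture.png').convert('L'), dtype=float)

# np.gradient returns the derivative along each axis: rows (y) first, then columns (x)
gy, gx = np.gradient(img)

# subsample the grid so the quiver plot stays readable
step = 10
y, x = np.mgrid[0:img.shape[0]:step, 0:img.shape[1]:step]

plt.imshow(img, cmap='gray')
# gy is negated because imshow draws row 0 at the top of the figure
plt.quiver(x, y, gx[::step, ::step], -gy[::step, ::step], color='red')
plt.show()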

How to remove light deflections [closed]

I work with image processing in OpenCV in Python.
My main problem is light deflection. Can these deflections be removed with some method?
I have implemented a lot of code here, but I can't get rid of this particular light-deflection effect. I implemented grayscale conversion, a Sobel filter, median blur, and histogram analysis for plate detection, but these deflections ruin the histogram of the edges from the Sobel filtering; removing these flashes should make it work well.
An input image:
Use a colorspace transformation. For instance, if you transform your image to the HSV space, you'll see the "light" components in the V("value") channel:
This is the HSV image:
This is the V channel:
These are the regions of the V channel above a certain level (i.e. a thresholded image):
Now you can remove the high values of this V channel and then merge the channels back again. Good luck!
NOTE: as you see, I'm not giving you the code. I think that this should be easy to program if you search the documentation on OpenCV's cvtColor, split/merge or threshold methods ;)
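For reference, a rough sketch of what those calls might look like (the file name and threshold value are assumptions):
import cv2
import numpy as np

# Hypothetical file name and threshold, chosen only for illustration
img = cv2.imread('input.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# regions of the V channel above a certain level (the thresholded image)
_, bright = cv2.threshold(v, 200, 255, cv2.THRESH_BINARY)

# clip the high values of the V channel and merge the channels back again
v = np.minimum(v, 200).astype(np.uint8)
result = cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)

cv2.imshow('bright regions', bright)
cv2.imshow('result', result)
cv2.waitKey()
cv2.destroyAllWindows()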
