Okay, so I have 2 images, im1.png and im2.png, and I want to compare them:
from PIL import Image, ImageChops
im1 = Image.open('im1.png')
im2 = Image.open('im2.png')

def compare(im1, im2):
    # blah blah blah
Basically the 2 images are practically the same, but one is larger and the other is smaller, so one has more pixels than the other. I want a function that compares the 2 images and, for example, expresses the difference as a number. If the number is small, I know the difference is almost non-existent, but if the number is large, they are different.
Any other function that compares images would also work. If you want, use the 2 images I have used, so the result will be the same. Thanks.
You could just subtract the values of the images after reshaping OR cropping them:
import numpy as np

# Assumes img1 and img2 are NumPy arrays with the same total number of pixels
img1 = img1.reshape(100, 200)
img2 = img2.reshape(100, 200)

# Calculate the absolute difference pixel by pixel
dif = np.fabs(np.subtract(img2, img1))
If you want to see the difference visually you could create a heatmap of the difference between the two images.
import matplotlib.pyplot as plt

# Show the difference as a heatmap
imgplot = plt.imshow(dif)
# Choose a color palette
imgplot.set_cmap('jet')
plt.axis('off')
plt.show()
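If the two images have different sizes, a minimal sketch (reusing the im1.png/im2.png files from the question and arbitrarily resizing the second image to match the first) could reduce the absolute difference to a single number:

import numpy as np
from PIL import Image

im1 = Image.open('im1.png').convert('RGB')
im2 = Image.open('im2.png').convert('RGB')

# Resize the second image to match the first (arbitrary choice of direction)
a = np.asarray(im1, dtype=float)
b = np.asarray(im2.resize(im1.size), dtype=float)

# Mean absolute difference: 0.0 means identical, larger means more different
score = np.abs(a - b).mean()
print(score)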
Is there any way I can mirror an image along a custom axis to generate an image like the one below, given a transparent image like this? The custom axis passes through the bottom-most points of the tires, and I want to map every point of the car onto the bottom through that axis.
You can use cv2.flip
import cv2
img = cv2.imread("car.png")
flipcode = 0 # 0: along x axis, 1: along y axis, -1: along both x and y axes
flipped_img = cv2.flip(img, flipcode)
cv2.imshow("car", img)
cv2.imshow("flipped_car", flipped_img)
cv2.waitKey()
cv2.destroyAllWindows()
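Note that cv2.flip only mirrors across the image's own centre lines. To get a reflection that sits below the car, with the axis at the bottom of the tires, one rough sketch (assuming the tires touch the bottom edge of car.png) is to flip vertically and stack the flipped copy underneath:

import cv2
import numpy as np

img = cv2.imread("car.png", cv2.IMREAD_UNCHANGED)

# Mirror vertically (around the horizontal axis)
mirrored = cv2.flip(img, 0)

# Stack the mirrored copy directly below the original, so the shared edge
# acts as the reflection axis through the bottom of the tires
reflected = np.vstack([img, mirrored])

cv2.imshow("reflection", reflected)
cv2.waitKey()
cv2.destroyAllWindows()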
I am trying to compare the two circular areas in Image1 and Image2. My current code gets me a white circle for image1, but falls apart for image2.
Is there another way to find the two circular areas for both images?
import cv2
import numpy as np
img = cv2.imread('Resources/Image2.jpg')
imgHSV = cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
lower = np.array([40,28,18])
upper = np.array([70,161,255])
#imgGray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
mask = cv2.inRange(imgHSV, lower, upper)
maskInv = cv2.bitwise_not(mask)
#imgCanny = cv2.Canny(imgBitGrey,150,200)
cv2.imwrite('Resources/Image2_Inv.jpg', maskInv)
#cv2.imshow('Original',img)
#cv2.imshow('Grey',imgGray)
#cv2.imshow('BitGrey',imgBitGrey)
#cv2.imshow('Canny',imgCanny)
#cv2.imshow('InvCanny',imgInvCanny)
cv2.imshow('maskInv',maskInv)
cv2.imshow('Mask',mask)
#cv2.imshow('Invert',imgInvMask)
cv2.waitKey(0)
Original:
Modified:
It is fine to do this in the HSV colorspace.
Try these bounds:
Lower: [18, 100, 44]
Upper: [36, 156, 71]
With them you should be able to get an image mask like the one below.
To further improve it you can use connected components with the OpenCV function cv2.connectedComponentsWithStats.
You can get the biggest component (which is what you want) as shown in this answer; a sketch of the whole pipeline follows.
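A rough sketch of that pipeline, reusing the Resources/Image2.jpg path from the question and the HSV bounds above (the exact values may still need tuning):

import cv2
import numpy as np

img = cv2.imread('Resources/Image2.jpg')
imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Suggested HSV bounds (may need tuning for your lighting)
lower = np.array([18, 100, 44])
upper = np.array([36, 156, 71])
mask = cv2.inRange(imgHSV, lower, upper)

# Keep only the largest connected component
# (label 0 is the background; assumes at least one foreground component)
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
clean_mask = np.uint8(labels == largest) * 255

cv2.imshow('clean_mask', clean_mask)
cv2.waitKey(0)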
How can I split an image into parts, apply histogram equalization to each part, compose an image from the parts, and display it?
I am trying to figure out a way to split an image into a defined number of parts, meaning the image is split into a number of rectangles that, when combined, form the whole image. Then I want to apply histogram equalization to each of the parts. After that I want to be able to form a new image from the parts of the first image that already have histogram equalization applied.
## so far I know how to apply the histogram equalization to the entire image
import cv2
import numpy as np
from matplotlib import pyplot as plt
## load image
img = cv2.imread('/home/pi/Downloads/bear.bmp',0)
## equalize
equ = cv2.equalizeHist(img)
plt.subplot(111),plt.imshow(equ, cmap = "gray"),plt.title('Equalized')
plt.xticks([]), plt.yticks([])
plt.show()
Since you have figured out how to do histogram equalization on an image, there is left the question of splitting the original image and then merging back the pieces.
OpenCV is rather nice and provides you the concept of a region of interest (ROI), which is a part of an image (defined as a rectangle). For all practical purposes, a ROI acts as an image, but if you modify it, the original image is modified too.
Therefore, you have to extract all ROIs that interest you and apply histogram equalization on them. The merging is handled implicitly, since the ROI is a part of the image itself.
Look here and here for more information about ROIs.
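A minimal sketch of that idea, assuming the grayscale bear.bmp image from the question and a simple 2×2 split (the equalized result is written back into each ROI, so the full image is updated in place):

import cv2

img = cv2.imread('/home/pi/Downloads/bear.bmp', 0)
h, w = img.shape

# Split into a 2x2 grid of ROIs; writing back into each slice
# updates the original image, so no explicit merge step is needed
for y0, y1 in [(0, h // 2), (h // 2, h)]:
    for x0, x1 in [(0, w // 2), (w // 2, w)]:
        roi = img[y0:y1, x0:x1]
        img[y0:y1, x0:x1] = cv2.equalizeHist(roi)

cv2.imshow('equalized in parts', img)
cv2.waitKey(0)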
Try this out:
import cv2
import numpy as np
img = cv2.imread("1.jpg")
(h,w,c) = img.shape
gray_img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
parts = []
step_x = 3
step_y = 3
eqs = []
eq_img = np.zeros_like(gray_img)
for x in range(step_x):
    for y in range(step_y):
        xratio1 = x / step_x
        xratio2 = (x + 1) / step_x
        yratio1 = y / step_y
        yratio2 = (y + 1) / step_y
        part = gray_img[int(yratio1 * h):int(yratio2 * h), int(xratio1 * w):int(xratio2 * w)].copy()
        parts.append(part)
        cv2.imshow("x = {0}, y = {1}".format(x, y), part)
        eq = cv2.equalizeHist(part)
        eqs.append(eq)
        eq_img[int(yratio1 * h):int(yratio2 * h), int(xratio1 * w):int(xratio2 * w)] = eq

cv2.imshow("eq_img", eq_img)
cv2.waitKey(0)
An image contains two ellipses (for the simple case we can assume both ellipses are the same). One of the ellipses is rotated and translated relative to the other. We only have the image of these ellipses. How can we estimate the rotation angle (the angle between the two ellipses)?
I will present some initial pre-processing steps and then, using a few of OpenCV's built-in methods, we can get what you are asking for (a rough sketch of the whole pipeline follows the list).
If the image is RGB or RGBA:
convert it to GRAY [use cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)]
Threshold the image to get a binary image with 2 ellipses. [cv2.threshold()]
Find contours in the binary image [cv2.findContours()]
Call cv2.fitLine() on each contour to get the line equation.
Apply the formula to get the angle between 2 lines.
For more operations on contours visit http://docs.opencv.org/trunk/dd/d49/tutorial_py_contour_features.html
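A rough sketch of those steps, assuming a file named ellipses.png with two bright ellipses on a dark background (file name and threshold value are placeholders):

import cv2
import numpy as np

img = cv2.imread('ellipses.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Binarize: assumes bright ellipses on a dark background
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# OpenCV 4.x; in 3.x findContours returns three values
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

angles = []
for cnt in contours:
    # Fit a line through the contour points; (vx, vy) is the direction vector
    vx, vy, x0, y0 = cv2.fitLine(cnt, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    angles.append(np.degrees(np.arctan2(vy, vx)))

if len(angles) == 2:
    print("angle between ellipses:", abs(angles[0] - angles[1]))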
Off the top of my head, I'd do this (considering the tags, I assume you're using OpenCV); a rough sketch follows the list:
1- Use findContours to get the boundary pixels of each ellipse separately.
2- For each ellipse, calculate the distance between every pair of boundary pixels (a double loop over all boundary points) using D = sqrt((y1-y2)^2 + (x1-x2)^2), and find the pair with the largest distance. This pair gives the two ends of the major axis of the ellipse.
3- Using those two points, calculate the angle of the major axis with respect to the x-axis of the image:
angle = arctan((y2-y1)/(x2-x1))
4- Find the angle for the other ellipse and subtract the two angles to find the angle between them.
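A rough NumPy sketch of steps 2 and 3 for a single contour (it assumes a contour array as returned by cv2.findContours; the pairwise computation is O(N^2), which is fine only for small contours):

import numpy as np

def major_axis_angle(contour):
    """Angle (degrees) of the major axis found via the farthest pair of boundary points."""
    pts = contour.reshape(-1, 2).astype(float)
    # Pairwise squared distances between all boundary points (brute force)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    (x1, y1), (x2, y2) = pts[i], pts[j]
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))

# angle_between = abs(major_axis_angle(cnt1) - major_axis_angle(cnt2))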
I'll use this picture as an example
I need to extract the RGB values and compare them against known color values to figure out which color is in the image without hard-coding it.
For example, I get (4, 5, 0) and I determine that this color = red. I don't know if those are the real values of red, but it's an example.
How can I extract the RGB values from inside the red box, and how can I search for the color that corresponds to those values?
this is what I tried:
img = Image('car.png')
pixel = img.getPixel(120, 150)
print(pixel)
This retrieves the RGB at those coordinates, but I need an average over that whole box.
Please explain the solution, thanks.
Here's an idea of what you should do:
import numpy as np

# Assumes img is a NumPy array (e.g. loaded with cv2.imread)
height, width = img.shape[:2]

# Crop the central box: rows (y) come first, then columns (x)
frame = img[height//4:height//4 + height//2, width//4:width//4 + width//2]
And then,
# Channel order depends on how the image was loaded:
# cv2.imread gives BGR (red = index 2), PIL gives RGB (red = index 0)
r = frame[:, :, 0]
avg_r = np.average(r)
Repeat for the green and blue channels.
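Putting it together, a rough sketch (the palette dictionary and the box coordinates are made-up placeholders; cv2.imread is used, so the image is converted from BGR to RGB first):

import cv2
import numpy as np

# Hypothetical reference palette: color name -> RGB
PALETTE = {
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

img = cv2.cvtColor(cv2.imread("car.png"), cv2.COLOR_BGR2RGB)

# Placeholder box around the region of interest: (x1, y1, x2, y2)
x1, y1, x2, y2 = 100, 130, 140, 170
box = img[y1:y2, x1:x2]

# Average RGB over the whole box
avg_rgb = box.reshape(-1, 3).mean(axis=0)

# Pick the palette entry with the smallest Euclidean distance to the average
name = min(PALETTE, key=lambda c: np.linalg.norm(avg_rgb - np.array(PALETTE[c])))
print(avg_rgb, "->", name)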