Extract RGB using simpleCV [closed] - python

I'll use this picture as an example
I need to extract the RGB values and compare them against known color values to figure out which color is in the image, without hard-coding it.
For example, if I get (4, 5, 0), I'd determine that this color is red. I don't know if those are the real values for red; it's just an example.
How can I extract the RGB values from inside the red box, and how can I search for the color that corresponds to those values?
This is what I tried:
from SimpleCV import Image

img = Image('car.png')
pixel = img.getPixel(120, 150)
print(pixel)
This retrieves the RGB value at those coordinates, but I need an average over the whole box.
Please explain the solution, thanks.

Here's an idea of what you should do:
import numpy as np

width = img.width
height = img.height
# Crop the central region: from (x1, y1) to (x2, y2), with x2 > x1 and y2 > y1
frame = img[width//4:(width//4 + width//2), height//4:(height//4 + height//2)]
And then,
frame_np = frame.getNumpy()   # the cropped SimpleCV Image as a NumPy array
r = frame_np[:, :, 0]
avg_r = np.average(r)
Repeat for G and B.
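To tie it together, here is a minimal sketch of the whole idea: average the pixels of a crop and match the result against a set of reference colors by nearest distance. The reference dictionary and the crop coordinates are illustrative assumptions, not values from the question, and the channel order returned by getNumpy() should be verified against your SimpleCV version.

from SimpleCV import Image
import numpy as np

# Hypothetical reference colors; extend or replace with your own palette
REFERENCE_COLORS = {
    'red':   (255, 0, 0),
    'green': (0, 255, 0),
    'blue':  (0, 0, 255),
    'white': (255, 255, 255),
    'black': (0, 0, 0),
}

img = Image('car.png')
# Assumed box around the region of interest: x, y, width, height
crop = img.crop(100, 130, 50, 40)
pixels = crop.getNumpy().reshape(-1, 3).astype(float)
avg = pixels.mean(axis=0)   # average color of the box
# Note: depending on the SimpleCV version the channels may be ordered BGR;
# swap the reference tuples if the matches look wrong.

# Pick the reference color closest to the average (Euclidean distance)
closest = min(REFERENCE_COLORS,
              key=lambda name: np.linalg.norm(avg - REFERENCE_COLORS[name]))
print(closest, avg)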

Related

Mirror image along a custom axis (like shadow under the car) [closed]

Is there any way to mirror an image along a custom axis to generate an image like the one below, given a transparent image like this? The custom axis passes through the bottom-most points of the tires, and every point of the car should be mapped below that axis.
You can use cv2.flip
import cv2
img = cv2.imread("car.png")
flipcode = 0 # 0: along x axis, 1: along y axis, -1: along both x and y axes
flipped_img = cv2.flip(img, flipcode)
cv2.imshow("car", img)
cv2.imshow("flipped_car", flipped_img)
cv2.waitKey()
cv2.destroyAllWindows()
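Note that cv2.flip can only mirror about the image's own horizontal or vertical centerline. If the axis really has to pass through the bottom of the tires, a reflection about an arbitrary horizontal line y = y0 can be expressed as an affine transform; here is a minimal sketch, where y0 (the axis height) and the blending weight are assumptions you would tune for your image.

import cv2
import numpy as np

img = cv2.imread("car.png")
h, w = img.shape[:2]
y0 = h - 20   # assumed: y coordinate of the axis through the bottom of the tires

# Reflection about the horizontal line y = y0: (x, y) -> (x, 2*y0 - y)
M = np.float32([[1, 0, 0],
                [0, -1, 2 * y0]])
mirrored = cv2.warpAffine(img, M, (w, h))

# Blend the mirrored copy with the original to fake a reflection/shadow
result = cv2.addWeighted(img, 1.0, mirrored, 0.4, 0)

cv2.imshow("reflection", result)
cv2.waitKey()
cv2.destroyAllWindows()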

Detecting colors in an Image in sequence using opencv [closed]

I am working on a project where I need to recognize colors in sequential order from an image.
I am new to OpenCV. Help needed. (Image attached.)
You may try this (assuming your image is saved as color_strip.jpg). Basically, you load the file, cut the image into pieces of the same color, then average over each piece to get its mean color:
>>> import cv2
>>> img = cv2.imread( 'color_strip.jpg' )
>>> img.shape
(3677, 235, 3)
>>> square_size = 2640 // 10
>>> for i in range(11):
...     top = i * square_size + 10
...     bottom = top + 160
...     data = img[top:bottom, 80:210, :]
...     _ = cv2.imwrite('data_%02d.jpg' % i, data)
...     print('mean color', data.mean(axis=0).mean(axis=0))
...
mean color [ 92.55783654 127.716875 143.74230769]
mean color [ 95.17754808 126.11514423 157.42605769]
mean color [ 84.09365385 151.56105769 190.28004808]
mean color [ 83.29528846 148.21403846 165.08956731]
mean color [ 50.76451923 140.88158654 211.09413462]
mean color [ 19.91221154 150.03350962 221.26485577]
mean color [ 41.71350962 150.38677885 200.61456731]
mean color [ 114.19682692 155.68245192 190.50230769]
mean color [ 106.44120192 160.234375 194.67211538]
mean color [ 106.43980769 148.12759615 102.86701923]
mean color [ 117.02735577 151.62211538 171.19259615]
>>>
And you may check the data_XX.jpg files to make sure they actually contain the color stripes, and not something else.
A minor detail: the printed results are in the BGR order used by OpenCV; reorder them if you need RGB or any other specific order.
Since I think you need to recognize the colors inside the squares, not all the colours in the image, you should first detect the squares.
After that, for each square it is easy to read off the colour, since you have the pixel values in whatever color space you like (RGB, HSV and so on). A minimal sketch of this approach is below.
Before you start, I suggest reading the tutorials in the official documentation; they can be really useful.
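As a rough illustration of that answer, here is a hedged sketch that detects square-ish contours and reports the mean color inside each, top to bottom. The threshold choice and the area/shape filters are assumptions you would adapt to your image.

import cv2
import numpy as np

img = cv2.imread('color_strip.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Assumed: squares stand out from the background, so Otsu thresholding works
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# OpenCV 4.x return signature: (contours, hierarchy)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

squares = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 1000 and 0.7 < w / float(h) < 1.3:   # assumed size/shape filter
        squares.append((y, x, w, h))

for y, x, w, h in sorted(squares):                   # top-to-bottom order
    patch = img[y:y + h, x:x + w]
    print('mean BGR color:', patch.mean(axis=(0, 1)))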

Distance between two points in OpenCv based on known measurement [closed]

I have an image in which I have two sets of coordinates, between which I have drawn lines.
import cv2 as cv
import numpy as np
import requests
from io import BytesIO
from PIL import Image

# Get image
im_res = requests.get(image_url)
img = Image.open(BytesIO(im_res.content))
img = np.array(img)   # writable copy so cv.line can draw on it
#Draw first line
lineThickness = 3
cv.line(img, (ax, ay), (bx, by), (0,255,0), lineThickness)
#Draw second line
lineThickness = 3
cv.line(img, (cx, cy), (dx, dy), (0,255,0), lineThickness)
cv.imshow("Image", img)
cv.waitKey(0)
cv.destroyAllWindows()
The coordinates are A, B, C and D. I know the distance from C to D, but the distance from A to B is unknown. What is the best way to calculate it in OpenCV?
Is there an OpenCV-specific function or method for this, given that the distance we are talking about is in pixels? I am sorry if this question is foolish; I really don't want to end up with wrong values due to a lack of understanding of this topic.
I saw references to cv2.norm() and cv2.magnitude() as solutions to this problem, but I didn't quite understand how to choose between them for my situation, keeping in mind that the distance is within an image/photo.
Compute the Euclidean distance from C to D and find the ratio between the known measurement and that pixel distance:
ratio = known / euclidean
Then compute the Euclidean distance between A and B and use that ratio to convert it to an actual distance:
distance = euclidean * ratio
where euclidean = sqrt((x2 - x1)**2 + (y2 - y1)**2).
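A minimal sketch of that conversion in Python, assuming illustrative coordinates and a made-up known length for C-D (substitute the (ax, ay) ... (dx, dy) values from your own code):

import numpy as np

def euclidean(p, q):
    # Pixel distance between two (x, y) points
    return float(np.linalg.norm(np.subtract(p, q)))

# Illustrative values only
A, B = (10, 20), (110, 95)
C, D = (30, 200), (230, 200)
known_cd = 50.0                        # known real-world length of C-D, e.g. in cm

ratio = known_cd / euclidean(C, D)     # real-world units per pixel
distance_ab = euclidean(A, B) * ratio
print(distance_ab)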

How to find angle between two ellipses in an Image? [closed]

An image contains two ellipses (for the simple case we can assume both ellipses are the same). One ellipse is rotated and translated relative to the other. We only have the image of these ellipses. How can we estimate the rotation angle (the angle between the two ellipses)?
I will present some initial pre-processing steps; then, with a few of OpenCV's built-in methods, we can get what you are asking for.
If the image is RGB or RGBA:
Convert it to grayscale [use cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)].
Threshold the image to get a binary image with the 2 ellipses [cv2.threshold()].
Find contours in the binary image [cv2.findContours()].
Call cv2.fitLine() on each contour to get the line equation.
Apply the formula to get the angle between the 2 lines (see the sketch below).
For more operations on contours visit http://docs.opencv.org/trunk/dd/d49/tutorial_py_contour_features.html
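Here is a minimal sketch of those steps, under the assumptions that the file name is ellipses.png and that the ellipses are dark on a light background; cv2.fitLine's direction vector gives each contour's dominant orientation, and the difference of the two orientations approximates the rotation angle.

import cv2
import numpy as np

img = cv2.imread('ellipses.png')                 # assumed file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Assumed: dark ellipses on a light background
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# OpenCV 4.x: findContours returns (contours, hierarchy)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)[:2]

angles = []
for c in contours:
    vx, vy, x0, y0 = cv2.fitLine(c, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    angles.append(np.degrees(np.arctan2(vy, vx)))

# Angle between the two ellipses, folded into [0, 90] degrees
diff = abs(angles[0] - angles[1]) % 180
print(min(diff, 180 - diff))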
Off the top of my head, I'd do this (considering the tags, I assume you're using OpenCV):
1- Use findContours to get the boundary pixels of each ellipse separately.
2- For each ellipse, calculate the distance between every pair of boundary pixels (a double loop over the boundary) using D = sqrt((y1-y2)^2 + (x1-x2)^2), and find the pair with the greatest distance. That pair gives the two endpoints of the ellipse's major axis.
3- Using those two points, calculate the angle of the major axis with respect to the x-axis of the image with:
angle = arctan((y2-y1)/(x2-x1))
4- Find the angle for the other ellipse and subtract the two angles to get the angle between them. A sketch of this approach follows.
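As a sketch of that pairwise-distance idea (vectorized with NumPy instead of a double loop), assuming cnt1 and cnt2 are the two ellipse contours already found with cv2.findContours, as in the previous sketch:

import numpy as np

def major_axis_angle(contour):
    # Boundary points as an (N, 2) array of (x, y)
    pts = contour.reshape(-1, 2).astype(float)
    # Pairwise distances between all boundary points
    # (for very large contours, subsample or use CHAIN_APPROX_SIMPLE first)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    (x1, y1), (x2, y2) = pts[i], pts[j]
    # Angle of the major axis with respect to the image x-axis, in degrees
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))

angle_between = abs(major_axis_angle(cnt1) - major_axis_angle(cnt2)) % 180
angle_between = min(angle_between, 180 - angle_between)
print(angle_between)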

Python PIL - how to compare two images [closed]

Okay, so I have 2 images and I want to compare them:
from PIL import Image, ImageChops

im1 = Image.open('im1.png')
im2 = Image.open('im2.png')

def compare(im1, im2):
    # blah blah blah
Basically the two images are practically the same, but one is larger and the other is smaller, so one has more pixels and the other has fewer. I want a function that compares the two images and, for example, expresses the difference as a number: if the number is small, I know the difference is almost non-existent, but if the number is large, they are different.
Or any other function that compares images. If you want, use the two images I have used, so the result will be the same. Thanks.
You could just subtract the values of the images after resizing or cropping them to the same shape:
import numpy as np

# Bring both images to the same size, then compare as float arrays
img1 = np.asarray(im1.resize((200, 100)), dtype=float)
img2 = np.asarray(im2.resize((200, 100)), dtype=float)
# Calculate the absolute difference on each channel separately
dif = np.abs(img2 - img1)
If you want to see the difference visually, you could create a heatmap of the difference between the two images:
import matplotlib.pyplot as plt

# Collapse the per-channel difference into a single-channel heatmap
heat = dif.mean(axis=2)

# Show image
imgplot = plt.imshow(heat)
# Choose a color palette
imgplot.set_cmap('jet')
plt.axis('off')
plt.show()
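If you only want a single number, one option (a sketch, not from the answer above) is the mean absolute pixel difference of the two resized images: 0 means identical, larger values mean more different.

import numpy as np
from PIL import Image

def image_difference(path1, path2, size=(200, 100)):
    # Mean absolute pixel difference after resizing both images to `size`
    a = np.asarray(Image.open(path1).convert('RGB').resize(size), dtype=float)
    b = np.asarray(Image.open(path2).convert('RGB').resize(size), dtype=float)
    return float(np.abs(a - b).mean())

print(image_difference('im1.png', 'im2.png'))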
