This question already has answers here:
OpenCV giving wrong color to colored images on loading
(7 answers)
Closed 4 years ago.
I am following this course on computer vision: https://in.udacity.com/course/introduction-to-computer-vision--ud810
The instructor explains how a Gaussian filter blurs an image. He demonstrates it in MATLAB, but I am using Python 3 with OpenCV. I ran the following code:
import cv2
from matplotlib import pyplot as pl
image = cv2.imread("Desert.jpg")
blur = cv2.GaussianBlur(image,(95,95),5)
cv2.imshow("desert", image)
pl.imshow(blur)
pl.xticks([]), pl.yticks([])
pl.show()
This is the original image:
And this is the "blur" image:
The image is blurred, no doubt. But why have the colors been interchanged? The mountain is blue while the sky is brick red.
Because you plot one with OpenCV and the other with matplotlib.
The explanation given here is as follows:
There is a difference in channel ordering between OpenCV and matplotlib: OpenCV follows BGR order, while matplotlib follows RGB order.
Since you read and show the image with OpenCV, it is in BGR order and you see nothing wrong. But when you show it with matplotlib, it assumes the image is in RGB format, so the blue and red channels appear swapped.
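For reference, a minimal sketch of the fix, reusing the code from the question: convert the blurred image from BGR to RGB before handing it to matplotlib.
import cv2
from matplotlib import pyplot as plt

image = cv2.imread("Desert.jpg")             # OpenCV loads in BGR order
blur = cv2.GaussianBlur(image, (95, 95), 5)

# Reorder the channels so matplotlib (which expects RGB) shows the true colors
blur_rgb = cv2.cvtColor(blur, cv2.COLOR_BGR2RGB)

plt.imshow(blur_rgb)
plt.xticks([]), plt.yticks([])
plt.show()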
This question already has answers here:
Difference between plt.imshow and cv2.imshow?
(3 answers)
Closed 8 months ago.
At the moment I am accessing the BGR channels of images and doing a bit of calculation on them,
like the mean or standard deviation, stuff like that.
As far as I know, I don't have to convert NumPy arrays to display them with cv2.imshow().
But when I display my array with this command:
# with the help of the PIL library
from PIL import Image
data = Image.fromarray(image_array)
data.save('SavedArrayAsPic.png')
my output is correct: it is an image with the new colors.
But when I write:
cv2.imshow("my Array as a Pic", image_array)
it shows the wrong image, with the old color pattern.
I want to use cv2.imshow to display videos in real time; with the PIL library I just save the images.
So what could be the difference?
Thank you for reading.
OpenCV uses BGR channel ordering, while PIL and matplotlib use RGB order.
Try not to mix different libraries with different conventions.
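If you do need both, a minimal sketch of the conversion, assuming image_array (the array from the question) holds RGB data, which is why PIL saves it correctly:
import cv2

# image_array is assumed to be an RGB uint8 numpy array from the question
bgr = cv2.cvtColor(image_array, cv2.COLOR_RGB2BGR)  # reorder channels for OpenCV
cv2.imshow("my Array as a Pic", bgr)
cv2.waitKey(0)
cv2.destroyAllWindows()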
This question already has answers here:
Display image as grayscale using matplotlib
(9 answers)
Closed 5 years ago.
I've been trying to convert an image to grayscale using OpenCV in Python, but it turns the image into something that looks like a thermal camera image. What am I doing wrong?
Here is the code for the image below:
import cv2
import numpy as np
from matplotlib import pyplot as plt

img = X_tr[9999]   # one image from the CIFAR10 training set
plt.imshow(img)
plt.show()

img = cv2.cvtColor(img.astype(np.uint8), cv2.COLOR_RGB2GRAY)
plt.imshow(img)
plt.show()
img.shape
This image is taken from the CIFAR10 dataset.
Thanks.
Grayscale images, i.e. images with only one color channel, are interpreted by imshow as data to be plotted using a colormap. You therefore need to specify the colormap you want to use (and the normalization, if it matters).
plt.imshow(img, cmap="gray", vmin=0, vmax=255)
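A small sketch of the difference, assuming img is the single-channel uint8 image produced by the cv2.cvtColor call in the question:
from matplotlib import pyplot as plt

# Left: single-channel data goes through the default colormap (viridis in
# current matplotlib), which produces the "thermal camera" look
plt.subplot(1, 2, 1)
plt.imshow(img)

# Right: explicit gray colormap with a fixed 0-255 normalization
plt.subplot(1, 2, 2)
plt.imshow(img, cmap="gray", vmin=0, vmax=255)
plt.show()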
I need to extract an object of interest (a vehicle) from a large picture. I know the 4 coordinates of this vehicle in the picture. How could I crop the vehicle out of the picture and then rotate it by 90 degrees, as shown below?
I need to program it in Python, but I don't know which library to use for this functionality.
You can use PIL (http://www.pythonware.com/products/pil/):
from PIL import Image
im = Image.open("img.jpg")
im.rotate(45)  # rotate() returns a new, rotated copy; it does not modify im in place
You also have a crop() method ...
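A minimal sketch putting crop and rotate together; the bounding-box coordinates here are placeholders for the vehicle's position:
from PIL import Image

im = Image.open("img.jpg")

# (left, upper, right, lower) -- hypothetical coordinates of the vehicle
vehicle = im.crop((100, 200, 400, 350))

# rotate by 90 degrees; expand=True enlarges the canvas so nothing is clipped
vehicle = vehicle.rotate(90, expand=True)
vehicle.save("vehicle.jpg")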
You could use PIL and do it as shown here:
Crop the image using PIL in python
Or you could use OpenCV and do it as shown here:
How to crop an image in OpenCV using Python
For the rotation you could use OpenCV's cv::transpose() (combined with cv::flip() to get a true 90-degree rotation).
Rotating using PIL: http://matthiaseisen.com/pp/patterns/p0201/
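The same idea with OpenCV, as a rough sketch (the slice bounds are placeholders; cv2.rotate is available in OpenCV 3.2 and later):
import cv2

img = cv2.imread("img.jpg")

# Crop by slicing the numpy array: img[y1:y2, x1:x2]
vehicle = img[200:350, 100:400]

# Rotate the crop by 90 degrees clockwise
rotated = cv2.rotate(vehicle, cv2.ROTATE_90_CLOCKWISE)
cv2.imwrite("vehicle_rotated.jpg", rotated)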
I have a small gray image. I need to create many colored copies of it (a yellow one, a green one, ...).
I don't need to replace a single color. My original image contains many shades of gray, and I need to create images with the same shading in other colors.
How can I do this using Python?
I came across an article today on Hacker News that shows how to mix an image with a constant base color using an affine transform. The article is Making thumbnails fast by William Chargin, and it's about improving image processing performance. The source code mentioned in it is at affine transforms on PIL images.
Here is a demo starting with a grayscale Lena image resized to 231x231 pixels. This image was chosen because it's "a standard test image widely used in the field of image processing since 1973".
from PIL import Image
from transforms import RGBTransform  # from the source code mentioned above

lena = Image.open("lena.png")
lena = lena.convert('RGB')  # ensure image has 3 channels
lena  # display (e.g. in a Jupyter notebook)

red = RGBTransform().mix_with((255, 0, 0), factor=.30).applied_to(lena)
red

green = RGBTransform().mix_with((0, 255, 0), factor=.30).applied_to(lena)
green

blue = RGBTransform().mix_with((0, 0, 255), factor=.30).applied_to(lena)
blue
This might be overkill, but you could easily use functionality from the OpenCV library (Python bindings) to tint your grayscale images with color.
Take a look at this C++ code: http://answers.opencv.org/question/50781/false-coloring-of-grayscale-image/. Analogs of the functions they use exist in the Python bindings.
Here's a recommended course of action:
1. Convert the image from grayscale to BGR (OpenCV lists the red, green, blue channels in reverse order) using cv2.cvtColor().
2. Apply an artificial color map of your choice (cv2.applyColorMap()), as sketched below; see http://docs.opencv.org/modules/contrib/doc/facerec/colormaps.html
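A rough sketch of those two steps; the file name and the choice of COLORMAP_AUTUMN (a red-to-yellow ramp) are just examples:
import cv2

gray = cv2.imread("gray_image.png", cv2.IMREAD_GRAYSCALE)

# Step 1: expand to three channels (applyColorMap also accepts single-channel 8-bit input)
bgr = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)

# Step 2: map the gray levels onto an artificial color ramp
tinted = cv2.applyColorMap(bgr, cv2.COLORMAP_AUTUMN)
cv2.imwrite("tinted.png", tinted)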
This is my first Stack Overflow question, so please correct me if it's not a good one.
I am currently processing a bunch of grayscale images as numpy ndarrays (dtype=uint8) in Python 2.7. When I resize the images using resized=misc.imresize(image,.1), the resulting image will sometimes show up with different gray levels when I plot it with pyplot. Here is what my code looks like. I would post an image of the result, but I do not have the reputation points yet:
import cv2
from scipy import misc
from matplotlib import pyplot as plt
image = cv2.imread("gray_image.tif", cv2.CV_LOAD_IMAGE_GRAYSCALE)  # cv2.IMREAD_GRAYSCALE in newer OpenCV
resized = misc.imresize(image, .1)
plt.subplot(1,2,1), plt.imshow(image, "gray")
plt.subplot(1,2,2), plt.imshow(resized, "gray")
plt.show()
If I write the image to a file, the gray level appears normal.
If I compare the average gray level using numpy:
np.average(image) and np.average(resized),
the average gray level values are about the same, as one would expect.
If I display the image with cv2.imshow, the gray level appears normal.
It's not only an issue with resizing the image: the gray level also gets screwy when I add images together (when most of one image is black and shouldn't darken the resulting image), and when I build an image pixel by pixel, such as in:
import numpy as np

image_copy = np.zeros(image.shape)
for row in range(image.shape[0]):
    for col in range(image.shape[1]):
        image_copy[row, col] = image[row, col]

plt.imshow(image_copy, "gray")  # <-- will sometimes show up darker than the original image
plt.show()
Does anyone have an idea as to what may be going on?
I apologize for the wordiness and lack of clarity in this question.
imshow automatically scales the color information to fit the whole available range. After resizing, the value range may be smaller, resulting in changes of the apparent color (but not of the actual values, which explains why saved images look fine).
You likely want to tell imshow not to scale your colors. This can be done using the vmin and vmax arguments as explained in the documentation. You probably want to use something like plt.imshow(image, "gray", vmin=0, vmax=255) to achieve an invariant appearance.
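Applied to the plotting code from the question, the fix would look roughly like this:
from matplotlib import pyplot as plt

# Pin both subplots to the full 0-255 range so imshow cannot rescale either image
plt.subplot(1, 2, 1), plt.imshow(image, "gray", vmin=0, vmax=255)
plt.subplot(1, 2, 2), plt.imshow(resized, "gray", vmin=0, vmax=255)
plt.show()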