This question already has answers here:
Difference between plt.imshow and cv2.imshow?
(3 answers)
Closed 8 months ago.
At the moment I'm accessing the BGR channels of images and doing some calculations on them, like the mean or standard deviation.
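Per-channel statistics like these can be computed directly on the array; a minimal sketch, assuming an 8-bit BGR image held as a NumPy array:

```python
import numpy as np

# a hypothetical 4x4 BGR image: blue channel set to 200, others zero
image = np.zeros((4, 4, 3), dtype=np.uint8)
image[..., 0] = 200

# collapse the two pixel axes, keeping one statistic per channel
means = image.mean(axis=(0, 1))   # [B, G, R] means
stds = image.std(axis=(0, 1))

print(means.tolist())  # -> [200.0, 0.0, 0.0]
```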
As far as I know, I don't have to convert NumPy arrays to display them with cv2.imshow().
But when I display my array with this command:
# with the help of the PIL library
from PIL import Image
data = Image.fromarray(image_array)
data.save('SavedArrayAsPic.png')
My output is correct; the image has the colors I expect.
But when I write:
cv2.imshow("my Array as a Pic", image_array)
it shows the image with the wrong colors.
I want to use cv2.imshow to display video in real time; with the PIL library I just save the images.
So what could be the difference?
Thank you for reading
OpenCV uses BGR channel ordering, while PIL and matplotlib use RGB order.
Try not to mix libraries with different conventions; if you must, convert the channel order explicitly.
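A minimal sketch of the conversion, assuming image_array is an RGB NumPy array (reversing the last axis does the same job as cv2.cvtColor(img, cv2.COLOR_RGB2BGR)):

```python
import numpy as np

# a hypothetical 1x1 RGB array: one pure-red pixel
image_array = np.array([[[255, 0, 0]]], dtype=np.uint8)

# reverse the channel axis so OpenCV's BGR-ordered display is correct
bgr = image_array[:, :, ::-1]

print(bgr[0, 0].tolist())  # -> [0, 0, 255]: B=0, G=0, R=255
```

cv2.imshow("my Array as a Pic", bgr) would then show the same colors that PIL saves.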
Related
I cannot find a way to save my graphs and images. I have tried:
from PIL import Image
im = Image.fromarray()
im.save("your_file.jpeg")
but it doesn't work!
Please format your code correctly; it is important for readability, and Python is indent-sensitive. Also, there is already a post for this problem:
How can I save an image with PIL?
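For reference, a minimal working sketch of that approach: Image.fromarray needs the array itself as an argument, and the array should normally have dtype uint8 (the file name and contents here are illustrative):

```python
import numpy as np
from PIL import Image

# a hypothetical 64x64 RGB image: left half red, right half black
array = np.zeros((64, 64, 3), dtype=np.uint8)
array[:, :32] = [255, 0, 0]

im = Image.fromarray(array)   # pass the array as the argument
im.save("your_file.jpeg")     # format is inferred from the extension
```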
This question already has answers here:
OpenCV giving wrong color to colored images on loading
(7 answers)
Closed 4 years ago.
I am following this course on computer vision: https://in.udacity.com/course/introduction-to-computer-vision--ud810
The instructor explains how a Gaussian filter blurs an image. He uses MATLAB to demonstrate it, but I am using Python 3 with OpenCV. I ran the following code:
import cv2
from matplotlib import pyplot as pl
image = cv2.imread("Desert.jpg")
blur = cv2.GaussianBlur(image,(95,95),5)
cv2.imshow("desert", image)
pl.imshow(blur)
pl.xticks([]), pl.yticks([])
pl.show()
This is the original image:
And this is the "blur" image:
The image is blurred, no doubt. But why have the colors swapped? The mountain is blue while the sky is brick red.
Because you plot one with OpenCV and the other with matplotlib.
The explanation given here is as follows:
There is a difference in pixel ordering between OpenCV and matplotlib. OpenCV follows BGR order, while matplotlib follows RGB order.
Since you read and show the image with opencv, it is in BGR order and you see nothing wrong. But when you show it with matplotlib it thinks that the image is in RGB format and it changes the order of blue and red channels.
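The effect can be seen on a single pixel. A minimal sketch, assuming a BGR pixel as cv2.imread would produce it (cv2.cvtColor(blur, cv2.COLOR_BGR2RGB) is the usual fix; reversing the channel axis gives the same result):

```python
import numpy as np

# pure blue in BGR order: B=255, G=0, R=0
bgr_pixel = np.array([255, 0, 0], dtype=np.uint8)

as_matplotlib_sees_it = bgr_pixel.tolist()   # read as [R, G, B]
converted = bgr_pixel[::-1].tolist()         # reversed into real RGB

print(as_matplotlib_sees_it)  # -> [255, 0, 0]: brick red, the wrong color
print(converted)              # -> [0, 0, 255]: blue, as intended
```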
Using Python's PIL module, we can read a digital image into an array of integers:
from PIL import Image
from numpy import array
img = Image.open('x.jpg')
im = array(img) # im is the array representation of x.jpg
I wonder how PIL interprets an image as an array. First I tried this:
od -tu1 x.jpg
and it indeed gave a sequence of numbers, but how does PIL turn a color image into a 3D array?
In short, my question is: how can I get a color image's array representation without using any module like PIL? How could I do the job in plain Python?
Well, it depends on the image format, I would say.
For a .jpg, there is a complete description of the format that allows a program to decode the image.
You can read it here
What PIL does is exactly what you did at first. It then reads the bytes following the specification, which allows it to transform them into a usable structure (in this case an array).
It may seem complex for JPEG, but if you take PNG (in a form without compression), everything looks much simpler.
For example, take this image. If you open the file in a byte viewer, you will see something like this:
You can see several pieces of information at the top that correspond to the header.
Then you see all those zeroes, which are the numerical representation of the black pixels of the image (the top-left corner).
If you open the image with PIL, you will get an array that is mostly filled with 0.
If you want to know more about the first bytes, look at the png specifications chapter 11.2.2.
You will see that some of those bytes correspond to the width and height of the image. This is how PIL is able to create the array :)
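Those width and height bytes can be read without PIL at all. A minimal sketch that builds a tiny PNG in memory and then reads its size straight from the IHDR chunk (bytes 16-24 of the file, per chapter 11.2.2 of the specification):

```python
import struct
import zlib

def png_size(data: bytes):
    """Read width and height from a PNG's IHDR chunk."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    return struct.unpack(">II", data[16:24])  # two big-endian uint32s

def make_chunk(ctype: bytes, payload: bytes) -> bytes:
    """A PNG chunk: length, type, payload, CRC over type + payload."""
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

# build a minimal 1x1 8-bit grayscale PNG in memory
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
idat = zlib.compress(b"\x00\x00")  # one filter byte + one black pixel
png = (b"\x89PNG\r\n\x1a\n" + make_chunk(b"IHDR", ihdr)
       + make_chunk(b"IDAT", idat) + make_chunk(b"IEND", b""))

print(png_size(png))  # -> (1, 1)
```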
Hope this helps!
It depends on the color mode. In PIL, an image is stored as a list of integers with all channels interleaved on a per-pixel basis.
To illustrate this:
Grayscale image: [pixel1, pixel2, pixel3, ...]
RGB: [pixel1_R, pixel1_G, pixel1_B, pixel2_R, pixel2_G, ...]
The same goes for RGBA and so on.
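The interleaving can be illustrated without PIL, assuming a list of per-pixel (R, G, B) tuples (PIL's Image.tobytes() produces the same flat layout as raw bytes):

```python
# two hypothetical RGB pixels: red, then green
pixels = [(255, 0, 0), (0, 255, 0)]

# flatten into the interleaved per-pixel layout described above
flat = [channel for pixel in pixels for channel in pixel]

print(flat)  # -> [255, 0, 0, 0, 255, 0]
```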
I'm making a live video GUI using Python and Glade-3, but I'm finding it hard to convert the NumPy array that I have into something that can be displayed in Glade. The images are black and white, with a single value giving the brightness of each pixel. I would like to be able to draw over the images in the GUI, so I don't know whether there is a specific format I should use (bitmap/pixmap, etc.)?
Any help would be much appreciated!
In the end, I decided to create a buffer for the pixels using:
self.pixbuf = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB,0,8,1280,1024)
I then set the image from the pixel buffer:
self.liveImage.set_from_pixbuf(self.pixbuf)
I think these are the steps you need:
use scipy.misc.toimage to convert your array to a PIL image
check out the answer to this question to convert your PIL image to a cairo surface
use gdk_pixbuf_get_from_surface to convert this to a pixbuf (I don't know its name in the Python API)
make a Gtk.Image out of this using Gtk.Image.new_from_pixbuf
I'm sorry it needs so many conversion steps.
This question already has answers here:
Detecting thresholds in HSV color space (from RGB) using Python / PIL
(4 answers)
Closed 9 years ago.
Well, I've seen some code to convert RGB to HSL, but how do I do it fast in Python?
It's strange to me that, for example, Photoshop does this within a second on an image, while in Python it often takes forever. At least with the code I use, so I think I'm using the wrong code for the job.
In my case my image is a simple but big raw array [r,g,b,r,g,b,r,g,b ....]
I would like this to become [h,s,l,h,s,l,h,s,l .......]
I would also like to be able to convert HSL back to RGB.
The image is actually 640x480 pixels.
Would it require some library or a wrapper around C code (I have never created a wrapper) to get it done fast?
For manipulating image data, many use the Python Imaging Library. However, it doesn't handle HSL colour. Luckily, Python comes with a library called colorsys. Here's an example of colorsys being used to convert between colour modes on a per-pixel level: http://effbot.org/librarybook/colorsys.htm
colorsys also provides a function to convert HSL back to RGB (note that colorsys calls the mode HLS): http://docs.python.org/library/colorsys.html
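A minimal per-pixel sketch using colorsys on the flat [r,g,b,...] layout from the question; colorsys works on floats in [0, 1] and rgb_to_hls returns (h, l, s):

```python
import colorsys

flat_rgb = [255, 0, 0,   0, 255, 0]   # two pixels: red, then green

flat_hsl = []
for i in range(0, len(flat_rgb), 3):
    r, g, b = (c / 255.0 for c in flat_rgb[i:i + 3])
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    flat_hsl += [h, s, l]             # reorder into the h,s,l layout

print(flat_hsl[:3])  # -> [0.0, 1.0, 0.5]: red has hue 0, full saturation
```

colorsys.hls_to_rgb makes the reverse trip. Be aware that a Python-level loop like this is exactly what is slow on large images, so vectorizing the math is worthwhile.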
I wrote this RGB to HSV converter a little while back. It starts with a PIL image but uses numpy to do the array operations efficiently. It could very easily be modified to do HSL. Let me know if you want the modified version.
One option is to use OpenCV. Their Python bindings are pretty good (although not amazing). The upside is that it is a very powerful library, so this would just be the tip of the iceberg.
You could probably also do this very efficiently using numpy.
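A vectorized sketch under stated assumptions (a float RGB array in [0, 1] with shape (..., 3)); the boolean masks mirror the per-channel cases in colorsys's formulas, so a 640x480 image becomes one call rather than 307,200:

```python
import numpy as np

def rgb_to_hsl(rgb):
    """Vectorized RGB -> HSL for a float array in [0, 1], shape (..., 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc = rgb.max(axis=-1)
    minc = rgb.min(axis=-1)
    l = (maxc + minc) / 2.0
    delta = maxc - minc
    chromatic = delta > 0                     # gray pixels keep h = s = 0

    s = np.zeros_like(l)
    s[chromatic] = np.where(
        l[chromatic] <= 0.5,
        delta[chromatic] / (maxc + minc)[chromatic],
        delta[chromatic] / (2.0 - maxc - minc)[chromatic],
    )

    # hue depends on which channel holds the maximum
    h = np.zeros_like(l)
    rmax = chromatic & (maxc == r)
    gmax = chromatic & (maxc == g) & ~rmax
    bmax = chromatic & ~rmax & ~gmax
    h[rmax] = ((g - b)[rmax] / delta[rmax]) % 6.0
    h[gmax] = (b - r)[gmax] / delta[gmax] + 2.0
    h[bmax] = (r - g)[bmax] / delta[bmax] + 4.0
    return np.stack([h / 6.0, s, l], axis=-1)

pixels = np.array([[1.0, 0.0, 0.0], [0.25, 0.5, 0.75]])
print(rgb_to_hsl(pixels)[0].tolist())  # -> [0.0, 1.0, 0.5]: red again
```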