Python CV2 Color Space Conversion Fidelity Loss

Observe the following Python code (the input image, rainbow.png, is not reproduced here):
import cv2
img = cv2.imread("rainbow.png", cv2.IMREAD_COLOR)
img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) # convert it to hsv
img = cv2.cvtColor(img, cv2.COLOR_HSV2BGR) # convert back to BGR
cv2.imwrite("out.png", img)
The output image (not reproduced here) shows a clear loss of visual fidelity: compared side by side with the original, blocky banding artifacts are visible, most obviously around the yellows.
What's going on here? Is there any way to prevent these blocky artifacts from appearing? I need to convert to the HSL color space to rotate the hue, but I can't do that if I'm going to get these kinds of artifacts.
As a note, the output image does not have the artifacts when I don't do the two conversions; the conversions themselves are indeed the cause.

Back at a computer now - try like this:
#!/usr/bin/env python3
import numpy as np
import cv2
img = cv2.imread("rainbow.png", cv2.IMREAD_COLOR)
img = img.astype(np.float32)/255 # go to 32-bit float on 0..1
img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) # convert it to hsv
img = cv2.cvtColor(img, cv2.COLOR_HSV2BGR) # convert back to BGR
cv2.imwrite("output.png", (img*255).astype(np.uint8))
I think the problem is that in the unsigned 8-bit representation, the Hue gets "squished" from a range of 0..360 down to 0..180, i.e. stored in 2-degree increments, so that it fits within the 8-bit range of 0..255. That quantisation causes visible steps between nearby hues. A solution is to move to 32-bit floats scaled to the range 0..1, where Hue keeps its full 0..360 range.
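A quick Numpy-only sketch of that quantisation effect (the hue values below are made up for illustration, not taken from the rainbow image):

```python
import numpy as np

# hypothetical hue values in degrees
hue_deg = np.array([88.0, 89.0, 90.0, 91.0, 92.0])

# 8-bit HSV storage: OpenCV keeps H as degrees/2 so it fits in a uint8
stored = np.round(hue_deg / 2).astype(np.uint8)

# converting back, every hue lands on an even number of degrees
recovered = stored.astype(np.float32) * 2
print(recovered)
```

In the float32 path the hue keeps its full 0..360 range, so no such snapping occurs.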


Problem in returning images in Python for LabVIEW

I am just starting to learn LabVIEW. I want to get a threshold from my image in a python function and display the image in LabVIEW. But when the function returns the image, it gives an error in LabVIEW. I am sending the relevant code in Python and the LabVIEW program as an attachment.
Thanks
import numpy as np
import cv2

def threshold(data):
    gray = np.array(data, dtype=np.uint8)
    ret, thresh1 = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
    return np.array(thresh1, dtype=np.float64)

if __name__ == '__main__':
    data = cv2.imread('C:/Users/user00/Desktop/LabView/1120220711_151148.tiff', 1)
    thresh = threshold(data)
    cv2.imshow('thresh1', thresh)
    cv2.waitKey(0)
As the commenters on your post have suggested, it appears that the python code and LabVIEW code are expecting different types. When you perform the test just in Python the code adapts as required to show the image but the types need to match when passing between the two environments.
As per the OP's comment below, we need to pass a grayscale image and return an RGB image.
The grayscale image is easier as it is a 2D array of uint8 types. We can convert a Grayscale IMAQ image into the correct array type using IMAQ ImageToArray.vi.
When it comes to passing an RGB image back to LabVIEW we need to know the following:
In OpenCV a colour image is a 3-dimensional array: a 2-dimensional image with multiple "channels" along the last axis. Each channel represents one of the colours, and the OpenCV convention is to store the channels in Blue-Green-Red order.
In LabVIEW, IMAQ RGB images are represented as a 2-dimensional array of unsigned 32-bit integers. The most significant byte is the Alpha channel, which IMAQ cannot handle but still stores. The next byte is the Red channel, then the Green channel, and finally the least significant byte is the Blue channel.
We have two options - we can either format the image data before passing it from the Python side or we can take the Python image data as-is and transform it to the format LabVIEW/IMAQ needs in LabVIEW.
In the example code below I chose the latter (because I have more experience manipulating data in LabVIEW). Once the RGB image data is an array of U32 integers we can use the IMAQ ArrayToColorImage.vi to write the data to the IMAQ image.
The associated Python code is
import numpy as np
import cv2

def threshold(data):
    gray = np.array(data, dtype=np.uint8)
    # perform threshold operation
    ret, thresh1 = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY)
    #
    # create an RGB image to demonstrate output
    #
    height = 200
    width = 300
    rgb = np.zeros((height, width, 3), np.uint8)
    # create RGB vertical stripes
    # note cv2 channels are arranged BGR
    # red stripe
    rgb[:, 0:width//3] = (0, 0, 255)
    # green stripe
    rgb[:, width//3:2*width//3] = (0, 255, 0)
    # blue stripe
    rgb[:, 2*width//3:width] = (255, 0, 0)
    # return rgb 3d-array
    return rgb
Note - the LabVIEW code is attached as a VI snippet, so you should be able to drag it into a fresh LabVIEW block diagram.
Alternatively all the code is in this github gist
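For completeness, the other option (formatting the data on the Python side) could be sketched roughly as below. The helper name is mine, not from the original answer; the byte layout follows the Alpha-Red-Green-Blue order described above, with the alpha byte left at zero:

```python
import numpy as np

def bgr_to_u32(bgr):
    # bgr: HxWx3 uint8 array in OpenCV's Blue-Green-Red channel order
    b = bgr[:, :, 0].astype(np.uint32)
    g = bgr[:, :, 1].astype(np.uint32)
    r = bgr[:, :, 2].astype(np.uint32)
    # pack as 0x00RRGGBB - the most significant (alpha) byte stays zero
    return (r << 16) | (g << 8) | b

# a single pixel with B=0, G=128, R=255
pixel = np.array([[[0, 128, 255]]], dtype=np.uint8)
print(hex(bgr_to_u32(pixel)[0, 0]))
```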

Imageio and cv2 Read Image - jpeg [duplicate]

As I'm led to believe, OpenCV reads images in BGR colorspace ordering and we usually have to convert it back to RGB like this:
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
But when I try to simply read an image and show it, the coloring seems fine (without the need to convert BGR to RGB):
img_bgr = cv2.imread(image_path)
cv2.imshow('BGR Image',img_bgr)
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
cv2.imshow('RGB Image',img_rgb )
cv2.waitKey(0)
So is imshow() changing the ordering within the function automatically (from BGR to RGB) or the ordering has been BGR all along?
BGR and RGB are not color spaces, they are just conventions for the order of the different color channels. cv2.cvtColor(img, cv2.COLOR_BGR2RGB) doesn't do any computations (like a conversion to say HSV would), it just switches around the order. Any ordering would be valid - in reality, the three values (red, green and blue) are stacked to form one pixel. You can arrange them any way you like, as long as you tell the display what order you gave it.
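To see that this is purely a reordering, take one made-up pixel:

```python
import numpy as np

# one pixel stored in BGR order
bgr = np.array([10, 128, 240], dtype=np.uint8)

# "converting" to RGB just reverses the channel axis - no values change
rgb = bgr[::-1]
print(rgb)

# reversing again restores the original order
print(rgb[::-1])
```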
OpenCV imread, imwrite and imshow indeed all work with the BGR order, so there is no need to change the order when you read an image with cv2.imread and then want to show it with cv2.imshow.
While BGR is used consistently throughout OpenCV, most other image processing libraries use the RGB ordering. If you want to use matplotlib's imshow but read the image with OpenCV, you would need to convert from BGR to RGB.
screen = cv2.cvtColor(screen, cv2.COLOR_RGB2BGR)
This one line swaps between RGB and BGR. The swap is its own inverse, so cv2.COLOR_RGB2BGR and cv2.COLOR_BGR2RGB do the same thing.
for matplotlib we need to change BGR to RGB:
img = cv2.imread("image_name")
img = img[...,::-1]
plt.imshow(img)
opencv_image_with_bgr_channels = cv2.imread('path/to/color_image.jpg')
matplotlib_compatible_image_with_rgb_channels = opencv_image_with_bgr_channels[:,:, ::-1]
This converts a BGR image to RGB by reversing the channel axis.
If you do not need any other image processing library (for example Matplotlib's imshow), there is no need for a colour-order conversion. The code below is an example where the conversion is performed, but since the image is displayed using cv2.imshow(), which expects BGR, the conversion is unnecessary.
import cv2
# read the image #
image = cv2.imread('<<Image Path>>')
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# write a function to draw circles on the image #
def draw_circle(event, x, y, flags, params):
    if event == cv2.EVENT_RBUTTONDOWN:
        cv2.circle(img=image_rgb, center=(x, y), radius=100, color=(0, 0, 255), thickness=10)

# Open CV callbacks #
cv2.namedWindow(winname='ImageWindow')
cv2.setMouseCallback('ImageWindow', draw_circle)

# display the image till the user hits the ESC key #
while True:
    cv2.imshow('ImageWindow', image_rgb)
    if cv2.waitKey(20) & 0xFF == 27:
        break
cv2.destroyAllWindows()
Alternatively, you can use the imutils.opencv2matplotlib() function, which performs the BGR to RGB conversion for you.

How to have a partial grayscale image using Python Pillow (PIL)?

Example:
1st image: the original image.
2nd, 3rd and 4th images: the outputs I want.
I know PIL has the method PIL.ImageOps.grayscale(image) that returns the 4th image, but it doesn't have parameters to produce the 2nd and 3rd ones (partial grayscale).
When you convert an image to greyscale, you are essentially desaturating it to remove saturated colours. So, in order to achieve your desired effect, you probably want to convert to HSV mode, reduce the saturation and convert back to RGB mode.
from PIL import Image
# Open input image
im = Image.open('potato.png')
# Convert to HSV mode and separate the channels
H, S, V = im.convert('HSV').split()
# Halve the saturation - you might consider 2/3 and 1/3 saturation
S = S.point(lambda p: p//2)
# Recombine channels
HSV = Image.merge('HSV', (H,S,V))
# Convert to RGB and save
result = HSV.convert('RGB')
result.save('result.png')
If you prefer to do your image processing in Numpy rather than PIL, you can achieve the same result as above with this code:
from PIL import Image
import numpy as np
# Open input image
im = Image.open('potato.png')
# Convert to HSV and go to Numpy
HSV = np.array(im.convert('HSV'))
# Halve the saturation with Numpy. Hue will be channel 0, Saturation is channel 1, Value is channel 2
HSV[..., 1] = HSV[..., 1] // 2
# Go back to "PIL Image", go back to RGB and save
Image.fromarray(HSV, mode="HSV").convert('RGB').save('result.png')
Of course, set the entire Saturation channel to zero for full greyscale.
from PIL import ImageEnhance
# value: float between 0.0 (grayscale) and 1.0 (original)
ImageEnhance.Color(image).enhance(value)
P.S.: Mark's solution works, but it seems to be increasing the exposure.
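For comparison, the same partial-greyscale idea can be written as a plain Numpy blend between each pixel and its greyscale (luma) value. The function name and the Rec. 601 weights are my choices, not from the answers above, and blending toward luma is not numerically identical to halving HSV saturation:

```python
import numpy as np

def partial_grayscale(rgb, amount=0.5):
    # rgb: HxWx3 uint8 array; amount: 0.0 = original, 1.0 = full greyscale
    img = rgb.astype(np.float32)
    # per-pixel greyscale value using Rec. 601 luma weights
    luma = img @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    # linear blend between the colour image and its greyscale version
    out = img * (1 - amount) + luma[..., None] * amount
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```

With amount=1.0 every pixel's channels become equal (full greyscale), and a neutral grey input comes back unchanged for any amount.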

Get brightness value from HSV - Python

How can I determine the brightness value of a photo?
Here's my code; I cannot understand how to determine it:
import cv2

def rgb2hsv(img_path):
    img = cv2.imread(img_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    return hsv
Any ideas?
Not sure what you mean by the "brightness value" of an image, but whatever you mean, it is stored in the Value channel (i.e. 3rd channel) of the Hue, Saturation and Value image you have already calculated.
So, if you want a single, mean brightness number for the whole image, you can use:
hsv[...,2].mean()
If you want a single, peak brightness number for the brightest spot in the image:
hsv[...,2].max()
And if you want a greyscale "map" of the brightness at each point of the image, just display or save the 3rd channel:
cv2.imwrite('brightness.png',hsv[...,2])
In HSV, "value" is the brightness of the color and varies with color saturation. It ranges from 0 to 100%. When the value is 0 the color is totally black; as the value increases, the color brightens and shows its various shades.
So use OpenCV method:
cvCvtColor(const CvArr* src, CvArr* dst, int code)
that converts an image from one color space to another. You may use:
code = CV_BGR2HSV
Then calculate the histogram of the third channel V, which is the brightness.
Hopefully that helps!
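A minimal sketch of that histogram step with Numpy (cv2.calcHist would work equally well); the 256-bin choice just gives one bin per possible 8-bit value, and the tiny V channel here is made up:

```python
import numpy as np

# stand-in for the V (brightness) channel of a real HSV image
v = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# one histogram bin per possible 8-bit brightness value
hist, bin_edges = np.histogram(v, bins=256, range=(0, 256))

print(hist.sum())  # total number of pixels
```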
If you need brightest pixel in the image use the following code:
import numpy as np
import cv2
img = cv2.imread(img_path)
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(img_hsv)
bright_pixel = np.amax(v)
print(bright_pixel)
# bright_pixel will give max illumination value in the image

