Python simple variable substitution (OpenCV image processing) - python

Why does img show the same result as img_gray?
I think img should show the original image.

You have to duplicate the image:
img_gray = img.copy()
Without copy(), both variables give access to the same image in memory.
That is standard behavior in Python.
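For illustration, a minimal sketch of the difference (the filename is a placeholder):
import cv2

img = cv2.imread('cover.jpg')        # placeholder file
alias = img                          # no copy: both names refer to the same array
img_copy = img.copy()                # independent copy

alias[:] = 0                         # also blacks out img, because alias IS img
print(img.max())                     # 0
print(img_copy.max())                # unchanged: still the original maximum value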

Related

Image.open() gives a plain white image

I am trying to edit this image:
However, when I run
im = Image.open(filename)
im.show()
it outputs a completely plain white image of the same size. Why is Image.open() not working? How can I fix this? Is there another library I can use to get non-255 pixel values (the correct pixel array)?
Thanks,
Vinny
Image.open actually seems to work fine, as does getpixel, putpixel and save, so you can still load, edit and save the image.
The problem seems to be that the temp file the image is saved to for show() is just plain white, so the image viewer shows a plain white image. Your original image is 16-bit grayscale, but the temp image is saved as 8-bit grayscale.
My current theory is that there might actually be a bug in show() where a 16-bit grayscale image is just "converted" to 8-bit grayscale by capping all pixel values at 255, resulting in an all-white temp image, since all the pixel values in the original are above 30,000.
If you set a pixel to a value below 255 before calling show(), that pixel shows correctly. Thus, assuming you want to enhance the contrast in the picture, you can open the picture, map the values to the range 0 to 255 (e.g. using numpy), and then use show():
from PIL import Image
import numpy as np
arr = np.array(Image.open("Rt5Ov.png"))
arr = (arr - arr.min()) * 255 // (arr.max() - arr.min())
img = Image.fromarray(arr.astype("uint8"))
img.show()
But as said before, since save() seems to work as it should, you could also keep the 16-bit grayscale depth and just save the edited image instead of using show().
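For instance (a sketch with a hypothetical edit; Pillow keeps the 16-bit grayscale depth when saving back to PNG):
from PIL import Image

im = Image.open("Rt5Ov.png")       # opens in 16-bit grayscale mode
im.putpixel((0, 0), 0)             # example edit
im.save("edited.png")              # PNG preserves the 16-bit depth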
You can also use the OpenCV library to load the image and matplotlib to display it:
import cv2
import matplotlib.pyplot as plt

img = cv2.imread('image file')
plt.imshow(img)
plt.show()
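Note that OpenCV loads color images in BGR channel order while matplotlib expects RGB, so for color images a conversion step is usually needed before plotting (a sketch; the filename is a placeholder):
import cv2
import matplotlib.pyplot as plt

img = cv2.imread('image.jpg')                    # BGR order
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # reorder channels for matplotlib
plt.imshow(img_rgb)
plt.show()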

How can I show an image after writing it?

I'm trying some simple code that reads an image, converts it to greyscale, shows both of them, and finally saves the greyscale image and displays it after saving. The problem is that cv2.imshow for the saved image doesn't work.
The images before image writing are displayed correctly, and the image is saved correctly to the same path, but it can't be displayed using cv2.imshow.
import cv2
img=cv2.imread('cover.jpg')
cv2.imshow('image', img)
img_grey = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
cv2.imshow('image_grey', img_grey)
savedimage='new.jpg'
cv2.imwrite('new.jpg',img_grey)
cv2.imshow('testsavedimage',savedimage)
cv2.waitKey(0)
I receive this error when showing the saved image:
File "C:/1.py", line 8, in <module>
cv2.imshow('testsavedimage',savedimage)
TypeError: Expected Ptr<cv::UMat> for argument '%s
savedimage is just a string in this case. If you want to make sure that your greyscale image was saved properly, you need to first read it back into a Mat object:
cv2.imwrite('new.jpg',img_grey)
savedImg = cv2.imread('new.jpg')
cv2.imshow('testsavedimage', savedImg)
Hope this helps.
(PS #Mark Setchel is right, you should be using cv2.COLOR_BGR2GRAY here. This is because imread() and imshow() default to BGR colorspace, not RGB)
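For reference, a minimal sketch of the full corrected script, combining the question's code with the fixes above:
import cv2

img = cv2.imread('cover.jpg')
cv2.imshow('image', img)

# imread() returns BGR, so convert with COLOR_BGR2GRAY
img_grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow('image_grey', img_grey)

# Save, then read the file back into an array before showing it
savedimage = 'new.jpg'
cv2.imwrite(savedimage, img_grey)
savedImg = cv2.imread(savedimage)
cv2.imshow('testsavedimage', savedImg)

cv2.waitKey(0)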

PIL image in grayscale to OpenCV format

I found a previous answer covering the more general conversion from an RGB image here: Convert image from PIL to openCV format
I would like to know what the difference is when an image has to be read in grayscale format.
images = [None, None]
images[0] = Image.open('image1')
images[1] = Image.open('image2')
print(type(images[0]))
a = np.array(images[0])
b = np.array(images[1])
print(type(a))
im_template = cv2.imread(a, 0)
im_source = cv2.imread(b, 0)
I get the following output:
<class 'PIL.JpegImagePlugin.JpegImageFile'>
<class 'numpy.ndarray'>
Even though I am able to convert the image to ndarray, cv2 says: "bad argument type for built-in operation". I do not need an RGB to BGR conversion. What else should I consider while passing a cv2 read argument?
You are making life unnecessarily difficult for yourself. If you want to load an image as greyscale, and use it with OpenCV, you should just do:
im = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
and that's all. No need to use PIL (which is slower), and no need to use cvtColor(), which would mean you had already wasted memory reading the image in BGR anyway.
If you absolutely want to read it using PIL (for some odd reason), use:
import numpy as np
from PIL import Image
# Read in and make greyscale
PILim = Image.open('image.jpg').convert('L')
# Make Numpy/OpenCV-compatible version
openCVim = np.array(PILim)
By the way, if you want to go back to a PIL image from an OpenCV/Numpy image, use:
PILim = Image.fromarray(openCVim)
Since you have already loaded the image, you should use an image conversion function:
im_template = cv2.cvtColor(a, cv2.COLOR_RGB2GRAY)
im_source = cv2.cvtColor(b, cv2.COLOR_RGB2GRAY)

How to save HSV converted Image in Python?

I am reading an RGB image and converting it into HSV mode using PIL. Now I am trying to save this HSV image but I am getting an error.
filename = r'\trial_images\cat.jpg'
img = Image.open(filename)
img = img.convert('HSV')
destination = r'\demo\temp.jpg'
img.save(destination)
I am getting the following error:
OSError: cannot write mode HSV as JPEG
How can I save my transformed image? Please help
Easy one... save it as a numpy array. This works fine, but the file might be pretty big (for me it was about 7 times bigger than the JPEG image). You can use numpy's savez_compressed
function to cut that roughly in half, to about 3-4 times the size of the original image. Not fantastic, but when you are doing image processing you are probably fine.
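A minimal sketch of that approach (the filenames are placeholders; np.save writes an uncompressed .npy file, np.savez_compressed a compressed .npz):
import numpy as np
from PIL import Image

img = Image.open('cat.jpg').convert('HSV')
hsv = np.array(img)                            # H x W x 3 uint8 array

np.save('temp_hsv.npy', hsv)                   # uncompressed
np.savez_compressed('temp_hsv.npz', hsv=hsv)   # compressed, roughly half the size

# Later: load it back and rebuild a PIL image if needed
hsv_back = np.load('temp_hsv.npz')['hsv']
img_back = Image.fromarray(hsv_back, mode='HSV')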

OpenCV GaussianBlur making image lose colors

I am applying OpenCV's GaussianBlur to an image. The resulting image seems to lack the colors the original image has.
My code:
originalImage = cv2.imread('path to original image',0)
blurredImage = cv2.GaussianBlur(originalImage,(15,15),0)
cv2.imwrite('path to save the new image', blurredImage)
Original image:
New image:
Is this correct behaviour? I want to retain the color details.
The problem is with the line reading the image as:
originalImage = cv2.imread('path to original image',0)
The param 0 in cv2.imread() instructs the library to read the image as grayscale, irrespective of the original image configuration. To fix this, you can call cv2.imread() with no second parameter:
originalImage = cv2.imread('path to original image')
This instructs the library to read the image as a 3-channel BGR image.
But if you want to read the image in the exact same format as it is, then you may need to call:
originalImage = cv2.imread('path to original image', -1)
You may refer to cv2.imread() docs for more info.
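Putting it together, a minimal sketch that keeps the color channels (the paths are placeholders):
import cv2

originalImage = cv2.imread('original.jpg')                  # default flag: 3-channel BGR
blurredImage = cv2.GaussianBlur(originalImage, (15, 15), 0)
cv2.imwrite('blurred.jpg', blurredImage)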
