I'm new to image processing. I just wanted to get a TIFF image from raw format (NEF). I used the rawpy module to get the desired output, yet the TIFF image is RGB with 4 channels. I can't figure out why there is a fourth channel in the new image.
Can anyone please explain what is going on, and how I can get a TIFF image with three RGB channels?
import rawpy
import matplotlib.pylab as plt

raw_image = "DSC_0001.NEF"
raw = rawpy.imread(raw_image)
rgb = raw.postprocess()        # demosaiced RGB array
plt.imsave("new.tiff", rgb)
image = plt.imread("new.tiff")
print(image.shape)
The array shape is (2868, 4310, 4)!
Finally I found the reason:
plt.imsave saves the image as RGBA, while skimage.io.imsave will save it as RGB.
Source: Github Issue entry
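For reference, here is a minimal sketch of that approach with scikit-image, assuming scikit-image is installed and reusing the file names from the question:
import rawpy
import skimage.io

raw = rawpy.imread("DSC_0001.NEF")
rgb = raw.postprocess()                      # 3-channel RGB array

# skimage.io.imsave writes the array as-is, so the TIFF keeps 3 channels
skimage.io.imsave("new.tiff", rgb)

print(skimage.io.imread("new.tiff").shape)   # e.g. (2868, 4310, 3)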
Related
I am trying to import a Nikon '.NEF' file into OpenCV. '.NEF' is the file extension for a RAW file format for pictures captured by Nikon cameras. When I open the file in Preview on a Mac, I see that the resolution is 6000 by 4000, and the picture is extremely clear. However, when I import it into OpenCV, I see only 120 by 160 (by 3 for RGB channels) data points, and this leads to a big loss in resolution.
My understanding is that there are 120 by 160 pixels in the NumPy array storing the information about pixels for OpenCV. I tried using -1 for the IMREAD_UNCHANGED flag, but many pixels were left out and image quality was greatly affected.
For your reference, here is my code:
# first Jupyter block
import cv2

img = cv2.imread('DSC_1051.NEF', -1)
img.shape
Performing img.shape returns (120, 160, 3).
# second Jupyter block
cv2.namedWindow("Resize", cv2.WINDOW_NORMAL)
cv2.resizeWindow("Resize", 1000, 700)
# Displaying the image
cv2.imshow("Resize", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Summary of problem:
Original image shape is (6000, 4000)
OpenCV imports (120, 160), leading to a big loss in resolution
Using the IMREAD_UNCHANGED flag did not lead to OpenCV importing all the pixels in the image, leading to a loss in quality of the image upon performing cv2.imshow().
My question: how can I use OpenCV to import the desired number of pixels? Is there a specific function that I can use? Am I missing an argument to be passed?
If you want to manipulate RAW images in Python without losing resolution, you'll need a specialized library like rawpy.
import rawpy
with rawpy.imread('filename.NEF') as raw:
    raw_image = raw.raw_image
You can check the rawpy documentation for more information
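For example, a quick sketch (using the same placeholder file name) to confirm that the full sensor resolution is available:
import rawpy

with rawpy.imread('filename.NEF') as raw:
    print(raw.raw_image.shape)   # full sensor resolution
    print(raw.raw_image.dtype)   # typically uint16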
Notes:
To install rawpy, Python<=3.7 is required
If you explain a little more about what you need to do with the image, I could help you with that.
Example 1: how to save .NEF as .jpg
Option A: rawpy + Pillow (you need to install Pillow too)
import rawpy
from PIL import Image
with rawpy.imread('filename.NEF') as raw:
    rgb = raw.postprocess(use_camera_wb=True)
Image.fromarray(rgb).save('image.jpg', quality=90, optimize=True)
Option B: rawpy + cv2
import rawpy
import cv2
with rawpy.imread('filename.NEF') as raw:
    rgb = raw.postprocess(use_camera_wb=True)
bgr = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)
cv2.imwrite("image.jpg", bgr)
Quality comparison
I tested the code with this 19.2 MB .NEF image and got these results:
Method       | .jpg output size | Dimensions
PIL          | 9 KB             | 320 × 212
cv2          | 14 KB            | 320 × 212
rawpy + PIL  | 1.4 MB           | 4284 × 2844
rawpy + cv2  | 2.5 MB           | 4284 × 2844
Example 2: show .NEF with cv2
import rawpy
import cv2
with rawpy.imread('filename.NEF') as raw:
    rgb = raw.postprocess(use_camera_wb=True)
bgr = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)
cv2.imshow('image', bgr)
cv2.waitKey(0)
cv2.destroyAllWindows()
I am using code that is not throwing errors, but it is not behaving as expected. I expect this code to save the PDF pages as grayscale images. It does save the images, but they are still in color.
I know they are not in grayscale because I'm using numpy to convert them to pixel tables (lists of lists), and they contain the
[[255, 255, 255], [255, 255, 255]]... structure of a color image.
from wand.image import Image as Img
with Img(filename=pdf_file, resolution=300) as img:
img.type = 'grayscale'
img.compression_quality = 99
img.save(filename=image_file)
I am opening the files with PIL using .convert('L'). I thought that would open them as grayscale, but it doesn't seem to solve my problem either.
img = Image.open(first_page).convert('L')
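As a side note, here is a small sketch (reusing first_page from the snippet above) of how to check whether a saved page really is grayscale, before applying .convert('L'):
from PIL import Image
import numpy as np

img = Image.open(first_page)
print(img.mode)   # 'L' means a true single-channel grayscale image

arr = np.asarray(img.convert('RGB'))
# In a genuinely gray image, the R, G and B values are equal at every pixel
print(np.array_equal(arr[..., 0], arr[..., 1]) and
      np.array_equal(arr[..., 1], arr[..., 2]))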
I want to convert a 24-bit PNG image to 32-bit so that it can be displayed on the LED matrix. Here is the code I used, but it converted the image from 24-bit to 48-bit:
import cv2
import numpy as np

i = cv2.imread("bbb.png")
img = np.array(i, dtype=np.uint16)   # 3 channels × 16 bits = 48-bit output
img *= 256
cv2.imwrite('test.png', img)
I looked at the christmas.png image in the code you linked to, and it appears to be a 624x8 pixel image with a palette and an 8-bit alpha channel.
Assuming the sample image works, you can make one with the same characteristics by taking a PNG image and adding a fully opaque alpha channel like this:
#!/usr/local/bin/python3
from PIL import Image
# Load the image and convert to 32-bit RGBA
im = Image.open("image.png").convert('RGBA')
# Save result
im.save("result.png")
I generated a gradient image, applied that processing, and it produced the expected result, so maybe you can try that.
I think you have confused the color bit-depth with the size of the input image/array. From the links posted in the comments, there is no mention of 32 as a bit depth. The script at that tutorial link uses an image with 3-channel, 8-bit color (red, green, and blue code values each represented as numbers from 0-255). The input image must have the same height as the array, but can be a different width to allow scrolling.
For more on bit-depth: https://en.wikipedia.org/wiki/Color_depth
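If it helps, here is a small sketch (using the file name from the question above) for inspecting the channel count and per-channel bit depth of an image loaded with OpenCV:
import cv2

img = cv2.imread("bbb.png", cv2.IMREAD_UNCHANGED)

channels = 1 if img.ndim == 2 else img.shape[2]
bits_per_channel = img.dtype.itemsize * 8

# A typical PNG: 3 channels × 8 bits = 24-bit color
print(channels, bits_per_channel, channels * bits_per_channel)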
I am reading an RGB image and converting it into HSV mode using PIL. Now I am trying to save this HSV image but I am getting an error.
from PIL import Image

filename = r'\trial_images\cat.jpg'
img = Image.open(filename)
img = img.convert('HSV')
destination = r'\demo\temp.jpg'
img.save(destination)
I am getting the following error:
OSError: cannot write mode HSV as JPEG
How can I save my transformed image? Please help
Easy one... save it as a numpy array. This works fine, but the file might be pretty big (for me it got about 7 times bigger than the JPEG image). You can use numpy's savez_compressed function to cut that roughly in half, to about 3-4 times the size of the original image. Not fantastic, but when you are doing image processing you are probably fine.
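A minimal sketch of that approach, reusing the file names from the question (the .npz name is just an example):
import numpy as np
from PIL import Image

img = Image.open(r'\trial_images\cat.jpg').convert('HSV')
hsv = np.asarray(img)

# savez_compressed stores the raw HSV array at roughly 3-4x the JPEG size
np.savez_compressed('temp_hsv.npz', hsv=hsv)

# Later: load the HSV array back for further processing
hsv_back = np.load('temp_hsv.npz')['hsv']
print(hsv_back.shape, hsv_back.dtype)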
I want to read the alpha channel from a tiff image using Python OpenCV. I am using Enthought Canopy with OpenCV 2.4.5-3 module.
I followed the OpenCV website's tutorial using cv2.imread, but it doesn't seem to work.
What I have now is:
import cv2
image = cv2.imread('image.tif', -1)
Then I used print(image.shape), and it still shows (8192, 8192, 3). But when I use Matlab to read the same image, I can see that its dimensions are (8192, 8192, 4).
I am not sure what I should do to read the alpha channel of this image.
This is an old question, but just in case someone else stumbles on it: if img.tiff is a 4-channel TIFF, then
import cv2
img = cv2.imread('img.tiff', cv2.IMREAD_UNCHANGED)
print(img.shape)
yields (212, 296, 4) as expected.
If you then use
channels = cv2.split(img)
you can reference the alpha layer (channels[3]) - for instance, as a mask.
The idea for this was taken from How do I use Gimp / OpenCV Color to separate images into coloured RGB layers? which cleverly uses a fake layer and merge to enable recovery of the individual RGB layers in their actual colours.
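For instance, a short sketch of using that alpha channel as a mask (assuming the same img.tiff and an arbitrary output name):
import cv2

img = cv2.imread('img.tiff', cv2.IMREAD_UNCHANGED)   # shape (H, W, 4), BGRA
b, g, r, a = cv2.split(img)

# Keep only the pixels where the alpha channel is non-zero
bgr = cv2.merge([b, g, r])
masked = cv2.bitwise_and(bgr, bgr, mask=a)
cv2.imwrite('masked.png', masked)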
I found a solution to this problem by converting the original image to RGBA format through the PIL library.
from PIL import Image
import numpy as np
import cv2
path_to_image = 'myimg.png'
image = Image.open(path_to_image).convert('RGBA')
image.save(path_to_image)
image = cv2.imread(path_to_image, cv2.IMREAD_UNCHANGED)
print(image.shape)
Output: (800, 689, 4)
I solved the same problem with:
pip install --upgrade opencv-python