How can I write a single-channel PNG image from a NumPy array in Python?
In Matlab that would be
A = randi(100,100,255)
imwrite(uint8(A),'myFilename.png','png');
I saw examples using from PIL import Image and Image.fromarray(), but it appears they are only for JPEGs and 3-channel PNGs...
I already found a solution using OpenCV; I will post it here. Hopefully it will shorten someone else's search...
Here is a solution using OpenCV (cv2):
import cv2
import numpy as np

myImg = np.random.randint(255, size=(200, 400)).astype(np.uint8)  # create a random 8-bit greyscale image
cv2.imwrite('myImage.png', myImg)
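To verify that the result really is a single-channel 8-bit PNG, a quick check (a sketch, assuming the file written above) is to read it back unchanged and inspect it:
check = cv2.imread('myImage.png', cv2.IMREAD_UNCHANGED)
print(check.shape, check.dtype)   # expect (200, 400) and uint8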
PIL's Image.fromarray() automatically determines the mode to use from the datatype of the passed numpy array; for example, for an 8-bit greyscale image you can use:
from PIL import Image
import numpy as np
data = np.random.randint(256, size=(100, 100), dtype=np.uint8)
img = Image.fromarray(data) # uses mode='L'
This, however, only works if your array uses a compatible datatype: if you simply use data = np.random.randint(256, size=(100, 100)), you can end up with an int64 array (typestr <i8), which PIL can't handle.
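If you already have such an incompatible integer array, a minimal fix (a sketch, assuming the values fit in the 0-255 range) is to cast it before handing it to PIL:
data = np.random.randint(256, size=(100, 100))   # int64 on most platforms
img = Image.fromarray(data.astype(np.uint8))     # the cast makes mode='L' work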
You can also specify a different mode, e.g. to interpret a 32-bit array as an RGB image:
data = np.random.randint(2**32, size=(100, 100), dtype=np.uint32)
img = Image.fromarray(data, mode='RGB')
Internally Image.fromarray() simply tries to guess the correct mode and size and then invokes Image.frombuffer().
The image can then be saved in any format PIL can handle, e.g. img.save('filename.png').
You might not want to pull in OpenCV just for simple image manipulation. As suggested, use PIL:
from PIL import Image

im = Image.fromarray(arr)
im.save("output.png", "PNG")
Have you tried this? What has failed here that led you to conclude that this is JPEG-only?
Related
I am trying to convert 8-bit images to 10-bit. I thought it would be as easy as rescaling the bin values. I've tried Pillow and opencv-python:
from PIL import Image
from numpy import asarray
import cv2
path = 'path/to/image'
img = Image.open(path)
data = asarray(img)
newdata = (data/255)*1023 #2^10 is 1024
img2 = Image.fromarray(newdata) #this fails
cv2.imwrite('path/newimage.png', newdata)
While cv2.imwrite successfully writes the new file, it is still encoded as an 8-bit image even though the values go up to 1023.
$ file newimage.png
newimage.png: PNG Image data, 640 x 480, 8-bit/color RGB, non-interlaced
Is there another way in either python or linux that can convert 8-bit to 10-bit?
Lots of things going wrong here.
You are mixing OpenCV (cv2.imwrite) with PIL (Image.open) for no good reason. Don't do that; you will confuse yourself, as they use different RGB/BGR channel orderings and conventions.
You are trying to store 10-bit numbers in 8-bit vectors.
You are trying to hold three 16-bit RGB pixels in a PIL Image, which will not work, as RGB images must be 8-bit in PIL.
I would suggest:
import cv2
import numpy as np

# Load the 8-bit image
im = cv2.imread(IMAGE, cv2.IMREAD_COLOR)

# Promote to 16-bit and multiply by 4, mapping the 8-bit range 0-255 onto the 10-bit range 0-1020
res = im.astype(np.uint16) * 4

cv2.imwrite('result.png', res)
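Because res is uint16, cv2.imwrite stores the PNG with 16 bits per sample, so the 10-bit values survive. A quick sanity check (a sketch, reading the file written above back in unchanged):
check = cv2.imread('result.png', cv2.IMREAD_UNCHANGED)
print(check.dtype, check.max())   # expect uint16 and values up to 1020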
I found a solution using the pgmagick wrapper for Python:
import os
import pgmagick as pgm

imagePath = 'path/to/image.png'
saveDir = '/path/to/save'
filename = os.path.splitext(os.path.basename(imagePath))[0]  # output name derived from the input path

img = pgm.Image(imagePath)
img.depth(10)  # sets the depth to 10 bit
save_path = os.path.join(saveDir, '.'.join([filename, 'dpx']))
img.write(save_path)
I found a previous answer covering the more general conversion from an RGB image here: Convert image from PIL to openCV format.
I would like to know what differs when an image has to be read in as grayscale.
images = [None, None]
images[0] = Image.open('image1')
images[1] = Image.open('image2')
print(type(images[0]))
a = np.array(images[0])
b = np.array(images[1])
print(type(a))
im_template = cv2.imread(a, 0)
im_source = cv2.imread(b, 0)
I get the following output:
<class 'PIL.JpegImagePlugin.JpegImageFile'>
<class 'numpy.ndarray'>
Even though I am able to convert the image to an ndarray, cv2 says: "bad argument type for built-in operation". I do not need an RGB-to-BGR conversion. What else should I consider when passing an argument to a cv2 read function?
You are making life unnecessarily difficult for yourself. If you want to load an image as greyscale, and use it with OpenCV, you should just do:
im = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
and that's all. There is no need to use PIL (which is slower), and no need for cvtColor(), since that route means you have already wasted memory reading the image in BGR anyway.
If you absolutely want to read it using PIL (for some odd reason), use:
import numpy as np
from PIL import Image
# Read in and make greyscale
PILim = Image.open('image.jpg').convert('L')
# Make Numpy/OpenCV-compatible version
openCVim = np.array(PILim)
By the way, if you want to go back to a PIL image from an OpenCV/Numpy image, use:
PILim = Image.fromarray(openCVim)
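Note that this simple round trip is only safe for greyscale; for colour images PIL uses RGB order while OpenCV uses BGR, so a sketch of the colour round trip would be:
import cv2
import numpy as np
from PIL import Image

bgr = cv2.imread('image.jpg')                                   # OpenCV loads BGR
PILim = Image.fromarray(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))   # PIL expects RGB
openCVim = cv2.cvtColor(np.array(PILim), cv2.COLOR_RGB2BGR)     # and back to BGR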
Since you already have the image loaded, you should use an image conversion function instead:
im_template = cv2.cvtColor(a, cv2.COLOR_RGB2GRAY)
im_source = cv2.cvtColor(b, cv2.COLOR_RGB2GRAY)
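COLOR_RGB2GRAY (rather than COLOR_BGR2GRAY) is the right code here because the arrays come from PIL, which stores channels in RGB order. A self-contained sketch, assuming 'image1' is a colour JPEG:
import cv2
import numpy as np
from PIL import Image

a = np.array(Image.open('image1'))                  # RGB array straight from PIL
im_template = cv2.cvtColor(a, cv2.COLOR_RGB2GRAY)   # single-channel uint8 result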
I want to create a RGB image made from a random array of pixel values in Python with OpenCV/Numpy setup.
I'm able to create a gray image, which looks amazingly live, with this code:
import numpy as np
import cv2
pic_array = np.random.randint(255, size=(900, 800))
pic_array_8bit = pic_array.astype(np.uint8)
pic_g = cv2.imwrite("pic-from-random-array.png", pic_array_8bit)
But I want to make it in color as well. I've tried converting with cv2.cvtColor(), but I couldn't get it to work.
The issue might be in the array definition or a missed step. I couldn't find a similar situation anywhere... Any help on how to make a random RGB image in color would be great.
Thanks!
An RGB image is composed of three greyscale channels. You can make all three at once like this:
rgb = np.random.randint(255, size=(900,800,3),dtype=np.uint8)
cv2.imshow('RGB',rgb)
cv2.waitKey(0)
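If you also want the file on disk, the same array can be written directly (a small addition, with a hypothetical filename):
cv2.imwrite('random-rgb.png', rgb)   # OpenCV interprets the third axis as BGR when writing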
First, define random image data consisting of 3 channels using NumPy, as shown below:
import numpy as np
data = np.random.randint(0, 255, size=(900, 800, 3), dtype=np.uint8)
Now use the Python Imaging Library, as shown below:
from PIL import Image
img = Image.fromarray(data, 'RGB')
img.show()
You can also save the image easily using the save function:
img.save('image.png')
I've managed to come very far on a program I'm writing, but I don't know how to load CR2 files into an OpenCV image. I've tried the following:
raw = rawpy.imread(sys.argv[1])
rgb = raw.postprocess()
PILrgb = scipy.misc.toimage(rgb)
image = cv2.imdecode(PILrgb, 1)
This was an attempt to take the numpy array returned by postprocess() (which processes the currently loaded RAW image and returns the result as a numpy array) and pass it through scipy.misc.toimage(), which takes a numpy array and returns a PIL image.
I get the following msg though TypeError: buf is not a numpy array, neither a scalar
It may be easier if you only use rawpy:
import sys

import cv2
import rawpy

raw = rawpy.imread(sys.argv[1])               # access the RAW image
rgb = raw.postprocess()                       # a numpy RGB array
image = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)  # the OpenCV (BGR) image
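From there it behaves like any other OpenCV image, e.g. it can be written straight to disk (a small addition, with a filename of your choosing):
cv2.imwrite('developed.png', image)   # save the developed RAW as a PNG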
My system is Mac OS X v10.8.2. I have several 2560x500 uncompressed 16-bit TIFF images (grayscale, unsigned 16-bit integers). I first attempt to load them using PIL (installed via Homebrew, version 1.7.8):
from PIL import Image
import numpy as np
filename = 'Rocks_2ptCal_750KHz_20ms_1ma_120KV_2013-03-06_20-02-12.tif'
img = Image.open(filename)
# >>> img
# <PIL.TiffImagePlugin.TiffImageFile image mode=I;16B size=2560x500 at 0x10A383C68>
img.show()
# almost all pixels displayed as white. Not correct.
# MatLab, EZ-draw, even Mac Preview show correct images in grayscale.
imgdata = list(img.getdata())
# most values negative:
# >>> imgdata[0:10]
# [-26588, -24079, -27822, -26045, -27245, -25368, -26139, -28454, -30675, -28455]
imgarray = np.asarray(imgdata, dtype=np.uint16)
# values now correct
# >>> imgarray
# array([38948, 41457, 37714, ..., 61922, 59565, 60035], dtype=uint16)
The negative values are off by 65,536... probably not a coincidence.
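A quick interactive check (using imgdata from above) confirms the offset is exactly the 16-bit wrap-around:
>>> -26588 + 65536
38948
>>> np.array(imgdata[:3]).astype(np.uint16)
array([38948, 41457, 37714], dtype=uint16)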
If I pretend to alter the pixels and convert back to a TIFF image via PIL (by just putting the array back as an image):
newimg = Image.fromarray(imgarray)
I get errors:
File "/usr/local/lib/python2.7/site-packages/PIL/Image.py", line 1884, in fromarray
raise TypeError("Cannot handle this data type")
TypeError: Cannot handle this data type
I can't find Image.fromarray() in the PIL documentation. I've tried loading via Image.fromstring(), but I don't understand the PIL documentation and there is little in the way of examples.
As shown in the code above, PIL seems to "detect" the data as I;16B. From what I can tell from the PIL docs, mode I is:
*I* (32-bit signed integer pixels)
Obviously, that is not correct.
I find many posts on SX suggesting that PIL doesn't support 16-bit images. I've found suggestions to use pylibtiff, but I believe that is Windows only?
I am looking for a "lightweight" way to work with these TIFF images in Python. I'm surprised it is this difficult and that leads me to believe the problem will be obvious to others.
It turns out that Matplotlib handles 16-bit uncompressed TIFF images in two lines of code:
import matplotlib.pyplot as plt
img = plt.imread(filename)
# >>> img
# array([[38948, 41457, 37714, ..., 61511, 61785, 61824],
# [39704, 38083, 36690, ..., 61419, 60086, 61910],
# [41449, 39169, 38178, ..., 60192, 60969, 63538],
# ...,
# [37963, 39531, 40339, ..., 62351, 62646, 61793],
# [37462, 37409, 38370, ..., 61125, 62497, 59770],
# [39753, 36905, 38778, ..., 61922, 59565, 60035]], dtype=uint16)
Et voila. I suppose this doesn't meet my requirements as "lightweight" since Matplotlib is (to me) a heavy module, but it is spectacularly simple to get the image into a Numpy array. I hope this helps someone else find a solution quickly as this wasn't obvious to me.
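To view it with a proper greyscale mapping (a small addition using the same img array):
plt.imshow(img, cmap='gray')
plt.show()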
Try Pillow, the “friendly” PIL fork. They've somewhat recently added better support for 16- and 32-bit images including in the numpy array interface. This code will work with the latest Pillow:
from PIL import Image
import numpy as np
img = Image.open('data.tif')
data = np.array(img)
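Going back the other way should also work with a recent Pillow: a uint16 array converts to a 16-bit image that can be saved as a TIFF again (a sketch, assuming the data array above and a hypothetical output filename):
out = Image.fromarray(data)   # a 16-bit mode (e.g. 'I;16') for a uint16 array
out.save('data_out.tif')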