I have a binary image in Python and I want to save it on my PC.
It needs to be a 1-bit-deep PNG image once stored on my computer.
How can I do that? I tried with both PIL and cv2, but I'm not able to save it with a 1-bit depth.
I found myself in a situation where I needed to create a lot of binary images and was frustrated with the available info online. Thanks to the answers and comments here and elsewhere on SO, I was able to find an acceptable solution. The comment from @Jimbo was the best so far. Here is some code to reproduce my exploration of some ways to save binary images in Python:
Load libraries and data:
from skimage import data, io, util #'0.16.2'
import matplotlib.pyplot as plt #'3.0.3'
import PIL #'6.2.1'
import cv2 #'4.1.1'
check = util.img_as_bool(data.checkerboard())
The checkerboard image from skimage has dimensions of 200x200. Without compression, as a 1-bit image it should take 200*200/8 = 5000 bytes.
To save with skimage, note that the package will complain if the data is not uint, hence the conversion. Saving the image takes an average of 2.8 ms and produces a 408-byte file:
io.imsave('bw_skimage.png',util.img_as_uint(check),plugin='pil',optimize=True,bits=1)
Using matplotlib, 4.2ms and 693 byte file size
plt.imsave('bw_mpl.png',check,cmap='gray')
Using PIL, 0.5ms and 164 byte file size
img = PIL.Image.fromarray(check)
img.save('bw_pil.png',bits=1,optimize=True)
Using cv2, which also complains about a bool input. The following command takes 0.4 ms and results in a 2566-byte file size, despite the PNG compression...
_ = cv2.imwrite('bw_cv2.png', check.astype(int), [cv2.IMWRITE_PNG_BILEVEL, 1])
PIL was clearly the best for speed and file size.
I certainly missed some optimizations, comments welcome!
Use:
cv2.imwrite(<image_name>, img, [cv2.IMWRITE_PNG_BILEVEL, 1])
(this will still use compression, so in practice it will most likely have less than 1 bit per pixel)
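For instance, a minimal sketch (the array name and output path here are made up for illustration):
import numpy as np
import cv2

# Hypothetical binary mask; any boolean/0-1 array works once converted to uint8.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[::2, ::2] = 255  # scale to 0/255 so the thresholding inside imwrite is unambiguous

cv2.imwrite("mask_1bit.png", mask, [cv2.IMWRITE_PNG_BILEVEL, 1])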
If you're not loading PNGs or anything like that, the format is reasonable enough to just write yourself. Then your code doesn't need PIL or any of the headaches of various imports and imports-on-imports, etc.
import struct
import zlib
from math import ceil

def write_png_1bit(buf, width, height, stride=None):
    if stride is None:
        stride = int(ceil(width / 8))  # bytes needed per 1-bit scanline
    # One filter-type byte (0 = no filter) followed by `stride` bytes for each
    # of the `height` scanlines, so the range must run up to height * stride.
    raw_data = b"".join(
        b'\x00' + buf[span:span + stride]
        for span in range(0, height * stride, stride))

    def png_pack(png_tag, data):
        chunk_head = png_tag + data
        return (struct.pack("!I", len(data)) +
                chunk_head +
                struct.pack("!I", 0xFFFFFFFF & zlib.crc32(chunk_head)))

    return b"".join([
        b'\x89PNG\r\n\x1a\n',
        # IHDR: width, height, bit depth 1, color type 0 (grayscale),
        # compression 0, filter 0, interlace 0
        png_pack(b'IHDR', struct.pack("!2I5B", width, height, 1, 0, 0, 0, 0)),
        png_pack(b'IDAT', zlib.compress(raw_data, 9)),
        png_pack(b'IEND', b'')])
Adapted from:
http://code.activestate.com/recipes/577443-write-a-png-image-in-native-python/ (MIT)
by reading the png spec:
https://www.w3.org/TR/PNG-Chunks.html
Keep in mind that the 1-bit data in buf should be packed left to right within each byte, as the PNG spec requires in normal non-interlaced mode (which we declared). Any excess bits pad out the final byte of a scanline if the width isn't a multiple of 8, and stride is the number of bytes needed to encode one scanline. Also, if you want those 1-bit values to map to palette colors, you'll have to write a PLTE chunk and switch the color type to 3 rather than 0, etc.
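For example, a rough usage sketch under the assumption that you start from a boolean numpy array (np.packbits handles the MSB-first packing and the per-row padding):
import numpy as np

mask = np.zeros((200, 200), dtype=bool)
mask[::2, ::2] = True  # made-up test pattern

packed = np.packbits(mask, axis=1)  # one row of ceil(width/8) bytes per scanline
with open("bw_manual.png", "wb") as f:
    f.write(write_png_1bit(packed.tobytes(), mask.shape[1], mask.shape[0]))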
Related
There is a 4 GB .TIF image that needs to be processed; because of a memory constraint, I can't load the whole image into a numpy array, so I need to load it lazily, in parts, from the hard disk.
Basically I need to do this in chunks, and it needs to be done in Python as a project requirement. I also tried looking at the tifffile library on PyPI, but I found nothing useful. Please help.
pyvips can do this. For example:
import sys
import numpy as np
import pyvips
image = pyvips.Image.new_from_file(sys.argv[1], access="sequential")
for y in range(0, image.height, 100):
    area_height = min(image.height - y, 100)
    area = image.crop(0, y, image.width, area_height)
    array = np.ndarray(buffer=area.write_to_memory(),
                       dtype=np.uint8,
                       shape=[area.height, area.width, area.bands])
The access option to new_from_file turns on sequential mode: pyvips will only load pixels from the file on demand, with the restriction that you must read pixels out top to bottom.
The loop runs down the image in blocks of 100 scanlines. You can tune this, of course.
I can run it like this:
$ vipsheader eso1242a-pyr.tif
eso1242a-pyr.tif: 108199x81503 uchar, 3 bands, srgb, tiffload_stream
$ /usr/bin/time -f %M:%e ./sections.py ~/pics/eso1242a-pyr.tif
273388:479.50
So on this sad old laptop it took 8 minutes to scan a 108,000 x 82,000 pixel image and needed a peak of 270 MB of memory.
What processing are you doing? You might be able to do the whole thing in pyvips. It's quite a bit quicker than numpy.
import pyvips
img = pyvips.Image.new_from_file("space.tif", access='sequential')
out = img.resize(0.01, kernel="linear")
out.write_to_file("resized_image.jpg")
If you want to convert the file to another format or get a smaller file size, this code will be enough and will help you do it without any memory spike and in very little time.
I have a folder, say video1, with a bunch of images in order: frame_00.png, frame_01.png, ...
What I want is a 4D numpy array in the format (number of frames, w, h, 3).
This is what I did, but I think it is quite slow. Is there any faster or more efficient method to achieve the same thing?
folder = "video1/"
import os
images = sorted(os.listdir(folder)) #["frame_00", "frame_01", "frame_02", ...]
from PIL import Image
import numpy as np
video_array = []
for image in images:
im = Image.open(folder + image)
video_array.append(np.asarray(im)) #.transpose(1, 0, 2))
video_array = np.array(video_array)
print(video_array.shape)
#(75, 50, 100, 3)
There's an older SO thread that goes into a great deal of detail (perhaps even a bit too much) on this very topic. Rather than vote to close this question as a dup, I'm going to give a quick rundown of that thread's top bullet points:
The fastest commonly available image reading function is imread from the cv2 package.
Reading the images in and then adding them to a plain Python list (as you are already doing) is the fastest approach for reading in a large number of images.
However, given that you are eventually converting the list of images to an array of images, every possible method of building up an array of images is almost exactly as fast as any other.
Although, interestingly enough, if you take the approach of assigning images directly to a preallocated array, it actually matters which indices (i.e. which dimension) you assign to in terms of getting optimal performance (a short sketch of that approach follows below).
So basically, you're not going to be able to get much faster while working in pure, single-threaded Python. You might get a boost from switching to cv2.imread (in place of PIL.Image.open).
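For reference, a minimal sketch of the preallocated-array approach from the last bullet point; the file list and frame count here are hypothetical:
import cv2
import numpy as np

filenames = ["video1/frame_%02d.png" % i for i in range(75)]  # hypothetical paths

# Read one frame to learn the shape/dtype, then preallocate the full 4D array.
first = cv2.imread(filenames[0])
video_array = np.empty((len(filenames),) + first.shape, dtype=first.dtype)

video_array[0] = first
for i, filename in enumerate(filenames[1:], start=1):
    video_array[i] = cv2.imread(filename)  # assign whole frames along the first axis

print(video_array.shape)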
PNG is an extremely slow format, so if you can use almost anything else, you'll see a big speedup.
For example, here's an opencv version of your program that gets the filenames from command-line args:
#!/usr/bin/python3
import sys
import cv2
import numpy as np
video_array = []
for filename in sys.argv[1:]:
    im = cv2.imread(filename)
    video_array.append(np.asarray(im))
video_array = np.array(video_array)

print(video_array.shape)
I can run it like this:
$ mkdir sample
$ for i in {1..100}; do cp ~/pics/k2.png sample/$i.png; done
$ time ./readframes.py sample/*.png
(100, 2048, 1450, 3)
real 0m6.063s
user 0m5.758s
sys 0m0.839s
So 6s to read 100 PNG images. If I try with TIFF instead:
$ for i in {1..100}; do cp ~/pics/k2.tif sample/$i.tif; done
$ time ./readframes.py sample/*.tif
(100, 2048, 1450, 3)
real 0m1.532s
user 0m1.060s
sys 0m0.843s
1.5s, so four times faster.
You might get a small speedup with pyvips:
#!/usr/bin/python3
import sys
import pyvips
import numpy as np
# map vips formats to np dtypes
format_to_dtype = {
    'uchar': np.uint8,
    'char': np.int8,
    'ushort': np.uint16,
    'short': np.int16,
    'uint': np.uint32,
    'int': np.int32,
    'float': np.float32,
    'double': np.float64,
    'complex': np.complex64,
    'dpcomplex': np.complex128,
}

# vips image to numpy array
def vips2numpy(vi):
    return np.ndarray(buffer=vi.write_to_memory(),
                      dtype=format_to_dtype[vi.format],
                      shape=[vi.height, vi.width, vi.bands])

video_array = []
for filename in sys.argv[1:]:
    vi = pyvips.Image.new_from_file(filename, access='sequential')
    video_array.append(vips2numpy(vi))
video_array = np.array(video_array)

print(video_array.shape)
I see:
$ time ./readframes.py sample/*.tif
(100, 2048, 1450, 3)
real 0m1.360s
user 0m1.629s
sys 0m2.153s
Another 10% or so.
Finally, as other posters have said, you could load frames in parallel. That wouldn't help TIFF much, but it would certainly boost PNG.
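If you do go the parallel route, a rough sketch with a thread pool might look like this (cv2 releases the GIL while decoding, so threads can help here); the filenames come from the command line as in the scripts above:
import sys
from concurrent.futures import ThreadPoolExecutor

import cv2
import numpy as np

# Decode the frames concurrently, keeping the original argument order.
with ThreadPoolExecutor() as pool:
    frames = list(pool.map(cv2.imread, sys.argv[1:]))

video_array = np.array(frames)
print(video_array.shape)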
I have the following code:
import cv2 as cv
import numpy as np
from base64 import b64encode  # needed for the b64encode calls below
im = cv.imread('outline.png', cv.IMREAD_UNCHANGED)
cv.imwrite('output.png', im)
f1 = open('outline.png', 'rb')
f2 = open('output.png', 'rb')
img1_b = b64encode(f1.read())
img2_b = b64encode(f2.read())
print(img1_b)
print(img2_b)
What is the reason that img1_b and img2_b are different? img2_b is much longer; why?
I do not want to just copy the file. I would like to process it before saving, but that part of the code is not included here.
Both outline.png and output.png look the same after the operation.
What can I change in my code to make the img2_b value the same as img1_b?
I have tried PIL Image with the same result.
The phenomenon you have run into is the result of data compression not being 100% rigidly defined. PNG files use DEFLATE compression, which requires that a given compressed file must always decompress to the same output, but does not require that a given input must always produce the same compressed file. This leaves room for improvement in compression algorithms, where a more optimal compression may be found for a particular type of input. It sounds like your original image was compressed using a better (or just different) algorithm than cv2 is using. In order to duplicate the exact compressed version, you'll likely need the exact same implementation of the compression algorithm that was used to create the original image.
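You can see the same effect with zlib directly (this is only an illustration, not the exact encoders the two libraries used): different settings give different compressed bytes, yet both decompress to the identical input.
import zlib

data = b"example pixel data " * 1000

fast = zlib.compress(data, 1)
best = zlib.compress(data, 9)

print(len(fast), len(best), fast == best)                      # different sizes/bytes
print(zlib.decompress(fast) == zlib.decompress(best) == data)  # True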
If you want to ensure that the images are indeed identical, you should compare the decoded pixel values. In the name of not re-inventing the wheel, I'll refer you to this excellent blog post on the subject.
Edit: the linked article wasn't loading consistently for me, so I copied the code here for reference.
import cv2
import numpy as np
original = cv2.imread("imaoriginal_golden_bridge.jpg")
duplicate = cv2.imread("images/duplicate.jpg")
# 1) Check if 2 images are equals
if original.shape == duplicate.shape:
    print("The images have same size and channels")
    difference = cv2.subtract(original, duplicate)
    b, g, r = cv2.split(difference)
    if cv2.countNonZero(b) == 0 and cv2.countNonZero(g) == 0 and cv2.countNonZero(r) == 0:
        print("The images are completely Equal")
I have an 800x800 RGB bitmap with a file size of 2501 kilobytes, and I do the following (using Python 3.6):
(Unfortunately I cannot share the image.)
from PIL import Image
import numpy as np
im = Image.open('original_image.bmp')
im.save("test_size_manual.bmp", "BMP")
For some reason the new file is only 1876 KB. And even though the file size is different, the following holds:
import matplotlib.pylab as plt
original_image = plt.imread('original_image.bmp')
test_size_image = plt.imread('test_size_manual.bmp')
assert (original_image == test_size_image).all()
This means that pixel for pixel the resulting numpy.ndarray is the same. From a 'random' sampling of 800x800 BMPs found on Google Images, most had the same file size as the new image, 1876 KB, but there was also at least one with the same file size as the original image, 2501 KB.
What is causing this difference in filesize, or how would you go about finding out?
The answer is indeed found in the metadata.
The original image turns out to be a 32-bit bitmap and the new image is a 24-bit bitmap. This explains the difference in file size: 2501 * 3/4 is just under 1876.
At offset 28 (0x1c) of the file the bit depth is stored; for the original it was 32, and for the new image it was 24.
Reference: BMP file format on Wikipedia
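If you want to check this yourself, a small sketch that reads the bits-per-pixel field straight from the BMP header (offset 0x1c, little-endian 16-bit); the filenames are the ones from the question:
import struct

def bmp_bit_depth(path):
    with open(path, "rb") as f:
        header = f.read(30)
    return struct.unpack_from("<H", header, 28)[0]  # biBitCount field

print(bmp_bit_depth("original_image.bmp"))    # 32 for the original
print(bmp_bit_depth("test_size_manual.bmp"))  # 24 for the re-saved copy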
My system is Mac OS X v10.8.2. I have several 2560x500 uncompressed 16-bit TIFF images (grayscale, unsigned 16-bit integers). I first attempt to load them using PIL (installed via Homebrew, version 1.7.8):
from PIL import Image
import numpy as np
filename = 'Rocks_2ptCal_750KHz_20ms_1ma_120KV_2013-03-06_20-02-12.tif'
img = Image.open(filename)
# >>> img
# <PIL.TiffImagePlugin.TiffImageFile image mode=I;16B size=2560x500 at 0x10A383C68>
img.show()
# almost all pixels displayed as white. Not correct.
# MatLab, EZ-draw, even Mac Preview show correct images in grayscale.
imgdata = list(img.getdata())
# most values negative:
# >>> imgdata[0:10]
# [-26588, -24079, -27822, -26045, -27245, -25368, -26139, -28454, -30675, -28455]
imgarray = np.asarray(imgdata, dtype=np.uint16)
# values now correct
# >>> imgarray
# array([38948, 41457, 37714, ..., 61922, 59565, 60035], dtype=uint16)
The negative values are off by 65,536... probably not a coincidence.
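A quick check of that observation, using the sample values above: reinterpreting the signed 16-bit values as unsigned recovers the expected pixel values.
import numpy as np

signed = np.array([-26588, -24079, -27822], dtype=np.int16)
print(signed.view(np.uint16))  # [38948 41457 37714]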
If I pretend to alter pixels and revert back to a TIFF image via PIL (by just putting the array back as an image):
newimg = Image.fromarray(imgarray)
I get errors:
File "/usr/local/lib/python2.7/site-packages/PIL/Image.py", line 1884, in fromarray
raise TypeError("Cannot handle this data type")
TypeError: Cannot handle this data type
I can't find Image.fromarray() in the PIL documentation. I've tried loading via Image.fromstring(), but I don't understand the PIL documentation and there is little in the way of examples.
As shown in the code above, PIL seems to "detect" the data as I;16B. From what I can tell from the PIL docs, mode I is:
*I* (32-bit signed integer pixels)
Obviously, that is not correct.
I find many posts on SX suggesting that PIL doesn't support 16-bit images. I've found suggestions to use pylibtiff, but I believe that is Windows only?
I am looking for a "lightweight" way to work with these TIFF images in Python. I'm surprised it is this difficult and that leads me to believe the problem will be obvious to others.
It turns out that Matplotlib handles 16-bit uncompressed TIFF images in two lines of code:
import matplotlib.pyplot as plt
img = plt.imread(filename)
# >>> img
# array([[38948, 41457, 37714, ..., 61511, 61785, 61824],
# [39704, 38083, 36690, ..., 61419, 60086, 61910],
# [41449, 39169, 38178, ..., 60192, 60969, 63538],
# ...,
# [37963, 39531, 40339, ..., 62351, 62646, 61793],
# [37462, 37409, 38370, ..., 61125, 62497, 59770],
# [39753, 36905, 38778, ..., 61922, 59565, 60035]], dtype=uint16)
Et voila. I suppose this doesn't meet my requirements as "lightweight" since Matplotlib is (to me) a heavy module, but it is spectacularly simple to get the image into a Numpy array. I hope this helps someone else find a solution quickly as this wasn't obvious to me.
Try Pillow, the “friendly” PIL fork. They've somewhat recently added better support for 16- and 32-bit images including in the numpy array interface. This code will work with the latest Pillow:
from PIL import Image
import numpy as np
img = Image.open('data.tif')
data = np.array(img)
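To confirm the 16-bit data survived the round trip, a quick check (not part of the original answer):
print(data.dtype, data.shape)  # expect an unsigned 16-bit dtype and (500, 2560) for the image above
print(data.min(), data.max())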