I am trying to read JPEG2000 images that are roughly 86 kB each (size = (4096, 21504)) using Python. I have used the skimage and pillow modules to read them; with either library, reading one of these files takes ~1 s.
Here are snippets showing how I read the images.
SKIMAGE
from skimage import io
im = io.imread(filename, plugin='freeimage')
PILLOW
from PIL import Image
im = Image.open(filename)
Profiling shows that the JPEG2000 decoder is responsible for the slowdown.
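For reference, here is a minimal way I time the decode itself (the filename is a placeholder; note that Image.open alone is lazy, so load() must be called to include decoding in the measurement):

import time
from PIL import Image
from skimage import io

filename = 'image.jp2'  # placeholder

t0 = time.time()
im = io.imread(filename, plugin='freeimage')
print('skimage: %.2fs' % (time.time() - t0))

t0 = time.time()
im = Image.open(filename)
im.load()  # Image.open is lazy; load() forces the actual decode
print('pillow:  %.2fs' % (time.time() - t0))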
On the other hand, reading the same image using MATLAB's imread takes ~0.5s.
Is it possible to speed up the reading of JPEG2000 images using Python libraries? Are there other libraries that could speed up the decoding process? I searched for ways to speed it up and couldn't find anything. I would appreciate links/reports that point me in the right direction.
Computer Specs
OS: Windows 8.1
Processor: Intel i7-4930K @ 3.40GHz
RAM: 32.0 GB
Related
import pygame
from pygame.locals import *
image = pygame.image.load('.\\w.jpg')
I'm trying to load a 59 MB jpg file, and it fails with 'Out of memory'.
I'm using 64-bit Windows and 64-bit Python 3.10.4.
My computer had more than 10 GB of free RAM when the program ran.
As the Wikipedia entry for JPEG explains, JPEG is a compression method for digital images, so even if your jpg file is only 59 MB, the uncompressed image can take far more memory than that, depending on how much redundancy the original image contains; the decoded size is determined by the pixel dimensions (width × height × bytes per pixel), not by the file size. The Wikipedia article asserts that JPEG typically achieves 10:1 compression, and based on that typical rate one might at first think that even the uncompressed form would not be too large. However, JPEG also uses Huffman coding, and Huffman coding can compress sufficiently redundant data at extremely high ratios.
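If you want a quick check before decoding, Pillow reads just the header when a file is opened, so you can estimate the decoded size up front (a minimal sketch; the filename is a placeholder):

from PIL import Image

img = Image.open('.\\w.jpg')    # reads only the header, no pixel decode yet
w, h = img.size
channels = len(img.getbands())  # e.g. 3 for RGB
print('%dx%d, ~%.0f MiB decoded' % (w, h, w * h * channels / 2.0**20))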
I am trying to find a way to compress images (PNG, for example) with an S3TC/DXT algorithm using Python libraries.
As far as I can see, the Pillow (PIL) library lists the DDS format in its read-only formats section, so Pillow can't be used for this purpose.
Searching Google didn't turn up anything useful.
Question:
Is it possible to do this with Python?
Could someone please provide links to libraries with this functionality (ones verified in practice)?
The DDS format is not mandatory in my case; I only need the compressed file.
PS:
This is needed to create textures for later use.
The library should support different compression algorithms.
You could use Python Wand. Here I create a pseudo image with a magenta-yellow gradient and save as DDS:
from wand.image import Image
with Image(width=200, height=80, pseudo='gradient:magenta-yellow') as img:
    img.save(filename='result.dds')
Or, if you want to load a PNG file and save as DDS:
with Image(filename='input.png') as img:
    img.save(filename='result.dds')
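Since you want control over the compression algorithm: Wand exposes ImageMagick's compression types, which include 'dxt1', 'dxt3' and 'dxt5'. A minimal sketch, assuming your ImageMagick build supports writing DDS:

from wand.image import Image

with Image(filename='input.png') as img:
    img.compression = 'dxt5'  # or 'dxt1' / 'dxt3'
    img.save(filename='result.dds')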
I am playing with stacking and processing astronomical photographs. I'm as interested in understanding algorithms as I am in the finished images, so I have not (yet) tried any of the numerous polished products floating around.
I have moderately-sized collections of still photographs (dozens at a time) which I can successfully import using
img = imread("filename.jpg")
This produces a numpy ndarray, which I can manipulate using the tools available from numpy and scipy.ndimage, and display using imshow(). On the back end this is supported by the Python Imaging Library (PIL).
For longer exposures, it'd be nice to set my camera to take video, then extract frames from the video and run them through the same analysis pipeline as the still images. As far as I can tell, PIL supports only still images. My camera produces QuickTime movies with .MOV file extensions.
Is there a Python library that will let me access the data from frames of a video?
Alternatively, I'd appreciate guidance on using an external tool (there seems to exist a command-line ffmpeg, but I haven't tried it) to generate temporary files that I can feed into my still-image pipeline. Since I might want to examine all 18k frames in a ten-minute, 30fps movie, just extracting all the frames into one big folder is probably not an option.
I am running Python 2.7 on OSX Mavericks; I have easy access to MacPorts to install things.
The following ffmpeg command extracts 10 seconds of video as numbered JPEG frames, starting at a prespecified time (here 20 seconds after the start of the movie):
ffmpeg -i myvideo.MOV -ss 00:00:20.00 -t 10 img%3d.jpg
It is easy to use that in a Bash loop, or to run the command in a loop from Python, as sketched below.
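For example, a minimal sketch that pulls one frame at a time through a temporary file and feeds it into the still-image pipeline (timestamps and filenames are placeholders, and imread is assumed to be the same one used for the stills):

import os
import subprocess
from matplotlib.pyplot import imread  # assumed to be the imread used above

def grab_frame(movie, seconds, tmp='frame_tmp.jpg'):
    # Seek to the requested time and decode exactly one frame.
    subprocess.check_call(['ffmpeg', '-y', '-ss', str(seconds),
                           '-i', movie, '-vframes', '1', tmp])
    img = imread(tmp)
    os.remove(tmp)  # avoid accumulating thousands of temporary files
    return img

frame = grab_frame('myvideo.MOV', 20.0)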
I'm trying to compute the difference in pixel values of two images, but I'm running into memory problems because the images are quite large. Is there a way in Python to read an image in, say, 10x10 chunks at a time, rather than reading in the whole image? I was hoping to solve the memory problem by reading the image in small chunks, assigning those chunks to numpy arrays, and then saving those arrays with pytables for further processing. Any advice would be greatly appreciated.
You can use numpy.memmap and let the operating system decide which parts of the image file to page in or out of RAM. On 64-bit Python the virtual address space is vast compared to the available RAM.
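A minimal sketch of the idea, assuming the pixel data is stored uncompressed with a known shape and dtype (the filenames and dimensions here are hypothetical; memmap operates on raw bytes, so it will not decode PNG/JPEG-style files):

import numpy as np

shape = (40000, 40000)  # hypothetical image dimensions
a = np.memmap('image_a.raw', dtype=np.uint8, mode='r', shape=shape)
b = np.memmap('image_b.raw', dtype=np.uint8, mode='r', shape=shape)
out = np.memmap('diff.raw', dtype=np.int16, mode='w+', shape=shape)

# Difference one band of rows at a time; the OS pages data in and out.
rows = 1024
for i in range(0, shape[0], rows):
    out[i:i + rows] = a[i:i + rows].astype(np.int16) - b[i:i + rows]
out.flush()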
If you have time to preprocess the images, you can convert them to bitmap files (which will be large, since they are uncompressed) and then read particular sections of the file by offset, as detailed here:
Load just part of an image in python
Conversion from any file type to bitmap can be done in Python with this code:
from PIL import Image
file_in = "inputCompressedImage.png"
img = Image.open(file_in)
file_out = "largeOutputFile.bmp"
img.save(file_out)
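Once the file is a BMP, the fixed header tells you where the pixel rows start, so a band of rows can be read directly by offset. A rough sketch for 24-bit BMPs (an illustration, not a full BMP parser; it handles the 4-byte row padding and the bottom-up row order, and the channels come back in BGR order):

import struct
import numpy as np

def read_bmp_rows(path, first_row, num_rows):
    # Read a horizontal band of a 24-bit BMP without loading the rest.
    with open(path, 'rb') as f:
        header = f.read(54)
        data_offset = struct.unpack_from('<I', header, 10)[0]
        width = struct.unpack_from('<i', header, 18)[0]
        height = struct.unpack_from('<i', header, 22)[0]
        row_bytes = (width * 3 + 3) & ~3  # rows are padded to 4 bytes
        # BMP stores rows bottom-up; convert from top-based indexing.
        start = height - first_row - num_rows
        f.seek(data_offset + start * row_bytes)
        raw = np.fromfile(f, dtype=np.uint8, count=num_rows * row_bytes)
    band = raw.reshape(num_rows, row_bytes)[:, :width * 3]
    return band.reshape(num_rows, width, 3)[::-1]  # back to top-down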
We have 50 TB of 16-bit uncompressed TIF images from an industrial sensor on our server, and we want to compress them all with lossless zip (deflate) compression using Python. We use Python because it makes it easier to talk to our database.
However, after hours of searching and documentation reading, I found that there isn't even a mature Python library that can convert 16-bit TIF into zip-compressed TIF. The latest PIL cannot write compressed TIF, OpenCV hardcodes the output to LZW TIF rather than zip (deflate), and there is no adequate documentation for smc.freeimage or PythonMagick, so I don't know whether they can do it. I also found tifffile.py; there seems to be something about compression in its source code, but there is no example showing how to configure the compression option for output.
Of course I could call an external executable, but I'd rather not use Python as a mere scripting wrapper here.
I would really appreciate an efficient example, thanks.
Update:
cgohlke's code works; here I provide another lightweight solution.
Check out the patched pylibtiff code from here: https://github.com/delmic/pylibtiff.
The original pylibtiff from Google Code doesn't handle RGB information well and didn't work on my data; this patched version does. However, the code is quite old, which suggests pylibtiff may not be well maintained.
Use the code like this:
from libtiff import TIFF
tif = TIFF.open('Image.tiff', mode='r')
image = tif.read_image()
tifw = TIFF.open('testpylibtiff.tiff', mode='w')
tifw.write_image(image, compression='deflate', write_rgb=True)
PythonMagick works for me on Windows:
from PythonMagick import Image, CompressionType
im = Image('tiger-rgb-strip-contig-16.tif')
im.compressType(CompressionType.ZipCompression)
im.write("tiger-rgb-strip-contig-16-zip.tif")
Scikit-image includes a wrapper for the FreeImage library:
import skimage.io._plugins.freeimage_plugin as fi
im = fi.read('tiger-rgb-strip-contig-16.tif')
fi.write(im, 'tiger-rgb-strip-contig-16-zip.tif',
         fi.IO_FLAGS.TIFF_ADOBE_DEFLATE)
Or via tifffile.py, 2013.11.03 or later:
from tifffile import imread, imsave
im = imread('tiger-rgb-strip-contig-16.tif')
imsave("tiger-rgb-strip-contig-16-zip.tif", im, compress=6)
These might not preserve all other TIFF tags or properties but that wasn't specified in the question.
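Note that in current tifffile releases the compress keyword has been replaced by compression; against the newer API the last example becomes (a sketch, assuming a recent tifffile):

from tifffile import imread, imwrite

im = imread('tiger-rgb-strip-contig-16.tif')
imwrite('tiger-rgb-strip-contig-16-zip.tif', im, compression='zlib')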