I process very large images, of the kind used in GIS and astronomy. I need to find a library, preferably in Python, that allows me to append pixels to an image and write it to disk piece by piece, without having to hold the whole image in RAM at once.
Edit:
Thanks to those who commented. I work with microscopy images, mostly ones that can be opened with OpenSlide; some of them are in this list. My goal is to have just one big file containing an image, a file that other people can open, instead of a bunch of tiles.
But unless I have lots and lots of RAM (which I don't always have, and neither do other people), I can't create images as big as the original and store them with something like PIL.Image. I wish I could create an initial file and then append the rest of the image to it as I generate it.
Just like GIS and astronomy, microscopy has to create images from scans and process them, so I was wondering if anyone knew a way to do this.
I don't think that's entirely possible: to operate on data, a computer has to copy it into RAM.
If you just want to append your data to your image, use PIL.Image.
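A minimal sketch of assembling tiles with PIL (the tile size, grid dimensions, and file names are hypothetical; note the whole canvas still lives in RAM, which is the limitation mentioned above):

from PIL import Image

TILE = 512          # hypothetical tile edge length in pixels
cols, rows = 4, 3   # hypothetical grid of tiles

# Allocate the full canvas up front, then paste each tile into place.
canvas = Image.new("RGB", (cols * TILE, rows * TILE))
for row in range(rows):
    for col in range(cols):
        tile = Image.open("tile_%d_%d.png" % (row, col))  # hypothetical tile files
        canvas.paste(tile, (col * TILE, row * TILE))
canvas.save("combined.png")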
I would like to export to JPG a PSD file in which we have set some group layers to invisible.
Currently, I set the relevant group layers to invisible (by looping through the PSD layers and setting group.visible = False on the targeted groups, as sketched below) and then save that PSD.
The newly saved PSD does have those group layers invisible.
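For reference, here is a minimal sketch of that visibility pass with psd_tools (the group names in GROUPS_TO_HIDE and the file paths are hypothetical):

from psd_tools import PSDImage

GROUPS_TO_HIDE = {"Annotations", "Draft"}  # hypothetical group names

psd = PSDImage.open("input.psd")
for layer in psd.descendants():
    # Hide every group whose name matches one of the targets.
    if layer.is_group() and layer.name in GROUPS_TO_HIDE:
        layer.visible = False
psd.save("output.psd")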
Later, the new PSD is converted to JPG.
However, the JPG output also shows the invisible layers.
The Python code used to go from the newly saved PSD to JPG is essentially the same as for saving (we used psd_tools):
from psd_tools import PSDImage

image = PSDImage.open(PSDFilePath)
image.save(outputPath, "JPEG")
I have also tried the convert command-line tool on Linux, but it also showed the invisible layers after conversion.
So my question is whether there's a way, in Python or from the command line, to either remove the invisible layers before saving to JPG within the same script, or to export to JPG while ignoring them, without calling a script inside Photoshop (which requires opening Photoshop instances).
In the last few days I found something that does the trick, from this StackOverflow post: adding composite(force=True).
from psd_tools import PSDImage

image = PSDImage.open(PSDFilePath)
image.composite(force=True).save(outputPath)  # outputPath is expected to be a JPG file
This works well for light files. However, when the PSD is very large, around 1 GB, it takes too much time.
As I would like to run this operation daily on more than 1000 files, it would take days to finish.
So I am still looking for another solution.
Here's a link to a lighter sample file; unfortunately, I could not share the real one for professional reasons.
https://file.io/yxdDxlzMeMsA
from torchvision.utils import save_image
...
save_image(im, f'im_name.png')
In my case (standard MNIST), using code from here, im is a Tensor:96, and save_image works.
I want that image in memory so I can show it in other plots, and I don't want to read it back from disk after saving it, which seems kind of stupid.
Is there a way to separate the functionality of generating the image from that of saving it?
Edit (clarification):
I want an equivalent to
save_image(im, f'im_name.png')
reread = plt.imread(f'im_name.png')
without saving the image and reading it back.
I just want the image, and I want to save it later.
The save_image function does some work, like stacking multiple images into one and converting the tensor to images of the correct size. I want only that part, without the saving to disk.
About two weeks later, I stumbled upon the solution by accident:
import torchvision

grid = torchvision.utils.make_grid(im)
grid will be the image save_image was just saving.
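To then show it with matplotlib without touching the disk, one way is to permute to channels-last (a sketch, assuming the tensor values are already in [0, 1]):

import matplotlib.pyplot as plt

# make_grid returns a (C, H, W) tensor; imshow expects (H, W, C).
plt.imshow(grid.permute(1, 2, 0).cpu().numpy())
plt.axis("off")
plt.show()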
I'm using OpenCV and Python. I have loaded a JPEG image into a numpy array. Now I want to save it back in JPEG format, but since the image was not modified, I don't want to compress it again. Is it possible to create a JPEG from the numpy array that is identical to the JPEG it was loaded from?
I know this workflow (decode, then encode without doing anything) sounds a bit stupid, but keeping the original JPEG data is not an option. I'm interested in whether it is possible to recreate the original JPEG using just the data at hand.
The question is different from Reading a .JPG Image and Saving it without file size change, as I don't modify anything in the picture. I really want to restore the original JPEG file from the data at hand. I assume one could bypass the compression steps (the compression artifacts are already in the data) and just write the file in JPEG format. The question is whether this is possible with OpenCV.
Clarified answer, following the comment below:
What you say makes no sense: you say that you have the raw, unmodified RGB data. No, you don't. You have the uncompressed data that has been reconstructed from the compressed JPEG file.
The JPEG standard specifies how to decompress an image / video; it says almost nothing about how to actually perform the compression, so your original image could have been compressed in any one of a zillion different ways. You have no way of knowing the encoding choices that produced your file, so you cannot reproduce them.
Imagine this:
"I have a number, 44, please tell me how I can get the original
numbers that this came from"
This is, essentially, what you are asking.
The only way you can do what you want (other than just copying the original file) is to read the file into an array before handing it to OpenCV. Then if you want to save it, just write that raw array back to a file, something like this:
import cv2
import numpy as np

fi = 'C:\\Path\\to\\Image.jpg'
fo = 'C:\\Path\\to\\Copy_Image.jpg'

# Read the raw JPEG bytes into a uint8 array without decoding them.
with open(fi, 'rb') as myfile:
    im_array = np.frombuffer(myfile.read(), dtype=np.uint8)
# Do stuff here
image = cv2.imdecode(im_array, cv2.IMREAD_COLOR)  # decoded pixels for processing
# Do more stuff here
# Write the untouched JPEG bytes back out: a byte-identical copy.
with open(fo, 'wb') as myfile:
    myfile.write(im_array)
Of course, this means you will effectively have the data stored twice in memory, but it seems to be your only option.
Sometimes, no matter how hard you want to do something, you have to accept that it just cannot be done.
I am writing a file browser using PyGTK. For image files I show previews by loading the images with pixbuf_new_from_file and scaling them. In directories with many large files (like when browsing a portfolio) it takes too long. Is it possible to load the images at a lower resolution?
The whole code can be found on Git. In dirFrame.py, the function renderMainDirContent is the part that takes too long.
pixbuf_new_from_file_at_size seems to load the full image and then scale it, as it has almost no effect on performance.
It seems there is no faster way to do this with Python. Using numpy to load and scale images improves performance, but you need to cache thumbnails on disk for acceptable performance, at least for large images.
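A minimal sketch of such a thumbnail cache with PyGTK (the cache location and size are hypothetical; each image is loaded scaled-down once, then the saved thumbnail is reused):

import os
import hashlib
import gtk

CACHE_DIR = os.path.expanduser("~/.cache/my_browser_thumbs")  # hypothetical location
THUMB_SIZE = 128  # hypothetical preview edge length

def get_thumbnail(path):
    if not os.path.isdir(CACHE_DIR):
        os.makedirs(CACHE_DIR)
    # Key the cache entry on the source path.
    thumb_path = os.path.join(CACHE_DIR, hashlib.md5(path).hexdigest() + ".png")
    if os.path.exists(thumb_path):
        return gtk.gdk.pixbuf_new_from_file(thumb_path)
    # First visit: load at reduced size, then persist the thumbnail.
    pixbuf = gtk.gdk.pixbuf_new_from_file_at_size(path, THUMB_SIZE, THUMB_SIZE)
    pixbuf.save(thumb_path, "png")
    return pixbuf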
I have a script that saves between 8 and 12 images to a local folder. These images are always GIFs. I am looking for a Python script to combine all the images in that one specific folder into a single image. The combined 8-12 images would have to be scaled down, but I do not want to compromise the original quality (resolution) of the images either (i.e. when zoomed in on the combined image, they would look as they did initially).
The only way I am able to do this currently is by copying each image into PowerPoint.
Is this possible with Python (or any other language, but preferably Python)?
As input to the script, I would type in the path where only the images are stored (e.g. C:\Documents and Settings\user\My Documents\My Pictures\BearImages).
EDIT: I downloaded ImageMagick and have been using it with the Python API and from the command line. This simple command did exactly what I wanted: montage "*.gif" -tile x4 -geometry +1+1 -background none combine.gif
If you want to be able to zoom into the images, you do not want to scale them. You'll have to rely on the image viewer to do the scaling as they're displayed; that's what PowerPoint is doing for you now.
The input images are GIFs, so they all contain a palette describing which colors are in the image. If your images don't all have identical palettes, you'll need to convert them to 24-bit color before combining them. This means the output can't be another GIF; good options would be PNG or JPG, depending on whether you can tolerate a bit of loss in image quality.
You can use PIL to read the images, combine them, and write the result. You'll need to create a new image the size of the final result and copy each of the smaller images into a different part of it, as sketched below.
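A minimal sketch of that approach (the grid width and cell size are hypothetical; frames are converted to RGB first, per the palette caveat above):

import glob
import os
from PIL import Image

folder = r"C:\Documents and Settings\user\My Documents\My Pictures\BearImages"
paths = sorted(glob.glob(os.path.join(folder, "*.gif")))

cols = 4                   # hypothetical grid width
cell_w, cell_h = 200, 200  # hypothetical cell size in pixels
rows = (len(paths) + cols - 1) // cols

sheet = Image.new("RGB", (cols * cell_w, rows * cell_h), "white")
for i, path in enumerate(paths):
    im = Image.open(path).convert("RGB")  # drop the GIF palette
    im.thumbnail((cell_w, cell_h))        # scale down, preserving aspect ratio
    sheet.paste(im, ((i % cols) * cell_w, (i // cols) * cell_h))
sheet.save("combined.png")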
You may want to outsource the image manipulation to ImageMagick. Its montage command gets you 90% of the way there; just pass it some options and the names of the files in the directory.
Have a look at the Python Imaging Library.
The handbook contains several examples of opening files, combining them, and saving the result.
The easiest thing to do is turn the images into numpy matrices, then construct a new, much bigger numpy matrix to house all of them, and finally convert the big matrix back into an image. Of course it'll be enormous, so you may want to downsample.
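A minimal sketch of that matrix approach (assumes every tile has the same shape; file names are hypothetical):

import numpy as np
from PIL import Image  # used only to decode and re-encode

paths = ["bear1.gif", "bear2.gif", "bear3.gif", "bear4.gif"]  # hypothetical files
tiles = [np.asarray(Image.open(p).convert("RGB")) for p in paths]

h, w, c = tiles[0].shape  # assumes identical tile shapes
cols = 2
rows = (len(tiles) + cols - 1) // cols

# One big matrix housing every tile, filled by slice assignment.
big = np.zeros((rows * h, cols * w, c), dtype=np.uint8)
for i, tile in enumerate(tiles):
    r, col = divmod(i, cols)
    big[r * h:(r + 1) * h, col * w:(col + 1) * w] = tile

Image.fromarray(big).save("combined.png")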