I have a local directory full of geotiff files which make up a map of the UK.
I'm using mapnik to render different images at various locations in the UK.
I'm wondering what is the best way to approach this?
I could create a single RasterSymbolizer, loop through the TIFF directory adding each TIFF as a separate layer, and then use Mapnik's zoom_to_box to render at the correct location.
But would this make the rendering unnecessarily slow? I have no information on how the tiles fit together (other than the data in each individual TIFF, of course).
I imagine there may be a way to set up some kind of vector file defining the TIFF layout, so I can quickly query it to find out which tiles I need to render for a given bounding box?
You can either generate one big TIFF from the original TIFFs with gdal_merge.py (you can find it in the python-gdal package on Debian or Ubuntu), or create a virtual file that mosaics them all with gdalbuildvrt. The second option saves space but is probably a bit slower.
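As a rough sketch of the single-layer route (the paths, extent, and CRS below are placeholders, and it assumes the Mapnik 2.x Python bindings with the GDAL input plugin):

# Build the virtual mosaic once, outside Python:
#   gdalbuildvrt uk.vrt /path/to/tiffs/*.tif
import mapnik

m = mapnik.Map(1024, 1024)
m.srs = '+init=epsg:27700'  # assuming British National Grid

style = mapnik.Style()
rule = mapnik.Rule()
rule.symbols.append(mapnik.RasterSymbolizer())
style.rules.append(rule)
m.append_style('raster', style)

layer = mapnik.Layer('uk')
layer.srs = m.srs
layer.datasource = mapnik.Gdal(file='uk.vrt')  # GDAL only reads the tiles the view touches
layer.styles.append('raster')
m.layers.append(layer)

m.zoom_to_box(mapnik.Box2d(530000, 175000, 540000, 185000))  # placeholder extent
mapnik.render_to_file(m, 'out.png', 'png')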
I would like to export a PSD file, in which we have set some group layers to invisible, to JPG.
Currently, I loop through the PSD's layers and set the relevant groups to invisible (group.visible = False), then save that PSD.
The newly saved PSD does have those group layers set to invisible.
Later, the new PSD is converted to JPG.
However, the JPG output still shows the invisible layers.
The Python code used to go from the newly saved PSD to JPG is much the same as the code used for saving (we used psd_tools):
from psd_tools import PSDImage
image = PSDImage.open(PSDFilePath)  # open the newly saved PSD
image.save(outputPath, "JPEG")      # this export still showed the hidden layers
I have also tried the command-line convert tool on Linux, but it also showed the invisible layers after conversion.
So my question is: is there a way, within the same script and without calling a script inside Photoshop (which requires opening Photoshop instances), to either remove the invisible layers before saving to JPG, or to export to JPG without them, using Python code or perhaps a command line?
In the last few days I found something from this StackOverflow post that does the trick: adding composite(force=True).
from psd_tools import PSDImage
image = PSDImage.open(PSDFilePath)
# force=True re-composites from the layer data instead of the embedded preview,
# so the visibility changes are respected
image.composite(force=True).save(outputPath)  # outputPath is expected to be a JPG file
This works well for light files. However, when the PSD is very large (around 1 GB), it takes far too long.
Since I would like to run this operation daily on more than 1000 files, it would take days to finish.
So I am still looking for another solution.
Here is a link to a lighter sample file; unfortunately, I could not share the real one for professional reasons.
https://file.io/yxdDxlzMeMsA
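For reference, putting the pieces above together, a minimal sketch of the whole pipeline might look like this (the group names are hypothetical placeholders):

from psd_tools import PSDImage

GROUPS_TO_HIDE = {"Annotations", "Watermarks"}  # hypothetical group names

psd = PSDImage.open(PSDFilePath)
for layer in psd:
    if layer.is_group() and layer.name in GROUPS_TO_HIDE:
        layer.visible = False  # hide the group before compositing

# force=True ignores the embedded preview and composites from the layer data,
# so the hidden groups are left out of the result
psd.composite(force=True).convert("RGB").save(outputPath)  # outputPath ends in .jpg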
I work with really big images, of the kind found in GIS and astronomy. I need to find a library, preferably in Python, that lets me append pieces to an image and write it to disk piece by piece, without having to hold the whole image in RAM at once.
Edit:
Thanks to those who commented. I work with microscopy images, mostly ones that can be opened with OpenSlide; some of them are in this list. My goal is to have just one big file containing the image, a file that other people can open, instead of a bunch of tiles.
But unless I have lots and lots of RAM (which I don't always have, and neither do other people), I can't create images as big as the original and store them with something like PIL.Image. I wish I could create an initial file and then append the rest of the image to it as I generate it.
Just as in GIS and astronomy, microscopy has to build images from the scans and process them, so I was wondering if anyone knew a way to do this.
I don't think that's entirely possible: to work on data, a computer has to copy it into RAM.
If you just want to append your data to your image, use PIL.Image.
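For what it's worth, one way to get the "write it piece by piece" behaviour (my own suggestion, not part of the answer above) is tifffile, which can create an empty BigTIFF on disk and memory-map it so that blocks are filled in one at a time:

import numpy as np
import tifffile

# Hypothetical mosaic size; only the block currently being written sits in RAM.
height, width = 200000, 150000
mosaic = tifffile.memmap('mosaic.tif', shape=(height, width), dtype='uint8', bigtiff=True)

tile = np.zeros((1024, 1024), dtype='uint8')  # placeholder tile data
mosaic[0:1024, 0:1024] = tile                 # write one block into the file
mosaic.flush()                                # push it to disk
del mosaic                                    # close the memory map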
I am using a friend's QGIS plugin, written in Python, which reclassifies the pixels of a raster by setting points; the points span a polygon, and all the pixels within that polygon are converted or reclassified. So far it works more or less fine if I use a normal raster image from my hard disk in .img or .tiff format. When the pixels are reclassified, the changes are automatically saved to the image on disk.
As a next step, I want to store all my raster images in a PostGIS database and manipulate them with that tool. Unfortunately, the tool cannot modify the pixels of an image loaded into QGIS from the database.
The tool does not produce any error message. It starts loading and then nothing happens.
So the question is: do I need to adapt the plugin's saving method, is it generally impossible in QGIS to manipulate raster images stored in a database, or do I need special rights to access the raster data type?
Since it's very easy to display the content of an SVG file inside the IPython notebook, is there an equally easy way to get what we see into a PNG file or some other format?
from IPython.display import SVG
SVG(filename='../images/python_logo.svg')
If I do svg = SVG(filename='../images/python_logo.svg'), how can I save it to a PNG file?
SVGs are vector images (the drawing is stored as commands to draw lines, circles, etc.), while PNGs are bitmaps. So to convert SVG to PNG, you need a renderer.
The most obvious solution is ImageMagick, a library you likely already have installed, as it is used by several programs. A less obvious approach is Inkscape: using its command-line options, it can be used as a conversion program. As Inkscape is vector oriented, I suspect the quality will be better than ImageMagick's (which is more bitmap-minded).
As a vector image (SVG) is a text file containing drawing instructions, it's easier to understand. PNGs contain just pixel information, and, to make things worse, they are compressed with a fairly complicated algorithm. Making sense of them is not as easy.
Have a look at the Inkscape man page; it's fairly obvious how to use it. This is the ImageMagick convert help.
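For example, you could shell out to either tool from the notebook (this assumes they are on your PATH; note that Inkscape's export flag changed between 0.9x and 1.x):

import subprocess

# ImageMagick: rasterise the SVG to PNG
subprocess.run(['convert', '../images/python_logo.svg', 'python_logo.png'], check=True)

# Inkscape 1.x (0.9x used --export-png=... instead)
subprocess.run(['inkscape', '../images/python_logo.svg',
                '--export-filename=python_logo.png'], check=True)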
I have a script that saves between 8 and 12 images to a local folder. These images are always GIFs. I am looking for a Python script to combine all the images in that one specific folder into a single image. The combined 8-12 images would have to be scaled down, but I do not want to compromise the original quality (resolution) of the images either (i.e. when zoomed in on the combined image, they would look as they did initially).
The only way I am able to do this currently is by copying each image to power point.
Is this possible with python (or any other language, but preferably python)?
As an input to the script, I would type in the path where only the images are stored (i.e. C:\Documents and Settings\user\My Documents\My Pictures\BearImages).
EDIT: I downloaded ImageMagick and have been using it with the python api and from the command line. This simple command worked great for what I wanted: montage "*.gif" -tile x4 -geometry +1+1 -background none combine.gif
If you want to be able to zoom into the images, you do not want to scale them. You'll have to rely on the image viewer to do the scaling as they're being displayed - that's what PowerPoint is doing for you now.
The input images are GIF so they all contain a palette to describe which colors are in the image. If your images don't all have identical palettes, you'll need to convert them to 24-bit color before you combine them. This means that the output can't be another GIF; good options would be PNG or JPG depending on whether you can tolerate a bit of loss in the image quality.
You can use PIL to read the images, combine them, and write the result. You'll need to create a new image that is the size of the final result, and copy each of the smaller images into different parts of it.
You may want to outsource the image manipulation part to ImageMagick. It has a montage command that gets you 90% of the way there; just pass it some options and the names of the files in the directory.
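A minimal sketch of the PIL approach described above (the folder path, the 4-column layout, and the output name are placeholders):

import math
from pathlib import Path
from PIL import Image

folder = Path(r"C:\Documents and Settings\user\My Documents\My Pictures\BearImages")
images = [Image.open(p).convert("RGB") for p in sorted(folder.glob("*.gif"))]

cols = 4
rows = math.ceil(len(images) / cols)
tile_w = max(im.width for im in images)
tile_h = max(im.height for im in images)

# paste each GIF into its slot in a grid large enough to hold all of them
combined = Image.new("RGB", (cols * tile_w, rows * tile_h), "white")
for i, im in enumerate(images):
    combined.paste(im, ((i % cols) * tile_w, (i // cols) * tile_h))

combined.save("combined.png")  # PNG keeps the full 24-bit colour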
Have a look at the Python Imaging Library.
The handbook contains several examples of opening files, combining them, and saving the result.
The easiest thing to do is turn the images into NumPy arrays, construct a new, much bigger array to hold all of them, and then convert that array back into an image. Of course it'll be enormous, so you may want to downsample.
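Roughly, and assuming the frames are all the same size (file names are placeholders):

import numpy as np
from PIL import Image

paths = ['a.gif', 'b.gif', 'c.gif']  # placeholder file names
frames = [np.asarray(Image.open(p).convert('RGB')) for p in paths]
big = np.vstack(frames)              # one tall matrix; use hstack or a grid as needed
Image.fromarray(big).save('combined.png')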