Since it's very easy to display the content of an SVG file inside the IPython notebook, is there also an easy way to save what we see to a PNG file (or another bitmap format)?
from IPython.display import SVG
SVG(filename='../images/python_logo.svg')
If I do svg = SVG(filename='../images/python_logo.svg'), how can I save it to a PNG file?
SVGs are vector images (the drawings are saved as commands to draw lines, circles, etc.). PNGs are bitmaps. So to convert SVG to PNG, you need a renderer.
The most obvious solution is ImageMagick, a library you likely already have installed, as it is used by several programs. A less obvious approach is Inkscape: using its command-line options, it's possible to use Inkscape as a conversion program. As Inkscape is vector-oriented, I suspect the quality will be better than with ImageMagick (which is more bitmap-minded).
As a vector image (SVG) is a text file containing drawing instructions, it's easier to understand. PNGs contain just pixel information, and, to make things worse, they are compressed with a fairly complicated algorithm. Making sense of them is not as easy.
Have a look at the Inkscape man page; it's fairly obvious how to use it. For ImageMagick, see the help for the convert command.
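As a rough sketch, you could shell out to Inkscape from Python like this (paths and output size are placeholders; the flags assume Inkscape 1.x, older 0.x releases used --export-png instead):
import subprocess

# Render the SVG to a bitmap with Inkscape (flags assume Inkscape 1.x).
subprocess.run([
    'inkscape', '../images/python_logo.svg',
    '--export-type=png',
    '--export-filename=python_logo.png',
    '--export-width=512',   # optional: choose the output resolution
], check=True)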
I am expanding my limited Python knowledge by converting some MATLAB image analysis code to Python. I am following Image manipulation and processing using Numpy and Scipy. The code in Section 2.6.1 saves an image using both imageio.imsave and face.tofile, where type(face) is <class 'imageio.core.util.Array'>.
I am trying to understand why there are two ways to export an image. I tried web-searching tofile, but got numpy.ndarray.tofile. It's very sparse, and doesn't seem to be specific to images. I also looked for imageio.core.util.Array.tofile, but wasn't able to find anything.
Why are there two ways to export files? And why does imageio.core.util.Array.tofile seem to be un-findable online?
The difference is in what the two functions write in the file.
imageio.imsave() saves a conventional image, like a picture or photo, in a standard format such as JPEG or PNG that can be viewed with an image viewer like GIMP, feh, eog, Photoshop or MS Paint.
tofile() dumps the raw bytes of the array, with no header describing the shape or dtype, so only NumPy (and a small number of other Python tools that already know those details) can make sense of the result.
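A minimal sketch of the difference, assuming the face array from the tutorial (scipy.misc.face(); newer SciPy versions moved it to scipy.datasets.face()):
import numpy as np
import imageio
from scipy import misc

face = misc.face()                 # sample RGB photo as a (768, 1024, 3) uint8 array

imageio.imsave('face.png', face)   # a normal PNG any image viewer can open

face.tofile('face.raw')            # raw pixel bytes: no header, no shape, no dtype

# Reading the raw dump back requires knowing the dtype and shape in advance.
face2 = np.fromfile('face.raw', dtype=np.uint8).reshape(face.shape)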
So, I have a PNG image file like the following example, and I need it to be converted into PGM format.
I'm using Ubuntu and Python, so either terminal or Python tools would suit just fine. And there sure are plenty of ways to do this: the ImageMagick convert command, the pngtopam package, the Python PIL library, etc.
But the point is, the quality of the image is essential in my case, and all of those failed to keep it, always ending up with a result like this:
Needless to say, this is totally not what I want to see. The interesting thing is that when I tried to convert the same image into PGM manually using GIMP, it turned out quite well, looking exactly the way I'd like it to, i.e. the same as the PNG one.
So that means it is possible to get a PGM image in fine quality after all, and now I'd really appreciate it if someone could tell me how to do that using terminal/Python tools. I guess there should be some ImageMagick option that does the trick; I'm just not aware of it.
You lost the antialiasing, which is conveyed via the alpha channel. To preserve it, use:
convert in.png -flatten out.pgm
Without -flatten, convert simply deletes the alpha channel; with -flatten it composites the input image against the background color, which is white by default.
Here are the results, magnified 10x so you can see what's going on:
Not flattened:
Flattened:
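If you'd rather do the same thing from Python, here is a rough PIL/Pillow equivalent (file names are placeholders): composite against white, then drop to 8-bit grayscale before saving as PGM.
from PIL import Image

# Composite the PNG against a white background (what `convert -flatten` does),
# then convert to 8-bit grayscale and save as PGM.
img = Image.open('in.png').convert('RGBA')
background = Image.new('RGBA', img.size, (255, 255, 255, 255))
flattened = Image.alpha_composite(background, img).convert('L')
flattened.save('out.pgm')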
I have a local directory full of geotiff files which make up a map of the UK.
I'm using mapnik to render different images at various locations in the UK.
I'm wondering what the best way to approach this is.
I can create a single RasterSymbolizer, then loop through the tiff directory and add each tiff as a separate layer, then use mapnik's zoom_to_box to render at the correct location.
But would this make the rendering unnecessarily slow? I have no information on how the tiles fit together (other than the data in each individual tiff, of course).
I imagine there may be a way to set up some kind of vector file defining the tiff layout, so I could quickly query it to find out which tiles I need to render for a given bounding box?
You can either generate one big tiff from the original tiffs with gdal_merge.py (you can find it in the python-gdal package on Debian or Ubuntu), or build a virtual mosaic (a VRT file) that references them all with gdalbuildvrt. The second option saves space but is probably slower to render.
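As a rough sketch (the directory path and file names are placeholders), the VRT route can be driven from Python, and the resulting mosaic is then added to mapnik as a single raster layer instead of one layer per tiff:
import glob
import subprocess

# Build a virtual mosaic that references every tiff without copying pixel data.
tiffs = sorted(glob.glob('/path/to/uk_tiffs/*.tif'))
subprocess.run(['gdalbuildvrt', 'uk_mosaic.vrt'] + tiffs, check=True)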
I have a script to save between 8 and 12 images to a local folder. These images are always GIFs. I am looking for a Python script to combine all the images in that one specific folder into one image. The combined 8-12 images would have to be scaled down, but I do not want to compromise the original quality (resolution) of the images either (i.e. when zoomed in on the combined image, they would look as they did initially).
The only way I am able to do this currently is by copying each image to power point.
Is this possible with Python (or any other language, but preferably Python)?
As an input to the script, I would type in the path where only the images are stored (i.e. C:\Documents and Settings\user\My Documents\My Pictures\BearImages)
EDIT: I downloaded ImageMagick and have been using it with the python api and from the command line. This simple command worked great for what I wanted: montage "*.gif" -tile x4 -geometry +1+1 -background none combine.gif
If you want to be able to zoom into the images, you do not want to scale them. You'll have to rely on the image viewer to do the scaling as they're being displayed - that's what PowerPoint is doing for you now.
The input images are GIF so they all contain a palette to describe which colors are in the image. If your images don't all have identical palettes, you'll need to convert them to 24-bit color before you combine them. This means that the output can't be another GIF; good options would be PNG or JPG depending on whether you can tolerate a bit of loss in the image quality.
You can use PIL to read the images, combine them, and write the result. You'll need to create a new image that is the size of the final result, and copy each of the smaller images into different parts of it.
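A rough PIL sketch of that approach (the folder path, grid shape and tile size are placeholders):
import glob
import math
from PIL import Image

paths = sorted(glob.glob(r'C:\path\to\BearImages\*.gif'))
tile_w, tile_h = 400, 300                  # size each image is scaled down to
cols = 4
rows = math.ceil(len(paths) / cols)

sheet = Image.new('RGB', (cols * tile_w, rows * tile_h), 'white')
for i, p in enumerate(paths):
    im = Image.open(p).convert('RGB')      # drop the GIF palette
    im.thumbnail((tile_w, tile_h))         # scale down, keeping aspect ratio
    sheet.paste(im, ((i % cols) * tile_w, (i // cols) * tile_h))
sheet.save('combined.png')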
You may want to outsource the image manipulation part to ImageMagick. It has a montage command that gets you 90% of the way there; just pass it some options and the names of the files in the directory.
Have a look at the Python Imaging Library (PIL).
The handbook contains several examples of opening files, combining them, and saving the result.
The easiest thing to do is turn the images into numpy matrices, and then construct a new, much bigger numpy matrix to house all of them. Then convert the np matrix back into an image. Of course it'll be enormous, so you may want to downsample.
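A minimal sketch of the NumPy route for two images of the same height (file names are placeholders):
import numpy as np
from PIL import Image

a = np.asarray(Image.open('one.gif').convert('RGB'))
b = np.asarray(Image.open('two.gif').convert('RGB'))
combined = np.hstack([a, b])               # place them side by side
Image.fromarray(combined).save('combined.png')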
I would like to generate 2D images of 3D books with custom covers on demand.
Ideally, I'd like to import a 3D model of a book (created by an artist), change the cover texture to the custom one, and export a bitmap image (jpeg, png, etc...). I'm fairly ignorant about 3D graphics, so I'm not sure if that's possible or feasible, but it describes what I want to do. Another method would be fine if it accomplishes something similar. Like maybe I could start with a rendered 2D image and distort the custom cover somehow then put it in the right place over the original image?
It would be best if I could do this using Python, but if that's not possible, I'm open to other solutions.
Any suggestions on how to accomplish this?
Sure, it's possible.
Blender would probably be overkill, but you can script Blender with Python, so that's one solution.
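A very rough sketch of such a script (the object, material and node names are assumptions about how the artist's .blend file is set up; run it with blender book.blend --background --python render_book.py):
import bpy

# Swap the cover texture in the artist's material, then render to a PNG.
cover = bpy.data.images.load('/path/to/custom_cover.png')
material = bpy.data.materials['CoverMaterial']
material.node_tree.nodes['Image Texture'].image = cover

bpy.context.scene.render.filepath = '/tmp/book_render.png'
bpy.ops.render.render(write_still=True)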
Your latter idea (distorting a flat cover image and pasting it over a pre-rendered book) is, I'm pretty sure, what most of those e-book cover generators do, which is why they always look a little off.
PIL is an excellent tool for manipulating images and pixel data, so if you want to do the distortion yourself, it would be a great tool to look at; and if it's too slow, it's trivial to convert the image to a NumPy array to get some speedup.
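A rough sketch of that distortion approach with PIL (the file names and the eight perspective coefficients are placeholders; in practice you would derive the coefficients from the four corners of the cover area in the pre-rendered book image):
from PIL import Image

book = Image.open('book_template.png').convert('RGBA')
cover = Image.open('custom_cover.jpg').convert('RGBA')

# Coefficients (a..h) of the perspective transform mapping output to input coords.
coeffs = (1.1, 0.15, -40.0,
          0.05, 1.2, -25.0,
          0.0002, 0.0001)
warped = cover.transform(book.size, Image.PERSPECTIVE, coeffs, Image.BICUBIC)

# Paste the warped cover over the pre-rendered book, respecting transparency.
result = Image.alpha_composite(book, warped)
result.save('book_with_cover.png')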