SVG, forming lines into an image - python

I'm forming an image in SVG from a bitmap using svgwrite and Python, where each line is rotated by theta around a common origin into a fan-like pattern. Currently this image runs around 10 MB for 128 lines, largely because of the excessive floating-point precision retained in the 79,000+ line segments (one line segment per pixel).
I'd like to get the image size down significantly by drawing a single line from the origin out to the end-point and then stretching one line 'image' over that SVG line. Is this possible using SVG? Beyond that, I'm open to any suggestion that might get the size down significantly, so that I can later animate the lines in place.

How about this solution (see fiddle):
Slice the image up into the number of strips you need. For the example in the linked fiddle, I used ImageMagick as follows to cut up a 128x128 PNG image into 128 vertical strips:
convert image.png -crop 1x128 +repage +adjoin strip-%d.png
Convert all the image strips to data URI format and embed them in the SVG. I used this bash one-liner:
for i in strip-*.png; do echo "data:image/png;base64,$(openssl base64 < $i | tr -d '\n')"; done > data-uris.txt
In the SVG, use the transform attribute on the image strip elements to rotate them to the required angle.
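For reference, a rough sketch of how the embedding and rotation could be scripted with svgwrite; the canvas size, origin and angles are only illustrative, and strip-0.png through strip-127.png are the files produced by the convert command above:

import base64
import svgwrite

# Assumed layout: 128 one-pixel-wide strips fanned over a half circle
# around a common origin; adjust size, origin and angles to taste.
dwg = svgwrite.Drawing('fan.svg', size=(256, 256))
cx, cy = 128, 128
for i in range(128):
    # Inline each strip as a base64 data URI (equivalent to the bash one-liner above).
    with open(f'strip-{i}.png', 'rb') as f:
        uri = 'data:image/png;base64,' + base64.b64encode(f.read()).decode('ascii')
    angle = i * 180 / 128
    dwg.add(dwg.image(uri, insert=(cx, cy - 128), size=(1, 128),
                      transform=f'rotate({angle} {cx} {cy})'))
dwg.save()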
For a 128x128 PNG icon of 16.4KB, I end up with an SVG file of 128.1KB. Most of that is the base64-encoded image data (the total size of the 128 PNG strips is already 85.1KB). It could be reduced a little further by rounding off some floats, but I don't think there's a whole lot to be gained.
There might be another approach possible where you embed the image as a whole, and reference another clipped section of the same image over and over, but I couldn't get that to work.


Pillow script results in image losing saturation / vibrance

I have a Python script that uses Pillow to resize an image to an Instagram-sized image, with a blurred background generated from the original image.
Before and after images (both JPGs):
https://app.box.com/s/jpv2mxlncp9871zvx9ygt0be4gf0zc9q
Is this simply a function of the 'after' JPG being too small to reflect all the colors in the original image? (Instagram only allows 2048x2048 images max; my original is a JPG converted from TIF, from a 24.2-megapixel RAW image taken with a Nikon DSLR.) Perhaps it's all in my head, but in my opinion the 'after' image has lost some saturation / vibrance (compare the yellow buildings and car lights, for example).
Has anyone experienced similar issues? Is there some default mode in Pillow that is reducing the number of colors available? I'm thinking of adding an additional saturation step to my script, but that seems like a hack.
EDIT: Added another before / after pair of images to the link above. I also realize I can easily share the script's source (GitHub repo):
https://github.com/princefishthrower/instagramize
The difference is that the original image contains an "ICC Colour Profile" (amongst other profiles) which is not preserved in the output image.
You can see this most easily with exiftool:
exiftool Mountains_Before.jpg | grep -i profile
Or with ImageMagick:
magick identify -verbose Mountains_Before.jpg | grep -A999 Profiles:
Output
  Profiles:
    Profile-8bim: 92 bytes
    Profile-exif: 17796 bytes
    Profile-icc: 560 bytes
    Profile-iptc: 80 bytes
      City[1,90]: 0x00000000: 254700 -%G
      Created Date[2,55]: 2020-7-1
      unknown[2,62]: 2020-06-30
      unknown[2,63]: 21:11:26+00:00
      unknown[2,0]: 4
      Created Time[2,60]: 20:22:05-20:22
    Profile-xmp: 9701 bytes
If you strip the profiles from the original, you will see that it, too, is washed out and flatter:
magick Mountains_Before.jpg -strip NoProfile.jpg
You can extract the ICC profile and look at it like this, if that sort of thing excites you:
magick Mountains_Before.jpg profile.icc
If you did that, I guess you could re-attach the profile from the BEFORE image to the AFTER image like this:
magick Mountains_After.jpg -profile profile.icc AfterWithProfile.jpg
Keywords: Image processing, ImageMagick, profile, ICC profile, saturation, saturated, desaturated, washed out.
As Mark Setchell pointed out, it is a matter of preserving the color profile of the image, which is possible natively in Pillow, first by retrieving the profile after opening the image:
import io
from PIL import Image, ImageCms

image = Image.open('mycoolimage.jpg')
iccProfile = image.info.get('icc_profile')
iccBytes = io.BytesIO(iccProfile)
originalColorProfile = ImageCms.ImageCmsProfile(iccBytes)
and when calling save with Pillow you can pass an icc_profile:
image.save('outputimagename.jpg', icc_profile=originalColorProfile.tobytes())
(Obviously, I am doing other manipulations to image in between these two steps here. Apparently one or more of them cause the icc_profile to disappear.)
This answer was also helpful in building this solution.
I added Mountains_After_NEW.jpg for those interested to see the results of these additions.
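For completeness, a minimal end-to-end sketch of this approach; the file names and the resize step are stand-ins for whatever manipulations the real script performs:

import io
from PIL import Image, ImageCms

image = Image.open('mycoolimage.jpg')
iccProfile = image.info.get('icc_profile')

# ... whatever manipulations the script performs, e.g. a resize ...
image = image.resize((2048, 2048))

# Re-attach the original profile (if there was one) when saving.
if iccProfile:
    originalColorProfile = ImageCms.ImageCmsProfile(io.BytesIO(iccProfile))
    image.save('outputimagename.jpg', icc_profile=originalColorProfile.tobytes())
else:
    image.save('outputimagename.jpg')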

Random ImageMagick error/"unexpected token" error on basic script that has neither

Right now, I am running a basic python script from bash that looks like this:
import sys

def main():
    sys.stdout.write("hi")

if __name__ == "__main__":
    main()
When run using Python (py justdosomething.py) it works fine. When I try to run it from bash, it gives this massive error message from ImageMagick, despite me not importing it in this file:
Version: ImageMagick 6.9.4-1 Q16 x86_64 2016-05-11 http://www.imagemagick.org
Copyright: Copyright (C) 1999-2016 ImageMagick Studio LLC
License: http://www.imagemagick.org/script/license.php
Features: Cipher DPC Modules
Delegates (built-in): bzlib freetype jng jpeg ltdl lzma png tiff xml zlib
Usage: import [options ...] [ file ]
Image Settings:
-adjoin join images into a single multi-image file
-border include window border in the output image
-channel type apply option to select image channels
-colorspace type alternate image colorspace
-comment string annotate image with comment
-compress type type of pixel compression when writing the image
-define format:option
define one or more image format options
-density geometry horizontal and vertical density of the image
-depth value image depth
-descend obtain image by descending window hierarchy
-display server X server to contact
-dispose method layer disposal method
-dither method apply error diffusion to image
-delay value display the next image after pausing
-encipher filename convert plain pixels to cipher pixels
-endian type endianness (MSB or LSB) of the image
-encoding type text encoding type
-filter type use this filter when resizing an image
-format "string" output formatted image characteristics
-frame include window manager frame
-gravity direction which direction to gravitate towards
-identify identify the format and characteristics of the image
-interlace type None, Line, Plane, or Partition
-interpolate method pixel color interpolation method
-label string assign a label to an image
-limit type value Area, Disk, Map, or Memory resource limit
-monitor monitor progress
-page geometry size and location of an image canvas
-pause seconds seconds delay between snapshots
-pointsize value font point size
-quality value JPEG/MIFF/PNG compression level
-quiet suppress all warning messages
-regard-warnings pay attention to warning messages
-respect-parentheses settings remain in effect until parenthesis boundary
-sampling-factor geometry
horizontal and vertical sampling factor
-scene value image scene number
-screen select image from root window
-seed value seed a new sequence of pseudo-random numbers
-set property value set an image property
-silent operate silently, i.e. don't ring any bells
-snaps value number of screen snapshots
-support factor resize support: > 1.0 is blurry, < 1.0 is sharp
-synchronize synchronize image to storage device
-taint declare the image as modified
-transparent-color color
transparent color
-treedepth value color tree depth
-verbose print detailed information about the image
-virtual-pixel method
Constant, Edge, Mirror, or Tile
-window id select window with this id or name
Image Operators:
-annotate geometry text
annotate the image with text
-colors value preferred number of colors in the image
-crop geometry preferred size and location of the cropped image
-encipher filename convert plain pixels to cipher pixels
-geometry geometry preferred size or location of the image
-help print program options
-monochrome transform image to black and white
-negate replace every pixel with its complementary color
-repage geometry size and location of an image canvas
-quantize colorspace reduce colors in this colorspace
-resize geometry resize the image
-rotate degrees apply Paeth rotation to the image
-strip strip image of all profiles and comments
-thumbnail geometry create a thumbnail of the image
-transparent color make this color transparent within the image
-trim trim image edges
-type type image type
Miscellaneous Options:
-debug events display copious debugging information
-help print program options
-list type print a list of supported option arguments
-log format format of debugging information
-version print version information
By default, 'file' is written in the MIFF image format. To
specify a particular image format, precede the filename with an image
format name and a colon (i.e. ps:image) or specify the image type as
the filename suffix (i.e. image.ps). Specify 'file' as '-' for
standard input or output.
import: delegate library support not built-in `' (X11) # error/import.c/ImportImageCommand/1297.
It then gives me two error messages that seem unrelated:
./justdosomething.py: line 3: syntax error near unexpected token `('
./justdosomething.py: line 3: `def main():'
Why would running it from bash cause a totally unrelated and unused library to proc an error? Why would "def main()" be unrecognizable as a command? I'm lost here.
Add the following as the first line of the script.
#!/usr/bin/env python
Why would running it from bash cause a totally unrelated and unused library to proc an error?
The bash interpreter doesn't understand the Python programming language. Instead, it calls the import utility (provided by ImageMagick).
As sys is not a valid argument for import (the utility), it writes an error plus the basic usage info to stderr.
Why would "def main()" be unrecognizable as a command?
Simple. That is Python - not anything Bash will understand. You'll need to invoke the Python run-time to execute python scripts.
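Putting the two parts together, this is the script from the question with the shebang added as its very first line; bash then hands the file to the Python interpreter instead of trying to execute the lines itself:

#!/usr/bin/env python
import sys

def main():
    sys.stdout.write("hi")

if __name__ == "__main__":
    main()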

Is there a way to convert bitmap into instructions, or easily read it?

I have code that takes an image and converts it into a bitmap. I was wondering if there is a way to save the bitmap in a separate file to be used later. I would also like to be able to open that file as plain text rather than as an actual image, so that I can read the bitmap.
code:
from PIL import Image

image_file = Image.open("edge.png")
image_file = image_file.convert('1')  # convert to 1-bit black and white
print(image_file.mode)
print(type(image_file.tobitmap()))
tobit_image = image_file.tobitmap()  # convert image to bitmap
print(tobit_image)
I think you are looking for "Chain Codes", or "Freeman Chain Codes". Basically, you store a direction, which is one of the 8 points of the compass encoded as a digit at each location, to tell you how to get to the next point, i.e. which direction your turtle must move.
Try looking here and also Googling.
OpenCV can generate them too with findContours()
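A minimal sketch of that route, assuming OpenCV 4 (OpenCV 3's findContours returns three values instead of two) and the file name from the question:

import cv2

# Load as greyscale and threshold to a clean black-and-white image.
img = cv2.imread("edge.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# CHAIN_APPROX_NONE keeps every boundary pixel, so successive points differ by
# one of the 8 compass directions, from which Freeman chain codes can be read off.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)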
Or, you may be looking for potrace which is a tool that converts images into vector paths.

Why can't I preview my TIFF images with Jupyter? And why am I failing to make a video out of those TIFF images?

I have a folder with a lot of .tif images.
Part 1. Preview of TIFF images
When I try to preview them by clicking on them in the Jupyter file browser, I get the following message:
Error ! D:...\image.tif is not UTF-8 encoded
By contrast, if I click on a PNG in the Jupyter folder, Jupyter does display the image.
How could I fix my images, knowing that I have more than 1000 of them in my folder ?
Nonetheless, if I write:
import cv2
import matplotlib.pyplot as plt

sph = cv2.imread('A1.tif', -1)
plt.imshow(sph)
plt.show()
I do get the image displayed.
Now I also checked:
import chardet
with open('A1.tif', 'rb') as f:
    print(chardet.detect(f.read()))
# result: {'confidence': 1.0, 'encoding': 'ascii', 'language': ''}
So apparently it is encoded in ASCII. Is that the same as UTF-8, or should I convert them?
Edit: Answer: In one of the comments, @FabienP answers that "According to the official documentation, Jupyter Lab does not support TIFF format for image preview (as of now)", which answers this question.
Part 2: Writing a video out of TIFF images
I have another question and I don't know if both questions are connected.
I want to make a video out of them.
import cv2
import os

image_folder = 'A549_A1'
video_name = 'video.avi'

images = [img for img in os.listdir(image_folder) if img.endswith(".tif")]
frame = cv2.imread(os.path.join(image_folder, images[0]))
height, width, layers = frame.shape

video = cv2.VideoWriter(video_name, 0, 1, (width, height))

for image in images:
    video.write(cv2.imread(os.path.join(image_folder, image)))

cv2.destroyAllWindows()
video.release()
But instead of getting the expected video, I get a strange one in which many images appear in each frame, which is clearly not what the source images look like.
How can I fix that ?
Converting the bytes in an image from ASCII to UTF-8 makes only slightly more sense than converting them from Fahrenheit to Celsius, or transposing them to B♭ major. If you can find a way to do it technically, all it will do is wreck the image. Indeed, this is completely a red herring, and has absolutely nothing to do with your video conversion problem.
Text encodings like ASCII and UTF-8 describe how characters map between code points or glyphs and computer representations. There is no text in an image file; it is just a bunch of pixels. Maybe see also the seminal 2003 blog post The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
Moreover, because UTF-8 is ASCII compatible, every ASCII file is already trivially a UTF-8 file. There is no transformation you can apply to make it "more UTF-8".
Binary formats, on the other hand, typically have an internal structure which is quite different. For just an image, a trivial format might simply encode each black pixel as a 1 bit and each white pixel as a 0 bit. (In fact, the very first version of TIFF did exactly this, with a few additional frills.) You can add a constant to each byte, for example, but this will simply transform it into a jumble which no longer contains a valid picture. Examine what happens if you add one to a number like 63 which has a lot of 1 bits in the lower half in its binary representation:
  63   0011 1111   ..XX XXXX   <- sequence of black pixels
+  1   0000 0001   .... ...X
----   ---------   ---------
  64   0100 0000   .X.. ....   <- one black pixel, lots of white
Modern binary formats are quite a bit more complex, and often contain header sequences which indicate how many bytes of data follow or where to look for a particular feature to populate a data structure in memory. Replacing these values with other values will almost certainly create a stream which is simply corrupted unless you know exactly what you are doing.
Comparing against https://stackoverflow.com/a/34555939/874188 and googling for a bit suggests that passing 0 as the fourcc parameter might be the source of your problems.
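As a sketch of that guess, the writer could be created with an explicit codec and a realistic frame rate instead of 0 and 1 (MJPG and 25 fps are only illustrative choices; sorting the file names also keeps the frames in order):

import cv2
import os

image_folder = 'A549_A1'
images = sorted(img for img in os.listdir(image_folder) if img.endswith(".tif"))
height, width, layers = cv2.imread(os.path.join(image_folder, images[0])).shape

fourcc = cv2.VideoWriter_fourcc(*'MJPG')   # explicit codec instead of 0
video = cv2.VideoWriter('video.avi', fourcc, 25, (width, height))
for image in images:
    video.write(cv2.imread(os.path.join(image_folder, image)))
video.release()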

How to save an .EPS file to PNG with transparency in Python

I'm building a Paint-like app. Since I want the freedom to reposition and modify shape properties later, I am using Tkinter to draw shapes on a Canvas instead of PIL's ImageDraw or anything else. From other answers, I found how to save a canvas as PNG by first creating a PostScript file and then converting it to PNG using PIL.
Now the problem is that the EPS file has transparent areas, but the PNG file fills those voids with a white background color. I'm not sure where I am going wrong.
Below is the function I used.
def saveImg(event):
    global canvas
    canvas.postscript(file="my_drawing.eps", colormode='color')
    imgNew = Image.open("my_drawing.eps")
    imgNew.convert("RGBA")
    imgNew.thumbnail((2000, 2000), Image.ANTIALIAS)
    imgNew.save('testImg.png', quality=90)
Looks like transparency is not supported. From the docs:
The EPS driver can read EPS images in L, LAB, RGB and CMYK mode, but Ghostscript may convert the images to RGB mode rather than leaving them in the original color space.
When you load in RGB (instead of RGBA) the alpha channel information is discarded and converting it to RGBA later will not recover it.
Your best shot is porting it to a more recent toolkit like Cairo or Qt, or converting the file using Ghostscript directly, as suggested by PM2Ring.
For the GS approach, in order to set the width and height of the output file, you must use the -rN switch, where N is the resolution in PPI (pixels per inch). You must do the math to get the target resolution from the EPS bounding box and the desired output size.
Or you can render at a fixed resolution first, let's say 100 PPI, see what width you get, and do the math to find the correct resolution. For example, if rendering with -r100 gives you a file 500 pixels wide but you want it to be 1024:
desired_resolution = initial_resolution * desired_width // initial_width
In order to get a file 1024 pixels wide:
>>> 100 * 1024 // 500
204
So you must render the EPS again using -r204.
Edit 1:
I got the solution from this question.
We can set a custom width and height using -gNNNNxMMMM, but the DPI value crops only a small area. I tried with the usual 72 DPI and got a decent output (I'm not sure if it's perfect or not). Now I need to find out how to execute this command every time I run the program and provide the custom image size value. :\
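For the record, a rough sketch of how the Ghostscript command could be invoked from the program with subprocess; the gs executable name (typically gswin64c on Windows), the resolution and the file names are assumptions that would need adapting:

import subprocess

# pngalpha renders the EPS onto a transparent background instead of white.
subprocess.run([
    "gs", "-dSAFER", "-dBATCH", "-dNOPAUSE",
    "-sDEVICE=pngalpha",
    "-r204",                      # resolution in PPI, from the math above
    "-sOutputFile=testImg.png",
    "my_drawing.eps",
], check=True)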
