Python PIL: best scaling method that preserves lines

I have a 2D drawing with a black background and white lines (exported from AutoCAD), and I want to create a thumbnail that preserves the lines, using the Python PIL library.
But what I obtain using the 'thumbnail' method is just a black picture scattered with white dots.
Note that if I put the image into an IMG tag with a fixed width, I get exactly what I want (but then the full-size image has to be loaded).
After your comments, here is my sample code:
from PIL import Image
fn = 'filename.gif'
im = Image.open(fn)
im.convert('RGB')
im.thumbnail((300, 300), Image.ANTIALIAS)
im.save('newfilename.png', 'PNG')
How can I fix this?

The default resampling filter used by thumbnail is NEAREST, which is a really bad choice here. If you're resizing to 1/5 of the original size, for example, it will keep one pixel and throw away the next 4; a one-pixel-wide line has only a 1-in-5 chance of showing up in the result at all!
The surprising thing is that BILINEAR and BICUBIC aren't much better. They apply a formula to the 2 or 3 pixels closest to the source point, but there are still plenty of pixels they never look at, and the formula will de-emphasize the line anyway.
The best choice is ANTIALIAS, which appears to take all of the original image into consideration without throwing away any pixels. The lines will become dimmer but they won't disappear entirely; you can do an extra step to improve the contrast if necessary.
Note that all of these methods will fall back to NEAREST if you're working with a paletted image, i.e. im.mode == 'P'. You must always convert to 'RGB' first, and since convert() returns a new image rather than converting in place, the result must be assigned (the sample code in the question discards it).
from PIL import Image
im = Image.open(fn)
im = im.convert('RGB')               # assign the result; convert() returns a copy
im.thumbnail(size, Image.ANTIALIAS)  # thumbnail, by contrast, works in place
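If the surviving lines come out too dim, here is a hedged sketch of that extra contrast step, using Pillow's ImageOps.autocontrast (the filenames are reused from the question):
from PIL import Image, ImageOps
im = Image.open('filename.gif')
im = im.convert('RGB')                     # avoid the NEAREST fallback for 'P' mode
im.thumbnail((300, 300), Image.ANTIALIAS)
im = ImageOps.autocontrast(im)             # stretch the dimmed lines back toward white
im.save('newfilename.png', 'PNG')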
Here's an example taken from the electronics.stackexchange site https://electronics.stackexchange.com/questions/5412/easiest-and-best-poe-ethernet-chip-micro-design-for-diy-interface-with-custom-ard/5418#5418
Using the default NEAREST algorithm, which I assume is similar to the results you had:
Using the ANTIALIAS algorithm:

By default, im.resize uses the NEAREST filter, which does exactly what you're seeing: it loses information unless a line happens to land on one of the sampled pixel positions.
Instead use
im = im.resize(size, Image.BILINEAR)
This should preserve your lines. If not, try Image.BICUBIC or Image.ANTIALIAS. Any of those should work better than NEAREST.

Related

How to adjust Pillow EPS to JPG quality

I'm trying to convert EPS images to JPEG using Pillow, but the results are of low quality. I'm trying to use the resize method, but it gets completely ignored. I set the size of the JPEG image to (3600, 4700), but the resulting image is 360x470. My code is:
eps_image = Image.open('img.eps')
height = eps_image.height * 10
width = eps_image.width * 10
new_size = (height, width)
print(new_size) # prints (3600, 4700)
eps_image.resize(new_size, Image.ANTIALIAS)
eps_image.save(
    'img.jpeg',
    format='JPEG',
    dpi=(9000, 9000),
    quality=95)
UPD. Vasu Deo.S noticed one of my errors, and thanks to him the JPG image has become bigger, but the quality is still low. I've tried different DPI values, sizes, and resample values for the resize function, but the result does not change much. How can I make it better?
The problem is that PIL is a raster image processor, as opposed to a vector image processor. It "rasterises" vector images (such as your EPS file and SVG files) onto a grid when it opens them because it can only deal with rasters.
If that grid doesn't have enough resolution, you can never regain it. Normally, it rasterises at 100dpi, so if you want to make bigger images, you need to rasterise onto a larger grid before you even get started.
Compare:
from PIL import Image
eps_image = Image.open('image.eps')
eps_image.save('a.jpg')
The result is 540x720:
And this:
from PIL import Image
eps_image = Image.open('image.eps')
# Rasterise onto 4x higher resolution grid
eps_image.load(scale=4)
eps_image.save('a.jpg')
The result is 2160x2880:
You now have enough quality to resize however you like.
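Putting both steps together, a sketch (reusing the question's img.eps, which rasterises to 360x470 by default, so scale=10 lands on the desired size):
from PIL import Image
eps_image = Image.open('img.eps')
eps_image.load(scale=10)           # rasterise onto a 10x finer grid: 3600x4700
# If an exact size is still needed, there is now enough detail to resize cleanly:
eps_image = eps_image.resize((3600, 4700), Image.ANTIALIAS)
eps_image.save('img.jpeg', quality=95)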
Note that you don't need to write any Python to do this at all - ImageMagick will do it all for you. It is included in most Linux distros and is available for macOS and Windows and you just use it in Terminal. The equivalent command is like this:
magick -density 400 input.eps -resize 800x600 -quality 95 output.jpg
It's because eps_image.resize(new_size, Image.ANTIALIAS) returns a resized copy of the image, so you have to store the result in a variable. Just change:
eps_image.resize(new_size, Image.ANTIALIAS)
to
eps_image = eps_image.resize(new_size, Image.ANTIALIAS)
UPDATE:
These may not solve the problem completely, but they should still help (a sketch combining both suggestions follows the list).
1. You are trying to save your output image as a .jpeg, which is a lossy compression format, so information is lost during compression. Change the output file extension to a lossless format such as .png so that the data is not compromised during compression. Also change quality=95 to quality=100 in Image.save().
2. You are using Image.ANTIALIAS for resampling, which is not that good when upscaling (it has been renamed to Image.LANCZOS in newer versions; the old name remains for backward compatibility). Try Image.BICUBIC instead, which produces quite favorable results (for the most part) when upscaling.
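A sketch combining both suggestions (filenames and the 10x factor reused from the question):
from PIL import Image
eps_image = Image.open('img.eps')
new_size = (eps_image.width * 10, eps_image.height * 10)
# BICUBIC tends to upscale more gracefully than ANTIALIAS/LANCZOS.
eps_image = eps_image.resize(new_size, Image.BICUBIC)
eps_image.save('img.png', format='PNG')  # lossless, so nothing further is lost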

OpenCV cv2.rectangle output binary image

I have been trying to draw a rectangle on a black image using cv2.rectangle. Here is my code (it is just a sample; in the actual code there is a loop, i.e. the values x2, y2, w2, h2 change in a loop):
heir = np.zeros((np.shape(image1)[0], np.shape(image1)[1]), np.uint8)
cv2.rectangle(heir, (x2, y2), (x2 + w2, y2 + h2), (255, 255, 0), 5)
cv2.imshow("img", heir)
cv2.waitKey()
It is giving the following output:
Why is the image like that? Why are the boundaries not just a line of width 5?
I have tried, but I am not able to figure it out.
Can't post this in a comment, but it's a negative answer: the same operations work for me on Windows/python 2.7.8/opencv3.1
import numpy as np
import cv2

heir = np.zeros((100, 200), np.uint8)
x2 = 10
y2 = 20
w2 = 30
h2 = 40
cv2.rectangle(heir, (x2, y2), (x2 + w2, y2 + h2), (255, 255, 0), 5)
cv2.imshow("img", heir)
cv2.waitKey()
Because you are loading the image to be tagged (to draw rectangles on) in grayscale, the colors of the rectangles/bounding boxes you add are converted to grayscale as well.
To fix the issue, open the image in color. Since you didn't include that part of the code, here is the proposed solution:
tag_img = cv2.imread(MYIMAGE,1)
Pay attention to the second parameter here: "1" means load the image in color. Read more about reading images here: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_image_display/py_image_display.html
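A minimal sketch of the fix (the path and box coordinates are hypothetical):
import cv2

tag_img = cv2.imread('my_image.png', 1)   # 1 = load in color (BGR)
x2, y2, w2, h2 = 10, 20, 30, 40           # hypothetical box from the loop
cv2.rectangle(tag_img, (x2, y2), (x2 + w2, y2 + h2), (255, 255, 0), 5)
cv2.imshow("img", tag_img)
cv2.waitKey()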

RGB Values Being Returned by PIL don't match RGB color

I'm attempting to write a reasonably simple program that reads the size of an image and returns all the RGB values. I'm using PIL on Python 2.7, and my code goes like this:
import os, sys
from PIL import Image
img = Image.open('C:/image.png')
pixels = img.load()
print(pixels[0, 1])
Now, this code was actually taken from this site as a way to read a GIF file. I'm trying to get the code to print an RGB tuple (in this case (55, 55, 55)), but all it gives me is a small sequence of unrelated numbers, usually containing 34.
I have tried many other code examples, from here and elsewhere, but they don't seem to work. Is it something wrong with the .png format? Do I need further code for the RGB part? I'm happy for any help.
My guess is that your image file is using pre-multiplied alpha values. The 8 values you see are pretty close to 55*34/255 (where 34 is the alpha channel value).
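A quick check of that arithmetic (55 and 34 are the values from the question):
r, alpha = 55, 34
print(r * alpha // 255)   # 7 -- close to the small numbers being printed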
PIL uses the mode "RGBa" (with a lowercase a) to indicate when it's using premultiplied alpha. You may be able to tell PIL to convert the image to normal "RGBA", where the pixels will have roughly the values you expect:
img = Image.open('C:/image.png').convert("RGBA")
Note that if your image isn't supposed to be partly transparent at all, you may have larger issues going on. We can't help you with that without knowing more about your image.

Pygame, set transparency on an image imported using convert_alpha()

In my pygame game, to import a JPEG image, I use convert():
http://www.pygame.org/docs/ref/surface.html#pygame.Surface.convert
Then, to play with the image's transparency (how much we can see through the image), I use set_alpha():
http://www.pygame.org/docs/ref/surface.html#pygame.Surface.set_alpha
However, to import my PNG image, which has a transparent background, I use convert_alpha():
http://www.pygame.org/docs/ref/surface.html#pygame.Surface.convert_alpha
But with this way of importing, I can't adjust the overall transparency using set_alpha(). Any other idea for adjusting the transparency (how much we see through the image)?
The documentation for set_alpha says:
If the Surface format contains per pixel alphas, then this alpha value will be ignored.
In your case, with a PNG image, you have per-pixel alpha, so you must manage the alpha "per pixel". For example, you can do the following (not the best code, but easy to understand; it only works with PNGs whose pixels are either fully transparent or fully opaque):
def change_alpha(img, alpha=255):
    width, height = img.get_size()
    for x in range(width):
        for y in range(height):
            r, g, b, old_alpha = img.get_at((x, y))
            if old_alpha > 0:
                img.set_at((x, y), (r, g, b, alpha))
Be careful: it's "slow", because you touch every pixel that is not already at alpha 0 (i.e. transparent in your PNG).
If your PNG has multiple levels of transparency, you should scale the transparency with a better formula, like this:
r, g, b, old_alpha = img.get_at((x, y))
img.set_at((x, y), (r, g, b, alpha * old_alpha // 255))  # scale, don't replace
And in this case, never modify the original image; work on a copy so you never lose the original alpha.
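A sketch of that copy-based variant, combining the scaled-alpha formula above with the advice to leave the original untouched (change_alpha_scaled is a hypothetical name):
def change_alpha_scaled(img, alpha=255):
    # Work on a copy so the original per-pixel alpha is never lost.
    result = img.copy()
    width, height = result.get_size()
    for x in range(width):
        for y in range(height):
            r, g, b, old_alpha = result.get_at((x, y))
            result.set_at((x, y), (r, g, b, alpha * old_alpha // 255))
    return result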
I hope it will help
===================== EDIT ===================
Adding some optimisation, since it was asked about in the comments.
With a simple caching approach:
class image_with_alpha(object):
    def __init__(self, name=None):
        self.img = None
        self.alpha = {}
        if name:
            self.load_image(name)

    def load_image(self, name):
        self.img = pygame.image.load(name)
        self.alpha[255] = self.img
        #self.pre_compute_alpha()

    def pre_compute_alpha(self):
        for alpha in range(10):
            # change_alpha modifies in place, so cache a modified copy.
            img = self.img.copy()
            change_alpha(img, alpha)
            self.alpha[alpha] = img

    def get_img(self, a=255):
        try:
            return self.alpha[a]
        except KeyError:
            img = self.img.copy()
            change_alpha(img, a)
            self.alpha[a] = img
            return self.alpha[a]
And use it like this:
Load the image:
image = image_with_alpha("test.png")
Blit it with an alpha of 60:
screen.blit(image.get_img(60), (0, 0))
And now it's fast, I hope.
The fastest solution is probably to use numpy array manipulation, it should be fast enough to avoid the need for caching. What's really slow about calculating the alpha value pixel-wise is iterating in Python, while numpy does it all in C.
Start out by referencing the image's alpha channel into a numpy array. This will create a lock on the image surface; let's remember that for later. Then take the minimum (pixel-wise) of your original alpha and an array full of ones (that will leave you with an array of only ones and zeros), multiply that (pixel-wise) by your desired alpha and copy the result back to the image's alpha channel (still represented by that array reference). Before you can blit the image to the screen, the array reference to the alpha array must be cleared, releasing the lock on the image surface.
import numpy
import pygame

def change_alpha(img, alpha=255):
    # Reference the surface's alpha channel as a numpy array (locks the surface).
    chan = pygame.surfarray.pixels_alpha(img)
    # minimum(chan, 1) gives 0 where transparent, 1 elsewhere; scale by alpha.
    chan2 = numpy.minimum(chan, numpy.ones(chan.shape, dtype=chan.dtype)) * alpha
    numpy.copyto(chan, chan2)
    del chan  # drop the array reference to unlock the surface for blitting
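Hypothetical usage (test.png and the display setup are assumptions):
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
image = pygame.image.load('test.png').convert_alpha()  # per-pixel alpha surface
change_alpha(image, 60)       # every non-transparent pixel now has alpha 60
screen.blit(image, (0, 0))
pygame.display.flip()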
OK, I have the answer for you. Surfaces imported with pygame's convert_alpha() use per-pixel alpha, and based on the docs, set_alpha() has no effect on them. However, you can get around this limitation by doing the following. If you load your image using convert() and then set the alpha, you can make the image transparent. Then all you have to do is use set_colorkey(background color) to eliminate the background. Be careful with the colorkey: any color in the image that matches the colorkey becomes transparent. The colorkey does not care about per-pixel alpha, so you can change the alpha of an image and use a colorkey at the same time.
Here is the code...
#Loading Image
image = pygame.image.load("test.png").convert()
#Setting Alpha
image.set_alpha(some desired alpha)
#Set colorkey To Eliminate Background Color
image.set_colorkey(some background color)
I threw together this test picture for testing the code. The image does have transparency around its edges.
This is what it looks like blitted onto a green background without the added alpha. The white part was transparent until the image was loaded with .convert().
This is the finished look of the image with the whole code applied. The alpha has been stripped and reset, and the colorkey has been set to white because white was the background.
I hope this is what you are looking for and that my answer helped.
NOTE: You may want to make a copy of the image before you change its alpha like this, so there is no risk of the image carrying over a "spillover" effect from previous uses.
Use set_colorkey(color) to set a transparent color. For example, if you have an image of an apple, and everything but the apple is black, you'd use apple.set_colorkey(black), and everything but the apple would be transparent. Also, if you're having trouble using a JPG image, I'd suggest changing it to a PNG and then calling .convert(). A sketch of the apple example follows.
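A hedged sketch (apple.png is a hypothetical file; black is assumed to be (0, 0, 0)):
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
apple = pygame.image.load('apple.png').convert()  # hypothetical image file
apple.set_colorkey((0, 0, 0))     # every pure-black pixel becomes transparent
screen.blit(apple, (100, 100))
pygame.display.flip()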

Python Image Library: clean Downsampling

I've been having trouble getting PIL to downsample images nicely. The goal, in this case, is for my website to automagically downsample and cache the original image file whenever a different size is required, removing the pain of maintaining multiple versions of the same image. However, I have not had any luck. I've tried:
image.thumbnail((width, height), Image.ANTIALIAS)
image.save(newSource)
and
image.resize((width, height), Image.ANTIALIAS).save(newSource)
and
ImageOps.fit(image, (width, height), Image.ANTIALIAS, (0, 0)).save(newSource)
and all of them seem to perform a nearest-neighbour downsample rather than averaging over the pixels as they should. Hence it turns images like
http://www.techcreation.sg/media/projects//software/Java%20Games/images/Tanks3D%20Full.png
to
http://www.techcreation.sg/media/temp/0x5780b20fe2fd0ed/Tanks3D.png
which isn't very nice. Has anyone else bumped into this issue?
That image is an indexed-color (palette or P mode) image. There are a very limited number of colors to work with and there's not much chance that a pixel from the resized image will be in the palette, since it will need a lot of in-between colors. So it always uses nearest-neighbor mode when resizing; it's really the only way to keep the same palette.
This behavior is the same as in Adobe Photoshop.
You want to convert to RGB mode first and resize it, then go back to palette mode before saving, if desired. (Actually I would just save it in RGB mode, and then turn PNGCrush loose on the folder of resized images.)
This is over a year old, but in case anyone is still looking:
Here is a sample of code that checks whether an image is in palette mode and makes the necessary adjustments:
import Image  # or: from PIL import Image
img = Image.open(sourceFile)
if 'P' in img.mode:  # check if image is a palette type
    img = img.convert("RGB")  # convert it to RGB
    img = img.resize((w, h), Image.ANTIALIAS)  # resize it
    img = img.convert("P", dither=Image.NONE, palette=Image.ADAPTIVE)
    # convert back to palette
else:
    img = img.resize((w, h), Image.ANTIALIAS)  # regular resize
img.save(newSourceFile)  # save the image to the new source
#img.save(newSourceFile, quality=95, dpi=(72, 72), optimize=True)
# set quality, dpi, and shrink size
By converting the paletted version to RGB, we can resize it with antialiasing. If you want to convert it back afterwards, you have to set dithering to NONE and use an ADAPTIVE palette; if those options aren't included, your result (if reconverted to palette) will be grainy. You can also use the quality option in the save function, on some image formats, to improve the quality even more.
