Pygame, set transparency on an image imported using convert_alpha() - python

In my pygame game, to import a JPEG image, I use convert():
http://www.pygame.org/docs/ref/surface.html#pygame.Surface.convert
Then, to play with the image's transparency (how much we can see through the image), I use set_alpha():
http://www.pygame.org/docs/ref/surface.html#pygame.Surface.set_alpha
However, to import my PNG image, which has a transparent background, I use convert_alpha():
http://www.pygame.org/docs/ref/surface.html#pygame.Surface.convert_alpha
But with this way of importing, I can't adjust the overall transparency using set_alpha(). Any other idea to adjust the transparency (how much we see through the image)?

When you read the documentation for set_alpha, you can see this:
If the Surface format contains per pixel alphas, then this alpha value will be ignored.
In your case, with a PNG image, you have per-pixel alphas. So you must manage the alpha "per pixel". For example, you can do this (not the best code, but easy to understand; it works only with PNGs whose pixels are either fully transparent or fully opaque):
def change_alpha(img, alpha=255):
    width, height = img.get_size()
    for x in range(width):
        for y in range(height):
            r, g, b, old_alpha = img.get_at((x, y))
            if old_alpha > 0:
                img.set_at((x, y), (r, g, b, alpha))
Be careful, it's "slow", because you touch each pixel whose alpha is not 0 (the transparent pixels from your PNG).
If your PNG has multiple levels of transparency, you should manage the transparency with a better formula, like this:
r, g, b, old_alpha = img.get_at((x, y))
img.set_at((x, y), (r, g, b, (alpha * old_alpha) // 255))
And in this case, never modify the original image, but work on a copy to never lose your original alpha.
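Putting those two points together, here is a sketch of a copy-based version of the proportional approach (the function name is my own; it assumes a surface with per-pixel alpha):

```python
import pygame

def change_alpha_scaled(img, alpha=255):
    """Return a copy of img with every pixel's alpha scaled by alpha/255."""
    result = img.copy()  # work on a copy, keep the original alpha intact
    width, height = result.get_size()
    for x in range(width):
        for y in range(height):
            r, g, b, old_alpha = result.get_at((x, y))
            result.set_at((x, y), (r, g, b, old_alpha * alpha // 255))
    return result
```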
I hope this helps.
===================== EDIT ===================
Adding some optimisation, as asked for in the comments.
With some caching methodology:
class image_with_alpha(object):
    def __init__(self, name=None):
        self.img = None
        self.alpha = {}
        if name:
            self.load_image(name)

    def load_image(self, name):
        self.img = pygame.image.load(name)
        self.alpha[255] = self.img
        #self.pre_compute_alpha()

    def pre_compute_alpha(self):
        # note: for caching to work, change_alpha must return a
        # modified copy of the image rather than alter it in place
        for alpha in range(0, 10):
            self.alpha[alpha] = change_alpha(self.img, alpha)

    def get_img(self, a=255):
        try:
            return self.alpha[a]
        except KeyError:
            self.alpha[a] = change_alpha(self.img, a)
            return self.alpha[a]
And use it like this:
Load the image:
image = image_with_alpha("test.png")
Blit with 60 for alpha:
screen.blit(image.get_img(60), (0, 0))
And now it's fast, I hope.

The fastest solution is probably to use numpy array manipulation, it should be fast enough to avoid the need for caching. What's really slow about calculating the alpha value pixel-wise is iterating in Python, while numpy does it all in C.
Start out by referencing the image's alpha channel into a numpy array. This will create a lock on the image surface; let's remember that for later. Then take the minimum (pixel-wise) of your original alpha and an array full of ones (that will leave you with an array of only ones and zeros), multiply that (pixel-wise) by your desired alpha and copy the result back to the image's alpha channel (still represented by that array reference). Before you can blit the image to the screen, the array reference to the alpha array must be cleared, releasing the lock on the image surface.
import numpy
import pygame

def change_alpha(img, alpha=255):
    chan = pygame.surfarray.pixels_alpha(img)  # locks the surface
    chan2 = numpy.minimum(chan, numpy.ones(chan.shape, dtype=chan.dtype)) * alpha
    numpy.copyto(chan, chan2)
    del chan  # release the lock on the surface
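Note that the minimum-with-ones trick above flattens any intermediate alpha levels to either 0 or the new value. If you instead want to scale the original per-pixel alpha proportionally (as in the formula from the first answer), a sketch along the same lines (the function name is my own):

```python
import numpy
import pygame

def scale_alpha(img, alpha=255):
    """Scale the surface's per-pixel alpha in place by alpha/255."""
    chan = pygame.surfarray.pixels_alpha(img)  # locks the surface
    # widen to uint16 so the multiplication doesn't overflow uint8
    chan[:] = (chan.astype(numpy.uint16) * alpha // 255).astype(numpy.uint8)
    del chan  # release the lock
```

As before, this modifies the surface in place, so apply it to a copy if you need the original alpha again.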

Ok, I have the answer for you. A surface loaded with "convert_alpha()" carries per-pixel alpha, and based on the docs, set_alpha() will have no effect on it. However, you can get around this limitation by doing the following. If you load your image using "convert()" and then set the alpha, you can make the whole image transparent. Then all you have to do is use "set_colorkey(background color)" to eliminate the background. Be careful with the colorkey, because any color in the image that matches the colorkey will become transparent. The colorkey does not care about per-pixel alpha, so you can change the alpha of an image and use a colorkey at the same time.
Here is the code...
#Loading Image
image = pygame.image.load("test.png").convert()
#Setting Alpha (e.g. 128 for half-transparent)
image.set_alpha(128)
#Set colorkey To Eliminate Background Color (e.g. white)
image.set_colorkey((255, 255, 255))
I threw this test picture together for testing the code. The image does have transparency around the edges of it.
This is what it looks like blitted onto a green background without the added alpha. The white part was transparent until loaded with ".convert()"
This is the finished look of the image with the whole code applied. The alpha has been stripped and reset, and the colorkey has been set to white because it was the background
I hope this is what you are looking for and that my answer helped.
NOTE: You may want to make a copy of the image before you change its alpha like this, so you avoid the image having a "spillover" effect from previous uses.

Use set_colorkey(color) to set a transparent color. For example, if you have an image of an apple, and everything but the apple is black, you'd use apple.set_colorkey(black), and everything but the apple would be transparent. Also, if you're having trouble using a JPG image, I'd suggest changing it to a PNG and then doing .convert().
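A minimal, self-contained sketch of that idea (the 10x10 "apple" surface here is a made-up stand-in for a real loaded image):

```python
import pygame

BLACK = (0, 0, 0)
apple = pygame.Surface((10, 10))   # no per-pixel alpha
apple.fill(BLACK)                  # black "background"
apple.set_at((5, 5), (255, 0, 0))  # one red "apple" pixel
apple.set_colorkey(BLACK)          # black pixels are skipped when blitting
```

Blitting `apple` onto any background now draws only the red pixel; every black pixel is treated as transparent.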

Related

OpenCV cv2.rectangle output binary image

I have been trying to draw a rectangle on a black image using cv2.rectangle. Here is my code (it is just a sample; in the actual code there is a loop, i.e. the values x2, y2, w2, h2 change in a loop):
heir = np.zeros((np.shape(image1)[0],np.shape(image1)[1]),np.uint8);
cv2.rectangle(heir,(x2,y2),(x2+w2,y2+h2),(255,255,0),5)
cv2.imshow("img",heir);
cv2.waitKey()
It is giving the following output:
Why is the image like that? Why are the boundaries not just a line of width 5?
I have tried, but I am not able to figure it out.
Can't post this in a comment, but it's a negative answer: the same operations work for me on Windows/python 2.7.8/opencv3.1
import numpy as np
import cv2
heir = np.zeros((100,200),np.uint8);
x2=10
y2=20
w2=30
h2=40
cv2.rectangle(heir,(x2,y2),(x2+w2,y2+h2),(255,255,0),5)
cv2.imshow("img",heir);
cv2.waitKey()
Because you are loading the image to be tagged (draw rectangles on) in grayscale, the colors are being converted to grayscale when you add rectangles/bounding boxes.
To fix the issue, open the image in "color" format. Since you didn't include that part of the code, here is the proposed solution:
tag_img = cv2.imread(MYIMAGE,1)
Pay attention to the second parameter here, which is "1" and means loading the image as color. Read more about reading images here: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_image_display/py_image_display.html

Python PIL: Convert RGBA->P - ValueError: image has wrong mode [duplicate]

I would like to convert a PNG32 image (with transparency) to PNG8 with the Python Imaging Library.
So far I have succeeded in converting to PNG8 with a solid background.
Below is what I am doing:
from PIL import Image
im = Image.open("logo_256.png")
im = im.convert('RGB').convert('P', palette=Image.ADAPTIVE, colors=255)
im.save("logo_py.png", colors=255)
After much searching on the net, here is the code to accomplish what I asked for:
from PIL import Image
im = Image.open("logo_256.png")
# PIL complains if you don't load explicitly
im.load()
# Get the alpha band
alpha = im.split()[-1]
im = im.convert('RGB').convert('P', palette=Image.ADAPTIVE, colors=255)
# Set all alpha values of 128 or below to 255,
# and the rest to 0
mask = Image.eval(alpha, lambda a: 255 if a <= 128 else 0)
# Paste the color of index 255 and use alpha as a mask
im.paste(255, mask)
# The transparency index is 255
im.save("logo_py.png", transparency=255)
Source: http://nadiana.com/pil-tips-converting-png-gif
Note that the code there does not call im.load(), and thus crashes on my version of OS/Python/PIL (it looks like a bug in PIL).
As mentioned by Mark Ransom, your palettized image will only have one transparency level.
When saving your palettized image, you'll have to specify which color index you want to be the transparent color, like this:
im.save("logo_py.png", transparency=0)
This saves the image with palettized colors, using the first color as the transparent color.
This is an old question so perhaps older answers are tuned to older version of PIL?
But for anyone coming to this with Pillow>=6.0.0, the following answer is orders of magnitude faster and simpler.
im = Image.open('png32_or_png64_with_alpha.png')
im = im.quantize()
im.save('png8_with_alpha_channel_preserved.png')
Don't use PIL to generate the palette, as it can't handle RGBA properly and has a quite limited quantization algorithm.
Use pngquant instead.

Python Image Library: clean Downsampling

I've been having trouble trying to get PIL to nicely downsample images. The goal, in this case, is for my website to automagically downsample->cache the original image file whenever a different size is required, thus removing the pain of maintaining multiple versions of the same image. However, I have not had any luck. I've tried:
image.thumbnail((width, height), Image.ANTIALIAS)
image.save(newSource)
and
image.resize((width, height), Image.ANTIALIAS).save(newSource)
and
ImageOps.fit(image, (width, height), Image.ANTIALIAS, (0, 0)).save(newSource)
and all of them seem to perform a nearest-neighbor downsample rather than averaging over the pixels as they should. Hence it turns images like
http://www.techcreation.sg/media/projects//software/Java%20Games/images/Tanks3D%20Full.png
to
http://www.techcreation.sg/media/temp/0x5780b20fe2fd0ed/Tanks3D.png
which isn't very nice. Has anyone else bumped into this issue?
That image is an indexed-color (palette or P mode) image. There are a very limited number of colors to work with and there's not much chance that a pixel from the resized image will be in the palette, since it will need a lot of in-between colors. So it always uses nearest-neighbor mode when resizing; it's really the only way to keep the same palette.
This behavior is the same as in Adobe Photoshop.
You want to convert to RGB mode first and resize it, then go back to palette mode before saving, if desired. (Actually I would just save it in RGB mode, and then turn PNGCrush loose on the folder of resized images.)
This is over a year old, but in case anyone is still looking:
Here is a sample of code that will see if an image is in a palette mode, and make adjustments
import Image  # or: from PIL import Image

img = Image.open(sourceFile)
if 'P' in img.mode:  # check if image is a palette type
    img = img.convert("RGB")  # convert it to RGB
    img = img.resize((w, h), Image.ANTIALIAS)  # resize it
    img = img.convert("P", dither=Image.NONE, palette=Image.ADAPTIVE)
    # convert back to palette
else:
    img = img.resize((w, h), Image.ANTIALIAS)  # regular resize
img.save(newSourceFile)  # save the image to the new source
# img.save(newSourceFile, quality=95, dpi=(72, 72), optimize=True)
# set quality, dpi, and shrink size
By converting the palette version to RGB, we can resize it with the antialias filter. If you want to convert it back, you have to set dithering to NONE and use an ADAPTIVE palette. If these options aren't included, your result (if reconverted to palette) will be grainy. Also, you can use the quality option in the save function, on some image formats, to improve the quality even more.

Transparent PNG in PIL turns out not to be transparent

I have been hitting my head against the wall for a while with this, so maybe someone out there can help.
I'm using PIL to open a PNG with transparent background and some random black scribbles, and trying to put it on top of another PNG (with no transparency), then save it to a third file.
It comes out all black at the end, which is irritating, because I didn't tell it to be black.
I've tested this with multiple proposed fixes from other posts. The image opens in RGBA format, and it's still messed up.
Also, this program is supposed to deal with all sorts of file formats, which is why I'm using PIL. Ironic that the first format I tried is all screwy.
Any help would be appreciated. Here's the code:
from PIL import Image
img = Image.open(basefile)
layer = Image.open(layerfile) # this file is the transparent one
print layer.mode # RGBA
img.paste(layer, (xoff, yoff)) # xoff and yoff are 0 in my tests
img.save(outfile)
I think what you want to use is the paste mask argument.
see the docs, (scroll down to paste)
from PIL import Image
img = Image.open(basefile)
layer = Image.open(layerfile) # this file is the transparent one
print layer.mode # RGBA
img.paste(layer, (xoff, yoff), mask=layer)
# the transparency layer will be used as the mask
img.save(outfile)
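A self-contained illustration of why the mask argument matters (the tiny images here are synthetic stand-ins for basefile and layerfile):

```python
from PIL import Image

base = Image.new('RGB', (4, 4), (0, 255, 0))      # opaque green base
layer = Image.new('RGBA', (4, 4), (0, 0, 0, 0))   # fully transparent layer
layer.putpixel((1, 1), (255, 0, 0, 255))          # one opaque red pixel

# Without the mask, the alpha channel is dropped and the
# transparent pixels overwrite the base as black.
no_mask = base.copy()
no_mask.paste(layer, (0, 0))

# With the mask, only the opaque pixels of the layer are pasted.
with_mask = base.copy()
with_mask.paste(layer, (0, 0), mask=layer)
```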

Python PIL: best scaling method that preserves lines

I have a 2D drawing with a black background and white lines (exported from Autocad) and I want to create a thumbnail preserving lines, using Python PIL library.
But what I obtain using the 'thumbnail' method is just a black picture scattered with white dots.
Note that if I put the image into an IMG tag with fixed width, I obtain exactly what I want (but the image is entirely loaded).
After your comments, here is my sample code:
from PIL import Image
fn = 'filename.gif'
im = Image.open(fn)
im.convert('RGB')
im.thumbnail((300, 300), Image.ANTIALIAS)
im.save('newfilename.png', 'PNG')
How can I do this?
The default resizing method used by thumbnail is NEAREST, which is a really bad choice. If you're resizing to 1/5 of the original size for example, it will output one pixel and throw out the next 4 - a one-pixel wide line has only a 1 out of 5 chance of showing up at all in the result!
The surprising thing is that BILINEAR and BICUBIC aren't much better. They take a formula and apply it to the 2 or 3 closest pixels to the source point, but there's still lots of pixels they don't look at, and the formula will deemphasize the line anyway.
The best choice is ANTIALIAS, which appears to take all of the original image into consideration without throwing away any pixels. The lines will become dimmer but they won't disappear entirely; you can do an extra step to improve the contrast if necessary.
Note that all of these methods will fall back to NEAREST if you're working with a paletted image, i.e. im.mode == 'P'. You must always convert to 'RGB'.
from PIL import Image
im = Image.open(fn)
im = im.convert('RGB')
im.thumbnail(size, Image.ANTIALIAS)
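The "extra step to improve the contrast" mentioned above can be sketched with ImageEnhance (the blank 400x400 image is a stand-in for a real drawing, the factor 2.0 is an arbitrary choice, and LANCZOS is the current Pillow name for the ANTIALIAS filter):

```python
from PIL import Image, ImageEnhance

im = Image.new('RGB', (400, 400))            # stand-in for the real drawing
im.thumbnail((300, 300), Image.LANCZOS)      # resize in place
im = ImageEnhance.Contrast(im).enhance(2.0)  # brighten the dimmed lines
```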
Here's an example taken from the electronics.stackexchange site https://electronics.stackexchange.com/questions/5412/easiest-and-best-poe-ethernet-chip-micro-design-for-diy-interface-with-custom-ard/5418#5418
Using the default NEAREST algorithm, which I assume is similar to the results you had:
Using the ANTIALIAS algorithm:
By default, im.resize uses the NEAREST filter, which is going to do what you're seeing -- lose information unless it happens to fall on an appropriately moduloed pixel.
Instead call
im.resize(size, Image.BILINEAR)
This should preserve your lines. If not, try Image.BICUBIC or Image.ANTIALIAS. Any of those should work better than NEAREST.
