Python not updating alpha parameter in PIL images

def apply_alpha(img, alpha_value):
    print("alpha_value" + str(alpha_value))
    mask_value = int(alpha_value * 255)
    print("mask_value" + str(mask_value))
    img.putalpha(mask_value)
    return img
def apply_alpha(img, alpha_value):
    import copy
    tmp = copy.copy(img)
    print("alpha_value" + str(alpha_value))
    mask_value = int(alpha_value * 255)
    print("mask_value" + str(mask_value))
    tmp.putalpha(mask_value)
    return tmp
working_image = apply_alpha(obs, alpha)
I tried both of the above apply_alpha functions, where img is a PIL image, and neither correctly applies the alpha (nothing changes).
I am stitching together individual tiles of a composite image, and using putalpha to set the transparency of each individual tile. I believe the paste in the merging of the individual tiles is erasing the putalpha result for each tile. How can I get this to work?
I'm using this merge_images to stitch together the individual tile images: Stitching Photos together
This scenario is distinct from other questions because img.putalpha(...) is called inside a function, which appears to cause it not to work.

I figured it out: the cause of the issue was this code in the merge function for the images:
result = Image.new('RGB', (result_width, result_height))
result.paste(im=img1, box=(0, 0), mask=img1)
result.paste(im=img2, box=(width1, 0), mask=img2)
Because the image mode was "RGB", the alpha channels were being ignored when composing the tiles. Make sure the image mode is "RGBA".
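For reference, a minimal sketch of the corrected merge (the paste calls follow the snippet above; names and sizes are illustrative, and both tiles are assumed to already be RGBA after putalpha):
from PIL import Image

def merge_images(img1, img2):
    width1, height1 = img1.size
    width2, height2 = img2.size
    # 'RGBA' mode keeps the per-tile alpha set by putalpha()
    result = Image.new('RGBA', (width1 + width2, max(height1, height2)), (0, 0, 0, 0))
    # passing each tile as its own mask makes paste() respect its alpha channel
    result.paste(im=img1, box=(0, 0), mask=img1)
    result.paste(im=img2, box=(width1, 0), mask=img2)
    return result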

Related

Combining a 3-channel numpy array to form an RGB image

I was trying to combine 3 grayscale images into a single overlapping image, with a different colour for each.
For that, I added each one into a 3-channel numpy array.
But when plotting with im.show I don't get a colourful image. It works up to the second channel, but when I add the third channel it doesn't: the final image has only red and blue colour.
It is supposed to be red, green and blue, corresponding to the overlapping images.
Why would that be?
image1 = Image.open("E:/imaging/04102022_Bronze/Copper_4_2/10.tif")  # opening image 1
image1_norm = (np.array(image1) - np.array(image1).min()) / (np.array(image1).max() - np.array(image1).min())  # normalising image 1
image2 = Image.open("E:/imaging/04102022_Bronze/Oxygen_1_2/10.tif")  # opening image 2
image2_norm = (np.array(image2) - np.array(image2).min()) / (np.array(image2).max() - np.array(image2).min())  # normalising image 2
image3 = Image.open("E:/imaging/04102022_Bronze/Oxygen_1_2/10.tif")  # opening image 3
image3_norm = (np.array(image3) - np.array(image3).min()) / (np.array(image3).max() - np.array(image3).min())  # normalising image 3
im = np.array(image2)
new_image = np.zeros(im.shape + (3,))  # creating an empty 3-channel numpy array; its .shape is (255, 1024, 3)
new_image[:, :, 0] = image1_norm  # adding the three images into three channels
new_image[:, :, 1] = image2_norm
new_image[:, :, 2] = image3_norm
new_image1 = new_image * 255.999
new_image2 = new_image1.astype(np.uint8)
final_image = Image.fromarray(new_image2, mode='RGB')
A few possible issues...
When you open an image in PIL, if you want to be sure it is single-channel greyscale, and not accidentally 3-channel RGB, or a palette image, force it to greyscale:
im = Image.open('image.png').convert('L')
Try not to repeat complicated calculations or expressions several times - it just makes for a maintenance nightmare. Maybe use a function instead:
def normalize(im):
    # Normalise image to range 0..1
    min, max = im.min(), im.max()
    return (im.astype(float) - min) / (max - min)
You can use Numpy's dstack() to merge channels - it means "depth"-stack, as opposed to np.vstack() which stacks images vertically above/below each other and np.hstack() which stacks images side-by-side horizontally. It is a lot simpler than creating an image of the right size and individually pushing each channel into it.
result = np.dstack((im1, im2, im3))
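As a quick shape check (a toy example, not part of the original answer), the three stacking functions differ only in the axis along which they join:
import numpy as np

a = np.zeros((2, 2))
print(np.dstack((a, a, a)).shape)  # (2, 2, 3) - joined in depth, i.e. as channels
print(np.vstack((a, a, a)).shape)  # (6, 2)    - stacked vertically
print(np.hstack((a, a, a)).shape)  # (2, 6)    - stacked side-by-side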
That would make the overall code more like this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np

def normalize(im):
    # Normalise image to range 0..1
    min, max = im.min(), im.max()
    return (im.astype(float) - min) / (max - min)

# Load images as single channel Numpy arrays
im1 = np.array(Image.open('ch1.png').convert('L'))
im2 = np.array(Image.open('ch2.png').convert('L'))
im3 = np.array(Image.open('ch3.png').convert('L'))

# Normalize and scale
n1 = normalize(im1) * 255.999
n2 = normalize(im2) * 255.999
n3 = normalize(im3) * 255.999

# Merge channels to RGB
result = np.dstack((n1, n2, n3))
result = Image.fromarray(result.astype(np.uint8))
result.save('result.png')
That merges the three single-channel input images into a single RGB result.

How to remove the background of a noisy image and extract transparent objects?

I have an image processing problem that I can't solve. I have a set of 375 images like the one below (1). I'm trying to remove the background, i.e. to do "background subtraction" (or "foreground extraction"), and get only the waste on a plain background (black/white/...).
(1) Image example
I tried many things, including createBackgroundSubtractorMOG2 from OpenCV, and thresholding. I also tried to remove the background pixel by pixel, by subtracting it from the foreground, because I have a set of 237 background images (2) (the carpet without the waste, but slightly offset from the images with the objects). There are also variations in brightness across the background images.
(2) Example of a background image
Here is a code example that I was able to test and that gives me the results below (3) and (4). I use Python 3.8.3.
# Function to remove the sides of the images
def delete_side(img, x_left, x_right):
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if j <= x_left or j >= x_right:
                img[i, j] = (0, 0, 0)
    return img
# Initialize the background model
backSub = cv2.createBackgroundSubtractorMOG2(history=250, varThreshold=2, detectShadows=True)

# Read the frames and update the background model
for frame in frames:
    if frame.endswith(".png"):
        filepath = FRAMES_FOLDER + '/' + frame
        img = cv2.imread(filepath)
        img_cut = delete_side(img, x_left=190, x_right=1280)
        gray = cv2.cvtColor(img_cut, cv2.COLOR_BGR2GRAY)
        mask = backSub.apply(gray)
        newimage = cv2.bitwise_or(img, img, mask=mask)
        img_blurred = cv2.GaussianBlur(newimage, (5, 5), 0)
        gray2 = cv2.cvtColor(img_blurred, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray2, 10, 255, cv2.THRESH_BINARY)
        final = cv2.bitwise_or(img, img, mask=binary)
        newpath = RESULT_FOLDER + '/' + frame
        cv2.imwrite(newpath, final)
I was inspired by many other cases found on Stack Overflow and elsewhere (example: removing pixels less than n size(noise) in an image - open CV python).
(3) The result obtained with the code above
(4) Result when increasing the varThreshold argument to 10
Unfortunately, there is still a lot of noise on the resulting pictures.
As a beginner in background subtraction, I don't have all the keys to get an optimal solution. If someone has an idea for doing this task in a more efficient and cleaner way (Is there a special method to handle the case of transparent objects? Can noise on objects be eliminated more effectively? etc.), I'm interested :)
Thanks
Thanks for your answers. For information, I simply changed methodology and used a segmentation model (U-Net) with 2 labels (foreground, background) to identify the background. It works quite well.
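For readers who want to stay with the classical approach, here is a minimal sketch of the pixel-wise subtraction idea from the question, assuming a background frame registered to the object frame (the file names, threshold and kernel size are illustrative, not values from the question):
import cv2

# hypothetical file names; both frames must have the same size
frame = cv2.imread('frame.png')
background = cv2.imread('background.png')

# absolute per-pixel difference between frame and background
diff = cv2.absdiff(frame, background)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)

# threshold the difference, then open to suppress small noise specks
_, mask = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)

# keep only the foreground pixels
foreground = cv2.bitwise_and(frame, frame, mask=mask)
cv2.imwrite('foreground.png', foreground)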

Use PIL to recolor a monochrome image and preserve transparency [duplicate]

Okay, here's the situation:
I want to use the Python Image Library to "theme" an image like this:
Theme color: "#33B5E5"
IN:
OUT:
I got the result using these commands with ImageMagick:
convert image.png -colorspace gray image.png
mogrify -fill "#33b5e5" -tint 100 image.png
Explanation:
The image is first converted to black-and-white, and then it is themed.
I want to get the same result with the Python Image Library.
But it seems I'm having some problems using it, since:
it cannot handle transparency
the background (transparent in the main image) gets themed too.
I'm trying to use this script:
import Image
import ImageEnhance

def image_overlay(src, color="#FFFFFF", alpha=0.5):
    overlay = Image.new(src.mode, src.size, color)
    bw_src = ImageEnhance.Color(src).enhance(0.0)
    return Image.blend(bw_src, overlay, alpha)

img = Image.open("image.png")
image_overlay(img, "#33b5e5", 0.5)
You can see I did not convert it to a grayscale first, because that didn't work with transparency either.
I'm sorry to post so many issues in one question, but I couldn't do anything else :$
Hope you all understand.
Note: There's a Python 3/pillow fork of PIL version of this answer here.
Update 4: Guess the previous update to my answer wasn't the last one after all. Although converting it to use PIL exclusively was a major improvement, there were a couple of things that seemed like there ought to be better, less awkward ways to do them, if only PIL had the ability.
Well, after reading the documentation closely as well as some of the source code, I realized what I wanted to do was in fact possible. The trade-off is that the look-up table used now has to be built manually, so the overall code is slightly longer. However, the result only needs one call to the relatively slow Image.point() method, instead of three.
from PIL import Image
from PIL.ImageColor import getcolor, getrgb
from PIL.ImageOps import grayscale

def image_tint(src, tint='#ffffff'):
    if Image.isStringType(src):  # file path?
        src = Image.open(src)
    if src.mode not in ['RGB', 'RGBA']:
        raise TypeError('Unsupported source image mode: {}'.format(src.mode))
    src.load()
    tr, tg, tb = getrgb(tint)
    tl = getcolor(tint, "L")  # tint color's overall luminosity
    if not tl: tl = 1  # avoid division by zero
    tl = float(tl)  # compute luminosity preserving tint factors
    sr, sg, sb = map(lambda tv: tv/tl, (tr, tg, tb))  # per component adjustments
    # create look-up tables to map luminosity to adjusted tint
    # (using floating-point math only to compute table)
    luts = (map(lambda lr: int(lr*sr + 0.5), range(256)) +
            map(lambda lg: int(lg*sg + 0.5), range(256)) +
            map(lambda lb: int(lb*sb + 0.5), range(256)))
    l = grayscale(src)  # 8-bit luminosity version of whole image
    if Image.getmodebands(src.mode) < 4:
        merge_args = (src.mode, (l, l, l))  # for RGB version of grayscale
    else:  # include copy of src image's alpha layer
        a = Image.new("L", src.size)
        a.putdata(src.getdata(3))
        merge_args = (src.mode, (l, l, l, a))  # for RGBA version of grayscale
        luts += range(256)  # for 1:1 mapping of copied alpha values
    return Image.merge(*merge_args).point(luts)

if __name__ == '__main__':
    import os
    input_image_path = 'image1.png'
    print 'tinting "{}"'.format(input_image_path)
    root, ext = os.path.splitext(input_image_path)
    result_image_path = root + '_result' + ext
    print 'creating "{}"'.format(result_image_path)
    result = image_tint(input_image_path, '#33b5e5')
    if os.path.exists(result_image_path):  # delete any previous result file
        os.remove(result_image_path)
    result.save(result_image_path)  # file name's extension determines format
    print 'done'
Here's a screenshot showing input images on the left with corresponding outputs on the right. The upper row is for one with an alpha layer and the lower is a similar one that doesn't have one.
You need to convert to grayscale first. What I did:
get original alpha layer using Image.split()
convert to grayscale
colorize using ImageOps.colorize
put back original alpha layer
Resulting code:
import Image
import ImageOps

def tint_image(src, color="#FFFFFF"):
    src.load()
    r, g, b, alpha = src.split()
    gray = ImageOps.grayscale(src)
    result = ImageOps.colorize(gray, (0, 0, 0, 0), color)
    result.putalpha(alpha)
    return result

img = Image.open("image.png")
tinted = tint_image(img, "#33b5e5")

How can I find cycles in a skeleton image with Python libraries?

I have many skeletonized images like this:
How can I detect a cycle, a loop, in the skeleton?
Are there "special" functions that do this, or should I implement it as a graph?
If the graph option is the only one, can the Python graph library NetworkX help me?
You can exploit the topology of the skeleton. A cycle encloses a hole, and an image without cycles has no holes, so we can use scipy.ndimage to fill any holes and compare the result with the original. This isn't the fastest method, but it's extremely easy to code.
import scipy.misc, scipy.ndimage
# Read the image
img = scipy.misc.imread("Skel.png")
# Retain only the skeleton
img[img!=255] = 0
img = img.astype(bool)
# Fill the holes
img2 = scipy.ndimage.binary_fill_holes(img)
# Compare the two, an image without cycles will have no holes
print "Cycles in image: ", ~(img == img2).all()
# As a test break the cycles
img3 = img.copy()
img3[0:200, 0:200] = 0
img4 = scipy.ndimage.binary_fill_holes(img3)
# Compare the two, an image without cycles will have no holes
print "Cycles in image: ", ~(img3 == img4).all()
I've used your "B" picture as an example. The first two images are the original and the filled version which detects a cycle. In the second version, I've broken the cycle and nothing gets filled, thus the two images are the same.
First, let's build an image of the letter B with PIL:
import Image, ImageDraw, ImageFont
import matplotlib.pyplot as plt

image = Image.new("RGBA", (600, 150), (255, 255, 255))
draw = ImageDraw.Draw(image)
fontsize = 150
font = ImageFont.truetype("/usr/share/fonts/truetype/liberation/LiberationMono-Regular.ttf", fontsize)
txt = 'B'
draw.text((30, 5), txt, (0, 0, 0), font=font)
img = image.resize((188, 45), Image.ANTIALIAS)
print type(img)
plt.imshow(img)
You may find a better way to do that, particularly with the path to the fonts. It would be better to load an image instead of generating it. Anyway, we now have something to work on:
Now, the real part:
import mahotas as mh
import numpy as np

img = np.array(img)
im = img[:, 0:50, 0]
im = im < 128
skel = mh.thin(im)
noholes = mh.morph.close_holes(skel)
plt.subplot(311)
plt.imshow(im)
plt.subplot(312)
plt.imshow(skel)
plt.subplot(313)
cskel = np.logical_not(skel)
choles = np.logical_not(noholes)
holes = np.logical_and(cskel, noholes)
lab, n = mh.label(holes)
print 'B has %s holes' % str(n)
plt.imshow(lab)
And we have in the console (ipython):
B has 2 holes
Converting your skeleton image to a graph representation is not trivial, and I don't know of any tools to do that for you.
One way to do it on the bitmap would be to use a flood fill, like the paint bucket in Photoshop. If you start a flood fill of the image, the entire background will get filled if there are no cycles. If the fill doesn't reach the entire image, then you've found a cycle. Robustly finding all the cycles could require filling multiple times.
This is likely to be very slow to execute, but probably much faster to code than a technique where you trace the skeleton into a graph data structure.
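A minimal sketch of that flood-fill idea with OpenCV (assuming a binary skeleton, white on black, and that the corner pixel lies outside any loop; the file name is illustrative):
import cv2
import numpy as np

# load the skeleton as a binary image (white skeleton on black background)
skel = cv2.imread('skeleton.png', cv2.IMREAD_GRAYSCALE)
_, skel = cv2.threshold(skel, 127, 255, cv2.THRESH_BINARY)

# flood-fill the background from a corner assumed to lie outside any loop;
# floodFill requires a mask 2 pixels larger than the image
h, w = skel.shape
mask = np.zeros((h + 2, w + 2), np.uint8)
filled = skel.copy()
cv2.floodFill(filled, mask, (0, 0), 255)

# any pixels still black are regions the fill could not reach,
# i.e. holes enclosed by cycles in the skeleton
print("Cycles in image:", bool((filled == 0).any()))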
