Okay, here's the situation:
I want to use the Python Image Library to "theme" an image like this:
Theme color: "#33B5E5"
IN:
OUT:
I got the result using these commands with ImageMagick:
convert image.png -colorspace gray image.png
mogrify -fill "#33b5e5" -tint 100 image.png
Explanation:
The image is first converted to black-and-white, and then it is themed.
I want to get the same result with the Python Image Library.
But I'm running into problems using it:
It cannot handle transparency.
The background (the transparent area of the main image) gets themed too.
I'm trying to use this script:
import Image
import ImageEnhance

def image_overlay(src, color="#FFFFFF", alpha=0.5):
    overlay = Image.new(src.mode, src.size, color)
    bw_src = ImageEnhance.Color(src).enhance(0.0)
    return Image.blend(bw_src, overlay, alpha)

img = Image.open("image.png")
image_overlay(img, "#33b5e5", 0.5)
You can see I did not convert it to grayscale first, because that didn't work with transparency either.
I'm sorry to post so many issues in one question, but I couldn't do anything else :$
Hope you all understand.
Note: There's a Python 3 / Pillow port of this PIL answer here.
Update 4: It turns out the previous update to my answer wasn't the last one after all. Although converting it to use PIL exclusively was a major improvement, there were a couple of things that seemed like they ought to have better, less awkward solutions, if only PIL had the ability.
Well, after reading the documentation closely, as well as some of the source code, I realized what I wanted to do was in fact possible. The trade-off is that the look-up table now has to be built manually, so the overall code is slightly longer. However, the result is that only one call to the relatively slow Image.point() method is needed, instead of three.
from PIL import Image
from PIL.ImageColor import getcolor, getrgb
from PIL.ImageOps import grayscale

def image_tint(src, tint='#ffffff'):
    if Image.isStringType(src):  # file path?
        src = Image.open(src)
    if src.mode not in ['RGB', 'RGBA']:
        raise TypeError('Unsupported source image mode: {}'.format(src.mode))
    src.load()

    tr, tg, tb = getrgb(tint)
    tl = getcolor(tint, "L")  # tint color's overall luminosity
    if not tl: tl = 1  # avoid division by zero
    tl = float(tl)  # compute luminosity preserving tint factors
    sr, sg, sb = map(lambda tv: tv/tl, (tr, tg, tb))  # per-component adjustments

    # create look-up tables to map luminosity to adjusted tint
    # (using floating-point math only to compute table)
    luts = (map(lambda lr: int(lr*sr + 0.5), range(256)) +
            map(lambda lg: int(lg*sg + 0.5), range(256)) +
            map(lambda lb: int(lb*sb + 0.5), range(256)))
    l = grayscale(src)  # 8-bit luminosity version of whole image
    if Image.getmodebands(src.mode) < 4:
        merge_args = (src.mode, (l, l, l))  # for RGB version of grayscale
    else:  # include copy of src image's alpha layer
        a = Image.new("L", src.size)
        a.putdata(src.getdata(3))
        merge_args = (src.mode, (l, l, l, a))  # for RGBA version of grayscale
        luts += range(256)  # for 1:1 mapping of copied alpha values

    return Image.merge(*merge_args).point(luts)
if __name__ == '__main__':
    import os

    input_image_path = 'image1.png'
    print 'tinting "{}"'.format(input_image_path)
    root, ext = os.path.splitext(input_image_path)
    result_image_path = root + '_result' + ext
    print 'creating "{}"'.format(result_image_path)
    result = image_tint(input_image_path, '#33b5e5')
    if os.path.exists(result_image_path):  # delete any previous result file
        os.remove(result_image_path)
    result.save(result_image_path)  # file name's extension determines format
    print 'done'
Here's a screenshot showing input images on the left with corresponding outputs on the right. The upper row is for one with an alpha layer and the lower is a similar one that doesn't have one.
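For reference, here's a minimal sketch of how the same function might look under Python 3 and Pillow (the note at the top links a complete fork; this port is my own and untested here). The changes are mechanical: isinstance() replaces the removed Image.isStringType(), and list comprehensions replace the Python 2 map() calls, since map() is lazy in Python 3:

from PIL import Image
from PIL.ImageColor import getcolor, getrgb
from PIL.ImageOps import grayscale

def image_tint(src, tint='#ffffff'):
    if isinstance(src, str):  # file path? (Image.isStringType is gone in Python 3)
        src = Image.open(src)
    if src.mode not in ('RGB', 'RGBA'):
        raise TypeError('Unsupported source image mode: {}'.format(src.mode))
    src.load()

    tr, tg, tb = getrgb(tint)
    tl = getcolor(tint, "L") or 1  # tint color's overall luminosity (avoid /0)
    sr, sg, sb = (tv / tl for tv in (tr, tg, tb))  # per-component adjustments

    # build explicit lists for the look-up table
    luts = ([int(i * sr + 0.5) for i in range(256)] +
            [int(i * sg + 0.5) for i in range(256)] +
            [int(i * sb + 0.5) for i in range(256)])
    l = grayscale(src)  # 8-bit luminosity version of whole image
    if Image.getmodebands(src.mode) < 4:
        merge_args = (src.mode, (l, l, l))  # for RGB version of grayscale
    else:  # include copy of src image's alpha layer
        a = Image.new("L", src.size)
        a.putdata(src.getdata(3))
        merge_args = (src.mode, (l, l, l, a))  # for RGBA version of grayscale
        luts += list(range(256))  # 1:1 mapping of copied alpha values
    return Image.merge(*merge_args).point(luts)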
You need to convert to grayscale first. What I did:
get original alpha layer using Image.split()
convert to grayscale
colorize using ImageOps.colorize
put back original alpha layer
Resulting code:
import Image
import ImageOps

def tint_image(src, color="#FFFFFF"):
    src.load()
    r, g, b, alpha = src.split()
    gray = ImageOps.grayscale(src)
    result = ImageOps.colorize(gray, (0, 0, 0, 0), color)
    result.putalpha(alpha)
    return result

img = Image.open("image.png")
tinted = tint_image(img, "#33b5e5")
EDIT: Code is working now, thanks to Mark and zephyr. zephyr also has two alternate working solutions below.
I want to divide-blend two images with PIL. I found ImageChops.multiply(image1, image2), but I couldn't find a similar divide(image1, image2) function.
Divide Blend Mode Explained (I used the first two images here as my test sources.)
Is there a built-in divide blend function that I missed (PIL or otherwise)?
My test code below runs and is getting close to what I'm looking for. The resulting image output is similar to the divide blend example image here: Divide Blend Mode Explained.
Is there a more efficient way to do this divide blend operation (less steps and faster)? At first, I tried using lambda functions in Image.eval and ImageMath.eval to check for black pixels and flip them to white during the division process, but I couldn't get either to produce the correct result.
EDIT: Fixed code and shortened thanks to Mark and zephyr. The resulting image output matches the output from zephyr's numpy and scipy solutions below.
# PIL Divide Blend test
import Image, os, ImageMath
imgA = Image.open('01background.jpg')
imgA.load()
imgB = Image.open('02testgray.jpg')
imgB.load()
# split RGB images into 3 channels
rA, gA, bA = imgA.split()
rB, gB, bB = imgB.split()
# divide each channel (image1/image2)
rTmp = ImageMath.eval("int(a/((float(b)+1)/256))", a=rA, b=rB).convert('L')
gTmp = ImageMath.eval("int(a/((float(b)+1)/256))", a=gA, b=gB).convert('L')
bTmp = ImageMath.eval("int(a/((float(b)+1)/256))", a=bA, b=bB).convert('L')
# merge channels into RGB image
imgOut = Image.merge("RGB", (rTmp, gTmp, bTmp))
imgOut.save('PILdiv0.png', 'PNG')
os.system('start PILdiv0.png')
You are asking:
Is there a more efficient way to do this divide blend operation (less steps and faster)?
You could also use the Python package blend_modes. It is written with vectorized NumPy math and is generally fast. Install it via pip install blend_modes. I have written the commands in a more verbose way to improve readability; it would be shorter to chain them. Use blend_modes like this to divide your images:
from PIL import Image
import numpy
import os
from blend_modes import blend_modes
# Load images
imgA = Image.open('01background.jpg')
imgA = numpy.array(imgA)
# append alpha channel
imgA = numpy.dstack((imgA, numpy.ones((imgA.shape[0], imgA.shape[1], 1))*255))
imgA = imgA.astype(float)
imgB = Image.open('02testgray.jpg')
imgB = numpy.array(imgB)
# append alpha channel
imgB = numpy.dstack((imgB, numpy.ones((imgB.shape[0], imgB.shape[1], 1))*255))
imgB = imgB.astype(float)
# Divide images
imgOut = blend_modes.divide(imgA, imgB, 1.0)
# Save images
imgOut = numpy.uint8(imgOut)
imgOut = Image.fromarray(imgOut)
imgOut.save('PILdiv0.png', 'PNG')
os.system('start PILdiv0.png')
Be aware that for this to work, both images need to have the same dimensions, e.g. imgA.shape == (240,320,3) and imgB.shape == (240,320,3).
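If the sizes don't match, one option (my own addition, not part of the blend_modes API) is to resize one image to the other's size with PIL before converting to arrays:

from PIL import Image

imgA = Image.open('01background.jpg')
imgB = Image.open('02testgray.jpg')
if imgB.size != imgA.size:
    imgB = imgB.resize(imgA.size, Image.BILINEAR)  # filter choice is illustrative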
There is a mathematical definition for the divide function here:
http://www.linuxtopia.org/online_books/graphics_tools/gimp_advanced_guide/gimp_guide_node55_002.html
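For reference, that definition matches what the code below computes per channel (with a from the first image and b from the second):

    out = min(255, 256 * a / (b + 1))

The +1 keeps the divisor away from zero, and values above 255 are clipped.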
Here's an implementation with scipy/matplotlib:
import numpy as np
import scipy.misc as mpl
a = mpl.imread('01background.jpg')
b = mpl.imread('02testgray.jpg')
c = a/((b.astype('float')+1)/256)
d = c*(c <= 255) + 255*np.ones(np.shape(c))*(c > 255)  # clamp anything above 255
e = d.astype('uint8')
mpl.imshow(e)
mpl.imsave('output.png', e)
If you don't want to use matplotlib, you can do it like this (I assume you have numpy):
import Image
from numpy import asarray, ones, shape

imgA = Image.open('01background.jpg')
imgA.load()
imgB = Image.open('02testgray.jpg')
imgB.load()

a = asarray(imgA)
b = asarray(imgB)
c = a/((b.astype('float')+1)/256)
d = c*(c <= 255) + 255*ones(shape(c))*(c > 255)  # clamp anything above 255
e = d.astype('uint8')

imgOut = Image.fromarray(e)
imgOut.save('PILdiv0.png', 'PNG')
The problem you're having is that a zero in image B causes a divide-by-zero. If you convert all of those values to one instead, I think you'll get the desired result. That eliminates the need to check for zeros and fix them in the result.
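A minimal sketch of that suggestion (my own illustration, assuming NumPy and two equal-sized grayscale arrays; the function name is hypothetical):

import numpy as np

def divide_blend(a, b):
    a = a.astype(float)
    b = b.astype(float)
    b[b == 0] = 1  # zeros in the divisor become ones, so no divide-by-zero
    c = a / (b / 255.0)  # divide blend
    return np.clip(c, 0, 255).astype(np.uint8)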
The code below is intended to take an infrared image (B&W) and convert it to RGB. It does so successfully, but with significant noise. I have included a few lines for noise reduction but they don't seem to help. I've included the starting/resulting photos below. Any advice/corrections are welcome and thank you in advance!
from skimage import io
import numpy as np
import glob, os
from tkinter import Tk
from tkinter.filedialog import askdirectory
import cv2

path = askdirectory(title='Select PNG Folder')  # shows dialog box and returns the path
outpath = askdirectory(title='Select SAVE Folder')

# wavelength in microns
MWIR = 4.5

R = .642
G = .532
B = .44

vector = [R, G, B]
vectorsum = np.sum(vector)
vector = vector / vectorsum  # normalize
# changing this value changes the outcome significantly, so I have been
# messing with it in the hopes of fixing it, but no luck so far
vector = vector * 255 / MWIR
vector = np.power(vector, 4)

for file in os.listdir(path):
    if file.endswith(".png"):
        imIn = io.imread(os.path.join(path, file))
        imOut = imIn * vector
        ret, thresh = cv2.threshold(imOut, 64, 255, cv2.THRESH_BINARY)
        kernel = np.ones((5, 5), np.uint8)
        erode = cv2.erode(thresh, kernel, iterations=1)
        result = cv2.bitwise_or(imOut, erode)
        io.imsave(os.path.join(outpath, file) + '_RGB.png', imOut.astype(np.uint8))
Your noise looks like completely random values, so I suspect you have an error in your conversion from float to uint8. But instead of rolling everything yourself, why don't you just use:
imOut = cv2.cvtColor(imIn,cv2.COLOR_GRAY2BGR)
Here is one way to do that in Python/OpenCV.
Your issue is likely that your channel values are exceeding the 8-bit range.
Sorry, I do not understand the relationship between your R,G,B weights and your MWIR. Dividing by MWIR will do nothing if your weights are properly normalized.
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('car.jpg')
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# make color channels
red = gray.copy()
green = gray.copy()
blue = gray.copy()
# set weights
R = .642
G = .532
B = .44
MWIR = 4.5
# raise the weights to the 4th power, then normalize them by their sum
R = R**4
G = G**4
B = B**4
sum = R + G + B
R = R/sum
G = G/sum
B = B/sum
print(R,G,B)
# combine channels with weights
red = (R*red)
green = (G*green)
blue = (B*blue)
result = cv2.merge([red,green,blue])
# scale by ratio of 255/max to increase to fully dynamic range
max=np.amax(result)
result = ((255/max)*result).clip(0,255).astype(np.uint8)
# write result to disk
cv2.imwrite("car_colored.png", result)
# display it
cv2.imshow("RESULT", result)
cv2.waitKey(0)
Result
If the noise is coming from the sensor itself, like a grainy noise, you'll need to look into denoising algorithms. scikit-image and opencv provide some denoising algorithms you can try. Maybe take a look at this and this.
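For example, here is a hedged sketch with OpenCV's non-local means denoiser (the file name and parameters are illustrative, not from the question):

import cv2

img = cv2.imread('noisy_ir.png', cv2.IMREAD_GRAYSCALE)  # hypothetical input
denoised = cv2.fastNlMeansDenoising(img, None, h=10, templateWindowSize=7, searchWindowSize=21)
cv2.imwrite('denoised.png', denoised)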
I recently learned about matplotlib.cm, which handles colormaps. I've been using those to artificially color IR images, and made a brief example using the same black & white car image used above. Basically, I create a colormap .csv file locally, then refer to it for RGB weights. You may have to pick and choose which colormap you prefer, but that's up to personal preference.
Input image:
Python:
import os
import numpy as np
import cv2
from matplotlib import cm

# Multiple colormap options are available - I've hardcoded viridis for this example.
colormaps = ["viridis", "plasma", "inferno", "magma", "cividis"]

def CreateColormap():
    if not os.path.exists("viridis_colormap.csv"):
        # Get 256 entries from "viridis" or any other Matplotlib colormap
        colormap = cm.get_cmap("viridis", 256)
        # Make a Numpy array of the 256 RGB values
        # Each line corresponds to an RGB colour for a greyscale level
        np.savetxt("viridis_colormap.csv", (colormap.colors[..., 0:3]*255).astype(np.uint8), fmt='%d', delimiter=',')

def RecolorInfraredImageToRGB(ir_image):
    # Load RGB lookup table from CSV file
    lookup_table = np.loadtxt("viridis_colormap.csv", dtype=np.uint8, delimiter=",")
    # Make output image, same height and width as IR image, but 3-channel RGB
    result = np.zeros((*ir_image.shape, 3), dtype=np.uint8)
    # Take entries from RGB LUT according to greyscale values in image
    np.take(lookup_table, ir_image, axis=0, out=result)
    return result

if __name__ == "__main__":
    CreateColormap()
    img = cv2.imread("bwcar.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    recolored = RecolorInfraredImageToRGB(gray)
    cv2.imwrite("car_recolored.png", recolored)
    cv2.imshow("Viridis recolor", recolored)
    cv2.waitKey(0)
Output:
def apply_alpha(img, alpha_value):
    print("alpha_value" + str(alpha_value))
    mask_value = int(alpha_value * 255)
    print("mask_value" + str(mask_value))
    img.putalpha(mask_value)
    return img

def apply_alpha(img, alpha_value):
    import copy
    tmp = copy.copy(img)
    print("alpha_value" + str(alpha_value))
    mask_value = int(alpha_value * 255)
    print("mask_value" + str(mask_value))
    tmp.putalpha(mask_value)
    return tmp
working_image = apply_alpha(obs, alpha)
I tried both of the above apply_alpha functions, where img is a PIL image, and neither of them correctly applies the alpha (nothing changes).
I am stitching together individual tiles of a composite image and using putalpha to set the transparency of each individual tile. I believe the paste in the merging of the individual tiles is erasing the putalpha of each individual image. How can I get this to work?
I'm using this merge_images to stitch together the individual tile images: Stitching Photos together
This scenario is distinct from other questions asked because the img.putalpha(...) call happens inside a function, which is where it fails to work.
I figured it out: the cause of the issue was that, in the merge function for the images, there is this code:
result = Image.new('RGB', (result_width, result_height))
result.paste(im=img1, box=(0, 0), mask=img1)
result.paste(im=img2, box=(width1, 0), mask=img2)
Because the image type was "RGB", the alpha channels were being ignored when composing the tiles. Make sure the image type is "RGBA".
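A minimal sketch of the corrected merge (the function wrapper is my own; only the 'RGBA' mode fix and the snippet above come from the original):

from PIL import Image

def merge_images(img1, img2):
    width1, height1 = img1.size
    width2, height2 = img2.size
    result = Image.new('RGBA', (width1 + width2, max(height1, height2)))
    result.paste(im=img1, box=(0, 0), mask=img1)  # mask keeps each tile's alpha
    result.paste(im=img2, box=(width1, 0), mask=img2)
    return result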
I have many skeletonized images like this:
How can I detect a cycle, i.e. a loop, in the skeleton?
Are there "special" functions that do this, or should I implement it as a graph?
If the graph approach is the only option, can the Python graph library NetworkX help me?
You can exploit the topology of the skeleton. A closed cycle encloses a hole, so we can use scipy.ndimage to fill any holes and compare the result with the original. This isn't the fastest method, but it's extremely easy to code.
import scipy.misc, scipy.ndimage
# Read the image
img = scipy.misc.imread("Skel.png")
# Retain only the skeleton
img[img!=255] = 0
img = img.astype(bool)
# Fill the holes
img2 = scipy.ndimage.binary_fill_holes(img)
# Compare the two, an image without cycles will have no holes
print "Cycles in image: ", ~(img == img2).all()
# As a test break the cycles
img3 = img.copy()
img3[0:200, 0:200] = 0
img4 = scipy.ndimage.binary_fill_holes(img3)
# Compare the two, an image without cycles will have no holes
print "Cycles in image: ", ~(img3 == img4).all()
I've used your "B" picture as an example. The first two images are the original and the filled version which detects a cycle. In the second version, I've broken the cycle and nothing gets filled, thus the two images are the same.
First, let's build an image of the letter B with PIL:
import Image, ImageDraw, ImageFont
import matplotlib.pyplot as plt

image = Image.new("RGBA", (600, 150), (255, 255, 255))
draw = ImageDraw.Draw(image)
fontsize = 150
font = ImageFont.truetype("/usr/share/fonts/truetype/liberation/LiberationMono-Regular.ttf", fontsize)
txt = 'B'
draw.text((30, 5), txt, (0, 0, 0), font=font)
img = image.resize((188, 45), Image.ANTIALIAS)
print type(img)
plt.imshow(img)
You may find a better way to do that, particularly with the path to the fonts. It would be better to load an image instead of generating it. Anyway, we now have something to work on:
Now, the real part:
import mahotas as mh
import numpy as np

img = np.array(img)
im = img[:,0:50,0]
im = im < 128
skel = mh.thin(im)
noholes = mh.morph.close_holes(skel)
plt.subplot(311)
plt.imshow(im)
plt.subplot(312)
plt.imshow(skel)
plt.subplot(313)
cskel = np.logical_not(skel)
choles = np.logical_not(noholes)
holes = np.logical_and(cskel,noholes)
lab, n = mh.label(holes)
print 'B has %s holes'% str(n)
plt.imshow(lab)
And we have in the console (ipython):
B has 2 holes
Converting your skeleton image to a graph representation is not trivial, and I don't know of any tools to do that for you.
One way to do it on the bitmap would be to use a flood fill, like the paint bucket in Photoshop. If you start a flood fill from the background, the entire background will get filled if there are no cycles. If the fill doesn't reach the whole background, then you've found a cycle. Robustly finding all the cycles could require filling multiple times.
This is likely to be very slow to execute, but probably much faster to code than a technique that traces the skeleton into a graph data structure.
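A rough sketch of that idea with OpenCV (my own illustration, not the answer's code; it assumes a binary uint8 skeleton with background 0 and a background pixel at the top-left corner):

import cv2
import numpy as np

def has_cycle(skel):
    # skel: uint8 image, skeleton pixels = 255, background = 0
    h, w = skel.shape
    mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a 2px-larger mask
    filled = skel.copy()
    cv2.floodFill(filled, mask, (0, 0), 255)  # fill the background from the corner
    # any background pixel still 0 is sealed off by the skeleton: a cycle
    return bool((filled == 0).any())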