I'm working on a little problem in my spare time involving analysis of some images obtained through a microscope. It is a wafer with some stuff here and there, and ultimately I want to make a program to detect when certain materials show up.
Anyway, the first step is to normalize the intensity across the image, since the lens does not give uniform lighting. Currently I use an image with nothing on it, only the substrate, as a background (or reference) image, and I find the maximum of the three (R, G, B) intensity values.
from PIL import Image
from PIL import ImageDraw
rmax = 0;gmax = 0;bmax = 0;rmin = 300;gmin = 300;bmin = 300
im_old = Image.open("test_image.png")
im_back = Image.open("background.png")
maxx = im_old.size[0] #Import the size of the image
maxy = im_old.size[1]
im_new = Image.new("RGB", (maxx,maxy))
pixback = im_back.load()
for x in range(maxx):
    for y in range(maxy):
        if pixback[x,y][0] > rmax:
            rmax = pixback[x,y][0]
        if pixback[x,y][1] > gmax:
            gmax = pixback[x,y][1]
        if pixback[x,y][2] > bmax:
            bmax = pixback[x,y][2]
pixnew = im_new.load()
pixold = im_old.load()
for x in range(maxx):
    for y in range(maxy):
        r = float(pixold[x,y][0]) / ( float(pixback[x,y][0])*rmax )
        g = float(pixold[x,y][1]) / ( float(pixback[x,y][1])*gmax )
        b = float(pixold[x,y][2]) / ( float(pixback[x,y][2])*bmax )
        pixnew[x,y] = (r,g,b)
The first part of the code determines the maximum intensity of the RED, GREEN and BLUE channels, pixel by pixel, of the background image, but needs to be done only once.
The second part takes the "real" image (with stuff on it) and normalizes the RED, GREEN and BLUE channels, pixel by pixel, according to the background. This takes some time, 5-10 seconds for a 1280x960 image, which is way too slow if I need to do this to several images.
What can I do to improve the speed? I thought of moving all the images to numpy arrays, but I can't seem to find a fast way to do that for RGB images.
I'd rather not move away from python, since my C++ is quite low-level, and getting a working FORTRAN code would probably take longer than I could ever save in terms of speed :P
import numpy as np
from PIL import Image
def normalize(arr):
    """
    Linear normalization
    http://en.wikipedia.org/wiki/Normalization_%28image_processing%29
    """
    arr = arr.astype('float')
    # Do not touch the alpha channel
    for i in range(3):
        minval = arr[...,i].min()
        maxval = arr[...,i].max()
        if minval != maxval:
            arr[...,i] -= minval
            arr[...,i] *= (255.0/(maxval-minval))
    return arr

def demo_normalize():
    img = Image.open(FILENAME).convert('RGBA')
    arr = np.array(img)
    new_img = Image.fromarray(normalize(arr).astype('uint8'),'RGBA')
    new_img.save('/tmp/normalized.png')
See http://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.fromimage.html#scipy.misc.fromimage
You can say
databack = scipy.misc.fromimage(im_back)
rmax = numpy.max(databack[:,:,0])
gmax = numpy.max(databack[:,:,1])
bmax = numpy.max(databack[:,:,2])
which should be much faster than looping over all (r,g,b) triplets of your image.
Then you can do
dataold = scipy.misc.fromimage(im_old)
r = dataold[:,:,0] / ( databack[:,:,0] * rmax )
g = dataold[:,:,1] / ( databack[:,:,1] * gmax )
b = dataold[:,:,2] / ( databack[:,:,2] * bmax )
datanew = numpy.array((r,g,b))
imnew = scipy.misc.toimage(datanew)
The code is not tested, but should work somehow with minor modifications.
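Note that scipy.misc.fromimage and scipy.misc.toimage were removed in later SciPy releases, so here is a rough, untested sketch of the same idea using only numpy and PIL (the final rescaling is only there so the result can be saved as an 8-bit image, and the background is assumed to have no zero-valued pixels):
import numpy as np
from PIL import Image

# Load both images as float arrays of shape (height, width, 3)
databack = np.asarray(Image.open("background.png").convert("RGB"), dtype=np.float64)
dataold = np.asarray(Image.open("test_image.png").convert("RGB"), dtype=np.float64)

# Per-channel maxima of the background image
rmax = databack[:, :, 0].max()
gmax = databack[:, :, 1].max()
bmax = databack[:, :, 2].max()

# Vectorised version of the per-pixel normalization from the question
r = dataold[:, :, 0] / (databack[:, :, 0] * rmax)
g = dataold[:, :, 1] / (databack[:, :, 1] * gmax)
b = dataold[:, :, 2] / (databack[:, :, 2] * bmax)

# Stack the channels back together and rescale to 0-255 for saving
datanew = np.dstack((r, g, b))
imnew = Image.fromarray((255 * datanew / datanew.max()).astype(np.uint8), "RGB")
imnew.save("normalized.png")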
This is partially from the FolksTalk webpage:
from PIL import Image
import numpy as np
# Read image file
in_file = "my_image.png"
# convert('RGB') for PNG file type
image = Image.open(in_file).convert('RGB')
pixels = np.asarray(image)
# Convert from integers to floats
pixels = pixels.astype('float32')
# Normalize to the range 0-1
pixels /= 255.0
Related
The code below is intended to take an infrared image (B&W) and convert it to RGB. It does so successfully, but with significant noise. I have included a few lines for noise reduction but they don't seem to help. I've included the starting/resulting photos below. Any advice/corrections are welcome and thank you in advance!
from skimage import io
import numpy as np
import glob, os
from tkinter import Tk
from tkinter.filedialog import askdirectory
import cv2
path = askdirectory(title='Select PNG Folder') # shows dialog box and return the path
outpath = askdirectory(title='Select SAVE Folder')
# wavelength in microns
MWIR = 4.5
R = .642
G = .532
B = .44
vector = [R, G, B]
vectorsum = np.sum(vector)
vector = vector / vectorsum #normalize
vector = vector*255 / MWIR #changing this value changes the outcome significantly so I
#have been messing with it in the hopes of fixing it but no luck so far.
vector = np.power(vector, 4)
for file in os.listdir(path):
    if file.endswith(".png"):
        imIn = io.imread(os.path.join(path, file))
        imOut = imIn * vector
        ret,thresh = cv2.threshold(imOut,64,255,cv2.THRESH_BINARY)
        kernel = np.ones((5, 5), np.uint8)
        erode = cv2.erode(thresh, kernel, iterations = 1)
        result = cv2.bitwise_or(imOut, erode)
        io.imsave(os.path.join(outpath, file) + '_RGB.png',imOut.astype(np.uint8))
Your noise looks like completely random values, so I suspect you have an error in your conversion from float to uint8. But instead of rolling everything yourself, why don't you just use:
imOut = cv2.cvtColor(imIn,cv2.COLOR_GRAY2BGR)
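For completeness, here is a small sketch of the clip-then-cast conversion the noise points to (the random array is just a stand-in for your imOut):
import numpy as np

# Stand-in for imOut: a float image whose values exceed the 8-bit range
imOut = np.random.rand(4, 4, 3) * 300.0

# Clip to the valid range first, then cast; casting alone typically wraps
# out-of-range values around (e.g. 300.0 -> 44), which shows up as speckle
imOut8 = np.clip(imOut, 0, 255).astype(np.uint8)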
Here is one way to do that in Python/OpenCV.
Your issue is likely that your channel values are exceeding the 8-bit range.
Sorry, I do not understand the relationship between your R,G,B weights and your MWIR. Dividing by MWIR will do nothing if your weights are properly normalized.
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('car.jpg')
# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# make color channels
red = gray.copy()
green = gray.copy()
blue = gray.copy()
# set weights
R = .642
G = .532
B = .44
MWIR = 4.5
# get sum of weights and normalize them by the sum
R = R**4
G = G**4
B = B**4
sum = R + G + B
R = R/sum
G = G/sum
B = B/sum
print(R,G,B)
# combine channels with weights
red = (R*red)
green = (G*green)
blue = (B*blue)
result = cv2.merge([red,green,blue])
# scale by ratio of 255/max to increase to fully dynamic range
max=np.amax(result)
result = ((255/max)*result).clip(0,255).astype(np.uint8)
# write result to disk
cv2.imwrite("car_colored.png", result)
# display it
cv2.imshow("RESULT", result)
cv2.waitKey(0)
Result
If the noise is coming from the sensor itself, like a grainy noise, you'll need to look into denoising algorithms. scikit-image and opencv provide some denoising algorithms you can try. Maybe take a look at this and this.
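For example, OpenCV's non-local means denoiser can be tried directly on the 8-bit result; this is only a sketch, the file name is a placeholder and the parameters are common starting values you would tune for your sensor:
import cv2

img = cv2.imread("noisy_rgb.png")  # placeholder input file

# Arguments: source, destination, filter strength (h, hColor), template and search window sizes
denoised = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)
cv2.imwrite("denoised.png", denoised)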
I recently learned about matplotlib.cm, which handles colormaps. I've been using those to artificially color IR images, and made a brief example using the same black & white car image used above. Basically, I create a colormap .csv file locally, then refer to it for RGB weights. You may have to pick and choose which colormap you prefer, but that's up to personal preference.
Input image:
Python:
import os
import numpy as np
import cv2
from matplotlib import cm
# Multiple colormap options are available- I've hardcoded viridis for this example.
colormaps = ["viridis", "plasma", "inferno", "magma", "cividis"]
def CreateColormap():
    if not os.path.exists("viridis_colormap.csv"):
        # Get 256 entries from "viridis" or any other Matplotlib colormap
        colormap = cm.get_cmap("viridis", 256)
        # Make a Numpy array of the 256 RGB values
        # Each line corresponds to an RGB colour for a greyscale level
        np.savetxt("viridis_colormap.csv", (colormap.colors[...,0:3]*255).astype(np.uint8), fmt='%d', delimiter=',')

def RecolorInfraredImageToRGB(ir_image):
    # Load RGB lookup table from CSV file
    lookup_table = np.loadtxt("viridis_colormap.csv", dtype=np.uint8, delimiter=",")
    # Make output image, same height and width as IR image, but 3-channel RGB
    result = np.zeros((*ir_image.shape, 3), dtype=np.uint8)
    # Take entries from RGB LUT according to greyscale values in image
    np.take(lookup_table, ir_image, axis=0, out=result)
    return result

if __name__ == "__main__":
    CreateColormap()
    img = cv2.imread("bwcar.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    recolored = RecolorInfraredImageToRGB(gray)
    cv2.imwrite("car_recolored.png", recolored)
    cv2.imshow("Viridis recolor", recolored)
    cv2.waitKey(0)
Output:
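As a side note, if you do not need the intermediate CSV file, the same lookup can be done by calling the colormap object directly on the normalized grayscale array; a small sketch, assuming the same matplotlib colormap as above:
import numpy as np
import cv2
from matplotlib import cm

gray = cv2.cvtColor(cv2.imread("bwcar.jpg"), cv2.COLOR_BGR2GRAY)

# Calling a colormap on values in [0, 1] returns RGBA floats in [0, 1]
colored = cm.get_cmap("viridis")(gray / 255.0)
recolored = (colored[..., :3] * 255).astype(np.uint8)

# Convert RGB -> BGR before saving with OpenCV
cv2.imwrite("car_recolored_direct.png", cv2.cvtColor(recolored, cv2.COLOR_RGB2BGR))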
Like the image above suggests, how can I convert the image on the left into an array that represents the darkness of the image, between 0 for white and decimals closer to 1 for darker colours, as shown in the image, using Python 3?
Update:
I have tried to work a bit more on this. There are good answers below too.
# (assumes TensorFlow 1.x; Image here is IPython.display.Image for showing the file)
import numpy
import tensorflow as tf
from IPython.display import Image

# Load image
filename = tf.constant("one.png")
image_file = tf.read_file(filename)

# Show Image
Image("one.png")

# convert method
def convertRgbToWeight(rgbArray):
    arrayWithPixelWeight = []
    for i in range(int(rgbArray.size / rgbArray[0].size)):
        for j in range(int(rgbArray[0].size / 3)):
            lum = 255-((rgbArray[i][j][0]+rgbArray[i][j][1]+rgbArray[i][j][2])/3) # Reversed luminosity
            arrayWithPixelWeight.append(lum/255) # Map values from range 0-255 to 0-1
    return arrayWithPixelWeight

# Convert image to numbers and print them
image_decoded_png = tf.image.decode_png(image_file,channels=3)
image_as_float32 = tf.cast(image_decoded_png, tf.float32)
numpy.set_printoptions(threshold=numpy.nan)
sess = tf.Session()
squeezedArray = sess.run(image_as_float32)
convertedList = convertRgbToWeight(squeezedArray)
print(convertedList) # This will give me an array of numbers.
I would recommend reading in images with OpenCV. The biggest advantage of OpenCV is that it supports multiple image formats and it automatically transforms the image into a numpy array. For example:
import cv2
import numpy as np
img_path = '/YOUR/PATH/IMAGE.png'
img = cv2.imread(img_path, 0) # read image as grayscale. Set second parameter to 1 if rgb is required
Now img is a numpy array with values between 0 - 255. By default 0 equals black and 255 equals white. To change this you can use the opencv built in function bitwise_not:
img_reverted= cv2.bitwise_not(img)
We can now scale the array with:
new_img = img_reverted / 255.0  # now all values range from 0 to 1, where white equals 0.0 and black equals 1.0
Load the image and then just invert and divide by 255.
Here is the image ('Untitled.png') that I used for this example: https://ufile.io/h8ncw
import numpy as np
import cv2
import matplotlib.pyplot as plt
my_img = cv2.imread('Untitled.png')
inverted_img = (255.0 - my_img)
final = inverted_img / 255.0
# Visualize the result
plt.imshow(final)
plt.show()
print(final.shape)
(661, 667, 3)
Results (final object represented as image):
You can use the PIL package to manage images. Here's an example of how it can be done.
from PIL import Image
image = Image.open('sample.png')
width, height = image.size
pixels = image.load()
# Check if has alpha, to avoid "too many values to unpack" error
has_alpha = len(pixels[0,0]) == 4
# Create empty 2D list
fill = 1
array = [[fill for x in range(width)] for y in range(height)]
for y in range(height):
    for x in range(width):
        if has_alpha:
            r, g, b, a = pixels[x,y]
        else:
            r, g, b = pixels[x,y]
        lum = 255-((r+g+b)/3) # Reversed luminosity
        array[y][x] = lum/255 # Map values from range 0-255 to 0-1
I think it works, but please note that the only test I did was checking that the values are in the desired range:
# Test max and min values
h, l = 0,1
for row in array:
    h = max([max(row), h])
    l = min([min(row), l])
print(h, l)
You have to load the image from the path and then transform it to a numpy array.
The values of the image will be between 0 and 255. The next step is to standardize the numpy array.
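For example, a minimal sketch (the file name is a placeholder, and the image is converted to grayscale so that one value per pixel remains):
import numpy as np
from PIL import Image

img = Image.open("sample.png").convert("L")  # load as grayscale
arr = np.asarray(img, dtype=np.float32)      # values in 0-255

darkness = 1.0 - arr / 255.0                 # 0.0 = white, 1.0 = black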
Hope it helps.
Having ordered half a dozen webcams online for a project, I notice that the colors of the output are not consistent.
In order to compensate for this, I have attempted to take a template image, extract its R, G and B histograms, and match the target images' RGB histograms to those.
This was inspired from the description of the solution for a very similar problem Comparative color calibration
The perfect solution will look like this :
In order to try to solve this I wrote the following script which performed poorly:
EDIT (Thanks to @DanMašek and @api55)
import numpy as np
def show_image(title, image, width = 300):
    # resize the image to have a constant width, just to
    # make displaying the images take up less screen real
    # estate
    r = width / float(image.shape[1])
    dim = (width, int(image.shape[0] * r))
    resized = cv2.resize(image, dim, interpolation = cv2.INTER_AREA)
    # show the resized image
    cv2.imshow(title, resized)

def hist_match(source, template):
    """
    Adjust the pixel values of a grayscale image such that its histogram
    matches that of a target image

    Arguments:
    -----------
        source: np.ndarray
            Image to transform; the histogram is computed over the flattened
            array
        template: np.ndarray
            Template image; can have different dimensions to source
    Returns:
    -----------
        matched: np.ndarray
            The transformed output image
    """
    oldshape = source.shape
    source = source.ravel()
    template = template.ravel()

    # get the set of unique pixel values and their corresponding indices and
    # counts
    s_values, bin_idx, s_counts = np.unique(source, return_inverse=True,
                                            return_counts=True)
    t_values, t_counts = np.unique(template, return_counts=True)

    # take the cumsum of the counts and normalize by the number of pixels to
    # get the empirical cumulative distribution functions for the source and
    # template images (maps pixel value --> quantile)
    s_quantiles = np.cumsum(s_counts).astype(np.float64)
    s_quantiles /= s_quantiles[-1]
    t_quantiles = np.cumsum(t_counts).astype(np.float64)
    t_quantiles /= t_quantiles[-1]

    # interpolate linearly to find the pixel values in the template image
    # that correspond most closely to the quantiles in the source image
    interp_t_values = np.interp(s_quantiles, t_quantiles, t_values)

    return interp_t_values[bin_idx].reshape(oldshape)
from matplotlib import pyplot as plt
from scipy.misc import lena, ascent
import cv2
source = cv2.imread('/media/somadetect/Lexar/color_transfer_data/1/frame10.png')
s_b = source[:,:,0]
s_g = source[:,:,1]
s_r = source[:,:,2]
template = cv2.imread('/media/somadetect/Lexar/color_transfer_data/5/frame6.png')
t_b = source[:,:,0]
t_r = source[:,:,1]
t_g = source[:,:,2]
matched_b = hist_match(s_b, t_b)
matched_g = hist_match(s_g, t_g)
matched_r = hist_match(s_r, t_r)
y,x,c = source.shape
transfer = np.empty((y,x,c), dtype=np.uint8)
transfer[:,:,0] = matched_r
transfer[:,:,1] = matched_g
transfer[:,:,2] = matched_b
show_image("Template", template)
show_image("Target", source)
show_image("Transfer", transfer)
cv2.waitKey(0)
Template image :
Target Image:
The Matched Image:
Then I found Adrian's (pyimagesearch) attempt to solve a very similar problem in the following link
Fast Color Transfer
The results seem to be fairly good with some saturation defects. I would welcome any suggestions or pointers on how to address this issue so all web cam outputs could be calibrated to output similar colors based on one template image.
Your script performs poorly because you are using the wrong index.
OpenCV images are BGR, so this was correct in your code:
source = cv2.imread('/media/somadetect/Lexar/color_transfer_data/1/frame10.png')
s_b = source[:,:,0]
s_g = source[:,:,1]
s_r = source[:,:,2]
template = cv2.imread('/media/somadetect/Lexar/color_transfer_data/5/frame6.png')
t_b = source[:,:,0]
t_r = source[:,:,1]
t_g = source[:,:,2]
but this is wrong
transfer[:,:,0] = matched_r
transfer[:,:,1] = matched_g
transfer[:,:,2] = matched_b
since here you are assigning the channels in RGB order while OpenCV still treats the image as BGR, so the colors get swapped. That is why it looks weird.
It should be:
transfer[:,:,0] = matched_b
transfer[:,:,1] = matched_g
transfer[:,:,2] = matched_r
As other possible solutions, you may try to look at which parameters can be set in your camera. Sometimes cameras have auto parameters which you can instead set manually so that all of them match. Also, beware of these auto parameters: usually white balance, focus and others are set to auto, and they may change quite a lot in the same camera from one time to another (depending on illumination, etc.).
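For example, with OpenCV's VideoCapture you can try switching the automatic settings off and fixing them to the same values on every camera. This is only a sketch: property support and the exact values depend on the backend and the camera, so treat the numbers as placeholders.
import cv2

cap = cv2.VideoCapture(0)

# Disable auto white balance and fix a colour temperature (where the driver allows it)
cap.set(cv2.CAP_PROP_AUTO_WB, 0)
cap.set(cv2.CAP_PROP_WB_TEMPERATURE, 4500)

# Switch to manual exposure and fix it (mode values and units are backend-specific)
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1)
cap.set(cv2.CAP_PROP_EXPOSURE, -6)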
UPDATE:
As DanMašek points out, also
t_b = source[:,:,0]
t_r = source[:,:,1]
t_g = source[:,:,2]
is wrong, since the r should be index 2 and g index 1
t_b = source[:,:,0]
t_g = source[:,:,1]
t_r = source[:,:,2]
I have attempted a white patch based calibration routine. Here is the link https://theiszm.wordpress.com/tag/white-balance/.
The code snippet follows:
import cv2
import math
import numpy as np
import sys
from matplotlib import pyplot as plt
def hist_match(source, template):
    """
    Adjust the pixel values of a grayscale image such that its histogram
    matches that of a target image

    Arguments:
    -----------
        source: np.ndarray
            Image to transform; the histogram is computed over the flattened
            array
        template: np.ndarray
            Template image; can have different dimensions to source
    Returns:
    -----------
        matched: np.ndarray
            The transformed output image
    """
    oldshape = source.shape
    source = source.ravel()
    template = template.ravel()

    # get the set of unique pixel values and their corresponding indices and
    # counts
    s_values, bin_idx, s_counts = np.unique(source, return_inverse=True,
                                            return_counts=True)
    t_values, t_counts = np.unique(template, return_counts=True)

    # take the cumsum of the counts and normalize by the number of pixels to
    # get the empirical cumulative distribution functions for the source and
    # template images (maps pixel value --> quantile)
    s_quantiles = np.cumsum(s_counts).astype(np.float64)
    s_quantiles /= s_quantiles[-1]
    t_quantiles = np.cumsum(t_counts).astype(np.float64)
    t_quantiles /= t_quantiles[-1]

    # interpolate linearly to find the pixel values in the template image
    # that correspond most closely to the quantiles in the source image
    interp_t_values = np.interp(s_quantiles, t_quantiles, t_values)

    return interp_t_values[bin_idx].reshape(oldshape)
# Read original image
im_o = cv2.imread('/media/Lexar/color_transfer_data/5/frame10.png')
im = im_o
cv2.imshow('Org',im)
cv2.waitKey()
B = im[:,:, 0]
G = im[:,:, 1]
R = im[:,:, 2]
R= np.array(R).astype('float')
G= np.array(G).astype('float')
B= np.array(B).astype('float')
# Extract pixels that correspond to pure white R = 255,G = 255,B = 255
B_white = R[168, 351]
G_white = G[168, 351]
R_white = B[168, 351]
print B_white
print G_white
print R_white
# Compensate for the bias using normalization statistics
R_balanced = R / R_white
G_balanced = G / G_white
B_balanced = B / B_white
R_balanced[np.where(R_balanced > 1)] = 1
G_balanced[np.where(G_balanced > 1)] = 1
B_balanced[np.where(B_balanced > 1)] = 1
B_balanced=B_balanced * 255
G_balanced=G_balanced * 255
R_balanced=R_balanced * 255
B_balanced= np.array(B_balanced).astype('uint8')
G_balanced= np.array(G_balanced).astype('uint8')
R_balanced= np.array(R_balanced).astype('uint8')
im[:,:, 0] = (B_balanced)
im[:,:, 1] = (G_balanced)
im[:,:, 2] = (R_balanced)
# Notice saturation artifacts
cv2.imshow('frame',im)
cv2.waitKey()
# Extract the Y plane in original image and match it to the transformed image
im_o = cv2.cvtColor(im_o, cv2.COLOR_BGR2YCR_CB)
im_o_Y = im_o[:,:,0]
im = cv2.cvtColor(im, cv2.COLOR_BGR2YCR_CB)
im_Y = im[:,:,0]
matched_y = hist_match(im_o_Y, im_Y)
matched_y= np.array(matched_y).astype('uint8')
im[:,:,0] = matched_y
im_final = cv2.cvtColor(im, cv2.COLOR_YCR_CB2BGR)
cv2.imshow('frame',im_final)
cv2.waitKey()
The input image is:
The result of the script is:
Thank you all for suggestions and pointers!!
I am working on a project that requires me to separate out each color in a CMYK image and generate a halftone image that will be printed on a special halftone printer. The method used is analogous to silk screening in that the process is almost identical: take a photo and break out each color channel, then produce a screen for the halftone. Each color's screen must be skewed by 15-45 (adjustable) degrees. Dot size and LPI must be calculated from user-configurable values to achieve different effects. This process, I am told, is used in silk screening, but I have been unable to locate any information that explains CMYK halftoning. I find plenty for reducing to a single color and generating a new print-style b/w halftone image.
I would guess that I need to:
Split the file into its color channels.
Generate a monochrome halftone image for each channel.
Skew the resultant halftone image by the number of degrees * channel number.
Does anyone know if this is the correct approach and any existing python code for this? Or of any good explanations for this process or algorithms?
I used to run a screen printing studio (it was a fairly small one), and although I have never actually done colour separation printing, I am reasonably familiar with the principles. This is how I would approach it:
Split the image into C, M, Y, K.
Rotate each separated image by 0, 15, 30, and 45 degrees respectively.
Take the half-tone of each image (dot size will be proportional to the intensity).
Rotate back each half-toned image.
Now you have your colour separated images. As you mention, the rotation step reduces dot alignment issues (which would mess everything up), and things like Moiré pattern effects will be reasonably minimized.
This should be pretty easy to code using PIL.
Update 2:
I wrote some quick code that will do this for you, it also includes a GCR function (described below):
import Image, ImageDraw, ImageStat
def gcr(im, percentage):
    '''basic "Gray Component Replacement" function. Returns a CMYK image with
       percentage gray component removed from the CMY channels and put in the
       K channel, ie. for percentage=100, (41, 100, 255, 0) >> (0, 59, 214, 41)'''
    cmyk_im = im.convert('CMYK')
    if not percentage:
        return cmyk_im
    cmyk_im = cmyk_im.split()
    cmyk = []
    for i in xrange(4):
        cmyk.append(cmyk_im[i].load())
    for x in xrange(im.size[0]):
        for y in xrange(im.size[1]):
            gray = min(cmyk[0][x,y], cmyk[1][x,y], cmyk[2][x,y]) * percentage / 100
            for i in xrange(3):
                cmyk[i][x,y] = cmyk[i][x,y] - gray
            cmyk[3][x,y] = gray
    return Image.merge('CMYK', cmyk_im)

def halftone(im, cmyk, sample, scale):
    '''Returns list of half-tone images for cmyk image. sample (pixels),
       determines the sample box size from the original image. The maximum
       output dot diameter is given by sample * scale (which is also the number
       of possible dot sizes). So sample=1 will preserve the original image
       resolution, but scale must be >1 to allow variation in dot size.'''
    cmyk = cmyk.split()
    dots = []
    angle = 0
    for channel in cmyk:
        channel = channel.rotate(angle, expand=1)
        size = channel.size[0]*scale, channel.size[1]*scale
        half_tone = Image.new('L', size)
        draw = ImageDraw.Draw(half_tone)
        for x in xrange(0, channel.size[0], sample):
            for y in xrange(0, channel.size[1], sample):
                box = channel.crop((x, y, x + sample, y + sample))
                stat = ImageStat.Stat(box)
                diameter = (stat.mean[0] / 255)**0.5
                edge = 0.5*(1-diameter)
                x_pos, y_pos = (x+edge)*scale, (y+edge)*scale
                box_edge = sample*diameter*scale
                draw.ellipse((x_pos, y_pos, x_pos + box_edge, y_pos + box_edge), fill=255)
        half_tone = half_tone.rotate(-angle, expand=1)
        width_half, height_half = half_tone.size
        xx = (width_half-im.size[0]*scale) / 2
        yy = (height_half-im.size[1]*scale) / 2
        half_tone = half_tone.crop((xx, yy, xx + im.size[0]*scale, yy + im.size[1]*scale))
        dots.append(half_tone)
        angle += 15
    return dots
im = Image.open("1_tree.jpg")
cmyk = gcr(im, 0)
dots = halftone(im, cmyk, 10, 1)
im.show()
new = Image.merge('CMYK', dots)
new.show()
This will turn this:
into this (blur your eyes and move away from the monitor):
Note that the image sampling can be pixel by pixel (thus preserving the resolution of the original image, in the final image). Do this by setting sample=1, in which case you need to set scale to a larger number so that there are a number of possible dot sizes. This will also result in a larger output image size (original image size * scale ** 2, so watch out!).
By default when you convert from RGB to CMYK the K channel (the black channel) is empty. Whether you need the K channel or not depends upon your printing process. There are various possible reasons you might want it: getting a better black than the overlap of CMY, saving ink, improving drying time, reducing ink bleed, etc. Anyhow I've also written a little Grey component replacement function GCR, so you can set the percentage of K channel you want to replace CMY overlap with (I explain this a little further in the code comments).
Here are a couple of examples to illustrate, processing the letter F from the image with sample=1 and scale=8, so fairly high resolution.
The 4 CMYK channels, with percentage=0, so empty K channel:
combines to produce:
CMYK channels, with percentage=100, so K channel is used. You can see the cyan channel is fully suppressed, and the magenta and yellow channels use a lot less ink, in the black band at the bottom of the image:
My solution also uses PIL, but relies on the dithering method (Floyd-Steinberg) it supports internally. It creates artifacts, though, so I am considering rewriting its C code.
from PIL import Image
im = Image.open('tree.jpg') # open RGB image
cmyk= im.convert('CMYK').split() # RGB contone RGB to CMYK contone
c = cmyk[0].convert('1').convert('L') # and then halftone ('1') each plane
m = cmyk[1].convert('1').convert('L') # ...and back to ('L') mode
y = cmyk[2].convert('1').convert('L')
k = cmyk[3].convert('1').convert('L')
new_cmyk = Image.merge('CMYK',[c,m,y,k]) # put together all 4 planes
new_cmyk.save('tree-cmyk.jpg') # and save to file
The implicit GCR that PIL applies could also be swapped for a more generic one, but I have tried to keep the solution simple, ignoring resolution and sampling as well.
With ordered (Bayer) dithering using the D4 pattern, we can modify #fraxel's code to produce an output dithered image with 4 times the height and width of the input color image, as shown below.
import numpy as np
import matplotlib.pylab as plt
from PIL import Image
# gcr() is the Gray Component Replacement function from the answer above

def ordered_dither(im, D4):
    im = (15 * (im / im.max())).astype(np.uint8)
    h, w = im.shape
    im_out = np.zeros((4 * h, 4 * w), dtype = np.uint8)
    x, y = 0, 0
    for i in range(h):
        for j in range(w):
            im_out[x:x + 4, y:y + 4] = 255 * (D4 < im[i, j])
            y = (y + 4) % (4 * w)
        x = (x + 4) % (4 * h)
    return im_out

def color_halftoning_Bayer(cmyk, angles, D4):
    out_channels = []
    for i in range(4):
        out_channel = Image.fromarray(
            ordered_dither(np.asarray(cmyk[i].rotate(angles[i], expand=1)), D4)
        ).rotate(-angles[i], expand=1)
        x = (out_channel.size[0] - cmyk[i].size[0]*4) // 2
        y = (out_channel.size[1] - cmyk[i].size[1]*4) // 2
        out_channel = out_channel.crop((x, y, x + cmyk[i].size[0]*4, y + cmyk[i].size[1]*4))
        out_channels.append(out_channel)
    return Image.merge('CMYK', out_channels)

image = Image.open('images/tree.jpg')
cmyk = gcr(image, 100).split()
D4 = np.array([[ 0,  8,  2, 10],
               [12,  4, 14,  6],
               [ 3, 11,  1,  9],
               [15,  7, 13,  5]], dtype=np.uint8)
out = np.asarray(color_halftoning_Bayer(cmyk, np.linspace(15, 60, 4), D4).convert('RGB'))
plt.figure(figsize=(20,20))
plt.imshow(out)
plt.show()
The above code, when run on the following tree input,
it generates the following dithered output:
I am trying to remove a certain color from my image, however it's not working as well as I'd hoped. I tried to do the same thing as seen here: Using PIL to make all white pixels transparent? However, the image quality is a bit lossy, so it leaves a little ghost of odd-colored pixels around what was removed. I tried doing something like changing a pixel if all three values are below 100, but because the image was poor quality the surrounding pixels weren't even black.
Does anyone know of a better way with PIL in Python to replace a color and anything surrounding it? This is probably the only sure-fire way I can think of to remove the objects completely; however, I can't think of a way to do it.
The picture has a white background and text that is black. Let's just say I want to remove the text entirely from the image without leaving any artifacts behind.
Would really appreciate someone's help! Thanks
The best way to do it is to use the "color to alpha" algorithm used in Gimp to replace a color. It will work perfectly in your case. I reimplemented this algorithm using PIL for an open source Python photo processor, Phatch. You can find the full implementation here. This is a pure PIL implementation and it doesn't have other dependencies. You can copy the function code and use it. Here is a sample using Gimp:
to
You can apply the color_to_alpha function on the image using black as the color. Then paste the image on a different background color to do the replacement.
By the way, this implementation uses the ImageMath module in PIL. It is much more efficient than accessing pixels using getdata.
EDIT: Here is the full code:
from PIL import Image, ImageMath
def difference1(source, color):
    """When source is bigger than color"""
    return (source - color) / (255.0 - color)

def difference2(source, color):
    """When color is bigger than source"""
    return (color - source) / color

def color_to_alpha(image, color=None):
    image = image.convert('RGBA')
    width, height = image.size
    color = map(float, color)
    img_bands = [band.convert("F") for band in image.split()]

    # Find the maximum difference rate between source and color. I had to use two
    # difference functions because ImageMath.eval only evaluates the expression
    # once.
    alpha = ImageMath.eval(
        """float(
            max(
                max(
                    max(
                        difference1(red_band, cred_band),
                        difference1(green_band, cgreen_band)
                    ),
                    difference1(blue_band, cblue_band)
                ),
                max(
                    max(
                        difference2(red_band, cred_band),
                        difference2(green_band, cgreen_band)
                    ),
                    difference2(blue_band, cblue_band)
                )
            )
        )""",
        difference1=difference1,
        difference2=difference2,
        red_band=img_bands[0],
        green_band=img_bands[1],
        blue_band=img_bands[2],
        cred_band=color[0],
        cgreen_band=color[1],
        cblue_band=color[2]
    )

    # Calculate the new image colors after the removal of the selected color
    new_bands = [
        ImageMath.eval(
            "convert((image - color) / alpha + color, 'L')",
            image=img_bands[i],
            color=color[i],
            alpha=alpha
        )
        for i in xrange(3)
    ]

    # Add the new alpha band
    new_bands.append(ImageMath.eval(
        "convert(alpha_band * alpha, 'L')",
        alpha=alpha,
        alpha_band=img_bands[3]
    ))

    return Image.merge('RGBA', new_bands)
image = color_to_alpha(image, (0, 0, 0, 255))
background = Image.new('RGB', image.size, (255, 255, 255))
background.paste(image.convert('RGB'), mask=image)
Using numpy and PIL:

This loads the image into a numpy array of shape (H, W, 3), where H is the height and W is the width. The third axis of the array represents the 3 color channels, R, G, B.
import Image
import numpy as np
orig_color = (255,255,255)
replacement_color = (0,0,0)
img = Image.open(filename).convert('RGB')
data = np.array(img)
data[(data == orig_color).all(axis = -1)] = replacement_color
img2 = Image.fromarray(data, mode='RGB')
img2.show()
Since orig_color is a tuple of length 3 and data has shape (H, W, 3), NumPy broadcasts orig_color to an array of shape (H, W, 3) to perform the comparison data == orig_color. The result is a boolean array of shape (H, W, 3).

(data == orig_color).all(axis = -1) is a boolean array of shape (H, W) which is True wherever the RGB color in data is orig_color.
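If the stray pixels around the text are only close to the target colour rather than an exact match, the same broadcasting idea works with a tolerance; a sketch, where the file name and the threshold are placeholders you would tune:
import numpy as np
from PIL import Image

orig_color = (255, 255, 255)
replacement_color = (0, 0, 0)
tolerance = 30  # maximum per-channel difference still treated as a match

img = Image.open("input.png").convert("RGB")
data = np.array(img, dtype=np.int16)  # signed type so the differences don't wrap

mask = (np.abs(data - orig_color) <= tolerance).all(axis=-1)
data[mask] = replacement_color

Image.fromarray(data.astype(np.uint8), mode="RGB").show()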
#!/usr/bin/python
from PIL import Image
import sys
img = Image.open(sys.argv[1])
img = img.convert("RGBA")
pixdata = img.load()
# Clean the background noise: if the color is white, set it to black.
# change with your color
for y in xrange(img.size[1]):
    for x in xrange(img.size[0]):
        if pixdata[x, y] == (255, 255, 255, 255):
            pixdata[x, y] = (0, 0, 0, 255)
You'll need to represent the image as a 2-dimensional array. This means either making a list of lists of pixels, or viewing the 1-dimensional array as a 2d one with some clever math. Then, for each pixel that is targeted, you'll need to find all surrounding pixels. You could do this with a python generator thus:
def targets(x,y):
    yield (x,y)     # Center
    yield (x+1,y)   # Left
    yield (x-1,y)   # Right
    yield (x,y+1)   # Above
    yield (x,y-1)   # Below
    yield (x+1,y+1) # Above and to the right
    yield (x+1,y-1) # Below and to the right
    yield (x-1,y+1) # Above and to the left
    yield (x-1,y-1) # Below and to the left
So, you would use it like this:
for x in range(width):
    for y in range(height):
        px = pixels[x][y]
        if px[0] == 255 and px[1] == 255 and px[2] == 255:
            for i,j in targets(x,y):
                newpixels[i][j] = replacementColor
If the pixels are not easily identifiable, e.g. (r < 100 and g < 100 and b < 100) also doesn't correctly match the black region, it means you have lots of noise.
Best way would be to identify a region and fill it with color you want, you can identify the region manually or may be by edge detection e.g. http://bitecode.co.uk/2008/07/edge-detection-in-python/
or more sophisticated approach would be to use library like opencv (http://opencv.willowgarage.com/wiki/) to identify objects.
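One concrete way to do the "identify a region and fill it" step with OpenCV is to build a mask around the dark text, grow it a little so the fuzzy edge pixels are included, and inpaint over it. This is only a sketch; the file names and thresholds are placeholders:
import cv2
import numpy as np

img = cv2.imread("page.png")  # white background, dark text
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Mark everything darker than the background as "text"
_, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)

# Dilate the mask so the blurry halo around the text is covered too
mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=2)

# Fill the masked region from the surrounding pixels (radius 3, Telea method)
result = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("page_clean.png", result)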
This is part of my code; the result would look like:
source
target
import os
import struct
from PIL import Image
def changePNGColor(sourceFile, fromRgb, toRgb, deltaRank = 10):
    fromRgb = fromRgb.replace('#', '')
    toRgb = toRgb.replace('#', '')
    fromColor = struct.unpack('BBB', bytes.fromhex(fromRgb))
    toColor = struct.unpack('BBB', bytes.fromhex(toRgb))

    img = Image.open(sourceFile)
    img = img.convert("RGBA")
    pixdata = img.load()

    for x in range(0, img.size[0]):
        for y in range(0, img.size[1]):
            rdelta = pixdata[x, y][0] - fromColor[0]
            gdelta = pixdata[x, y][1] - fromColor[1]
            bdelta = pixdata[x, y][2] - fromColor[2]
            if abs(rdelta) <= deltaRank and abs(gdelta) <= deltaRank and abs(bdelta) <= deltaRank:
                pixdata[x, y] = (toColor[0] + rdelta, toColor[1] + gdelta, toColor[2] + bdelta, pixdata[x, y][3])

    img.save(os.path.dirname(sourceFile) + os.sep + "changeColor" + os.path.splitext(sourceFile)[1])
if __name__ == '__main__':
    changePNGColor("./ok_1.png", "#000000", "#ff0000")