PIL - apply the same operation to every pixel - python

I create an image and fill the pixels:
from PIL import Image

img = Image.new('RGB', (2000, 2000), "black")  # create a new black image
pixels = img.load()  # create the pixel map
for i in range(img.size[0]):  # for every pixel:
    for j in range(img.size[1]):
        pass  # do some stuff that requires i and j as parameters
Can this be done more elegantly (and perhaps faster, since in theory the loops are parallelizable)?

Note: I will first answer the question, then propose what is, in my opinion, a better alternative.
Answering the question
It is hard to give advice without knowing what changes you intend to apply and whether the loading of the image as a PIL image is part of the question or a given.
More elegant in Python-speak typically means using list comprehensions.
For parallelization, you would look at something like the multiprocessing module or joblib.
Depending on how you create / load your images, the list_of_pixels = list(img.getdata()) and img.putdata(new_list_of_pixels) functions may be of interest to you.
An example of what this might look like:
from PIL import Image
from multiprocessing import Pool
img = Image.new( 'RGB', (2000,2000), "black")
# a function that fixes the green component of a pixel to the value 50
def update_pixel(p):
    return (p[0], 50, p[2])
list_of_pixels = list(img.getdata())
pool = Pool(4)
new_list_of_pixels = pool.map(update_pixel, list_of_pixels)
pool.close()
pool.join()
img.putdata(new_list_of_pixels)
However, I don't think that is a good idea... When you see loops (and list comprehensions) over thousands of elements in Python and you have performance on your mind, you can be sure there is a library that will make this faster.
Better Alternative
First, a quick pointer to the Channel Operations module.
Since you don't specify the kind of pixel operation you intend to do, and you clearly already know about the PIL library, I'll assume you're aware of it and that it doesn't do what you want.
Then, any moderately complex matrix manipulation in Python will benefit from pulling in Pandas, Numpy or Scipy...
Pure numpy example:
import numpy as np
import matplotlib.pyplot as plt
#black image
img = np.zeros([100,100,3],dtype=np.uint8)
#show
plt.imshow(img)
#make it green
img[:,:, 1] = 50
#show
plt.imshow(img)
Since you are just working with a standard numpy.ndarray, you can use any of the available functionality, such as np.vectorize, np.apply_along_axis, np.frompyfunc, etc. To show a solution similar to the one above with the update_pixel function:
import numpy as np
import matplotlib.pyplot as plt
#black image
img = np.zeros([100,100,3],dtype=np.uint8)
#show
plt.imshow(img)
#make it green
def update_pixel(p):
    return (p[0], 50, p[2])
green_img = np.apply_along_axis(update_pixel, 2, img)
#show
plt.imshow(green_img)
One more example, this time calculating the image content directly from the indexes, instead of from existing image pixel content (no need to create an empty image first):
import numpy as np
import matplotlib.pyplot as plt
def calc_pixel(x, y):
    return np.array([100-x, x+y, 100-y])
img = np.frompyfunc(calc_pixel, 2, 1).outer(np.arange(100), np.arange(100))
plt.imshow(np.array(img.tolist()))
#note: I don't know any other way to convert a 2D array of arrays to a 3D array...
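As an aside, a minimal sketch of one way to avoid the array-of-arrays conversion above, building the same image with meshgrid and np.stack instead of frompyfunc (the variable names xx and yy are my own):
import numpy as np
import matplotlib.pyplot as plt
# index grids: xx[i, j] = i, yy[i, j] = j
xx, yy = np.meshgrid(np.arange(100), np.arange(100), indexing='ij')
# stack the three channel expressions along the last axis -> shape (100, 100, 3)
img = np.stack([100 - xx, xx + yy, 100 - yy], axis=-1).astype(np.uint8)
plt.imshow(img)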
And, lo and behold, scipy has methods to read and write images, and in between you can just use numpy to manipulate them as "classic" multi-dimensional arrays. (scipy.misc.imread depends on PIL, by the way, and has been removed in newer SciPy releases in favour of imageio.)
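A minimal sketch of that read / manipulate / write round trip, assuming the imageio package is installed and an input.png exists (the file names are placeholders):
import imageio
img = imageio.imread('input.png')  # NumPy array, e.g. shape (H, W, 3) for an RGB image
img[:, :, 1] = 50                  # fix the green channel, as in the examples above
imageio.imwrite('output.png', img)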
More example code.

Related

skeletonization (thinning) of small images not giving expected results - python

I am trying to implement skeletonization of small images, but I am not getting the expected results. I also tried thin() and medial_axis(), but nothing seems to work as expected. I suspect that this problem occurs because of the small resolution of the images. Here is the code:
import cv2
from numpy import asarray
import numpy as np
# open image
file = "66.png"
img_grey = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
afterMedian = cv2.medianBlur(img_grey, 3)
thresh = 140
# threshold the image
img_binary = cv2.threshold(afterMedian, thresh, 255, cv2.THRESH_BINARY)[1]
# make binary image
arr = asarray(img_binary)
binaryArr = np.zeros(asarray(img_binary).shape)
for i in range(0, arr.shape[0]):
    for j in range(0, arr.shape[1]):
        if arr[i][j] == 255:
            binaryArr[i][j] = 1
        else:
            binaryArr[i][j] = 0
# perform skeletonization
from skimage.morphology import skeletonize
cv2.imshow("binary arr", binaryArr)
backgroundSkeleton = skeletonize(binaryArr)
# convert to non-binary image
bSkeleton = np.zeros(arr.shape)
for i in range(0, arr.shape[0]):
    for j in range(0, arr.shape[1]):
        if backgroundSkeleton[i][j] == 0:
            bSkeleton[i][j] = 0
        else:
            bSkeleton[i][j] = 255
cv2.imshow("background skeleton", bSkeleton)
cv2.waitKey(0)
The results are:
I would expect something more like this:
This applies to similar shapes also:
Expectation:
Am I doing something wrong? Or will it truly not be possible with such small pictures? I tried skeletonization on bigger images and it worked just fine. Original images:
You could try the skeleton in DIPlib (dip.EuclideanSkeleton):
import numpy as np
import diplib as dip
import cv2
file = "66.png"
img_grey = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
afterMedian = cv2.medianBlur(img_grey, 3)
thresh = 140
bin = afterMedian > thresh
sk = dip.EuclideanSkeleton(bin, endPixelCondition='three neighbors')
dip.viewer.Show(bin)
dip.viewer.Show(sk)
dip.viewer.Spin()
The endPixelCondition input argument can be used to adjust how many branches are preserved or removed. 'three neighbors' is the option that produces the most branches.
The code above produces branches also towards the corners of the image. Using 'two neighbors' prevents that, but produces fewer branches towards the object as well. The other way to prevent it is to set edgeCondition='object', but in this case the ring around the object becomes a square on the image boundary.
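For example, the two variations described above would look like this (a sketch, reusing bin from the code above):
# fewer branches, and none towards the image corners
sk2 = dip.EuclideanSkeleton(bin, endPixelCondition='two neighbors')
# keep 'three neighbors' but treat the image edge as object
sk3 = dip.EuclideanSkeleton(bin, endPixelCondition='three neighbors', edgeCondition='object')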
To convert the DIPlib image sk back to a NumPy array, do
sk = np.array(sk)
sk is now a Boolean NumPy array (values True and False). To create an array compatible with OpenCV simply cast to np.uint8 and multiply by 255:
sk = np.array(sk, dtype=np.uint8)
sk *= 255
Note that, when dealing with NumPy arrays, you generally don't need to loop over all pixels. In fact, it's worth trying to avoid doing so, as loops in Python are extremely slow.
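For example, the two nested loops in the question can each be replaced by a single vectorized expression (a sketch using the variable names from the question):
# 255 -> 1, everything else -> 0
binaryArr = (arr == 255).astype(np.uint8)
# boolean skeleton -> 0/255 image for cv2.imshow
bSkeleton = backgroundSkeleton.astype(np.uint8) * 255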
It seems scikit-image is a much better choice than cv2 here.
Since the package defines functions for binary (black-and-white) images, if that is what you are working with, try the ready-to-use code from its documentation:
skeletonize
Note: if the process loses image detail, don't upsample the input right away; try the other functions first. Again, the skimage morphology functions can be used to enhance detail, in which case your code should work on larger areas of the image too. You could look here.
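A minimal sketch of that scikit-image route, reusing the thresholding from the question (thin() or medial_axis() can be swapped in for skeletonize() in the same way):
import cv2
import numpy as np
from skimage.morphology import skeletonize
img_grey = cv2.imread("66.png", cv2.IMREAD_GRAYSCALE)
afterMedian = cv2.medianBlur(img_grey, 3)
binary = afterMedian > 140           # boolean image, as skeletonize expects
skeleton = skeletonize(binary)
cv2.imshow("skeleton", skeleton.astype(np.uint8) * 255)
cv2.waitKey(0)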

Python Changing color for every n-th pixel on x and y axis

As the title says, I have to take an image and write code that colors in every n-th pixel on the x axis and y axis.
I've tried using for loops, but it colors in the whole axis line instead of the one pixel that I need. I have to use either OpenCV or Pillow for this task.
#pillow
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
picture = Image.open('e92m3.jpg')
picture_resized = picture.resize( (500,500) )
pixels = picture_resized.load()
#x,y
for i in range(0, 500):
    pixels[i, 10] = (0, 255, 0)
for i in range(0, 500):
    pixels[10, i] = (255, 0, 0)
%matplotlib notebook
plt.imshow(picture_resized)
This is approximately how it should look:
You really should avoid for loops with image processing in Python. They are horribly slow and inefficient. As pretty much all image processing suites use Numpy arrays to store images, you should try and use vectorised Numpy access methods such as slicing, indexing and broadcasting:
import numpy as np
import cv2
# Load image
im = cv2.imread('lena.png')
# Use Numpy indexing to make alternate rows and columns black
im[0::2,0::2] = [0,0,0]
im[1::2,1::2] = [0,0,0]
cv2.imwrite('result.png', im)
If you want to use PIL/Pillow in place of OpenCV, load and save the image like this:
from PIL import Image
# Load as PIL Image and make into Numpy array
im = np.array(Image.open('lena.png').convert('RGB'))
... process ...
# Make Numpy array back into PIL Image and save
Image.fromarray(im).save('result.png')
Maybe have a read here about indexing.
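For the specific case in the question (every n-th pixel along one row and one column, rather than alternate rows and columns), the same slicing idea might look like this, assuming n = 5 and an image im loaded with OpenCV (so the colour tuples are BGR):
n = 5
im[10, ::n] = [0, 255, 0]   # every n-th pixel of row 10 -> green
im[::n, 10] = [0, 0, 255]   # every n-th pixel of column 10 -> red (BGR)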
I don't think I've understood your question, but here is my answer based on what I understood of it.
def interval_replace(img, interval_x: int, interval_y: int, replace_pxl: tuple, offset_x: int = 0, offset_y: int = 0):
    for y in range(offset_y, img.shape[0]):
        for x in range(offset_x, img.shape[1]):
            if x % interval_x == 0 and y % interval_y == 0:
                img[y][x] = replace_pxl
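A possible usage sketch, assuming the image is loaded as a NumPy array (the output file name and interval values are placeholders):
import cv2
img = cv2.imread('e92m3.jpg')
interval_replace(img, interval_x=5, interval_y=5, replace_pxl=(0, 255, 0))
cv2.imwrite('result.png', img)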

Apply erosion to only a portion of an image

I am new to opencv and I couldn't find any solution for this problem of mine.
I wonder if it is possible to apply erosion/dilation only to a specific portion of an image and let the rest of the image remain as it is originally.
Just get a submatrix of the area you want to apply erode/dilate to and apply the operation in-place:
import cv2
import numpy as np
import matplotlib.pyplot as plt
im = cv2.imread('image_to_process.jpg')
roi = im[:100, :100, :]
your_kernel = np.ones((3, 3), np.uint8)  # example structuring element; adjust as needed
roi[:] = cv2.dilate(roi, your_kernel) # the [:] is important
Note that I use roi[:] to have the result of dilate overwrite the content of roi instead of allocating a new matrix, so that the change is actually reflected in im too.
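An equivalent sketch that assigns the result back into the slice directly, without a named view (same kernel assumption as above):
im[:100, :100] = cv2.dilate(im[:100, :100], your_kernel)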

Need to count objects on a white background in Python, shadows cause trouble

This question is kind of a follow-up to my 2 previous questions: Python Image tutorial works, other images behaves differently (showing images with Pylab) and Detect objects on a white background in Python.
What I am trying to achieve is being able to programmatically count the number of individual objects on a white background. As seen in the other 2 posts I've been able to achieve this to some extent. At this moment I am able to count the number of objects when virtually no shadow is present on the image. The image I'm trying to analyse (bottom of this post) does have some shadows which causes objects to merge and be seen as one individual object.
I need some simple way of getting rid of the shadows, I already tried adaptive thresholding with scikit-image (http://scikit-image.org/docs/dev/auto_examples/plot_threshold_adaptive.html#example-plot-threshold-adaptive-py), but I stumbled upon some problems (https://imgur.com/uYnj6af). Is there any not too complicated way to get this to work? There's already a lot of code in my previous posts but please let me know if you want to see any additional code!
Thanks in advance.
Perhaps it's easier to operate on a binary image. In the code below, I obtain such an image by computing the variance over a sliding window and thresholding it.
from skimage import io, exposure, color, util
import matplotlib.pyplot as plt
image = color.rgb2gray(io.imread('tools.jpg'))
exposure.equalize_adapthist(image)  # note: returns a new, equalized array; as written, the result is not used
Z = util.view_as_windows(image, (5, 5))
Z = Z.reshape(Z.shape[0], Z.shape[1], -1)
variance_map = Z.var(axis=2)
plt.imshow(variance_map > 0.005, cmap='gray')
plt.savefig('tools_thresh.png')
plt.show()
Update:
This extended version of the code identifies the 8 tools.
from skimage import io, exposure, color, util, measure, morphology
from scipy import ndimage as ndi
import numpy as np
import matplotlib.pyplot as plt
image = color.rgb2gray(io.imread('tools.jpg'))
exposure.equalize_adapthist(image)  # note: returns a new, equalized array; as written, the result is not used
Z = util.view_as_windows(image, (5, 5))
Z = Z.reshape(Z.shape[0], Z.shape[1], -1)
variance_map = Z.var(axis=2)
tools_bw = variance_map > 0.005
tools_bw = morphology.binary_closing(tools_bw, np.ones((5, 5)))
tools_bw = ndi.binary_fill_holes(tools_bw)
labels = measure.label(tools_bw)
regions = measure.regionprops(labels)
regions = [r for r in regions if r.perimeter > 500 and r.major_axis_length > 200]
print(len(regions))
out = np.zeros_like(tools_bw, dtype=int)
for i, r in enumerate(regions):
    out[labels == r.label] = i + 1
plt.imshow(out, cmap='nipy_spectral')  # 'spectral' was renamed to 'nipy_spectral' in newer matplotlib
plt.savefig('tools_identified.png', bbox_inches='tight')
plt.show()

how to save an array as a grayscale image with matplotlib/numpy?

I am trying to save a numpy array of dimensions 128x128 pixels into a grayscale image.
I simply thought that the pyplot.imsave function would do the job, but it doesn't; it somehow converts my array into an RGB image.
I tried to force the colormap to Gray during conversion, but even though the saved image appears in grayscale, it still has a 128x128x4 dimension.
Here is a code sample I wrote to show the behaviour:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mplimg
from matplotlib import cm
x_tot = 10e-3
nx = 128
x = np.arange(-x_tot/2, x_tot/2, x_tot/nx)
[X, Y] = np.meshgrid(x,x)
R = np.sqrt(X**2 + Y**2)
diam = 5e-3
I = np.exp(-2*(2*R/diam)**4)
plt.figure()
plt.imshow(I, extent = [-x_tot/2, x_tot/2, -x_tot/2, x_tot/2])
print(I.shape)
plt.imsave('image.png', I)
I2 = plt.imread('image.png')
print(I2.shape)
mplimg.imsave('image2.png',np.uint8(I), cmap = cm.gray)
testImg = plt.imread('image2.png')
print(testImg.shape)
In both cases the result of the print calls is (128, 128, 4).
Can anyone explain why the imsave function is creating those dimensions even though my input array is of a luminance type?
And of course, does anyone have a solution for saving the array in a standard grayscale format?
Thanks!
With PIL it should work like this:
from PIL import Image
I8 = (((I - I.min()) / (I.max() - I.min())) * 255.9).astype(np.uint8)
img = Image.fromarray(I8)
img.save("file.png")
There is also the alternative of using imageio. It provides an easy and convenient API, and it is bundled with Anaconda. It can save grayscale images as a single-color-channel file.
Quoting the documentation:
>>> import imageio
>>> im = imageio.imread('imageio:astronaut.png')
>>> im.shape # im is a numpy array
(512, 512, 3)
>>> imageio.imwrite('astronaut-gray.jpg', im[:, :, 0])
I didn't want to use PIL in my code, and as noted in the question I ran into the same problem with pyplot, where even in grayscale the file is saved as an MxNx3 matrix.
Since the actual image on disk wasn't important to me, I ended up writing the matrix as is and reading it back "as-is" using numpy's save and load methods:
np.save("filename", image_matrix)
And:
np.load("filename.npy")
There is also the possibility of using scikit-image; then there is no need to convert the numpy array into a PIL object.
from skimage import io
io.imsave('output.tiff', (I * 65535).astype(np.uint16))  # scale the [0, 1] floats to the uint16 range before casting
