Trying to make naive numpy image processing code faster - python

I'm trying to transform an image containing colored symbols into pixel art as featured on the right (see image below), where each colored symbol (taking up multiple pixels) would be changed into one pixel of the symbol's color.
Example of what I'm trying to achieve
So far I've written a pretty naive algorithm that just loops through all the pixels, and is pretty sluggish. I believe I could make it faster, for instance using native numpy operations, but I've been unable to find how. Any tips?
(I also started by trying to simply resize the image, but couldn't find a resampling algorithm that would make it work).
def resize(img, new_width):
    width, height = img.shape[:2]
    new_height = height*new_width//width
    new_image = np.zeros((new_width, new_height, 4), dtype=np.uint8)
    x_ratio, y_ratio = width//new_width, height//new_height
    for i in range(new_height):
        for j in range(new_width):
            sub_image = img[i*y_ratio:(i+1)*y_ratio, j*x_ratio:(j+1)*x_ratio]
            found = False
            for row in sub_image:
                for pixel in row:
                    if any(pixel != [0, 0, 0, 0]):
                        new_image[i, j] = pixel
                        found = True  # flag so the outer loop can stop too
                        break
                if found:
                    break
    return new_image
A larger example

import cv2
import numpy as np

img = cv2.imread('zjZA8.png')
h, w, c = img.shape
new_img = np.zeros((h//7, w//7, c), dtype='uint8')
for k in range(c):
    for i in range(h//7):
        for j in range(w//7):
            new_img[i, j, k] = np.max(img[7*i:7*i+7, 7*j:7*j+7, k])
cv2.imwrite('out3.png', new_img)
Left: result with np.mean; center: source image; right: result with np.max.
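As a side note, the same block-wise maximum can be computed without the Python loops by reshaping the image into 7x7 tiles; a minimal sketch (not the answer's original code), assuming h and w are trimmed to multiples of 7:
import cv2
import numpy as np

img = cv2.imread('zjZA8.png')
h, w, c = img.shape
# trim to multiples of 7, then split into (h//7, w//7) tiles of 7x7 pixels
tiles = img[:h//7*7, :w//7*7].reshape(h//7, 7, w//7, 7, c)
new_img = tiles.max(axis=(1, 3))  # block-wise maximum over each tile
cv2.imwrite('out3.png', new_img)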
Please test this code:
import cv2
import numpy as np

img = cv2.imread('zjZA8.png')
h, w, c = img.shape
bgr = [0, 0, 0]
bgr[0], bgr[1], bgr[2] = cv2.split(img)
for k in range(3):
    bgr[k].shape = (h*w//7, 7)
    bgr[k] = np.mean(bgr[k], axis=1)
    bgr[k].shape = (h//7, 7, w//7)
    bgr[k] = np.mean(bgr[k], axis=1)
    bgr[k].shape = (h//7, w//7)
    bgr[k] = np.uint8(bgr[k])
out = cv2.merge((bgr[0], bgr[1], bgr[2]))
cv2.imshow('mean_image', out)
cv2.waitKey(0)  # keep the window open until a key is pressed

Modifying my code to use the native np.nonzero operation did the trick!
I went down from ~8s to ~0.32s on a 1645x1645 image (with new_width=235).
I also tried using numba on top of that, but the overhead ends up making it slower.
def resize(img, new_width):
    height, width = img.shape[:2]
    new_height = height*new_width//width
    new_image = np.ones((new_height, new_width, 3), dtype=np.uint8)
    x_ratio, y_ratio = width//new_width, height//new_height
    for i in range(new_height):
        for j in range(new_width):
            sub_image = img[i*y_ratio:(i+1)*y_ratio, j*x_ratio:(j+1)*x_ratio]
            non_zero = np.nonzero(sub_image)
            if non_zero[0].size > 0:
                new_image[i, j] = sub_image[non_zero[0][0], non_zero[1][0]][:3]
    return new_image
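For reference, the remaining per-block loop can also be eliminated entirely by reshaping the image into blocks and picking the first non-zero pixel of each block with argmax; a hedged sketch, assuming an RGBA input whose dimensions are exact multiples of the ratios:
import numpy as np

def resize_vectorized(img, new_width):
    height, width = img.shape[:2]
    new_height = height*new_width//width
    y_ratio, x_ratio = height//new_height, width//new_width
    # trim so the image divides evenly, then split into (new_height, new_width) blocks
    img = img[:new_height*y_ratio, :new_width*x_ratio]
    blocks = (img.reshape(new_height, y_ratio, new_width, x_ratio, 4)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(new_height, new_width, -1, 4))
    # argmax over the boolean mask returns the first non-zero pixel's index (0 if none)
    first = blocks.any(axis=-1).argmax(axis=-1)
    picked = np.take_along_axis(blocks, first[..., None, None], axis=2)
    return picked[:, :, 0, :3]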

Related

skeletonization (thinning) of small images not giving expected results - python

I am trying to implement skeletonization of small images, but I am not getting the expected results. I also tried thin() and medial_axis(), but nothing seems to work as expected. I suspect this problem occurs because of the small resolution of the images. Here is the code:
import cv2
from numpy import asarray
import numpy as np
from skimage.morphology import skeletonize

# open image
file = "66.png"
img_grey = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
afterMedian = cv2.medianBlur(img_grey, 3)
thresh = 140
# threshold the image
img_binary = cv2.threshold(afterMedian, thresh, 255, cv2.THRESH_BINARY)[1]
# make binary image
arr = asarray(img_binary)
binaryArr = np.zeros(asarray(img_binary).shape)
for i in range(0, arr.shape[0]):
    for j in range(0, arr.shape[1]):
        if arr[i][j] == 255:
            binaryArr[i][j] = 1
        else:
            binaryArr[i][j] = 0
# perform skeletonization
cv2.imshow("binary arr", binaryArr)
backgroundSkeleton = skeletonize(binaryArr)
# convert to non-binary image
bSkeleton = np.zeros(arr.shape)
for i in range(0, arr.shape[0]):
    for j in range(0, arr.shape[1]):
        if backgroundSkeleton[i][j] == 0:
            bSkeleton[i][j] = 0
        else:
            bSkeleton[i][j] = 255
cv2.imshow("background skeleton", bSkeleton)
cv2.waitKey(0)
The results are:
I would expect something more like this:
This applies to similar shapes also:
Expectation:
Am I doing something wrong? Or will it truly not be possible with such small pictures? I tried skeletonization on bigger images and it worked just fine. Original images:
You could try the skeleton in DIPlib (dip.EuclideanSkeleton):
import numpy as np
import diplib as dip
import cv2
file = "66.png"
img_grey = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
afterMedian = cv2.medianBlur(img_grey, 3)
thresh = 140
bin = afterMedian > thresh
sk = dip.EuclideanSkeleton(bin, endPixelCondition='three neighbors')
dip.viewer.Show(bin)
dip.viewer.Show(sk)
dip.viewer.Spin()
The endPixelCondition input argument can be used to adjust how many branches are preserved or removed. 'three neighbors' is the option that produces the most branches.
The code above produces branches also towards the corners of the image. Using 'two neighbors' prevents that, but produces fewer branches towards the object as well. The other way to prevent it is to set edgeCondition='object', but in this case the ring around the object becomes a square on the image boundary.
To convert the DIPlib image sk back to a NumPy array, do
sk = np.array(sk)
sk is now a Boolean NumPy array (values True and False). To create an array compatible with OpenCV simply cast to np.uint8 and multiply by 255:
sk = np.array(sk, dtype=np.uint8)
sk *= 255
Note that, when dealing with NumPy arrays, you generally don't need to loop over all pixels. In fact, it's worth trying to avoid doing so, as loops in Python are extremely slow.
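For instance, the two conversion loops in the question collapse to single vectorized expressions (a sketch using the question's variable names):
binaryArr = (img_binary == 255).astype(np.float64)  # 255 -> 1, everything else -> 0
bSkeleton = np.where(backgroundSkeleton, 255, 0).astype(np.float64)  # skeleton -> 0/255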
It seems scikit-image is a much better choice than cv2 here. Since the package defines functions for binary images, if you are working with black-and-white images, try its ready-to-use skeletonize.
Note: if the process misses image details, don't upsample the input at first until you have tried the other functions; again, use skimage's morphology functions to enhance details, in which case your code will work on bigger areas of the images too. You could look here.
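A minimal sketch of that, reusing the thresholded img_binary from the question (skeletonize expects a boolean or 0/1 image):
from skimage.morphology import skeletonize

binary = img_binary > 0  # boolean input for skeletonize
skeleton = skeletonize(binary)
out = skeleton.astype(np.uint8) * 255  # back to a displayable 0/255 image
cv2.imshow("skeleton", out)
cv2.waitKey(0)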

Array to Image conversion

I've been struggling for hours now with this tiny problem.
I've been trying to do some image modification. Here is the code snippet:
import numpy as np
from PIL import Image

# convert image to array
img = Image.open('lena.jpg')
array = np.array(img)

def niv_de_gris(img):
    height = len(img)
    width = len(img[0])
    # create an empty array
    new_img = [[[0 for i in range(3)] for j in range(width)] for k in range(height)]
    for i in range(height):
        for j in range(width):
            m = np.mean(img[i][j])
            for k in range(3):
                new_img[i][j][k] = int(m)
    return np.array(new_img)

array_gris = niv_de_gris(array)
img_gris = Image.fromarray(array_gris)  # problem is here !!
The first conversion works perfectly fine: it takes an image and converts it into an array. The program runs flawlessly; the image modification works and sends me back an array of the image converted to grey levels.
Yet when I want to convert this array back into an image to .show() it, I get this error:
Error screenshot
Can anybody help me figure this out pls?
Have a nice day!
Array to PIL may need you to ensure pixel values are not 0-1 but 0-255, that the type is uint8, and then to define the mode as 'RGB'.
Try using -
array_gris = niv_de_gris(array) * 255 #Skip this step if pixels are 0-255 already
array_gris = array_gris.astype(np.uint8)
img_gris = Image.fromarray(array_gris, mode='RGB')
That worked for me on a random image that I chose. Depending on the function and the image, I believe the only really important steps are to ensure the values are legit pixel values and that the type is uint8.
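For what it's worth, the loop in niv_de_gris can also be vectorized, which sidesteps the dtype problem entirely because the mean is cast to uint8 up front; a sketch, assuming an RGB input array:
import numpy as np

def niv_de_gris_np(img):
    m = img.mean(axis=2).astype(np.uint8)  # per-pixel mean over the 3 channels
    return np.stack([m, m, m], axis=2)     # replicate the grey value into R, G, B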

Is there way to invert only specific pixels with Pillow?

I am making a program where you can chose an image and a color and it will invert only the pixels that match that color. I've been surfing stackoverflow and reddit for solutions but so far no luck.
I tried to do something like this first:
img = Image.open('past.png')
pixels = img.load()
for i in goodpixels:
    ImageOps.invert(pixels[i])
AttributeError: 'tuple' object has no attribute 'mode'
No luck with that, because ImageOps.invert only inverts full images. Next I tried ImageOps.solarize, but realized that I couldn't use it because it takes greyscale thresholds, not RGB values.
img = Image.open('past.png')
pixels = img.load()
for i in goodpixels:
    ImageOps.solarize(img, threshold=pixels[i])
TypeError: '<' not supported between instances of 'int' and 'tuple'
This is my issue, I don't know if this is even possible. If it takes too much work I will probably abandon the project anyways because I'm just keeping myself occupied, and this isn't for marks/job.
Some more code:
def checkpixels():
    img = Image.open('past.png')
    height, width = img.size
    img = img.convert('RGB')
    targetcolor = input('What color do you want to search for, you can use RGB format or common names like \'red\', \'black\', e.t.c. Leave this blank to invert all pixels. ')
    print('Processing image. This could take several minutes.')
    isrgb = re.match(r'\d+, \d+, \d+|\(\d+, \d+, \d+\)', targetcolor)
    if type(isrgb) == re.Match:
        targetcolor = targetcolor.strip('()')
        targetcolor = targetcolor.split(', ')
        targetcolor = tuple(map(int, targetcolor))
        print(str(targetcolor))
        for x in range(width):
            for y in range(height):
                color = img.getpixel((y-1, x-1))
                if color == targetcolor:
                    goodpixels.append((y-1, x-1))
    else:
        try:
            targetcolor = ImageColor.getcolor(targetcolor.lower(), 'RGB')
            print(targetcolor)
            for x in range(width):
                for y in range(height):
                    color = img.getpixel((y-1, x-1))
                    if color == targetcolor:
                        goodpixels.append((y-1, x-1))
        except:
            print('Not a valid color smh.')
    return goodpixels
goodpixels = []
goodpixels = checkpixels()
Edit: I figured it out! Thank you to Mark Setchell for his incredible brain! I ended up using numpy to convert the image and target color to arrays, making an inverted copy of the image, and using numpy.where() to decide whether or not to switch out the pixels. I also plan on making the target color a range so the chosen color doesn't have to be so specific. All in all my code looks like this:
goodpixels = []
targetcolor = inputcolor()
img = Image.open('past.png')
invertimage = img.copy().convert('RGB')
invertimage = ImageOps.invert(invertimage)
invertimage.save('invert.png')
pastarray = np.array(img)
targetcolorarray = np.array(targetcolor)
pixels = img.load()
inverse = np.array(invertimage)
result = np.where((pastarray==targetcolorarray).all(axis=-1)[...,None], inverse, pastarray)
Image.fromarray(result.astype(np.uint8)).save('result.png')
Of course, inputcolor() is a function offscreen which just decides if the input is a color name or rgb value. Also I used import numpy as np in this example.
A problem that I had was that originally my .where method looked like this:
result = np.where((pastarray==[0, 0, 0]).all(axis=-1)[...,None], inverse, pastarray)
This brought up the error: AttributeError: 'bool' object has no attribute 'all'
It turns out all I had to do was convert my color into an array!
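Since the edit mentions turning the target color into a range, here is one hedged sketch of that, assuming pastarray and inverse are RGB arrays and targetcolorarray has 3 elements; the tolerance value is hypothetical:
tolerance = 30  # hypothetical per-channel tolerance, tune to taste
diff = np.abs(pastarray.astype(np.int16) - targetcolorarray)
mask = (diff <= tolerance).all(axis=-1)  # True where the pixel is within the band
result = np.where(mask[..., None], inverse, pastarray)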
Many libraries allow you to import an image into Python as a numpy array. PIL and OpenCV are well-documented libraries for working with images:
pip install opencv-python
Example numpy.where() selection meeting a set criterion, in this case inverting all pixel values below THRESHOLD:
import cv2
import numpy as np
import matplotlib.pyplot as plt

# cut-off threshold
THRESHOLD = 230
pixel_data = cv2.imread('filename.png')
# invert the values below the threshold
pixel_data = np.where(pixel_data < THRESHOLD, 255 - pixel_data, pixel_data)
# display the edited image using matplotlib
plt.imshow(pixel_data)
plt.show()
The numpy.where() function applies a condition to your numpy array. More details are available in the official numpy documentation.

contrast enhancement: how to linearly stretch the grey levels of an image?

[a screenshot of the img values] [this is the original] [this is the expected output] [this is the output I get]
I'm trying to stretch the grey levels from 0-100 to 50-200 in Python, but the output image is not right.
I drew the straight line representing the linear relationship between the two ranges, and in line 8 I'm using this equation to get the output.
What's wrong with my code?
This is my first question, so sorry for mistakes.
def Contrast_enhancement(img):
    newimg = img
    height = img.shape[0]
    width = img.shape[1]
    for i in range(height):
        for j in range(width):
            if img[i][j] * 255 >= 0 and img[i][j] * 255 <= 100:
                newimg[i][j] = (((3/2) * (img[i][j] * 255)) + 50) / 255
    return newimg
import numpy as np
import copy

def Contrast_enhancement(img):
    newimg = np.array(copy.deepcopy(img))  # this makes a real copy of img; if you don't, any change to img will change newimg too
    temp_img = np.array(copy.deepcopy(img)) * 3/2 + 50/255
    newimg = np.where(newimg <= 100/255, temp_img, newimg)  # intensities are 0-1 floats here, so grey level 100 is 100/255
    return newimg
or shorter:
import numpy as np
import copy

def Contrast_enhancement(img):
    newimg = np.array(copy.deepcopy(img))  # this makes a real copy of img; if you don't, any change to img will change newimg too
    newimg = np.where(newimg <= 100/255, newimg * 3/2 + 50/255, newimg)
    return newimg
The copy part should solve your problem, and the numpy part is just to speed things up. np.where returns temp_img where newimg is <= 100/255 and newimg where it is not.
There are two answers to your question:
One is strictly technical (the one that @DonQuiKong tries to answer), referring to how to do the stretching you refer to more simply or correctly.
The other is implicit and tries to answer your actual problem of image stretching.
I am focusing on the second case here. Judging from the image sample you provided, you are not taking the correct approach. Let's assume the samples you provided indeed have all intensity values between 0 and 100 (from a screen capture on my PC they don't, but that's screen-dependent to a degree). Your method seems correct and should work with minor bugs.
1) A minor bug for example is that:
newimg = img
does not do what you think it does: it creates an alias of the original variable. Use:
newimg = img.copy()
instead.
2) If an image with different intensity boundaries comes along, your code is broken: it will ignore some pixels, and that is presumably not what you wanted.
3) The stretching you want can be applied to the whole image in that case using something like:
newimg -= np.min(newimg)
newimg /= np.max(newimg)
which just stretches your intensities to the full available range.
4) Judging from your sample images, you also need a more radical stretch (one that sacrifices a bit of image information to increase image contrast). Instead of the above, you can divide by less than the maximum:
newimg -= np.min(newimg)
newimg /= (np.max(newimg) * 0.5)
This effectively "burns" some pixels, but in your case the result looks closer to the one you desire. Apart from that, you can apply a non-linear mapping (a logarithmic one, for example) of old intensities to new ones, which avoids "burned" pixels entirely; a sketch follows the sample below.
A sample with value 0.5:
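A possible sketch of such a logarithmic mapping, assuming a float image with non-negative intensities:
import numpy as np

def log_stretch(img):
    img = img - img.min()                       # shift the minimum to 0
    return np.log1p(img) / np.log1p(img.max())  # compress highlights into 0-1 without clipping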

Faster method for adjusting PIL pixel values

I'm writing a script to chroma key (green screen) and composite some videos using Python and PIL (pillow). I can key the 720p images, but there's some left over green spill. Understandable but I'm writing a routine to remove that spill...however I'm struggling with how long it's taking. I can probably get better speeds using numpy tricks, but I'm not that familiar with it. Any ideas?
Here's my despill routine. It takes a PIL image and a sensitivity number but I've been leaving that at 1 so far...it's been working well. I'm coming in at just over 4 seconds for a 720p frame to remove this spill. For comparison, the chroma key routine runs in about 2 seconds per frame.
import time
from PIL import Image

def despill(img, sensitivity=1):
    """
    Blue limits green.
    """
    start = time.time()
    print('\t[*] Starting despill')
    width, height = img.size
    num_channels = len(img.getbands())
    out = Image.new("RGBA", img.size, color=0)
    for j in range(height):
        for i in range(width):
            #r,g,b,a = data[j,i]
            r, g, b, a = img.getpixel((i, j))
            if g > (b * sensitivity):
                out_g = (b * sensitivity)
            else:
                out_g = g
            # end if
            out.putpixel((i, j), (r, out_g, b, a))
        # end for
    # end for
    out.show()  # debug preview
    print('\t[+] done.')
    print('\t[!] Took: %0.1f seconds' % (time.time() - start))
    exit()  # debug: stop after one frame
    return out
# end despill
Instead of putpixel, I tried to write the output pixel values to a numpy array then convert the array to a PIL image, but that was averaging just over 5 seconds...so this was faster somehow. I know putpixel isn't the snappiest option but I'm at a loss...
putpixel is slow, and loops like that are even slower, since they are run by the Python interpreter, which is slow as hell. The usual solution is to immediately convert the image to a numpy array and solve the problem with vectorized operations on it, which run in heavily optimized C code. In your case I would do something like:
arr = np.array(img)
g = arr[:,:,1]
bs = arr[:,:,2]*sensitivity
cond = g>bs
arr[:,:,1] = cond*bs + (~cond)*g
out = Image.fromarray(arr)
(it may not be correct and I'm sure it can be optimized way better, this is just a sketch)
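For what it's worth, the conditional select above is just a per-pixel minimum, so the whole despill can be condensed further; a sketch, assuming an RGBA image and a non-negative sensitivity:
import numpy as np
from PIL import Image

def despill_np(img, sensitivity=1):
    arr = np.array(img)
    # green may not exceed blue * sensitivity, per pixel
    limit = arr[:, :, 2].astype(np.float32) * sensitivity
    arr[:, :, 1] = np.minimum(arr[:, :, 1], limit).astype(np.uint8)
    return Image.fromarray(arr)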
