I want to apply an affine transformation and then use nearest-neighbour interpolation while keeping the same dimensions for the input and output images. I use, for example, the scaling transformation T = [[2,0,0],[0,2,0],[0,0,1]]. Any idea how I can fill the black pixels with nearest neighbour? I tried giving them the minimum intensity of their neighbours. For example, if a pixel has neighbours [55,22,44,11,22,55,23,231], I give it the minimum intensity, 11. But the result is not at all clear.
import numpy as np
from matplotlib import pyplot as plt
#Importing the original image and init the output image
img = plt.imread('/home/left/Desktop/computerVision/SET1/brain0030slice150_101x101.png')
outImg = np.zeros_like(img)
# Dimensions of the input image and output image (the same dimensions)
width, height = img.shape[0], img.shape[1]
# Initialize the transformation matrix
T = np.array([[2,0,0], [0,2,0], [0,0,1]])
# Array of input-image (x, y) coordinates, with a homogeneous row of ones appended
coords = np.indices((width, height)).reshape(2, -1)
coords = np.vstack((coords, np.ones(coords.shape[1], int)))
output = T @ coords
# Keep only the output coordinates that fall within the image dimensions
x_array, y_array = output[0], output[1]
indices = np.where((x_array >= 0) & (x_array < width) & (y_array >= 0) & (y_array < height))
# Final coordinates of the output image
fx, fy = x_array[indices], y_array[indices]
# Final output image after the affine transformation
outImg[fx, fy] = img[fx, fy]
The input image is:
The output image after scaling is:
Well, you could simply use the OpenCV resize function:
import cv2
new_image = cv2.resize(image, new_dim, interpolation=cv2.INTER_AREA)
it'll do the resize and fill in the empty pixels in one go (for the nearest-neighbour interpolation you asked about, pass cv2.INTER_NEAREST instead of cv2.INTER_AREA)
more on cv2.resize
If you need to do it manually, you could simply detect the dark pixels in the resized image and change their value to the mean of their 4 neighbouring pixels (for example; it depends on your required algorithm).
See: nearest neighbour, bilinear, bicubic, etc.
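If you go the manual route, here is a minimal sketch of that fill, assuming a 2-D grayscale array where the holes left by the transform are exact zeros (the function name is just for illustration):
import numpy as np

def fill_holes_with_neighbour_mean(img):
    # replace zero-valued pixels with the mean of their non-zero 4-neighbours
    out = img.astype(float)
    for i, j in np.argwhere(img == 0):
        vals = [img[ni, nj]
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                if 0 <= ni < img.shape[0] and 0 <= nj < img.shape[1] and img[ni, nj] != 0]
        if vals:
            out[i, j] = np.mean(vals)
    return out
Applying it a few times in a row fills larger gaps, since each pass gives previously empty pixels a value.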
I am trying to increase the region of interest of an image using the algorithm below.
First, the set of pixels of the exterior border of the ROI is determined, i.e., pixels that are outside the ROI and are neighbors (using four-neighborhood) of pixels inside it. Then, each pixel value of this set is replaced with the mean value of its neighbors (this time using eight-neighborhood) inside the ROI. Finally, the ROI is expanded by including this altered set of pixels. This process is repeated and can be seen as artificially increasing the ROI.
The pseudocode is below:
while there are border pixels:
    border_pixels = []
    # find the border pixels
    for each pixel p=(i, j) in image:
        if p is not in ROI and ((i+1, j) in ROI or (i-1, j) in ROI or (i, j+1) in ROI or (i, j-1) in ROI or (i-1, j-1) in ROI or (i+1, j+1) in ROI):
            add p to border_pixels
    # calculate the averages
    for each pixel p in border_pixels:
        color_sum = 0
        count = 0
        for each pixel n in 8-neighborhood of p:
            if n in ROI:
                color_sum += color(n)
                count += 1
        color(p) = color_sum / count
    # update the ROI
    for each pixel p=(i, j) in border_pixels:
        set p to be in ROI
Below is my code:
from skimage import io
import numpy as np

img = io.imread(path_dir)
newimg = np.zeros((584, 565, 3))
mask = img == 0
while(1):
    border_pixels = []
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            for k in range(0, 3):
                if(i+1 <= 583 and j+1 <= 564 and i-1 >= 0 and j-1 >= 0):
                    if ((mask[i][j][k]) and ((mask[i+1][j][k] == False) or (mask[i-1][j][k] == False) or (mask[i][j+1][k] == False) or (mask[i][j-1][k] == False) or (mask[i-1][j-1][k] == False) or (mask[i+1][j+1][k] == False))):
                        border_pixels.append([i, j, k])
    if len(border_pixels) == 0:
        break
    for (each_i, each_j, each_k) in border_pixels:
        color_sum = 0
        count = 0
        eight_neighbourhood = [[each_i-1, each_j], [each_i+1, each_j], [each_i, each_j-1], [each_i, each_j+1], [each_i-1, each_j-1], [each_i-1, each_j+1], [each_i+1, each_j-1], [each_i+1, each_j+1]]
        for pix_i, pix_j in eight_neighbourhood:
            if (mask[pix_i][pix_j][each_k] == False):
                color_sum += img[pix_i, pix_j, each_k]
                count += 1
        print(color_sum//count)
        img[each_i][each_j][each_k] = (color_sum//count)
    for (i, j, k) in border_pixels:
        mask[i, j, k] = False
        border_pixels.remove([i, j, k])
io.imsave("tryout6.png", img)
But it is not making any change in the image; I am getting the same image as before.
So I tried plotting the border pixels on a black image of the same dimensions for the first iteration, and I am getting the result below:
I really don't have any idea where I am going wrong here.
Here's a solution that I think works as you have requested (although I agree with @Peter Boone that it will take a while). My implementation has a triple loop, but maybe someone else can make it faster!
First, read in the image. With my method, the pixel values are floats between 0 and 1 (rather than integers between 0 and 255).
import urllib.request
import matplotlib.pyplot as plt
import numpy as np
from skimage.morphology import binary_dilation, binary_erosion, disk
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
# create a file-like object from the url
f = urllib.request.urlopen("https://i.stack.imgur.com/JXxJM.png")
# read the image file in a numpy array
# note that all pixel values are between 0 and 1 in this image
a = plt.imread(f)
Second, add some padding around the edges, and threshold the image. I used Otsu's method, but @Peter Boone's answer works well, too.
# add black padding around image 100 px wide
a = np.pad(a, ((100,100), (100,100), (0,0)), mode = "constant")
# convert to greyscale and perform Otsu's thresholding
grayscale = rgb2gray(a)
global_thresh = threshold_otsu(grayscale)
binary_global1 = grayscale > global_thresh
# define number of pixels to expand the image
num_px_to_expand = 50
The image, binary_global1, is a mask that looks like this:
Since the image is three channels (RGB), I process the channels separately. I noticed that I needed to erode the image by ~5 px because the outside of the image has some unusual colors and patterns.
# process each channel (RGB) separately
for channel in range(a.shape[2]):
    # select a single channel
    one_channel = a[:, :, channel]
    # reset binary_global for each channel
    binary_global = binary_global1.copy()
    # erode by 5 px to get rid of unusual edges from the original image
    binary_global = binary_erosion(binary_global, disk(5))
    # turn everything less than the threshold to 0
    one_channel = one_channel * binary_global
    # update pixels one at a time
    for jj in range(num_px_to_expand):
        # get the 1 px ring of pixels to update
        px_to_update = np.logical_xor(binary_dilation(binary_global, disk(1)),
                                      binary_global)
        # update those pixels with the average of their neighborhood
        rows, cols = np.where(px_to_update == 1)
        for x, y in zip(rows, cols):
            # make 3 x 3 px slices
            slices = np.s_[(x-1):(x+2), (y-1):(y+2)]
            # update a single pixel
            one_channel[x, y] = (np.sum(one_channel[slices] *
                                        binary_global[slices]) /
                                 np.sum(binary_global[slices]))
        # update the original image
        a[:, :, channel] = one_channel
        # increase binary_global by a 1 px dilation
        binary_global = binary_dilation(binary_global, disk(1))
When I plot the output, I get something like this:
# plot image
plt.figure(figsize=[10,10])
plt.imshow(a)
This is an interesting idea. You're going to want to use masks and some form of mean rank filters to accomplish this. Going pixel by pixel will take you a while; instead, you want to use different convolution filters.
If you do something like this:
from skimage import io
from skimage.morphology import binary_dilation, binary_erosion

image = io.imread("roi.jpg")
mask = image[:, :, 0] < 30
just_inside = binary_dilation(mask) ^ mask
image[~just_inside] = [0, 0, 0]
you will have a mask representing just the pixels inside of the ROI. I also set the pixels not in that area to 0,0,0.
Then you can get the pixels just outside of the ROI:
just_outside = binary_erosion(mask) ^ mask
Then get the mean bilateral of each channel:
from skimage.filters.rank import mean_bilateral
from skimage.morphology import square

mean_blue = mean_bilateral(image[:, :, 0], square(3), s0=1, s1=255)
# etc...
This isn't exactly correct, but I think it should put you in the right direction. I would check out image.sc if you have more general questions about image processing. Let me know if you need more help, as this was more of a general direction than working code.
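To make that direction concrete, here is a rough end-to-end sketch assembling the pieces above; the file name roi.jpg and the red-channel threshold of 30 are assumptions carried over from the snippets:
from skimage import io
from skimage.morphology import binary_dilation, square
from skimage.filters.rank import mean_bilateral

image = io.imread("roi.jpg")             # assumed input from the snippets above
inside = image[:, :, 0] >= 30            # ROI: pixels bright enough in channel 0
ring = binary_dilation(inside) ^ inside  # 1 px ring just outside the ROI
for c in range(3):
    # the bilateral mean pulls each pixel toward nearby, similarly valued pixels
    smoothed = mean_bilateral(image[:, :, c], square(3), s0=1, s1=255)
    image[:, :, c][ring] = smoothed[ring]  # fill the ring from the smoothed channel
To grow the ROI further, add the ring to the mask (inside |= ring) and repeat the loop, one ring per iteration.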
I want to count the pixels with a color intensity of [150,150,150] in an image. I determined the shape of the image and made a loop to scan the image pixel by pixel, but I got the following error and I don't know why it appeared:
File "D:/My work/MASTERS WORK/FUNCTIONS.py", line 78, in <module>
if img[x,y] == [150,150,150]:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Code:
import cv2

img = cv2.imread('imj.jpg')
h, w = img.shape[:2]
m = 0
for y in range(h):
    for x in range(w):
        if img[x, y] == [150, 150, 150]:
            m += 1
print('No. of points = ', m)
Instead of using a for loop, you should vectorize the processing using NumPy. To count the number of pixels of color intensity [150,150,150], you can use np.count_nonzero():
count = np.count_nonzero((image == [150, 150, 150]).all(axis = 2))
Here's an example. We create a black image of size [400,400] and set the bottom-right corner to [150,150,150]:
import numpy as np
# Create black image
image = np.zeros((400,400,3), dtype=np.uint8)
image[300:400,300:400] = (150,150,150)
We then count the number of pixels at this intensity
# Count number of pixels of specific color intensity
count = np.count_nonzero((image == [150, 150, 150]).all(axis = 2))
print(count)
10000
Finally, if we wanted to change the pixels of that intensity, we can find all the desired pixels and use a mask. In this case, we turn the pixels green:
# Find pixels of desired color intensity and draw onto mask
mask = (image == [150.,150.,150.]).all(axis=2)
# Apply the mask to change the pixels
image[mask] = [36,255,12]
Full code
import numpy as np
# Create black image
image = np.zeros((400,400,3), dtype=np.uint8)
image[300:400,300:400] = (150,150,150)
# Count number of pixels of specific color intensity
count = np.count_nonzero((image == [150, 150, 150]).all(axis = 2))
print(count)
# Find pixels of desired color intensity and draw onto mask
mask = (image == [150.,150.,150.]).all(axis=2)
# Apply the mask to change the pixels
image[mask] = [36,255,12]
Looping is not the recommended way to count pixels with a given value, but you can still use the code below for the case above (same value of r, g, and b):
for x in range(h):
    for y in range(w):
        if np.all(img[x, y] == 150, axis=-1):  # (img[x, y]==150).all(axis=-1)
            m += 1
If you want to count pixels with different values of r, g and b, then use np.all(img[x, y]==[b_value, g_value, r_value], axis=-1), since OpenCV follows BGR order.
Alternatively, you can use np.count_nonzero(np.all(img==[b_value, g_value, r_value],axis=-1)) or simply np.count_nonzero(np.all(img==150, axis=-1)) in above case.
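For example, counting pure red pixels with OpenCV's BGR ordering, on a made-up 4x4 image:
import numpy as np

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[0, 0] = (0, 0, 255)  # one red pixel, written in BGR order
count = np.count_nonzero(np.all(img == [0, 0, 255], axis=-1))
print(count)  # 1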
I have an RGBA image where I have to find whether any pixel has a red value < 150 and replace such pixels with black. I am using the following code for this:
import numpy as np

imgarr = np.array(img)
for x in range(imgarr.shape[0]):
    for y in range(imgarr.shape[1]):
        if imgarr[x, y][0] < 150:  # red value < 150
            imgarr[x, y] = (0, 0, 0, 255)
However, this is a slow loop and I am sure it can be optimized using some function such as numpy.where, but I am not able to fit it in this code. How can this be solved?
For a one-channel image, we can do the following:
import cv2

value = 150  # threshold below which pixels are replaced
out_val = 0
gray = cv2.imread("colour.png", 0)
gray[gray < value] = out_val
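The same masked assignment extends to the RGBA case from the question; a short sketch, assuming img is the RGBA image object from the question:
import numpy as np

imgarr = np.array(img)
low_red = imgarr[..., 0] < 150    # boolean mask of pixels with red < 150
imgarr[low_red] = (0, 0, 0, 255)  # replace them all at once with opaque black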
Use np.where with the mask from the comparison against the threshold:
img = np.asarray(img)
imgarr = np.where(img[...,[0]]<150,(0,0,0,255),img)
We are using img[...,[0]] to keep the number of dims as needed for broadcasted assignment with np.where. So, another way would be to use img[...,0,None]<150 to get the mask that keeps dims.
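As a small self-contained check of that broadcasting, on a made-up 2x2 RGBA array:
import numpy as np

img = np.array([[[100, 10, 10, 255], [200, 10, 10, 255]],
                [[150, 10, 10, 255], [149, 10, 10, 255]]], dtype=np.uint8)
out = np.where(img[..., [0]] < 150, (0, 0, 0, 255), img)
print(out[0, 0])  # [  0   0   0 255] -- red was 100, so the pixel went black
print(out[0, 1])  # [200  10  10 255] -- red was 200, so it is unchanged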
I'm trying to import an image file, such as file.bmp, read the RGB values of each pixel in the image, and then output the highest RGB valued pixel (the brightest pixel) for each row to the screen. Any suggestions on how to do it using Python?
You can make a lot of use of the power of numpy here. Note that the code below outputs "brightness" in the range [0, 255].
#!/bin/env python
import numpy as np
from scipy.misc import imread  # removed in SciPy 1.2; use imageio.imread on newer installs

# Read in the image
img = imread('/users/solbrig/smooth_test5.png')

# Average the color channels to get brightness
brightness = img.sum(axis=2) / img.shape[2]

# Find the maximum brightness in each row
row_max = np.amax(brightness, axis=1)
print(row_max)
If you think your image might have an alpha layer, you can do this:
#!/bin/env python
import numpy as np
from scipy.misc import imread  # removed in SciPy 1.2; use imageio.imread on newer installs

# Read in the image
img = imread('/users/solbrig/smooth_test5.png')

# Pull off the alpha layer
if img.shape[2] == 4:
    alph = img[:, :, 3] / 255.0
    img = img[:, :, 0:3]
else:
    alph = np.ones(img.shape[0:2])

# Average the color channels to get brightness
brightness = img.sum(axis=2) / img.shape[2]
brightness *= alph

# Find the maximum brightness in each row
row_max = np.amax(brightness, axis=1)
print(row_max)
Well, you could use scipy.misc.imread to read the image and manipulate it like so:
import scipy.misc

file_array = scipy.misc.imread("file.bmp")

def get_brightness(pixel_tuple):
    # distance from (0, 0, 0)
    return sum(component * component for component in pixel_tuple) ** .5

row_maxima = {}
height, width = len(file_array), len(file_array[0])
for y in range(height):
    for x in range(width):
        pixel = tuple(file_array[y][x])  # cast to a tuple so it can be stored in the dict
        # keep the brightest pixel seen so far in this row
        if y not in row_maxima or get_brightness(pixel) > get_brightness(row_maxima[y]):
            row_maxima[y] = pixel

print(row_maxima)
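For larger images, the same per-row lookup can be vectorized with NumPy, avoiding the Python-level double loop; a sketch assuming file_array holds an RGB image as above:
import numpy as np

# Euclidean norm of each pixel as its brightness, shape (height, width)
brightness = np.sqrt((file_array.astype(float) ** 2).sum(axis=2))
# column index of the brightest pixel in each row
cols = brightness.argmax(axis=1)
# the brightest pixel of each row, shape (height, 3)
row_maxima = file_array[np.arange(file_array.shape[0]), cols]
print(row_maxima)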