OpenCV local pixel average generating strange output - python

I am trying to use Python to compute a local pixel color average, but my output is not that at all.
Image:
Output:
Code:
import cv2

image = cv2.imread('perspective.jpeg')
for i in range(image.shape[1]):
    for j in range(image.shape[0]):
        up = image[min(j + 1, image.shape[0]-1), i]
        down = image[max(j - 1, 0), i]
        right = image[j, min(i + 1, image.shape[1]-1)]
        left = image[j, max(i - 1, 0)]
        average = (up + down + left + right + image[j, i]) / 5
        image[j, i] = average

The issue that you are observing is due to integer overflow while computing the average. The overflow happens because the pixels are of type np.uint8, and adding them together produces a result that is also of type np.uint8, which is not large enough to hold the sum.
The solution is to cast the pixels to a larger data type before adding them, then cast the final value back to np.uint8 before storing it in the result image.
In fact, casting only one of the values (say up) to the larger data type will suffice, as the rest of them will automatically be promoted during the addition.
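To see the overflow in isolation, here is a minimal sketch (uint8 arithmetic wraps around modulo 256):
import numpy as np

a = np.uint8(200)
b = np.uint8(100)
print(a + b)              # 44, since (200 + 100) % 256 == 44
print(np.float32(a) + b)  # 300.0 once one operand is promoted to float32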
The corrected code may look like this:
import cv2
import numpy as np

image = cv2.imread('perspective.jpeg')
for i in range(image.shape[1]):
    for j in range(image.shape[0]):
        up = np.float32(image[min(j + 1, image.shape[0]-1), i])
        down = image[max(j - 1, 0), i]
        right = image[j, min(i + 1, image.shape[1]-1)]
        left = image[j, max(i - 1, 0)]
        average = (up + down + left + right + image[j, i]) / 5
        image[j, i] = np.uint8(average)

You can easily do this with filter2D as shown in the example below. It will work on any number of channels.
import cv2
import numpy as np

im = np.random.randint(0, 256, (5, 5), np.uint8)
kernel = np.array([[0, 1./5, 0], [1./5, 1./5, 1./5], [0, 1./5, 0]])
filt = cv2.filter2D(im, cv2.CV_8U, kernel)
For example:
>>> im
array([[ 14, 127, 221,  74,   2],
       [132, 251,  88,  19, 215],
       [183, 140,  17,  60,  76],
       [208, 144, 182,  11,  64],
       [183,  89, 217, 131,  23]], dtype=uint8)
>>> filt
array([[106, 173, 120,  67, 116],
       [166, 148, 119,  91,  66],
       [161, 147,  97,  37,  95],
       [172, 153, 114,  90,  37],
       [155, 155, 160,  79,  83]], dtype=uint8)
You can choose the border type, I've used the default.
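If the default border behavior doesn't suit you, filter2D also accepts a borderType argument; for example, replicating the edge pixels instead of the default reflection (any of the cv2.BORDER_* constants can be substituted):
filt = cv2.filter2D(im, cv2.CV_8U, kernel, borderType=cv2.BORDER_REPLICATE)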

Related

Image-processing convolution kernels are calculated dynamically

Using standard numpy and cv2.filter2D solutions I can apply static convolutions to an image:
import numpy as np
import cv2

convolution_kernel = np.array([[-2, -1, 0],
                               [-1, 1, 1],
                               [0, 1, 2]])
image = cv2.imread('1.png')
result = cv2.filter2D(image, -1, convolution_kernel)
(example from https://stackoverflow.com/a/58383803/3310334)
Every pixel [i, j] in the output image gets its value by centering a 3x3 "window" on [i, j] in the input image, multiplying each value in the window by the corresponding value in the convolution kernel (a Hadamard product), and finally summing the nine products to get the value for [i, j] in the output image (for each color channel).
(image from: https://github.com/ashushekar/image-convolution-from-scratch#convolution)
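As a minimal sketch of that definition (variable names here are mine, not from the linked post), the value of one interior output pixel can be computed directly:
import numpy as np

image = np.random.randint(0, 256, (5, 5)).astype(float)
kernel = np.array([[-2, -1, 0],
                   [-1, 1, 1],
                   [0, 1, 2]], dtype=float)

i, j = 2, 2                       # an interior pixel, so the 3x3 window fits
window = image[i-1:i+2, j-1:j+2]  # the 3x3 window centered on [i, j]
out_ij = np.sum(window * kernel)  # Hadamard product, then sum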
In my case, the function used to calculate each output pixel is not a simple sum of a Hadamard product. Instead, each output pixel is calculated from operations performed on known-size windows into two input matrices, centered around that pixel.
I have two input matrices ("images"), like
A = np.array([[179,  97,  77, 118, 144, 105],
              [ 68,  56, 184, 210, 141, 230],
              [178, 166, 218,  47, 106, 172],
              [ 38, 183,  50, 185,  48,  87],
              [ 60, 200, 228, 232,   6, 190],
              [253,  75, 231, 166, 117, 134]])
B = np.array([[116,  95,  94, 220,  80, 223],
              [135,   9, 166,  78,   5, 129],
              [102, 167, 120,  81, 141,  29],
              [ 83, 117,  81, 129, 255,  48],
              [130, 231, 165,   7, 187, 169],
              [ 44, 137,  16,  50, 229, 202]])
And in the output matrix, each [i, j] pixel should be calculated as the sum of all A[u, v] ** 2 - B[u, v] ** 2 values for [u, v] coordinates within a 3x3 "window" onto the two (same-sized) input matrices.
How can I calculate this output matrix quickly in Python?
Using numpy, it seems to be the 3x3 sums of A * A - B * B, but how do I do those sums? Or is there another "2d map" process I could be using?
I've written a loop-based solution to calculate the expected output for these two examples:
W = 3  # size of kernel is WxW
out = np.zeros(A.shape)
difference_of_squares = A * A - B * B
for i, j in np.ndindex(out.shape):
    # Use smaller kernels at the input's boundaries so the output has the same
    # dimensions as the input. (I'm not worried at this point about what happens
    # at boundaries; standard convolution solutions often just reduce the output
    # size or pad the input with zeroes.)
    starti = max(i - W//2, 0)
    stopi = min(i - W//2 + W, out.shape[0])
    startj = max(j - W//2, 0)
    stopj = min(j - W//2 + W, out.shape[1])
    out[i, j] = np.sum(difference_of_squares[starti:stopi, startj:stopj])
print(out)
[[   8423.   11816.   10372.   41125.   35287.   31747.]
 [  29370.   65887.   38811.   61252.   51033.   51845.]
 [  24756.   60119.  109133.   35101.   70005.   18757.]
 [   8641.   62463.  126935.   14530.    2255.  -64752.]
 [  36623.  110426.  163513.   33812.  -50035. -146450.]
 [  22268.  100132.  130190.   83010.  -10163.  -88994.]]
You can use scipy.signal.convolve2d:
from scipy.signal import convolve2d
# Same shape as original (6x6)
>>> convolve2d(A**2 - B**2, np.ones((3, 3), dtype=int), mode='same')
array([[   8423,   11816,   10372,   41125,   35287,   31747],
       [  29370,   65887,   38811,   61252,   51033,   51845],
       [  24756,   60119,  109133,   35101,   70005,   18757],
       [   8641,   62463,  126935,   14530,    2255,  -64752],
       [  36623,  110426,  163513,   33812,  -50035, -146450],
       [  22268,  100132,  130190,   83010,  -10163,  -88994]])
# Shape reduced by 2 in each dimension (4x4)
>>> convolve2d(A**2 - B**2, np.ones((3, 3), dtype=int), mode='valid')
array([[ 65887,  38811,  61252,  51033],
       [ 60119, 109133,  35101,  70005],
       [ 62463, 126935,  14530,   2255],
       [110426, 163513,  33812, -50035]])
Note: you have to play around with the mode and boundary parameters until you get what you want.
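For instance, a sketch of the same call with symmetric boundary handling instead of the default zero fill (see the scipy docs for the other boundary options):
>>> convolve2d(A**2 - B**2, np.ones((3, 3), dtype=int), mode='same', boundary='symm')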
Update
If the border is not a problem at this point, you can use sliding_window_view:
from numpy.lib.stride_tricks import sliding_window_view
>>> np.sum(sliding_window_view(A**2 - B**2, (3, 3)), axis=(2, 3))
array([[ 65887,  38811,  61252,  51033],
       [ 60119, 109133,  35101,  70005],
       [ 62463, 126935,  14530,   2255],
       [110426, 163513,  33812, -50035]])
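If you later do want the full 6x6 output from this approach, one option (my addition, not part of the original answer) is to zero-pad the difference image first, which reproduces the mode='same' result above:
padded = np.pad(A**2 - B**2, 1)  # zero-pad by one pixel on each side
out = np.sum(sliding_window_view(padded, (3, 3)), axis=(2, 3))  # shape (6, 6)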

Convert multidimensional array of indices clusters to a 1D categorical array

I have a function which returns a multidimensional array of k clusters. My algorithm works for the most part, but I need it to return a categorical array instead of a multidimensional array. Here is my code:
import numpy as np
import pandas as pd
import random
from bokeh.sampledata.iris import flowers
from typing import List, Tuple

def get_closest(data_point: np.ndarray, centroids: np.ndarray):
    """
    Takes a data_point and an nd.array of multiple centroids and returns the index of the centroid closest to data_point
    by computing the euclidean distance for each centroid and picking the closest.
    """
    N = centroids.shape[0]
    dist = np.empty(N)
    for i, c in enumerate(centroids):
        dist[i] = np.linalg.norm(c - data_point)
    index_min = np.argmin(dist)
    return index_min

# Use these centroids in the first iteration of your algorithm if "Random Centroids" is set to False in the Dashboard
DEFAULT_CENTROIDS = np.array([[5.664705882352942, 3.0352941176470587, 3.3352941176470585, 1.0176470588235293],
                              [5.446153846153847, 3.2538461538461543, 2.9538461538461536, 0.8846153846153846],
                              [5.906666666666667, 2.933333333333333, 4.1000000000000005, 1.3866666666666667],
                              [5.992307692307692, 3.0230769230769234, 4.076923076923077, 1.3461538461538463],
                              [5.747619047619048, 3.0714285714285716, 3.6238095238095243, 1.1380952380952383],
                              [6.161538461538462, 3.030769230769231, 4.484615384615385, 1.5307692307692309],
                              [6.294117647058823, 2.9764705882352938, 4.494117647058823, 1.4],
                              [5.853846153846154, 3.215384615384615, 3.730769230769231, 1.2076923076923078],
                              [5.52857142857143, 3.142857142857143, 3.107142857142857, 1.007142857142857],
                              [5.828571428571429, 2.9357142857142855, 3.664285714285714, 1.1]])

def k_means(data_np: np.ndarray, k: int = 3, n_iter: int = 500, random_initialization=False) -> Tuple[np.ndarray, int]:
    """
    :param data: your data, a numpy array with shape (n_entries, n_features)
    :param k: The number of clusters to compute
    :param n_iter: The maximal number of iterations
    :param random_initialization: If False, DEFAULT_CENTROIDS are used as the centroids of the first iteration.
    :return: A tuple (cluster_indices: A numpy array of cluster_indices,
             n_iterations: the number of iterations it took until the algorithm terminated)
    """
    # Initialize the algorithm by assigning random cluster labels to each entry in your dataset
    k = k + 1
    centroids = data_np[random.sample(range(len(data_np)), k)]
    labels = np.array([np.argmin([(el - c) ** 2 for c in centroids]) for el in data_np])
    clustering = []
    for k in range(k):
        clustering.append(data_np[labels == k])
    # Implement K-Means with a while loop, which terminates either if the centroids don't move anymore, or
    # if the number of iterations exceeds n_iter
    counter = 0
    while counter < n_iter:
        # Compute the new centroids; if random_initialization is False, use DEFAULT_CENTROIDS in the first iteration.
        # If you use DEFAULT_CENTROIDS, make sure to only pick the first k entries from them.
        if random_initialization is False and counter == 0:
            centroids = DEFAULT_CENTROIDS[random.sample(range(len(DEFAULT_CENTROIDS)), k)]
        # Update the cluster labels using get_closest
        labels = np.array([get_closest(el, centroids) for el in data_np])
        clustering = []
        for i in range(k):
            clustering.append(np.where(labels == i)[0])
        counter += 1
        new_centroids = np.zeros_like(centroids)
        for i in range(k):
            if len(clustering[i]) > 0:
                new_centroids[i] = data_np[clustering[i]].mean(axis=0)
            else:
                new_centroids[i] = centroids[i]
        # if the centroids didn't move, exit the while loop
        if clustering is not None and (centroids == new_centroids).sum() == 0:
            break
        else:
            centroids = new_centroids
        pass
    # return the final cluster labels and the number of iterations it took
    return clustering, counter

# read and store the dataset
data: pd.DataFrame = flowers.copy(deep=True)
data = data.drop(['species'], axis=1)
data_np = np.asarray(data)

clustering, counter = k_means(data_np, 4, 500, False)
So clustering looks like this:
>>> clustering
[array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
        17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
        34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 57,
        98], dtype=int64),
 array([60, 93], dtype=int64),
 array([ 50,  51,  52,  53,  54,  55,  56,  58,  61,  62,  63,  65,  66,
         67,  68,  69,  70,  71,  72,  73,  74,  75,  76,  77,  78,  79,
         80,  81,  82,  83,  86,  87,  89,  90,  91,  92,  94,  95,  96,
         97,  99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110,
        111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123,
        124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136,
        137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149],
       dtype=int64),
 array([59, 64, 84, 85, 88], dtype=int64)]
However, what I'm looking for is an array like
>>> clustering
array([1, 3, 2, ..., 4, 1, 4], dtype=int64)
Also, the while loop always terminates after 1 iteration, which shouldn't be the case.
>>> counter
1
EDIT1:
The code continues as follows.
# imports assumed for this snippet (they were not shown in the original post)
from bokeh.models import ColumnDataSource, Div, Select, Slider
from bokeh.palettes import Spectral10
from bokeh.plotting import figure
from bokeh.transform import factor_cmap

def callback(attr, old, new):
    # recompute the clustering and update the colors of the data points based on the result
    k = slider_k.value_throttled
    init = select_init.value
    clustering_new, counter_new = k_means(data_np, k, 500, init)
    pass

# Create the dashboard
# 1. A Select widget to choose between random initialization or using the DEFAULT_CENTROIDS on top
select_init = Select(title='Random Centroids', value='False', options=['True', 'False'])
# 2. A Slider to choose a k between 2 and 10 (k being the number of clusters)
slider_k = Slider(start=2, end=10, value=3, step=1, title='k')
# 4. Connect both widgets to the callback
select_init.on_change('value', callback)
slider_k.on_change('value_throttled', callback)
# 3. A ColumnDataSource to hold the data and the color of each point you need
source = ColumnDataSource(dict(petal_length=data['petal_length'],
                               sepal_length=data['sepal_length'],
                               petal_width=data['petal_width'],
                               clustering=clustering))
# 4. Two plots displaying the dataset based on the following table, have a look at the images
# in the handout if this confuses you.
#
# Axis/Plot    Plot1           Plot2
# X            Petal length    Petal width
# Y            Sepal length    Petal length
#
# Use a categorical color mapping, such as Spectral10, have a look at this section of the bokeh docs:
# https://docs.bokeh.org/en/latest/docs/user_guide/categorical.html#filling
plot1 = figure(plot_width=100, plot_height=100, title='Scatterplot of flowers distribution by petal length and sepal length')
plot1.yaxis.axis_label = 'Sepal length'
plot1.xaxis.axis_label = 'Petal length'
scatter1 = plot1.scatter(x='petal_length', y='sepal_length', source=source,
                         fill_color=factor_cmap('clustering', palette=Spectral10, factors=clustering))
plot2 = figure(plot_width=100, plot_height=100, title='Scatterplot of flowers distribution by petal width and petal length')
plot2.yaxis.axis_label = 'Petal length'
plot2.xaxis.axis_label = 'Petal width'
scatter2 = plot2.scatter(x='petal_width', y='petal_length', source=source,
                         fill_color=factor_cmap('clustering', palette=Spectral10, factors=clustering))
# 5. A Div displaying the current number of iterations it took the algorithm to update the plot.
div = Div(text='Number of iterations: ')
Thus the end result should look like so
I'm not sure I understand what you need.
If clustering contains a list of arrays, where each array represents a cluster and the ith array contains the indices of the samples that belong to the ith cluster, and what you need is to convert this to a single vector of size number_of_samples representing the cluster each sample belongs to, you can do it like this:
def to_classes(clustering):
    # Get the number of samples (you can also pass it directly to the function)
    num_samples = sum(x.shape[0] for x in clustering)
    indices = np.empty((num_samples,))  # An empty array of the correct size
    for ith, cluster in enumerate(clustering):
        # use the cluster's sample indices to assign the correct cluster index
        indices[cluster] = ith
    return indices
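Used on the clustering from the question, this would look like (a sketch; cast the dtype as needed):
labels = to_classes(clustering)   # shape (150,), one cluster index per sample
labels = labels.astype(np.int64)  # optional: integer labels instead of floats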
The loop exits after a single iteration because the break condition is wrong. I think what you actually want is
# note the !=
if clustering is not None and (centroids != new_centroids).sum() == 0:
    break
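Equivalently, and arguably clearer, the same check can be written with np.array_equal (or np.allclose, since the centroids are floats):
if clustering is not None and np.array_equal(centroids, new_centroids):
    break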

Call a Python function name from concatenated string name

I want to create a function name by concatenating "string" + variable + "string" and then call that function.
I am currently using this condensed version of the code for simplicity, to similarly accomplish tasks, and I want to minimize the hard-coded contents of the function do_update(a):
ROTATE = '90'
ROT20 = [
[0, 0, 0, 0, 0, 0, 0, 0],
[126, 129, 153, 189, 129, 165, 129, 126],
[126, 255, 231, 195, 255, 219, 255, 126],
[0, 8, 28, 62, 127, 127, 127, 54],
[0, 8, 28, 62, 127, 62, 28, 8],
[62, 28, 62, 127, 127, 28, 62, 28],
[62, 28, 62, 127, 62, 28, 8, 8],
[0, 0, 24, 60, 60, 24, 0, 0],
];
def updatevalues90(a):
b = []
for i in range(8):
for j in range(8):
b[i] += a[j] + i
return b
def do_update(a):
if ROTATE == '90':
ROT = [updatevalues90(char) for char in a]
elif ROTATE == '180':
ROT = [updatevalues180(char) for char in a]
elif ROTATE == '270':
ROT = [updatevalues270(char) for char in a]
do_update(ROT20)
Everything I have tried has resulted in invalid syntax or in ROT being filled with the string names of the functions instead of their results.
I want to take the function call updatevalues90(char) and, instead of hard-coding it, change it to:
ROT = ["updatevalues" + ROTATE + "(char)" for char in a]
So that whatever value is in ROTATE will become part of the function call, i.e. function name.
My question is: how, in Python, do I concatenate strings and a variable into a usable function name?
I think eval is the answer, but I can't get the syntax to work for me. Maybe there is something simpler in Python that works?
Store your functions in a dict:
updaters = {
    '90': updatevalues90,
    '180': updatevalues180,
    '270': updatevalues270,
}

def do_update(a):
    ROT = [updaters[ROTATE](char) for char in a]
    # return ROT ?
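If you really do want to look the function up by its composed name rather than keeping a dict, globals() supports that, although the dict above is the cleaner pattern:
func = globals()['updatevalues' + ROTATE]  # resolve the name at runtime
ROT = [func(char) for char in ROT20]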

Python find convolution kernel if input image and output image is known

I have a problem with a convolution kernel in python. It is about a simple convolution operator. I have an input matrix and an output matrix, and I want to find a possible convolution kernel of size (5x5). How can I solve this problem with python, numpy or tensorflow?
import numpy as np
import scipy.signal as ss

input_img = np.array([[94, 166, 76, 106, 152, 232],
                      [48, 242, 30, 98, 46, 210],
                      [52, 60, 86, 60, 216, 248],
                      [52, 236, 116, 240, 224, 184],
                      [138, 160, 146, 254, 236, 252],
                      [94, 100, 224, 246, 152, 74]], dtype=float)
output_img = np.array([[15, 49, 23, 105, 0, 0],
                       [43, 30, 108, 124, 0, 0],
                       [58, 120, 112, 92, 0, 0],
                       [73, 127, 118, 126, 0, 0],
                       [112, 123, 76, 37, 0, 0],
                       [0, 0, 0, 0, 0, 0]], dtype=float)
# I want to find this kernel
conv = np.zeros((5, 5), dtype=int)
# So that applying the convolution operator to input_img produces output_img as defined above
output_img = ss.convolve2d(input_img, conv, mode='same')
As far as I understood, you need to reconstruct the window weights given the input array, the output array, and the window size. This is possible, I think, especially if the input array (image) is sufficiently big.
Look at the code below:
import scipy.signal as ss
import numpy as np

source_dataset = np.random.rand(20, 10)
sample_convolution = np.diag([1, 1, 1])
output_dataset = ss.convolve2d(source_dataset, sample_convolution, mode='same')
conv_size = sample_convolution.shape[0]

# Given output_dataset, source_dataset, and conv_size we need to reconstruct
# the window weights.
def reconstruct(data, output, csize):
    half_size = int(csize / 2)
    min_row_ind = half_size
    max_row_ind = int(data.shape[0]) - half_size
    min_col_ind = half_size
    max_col_ind = int(data.shape[1]) - half_size
    A = list()
    b = list()
    for i in np.arange(min_row_ind, max_row_ind, dtype=int):
        for j in np.arange(min_col_ind, max_col_ind, dtype=int):
            A.append(data[(i - half_size):(i + half_size + 1), (j - half_size):(j + half_size + 1)].ravel().tolist())
            b.append(output[i, j])
            if len(A) == csize * csize and np.linalg.matrix_rank(A) == csize * csize:
                return (np.linalg.pinv(A) @ np.array(b)[:, np.newaxis]).reshape(csize, csize)
    if len(A) < csize * csize:
        raise Exception("Insufficient data")

result = reconstruct(source_dataset, output_dataset, 3)
I got the following result
array([[ 1.00000000e+00, -1.77635684e-15, -1.11022302e-16],
       [ 0.00000000e+00,  1.00000000e+00, -8.88178420e-16],
       [ 0.00000000e+00, -1.22124533e-15,  1.00000000e+00]])
So, it works as expected, but it definitely needs to be improved to take edge effects into account, handle windows of even size, etc.
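A quick sanity check worth adding (my addition, not part of the original answer): convolving the source with the recovered kernel should reproduce the output, at least away from the borders:
reconstructed = ss.convolve2d(source_dataset, result, mode='same')
print(np.allclose(reconstructed[1:-1, 1:-1], output_dataset[1:-1, 1:-1]))  # expect True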

Find polynomial function through 30 points with polyfit

I need to find the polynomial function of degree 29 that exactly fits thirty data points. We can be sure that such a function exists. However, the error of numpy.polyfit increases dramatically after only three points.
import numpy as np

y = [126, 34, 78, 120, 83, 62, 104, 6, 70, 142, 147, 63, 35, 126, 9, 84, 7, 122, 93, 29, 95, 141, 42, 102, 38, 96, 130, 83, 138, 148]
print(len(y))
x = np.arange(len(y))
f = np.polyfit(x, y, 30)

def eval_polynom(f, x):
    res = 0
    for i in range(len(f)):
        res += f[i] * x**(len(f)-i-1)
    return res

for i in range(len(y)):
    print(y[i], " -- ", eval_polynom(f, x[i]))
My data points are (x,y) with x = [0,1,2,3,4,...,29]
The output is
126 -- 125.941598976
34 -- 34.7366402172
78 -- 73.703669116
120 -- 134.514176467
83 -- 51.6471546864
62 -- 105.143046704
104 -- 70.1470309453
6 -- 13.808372367
70 -- 347.425617622
142 -- -1281.11122538
...
Is there a way to get the exact polynomial function such that the error is 0?
There's almost certainly an integer overflow issue (due to the large exponents) in your eval_polynom function, because the values in x are numpy integers from np.arange, which have a fixed 64-bit width, unlike Python's arbitrary-precision ints. Try to replace
res += f[i] * x**(len(f)-i-1)
with
res += f[i] * float(x)**(len(f)-i-1)
You'll probably end up with values that still don't perfectly match, but remember that floating point operations are inherently inaccurate. Even more so if numbers become large, as is the case here.
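A minimal sketch of the difference (numpy integer scalars wrap around on overflow, while Python floats merely lose precision):
import numpy as np

x = np.int64(29)
print(x ** 29)         # a meaningless wrapped value: 29**29 does not fit in 64 bits
print(float(x) ** 29)  # roughly 2.56e42, representable as a float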
(Plot: y in green, the polynomial in red, the error in blue; shown for a degree-140 polynomial.)
I need to find the polynomial function of degree 29 that exactly fits thirty data points. We can be sure, that such a function exists
Why are you sure of this? I tried some tweaks and visualizations and think your data points can't be fit by such a polynomial.
I've tried Chebyshev polynomials; they do better, but still can't fit these values, even with a degree-140 polynomial.
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from numpy.polynomial.chebyshev import chebfit, chebval
%matplotlib inline

y = [126, 34, 78, 120, 83, 62, 104, 6, 70, 142, 147, 63, 35, 126, 9, 84, 7, 122, 93, 29, 95, 141, 42, 102, 38, 96, 130, 83, 138, 148]
print(len(y))
x = np.arange(len(y))
c = chebfit(x, y, 30)
p = []
for i in np.arange(len(y)):
    p.append(chebval(i, c))
df = pd.DataFrame(data={'x': x, 'y': y, 'p': p})
df['diff'] = df['y'] - df['p']
sns.pointplot(x='x', y='y', data=df, color='green')
sns.pointplot(x='x', y='p', data=df, color='red')
sns.pointplot(x='x', y='diff', data=df, color='blue')
While not exact, you get much better results if you use NumPy's polyval:
import numpy as np

y = [126, 34, 78, 120, 83, 62, 104, 6, 70, 142, 147, 63, 35, 126, 9, 84, 7, 122, 93, 29, 95, 141, 42, 102, 38, 96, 130, 83, 138, 148]
x = np.arange(len(y))
f = np.polyfit(x, y, 30)
for i in range(len(y)):
    print(y[i], " -- ", np.polyval(f, x[i]))
which gives
(126, ' -- ', 125.94427340268774)
(34, ' -- ', 34.674505165214924)
(78, ' -- ', 73.961360153890183)
(120, ' -- ', 133.96863767482208)
(83, ' -- ', 52.113307162099574)
(62, ' -- ', 105.65069882437891)
(104, ' -- ', 68.588480573695762)
(6, ' -- ', 14.814788499822299)
(70, ' -- ', 76.373263353880958)
(142, ' -- ', 149.39793233756134)
...
Note that you should be using a degree 29 polynomial to fit 30 points.
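If an essentially exact interpolant is the goal, one sketch (my suggestion, not from the answers above) uses numpy's newer polynomial API; Polynomial.fit rescales x into [-1, 1] internally, which greatly improves the conditioning of the degree-29 fit, and 30 points with distinct x determine a unique degree-29 interpolating polynomial:
import numpy as np
from numpy.polynomial import Polynomial

y = [126, 34, 78, 120, 83, 62, 104, 6, 70, 142, 147, 63, 35, 126, 9, 84, 7, 122, 93, 29, 95, 141, 42, 102, 38, 96, 130, 83, 138, 148]
x = np.arange(len(y))
p = Polynomial.fit(x, y, 29)  # least squares on the scaled domain
print(max(abs(p(xi) - yi) for xi, yi in zip(x, y)))  # residuals should be tiny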
