Find average colour of each section of an image - Python

I am looking for the best way to achieve the following using Python:
Import an image.
Add a grid of n sections (4 shown in this example below).
For each section find the dominant colour.
Desired output
Output an array, list, dict or similar capturing these dominant colour values.
Maybe even a Matplotlib graph showing the colours (like pixel art).
What have I tried?
The image could be sliced using image_slicer:
import image_slicer
image_slicer.slice('image_so_grid.png', 4)
I could then potentially use something like this to get the average colour of each slice, but I'm sure there are better ways to do this.
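For instance, averaging one slice with NumPy might look something like this (a rough sketch; the filename is just a placeholder for whatever image_slicer writes out):
import numpy as np
from PIL import Image

# 'slice_0.png' is a placeholder for one of the tiles produced by image_slicer
tile = np.array(Image.open('slice_0.png').convert('RGB'))
avg_colour = tile.reshape(-1, 3).mean(axis=0)  # per-channel mean over all pixels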
What are the best ways to do this with Python?

This works for 4 sections, but you'll need to figure out how to make it work for 'n' sections:
import cv2

img = cv2.imread('image.png')

def fourSectionAvgColor(image):
    rows, cols, ch = image.shape
    colsMid = int(cols/2)
    rowsMid = int(rows/2)
    section0 = image[0:rowsMid, 0:colsMid]
    section1 = image[0:rowsMid, colsMid:cols]
    section2 = image[rowsMid:rows, 0:colsMid]
    section3 = image[rowsMid:rows, colsMid:cols]
    sectionsList = [section0, section1, section2, section3]
    sectionAvgColorList = []
    for section in sectionsList:
        yRows, xCols, chs = section.shape
        pixelCount = yRows * xCols
        totRed = 0
        totBlue = 0
        totGreen = 0
        for x in range(xCols):
            for y in range(yRows):
                b, g, r = section[y, x]  # OpenCV stores pixels as BGR
                totBlue += int(b)        # cast to int so the sums don't wrap at 255
                totGreen += int(g)
                totRed += int(r)
        avgBlue = int(totBlue / pixelCount)
        avgGreen = int(totGreen / pixelCount)
        avgRed = int(totRed / pixelCount)
        sectionAvgColorList.append((avgBlue, avgGreen, avgRed))
    return sectionAvgColorList

print(fourSectionAvgColor(img))
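To generalise to an n-by-n grid, a vectorised sketch along these lines avoids the per-pixel loops entirely (gridAvgColor is just an illustrative name; edge sections simply come out a pixel larger or smaller when the image size does not divide evenly):
import cv2
import numpy as np

def gridAvgColor(image, n):
    # average BGR colour of each cell in an n-by-n grid
    rows, cols = image.shape[:2]
    row_edges = np.linspace(0, rows, n + 1, dtype=int)
    col_edges = np.linspace(0, cols, n + 1, dtype=int)
    averages = []
    for r0, r1 in zip(row_edges[:-1], row_edges[1:]):
        for c0, c1 in zip(col_edges[:-1], col_edges[1:]):
            cell = image[r0:r1, c0:c1]
            # mean over both spatial axes leaves one value per channel
            averages.append(tuple(cell.mean(axis=(0, 1)).astype(int)))
    return averages

print(gridAvgColor(cv2.imread('image.png'), 2))  # n=2 reproduces the four sections above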

You can use scikit-image's view_as_blocks together with numpy.mean. You specify the block size instead of the number of blocks:
import numpy as np
from skimage import data, util
import matplotlib.pyplot as plt
astro = data.astronaut()
blocks = util.view_as_blocks(astro, (8, 8, 3))
print(astro.shape)
print(blocks.shape)
mean_color = np.mean(blocks, axis=(2, 3, 4))
fig, ax = plt.subplots()
ax.imshow(mean_color.astype(np.uint8))
Output:
(512, 512, 3)
(64, 64, 1, 8, 8, 3)
Don't forget the cast to uint8 because matplotlib and scikit-image expect floating point images to be in [0, 1], not [0, 255]. See the scikit-image documentation on data types for more info.
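Note that view_as_blocks requires the block shape to divide the image shape exactly. If it doesn't, cropping to the nearest multiple first is a simple workaround (a sketch; padding the image up to a multiple would be the alternative):
import numpy as np
from skimage import util

def block_means(img, block=(8, 8)):
    # crop so each spatial dimension is an exact multiple of the block size
    h = img.shape[0] - img.shape[0] % block[0]
    w = img.shape[1] - img.shape[1] % block[1]
    blocks = util.view_as_blocks(img[:h, :w], (block[0], block[1], 3))
    return np.mean(blocks, axis=(2, 3, 4))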

Related

visualize a two-dimensional point set using Python

I'm new to Python and want to perform a rather simple task. I've got a two-dimensional point set, i.e. (x, y)-coordinates, stored in a file, which I want to visualize. The output should look like the picture below.
However, I'm somewhat overwhelmed by the number of Google results on this topic, and many of them seem to be about three-dimensional point cloud visualization and/or massive amounts of data points. So, if anyone could point me to a suitable solution for my problem, I would be really thankful.
EDIT: The point set is contained in a file which is formatted as follows:
0.000000000000000 0.000000000000000
1.000000000000000 1.000000000000000
1
0.020375738732779 0.026169010160356
0.050815740313746 0.023209931647163
0.072530406907906 0.023975230642589
The first data vector is the one in the line below the single "1", i.e. (0.020375738732779, 0.026169010160356). How do I read this into a vector in Python? I can open the file using f = open("pointset file").
Install matplotlib and import pyplot:
import matplotlib.pyplot as plt
Assuming this is your data:
x = [1, 2, 5, 1, 5, 7, 8, 3, 2, 6]
y = [6, 7, 1, 2, 6, 2, 1, 6, 3, 1]
If you need to, you can use a comprehension to split the coordinates into separate lists:
x = [p[0] for p in points]
y = [p[1] for p in points]
Plotting is as simple as:
plt.scatter(x=x, y=y)
Result:
Many customizations are possible.
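For example, marker size and colour are easy to adjust, and an equal aspect ratio often suits point sets (all of this is optional styling):
plt.scatter(x, y, s=10, c='black')  # smaller, black markers
plt.gca().set_aspect('equal')       # same scale on both axes
plt.show()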
EDIT: following question edit
In order to read the file (skipping the two bounds lines and the lone "1" before the data starts):
x = []
y = []
with open('pointset_file.txt', 'r') as f:
    for _ in range(3):   # skip the axis bounds and the single "1"
        next(f)
    for line in f:
        coords = line.split()
        x.append(float(coords[0]))
        y.append(float(coords[1]))
You could read your data as follows and plot it with a scatter plot. This approach assumes a small amount of data in exactly the format you presented, not a CSV file.
import matplotlib.pyplot as plt

with open("pointset file") as fid:
    lines = fid.read().split("\n")

# lines[:2] look like the bounds for each axis; if so, use them in the plot
data = [[float(d) for d in line.split(" ") if d]
        for line in lines[3:] if line.strip()]
x, y = zip(*data)   # transpose the (x, y) pairs into two coordinate sequences
plt.scatter(x, y)
plt.show()
Assuming you want a plot looking pretty much exactly like the sample image you give, and you want the plot to display the data with both axes in equal proportion, one could use a general purpose multimedia library like pygame to achieve this:
#!/usr/bin/env python3

import sys

import pygame

# windows will never be larger than this in their largest dimension
MAX_WINDOW_SIZE = 400

BG_COLOUR = (255, 255, 255,)
FG_COLOUR = (0, 0, 0,)
DATA_POINT_SIZE = 2

pygame.init()

if len(sys.argv) < 2:
    print('Error: need filename to read data from')
    pygame.quit()
    sys.exit(1)
else:
    data_points = []
    # read in data points from file first
    with open(sys.argv[1], 'r') as file:
        for _ in range(3):  # discard first 3 lines of file
            next(file)
        # now the rest of the file contains actual data to process
        data_points.extend(tuple(float(x) for x in line.split()) for line in file)
    # file read complete. now let's find the min and max bounds of the data
    top_left = [float('+Inf'), float('+Inf')]
    bottom_right = [float('-Inf'), float('-Inf')]
    for datum in data_points:
        if datum[0] < top_left[0]:
            top_left[0] = datum[0]
        if datum[1] < top_left[1]:
            top_left[1] = datum[1]
        if datum[0] > bottom_right[0]:
            bottom_right[0] = datum[0]
        if datum[1] > bottom_right[1]:
            bottom_right[1] = datum[1]
    # calculate space dimensions
    space_dimensions = (bottom_right[0] - top_left[0], bottom_right[1] - top_left[1])
    # take the biggest of the X or Y dimensions of the point space and scale it
    # up to our maximum window size
    biggest = max(space_dimensions)
    scale_factor = MAX_WINDOW_SIZE / biggest  # all points will be scaled up by this factor
    # screen dimensions (pygame expects integer pixel sizes)
    screen_dimensions = tuple(int(sd * scale_factor) for sd in space_dimensions)
    # basic init and draw all points to screen
    display = pygame.display.set_mode(screen_dimensions)
    display.fill(BG_COLOUR)
    for point in data_points:
        # translate and scale each point
        x = (point[0] - top_left[0]) * scale_factor
        y = (point[1] - top_left[1]) * scale_factor
        pygame.draw.circle(display, FG_COLOUR, (int(x), int(y)), DATA_POINT_SIZE)
    pygame.display.update()
    while True:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()
                sys.exit(0)
        pygame.time.wait(50)
Execute this script and pass the name of the file which holds your data in as the first argument. It will spawn a window with the data points displayed.
I generated a bunch of uniformly distributed random x,y points to test it, with:
from random import random

for _ in range(1000):
    print(random(), random())
This produces a window looking like the following:
If the space your data points are within is not of square size, the window shape will change to reflect this. The largest dimension of the window, either width or height, will always stay at a specified size (I used 400px as a default in my demo).
Admittedly, this is not the most elegant or concise solution, and it reinvents the wheel a little; however, it gives you the most control over how the data points are displayed, and it handles both reading in the file data and displaying it.
To read your file:
import pandas as pd
import numpy as np

df = pd.read_csv('your_file',
                 sep=r'\s+',      # any run of whitespace as the separator
                 header=None,
                 skiprows=3,
                 names=['x', 'y'])
For now I've created a random dataset:
import random

df = pd.DataFrame({'x': [random.uniform(0, 1) for n in range(100)],
                   'y': [random.uniform(0, 1) for n in range(100)]})
I prefer Plotly for any kind of figure:
import plotly.express as px

fig = px.scatter(df,
                 x='x',
                 y='y')
fig.show()
From here you can easily update labels, colors, etc.
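For instance (optional styling; the particular labels and marker settings here are just examples):
fig.update_layout(title='Point set', xaxis_title='x', yaxis_title='y')
fig.update_traces(marker=dict(size=5, color='black'))
fig.show()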

How to divide an image to blocks, process them, and merge them back together in Python?

I want to divide my image into 4x4 blocks, then upscale every block, and finally merge them back together
I have consulted various approaches on Stack Overflow, but they do not mention how to merge the blocks back together.
Here's a copy-paste of my answer to this post, with the addition of how to reassemble the images:
I would do something like the code below. In my example I used parts of images from skimage.data to illustrate the method, and made the shapes and sizes different so that it looks prettier, but you can do the same for your data by adjusting those parameters.
from skimage import data
from matplotlib import pyplot as plt
import numpy as np

astronaut = data.astronaut()
coffee = data.coffee()
arr = np.stack([coffee[:400, :400, :], astronaut[:400, :400, :]])

plt.imshow(arr[0])
plt.title('arr[0]')
plt.figure()
plt.imshow(arr[1])
plt.title('arr[1]')

# split each 400x400 image into a 4x4 grid of 100x100 blocks
arr_blocks = arr.reshape(arr.shape[0], 4, 100, 4, 100, 3).swapaxes(2, 3)
arr_blocks = arr_blocks.reshape(-1, 100, 100, 3)

for i, block in enumerate(arr_blocks):
    plt.figure(10 + i//16, figsize=(10, 10))
    plt.subplot(4, 4, i % 16 + 1)
    plt.imshow(block)
    plt.title(f'block {i}')

# process the blocks in batches, e.g.:
# batch_size = 9
# some_outputs_list = []
# for i in range(arr_blocks.shape[0]//batch_size + ((arr_blocks.shape[0] % batch_size) > 0)):
#     some_outputs_list.append(some_function(arr_blocks[i*batch_size:(i+1)*batch_size]))
Output:
And for reassembling the images I would do something like this:
arr_blocks = arr_blocks.reshape(-1, 4, 4, 100, 100, 3).swapaxes(2, 3)
arr_blocks = arr_blocks.reshape(-1, 400, 400, 3)

for i, block in enumerate(arr_blocks):
    plt.figure()
    plt.imshow(block)
    plt.title(f'reconstruction {i}')
Output:
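Since the question also asks about upscaling every block before merging, here is a sketch of how that could slot in between the two steps above, using skimage.transform.resize as a stand-in for whatever per-block processing you need (upscale_and_merge and its parameters are illustrative names, not from the original answer):
import numpy as np
from skimage.transform import resize

def upscale_and_merge(blocks, grid=(4, 4), new_size=(200, 200)):
    # blocks: the (N, 100, 100, 3) array produced by the splitting step above
    up = np.stack([
        resize(b, new_size + (3,), preserve_range=True).astype(np.uint8)
        for b in blocks
    ])
    h, w = new_size
    n_img = up.shape[0] // (grid[0] * grid[1])
    # inverse of the splitting reshape, with the new block size
    merged = up.reshape(n_img, grid[0], grid[1], h, w, 3).swapaxes(2, 3)
    return merged.reshape(n_img, grid[0] * h, grid[1] * w, 3)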

NumPy 2D image array: how to apply a formula to only pixels that satisfy a condition?

import numpy as np
from PIL import Image

def isblack(rgb):
    return (rgb[0] == 0) and (rgb[1] == 0) and (rgb[2] == 0)

a = Image.open('image1.jpg')
a = np.array(a)  # RGB image
[h, w, chan] = np.shape(a)
filtsz = 9
# comparing subimages a[50:59, 60:69] and a[100:109, 120:129], for example
srcTop = 50
srcLeft = 60
dstTop = 100
dstLeft = 120
ssd = 0  # sum of square difference
for i in range(filtsz):
    for j in range(filtsz):
        if not isblack(a[dstTop+i, dstLeft+j, :]):
            ssd += sum((a[dstTop+i, dstLeft+j] - a[srcTop+i, srcLeft+j])**2)
print(ssd)
The naive implementation is to loop over all pixels that satisfy the condition, then compute.
However, this is very slow.
How can I make it faster? I'm looking for a way that uses indexing. For example, something like the following pseudocode:
selected = [not isblack(pixel) for pixel in image] # 2D array contains 0 if black, 1 if not black
diff = [(a[pixel] - b[pixel])**2 for pixel in a] # 2D array contains the square difference at each pixel
ssd = sum(diff * selected) # sum only positions that satisfy the condition
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

img = Image.open("image/image1.jpg")

filtsz = 100  # increased from 9 to 100 for display purposes
srcTop = 50
srcLeft = 60
dstTop = 100
dstLeft = 120

npimg = np.array(img)

# indexing
subimg_src = npimg[srcTop:srcTop+filtsz, srcLeft:srcLeft+filtsz, :]
subimg_dst = npimg[dstTop:dstTop+filtsz, dstLeft:dstLeft+filtsz, :]

fig, ax = plt.subplots(1, 2)
ax[0].imshow(subimg_src)
ax[1].imshow(subimg_dst)

# channel axis: 2. np.any over it is True wherever a pixel is not black;
# keepdims=True so the mask broadcasts against the (h, w, 3) difference
selected = np.any(subimg_dst, axis=2, keepdims=True)
# cast before subtracting: uint8 arithmetic would wrap around at 0 and 255
diff = subimg_src.astype(np.int64) - subimg_dst.astype(np.int64)
ssd = np.sum(diff**2 * selected)
print(ssd)
Example image:

Perlin noise in Python's noise library

I have a problem with generating Perlin noise for my project. As I wanted to understand how to use the library properly, I tried to follow this page step by step: https://medium.com/@yvanscher/playing-with-perlin-noise-generating-realistic-archipelagos-b59f004d8401
In the first part, there is this code:
import noise
import numpy as np
from scipy.misc import toimage

shape = (1024, 1024)
scale = 100.0
octaves = 6
persistence = 0.5
lacunarity = 2.0

world = np.zeros(shape)
for i in range(shape[0]):
    for j in range(shape[1]):
        world[i][j] = noise.pnoise2(i/scale,
                                    j/scale,
                                    octaves=octaves,
                                    persistence=persistence,
                                    lacunarity=lacunarity,
                                    repeatx=1024,
                                    repeaty=1024,
                                    base=0)
toimage(world).show()
I copy-pasted it with a small change at the end (toimage is obsolete), so I have:
import noise
import numpy as np
from PIL import Image

shape = (1024, 1024)
scale = 100
octaves = 6
persistence = 0.5
lacunarity = 2.0
seed = np.random.randint(0, 100)

world = np.zeros(shape)
for i in range(shape[0]):
    for j in range(shape[1]):
        world[i][j] = noise.pnoise2(i/scale,
                                    j/scale,
                                    octaves=octaves,
                                    persistence=persistence,
                                    lacunarity=lacunarity,
                                    repeatx=1024,
                                    repeaty=1024,
                                    base=seed)
Image.fromarray(world, mode='L').show()
I tried a lot of different modes, but this noise is not even close to coherent noise. My result is something like this (mode='L'). Could someone explain to me what I am doing wrong?
Here is the working code; I took the liberty of cleaning it up a little, see the comments for details. As a final piece of advice: when testing code, use matplotlib for visualization. Its imshow() function is far more robust than PIL's show().
import noise
import numpy as np
from PIL import Image

shape = (1024, 1024)
scale = .5
octaves = 6
persistence = 0.5
lacunarity = 2.0
seed = np.random.randint(0, 100)

# make coordinate grid on [0,1]^2
x_idx = np.linspace(0, 1, shape[0])
y_idx = np.linspace(0, 1, shape[1])
world_x, world_y = np.meshgrid(x_idx, y_idx)

# apply perlin noise; instead of np.vectorize, consider using itertools.starmap()
world = np.vectorize(noise.pnoise2)(world_x/scale,
                                    world_y/scale,
                                    octaves=octaves,
                                    persistence=persistence,
                                    lacunarity=lacunarity,
                                    repeatx=1024,
                                    repeaty=1024,
                                    base=seed)

# here was the error: one needs to normalize the image first
# (could be done without copying the array, though)
img = np.floor((world + .5) * 255).astype(np.uint8)  # <- normalize world first
Image.fromarray(img, mode='L').show()
If someone comes after me: with the noise library you should rather normalize with
img = np.floor((world + 1) * 127).astype(np.uint8)
since pnoise2 can return values outside [-0.5, 0.5]. This way there will not be any spots of abnormal colour, inverted relative to what they should be.
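A quick way to sanity-check the raw noise, following the matplotlib advice above (a minimal sketch assuming world from the code above):
import matplotlib.pyplot as plt

plt.imshow(world, cmap='gray')  # imshow rescales float data automatically
plt.colorbar()                  # shows the actual value range of the noise
plt.show()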

OpenCV affine transformation won't perform

I'm trying to perform a basic affine transformation using pivot points.
import cv2
import numpy as np
import PIL
import matplotlib.pyplot as plt

img = cv2.imread('earth.png')
img_pivots = cv2.imread('earth_keys.png')
map_img = cv2.imread('earth2.png')
map_pivots = cv2.imread('earth2_keys.png')

pts_img_R = np.transpose(np.where(img_pivots[:, :, 2] > 0))
pts_img_G = np.transpose(np.where(img_pivots[:, :, 1] > 0))
pts_img_B = np.transpose(np.where(img_pivots[:, :, 0] > 0))
pts_img = np.vstack([pts_img_R, pts_img_G, pts_img_B])

pts_map_R = np.transpose(np.where(map_pivots[:, :, 2] > 0))
pts_map_G = np.transpose(np.where(map_pivots[:, :, 1] > 0))
pts_map_B = np.transpose(np.where(map_pivots[:, :, 0] > 0))
pts_map = np.vstack([pts_map_R, pts_map_G, pts_map_B])

M = cv2.estimateRigidTransform(pts_map.astype(np.float32), pts_img.astype(np.float32), True)
dst = cv2.warpAffine(map_img, M, (img.shape[1], img.shape[0]))

plt.subplot(121), plt.imshow(img), plt.title('earth.png')
plt.subplot(122), plt.imshow(dst), plt.title('earth2.png transformed')
plt.show()
On both images I made 3 points (R, G & B) and saved them in separate images ('earth_keys.png' for 'earth.png' and 'earth2_keys.png' for 'earth2.png'). All I want is to match pivot points on 'earth2.png' with pivot points on 'earth.png'.
Still, all I get after transformation is this
I assume I misplaced some arguments or something like that; I tried all combinations and got all kinds of wrong results, but I still can't spot the problem.
Example images (with pivots)
Edit:
Changed pivots number to 6
Still wrong transformation
M is now equal to
array([[  4.33809524e+00,   8.28571429e-01,  -5.85633333e+02],
       [ -6.22380952e+00,  -1.69285714e+00,   1.03468333e+03]])
Example with 6 pivots
How confident are you in your pivot points?
If I plot them on your images, I obtain this:
Which gives, after manual superposition, something that looks like your result:
If I define points manually for 3 correspondences, I get this:
pts_img = np.vstack([[68, 33], [22, 84], [113, 87]])
pts_map = np.vstack([[115, 101], [30, 199], [143, 198]])
It's still not perfect, but it may be closer to what you want to achieve.
To conclude, I'd recommend that you check how you compute your keypoints and, in case of doubt, do a manual superposition.
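As a side note, estimateRigidTransform was removed in OpenCV 4; cv2.estimateAffine2D is the modern replacement. A minimal sketch using the manual correspondences above (assuming map_img and img from the question, and that the points are in (x, y) order as warpAffine expects):
import cv2
import numpy as np

pts_img = np.float32([[68, 33], [22, 84], [113, 87]])
pts_map = np.float32([[115, 101], [30, 199], [143, 198]])

# fits a full affine transform; inliers flags which correspondences were used
M, inliers = cv2.estimateAffine2D(pts_map, pts_img)
dst = cv2.warpAffine(map_img, M, (img.shape[1], img.shape[0]))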
