I have images (png) that are 128x128 pixels, how do I convert the image so that each pixel in the image is closest in color to the ones in the following array?
The array will probably get bigger with more specific colors, but in this case:
[(0, 255, 100), (100, 100, 100), (255, 255, 255), (0, 0, 0), (156, 126, 210)]
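One straightforward approach (a minimal sketch; the file name input.png and plain Euclidean distance in RGB space are my assumptions, not requirements): compute the distance from every pixel to every palette color and take the nearest one per pixel.
import numpy as np
from PIL import Image

palette = np.array([(0, 255, 100), (100, 100, 100), (255, 255, 255),
                    (0, 0, 0), (156, 126, 210)], dtype=np.int32)

img = np.array(Image.open("input.png").convert("RGB"), dtype=np.int32)        # (128, 128, 3)

# Squared Euclidean distance from every pixel to every palette color,
# then the index of the nearest color per pixel.
dists = ((img[:, :, None, :] - palette[None, None, :, :]) ** 2).sum(axis=-1)  # (128, 128, 5)
nearest = dists.argmin(axis=-1)                                                # (128, 128)

quantized = palette[nearest].astype(np.uint8)
Image.fromarray(quantized).save("quantized.png")
For a 128x128 image and a small palette this vectorised version is fast; for a much larger palette a KD-tree query (e.g. scipy.spatial.cKDTree) would scale better.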
I was wondering whether anyone is aware of any approaches to discover which portion of an image has been pixelated. For example, take the following sausage dog, to which I have applied the following code:
import cv2
import numpy as np

img = cv2.imread("sausage.jpg")
blurred_img = cv2.blur(img, (21, 21))                            # blur the whole image
mask = np.zeros(img.shape, dtype=np.uint8)
mask = cv2.circle(mask, (200, 100), 100, [255, 255, 255], -1)    # filled white circle
out = np.where(mask == [255, 255, 255], blurred_img, img)        # keep the blur only inside the circle
I would like to zoom in on the circle centered at (200, 100) with a radius of 100.
I have tried looking at edges, but this doesn't give anything definitive and I haven't got an algorithm to extract the information yet.
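One direction that might be worth trying (a rough sketch, not a tested detector; the window size and threshold are placeholder values that would need tuning): blurred regions carry little high-frequency energy, so the local variance of the Laplacian tends to drop inside them.
import cv2
import numpy as np

img = cv2.imread("sausage.jpg", cv2.IMREAD_GRAYSCALE)
lap = cv2.Laplacian(img, cv2.CV_64F)

# Local variance of the Laplacian via box filters: var = E[x^2] - (E[x])^2
k = 15                                   # window size (assumption)
mean = cv2.blur(lap, (k, k))
mean_sq = cv2.blur(lap * lap, (k, k))
local_var = mean_sq - mean * mean

# Pixels with low local variance are candidates for the blurred region.
candidates = (local_var < 50).astype(np.uint8) * 255   # threshold is a guess
cv2.imwrite("blur_candidates.png", candidates)
From the candidate mask you could then try to recover the centre and radius, e.g. with cv2.minEnclosingCircle on the largest contour or cv2.HoughCircles.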
I have the following transparent images.
What I want to do is to paste them on an image with a background of a specific color. The color of the background is randomized like this:
rand1, rand2, rand3 = (random.randint(0, 255),
                       random.randint(0, 255),
                       random.randint(0, 255))
background = Image.new('RGBA', png.size, (rand1, rand2, rand3))
alpha_composite = Image.alpha_composite(background, png)
Unfortunately, some of the logos don't go well with their background colors. The background color sometimes comes close to color(s) inside the logo, which makes the logo either partially or completely invisible. Here is an example where the background color is almost identical to the orange color in the Ubuntu logo:
What I did was to get all of the colors from each logo and save them in a list of tuples like this. This is actually a list of lists of tuples. I've just edited it now to highlight which nested list of tuples belongs to which logo:
Intel = [(0, 113, 197)]
Corsair = [(4, 7, 7)]
Google = [(66, 133, 244), (234, 67, 53), (251, 188, 5), (52, 168, 83), (0, 255, 255), (255, 128, 0), (255, 255, 0)]
Riot = [(209, 54, 57), (255, 255, 255), (226, 130, 132), (0, 0, 0)]
What I want to do is to use the above ^ information to randomly choose background colours so that no part of a logo is made invisible. I'm asking for suggestions on strategies to go about this.
This is the function that adds a background color to the logos:
def logo_background(path):
    rand1, rand2, rand3 = (random.randint(0, 255),
                           random.randint(0, 255),
                           random.randint(0, 255))
    png = Image.open(path).convert('RGBA')
    colors = extcolors.extract_from_path(path)
    background = Image.new('RGBA', png.size, (rand1, rand2, rand3))
    alpha_composite = Image.alpha_composite(background, png)
    return alpha_composite
>>> extcolors.extract_from_path(path)
[((0, 113, 197), 25727), 56235]
# for the intel logo, which is just blue with transparent background
Some logos are completely black. The Corsair logo is an all-black logo with a transparent background, but the code did not select the right background.
I think using a three-component vector like RGB is difficult for random choice. I would convert it to the HSV color system (hue, saturation, value) first. Then we only need to worry about hue and can use random.choice to choose a value from a 1D list of possible values.
import colorsys
import random
import numpy as np

Google = [(66, 133, 244), (234, 67, 53), (251, 188, 5), (52, 168, 83), (0, 255, 255), (255, 128, 0), (255, 255, 0)]
threshold = 0.1  # no hue value closer than threshold to a logo color

# convert to HSV (colorsys returns (hue, saturation, value), each in 0..1)
GoogleHsv = [colorsys.rgb_to_hsv(r/255.0, g/255.0, b/255.0) for r, g, b in Google]
print("GoogleHsv:", GoogleHsv)

# list of possible hue values that are at least threshold away from all logo hues
# (note: hue is circular, so values near 0 and 1 are actually close to each other)
choices = [x for x in np.linspace(0, 1, 201) if min(abs(x - h) for h, s, v in GoogleHsv) > threshold]
print("choices:", choices)

h = random.choice(choices)
s = random.random()  # random saturation
v = random.random()  # random value (brightness)
# you could also use constants for s, v to always get a vibrant/dark color
color = [int(x * 255) for x in colorsys.hsv_to_rgb(h, s, v)]  # converting back to RGB
print("color:", color)
Hope this helps. Have a nice day.
EDIT
For your function, it would look like this:
import extcolors
from PIL import Image

def logo_background(path):
    threshold = 0.1
    png = Image.open(path).convert('RGBA')
    # [0] -> the list of ((r, g, b), pixel_count) pairs reported by extcolors
    used_colors_rgb = extcolors.extract_from_path(path)[0]
    used_hsv = [colorsys.rgb_to_hsv(r/255.0, g/255.0, b/255.0) for (r, g, b), _ in used_colors_rgb]
    choices = [x for x in np.linspace(0, 1, 201) if min(abs(x - h) for h, s, v in used_hsv) > threshold]
    # keep saturation and value in the upper half so the background is never too dark or washed out
    h, s, v = random.choice(choices), (random.random() + 1) / 2, (random.random() + 1) / 2
    color = [int(x * 255) for x in colorsys.hsv_to_rgb(h, s, v)]
    background = Image.new('RGBA', png.size, tuple(color))
    alpha_composite = Image.alpha_composite(background, png)
    return alpha_composite
logo_background("google.png")
I'm trying to pad an RGB image with the magenta color (255, 0, 255) using np.pad. But I'm getting an error when using RGB values as constant_values. For example:
import numpy as np
from scipy.misc import face
import matplotlib.pyplot as plt
def pad_img(img, pad_with):
    pad_value = max(img.shape[:-1])
    img_padded = np.pad(img,
                        ((0, (pad_value - img.shape[0])),  # pad bottom
                         (0, (pad_value - img.shape[1])),  # pad right
                         (0, 0)),                          # don't pad channels
                        mode='constant',
                        constant_values=pad_with)
    fig, (ax1, ax2) = plt.subplots(1, 2)
    ax1.imshow(img)
    ax2.imshow(img_padded)
    plt.show()
This works fine (padding with white color):
img = face()
pad_img(img, pad_with=255)
And this does not (padding with magenta):
img = face()
pad_img(img, pad_with=(255, 0, 255))
Throwing:
ValueError: operands could not be broadcast together with remapped shapes [original->remapped]: (3,) and requested shape (3,2)
I think what you are looking for is:
img = face()
pad_img(img, pad_with=(((255, 0, 255), (255, 0, 255)), ((255, 0, 255), (255, 0, 255)), (0, 0)))
According to the NumPy docs, constant_values is of the form:
((before_1, after_1), ... (before_N, after_N))
And I think that is why the error says it got shape (3,) (the single (255, 0, 255) tuple) for constant_values while it requested shape (3, 2) (a (before, after) pair per axis, as in the call above).
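If that nested-tuple form does not broadcast on your NumPy version, a simpler workaround (just a sketch, not the only option) is to pad each channel with its own scalar and stack the results:
import numpy as np

def pad_img_rgb(img, color=(255, 0, 255)):
    # Pad an RGB image to a square, one channel at a time, so that each
    # channel receives a plain scalar for constant_values.
    pad_value = max(img.shape[:-1])
    pad_width = ((0, pad_value - img.shape[0]),   # pad bottom
                 (0, pad_value - img.shape[1]))   # pad right
    channels = [np.pad(img[..., c], pad_width, mode='constant',
                       constant_values=color[c]) for c in range(3)]
    return np.stack(channels, axis=-1)

# e.g. padded = pad_img_rgb(face(), (255, 0, 255))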
I'm trying to get the rgba values of the pixels of an image.
Google suggests I use code similar to this:
from PIL import Image
im = Image.open("C:/Stuff/image.png", "r")
px = list(im.getdata())
My problem is that the data is not always in rgba format.
On some images it does return rgba
[(0, 0, 0, 255), (0, 0, 0, 255), (0, 0, 255, 255), [...]
while on others it returns rgb
[(0, 0, 0), (0, 0, 0), (0, 0, 255), [...]
and on some it returns whatever this is
[0, 0, 1, [...]
Is there a way to always get rgba returned?
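A likely explanation (hedging here, since it depends on how each file was saved): getdata() returns tuples in whatever mode the image has, so 'RGBA' files give 4-tuples, 'RGB' files give 3-tuples, and palette ('P') images give single palette indices. Converting to RGBA first should make the output consistent:
from PIL import Image

# Convert up front so getdata() always yields (r, g, b, a) tuples,
# regardless of whether the file was stored as RGB, RGBA, or palette mode.
im = Image.open("C:/Stuff/image.png").convert("RGBA")
px = list(im.getdata())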
I'm trying to create an image from a 1D numpy array of integers so that changes to this array are reflected in the image. It seems that Image.frombuffer perfectly fits my needs. Here is my attempt:
from PIL import Image
import numpy as np
data = np.full(100, 255, dtype=np.int32)
img = Image.frombuffer('RGB', (10, 10), data)
print(list(img.getdata()))
I expected to see a list of 100 tuples (0, 0, 255). But what I'm actually getting is (0, 0, 255), (0, 0, 0), (0, 0, 0), (0, 255, 0), (0, 0, 0), (0, 0, 0), (255, 0, 0), (0, 0, 0), (0, 0, 255), (0, 0, 0), (255, 0, 0), ...
What is the reason for this behavior?
'RGB' uses three bytes per pixel. The buffer that you provided is an array with data type numpy.int32, which uses four bytes per element. So you have a mismatch.
One way to handle it is to use mode 'RGBA':
img = Image.frombuffer('RGBA', (10, 10), data)
Whether or not that is a good solution depends on what you are going to do with the image.
Also note that whether you get (255, 0, 0, 0) or (0, 0, 0, 255) for the RGBA pixels depends on the endianness of the integers in data.
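A quick way to see which case applies on a given machine (a small sketch): view the int32 buffer as raw bytes, which is exactly what frombuffer reads.
import numpy as np

data = np.full(100, 255, dtype=np.int32)
# On a little-endian machine each 255 is stored as ff 00 00 00,
# so the first pixel read as RGBA starts with 255.
print(data.view(np.uint8)[:8])   # e.g. [255   0   0   0 255   0   0   0]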
For an RGB image, here's an alternative:
data = np.zeros(300, dtype=np.uint8)  # 10 x 10 pixels x 3 bytes per pixel
# Set the blue channel to 255.
data[2::3] = 255
img = Image.frombuffer('RGB', (10, 10), data)
Without more context for the problem, I don't know if that is useful for you.