I'm trying to learn how to create a set of images like this. The idea is that there are two seemingly random images, but when you XOR them, you find a secret message. I want to use Python Pillow, probably along with a simple image editor like paint.net. So my question consists of a few parts:
How do I generate an image full of random black or white pixels in Pillow?
How can I ensure certain areas of my images aren't actually random, but instead identical, so that an XOR comparison will reveal them?
The process of creating those images is really simple. Here is an example of how you could do it (not the most efficient way):
Create two output images of the same size
Create a template of the same size, where 1 (white) means foreground (the hidden message) and 0 (black) means background (purely random).
Iterate over both images and the template in one loop:
If the template at current position says 0, draw two random numbers (zero or one) and assign them to the current pixel of each output image
If the template says 1, draw only one random number and assign it to both pixels
I will not go into detail on how to read your template image, create binary output images, and iterate over them using Pillow, as I have never tried Pillow. Drawing random numbers, however, is very simple:
x = random.randint(0,1) (see https://docs.python.org/2/library/random.html#random.randint)
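To make the recipe above concrete, here is a minimal sketch using Pillow and NumPy; the filenames ('template.png', 'out1.png', 'out2.png') are just placeholders, and it assumes the template is a strictly black-and-white image:
import numpy as np
from PIL import Image

# Load the template as a boolean array (True = white = hidden message).
template = np.array(Image.open('template.png').convert('1'))

# Two independent random 0/1 arrays, one per output image.
a = np.random.randint(2, size=template.shape)
b = np.random.randint(2, size=template.shape)

# Where the template is foreground, copy image A's pixel into image B,
# so both images agree there and the XOR comes out 0 (black).
b[template] = a[template]

# Save both as 8-bit black/white PNGs.
Image.fromarray((a * 255).astype(np.uint8)).save('out1.png')
Image.fromarray((b * 255).astype(np.uint8)).save('out2.png')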
To get you started, here's a way to make random binary images:
from PIL import Image
import numpy as np
# Make lots of ones and zeros.
data = np.random.randint(2, size=(100,100))
# Cast as 8-bit ints, 0 and 255.
data = data.astype(np.uint8) * 255
# Cast as an image. Pillow guesses mode.
img = Image.fromarray(data)
Result (magnified to 300 × 300 pixels):
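If you want to reproduce the magnified preview yourself (just a guess at how it was made), nearest-neighbour resampling keeps the pixels crisp:
big = img.resize((300, 300), Image.NEAREST)  # enlarge without smoothing
big.save('random_300x300.png')               # output name is just an example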
For posterity, here's what I did:
First I made a mask image: a white background with a red box and black text inside the box. It looks like this:
Here's the script I wrote to make the two fuzzy images:
from PIL import Image
import random
WHITE = (255, 255, 255, 255)
RED = (255, 0, 0, 255)
BLACK = (0, 0, 0, 255)
wb = [WHITE,BLACK]
rng = random.SystemRandom()
orig = Image.open('mask.png')
origData = list(orig.getdata())
n1 = Image.new(orig.mode, orig.size)
n2 = Image.new(orig.mode, orig.size)
n1data = []
n2data = []
for x in origData:
    if x == WHITE:
        n1data.append(rng.choice(wb))
        n2data.append(rng.choice(wb))
    elif x == RED:
        y = bool(rng.getrandbits(1))
        if y:
            n1data.append(WHITE)
            n2data.append(BLACK)
        else:
            n1data.append(BLACK)
            n2data.append(WHITE)
    elif x == BLACK:
        y = bool(rng.getrandbits(1))
        if y:
            n1data.append(WHITE)
            n2data.append(WHITE)
        else:
            n1data.append(BLACK)
            n2data.append(BLACK)
n1.putdata(n1data)
n2.putdata(n2data)
n1.save('n1.png')
n2.save('n2.png')
orig.close()
n1.close()
n2.close()
Resulted in these:
XOR them together and you get this:
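The XOR step itself isn't in the script; a minimal sketch of it with NumPy, using the two files produced above, could look like this:
import numpy as np
from PIL import Image

# Convert to RGB so the alpha channel doesn't get XORed down to zero.
a = np.array(Image.open('n1.png').convert('RGB'))
b = np.array(Image.open('n2.png').convert('RGB'))

# Matching pixels XOR to 0 (black); mismatching pixels become white.
Image.fromarray(np.bitwise_xor(a, b)).save('xor.png')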
I'm pretty new to image processing and Python, so bear with me.
I'm trying to take a big image (5632x2048), which is basically a map of the world with provinces (ripped from Hearts of Iron 4) where each province is colored a different RGB value, and recolor it with a set of colors, each corresponding to a certain country. I'm currently using this code:
import numpy as np
import cv2
import sqlite3
dbPath = 'PATH TO DB'
dirPath = 'PATH TO IMAGE'
con = sqlite3.connect(dbPath)
cur = con.cursor()
im = cv2.imread(dirPath)
cur.execute('SELECT * FROM Provinces ORDER BY id')
provinceTable = cur.fetchall()
for line in provinceTable:
    input_rgb = [line[1], line[2], line[3]]
    if line[7] == None:
        output_rgb = [255,255,255]
    else:
        output_rgb = line[7].replace('[', '').replace(']','').split(',')
    im[np.all(im == (int(input_rgb[0]), int(input_rgb[1]), int(input_rgb[2])), axis=-1)] = (int(output_rgb[0]), int(output_rgb[1]), int(output_rgb[2]))
cv2.imwrite('result.png',im)
The problem I'm running into is that it's painfully slow (50 minutes in and it hasn't finished), due to the fact that I'm definitely using numpy wrong by looping through the image instead of vectorizing (a concept I'm still new to and have no idea how to apply). Google hasn't been very helpful either.
What's the best way to do this?
Edit: forgot to mention that the amount of values I'm replacing is pretty big (~15000)
As I mentioned in the comments, I think you'll want to use np.take(LUT, yourImage), where LUT is a Lookup Table.
So, if you make a dummy image the same shape as yours:
import numpy as np
# Make a dummy image of 5632x2048 RGB values
im = np.random.randint(0,256,(5632,2048,3), np.uint8)
that will be 34MB. Now reshape it to a tall vector of RGB values:
# Make image into a tall vector, as tall as necessary and 3 RGB values wide
v = im.reshape((-1,3))
which will be of shape (11534336, 3). Then flatten that to single 24-bit values, rather than three 8-bit values, with np.dot():
# Make into tall vector of shape 11534336x1 rather than 11534336x3
v24 = np.dot(v.astype(np.uint32),[1,256,65536])
You will now have a 1-D vector of 24-bit pixel values with shape (11534336,)
Now create your RGB lookup table (I am making all 2^24 RGB entries here, you may need less):
RGBLUT = np.zeros((2**24,3),np.uint8)
And set up the LUT. So, supposing you want to map all colours in the original image to mid-grey (128) in the output image:
RGBLUT[:] = 128
Now do the np.dot() thing just the same as we did with the image, so we get a LUT with shape (2**24,) rather than shape (2**24, 3):
LUT24 = np.dot(RGBLUT.astype(np.uint32), [1,256,65536])
Then do the actual lookup in the table:
result = np.take(LUT24, v24)
On my Mac, that takes 334ms for your 5632x2048 image.
Then reshape and convert back to three 8-bit values by shifting and ANDing, to undo the effect of np.dot().
I am not currently in a position to test the re-assembly, but it will look pretty much like this:
BlueChannel = result & 0xff           # Blue channel is bottom 8 bits
GreenChannel = (result >> 8) & 0xff   # Green channel is middle 8 bits
RedChannel = (result >> 16) & 0xff    # Red channel is top 8 bits
Now combine those three single channels into a 3-channel image:
RGB = np.dstack((RedChannel, GreenChannel, BlueChannel))
And reshape back from tall vector to dimensions of original image:
RGB = RGB.reshape(im.shape)
As regards setting up the LUT to something more interesting than mid-grey: if you want to map, say, orange, i.e. rgb(255,128,0), to magenta, i.e. rgb(255,0,255), you would do something along the lines of:
RGBLUT[np.dot([255,128,0],[1,256,65536])] = [255,0,255]    # map orange to magenta
RGBLUT[np.dot([255,255,255],[1,256,65536])] = [0,0,0]      # map white to black
RGBLUT[np.dot([0,0,0],[1,256,65536])] = [255,255,255]      # map black to white
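Putting the steps together, an untested consolidated sketch of the whole pipeline (with a random stand-in image and a single example mapping) would look roughly like this:
import numpy as np

im = np.random.randint(0, 256, (5632, 2048, 3), np.uint8)        # stand-in image

# Pack each pixel's three 8-bit values into one 24-bit value.
v24 = np.dot(im.reshape((-1, 3)).astype(np.uint32), [1, 256, 65536])

# Build the 2**24-entry RGB lookup table and pack it the same way.
RGBLUT = np.zeros((2**24, 3), np.uint8)
RGBLUT[np.dot([255, 128, 0], [1, 256, 65536])] = [255, 0, 255]   # example mapping
LUT24 = np.dot(RGBLUT.astype(np.uint32), [1, 256, 65536])

# Look up every pixel in one go.
result = np.take(LUT24, v24)

# Unpack back to three 8-bit channels (same order they were packed) and reshape.
out = np.dstack((result & 0xff, (result >> 8) & 0xff, (result >> 16) & 0xff))
out = out.reshape(im.shape).astype(np.uint8)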
Keywords: Python, image processing, LUT, RGB LUT, 24-bit LUT, lookup table.
Here is one way to do that using Numpy and Python/OpenCV. Here I change red to green.
Input:
import cv2
import numpy as np
# load image
img = cv2.imread('test_red.png')
# change color
result = img.copy()
result[np.where((result==[0,0,255]).all(axis=2))] = [0,255,0]
# save output
cv2.imwrite('test_green.png', result)
# Display various images to see the steps
cv2.imshow('result',result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
You can create a mask of the image first and use that to replace the colors. There's likely a pure numpy way of doing this that is faster, but I don't know it.
This code takes ~0.5 seconds to run. You should expect it to take about half a second for each color replacement.
import cv2
import numpy as np
import time
# make image
res = (5632, 2048, 3);
img = np.zeros(res, np.uint8);
# change black to white
black = (0,0,0);
white = (255,255,255);
# make a mask
start_time = time.time();
mask = cv2.inRange(img, black, black);
print("Mask Time: " + str(time.time() - start_time));
# replace color
start_time = time.time();
img[mask == 255] = white;
print("Replace Time: " + str(time.time() - start_time));
In terms of your code, it'll look like this:
for line in provinceTable:
    input_rgb = [line[1], line[2], line[3]]
    input_rgb = (int(input_rgb[0]), int(input_rgb[1]), int(input_rgb[2]))
    if line[7] == None:
        output_rgb = (255,255,255)
    else:
        output_rgb = line[7].replace('[', '').replace(']','').split(',')
        output_rgb = (int(output_rgb[0]), int(output_rgb[1]), int(output_rgb[2]))
    mask = cv2.inRange(im, input_rgb, input_rgb)
    im[mask == 255] = output_rgb
My goal is to generate a color per pixel in order to fill up the whole canvas; however, the generated image always turns out black with only one of its pixels a different color, and I can't seem to figure out what I'm doing wrong.
import random
from PIL import Image
canvas = Image.new("RGB", (300,300))
y = random.randint(1, canvas.width)
x = random.randint(1, canvas.width)
r = random.randint(0,255)
g = random.randint(0,255)
b = random.randint(0,255)
rgb = (r,g,b)
for i in range(canvas.width):
    canvas.putpixel((x,y), (rgb))
canvas.save("test.png", "PNG")
print("Image saved successfully.")
You really should try and avoid using for loops in any Python image processing - they are slow and error-prone.
The easiest and fastest way to make a random image is using vectorised Numpy functions like this:
import numpy as np
from PIL import Image
# Create Numpy array 300x300x3 of random uint8
data = np.random.randint(0, 256, (300,300,3), dtype=np.uint8)
# Make into PIL Image
im = Image.fromarray(data)
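Then save it however you like (the filename is just an example):
im.save('random.png')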
The problem with your code is that you are not iterating over every pixel. I've modified your code to iterate over every pixel, check whether or not it is black (0,0,0), and then set that pixel to your randomly generated rgb value. The three random numbers are regenerated for each pixel and placed back into the rgb tuple, so the next pixel in the loop gets a different rgb value.
The x and y definitions are redundant, as you want a random color for every pixel but do not want random pixel positions, so I have removed them. I added a declaration, pixels = canvas.load(), which gives access to the pixel data so you can iterate over the pixels and change each individual color. I heavily relied on this similar stackoverflow question, if you want further information. Here is my code:
import random
from PIL import Image

canvas = Image.new("RGB", (300,300))
pixels = canvas.load()
width, height = canvas.size
for i in range(width):
    for j in range(height):
        if pixels[i,j] == (0,0,0):
            r = random.randint(0,255)
            g = random.randint(0,255)
            b = random.randint(0,255)
            rgb = (r,g,b)
            canvas.putpixel((i,j), rgb)
canvas.save("test.png", "PNG")
print("Image saved successfully.")
Here is the output produced:
I'm very new to programming, and I am learning more about image processing using PIL.
I have a certain task that requires me to change every specific pixel's color to another color. Since there are more than a few pixels I'm required to change, I've created a for loop to access every pixel. The script "works", at least, but the result is just a black screen with the color (0, 0, 0) in each pixel.
from PIL import Image
img = Image.open('/home/usr/convertimage.png')
pixels = img.load()
for i in range(img.size[0]):
    for j in range(img.size[1]):
        if pixels[i,j] == (225, 225, 225):
            pixels[i,j] = (1)
        elif pixels[i,j] == (76, 76, 76):
            pixels[i,j] = (2)
        else:
            pixels[i,j] = (0)
img.save('example.png')
The image I have is a grayscale image. There are specific colors, and there are gradient colors near the borders. I'm trying to replace each specific color with another color, and then replace the gradient colors with another color.
However, for the life of me, I don't understand why my output comes out as a single (0, 0, 0) color.
I tried to look for an answer online and asked friends, but couldn't come up with a solution.
If anyone out there knows what I'm doing wrong, any feedback is highly appreciated. Thanks in advance.
The issue is that your image is, as you said, greyscale, so on this line:
if pixels[i,j] == (225, 225, 225):
no pixel will ever equal the RGB triplet (225, 225, 225), because the pixels of a greyscale image are single values, not RGB triplets (a white pixel, for example, is simply the greyscale value 255).
It works fine if you change your loop to:
if pixels[i,j] == 29:
    pixels[i,j] = 1
elif pixels[i,j] == 179:
    pixels[i,j] = 2
else:
    pixels[i,j] = 0
Here is the contrast-stretched result:
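If you want to reproduce the contrast stretch for viewing (the raw values 0, 1 and 2 are nearly indistinguishable otherwise), something like ImageOps.autocontrast should do, though that is just a guess at how it was done:
from PIL import ImageOps
ImageOps.autocontrast(img).save('stretched.png')  # spreads 0/1/2 out to 0/~128/255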
You may like to consider doing the conversion using a "Look Up Table", or LUT, as large numbers of if statements can get unwieldy. Basically, each pixel in the image is replaced with a new value found by using its current value as an index into the table. I am doing it with numpy for fun too:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
# Open the input image
PILimage=Image.open("classified.png")
# Use numpy to convert the PIL image into a numpy array
npImage=np.array(PILimage)
# Make a LUT (Look-Up Table) to translate image values. Default output value is zero.
LUT=np.zeros(256,dtype=np.uint8)
LUT[29]=1 # all pixels with value 29, will become 1
LUT[179]=2 # all pixels with value 179, will become 2
# Transform pixels according to LUT - this line does all the work
pixels=LUT[npImage];
# Save resulting image
result=Image.fromarray(pixels)
result.save('result.png')
Result - after stretching contrast:
I am maybe being a bit verbose above, so if you like more terse code:
import numpy as np
from PIL import Image
# Open the input image as numpy array
npImage=np.array(Image.open("classified.png"))
# Make a LUT (Look-Up Table) to translate image values
LUT=np.zeros(256,dtype=np.uint8)
LUT[29]=1 # all pixels with value 29, will become 1
LUT[179]=2 # all pixels with value 179, will become 2
# Apply LUT and save resulting image
Image.fromarray(LUT[npImage]).save('result.png')
Basically, I have two images. One is comprised of white and black pixels, the black pixels making up a word, and the other is the image I'm trying to paste the black pixels on top of. I've pasted the code below; I'm aware that there's an issue with "if pixels[x,y] == (0, 0, 0):", since [x,y] is a tuple and not a list index, but I'm uncertain how else to get it to look for black pixels.
So essentially I need to find, and remember the positions of, the black pixels so that I can paste them onto the first image. Any help is very much appreciated!
image_one = Image.open (image_one)
image_two = Image.open (image_two)
pixels = list(image_two.getdata())
for y in xrange(image_two.size[1]):
    for x in xrange(image_two.size[0]):
        if pixels[x,y] == (0, 0, 0):
            pixels = black_pixels
            black_pixels.append()
image = Image.open (image_one);
image_one.putdata(pixels)
image.save(image_one+ "_X.bmp")
del image_one, image_two;
You're almost there. I am not too familiar with the PIL class, but instead of calling the getdata method, let's use getpixel directly on the image object, and directly set the results into the output image. That eliminates the need to store the set of pixels to overwrite. However, there may be cases beyond what you've listed here where such an approach would be necessary. I created a random image and then set various pixels to black. For this test I used a different condition - if the R channel of the image is greater than 50. You can comment that out and use the other test, which tests for tuple (R,G,B) == (0,0,0) which will work fine.
import PIL.Image

imagea = PIL.Image.open('temp.png')
imageb = PIL.Image.open('temp.png')
for y in xrange(imagea.size[1]):
    for x in xrange(imagea.size[0]):
        currentPixel = imagea.getpixel((x,y))
        if currentPixel[0] > 50:
            # if currentPixel == (0,0,0):  # alternative test for exact black
            # this is a black pixel, you can directly modify image 2 now
            imageb.putpixel((x,y), (0,0,0))
imageb.save('outputfile.png')
An alternative way to do this is just to multiply the two images together. Any pixel that's black in the binary image will be black in the result (multiply by zero) and any pixel that's white in the binary image will be unchanged from the other image in the result (multiply by one).
PIL can do this:
from PIL import Image, ImageChops
image_one = Image.open("image_one.bmp")
image_two = Image.open("image_two.bmp")
out = ImageChops.multiply(image_one, image_two)
out.save("output.bmp")
I have a folder full of images, each containing at least 4 smaller images. I would like to know how I can cut the smaller images out using Python PIL so that they all exist as independent image files. Fortunately there is one constant: the background is either white or black, so I'm guessing I need a way to cut these images out by searching for rows, or preferably columns, which are entirely black or entirely white. Here is an example image:
From the image above, there would be 10 separate images, each containing a number. Thanks in advance.
EDIT: I have another sample image that is more realistic, in the sense that the backgrounds of some of the smaller images are the same colour as the background of the image that contains them, e.g.:
The output would be 13 separate images, each containing one letter.
Using scipy.ndimage for labeling:
import numpy as np
import scipy.ndimage as ndi
from PIL import Image

THRESHOLD = 100
MIN_SHAPE = np.asarray((5, 5))

filename = "eQ9ts.jpg"
im = np.asarray(Image.open(filename))

# Threshold the summed channels, then label the connected bright regions.
gray = im.sum(axis=-1)
bw = gray > THRESHOLD
label, n = ndi.label(bw)

# Bounding-box slices for every labelled region (labels run from 1 to n).
indices = [np.where(label == ind) for ind in xrange(1, n + 1)]
slices = [[slice(ind[i].min(), ind[i].max() + 1) for i in (0, 1)] + [slice(None)]
          for ind in indices]
images = [im[s] for s in slices]

# filter out small images
images = [im for im in images if not np.any(np.asarray(im.shape[:-1]) < MIN_SHAPE)]
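To write each extracted piece out as its own file, a small follow-up (the filename pattern is made up):
for i, piece in enumerate(images):
    Image.fromarray(piece).save("piece_%02d.png" % i)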