Optimising a transform of an image with a given function using PIL? - python

I'm trying to build an output image in which each pixel is taken from a mapped coordinate in a source image.
I've written example code using ImageDraw, but with size around 2000 it is too slow for my purposes.
from PIL import Image, ImageDraw

size = [size]
img = Image.new('RGB', (size, size), color=(255, 200, 100))
draw = ImageDraw.Draw(img)
pix = [pixel data]

for X in range(size):
    for Y in range(size):
        draw.point((X, Y), fill=pix[functionX(X), functionY(Y)])
I'm sure it could be done faster using PIL's own functions than with my code.
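One common way to speed this up is to drop the per-pixel draw.point loop and gather every pixel at once with NumPy fancy indexing. Below is a minimal sketch, assuming the source pixels fit in a NumPy array and that functionX and functionY (the ones here are placeholders) can operate on whole coordinate arrays:

import numpy as np
from PIL import Image

size = 2000

# Hypothetical mappings for illustration; substitute your own.
def functionX(x):
    return (2 * x) % size

def functionY(y):
    return (3 * y) % size

# Load the source pixels as an array instead of a PixelAccess object
# ("source.png" is an assumed file name).
src = np.asarray(Image.open("source.png").convert("RGB"))

# Evaluate each mapping once per row/column instead of once per pixel,
# clamping the results into the source image's bounds.
xs = functionX(np.arange(size)) % src.shape[1]  # mapped x (column) for every output x
ys = functionY(np.arange(size)) % src.shape[0]  # mapped y (row) for every output y

# Gather all pixels in one step: out[Y, X] = src[functionY(Y), functionX(X)]
out = src[ys[:, None], xs[None, :]]
img = Image.fromarray(out.astype(np.uint8))

The double loop runs size * size Python-level iterations (4 million for size 2000), while the indexing version does the same work inside NumPy.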

Related

Rotate an image in python and fill the cropped area with image

Have a look at the image; it will give you a better idea of what I want to achieve. I want to rotate the image and fill the black part of the image, just like in the required image.
import cv2
import numpy as np

# Read the image
img = cv2.imread("input.png")
# Get the image size
h, w = img.shape[:2]
# Define the rotation matrix
M = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 1)
# Rotate the image
rotated = cv2.warpAffine(img, M, (w, h))
# Mark every pure-black pixel left behind by the rotation
mask = np.zeros(rotated.shape[:2], dtype=np.uint8)
mask[np.where((rotated == [0, 0, 0]).all(axis=2))] = 255
img_show(mask)  # img_show is the asker's own display helper
With this code I am able to get a mask of the black regions. Now I want to replace these black regions with the image content, as shown in image 1. Is there a better way to achieve this?
Use the borderMode parameter of warpAffine.
You want to pass the BORDER_WRAP value.
Here's the result. This does exactly what you described with your first picture.
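For reference, a minimal sketch of that applied to the question's setup (same input file and rotation as above):

import cv2

img = cv2.imread("input.png")
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 1)

# BORDER_WRAP tiles the source image across the border,
# so the corners are filled with image content instead of black.
rotated = cv2.warpAffine(img, M, (w, h), borderMode=cv2.BORDER_WRAP)
cv2.imwrite("rotated_wrap.png", rotated)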
Here is another approach. You can first create a larger image consisting of 3 x 3 tiles of your original image. When you rotate this large image and cut out only its center, you have your desired result.
import cv2
import numpy as np

# Read the image
img = cv2.imread("input.png")
# Get the size of the original image
h, w = img.shape[:2]
# Make a large image containing 3 copies of the original image in each direction
large_img = np.tile(img, [3, 3, 1])
cv2.imshow("large_img", large_img)
# Define the rotation matrix; rotate around the center of the large image
M = cv2.getRotationMatrix2D((w * 3 / 2, h * 3 / 2), 30, 1)
# Rotate the image
rotated = cv2.warpAffine(large_img, M, (w * 3, h * 3))
# Crop only the center of the image (rows index the height axis, columns the width axis)
cropped_image = rotated[h:h * 2, w:w * 2, :]
cv2.imshow("cropped_image", cropped_image)
cv2.waitKey(0)

How would I warp text around an image's edges?

I am trying to create an image with the edges replaced with text, similar to this YouTube video thumbnail, but from a source image. I've used OpenCV to get a version of the source image with edges, and Pillow to actually write the text, but I'm not sure where to start when it comes to automatically manipulating the text to fit the edges. The code I have so far is:
import cv2 as cv
from PIL import Image, ImageFont, ImageDraw

font = ImageFont.truetype(r"C:\Users\X\Downloads\Montserrat\Montserrat-Light.ttf", 12)
text = ["text", "other text"]

img = cv.imread(r"C:\Users\X\Pictures\picture.jpg", 0)  # read as grayscale
edges = cv.Canny(img, 100, 200)
img = cv.cvtColor(img, cv.COLOR_GRAY2RGB)  # the image is single-channel here, so GRAY2RGB rather than BGR2RGB
im_pil = Image.fromarray(edges)
This code just handles the edge detection and moves the detected edges into Pillow.
Please help.
I am not sure where the "edges" from the Canny edge detector come in.
However, the circular text wrap can be done very simply in Python/Wand, which uses ImageMagick, or in Python/OpenCV using cv2.remap with custom transformation maps.
Input:
1. Python Wand
(output size determined automatically from input size)
from wand.image import Image
from wand.font import Font
from wand.display import display

with Image(filename='some_text.png') as img:
    img.background_color = 'white'
    img.virtual_pixel = 'white'
    # 360 degree arc, rotated 0 degrees
    img.distort('arc', (360, 0))
    img.save(filename='some_text_arc.png')
    img.format = 'png'
    display(img)
Result:
2. Python/OpenCV
import numpy as np
import cv2
import math

# read input
img = cv2.imread("some_text.png")
hin, win = img.shape[:2]
win2 = win / 2

# specify desired square output dimensions and center
hout = 100
wout = 100
xcent = wout / 2
ycent = hout / 2
hwout = max(hout, wout)
hwout2 = hwout / 2

# set up the x and y maps as float32
map_x = np.zeros((hout, wout), np.float32)
map_y = np.zeros((hout, wout), np.float32)

# create map with the arc distortion formula --- angle and radius
for y in range(hout):
    Y = (y - ycent)
    for x in range(wout):
        X = (x - xcent)
        XX = (math.atan2(Y, X) + math.pi / 2) / (2 * math.pi)
        XX = XX - int(XX + 0.5)
        XX = XX * win + win2
        map_x[y, x] = XX
        map_y[y, x] = hwout2 - math.hypot(X, Y)

# do the remap; this is where the magic happens
result = cv2.remap(img, map_x, map_y, cv2.INTER_CUBIC, borderMode=cv2.BORDER_CONSTANT, borderValue=(255, 255, 255))

# save result
cv2.imwrite("some_text_arc.jpg", result)

# display images
cv2.imshow('img', img)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
Neither OpenCV nor PIL has a way to do that, but you can use ImageMagick.
How to warp an image to take shape of path with python?

How to change the color of a pixel using PIL?

I was trying to change a pixel of an image in Python using this question. If mode is 0, it should change the first pixel, in the top-left corner of the image, to grey (#C8C8C8), but it doesn't change. There is not much documentation about draw.point(). What is the problem with this code?
import random
from PIL import Image, ImageDraw

mode = 0
image = Image.open("dom.jpg")
draw = ImageDraw.Draw(image)
width = image.size[0]
height = image.size[1]
pix = image.load()
string = "kod"
n = 0
if mode == 0:
    draw.point((0, 0), (200, 200, 200))
if mode == 1:
    print(pix[0, 0][0])
image.save("dom.jpg", "JPEG")
del draw
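One thing worth ruling out first: draw.point does modify the pixel, but the script writes the result back out as JPEG, and JPEG's lossy compression can shift individual pixel values, so an exact (200, 200, 200) may not survive the save. A quick check with a lossless format (a sketch; "dom_test.png" is a hypothetical output name):

from PIL import Image, ImageDraw

image = Image.open("dom.jpg")
draw = ImageDraw.Draw(image)
draw.point((0, 0), fill=(200, 200, 200))

# PNG is lossless, so the pixel value survives a save/reload round trip exactly.
image.save("dom_test.png", "PNG")
print(Image.open("dom_test.png").load()[0, 0])  # expected: (200, 200, 200)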
Is using PIL a must in your case? If not, then consider using OpenCV (cv2) for altering particular pixels of an image.
Code which alters pixel (0, 0) to (200, 200, 200) looks the following way in OpenCV:
import cv2

img = cv2.imread('yourimage.jpg')
height = img.shape[0]
width = img.shape[1]
img[0][0] = [200, 200, 200]
cv2.imwrite('newimage.bmp', img)
Note that this code saves the image in .bmp format. cv2 can also write .jpg images, but since JPEG is a lossy format, some small details might be lost. Keep in mind that in cv2, [0][0] is the upper-left corner, the first index is the y-coordinate of the pixel and the second is the x-coordinate, and colors are three values from 0 to 255 (inclusive) in BGR order rather than RGB.
For OpenCV tutorials, including installation, see this.

Converting 1-layer image to 3-layer image

I'm trying to convert a 1-layer (grey-scale) image to a 3-layer RGB image. Below is the code I'm using. This runs without error but doesn't create the correct result.
import numpy as np
from PIL import Image  # used for loading images

def convertLToRgb(img):
    height = img.size[1]
    width = img.size[0]
    size = img.size
    mode = 'RGB'
    data = np.zeros((height, width, 3))
    for i in range(height):
        for j in range(width):
            pixel = img.getpixel((j, i))
            data[i][j][0] = pixel
            data[i][j][1] = pixel
            data[i][j][2] = pixel
    img = Image.frombuffer(mode, size, data)
    return img
What am I doing wrong here? I'm not expecting a color picture, but I am expecting a black and white picture resembling the input. Below are the input and output images:
Depending on the bit depth of your image, change:
data = np.zeros((height, width, 3))
to:
data = np.zeros((height, width, 3), dtype=np.uint8)
For an 8-bit image, you need to force your Numpy array dtype to an unsigned 8-bit integer, otherwise it defaults to float64. For 16-bit, use np.uint16, etc.
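A quick standalone check of the default (not from the original answer):

import numpy as np

print(np.zeros((2, 2, 3)).dtype)                  # float64 by default
print(np.zeros((2, 2, 3), dtype=np.uint8).dtype)  # uint8, which is what raw 8-bit 'RGB' data expects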
What is your task: a black-and-white image or an RGB color image? If you just want a black-and-white image, you can convert the grayscale image directly to a binary image. As for your code, two things need care. First, the pixel locations must be right; wrong locations will make the image all black, as in your post. Second, you can convert RGB to grayscale directly, but going from grayscale back to RGB only replicates the gray values; the lost color information cannot be recovered.
You can do it with PIL.Image and PIL.ImageOps as shown below. Because of the way it's written, the source image isn't required to be one layer: it will be converted to one if necessary before being used:
from PIL import Image
from PIL.ImageOps import grayscale

def convertLToRgb(src):
    src.load()
    band = src if Image.getmodebands(src.mode) == 1 else grayscale(src)
    return Image.merge('RGB', (band, band, band))

src = 'whale_tail.png'
bw_img = Image.open(src)
rgb_img = convertLToRgb(bw_img)
rgb_img.show()
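As an aside (not part of the original answer), if building the bands by hand isn't required, Pillow's built-in mode conversion replicates a single gray band into three channels in one call:

from PIL import Image

rgb_img = Image.open('whale_tail.png').convert('RGB')
rgb_img.show()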

Made a gradient with PIL module, it turned out darker

I was trying to use the PIL module to make a color-picker gradient like this one.
I made some code to test it out:
from PIL import Image

img = Image.new('HSV', (255, 255), "white")
pix = img.load()
H = 0
for x in range(img.size[0]):
    S = int(100 * (x / float(img.size[0])))  # a % of image width
    for y in range(img.size[1]):
        V = int(100 * (1 - (y / float(img.size[1]))))  # a % of image height
        pix[x, y] = (H, S, V)
img.show()
But my image turns out dark. What did I do wrong?
You are generating S and V values in the 0..100 range. However, I'm pretty sure that a PIL HSV image uses 0..255 values; in other words, you're only using the bottom 40% of the range.
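A minimal sketch of the same loop with both channels rescaled to 0..255 (and converted to RGB before display, since most viewers expect RGB data):

from PIL import Image

img = Image.new('HSV', (255, 255))
pix = img.load()
H = 0
for x in range(img.size[0]):
    S = int(255 * (x / float(img.size[0])))            # saturation over the full 0..255 range
    for y in range(img.size[1]):
        V = int(255 * (1 - (y / float(img.size[1]))))  # value over the full 0..255 range
        pix[x, y] = (H, S, V)
img.convert('RGB').show()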
