Shrink contour to fit inside original - python

I have a contour with 4 points that form a parallelogram-like shape, and I want to shrink the contour points and draw the result inside a regular-sized version with cv2.drawContours.
When I use the following code to resize it, I end up with the before and after shown here.
# compute the centroid from the contour moments
M = cv2.moments(cnt)
cx = int(M['m10']/M['m00'])
cy = int(M['m01']/M['m00'])
# shift the centroid to the origin, scale, then shift back
cnt_norm = cnt - [cx, cy]
cnt_scaled = cnt_norm * scale
cnt_scaled = cnt_scaled + [cx, cy]
cnt_scaled = cnt_scaled.astype(np.int32)
As you can see, it's not quite symmetrical on the top and bottom due to the skew. How can I fix this?
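One way to think about the asymmetry: scaling about the centroid moves every vertex proportionally to its distance from the centroid, so a skewed shape gets uneven margins by construction. If the goal is a constant gap to the original edges, an inward polygon offset gives that instead. A minimal sketch, assuming shapely is available and the offset is small enough that the result stays a single polygon (shrink_contour is just a hypothetical helper name):

import numpy as np
from shapely.geometry import Polygon

def shrink_contour(cnt, offset):
    # cnt is an (N, 1, 2) array as returned by cv2.findContours
    poly = Polygon(cnt.reshape(-1, 2))
    # a negative buffer moves every edge inwards by the same distance;
    # join_style=2 (mitre) keeps the corners sharp
    shrunk = poly.buffer(-offset, join_style=2)
    # drop the duplicated closing point and restore the OpenCV shape
    return np.array(shrunk.exterior.coords[:-1], dtype=np.int32).reshape(-1, 1, 2)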

Related

Is there a way to get contour properties in OpenCV/skimage for floating point coordinates?

I have contour plots created in Matplotlib, that I need to analyze further to see if they are closed curves, and then look at area, convexity, solidity, etc. for cellular structures. In Matplotlib, they are of type LineCollection and Path.
In OpenCV, I cannot pass a float array to cv2.contourArea or similar functions. On the other hand, converting to uint8 coordinates loses important data like nesting structure. In this case, I need to get to the inner nested convex contours.
Are there any options to find information like area, convex hull, bounding rectangle in Python?
I could enlarge the image, but I'm worried it might skew the picture unpredictably.
For example: Attached image with floating point and integer coordinates.
I assume you have full control over the Matplotlib part. So, let's try to get an image from there which you can easily use for further image processing with OpenCV.
We start with some common contour plot as shown in your question:
You can set the levels parameter to get a single contour level. That's helpful to work on several levels individually. In the following, I will focus on levels=[1.75] (the most inner green ellipse). Later, you can simply loop through all desired levels, and perform your analyses.
For our custom contour plot, we will set a fixed x, y domain, for example [-3, 3] x [-2, 2], using xlim and ylim. So, we have known dimensions for the actual canvas. We get rid of the axes using axis('off'), and the margins around the canvas using tight_layout(pad=0). What's left is the plain canvas in full size (figure size was adjusted to (10, 5), and colors are automatically adjusted to the number of levels):
Now, we save the canvas to some NumPy array, cf. this Q&A. From there, we can perform any OpenCV operation. To find the combined area of this level's contours, we might threshold the grayscaled image, find all contours, and calculate their areas using cv2.contourArea. We sum those areas to get the whole contour area in pixels. From the known canvas dimensions, we know the whole canvas area in "units", and from the image dimensions, we know the whole canvas area in pixels. So, we just need to divide the whole contour area (in pixels) by the whole canvas area (in pixels), and multiply by the whole canvas area (in "units").
That'd be the whole code:
import cv2
import matplotlib.pyplot as plt
import numpy as np
# Generate some data for some contour plot
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = np.exp(-(X + 1.5)**2 - Y**2)
Z2 = np.exp(-(X - 1.5)**2 - Y**2)
Z = (Z1 + Z2) * 2
# Custom contour plot
x_min, x_max = -3, 3
y_min, y_max = -2, 2
fig = plt.figure(2, figsize=(10, 5)) # Set large figure size
plt.contour(X, Y, Z, levels=[1.75]) # Set single levels if needed
plt.xlim([x_min, x_max]) # Explicitly set x limits
plt.ylim([y_min, y_max]) # Explicitly set y limits
plt.axis('off') # No axes shown at all
plt.tight_layout(pad=0) # No margins at all
# Get figure's canvas as NumPy array, cf. https://stackoverflow.com/a/7821917/11089932
fig.canvas.draw()
img = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
img = img.reshape(fig.canvas.get_width_height()[::-1] + (3,))
# Grayscale, and threshold image
mask = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
mask = cv2.threshold(mask, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY_INV)[1]
# Find contours, calculate areas (pixels), sum to get whole area (pixels) for certain level
cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
area = np.sum(np.array([cv2.contourArea(cnt) for cnt in cnts]))
# Whole area (coordinates) from canvas area (pixels), and x_min, x_max, etc.
area = area / np.prod(mask.shape[:2]) * (x_max - x_min) * (y_max - y_min)
print('Area:', area)
The output area seems reasonable:
Area: 0.861408
Now, you're open to do any image processing with OpenCV you like. Always remember to convert any results in pixels to some result in "units".
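If you want to loop through several levels as mentioned above, a sketch of that loop could look like this (the level values are just examples; each iteration re-renders the canvas for one level and repeats the area calculation from the code above):

for level in [0.5, 1.0, 1.75]:  # example level values
    fig = plt.figure(figsize=(10, 5))
    plt.contour(X, Y, Z, levels=[level])
    plt.xlim([x_min, x_max])
    plt.ylim([y_min, y_max])
    plt.axis('off')
    plt.tight_layout(pad=0)
    fig.canvas.draw()
    img = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
    img = img.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    plt.close(fig)
    mask = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    mask = cv2.threshold(mask, 0, 255, cv2.THRESH_OTSU + cv2.THRESH_BINARY_INV)[1]
    cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnts = cnts[0] if len(cnts) == 2 else cnts[1]
    area = sum(cv2.contourArea(cnt) for cnt in cnts)
    area = area / np.prod(mask.shape[:2]) * (x_max - x_min) * (y_max - y_min)
    print('Level', level, 'area:', area)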
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
PyCharm: 2021.1.1
Matplotlib: 3.4.1
NumPy: 1.20.2
OpenCV: 4.5.1

find angle between major axis of ellipse and x-axis of coordinate (help me implement method from paper)

So I am trying to implement a method from this paper. I am stuck at the part where I have to find the angle between the major axis of the lesion’s best-fit ellipse and the x-axis of the coordinate system.
Here is the sample image:
Here is what I got so far:
Is it possible to find that angle? And after the angle has been found, I have to flip the RoI along x-axis by the angle.
UPDATE ----------
Google drive link to Roi Image: RoI image
Implementing method step by step based on the paper.
First, I should recenter the RoI to the center of the image coordinate system. In the paper, they centered the RoI using its centroid. I managed to do it based on code I found in this answer. The result is fine if my RoI is small and not touching the image border, but if I have a large image the result is really bad. So I ended up centering the RoI using boundingRect. Here is the result of centering:
Code for centering RoI:
import math
import cv2
import numpy as np
import matplotlib.pyplot as plt
# read image
cont_img = cv2.imread(r"C:\Users\Pandu\Desktop\IMD064_lesion.bmp", 0)
cont_rgb = cv2.cvtColor(cont_img, cv2.COLOR_GRAY2RGB)
# fit ellipse and find ellipse properties
hh, ww = cont_img.shape
contours, hierarchy = cv2.findContours(cont_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ellipse = cv2.fitEllipse(contours[0])
(xc, yc), (d1, d2), angle = ellipse
# centering by centroid
half_width = int(ww/2)
half_height = int(hh/2)
offset_x = (half_width-xc)
offset_y = (half_height-yc)
T = np.float32([[1, 0, offset_x], [0, 1, offset_y]])
centered_by_centroid = cv2.warpAffine(cont_img.copy(), T, (ww, hh))
plt.imshow(centered_by_centroid, cmap=plt.cm.gray)
# centering by boundingRect
# This centered RoI is (L)
x, y, w, h = cv2.boundingRect(contours[0])
startx = (ww - w)//2
starty = (hh - h)//2
centered_by_boundingRect = np.zeros_like(cont_img)
centered_by_boundingRect[starty:starty+h, startx:startx+w] = cont_img[y:y+h, x:x+w]
plt.imshow(centered_by_boundingRect, cmap=plt.cm.gray)
Second, after centering the RoI, I should find the orientation angle, rotate the RoI based on that angle, and then flip it, using code from this answer (is this the correct way to rotate the RoI?):
# find ellipse properties of centered RoI
contours, hierarchy = cv2.findContours(centered_by_boundingRect, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ellipse = cv2.fitEllipse(contours[0])
(xc, yc), (d1, d2), angle = ellipse
roi_centroid = (xc, yc)
rot_angle = 90 - angle
if rot_angle < 0:
    rot_angle += 180
# This rotated RoI is (Lx)
M = cv2.getRotationMatrix2D(roi_centroid, -rot_angle, 1.0)
rot_im = cv2.warpAffine(centered_by_boundingRect, M, (ww, hh))
plt.imshow(rot_im, cmap=plt.cm.gray)
# (Ly)
# passing 0 to flip() should flip the image around the x-axis, but I get the same result as the paper
res_flip_y = cv2.flip(rot_im.copy(), 0)
plt.imshow(res_flip_y , cmap=plt.cm.gray)
# (L) (xor) (Lx)
res_x_xor = cv2.bitwise_xor(centered_by_boundingRect, rot_im)
plt.imshow(res_x_xor, cmap=plt.cm.gray)
# (L) (xor) (Ly)
res_y_xor = cv2.bitwise_xor(centered_by_boundingRect, res_flip_y)
plt.imshow(res_y_xor, cmap=plt.cm.gray)
I still can't get the same result as the paper, and the rotating operation also produces bad results on large RoIs. Help...
UPDATE ---------- 20/03/2021
Small RoI: fine result on rotation that looks similar to the paper, but still not the same end result for L (xor) Lx or L (xor) Ly
Large RoI: bad result on rotation, as the RoI goes out of the image border
The angle you're looking for is returned from fitEllipse. It's just rotated a bit according to a different reference frame. You can get your counter-clockwise rotation angle by doing 90 - angle. As for rotating the RoI, you can either use minAreaRect to get a minimum-fit rectangle directly, or you can fit a bounding box to the contour and rotate each point individually.
The green rectangle is the minAreaRect(), the red rectangle is the boundingRect() after it's been rotated.
import cv2
import numpy as np
import math
# rotate point
def rotate2D(point, deg):
    rads = math.radians(deg);
    x, y = point;
    rcos = math.cos(rads);
    rsin = math.sin(rads);
    rx = x * rcos - y * rsin;
    ry = x * rsin + y * rcos;
    rx = round(rx);
    ry = round(ry);
    point[0] = rx;
    point[1] = ry;
# translate point
def translate2D(src, target, sign):
    tx, ty = target;
    src[0] += tx * sign;
    src[1] += ty * sign;
# read image
cont_img = cv2.imread("blob.png", 0)
cont_rgb = cv2.cvtColor(cont_img, cv2.COLOR_GRAY2RGB)
# find contour
# (slicing the result keeps this compatible with both the OpenCV 3 and OpenCV 4 return signatures)
contours, hierarchy = cv2.findContours(cont_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2:]
# fit ellipse and get ellipse properties
ellipse = cv2.fitEllipse(contours[0])
(xc, yc), (d1, d2), angle = ellipse
# -------- NEW STUFF IN HERE --------------
# calculate counter-clockwise angle relative to x-axis
rot_angle = 90 - angle;
if rot_angle < 0:
    rot_angle += 180;
print(rot_angle);
# if you want a rotated ROI I would recommend using minAreaRect rather than rotating a different rectangle
# fit a minrect to the image # this is taken directly from OpenCV's tutorials
rect = cv2.minAreaRect(contours[0]);
box = cv2.boxPoints(rect);
box = np.int0(box);
cv2.drawContours(cont_rgb, [box], 0, (0,255,0), 2);
# but if you really want to use a different rectangle and rotate it, here's how to do it
# create rectangle
x,y,w,h = cv2.boundingRect(contours[0]);
rect = [];
rect.append([x,y]);
rect.append([x+w,y]);
rect.append([x+w,y+h]);
rect.append([x,y+h]);
# rotate it
rotated_rect = [];
center = [x + w/2, y + h/2];
for point in rect:
    # for each point, center -> rotate -> uncenter
    translate2D(point, center, -1);
    rotate2D(point, 90 - rot_angle); # "90 - angle" is because rotation goes clockwise
    translate2D(point, center, 1);
    rotated_rect.append([point]);
rotated_rect = np.array(rotated_rect);
cv2.drawContours(cont_rgb, [rotated_rect.astype(int)], -1, (0,0,255), 2);
# ------------- END OF NEW STUFF -----------------
# draw fitted ellipse and centroid
target_ellipse = cv2.ellipse(cont_rgb.copy(), ellipse, (37, 99, 235), 10)
centroid = cv2.circle(target_ellipse.copy(), (int(xc), int(yc)), 20, (250, 204, 21), -1)
# draw major axis
rmajor = max(d1, d2)/2
if angle > 90:
    angle = angle - 90
else:
    angle = angle + 90
xtop_major = xc + math.cos(math.radians(angle))*rmajor
ytop_major = yc + math.sin(math.radians(angle))*rmajor
xbot_major = xc + math.cos(math.radians(angle+180))*rmajor
ybot_major = yc + math.sin(math.radians(angle+180))*rmajor
top_major = (int(xtop_major), int(ytop_major))
bot_major = (int(xbot_major), int(ybot_major))
target_major_axis = cv2.line(centroid.copy(),
top_major, bot_major,
(0, 255, 255), 5)
## image center coordinate
hh, ww = target_major_axis.shape[:2];
x_center_start = (0, int(hh/2))
x_center_end = (int(ww), int(hh/2))
y_center_start = (int(ww/2), 0)
y_center_end = (int(ww/2), int(hh))
img_x_middle_coor = cv2.line(target_major_axis.copy(), x_center_start, x_center_end, (219, 39, 119), 10)
img_y_middle_coor = cv2.line(img_x_middle_coor.copy(), y_center_start,
y_center_end, (190, 242, 100), 10)
# show
cv2.imshow("image", img_y_middle_coor);
cv2.waitKey(0);
For the future: check that your code runs before pasting it on here. Aside from the missing "import" lines it was also missing this line:
hh, ww = target_major_axis.shape[:2]
If the sample code you paste has errors, then everyone who wants to help will have to waste some time bug-stomping before they can begin working on a solution.

Extracting data from tables without any grid lines and border from scanned image of document

Extracting table data from digital PDFs has been simple using camelot and tabula. However, the solution doesn't work with scanned images of document pages, specifically when the table doesn't have borders and inner grids. I have been trying to generate vertical and horizontal lines using OpenCV. However, since the scanned images will have slight rotation angles, it is difficult to proceed with this approach.
How can we utilize OpenCV to generate grids (horizontal and vertical lines) and borders for the scanned document page which contains table data (along with paragraphs of text)? If this is feasible, how to nullify the rotation angle of the scanned image?
I wrote some code to estimate the horizontal lines from the printed letters on the page. The same could be done for vertical ones, I guess. The code below follows some general assumptions; here are the basic steps in pseudocode style:
prepare picture for contour detection
do contour detection
we assume most contours are letters
calc mean width of all contours
calc mean area of contours
filter all contours with two conditions:
a) contour (letter) heights < meanHeight * 2
b) contour area > 4/5 meanArea
calc center points of all remaining contours
assume we have line regions (bins)
list all center points which are inside the region
do linear regression of region points
save slope and intercept
calc mean slope and intercept
here the full code:
import cv2
import numpy as np
from scipy import stats
def resizeImageByPercentage(img, scalePercent=60):
    width = int(img.shape[1] * scalePercent / 100)
    height = int(img.shape[0] * scalePercent / 100)
    dim = (width, height)
    # resize image
    return cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
def calcAverageContourWithAndHeigh(contourList):
    hs = list()
    ws = list()
    for cnt in contourList:
        (x, y, w, h) = cv2.boundingRect(cnt)
        ws.append(w)
        hs.append(h)
    return np.mean(ws), np.mean(hs)
def calcAverageContourArea(contourList):
    areaList = list()
    for cnt in contourList:
        (_, (w, h), _) = cv2.minAreaRect(cnt)
        areaList.append(w * h) # area of the min-area rectangle (index [2] would be its angle)
    return np.mean(areaList)
def calcCentroid(contour):
    houghMoments = cv2.moments(contour)
    # calculate x,y coordinates of the centroid
    if houghMoments["m00"] != 0: # guard against the case where no centroid can be calculated
        cX = int(houghMoments["m10"] / houghMoments["m00"])
        cY = int(houghMoments["m01"] / houghMoments["m00"])
    else:
        # set values as what you need in this situation
        cX, cY = -1, -1
    return cX, cY
def getCentroidWhenSizeInRange(contourList, letterSizeWidth, letterSizeHigh, deltaOffset, minLetterArea=10.0):
    centroidList = list()
    for cnt in contourList:
        (x, y, w, h) = cv2.boundingRect(cnt)
        (_, (rw, rh), _) = cv2.minAreaRect(cnt)
        area = rw * rh # area of the min-area rectangle (index [2] would be its angle)
        # calc diff
        diffW = abs(w - letterSizeWidth)
        diffH = abs(h - letterSizeHigh)
        # threshold A: almost the mean letter size +- offset
        if diffW < deltaOffset and diffH < deltaOffset:
            # threshold B: > min area
            if area > minLetterArea:
                cX, cY = calcCentroid(cnt)
                if cX != -1 and cY != -1:
                    centroidList.append((cX, cY))
    return centroidList
DEBUGMODE = True
#read image, do git clone https://github.com/WZBSocialScienceCenter/pdftabextract.git for the example
img = cv2.imread('pdftabextract/examples/catalogue_30s/data/ALA1934_RR-excerpt.pdf-2_1.png')
#get some basic infos
imgHeigh, imgWidth, imgChannelAmount = img.shape
if DEBUGMODE:
    cv2.imwrite("img00original.jpg", resizeImageByPercentage(img, 30))
    cv2.imshow("original", img)
# prepare img
imgGrey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# apply Gaussian filter
imgGaussianBlur = cv2.GaussianBlur(imgGrey,(5,5),0)
#make binary img, black or white
_, imgBinThres = cv2.threshold(imgGaussianBlur, 130, 255, cv2.THRESH_BINARY)
## detect contours
contours, _ = cv2.findContours(imgBinThres, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
#we get some letter parameter
averageLetterWidth, averageLetterHigh = calcAverageContourWithAndHeigh(contours)
threshold1AllowedLetterSizeOffset = averageLetterHigh * 2 # double size
averageContourAreaSizeOfMinRect = calcAverageContourArea(contours)
threshHold2MinArea = 4 * averageContourAreaSizeOfMinRect / 5 # 4/5 * mean
print("mean letter Width: ", averageLetterWidth)
print("mean letter High: ", averageLetterHigh)
print("threshold 1 tolerance: ", threshold1AllowedLetterSizeOffset)
print("mean letter area ", averageContourAreaSizeOfMinRect)
print("thresold 2 min letter area ", threshHold2MinArea)
#we get all centroid of letter sizes contours, the other we ignore
centroidList = getCentroidWhenSizeInRange(contours,averageLetterWidth,averageLetterHigh,threshold1AllowedLetterSizeOffset,threshHold2MinArea)
if DEBUGMODE:
    # debug print of all centers:
    imgFilteredCenter = img.copy()
    for cX, cY in centroidList:
        # draw in red color as BGR
        cv2.circle(imgFilteredCenter, (cX, cY), 5, (0, 0, 255), -1)
    cv2.imwrite("img01letterCenters.jpg", resizeImageByPercentage(imgFilteredCenter, 30))
    cv2.imshow("letterCenters", imgFilteredCenter)
#we estimate the bin width
amountPixelFreeSpace = averageLetterHigh #TODO get better estimate out of histogram
estimatedBinWidth = round( averageLetterHigh + amountPixelFreeSpace) #TODO round better ?
binCollection = dict() #range(0,imgHeigh,estimatedBinWidth)
#we separate the center points into bins by y coordinate
for i in range(0, imgHeigh, estimatedBinWidth):
    listCenterPointsInBin = list()
    yMin = i
    yMax = i + estimatedBinWidth
    for cX, cY in centroidList:
        if yMin < cY < yMax: # if it fits in the bin
            listCenterPointsInBin.append((cX, cY))
    binCollection[i] = listCenterPointsInBin
#we assume all points in a bin lie on one line
#model = slope (x) + intercept
#model = m (x) + n
mList = list() #slopes (absolute, in image coords)
nList = list() #intercepts (absolute, in image coords)
nListRelative = list() #intercepts relative to bin start
minAmountRegressionElements = 12 #also an alias for the letter amount we expect
#we do a regression over the points in each bin
for startYOfBin, values in binCollection.items():
    # we reform the values
    xValues = [] # TODO use a shorter transform
    yValues = []
    for x, y in values:
        xValues.append(x)
        yValues.append(y)
    # we assume a minimum number of points per bin
    if len(xValues) >= minAmountRegressionElements:
        slope, intercept, r, p, std_err = stats.linregress(xValues, yValues)
        mList.append(slope)
        nList.append(intercept)
        # we calc the intercept relative to the bin start
        nRelativeToBinStart = intercept - startYOfBin
        nListRelative.append(nRelativeToBinStart)
if DEBUGMODE:
    # we debug-draw all lines in one picture
    imgLines = img.copy()
    colorOfLine = (0, 255, 0) # green
    for i in range(0, len(mList)):
        slope = mList[i]
        intercept = nList[i]
        startPoint = (0, int(intercept)) # TODO round instead?
        endPointY = int(slope * imgWidth + intercept)
        if endPointY < 0:
            endPointY = 0
        endPoint = (imgWidth, endPointY) # x runs to the image width, not the height
        cv2.line(imgLines, startPoint, endPoint, colorOfLine, 2)
    cv2.imwrite("img02lines.jpg", resizeImageByPercentage(imgLines, 30))
    cv2.imshow("linesOfLetters", imgLines)
#we assume the mean gets it right
meanIntercept = np.mean(nListRelative)
meanSlope = np.mean(mList)
print("meanIntercept :", meanIntercept)
print("meanSlope ", meanSlope)
#TODO calc angle with math.atan(slope) ...
if DEBUGMODE:
    cv2.waitKey(0)
original:
center point of letters:
lines:
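To complete the TODO at the end of the code above: once meanSlope is known, a deskew step could look roughly like this sketch (the sign of the angle may need flipping depending on the direction of the measured skew):

import math
# skew angle in degrees from the mean slope of the detected text lines
skewAngleDeg = math.degrees(math.atan(meanSlope))
# rotate around the image center to nullify the skew
center = (imgWidth / 2, imgHeigh / 2)
M = cv2.getRotationMatrix2D(center, skewAngleDeg, 1.0)
imgDeskewed = cv2.warpAffine(img, M, (imgWidth, imgHeigh),
                             flags=cv2.INTER_LINEAR,
                             borderValue=(255, 255, 255))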
I had the same problem some time ago, and this tutorial is the solution to that. It explains using pdftabextract, a Python library by Markus Konrad that leverages OpenCV's Hough transform to detect the lines, and it works even if the scanned document is a bit tilted. The tutorial walks you through parsing a 1920s German newspaper.
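For reference, the Hough transform step that pdftabextract builds on is also available directly in OpenCV. A minimal sketch of detecting line segments on a scanned page (the file name and thresholds are placeholders to tune for your scans):

import cv2
import numpy as np

page = cv2.imread('page.png')  # placeholder path
gray = cv2.cvtColor(page, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
# the probabilistic Hough transform returns line segments as (x1, y1, x2, y2)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(page, (x1, y1), (x2, y2), (0, 255, 0), 2)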

Fit internal rectangles to joints

Using a joints array, like the following:
How can I fit the internal rectangles, so that no rectangles overlap and all points are used? Basically fitting table cells to the points.
I've tried grabbing the contours, works fine:
With 22 points found. How can I fit these points to internal polygons? E.g. find the 21 rectangles in this image.
I found the joint array through this method, I guess this is a continuation.
I figured out a quick and dirty solution. This only works with perfectly horizontal/vertical alignment, and if there is a gap in the columns, it's not handled.
import cv2 as cv
import numpy as np

# First dilate the image to merge nearby joint blobs
kernel = np.ones((5,5), np.uint8)
dilation = cv.dilate(img, kernel, iterations=1)
# Find contours, then points
# (slicing the result keeps this compatible with both the OpenCV 3 and OpenCV 4 return signatures)
contours, hierarchy = cv.findContours(dilation, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)[-2:]
points = []
for con in contours:
    if cv.contourArea(con) > 0:
        M = cv.moments(con)
        cX = int(M["m10"] / M["m00"])
        cY = int(M["m01"] / M["m00"])
        points.append([cY, cX])
# attempt at finding rectangles: map each x-coordinate to the y-coordinates of the points in that column
map = {}
for p in points:
    map[p[1]] = []
for p in points:
    map[p[1]].append(p[0])
# Check for rectangles
keys = sorted(map.keys(), key=int)
for i in range(len(keys)-1):
    one = np.array(map[keys[i]])
    two = np.array(map[keys[i+1]])
    intersect = np.in1d(one, two)
    intersect2 = np.in1d(two, one)
    # If two horizontal collections have an intersection, it's likely a cell
    if sum(intersect) >= 2:
        intersects = sorted(one[intersect], key=int)
        for x in range(len(intersects)-1):
            rect = [keys[i], intersects[x], keys[i+1], intersects[x+1]]
            showimg(rois[numimg][rect[1]:rect[3], rect[0]:rect[2]]) # showimg, rois, numimg are the asker's own helpers
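Since showimg, rois and numimg in the last line come from the asker's own environment, a self-contained way to check the result, assuming each rect from the loop is appended to a hypothetical cells list and img is the single-channel input image, would be to draw the cells instead:

cells.append(rect)  # inside the inner loop above, instead of showimg(...)
vis = cv.cvtColor(img, cv.COLOR_GRAY2BGR)
for x1, y1, x2, y2 in cells:
    cv.rectangle(vis, (x1, y1), (x2, y2), (0, 255, 0), 1)
cv.imshow('cells', vis)
cv.waitKey(0)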

python arrange images on canvas in a circle

I have a bunch of images (say 10) that I have generated, both as arrays and PIL objects.
I need to arrange them in a circular fashion for display, and it should adjust itself to the resolution of the screen. Is there anything in Python that can do this?
I have tried using paste, but figuring out the canvas resolution and the positions to paste at is painful; I'm wondering if there is an easier solution.
We can say that points are arranged evenly in a circle when there is a constant angle theta between neighboring points. theta can be calculated as 2*pi radians divided by the number of points. The first point is at angle 0 with respect to the x axis, the second point at angle theta*1, the third point at angle theta*2, etc.
Using simple trigonometry, you can find the X and Y coordinates of any point that lies on the edge of a circle. For a point at angle ω lying on a circle with radius r:
xFromCenter = r*cos(ω)
yFromCenter = r*sin(ω)
Using this math, it is possible to arrange your images evenly on a circle:
import math
from PIL import Image
def arrangeImagesInCircle(masterImage, imagesToArrange):
    imgWidth, imgHeight = masterImage.size
    # we want the circle to be as large as possible,
    # but the circle shouldn't extend all the way to the edge of the image.
    # If we do that, then when we paste images onto the circle, those images will partially fall over the edge,
    # so we reduce the diameter of the circle by the width/height of the widest/tallest image.
    diameter = min(
        imgWidth - max(img.size[0] for img in imagesToArrange),
        imgHeight - max(img.size[1] for img in imagesToArrange)
    )
    radius = diameter / 2
    circleCenterX = imgWidth / 2
    circleCenterY = imgHeight / 2
    theta = 2*math.pi / len(imagesToArrange)
    for i, curImg in enumerate(imagesToArrange):
        angle = i * theta
        dx = int(radius * math.cos(angle))
        dy = int(radius * math.sin(angle))
        # dx and dy give the coordinates of where the center of our images would go,
        # so we must subtract half the height/width of the image to find where their top-left corners should be.
        # paste() needs integer coordinates, hence the int() casts
        pos = (
            int(circleCenterX + dx - curImg.size[0]/2),
            int(circleCenterY + dy - curImg.size[1]/2)
        )
        masterImage.paste(curImg, pos)
img = Image.new("RGB", (500,500), (255,255,255))
#red.png, blue.png, green.png are simple 50x50 pngs of solid color
imageFilenames = ["red.png", "blue.png", "green.png"] * 5
images = [Image.open(filename) for filename in imageFilenames]
arrangeImagesInCircle(img, images)
img.save("output.png")
Result:
