Creating an arc with a given thickness using PIL's ImageDraw - python

I am trying to create a segmented arc using PIL and ImageDraw. The arc function lets me draw an arc easily, but it is just a line. I need to be able to place an arc of a given radius and thickness (ID to OD), but I cannot find any kind of thickness or width setting. Is there a way to do this? If not, is there some other way to do it using PIL?
Snippet:
from math import ceil
from PIL import Image, ImageDraw

conv = 0.1
ID = 15
OD = 20
image = Image.new('1', (int(ceil(OD/conv)) + 2, int(ceil(OD/conv)) + 1), 1)
draw = ImageDraw.Draw(image)
diam = OD - ID
box = (1, 1, int(ceil(diam/conv)), int(ceil(diam/conv)))  # create bounding box
draw.arc(box, 0, 90, 0)  # draw the arc in black

I created the following arc replacement function based on Mark's suggestion:
https://gist.github.com/skion/9259926
Probably not pixel-perfect (nor fast), but it seems to come close for what I need it for. If you have a better version, please comment on the Gist.
import math

def arc(draw, bbox, start, end, fill, width=1, segments=100):
    """
    Hack that looks similar to PIL's draw.arc(), but can specify a line width.
    """
    # degrees to radians
    start *= math.pi / 180
    end *= math.pi / 180
    # angle step
    da = (end - start) / segments
    # shift end points by half a segment angle
    start -= da / 2
    end -= da / 2
    # ellipse radii
    rx = (bbox[2] - bbox[0]) / 2
    ry = (bbox[3] - bbox[1]) / 2
    # box centre
    cx = bbox[0] + rx
    cy = bbox[1] + ry
    # segment length
    l = (rx + ry) * da / 2.0
    for i in range(segments):
        # segment's central angle
        a = start + (i + 0.5) * da
        # x,y centre of the segment
        x = cx + math.cos(a) * rx
        y = cy + math.sin(a) * ry
        # derivatives
        dx = -math.sin(a) * rx / (rx + ry)
        dy = math.cos(a) * ry / (rx + ry)
        draw.line([(x - dx * l, y - dy * l), (x + dx * l, y + dy * l)], fill=fill, width=width)
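For instance, a minimal usage sketch (canvas size, colours, and geometry are made up, not from the Gist):

import math
from PIL import Image, ImageDraw

im = Image.new("RGB", (200, 200), "white")
d = ImageDraw.Draw(im)
arc(d, [20, 20, 180, 180], 0, 90, fill="black", width=10)  # quarter arc, 10 px wide
im.save("arc.png")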

PIL can't draw wide arcs, but Aggdraw can, and works well with PIL (same author).
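A minimal sketch of the aggdraw route (assuming aggdraw is installed; sizes are made up):

from PIL import Image
import aggdraw

im = Image.new("RGB", (200, 200), "white")
d = aggdraw.Draw(im)
pen = aggdraw.Pen("black", 10)          # 10 px wide stroke
d.arc((20, 20, 180, 180), 0, 90, pen)   # bounding box, start angle, end angle
d.flush()                               # write the drawing back into the PIL image
im.save("aggdraw_arc.png")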

Simulate the arc using straight line segments and put the coordinates of those segments into a list. Use draw.line with a width option to draw the arc.
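For illustration, a sketch of that approach with plain ImageDraw (the joint="curve" option needs a reasonably recent Pillow; the geometry is made up):

import math
from PIL import Image, ImageDraw

im = Image.new("RGB", (200, 200), "white")
d = ImageDraw.Draw(im)
cx, cy, r = 100, 100, 80
points = [(cx + r * math.cos(math.radians(a)), cy + r * math.sin(math.radians(a)))
          for a in range(0, 91)]  # one vertex per degree from 0 to 90
d.line(points, fill="black", width=10, joint="curve")
im.save("polyline_arc.png")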

A trick you can pull is to draw a white circle inside the black circle. You can use the pieslice method to break it up as needed. Rendering is sequential, so you just have to get the ordering right. The hard part is getting the positioning correct, because ImageDraw uses bounding boxes rather than center-and-radius coordinates: you have to make sure the centers of everything land exactly on top of each other.
THIS SOLUTION IS GOOD ONLY IN A LIMITED CASE, see comment.
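For illustration, a minimal sketch of that trick (sizes made up; subject to the caveat above):

from PIL import Image, ImageDraw

OD, ID = 200, 150  # outer and inner diameters
im = Image.new("L", (OD + 2, OD + 2), 255)
d = ImageDraw.Draw(im)
d.pieslice([1, 1, 1 + OD, 1 + OD], 0, 90, fill=0)  # black wedge of the outer circle
inset = (OD - ID) // 2
# white inner disc on top; same centre as the outer bounding box
d.ellipse([1 + inset, 1 + inset, 1 + inset + ID, 1 + inset + ID], fill=255)
im.save("thick_arc.png")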

Related

Getting the GPS position of coordinates (x,y) on a Google maps API satellite image in function of the zoom level

Using YOLO to detect features on Google Maps API satellite images, I get the (x, y) coordinates of each feature. The reference (0, 0) is the top-left corner. YOLO also provides the width and height of each bounding box. I have the GPS position of the center of the image.
I would like to get the GPS coordinates for the center of each feature.
def getGPSPosition(centerLat, centerLong, zoomLevel, x, y):
    # calculate degrees-per-pixel ratio at the given zoom level
    degreesPerPixel = 180 / pow(2, zoomLevel)
    imageSize = 640
    # calculate offset in degrees
    deltaX = (x - imageSize / 2) * degreesPerPixel
    deltaY = (y - imageSize / 2) * degreesPerPixel
    # calculate GPS position based on the center coordinates
    gpsLat = centerLat + deltaY
    gpsLong = centerLong + deltaX
    return (gpsLat, gpsLong)
I'm supposed to get the coordinates of the upper-left corner of the bounding box, but I miss the target... The result is roughly 50 m away from the correct point.
Try something like the following; read this to understand the logic.
import math

def getGPSPosition(centerLat, centerLong, zoomLevel, x, y):
    # degrees of longitude per pixel in Web Mercator (256 px tiles)
    dppLong = 360 / math.pow(2, zoomLevel + 8)
    # degrees of latitude per pixel shrink by cos(latitude)
    dppLat = dppLong * math.cos(centerLat * math.pi / 180)
    return (centerLat - dppLat * (y - 640 / 2),
            centerLong + dppLong * (x - 640 / 2))
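For example, with made-up inputs (a 640x640 image centred at the given coordinates, a hypothetical detection at pixel (400, 300)):

lat, lon = getGPSPosition(48.8584, 2.2945, 18, 400, 300)
print(lat, lon)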

How to increase elliptical arc resolution for high radius values in OpenCV?

I've been trying to draw an elliptical arc in OpenCV using the ellipse function (https://docs.opencv.org/3.0-beta/modules/imgproc/doc/drawing_functions.html), but for high radius values the arcs look segmented.
Do you know how I can increase the arc's resolution so that it looks better for large radii?
I tried drawing an arc with a small radius and it looked smooth; I also tried increasing the image resolution, but noticed no difference.
My code is as follows:
A[0] = round(A[0]*dpm - Xmin + margin)  # normalize center X
A[1] = round(A[1]*dpm - Ymin + margin)  # normalize center Y
A[2] = round(A[2]*dpm)                  # normalize radius
startAng = A[3]
endAng = A[4]
A = A.astype(int)
cv2.ellipse(Blank, (A[0], A[1]), (A[2], A[2]), 0, startAng, endAng, 0, 1)
where:
Blank is the image I want to draw the arc on (an np array of size (398, 847)),
(A[0], A[1]) is the center point,
(A[2], A[2]) are the ellipse axes,
0 is the rotation angle,
startAng is the starting angle of the arc,
endAng is the ending angle of the arc,
0 is the line color (black),
1 is the line thickness.
The code should produce a smooth arc but it looks segmented as if it is made of 4 lines.
I ended up writing a function to plot an arc on an input image:
import numpy as np
import cv2

blank = np.ones((500, 500))

def DrawArc(image, center, radius, startAng, endAng, color, resolution):
    '''Draws an arc with a specific resolution and color on an input image.
    Args:
        image      - the input image to draw the arc on
        center     - the arc's center
        radius     - the arc's radius
        startAng   - the starting angle of the arc
        endAng     - the ending angle of the arc
        color      - the arc's color on the input image
        resolution - number of points for calculation
    Output:
        image - the updated image with the plotted arc'''
    startAng += 90
    endAng += 90
    theta = np.linspace(startAng, endAng, resolution)
    x = np.round(radius * np.cos(np.deg2rad(theta))) + center[0]
    y = np.round(radius * np.sin(np.deg2rad(theta))) + center[1]
    x = x.astype(int)
    y = y.astype(int)
    for k in range(np.size(theta)):
        image[x[k]][y[k]] = color
    return image

image = DrawArc(blank, (250, 250), 200, 0, 90, 0, 1000)
cv2.imshow("Arc", image)
cv2.waitKey()
The output image: (screenshot omitted)
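Another option worth noting (not from the original answer): OpenCV can generate a dense vertex list itself with cv2.ellipse2Poly, whose delta argument sets the angular step between vertices, and cv2.polylines can then draw it anti-aliased. A sketch, with made-up geometry:

import numpy as np
import cv2

img = np.full((398, 847), 255, np.uint8)
pts = cv2.ellipse2Poly((423, 199), (180, 180), 0, 0, 90, 1)  # delta=1: one vertex per degree
cv2.polylines(img, [pts], False, 0, 1, cv2.LINE_AA)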

How to find the translation values after a rotation about a point for a 2D image?

I am having issues getting the correct translation values after rotating my image. The code I have so far calculates the bounding box for a given rotation using basic trigonometry and then applies a translation to the rotation matrix. The issue, however, is that my translation always seems to be one pixel out; that is, I get a 1-pixel black border along the top or sides of my rotated image.
Here is my code:
import math
import cv2

def rotate_image(mat, angle):
    height, width = mat.shape[:2]
    image_center = (width / 2.0, height / 2.0)
    rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
    # Get Bounding Box
    radians = math.radians(angle)
    sin = abs(math.sin(radians))
    cos = abs(math.cos(radians))
    bound_w = (width * cos) + (height * sin)
    bound_h = (width * sin) + (height * cos)
    # Set Translation
    rotation_mat[0, 2] += (bound_w / 2.0) - image_center[0]
    rotation_mat[1, 2] += (bound_h / 2.0) - image_center[1]
    rotated_mat = cv2.warpAffine(mat, rotation_mat, (int(bound_w), int(bound_h)))
    return rotated_mat
Here is the original image for reference and some examples of the image using that code:
coffee.png – Original
coffee.png - 90° - Notice the 1px border across the top
coffee.png - 180° - Notice the 1px border across the top and left
I am not so hot on my math, but I hazard a guess that this is being caused by some rounding issue as we’re dealing with floating point numbers.
I would like to know what methods other people use, what would be the simplest and most performant way to rotate and translate an image about its centre point please?
Thank you.
EDIT
As per Falko's answer, I was not using a zero-based calculation. My corrected code is as follows:
def rotate_image(mat, angle):
    height, width = mat.shape[:2]
    image_center = ((width - 1) / 2.0, (height - 1) / 2.0)
    rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
    # Get Bounding Box
    radians = math.radians(angle)
    sin = abs(math.sin(radians))
    cos = abs(math.cos(radians))
    bound_w = (width * cos) + (height * sin)
    bound_h = (width * sin) + (height * cos)
    # Set Translation
    rotation_mat[0, 2] += ((bound_w - 1) / 2.0 - image_center[0])
    rotation_mat[1, 2] += ((bound_h - 1) / 2.0 - image_center[1])
    rotated_mat = cv2.warpAffine(mat, rotation_mat, (int(bound_w), int(bound_h)))
    return rotated_mat
I'd still appreciate seeing alternative methods people are using to perform rotation and translation! :)
I guess your image center is wrong. Take, e.g., a 4x4 image with columns 0, 1, 2 and 3. Then your center is computed as 4 / 2 = 2, but it should be 1.5, between columns 1 and 2.
So you'd better use (width - 1) / 2.0 and (height - 1) / 2.0.

python arrange images on canvas in a circle

I have a bunch of images (say 10) that I have generated, both as arrays and as PIL objects.
I need to arrange them in a circle for display, and the layout should adjust itself to the screen resolution. Is there anything in Python that can do this?
I have tried using paste, but figuring out the canvas resolution and the positions to paste at is painful; is there an easier solution?
We can say that points are arranged evenly in a circle when there is a constant angle theta between neighboring points. theta can be calculated as 2*pi radians divided by the number of points. The first point is at angle 0 with respect to the x axis, the second point at angle theta*1, the third point at angle theta*2, etc.
Using simple trigonometry, you can find the X and Y coordinates of any point that lies on the edge of a circle. For a point at angle ohm lying on a circle with radius r:
xFromCenter = r*cos(ohm)
yFromCenter = r*sin(ohm)
Using this math, it is possible to arrange your images evenly on a circle:
import math
from PIL import Image

def arrangeImagesInCircle(masterImage, imagesToArrange):
    imgWidth, imgHeight = masterImage.size
    # We want the circle to be as large as possible,
    # but it shouldn't extend all the way to the edge of the image.
    # If it did, pasted images would partially fall over the edge.
    # So we reduce the diameter by the width/height of the widest/tallest image.
    diameter = min(
        imgWidth - max(img.size[0] for img in imagesToArrange),
        imgHeight - max(img.size[1] for img in imagesToArrange)
    )
    radius = diameter // 2
    circleCenterX = imgWidth // 2
    circleCenterY = imgHeight // 2
    theta = 2 * math.pi / len(imagesToArrange)
    for i, curImg in enumerate(imagesToArrange):
        angle = i * theta
        dx = int(radius * math.cos(angle))
        dy = int(radius * math.sin(angle))
        # dx and dy give the coordinates of where the center of each image would go,
        # so subtract half the width/height to find the top-left corner
        # (integer division keeps the coordinates ints, as paste() requires)
        pos = (
            circleCenterX + dx - curImg.size[0] // 2,
            circleCenterY + dy - curImg.size[1] // 2
        )
        masterImage.paste(curImg, pos)

img = Image.new("RGB", (500, 500), (255, 255, 255))
# red.png, blue.png, green.png are simple 50x50 PNGs of solid color
imageFilenames = ["red.png", "blue.png", "green.png"] * 5
images = [Image.open(filename) for filename in imageFilenames]
arrangeImagesInCircle(img, images)
img.save("output.png")
Result: (output image omitted)

Python 2.7.3 + OpenCV 2.4 after rotation window doesn't fit Image

I'm trying to rotate an image by some degrees and then show it in a window.
My idea is to rotate it and then show it in a new window whose width and height are calculated from the old width and height:
new_width = x * cos(angle) + y * sin(angle)
new_height = y * cos(angle) + x * sin(angle)
I was expecting the result to look like below:
but it turns out the result looks like this:
and my code is here:
#!/usr/bin/env python -tt
# coding:utf-8
import sys
import math
import cv2
import numpy as np

def rotateImage(image, angle):  # parameter angle in degrees
    if len(image.shape) > 2:  # check colorspace
        shape = image.shape[:2]
    else:
        shape = image.shape
    image_center = tuple(np.array(shape) / 2)  # rotation center
    radians = math.radians(angle)
    x, y = im.shape  # note: this reads the global 'im', not the 'image' parameter
    print 'x =', x
    print 'y =', y
    new_x = math.ceil(math.cos(radians) * x + math.sin(radians) * y)
    new_y = math.ceil(math.sin(radians) * x + math.cos(radians) * y)
    new_x = int(new_x)
    new_y = int(new_y)
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
    print 'rot_mat =', rot_mat
    result = cv2.warpAffine(image, rot_mat, shape, flags=cv2.INTER_LINEAR)
    return result, new_x, new_y

def show_rotate(im, width, height):
    # width = width / 2
    # height = height / 2
    # win = cv2.cv.NamedWindow('ro_win', cv2.cv.CV_WINDOW_NORMAL)
    # cv2.cv.ResizeWindow('ro_win', width, height)
    win = cv2.namedWindow('ro_win')
    cv2.imshow('ro_win', im)
    if cv2.waitKey() == '\x1b':
        cv2.destroyWindow('ro_win')

if __name__ == '__main__':
    try:
        im = cv2.imread(sys.argv[1], 0)
    except:
        print '\n', "Can't open image, OpenCV or file missing."
        sys.exit()
    rot, width, height = rotateImage(im, 30.0)
    print width, height
    show_rotate(rot, width, height)
There must be some stupid mistake in my code leading to this problem, but I cannot figure it out...
and I know my code is not pythonic enough :( ..sorry for that..
Can anyone help me?
Best,
bearzk
As BloodyD's answer said, cv2.warpAffine doesn't auto-center the transformed image. Instead, it simply transforms each pixel using the transformation matrix. (This could move pixels anywhere in Cartesian space, including out of the original image area.) Then, when you specify the destination image size, it grabs an area of that size, beginning at (0,0), i.e. the upper left of the original frame. Any parts of your transformed image that don't lie in that region will be cut off.
Here's Python code to rotate and scale an image, with the result centered:
def rotateAndScale(img, scaleFactor=0.5, degreesCCW=30):
    (oldY, oldX) = img.shape  # note: numpy uses (y,x) convention but most OpenCV functions use (x,y)
    M = cv2.getRotationMatrix2D(center=(oldX / 2, oldY / 2), angle=degreesCCW, scale=scaleFactor)  # rotate about center of image
    # choose a new image size
    newX, newY = oldX * scaleFactor, oldY * scaleFactor
    # include this if you want to prevent corners being cut off
    r = np.deg2rad(degreesCCW)
    newX, newY = (abs(np.sin(r) * newY) + abs(np.cos(r) * newX),
                  abs(np.sin(r) * newX) + abs(np.cos(r) * newY))
    # the warpAffine function call, below, basically works like this:
    # 1. apply the M transformation to each pixel of the original image
    # 2. save everything that falls within the upper-left "dsize" portion of the resulting image
    # So find the translation that moves the result to the center of that region.
    (tx, ty) = ((newX - oldX) / 2, (newY - oldY) / 2)
    M[0, 2] += tx  # the third column of the matrix holds the translation, which takes effect after rotation
    M[1, 2] += ty
    rotatedImg = cv2.warpAffine(img, M, dsize=(int(newX), int(newY)))
    return rotatedImg
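Usage would then be something like this (the file name is assumed):

img = cv2.imread("coffee.png", 0)  # grayscale, since the function unpacks a two-value shape
out = rotateAndScale(img, scaleFactor=0.5, degreesCCW=30)
cv2.imwrite("coffee_rotated.png", out)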
When you get the rotation matrix like this:
rot_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
Your "scale" parameter is set to 1.0, so if you use it to transform your image matrix to your result matrix of the same size, it will necessarily be clipped.
You can instead get a rotation matrix like this:
rot_mat = cv2.getRotationMatrix2D(image_center, angle, 0.5)
that will both rotate and shrink, leaving room around the edges (you can scale it up first so that you will still end up with a big image).
Also, it looks like you are confusing the numpy and OpenCV conventions for image sizes. OpenCV uses (x, y) for image sizes and point coordinates, while numpy uses (y,x). That is probably why you are going from a portrait to landscape aspect ratio.
I tend to be explicit about it like this:
imageHeight = image.shape[0]
imageWidth = image.shape[1]
pointcenter = (imageHeight/2, imageWidth/2)
etc...
Ultimately, this works fine for me:
def rotateImage(image, angle):  # parameter angle in degrees
    height = image.shape[0]
    width = image.shape[1]
    height_big = height * 2
    width_big = width * 2
    image_big = cv2.resize(image, (width_big, height_big))
    image_center = (width_big / 2, height_big / 2)  # rotation center
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 0.5)
    result = cv2.warpAffine(image_big, rot_mat, (width_big, height_big), flags=cv2.INTER_LINEAR)
    return result
Update:
Here is the complete script that I executed. Just cv2.imshow("winname", image) and cv2.waitKey() with no arguments to keep it open:
import cv2

def rotateImage(image, angle):  # parameter angle in degrees
    height = image.shape[0]
    width = image.shape[1]
    height_big = height * 2
    width_big = width * 2
    image_big = cv2.resize(image, (width_big, height_big))
    image_center = (width_big / 2, height_big / 2)  # rotation center
    rot_mat = cv2.getRotationMatrix2D(image_center, angle, 0.5)
    result = cv2.warpAffine(image_big, rot_mat, (width_big, height_big), flags=cv2.INTER_LINEAR)
    return result

imageOriginal = cv2.imread("/Path/To/Image.jpg")
# this was an iPhone image that I wanted to resize to something manageable to view,
# so I knew beforehand that this is an appropriate size
imageOriginal = cv2.resize(imageOriginal, (600, 800))
imageRotated = rotateImage(imageOriginal, 45)
cv2.imshow("Rotated", imageRotated)
cv2.waitKey()
Really not a lot there... And you were definitely right to use if __name__ == '__main__': if it is a real module that you're working on.
Well, this question seems not up to date, but I had the same problem and took a while to solve it without scaling the original image up and down. I will just post my solution (unfortunately in C++, but it could easily be ported to Python if needed):
#include <math.h>

#define PI 3.14159265
#define SIN(angle) sin(angle * PI / 180)
#define COS(angle) cos(angle * PI / 180)

void rotate(const Mat src, Mat &dest, double angle, int borderMode, const Scalar &borderValue) {
    int w = src.size().width, h = src.size().height;
    // resize the destination image
    Size2d new_size = Size2d(abs(w * COS((int)angle % 180)) + abs(h * SIN((int)angle % 180)),
                             abs(w * SIN((int)angle % 180)) + abs(h * COS((int)angle % 180)));
    dest = Mat(new_size, src.type());
    // this is our rotation point
    Size2d old_size = src.size();
    Point2d rot_point = Point2d(old_size.width / 2.0, old_size.height / 2.0);
    // and this is the rotation matrix,
    // the same as in the OpenCV docs, but in 3x3 form
    double a = COS(angle), b = SIN(angle);
    Mat rot_mat = (Mat_<double>(3, 3) <<
        a, b, (1 - a) * rot_point.x - b * rot_point.y,
        -1 * b, a, b * rot_point.x + (1 - a) * rot_point.y,
        0, 0, 1);
    // next the translation matrix
    double offsetx = (new_size.width - old_size.width) / 2,
           offsety = (new_size.height - old_size.height) / 2;
    Mat trans_mat = (Mat_<double>(3, 3) << 1, 0, offsetx, 0, 1, offsety, 0, 0, 1);
    // multiply them: we rotate first, then translate, so the order is important!
    // (inverse order, so that the transformations are done right)
    Mat affine_mat = Mat(trans_mat * rot_mat).rowRange(0, 2);
    // now just apply the affine transformation matrix
    warpAffine(src, dest, affine_mat, new_size, INTER_LINEAR, borderMode, borderValue);
}
The general solution is to rotate the picture and then translate it to the right position. So we create two transformation matrices (the first for the rotation, the second for the translation) and multiply them into the final affine transformation. As the matrix returned by OpenCV's getRotationMatrix2D is only 2x3, I had to create the matrices by hand in 3x3 form so that they could be multiplied. Then just take the first two rows and apply the affine transformation.
EDIT: I have created a Gist, because I have needed this functionality too often in different projects. There is also a Python-Version of it: https://gist.github.com/BloodyD/97917b79beb332a65758
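For reference, a minimal Python sketch of the same compose-then-warp idea (my own condensation under the same assumptions, not the Gist's code):

import numpy as np
import cv2

def rotate_fit(src, angle_deg):
    h, w = src.shape[:2]
    r = np.deg2rad(angle_deg)
    new_w = int(abs(w * np.cos(r)) + abs(h * np.sin(r)))
    new_h = int(abs(w * np.sin(r)) + abs(h * np.cos(r)))
    # promote the 2x3 rotation about the old centre to 3x3 so it can be composed
    rot = np.vstack([cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0), [0, 0, 1]])
    trans = np.array([[1, 0, (new_w - w) / 2.0],
                      [0, 1, (new_h - h) / 2.0],
                      [0, 0, 1]])
    affine = (trans @ rot)[:2]  # rotate first, then translate; keep the top two rows
    return cv2.warpAffine(src, affine, (new_w, new_h))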
