Hey guys, I am struggling to draw a grid over an image of a basketball court that starts from the middle, meaning there is a center line along both the width and the height of the image, from which the grid spans out evenly over the whole picture (left/right and top/bottom). I am using this as an example: draw grid lines over an image in matplotlib. I am very new to matplotlib and would greatly appreciate any help from the experts out there!
Thank you
Here is the code I am currently working with, which is not really working (in case that helps):
import matplotlib.pyplot as plt
from PIL import Image

img = Image.open('PATH_TO_IMAGE')
im = plt.imread('PATH_TO_IMAGE')
width, height = img.size
newh = height / 2
neww = width / 2
print(newh)
print(neww)
# Divide the halved dimensions by the court dimensions (60x110 ft, also halved
# to 30x55), so each grid cell is exactly 1 foot wide and 1 foot high.
dx, dy = int(newh / 30), int(neww / 55)
print(dx)
print(dy)
grid_color = [0, 0, 0]
im[:, ::dy, :] = grid_color
im[::dx, :, :] = grid_color
plt.figure(figsize=(6, 3.2))
# Show the result
plt.imshow(im)
plt.show()
This works for exactly one image but no others. Here is the error message:
line 37, in <module>
im[:,::dy,:] = grid_color
ValueError: could not broadcast input array from shape (3) into shape (1322,119,4)
I can't paste the image of the court because of copyright problems but you can just use any image of a 2D basketball court image from google like this: https://www.google.com/url?sa=i&url=https%3A%2F%2Fwww.istockphoto.com%2Fillustrations%2Fbasketball-court-overhead&psig=AOvVaw3ORchlrt0TuWGaDMHoe5zn&ust=1629496920541000&source=images&cd=vfe&ved=0CAsQjRxqFwoTCPCIk5yLvvICFQAAAAAdAAAAABAG
OK, so I found a solution for the grid:
Since the image is RGBA, the color needs 4 values: red, green, blue, and alpha. For orange, for example, it would be:
grid_color = (255, 165, 0, 255)
im[:, ::dy, :] = grid_color
im[::dx, :, :] = grid_color
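To get the grid to actually start from the middle (the original goal), here is a minimal sketch of the idea, assuming a PNG that matplotlib reads as an RGBA float array ('court.png' is a hypothetical path): offset the slicing so that one line passes exactly through the center row and column.
import matplotlib.pyplot as plt

im = plt.imread('court.png').copy()  # hypothetical path; PNGs load as floats in [0, 1]
h, w = im.shape[:2]
dx, dy = (h // 2) // 30, (w // 2) // 55  # ~1-foot cells for a 60x110 ft court
grid_color = (0.0, 0.0, 0.0, 1.0)  # black, assuming 4 channels (RGBA)
im[(h // 2) % dx::dx, :, :] = grid_color  # horizontal lines, one through the center row
im[:, (w // 2) % dy::dy, :] = grid_color  # vertical lines, one through the center column
plt.imshow(im)
plt.show()
The modulo offsets guarantee that the center row and column land exactly on a grid line, so the grid spans out evenly in all four directions.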
How can I crop images that look like this and save them as 3 different images?
The issue is that the images are different in size and non-proportional, so I want to write code that dynamically cuts the black borders but not the black part which is inside the picture.
Here is the desired outcome:
Below is the sample code I've made, which works only for one specific image.
from PIL import Image
im = Image.open(r"image.jpg")
# Setting the points for cropped image1
# im1 = im.crop((left, top, right, bottom))
im1 = im.crop((...))
im2 = im.crop((...))
im3 = im.crop((...))
im1.save(r"image1.jpg")
im2.save(r"image2.jpg")
im3.save(r"image3.jpg")
Finally I've found the solution. Here is what I did:
from PIL import Image, ImageChops

def RemoveBlackBorders(img):
    # Compare against a background filled with the top-left pixel's color
    bg = Image.new(img.mode, img.size, img.getpixel((0, 0)))
    diff = ImageChops.difference(img, bg)
    diff = ImageChops.add(diff, diff, 2.0, -100)
    bbox = diff.getbbox()
    if bbox:
        return img.crop(bbox)
    return img  # no border found, return the image unchanged
# Opens an image in RGB mode
im = Image.open(r"C:\Path\Image.jpg")
# removing borders
im = RemoveBlackBorders(im)
# getting midpoint from size
width, height = im.size
mwidth = width/2
# Assign the crop boxes from the midpoint
# crop((x, y of top-left, x, y of bottom-right))
im1 = im.crop((0, 0, mwidth-135, height))
im2 = im.crop((mwidth-78, 0, mwidth+84, height))
im3 = im.crop((mwidth+135, 0, width, height))
I found the border-removal function here.
Although the solution is not completely dynamic, it still solves my problem with ~90% accuracy. But I believe there should be a more universal approach to this problem.
If the areas always have the same size and the same top and bottom coordinates, the following should work:
The coordinates for the crops can be retrieved by calculating the sums per row and per column, then analyzing them.
import cv2
import numpy as np
im = cv2.imread(image_path)
sum_of_rows = np.sum(im, axis=(1,2))
sum_of_cols = np.sum(im, axis=(0,2))
The top and bottom can be found from the sum of each row (each row's sum being R+G+B over all its pixels; the value should be zero for a black row), by looking for the first and the last value different from zero. These indicate the top and the bottom.
top = np.argmax(sum_of_rows > 0)
bottom = top + np.argmax(sum_of_rows[top:]==0)
The same can be done with the sum for each column, but there you have to check for multiple left and right values.
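A minimal sketch of that column analysis, assuming the gaps between the areas are pure black (so their column sums are exactly zero): find each run of non-zero columns and crop one area per run.
# Pad with zeros so runs touching the image edges are detected too
nonzero = np.concatenate(([0], (sum_of_cols > 0).astype(int), [0]))
changes = np.diff(nonzero)
lefts = np.flatnonzero(changes == 1)    # black -> content transitions
rights = np.flatnonzero(changes == -1)  # content -> black transitions

# One crop per run, reusing top/bottom from the row analysis above
areas = [im[top:bottom, left:right] for left, right in zip(lefts, rights)]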
I have a matrix filled with 0s and 1s. I would like to draw it as a circle instead of a square, that is, divide the circle into sectors and color them according to the matrix, as in this picture:
Each value in this array corresponds to the color that should be filled in that area of the image. I painted the area pink for clarity:
I managed to create pie slices that I can shade, which works for the sections in circle b, for example, but not for the other two:
from PIL import Image, ImageDraw
x_center = 400 // 2
y_center = 400 //2
img = Image.new('RGBA', (400, 400), 'white')
idraw = ImageDraw.Draw(img)
idraw.pieslice([x_center - 100, x_center - 100,
                y_center + 106, y_center + 106], 225, 315, fill='blue')
Here's what happened:
Do you know how to do this in matplotlib or plotly? Theoretically, I understand how to do it; practically, no. Can you help me, please?
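Not an answer from the thread, but a minimal matplotlib sketch of the idea, assuming each matrix row maps to a ring and each column to a sector (the 3x12 example matrix here is made up):
import numpy as np
import matplotlib.pyplot as plt

mat = np.random.randint(0, 2, (3, 12))  # hypothetical 0/1 matrix: 3 rings x 12 sectors

# Sector and ring boundaries; pcolormesh colors one cell per matrix entry
theta = np.linspace(0, 2 * np.pi, mat.shape[1] + 1)
r = np.arange(mat.shape[0] + 1)

fig, ax = plt.subplots(subplot_kw={'projection': 'polar'})
ax.pcolormesh(theta, r, mat, cmap='Blues')
ax.set_xticks([])
ax.set_yticks([])
plt.show()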
I need to resize an image, but with a "varying scaling" in the y axis, after warping:
Plotted Image
Original input image
Warped output image
The image (left one) was taken at an angle, so I've used the getPerspectiveTransform and warpPerspective OpenCV functions to get the top/plan view of the image (right one).
But now the top half of the warped image is stretched and the bottom half is squashed, and the amount of stretch/squash varies continuously as you go down the image. So, I need to do the opposite.
For example: The zebra crossing lines in the warped image are thicker at the top of the image and thinner at the bottom. I want them to all be the same thickness and same vertical distance from each other essentially.
Badly drawn, but something like this (if we ignore the 2 people, I think this is what the final output image should look like):
predicted output image
My end goal is to measure distance between people's feet in an image (shown by green dots), but I've got that section sorted already.
Vertically rescaling the warped image to make it linear would let me accurately measure real distances in both the x and y directions from a top/plan view (i.e. each pixel in the x or y direction corresponds to, say, 1 cm in real distance).
I was thinking of multiplying each row of the image by a factor (e.g. multiplying the top rows by a smaller number like 0.8 or 0.9, and the bottom rows by a bigger number like 1.1 or 1.2), but I really don't know how to do that; a rough sketch of this idea follows the code below.
Code:
import cv2 as cv
from matplotlib import pyplot as plt
import numpy as np
# READ IMAGE
imgOrig = cv.imread('.jpg')
# RESIZE IMAGE
width = int(1000)
ratio = imgOrig.shape[1]/width
height = int(imgOrig.shape[0]/ratio)
dsize = (width, height)
img = cv.resize(imgOrig, dsize)
feetLocation = [[280, 500], [740, 496]]
cv.circle(img,(280, 500),5,(0,255,0),thickness= 10)
cv.circle(img,(740, 496),5,(0,255,0),thickness= 10)
# WARPING
pts1 = np.float32([[0, -0], [width, 0], [-1800, height], [width + 1800, height]])
pts2 = np.float32([[0, 0], [width, 0], [0, height], [width, height]])
M = cv.getPerspectiveTransform(pts1, pts2)
dst = cv.warpPerspective(img, M, (width, height))
#DISPLAY IMAGES
plt.subplot(121),plt.imshow(img),plt.title('Original Image')
plt.subplot(122),plt.imshow(dst),plt.title('Warped Image')
plt.show()
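Continuing from the variables above, here is a rough sketch of that row-rescaling idea with cv.remap; the exponent 1.5 is a made-up example curve, which would really have to be derived from the scene geometry:
# For every output pixel, cv.remap looks up the source coordinates to sample
h, w = dst.shape[:2]
ys = np.linspace(0, 1, h, dtype=np.float32)
src_rows = (ys ** 1.5) * (h - 1)  # assumed curve: sample slowly at the top, faster below
map_y = np.repeat(src_rows[:, None], w, axis=1)
map_x = np.tile(np.arange(w, dtype=np.float32), (h, 1))
rescaled = cv.remap(dst, map_x, map_y, interpolation=cv.INTER_LINEAR)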
I was working on a solution before the several edits were applied. I focused on the actual boxes only; if you actually need the surrounding area, too, the following approach won't help you much, I'm afraid. Also, I assumed the bottom box to be fully included, so if it is somehow cut off as presented in your new desired final output, additional work would be needed to handle that case.
From the given image, you could mask the gray-ish part around and between the single boxes using the saturation and value channels from the HSV color space:
Next, sum all masked pixels row-wise, apply a moving average to clean the signal, and detect the peaks in that signal:
The bottom image border must be manually added, since there is no gray-ish border (most likely because the box is somehow cut).
Now, for each of these "peak rows", determine the first and last masked pixels, and build boxes from each pair of neighbouring "peak rows". Finally, for each of these boxes, apply a distinct perspective transform to a given size. If needed, stack those boxes vertically, for example:
That'd be the whole code:
import cv2
import matplotlib.pyplot as plt
import numpy as np
from scipy.signal import find_peaks
# Read original image
imgOrig = cv2.cvtColor(cv2.imread('DInAq.jpg'), cv2.COLOR_BGR2RGB)
# Resize image
width = int(1000)
ratio = imgOrig.shape[1] / width
height = int(imgOrig.shape[0] / ratio)
dsize = (width, height)
img = cv2.resize(imgOrig, dsize)
# Mask low saturation and medium to high value (i.e. gray-ish/white-ish colors)
img_gauss = cv2.GaussianBlur(img, (5, 5), -1)
h, s, v = cv2.split(cv2.cvtColor(img_gauss, cv2.COLOR_BGR2HSV))
mask = (s < 24) & (v > 64)
# Row-wise sum mask pixels, apply moving average filter, and find peaks
row_sum = np.sum(mask, axis=1)
row_sum = np.convolve(row_sum, np.ones(5)/5, 'same')
peaks = find_peaks(row_sum, prominence=50)[0]
peaks = np.insert(peaks, 4, img.shape[0]-1)
# Find first and last pixels per "peak row"
x1 = [np.argwhere(mask[p, :]).min() for p in peaks]
x2 = [np.argwhere(mask[p, :]).max() for p in peaks]
# Collect single boxes
boxes = []
for i in np.arange(len(peaks)-1, 0, -1):
    boxes.append([[x1[i], peaks[i]],
                  [x1[i-1], peaks[i-1]],
                  [x2[i-1], peaks[i-1]],
                  [x2[i], peaks[i]]])
# Warp each box individually to a given size
warped = []
bw, bh = [400, 400]
for box in reversed(boxes):
    pts1 = np.float32(box)
    pts2 = np.float32([[0, bh-1], [0, 0], [bw-1, 0], [bw-1, bh-1]])
    M = cv2.getPerspectiveTransform(pts1, pts2)
    warped.append(cv2.warpPerspective(img, M, (bw, bh)))
# Output
plt.figure(1)
plt.subplot(121), plt.imshow(img), plt.title('Original image')
for box in boxes:
    pts = np.array(box)
    plt.plot(pts[:, 0], pts[:, 1], 'rx')
plt.subplot(122), plt.imshow(np.vstack(warped)), plt.title('Warped image')
plt.tight_layout(), plt.show()
That's kind of an automated way to detect and extract the single boxes. For better results, you could set up a simple GUI (solely using OpenCV, for example), let the user click on the exact corners, and build the boxes to be transformed from there.
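A minimal sketch of that GUI idea, assuming one box at a time and the same 'DInAq.jpg' image; cv2.setMouseCallback collects the four corner clicks:
import cv2
import numpy as np

clicks = []

def on_mouse(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDOWN:
        clicks.append((x, y))

img = cv2.imread('DInAq.jpg')
cv2.namedWindow('pick corners')
cv2.setMouseCallback('pick corners', on_mouse)
while len(clicks) < 4:
    vis = img.copy()
    for pt in clicks:
        cv2.circle(vis, pt, 5, (0, 0, 255), -1)
    cv2.imshow('pick corners', vis)
    if cv2.waitKey(20) & 0xFF == 27:  # Esc aborts
        break
cv2.destroyAllWindows()
pts1 = np.float32(clicks)  # feed into cv2.getPerspectiveTransform as above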
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
PyCharm: 2021.1
Matplotlib: 3.4.1
NumPy: 1.20.2
OpenCV: 4.5.1
SciPy: 1.6.2
----------------------------------------
I'm kind of new to PIL and was wondering why my circle is not perfect. Is there a fix for this? Thanks.
Here's my code:
from PIL import Image, ImageDraw, ImageOps

# avatar_image is assumed to be loaded earlier, e.g. via Image.open(...)
avatar_image = avatar_image.resize((128, 128))
avatar_size = (avatar_image.size[0] * 3, avatar_image.size[1] * 3)
circle_image = Image.new('L', avatar_size, 0)
circle_draw = ImageDraw.Draw(circle_image)
circle_draw.ellipse((0, 0) + avatar_size, fill=255)
mask = circle_image.resize(avatar_image.size, Image.ANTIALIAS)
avatar_image.putalpha(mask)
final = ImageOps.fit(avatar_image, mask.size, centering=(0.5, 0.5))
final.putalpha(mask)
final.show()
Draw Circle: right side of the circle looks off
Circle with Picture:
You have an off-by-one error, commonly caused by confusion between size and position, which is the case here too.
Image.new takes a width and height in number of pixels.
circle_draw.ellipse takes a start and end position, which are based on a 0-indexed grid.
To get a full circle, you need to make the circle one pixel smaller than it is now so that it fits inside circle_image.
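A minimal illustration of the fix, reusing the variables from the question: the end coordinates are inclusive pixel positions, so subtract 1 from the size.
# A (W, H) image's last valid pixel position is (W - 1, H - 1)
circle_draw.ellipse((0, 0, avatar_size[0] - 1, avatar_size[1] - 1), fill=255)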
I am trying to draw a circle on an image using Python. I tried this with PIL, but I would like to specify a line width; currently, PIL draws a circle but the border is too thin.
Here is what I have done.
For a test image, I created a 1632 x 1200 image in MS Paint and filled it green. I called it test_1.jpg. Here is the input file:
from PIL import Image, ImageDraw
im = Image.open('test_1.jpg')
width, height = im.size
eX, eY = 816,816 #Size of Bounding Box for ellipse
bbox = (width/2 - eX/2, height/2 - eY/2, width/2 + eX/2, height/2 + eY/2)
draw = ImageDraw.Draw(im)
bbox_L = []
for j in range(0, 5):
    bbox_L.append([element + j for element in bbox])
    draw.ellipse(tuple(bbox_L[j]), outline='white')
im.show()
Basically, I tried to draw multiple circles that would be centered at the same spot but with a different radius. My thinking was that this would create the effect of a thicker line.
However, this is producing the output shown in the attached file below:
Problem: As you can see, the bottom-left and top-right are too thin, and there are gaps between the various circles (see top-left and bottom-right). The circle has a varying thickness; I am looking for a circle with a uniform thickness.
Question:
Is there a way to draw a circle in Python on an image like test_1.jpg, using PIL, NumPy, etc., and to specify the line thickness?
I had the same problem and decided to write a helper function, similar to yours. This function draws two concentric ellipses in black and white on a mask layer, and the intended outline colour is stamped onto the original image through the mask. To get smoother results (antialiasing), the ellipses and the mask are drawn at a higher resolution.
Output with and without antialias
The white ellipse is 20 pixels wide, and the black ellipse is 0.5 pixels wide.
Code
from PIL import Image, ImageDraw

def draw_ellipse(image, bounds, width=1, outline='white', antialias=4):
    """Improved ellipse drawing function, based on PIL.ImageDraw."""
    # Use a single channel image (mode='L') as mask.
    # The size of the mask can be increased relative to the input image
    # to get smoother looking results.
    mask = Image.new(
        size=[int(dim * antialias) for dim in image.size],
        mode='L', color='black')
    draw = ImageDraw.Draw(mask)

    # Draw outer shape in white (color) and inner shape in black (transparent)
    for offset, fill in (width / -2.0, 'white'), (width / 2.0, 'black'):
        left, top = [(value + offset) * antialias for value in bounds[:2]]
        right, bottom = [(value - offset) * antialias for value in bounds[2:]]
        draw.ellipse([left, top, right, bottom], fill=fill)

    # Downsample the mask using PIL.Image.LANCZOS
    # (a high-quality downsampling filter).
    mask = mask.resize(image.size, Image.LANCZOS)

    # Paste outline color to input image through the mask
    image.paste(outline, mask=mask)

# Green background image
image = Image.new(mode='RGB', size=(700, 300), color='green')
ellipse_box = [50, 50, 300, 250]

# Draw a thick white ellipse and a thin black ellipse
draw_ellipse(image, ellipse_box, width=20)
# Draw a thin black line, using higher antialias to preserve finer detail
draw_ellipse(image, ellipse_box, outline='black', width=.5, antialias=8)

# Let's try without antialiasing
ellipse_box[0] += 350
ellipse_box[2] += 350
draw_ellipse(image, ellipse_box, width=20, antialias=1)
draw_ellipse(image, ellipse_box, outline='black', width=1, antialias=1)

image.show()
I've only tested this code in Python 3.4, but I think it should work with 2.7 without major modification.
A simple (but not nice) solution is to draw two circles (the smaller one in the color of the background):
outline = 10 # line thickness
draw.ellipse((x1-outline, y1-outline, x2+outline, y2+outline), fill=outline_color)
draw.ellipse((x1, y1, x2, y2), fill=background_color)
From version 5.3.0 onwards, released on 18 Oct 2018, Pillow has supported the width argument for ImageDraw.ellipse. I doubt many people are using the original PIL nowadays.
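For example, reusing draw and bbox from the question above:
# Pillow >= 5.3.0 supports the outline width directly
draw.ellipse(bbox, outline='white', width=5)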
I don't think there's a way to specify the ellipse thickness, but you can probably draw lines at each pixel where the ellipse passes, using the width=... argument.
NB: I'm foreign, so sorry if my English is wrong.
You can use the internal Image.core.draw method like this:
import numpy as np
from PIL import Image, ImageDraw

zero_array = np.zeros((224, 224))
im = Image.fromarray(np.uint8(zero_array))
draw = ImageDraw.Draw(im)
dr_im = Image.core.draw(im.getdata(), 0)
dr_im.draw_rectangle((22,33, 150,100),220,2)
dr_im.draw_rectangle((22,33, 150,100),125,0)
#draw.rectangle((22,33, 150,100), fill=220,outline = 125)
print(np.array(im)[33][23])
im.show()