I have all sorts of images of rectangular shape. I need to modify them to a uniform square shape (different sizes are OK).
For that I have to layer each one on top of a larger square shape.
The background is black.
I figured it out to the point where I need to layer 2 images:
import cv2
import numpy as np
if 1:
    img = cv2.imread(in_img)
    # get size
    height, width, channels = img.shape
    print(in_img, height, width, channels)
    # Create a black image
    x = height if height > width else width
    y = height if height > width else width
    square = np.zeros((x, y, 3), np.uint8)
    cv2.imshow("original", img)
    cv2.imshow("black square", square)
    cv2.waitKey(0)
How do I stack them on top of each other so that the original image is centered vertically and horizontally on top of the black shape?
I figured it out. You need to "broadcast into shape" (in Python 3 the slice indices have to be cast to int, as in the final draft below):
square[(y-height)/2:y-(y-height)/2, (x-width)/2:x-(x-width)/2] = img
Final draft:
import cv2
import numpy as np
if 1:
    img = cv2.imread(in_img)
    # get size
    height, width, channels = img.shape
    print(in_img, height, width, channels)
    # Create a black image
    x = height if height > width else width
    y = height if height > width else width
    square = np.zeros((x, y, 3), np.uint8)
    #
    # This does the job
    #
    square[int((y-height)/2):int(y-(y-height)/2), int((x-width)/2):int(x-(x-width)/2)] = img
    cv2.imwrite(out_img, square)
    cv2.imshow("original", img)
    cv2.imshow("black square", square)
    cv2.waitKey(0)
You can use numpy.vstack to stack your images vertically and numpy.hstack to stack your images horizontally.
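For example, a minimal sketch (with made-up array shapes) of how the stacking works - vstack needs matching widths, hstack needs matching heights:
import numpy as np
top = np.zeros((100, 300, 3), np.uint8)        # placeholder "image", 100 rows x 300 cols
bottom = np.full((50, 300, 3), 255, np.uint8)  # placeholder "image", 50 rows x 300 cols
stacked_vertically = np.vstack((top, bottom))  # shape (150, 300, 3)
stacked_horizontally = np.hstack((top, top))   # shape (100, 600, 3)
print(stacked_vertically.shape, stacked_horizontally.shape)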
Please mark as answered if this resolves your problem.
Related
I have many 708x708 pictures which I need to resize to 500x250 px, keeping the ratio the same. I imagined this can be done by resizing the actual image to 250x250 via Image.thumbnail() and adding two white borders to fill the remainder of the space. However, I don't know how to do the latter. The following code gives me the thumbnail image of 250x250 px.
from PIL import Image
image = Image.open('image.jpg')
image.thumbnail((500, 250))
image.save('image_thumbnail.jpg')
print(image.size)
Question is similar to this one.
Any suggestions would be much appreciated!
Check this method in the skimage package. There's a parameter called mode with which you can control the desired behaviour.
Try the following:
import cv2
import numpy as np
img = cv2.imread('myimage.jpg', cv2.IMREAD_UNCHANGED)
print('Original Dimensions : ',img.shape)
width = int(img.shape[1] * 35.31 / 100) # 250/708 is 35%
height = int(img.shape[0] * 35.31 / 100)
dim = (width, height)
resized_image = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
print('Resized_image Dimensions : ',resized_image.shape)
row, col = resized_image.shape[:2]
bottom = resized_image[row-2:row, 0:col]
bordersize = 125
border = cv2.copyMakeBorder(
    resized_image,
    top=0,
    bottom=0,
    left=bordersize,
    right=bordersize,
    borderType=cv2.BORDER_CONSTANT,
    value=[255, 255, 255]
)  # pad left/right so the ~250x250 thumbnail becomes ~500x250
cv2.imshow('image', resized_image)
cv2.imshow('left', bottom)
cv2.imshow('right', border)
cv2.waitKey(0)
cv2.destroyAllWindows()
I tried the following: first, I make a thumbnail of size (250, 250), then alter the image with ImageOps.expand to add two white borders, which makes the dimensions (500, 250).
from PIL import Image, ImageOps
img = Image.open('801595.jpg')
img.thumbnail((500, 250))
print(img.size)
img_with_border = ImageOps.expand(img, border = (125, 0) ,fill='white')
img_with_border.save('imaged-with-border2.jpg')
I am trying to crop a centered (or not centered) circle from this image:
I took this code from existing questions on this topic on Stack Overflow, but something goes wrong:
import cv2
file = 'dog.png'
img = cv2.imread(file)
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
circle = cv2.HoughCircles(img,
                          3,
                          dp=1.5,
                          minDist=10,
                          minRadius=1,
                          maxRadius=10)
x = circle[0][0][0]
y = circle[0][0][1]
r = circle[0][0][2]
rectX = (x - r)
rectY = (y - r)
crop_img = img[rectY:(rectY+2*r), rectX:(rectX+2*r)]
cv2.imwrite('dog_circle.png', crop_img)
Output:
Traceback (most recent call last):
File "C:\Users\Artur\Desktop\crop_circle - Kopie\crop_circle.py", line 14, in <module>
x = circle[0][0][0]
TypeError: 'NoneType' object is not subscriptable
cv2.HoughCircles() seems to produce None instead of a cropped circle array. How do I fix this?
First: HoughCircles is used to detect circles in an image, not to crop it.
You can't have a circular image. An image is always a rectangle, but some of its pixels can be transparent (the alpha channel in RGBA) and programs will not display them.
So you can first crop the image to a square and later add an alpha channel with the information about which pixels should be visible. Here you can use a mask with a white circle on a black background. At the end you have to save it as PNG or TIFF, because JPG can't keep an alpha channel.
I use the PIL/Pillow module for this.
I crop a square region in the center of the image, but you can use different coordinates for this.
Next, I create a grayscale image of the same size with a black background and draw a white circle/ellipse on it.
Finally, I add this image as an alpha channel to the cropped image and save it as PNG.
from PIL import Image, ImageDraw
filename = 'dog.jpg'
# load image
img = Image.open(filename)
# crop image
width, height = img.size
x = (width - height)//2
img_cropped = img.crop((x, 0, x+height, height))
# create grayscale image with white circle (255) on black background (0)
mask = Image.new('L', img_cropped.size)
mask_draw = ImageDraw.Draw(mask)
width, height = img_cropped.size
mask_draw.ellipse((0, 0, width, height), fill=255)
#mask.show()
# add mask as alpha channel
img_cropped.putalpha(mask)
# save as png which keeps alpha channel
img_cropped.save('dog_circle.png')
img_cropped.show()
Result
BTW:
In the mask you can use values from 0 to 255, so different pixels can have different transparency - some of them can be half-transparent to make a smooth border, as in the sketch below.
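A minimal sketch of such a soft border (the blur radius and file names are my own placeholders, not part of the answer above):
from PIL import Image, ImageDraw, ImageFilter
img = Image.open('dog.jpg')
width, height = img.size
x = (width - height) // 2
img_cropped = img.crop((x, 0, x + height, height))
# draw the white ellipse slightly inside the edge, then blur the mask
# so the transition from opaque to transparent is gradual
mask = Image.new('L', img_cropped.size)
ImageDraw.Draw(mask).ellipse((10, 10, img_cropped.width - 10, img_cropped.height - 10), fill=255)
mask = mask.filter(ImageFilter.GaussianBlur(5))
img_cropped.putalpha(mask)
img_cropped.save('dog_soft_circle.png')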
If you want to use it in HTML on your own page, then you don't have to create a circular image, because the web browser can round the corners of the image and display it as a circle. You have to use CSS for this.
EDIT: Example with more circles on the mask.
from PIL import Image, ImageDraw
filename = 'dog.jpg'
# load image
img = Image.open(filename)
# crop image
width, height = img.size
x = (width - height)//2
img_cropped = img.crop((x, 0, x+height, height))
# create grayscale image with white circle (255) on black background (0)
mask = Image.new('L', img_cropped.size)
mask_draw = ImageDraw.Draw(mask)
width, height = img_cropped.size
mask_draw.ellipse((50, 50, width-50, height-50), fill=255)
mask_draw.ellipse((0, 0, 250, 250), fill=255)
mask_draw.ellipse((width-250, 0, width, 250), fill=255)
# add mask as alpha channel
img_cropped.putalpha(mask)
# save as png which keeps alpha channel
img_cropped.save('dog_2.png')
img_cropped.show()
This answer explains how to apply a mask. First, read in the image:
import cv2
import numpy as np
img = cv2.imread('dog.jpg')
Next, create a mask, i.e. a blank image that is the same size as the source image:
h,w,_ = img.shape
mask = np.zeros((h,w), np.uint8)
Then, draw a circle on the mask. Change these parameters based on where the face is:
cv2.circle(mask, (678,321), 5, 255, 654)  # a thickness far larger than the radius turns this into a large filled disc
Finally, mask the source image:
img = cv2.bitwise_and(img, img, mask= mask)
Here is the mask:
And the output:
The idea is to create a black mask, then draw the desired region to crop out in white using cv2.circle(). From there we can use cv2.bitwise_and() with the original image and the mask. To crop the result, we can use cv2.boundingRect() on the mask to obtain the ROI and then use NumPy slicing to extract the result. For this example I used the center point derived from the image's width and height.
import cv2
import numpy as np
# Create mask and draw circle onto mask
image = cv2.imread('1.jpg')
mask = np.zeros(image.shape, dtype=np.uint8)
x,y = image.shape[1], image.shape[0]
cv2.circle(mask, (x//2,y//2), 300, (255,255,255), -1)
# Bitwise-and for ROI
ROI = cv2.bitwise_and(image, mask)
# Crop mask and turn background white
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
x,y,w,h = cv2.boundingRect(mask)
result = ROI[y:y+h,x:x+w]
mask = mask[y:y+h,x:x+w]
result[mask==0] = (255,255,255)
cv2.imshow('result', result)
cv2.waitKey()
I want to resize an image based on a percentage and keep it as close to the original as possible, with minimal noise and distortion. The resize could be up or down, as in I could scale to 5% of the size of the original image or to 500% (or any other value; these are given as examples).
This is what I tried, and I'd need absolutely minimal changes in the resized image, since I'll be using it in comparison with other images:
import cv2

def resizing(main, percentage):
    main = cv2.imread(main)
    height = int(main.shape[0] * percentage)
    width = int(main.shape[1] * percentage)
    dim = (width, height)
    final_im = cv2.resize(main, dim, interpolation=cv2.INTER_AREA)
    cv2.imwrite("C:\\Users\\me\\nature.jpg", final_im)
You can use this syntax of cv2.resize:
cv2.resize(image, None, fx=int or float, fy=int or float)
fx is the scale factor along the width,
fy is the scale factor along the height.
You can pass the second argument as None or (0, 0).
Example:
img = cv2.resize(oriimg,None,fx=0.5,fy=0.5)
Note:
0.5 means the image is scaled to 50% of its original size.
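A minimal sketch of the percentage-based version the question asks for (file names and the interpolation choice are my own assumptions):
import cv2
def resize_by_percentage(path, percentage):
    img = cv2.imread(path)
    scale = percentage / 100.0  # e.g. 50 -> 0.5, 500 -> 5.0
    # INTER_AREA is usually preferred for shrinking, INTER_CUBIC for enlarging
    interp = cv2.INTER_AREA if scale < 1.0 else cv2.INTER_CUBIC
    return cv2.resize(img, None, fx=scale, fy=scale, interpolation=interp)
resized = resize_by_percentage('nature.jpg', 50)
cv2.imwrite('nature_resized.jpg', resized)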
I think you're trying to resize and maintain aspect ratio. Here's a function to upscale or downscale an image based on percentage.
Original image example
Resized image to 0.5 (50%)
Resized image to 1.3 (130%)
import cv2

# Resizes an image and maintains aspect ratio
def maintain_aspect_ratio_resize(image, width=None, height=None, inter=cv2.INTER_AREA):
    # Grab the image size and initialize dimensions
    dim = None
    (h, w) = image.shape[:2]
    # Return original image if no need to resize
    if width is None and height is None:
        return image
    # We are resizing height if width is none
    if width is None:
        # Calculate the ratio of the height and construct the dimensions
        r = height / float(h)
        dim = (int(w * r), height)
    # We are resizing width if height is none
    else:
        # Calculate the ratio of the width and construct the dimensions
        r = width / float(w)
        dim = (width, int(h * r))
    # Return the resized image
    return cv2.resize(image, dim, interpolation=inter)

if __name__ == '__main__':
    image = cv2.imread('1.png')
    cv2.imshow('image', image)
    resize_ratio = 1.2
    resized = maintain_aspect_ratio_resize(image, width=int(image.shape[1] * resize_ratio))
    cv2.imshow('resized', resized)
    cv2.imwrite('resized.png', resized)
    cv2.waitKey(0)
I have an image with e.g. Width=999 and Height=767. I know the contour of my ROI and have used rect=cv2.minAreaRect() to get the CenterPosition, Width, Height and Angle of the rotated rectangle around it. Now I need to resize the image to the size of another image with e.g. Width=4096 and Height=2160. So far I am using cv2.resize() to do so.
My problem now is that, of course, the boxPoints of my rectangle end up somewhere else too, and the data in rect about CenterPosition, Width, Height and Angle of the rotated rectangle around the now resized ROI is not updated and therefore wrong. I have tried different workarounds, but haven't found any solution yet.
Here is my code:
import numpy as np
import cv2
#Create black image
img = np.zeros((767, 999, 3), np.uint8)
#Turn ROI to white
cv2.fillConvexPoly(img, np.array(ROI_contour), (255, 255, 255))
#Get Width, Height, and Angle of rectangle around ROI
rect = cv2.minAreaRect(np.array(ROI_contour))
#Draw rotated rectangle in red
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(img,[box], 0, (0,0,255), 1)
#Resize img to new size
img_resized = cv2.resize(img, (4096, 2160), interpolation=cv2.INTER_AREA)
Here is how img e.g. could look like:
img with ROI in white before resizing - CenterPosition, Width, Height and Angle of ROI is known by rect.
How can I get new Width, Height and Angle of the resized ROI?
This is the simple unitary method.
In your example, h_old = 767, w_old = 999; h_new = 2160, w_new = 4096.
h_ratio = h_new / h_old = 2160 / 767 ≈ 2.82, w_ratio = w_new / w_old = 4096 / 999 ≈ 4.10
Say the center position, width and height of the rectangle found in the old image are (old_center_x, old_center_y), old_rect_width and old_rect_height, respectively.
Then the new values become:
(old_center_x * w_ratio, old_center_y * h_ratio), old_rect_width * w_ratio and old_rect_height * h_ratio, respectively.
Since the aspect ratios of the two images are not the same (old_aspect_ratio = 999/767 ≈ 1.30, new_aspect_ratio = 4096/2160 ≈ 1.90), the two sides are stretched by different factors and the angle of the rotated rectangle changes as well; see the sketch below for one way to get the updated values.
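A minimal sketch of that (my own addition, with a placeholder contour standing in for ROI_contour): scale the four boxPoints by the per-axis ratios and run cv2.minAreaRect again on the scaled points to get the updated center, size and angle. Note that under non-uniform scaling the rotated rectangle really becomes a parallelogram, so this gives the minimum-area rectangle around the scaled corners rather than an exact equivalent.
import cv2
import numpy as np
ROI_contour = np.array([[100, 200], [300, 150], [350, 400], [120, 430]], dtype=np.float32)  # placeholder
rect = cv2.minAreaRect(ROI_contour)
w_ratio = 4096 / 999  # new width / old width
h_ratio = 2160 / 767  # new height / old height
box = cv2.boxPoints(rect)                            # 4 corners in the old image
scaled_box = box * np.array([w_ratio, h_ratio], np.float32)
new_rect = cv2.minAreaRect(scaled_box)               # ((cx, cy), (w, h), angle) in the resized image
print(new_rect)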
Can someone tell me how to implement a bisection of the image into upper and lower parts, so that I could overlap them? For example, I have an image and I should divide it to calculate the number of pixels in each part. I am new to OpenCV and don't exactly understand the geometry of the image.
To simplify #avereux's answer:
In Python you can use slicing to break down an image into sub-images. The syntax for this is:
sub_image = full_image[y_start:y_end, x_start:x_end]
Note that for images, the origin is the top-left corner of the image. So a pixel on the first row of the image (that is, the topmost row) would have coordinates x_coordinate = x, y_coordinate = 0.
To get the shape of the image, use image.shape. This returns (no_of_rows, no_of_cols) for a grayscale image, or (no_of_rows, no_of_cols, no_of_channels) for a colour image.
You can use these to break down the image any which way you want.
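A minimal sketch of that slicing for a top/bottom split (the file name is a placeholder):
import cv2
img = cv2.imread('image.jpg')
rows, cols = img.shape[:2]
top_half = img[0:rows // 2, 0:cols]
bottom_half = img[rows // 2:rows, 0:cols]
# .size counts every element, i.e. rows * cols * channels
print(top_half.size, bottom_half.size)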
You can crop the top and bottom portion of the image down the middle horizontally.
Open the image.
import cv2
import numpy as np
image = cv2.imread('images/blobs1.png')
cv2.imshow("Original Image", image)
cv2.waitKey(0)
Use image.shape to let us capture the height and width variables.
height, width = image.shape[:2]
print(image.shape)
Now we can start cropping.
# Let's get the starting pixel coordinates (top left of cropped top)
start_row, start_col = int(0), int(0)
# Let's get the ending pixel coordinates (bottom right of cropped top)
end_row, end_col = int(height * .5), int(width)
cropped_top = image[start_row:end_row , start_col:end_col]
print(start_row, end_row)
print(start_col, end_col)
cv2.imshow("Cropped Top", cropped_top)
cv2.waitKey(0)
cv2.destroyAllWindows()
# Let's get the starting pixel coordinates (top left of cropped bottom)
start_row, start_col = int(height * .5), int(0)
# Let's get the ending pixel coordinates (bottom right of cropped bottom)
end_row, end_col = int(height), int(width)
cropped_bot = image[start_row:end_row , start_col:end_col]
print(start_row, end_row)
print(start_col, end_col)
cv2.imshow("Cropped Bot", cropped_bot)
cv2.waitKey(0)
cv2.destroyAllWindows()
Finally, we can use image.size to give us the number of pixels in each part.
print(cropped_top.size)
print(cropped_bot.size)
You can do the same thing with contours but it will involve bounding boxes.
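A rough sketch of that contour route (my own addition; the threshold value and file name are placeholders):
import cv2
img = cv2.imread('images/blobs1.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
# findContours returns (contours, hierarchy) in OpenCV 4.x
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    roi = img[y:y + h, x:x + w]  # crop each blob with its bounding box
    print(roi.size)              # pixel count of that crop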
Here is a much more flexible method by which you can cut an image in half, or into 4 or 6 equal parts - however many parts you may need.
This code will first crop the image into 2 pieces horizontally.
Then, for each of those 2 pieces, it will crop another 3 images,
leaving a total of 6 cropped images.
Change the CROP_W_SIZE and CROP_H_SIZE values to adjust your crop settings.
You will need a CROP folder where this code will save the images.
import cv2, time

img = cv2.imread('image.png')
img2 = img
height, width, channels = img.shape

# Number of pieces horizontally
CROP_W_SIZE = 3
# Number of pieces vertically for each horizontal piece
CROP_H_SIZE = 2

for ih in range(CROP_H_SIZE):
    for iw in range(CROP_W_SIZE):
        x = width // CROP_W_SIZE * iw
        y = height // CROP_H_SIZE * ih
        h = height // CROP_H_SIZE
        w = width // CROP_W_SIZE
        print(x, y, h, w)
        img = img[y:y+h, x:x+w]
        NAME = str(time.time())
        cv2.imwrite("CROP/" + NAME + ".png", img)
        img = img2
Check this out for left-right and top-bottom division:
import cv2
img = cv2.imread('img.jpg')
h, w, channels = img.shape
for left-right division:
half = w//2
left_part = img[:, :half]
right_part = img[:, half:]
cv2.imshow('Left part', left_part)
cv2.imshow('Right part', right_part)
for top-bottom division:
half2 = h//2
top = img[:half2, :]
bottom = img[half2:, :]
cv2.imshow('Top', top)
cv2.imshow('Bottom', bottom)
cv2.waitKey(0)
I edited this source a bit: https://www.geeksforgeeks.org/dividing-images-into-equal-parts-using-opencv-in-python/