Fill image up to a size with Python

I am writing a handwriting recognition app and my inputs have to be of a certain size (128x128). When I detect a letter it looks like this:
That image, for instance, has a size of 40x53. I want to make it 128x128, but simply resizing it lowers the quality, especially for smaller images. I want to somehow fill the rest up to 128x128, with the 40x53 image in the middle. The background color should also stay relatively the same. I am using Python's OpenCV, but I am new to it. How can I do this, and is it even possible?

You can get what you asked for using outputImage below. Basically, I have added a border with the copyMakeBorder method; refer to the documentation for more details. Set whatever color you want in the value parameter; for now it is white, [255, 255, 255].
However, I would rather suggest resizing the original image, which seems like the better option. For the resized image, use resized in the following code. For your convenience, I have included both methods in this code.
import cv2
import numpy as np

inputImage = cv2.imread('input.jpg', 1)
# Pad the 40x53 letter to 128x128: top 37 + 53 rows + bottom 38 = 128, left 44 + 40 columns + right 44 = 128
outputImage = cv2.copyMakeBorder(inputImage, 37, 38, 44, 44, cv2.BORDER_CONSTANT, value=[255, 255, 255])
# Alternative: stretch the input directly to 128x128
resized = cv2.resize(inputImage, (128, 128), interpolation=cv2.INTER_AREA)
cv2.imwrite('output.jpg', outputImage)
cv2.imwrite('resized.jpg', resized)
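
If the letter crops come in varying sizes, the border widths can be computed from the image shape instead of being hardcoded. Here is a minimal sketch of that idea, assuming the crop is never larger than the 128x128 target; the file names are placeholders:
import cv2

target = 128
img = cv2.imread('letter.jpg', 1)  # e.g. a 40x53 crop
h, w = img.shape[:2]
top = (target - h) // 2
bottom = target - h - top
left = (target - w) // 2
right = target - w - left
# Pad with a constant color close to the background (white here)
padded = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=[255, 255, 255])
cv2.imwrite('letter_128.jpg', padded)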

I believe you want to scale your image.
This code might help:
import cv2
img = cv2.imread('name_of_image', cv2.IMREAD_UNCHANGED)
# Get original size of image
print('Original Dimensions: ',img.shape)
# Percentage of the original size
scale_percent = 220
width = int(img.shape[1] * scale_percent / 100)
height = int(img.shape[0] * scale_percent / 100)
dim = (width, height)
# Resize/Scale the image
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
# The new size of the image
print('Resized Dimensions: ',resized.shape)
cv2.imshow("Resized image", resized)
cv2.waitKey(0)
cv2.destroyAllWindows()

Related

cv2.imshow() displays the correct image, but when saving it with cv2.imwrite() the saved image is all black pixels?

I am trying to resize the input image to a 736 x 736 output while preserving the aspect ratio of the original image, adding zero padding while doing so.
The function image_resize_add_padding() works fine and does what I am trying to do. The resized image looks good when displayed with the cv2.imshow() function,
but when saved with the cv2.imwrite() function it comes out as a fully black image.
How do I save the correct image as it was displayed?
import cv2
import numpy as np

def image_resize_add_padding(image, target_size):
    ih, iw = target_size
    h, w, _ = image.shape
    scale = min(iw/w, ih/h)
    nw, nh = int(scale * w), int(scale * h)
    image_resized = cv2.resize(image, (nw, nh))
    image_paded = np.full(shape=[ih, iw, 3], fill_value=128.0)
    dw, dh = (iw - nw) // 2, (ih - nh) // 2
    image_paded[dh:nh+dh, dw:nw+dw, :] = image_resized
    image_paded = image_paded / 255.
    return image_paded

input_size = 736
image_path = "test_image.jpg"
original_image = cv2.imread(image_path)
output_image = image_resize_add_padding(
    np.copy(original_image), [input_size, input_size])

cv2.imshow('image', output_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.imwrite('test_output.jpg', output_image)
The imshow function and imwrite with the JPG format handle floating-point image buffers differently.
The line where you divide the image by 255. changes the image data to floating point. For the data to be handled properly by the JPG writer, you can, for example, convert your buffer to uint8 (and make sure the values are in the range 0-255) before calling imwrite.
Edit:
The code converts the image to floating point and also changes the range to 0-1. It is unclear why this is done, but if you want to keep the function as is, you can prepare the image for the imwrite call like this:
output_image = (output_image * 255).astype('uint8')
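Applied to the question's script, the conversion goes right before the save call, for example:
output_image = (output_image * 255).astype('uint8')
cv2.imwrite('test_output.jpg', output_image)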

Resize Image using Opencv Python and preserving the quality

I would like to resize images using OpenCV python library.
It works but the quality of the image is pretty bad.
I must say, I would like to use these images for a photo sharing website, so the quality is a must.
Here is the code I have for the moment:
[...]
_image = image
height, width, channels = _image.shape
target_height = 1000
scale = height/target_height
_image = cv2.resize(image, (int(width/scale), int(height/scale)), interpolation = cv2.INTER_AREA)
cv2.imwrite(local_output_temp_file,image, (cv2.IMWRITE_JPEG_QUALITY, 100))
[...]
I don't know if there are other parameters to be used to specify the quality of the image.
Thanks.
You can try using imutils.resize to resize an image while maintaining aspect ratio. You can adjust based on desired width or height to upscale or downscale. Also when saving the image, you should use a lossless image format such as .tiff or .png. Here's a quick example:
Input image with shape 250x250
Downscaled image to 100x100
Reverted image back to 250x250
import cv2
import imutils

image = cv2.imread('1.png')
# Downscale to width 100; imutils.resize keeps the aspect ratio
resized = imutils.resize(image, width=100)
# Scale back up to the original 250-pixel width
revert = imutils.resize(resized, width=250)
cv2.imwrite('resized.png', resized)
cv2.imwrite('original.png', image)
cv2.imwrite('revert.png', revert)
cv2.waitKey()
Try to use more accurate interpolation techniques like cv2.INTER_CUBIC or cv2.INTER_LANCZOS4. Also try switching to scikit-image: the docs are better and the library is richer in features. It has 6 modes of interpolation to choose from (a short example follows the list):
0: Nearest-neighbor
1: Bi-linear (default)
2: Bi-quadratic
3: Bi-cubic
4: Bi-quartic
5: Bi-quintic
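
For reference, here is a minimal sketch of the scikit-image route, where order selects one of the modes listed above; the file names and target size are placeholders:
from skimage import io, transform

img = io.imread('input.png')
# order=3 selects bi-cubic interpolation; anti_aliasing smooths the image when downscaling
resized = transform.resize(img, (1000, 1000), order=3, anti_aliasing=True)
# transform.resize returns a float image in [0, 1], so convert back to uint8 before saving
io.imsave('resized.png', (resized * 255).astype('uint8'))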

How to increase size of an image while saving it in opencv python

I have a Python script which detects a person and a face in a frame. First it detects the person and saves its image after enlarging it, then it detects the face in that person image and saves the face image as well.
The original saved images of both the person and the face are very small, because I reduce the size of the frame initially with
frame = imutils.resize(frame, width=500)
so that I get good fps and low noise. This is why, while saving the images, I have to increase their width and height. Below is the code I am using and its results:
scale_percent = 220 # percent of original size
width = int(image.shape[1] * scale_percent / 100)
height = int(image.shape[0] * scale_percent / 100)
dim = (width, height)
resized_img = cv2.resize(image, dim, interpolation = cv2.INTER_AREA)
Test image I am using is :
Saved image of face after resize:
and saved image of person after resize:
Although the person image looks fine, the face image is very low in quality. Is there any way we can increase the size (width and height) of the image but still keep it at a good quality? Please help. Thanks
Try with INTER_CUBIC or INTER_LANCZOS4
resized_img = cv2.resize(image, dim, interpolation = cv2.INTER_CUBIC)
I have worked mostly on images, and in my experience:
INTER_NEAREST ~ INTER_AREA < INTER_CUBIC ~ INTER_LANCZOS4 ~ INTER_LINEAR
But still, there will be some amount of pixelation when you do any of these operations, because you are manipulating the original image data.

Python Resize image with ratio of maximum side of the image

I am very new to Python and this is going to be a very basic question. I have a website which is image based, and I am developing it using Django. Now I want to resize the images, or rather minimize their size. There are images of different sizes: some are largest in width, some are largest in height, and I want to resize them without changing their shape.
Here are some example dimensions of the images used on my website.
Here the first image is largest in width and the second image is largest in height, and they are really big in dimensions, so they need to be resized, or rather minimized in size. So I have used PIL as below.
from PIL import Image, ImageDraw, ImageFont, ImageEnhance

def image_resize(request, image_id):
    photo = Photo.objects.get(pk=image_id)
    img = Image.open(photo.photo.file)
    image = img.resize((1000, 560), Image.ANTIALIAS)
    image.save()
So this function resizes all the images to a width of 1000 and a height of 560. But I don't want to resize all the images to the same width and height; rather, I want to resize each image while maintaining its own shape, i.e. the aspect ratio stays the same but the image gets smaller. How can I do this? I am really new to Python.
Do you want all images to have the same width of 1000? Try this code. It will resize to a width of at most 1000 (if the image's width is less than 1000, nothing changes).
def image_resize(request, image_id):
    photo = Photo.objects.get(pk=image_id)
    image = Image.open(photo.photo.file)
    (w, h) = image.size
    if w > 1000:
        h = int(h * 1000. / w)
        w = 1000
    image = image.resize((w, h), Image.ANTIALIAS)
    image.save()
I recall doing this some time back without any problem, except that I used the thumbnail method rather than resize. Try it. You need not assign the result to a new variable: thumbnail modifies img in place, so you can process img and save it directly.
# open img
img.thumbnail((1000,560), Image.ANTIALIAS)
# save img
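
If the goal is to scale by whichever side is largest, regardless of orientation, here is a rough sketch of that idea (the 1000-pixel limit mirrors the answer above; the helper name is made up):
from PIL import Image

def resize_max_side(img, max_side=1000):
    # Scale so the longer side becomes max_side, keeping the aspect ratio
    w, h = img.size
    if max(w, h) <= max_side:
        return img
    scale = max_side / float(max(w, h))
    return img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
Note that img.thumbnail((1000, 1000)) achieves much the same thing in place, as the answer above points out.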

PIL Image.resize() not resizing the picture

I have a strange problem with PIL not resizing the image.
from math import floor
from PIL import Image

img = Image.open('foo.jpg')
width, height = img.size
ratio = floor(height / width)
newheight = ratio * 150
img.resize((150, newheight), Image.ANTIALIAS)
img.save('mugshotv2.jpg', format='JPEG')
This code runs without any errors and produces me image named mugshotv2.jpg in correct folder, but it does not resize it. It does something to it, because the size of the picture drops from 120 kb to 20 kb, but the dimensions remain the same.
Perhaps you can also suggest a way to crop images into squares with less code. I kind of thought Image.thumbnail would do it, but it scaled my image to 150 px by its width, leaving the height at 100 px.
resize() returns a resized copy of an image. It doesn't modify the original. The correct way to use it is:
from PIL import Image
#...
img = img.resize((150, newheight), Image.ANTIALIAS)
source
I think what you are looking for is the ImageOps.fit function. From the PIL docs:
ImageOps.fit(image, size, method, bleed, centering) => image
Returns a sized and cropped version of the image, cropped to the requested aspect ratio and size. The size argument is the requested output size in pixels, given as a (width, height) tuple.
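Since the question also asks about cropping to squares, here is a short sketch of ImageOps.fit for that case, using the newer Resampling constant mentioned in the update below; the file names are placeholders:
from PIL import Image, ImageOps

img = Image.open('foo.jpg')
# Resize and center-crop to a 150x150 square
square = ImageOps.fit(img, (150, 150), method=Image.Resampling.LANCZOS)
square.save('mugshot_square.jpg', format='JPEG')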
[Update]
ANTIALIAS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead of Image.ANTIALIAS in calls like image.resize((100, 100), Image.ANTIALIAS).
Today you should use something like this:
from PIL import Image
img = Image.open(r"C:\test.png")
img.show()
img_resized = img.resize((100, 100), Image.Resampling.LANCZOS)
img_resized.show()
