Resize images to very small n x n dimensions - python

I am resizing my images to 50 x 50 in Python. Skimage transform and PIL thumbnail both resize the image while preserving the aspect ratio.
What's another way to do it?
I have tried:
For PIL thumbnail:
im.thumbnail((50, 50), Image.ANTIALIAS)
This gives me a (42, 50) image, not a (50, 50) image.
For skimage.transform
image = skimage.transform.resize(image, (50, 50))
It returns a completely distorted image.

Use im.resize((50, 50), Image.ANTIALIAS), which resizes to the exact dimensions without preserving the aspect ratio.
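A minimal sketch (assuming a hypothetical input.jpg): resize() with an explicit (50, 50) target ignores the aspect ratio, unlike thumbnail():
from PIL import Image

im = Image.open("input.jpg")
# resize() returns a new image at exactly the requested size,
# distorting the aspect ratio if it differs from the original.
im_small = im.resize((50, 50), Image.ANTIALIAS)  # Image.Resampling.LANCZOS in newer Pillow
im_small.save("input_50x50.jpg")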

To resize to a fixed size while maintaining the aspect ratio and cropping to fit, use PIL.ImageOps.fit(image, size):
import PIL.ImageOps
import PIL.Image
impath = '1-True Mountain Covered with Cloud.jpg'
im = PIL.Image.open(impath)
display(im)  # display() is available in IPython/Jupyter
imfit = PIL.ImageOps.fit(im, (64, 64))
display(imfit)

Related

Why does loading a 2D array with PIL.Image.fromarray mode="L" change the image?

Why does this code plot different images?
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
x = (np.random.random((32,32))*255).astype(np.int16)
img1 = Image.fromarray(x, mode='L')
img2 = Image.fromarray(x)
plt.imshow(img1, cmap='gray')
plt.imshow(img2, cmap='gray')
see images:
PIL requires L mode images to be 8-bit (see the Pillow documentation on image modes). So if you pass in your 16-bit array, where every high byte is zero, every second pixel will be black.
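A minimal sketch of the fix (assuming the same random array): cast to uint8 so the bytes match what mode 'L' expects, and the two images come out identical.
import numpy as np
from PIL import Image

x = (np.random.random((32, 32)) * 255).astype(np.uint8)  # 8-bit, as mode 'L' expects
img1 = Image.fromarray(x, mode='L')
img2 = Image.fromarray(x)                # mode inferred as 'L' from the uint8 dtype
print(img1.tobytes() == img2.tobytes())  # True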

Resize Image using Opencv Python and preserving the quality

I would like to resize images using OpenCV python library.
It works but the quality of the image is pretty bad.
I must say, I would like to use these images for a photo sharing website, so the quality is a must.
Here is the code I have for the moment:
[...]
_image = image
height, width, channels = _image.shape
target_height = 1000
scale = height/target_height
_image = cv2.resize(image, (int(width/scale), int(height/scale)), interpolation = cv2.INTER_AREA)
cv2.imwrite(local_output_temp_file,image, (cv2.IMWRITE_JPEG_QUALITY, 100))
[...]
I don't know if there are other parameters I should use to specify the quality of the image.
Thanks.
You can try using imutils.resize to resize an image while maintaining aspect ratio. You can adjust based on desired width or height to upscale or downscale. Also when saving the image, you should use a lossless image format such as .tiff or .png. Here's a quick example:
Input image with shape 250x250
Downscaled image to 100x100
Reverted image back to 250x250
import cv2
import imutils

image = cv2.imread('1.png')                 # original 250x250 image
resized = imutils.resize(image, width=100)  # downscale, keeping aspect ratio
revert = imutils.resize(resized, width=250) # upscale back to 250 wide

cv2.imwrite('resized.png', resized)
cv2.imwrite('original.png', image)
cv2.imwrite('revert.png', revert)
Try more accurate interpolation techniques like cv2.INTER_CUBIC or cv2.INTER_LANCZOS4. You could also switch to scikit-image: the docs are better and the library is richer in features. It has 6 interpolation orders to choose from (a sketch follows the list):
0: Nearest-neighbor
1: Bi-linear (default)
2: Bi-quadratic
3: Bi-cubic
4: Bi-quartic
5: Bi-quintic
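A minimal sketch of both suggestions (file names are placeholders, not from the question): cv2.resize with INTER_LANCZOS4, and skimage.transform.resize with a bi-cubic order.
import cv2
from skimage import io, transform, img_as_ubyte

# OpenCV: Lanczos interpolation, saved to a lossless format
img = cv2.imread('input.jpg')
half = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_LANCZOS4)
cv2.imwrite('half_lanczos.png', half)

# scikit-image: order=3 selects bi-cubic interpolation
sk_img = io.imread('input.jpg')
sk_half = transform.resize(sk_img, (sk_img.shape[0] // 2, sk_img.shape[1] // 2),
                           order=3, anti_aliasing=True)
io.imsave('half_bicubic.png', img_as_ubyte(sk_half))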

enlarge image (may be pixelated but not blurry)

I wrote a script that draws some images, usually not bigger than 50x50px. Then I want to display that image in a Tkinter window. But first I need to enlarge the image, because 30x30px is too small for a user to see every single pixel generated by my script. So I wrote this:
multiplier = 4
image = np.full((height * multiplier, width * multiplier, 3), 0, dtype=np.uint8)
for r in range(height):
for c in range(width):
for i in range(multiplier):
for j in range(multiplier):
image[r * multiplier + i][c * multiplier + j] = original[r][c]
P.S. original was initialized the same way as image
Also I tried:
resize((width * multiplier, height * multiplier), Image.ANTIALIAS)
but it's not an option, because it makes the image look blurry. So what would be a better solution?
Example image:
I would suggest resizing with Nearest Neighbour resampling so you don't introduce any new, blurred colours - just ones already existing in your image:
import numpy as np
from PIL import Image
im = Image.open("snake.png").convert('RGB')
im = im.resize((200,200),resample=Image.NEAREST)
im.save("result.png")
You can go from a Pillow image to a numpy array with:
numpy_array = np.array(pillowImage)
and from numpy array to a Pillow image with:
pillow_image = Image.fromarray(numpyArray)
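Since the goal is to show the result in a Tkinter window, here is a minimal sketch (assuming the same snake.png and the 4x nearest-neighbour upscale as above) using ImageTk to display the enlarged image:
import tkinter as tk
from PIL import Image, ImageTk

im = Image.open("snake.png").convert('RGB')
im = im.resize((im.width * 4, im.height * 4), resample=Image.NEAREST)  # pixelated, not blurry

root = tk.Tk()
photo = ImageTk.PhotoImage(im)          # keep a reference so it isn't garbage-collected
tk.Label(root, image=photo).pack()
root.mainloop()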
You could use the PIL.Image/OpenCV and PIL.ImageFilter modules:
from PIL import Image, ImageFilter  # ImageFilter is available for further post-processing
import cv2

# image: a numpy array, e.g. from cv2.imread() (note OpenCV uses BGR channel order)
image = cv2.resize(image, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
image = Image.fromarray(image)
Here, fx and fy are the scale factors you have to set yourself. Hope this helps :)
You can use deep-learning image super-resolution (ISR, or SR3) to enlarge images with good quality.
Here's an unofficial PyTorch implementation of Image Super-Resolution via Iterative Refinement:
https://github.com/Janspiry/Image-Super-Resolution-via-Iterative-Refinement
The paper: https://iterative-refinement.github.io/
Here's a video explanation by Two Minute Papers: https://www.youtube.com/watch?v=WCAF3PNEc_c
Here's a solution using just PIL:
from PIL import Image
multiplier = 4
original = Image.open("snake.png")
width, height = original.size
resized = original.resize((width * multiplier, height * multiplier), resample=0)
resized.save("resized.png")
Using resample=0 (nearest neighbour, i.e. Image.NEAREST) is the key.

crop image in skimage?

I'm using skimage to crop a rectangle in a given image. I have (x1, y1, x2, y2) as the rectangle coordinates, and I load the image with
image = skimage.io.imread(filename)
cropped = image(x1,y1,x2,y2)
However, this is the wrong way to crop the image. What is the right way to do it in skimage?
This is a simple syntax error. In MATLAB you can use parentheses to extract a pixel or an image region, but in Python a numpy.ndarray is sliced with square brackets and the : operator. Keep in mind that the first axis of the array is the row (y) and the second is the column (x).
Thus,
from skimage import io
image = io.imread(filename)
cropped = image[y1:y2, x1:x2]
One could use skimage.util.crop() function too, as shown in the following code:
import numpy as np
from skimage.io import imread
from skimage.util import crop
import matplotlib.pylab as plt
A = imread('lena.jpg')
# crop_width{sequence, int}: Number of values to remove from the edges of each axis.
# ((before_1, after_1), … (before_N, after_N)) specifies unique crop widths at the
# start and end of each axis. ((before, after),) specifies a fixed start and end
# crop for every axis. (n,) or n for integer n is a shortcut for before = after = n
# for all axes.
B = crop(A, ((50, 100), (50, 50), (0,0)), copy=False)
print(A.shape, B.shape)
# (220, 220, 3) (70, 120, 3)
plt.figure(figsize=(20,10))
plt.subplot(121), plt.imshow(A), plt.axis('off')
plt.subplot(122), plt.imshow(B), plt.axis('off')
plt.show()
with the following output (with original and cropped image):
You can crop an image with skimage just by slicing the image array, like below:
image = image_name[y1:y2, x1:x2]
Example code:
from skimage import io
import matplotlib.pyplot as plt
image = io.imread(image_path)
cropped_image = image[y1:y2, x1:x2]
plt.imshow(cropped_image)
You can also use the Image module of the PIL library:
from PIL import Image
im = Image.open("image.png")
im = im.crop((0, 50, 777, 686))
im.show()

PIL Image.resize() not resizing the picture

I have some strange problem with PIL not resizing the image.
from math import floor
from PIL import Image

img = Image.open('foo.jpg')
width, height = img.size
ratio = floor(height / width)
newheight = ratio * 150
img.resize((150, newheight), Image.ANTIALIAS)
img.save('mugshotv2.jpg', format='JPEG')
This code runs without any errors and produces me image named mugshotv2.jpg in correct folder, but it does not resize it. It does something to it, because the size of the picture drops from 120 kb to 20 kb, but the dimensions remain the same.
Perhaps you can also suggest way to crop images into squares with less code. I kinda thought that Image.thumbnail does it, but what it did was that it scaled my image to 150 px by its width, leaving height 100px.
resize() returns a resized copy of an image. It doesn't modify the original. The correct way to use it is:
from PIL import Image
#...
img = img.resize((150, newheight), Image.ANTIALIAS)
I think what you are looking for is the ImageOps.fit function. From the PIL docs:
ImageOps.fit(image, size, method, bleed, centering) => image
Returns a sized and cropped version of the image, cropped to the requested aspect ratio and size. The size argument is the requested output size in pixels, given as a (width, height) tuple.
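A minimal sketch of the square-crop suggestion (assuming the same foo.jpg as in the question): ImageOps.fit centre-crops and resizes in one call.
from PIL import Image, ImageOps

img = Image.open('foo.jpg')
# crop and resize to an exact 150x150 square, discarding the overflow along the longer side
square = ImageOps.fit(img, (150, 150), method=Image.LANCZOS)
square.save('mugshot_square.jpg', format='JPEG')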
[Update]
ANTIALIAS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead of Image.ANTIALIAS, e.g. image.resize((100, 100), Image.Resampling.LANCZOS).
Today you should use something like this:
from PIL import Image
img = Image.open(r"C:\test.png")
img.show()
img_resized = img.resize((100, 100), Image.Resampling.LANCZOS)
img_resized.show()
