enlarge image (may be pixelated but not blurry) - python

I wrote a script that draws some images, usually not bigger than 50x50px. Then I want to display that image in a Tkinter window. But first I need to enlarge the image, because 30x30px is too small for a user to see every single pixel generated by my script. So I wrote this:
multiplier = 4
image = np.full((height * multiplier, width * multiplier, 3), 0, dtype=np.uint8)
for r in range(height):
    for c in range(width):
        for i in range(multiplier):
            for j in range(multiplier):
                image[r * multiplier + i][c * multiplier + j] = original[r][c]
P.S. original was initialized the same way as image
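As an aside, the whole quadruple loop can be replaced by one vectorized call; a minimal sketch, assuming original is the (height, width, 3) array described above:
import numpy as np
multiplier = 4
# Repeat every row, then every column, multiplier times: a nearest-neighbour blow-up
image = np.repeat(np.repeat(original, multiplier, axis=0), multiplier, axis=1)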
Also I tried:
resize((width * multiplier, height * multiplier), Image.ANTIALIAS)
but it's not an option, because it makes an image look blurry. So what would be the better solution?
Example image:

I would suggest resizing with Nearest Neighbour resampling so you don't introduce any new, blurred colours - just ones already existing in your image:
import numpy as np
from PIL import Image
im = Image.open("snake.png").convert('RGB')
im = im.resize((200,200),resample=Image.NEAREST)
im.save("result.png")
You can go from a Pillow image to a numpy array with:
numpy_array = np.array(pillowImage)
and from numpy array to a Pillow image with:
pillow_image = Image.fromarray(numpyArray)
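Putting the two conversions together, a minimal sketch of the whole enlarge step on a numpy array (the 30x30 array is a stand-in for whatever your script draws):
import numpy as np
from PIL import Image

original = np.zeros((30, 30, 3), dtype=np.uint8)  # stand-in for the generated image
multiplier = 4

im = Image.fromarray(original)
# Nearest-neighbour keeps every source pixel a crisp multiplier x multiplier block
im = im.resize((im.width * multiplier, im.height * multiplier), resample=Image.NEAREST)
enlarged = np.array(im)  # back to a numpy array, e.g. for further processing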

You could use OpenCV together with PIL.Image (note that cv2.resize needs the dsize argument, None here, when you pass scale factors):
from PIL import Image
import cv2
image = cv2.imread('input.png')  # hypothetical input path
image = cv2.resize(image, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; PIL expects RGB
image = Image.fromarray(image)
Here, fx and fy are the scale factors you have to set yourself (values above 1 enlarge the image). Hope this helps :)
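Note that INTER_AREA is intended for shrinking; for the pixelated enlargement asked about here, cv2.INTER_NEAREST with scale factors above 1 does the job. A sketch (file names are placeholders):
import cv2

img = cv2.imread('snake.png')
big = cv2.resize(img, None, fx=4, fy=4, interpolation=cv2.INTER_NEAREST)
cv2.imwrite('snake_big.png', big)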

You can use deep-learning image super-resolution models such as ISR or SR3 to enlarge images with good quality.
Here's an unofficial implementation of Image Super-Resolution via Iterative Refinement in PyTorch:
https://github.com/Janspiry/Image-Super-Resolution-via-Iterative-Refinement
The paper: https://iterative-refinement.github.io/
Here's a video explanation by Two Minute Papers: https://www.youtube.com/watch?v=WCAF3PNEc_c

Here's a solution using just PIL:
from PIL import Image
multiplier = 4
original = Image.open("snake.png")
width, height = original.size
resized = original.resize((width * multiplier, height * multiplier), resample=0)
resized.save("resized.png")
Using resample=0 is the key: 0 is the numeric value of Image.NEAREST, i.e. nearest-neighbour resampling.

Related

Using dicom Images with OpenCV in Python

I am trying to use a DICOM image and manipulate it using OpenCV in a Python environment. So far I have used the pydicom library to read the DICOM (.dcm) image data and the pixel_array attribute to display the picture with OpenCV's imshow method. But the output is just a blank window. Here is the snippet of code I am using at this moment.
import numpy as np
import cv2
import pydicom as dicom
ds=dicom.dcmread('sample.dcm')
cv2.imshow('sample image dicom',ds.pixel_array)
cv2.waitKey(0)
If I print out the array used here, the output is different from what I would get with a normal numpy array. I have tried matplotlib's imshow method as well, and it was able to display the image with some colour distortion. Is there a way to convert the array into a format OpenCV can display properly?
Faced a similar issue. Used exposure.equalize_adapthist() (source). The resulting image isn't a hundred percent identical to what you would see in a DICOM viewer, but it's the best I was able to get.
import numpy as np
import cv2
import pydicom as dicom
from skimage import exposure
ds=dicom.dcmread('sample.dcm')
dcm_sample=ds.pixel_array
dcm_sample=exposure.equalize_adapthist(dcm_sample)
cv2.imshow('sample image dicom',dcm_sample)
cv2.waitKey(0)
I have figured out a way to get the image to show. As Dan mentioned in the comments, the values in the matrix were scaled down, and due to the imshow function the output was too dark for the human eye to differentiate. So, in the end, the only thing I needed to do was multiply the entire mat data by 128. The image is showing perfectly now. Multiplying the matrix by 255 overexposes the picture and blows out certain features. Here is the revised code.
import numpy as np
import cv2
import pydicom as dicom
ds=dicom.dcmread('sample.dcm')
dcm_sample=ds.pixel_array*128
cv2.imshow('sample image dicom',dcm_sample)
cv2.waitKey(0)
I don't think that is a correct answer. It works for that particular image because most of its pixel values are in the lower range. Check this: OpenCV: How to visualize a depth image. It is for C++ but easily adapted to Python.
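Along those lines, a sketch that stretches whatever range the pixel data actually has onto 0..255, rather than hard-coding a multiplier:
import cv2
import numpy as np
import pydicom

ds = pydicom.dcmread('sample.dcm')
img = ds.pixel_array.astype(np.float32)
# Map the array's actual min..max onto 0..255 for display
img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imshow('sample image dicom', img)
cv2.waitKey(0)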
This is the best way (in my opinion) to open a DICOM image in OpenCV as a numpy array while preserving the image quality:
import numpy as np
import pydicom, os, cv2

def dicom_to_numpy(ds):
    DCM_Img = ds
    rows = DCM_Img.get(0x00280010).value  # Get number of rows from tag (0028, 0010)
    cols = DCM_Img.get(0x00280011).value  # Get number of cols from tag (0028, 0011)
    Instance_Number = int(DCM_Img.get(0x00200013).value)  # Get actual slice instance number from tag (0020, 0013)
    Window_Center = int(DCM_Img.get(0x00281050).value)  # Get window center from tag (0028, 1050)
    Window_Width = int(DCM_Img.get(0x00281051).value)  # Get window width from tag (0028, 1051)
    Window_Max = int(Window_Center + Window_Width / 2)
    Window_Min = int(Window_Center - Window_Width / 2)
    if DCM_Img.get(0x00281052) is None:
        Rescale_Intercept = 0
    else:
        Rescale_Intercept = int(DCM_Img.get(0x00281052).value)
    if DCM_Img.get(0x00281053) is None:
        Rescale_Slope = 1
    else:
        Rescale_Slope = int(DCM_Img.get(0x00281053).value)
    New_Img = np.zeros((rows, cols), np.uint8)
    Pixels = DCM_Img.pixel_array
    for i in range(0, rows):
        for j in range(0, cols):
            Pix_Val = Pixels[i][j]
            Rescale_Pix_Val = Pix_Val * Rescale_Slope + Rescale_Intercept
            if Rescale_Pix_Val > Window_Max:  # if intensity is greater than max window
                New_Img[i][j] = 255
            elif Rescale_Pix_Val < Window_Min:  # if intensity is less than min window
                New_Img[i][j] = 0
            else:
                New_Img[i][j] = int(((Rescale_Pix_Val - Window_Min) / (Window_Max - Window_Min)) * 255)  # normalize the intensities into 0..255
    return New_Img

file_path = "C:/example.dcm"
image = pydicom.dcmread(file_path)  # read_file is deprecated in newer pydicom
image = dicom_to_numpy(image)

# show image
cv2.imshow('sample image dicom', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
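The per-pixel loop above gets slow on large slices; the same windowing math can be vectorized. A sketch (window_to_uint8 is my own helper name, not part of pydicom):
import numpy as np

def window_to_uint8(pixels, center, width, slope=1, intercept=0):
    # Apply the DICOM rescale, then map the window onto 0..255
    lo, hi = center - width / 2, center + width / 2
    vals = pixels.astype(np.float64) * slope + intercept
    vals = np.clip((vals - lo) / (hi - lo), 0.0, 1.0)
    return (vals * 255).astype(np.uint8)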

Quality loss after combining two images with PIL and numpy

I'm using PIL and numpy to combine two images: one is a .jpg and the other is represented by a numpy array that defines a mask I want to put on top of the original image (basically just a matrix with one and zero entries of the same size as the .jpg). PIL's composite function works just fine for that, but for some reason, after saving the composite image, the file size shrinks to approximately 1/3 of the original image size. Can someone explain this behavior to me?
Here's a code snippet:
import numpy as np
import PIL
from PIL import Image
from PIL import ImageColor
rgb = ImageColor.getrgb('black')
# Read image and write into numpy array
image = Image.open('test_image.jpg')
(im_width, im_height) = image.size
# Create empty mask
mask = np.zeros((im_width, im_height))
# Composite image and mask
solid_color = np.expand_dims(np.ones_like(mask), axis=2) * np.reshape(list(rgb), [1, 1, 3])
pil_solid_color = Image.fromarray(np.uint8(solid_color)).convert('RGBA')
pil_mask = Image.fromarray(np.uint8(255.*mask)).convert('L')
image = Image.composite(pil_solid_color, image, pil_mask)
# save image
image.save('test_image_with_mask.jpg')
Code was inspired by tensorflow's object detection API. Thanks in advance.
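For what it's worth, the shrink is most likely plain JPEG re-encoding: Pillow saves JPEGs at quality 75 by default, usually lower than the source file's setting, so every save-out loses size and detail. Passing an explicit quality keeps the result closer to the original:
# Pillow's default JPEG quality is 75; ask for more explicitly
image.save('test_image_with_mask.jpg', quality=95)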

Fill image up to a size with Python

I am writing a handwriting recognition app and my inputs have to be of a certain size (128x128). When I detect a letter it looks like this:
That image, for instance, has a size of 40x53. I want to make it 128x128, but simply resizing it lowers the quality, especially for smaller images. I want to somehow fill the rest up to 128x128, with the 40x53 image in the middle. The background color should also stay relatively the same. I am using OpenCV in Python, but I am new to it. How can I do this, and is it even possible?
Here you can get what you asked for using outputImage below. Basically, I have added a border using the copyMakeBorder method (see the OpenCV docs for more details). You have to set the color value you want in the value parameter; for now it is white, [255, 255, 255].
But I would rather suggest resizing the original image; that seems like a better option than what you asked for. To get the image resized, use resized in the following code. For your convenience I have added both methods in this code.
import cv2
import numpy as np
inputImage = cv2.imread('input.jpg', 1)
# 53 + 37 + 38 = 128 rows and 40 + 44 + 44 = 128 cols: pads the 40x53 letter to 128x128
outputImage = cv2.copyMakeBorder(inputImage, 37, 38, 44, 44, cv2.BORDER_CONSTANT, value=[255, 255, 255])
resized = cv2.resize(inputImage, (128, 128), interpolation=cv2.INTER_AREA)
cv2.imwrite('output.jpg', outputImage)
cv2.imwrite('resized.jpg', resized)
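The border sizes above are hard-coded for a 40x53 input; a sketch that computes them for any image that already fits inside the target square:
import cv2

img = cv2.imread('input.jpg')  # placeholder path
target = 128
# Split the leftover space evenly between the two sides
top = (target - img.shape[0]) // 2
bottom = target - img.shape[0] - top
left = (target - img.shape[1]) // 2
right = target - img.shape[1] - left
padded = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=[255, 255, 255])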
I believe you want to scale your image.
This code might help:
import cv2
img = cv2.imread('name_of_image', cv2.IMREAD_UNCHANGED)
# Get original size of image
print('Original Dimensions: ',img.shape)
# Percentage of the original size
scale_percent = 220
width = int(img.shape[1] * scale_percent / 100)
height = int(img.shape[0] * scale_percent / 100)
dim = (width, height)
# Resize/scale the image (INTER_AREA suits shrinking; INTER_CUBIC or INTER_LINEAR are the usual picks for enlarging)
resized = cv2.resize(img, dim, interpolation = cv2.INTER_AREA)
# The new size of the image
print('Resized Dimensions: ',resized.shape)
cv2.imshow("Resized image", resized)
cv2.waitKey(0)
cv2.destroyAllWindows()

Resize images to very small n x n dimensions

I am resizing my images to 50 x 50 in Python, but neither skimage.transform nor PIL's thumbnail gives me what I want.
What's the other way to do it?
I have tried:
For PIL thumbnail,
im.thumbnail((50,50), Image.ANTIALIAS)
This gives me a (42, 50) image, not a (50, 50) image, because thumbnail preserves the aspect ratio.
For skimage.transform,
image = skimage.transform.resize(image, (50, 50))
It returns a completely distorted image.
Use im.resize((50,50), Image.ANTIALIAS)
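A sketch contrasting the two calls (placeholder path; in newer Pillow, ANTIALIAS is spelled Image.Resampling.LANCZOS, as noted further down):
from PIL import Image

im = Image.open('input.jpg')
thumb = im.copy()
thumb.thumbnail((50, 50))  # in-place, keeps aspect ratio: may give e.g. 42x50
exact = im.resize((50, 50), Image.Resampling.LANCZOS)  # always exactly 50x50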
To resize to a fixed size while maintaining the aspect ratio and cropping to fit, use PIL.ImageOps.fit(image, size):
import PIL.ImageOps
import PIL.Image
impath = '1-True Mountain Covered with Cloud.jpg'
im = PIL.Image.open(impath)
display(im)  # display() is available in Jupyter/IPython; use im.show() elsewhere
imfit = PIL.ImageOps.fit(im, (64, 64))
display(imfit)

PIL Image.resize() not resizing the picture

I have a strange problem with PIL not resizing the image.
from math import floor
from PIL import Image
img = Image.open('foo.jpg')
width, height = img.size
ratio = floor(height / width)
newheight = ratio * 150
img.resize((150, newheight), Image.ANTIALIAS)
img.save('mugshotv2.jpg', format='JPEG')
This code runs without any errors and produces an image named mugshotv2.jpg in the correct folder, but it does not resize it. It does something to it, because the file size drops from 120 kB to 20 kB, but the dimensions remain the same.
Perhaps you can also suggest a way to crop images into squares with less code. I kind of thought Image.thumbnail does it, but what it did was scale my image to 150 px by its width, leaving the height at 100 px.
resize() returns a resized copy of an image. It doesn't modify the original. The correct way to use it is:
from PIL import Image
#...
img = img.resize((150, newheight), Image.ANTIALIAS)
I think what you are looking for is the ImageOps.fit function. From the PIL docs:
ImageOps.fit(image, size, method, bleed, centering) => image
Returns a sized and cropped version of the image, cropped to the requested aspect ratio and size. The size argument is the requested output size in pixels, given as a (width, height) tuple.
[Update]
ANTIALIAS is deprecated and will be removed in Pillow 10 (2023-07-01); a call like image.resize((100,100), Image.ANTIALIAS) now raises a DeprecationWarning telling you to use Resampling.LANCZOS instead.
Today you should use something like this:
from PIL import Image
img = Image.open(r"C:\test.png")
img.show()
img_resized = img.resize((100, 100), Image.Resampling.LANCZOS)
img_resized.show()
