I want to manipulate some images with Python for a little quiz game in which the player has to guess the image. An image reduced to big pixel blocks would work well. I want a result similar to this: https://www.ixxiyourworld.com/media/2387631/ixsp110-van-gogh-petrol-pixel-03.jpg
Let's try PIL: first downscale the image massively to a given kernel size, then upscale back to the original size with NEAREST:
from PIL import Image
import numpy as np

img = Image.open("van_gogh.jpg", mode='r')

factor = 100
# PIL's resize expects (width, height)
kernel = (img.width//factor, img.height//factor)
pixelated = img.resize(kernel, resample=Image.BICUBIC)  # downsample
pixelated = pixelated.resize(img.size, Image.NEAREST)   # upsample

# Draw white grid lines between the blocks
grid_color = [255, 255, 255]
dx, dy = factor, factor
g = np.asarray(pixelated).copy()
g[:, ::dy, :] = grid_color
g[::dx, :, :] = grid_color
pixelated2 = Image.fromarray(g)
pixelated2
Increasing the factor here pixelates the image further:
factor = 100
Related
How to downscale a tiff image of 10m resolution and create a new image of 50m where each pixel is stats from the first image?
The initial tiff image is a binary classification map, meaning each pixel (10m) belongs either to class "water" (value = 0) or class "ice" (value = 1).
I would like to create a new image where each pixel is the percentage of water in a 5x5 block of the initial map, meaning each pixel of the new image has a 50 m resolution and represents the ratio or percentage of "water" pixels in the corresponding 5x5 block of the former map. You can see the example here: Example
Here is an image sample (can be downloaded from google drive):
https://drive.google.com/uc?export=download&id=19hWQODERRsvoESiUZuL0GQHg4Mz4RbXj
Your image is saved in a rather odd format, using a 32-bit float to represent just two classes of data which could be represented in a single bit, so I converted it to PNG with ImageMagick using:
magick YOURIMAGE.TIF -alpha off image.png
Many Python libraries will stumble on your actual TIFF, so maybe think about writing it in a different way.
Once that is done, the code might look something like this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Set size of tiles for subsampling
tileX = tileY = 5
# Open image and convert to greyscale and thence to Numpy array
im = Image.open('image.png').convert('L')
na = np.array(im)
# Round height and width down to next lower multiple of tile sizes
h = (na.shape[0] // tileY) * tileY
w = (na.shape[1] // tileX) * tileX
# Create empty output array to fill
res = np.empty((h//tileY,w//tileX), np.uint8)
pxPerTile = tileX * tileY
for yoffset in range(0, h, tileY):
    for xoffset in range(0, w, tileX):
        # Count ice pixels in this 5x5 tile
        nonZero = np.count_nonzero(na[yoffset:yoffset+tileY, xoffset:xoffset+tileX])
        # Percentage of water (zero-valued) pixels in the tile
        percent = int((100.0 * (pxPerTile - nonZero)) / pxPerTile)
        res[yoffset//tileY, xoffset//tileX] = percent
# Make Numpy array back into PIL Image and save
Image.fromarray(res.astype(np.uint8)).save('result.png')
On reflection, you can probably do it faster and more simply with cv2.resize(), a scale factor of 0.2 on both axes, and interpolation cv2.INTER_AREA.
I did a version in pyvips:
#!/usr/bin/python3
import sys
import pyvips
image = pyvips.Image.new_from_file(sys.argv[1])
# label (0 == water, 1 == ice) is in the first band
label = image[0]
# average 5x5 areas
label = label.shrink(5, 5)
# turn into a percentage of water
water_percent = 100 * (1 - label)
# ... and save
water_percent.write_to_file(sys.argv[2])
I can run it on your test image like this:
$ ./average.py ~/pics/meltPondClassifiedAndS12.tif x.png
To make this (rather dark) output:
I have a set of very low-resolution pictures (in .png, but I can easily convert them to something else). They contain only black or white pixels, like QR codes.
What I want is to read them as a binary matrix (a 1 for a black pixel and a 0 for a white one).
I don't need anything fancier than that; what should I use?
Hi, you can use PIL to read the image and then NumPy to convert it to a matrix:
from PIL import Image
import numpy as np
im = Image.open("imageName.ext")
im_mat = np.asarray(im)
Alternatively you can do it all in one step with OpenCV:
import cv2
img = cv2.imread("imageName.ext")
In both cases you will have a matrix of size HxWxC, with H the height in pixels, W the width, and C the number of channels (3 or 4, depending on whether there's an alpha channel for transparency).
If your image is black and white and you only want a matrix of size HxW, take one channel with
img = im_mat[:,:,0] #8-bit matrix
and finally you can binarize it by comparing against a threshold:
binary = img > 128
or
binary = img == 255
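Putting the steps together, a minimal end-to-end sketch (the tiny in-memory image stands in for your real PNG):

```python
import numpy as np
from PIL import Image

# Tiny 2x2 black-and-white image built in memory, standing in for a real PNG
im = Image.fromarray(np.array([[0, 255], [255, 0]], np.uint8), mode="L")

# 1 for black, 0 for white
matrix = (np.asarray(im) < 128).astype(np.uint8)
print(matrix)  # diagonal of ones
```

For a real file you would replace the `Image.fromarray` line with `Image.open("yourfile.png").convert("L")`.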
I'm taking a look at the Python Imaging Library and trying to resize some pixel art with it, but whenever I resize it, it gets distorted and weird. Is there some way of keeping the pixels' proportions?
Here you can see an example of what I want to do:
original image
resized image
But with PIL it gets like this:
image resized with PIL
Here's the code that's being used:
from PIL import Image
original = Image.open("original.png")
resized = original.resize((1024,1024))
resized.save("resized.png")
Thanks in advance.
You will likely want to maintain the aspect ratio using the width and height of the original image. Pick a fixed value for the new width and divide it by the original width to get a multiplier (you could also apply a multiplier to both the width and height directly if you don't have a fixed width or height in mind). Multiply the original height by that multiplier and you have the new height. For example:
from PIL import Image
original = Image.open("original.png")
new_width = 1024
multiplier = new_width / original.width
# resize() needs integers, so round the computed height
new_height = round(original.height * multiplier)
resized = original.resize((new_width, new_height))
resized.save("resized.png")
Do note that changing the size of a raster image significantly will affect its quality.
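For pixel art specifically, the blurriness comes from the resampling filter: passing resample=Image.NEAREST duplicates pixels instead of interpolating, which keeps the blocky look. A small sketch, using a tiny in-memory checkerboard as stand-in art:

```python
from PIL import Image

# 2x2 checkerboard standing in for real pixel art
art = Image.new("L", (2, 2))
art.putdata([0, 255, 255, 0])

# NEAREST copies source pixels instead of blending them, so edges stay crisp
big = art.resize((8, 8), resample=Image.NEAREST)
print(big.getpixel((0, 0)), big.getpixel((4, 0)))  # 0 255
```

Each source pixel becomes an exact 4x4 block, with no intermediate gray values introduced.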
I'm generating an image that contains all the possible color values of a certain bit depth on RGB (the same value on all 3 channels, so it looks grayscale), creating an easy-to-read pattern. This code might be useful (it generates a uint16 NumPy array):
import cv2
import numpy as np
w = 1824
h= 948
bits = 16
max_color = pow(2,bits)
hbar_size = round((w*h)/(max_color))
num_bars = round(h/hbar_size)
color = 0
image_data = np.zeros((h, w, 3)).astype(np.uint16)
for x in range(0, num_bars+1):
    if (x+1)*hbar_size < h:
        for j in range(0, w):
            color += 1
            for i in range(x*hbar_size, (x+1)*hbar_size):
                image_data[i, j] = color
    else:
        for j in range(0, w):
            color += 1
            for i in range(x*hbar_size, h):
                # clamp to the largest representable uint16 value
                image_data[i, j] = min(color, max_color - 1)
The problem is:
When I save it using cv2.imwrite('allValues.png', image_data) I can see the image, which seems to be right, BUT it is apparently saved at 8-bit depth (when I read it with img = cv2.imread('allValues.png') I get a uint8 NumPy array).
The question is:
Is there an appropriate way to write/read 16-bit RGB images with OpenCV for Python?
Is the PNG format the problem?
It is saving in 16-bit, but cv2.imread converts it to 8-bit automatically when loading so that you can view it on your screen. You can bypass that behaviour by using
im = cv2.imread('allValues.png', -1)  # -1 == cv2.IMREAD_UNCHANGED
I'm trying to find a good package or algorithm to modify an image so that its center is pushed outwards, to mimic macular degeneration. The best method I found was the image_slicer package: split the image into 4 pieces, push the inner corners, and stitch the images back together. But the join method of the package is not working and its documentation is unclear. Does anyone have a package that can do this?
Also, I am trying to push the outside of an image in, to create tunnel vision.
(For both of these I am still trying to preserve the image; skewed is fine, but I am trying to prevent image loss.)
some code I wrote
import image_slicer
#split image into 4 pieces
image_slicer.slice('piegraph.jpeg',4) #just a simple sample img
#code to resize corners
#I can figure this out later.
#stitch images back
tiles = ("pie_01_01.png","pie_01_02.png","pie_02_01.png","pie_02_02.png")
image_slicer.join(tiles)
You can use OpenCV and NumPy to do what you want.
If I understand correctly, what you need is a mapping that takes the pixels of the original image and maps them as a function of their distance from the center of the image.
All the pixels inside the "black hole" should be black, and all the others should be bunched together.
So if we take the original image to be:
The result you are looking for is something like:
The following code does this. The parameters you need to play with are:
RBlackHole - the radius of your black hole.
FACTOR - controls the amount of "bunching": too small and all the pixels will be mapped to black as well; too large and they will not be bunched.
import cv2
import numpy as np
import math
# Read img
img = cv2.imread('earth.jpg')
rows,cols,ch = img.shape
# Params
FACTOR = 75
RBlackHole = 10
# Create a 2d mapping between the image and a new warp
smallSize = min(rows,cols)
xMap = np.zeros((rows,cols), np.float32)
yMap = np.zeros_like(xMap)
for i in range(rows):
    for j in range(cols):
        # Distance of the current pixel from the center of the image
        r = math.sqrt((i-rows/2)*(i-rows/2) + (j-cols/2)*(j-cols/2))
        # Pixels within the black hole radius are mapped to a
        # location outside of the image, so remap renders them black.
        if r <= RBlackHole:
            xMap[i, j] = rows*cols
            yMap[i, j] = rows*cols
        else:
            # Map the remaining pixels as a function of their distance
            # from the center: the further out they are, the more bunched.
            xMap[i, j] = (r-RBlackHole)*(j - cols/2)/FACTOR + cols/2
            yMap[i, j] = (r-RBlackHole)*(i - rows/2)/FACTOR + rows/2
# Apply the remapping
dstImg = cv2.remap(img, xMap, yMap, cv2.INTER_CUBIC)
# Save output image
cv2.imwrite("blackHoleWorld.jpg", dstImg)
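The double loop gets slow on large images. The same maps can be built without Python loops using NumPy broadcasting (a sketch using the same formulas; the 200x300 size is arbitrary):

```python
import numpy as np

rows, cols = 200, 300
FACTOR, RBlackHole = 75, 10

# Row and column index grids covering the whole image plane
i, j = np.indices((rows, cols), dtype=np.float32)
r = np.sqrt((i - rows/2)**2 + (j - cols/2)**2)

# Inside the black hole: map out of bounds; outside: bunch toward the center
xMap = np.where(r <= RBlackHole, rows*cols,
                (r - RBlackHole)*(j - cols/2)/FACTOR + cols/2).astype(np.float32)
yMap = np.where(r <= RBlackHole, rows*cols,
                (r - RBlackHole)*(i - rows/2)/FACTOR + rows/2).astype(np.float32)
```

These xMap/yMap arrays can be passed to cv2.remap exactly as in the loop version.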