I have image-like data, and I want to perform image cropping, squishing and zooming on one or both axes. The problem is that the data is not between 0-255, and normalizing it to 0-255 would mean losing a lot of the information I want to preserve. So unfortunately I can't use PIL or cv2. Is there an easy way to do it with numpy or scipy?
Thanks for the help
You can crop pictures with simple indexing, like
picture[100:300, 400:800]
To squish or zoom (is that anything more than resizing?), you can just resize with skimage:
from skimage import data, color
from skimage.transform import resize
image = color.rgb2gray(data.astronaut())
image_resized = resize(image, (image.shape[0] // 4, image.shape[1] // 4),
anti_aliasing=True)
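If you'd rather stay in plain NumPy (as the question asks), a minimal nearest-neighbour resize can be sketched with fancy indexing. This is just an illustrative sketch, not a library function — values are copied rather than rescaled, so the original dtype and data range are preserved:

```python
import numpy as np

def zoom_nn(a, fy, fx):
    # nearest-neighbour resize: factor > 1 zooms, factor < 1 squishes;
    # each axis gets its own factor, so you can stretch just one of them
    rows = (np.arange(int(a.shape[0] * fy)) / fy).astype(int)
    cols = (np.arange(int(a.shape[1] * fx)) / fx).astype(int)
    return a[np.ix_(rows, cols)]

a = np.array([[1, 2],
              [3, 4]])
b = zoom_nn(a, 2, 2)   # 4x4: each value repeated in a 2x2 block
c = zoom_nn(a, 1, 2)   # stretch only the second axis
```

For smoother (interpolated) results you could reach for scipy.ndimage.zoom instead, but the indexing trick above has no dependencies beyond NumPy.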
Check the ImageChops module of the Pillow library.
I am trying to de-noise an image that I've made in order to read the numbers on it using Tesseract.
Noisy image.
Is there any way to do so?
I am kind of new to image manipulation.
from PIL import Image, ImageFilter
im = Image.open("noisy.png")  # the noisy input image
im1 = im.filter(ImageFilter.BLUR)
im2 = im.filter(ImageFilter.MinFilter(3))
im3 = im.filter(ImageFilter.MinFilter)  # passing the class uses the default size (3)
The Pillow library provides the ImageFilter module that can be used to enhance images. Per the documentation:
The ImageFilter module contains definitions for a pre-defined set of filters, which can be used with the Image.filter() method.
These filters work by passing a window or kernel over the image, and computing some function of the pixels in that box to modify the pixels (usually the central pixel).
The MedianFilter seems to be widely used and resembles the description given in nishthaneeraj's answer.
You should read the Pillow documentation.
Pillow documentation link:
https://pillow.readthedocs.io/en/stable/
Pillow ImageFilter module:
https://pillow.readthedocs.io/en/stable/reference/ImageFilter.html#module-PIL.ImageFilter
How do you remove noise from an image in Python?
The mean filter is used to blur an image in order to remove noise. It involves determining the mean of the pixel values within an n x n kernel. The pixel intensity of the center element is then replaced by the mean. This eliminates some of the noise in the image and smooths its edges.
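The steps just described can be sketched in plain NumPy. This is a toy 3x3 mean filter with edge padding; `mean3x3` is an illustrative name, not a library function (in practice you would use something like Pillow's BoxBlur or a scipy.ndimage filter):

```python
import numpy as np

def mean3x3(a):
    # pad the borders so every pixel has a full 3x3 neighbourhood,
    # then average the nine shifted views of the array
    p = np.pad(a.astype(float), 1, mode="edge")
    acc = sum(p[i:i + a.shape[0], j:j + a.shape[1]]
              for i in range(3) for j in range(3))
    return acc / 9.0

noisy = np.array([[10., 10., 10.],
                  [10., 100., 10.],
                  [10., 10., 10.]])
smoothed = mean3x3(noisy)  # the 100 spike is damped to 20 at the centre
```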
Is there a way to save a large (4000000, 200, 3) numpy RGB array as an image with axes using PIL? Or maybe any other packages?
I tried using matplotlib, but its savefig method is weak and does not allow me to save large figures.
So then I tried PIL; it works OK but does not allow me to add axes to the image.
At the moment I'm using scikit-image to process my images in Python. But after some testing I found out that scikit-image doesn't work with images that have too high a resolution. I tried to use an image with a resolution of 3024 x 4032, but it results in a MemoryError. This happens with multiple different methods provided by scikit-image.
I've found out that it does work if I downscale the image to a much lower resolution. I want to know what the maximum allowed resolution is, so that I can downscale my images without losing too much of their quality, and so that I can check whether a resolution is too big.
I found the real cause of the problem. It's not the resolution but rather scikit-image, which converts the image's datatype to float, making it far too big for memory.
A way to get around this is to turn your image into a numpy array with the datatype uint8, like this:
from PIL import Image
import numpy as np
from skimage.color import rgb2gray
im = Image.open("test.jpg")
pix = np.array(im, dtype=np.uint8)
img = rgb2gray(pix)
After converting it to a numpy array, you can use it in any operation provided by scikit-image.
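The float conversion is what blows up memory: promoting uint8 data to float64 takes 8 bytes per value instead of 1. A quick check with a small stand-in array (the sizes here are just illustrative — a real 3024 x 4032 RGB image is roughly 36 MB as uint8 but close to 290 MB as float64):

```python
import numpy as np

# small stand-in for an RGB photo
pix = np.zeros((302, 403, 3), dtype=np.uint8)
as_float = pix.astype(np.float64)

small = pix.nbytes       # 1 byte per value
big = as_float.nbytes    # 8 bytes per value, an 8x blow-up
```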
Your workaround is fine, but I would have done it like this:
from skimage import io
from skimage import img_as_ubyte
img = img_as_ubyte(io.imread('test.jpg', as_gray=True))
I have two images:
Excuse the different resolutions, but that's not the point. On the left I have a "large" blob due to a camera reflection. I want to get rid of that blob, i.e. close it. But on the right I have smaller blobs that are valuable information I need to keep.
Both of these images need to undergo the same algorithm.
If I use a simple opening, the smaller blobs will be gone too. Is there an easy way to implement this in Python with skimage and/or PIL?
In a perfect world the left image should just create a white circle, where the right image should have the black dots within the white circle. It is okay to change the size of the black dots on the right image.
Here is an image which should describe the problem directly.
OK, so before I answer, I have to tell you that this is a hackish way and has no scientific background.
from skimage import io, measure
import numpy as np

img = io.imread('img.png', as_gray=True)
img = np.invert(img > 0)  # binarize and invert so the blobs become True
labeled_img = measure.label(img)  # label connected components
labels = np.unique(labeled_img)
newimg = np.zeros((img.shape[0], img.shape[1]))
for label in labels:
    # keep only components smaller than 250 pixels
    if np.sum(labeled_img == label) < 250:
        newimg = newimg + (labeled_img == label)
io.imshow(newimg)
io.show()
Since this is a hackish way, I know I should have commented rather than answered, but I don't have enough points to comment.
I have an image of size 300*300 pixels and want to convert it into a 1500*1500 pixel image using Python. I need this because I have to georeference this image against a 1500*1500 pixel raster image. Any Python library function or basic fundamentals on how to do this are welcome.
You should try using Pillow (a fork of PIL, the Python Imaging Library).
Simple, like this:
from PIL import Image
img = Image.open("my_image.png")
img = img.resize((1500, 1500))  # resize() returns a new image, it does not modify img in place
img.save("new_image.png")
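If the image is already a NumPy array (e.g. a raster band), an integer-factor upscale like 300 -> 1500 can also be sketched without PIL by repeating pixels. This keeps every original value exact (no interpolation blending), which can matter for georeferenced or categorical data; the array and factor below are just a toy stand-in:

```python
import numpy as np

arr = np.arange(4).reshape(2, 2)  # stand-in for the 300x300 raster
factor = 2                        # would be 5 for 300 -> 1500
# repeat each pixel factor times along each axis:
up = np.repeat(np.repeat(arr, factor, axis=0), factor, axis=1)
# up.shape is (4, 4): each pixel became a factor x factor block
```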