skimage max image resolution (memoryerror) - python

At the moment I'm using scikit-image to process my images in Python. But after some testing I found out that scikit-image doesn't work with images that have too high a resolution. I tried to use an image with a resolution of 3024 x 4032, but it results in a MemoryError. This happens with multiple different methods provided by scikit-image.
I've found out that it does work if I downscale the image to a much lower resolution. I want to know what the maximum allowed resolution is, so that I can downscale my images without losing too much of their quality, and so that I can check whether a resolution is too big.

I found the real cause of the problem. It's not the resolution, but rather that scikit-image converts the image to a float datatype, which makes it far too big to fit in memory.
A way to get around this is to turn your image into a NumPy array with the datatype uint8, like this:
from PIL import Image
import numpy as np
from skimage.color import rgb2gray
im = Image.open("test.jpg")
pix = np.array(im, dtype=np.uint8)  # keep the 8-bit datatype instead of letting it become float
img = rgb2gray(pix)
After converting the image to a NumPy array, you can use it with any operation provided by scikit-image.
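To see why the float conversion blows up, here is a rough comparison of array sizes for an image of that resolution (a sketch using a synthetic array, not the original photo):
import numpy as np
pix = np.zeros((4032, 3024, 3), dtype=np.uint8)  # blank 3024 x 4032 RGB image in uint8
print(pix.nbytes / 1e6)                          # ~36.6 MB
print(pix.astype(np.float64).nbytes / 1e6)       # ~292.6 MB as float64, eight times larger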

Your workaround is fine, but I would have done it like this:
from skimage import io
from skimage import img_as_ubyte
img = img_as_ubyte(io.imread('test.jpg', as_gray=True))  # the parameter was spelled as_grey in older scikit-image versions

Related

How to remove noise from an image using pillow?

I am trying to de-noise an image that I've made in order to read the numbers on it using Tesseract.
Noisy image.
Is there any way to do so?
I am kind of new to image manipulation.
from PIL import Image, ImageFilter
im = Image.open("noisy.png")  # placeholder name for the noisy input image
im1 = im.filter(ImageFilter.BLUR)
im2 = im.filter(ImageFilter.MinFilter(3))
im3 = im.filter(ImageFilter.MinFilter)
The Pillow library provides the ImageFilter module that can be used to enhance images. Per the documentation:
The ImageFilter module contains definitions for a pre-defined set of filters, which can be used with the Image.filter() method.
These filters work by passing a window or kernel over the image and computing some function of the pixels in that box to modify the pixels (usually the central pixel).
The MedianFilter seems to be widely used and resembles the description given in nishthaneeraj's answer.
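For example, a minimal sketch applying a MedianFilter before handing the result to Tesseract (the file names and window size are assumptions):
from PIL import Image, ImageFilter
im = Image.open("noisy.png").convert("L")         # hypothetical noisy input, converted to grayscale
cleaned = im.filter(ImageFilter.MedianFilter(3))  # each pixel becomes the median of its 3x3 neighbourhood
cleaned.save("cleaned.png")                       # feed this file to Tesseract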
You should read the Pillow documentation.
Pillow documentation:
https://pillow.readthedocs.io/en/stable/
Pillow ImageFilter module:
https://pillow.readthedocs.io/en/stable/reference/ImageFilter.html#module-PIL.ImageFilter
How do you remove noise from an image in Python?
The mean filter is used to blur an image in order to remove noise. It involves determining the mean of the pixel values within an n x n kernel. The pixel intensity of the center element is then replaced by that mean. This eliminates some of the noise in the image and smooths the edges of the image.
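As a sketch of that idea with a 3 x 3 window, using scipy.ndimage.uniform_filter on synthetic data (the array here is just a stand-in for a real grayscale image):
import numpy as np
from scipy import ndimage
noisy = np.random.rand(100, 100) * 255            # stand-in for a noisy grayscale image
smoothed = ndimage.uniform_filter(noisy, size=3)  # mean of each 3 x 3 neighbourhood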

Perform image cropping on image-like data using Python

I have image-like data, and I want to perform image cropping, squishing and zooming on one or both axes. The problem is that the data is not in the 0-255 range, and normalizing it to 0-255 would mean losing a lot of the information I want to preserve. So unfortunately I can't use PIL or cv2. Is there an easy way to do it with numpy or scipy?
Thanks for the help
You can crop pictures with simple indexing, like
picture[100:300, 400:800]
To squish or zoom (is that anything more than resizing?), you can just resize with skimage:
from skimage import data, color
from skimage.transform import resize
image = color.rgb2gray(data.astronaut())
image_resized = resize(image, (image.shape[0] // 4, image.shape[1] // 4),
                       anti_aliasing=True)
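Since the data here isn't in the 0-255 range, it may be worth passing preserve_range=True so resize keeps the original values instead of converting them to the usual float convention (a sketch on synthetic data):
import numpy as np
from skimage.transform import resize
data = np.random.randn(300, 300) * 1e4  # hypothetical image-like data, far outside 0-255
cropped = data[50:250, 50:250]          # crop with plain indexing
zoomed = resize(cropped, (400, 400), anti_aliasing=True, preserve_range=True)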
Check the ImageChops module of the Pillow library.

Reading in RAW image is larger

When I read in a camera raw image (CR2) via rawpy, it is larger than the original, and I was wondering if anyone knows what the issue might be. The original image is of size 6000 by 4000, but comes in as 6022 by 4020.
import rawpy
with rawpy.imread(raw_image) as raw:  # raw_image is the path to the CR2 file
    img = raw.postprocess(output_bps=16, output_color=rawpy.ColorSpace.sRGB)
print(img.shape)  # -> (6022, 4020, 3)
Try reading it with scikit-image. If the shape is the same as with rawpy, then it might be due to the extra pixels mentioned by Mark Ransom. If the shape is correct, then the extra pixels were due to the camera sensor and you can now use the image as a numpy array (or try reading it via the rawpy library).
Last but not least, you could try opencv, which can read camera images in real time.
Scikit-image
https://scikit-image.org/
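A quick way to make that comparison (a sketch; the file name is a placeholder, and whether io.imread can open a CR2 directly depends on the imageio plugins installed):
import rawpy
from skimage import io
raw_file = "photo.CR2"  # placeholder path to the camera raw file
with rawpy.imread(raw_file) as raw:
    img_rawpy = raw.postprocess(output_bps=16, output_color=rawpy.ColorSpace.sRGB)
img_skimage = io.imread(raw_file)  # may fail if no RAW-capable plugin is available
print(img_rawpy.shape, img_skimage.shape)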

Resample the Image Pixels in Python

I have an image of 300 x 300 pixels and want to convert it to 1500 x 1500 pixels using Python. I need this because I have to georeference this image against a 1500 x 1500 pixel raster image. Any Python library function or basic approach for doing this is welcome.
You should try using Pillow (the fork of PIL, the Python Imaging Library).
It's as simple as this:
from PIL import Image
img = Image.open("my_image.png")
img = img.resize((1500, 1500))  # resize() returns a new image rather than modifying in place
img.save("new_image.png")
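If the goal is georeferencing, it may be better to keep the original pixel values and simply repeat each source pixel into a 5 x 5 block, which nearest-neighbour resampling does (a sketch; the file names are placeholders, and older Pillow versions spell the constant Image.NEAREST rather than Image.Resampling.NEAREST):
from PIL import Image
img = Image.open("my_image.png")                                   # 300 x 300 source
big = img.resize((1500, 1500), resample=Image.Resampling.NEAREST)  # each pixel becomes a 5 x 5 block
big.save("new_image.png")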

Change a pixel value

I have an image that I opened using LoadImageM, and I get the pixel data using Get2D, but I can't seem to find any built-in function to change a pixel value.
I've tried multiple things, from Rectangle to CV_RGB, but with no success.
Consider checking out the new version of the opencv library.
You import it with
import cv2
and it directly returns numpy arrays.
So for example, if you do
image_array = cv2.imread('image.png')
then you can access and change the pixel values simply by manipulating image_array:
image_array[0, 0] = 100
sets the top-left pixel to the value 100.
Depending on your installation, you may already have the cv2 bindings, so check if import cv2 works.
Otherwise just install opencv and numpy and you are good to go.
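For instance, a short sketch that changes a single pixel and a block of pixels, then writes the result back out (the file names are placeholders; note that OpenCV stores channels in BGR order):
import cv2
image_array = cv2.imread("image.png")    # BGR numpy array
image_array[0, 0] = (255, 0, 0)          # top-left pixel becomes blue
image_array[10:20, 10:20] = (0, 0, 255)  # a 10 x 10 block becomes red
cv2.imwrite("modified.png", image_array)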
