Read Grayscale raster in opencv - python

I have a raster image which contains only one band. I am trying to apply histogram equalization to this raster. The problem is that I cannot read it as grayscale. If I read it as unchanged, it shows the shape and the values.
import cv2
import numpy as np
img = cv2.imread('blur3.tif',cv2.IMREAD_UNCHANGED)
print(img.shape)
(1678, 1064)
But when I change the code to read it as grayscale, it comes back empty.
import cv2
import numpy as np
img = cv2.imread('blur3.tif',cv2.IMREAD_GRAYSCALE)
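One workaround (a sketch, assuming the failure comes from IMREAD_GRAYSCALE's forced 8-bit conversion of the single-band TIFF): read the raster unchanged, rescale it to uint8 yourself, and then run the histogram equalization, since cv2.equalizeHist only accepts single-channel 8-bit input.
import cv2
import numpy as np
# Read the raster as-is (original bit depth), then stretch it to 0-255 uint8
img = cv2.imread('blur3.tif', cv2.IMREAD_UNCHANGED)
img8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
# Histogram equalization on the 8-bit single-band image
equalized = cv2.equalizeHist(img8)
cv2.imwrite('blur3_equalized.tif', equalized)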

Related

Saving Image in tensor array as jpg or png

I am trying to detect faces using MTCNN. The main aim is to detect a face, crop it, and save the cropped image as a jpg or png file. The code implemented is below.
from facenet_pytorch import MTCNN
from PIL import Image
import numpy as np
from matplotlib import pyplot as plt
img = Image.open("example.jpg")
mtcnn = MTCNN(margin=20, keep_all=True, post_process=False)
faces = mtcnn(img)
print(faces.shape)
This gives the shape
torch.Size([1, 3, 160, 160])
How do I save this cropped portion as a jpg file?
torch.save(faces, "faces.torch")
That won't be saved as an image; if you want to save it as an image:
# faces has shape [1, 3, 160, 160]: move channels last and cast to uint8 before converting
face = faces[0].detach().cpu().permute(1, 2, 0).byte().numpy()
img = Image.fromarray(face)
img.save("faces.png")

Saving grayscale image to a directory in python

I have a piece of code that takes in image data as grayscale values and then converts it into an image using matplotlib, shown below.
import matplotlib.pyplot as plt
import numpy
image_data = image_result.GetNDArray()
numpy.savetxt('data.csv', image_data)
# Draws an image on the current figure
image = plt.imshow(image_data, cmap='gray')
I want to be able to export this data to LabVIEW as a .png file, so I need to save these images to a folder where LabVIEW can read and display them. Is there a function in Pillow or os that can do this?
Does this work? Note that plt.imsave expects the array itself rather than the object returned by plt.imshow:
plt.imsave('output.png', image_data, cmap='gray')
If image_data is a Numpy array of shape height x width with dtype=np.uint8 or dtype=np.uint16, you can make a PIL Image and save it as a PNG like this:
from PIL import Image
# Make PIL Image from Numpy array
pImage = Image.fromarray(image_data)
pImage.save('forLabView.png')
You can equally use OpenCV to save a Numpy array as a PNG for LabView like this:
import cv2
# Save Numpy array as PNG
cv2.imwrite('forLabView.png', image_data)
Check what your array is with:
print(image_data.shape, image_data.dtype)
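If the array turns out to hold some other dtype (for example float data from the camera SDK), one option, sketched here under the assumption that a simple min-max stretch is acceptable, is to rescale it to 8-bit before writing:
import numpy as np
# Min-max stretch to 0-255 and cast to uint8 so imwrite produces a standard PNG
lo, hi = float(image_data.min()), float(image_data.max())
scale = 255.0 / max(hi - lo, 1e-12)  # guard against a constant image
image_8bit = ((image_data - lo) * scale).astype(np.uint8)
cv2.imwrite('forLabView.png', image_8bit)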

Unable To Convert Numpy Array to Image

I am trying to create a simple image using Numpy and PIL. However, I seem to be getting this bizarre image instead of what I expected.
My code (cell-wise in a Jupyter notebook):
import numpy as np
from PIL import Image
arr = np.zeros([100,100,3])
arr[:,:] = [255,128,0]
img = Image.fromarray(arr, 'RGB')
img
The resultant image was garbled, not the completely orange image I expected.
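The likely cause (an assumption based on the code shown, not stated in the question): np.zeros defaults to float64, and Image.fromarray with an explicit 'RGB' mode reads the underlying buffer as raw bytes, which produces the bizarre pattern. Declaring the array as uint8 gives the expected solid orange:
import numpy as np
from PIL import Image

# uint8 makes each channel a single byte, which is what mode 'RGB' expects
arr = np.zeros((100, 100, 3), dtype=np.uint8)
arr[:, :] = [255, 128, 0]
img = Image.fromarray(arr, 'RGB')  # solid orange
img.save('orange.png')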

Creating an RGB picture in Python with OpenCV from a randomized array

I want to create an RGB image made from a random array of pixel values in Python with an OpenCV/NumPy setup.
I'm able to create a gray image, which looks amazingly lively, with this code:
import numpy as np
import cv2
pic_array=np.random.randint(255, size=(900,800))
pic_array_8bit = pic_array.astype(np.uint8)
pic_g=cv2.imwrite("pic-from-random-array.png", pic_array_8bit)
But I want to make it in color as well. I've tried converting with cv2.cvtColor(), but it didn't work.
The issue might be in the array definition or a missed step. I couldn't find a similar situation... Any help on how to make a random RGB image in color would be great.
Thanks!
An RGB image is composed of three grayscale channels. You can generate all three channels at once like this:
rgb = np.random.randint(255, size=(900,800,3),dtype=np.uint8)
cv2.imshow('RGB',rgb)
cv2.waitKey(0)
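As a side note on the asker's cv2.cvtColor() attempt: converting the single gray channel only copies it into three identical channels, so the image still looks gray. A minimal sketch, reusing pic_array_8bit from the question:
# COLOR_GRAY2BGR replicates the gray values into B, G and R, so the result
# still looks gray; independent random values per channel are needed for colour
gray_3ch = cv2.cvtColor(pic_array_8bit, cv2.COLOR_GRAY2BGR)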
First, define random image data consisting of 3 channels using NumPy, as shown below:
import numpy as np
data = np.random.randint(0, 255, size=(900, 800, 3), dtype=np.uint8)
Now use the Python Imaging Library (PIL), as shown below:
from PIL import Image
img = Image.fromarray(data, 'RGB')
img.show()
You can also save the image easily using the save function:
img.save('image.png')

Scale imread matrix in python

I am looking for a way to rescale the matrix obtained by reading in a PNG file using the matplotlib routine imread,
e.g.
from pylab import imread, imshow, gray, mean
from matplotlib.pyplot import show
a = imread('spiral.png')
# generates an RGB image, so do
imshow(a)
show()
but actually I want to manually specify the dimensions of a, say 200x200 entries, so I need some magic command (which I assume exists but cannot find myself) to interpolate the matrix.
Thanks for any useful comments : )
Cheers
You could try using the PIL (Image) module instead, together with numpy. Open and resize the image using Image, then convert it to an array using numpy, and display it using pylab.
import pylab as pl
import numpy as np
from PIL import Image
path = r'\path\to\image\file.jpg'
img = Image.open(path)
img = img.resize((200, 200))  # resize() returns a new image rather than resizing in place
a = np.asarray(img)
pl.imshow(a)
pl.show()
Hope this helps.
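Alternatively (a sketch, assuming OpenCV is installed), you can resize the array returned by imread directly, without going through PIL:
import cv2
# Resize the matrix itself to 200x200; INTER_AREA is a reasonable choice for downscaling
a_small = cv2.resize(a, (200, 200), interpolation=cv2.INTER_AREA)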
