I have to integrate my Python code with LabVIEW, and I am comparing the pixel values of the same image in both.
LabVIEW gives the pixel values as U16, so I want to read the pixel values of the same image in Python and check whether they match.
Can someone please help me with the code for the same?
My image is a black-and-white PNG.
You can use PIL, OpenCV, wand, or scikit-image for that. Here is a PIL version:
from PIL import Image
import numpy as np
# Open image
im = Image.open('dXGat.png')
# Make into Numpy array for ease of access
na = np.array(im)
# Print shape (pixel dimensions) and data type
print(na.shape,na.dtype) # prints (256, 320) int32
# Print brightest and darkest pixel
print(na.max(), na.min())
# Print top-left pixel
print(na[0,0]) # prints 25817
# WATCH OUT FOR INDEXING - IT IS ROW FIRST
# print first pixel in second row
print(na[1,0]) # prints 24151
# print first 4 columns of first 2 rows
print(na[0:2,0:4])
Output
array([[25817, 32223, 30301, 33504],
[24151, 22934, 19859, 21460]], dtype=int32)
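Since LabVIEW reports U16 values while PIL gives an int32 array here, you may want the dtype to match exactly before comparing. A minimal sketch, assuming the PNG really is 16-bit greyscale so every value fits in uint16:
# Cast to uint16 so the dtype matches LabVIEW's U16 representation
# (the values themselves are unchanged)
na16 = na.astype(np.uint16)
print(na16.dtype, na16[0,0]) # uint16 25817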
If you prefer to use OpenCV, change these lines:
from PIL import Image
import numpy as np
# Open image
im = Image.open('dXGat.png')
# Make into Numpy array for ease of access
na = np.array(im)
to this:
import cv2
import numpy as np
# Open image
na = cv2.imread('dXGat.png',cv2.IMREAD_UNCHANGED)
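As a quick sanity check (assuming the file really is a 16-bit PNG), you can confirm that OpenCV kept the full 16-bit range:
# IMREAD_UNCHANGED should give a uint16 array for a 16-bit greyscale PNG
print(na.shape, na.dtype) # e.g. (256, 320) uint16
print(na[0,0])            # same top-left value as the PIL version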
If you just want to inspect the pixels once, you can use ImageMagick in the Terminal:
magick dXGat.png txt: | more
Sample Output
# ImageMagick pixel enumeration: 320,256,65535,gray
0,0: (25817) #64D964D964D9 gray(39.3942%)
1,0: (32223) #7DDF7DDF7DDF gray(49.1691%)
2,0: (30301) #765D765D765D gray(46.2364%)
3,0: (33504) #82E082E082E0 gray(51.1238%)
...
...
317,255: (20371) #4F934F934F93 gray(31.0842%)
318,255: (20307) #4F534F534F53 gray(30.9865%)
319,255: (20307) #4F534F534F53 gray(30.9865%)
I want to make the visible part of image more transparent, but also do not change the alpha-level of fully-transparent background.
Here's the image:
and I do it like this:
from PIL import Image
img = Image.open('image_with_transparent_background.png')
img.putalpha(128)
img.save('half_transparent_image_with_preserved_background.png')
And here is what I get: half_transparent_image_with_preserved_background.png
How do I achieve exactly what I want - so, without changing the background?
I think you want to make the alpha 128 anywhere it is currently non-zero:
from PIL import Image
# Load image and extract alpha channel
im = Image.open('moth.png')
A = im.getchannel('A')
# Make all opaque pixels into semi-opaque
newA = A.point(lambda i: 128 if i>0 else 0)
# Put new alpha channel back into original image and save
im.putalpha(newA)
im.save('result.png')
If you are happier doing that with Numpy, you can do:
from PIL import Image
import numpy as np
# Load image and make into Numpy array
im = Image.open('moth.png')
na = np.array(im)
# Make alpha 128 anywhere it is non-zero
na[...,3] = 128 * (na[...,3] > 0)
# Convert back to PIL Image and save
Image.fromarray(na).save('result.png')
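Either way, you can verify that the fully transparent background was preserved by checking which alpha values remain. A quick check on the Numpy result:
# The alpha channel should now contain only 0 (background) and 128 (subject)
print(np.unique(na[...,3])) # expect [  0 128]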
As the title states I'm converting my image to a numpy array then converting it right back. Here's my code:
import os
import numpy as np
from PIL import Image
img = Image.open(os.path.join(no_black_border, png_files[0]))
img.show()
np_arr = np.asarray(img)
img1 = Image.fromarray(np_arr)
img1.show()
Here's my image before converting it:
Here's my image after converting it back:
Your image is not RGB; it is a palette image. That means it does not have a Red, a Green and a Blue value at every pixel location; instead, it has a single 8-bit palette index at each location that PIL uses to know the colour. You lose the palette when you convert to a Numpy array.
You have 2 choices.
Either convert your image to RGB when you open it and all 3 values will be carried across to Numpy:
# Load image and make RGB
im = Image.open(...).convert('RGB')
# Convert to Numpy array and process
numpyarray = np.array(im)
Or, do as you currently do, but re-apply the palette from the original image after converting back to a PIL Image:
# Load image
im = Image.open(...)
# Convert to Numpy array
numpyarray = np.array(im)
... do Numpy stuff ...
# Convert back to PIL Image and re-apply original palette
r = Image.fromarray(numpyarray,mode='P')
r.putpalette(im.getpalette())
# Optionally save
r.save('result.png')
See answer here and accompanying comments.
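If you are not sure which case applies to your file, you can inspect the mode before deciding. A minimal check (the filename is hypothetical):
from PIL import Image
im = Image.open('image.png')   # hypothetical filename
print(im.mode)                 # 'P' means palette; 'RGB'/'RGBA' means true colour
if im.mode == 'P':
    print(len(im.getpalette()) // 3, 'palette entries')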
I am testing a segmentation algorithm on several VHSR satellite images, which originally come in 16-bit format, but when I convert them to 8-bit images, the produced images show a striped appearance.
I've been trying different Python libraries (skimage, cv2, scipy) and getting similar results.
1) The original 16-bit image is a 4-band image (NIR, B, G, R), so you need to choose the right bands to create a true color RGB image (bands 4, 3, 2). It can be downloaded from this link:
16bit image
2) I use this code to convert each pixel value from a 16-bit integer to one fitting within the 8-bit range:
import numpy as np
from skimage import io
from scipy.misc import bytescale
import matplotlib.pyplot as plt
SS = io.imread('Imag16bit.tif')
SS = bytescale(SS)
SS = np.asarray(SS)
plt.imshow(SS)
This is the result of the above code:
bytescale works for me. I think the asarray step messes something up.
import cv2
from skimage import io
from scipy.misc import bytescale
image = io.imread('SkySat_16bit.tif')
cv2.imshow('Original', image)
print(image.dtype)
image = bytescale(image)
print(image.dtype)
cv2.imshow('Converted', image)
cv2.waitKey(0)
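Note that scipy.misc.bytescale has been removed from newer SciPy releases. If it is not available, something roughly equivalent (my own sketch of a min/max rescale, not the original SciPy implementation) is:
import numpy as np

def bytescale(arr, low=0, high=255):
    # Linearly map arr's min..max range onto low..high and return uint8
    a = arr.astype(np.float64)
    span = a.max() - a.min()
    if span == 0:
        return np.full(arr.shape, low, dtype=np.uint8)
    return ((a - a.min()) / span * (high - low) + low).astype(np.uint8)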
I think this is a way to do it:
#!/usr/local/bin/python3
from PIL import Image
from tifffile import imread
import numpy as np
# Load image
im = imread('SkySat_16bit.tif')
# Extract Red, Green and Blue bands into separate 8-bit arrays
R = (im[:,:,3]/256).astype(np.uint8)
G = (im[:,:,2]/256).astype(np.uint8)
B = (im[:,:,1]/256).astype(np.uint8)
# Combine bands into RGB array
RGB = np.dstack((R,G,B))
# Save to disk
Image.fromarray(RGB).save('result.png')
You may want to adjust the contrast a bit, and check that I selected the correct bands.
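If the simple division by 256 looks flat or too dark, a percentile stretch per band may help. This is a rough sketch that reuses the im array loaded above; the 2–98 % cut-offs are just a guess:
import numpy as np

def stretch_to_uint8(band, lo_pct=2, hi_pct=98):
    # Clip the band to its 2nd..98th percentile, then rescale to 0..255
    lo, hi = np.percentile(band, [lo_pct, hi_pct])
    if hi <= lo:
        hi = lo + 1
    out = (band.astype(np.float64) - lo) / (hi - lo) * 255
    return np.clip(out, 0, 255).astype(np.uint8)

R = stretch_to_uint8(im[:,:,3])
G = stretch_to_uint8(im[:,:,2])
B = stretch_to_uint8(im[:,:,1])
RGB = np.dstack((R,G,B))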
I am taking an RGB image as input in Python, which is obviously converted into a 2D numpy array. I would like to replace only a window/part of the image by making it totally white (i.e. replacing it with a 2D numpy array containing only values of 255).
Here's what I tried:
img[i:i+r,j:j+c] = (np.ones(shape=(r,c))) * 255
r, c is my window size (128×128) and my input image has RGB channels. It throws an error:
ValueError: could not broadcast input array from shape (128,128) into shape (128,3)
Note: I would like my final output image to remain RGB, with specific parts replaced by white windows. I am using Python 3.5.
You can do it like this:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
# Numpy array containing 640x480 solid blue image
solidBlueImage=np.zeros([480,640,3],dtype=np.uint8)
solidBlueImage[:]=(0,0,255)
# Make a white window
solidBlueImage[20:460,200:600]=(255,255,255)
# Save as PNG
img=Image.fromarray(solidBlueImage)
img.save("result.png")
Essentially, we are using numpy indexing to draw over the image.
Or like this:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
# Numpy array containing 640x480 solid blue image
solidBlueImage=np.zeros([480,640,3],dtype=np.uint8)
solidBlueImage[:]=(0,0,255)
# Make a white array
h,w=100,200
white=np.zeros([h,w,3],dtype=np.uint8)
white[:]=(255,255,255)
# Splat white onto blue
np.copyto(solidBlueImage[20:20+h, 100:100+w], white)
# Save as PNG
img=Image.fromarray(solidBlueImage)
img.save("result.png")
Essentially, we are using numpy's copyto() in order to paste (or composite, or overlay) one image onto another.
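For the original error specifically: the right-hand side must either be a scalar or be broadcastable to shape (r, c, 3). Either of these fixes the asker's line (a sketch using the names from the question):
# A scalar broadcasts across all three channels
img[i:i+r, j:j+c] = 255
# Or build an explicit white block that includes the channel axis
img[i:i+r, j:j+c] = np.ones((r, c, 3), dtype=img.dtype) * 255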
I want to do some image processing using Python.
Is there a simple way to import .png image as a matrix of greyscale/RGB values (possibly using PIL)?
scipy.misc.imread() will return a Numpy array, which is handy for lots of things.
Up till now no one has mentioned matplotlib.image:
import matplotlib.image as img
image = img.imread(file_name)
Now image is a 3D numpy array:
print(image.shape)
This would print something like: (317, 504, 3)
scipy.misc.imread() is deprecated now. We can use imageio.imread instead to read the image as a Numpy array.
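For example (a minimal sketch, assuming imageio is installed; the filename is hypothetical):
import imageio
im = imageio.imread('image.png')   # hypothetical filename
print(im.shape, im.dtype)          # Numpy array, e.g. (H, W, 3) uint8 for RGB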
im.load() in PIL returns a pixel-access object that behaves like a matrix.
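For example (the filename is hypothetical):
from PIL import Image
im = Image.open('image.png')   # hypothetical filename
px = im.load()                 # pixel-access object, indexed as [x, y]
print(px[0, 0])                # e.g. an (R, G, B) tuple for an RGB image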
You can use pygame's image module and PixelArray to access the pixel data.
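Something along these lines (a sketch, assuming pygame is installed; the filename is hypothetical):
import pygame
surface = pygame.image.load('image.png')   # hypothetical filename
pixels = pygame.PixelArray(surface)
print(surface.unmap_rgb(pixels[0, 0]))     # Color of the top-left pixel
del pixels                                 # release the surface lock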
Definitely try
from matplotlib.image import imread
image = imread(filename)
The filename should preferably be a .jpg image.
And then, try
image.shape
This would return:
For a black-and-white or grayscale image:
An (n, m) matrix, where n and m are the image's pixel dimensions (height and width), with values ranging from 0 to 255.
Typically 0 is taken to be black, and 255 is taken to be white. 128 tends to be grey!
For a color or RGB image:
It will return a tensor with 3 channels. Each channel is an (n, m) matrix whose entries represent the level of Red, Green, or Blue, respectively, at that location in the image.
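If you specifically want the greyscale matrix from a colour file (or vice versa), you can convert on load. A small sketch with PIL (the filename is hypothetical):
from PIL import Image
import numpy as np

grey = np.array(Image.open('image.png').convert('L'))   # hypothetical filename
rgb  = np.array(Image.open('image.png').convert('RGB'))
print(grey.shape)   # (height, width)
print(rgb.shape)    # (height, width, 3)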