Python PIL reduce alpha level, but do not change transparent background - python

I want to make the visible part of the image more transparent, but without changing the alpha level of the fully transparent background.
Here's the image:
and I do it like this:
from PIL import Image
img = Image.open('image_with_transparent_background.png')
img.putalpha(128)
img.save('half_transparent_image_with_preserved_background.png')
And here is what I get: half_transparent_image_with_preserved_background.png
How do I achieve exactly what I want - so, without changing the background?

I think you want to make the alpha 128 anywhere it is currently non-zero:
from PIL import Image
# Load image and extract alpha channel
im = Image.open('moth.png')
A = im.getchannel('A')
# Make all opaque pixels into semi-opaque
newA = A.point(lambda i: 128 if i>0 else 0)
# Put new alpha channel back into original image and save
im.putalpha(newA)
im.save('result.png')
If you are happier doing that with Numpy, you can do:
from PIL import Image
import numpy as np
# Load image and make into Numpy array
im = Image.open('moth.png')
na = np.array(im)
# Make alpha 128 anywhere it is non-zero
na[...,3] = 128 * (na[...,3] > 0)
# Convert back to PIL Image and save
Image.fromarray(na).save('result.png')
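As a quick sanity check (a sketch, assuming the file names used above), the alpha channel of the result should now contain only the values 0 and 128:
from PIL import Image
# getextrema() returns (min, max) of the channel; expect (0, 128):
# 0 for the untouched background, 128 for the formerly opaque pixels
res = Image.open('result.png')
print(res.getchannel('A').getextrema())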

Related

Read pixels from image in python as in Labview

I have to integrate my Python code into LabVIEW, and I am comparing the pixel values of the image in both.
LabVIEW gives pixel values as U16, so I want to see the pixel values of the same image in Python and check whether the values match.
Can someone please help me with the code for the same?
My image is a black-and-white PNG.
You can use PIL, OpenCV, wand, or scikit-image for that. Here is a PIL version:
from PIL import Image
import numpy as np
# Open image
im = Image.open('dXGat.png')
# Make into Numpy array for ease of access
na = np.array(im)
# Print shape (pixel dimensions) and data type
print(na.shape,na.dtype) # prints (256, 320) int32
# Print brightest and darkest pixel
print(na.max(), na.min())
# Print top-left pixel
print(na[0,0]) # prints 25817
# WATCH OUT FOR INDEXING - IT IS ROW FIRST
# print first pixel in second row
print(na[1,0]) # prints 24151
# print first 4 columns of first 2 rows
print(na[0:2,0:4])
Output
array([[25817, 32223, 30301, 33504],
[24151, 22934, 19859, 21460]], dtype=int32)
If you prefer to use OpenCV, change these lines:
from PIL import Image
import numpy as np
# Open image
im = Image.open('dXGat.png')
# Make into Numpy array for ease of access
na = np.array(im)
to this:
import cv2
import numpy as np
# Open image
na = cv2.imread('dXGat.png',cv2.IMREAD_UNCHANGED)
If you just want to inspect the pixels once, you can simply use ImageMagick in the Terminal:
magick dXGat.png txt: | more
Sample Output
# ImageMagick pixel enumeration: 320,256,65535,gray
0,0: (25817) #64D964D964D9 gray(39.3942%)
1,0: (32223) #7DDF7DDF7DDF gray(49.1691%)
2,0: (30301) #765D765D765D gray(46.2364%)
3,0: (33504) #82E082E082E0 gray(51.1238%)
...
...
317,255: (20371) #4F934F934F93 gray(31.0842%)
318,255: (20307) #4F534F534F53 gray(30.9865%)
319,255: (20307) #4F534F534F53 gray(30.9865%)
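One thing to watch when comparing the two outputs: Numpy indexing is [row, col], while ImageMagick's x,y: prefix is column-first. For example, from the listings above:
# Numpy is [row, col]; ImageMagick's "1,0:" means column 1, row 0
print(na[0, 1]) # prints 32223, the value ImageMagick shows as "1,0: (32223)"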

How to change the pixel colour of an image with PIL?

I'd like to change a pixel and for some reason this isn't working.
from PIL import Image
import numpy
im = Image.open("art\\PlanetX#1.25.png")
a = numpy.asarray(im)
img = Image.fromarray(a)
pixels = img.load()
pixels[0, 0] = (255, 0, 0, 255)
The top-left pixel of the PNG should be set to red, but instead I get ValueError: Image is readonly.
If you want to change just a few odd pixels, you can use the rather slow putpixel() like this:
from PIL import Image
# Create blue 30x15 image
im = Image.new('RGB',(30,15),color='blue')
# Change single pixel at 10,0 to red
im.putpixel((10,0),(255,0,0))
Alternatively, you can convert the entire image to a Numpy array and make many more changes, much faster with Numpy functions:
from PIL import Image
import numpy as np
# Create blue 30x15 image
im = Image.new('RGB',(30,15),color='blue')
# Convert to Numpy array
na = np.array(im)
# Change single pixel at 10,0 to green
na[0,10] = (0,255,0)
# Change whole row to red
na[3] = (255,0,0)
# Change whole column to yellow
na[:,8] = (255,255,0)
# Convert back to PIL Image and save
Image.fromarray(na).save('result.png')
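As for the ValueError in the question: numpy.asarray can return a read-only array that shares the image's buffer, and Image.fromarray on such an array produces a read-only image. A writable copy avoids the error (a sketch, reusing the question's file path):
from PIL import Image
import numpy as np
im = Image.open("art\\PlanetX#1.25.png")
a = np.array(im)          # np.array copies, so the array is writable
img = Image.fromarray(a)  # image backed by the writable copy
pixels = img.load()
pixels[0, 0] = (255, 0, 0, 255)  # now succeeds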

Using dicom Images with OpenCV in Python

I am trying to use a DICOM image and manipulate it with OpenCV in a Python environment. So far I have used the pydicom library to read the DICOM (.dcm) image data and the pixel_array attribute to display the picture with OpenCV's imshow method, but the output is just a blank window. Here is the snippet of code I am using at this moment.
import numpy as np
import cv2
import pydicom as dicom
ds=dicom.dcmread('sample.dcm')
cv2.imshow('sample image dicom',ds.pixel_array)
cv2.waitKey()
If I print out the array used here, the output differs from what I would get with a normal Numpy array. I have tried matplotlib's imshow method as well, and it displayed the image with some colour distortions. Is there a way to convert the array into a legible format for OpenCV?
I faced a similar issue and used exposure.equalize_adapthist() (source). The resulting image isn't a hundred percent identical to what you would see in a DICOM viewer, but it's the best I was able to get.
import numpy as np
import cv2
import pydicom as dicom
from skimage import exposure
ds=dicom.dcmread('sample.dcm')
dcm_sample=ds.pixel_array
dcm_sample=exposure.equalize_adapthist(dcm_sample)
cv2.imshow('sample image dicom',dcm_sample)
cv2.waitKey()
I have figured out a way to get the image to show. As Dan mentioned in the comments, the matrix values were scaled down, and due to the imshow function the output was too dark for the human eye to differentiate. In the end, the only thing I needed to do was multiply the entire mat data by 128, and the image shows perfectly now. Multiplying the matrix by 255 overexposes the picture and blows out certain features. Here is the revised code.
import numpy as np
import cv2
import pydicom as dicom
ds=dicom.dcmread('sample.dcm')
dcm_sample=ds.pixel_array*128
cv2.imshow('sample image dicom',dcm_sample)
cv2.waitKey()
I don't think that is a correct answer. It works for that particular image only because most of your pixel values are in the lower range. Check this: OpenCV: How to visualize a depth image. It is for C++ but is easily adapted to Python.
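Along the lines of that comment, a more general fix than a fixed multiplier is to scale by the data's actual range, for instance with OpenCV's min-max normalization (a sketch, assuming the same sample.dcm; this replaces the fixed scaling, it is not code from either answer):
import cv2
import pydicom as dicom
ds = dicom.dcmread('sample.dcm')
# Map the image's actual min-max range onto 0-255 instead of using a fixed factor
dcm_sample = cv2.normalize(ds.pixel_array, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)
cv2.imshow('sample image dicom', dcm_sample)
cv2.waitKey(0)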
This is, in my opinion, the best way to open an image in OpenCV as a Numpy array while preserving the image quality:
import numpy as np
import pydicom, os, cv2

def dicom_to_numpy(ds):
    DCM_Img = ds

    rows = DCM_Img.get(0x00280010).value  # Get number of rows from tag (0028, 0010)
    cols = DCM_Img.get(0x00280011).value  # Get number of cols from tag (0028, 0011)

    Instance_Number = int(DCM_Img.get(0x00200013).value)  # Get actual slice instance number from tag (0020, 0013)

    Window_Center = int(DCM_Img.get(0x00281050).value)  # Get window center from tag (0028, 1050)
    Window_Width = int(DCM_Img.get(0x00281051).value)  # Get window width from tag (0028, 1051)
    Window_Max = int(Window_Center + Window_Width / 2)
    Window_Min = int(Window_Center - Window_Width / 2)

    if DCM_Img.get(0x00281052) is None:
        Rescale_Intercept = 0
    else:
        Rescale_Intercept = int(DCM_Img.get(0x00281052).value)

    if DCM_Img.get(0x00281053) is None:
        Rescale_Slope = 1
    else:
        Rescale_Slope = int(DCM_Img.get(0x00281053).value)

    New_Img = np.zeros((rows, cols), np.uint8)
    Pixels = DCM_Img.pixel_array

    for i in range(0, rows):
        for j in range(0, cols):
            Pix_Val = Pixels[i][j]
            Rescale_Pix_Val = Pix_Val * Rescale_Slope + Rescale_Intercept

            if Rescale_Pix_Val > Window_Max:  # if intensity is greater than max window
                New_Img[i][j] = 255
            elif Rescale_Pix_Val < Window_Min:  # if intensity is less than min window
                New_Img[i][j] = 0
            else:
                New_Img[i][j] = int(((Rescale_Pix_Val - Window_Min) / (Window_Max - Window_Min)) * 255)  # Normalize the intensities

    return New_Img

file_path = "C:/example.dcm"
image = pydicom.read_file(file_path)
image = dicom_to_numpy(image)

# show image
cv2.imshow('sample image dicom', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
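The per-pixel loop above is easy to follow but slow for large images; the same windowing arithmetic can be vectorized with Numpy (a sketch of an equivalent helper, assuming the tag values are extracted as above):
import numpy as np

def window_image(pixels, center, width, slope=1, intercept=0):
    # Apply the rescale, then map [Window_Min, Window_Max] onto 0-255,
    # clipping everything outside the window (same logic as the loop above)
    rescaled = pixels * slope + intercept
    w_min, w_max = center - width / 2, center + width / 2
    scaled = np.clip((rescaled - w_min) / (w_max - w_min), 0.0, 1.0) * 255
    return scaled.astype(np.uint8)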

Quality loss after combining two images with PIL and numpy

I'm using PIL and numpy to combine two images: one is a .jpg, and the other is represented by a numpy array that defines a mask I want to put on top of the original image (basically just a matrix with one and zero entries, the same size as the .jpg). PIL's composite function works just fine for that, but for some reason, after saving the composite image, the file size shrinks to approximately 1/3 of the original image size. Can someone explain this behavior to me?
Here's a code snippet:
import numpy as np
import PIL
from PIL import Image
from PIL import ImageColor
rgb = ImageColor.getrgb('black')
# Read image and write into numpy array
image = Image.open('test_image.jpg')
(im_width, im_height) = image.size
# Create empty mask
mask = np.zeros((im_width, im_height))
# Composite image and mask
solid_color = np.expand_dims(np.ones_like(mask), axis=2) * np.reshape(list(rgb), [1, 1, 3])
pil_solid_color = Image.fromarray(np.uint8(solid_color)).convert('RGBA')
pil_mask = Image.fromarray(np.uint8(255.*mask)).convert('L')
image = Image.composite(pil_solid_color, image, pil_mask)
# save image
image.save('test_image_with_mask.jpg')
The code was inspired by TensorFlow's object detection API. Thanks in advance.
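The likely explanation: Pillow re-encodes the JPEG on save, and its default JPEG quality is 75, so re-saving a file that was originally encoded at a higher quality produces a smaller file. Passing an explicit quality (a sketch, reusing the variable from the snippet) keeps more detail and a comparable size:
# Save with an explicit JPEG quality instead of Pillow's default of 75
image.save('test_image_with_mask.jpg', quality=95)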

Using Python Pillow lib to set Color depth

I am using the Python Pillow lib to change an image before sending it to device.
I need to change the image to make sure it meets the following requirements
Resolution (width x height) = 298 x 144
Grayscale
Color Depth (bits) = 4
Format = .png
I can do all of them with the exception of Color Depth to 4 bits.
Can anyone point me in the right direction on how to achieve this?
So far, I haven't been able to save 4-bit images with Pillow. You can use Pillow to reduce the number of gray levels in an image with:
import PIL.Image as Image
im = Image.open('test.png')
im1 = im.point(lambda x: int(x/17)*17)
Assuming test.png is an 8-bit grayscale image, i.e. it contains values in the range 0-255 (im.mode == 'L'), im1 now contains only 16 different values (0, 17, 34, ..., 255). This is also what ufp.image.changeColorDepth does. However, you still have an 8-bit image. So instead of the above, you can do
im2 = im.point(lambda x: int(x/17))
and you end up with an image that contains only 16 different values (0, 1, 2, ..., 15), so these values would all fit in a uint4 type. However, if you save such an image with Pillow
im2.save('test.png')
the PNG will still have a color depth of 8 bits (and if you open the image, you will see only really dark gray pixels). You can use PyPng to save a real 4-bit PNG:
import png
import numpy as np
png.from_array(np.asarray(im2, np.uint8), 'L;4').save('test4bit_pypng.png')
Unfortunately, PyPng seems to take much longer to save the images.
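To confirm the file really is 4-bit, you can read it back with PyPng (a quick check, assuming the file name used above):
import png
# read() returns (width, height, rows, info); info['bitdepth'] should be 4
width, height, rows, info = png.Reader('test4bit_pypng.png').read()
print(info['bitdepth'])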
Use the changeColorDepth function in the ufp.image module:
import ufp.image
import PIL
im = PIL.Image.open('test.png')
im = im.convert('L') # change to grayscale image
im.thumbnail((298, 144)) # resize to 298x144
ufp.image.changeColorDepth(im, 16) # change to 4-bit depth (this function modifies the original PIL.Image object in place)
# If you need a better conversion, use the ufp.image.quantizeByImprovedGrayScale function, which quantizes the image.
im.save('changed.png')
See this example: image quantization by Improved Gray Scale [Python].
