I'm trying to save an inverted image: I saved the inverted RGB colour data in the list pixelArray, then converted it to a numpy array. I'm not sure what is wrong, but any help is appreciated.
from PIL import Image
import numpy as np
img = Image.open('image.jpg')
pixels = img.load()
width, height = img.size
pixelArray = []
for y in range(height):
    for x in range(width):
        r, g, b = pixels[x, y]
        pixelArray.append((255-r, 255-b, 255-g))
invertedImageArray = np.array(pixelArray, dtype=np.uint8)
invertedImage = Image.fromarray(invertedImageArray, 'RGB')
invertedImage.save('inverted-image.jpeg')
img.show()
I'm getting the error "ValueError: not enough image data".
Your np.array call creates an array of shape (4000000, 3) instead of (2000, 2000, 3).
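One minimal fix, keeping your loop, is to reshape the flat list back to (height, width, 3) before handing it to PIL. (Note that the loop as posted also swaps the green and blue channels: it appends 255-b before 255-g.)
invertedImageArray = np.array(pixelArray, dtype=np.uint8).reshape(height, width, 3)
invertedImage = Image.fromarray(invertedImageArray, 'RGB')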
Also, you may find that applying the subtraction directly to the NumPy array is faster and easier:
from PIL import Image
import numpy as np
img = Image.open('image.jpg')
pixelArray = np.array(img)             # uint8 array of shape (height, width, 3)
invertedImageArray = 255 - pixelArray  # subtraction keeps the uint8 dtype
invertedImage = Image.fromarray(invertedImageArray, 'RGB')
invertedImage.save('inverted-image.jpeg')
PIL already provides an easier way to invert the image colours with the ImageOps module.
from PIL import Image, ImageOps
img = Image.open('image.jpg')
invertedImage = ImageOps.invert(img)
invertedImage.save('inverted-image.jpeg')
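One caveat worth knowing: in the Pillow versions I have used, ImageOps.invert only accepts "L" and "RGB" images and raises an error for "RGBA", so an image with an alpha channel may need converting first, e.g.:
from PIL import Image, ImageOps
img = Image.open('image.png')
invertedImage = ImageOps.invert(img.convert('RGB'))  # drops the alpha channel
invertedImage.save('inverted-image.png')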
I have a folder of about 150 large images (40000x30000x3), ~400 MB each, and I want to validate ROIs from an imaging analysis. I was planning to store the file information in a dask array and then index a specific section of an image by converting it to a numpy array.
import os
import dask_image.imread
import numpy as np
from PIL import ImageTk, Image, ImageDraw
lazy_signal = dask_image.imread.imread(os.path.join(path, '*.jpeg'))
for roi in rois:
    z, y_range, x_range = roi[:]
    img = lazy_signal[z,
                      y_range[0]:y_range[1],
                      x_range[0]:x_range[1],
                      :]
    img = np.asarray(img)
    img = Image.fromarray(img)
    img = img.resize((200, 200))
    draw = ImageDraw.Draw(img)
    draw.ellipse((85, 85, 115, 115), outline=(0, 0, 255), width=1)
    img.show()
However, the conversion to a numpy array takes multiple seconds per ROI, even though the image chunk is only (100, 100, 3). Any idea how to speed this up, or how to go straight from the dask array to an image?
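For what it's worth, a dask array slice can be materialised with .compute() and passed straight to Image.fromarray, which removes the explicit np.asarray step; note that np.asarray on a dask array triggers the same computation, so this alone may not remove the per-ROI cost, which likely comes from reading the whole underlying file chunk:
# .compute() returns the sliced region as a plain numpy array
img = Image.fromarray(lazy_signal[z,
                                  y_range[0]:y_range[1],
                                  x_range[0]:x_range[1],
                                  :].compute())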
A numpy array (x, y) of unsorted data (values between 0 and 10, for example) is converted to a coloured cv2 BGR image and saved:
self.arr = self.arr * 255  # BGR format
cv2.imwrite("img.png", self.arr)
How can I map this cv2 colour image to a blue range (light to dark blue), or to a green range (light to dark green)?
My thought is to go from image to numpy array, do some work on the array, then go back from array to image; but I don't know how to change the values to get the expected colours.
I'm not sure I understand the problem, but I would convert the image to grayscale, then create an empty BGR image (filled with zeros) and put the grayscale data into the B channel to get a "blue range", or into the G channel to get a "green range":
import cv2
import numpy as np
img = cv2.imread('test/lenna.png')
cv2.imshow('RGB', img)
h, w = img.shape[:2] # height, width
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow('Gray', gray_img)
blue_img = np.zeros((h,w,3), dtype='uint8')
blue_img[:,:,0] = gray_img # cv2 uses `BGR` instead of `RGB`
cv2.imshow('Blue', blue_img)
green_img = np.zeros((h,w,3), dtype='uint8')
green_img[:,:,1] = gray_img # cv2 uses `BGR` instead of `RGB`
cv2.imshow('Green', green_img)
red_img = np.zeros((h,w,3), dtype='uint8')
red_img[:,:,2] = gray_img # cv2 uses `BGR` instead of `RGB`
cv2.imshow('Red', red_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Image Lenna from Wikipedia.
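If a smoother light-to-dark gradient is wanted rather than filling a single channel, OpenCV's built-in colormaps may also fit; a sketch using cv2.applyColorMap (cv2.COLORMAP_WINTER runs blue to green; picking a map that matches the wanted hue takes some experimenting):
import cv2
img = cv2.imread('test/lenna.png')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# map grayscale intensities through a built-in colormap
colored = cv2.applyColorMap(gray_img, cv2.COLORMAP_WINTER)
cv2.imshow('Colormap', colored)
cv2.waitKey(0)
cv2.destroyAllWindows()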
I have an image with dimensions (1280 x 960). To create a blank image with these dimensions, I use this:
import cv2
import numpy as np
blank_image2 = 255 * np.ones(shape=[960, 1280, 3], dtype=np.uint8)
Is it possible to create a blank image based on the dimensions of another image? Something like this:
import cv2
import numpy as np
blank_image2 = 255 * np.ones(shape=image, dtype=np.uint8)
You can use the shape of the image object:
image = cv2.imread('img.jpg')
h, w, c = image.shape
blank_image2 = 255 * np.ones(shape=(h, w, c), dtype=np.uint8)
Amin is correct; I'm just sharing an alternative using ones_like, similar to Mark's suggestion:
image = cv2.imread('img.jpg')
blank_image = 255 * np.ones_like(image, dtype=np.uint8)
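Another equivalent one-liner, for what it's worth, is np.full_like, which copies the shape and dtype of the source image and fills with a given value directly:
image = cv2.imread('img.jpg')
blank_image = np.full_like(image, 255)  # same shape and dtype as `image`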
I was taking a data science course that had a section on using NumPy arrays for image inversion. The code can invert a jpg, but not a PNG; I tried other images with the "png" extension and it doesn't work on those either (it only shows a transparent image).
What can be the problem? Thank you!
from PIL import Image
from IPython.display import display
import numpy as np
#displaying the image
img = Image.open(r"./download.png")
display(img)
#converting the image into an array
imgArray = np.array(img)
imgArrayShape = imgArray.shape
#inverting the image
fullArray = np.full(imgArrayShape, 255)
invertedImageArray = abs(fullArray - imgArray)
invertedImageArray = invertedImageArray.astype(np.uint8)
#displaying the inverted image
invertedImage = Image.fromarray(invertedImageArray)
display(invertedImage)
As far as I can tell, the problem is that you inverted the alpha channel as well.
The following adaptation works on my end:
from PIL import Image
import numpy as np
#displaying the image
img = Image.open("test.png")
img.show()
#converting the image into an array
imgArray = np.array(img)
imgArrayShape = imgArray.shape
#inverting the image
fullArray = np.full(imgArrayShape, [255, 255, 255, 0])
invertedImageArray = abs(fullArray - imgArray)
invertedImageArray = invertedImageArray.astype(np.uint8)
#displaying the inverted image
invertedImage = Image.fromarray(invertedImageArray)
invertedImage.show()
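Note that the [255, 255, 255, 0] row assumes a 4-channel RGBA image; for a 3-channel RGB file the broadcast will fail with a shape error, so a guard on the channel count may be worth adding (a sketch):
if imgArray.shape[-1] == 4:
    # RGBA: invert the colour channels, leave alpha untouched
    fullArray = np.full(imgArrayShape, [255, 255, 255, 0])
else:
    # RGB: invert every channel
    fullArray = np.full(imgArrayShape, 255)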
I'm trying to invert the pixels of an RGB image. That is, simply subtracting the intensity value of each channel (red, green, blue) of each pixel from 255.
I have the following so far:
from PIL import Image
im = Image.open('xyz.png')
rgb_im = im.convert('RGB')
width, height = im.size
output_im = Image.new('RGB', (width,height))
for w in range(width):
    for h in range(height):
        r,g,b = rgb_im.getpixel((w,h))
        output_r = 255 - r
        output_g = 255 - g
        output_b = 255 - b
        output_im[w,h] = (output_r, output_g, output_b)
When I run the above script, I get the following error:
Traceback (most recent call last):
  File "image_inverse.py", line 31, in <module>
    output_im[w,h] = (output_r, output_g, output_b)
  File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 528, in __getattr__
    raise AttributeError(name)
AttributeError: __setitem__
How can I solve this issue?
Thanks.
I guess you can use a vectorized operation if you first convert the image to a numpy array:
from PIL import Image
import numpy as np
im = Image.open('xyz.png')
arr = 255 - np.array(im.convert('RGB'))  # vectorized inversion, keeps uint8
im = Image.fromarray(arr)
You can use img.putpixel to assign the (r, g, b, a) values at each pixel:
from PIL import Image
im = Image.open('xyz.png')
rgb_im = im.convert('RGB')
width, height = im.size
output_im = Image.new('RGB', (width,height))
for w in range(width):
    for h in range(height):
        r,g,b = rgb_im.getpixel((w,h))
        output_r = 255 - r
        output_g = 255 - g
        output_b = 255 - b
        alpha = 1  # the alpha value is ignored for 'RGB' mode images
        output_im.putpixel((w, h), (output_r, output_g, output_b, alpha))
Convert the image to a numpy array, and you can perform the operation on all of the 2-dimensional channel arrays in one line:
from PIL import Image
import numpy as np
image = Image.open('my_image.png')
# Convert Image to numpy array
image_array = np.array(image)
print(image_array.shape)
# Prints something like: (1024, 1024, 4)
# So we have 4 two-dimensional arrays: R, G, B, and the alpha channel
# Do `255 - x` for every element in the first 3 two-dimensional arrays: R, G, B
# Keep the 4th array (alpha channel) untouched
image_array[:, :, :3] = 255 - image_array[:, :, :3]
# Convert numpy array back to Image
inverted_image = Image.fromarray(image_array)
inverted_image.save('inverted.png')