I am creating a simple script to check if images are the same or different.
My code works for the jpg files but not for the png files.
For some reason, my code below thinks that the two PNGs (images omitted here) are the same.
from PIL import Image, ImageChops

img1 = Image.open('./1.png')
img2 = Image.open('./2.png')
img3 = Image.open('./A.jpg')
img4 = Image.open('./B.jpg')

diff1 = ImageChops.difference(img3, img4)
diff = ImageChops.difference(img2, img1)

print(diff.getbbox())
if diff.getbbox():
    diff.show()  # does not work for me; should show the image if they are different

print(diff1.getbbox())
if diff1.getbbox():
    diff1.show()  # this works, not sure why the PNG files do not
I am running this on Ubuntu. I am not sure what I am doing wrong. Any help would be great, thanks!
Working code after @Mark's help: https://github.com/timothy/image_diff/blob/master/test.py
Not 100% certain what's going on here, but if you take your two images and split them into their channels and lay the channels out side-by-side, with ImageMagick:
magick 1.png -separate +append 1ch.png
In the resulting strip, the Red, Green and Blue channels all contain shapes, but there is a superfluous alpha channel (the rightmost area) serving no purpose other than to confuse PIL!
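If you prefer to check the same thing in Python, a rough PIL sketch (assuming the same 1.png as above) just inspects the bands:

from PIL import Image

img1 = Image.open('1.png')
print(img1.mode, img1.getbands())  # e.g. 'RGBA' and ('R', 'G', 'B', 'A')

# look at each channel's min/max pixel values to see which bands carry real data
for name, band in zip(img1.getbands(), img1.split()):
    print(name, band.getextrema())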
If you change your code to drop the alpha channel like this, it then works:
img1 = Image.open('1.png').convert('RGB')
img2 = Image.open('2.png').convert('RGB')
diff = ImageChops.difference(img2, img1)
print(diff.getbbox())  # prints (28, 28, 156, 156)
Difference image (not shown here).
I also note that the documentation for ImageChops.difference says "one of the images must be '1' mode", and I have no idea whether that is an issue.
I have to convert some PNGs with a transparent background to simple JPEGs, with the transparent background turned to white (which I assumed would happen by default). I have tried all of the solutions I could find here, but after saving the PNG as a JPEG the image looks like this (the noisy area was transparent in the PNG and the black was a drop shadow):
Converted image
Original image
This is the code I use:
import requests
from PIL import Image

# save the PNG
response = requests.get(image_url, headers=header)
file = open("%s-%s.png" % (slug, item_id), "wb")
file.write(response.content)
file.close()

# open the PNG and save it as a JPEG
im1 = Image.open(filepath_to_png)
rgb_im = im1.convert('RGB')
rgb_im.mode  # now 'RGB'
rgb_im.save(filepath_normal)
My question is: how can I prevent the JPEG from having that corrupted background? My goal is simply to get the same image as a JPEG that I had as a PNG.
The method you are using to convert to RGB works on images that only need a straightforward conversion, like those with hard-edged transparency masks, but for images with soft-edged masking (like the transparent shadows in your image) it is not effective, because the conversion does not know how to deal with the semi-transparency.
A better approach is to create a new Image with the same dimensions, fill it with a white background, and then paste your original image onto it:
new_im = Image.new("RGB", im1.size, (255, 255, 255))
new_im.paste(im1, im1)  # im1's alpha channel is used as the paste mask
new_im.save(filepath_normal)
I have tested this approach using an image with soft-edged masking and obtained the result I expected (result image not shown here).
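One hedged caveat: if the source PNG is palette-mode ('P') with transparency, paste() may reject it as a mask, so converting to RGBA first is safer. A minimal sketch, reusing the filepath variables from the question:

from PIL import Image

im1 = Image.open(filepath_to_png).convert('RGBA')  # force RGBA so the alpha band can serve as the mask
new_im = Image.new('RGB', im1.size, (255, 255, 255))
new_im.paste(im1, mask=im1)  # blend onto white using im1's alpha channel
new_im.save(filepath_normal, 'JPEG')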
You could use the Pillow library in Python.
from PIL import Image

im = Image.open("1.png")
bg = Image.new("RGB", im.size, (255, 255, 255))
bg.paste(im, im)  # im's alpha channel acts as the mask
bg.save("2.jpg")
The result I got had the transparent background turned to white.
I'm working on a grayscale .tif file:
I convert it to BGR and try to draw some colorful stuff on it. If I save the result as .png, it's all still in shades of gray, including the drawn elements. If I save it as .jpg, the colors of the drawn elements are okay, but the rest of the image is a lot brighter than it was, which I definitely don't want.
A simplified example of what I'm trying to do:

import cv2
import numpy as np

def draw_lines(image_path):
    image = cv2.cvtColor(cv2.imdecode(np.fromfile(image_path, dtype=np.uint8), cv2.IMREAD_UNCHANGED), cv2.COLOR_GRAY2BGR)
    cv2.line(image, (0, 10), (image.shape[1], 1000), (0, 255, 0), 10)
    cv2.imwrite("result.jpg", image)
I'm not exactly sure why, but apparently the problem was caused by the cv2.IMREAD_UNCHANGED flag.
After changing that line to image = cv2.imdecode(np.fromfile(image_path, dtype=np.uint8), cv2.IMREAD_COLOR), everything works fine, no matter which extension I save to.
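For reference, a minimal sketch of the corrected function (my reading, not guaranteed: IMREAD_COLOR always produces an 8-bit, 3-channel BGR image, whereas IMREAD_UNCHANGED keeps whatever bit depth the TIFF has, e.g. 16 bits, which could explain the odd colors and brightness):

import cv2
import numpy as np

def draw_lines(image_path):
    # IMREAD_COLOR forces an 8-bit, 3-channel BGR image, so no cvtColor is needed
    # and BGR colour values like (0, 255, 0) behave as expected
    image = cv2.imdecode(np.fromfile(image_path, dtype=np.uint8), cv2.IMREAD_COLOR)
    cv2.line(image, (0, 10), (image.shape[1], 1000), (0, 255, 0), 10)
    cv2.imwrite("result.jpg", image)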
Hi guys, I'm trying to figure out what's wrong with my code.
I want to load an image that contains an alpha channel.
The description on the official website says:
cv.IMREAD_UNCHANGED: If set, return the loaded image as is (with alpha channel, otherwise it gets cropped).
Here's my try:
import cv2 as cv

img2 = cv.imread('lbj.jpg', cv.IMREAD_UNCHANGED)
img2.shape
And the result shows: (350, 590, 3)
Isn't it supposed to be (350, 590, 4)?
Thanks!
The reason there are only three channels is that the image is in jpg format, which does not have an alpha channel. If you were to load e.g. a png image which had an alpha channel, then
img2 = cv.imread('lbj.png', cv.IMREAD_UNCHANGED)
would load the image with the alpha channel included, and then
img2.shape
would show (350, 590, 4).
If you convert a jpg to png then you will still, at that point, have only three channels because the image would only have the BGR channels that were in the original jpg. However, you could at this point add an alpha channel to make it BGRA and then proceed to work with transparency options.
Adding an alpha channel is answered in python-opencv-add-alpha-channel-to-rgb-image
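As a hedged sketch of that last step (my own illustration, not taken from the linked answer), OpenCV can append an alpha channel with cvtColor and the fourth channel can then be edited directly; the output filename here is made up:

import cv2
import numpy as np

img = cv2.imread('lbj.png')                   # 3-channel BGR
bgra = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)  # 4 channels; alpha starts fully opaque (255)
print(bgra.shape)                             # e.g. (350, 590, 4)

# illustrative only: make pure-white pixels fully transparent
white = np.all(bgra[:, :, :3] == 255, axis=2)
bgra[white, 3] = 0
cv2.imwrite('lbj_with_alpha.png', bgra)       # hypothetical output name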
I've been having trouble trying to get PIL to nicely downsample images. The goal, in this case, is for my website to automagically downsample->cache the original image file whenever a different size is required, thus removing the pain of maintaining multiple versions of the same image. However, I have not had any luck. I've tried:
image.thumbnail((width, height), Image.ANTIALIAS)
image.save(newSource)
and
image.resize((width, height), Image.ANTIALIAS).save(newSource)
and
ImageOps.fit(image, (width, height), Image.ANTIALIAS, (0, 0)).save(newSource)
and all of them seem to perform a nearest-neighbour downsample rather than averaging over the pixels as they should. Hence it turns images like
http://www.techcreation.sg/media/projects//software/Java%20Games/images/Tanks3D%20Full.png
to
http://www.techcreation.sg/media/temp/0x5780b20fe2fd0ed/Tanks3D.png
which isn't very nice. Has anyone else bumped into this issue?
That image is an indexed-color (palette or P mode) image. There are a very limited number of colors to work with and there's not much chance that a pixel from the resized image will be in the palette, since it will need a lot of in-between colors. So it always uses nearest-neighbor mode when resizing; it's really the only way to keep the same palette.
This behavior is the same as in Adobe Photoshop.
You want to convert to RGB mode first and resize it, then go back to palette mode before saving, if desired. (Actually I would just save it in RGB mode, and then turn PNGCrush loose on the folder of resized images.)
This is over a year old, but in case anyone is still looking:
Here is a sample of code that will check whether an image is in palette mode and make adjustments:
import Image  # or: from PIL import Image

img = Image.open(sourceFile)
if 'P' in img.mode:  # check if the image is a palette type
    img = img.convert("RGB")  # convert it to RGB
    img = img.resize((w, h), Image.ANTIALIAS)  # resize it
    img = img.convert("P", dither=Image.NONE, palette=Image.ADAPTIVE)  # convert back to palette
else:
    img = img.resize((w, h), Image.ANTIALIAS)  # regular resize
img.save(newSourceFile)  # save the image to the new source
# img.save(newSourceFile, quality=95, dpi=(72, 72), optimize=True)  # set quality, dpi, and shrink size
By converting the paletted version to RGB, we can resize it with antialiasing. If you want to convert it back, you have to set dithering to NONE and use an ADAPTIVE palette. If these options aren't included, your result (if reconverted to palette) will be grainy. You can also use the quality option in the save function, on some image formats, to improve the quality even more.
I have been hitting my head against the wall for a while with this, so maybe someone out there can help.
I'm using PIL to open a PNG with transparent background and some random black scribbles, and trying to put it on top of another PNG (with no transparency), then save it to a third file.
It comes out all black at the end, which is irritating, because I didn't tell it to be black.
I've tested this with multiple proposed fixes from other posts. The image opens in RGBA format, and it's still messed up.
Also, this program is supposed to deal with all sorts of file formats, which is why I'm using PIL. Ironic that the first format I tried is all screwy.
Any help would be appreciated. Here's the code:
from PIL import Image
img = Image.open(basefile)
layer = Image.open(layerfile) # this file is the transparent one
print layer.mode # RGBA
img.paste(layer, (xoff, yoff)) # xoff and yoff are 0 in my tests
img.save(outfile)
I think what you want to use is the mask argument of paste.
See the docs (scroll down to paste):
from PIL import Image
img = Image.open(basefile)
layer = Image.open(layerfile) # this file is the transparent one
print layer.mode # RGBA
img.paste(layer, (xoff, yoff), mask=layer)
# the transparency layer will be used as the mask
img.save(outfile)
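If the base image also has (or is converted to) an alpha channel, Image.alpha_composite is another option; a minimal sketch, assuming both files are the same size:

from PIL import Image

img = Image.open(basefile).convert("RGBA")
layer = Image.open(layerfile).convert("RGBA")

# alpha_composite blends layer over img using layer's alpha channel;
# both images must be RGBA and the same size
combined = Image.alpha_composite(img, layer)
combined.convert("RGB").save(outfile)  # drop alpha again if the output format (e.g. JPEG) needs it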