I'm trying to make an image compressor in my Django project. It went well for JPEG, but I've got a lot of problems with PNG. For compression I'm using PIL and cv2, but I can't get better than 16% compression for big PNG files (>1 MB). I've tried combining both libraries, and it's still not enough. Here is the simple code from my view:
(the code above this point handles JPEG compression and is omitted here)
elif picture.mode == 'RGBA':
    if photo.image.size < 1000000:
        # Small PNG (< 1 MB): quantise to an adaptive 256-colour palette with Pillow
        colorsloss = picture.convert(mode="P", palette=Image.ADAPTIVE)
        colorsloss.save('media/new/' + name, "PNG", quality=75, optimize=True, bits=8)
    else:
        # Big PNG: re-encode with OpenCV at maximum zlib compression,
        # then re-save the result with Pillow
        originalImage = cv.imread(str('/home/andrey/sjimalka' + photo.image.url))
        cv.imwrite('media/new/' + name, originalImage, [cv.IMWRITE_PNG_COMPRESSION, 9])
        cvcompressed = Image.open('media/new/' + name)
        cvcompressed = cvcompressed.convert(mode="RGB")  # convert() returns a new image
        cvcompressed.save('media/new/' + name, "PNG", quality=75, optimize=True)
So here I've got two big problems:
1) If I've got a small image (< 1 MB), I use P mode in Pillow. It works great, but if I compress an image with a gradient, I can see distortions in the places where the gradient is.
I get good compression (something like 85%), but no idea yet how to fix the distortions.
2) I can't get good compression of big PNG files. The best I've managed so far is 16%, with really good quality, but that's still not enough. Maybe I'm doing something wrong, or I should use some other library or technique to do better. I want to get at least 50% compression on big PNG files.
I already tried to use pngquant, but its docs weren't too clear to me, and I couldn't find good code examples.
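In case it helps to show what I mean by trying pngquant: the closest I got was just shelling out to the binary, roughly like this (the quality range and input path are placeholders, and pngquant has to be installed and on PATH):

import subprocess

# Call the pngquant binary directly; it writes a palette-quantised PNG.
# The '--quality=65-80' range and the paths are placeholders.
subprocess.run([
    'pngquant',
    '--quality=65-80',               # allow lossy palette reduction within this window
    '--force',                       # overwrite the output file if it already exists
    '--output', 'media/new/' + name,
    'media/' + name,
], check=True)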
PNG is lossless. You cannot choose to discard information when writing in order to make files smaller like you can with JPEG.
If you go for a palettised version, you only need one byte per pixel instead of three, but then you only get 256 colours and gradients will look rubbish.
Also, the quality setting does not mean the same thing as for JPEG; it is more like the --fast or --best parameter to gzip, trading encoding time against file size rather than visual quality.
One thing you can do, if you have large areas of transparency as you do here, is set every pixel that is 100% transparent to black. Those areas will then compress much better. See example here.
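A minimal sketch of that idea with Pillow and NumPy (the file names are placeholders): zero the RGB channels wherever the alpha channel is 0, then let Pillow write the PNG at maximum zlib effort.

import numpy as np
from PIL import Image

# Blacken fully transparent pixels so the PNG filters and zlib can
# compress those regions much better. File names are placeholders.
img = Image.open('big.png').convert('RGBA')
arr = np.array(img)
arr[arr[..., 3] == 0, :3] = 0          # set RGB to 0 wherever alpha == 0
Image.fromarray(arr).save('big_smaller.png', optimize=True, compress_level=9)

Note that compress_level (0 to 9) is Pillow's PNG counterpart of that gzip-style setting: it trades encoding time for file size and never affects image quality.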
I want to compare two images (.png format) pixel by pixel using Selenium in Python, or alternatively using the Pillow library.
I have a base image, and I get the comparison image by taking a screenshot of the webpage. I want to compare those two images and assert that they are equal. How can I do it?
Below is what I have tried:
def assert_images_are_equal(base_image, compare_image):
    with open(base_image, 'rb') as f1, open(compare_image, 'rb') as f2:
        base_image_contents = f1.read()
        compare_image_contents = f2.read()
    assert base_image_contents == compare_image_contents
But this doesn't always work. I want to compare pixel by pixel. Could someone help me do this with the Pillow library, or any other library apart from PIL? Thanks.
It is rather difficult to say whether 2 images are the same or similar, because it depends on your definitions of "same" and "similar".
You can make a solid red image, save it as a PNG and then save the exact same image again and it could be different because the PNG format contains a timestamp in the image header that may have ticked over to the next second in between saves.
You can make a solid red PNG file that is 8-bits deep, and another that is 16-bits deep and you cannot see the difference but the data will be grossly different.
You can make a TIF file in Motorola byte order and the same file in Intel byte order. Visually, and in calculations, they will be indistinguishable, but the files will be grossly different.
You can make a GIF file that is red and it will look no different from a PNG file but the files will differ.
You can make a palette image and a true-colour image and the pixels will be grossly different but they will look identical.
You could make a simple black image with a white rectangle in the middle and write it using one JPEG library and it will come out different from the same image written with a different JPEG library, or even a different release version of the same library.
There are many more cases...
On a more helpful note, you may want to look at perceptual hashing, which tells you whether images look pretty similar. One library that does perceptual hashing is ImageMagick, and it has Python bindings here and here.
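If you really do want a strict pixel-by-pixel check rather than a perceptual one, here is a minimal Pillow-only sketch; converting both images to a common mode first sidesteps the palette and bit-depth differences described above.

from PIL import Image, ImageChops

def assert_images_are_equal(base_image, compare_image):
    # Compare decoded pixels, not raw file bytes
    im1 = Image.open(base_image).convert('RGB')
    im2 = Image.open(compare_image).convert('RGB')
    assert im1.size == im2.size, "images differ in size"
    # getbbox() returns None when the difference image is all black,
    # i.e. every pixel matches exactly
    assert ImageChops.difference(im1, im2).getbbox() is None, "images differ in pixels"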
I'm trying to write a steganography application using the LSB method, and so far it works well enough for a few image formats.
However, it doesn't work for GIF images: I have noticed that the saved GIF has a few different pixel values (usually +-1), and since the LSB method relies on changing the least significant bit, even a few changed values throw the decoding algorithm off.
I have tried using both imageio and PIL.Image, and it's the same problem in both cases.
So basically my question is: why do the pixel values change when saved, and is it even possible to use LSB for encoding and decoding a GIF?
Thanks for your help.
GIF is lossless, so it should not change the pixels. I recently did a little application using the LSB method with the GIF format; here are a few things you should check:
make sure the encoding is right: try replacing pixel (0, 0), then verify whether its value actually changed; if it did, check the decoding instead (see the round-trip sketch below)
make sure the GIF colour values go up to 255 (i.e. a full 8-bit palette)
you will run into this later, but you should keep the original metadata and frame delay time when reassembling the frames
These are the main issues. Other than that, as I said earlier, GIF is a lossless compression just like PNG and should not change the pixels, so the problem is either in your encoding/decoding or in the type of RGB colour you are using.
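A small round-trip sketch for that first check, assuming a palette-mode GIF ('cover.gif' and 'stego.gif' are placeholder names). If it prints False, the save path itself is altering pixel values before your decoder ever sees them.

from PIL import Image

# Flip the least significant bit of the palette index at (0, 0),
# save, reload, and check whether the change survived the round trip.
img = Image.open('cover.gif')
px = img.load()
original = px[0, 0]
px[0, 0] = original ^ 1                # toggle the LSB
img.save('stego.gif')

reloaded = Image.open('stego.gif').load()[0, 0]
print('LSB survived save/load:', reloaded == (original ^ 1))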
So, I have a PNG image file like the following example, and I need it to be converted into PGM format.
I'm using Ubuntu and Python, so any terminal or Python tools would suit just fine, and there are certainly plenty of ways to do this: the ImageMagick convert command, the pngtopam package, the Python PIL library, etc.
But the point is, the quality of the image is essential in my case, and all of those failed to keep it, always ending up with:
No need to mention this is totally not what I want to see. And the interesting thing is that when I tried to convert the same image into PGM manually using GIMP, it turned out quite well, looking exactly the way I'd like it to, i.e. the same as the PNG one.
So that means it is possible to get a PGM image in fine quality after all, and now I'd really appreciate it if someone could tell me how to do that using terminal/Python tools. I guess there should be some ImageMagick option that does the trick; it's just that I'm not aware of any.
You lost the antialiasing, which is conveyed via the alpha channel. To preserve it, use:
convert in.png -flatten out.pgm
Without -flatten, convert simply deletes the alpha channel; with -flatten it composites the input image against the background color, which is white by default.
Here are the results, magnified 10x so you can see what's going on:
Not flattened:
Flattened:
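If you would rather stay in Python, here is a rough Pillow equivalent of the -flatten step (file names are placeholders): composite the image over a white background to keep the antialiasing, then convert to greyscale and save as PGM.

from PIL import Image

# Composite over white (what -flatten does by default), then write a PGM.
img = Image.open('in.png').convert('RGBA')
background = Image.new('RGBA', img.size, (255, 255, 255, 255))
flattened = Image.alpha_composite(background, img).convert('L')
flattened.save('out.pgm')              # Pillow picks the PGM writer from the .pgm extension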
I've got a pretty strange issue. I have several tif images of astronomical objects. I'm trying to use opencv's python bindings to process them. Upon reading the image file, it appears that segments of the images are swapped or rotated. I've stripped it down to the bare minimum, and it still reproduces:
img = cv2.imread('image.tif', 0)
cv2.imwrite('image_unaltered.tif', img)
I've uploaded some samples to imgur to show the effect. The images aren't super clear (that's the nature of preprocessed astronomical images), but you can see it:
First set:
http://imgur.com/vXzRQvS
http://imgur.com/wig99KR
Second set:
http://imgur.com/pf7tnPz
http://imgur.com/xGn9C77
The same rotated/swapped images appear if I use cv2.imshow(...) as well, so I believe it's something that happens when I read the file. Furthermore, it persists if I save as JPEG as well. Opening the original in Photoshop shows the correct image. I'm using OpenCV 2.4.10 on Linux Mint 17.1. If it matters, the original TIFFs were created with FITS Liberator on Windows.
Any idea what's happening here?
I am trying to convert a color image to pure black and white. I looked around for some code to do this and settled on:
im = Image.open("mat.jpg")
gray = im.convert('L')
bw = gray.point(lambda x: 0 if x<128 else 255, '1')
bw.save("result_bw.jpg")
However, the result still has grays!
So, I tried to do it myself:
floskel = Image.open("result_bw.jpg")
flopix = floskel.load()

for i in range(0, floskel.size[0]):
    for j in range(0, floskel.size[1]):
        print flopix[i, j]
        if flopix[i, j] > 100:
            flopix[i, j] = 255
        else:
            flopix[i, j] = 0
But, STILL, there are grays in the image.
Am I doing something wrong?
As sebdelsol mentioned, it's much better to use im.convert('1') directly on the colour source image. The standard PIL "dither" is Floyd-Steinberg error diffusion, which is generally pretty good (depending on the image), but there are a variety of other options, e.g. random dither and ordered dither, although you'd have to code them yourself, so they'd be quite a bit slower.
The conversion you use in the code in the OP is just simple thresholding, which generally loses a lot of detail, although it's easy to write. I guess in this case you were just trying to confirm your theory about grey pixels being present in the final image, but as sebdelsol said, it just looks like there are grey pixels because of the "noise": regions containing a lot of black and white pixels mixed together, which you should be able to verify if you zoom into the image.
FWIW, if you do want to do your own pixel-by-pixel processing of whole images it's more efficient to get a list of pixels using im.getdata() and put them back into an image with im.putdata(), rather than doing that flopix[i,j] stuff. Of course, if you don't need to know coordinates, algorithms that use im.point() are usually pretty quick.
Finally, JPEG isn't really suitable for B&W images; it was designed for images with (mostly) continuous tone. Try saving as PNG instead: the resulting files will probably be a lot smaller than the equivalent JPEGs. It's possible to reduce JPEG file size by saving with low quality settings, but the results generally don't look very good.
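A short sketch of those suggestions, i.e. a dithered convert('1') saved losslessly as PNG, plus a simple threshold done with getdata()/putdata() instead of indexing pixels one at a time (file names are placeholders):

from PIL import Image

im = Image.open("mat.jpg")

# Floyd-Steinberg dithered 1-bit version, saved losslessly as PNG
im.convert('1').save("result_dithered.png")

# Plain 128 threshold using getdata()/putdata() rather than flopix[i, j]
gray = im.convert('L')
gray.putdata([0 if p < 128 else 255 for p in gray.getdata()])
gray.save("result_threshold.png")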
You'd be better off using convert to produce a mode '1' image. It would be faster and better, since it uses dithering by default.
bw = im.convert('1')
The greys you see probably appear in the parts of the image with noise near the 128 level, which produces high-frequency black-and-white patterns that look grey.