I'm trying to write a steganography application using the LSB method, and so far it works well enough for a few image formats.
However, it doesn't work for GIF images: I've noticed that the saved GIF has a few different pixel values (usually ±1), and since the LSB method relies on changing the least significant bit, even a few changed values throw the decoding algorithm off.
I have tried using both imageio and PIL.Image, and it's the same problem in both cases.
So basically my question is: why do the pixel values change when saved, and is it even possible to use LSB for encoding and decoding a GIF?
Thanks for your help.
GIF is lossless, so it should not change the pixels. I recently wrote a little application using the LSB method with the GIF format; here are a few things you should check:
Make sure the encoding is right: try replacing pixel (0, 0), then verify the value actually changed; if it did, check the decoding instead (see the sketch after this list).
Make sure the GIF colour values are full 8-bit, i.e. 0-255.
You will run into this later, but you should also carry over the original metadata and frame delay time when reassembling the frames.
Those are the main issues. Other than that, as I said earlier, GIF is a lossless compression just like PNG, so it should not change the pixels; the problem is either in your encoding/decoding or in the type of the RGB colour.
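As a quick sanity check for the first point above, here is a minimal sketch (assuming Pillow and a hypothetical single-frame file named frame.gif) that flips one LSB, saves, reloads, and compares:

from PIL import Image

# Flip the least significant bit of the blue channel at pixel (0, 0),
# save as GIF, reload, and see whether the flipped value survived.
img = Image.open("frame.gif").convert("RGB")
pix = img.load()
r, g, b = pix[0, 0]
pix[0, 0] = (r, g, b ^ 1)

img.save("check.gif")  # Pillow re-quantises RGB data to a palette here

reloaded = Image.open("check.gif").convert("RGB")
print(pix[0, 0], reloaded.load()[0, 0])
# If the two tuples differ, the save step (not your decoder) is
# altering the pixel values.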
I want to compare two images (.png format) pixel by pixel using Selenium in Python, or alternatively using the Pillow library.
I have a base image, and I get the compare image by taking a screenshot of the webpage. I want to compare those two images and assert that they are equal. How can I do it?
Below is what I have tried:
def assert_images_are_equal(base_image, compare_image):
    # Byte-for-byte comparison of the two files on disk
    with open(base_image, 'rb') as f1, open(compare_image, 'rb') as f2:
        base_image_contents = f1.read()
        compare_image_contents = f2.read()
        assert base_image_contents == compare_image_contents
But this doesn't always work. I want to compare pixel by pixel. Could someone help me with this using the Pillow library, or any other library apart from PIL? Thanks.
It is rather difficult to say whether two images are the same or similar, because it depends on your definitions of "same" and "similar".
You can make a solid red image, save it as a PNG and then save the exact same image again and it could be different because the PNG format contains a timestamp in the image header that may have ticked over to the next second in between saves.
You can make a solid red PNG file that is 8-bits deep, and another that is 16-bits deep and you cannot see the difference but the data will be grossly different.
You can make a TIF file in Motorola byte order and the same file in Intel byte order. Visually, and in calculations, they will be indistinguishable, but the files will be grossly different.
You can make a GIF file that is red and it will look no different from a PNG file but the files will differ.
You can make a palette image and a true-colour image and the pixels will be grossly different but they will look identical.
You could make a simple black image with a white rectangle in the middle and write it using one JPEG library and it will come out different from the same image written with a different JPEG library, or even a different release version of the same library.
There are many more cases...
On a more helpful note, you may want to look at perceptual hashing, which tells you whether images look pretty similar. One library that does perceptual hashing is ImageMagick, which has Python bindings.
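For the pixel-by-pixel comparison the asker wanted, here is a sketch using Pillow's ImageChops module; converting both images to RGB first sidesteps the palette/true-colour and bit-depth pitfalls listed above:

from PIL import Image, ImageChops

def assert_images_are_equal(base_image, compare_image):
    # Compare decoded pixels, not raw file bytes, so headers,
    # timestamps, and byte order no longer matter.
    img1 = Image.open(base_image).convert("RGB")
    img2 = Image.open(compare_image).convert("RGB")
    assert img1.size == img2.size, "images differ in size"
    # difference() yields per-pixel absolute differences; getbbox()
    # returns None only when every difference is zero.
    diff = ImageChops.difference(img1, img2)
    assert diff.getbbox() is None, "images differ in pixel content"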
I am a bit confused about when an image is gamma encoded/decoded and when I need to apply a gamma exponent myself.
Given an image 'boat.jpg' whose colour representation is labelled 'sRGB', my assumption is that the pixel values were encoded in the file by raising the arrays to ^(1/2.2) during the save process.
When I import the image into numpy using scikit-image or opencv, I end up with a 3-dim array of uint8 values. Do these values need to be raised to ^2.2 in order to generate a histogram of the values, or does the imread function map the image into linear space in the array?
from skimage import data,io
boat = io.imread('boat.jpg')
If you got your image anywhere on the internet, it has gamma 2.2, unless the image has a colour profile embedded, in which case you get the gamma from that profile.
imread() reads the pixel values as-is; no conversion is applied.
There's no point converting an image to gamma 1.0 for any kind of processing, unless you specifically know that you have to. Basically, nobody does that.
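If you do fall into that "you specifically know you have to" case, here is a minimal sketch of the decode, assuming the simple 2.2 approximation rather than the exact piecewise sRGB curve:

import numpy as np
from skimage import io

boat = io.imread('boat.jpg')      # uint8 values, still gamma-encoded
linear = (boat / 255.0) ** 2.2    # approximate decode to linear light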
As you probably know, skimage uses a handful of different plugins when reading in images. The values you get should not have to be adjusted; that happens under the hood. I would also recommend against the JPEG file format, because you lose data to the compression.
OpenCV (as of v4) usually does the gamma conversion for you, depending on the image format. It appears to do it automatically with PNG, but it's pretty easy to test: just generate a 256x256 8-bit colour image with linear colour ramps along x and y, then check the pixel values at given image coordinates. If the sRGB mapping/unmapping is done correctly at every point, x=i should have pixel value i, and so on. If you imwrite to PNG in OpenCV, it will convert to sRGB, tag that in the image format, and GIMP or whatever will happily decode it back to linear.
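A sketch of that ramp test, assuming OpenCV and NumPy:

import cv2
import numpy as np

# Build a 256x256 8-bit image with a linear ramp along x (B and G)
# and along y (R), write it to PNG, read it back, and compare.
x = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
img = cv2.merge([x, x, x.T])
cv2.imwrite('ramp.png', img)

back = cv2.imread('ramp.png', cv2.IMREAD_COLOR)
print(np.array_equal(img, back))  # True means no conversion was applied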
Most image files are stored as sRGB, and most image-manipulation APIs tend to handle it correctly, since, well, if they didn't, they'd be wrong most of the time. In the odd instance where you read an sRGB file as linear or vice versa, it will make a significant difference, especially if you're doing any kind of image processing. Mixing up sRGB and linear causes very noticeable problems, and you will absolutely see it if it gets messed up; fortunately, the software world usually handles it automagically at the file read/write stage, so casual app developers don't usually have to worry about it.
I am currently working on a steganography project, and I am a beginner.
I developed the following code in Python to complement the last bit of every pixel and save the result as a new image, say Output.jpg.
Everything in the code works fine until I save the image using the img.save() function: when I reopen the saved image, the pixels remain unchanged.
I am aiming to use Java for this project.
from PIL import Image

img = Image.open(r"P:\Input.jpg")
img = img.convert("RGB")
width, height = img.size
pix = img.load()
for i in range(width):
    for j in range(height):
        r, g, b = pix[i, j]
        bin_b = bin(b)              # bin() returns a string, e.g. '0b1010'
        bin_list = list(bin_b)
        if bin_list[-1] == '0':     # compare against the character '0',
            bin_list[-1] = '1'      # not the integer 0, or this branch
        else:                       # never fires
            bin_list[-1] = '0'
        b = int("".join(bin_list), 2)
        pix[i, j] = (r, g, b)
img.save(r"P:\Sampleout.jpg")
I need the modified pixel at position (i, j) to be either oldpixel+1 or oldpixel-1, not the same oldpixel.
One possible problem is that you're using the JPEG format. This image format is, by default, compressed using a lossy compression scheme that is allowed to slightly alter pixel values to get a smaller footprint for the image. There is no guarantee that the lowest bit (or any bit, actually) will be the same after saving to JPEG and reloading.
If you want to experiment with this kind of processing, you should use lossless formats like BMP, TGA or PNG instead. JPEG also has a lossless mode, but it's not used by default by most software.
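For illustration, here is the same bit-flipping loop saved as PNG instead, a sketch reusing the question's paths, with only the output extension changed:

from PIL import Image

img = Image.open(r"P:\Input.jpg").convert("RGB")
pix = img.load()
width, height = img.size
for i in range(width):
    for j in range(height):
        r, g, b = pix[i, j]
        pix[i, j] = (r, g, b ^ 1)  # XOR flips the LSB of blue

img.save(r"P:\Sampleout.png")      # PNG is lossless

check = Image.open(r"P:\Sampleout.png").convert("RGB")
print(check.load()[0, 0] == pix[0, 0])  # True: the flipped bit survived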
So, I have a PNG image file like the following example, and I need it converted into PGM format.
I'm using Ubuntu and Python, so any terminal or Python tool would suit just fine. And there sure are plenty of ways to do this: the ImageMagick convert command, the pngtopam package, the Python PIL library, etc.
But the point is, the quality of the image is essential in my case, and all of those failed to keep it, always ending up with the antialiased edges destroyed.
No need to mention this is totally not what I want to see. And the interesting thing is that when I tried converting the same image into PGM manually using GIMP, it turned out quite well, looking exactly the way I'd like it to, i.e. the same as the PNG one.
So that means it is possible to get a PGM image of fine quality after all, and now I'd really appreciate it if someone could tell me how to do that using terminal/Python tools. I guess there should be some ImageMagick option that does the trick; it's just that I'm not aware of any.
You lost the antialiasing, which is conveyed via the alpha channel. To preserve it, use:
convert in.png -flatten out.pgm
Without -flatten, convert simply deletes the alpha channel; with -flatten it composites the input image against the background color, which is white by default.
Here are the results, magnified 10x so you can see what's going on:
Not flattened:
Flattened:
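Since the asker also wanted a Python option, here is a rough Pillow equivalent of convert in.png -flatten out.pgm (a sketch, assuming the input has an alpha channel):

from PIL import Image

# Composite the RGBA input over a white background, then convert to
# greyscale; Pillow writes "L"-mode images with a .pgm extension as PGM.
img = Image.open("in.png").convert("RGBA")
white = Image.new("RGBA", img.size, (255, 255, 255, 255))
flattened = Image.alpha_composite(white, img)
flattened.convert("L").save("out.pgm")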
I've got a pretty strange issue. I have several TIF images of astronomical objects, and I'm trying to use OpenCV's Python bindings to process them. Upon reading an image file, it appears that segments of the image are swapped or rotated. I've stripped it down to the bare minimum, and it still reproduces:
import cv2

img = cv2.imread('image.tif', 0)         # 0 = load as greyscale
cv2.imwrite('image_unaltered.tif', img)  # write straight back out
I've uploaded some samples to imgur, to show the effect. The images aren't super clear, that's the nature of preprocessed astronomical images, but you can see it:
First set:
http://imgur.com/vXzRQvS
http://imgur.com/wig99KR
Second set:
http://imgur.com/pf7tnPz
http://imgur.com/xGn9C77
The same rotated/swapped images appear if I use cv2.imshow(...) as well, so I believe something happens when I read the file. Furthermore, it persists if I save as JPG too. Opening the original in Photoshop shows the correct image. I'm using OpenCV 2.4.10 on Linux Mint 17.1. If it matters, the original TIFs were created with FITS Liberator on Windows.
Any idea what's happening here?