Reading tiffs in opencv swaps top and bottom third of image - python

I've got a pretty strange issue. I have several tif images of astronomical objects. I'm trying to use opencv's python bindings to process them. Upon reading the image file, it appears that segments of the images are swapped or rotated. I've stripped it down to the bare minimum, and it still reproduces:
import cv2

# Read as grayscale and write straight back out with no processing in between
img = cv2.imread('image.tif', 0)
cv2.imwrite('image_unaltered.tif', img)
I've uploaded some samples to imgur to show the effect. The images aren't super clear, which is the nature of preprocessed astronomical images, but you can see it:
First set:
http://imgur.com/vXzRQvS
http://imgur.com/wig99KR
Second set:
http://imgur.com/pf7tnPz
http://imgur.com/xGn9C77
The same swapped/rotated segments appear if I use cv2.imshow(...) as well, so I believe it happens when I read the file. It also persists if I save as JPEG. Opening the original in Photoshop shows the correct image. I'm using OpenCV 2.4.10 on Linux Mint 17.1. If it matters, the original TIFFs were created with FITS Liberator on Windows.
Any idea what's happening here?
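For reference, one way to rule the file itself out (just a sketch, assuming Pillow is installed and 'image.tif' is one of the affected files) is to load the same TIFF with a different library and write a copy back out:

import numpy as np
from PIL import Image

# Load the same TIFF with Pillow instead of OpenCV and write a copy.
# If this copy looks correct while the cv2.imread/cv2.imwrite copy is
# scrambled, the problem is in OpenCV's TIFF decoding rather than the file.
img = Image.open('image.tif')
img.save('image_via_pillow.tif')

# Inspect the raw pixel data as well, e.g. shape and bit depth
arr = np.array(img)
print(arr.shape, arr.dtype)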

Related

Combining two images horizontally in python using OpenCV

I have four images, each slices of a larger image. If I string them together horizontally, then I get the larger image. To complete this task, I'm using python 2.7 and the OpenCV library, specifically the hconcat() function. Here is the code:
with open("tempfds.jpg", 'ab+') as f:
f.write(cv2.hconcat(cv2.hconcat(cv2.imread("491411.jpg"),cv2.imread("491412.jpg")),cv2.hconcat(cv2.imread("491413.jpg"),cv2.imread("491414.jpg"))))
When I run it, everything works fine. But when I try to open the image itself, I get an error: Error interpreting JPEG image file (Not a JPEG file: starts with 0x86 0x7e). All the images I'm using are jpg's, so I don't understand why this error is occurring. Any insight is appreciated.
If you want to write a JPEG, you need:
cv2.imwrite('lovely.jpg', image)
where image is all your images concatenated together.
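A minimal sketch of that fix, assuming the four tile files exist and all share the same height and type:

import cv2

# Read the four tiles; hconcat needs them to have the same height and type
tiles = [cv2.imread(name) for name in
         ("491411.jpg", "491412.jpg", "491413.jpg", "491414.jpg")]

# Concatenate horizontally and let imwrite encode a real JPEG,
# instead of dumping raw array bytes into a file named .jpg
combined = cv2.hconcat(tiles)
cv2.imwrite("tempfds.jpg", combined)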

How to save float64 image data without loss of information, while being able to visualize it

I'm trying to implement frequency spectrum image watermarking, using the Fast Fourier Transform and everything is working perfectly, except the fact that I can't save the resulting watermarked image without running into trouble. Since the application is meant to be user-friendly, no hacks regarding photo viewing are acceptable. The user has to be able to download the resulting file and view it as is, without any further modifications.
The code basically goes like this:
import cv2
import imageio
image = cv2.imread(imagePath, cv2.IMREAD_UNCHANGED)
watermark = cv2.imread(wmPath, cv2.IMREAD_UNCHANGED)
# result is float64, due to the transition from the frequency spectrum to spatial
result = embed_watermark(image, watermark)
imageio.imwrite('result.tiff', result)
result2 = imageio.imread('result.tiff') # Still float64, thanks to tiff format
detection = detect_watermark(result, image)
So the code works as intended and prevents data loss, thanks to the TIFF container, which allows floating-point pixel values. However, the saved file ('result.tiff') can't be opened with any photo viewer included in MS Windows. If I use any of the other usual containers (jpeg, png, bmp), I am able to visualise the resulting image, but I end up losing the watermark information.
I've tried the solutions discussed here and here and read the documentation, but I can't seem to wrap my head around this. I have also tried opening the image in FIJI (following Ander Biguri's suggestion), but I get this error: "ImageJ can only open 8 and 16 bit/channel images (64)".
How could I save the file without losing either the watermark data or the ability to open the resulting file for visualisation?
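One common workaround (not from this question, just a sketch that assumes the float64 result spans a finite range) is to keep the lossless TIFF for detection and write a separate 8-bit preview purely for viewing:

import numpy as np
import imageio

# Keep the lossless float64 TIFF for later watermark detection
imageio.imwrite('result.tiff', result)

# Write a separate 8-bit PNG purely for visualisation; this copy loses the
# watermark precision, but standard photo viewers can open it
lo, hi = result.min(), result.max()
preview = ((result - lo) / (hi - lo) * 255).astype(np.uint8)
imageio.imwrite('result_preview.png', preview)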

How to compare two image files pixel by pixel in python using selenium?

I want to compare two images (.png format) pixel by pixel using Selenium in Python, or alternatively with the Pillow library.
I have a base image, and I get the comparison image by taking a screenshot of the webpage. I want to compare those two images and assert that they are equal. How can I do that?
Below is what I have tried:
def assert_images_are_equal(base_image, compare_image):
    with open(base_image, 'rb') as f1, open(compare_image, 'rb') as f2:
        base_image_contents = f1.read()
        compare_image_contents = f2.read()
        assert base_image_contents == compare_image_contents
But this doesn't always work. I want to compare pixel by pixel. Could someone help me do this with the Pillow library, or any other library apart from PIL? Thanks.
It is rather difficult to say whether 2 images are the same or similar, because it depends on your definitions of "same" and "similar".
You can make a solid red image, save it as a PNG and then save the exact same image again and it could be different because the PNG format contains a timestamp in the image header that may have ticked over to the next second in between saves.
You can make a solid red PNG file that is 8-bits deep, and another that is 16-bits deep and you cannot see the difference but the data will be grossly different.
You can make a TIF file in Motorola byte order and the same file in Intel byte order. Visually, and in calculations, they will be indistinguishable, but the files will be grossly different.
You can make a GIF file that is red and it will look no different from a PNG file but the files will differ.
You can make a palette image and a true-colour image and the pixels will be grossly different but they will look identical.
You could make a simple black image with a white rectangle in the middle and write it using one JPEG library and it will come out different from the same image written with a different JPEG library, or even a different release version of the same library.
There are many more cases...
On a more helpful note, you may want to look at Perceptual Hashing, which tells you whether two images look pretty similar. One library that does Perceptual Hashing is ImageMagick, and it has Python bindings here and here.
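If a strict pixel-by-pixel check is really what is needed, a minimal sketch with Pillow (decoding both files first, so container differences like the ones above don't matter) could look like this:

from PIL import Image, ImageChops

def assert_images_are_equal(base_image, compare_image):
    # Decode both files and normalise the mode so palette/bit-depth
    # differences in the containers don't cause false mismatches
    img1 = Image.open(base_image).convert('RGB')
    img2 = Image.open(compare_image).convert('RGB')

    assert img1.size == img2.size, "Images have different dimensions"

    # ImageChops.difference is black everywhere iff every pixel matches;
    # getbbox() returns None when the difference image is entirely black
    diff = ImageChops.difference(img1, img2)
    assert diff.getbbox() is None, "Images differ pixel by pixel"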

Color in image gets dull after saving it in OpenCV

I am using the OpenCV module to read and write an image. Here is the code; the first image below is the one I am reading, and the second image is the result after saving it to disk with cv2.imwrite().
import cv2
img = cv2.imread('originalImage.jpg')
cv2.imwrite('test.jpg',img)
The colors are noticeably duller in the second image. Is there any workaround to this problem, or am I missing some setting or parameter?
I have done a bit of research on the point Mark raised about the ICC profile, and I have figured out a way to handle this with Python's PIL module. Here is the code that worked for me. I have also learned to use the PNG format rather than JPEG to do a lossless conversion.
from PIL import Image

img = Image.open('originalImage.jpg')
# Save while re-attaching the ICC profile carried by the original file
img.save('test.jpg', icc_profile=img.info.get('icc_profile'))
I hope this will help others as well.
The difference is that the initial image has an attached ICC profile whereas the second one does not.
I compared the two files by running the ImageMagick utility called identify on each of them like this:
identify -verbose first.jpg > 1.txt
identify -verbose second.jpg > 2.txt
Then I ran the brilliant opendiff tool (which is part of macOS) like this:
opendiff [12].txt
You can extract the ICC profile from the first image also with ImageMagick like this:
convert first.jpg profile.icc
Your first input image has an ICC profile embedded in its metadata. This is an optional attribute, and many devices don't embed one in the first place. The ICC profile essentially describes a colour correction whose coefficients are calculated for each individual device during calibration.
Modern web browsers and image-viewing utilities take this ICC profile into account before rendering the image on screen, which is why the two images look different.
Unfortunately, OpenCV doesn't read the ICC profile from the image metadata, so it performs no such colour correction.
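One way around that (a sketch, assuming Pillow is installed and the source JPEG actually carries a profile) is to let Pillow do the file I/O so the profile survives, while still handling the pixels as a NumPy array for any OpenCV processing:

import numpy as np
from PIL import Image

# Open with Pillow so the ICC profile in the metadata can be read
img = Image.open('originalImage.jpg')
icc = img.info.get('icc_profile')

# The pixel data as a NumPy array is what OpenCV functions operate on
# (note: Pillow gives RGB channel order, while OpenCV usually expects BGR)
pixels = np.array(img)
# ... any OpenCV/NumPy processing on `pixels` goes here ...

# Save through Pillow and re-attach the original profile
Image.fromarray(pixels).save('test.jpg', icc_profile=icc)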

Python: Import multiple images from a folder and scale/combine them into one image?

I have a script that saves between 8 and 12 images to a local folder. These images are always GIFs. I am looking for a Python script to combine all the images in that one specific folder into one image. The combined 8-12 images would have to be scaled down, but I do not want to compromise the original quality (resolution) of the images either (i.e. when zoomed in on the combined image, they would look as they did initially).
The only way I am able to do this currently is by copying each image to power point.
Is this possible with python (or any other language, but preferably python)?
As an input to the script, I would type in the path where only the images are stored (i.e. C:\Documents and Settings\user\My Documents\My Pictures\BearImages).
EDIT: I downloaded ImageMagick and have been using it with the python api and from the command line. This simple command worked great for what I wanted: montage "*.gif" -tile x4 -geometry +1+1 -background none combine.gif
If you want to be able to zoom into the images, you do not want to scale them. You'll have to rely on the image viewer to do the scaling as they're being displayed - that's what PowerPoint is doing for you now.
The input images are GIF so they all contain a palette to describe which colors are in the image. If your images don't all have identical palettes, you'll need to convert them to 24-bit color before you combine them. This means that the output can't be another GIF; good options would be PNG or JPG depending on whether you can tolerate a bit of loss in the image quality.
You can use PIL to read the images, combine them, and write the result. You'll need to create a new image that is the size of the final result, and copy each of the smaller images into different parts of it.
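A minimal sketch of that approach (assuming all the GIFs share the same size, and using the folder path from the question):

import glob, os
from PIL import Image

folder = r"C:\Documents and Settings\user\My Documents\My Pictures\BearImages"
paths = sorted(glob.glob(os.path.join(folder, "*.gif")))

# Convert to 24-bit colour so differing GIF palettes don't clash
images = [Image.open(p).convert("RGB") for p in paths]
w, h = images[0].size

# Lay the tiles out in a simple grid, four per row
cols = 4
rows = (len(images) + cols - 1) // cols
sheet = Image.new("RGB", (cols * w, rows * h))
for i, img in enumerate(images):
    sheet.paste(img, ((i % cols) * w, (i // cols) * h))

sheet.save("combined.png")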
You may want to outsource the image manipulation part to ImageMagick. It has a montage command that gets you 90% of the way there; just pass it some options and the names of the files in the directory.
Have a look at Python Imaging Library.
The handbook contains several examples of opening files, combining them, and saving the result.
The easiest thing to do is turn the images into numpy matrices, and then construct a new, much bigger numpy matrix to house all of them. Then convert the np matrix back into an image. Of course it'll be enormous, so you may want to downsample.
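A rough sketch of that NumPy route (the file names here are just placeholders, and the tiles are assumed to be equally sized):

import numpy as np
from PIL import Image

# Load four equally sized tiles as arrays; convert to RGB so palettes don't matter
arrays = [np.array(Image.open(p).convert("RGB"))
          for p in ("img1.gif", "img2.gif", "img3.gif", "img4.gif")]

# Stack two rows of two tiles each into one big matrix
top = np.hstack(arrays[:2])
bottom = np.hstack(arrays[2:])
big = np.vstack([top, bottom])

# Convert the matrix back into an image; downsample here if it is too large
Image.fromarray(big).save("combined_numpy.png")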
