Combining two images horizontally in Python using OpenCV

I have four images, each a slice of a larger image. If I string them together horizontally, I get the larger image. To complete this task, I'm using Python 2.7 and the OpenCV library, specifically the hconcat() function. Here is the code:
with open("tempfds.jpg", 'ab+') as f:
f.write(cv2.hconcat(cv2.hconcat(cv2.imread("491411.jpg"),cv2.imread("491412.jpg")),cv2.hconcat(cv2.imread("491413.jpg"),cv2.imread("491414.jpg"))))
When I run it, everything works fine. But when I try to open the image itself, I get an error: Error interpreting JPEG image file (Not a JPEG file: starts with 0x86 0x7e). All the images I'm using are JPEGs, so I don't understand why this error occurs. Any insight is appreciated.

Writing the array returned by hconcat() to a file with f.write() just dumps raw pixel bytes, not JPEG-encoded data, which is why viewers refuse the result. If you want to write a JPEG, you need:
cv2.imwrite('lovely.jpg', image)
where image is all your images concatenated together.
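Put together, a minimal sketch of the fix might look like this; it reuses the filenames from the question and assumes all four slices share the same height and type:

import cv2

# Filenames taken from the question; hconcat() accepts a list of images
# with matching heights and joins them left to right.
parts = [cv2.imread(name) for name in
         ("491411.jpg", "491412.jpg", "491413.jpg", "491414.jpg")]
combined = cv2.hconcat(parts)

# imwrite() encodes the array as a real JPEG, which a plain file write does not.
cv2.imwrite("tempfds.jpg", combined)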

Related

How do I work with the mask data for AI4Boundaries?

I am working with the AI4Boundaries dataset. The data in the imagery folder opens fine in Windows Photos and causes no issues in the Python code I'm reading it into, but the data in the mask folder will only open in ArcMap (as a black-blue gradient) and causes errors in my code. (Both the imagery and the masks are in TIFF format.)
Here are links to the data:
Imagery
Masks
When I simply try to open a mask in Python with this code
plt.imshow(mpimg.imread('/content/AT_2989_ortholabel_1m_512.tif'))
the error I get is
UnidentifiedImageError: cannot identify image file '/content/AT_2989_ortholabel_1m_512.tif'
Any leads as to what the issue is and how I can resolve it?
I tried converting one mask to PNG and the PNG works fine in my code, but I'm working with around 7k+ images and don't know how to bulk convert them.
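One possible bulk-conversion sketch, not from the original thread: read each mask with tifffile, which copes with GeoTIFF pixel types that PIL refuses, and re-save it as PNG. The folder names and the single-band uint8 assumption are mine.

from pathlib import Path

import numpy as np
import tifffile
from PIL import Image

src_dir = Path("masks_tif")    # hypothetical input folder
dst_dir = Path("masks_png")    # hypothetical output folder
dst_dir.mkdir(exist_ok=True)

for tif_path in src_dir.glob("*.tif"):
    mask = tifffile.imread(str(tif_path))
    # Assuming single-band integer labels; the cast keeps the class values intact.
    Image.fromarray(np.squeeze(mask).astype(np.uint8)).save(
        dst_dir / (tif_path.stem + ".png"))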

Python write_videofile results in a black screen video

Code:
clip = ImageSequenceClip(new_frames, fps=fps1)
clip.write_videofile("out.mp4", fps=fps1)
TL;DR: this code produces a black-screen video (fps1 comes from the original video I'm stitching on).
I am trying to stitch together a video using frames from many videos. I created an array containing all the images in their respective places, then went frame by frame through each video and assigned the correct frame in the array. Done that way the result was OK, but the process was slow, so I saved each frame to a file and loaded it during the stitching process. Python threw an exception that the array was too big, so I chunked the video into parts and saved each chunk. The result came out as a black screen, even though when I debugged I could display each frame of the ImageSequenceClip correctly. I tried reinstalling moviepy. I use Windows 10, and I converted all frames to PNG.
Well, @BajMile was indeed right in suggesting OpenCV. What took me a while to realize is that I had to use only OpenCV functions, including for the images I was opening and resizing.
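For reference, a minimal all-OpenCV sketch of that pipeline; the frame folder, codec and fps value are placeholders rather than details from the original answer:

import glob
import cv2

frame_paths = sorted(glob.glob("frames/*.png"))   # hypothetical frame folder
fps1 = 30                                         # placeholder; take this from the source video

first = cv2.imread(frame_paths[0])
height, width = first.shape[:2]

# VideoWriter expects BGR uint8 frames of exactly (width, height);
# mp4v is a widely available codec.
writer = cv2.VideoWriter("out.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"),
                         fps1, (width, height))

for path in frame_paths:
    frame = cv2.imread(path)
    frame = cv2.resize(frame, (width, height))    # resize with OpenCV too, per the answer
    writer.write(frame)

writer.release()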

How to save float64 image data without loss of information, while being able to visualize it

I'm trying to implement frequency spectrum image watermarking, using the Fast Fourier Transform and everything is working perfectly, except the fact that I can't save the resulting watermarked image without running into trouble. Since the application is meant to be user-friendly, no hacks regarding photo viewing are acceptable. The user has to be able to download the resulting file and view it as is, without any further modifications.
The code basically goes like this:
import cv2, imageio
image = cv2.imread(imagePath, cv2.IMREAD_UNCHANGED)
watermark = cv2.imread(wmPath, cv2.IMREAD_UNCHANGED)
# result is float64, due to the transition from the frequency spectrum to spatial
result = embed_watermark(image, watermark)
imageio.imwrite('result.tiff', result)
result2 = imageio.imread('result.tiff') # Still float64, thanks to tiff format
detection = detect_watermark(result, image)
So the code works as intended and prevents data loss, thanks to the TIFF container, which allows floating-point pixel values. However, the saved file ('result.tiff') can't be opened with any photo viewer included in MS Windows. If I use any of the other usual containers (JPEG, PNG, BMP), I am able to visualise the resulting image, but I end up losing the watermark information.
I've tried the solutions discussed here and here and read the documentation, but I can't seem to wrap my head around this. I have also tried opening the image in FIJI (following Ander Biguri's suggestion), but I get this error: "ImageJ can only open 8 and 16 bit/channel images (64)".
How could I possibly save the file without losing the watermark data, nor the ability to open the resulting file for visualisation?
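One possible workaround, offered purely as a sketch and not taken from the original thread (it also relaxes the single-file requirement): keep the exact float64 result in a lossless container for detection, and write a separate 8-bit preview just for viewing. embed_watermark and detect_watermark are the functions from the question.

import numpy as np
import cv2

result = embed_watermark(image, watermark)        # float64, as in the question

# Lossless storage of the exact float64 values, used for detection.
np.save("result_full.npy", result)

# Separate 8-bit preview that ordinary photo viewers can open;
# it is lossy and never used for detection.
preview = cv2.normalize(result, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("result_preview.png", preview)

detection = detect_watermark(np.load("result_full.npy"), image)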

How to compare two image files pixel by pixel in Python using Selenium?

I want to compare two images (.png format) pixel by pixel using Selenium in Python, or alternatively with the Pillow library. I have a base image, and I get the comparison image by taking a screenshot of the webpage. I want to compare those two images and assert that they are equal. How can I do it?
Below is what I have tried:
def assert_images_are_equal(base_image, compare_image):
    with open(base_image, 'rb') as f1, open(compare_image, 'rb') as f2:
        base_image_contents = f1.read()
        compare_image_contents = f2.read()
        assert base_image_contents == compare_image_contents
But this doesn't always work. I want to compare pixel by pixel. Could someone help me with this using the Pillow library or any other library apart from PIL? Thanks.
It is rather difficult to say whether 2 images are the same or similar, because it depends on your definitions of "same" and "similar".
You can make a solid red image, save it as a PNG and then save the exact same image again and it could be different because the PNG format contains a timestamp in the image header that may have ticked over to the next second in between saves.
You can make a solid red PNG file that is 8-bits deep, and another that is 16-bits deep and you cannot see the difference but the data will be grossly different.
You can make a TIF file in Motorola byte order and the same file in Intel byte order. Visually, and in calculations, they will be indistinguishable, but the files will be grossly different.
You can make a GIF file that is red and it will look no different from a PNG file but the files will differ.
You can make a palette image and a true-colour image and the pixels will be grossly different but they will look identical.
You could make a simple black image with a white rectangle in the middle and write it using one JPEG library and it will come out different from the same image written with a different JPEG library, or even a different release version of the same library.
There are many more cases...
On a more helpful note, you may want to look at Perceptual Hashing, which tells you whether two images look broadly similar. One library that does Perceptual Hashing is ImageMagick, and it has Python bindings here and here.
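If what you actually need is the decoded-pixel comparison the question asks for, which sidesteps most of the file-format differences above, a minimal Pillow sketch might look like this (an illustration, not part of the original answer):

from PIL import Image, ImageChops

def assert_images_are_equal(base_image_path, compare_image_path):
    # Decode both files and normalise the mode, so palette and true-colour
    # variants of the same picture still compare equal.
    base = Image.open(base_image_path).convert("RGB")
    compare = Image.open(compare_image_path).convert("RGB")

    assert base.size == compare.size, "Images differ in size"

    # difference() is zero everywhere iff every pixel matches;
    # getbbox() returns None when the difference image is entirely black.
    diff = ImageChops.difference(base, compare)
    assert diff.getbbox() is None, "Images differ pixel by pixel"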

Reading tiffs in opencv swaps top and bottom third of image

I've got a pretty strange issue. I have several tif images of astronomical objects. I'm trying to use opencv's python bindings to process them. Upon reading the image file, it appears that segments of the images are swapped or rotated. I've stripped it down to the bare minimum, and it still reproduces:
img = cv2.imread('image.tif', 0)
cv2.imwrite('image_unaltered.tif', img)
I've uploaded some samples to imgur to show the effect. The images aren't super clear (that's the nature of preprocessed astronomical images), but you can see it:
First set:
http://imgur.com/vXzRQvS
http://imgur.com/wig99KR
Second set:
http://imgur.com/pf7tnPz
http://imgur.com/xGn9C77
The same rotated/swapped images appear if I use cv2.imshow(...) as well, so I believe it's something that happens when I read the file. Furthermore, it persists if I save as JPEG. Opening the original in Photoshop shows the correct image. I'm using OpenCV 2.4.10 on Linux Mint 17.1. If it matters, the original TIFFs were created with FITS Liberator on Windows.
Any idea what's happening here?
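A troubleshooting sketch, not from the original post: decode the same TIFF with a second library and save both results, to see whether the swapped segments come from cv2.imread or from the file itself. tifffile is an assumed extra dependency here.

import cv2
import numpy as np
import tifffile

cv_img = cv2.imread('image.tif', 0)               # as in the question
tf_img = tifffile.imread('image.tif')

# Normalise the second copy to 8-bit purely for visual comparison.
tf_vis = cv2.normalize(tf_img.astype(np.float64), None, 0, 255,
                       cv2.NORM_MINMAX).astype(np.uint8)

cv2.imwrite('decoded_by_opencv.png', cv_img)
cv2.imwrite('decoded_by_tifffile.png', tf_vis)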
