Join the two images - python

I am quite new to OpenCV and digital image processing in general, so I need a bit of help stitching two images together. The background of the problem: two joined pieces of plastic were pulled apart, leaving the adhesive/glue torn across both faces. This is the image of the "glue" on the base:
and this is the image of "glue" on the other attached face:
As the backgrounds of the two images are not the same, I read that feature-based stitching is not possible (the features differ). These two pieces are also like jigsaw pieces that need to be rotated, so the problem is not as straightforward as panorama stitching.
How do I join such images together?
I was thinking of finding the white contours and then, keeping one image fixed, rotating the other while recording the rotation angle and measuring the area of the merged contours. The merged area would be smallest at a perfect match.

This is not a complete answer (no one said answers have to be complete), but it may inspire you or someone else to work out a method.
I flipped one of your images vertically and mirrored (flopped) it horizontally, then put both into Photoshop on two separate layers. I then set the "Blending Mode" to Difference, which is always a great way to align images, because the result goes black wherever the images are aligned and there is no difference.
I then moved one layer around on the other. I guess you will need to do something similar to solve your problem: find a quantity that your code can maximise or minimise.

Related

Splitting an image into overlapping tiles, and recording the tile name as the top left pixel coords from the image

I'm very new to image processing in Python (and not massively adept at Python in general), so forgive me if this sounds stupid. I'm working with an AI for object detection and need to submit 1000x1000 pixel images to it, divided up from larger images of varying widths and heights (not necessarily divisible by 1000, but I have a way of padding out images smaller than 1000x1000). For this to work, I need a 200 pixel overlap on each segment or the AI may miss objects.
I've tried a host of methods and have got the image to divide up using the techniques suggested in "Creating image tiles (m*n) of original image using Python and Numpy" and "how can I split a large image into small pieces in python" (plus a few others that are effectively the same techniques in different words). I've also been able to make a grid and get the tile names from it, using "How to determine coordinate of grid elements of an image"; however, I have not been able to get the overlap to work there, as it just tiles normally.
Basically, I've found one way to cut the images up that works, and one way to get the tile coordinates, but I am utterly failing at putting the two together. Does anyone have any advice on what to do here?
So far I've not found a direct approach to my end goal online, and I've tried mucking around with different scripts (like the ones listed above), but I feel like I'm barking up totally the wrong tree.
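One way to combine the two parts, assuming 1000 px tiles with a 200 px overlap (i.e. an 800 px stride) and naming each tile by its top-left pixel coordinate (`tile_image` is a hypothetical helper, not from any of the linked posts):

```python
import numpy as np

def tile_image(img, tile=1000, overlap=200):
    """Yield (name, patch) pairs where `name` encodes the tile's top-left
    (x, y) in the source image. Stride = tile - overlap, so consecutive
    tiles share `overlap` pixels. Edge tiles are zero-padded to full size."""
    stride = tile - overlap
    h, w = img.shape[:2]
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            patch = img[y:y + tile, x:x + tile]
            if patch.shape[:2] != (tile, tile):
                # Pad partial edge tiles with black up to tile x tile
                padded = np.zeros((tile, tile) + img.shape[2:], img.dtype)
                padded[:patch.shape[0], :patch.shape[1]] = patch
                patch = padded
            yield f"tile_{x}_{y}", patch
```

Because the name carries the top-left coordinate, detections made on a tile can be mapped back to the original image by adding (x, y).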

Eliminate the background (the common points) of 3 images - OpenCV

Forgive me, but I'm new to OpenCV.
I would like to delete the common background in 3 images that contain a landscape and a man.
I tried some subtraction code but couldn't solve the problem.
I would like to output each image with only the man and without the landscape.
Does OpenCV have algorithms that do this automatically (i.e. without any manual operation such as markers)?
I tried the Python code from "CV - Extract differences between two images",
but it doesn't work in my case, because I don't have an image with only the background (without the man).
I think a good solution would be to compare all the images and keep those points that are the same in at least one other image.
That way I could extrapolate a background (call it "Result.jpg") and then analyze each image, cutting out the portions that are also present in "Result.jpg".
Does that sound like a good idea? Do you have any simpler ideas?
Without semantic segmentation, you can't do that.
All you can compute is where two images differ, and that does not give you the silhouette of the person but an overlap of two silhouettes. You will never know the exact outline.

how to locate and extract coordinates/data/sub-components of charts/map image data?

I'm working on creating a tile server from some raster nautical charts (maps) I've paid for access to, and I'm trying to post-process the raw image data these charts are distributed as, prior to geo-referencing them and slicing them up into tiles.
I've got two sets of tasks and would greatly appreciate any help, or even sample code, on how to get these done in an automated way. I'm no stranger to Python/Jupyter notebooks, but I have zero experience with this type of data science, doing image analysis/processing with things like OpenCV or machine learning (or a better toolkit/library I'm not yet aware of).
I have some sample images (the originals are PNG but too big to upload, so I encoded them as high-quality JPEGs to follow along / provide sample data). Here's what I'm trying to get done:
Validation of the image data. The first chart (as well as the last four) demonstrates what properly formatted chart images should look like (I manually added a few colored rectangles to the first, to highlight different parts of the image in the bonus section below).
Some images will have missing tile data, as in the 2nd sample image. These are ALWAYS chunks of 256x256 image data, so it should be straightforward to identify black boxes of this exact size.
Some images will have corrupt/misplaced tiles, as in the 3rd image (notice the large colorful semi-circle/arcs in the center/upper half of the image: they are slightly duplicated beneath, and if you look along horizontally you can see the image data is shifted, so these tiles have been corrupted somehow).
Extraction of information. Once all image data is verified as valid (the steps above), there are a few bits of data I really need pulled out of the image, the most important of which are:
The 4 coordinates (upper left, upper right, lower left, lower right) of the internal chart frame. In the first image they are highlighted with a small pink box at each corner (the other images don't have the boxes, but the corners are located in a similar way). NOTE: because these are geographic coordinates and involve projections, they are NOT always 100% horizontally/vertically aligned with each other.
The critical bit is that SOME images contain more than one "chartlet". I really need to obtain the above 4 coordinates for EACH chartlet (some charts have no chartlets, some have two to several, and they are not always simple rectangular shapes). I may be able to supply the number of chartlets as input if that helps.
If possible, it would also help to extract each chartlet as a separate image (each has a single capital letter, A, B, C, in a circle, which would be good to have in the filename).
As a bonus, it would be great to also extract the sections sampled in the first sample image (in the lower left corner). This would probably involve recognizing where/if this block appears in the image (probably only once per file, but I'm not certain) and then extracting based on its coordinates:
The most important part is inside the green box and represents a pair of tables (the left table is a legend and I believe would always be the same; the right has a variable number of columns).
The table in the orange box would also be good to get the text from, as it's related.
As would the small overview map in the blue box, which can be left as an image.
I have been looking at tutorials on OpenCV and image recognition, but the content so far has been highly elementary, not to mention an overwhelming, endless list of algorithms for different operations (and again, I don't know which I'd even need), so I'm not sure how it relates to what I'm trying to do. Really, I don't even know where to begin structuring the steps needed to undertake all these tasks, or how each should be broken down further to ease the processing.
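Of all the tasks above, the missing-tile check is the most mechanical, so here is a sketch of just that part, assuming (as the question states) that missing tiles are pure-black 256x256 blocks and additionally that they fall on a regular 256 px grid:

```python
import numpy as np

def find_black_tiles(img, tile=256):
    """Return (x, y) top-left corners of tile-aligned blocks that are
    entirely black -- candidates for missing 256x256 chart tiles."""
    h, w = img.shape[:2]
    missing = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            block = img[y:y + tile, x:x + tile]
            if not block.any():  # every channel of every pixel is 0
                missing.append((x, y))
    return missing
```

If the tiles turn out not to be grid-aligned, the same all-zero test could be run at every offset with a sliding window (e.g. via erosion with a 256x256 kernel), at a higher computational cost.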

Detecting changes between two pictures

I am a beginner in Python and image processing. I have the following image.
I would like to detect the changes between this picture and another one, where the other one may be
- taken from a slightly different angle, or with the object translated
- under different light conditions
and the changes may be
- a change of color in part of the object
- an extra or missing part
After various searches, I thought about using ORB to detect the matching parts, remove them from the pictures, and then use contours to extract and compare the differences.
But I cannot seem to find a way to remove the matching parts from the two images.
I am open to all suggestions and better ways to approach the problem.
Edit:
Sorry, I forgot to mention that the color change could be either white or pink.
Looking at your image, it appears there are three dominant colors. If this is always the case, the first thing that comes to mind is to apply K-means color clustering with three clusters, as explained here.
The center color of each cluster would then give you information on the color of the tubes, and the size of each cluster (the number of pixels belonging to it) would tell you whether there are extra or missing parts.

Combine two overlapping videos frame by frame to form a single frame

I am getting video input from 2 separate cameras with some area of overlap between their outputs. I have tried out code which combines the video output horizontally. Here is the link to that code:
https://github.com/rajatsaxena/NeuroscienceLab/blob/master/positiontracking/combinevid.py
To explain the problem visually:
The red part shows the overlap region between the two image frames. I need the output to look like the second image, with the first frame in blue and the second frame in green (as shown in the third illustration).
A solution I can think of, but have been unable to implement, is: using SIFT/SURF, find the maximum-distance keypoints in both frames, then take the first video frame completely, pick only the non-overlapping region from the second video frame, and horizontally combine them to get the stitched output.
Let me know of any other solutions possible as well. Thanks!
I read this post an hour ago and tried a really easy approach. It is not perfect, but in some cases it should work well, for example if you have both cameras placed side by side in one frame.
I took 2 color images from my phone, as in the picture. The program selects rectangular regions from both source images, resizes them, and extracts these ROI rectangles. The idea is to find the "best" overlapping Rect regions by normalized correlation.
M1 and M2 are the Mat ROIs to compare:
matchTemplate(M1, M2, res, TM_CCOEFF_NORMED);
Afterwards, I find this overlapping Rect and use it to crop the source images and combine them with the hconcat() function.
My code is in C++, but it is really simple to replicate in Python. It is not the best solution, but it is one of the simplest. If your cameras are fixed in a stable position relative to each other, I think this is a good solution.
I held my phone in my hand :)
You can also use this simple approach on video. The speed depends only on the number of candidate rectangles you compare.
You can improve it by being smarter about which regions to compare.
I am also thinking about another idea: use optical flow, by treating the images taken from the cameras at the same moment as consecutive frames of a sequence. From the possible overlapping regions in one image, extract good features to track and find them in the corresponding region of the second image.
SURF and SIFT are great for this, but this is the simplest idea on my mind.
Code is Here Code
