Rasterio mask image fill pixels - python

I am programmatically processing satellite images and using Rasterio to clip my Areas of Interest. All my Rasterio masks are currently in a 1:1 aspect ratio (square). I have a specific case where the clipping mask boundaries fall outside the image bounds. I would like to fill the empty areas with dummy pixels of a specific color, instead of having the clip silently truncated.
The image above shows my current and expected scenario. Basically I would like to preserve my target aspect ratio (1:1) regardless of whether the clipping area is inside or outside of my main image. Is there a function in Rasterio that I can make use of?
I also have an idea in mind using cv2, where I compare the pixels inside and outside the image and colour them conditionally, but I'm quite unsure where to start.
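The conditional-fill idea can be sketched directly in NumPy: compute the overlap between the requested square window and the actual image, copy that part, and leave everything else at a fill colour. This is only a sketch; the helper name `square_clip` and the default fill value are my own.

```python
import numpy as np

def square_clip(img, row0, col0, size, fill=255):
    """Clip a size x size window starting at (row0, col0). The window may
    extend past the image bounds; out-of-bounds pixels get the fill value."""
    out = np.full((size, size) + img.shape[2:], fill, dtype=img.dtype)
    # Overlap between the requested window and the actual image
    r0, c0 = max(row0, 0), max(col0, 0)
    r1 = min(row0 + size, img.shape[0])
    c1 = min(col0 + size, img.shape[1])
    if r1 > r0 and c1 > c0:
        out[r0 - row0:r1 - row0, c0 - col0:c1 - col0] = img[r0:r1, c0:c1]
    return out
```

In Rasterio itself, a boundless windowed read along the lines of `src.read(1, window=window, boundless=True, fill_value=255)` should pad out-of-bounds areas the same way, assuming a reasonably recent Rasterio version.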

Related

How do I use the result of thresholding to select parts of another image in OpenCV?

I have a threshold image and a pattern image. I want to overlay the pattern image onto my threshold image, where the threshold part is black. See images attached:
Threshold Image, Pattern Image:
The issue is that my threshold region is not of any uniform shape. It is random, not some fixed shape like a rectangle or circle (it comes from the frontend). It will be similar to the sample image above.
What I want is to "cover" the black part of the threshold with a single pattern image. Now, if my threshold image were some shape like a rectangle or circle, I could simply use a "slicing" operation to overlay them. A sample is shown below as the ideal condition.
Ideal threshold, ideal result:
However, the actual shapes are not that trivial, so I cannot use slicing on them. Also, my pattern images are not just one solid color, so I don't think I can do much with that either.
I think I need to somehow "generate" the pattern image according to my black portion of threshold Image. How do I do that? Any pointer to that would also help. I'm using OpenCV and Python.
Feel free to consider the dimensions of both images to be same.
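Since both images have the same dimensions, a boolean mask over the threshold image is enough: select the (near-)black pixels and copy the corresponding pattern pixels into those positions. A minimal grayscale sketch; the function name and the black cut-off value are my own choices.

```python
import numpy as np

def fill_black_with_pattern(thresh, pattern, black_max=10):
    """Copy pattern pixels into every position where the threshold image
    is (near-)black; both images must have the same shape."""
    mask = thresh <= black_max      # True where the threshold is black
    out = thresh.copy()
    out[mask] = pattern[mask]       # overlay the pattern only there
    return out
```

The same indexing works for a 3-channel pattern if the mask is built from a single channel, since a 2-D boolean mask on a 3-D array selects whole pixels.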

Extracting objects with image-difference

I'm working on object detection in Python with OpenCV.
I have two pictures:
a reference picture with no object in it,
a picture with the object.
The result of differencing the images is:
The problem is that the pattern of the reference image now appears on my objects. I want to remove this pattern, and I don't know how to do it. For further image processing I need the correct outline of the objects.
Maybe you know how to fix it, or have better ideas for extracting the object.
I would be glad for your help.
Edit: a fourth image, a black object:
As @Mark Setchell commented, the difference of the two images shows which pixels contain the object; you shouldn't try to use it as the output. Instead, find the pixels with a significant difference, and then read those pixels directly from the input image.
Here, I'm using Otsu thresholding to find what "significant difference" is. There are many other ways to do this. I then use the inverse of the mask to blank out pixels in the input image.
import PyDIP as dip
bg = dip.ImageReadTIFF('background.tif')
bg = bg.TensorElement(1) # The image has 3 channels, let's use just the green one
fg = dip.ImageReadTIFF('object.tif')
fg = fg.TensorElement(1)
mask = dip.Abs(bg - fg) # Difference between the two images
mask, t = dip.Threshold(mask, 'otsu') # Find significant differences only
mask = dip.Closing(mask, 7) # Smooth the outline a bit
fg[~mask] = 0 # Blank out pixels not in the mask
I'm using PyDIP above, not OpenCV, because I don't have OpenCV installed. You can easily do the same with OpenCV.
An alternative to smoothing the binary mask as I did there, is to smooth the mask image before thresholding, for example with dip.Gauss(mask,[2]), a Gaussian smoothing.
Edit: The black object.
What happens with this image is that its illumination has changed significantly, or you have some automatic exposure setting in your camera. Make sure you have turned all of that off so that every image is exposed exactly the same, and use the raw images directly off the camera, not images that have gone through some automatic enhancement procedure or even JPEG compression, if you can avoid it.
I computed the median of the background image divided by the object image (fg in the code above, but for this new image), which came up to 1.073. That means that the background image is 7% brighter than the object image. I then multiplied fg by this value before computing the absolute difference:
mask = dip.Abs(fg * dip.Median(bg/fg)[0][0] - bg)
This helped a bit, but it showed that the changes in contrast are not consistent across the image.
Next, you can change the threshold selection method. Otsu assumes a bimodal histogram, and works well if you have a significant number of pixels in each group (foreground and background). Here we'll have fewer pixels belonging to the object, because only some of the object pixels have a different color from the background. The 'triangle' method is suitable in this case:
mask, t = dip.Threshold(mask, 'triangle')
This will lead to a mask that contains only some of the object pixels. You'll have to add some additional knowledge about your object (i.e. it is a rotated square) to find the full object. There are also some isolated background pixels that are being picked up by the threshold, those are easy to eliminate using a bit of blurring before the threshold or a small opening after.
Getting the exact outline of the object in this case will be impossible with your current setup. I would suggest you improve your setup by either:
making the background more uniform in illumination,
using color (so that there are fewer possible objects that match the background color so exactly as in this case),
using infrared imaging (maybe the background could have different properties from all the objects to be detected in infrared?),
using back-illumination (this is the best way if your aim is to measure the objects).

Crop out complementary regions from an image

I have coordinate regions of an image, and I am cropping out images from those coordinates. Now I need the complementary regions of what was cut out, with respect to the original image. How do I go about this using Pillow?
If you crop a region you basically create a new, smaller image.
A complementary operation to that would be to fill the region with some value you consider invalid, or zero, as you will still have an image of the original size. Technically you cannot remove a region from an image; you can only change or ignore it.
PIL.ImageDraw.Draw.rectangle(xy, fill=None, outline=None)
is something I found quickly. Maybe there is something better. Just crawl through the reference.
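A minimal sketch of that idea (the helper name is mine): copy the image and paint the cropped box with a fill value, which leaves an image of the original size with the region "removed".

```python
from PIL import Image, ImageDraw

def blank_region(im, box, fill=0):
    """Fill the (left, top, right, bottom) box with a constant colour,
    leaving the rest untouched -- the complement of a crop."""
    out = im.copy()
    ImageDraw.Draw(out).rectangle(box, fill=fill)
    return out
```

Note that Pillow's rectangle() includes the right and bottom coordinates in the filled area.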

How to properly detect corners using Harris detector with OpenCV?

I'm testing some image processing to obtain minutiae from digital fingerprints. I'm doing so far:
Equalize histogram
Binarize
Apply Zhang-Suen algorithm for lines thinning (this is not working properly).
Try to determine corners in thinned image and show them.
So, the modifications I'm obtaining are:
However, I can't obtain the possible corners in the last image, which is the thinned instance of the Mat object.
This is code for trying to get corners:
corners_image = cv2.cornerHarris(thinned, 1, 1, 0.04)
corners_image = cv2.dilate(corners_image, None)
But trying imshow on the resulting matrix will show something like:
a black image.
How should I determine corners then?
Actually, cv::cornerHarris returns corner responses, not the corners themselves. It looks like the responses in your image are too small.
If you want to visualize the corners, you can keep the responses that are larger than some threshold parameter and then mark those points on the original image as follows:
corners = cv2.cvtColor(thinned, cv2.COLOR_GRAY2BGR)
threshold = 0.1*corners_image.max()
corners[corners_image > threshold] = [0, 0, 255]
cv2.imshow('corners', corners)
Then you can call imshow, and the red points will correspond to corner points. Most likely you will need to tune the threshold parameter to get the results you need.
See more details in tutorial.

PIL: overlaying images with different dimensions and aspect ratios

I've been attempting to overlay two images in Python to match coordinates: the top left and bottom right corners have the same coordinates, and their aspect ratios are almost identical, bar a few pixels, although they are different resolutions.
Using PIL I have been able to overlay the images, though after overlaying them the output image is square with the resolution of the background image, and the foreground image is re-sized incorrectly (as far as I can see). I must be doing something wrong.
from PIL import Image
#load images
background = Image.open('ndvi.png')
foreground = Image.open('out.png')
#resizing
foreground.thumbnail((643,597),Image.ANTIALIAS)
#overlay
background.paste(foreground, (0, 0), foreground)
#save
background.save("overlay.png")
#display
background.show()
When dropping the images into something horrible like PowerPoint, the image aspects are almost identical. I've included an example: the image on the left is my by-hand overlay and the image on the right is the output from Python. The background is squashed vertically at some point in the code, which also affects the overlay. I'd like to be able to do this in Python and have it correctly look like the left-hand image.
A solution upfront.
Background image
width/height/ratio: 300 / 375 / 0.800
Foreground image
width/height/ratio: 400 / 464 / 0.862
Overlay
from PIL import Image
imbg = Image.open("bg.png")
imfg = Image.open("fg.png")
imbg_width, imbg_height = imbg.size
imfg_resized = imfg.resize((imbg_width, imbg_height), Image.LANCZOS)
imbg.paste(imfg_resized, None, imfg_resized)
imbg.save("overlay.png")
Discussion
The most important information you have given in your question were:
the aspect ratios of your foreground and background images are not equal, but similar
the top left and bottom right corners of both images need to be aligned in the end.
The conclusion from these points is: the aspect ratio of one of the images has to change. This can be achieved with the resize() method (not with thumbnail(), as explained below). To summarize, the goal simply is:
Resize the image with larger dimensions (foreground image) to the exact dimensions of the smaller background image. That is, do not necessarily maintain the aspect ratio of the foreground image.
That is what the code above is doing.
Two comments on your approach:
First of all, I recommend using the newest release of Pillow (Pillow is the continuation project of PIL, it is API-compatible). In the 2.7 release they have largely improved the image re-scaling quality. The documentation can be found at http://pillow.readthedocs.org/en/latest/reference.
Then, you obviously need to take control of how the aspect ratio of both images evolves throughout your program. thumbnail(), for instance, does not alter the aspect ratio of the image, even if your size tuple does not have the same aspect ratio as the original image. Quote from the thumbnail() docs:
This method modifies the image to contain a thumbnail version of
itself, no larger than the given size. This method calculates an
appropriate thumbnail size to preserve the aspect of the image
So, I am not sure where exactly you were going with your (643,597) tuple, or whether you are relying on the thumbnail having this exact size afterwards.
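To make the thumbnail() behaviour concrete, here is a small sketch using the dimensions given above: thumbnail() keeps the aspect ratio and never enlarges, while resize() forces the exact target size.

```python
from PIL import Image

fg = Image.new("RGB", (400, 464))   # foreground dimensions from above

fg_thumb = fg.copy()
fg_thumb.thumbnail((643, 597))      # thumbnail() only ever shrinks,
# so on an image already smaller than the target it is a no-op
print(fg_thumb.size)                # still (400, 464)

fg_resized = fg.resize((300, 375))  # resize() forces the exact size,
print(fg_resized.size)              # (300, 375), changing the aspect ratio
```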
