Crop out complementary regions from an image - python

I have coordinate regions of an image, and I am cropping out sub-images at those coordinates. Now I need the complementary regions of what was cut out, with respect to the original image. How do I go about this using Pillow?

If you crop a region you basically create a new, smaller image.
A complementary operation to that would be to fill the region with some value you consider invalid, or with zeros, so that you still have an image of the original size. Technically you cannot remove a region from an image; you can only change or ignore it.
PIL.ImageDraw.Draw.rectangle(xy, fill=None, outline=None)
is something I found quickly. Maybe there is something better. Just crawl through the reference.
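For instance, a minimal sketch of the fill approach, assuming box is the same (left, upper, right, lower) tuple you pass to Image.crop:

from PIL import Image, ImageDraw

image = Image.open("original.png")   # full-size source image
box = (50, 50, 200, 150)             # (left, upper, right, lower), same tuple used for crop()

cropped = image.crop(box)            # the region that is cut out

complement = image.copy()            # keep the original untouched
draw = ImageDraw.Draw(complement)
draw.rectangle(box, fill=(0, 0, 0))  # black out the cropped region; use a single int for greyscale images
complement.save("complement.png")

The complement here is simply the original image with the cropped region blacked out, which you can then treat as invalid.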

Related

Rasterio mask image fill pixels

I am programmatically processing satellite images and using Rasterio to clip my areas of interest. All my rasterio masks are currently in a 1:1 aspect ratio (square). I have a specific case where the clipping mask boundaries fall outside the image bounds. I would like to fill the empty areas with dummy pixels of a specific colour, instead of the program lazily clipping my image to the intersection.
The image above shows my current and expected scenario. Basically I would like to preserve my target aspect ratio (1:1) regardless of whether the clipping area is inside or outside of my main image. Is there a function in Rasterio that I can make use of?
I also have an idea using cv2 where I compare the pixels inside and outside the bounds and colour them conditionally, but I am quite unsure where to start.
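One option that might save you the cv2 step entirely: if you read the clip through a window, rasterio supports boundless reads, padding the parts of the window that fall outside the raster with a fill value rather than shrinking the result. A minimal sketch, assuming a square 512x512 window that overhangs the bottom-right corner (the file name and offsets are made up for illustration):

import rasterio
from rasterio.windows import Window

with rasterio.open("satellite.tif") as src:
    # A square window that extends past the right/bottom edge of the raster.
    window = Window(src.width - 256, src.height - 256, 512, 512)
    # boundless=True pads the out-of-bounds parts with fill_value
    # instead of clipping the result to the intersection.
    data = src.read(window=window, boundless=True, fill_value=0)

print(data.shape)  # (bands, 512, 512) -- the 1:1 aspect ratio is preserved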

How do I use the result of thresholding to select parts of another image in OpenCV?

I have a threshold image and a pattern image. I want to overlay the pattern image onto my threshold image, where the threshold part is black. See images attached:
Threshold Image, Pattern Image:
The issue is that my threshold part is not of any uniform shape. It is arbitrary, not a fixed shape like a rectangle or circle (it comes from the frontend), but it will be similar to the sample image above.
What I want is to "cover" the black part of the threshold with a single pattern image. Now, if my threshold image were some shape like a rectangle or circle, I could simply use a "slicing" operation to overlay them. A sample is shown below as the ideal condition.
Ideal threshold, ideal result:
However, the actual shapes are not that trivial, so I cannot use slicing on them. Also, my pattern images are not just one solid colour, so I don't think I can do much with that either.
I think I need to somehow "generate" the pattern image according to my black portion of threshold Image. How do I do that? Any pointer to that would also help. I'm using OpenCV and Python.
Feel free to assume the dimensions of both images are the same.
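One way to do this without caring about the shape at all is to use the threshold image itself as a boolean mask and copy pattern pixels wherever the threshold is black. A minimal sketch (file names are placeholders; both images are assumed to have identical dimensions, as stated):

import cv2
import numpy as np

threshold = cv2.imread("threshold.png")  # black pixels mark where the pattern should go
pattern = cv2.imread("pattern.png")      # same dimensions as the threshold image

mask = np.all(threshold == 0, axis=2)    # True wherever all three channels are zero

result = threshold.copy()
result[mask] = pattern[mask]             # copy pattern pixels into the black region
cv2.imwrite("result.png", result)

Because the mask is computed per pixel, this works for arbitrary shapes; no slicing or fixed geometry is needed.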

How to detect edge of object using OpenCV

I am trying to use OpenCV to measure the size of filament (the plastic material used for 3D printing).
The idea is that I use an LED panel to illuminate the filament, take an image with a camera, preprocess the image, apply edge detection and calculate the filament's size. Most filaments are made of a single colour, which is easy to preprocess and gives fine results.
The problem comes with transparent filament, where I am not able to get useful results. I would like to ask for a little help, or for someone to push me in the right direction. I have already tried cropping the image to a height a bit larger than the filament and a width of just a few pixels, then calculating the size from the number of pixels in those images, but that did not work very well. So now I am trying to do it with edge detection.
Works well for filaments of a single colour
Not working for transparent filament
The code below works just fine for common filaments; the problem is when I try to use it for transparent filament. I have tried adjusting the thresholds of the Canny function. I have tried different colour spaces. But I am not able to get good results.
Images that may help to understand:
https://imgur.com/gallery/CIv7fxY
import cv2 as cv

image = cv.imread("../images/img_fil_2.PNG")  # load image
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)  # convert image to grayscale
edges = cv.Canny(gray, 100, 200)  # detect edges of image
You can use the assumption that the images are taken under the same conditions.
Your main problem is that the reflections in the transparent filament are detected as edges. But, since the image is relatively simple, without any other edges, you can simply take the upper and the lower edge, and measure the distance between them.
A simple way of doing this is to take 2 vertical lines (e.g. image sides), find the edges that intersect the line (basically traverse a column in the image and find edge pixels), and connect the highest and the lowest points to form the edges of the filament. This also removes the curvature in the filament, which I assume is not needed for your application.
You might want to use 3 or 4 vertical lines, for robustness.
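A rough sketch of that idea in Python, building on the Canny output above (the choice of columns and the use of the median are my own illustration, not part of the answer):

import cv2 as cv
import numpy as np

image = cv.imread("../images/img_fil_2.PNG")
gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
edges = cv.Canny(gray, 100, 200)

heights = []
for x in (0, edges.shape[1] // 2, edges.shape[1] - 1):  # three vertical lines
    rows = np.flatnonzero(edges[:, x])                  # row indices of edge pixels in this column
    if rows.size >= 2:
        heights.append(rows[-1] - rows[0])              # lowest edge minus highest edge

if heights:
    print("filament size in pixels:", np.median(heights))  # median for robustness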

Find Coordinates of cropped image (JPG) from its original

I have a database of original images, and for each original image there are various cropped versions.
This is an example of how the image look like:
Original
Horizontal Crop
Square Crop
This is a very simple example, but most images are like this; some crops might take a smaller section of the original image than others.
I was looking at OpenCV in Python, but I'm very new to this kind of image processing.
The idea is to save the cropping information separately from the image to save space, and then generate all the crops and different aspect ratios on the fly with a cache system instead.
The method you are looking for is called "template matching". You can find examples here:
https://docs.opencv.org/trunk/d4/dc6/tutorial_py_template_matching.html
For your problem, given the large images, it might be a good idea to constrain the search space by resizing both images by the same factor. The match at the reduced scale is not as precise, but it lets you restrict the full-resolution search to a small region around that point.
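A minimal sketch following the linked tutorial (file names are placeholders; this assumes the crop has not been rescaled, since plain matchTemplate is not scale-invariant):

import cv2

original = cv2.imread("original.jpg")
crop = cv2.imread("horizontal_crop.jpg")

# Slide the crop over the original and score the match at every position.
result = cv2.matchTemplate(original, crop, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

x, y = max_loc                 # top-left corner of the best match
h, w = crop.shape[:2]
print("crop box:", (x, y, x + w, y + h), "score:", max_val)

Storing just that (x, y, w, h) tuple per cropped version is the cropping information you would keep in the database.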

How to calculate a marked area within boundary in SimpleCV or OpenCV

I have this image:
Here I have an image on a green background, with an area marked by a red line inside it. I want to calculate the area of the marked portion relative to the image.
I am cropping the image to remove the green background and calculating the area of the cropped image. From here I don't know how to proceed.
I have noticed that contours can be used for this, but the problem is how to draw the contour in this case.
I guess if I can create the contour and fill the marked area with some colour, I can subtract it from the whole (cropped) image and get both areas.
In your link, they use the method threshold with a colour value as parameter. Basically it takes your source image and sets to white all pixels greater than this value, and to black otherwise (this means your source image needs to be a greyscale image). This threshold is what enables you to "fill the marked area" so that contour detection becomes possible.
However, I think you should try the method inRange on your cropped picture. It is pretty much the same as threshold, but instead of a single threshold you have a minimum and a maximum boundary. If a pixel is within the range of colours given by your boundaries, it is set to white; otherwise it is set to black. I don't know if this will work, but if you try to isolate the "most green" colours with your range, you might get your big white area in the top right.
Then you apply the method findContours on your binarized image. It gives you all the contours it found, so if there are small white dots elsewhere in the image it doesn't matter; you only have to select the biggest contour found by the method.
Be careful: if the range given to inRange isn't appropriate, the big white zone you should find in the top right might contain some noise, and that can interfere with the contour detection. To avoid it, you can blur the image and apply some erosion/dilation. This way you may get a better detection.
EDIT
I'll add some code here, but it can't be used as-is. As I said, I have no knowledge of Python, so all I can do is provide the OpenCV methods with the parameters to pass.
Let's also review the steps:
Binarize your image with inRange. You need to find appropriate values for your minimum and maximum boundaries. What you want to do here is isolate the green colours, since green is mostly what composes the area inside your contour. I can't really suggest anything better than trial and error to find the best thresholds. Let's start with these min and max values: (0, 125, 0) and (255, 250, 255).
inRange(source_image, Scalar(0, 125, 0), Scalar(255, 250, 255), binarized_image)
Check your result with imshow
imshow("bin", binarized_image)
If your binarization is OK (you can detect the area you want quite well), apply findContours. I'm sorry, I don't understand the syntax used in your tutorial or in the documentation, but here are the parameters:
binarized_mat: your binarized image
contours: an array of arrays of Point which will contain all the contours detected. Each contour is stored as an array of points.
mode: you can choose whatever you want, but I'd suggest RETR_EXTERNAL in your case.
Get the contour stored as the biggest array, since the largest contour is likely the one with the highest number of points.
Calculate the area inside it.
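Since the calls above are C++-style, here is a rough Python translation of the same steps (the inRange boundaries are the trial values suggested above and will almost certainly need tuning; the clean-up kernel size is a guess, and I pick the largest contour by area rather than by point count, which is slightly more robust):

import cv2
import numpy as np

cropped = cv2.imread("cropped.png")  # the image with the green background already removed

# Step 1: binarize, keeping only the "most green" colours (BGR order in OpenCV).
binarized = cv2.inRange(cropped, np.array([0, 125, 0]), np.array([255, 250, 255]))

# Optional clean-up against noise: blur, then erosion/dilation (an opening).
binarized = cv2.medianBlur(binarized, 5)
kernel = np.ones((5, 5), np.uint8)
binarized = cv2.morphologyEx(binarized, cv2.MORPH_OPEN, kernel)
cv2.imshow("bin", binarized)  # check the binarization
cv2.waitKey(0)

# Step 2: find the contours of the white zones (OpenCV 4.x returns two values here).
contours, _ = cv2.findContours(binarized, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Step 3: keep the largest contour and compute its area.
largest = max(contours, key=cv2.contourArea)
area = cv2.contourArea(largest)
print("marked area:", area, "pixels")
print("fraction of image:", area / (cropped.shape[0] * cropped.shape[1]))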
Hope this helps!
