I have an image segmentation problem. First I need to find a certain animal in an image containing multiple different animals. Then I need to find a certain feature in that animal. The first network, built to find the particular animal, is simply a U-Net doing binary classification. I have a resulting Dice score of 96%.
Now I would like to use the mask from the first network to crop the original image around the animal. I would also need to crop the second ground-truth mask related to that image (this is the ground truth for the features). How can I retrieve a bounding box from the first predicted mask so that I can crop my images further?
I am coding in Python and using PyTorch and torchvision. I would like to avoid Keras and TensorFlow; any other library is welcome.
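A simple way to get a box from the predicted mask is to take the min/max coordinates of its nonzero pixels (recent torchvision versions also ship `torchvision.ops.masks_to_boxes` for this). A minimal sketch, assuming the prediction is a 2-D tensor with the animal as nonzero pixels; the `margin` parameter is my own addition so the crop isn't flush against the animal:

```python
import torch

def mask_to_bbox(mask: torch.Tensor, margin: int = 10):
    """Return (x1, y1, x2, y2) enclosing all nonzero pixels of a 2-D mask."""
    ys, xs = torch.nonzero(mask > 0, as_tuple=True)
    if ys.numel() == 0:
        return None  # empty prediction, nothing to crop
    x1 = max(int(xs.min()) - margin, 0)
    y1 = max(int(ys.min()) - margin, 0)
    x2 = min(int(xs.max()) + margin, mask.shape[1] - 1)
    y2 = min(int(ys.max()) + margin, mask.shape[0] - 1)
    return x1, y1, x2, y2

# Toy mask standing in for the U-Net prediction
mask = torch.zeros(100, 100)
mask[20:40, 30:70] = 1
x1, y1, x2, y2 = mask_to_bbox(mask, margin=5)
# Apply the SAME box to both the image and the second ground-truth mask:
# image_crop = image[..., y1:y2 + 1, x1:x2 + 1]
# gt2_crop   = gt_mask2[..., y1:y2 + 1, x1:x2 + 1]
```

Using one box for both tensors keeps the image and the feature ground truth aligned.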
I am training a U-Net using MONAI, which is based on PyTorch. I am using the Decathlon dataset, where each segmentation image has two labels (one for the organ and the other for the tumour). What I want is to ignore the first label (organ segmentation) and train the network on the second label (tumour segmentation) only. I don't know whether I should delete one label from the images manually (this would take me a lot of time with hundreds of images). Is there a way to do it in code? What is the right way to do it? Is there an existing function in MONAI? Opening each image as a tensor, reading its values, and replacing label 1 with the background value might be time- and resource-consuming. Thanks.
I tried to search the MONAI docs for a simple code example but didn't find one.
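One way to avoid editing files on disk is to remap labels on the fly inside the transform chain, so nothing is ever written back. A minimal sketch in plain PyTorch, assuming the organ is label 1 and the tumour label 2 (check your dataset's convention); the function can be wrapped in a MONAI `Lambdad` transform applied to the label key, and MONAI also provides `MapLabelValued` for this kind of remapping:

```python
import torch

def tumour_only(label: torch.Tensor) -> torch.Tensor:
    """Map organ voxels (label 1) to background (0) and tumour (label 2) to 1."""
    out = torch.zeros_like(label)
    out[label == 2] = 1
    return out

# Toy 2-D slice: background = 0, organ = 1, tumour = 2
seg = torch.tensor([[0, 1, 1],
                    [2, 2, 0]])
binary = tumour_only(seg)  # organ voxels become background, tumour becomes 1
```

Because the remapping happens per sample at load time, the original files stay untouched.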
There is a tutorial on the web for drawing bounding boxes using R-CNN, where a VGG16 network is modified for this task (using transfer learning to take advantage of the fact that the inner layers are already trained).
The edit consists of:
removing the classification layer
using a regression layer instead
The training uses images as inputs and [x1, y1, x2, y2] vectors as labeled outputs, each pair being a corner of the box, i.e. a description of the rectangular box around the object we want to detect.
I have tried it, and so far have had no luck with the predicted coordinates. So my questions are:
Is the procedure of editing the CNN to create an R-CNN that outputs the vector (as described in the link at the top) a correct approach for predicting a bounding box for a specific object?
I am trying it with MobileNet because it is lighter, so assuming 1. is correct, would this also be a "logically similar" idea?
I am trying to understand the RPN network in Faster R-CNN.
I understand the concept of the RPN network:
Pass the input image through the pre-trained CNN and take the output feature maps.
Bring the feature maps to a fixed size.
Extract anchors (3 different scales and ratios for every sliding-window position) from the feature maps.
Use two sibling 1×1 convolutions (acting as per-location fully connected layers) to predict object vs. background and the bounding-box coordinates (4 values).
Compute the IoU of each anchor box with the ground-truth boxes: if IoU > 0.7 the anchor is labeled as object; if the IoU is low (e.g. < 0.3) it is labeled as background.
The point of the RPN is to produce region proposals that contain objects.
But I do not understand the input and output structure.
For example, I have 50 images, each containing 5 to 6 objects, with labeling information (the coordinates of each object).
How do I generate target values to train the RPN network?
All the blogs show the architecture as feeding the entire image to the pre-trained CNN.
And for the RPN output, the model has to tell whether an anchor contains an object or not, and also predict the bounding box for the object in that anchor.
For this, how do I prepare the input and target/output values, like we do in a dog/cat or dog/cat/car classification problem?
Correct me if I am wrong:
Do we have to crop all the objects in every image and do binary classification (object vs. background) to decide whether an anchor contains an object?
And do we have to give the ground-truth coordinates as targets for every cropped object from all images in the dataset, so that the RPN learns to predict the bounding box for the object in every anchor?
I hope I have explained my doubts clearly.
Help me learn this concept. Thank you.
After training an image detection model, how do I load the parameters of the bounding boxes for a specific operation?
Model: Darkflow YOLOv2
Classes: 7
For instance, if I set the threshold to 0.5, how do I use the resulting bounding boxes in a video to calculate the overlap between them? I am rather new to Python and would appreciate it if someone could point me in the right direction.
I am unclear on how to extract the individual class detection boxes and their relevant x and y data. Thank you!
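Darkflow's `return_predict` typically returns one dict per detection with `label`, `confidence`, `topleft` and `bottomright` keys; the overlap between any two boxes can then be computed with a plain IoU function. A sketch (the detection dicts below are fabricated stand-ins for real Darkflow output):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def to_box(det):
    """Convert one detection dict (darkflow-style) to (x1, y1, x2, y2)."""
    return (det['topleft']['x'], det['topleft']['y'],
            det['bottomright']['x'], det['bottomright']['y'])

# Stand-in for tfnet.return_predict(frame) on one video frame
detections = [
    {'label': 'person', 'confidence': 0.9,
     'topleft': {'x': 0, 'y': 0}, 'bottomright': {'x': 10, 'y': 10}},
    {'label': 'car', 'confidence': 0.8,
     'topleft': {'x': 5, 'y': 5}, 'bottomright': {'x': 15, 'y': 15}},
]
# Keep detections above the confidence threshold, then compare pairs
boxes = [to_box(d) for d in detections if d['confidence'] > 0.5]
overlap = iou(boxes[0], boxes[1])
```

Filtering by `det['label']` before converting lets you compute overlap only between the classes you care about.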
![sample training input](http://www.cs.toronto.edu/~vmnih/data/mass_roads/train/sat/10078660_15.tiff)
![sample training output](http://www.cs.toronto.edu/~vmnih/data/mass_roads/train/map/10078660_15.tif)
I am a beginner with CNNs and have worked with the MNIST dataset, in which we input 28x28x1 grayscale images and output a 10x1 vector containing the probabilities of the 10 classes (0, 1, 2, ..., 9).
How do we extract only the road pixels from the input image and display them, as is represented by the output image?
This is a binary segmentation problem: you learn a mapping from satellite images that predicts, for each pixel, whether that pixel is part of a road. A simple baseline would be to check whether the pixel color falls within some range.
A CNN will naturally learn a more complicated function based on the local neighborhood of each pixel. One repo to get you started is this one: https://github.com/jocicmarko/ultrasound-nerve-segmentation. There they use a similar approach to segment ultrasound images with CNNs. You just have to use 3 input channels instead of 1, and everything else should be quite similar.
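In PyTorch the same idea looks like the sketch below: a tiny fully convolutional net (a toy stand-in for that repo's U-Net) produces a per-pixel logit, is trained with binary cross-entropy, and the thresholded output masks out everything but the road pixels. All layer sizes here are illustrative:

```python
import torch
import torch.nn as nn

# Toy fully convolutional net: 3-channel satellite tile -> 1-channel road logit map
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),            # one logit per pixel
)
criterion = nn.BCEWithLogitsLoss()  # per-pixel binary segmentation loss

image = torch.randn(1, 3, 64, 64)                     # stand-in satellite tile
target = torch.randint(0, 2, (1, 1, 64, 64)).float()  # stand-in road map
loss = criterion(model(image), target)                # one training-step loss

# To "extract only the road pixels" for display: threshold and mask the input
road_mask = torch.sigmoid(model(image)) > 0.5  # [1, 1, H, W] boolean mask
roads_only = image * road_mask                 # non-road pixels zeroed out
```

A real model for this dataset would downsample and upsample (U-Net style) for a larger receptive field, but the input/output shapes and the loss are the same.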