Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 7 years ago.
How can I segment cells from a microscope image, along the lines of what was done in MATLAB here?
http://blogs.mathworks.com/steve/2006/06/02/cell-segmentation/
Also, if I take multiple images in different fluorescent channels (after staining the cells with some antibody/marker), how can I automatically quantify the fraction of cells positive for each marker? Has anyone done something like this in Python? Or is there a Python library that can be used for this?
You can do this in Python using the OpenCV library.
In particular, you'll be interested in the following features:
histogram equalization (cv.EqualizeHist). This is missing from the current Python API, but if you download the latest SVN snapshot of OpenCV you can use it. This step is for display purposes only and is not required to get the same result
image thresholding
morphological operations such as erode (also dilate, open, close, etc)
determine the outline of a blob in a binary image using cv.FindContours -- see this question. It uses C, not Python, but the APIs are virtually the same, so you can learn a lot from it
watershed segmentation (use cv.Watershed -- it exists, but for some reason I can't find it in the manual)
With that in mind, here's how I would use OpenCV to get the same results as in the MATLAB article:
Threshold the image using an empirically determined threshold (or Otsu's method)
Apply dilation to the image to fill in the gaps. Optionally, blur the image before the thresholding step -- that will also remove small "holes"
Determine outlines using cv.FindContours
Optionally, paint the contours
Using the blob information, iterate over each blob in the original image and apply a separate threshold for each blob to separate the cell nuclei (this is what their imextendedmax operation is doing)
Optionally, paint in the nuclei
Apply the watershed transform
I haven't tried any of this (sorry, I don't have the time right now), so I can't show you any code yet. However, based on my experience with OpenCV, I'm confident that everything up to the watershed step will work well. I've never used OpenCV's watershed transform, but I can't see a reason why it wouldn't work here.
Try going through the steps I've outlined and let us know if you run into any problems. Be sure to post your source code, as that way more people will be able to help you.
Finally, to answer your question about staining cells and quantifying their presence: it's quite easy once you know which dyes you are using. For example, to find the cells stained with a red dye, you'd extract the red channel from the image and examine the areas of high intensity (perhaps by thresholding).
Have you read the tutorial on pythonvision.org?
http://pythonvision.org/basic-tutorial
It is very similar to what you are looking for.
And just to add one more: cellprofiler.org (open-source cell image analysis software, written in Python)
You may also find this library useful:
https://github.com/luispedro/pymorph/
I found it easier to get started with than the OpenCV library.
I was wondering if there is a simple Python toolkit for region-based image segmentation. I have a grayscale image, and my goal is to efficiently find a complete segmentation such that the pixel values within each region are similar (presumably the definition of "similar" will be determined by some tolerance parameter). I am looking for an instance segmentation where every pixel belongs to exactly one region.
I have looked at the scikit-image segmentation module (https://scikit-image.org/docs/dev/api/skimage.segmentation.html), but the tools there didn't seem to do what I was looking for. For instance, skimage.segmentation.watershed looked attractive, but gave poor results using markers=None.
The flood fill algorithm from scikit-image seems close to what you want, and it has a tolerance parameter as well.
For more fine-grained control, you can check out OpenCV.
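A small sketch of the scikit-image flood fill with a tolerance parameter (the toy array and tolerance value are illustrative only):

```python
import numpy as np
from skimage.segmentation import flood

# Toy grayscale image: two flat regions with slightly noisy values
img = np.array([[10, 11, 50],
                [10, 12, 51],
                [11, 11, 52]], dtype=float)

# Boolean mask of all pixels reachable from the seed (0, 0)
# whose value lies within +/- 5 of the seed's value
mask = flood(img, (0, 0), tolerance=5)
print(mask)
```

Repeating this from seeds in each unvisited pixel (or using `skimage.segmentation.flood_fill` to write labels directly) yields a complete segmentation where every pixel belongs to exactly one region.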
I'm trying to do sky detection. I already tried blue-color detection in OpenCV, but it didn't work well because of clouds and because the sky's color changes with the time of day. My most useful attempt so far used Canny edge detection with thresholds adjusted by the user; I then filled the sky region with white pixels and everything else with black.
The question is: is it possible to do sky detection automatically, without input from the user?
I can give a recommendation from an AI perspective. Read to the end for a non-AI recommendation.
Actually, there is a "simple" way, provided you are willing to annotate a few thousand images manually. You can ask some of your coworkers/classmates to help you with this. I worked with the YOLO-V3 program, which gives you a decent GUI for manually annotating your images. YOLO-V3, however, only works with bounding boxes, so my next suggestion will work for identifying the whole sky and segmenting the image pixel by pixel.
If you already have an annotated dataset, there's a neural network architecture called Mask R-CNN, which overlays your given image with a mask of any color you choose over an object or region you indicate. From my experience, this one does take a LOT of annotated data to train for a decent result. But for something as generalizable as a sky detector and overlay, it should work well with only 1-3k annotated pictures. If you choose to go down this route, here is an article that describes how you can make your own annotated pictures.
Non-AI recommendations:
Using the blue component of the RGB tuple, play around with the thresholds you can use for each color; you can then do some random sampling of these points and go from there.
But seriously, based on everything I researched on this and from looking at other people's repositories, it seems the best method is via AI. Here's an example. The reason is that detecting a sky takes a lot of spatial "awareness". For example, how will the computer discriminate between the sky and an ocean? Both are blue, but you can tell from the waves that it is an ocean. That kind of spatial reasoning can really only be done by AI or a huge amount of manual coding.
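The non-AI color-threshold idea above might be sketched like this; the specific thresholds are guesses to tune, not a validated rule, and the 4x4 image is a synthetic placeholder:

```python
import numpy as np

# Hypothetical heuristic: a pixel is "sky" if blue clearly dominates
# red and green and the pixel is reasonably bright. All thresholds
# are placeholders to tune on real photos.
def sky_mask(rgb):
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (b > 100) & (b > r + 20) & (b > g + 10)

img = np.zeros((4, 4, 3), np.uint8)
img[:2] = (80, 120, 200)   # sky-ish upper half
img[2:] = (90, 140, 60)    # vegetation-ish lower half
mask = sky_mask(img)
print(mask.mean())  # fraction of pixels classified as sky
```

As the answer notes, a heuristic like this will still confuse sky with other blue regions such as water; that is exactly the spatial-awareness gap the AI route addresses.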
I have been browsing the internet and stack overflow in order to find a solution to my problem, but to no avail.
So here is my problem:
Problem
I have a series of images with specific ROIs in which I detect a signal change. To extract the signal, I need to subtract the image background from the actual signal. Unfortunately, I can't just subtract the images, as this doesn't remove the background noise sufficiently.
Solution Idea
What I want to do is cut out (black out) my ROIs, do an interpolation across the entire "reduced" image, and then fill the blacked-out ROIs back in via interpolation. This way I can get an idea of what the background beneath my signal is actually doing. I have been playing around with griddata and RectBivariateSpline, but I haven't found an approach that works.
So far I have been doing this in MATLAB with the function scatteredInterpolant, but I would like to do it in Python.
Below is an image series that illustrates the concept. One can see that the third image is slightly blurry in the previously blacked-out ROIs.
Image-processing concept
So, does Python provide a solution similar to MATLAB's scatteredInterpolant, or how could I best tackle this problem?
Thank you.
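One possible Python counterpart to scatteredInterpolant is scipy.interpolate.griddata, which also interpolates scattered points. A hedged sketch of the blacking-out-and-refilling idea, using a synthetic smooth background in place of real image data:

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic smooth background standing in for the real image
img = np.fromfunction(lambda y, x: 0.1 * x + 0.2 * y, (20, 20))

# Hypothetical ROI to black out and then refill by interpolation
mask = np.zeros(img.shape, bool)
mask[5:10, 5:10] = True

yy, xx = np.mgrid[0:20, 0:20]
known = ~mask

filled = img.copy()
filled[mask] = griddata(
    (yy[known], xx[known]),  # scattered coordinates of the known background
    img[known],              # background values outside the ROI
    (yy[mask], xx[mask]),    # points inside the ROI to reconstruct
    method="linear",
)
```

Because griddata triangulates the known points, the ROI must lie inside the convex hull of the remaining pixels (true here, since the ROI is interior); otherwise the missing values come back as NaN unless you pass `fill_value`.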
I know a couple of standard ways of detecting a modified image, such as:
Luminance gradient
Copy move detection
Metadata Extraction
Histogram analysis
ELA (error level analysis)
Quantization matrix analysis
Thumbnail analysis
Are there any other standard ways of detecting a modified image?
Tried out
Finding the EXIF data of the image to check the created and modified dates for signs of modification. I also had some rules for validating the EXIF camera make and maker note, along with checking for the software used, such as Photoshop, Shotwell, etc.
Was able to segment the image and use SLIC (simple linear iterative clustering) to find similar cluster regions in an image
Finding the largest contour with low pixel inconsistency in the luminance gradient and marking it as a potentially modified region
The largest contour under ELA as a potentially modified region
Checking for inconsistencies in the histogram graph and marking the image as potentially edited.
Here are my questions
Are there any standard rules for verifying an image with metadata, such as the created and modified dates, the camera make, or the maker note? These details are not consistent for any given image.
Would the contour with the least pixel inconsistency in the luminance gradient always indicate a modified image?
If the histogram graph fluctuates at regular intervals, can the image be considered modified?
How could I use quantization matrices to find image anomalies?
What is the best way to compare the thumbnail image to the original image to check for inconsistencies?
This question needs a fairly detailed answer, so I will give some references on the subject itself and share code for each part of your question:
You can use exif to verify the image metadata
For anomaly detection in images, see here
To compare the thumbnail image to the original image, read this, which shows how to compare two images using Python.
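As one minimal sketch of the metadata check, here is how the Software tag could be read and flagged with Pillow (the exif package mentioned above works similarly). The in-memory JPEG and the "Photoshop" rule are illustrative placeholders, not a complete validation scheme:

```python
import io
from PIL import Image
from PIL.ExifTags import TAGS

# Build a tiny in-memory JPEG carrying a "Software" EXIF tag (tag id 305)
# so the example is self-contained; in practice you'd open your own file.
exif = Image.Exif()
exif[305] = "Adobe Photoshop CS6"
buf = io.BytesIO()
Image.new("RGB", (8, 8)).save(buf, format="JPEG", exif=exif)

# Read the tags back by name and flag known editing software
reopened = Image.open(io.BytesIO(buf.getvalue()))
tags = {TAGS.get(k, k): v for k, v in reopened.getexif().items()}
suspicious = "photoshop" in str(tags.get("Software", "")).lower()
print(suspicious)
```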
References :
ccse.kfupm.edu.sa
github.com/redaelli
github.com/Ghirensics
www.amazon.com/Learning
books.google.com.tw
hal.inria.fr/
I'm trying to write an algorithm that finds the ROI of a latent fingerprint. There is no need for minutiae extraction or image enhancement (though some may be necessary); it only needs to find the region of the image that contains the actual fingerprint. I have tried several approaches, applying local ridge orientation, binarization, and thinning to try to identify the ROI, but my results have been lackluster at every turn. I've read a lot of books and papers on this subject, but they are vague at best. Any thoughts on how to achieve this? I'm currently programming in Python.
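One common baseline for this (not something the question itself tried, so treat it as an assumption): ridge areas have high local gray-level variance, so a block-wise standard-deviation threshold often separates the fingerprint from the flat background. A hedged sketch on synthetic data, with the block size and threshold as placeholders to tune:

```python
import numpy as np

# Block-variance heuristic: mark a block as fingerprint if the standard
# deviation of its gray levels exceeds a threshold. Parameters are guesses.
def fingerprint_roi(img, block=16, std_thresh=20.0):
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y + block, x:x + block]
            if patch.std() > std_thresh:
                mask[y:y + block, x:x + block] = True
    return mask

# Synthetic test image: sinusoidal "ridges" in one quadrant, flat elsewhere
img = np.full((64, 64), 128.0)
yy, xx = np.mgrid[0:32, 0:32]
img[:32, :32] = 128 + 100 * np.sin(xx / 2.0)
mask = fingerprint_roi(img)
```

On real latent prints this is usually combined with morphological closing of the block mask and a check on local orientation coherence, since background texture can also have high variance.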