I am analysing a greyscale recording of synapses, from which I would like to automatically extract regions of interest (ROIs) as sets of small cut-outs of the whole animation, so that I can both track and compensate for the movement of the microscope and profile a particular ROI along the Z axis. This means I need to scan through the image, identify ROIs, and match them across frames, exporting the result as a set of frames. Common ROI detection techniques (filtering, averaging over frames via Markov or Fourier methods and then matching points) produce images that are too blurred or skewed to be used for further analysis, and they cannot handle the amount of motion in the recording, so I am trying to come up with a different ROI tracing and extraction mechanism. Any ideas?
To illustrate:
Video (originally huge tiff file, compressed to gif for upload):
Something along the lines of result
(This is a cut made in ImageJ. Ideally my program would either track my ROI or simply cut out a large enough region to capture most of the ROI's on-screen appearance, but I'm not sure what else can be done.)
Related
I'm currently working on an OpenCV/Python-based image processing project. Essentially it's quite simple: it uses cv2.matchTemplate and a database of images to make all the matches, but I think something is being done wrong.
The images come from a few cameras; then the processing starts. Based on predefined masks, the ROI of each object is extracted and then matched against a database of images.
The problem is that the database crops are the same size as the predefined ROI, which means cv2.matchTemplate is always trying to match an image of exactly the same size as the one in the database.
Here's an example with a printed circuit board (which is what my project does, it looks for bad positioned components/missing components).
Simple circuit board
This would be the full raw image, taken from one of the cameras.
Next, a predefined filter is applied to the raw image; I usually convert it to grayscale.
Now, my predefined ROI would crop the image.
Image ROI
Image crop
Notice, though, that it crops far more than the component itself, and this crop is saved in the database for eventual matches.
But shouldn't I be manually feeding my database with properly cropped images? I mean, is the current approach ideal? Wouldn't it take far more matches (images) to reach my desired threshold?
I need arguments to convince someone of this: I want to feed my database with well-cropped images, but this person just wants to save them automatically.
Thanks!
I have a database of original images, and for each original image there are various cropped versions.
This is an example of what the images look like:
Original
Horizontal Crop
Square Crop
This is a very simple example, but most images are like this; some may take a smaller section of the original image than others.
I was looking at OpenCV in Python, but I'm very new to this kind of image processing.
The idea is to save the cropping information separately from the image to save space, and then generate all the crops and aspect ratios on the fly with a cache system instead.
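A minimal sketch of that metadata idea (a NumPy array stands in for a decoded image; the box format and names are just illustrative): store only crop rectangles and slice on demand.

```python
import numpy as np

# Stand-in for a decoded original image.
original = np.arange(100 * 150).reshape(100, 150)

# Crop metadata stored instead of the cropped files themselves:
# (left, top, right, bottom) in pixels.
crops = {
    "horizontal": (0, 30, 150, 70),   # full-width horizontal band
    "square": (25, 0, 125, 100),      # square section
}

def make_crop(img, box):
    """Generate a cropped view from the original plus a stored rectangle."""
    left, top, right, bottom = box
    return img[top:bottom, left:right]

print(make_crop(original, crops["square"]).shape)   # → (100, 100)
```

A cache layer (keyed, say, by image id plus box) would then avoid re-decoding the original for repeated requests.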
The method you are looking for is called "template matching". You can find examples here:
https://docs.opencv.org/trunk/d4/dc6/tutorial_py_template_matching.html
For your problem, given the large images, it might be a good idea to constrain the search space by resizing both images by the same factor. The position found this way isn't as precise, but it lets you restrict the full-resolution search to a small region around that point.
First time posting here.
I am working on image processing for a bioengineering project, using Python and mostly the skimage package to batch-process the images we have taken. There are 6-8 tubes in each image, through which cells flow. At each time point we captured normal bright-field images as well as fluorescence images. I am trying to identify the tubes from the bright-field images, separate them from the background, and label them. The masked/labeled images will be used for downstream processing of the fluorescence images, where we identify cells and extract their shape metrics.
tl;dr: Python; skimage; image processing; separate tube-like structures from background in a bright field image.
I will use one example to show what I have done. I wanted to include all the intermediate images, but I do not have enough reputation points to post more than two images, so I will show only the first and last images.
I cropped out the scale bar first and obtained a greyscale image; here is the resulting image: bf_image
I used the sobel_h filter to find the horizontal edges.

from skimage.filters import sobel_h
from skimage import io

bf_sobel = sobel_h(bf_cropped)
io.imshow(bf_sobel)
I then tried all thresholds and picked the threshold algorithm that looked best to me (Otsu).

from skimage.filters import try_all_threshold, threshold_otsu
import matplotlib.pyplot as plt

fig, ax = try_all_threshold(bf_sobel, figsize=(25, 20), verbose=False)
plt.show()

bf_threshold = threshold_otsu(bf_sobel)
bf_thresholded = bf_sobel > bf_threshold
io.imshow(bf_thresholded)
I then applied the closing function and remove_small_objects.

from skimage.morphology import closing, remove_small_objects

bf_closed = closing(bf_thresholded)
bf_small_removed = remove_small_objects(bf_closed, 50)
io.imshow(bf_small_removed)
bf_small_removed
This is where I got stuck. I am trying to fill the gaps between the tube edges, and create masks for individual tubes to separate them from the background. Any advice? Thanks!!!
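A minimal sketch of one possible direction for the gap-filling step, using scipy.ndimage (which skimage builds on); the toy edge map below merely stands in for the real thresholded Sobel output:

```python
import numpy as np
from scipy import ndimage as ndi

# Toy stand-in for a thresholded edge image: one rectangular "tube"
# whose top and bottom edges are connected at the ends.
edges = np.zeros((60, 80), dtype=bool)
edges[10, 5:75] = True     # top edge of the tube
edges[30, 5:75] = True     # bottom edge
edges[10:31, 5] = True     # close the left end
edges[10:31, 74] = True    # close the right end

# Fill the region enclosed by the edges, then label connected components
# so each tube becomes its own integer mask.
filled = ndi.binary_fill_holes(edges)
labels, n_tubes = ndi.label(filled)
print(n_tubes)   # → 1
```

On real data the edges rarely close on their own, so a binary_dilation (or closing with a larger footprint) before binary_fill_holes is usually needed to seal the tube outlines first.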
I have a bunch of scanned images of documents with the same layout (strict forms filled out with variable data) that I need to process with OCR. I can more or less cope with the OCR process itself (converting text images to text), but I still have to deal with the annoying fact that the scanned images are distorted by different degrees of rotation, different scaling, or both.
Because my method reads pieces of information from cells defined as pixel bounding boxes, I must convert all pictures to a "standard" version where every corresponding cell is in the same pixel position; otherwise my reader misreads. My question is: how could I "normalize" the distorted images?
I use Python.
In today's high-volume form-scanning jobs we use commercial software with adaptive template matching, which deskews and selectively binarizes the images to prepare them, but then adapts the field boxes per image rather than placing boxes at fixed XY locations.
The deskewing process generally increases the image size. This is visible in this random image from an online search:
https://github.com/tesseract-ocr/tesseract/wiki/skew-linedetection.png
Notice how the title of the document was near the top border, and in the deskewed image it is shifted down. In this oversimplified example, an XY-based box would not catch it.
I use commercial software for deskewing and image pre-processing; it is quite inexpensive but good. Unfortunately, I believe it will only take you part of the way if your data-capture method relies on matching fields by XY coordinates. I understand the frustration of dealing with this; that is exactly why appropriate tools were created to handle it.
I run a service bureau for this kind of form processing. If you are interested, I can privately share more about how we process them.
Okay, so I am trying to find the homography of a soccer match. What I have so far:

1. Read images from a folder, which is basically many cropped images of a template soccer field (center circle, penalty lines, etc.).
2. Read the video stream from a file and crop it into many smaller segments.
3. Loop over the images in the video stream, and inside that, loop over the images read from the folder.
4. For each pair of images obtained through iteration, apply a green filter, based on my assumption that the field is green.
5. Use ORB to find keypoints and then find matches.
Now the problem is that, because of the players and some noise from the crowd, I am unable to find proper matches for the homography. Removing them is also a problem, because that tends to hide the soccer field lines that I need to calculate the homography from.
Any suggestions are greatly appreciated. Below are some sample code and images that I am using.
"Code being used"
Sample images
Output that i am getting
The image on the right of the output is a frame from the video, and the one on the left is the same sample image I uploaded, after the filterGreen function, as can be seen in the code.
Finally, what I want is for the image to map properly to the center circle so I can draw a cube in the center, somewhat similar to "This example". Thanks in advance for helping me out.
An interesting technique to throw at this problem is RASL. It computes homographies that align stacks of related images. It does not require you to specify corresponding points on the images, but operates directly on the image pixels. It is robust against image occlusions (e.g., players moving in the foreground).
I've just released a Python implementation here: https://github.com/welch/rasl
(there are also links there to the original RASL paper, MATLAB implementation, and data).
I am unsure if you'd want to crop the input images to that center circle, or if the entire frames can be aligned. Try both and see.