I am working on some code in Python and I came across a figure in a report that I would like to replicate.
Basically I would like to draw a bounding box onto the original image, and then crop and display the part of the image inside the bounding box (basically to "magnify" that section).
I've been googling but I can't seem to find the right function to achieve this. Currently I use OpenCV to read my image, but if there is a function in matplotlib that does this, you can suggest that too.
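To make it concrete, this is roughly what I imagine the code might look like (just a sketch; the path and box coordinates are placeholders):

import cv2
import matplotlib.pyplot as plt

# placeholder path and box; (x, y, w, h) is the region I want to magnify
img = cv2.cvtColor(cv2.imread("image.jpg"), cv2.COLOR_BGR2RGB)
x, y, w, h = 100, 150, 200, 200

boxed = img.copy()
cv2.rectangle(boxed, (x, y), (x + w, y + h), (255, 0, 0), 2)  # draw the bounding box
crop = img[y:y + h, x:x + w]                                  # crop via numpy slicing

fig, (ax_full, ax_zoom) = plt.subplots(1, 2)
ax_full.imshow(boxed)   # original image with the box drawn on it
ax_zoom.imshow(crop)    # the cropped region, displayed larger ("magnified")
plt.show()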
Thank you for your help!
This is my first ever post on Stack Overflow.
I want to create an image which I'll be using as a featured image on WordPress; the image will have an icon and text on a gradient background.
The icon will show on the left and the text on the right (side by side). I've tried searching everywhere but found nothing relevant.
If any expert can show some example code, that would be a great help. <3
Below I'm attaching an example of the image I want to create.
[example image]
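To be concrete, something along these lines is what I'm hoping for (a rough sketch assuming Pillow; the paths, sizes, colours and text are just placeholders):

from PIL import Image, ImageDraw, ImageFont

W, H = 1200, 630
img = Image.new("RGB", (W, H))
draw = ImageDraw.Draw(img)

# left-to-right gradient between two placeholder colours
c1, c2 = (58, 12, 163), (247, 37, 133)
for x in range(W):
    t = x / (W - 1)
    draw.line([(x, 0), (x, H)], fill=tuple(int(a + (b - a) * t) for a, b in zip(c1, c2)))

# icon on the left (placeholder path; needs an alpha channel so it can act as its own mask)
icon = Image.open("icon.png").convert("RGBA").resize((256, 256))
img.paste(icon, (120, (H - 256) // 2), icon)

# text on the right (placeholder font path; anchor="lm" needs a reasonably recent Pillow)
font = ImageFont.truetype("DejaVuSans-Bold.ttf", 72)
draw.text((450, H // 2), "Post title here", font=font, fill="white", anchor="lm")

img.save("feature.png")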
I have been browsing the internet and Stack Overflow in order to find a solution to my problem, but to no avail.
So here is my problem:
Problem
I have a series of images with specific ROIs, where I detect a signal change. In order to extract the signal I need to subtract the background of the image from the actual signal. Unfortunately I can't just subtract the images, as this doesn't remove the background noise sufficiently.
Solution Idea
What I want to do is to cut out (black out) my ROIs and then do an interpolation across the entire "reduced" image. Then I want to fill in the blacked-out ROIs again via interpolation. This way I can get an idea of what the background below my signal is actually doing. I have been playing around with griddata and RectBivariateSpline, but I haven't found a way that works.
So far I have been doing this in MATLAB with the function scatteredInterpolant, but I would like to do it in Python.
Below is an image series that describes the concept. One can see that the third image is slightly blurry in the previously blacked-out ROIs.
[Image processing concept]
So, does Python provide a solution similar to MATLAB's scatteredInterpolant, or how could I best tackle this problem?
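Is something along these lines the right way to use griddata for this? (A sketch of what I have been trying; roi_mask is a boolean array that is True for the blacked-out ROI pixels.)

import numpy as np
from scipy.interpolate import griddata

def estimate_background(image, roi_mask):
    # interpolate the blacked-out ROI pixels from the surrounding background pixels
    yy, xx = np.indices(image.shape)
    known = ~roi_mask
    filled = image.astype(float).copy()
    filled[roi_mask] = griddata(
        (yy[known], xx[known]),         # coordinates of pixels outside the ROIs
        image[known],                   # their values
        (yy[roi_mask], xx[roi_mask]),   # coordinates to fill in
        method="linear",
    )
    return filled

As far as I can tell, griddata with method="linear" uses scipy's LinearNDInterpolator, which seems to be the closest equivalent to scatteredInterpolant, but it gets slow when I pass every background pixel of a full image.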
Thank you.
I am trying to extract a subimage from a scanned paper like this:
https://cloud.kopa.ch/index.php/s/gGZm5xeMYlPfU81
The extracted images should be georeferenced and added to a webmap service, but that's not the question here.
How can I get the frame / its pixel coordinates to crop the image?
I am also free to design the "layout" (similar to the example), which means I could add markers to detect the frame more reliably after scanning.
The workflow is:
generate layout - print map - draw on the map - scan it - crop "map-frame" - georeferencing this frame - show it on a webmap
The "map-frames" are preprocessed and I know their location/extent
Has anybody an idea how to crop the (scanned) images automatically to this "map-frame"?
I have to work with python and have the packages PIL, pillow and imagemagick for the image processing
Thanks for you help!
If you need more information, don't hesitate to ask
Here's an example I adapted from the Pillow docs; check them out for any further processing that you might need to perform:
from PIL import Image

im = Image.open("/path/to/image.jpg")
box = (100, 100, 400, 400)  # (left, upper, right, lower) in pixels
region = im.crop(box)
Also, it might prove valuable to search Stack Overflow for this kind of operation; I'm sure it has been discussed before.
As for finding the actual rectangle to crop, you'll have to do some form of image analysis. In its simplest form, conceptually that could be something along these lines (see the sketch after this list):
Apply an S-curve filter to a black-and-white representation of your image.
Iterate over all of the pixels in the image.
Keep track of the horizontal and vertical lines that have sufficiently black pixel values.
Use this data to determine the bounding box of the portion of the image you're interested in.
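A minimal sketch of that idea, skipping the S-curve step and simply thresholding a greyscale copy (assuming a dark frame on a light background; the path and threshold are placeholders you'd have to tune):

import numpy as np
from PIL import Image

im = Image.open("/path/to/scan.jpg").convert("L")  # greyscale copy of the scan
arr = np.asarray(im)
dark = arr < 80                                    # "sufficiently black"; threshold is a guess

rows = np.any(dark, axis=1)                        # rows that contain dark pixels
cols = np.any(dark, axis=0)                        # columns that contain dark pixels
top, bottom = np.argmax(rows), len(rows) - 1 - np.argmax(rows[::-1])
left, right = np.argmax(cols), len(cols) - 1 - np.argmax(cols[::-1])

frame = im.crop((left, top, right + 1, bottom + 1))  # bounding box of the dark pixels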
Depending on your needs you might want to look into a computer vision library instead, as these are well optimized for this and similar tasks. The one that springs to mind is OpenCV, which I would guess is well optimized and documented, and there's a Python module available as well.
I am trying to write code which, given an image, will run Tesseract on the entire image and return a map of all locations where text was detected (as a binary image).
It doesn't have to be pixel-by-pixel; a union of bounding boxes is more than enough.
Is there a way to do this?
Thanks in advance
Yes... (of course). Look at the Python Imaging Library (Pillow) for loading the image and cropping it. Then you can apply Tesseract on each piece and check the output.
Have a look at the program from an answer I posted a while back. It might help you with the elements you need. It lets you manually select an area and OCR it, but that can be easily changed.
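For example, a rough sketch of the crop-and-check idea (assuming pytesseract and Pillow are installed; the path and tile size are arbitrary placeholders):

import numpy as np
import pytesseract
from PIL import Image

im = Image.open("page.png")   # placeholder path
tile = 200                    # arbitrary tile size in pixels
mask = np.zeros((im.height, im.width), dtype=np.uint8)

for top in range(0, im.height, tile):
    for left in range(0, im.width, tile):
        box = (left, top, min(left + tile, im.width), min(top + tile, im.height))
        text = pytesseract.image_to_string(im.crop(box)).strip()
        if text:  # any recognised text marks the whole tile in the binary map
            mask[box[1]:box[3], box[0]:box[2]] = 255

Image.fromarray(mask).save("text_mask.png")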
There is an image and multiple sets of pixel coordinates. Each set corresponds to a polygon.
The problem at hand is to overlay all the polygons onto the image, adjust their shapes/sizes to cover specific areas in the image and then save the resulting view of the image.
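For the plain "overlay and save" part I can picture something like this (a sketch with Pillow; the path and coordinates are made up), but adjusting the shapes/sizes to fit the areas is where I'm stuck:

from PIL import Image, ImageDraw

im = Image.open("image.jpg").convert("RGB")          # placeholder path
polygons = [
    [(50, 60), (200, 80), (180, 220), (40, 200)],    # made-up coordinates, one list per polygon
    [(300, 100), (420, 120), (380, 260)],
]
draw = ImageDraw.Draw(im)
for poly in polygons:
    draw.polygon(poly, outline=(255, 0, 0))          # draw each polygon outline onto the image
im.save("overlay.png")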
Suggestions on how to get started on this would be highly appreciated!
I realize I was talking about image annotation. I found 'sloth' and am giving it a try.
https://cvhci.anthropomatik.kit.edu/~baeuml/projects/a-universal-labeling-tool-for-computer-vision-sloth/