So I have TEM images that look like this:
Is there a Python module I can use to help me analyze this image, in particular, detect the atoms (circles) in the picture?
The image quality of the TEMs is pretty bad, so I need an approach that is robust enough to distinguish what is an atom and what is not.
I can easily enough open the picture using PIL and do things with it, but I was hoping to find an algorithm that could detect the circles.
If there is not a tool like this, does anyone know how I would go about making my own algorithm to do this?
Here's an attempt to count the number of atoms in your picture using OpenCV. It's a somewhat hokey approach, but it yields decent results: first blur the picture a little, then threshold it, and then find the resulting contours.
Here's the code:
import cv2

image = cv2.imread('atoms.png')

# Convert to grayscale.
image2 = cv2.cvtColor(
    image,
    cv2.COLOR_BGR2GRAY,
)

# Blur to suppress noise before thresholding.
image2 = cv2.GaussianBlur(
    image2,
    ksize=(9, 9),
    sigmaX=8,
    sigmaY=8,
)
cv2.imwrite('blurred.png', image2)

# Inverted threshold so the darker atom regions become white blobs.
_, image2 = cv2.threshold(
    image2,
    thresh=95,
    maxval=255,
    type=cv2.THRESH_BINARY_INV,
)
cv2.imwrite('thresholded.png', image2)

# One external contour per blob.
contours, hier = cv2.findContours(
    image2,  # Note: findContours() changes the image.
    mode=cv2.RETR_EXTERNAL,
    method=cv2.CHAIN_APPROX_NONE,
)
print('Number of contours: {0}'.format(len(contours)))

# Draw the detected contours on the original image.
cv2.drawContours(
    image,
    contours=contours,
    contourIdx=-1,
    color=(0, 255, 0),
    thickness=2,
)
cv2.imwrite('augmented.png', image)
cv2.imshow('hello', image)
cv2.waitKey(-1)
And the stdout output was:
Number of contours: 46
Spend some time fiddling with the Gaussian Blur and threshold parameters, and I bet you can get even more accurate results.
For my thesis I'm working on a classifier (with TensorFlow) that can tell whether or not a heart trace contains an arrhythmia.
But I have a problem with the dataset. This is one image from my dataset:
The problem is that if we zoom in on the trace, we can see this:
The outline of the curve has some kind of gradient around it. Could someone tell me how to eliminate this artifact in Python and, ideally, how to increase the thickness of the stroke in order to highlight it?
Thanks a lot to everybody.
Update 1:
I'm trying the code below, which seems to solve the problem, but when I apply cv2.dilate the image comes out completely white.
import os

import cv2
import numpy as np
from tensorflow.keras.preprocessing import image  # assuming Keras' image utilities for load_img/img_to_array

for file in os.listdir("data/clean_test/original"):
    img = image.load_img("data/clean_test/original/" + file, color_mode="grayscale")
    img = image.img_to_array(img, dtype="uint8")
    # Adaptive Gaussian threshold to binarize the trace.
    thresh = cv2.adaptiveThreshold(
        img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 15, 100
    )
    kernel = np.ones((5, 5), np.uint8)
    dilation = cv2.dilate(thresh, kernel, iterations=1)
    print("Processed image: " + file)
    cv2.imwrite(
        "data/clean_test/new/" + os.path.splitext(file)[0] + ".png",
        thresh,
        [cv2.IMWRITE_PNG_COMPRESSION, 0],
    )
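One possible direction for the all-white result, as a sketch only and assuming the trace is dark on a light background: cv2.dilate grows the white pixels, so on a white-background binary image it wipes out the dark trace. Thresholding with THRESH_BINARY_INV makes the trace white first, so dilation thickens it, and the result can be inverted back at the end:

import cv2
import numpy as np

# Hypothetical single file, just to illustrate the idea.
img = cv2.imread("data/clean_test/original/example.png", cv2.IMREAD_GRAYSCALE)

# Inverted threshold: the dark trace becomes white, the background black.
thresh = cv2.adaptiveThreshold(
    img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 15, 10
)

# Dilation now thickens the white trace instead of the background.
kernel = np.ones((3, 3), np.uint8)
thick = cv2.dilate(thresh, kernel, iterations=1)

# Invert back to a dark trace on white, if that is what the classifier expects.
result = cv2.bitwise_not(thick)
cv2.imwrite("example_thick.png", result)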
I am very new to OpenCV (and to Stack Overflow). I'm writing a program with OpenCV which takes a picture of an object (e.g. a pen, rice or a phone placed on paper) and calculates what percentage of the picture the object occupies.
The problem I'm facing is that when I threshold the image (I tried adaptive and Otsu), the photo has a bit of shadow around the edges:
Original image
Resulted picture
And here's my code:
import cv2

img = cv2.imread("image.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Threshold the blue channel with Otsu's method.
b, g, r = cv2.split(img)
th, thresh = cv2.threshold(b, 100, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
cv2.imwrite("image_bl_wh.png", thresh)
I tried blurring and morphology, but couldn't make it work.
How can I make my program treat the dark parts around the picture as background, and is there a better and easier way to do it?
P.S. Sorry for my English grammar mistakes.
This is not a programmatic solution, but when you do automatic visual inspection it is the first thing you should try: improve your set-up. The image is simply darker around the edges, so increasing the brightness when recording the images should help.
If that's not an option, you could consider having an empty image for comparison. What you are trying to do is background segmentation, and there are better ways than simple color thresholding; they do, however, usually require at least one image of the background, or multiple images.
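For example, a minimal sketch of that reference-image idea, assuming you can take one shot of the empty paper (the file names and threshold value here are placeholders):

import cv2

# Hypothetical file names: one empty-background shot and one shot with the object.
bg = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

# Pixels that changed between the two shots belong to the object.
diff = cv2.absdiff(bg, scene)
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

# Object share of the picture, in percent.
percent = 100.0 * cv2.countNonZero(mask) / mask.size
print("Object covers {:.1f}% of the image".format(percent))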
If you want a software-only solution, you should try an edge detector combined with morphological operators.
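A minimal sketch of that edge-plus-morphology route (the Canny thresholds and kernel size are assumptions you would need to tune):

import cv2
import numpy as np

gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

# Edges of the object; the smooth shadow gradient near the paper edges
# produces much weaker edges than the object itself.
edges = cv2.Canny(gray, 50, 150)

# Close small gaps in the outline so the object forms one connected blob.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# Keep the largest contour as the object mask.
# (OpenCV 4.x returns (contours, hierarchy); add a leading _ on OpenCV 3.x.)
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
mask = np.zeros_like(gray)
if contours:
    biggest = max(contours, key=cv2.contourArea)
    cv2.drawContours(mask, [biggest], -1, 255, thickness=-1)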
I am currently trying to contour a human body from an image, but I am stuck right now.
I have taken different video lectures on contours, but they were related to objects like rectangles, circles and other simple shapes.
Can someone guide me on human body contours? This picture shows an example of the contour I am looking for.
You have to understand that detecting a human body is not so simple, because it is hard to differentiate the background from the body. That being said, if you have a simple background like the uploaded image, you can try to apply a number of image transformations (like applying a binary threshold, Otsu... look at the OpenCV documentation) to make your ROI "stand out" so you can detect it with cv2.findContours() - the same as drawing contours for circles, squares, etc. You can even apply cv2.Canny() (Canny edge detection), which detects a wide range of edges in the image, and then search for contours. Here is an example for your image (the results could be better if the image didn't already have a red contour surrounding the body). The steps are described in comments in the code. Note that this is very basic stuff and would not work in most cases, as human detection is a very difficult and broad problem.
Example:
import cv2

# Read the image and convert it to grayscale.
img = cv2.imread('human.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Search for edges in the image with cv2.Canny().
edges = cv2.Canny(img, 150, 200)

# Search for contours in the edged image with cv2.findContours().
# (OpenCV 3.x signature; drop the leading _ on OpenCV 4.x.)
_, contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

# Filter out contours that are not of interest by applying a size criterion.
for cnt in contours:
    size = cv2.contourArea(cnt)
    if size > 100:
        cv2.drawContours(img, [cnt], -1, (255, 0, 0), 3)

# Display the image.
cv2.imshow('img', img)
cv2.waitKey(0)
Result:
Here is another useful link in the OpenCV documentation regarding this subject: Background Subtraction. Hope it helps a bit and gives you an idea of how to proceed. Cheers!
My problem is that I want to differentiate the light and dark areas in the following image to generate a binary mask.
https://i.stack.imgur.com/7ZRKB.jpg
An approximation to the output can be this:
https://i.stack.imgur.com/2UuJb.jpg
I've tried a lot of things, but the results still have some noise or lose a lot of data, like in this image:
https://i.stack.imgur.com/hUyjY.png
I've used python with opencv and numpy, gaussian filters, opening, closing, etc...
Does somebody have an idea of how to do this?
Thanks in advance!
I first reduced the size of the image using pyrDown, then used CLAHE to equalize the histogram. I used a median blur, as this creates flat patches, and then applied opening 3 times. After that it was a simple binary_inv threshold. If you want the original image size back, use cv2.pyrUp on the image. By playing with the parameters you can get better results.
import cv2

# Load as grayscale and halve the resolution.
image = cv2.imread("7ZRKB.jpg", 0)
image = cv2.pyrDown(image)

# Local histogram equalization.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(16, 16))
image = clahe.apply(image)

# Median blur creates flat patches; opening then removes small bright noise.
image = cv2.medianBlur(image, 7)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
image = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel, iterations=3)

# Inverted binary threshold gives the final mask.
ret, image = cv2.threshold(image, 150, 255, cv2.THRESH_BINARY_INV)

cv2.imshow("image", image)
cv2.waitKey()
cv2.destroyAllWindows()
Result:
I have a rather challenging task and have spent a lot of time on it, but without satisfactory results.
The goal is to perform background subtraction for future people counting. I am doing this with Python 3 and OpenCV 3.3. I have applied cv2.createBackgroundSubtractorMOG2 but faced two main difficulties:
As the background is almost dark, and some people who walk through the video are wearing dark clothes, the subtractor is sometimes unable to detect them properly; it simply skips them (take a look at the image below). Converting the image from BGR to HSV made little difference, but I expect a better result is possible.
As you can see, a man in grey clothes is not detected well. How is it possible to improve this? If there are more efficient methods, please point me to them; I appreciate and welcome any help! Maybe it makes sense to use a stereo camera and try to process objects using image depth?
Another question that worries me is: what if a couple of people are close to each other during heavy traffic? Their regions will simply be merged and counted as one. What can be done in such a case?
Thanks in advance for any information!
UPD:
I performed histogram equalization on every channel of the image in HSV colorspace, but even now I am not able to pick up some people whose color is close to the background color.
Here is the updated code:
import cv2
import numpy as np
import imutils

cap = cv2.VideoCapture('test4.mp4')
clahe = cv2.createCLAHE(2.0, (8, 8))
history = 50
fgbg = cv2.createBackgroundSubtractorMOG2(history=history, detectShadows=True)

while cap.isOpened():
    frame = cap.read()[1]
    height, width = frame.shape[:2]  # shape is (rows, cols) = (height, width)
    frame = imutils.resize(frame, width=min(500, width))
    origin = frame.copy()

    # Equalize each HSV channel before feeding the frame to the subtractor.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    for channel in range(3):
        hsv[:, :, channel] = clahe.apply(hsv[:, :, channel])

    fgmask = fgbg.apply(hsv, learningRate=1 / history)
    blur = cv2.medianBlur(fgmask, 5)

    cv2.imshow('mask', fgmask)
    cv2.imshow('hsv', hsv)
    cv2.imshow('origin', origin)

    k = cv2.waitKey(30) & 0xff
    if k == 27 or k == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
I believe following the steps below will solve your first problem to a large extent:
1. Preprocessing:
Image preprocessing is crucial because a computer does not see an image the way we humans perceive it. Hence, it is always advisable to look for ways to enhance the image rather than working on it directly.
For the given image, the man in a jacket appears to have the same color as the background. I applied histogram equalization to all three channels of the image and merged them to get the following:
The man is slightly more visible than before.
2. Color Space:
Your choice of going with the HSV color space was right. But why restrict yourself to all three channels together? I extracted the hue channel alone and got the following:
3. Fine Tuning:
Now, on the image above, apply a suitable threshold and follow it up with a morphological erosion operation to get a better silhouette of the man in the frame (a sketch of these steps is below).
Note: To address your second problem, you can also apply some morphological operations after thresholding.
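A minimal sketch of steps 1-3 above, assuming a single frame read from a hypothetical frame.png; the Otsu threshold and the kernel size are guesses that would need tuning on your own footage:

import cv2

# Hypothetical single frame grabbed from the video.
frame = cv2.imread('frame.png')

# 1. Preprocessing: equalize every HSV channel to lift low-contrast people.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
for c in range(3):
    hsv[:, :, c] = cv2.equalizeHist(hsv[:, :, c])

# 2. Color space: work with the hue channel alone.
hue = hsv[:, :, 0]

# 3. Fine tuning: Otsu threshold followed by an erosion to clean the silhouette.
_, mask = cv2.threshold(hue, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
mask = cv2.erode(mask, kernel, iterations=2)

cv2.imshow('silhouette', mask)
cv2.waitKey(0)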