Clear a heart trace with Python [closed]

Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 5 days ago.
For my thesis I'm working on a classifier (with TensorFlow) that determines whether or not a heart trace contains an arrhythmia.
But I have a problem with the dataset. This is one image from my dataset:
The problem is that if we zoom in on the trace we can see this:
The outline of the curve has some kind of gradient around it. Could someone tell me how to eliminate this artifact in Python and, ideally, how to increase the thickness of the stroke in order to highlight it?
Thanks a lot to everybody
Update 1:
I'm trying the code below, which seems to resolve the problem, but when I apply cv2.dilate the image appears completely white.
import os

import cv2
import numpy as np
from tensorflow.keras.preprocessing import image

for file in os.listdir("data/clean_test/original"):
    img = image.load_img("data/clean_test/original/" + file, color_mode="grayscale")
    img = image.img_to_array(img, dtype="uint8")
    # Adaptive Gaussian threshold to binarise the trace
    thresh = cv2.adaptiveThreshold(
        img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 15, 100
    )
    kernel = np.ones((5, 5), np.uint8)
    dilation = cv2.dilate(thresh, kernel, iterations=1)
    print("Processed image: " + file)
    cv2.imwrite(
        "data/clean_test/new/" + os.path.splitext(file)[0] + ".png",
        thresh,
        [cv2.IMWRITE_PNG_COMPRESSION, 0],
    )

Related

Is there a way to convert an image to grayscale with OpenCV? [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed last month.
I am trying to convert an image into grayscale using Python and cv2. I saw a few other answers, but they were using MATLAB, which I am not going to use. Is there any way I can fix this issue? The image loads fine and everything, it just won't convert. Here is the code.
import cv2
# Choose an image to detect faces in
img = cv2.imread('RDJ.png')
# Must convert to grayscale
grayscaled_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#
cv2.imshow('Face Detector', img)
cv2.waitKey()
I have tried to fix it in different ways but I can't figure out a solution.
You need to change the penultimate line of your code:
cv2.imshow('Face Detector', grayscaled_img)
because the image currently being shown is the original, not the grayscale one.

Trying to understand how img = np.zeros((300,512,3), np.uint8) gives us a black window [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 2 years ago.
I am a naive Python coder so pardon my ignorance.
I wanted to know how img = np.zeros((300,512,3), np.uint8) generates a black window in OpenCV. Also, could someone help me understand the importance of channels? In my case, (300,512,3), the '3' is the number of channels.
I tried googling it and found https://answers.opencv.org/question/74576/how-does-npzeros-create-a-black-background/ but I am still confused!
Thanks in advance guys!!
The line
img = np.zeros((300,512,3), np.uint8)
creates a 3D array that is 300 rows high, 512 columns wide, and 3 "channels" deep. Each channel corresponds to the amount of red, green, or blue intensity.
np.zeros
means that this array will be completely filled with 0's.
0 intensity for red, green, and blue translates into a black image.
This type of encoding is commonly called RGB, because each channel represents the intensity of red, green, or blue (note that OpenCV itself orders the channels as BGR).
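To make this concrete, here is a small sketch (plain NumPy, no window needed) showing that the array really is all zeros and what the three channels mean -- keeping in mind that BGR ordering is an OpenCV convention, not something NumPy knows about:

```python
import numpy as np

# 300 rows (height), 512 columns (width), 3 colour values per pixel.
img = np.zeros((300, 512, 3), np.uint8)

print(img.shape)  # (300, 512, 3)
print(img.max())  # 0 -- every channel of every pixel is 0, i.e. black

# Setting all three channels of a pixel to 255 makes it white; in OpenCV's
# BGR order, (255, 0, 0) would instead be pure blue.
img[0, 0] = (255, 255, 255)
print(img[0, 0].tolist())  # [255, 255, 255]
```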

How to contour a human body in an image using OpenCV? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I am currently trying to contour a human body in an image, but I am stuck right now.
I have watched different video lectures on contours, but they were about objects like rectangles, circles and other simple shapes.
Can someone guide me on human body contours? This picture shows an example of the contour I am looking for.
You have to understand that detecting a human body is not so simple, because it is hard to differentiate the background from the body. That being said, if you have a simple background like the uploaded image, you can apply a number of image transformations (binary threshold, Otsu... see the OpenCV documentation) to make your ROI "stand out" so you can detect it with cv2.findContours() - the same as drawing contours for circles, squares, etc. You can even apply cv2.Canny() (Canny edge detection), which detects a wide range of edges in the image, and then search for contours. Here is an example for your image (the results could be better if the image didn't already have a red contour surrounding the body). The steps are described in comments in the code. Note that this is very basic stuff and would not work in most cases, as human detection is a very difficult and broad problem.
Example:
import cv2

# Read the image and convert it to grayscale.
img = cv2.imread('human.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Search for edges in the grayscale image with cv2.Canny().
edges = cv2.Canny(gray, 150, 200)
# Search for contours in the edged image with cv2.findContours().
# (OpenCV 4 returns two values here; OpenCV 3 returned three.)
contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
# Filter out contours that are not of interest by applying a size criterion.
for cnt in contours:
    size = cv2.contourArea(cnt)
    if size > 100:
        cv2.drawContours(img, [cnt], -1, (255, 0, 0), 3)
# Display the image.
cv2.imshow('img', img)
cv2.waitKey(0)
Result:
Here is another useful link in the OpenCV documentation regarding this subject: Background Subtraction. Hope it helps a bit and gives you an idea of how to proceed. Cheers!

Differentiate dark and light areas in image [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
My problem is that I want to differentiate the light and dark areas in the following image to generate a binary mask.
https://i.stack.imgur.com/7ZRKB.jpg
An approximation to the output can be this:
https://i.stack.imgur.com/2UuJb.jpg
I've tried a lot of things, but the results still have some noise or I lose a lot of data, like in this image:
https://i.stack.imgur.com/hUyjY.png
I've used python with opencv and numpy, gaussian filters, opening, closing, etc...
Does somebody have an idea of how to do this?
Thanks in advance!
I first reduced the size of the image using pyrDown, then used CLAHE to equalize the histogram. I used medianBlur, as this will create patches, then applied opening 3 times. After that it was a simple THRESH_BINARY_INV threshold. If you want to get back to the original image size, use cv2.pyrUp on the image. By playing with the parameters you can manage to get better results.
import cv2

image = cv2.imread("7ZRKB.jpg", 0)  # read as grayscale
image = cv2.pyrDown(image)  # halve the resolution
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(16, 16))
image = clahe.apply(image)  # local histogram equalization
image = cv2.medianBlur(image, 7)  # smooth the equalized image into patches
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
image = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel, iterations=3)  # remove small specks
ret, image = cv2.threshold(image, 150, 255, cv2.THRESH_BINARY_INV)  # final binary mask
cv2.imshow("image", image)
cv2.waitKey()
cv2.destroyAllWindows()
Result:

Good Python tools for object detection (circles) in blurry images [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
So I have TEM images, that look like this:
Is there a Python module I can use to help me analyze this image, in particular, detect the atoms (circles) in the picture?
The image quality of the TEMs is pretty bad so I need an approach that is robust enough to distinguish what is an atom and what is not.
I can easily enough open the picture using PIL and do things with it, but I was hoping to find an algorithm that could detect the circles.
If there is not a tool like this, does anyone know how I would go about making my own algorithm to do this?
Here's an attempt to count the number of atoms in your picture, using OpenCV. It's sort of a hokey approach but yields decent results. First blur the picture a little bit, then threshold it, and then find resulting contours.
Here's the code:
import cv2
image = cv2.imread('atoms.png')
image2 = cv2.cvtColor(
image,
cv2.COLOR_BGR2GRAY,
)
image2 = cv2.GaussianBlur(
image2,
ksize=(9,9),
sigmaX=8,
sigmaY=8,
)
cv2.imwrite('blurred.png', image2)
_, image2 = cv2.threshold(
image2,
thresh=95,
maxval=255,
type=cv2.THRESH_BINARY_INV,
)
cv2.imwrite('thresholded.png', image2)
contours, hier = cv2.findContours(
image2, # Note: findContours() changes the image.
mode=cv2.RETR_EXTERNAL,
method=cv2.CHAIN_APPROX_NONE,
)
print('Number of contours: {0}'.format(len(contours)))
cv2.drawContours(
image,
contours=contours,
contourIdx=-1,
color=(0,255,0),
thickness=2,
)
cv2.imwrite('augmented.png', image)
cv2.imshow('hello', image)
cv2.waitKey(-1)
And the stdout output was:
Number of contours: 46
Spend some time fiddling with the Gaussian Blur and threshold parameters, and I bet you can get even more accurate results.
