I am a beginner working on a script that will mask out a section of an image and then feather the edges. So far, I can produce a feathered edge and an original masked image, but when I add them together, a lot of the original hard edge is still visible. What is the best way to subtract or replace the edge of my original masked image with the new, blurred one? (I don't want to blur the center of the masked image, just feather the edges.)
Here is my basic test code for two images:
import cv2
import numpy as np
from google.colab.patches import cv2_imshow  # running in Colab

image = cv2.imread('/content/drive/MyDrive/image')

# mask1 is a 30 px thick ring; mask2 is a solid disc
mask1 = np.zeros(image.shape[:2], dtype="uint8")
cv2.circle(mask1, (500, 500), 500, 255, 30)
mask2 = np.zeros(image.shape[:2], dtype="uint8")
cv2.circle(mask2, (500, 500), 500, 255, -1)

masked = cv2.bitwise_and(image, image, mask=mask1)
masked2 = cv2.bitwise_and(image, image, mask=mask2)

# Blur only the ring image to get a feathered edge
feathered = cv2.GaussianBlur(masked, (21, 21), 50)

# imread returns a 3-channel BGR image, so this slice is a no-op
image_without_alpha = masked2[:, :, :3]
final = cv2.addWeighted(image_without_alpha, .5, feathered, .8, 1)
cv2_imshow(final)
Any suggested improvements would be awesome. Thanks!
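For reference, a minimal sketch of one common approach (not the code above): blur the mask itself and use it as a per-pixel alpha, so the interior stays sharp and only the rim is feathered.

import cv2
import numpy as np
from google.colab.patches import cv2_imshow  # assuming Colab, as above

image = cv2.imread('/content/drive/MyDrive/image')

# Solid disc mask, as in the code above
mask = np.zeros(image.shape[:2], dtype="uint8")
cv2.circle(mask, (500, 500), 500, 255, -1)

# Blur the mask, not the image: the gradient at the mask edge becomes the feather
alpha = cv2.GaussianBlur(mask, (51, 51), 0).astype(float) / 255.0
alpha = alpha[:, :, np.newaxis]  # broadcast over the three color channels

# Interior (alpha == 1) is untouched; only the rim fades to black
final = (image.astype(float) * alpha).astype(np.uint8)
cv2_imshow(final)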
I have an image I need to draw a bounding box around, and I'm trying to use the code at the bottom of this post.
The issue is that I have tried to blur the blue box shape to remove its details, e.g.
cv2.blur(img, (20, 20))
but the blurred image doesn't seem to have a good enough edge to produce a bounding box.
I have found that my code below works just fine on a plain blue square with sharp edges of the same size as the image below. It seems the detail within the blue area stops a bounding box from being drawn.
I was hoping someone could tell me how to get this to work please.
import cv2
# Load the image - container completely blurred out so no hinges,locking bars , writing etc are visible, just a blank blue square
img = cv2.imread('blue_object.jpg')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edged = cv2.Canny(gray, 120, 890)  # use the grayscale image computed above
# Apply adaptive threshold
thresh = cv2.adaptiveThreshold(edged, 255, 1, 1, 11, 2)
thresh_color = cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR)
# apply some dilation and erosion to join the gaps - change iteration to detect more or less area's
thresh = cv2.dilate(thresh,None,iterations = 50)
thresh = cv2.erode(thresh,None,iterations = 50)
# Find the contours
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
# For each contour, find the bounding rectangle and draw it
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > 50000:
        x, y, w, h = cv2.boundingRect(cnt)
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
The image below shows the yellow container after applying a lower HSV mask range of (8, 0, 0) and an upper range of (78, 255, 255). Trees are above and to the top right of the container, so it is hard to separate the trees from the container to put a proper bounding box around it. Happy to move to chat if that helps.
You're converting to gray, throwing all that valuable color information away. You're also Canny-ing, which is generally a bad idea. Beginners don't have the judgment to apply Canny sensibly. Best stay away from it.
This can be solved with the usual approach:
transform colorspace into something useful, say HSV
inRange on well-saturated blue
some morphology operations to clean up debris
bounding box
That is assuming you are looking for a blue container, or any well-saturated color really.
import cv2 as cv

im = cv.imread("i4nPk.jpg")
hsv = cv.cvtColor(im, cv.COLOR_BGR2HSV)

# Hue 90-180 spans cyan through blue; saturation >= 84 keeps only well-saturated pixels
lower = (90, 84, 0)
upper = (180, 255, 255)
mask1 = cv.inRange(hsv, lower, upper)

# Erode a little to clean up debris
mask2 = cv.erode(mask1, kernel=None, iterations=2)

(x, y, w, h) = cv.boundingRect(mask2)  # yes, that works on masks too
canvas = cv.cvtColor(mask2, cv.COLOR_GRAY2BGR)
cv.rectangle(canvas, (x, y), (x + w, y + h), color=(0, 0, 255), thickness=3)
I have a microscopy image and need to calculate the area shown in red. The idea is to build a function that returns the area inside the red line in the photo on the right (a float value, X mm²).
Since I have almost no experience in image processing, I don't know how to approach this (maybe silly) problem, so I'm looking for help. Other image examples are pretty similar, with just one agglomerated "interest area" close to the center.
I'm comfortable coding in Python and have used the software ImageJ for some time.
Any Python package, software, bibliography, etc. would help.
Thanks!
EDIT:
I made the red example manually, just so people understand what I want. Detecting the "interest area" must be done inside the code.
Canny, morphological transformations, and contours can provide a decent result, although it might need some fine-tuning depending on the input images.
import numpy as np
import cv2
# Change this to your filename
image = cv2.imread('test.png', cv2.IMREAD_GRAYSCALE)
# You can fine-tune this, or try with simple threshold
canny = cv2.Canny(image, 50, 580)
# Morphological Transformations
se = np.ones((7,7), dtype='uint8')
image_close = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, se)
contours, _ = cv2.findContours(image_close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Create black canvas to draw area on
mask = np.zeros(image.shape[:2], np.uint8)
biggest_contour = max(contours, key=cv2.contourArea)
# Wrap the contour in a list so drawContours fills it instead of plotting its points
cv2.drawContours(mask, [biggest_contour], -1, 255, -1)
area = cv2.contourArea(biggest_contour)
print(f"Total area image: {image.shape[0] * image.shape[1]} pixels")
print(f"Total area contour: {area} pixels")
cv2.imwrite('mask.png', mask)
cv2.imshow('img', mask)
cv2.waitKey(0)
# Draw the contour on the original image for demonstration purposes
original = cv2.imread('test.png')
cv2.drawContours(original, [biggest_contour], -1, (0, 0, 255), -1)
cv2.imwrite('result.png', original)
cv2.imshow('result', original)
cv2.waitKey(0)
The code produces the following output:
Total area image: 332628 pixels
Total area contour: 85894.5 pixels
The only thing left to do is convert the pixels to your preferred unit of measurement, as sketched below.
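For example (a sketch; mm_per_px is a hypothetical calibration factor, e.g. read off the microscope's scale bar):

# Hypothetical calibration: millimetres per pixel, e.g. from the scale bar
mm_per_px = 0.005

# `area` is the contour area in pixels; square the factor for mm^2
area_mm2 = area * mm_per_px ** 2
print(f"Total area contour: {area_mm2:.3f} mm^2")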
Two images for demonstration below: the result and the mask.
I don't know much about this topic, but it seems like a very similar question to this one, which has what look like two great answers:
Python: calculate an area within an irregular contour line
[image: gear]
I want to generalize the method so that any type of noise inside the gear can be removed. I am using OpenCV with Python.
I have already tried lots of filters and noise-removal methods, but I am not getting proper output. Here is my code:
import cv2
import numpy as np

img1 = cv2.imread("5cam.png")
img = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
rows, cols = img.shape

# Denoise (note: the second argument of fastNlMeansDenoising is the output
# array, so pass None there and the filter strength h after it)
dst = cv2.fastNlMeansDenoising(img, None, 15, 7, 21)
gaussian_blurred_images = cv2.GaussianBlur(dst, (9, 9), 0)
_, thresh = cv2.threshold(gaussian_blurred_images, 200, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # with THRESH_OTSU set, the 200 is ignored and Otsu picks the threshold
kernel = np.ones((7, 7), np.uint8)
dilation = cv2.dilate(thresh, kernel)
canny = cv2.Canny(dilation, 200, 255)
contours = cv2.findContours(canny, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_NONE)[0]
areas1 = []
for ctr in contours:
    areas = cv2.contourArea(ctr)
    areas1.append(areas)
amax = max(areas1)
max_contour = [contours[areas1.index(amax)]]
cv2.drawContours(img1, max_contour, -1, (0, 255, 255), 2)
cv2.imshow("g", dst)
cv2.imshow("thresh", thresh)
cv2.imshow("c", canny)
cv2.imshow("img", img1)
cv2.waitKey(0)
cv2.destroyAllWindows()
first you'll need to improve the lighting and scene.
it needs to be more diffuse and not straight on, to prevent reflection on the gear. place lights to the side all around. don't use the camera's built in flash. use/build a "softbox" which is a white sheet of paper or fabric that diffuses the light before it hits the object (either translucent or used like a "mirror").
it needs to be more uniform. your picture shows a "vignette", darkening near the picture's outside. the previous step will probably fix that.
be careful about smudges, dirt on the background
move the camera further away and use zoom if possible. that will improve the overall sharpness of the picture (more depth of focus) and reduce lens distortion (if you care about that).
then you'll need a different approach. I would suggest trying segmentation based on hue and saturation (select the uniformly blue background).
use cv.cvtColor to transform the image into the HSV color space
then use numpy indexing/masking (or cv.inRange) to select a small range of hue (somewhere around green-blue, which is probably a hue of around 180 degrees, or 90 in cvtColor's CV_8U hue values) and saturations (medium to full). for example: mask = ((hsv_img >= (90, 170, 0)) & (hsv_img <= (100, 255, 255))).all(axis=2)
that approach, on the unimproved lighting, gets me this far. on better lighting it should be even better.
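a minimal sketch of that idea (filename and exact thresholds are assumptions to tune):

import cv2 as cv

img = cv.imread("5cam.png")  # hypothetical filename
hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV)

# select the blue-green background: hue ~90-100, medium-to-full saturation
background = cv.inRange(hsv, (90, 170, 0), (100, 255, 255))

# the gear is everything that isn't background
gear = cv.bitwise_not(background)

# a little morphology to clean up debris
kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (5, 5))
gear = cv.morphologyEx(gear, cv.MORPH_OPEN, kernel)

cv.imshow("gear mask", gear)
cv.waitKey(0)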
I'm in the process of scanning old photographs, and I would like to automate the process of extracting the photograph from the (noisy) solid white background of the scanner so that I have a transparent photograph. This part of the program now works, but I have one more small problem with this.
The photograph can now be accurately detected (and extracted), but it leaves a small, sharp black border from the background around the entire photograph. I've tried applying a Gaussian blur to the transparency mask, but this didn't smooth the blackness away (and it made the border of the photograph look 'smudged').
This is the code that I have to extract the photo and generate the transparency mask:
import cv2
import numpy

# Load the scan, and convert it to RGBA.
original = cv2.imread('input.jpg')
original = cv2.cvtColor(original, cv2.COLOR_BGR2BGRA)
# Make the scan grayscale, and apply a blur.
image = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
image = cv2.GaussianBlur(image, (25, 25), 0)
# Binarize the scan.
retval, threshold = cv2.threshold(image, 50, 255, cv2.THRESH_BINARY)
# Find the contour of the object.
contours, hierarchy = cv2.findContours(threshold, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largestContourArea = -1
largestContour = -1
for contour in contours:
    area = cv2.contourArea(contour)
    if area > largestContourArea:
        largestContourArea = area
        largestContour = contour
# Generate the transparency mask.
mask = numpy.zeros(original.shape, numpy.uint8)
cv2.drawContours(mask, [ largestContour ], -1, (255, 255, 255, 255), -1)
# Apply the transparency mask.
original = cv2.multiply(mask.astype(float) / 255.0, original.astype(float))
cv2.imwrite('output.png', original.astype(numpy.uint8))  # convert back to 8-bit before saving
I have a sample scan and the result of the code above using the sample scan. As you can see, there is a slight black border all around the photograph, which I would like to remove.
By using the erode method, you could shrink the contour (mask), effectively removing the black edge.
Since this method supports in-place operation, the code would look something like this: cv2.erode(mask, kernel, dst=mask), where kernel is a structuring element acquired using cv2.getStructuringElement. By playing around with the kernel and the iteration count, you can control the amount of shrinking.
This page explains everything in great detail and also provides nice examples: https://docs.opencv.org/2.4/doc/tutorials/imgproc/erosion_dilatation/erosion_dilatation.html
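A minimal sketch of that idea (kernel size and iteration count are assumptions to tune against your scans):

import cv2

# Assumes `mask` is the transparency mask built in the question's code
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

# Shrink the mask a few pixels so the dark fringe falls outside it
cv2.erode(mask, kernel, dst=mask, iterations=3)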
I have two binary images. The first is like this:
and the second one is like this:
They don't have the same-sized curve. I want to add the second one's two white zones, which are contained in its black zone, to the first one's black zone.
My code runs like this, but it gives a wrong answer:
I want to get the final image that I have drawn in the picture:
How can I achieve this task?
Assuming img1 is your first array (larger solid blob) and img2 is the second (smaller blob with holes), you need a method to identify and remove the outer region of the second image. The flood fill algorithm is a good candidate. It is implemented in opencv as cv2.floodFill.
The easiest thing to do would be to fill the outer edge, then just add the results together:
import cv2
import numpy as np

# Flood-fill the outer white region of img2 with black, starting from corner (0, 0)
mask = np.zeros((img2.shape[0] + 2, img2.shape[1] + 2), dtype=np.uint8)
cv2.floodFill(img2, mask, (0, 0), 0, 0)

# The remaining white zones of img2 can now be added onto img1
result = img1 + img2
Here is a toy example that shows mini-images topologically equivalent to your originals:
img1 = np.full((9, 9), 255, dtype=np.uint8)
img1[1:-1, 1:-1] = 0
img2 = np.full((9, 9), 255, dtype=np.uint8)
img2[2:-2, 2:-2] = 0
img2[3, 3] = img2[5, 5] = 255
The images look like this:
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(img1)
ax2.imshow(img2)
plt.show()
After the flood fill, the images look like this:
Adding the resulting images together looks like this:
Keep in mind that floodFill operates in-place, so you may want to make a copy of img2 before going down this road.
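For instance (a sketch; a fresh fill mask is created because floodFill also writes into the mask it is given):

# Work on a copy so the original img2 is left untouched
img2_filled = img2.copy()
fill_mask = np.zeros((img2.shape[0] + 2, img2.shape[1] + 2), dtype=np.uint8)
cv2.floodFill(img2_filled, fill_mask, (0, 0), 0)
result = img1 + img2_filled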
I think you want this:
#!/usr/local/bin/python3
from PIL import Image, ImageDraw, ImageColor, ImageChops
# Load images
im1 = Image.open('im1.jpg')
im2 = Image.open('im2.jpg')
# Flood fill white edges of image 2 with black
seed = (0, 0)
black = ImageColor.getrgb("black")
ImageDraw.floodfill(im2, seed, black, thresh=127)
# Now select lighter pixel of image1 and image2 at each pixel location and save it
result = ImageChops.lighter(im1, im2)
result.save('result.png')
If you prefer OpenCV, it might look like this:
#!/usr/local/bin/python3
import cv2
import numpy as np
# Load images
im1 = cv2.imread('im1.jpg', cv2.IMREAD_GRAYSCALE)
im2 = cv2.imread('im2.jpg', cv2.IMREAD_GRAYSCALE)
# Threshold, because JPEG is dodgy!
ret, im1 = cv2.threshold(im1, 127, 255, cv2.THRESH_BINARY)
ret, im2 = cv2.threshold(im2, 127, 255, cv2.THRESH_BINARY)
# Flood fill white edges of image 2 with black
h, w = im2.shape[:2]
mask = np.zeros((h+2, w+2), np.uint8)
cv2.floodFill(im2, mask, (0,0), 0)
# Now select lighter of image1 and image2 and save it
result = np.maximum(im1, im2)
cv2.imwrite('result.png', result)