We create a 360 degree photo of the sky from many camera photos taken from different angles. During this process, a few imperfections arise: the seams where the source photos overlap remain visible.
What would be the best way to get rid of these visible overlaps? Is it possible to do this after the stitching, or should we try to prevent it during the stitching process?
We already tried blurring the seam lines after the stitch, but it does not seem to be the right way to go:
import cv2
import numpy as np

# Blur the whole stitched panorama, then copy the blurred pixels back
# inside a filled circle to soften that region.
img = cv2.imread('light-sky-stitch-202001011200.png')
blurred_img = cv2.GaussianBlur(img, (211, 211), 0)
mask = np.zeros((2000, 2000, 3), dtype=np.uint8)
mask = cv2.circle(img=mask, center=(1000, 1000), radius=500,
                  color=(255, 255, 255), thickness=-1)
out = np.where(mask == np.array([255, 255, 255]), blurred_img, img)
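One direction for "preventing it during stitching" is to blend overlapping images with smooth per-pixel weights instead of hard seams. Below is a minimal sketch, not our actual pipeline: it assumes two source images already warped into the same panorama frame (img_a, img_b) along with uint8 masks of their valid pixels (mask_a, mask_b), and weights each pixel by its distance to the image border via a distance transform.

import cv2
import numpy as np

def feather_blend(img_a, mask_a, img_b, mask_b):
    # Pixels deep inside an image get large weights, so each image
    # dominates near its center and fades out toward its border.
    w_a = cv2.distanceTransform(mask_a, cv2.DIST_L2, 5).astype(np.float32)
    w_b = cv2.distanceTransform(mask_b, cv2.DIST_L2, 5).astype(np.float32)
    total = w_a + w_b
    total[total == 0] = 1  # avoid division by zero outside both images
    blended = (img_a.astype(np.float32) * (w_a / total)[..., None]
               + img_b.astype(np.float32) * (w_b / total)[..., None])
    return blended.astype(np.uint8)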
I am a beginner working on a script that masks out a section of an image and then feathers the edges. So far, I can produce a feathered edge and the original masked image, but when I add them together, a lot of the original hard edge is still left behind. What is the best way to subtract or replace the edge of my original masked image with the blurred one? (I don't want to blur the center of the masked image, just feather the edges.)
Here is my basic test code for two images:
import cv2
import numpy as np
from google.colab.patches import cv2_imshow  # running in Colab

image = cv2.imread('/content/drive/MyDrive/image')

# mask1: a 30 px wide ring along the circle edge; mask2: the filled circle.
mask1 = np.zeros(image.shape[:2], dtype="uint8")
cv2.circle(mask1, (500, 500), 500, 255, 30)
mask2 = np.zeros(image.shape[:2], dtype="uint8")
cv2.circle(mask2, (500, 500), 500, 255, -1)

masked = cv2.bitwise_and(image, image, mask=mask1)    # edge ring only
masked2 = cv2.bitwise_and(image, image, mask=mask2)   # full masked image
feathered = cv2.GaussianBlur(masked, (21, 21), 50)    # blurred edge ring

image_without_alpha = masked2[:, :, :3]
final = cv2.addWeighted(image_without_alpha, .5, feathered, .8, 1)
cv2_imshow(final)
Any suggested improvements would be awesome. Thanks!
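For reference, a common feathering pattern (a minimal sketch, assuming the same filled circle as above; the file name and blur kernel size are placeholders) is to blur the binary mask itself and use the result as a per-pixel alpha, so the center stays sharp and only the edge fades out:

import cv2
import numpy as np

image = cv2.imread('input.png')  # hypothetical path
mask = np.zeros(image.shape[:2], dtype=np.uint8)
cv2.circle(mask, (500, 500), 500, 255, -1)
# Blurring the mask turns the hard 0/255 edge into a smooth ramp.
alpha = cv2.GaussianBlur(mask, (51, 51), 0).astype(np.float32) / 255.0
alpha = alpha[..., None]                   # broadcast over color channels
background = np.zeros_like(image)          # or any backdrop to blend onto
out = (image * alpha + background * (1 - alpha)).astype(np.uint8)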
I want to detect the centroid of each individual block in the following grid for path planning. The idea is that a central navigation system, such as an overhead camera, will detect the grid blocks along with the bot and help in navigation. So far I have tried probabilistic Hough lines and Harris corner detection, but both of them either detect extra points or fail in a real-world scenario. I want to detect the blocks in real time and number them, and that numbering must not change or the whole path planning will be messed up.
Is there any solution to this problem that I have missed?
Thanks in advance.
You need to learn how to eliminate noise. This is not a complete answer. The more time you spend and learn, the better your results will be.
import cv2
import numpy as np
import sys

# Load source as grayscale
im = cv2.imread(sys.path[0]+'/im.jpg', cv2.IMREAD_GRAYSCALE)
H, W = im.shape[:2]

# Convert im to black and white
im = cv2.adaptiveThreshold(
    im, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 21, 2)

# Remove noise
im = cv2.medianBlur(im, 11)
im = cv2.erode(im, np.ones((15, 15)))

# Fill the area around the shape
im = ~im
mask = np.zeros((H+2, W+2), np.uint8)
cv2.floodFill(im, mask, (0, 0), 255)
cv2.floodFill(im, mask, (W-1, 0), 255)
cv2.floodFill(im, mask, (0, H-1), 255)
cv2.floodFill(im, mask, (W-1, H-1), 255)

# Remove noise again
im = cv2.dilate(im, np.ones((15, 15)))

# Find the final blocks
cnts, _ = cv2.findContours(~im, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in cnts:
    x, y, w, h = cv2.boundingRect(c)
    cv2.circle(im, (x+w//2, y+h//2), max(w, h)//2, 127, 5)
print("Found any: ", len(cnts) > 0)

# Save the output
cv2.imwrite(sys.path[0]+'/im_.jpg', im)
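A possible follow-up (a sketch continuing from the cnts and im variables above; the 50 px row tolerance is an assumption): compute each block's centroid from image moments and sort the centroids in row-major order, so the numbering does not depend on the order findContours happens to return them in.

centroids = []
for c in cnts:
    m = cv2.moments(c)
    if m['m00'] > 0:  # skip degenerate contours with zero area
        centroids.append((int(m['m10'] / m['m00']), int(m['m01'] / m['m00'])))
# Sort top-to-bottom (in 50 px bands), then left-to-right.
centroids.sort(key=lambda p: (p[1] // 50, p[0]))
for i, (cx, cy) in enumerate(centroids):
    cv2.putText(im, str(i), (cx, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, 127, 2)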
I was wondering whether anyone was aware of any approaches to discover which portion of an image has been pixelated or blurred. For example, consider the following sausage dog, where I have applied the following code:
img = cv2.imread("sausage.jpg")
blurred_img = cv2.blur(img, (21, 21), 0)
mask = np.zeros(img.shape, dtype=np.uint8)
mask = cv2.circle(mask, (200, 100), 100, [255, 255, 255], -1)
out = np.where(mask==[255, 255, 255], blurred_img,img)
I would like to home in on the circle centered at (200, 100) with a radius of 100. I have tried looking at edges, but that doesn't give anything definitive, and I don't have an algorithm to extract the information yet.
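One heuristic worth trying (a sketch, under the assumption that blurred regions have little high-frequency energy; the window size and threshold are guesses): measure the local variance of the Laplacian and threshold it, so smooth regions light up in the output mask.

import cv2
import numpy as np

img = cv2.imread("sausage.jpg", cv2.IMREAD_GRAYSCALE)
lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F)
# Local variance via box filters: E[x^2] - E[x]^2 over a 15x15 window.
mean = cv2.boxFilter(lap, cv2.CV_32F, (15, 15))
mean_sq = cv2.boxFilter(lap * lap, cv2.CV_32F, (15, 15))
variance = mean_sq - mean * mean
blurred_region = (variance < 50).astype(np.uint8) * 255  # assumed threshold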
[image: gear]
I want a generalized method so that any type of noise inside the gear can be removed. I am using OpenCV with Python.
I have already tried lots of filters and noise-removal methods, but I am not getting the proper output. Here is my code:
import cv2
import numpy as np

img1 = cv2.imread("5cam.png")
img = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
rows, cols = img.shape

# Denoise, blur, then threshold with Otsu
dst = cv2.fastNlMeansDenoising(img, None, 15, 7, 21)
gaussian_blurred_images = cv2.GaussianBlur(dst, (9, 9), 0)
_, thresh = cv2.threshold(gaussian_blurred_images, 200, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Close gaps, detect edges, and keep the largest contour
kernel = np.ones((7, 7), np.uint8)
dilation = cv2.dilate(thresh, kernel)
canny = cv2.Canny(dilation, 200, 255)
contours = cv2.findContours(canny, mode=cv2.RETR_EXTERNAL,
                            method=cv2.CHAIN_APPROX_NONE)[0]
areas1 = [cv2.contourArea(ctr) for ctr in contours]
amax = max(areas1)
max_contour = [contours[areas1.index(amax)]]
cv2.drawContours(img1, max_contour, -1, (0, 255, 255), 2)

cv2.imshow("g", dst)
cv2.imshow("thresh", thresh)
cv2.imshow("c", canny)
cv2.imshow("img", img1)
cv2.waitKey(0)
cv2.destroyAllWindows()
First, you'll need to improve the lighting and scene.
It needs to be more diffuse and not straight on, to prevent reflections on the gear. Place lights to the side all around, and don't use the camera's built-in flash. Use or build a "softbox", which is a white sheet of paper or fabric that diffuses the light before it hits the object (either translucent or used like a mirror).
It needs to be more uniform. Your picture shows a vignette, a darkening near the picture's edges; the previous step will probably fix that.
Be careful about smudges and dirt on the background.
Move the camera further away and use zoom if possible. That will improve the overall sharpness of the picture (more depth of focus) and reduce lens distortion (if you care about that).
Then you'll need a different approach. I would suggest trying segmentation based on hue and saturation (select the uniformly blue background):
Use cv.cvtColor to transform the image into the HSV color space.
Then use numpy indexing/masking (or cv.inRange) to select a small range of hues (somewhere around green-blue, which is probably a hue of around 180 degrees, or 90 in cvtColor's CV_8U hue values) and saturations (medium to full). For example: mask = ((hsv_img >= (90, 170, 0)) & (hsv_img <= (100, 255, 255))).all(axis=2)
That approach, on the unimproved lighting, gets me this far; with better lighting it should work even better.
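A minimal sketch of that segmentation, using the hue/saturation range suggested above (the exact numbers will need tuning to the actual lighting):

import cv2
import numpy as np

img = cv2.imread("5cam.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Select the blue background: hue ~90-100 on OpenCV's 8-bit scale,
# medium-to-full saturation, any value.
background = cv2.inRange(hsv, (90, 170, 0), (100, 255, 255))
gear_mask = cv2.bitwise_not(background)  # everything that is not background
# Keep only the largest connected component to drop leftover speckles.
cnts, _ = cv2.findContours(gear_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if cnts:
    gear = max(cnts, key=cv2.contourArea)
    cv2.drawContours(img, [gear], -1, (0, 255, 255), 2)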
I need to draw a lot of semi-transparent circles that overlap one another, and it needs to work fast. I wrote the following code:
from PIL import Image, ImageDraw

# c is a list of circle objects with position, radius and RGBA color.
im = Image.new('RGBA', (512, 512), (255, 255, 255, 0))
for i in range(1000):
    # Draw each circle on its own transparent layer, then composite.
    im1 = Image.new("RGBA", (512, 512), (255, 255, 255, 0))
    draw = ImageDraw.Draw(im1)
    draw.ellipse(c[i].cv_repr(), fill=c[i].color)
    im = Image.alpha_composite(im1, im)
This code works, but it is very slow. Is there an approach that avoids Image.alpha_composite and gives better performance? The image below is the expected result.
I found a solution in the OpenCV library.
import cv2
import numpy as np

# Start from a white canvas.
im = np.zeros([512, 512, 3], dtype=np.uint8)
im.fill(255)
for i in range(1000):
    # Draw each circle on a copy, then alpha-blend the copy with the canvas.
    im1 = im.copy()
    cv2.circle(im1, c[i].center, c[i].r, c[i].color, -1)
    im = cv2.addWeighted(im1, c[i].alpha, im, 1 - c[i].alpha, 0)
For 1000 circles, the average elapsed time drops from ~4.16 s with the alpha_composite version to ~302 ms with this one. This is the performance I wanted to get.
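For completeness, the circle list c is never defined in either snippet; a hypothetical container matching the attributes used above might look like this:

from dataclasses import dataclass

@dataclass
class Circle:
    center: tuple   # (x, y) in pixels
    r: int          # radius in pixels
    color: tuple    # fill color (BGR for cv2, RGBA for PIL)
    alpha: float    # opacity in [0, 1]

    def cv_repr(self):
        # Bounding box (x0, y0, x1, y1) as used by PIL's ImageDraw.ellipse
        x, y = self.center
        return (x - self.r, y - self.r, x + self.r, y + self.r)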