How to Mask an image using Numpy/OpenCV? - python

I have an image I load with:
im = cv2.imread(filename)
I want to keep data that is in the center of the image. I created a circle as a mask of the area I want to keep.
I created the circle with:
height,width,depth = im.shape
circle = np.zeros((height,width))
cv2.circle(circle, (width // 2, height // 2), 280, 1, thickness=-1)
How can I mask out the data outside of the circle from the original image?
masked_data = im * circle
does not work.

Use cv2.bitwise_and and pass the circle as mask.
im = cv2.imread(filename)
height,width,depth = im.shape
circle_img = np.zeros((height,width), np.uint8)
cv2.circle(circle_img, (width // 2, height // 2), 280, 1, thickness=-1)
masked_data = cv2.bitwise_and(im, im, mask=circle_img)
cv2.imshow("masked", masked_data)
cv2.waitKey(0)

circle is just a 2D array of 1.0s and 0.0s, while im has a third (channel) dimension. NumPy needs the mask to broadcast over that dimension, so give it an extra axis and your line will work:
masked_data = im * circle[..., np.newaxis]
Note that since the image has no alpha channel, this masking simply sets the color of everything outside the circle to (0, 0, 0).
However, there is another potential problem: circle will have the default data type (probably float64), which does not match your image, so you should change the line where you create circle to:
circle = np.zeros((height, width), dtype=im.dtype)
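Putting both fixes together, a minimal sketch of this broadcasting approach (assuming a 3-channel BGR image; the file name is just a placeholder):
import cv2
import numpy as np
im = cv2.imread('input.png')  # placeholder file name
height, width = im.shape[:2]
# build the mask with the same dtype as the image
circle = np.zeros((height, width), dtype=im.dtype)
cv2.circle(circle, (width // 2, height // 2), 280, 1, thickness=-1)
# the extra axis lets the 2D mask broadcast across all three channels
masked_data = im * circle[..., np.newaxis]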

Using NumPy assignment to an indexed array:
im[circle == 0] = [0, 0, 0]
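A self-contained version of that approach, as a sketch (it reuses the uint8 circle mask from the first answer; the file name is a placeholder):
import cv2
import numpy as np
im = cv2.imread('input.png')  # placeholder file name
height, width = im.shape[:2]
circle = np.zeros((height, width), np.uint8)
cv2.circle(circle, (width // 2, height // 2), 280, 1, thickness=-1)
# set every pixel outside the circle to black, in place
im[circle == 0] = [0, 0, 0]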

If you want a circular image, you can also write your own routine: access the pixel coordinates, check whether each pixel lies inside the circle, and replace the ones that don't with some value (or a transparent value, if your image format supports it).
Here is an example:
import cv2
import numpy as np

im = cv2.imread('sss.png')

def facechop(im):
    height, width, depth = im.shape
    x = width // 2
    y = height // 2
    r = 180
    # draw the circle outline and its bounding rectangle (for visualization)
    cv2.circle(im, (x, y), r, (0, 0, 255), thickness=1)
    cv2.rectangle(im, (x - r, y - r), (x + r, y + r), (0, 0, 255), 2)
    # crop the square around the circle
    crop_img = im[y - r:y + r, x - r:x + r]
    # blank every pixel of the crop that falls outside the circle
    for i in range(crop_img.shape[0]):
        for j in range(crop_img.shape[1]):
            if (i - r) ** 2 + (j - r) ** 2 > r ** 2:
                crop_img[i, j] = [0, 0, 0]
    cv2.imshow('im', crop_img)

if __name__ == '__main__':
    facechop(im)
    while True:
        key = cv2.waitKey(20)
        if key in [27, ord('Q'), ord('q')]:
            break

Related

Rotate an image in python and fill the cropped area with image

Have a look at the image; it will give you a better idea of what I want to achieve. I want to rotate the image and fill the black part of the image, just like in the required image.
# Read the image
img = cv2.imread("input.png")
# Get the image size
h, w = img.shape[:2]
# Define the rotation matrix
M = cv2.getRotationMatrix2D((w/2, h/2), 30, 1)
# Rotate the image
rotated = cv2.warpAffine(img, M, (w, h))
mask = np.zeros(rotated.shape[:2], dtype=np.uint8)
mask[np.where((rotated == [0, 0, 0]).all(axis=2))] = 255
img_show(mask)
From this code I am able to get the mask of the black regions. Now I want to replace these black regions with the image portion as shown in image 1. Is there a better solution, and how can I achieve this?
Use the borderMode parameter of warpAffine.
You want to pass the BORDER_WRAP value.
Here's the result. This does exactly what you described with your first picture.
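For reference, a minimal sketch of that call (reusing the rotation from the question):
import cv2
img = cv2.imread("input.png")
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D((w/2, h/2), 30, 1)
# BORDER_WRAP fills the exposed corners with wrapped-around image content
rotated = cv2.warpAffine(img, M, (w, h), borderMode=cv2.BORDER_WRAP)
cv2.imwrite("rotated_wrap.png", rotated)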
I have an approach. You can first create a larger image consisting of 3x3 copies of your original image. When you rotate this image and cut out only the center of the large image, you have your desired result.
import cv2
import numpy as np
# Read the image
img = cv2.imread("input.png")
# Get the image size of the original image
h, w = img.shape[:2]
# make a large image containing 3 copies of the original image in each direction
large_img = np.tile(img, [3,3,1])
cv2.imshow("large_img", large_img)
# Define the rotation matrix. Rotate around the center of the large image
M = cv2.getRotationMatrix2D((w*3/2, h*3/2), 30, 1)
# Rotate the image
rotated = cv2.warpAffine(large_img, M, (w*3, h*3))
# crop only the center of the image
cropped_image = rotated[h:h*2, w:w*2, :]  # rows span the height, columns the width
cv2.imshow("cropped_image", cropped_image)
cv2.waitKey(0)

affine transformation using nearest neighbor in python

I want to apply an affine transformation and afterwards use nearest-neighbor interpolation, while keeping the same dimensions for the input and output images. I use, for example, the scaling transformation T = [[2,0,0],[0,2,0],[0,0,1]]. Any idea how I can fill the black pixels with nearest-neighbor values? I tried giving them the minimum of the neighbors' intensities. For example, if a pixel has neighbors [55,22,44,11,22,55,23,231], I give it the minimum intensity: 11. But the result is not clear at all.
import numpy as np
from matplotlib import pyplot as plt
#Importing the original image and init the output image
img = plt.imread('/home/left/Desktop/computerVision/SET1/brain0030slice150_101x101.png',0)
outImg = np.zeros_like(img)
# Dimensions of the input image and output image (the same dimensions)
(width , height) = (img.shape[0], img.shape[1])
# Initialize the transformation matrix
T = np.array([[2,0,0], [0,2,0], [0,0,1]])
# Make an array with the input image (x, y) coordinates and add a [0 0 ... 1] row
coords = np.indices((width, height), 'uint8').reshape(2, -1)
coords = np.vstack((coords, np.zeros(coords.shape[1], 'uint8')))
output = T @ coords
# Arrays of x and y coordinates of the output image within the image dimensions
x_array, y_array = output[0] ,output[1]
indices = np.where((x_array >= 0) & (x_array < width) & (y_array >= 0) & (y_array < height))
# Final coordinates of the output image
fx, fy = x_array[indices], y_array[indices]
# Final output image after the affine transformation
outImg[fx, fy] = img[fx, fy]
The input image is:
The output image after scaling is:
Well, you could simply use the OpenCV resize function:
import cv2
# new_dim is the desired output size as (width, height)
new_image = cv2.resize(image, new_dim, interpolation=cv2.INTER_AREA)
it'll do the resize and fill in the empty pixels in one go
more on cv2.resize
If you need to do it manually, you could simply detect the dark pixels in the resized image and change their value to the mean of the 4 neighbouring pixels (for example; it depends on the algorithm you need).
See: nearest neighbour, bilinear, bicubic, etc.
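A rough sketch of that manual fill, assuming a single-channel image whose holes are (near-)zero pixels; it does a single pass only, so larger holes need repeated passes or a proper inverse mapping instead:
import numpy as np
def fill_dark_pixels(img, dark_threshold=1):
    # replace 'hole' pixels with the mean of their 4 neighbours (one pass)
    out = img.astype(np.float32).copy()
    h, w = out.shape[:2]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if img[y, x] < dark_threshold:
                out[y, x] = (out[y - 1, x] + out[y + 1, x] +
                             out[y, x - 1] + out[y, x + 1]) / 4.0
    return out.astype(img.dtype)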

How to remove multiple polygons using Opencv python

Hi StackOverflow team,
I have an image and I want to remove many portions/parts from the image. I tried to use the below code taken from Cropping Concave polygon from Image using Opencv python
Assume I have this image. Also, I have multiple polygons (such as rectangular shapes or any form of a polygon) from the image, obtained via the Labelme annotation tool. So, I want to remove those shapes from the image, or simply change their pixels to white.
In other words, the Labelme tool gives you a dictionary file, where one key holds the points of each portion/polygon/shape.
The polygon points can then easily be extracted from the dictionary file. After the points are extracted, we can name them (e.g. a, b, c, ..., h), and each one is in this multidimensional format: "[[1526, 319], [1526, 376], [1593, 379], [1591, 324]]"
Here I thought of whitening each region, but whitening a multidimensional array seems unreliable.
import numpy as np
import cv2
import json
with open('ann1.json') as f:
data = json.load(f)
#%%
a = data['shapes'][0]['points']; b = data['shapes'][1]['points']; c = data['shapes'][2]['points'];
#%%
img = cv2.imread("lena.jpg")
pts = np.array(a) # Points
#%%
## (1) Crop the bounding rect
rect = cv2.boundingRect(pts)
x,y,w,h = rect
croped = img[y:y+h, x:x+w].copy()
## (2) make mask
pts = pts - pts.min(axis=0)
mask = np.zeros(croped.shape[:2], np.uint8)
cv2.drawContours(mask, [pts], -1, (255, 255, 255), -1, cv2.LINE_AA)
## (3) do bit-op
dst = cv2.bitwise_and(croped, croped, mask=mask)
## (4) add the white background
bg = np.ones_like(croped, np.uint8)*255
cv2.bitwise_not(bg,bg, mask=mask)
dst2 = bg+ dst
#cv2.imwrite("croped.png", croped)
#cv2.imwrite("mask.png", mask)
#cv2.imwrite("dst.png", dst)
cv2.imwrite("dst2.png", dst2)
Using Lena I have this output .
But I need to go further and whiten other points/polygons, for example, the eyes.
As you can see, my code can use only one polygon's points. I tried appending two other polygons' points, in my case the two eyes, and got .
By appending, I mean I added the multidimensional points (e.g. pts = np.array(a+b+c)).
In short, given an image, is there a straightforward way to remove these multiple polygons from it (while keeping the dimensions of the image) using OpenCV and Python?
Json File:
https://drive.google.com/file/d/1UyOYUVMHpu2vBBEdR99bwrRX5xIfdOCa/view?usp=sharing
You'll need to use a loop to go through all the shapes in the JSON file. I've edited your code to reflect this.
import cv2
import json
import matplotlib.pyplot as plt
import numpy as np
img_path = r"/path/to/lena.png"
json_path = r"/path/to/lena.json"
with open(json_path) as f:
    data = json.load(f)
img = cv2.imread(img_path)
for idx in np.arange(len(data['shapes'])):
    if idx == 0:  # can remove this
        continue  # can remove this
    a = data['shapes'][idx]['points']
    pts = np.array(a, dtype=np.int32)  # points; boundingRect expects integer coordinates
    ## (1) Crop the bounding rect
    rect = cv2.boundingRect(pts)
    print(rect)
    x, y, w, h = rect
    img[y:y+h, x:x+w] = (255, 255, 255)
plt.imshow(img)
plt.show()
Output:
I ignored the first line, since it didn't visualize the results nicely. I took your lead and used rectangles instead of polygons. If you need polygons, you'll need to use something like cv2.drawContours(), cv2.polylines() or cv2.fillPoly(), as is recommended in the SO answer you have linked, to achieve it.
I would like to share the solution I ended up with, which is a slightly modified version of @Shawn Mathew's answer.
Input image:
Code:
import cv2
import json
import matplotlib.pyplot as plt
import numpy as np

with open('lena.json') as f:
    json_file = json.load(f)
img = cv2.imread("folder/lena.jpg")
for polygon in np.arange(len(json_file['shapes'])):
    pts = np.array(json_file['shapes'][polygon]['points'], dtype=np.int32)  # fillPoly expects integer points
    # If your polygons are rectangular, you can fill the areas you want removed
    # with white by uncommenting the two lines below
    # x, y, w, h = cv2.boundingRect(pts)
    # cv2.rectangle(img, (x, y), (x+w, y+h), (255, 255, 255), -1)
    # If your polygons are shapes other than rectangles, you can just use the line below
    cv2.fillPoly(img, pts=[pts], color=(255, 255, 255))
plt.imshow(img)
plt.show()
The colors look different because Matplotlib displays images as RGB while OpenCV stores them as BGR; if you want to preserve the colors, save the image using cv2.imwrite.
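For example (the output file name is just a placeholder):
# OpenCV stores images as BGR, so saving with cv2 keeps the original colors
cv2.imwrite("lena_removed.png", img)
# or convert to RGB before displaying with Matplotlib
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()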

Replacement of circular spots by respective colors

My objective here is to replace each spot in mask_image with the color of the corresponding spot in original_image. What I did here was find the connected components and label them, but I can't figure out how to match each labeled spot to its counterpart and replace it.
How can I match the n circles to the n objects and fill them with the corresponding intensities?
Any help would be appreciated.
For example, the spot at (2, 1) in the mask image should be painted with the color of the corresponding spot in the image below.
mask image http://myfair.software/goethe/images/mask.jpg
original image http://myfair.software/goethe/images/original.jpg
import cv2
import numpy as np

def thresh(img):
    ret, threshold = cv2.threshold(img, 5, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return threshold

def spot_id(img):
    seed_pt = (5, 5)
    fill_color = 0
    mask = np.zeros_like(img)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    for th in range(5, 255):
        prev_mask = mask.copy()
        mask = cv2.threshold(img, th, 255, cv2.THRESH_BINARY)[1]
        mask = cv2.floodFill(mask, None, seed_pt, fill_color)[1]
        mask = cv2.bitwise_or(mask, prev_mask)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # here I labelled them
    n_centers, labels = cv2.connectedComponents(mask)
    label_hue = np.uint8(892 * labels / np.max(labels))
    blank_ch = 255 * np.ones_like(label_hue)
    labeled_img = cv2.merge([label_hue, blank_ch, blank_ch])
    labeled_img = cv2.cvtColor(labeled_img, cv2.COLOR_HSV2BGR)
    labeled_img[label_hue == 0] = 0
    print('There are %d bright spots in the image.' % n_centers)
    cv2.imshow("labeled_img", labeled_img)
    return mask, n_centers

# img_greyscaled is the grayscale input image (loading not shown)
image_thresh = thresh(img_greyscaled)
mask, centers = spot_id(img_greyscaled)
There is one very simple way of accomplishing this task: first sample the color of original_image at the center of each dot in mask_image, then expand that color to fill the whole dot.
Here is some code using PyDIP (because I know it better than OpenCV, I'm an author), I'm sure something similar can be done with OpenCV alone:
import PyDIP as dip
import cv2
import numpy as np
# Load the color image shown in the question
original_image = cv2.imread('/home/cris/tmp/BxW25.png')
# Load the mask image shown in the question
mask_image = cv2.imread('/home/cris/tmp/aqf3Z.png')[:,:,0]
# Get a single colored pixel in the middle of each spot of the mask
colors = dip.EuclideanSkeleton(mask_image > 50, 'loose ends away') * original_image
# Spread that color across the full spot
# (dilation and similar operators like this one don't work with color images,
# so we apply the operation on each channel separately)
for t in range(colors.TensorElements()):
colors.TensorElement(t).Copy(dip.MorphologicalReconstruction(colors.TensorElement(t), mask_image))
# Save the result
cv2.imwrite('/home/cris/tmp/so.png', np.array(colors))
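For comparison, here is a sketch of the same idea using only OpenCV and NumPy; it is an assumption-laden example (placeholder file names, and it samples the color at each spot's centroid rather than along its skeleton):
import cv2
import numpy as np
original = cv2.imread('original.jpg')  # placeholder file names
mask = cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)
# label every spot in the binarized mask
_, binary = cv2.threshold(mask, 50, 255, cv2.THRESH_BINARY)
n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
result = np.zeros_like(original)
for label in range(1, n_labels):  # label 0 is the background
    cx, cy = centroids[label].astype(int)  # centre of this spot
    result[labels == label] = original[cy, cx]  # paint the whole spot with that colour
cv2.imwrite('spots_colored.png', result)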

only rotate part of image python

Can someone tell me how to rotate only part of an image like this:
How to find coordinate / center of this image:
I can rotate the whole picture using this:
from PIL import Image
def rotate_image():
    img = Image.open("nime1.png")
    img.rotate(45).save("plus45.png")
    img.rotate(-45).save("minus45.png")
    img.rotate(90).save("90.png")
    img.transpose(Image.ROTATE_90).save("90_trans.png")
    img.rotate(180).save("180.png")

if __name__ == '__main__':
    rotate_image()
You can crop an area of the picture as a new variable. In this case, I cropped a 120x120 pixel box out of the original image. It is rotated by 90 degrees and then pasted back onto the original.
from PIL import Image
img = Image.open('./image.jpg')
sub_image = img.crop(box=(200,0,320,120)).rotate(90)
img.paste(sub_image, box=(200,0))
So I thought about this a bit more and crafted a function that applies a circular mask to the cropped image before rotations. This allows an arbitrary angle without weird effects.
import numpy
from PIL import Image

def circle_rotate(image, x, y, radius, degree):
    # note: this assumes `image` is an RGBA image (4 channels)
    box = (x - radius, y - radius, x + radius + 1, y + radius + 1)
    crop = image.crop(box=box)
    crop_arr = numpy.asarray(crop)
    # build the circle mask
    mask = numpy.zeros((2 * radius + 1, 2 * radius + 1))
    for i in range(crop_arr.shape[0]):
        for j in range(crop_arr.shape[1]):
            if (i - radius) ** 2 + (j - radius) ** 2 <= radius ** 2:
                mask[i, j] = 1
    # create the new circular image
    sub_img_arr = numpy.empty(crop_arr.shape, dtype='uint8')
    sub_img_arr[:, :, :3] = crop_arr[:, :, :3]
    sub_img_arr[:, :, 3] = mask * 255
    sub_img = Image.fromarray(sub_img_arr, "RGBA").rotate(degree)
    i2 = image.copy()
    i2.paste(sub_img, box[:2], sub_img.convert('RGBA'))
    return i2

i2 = circle_rotate(img, 260, 60, 60, 45)
i2
You can solve this problem as follows. Say you have img = Image.open("nime1.png").
Create a copy of the image using img2 = img.copy()
Create a crop of img2 at the desired location using img2.crop(). You can read how to do this here
Paste img2 back onto img at the appropriate location using img.paste()
Notes:
To find the center coordinate, you can divide the width and height by 2 :)
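A minimal sketch of those steps (the region position, size and angle below are just placeholders):
from PIL import Image
img = Image.open("nime1.png")
w, h = img.size
cx, cy = w // 2, h // 2  # center of the image
half = 60                # half-size of the square region to rotate
box = (cx - half, cy - half, cx + half, cy + half)
region = img.copy().crop(box).rotate(45)  # rotate only the cropped region
img.paste(region, box[:2])
img.save("partially_rotated.png")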
