My objective here is to replace each spot in mask_image with the color of the corresponding spot in original_image. What I did was find the connected components and label them, but I can't figure out how to find the corresponding labeled spot and replace it.
How can I put the n circles into n objects and fill each one with the corresponding intensity?
Any help would be appreciated.
For example, the spot at (2, 1) in the mask image should be painted with the color of the corresponding spot in the image below.
mask image http://myfair.software/goethe/images/mask.jpg
original image http://myfair.software/goethe/images/original.jpg
import cv2
import numpy as np

def thresh(img):
    ret, threshold = cv2.threshold(img, 5, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return threshold

def spot_id(img):
    seed_pt = (5, 5)
    fill_color = 0
    mask = np.zeros_like(img)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    for th in range(5, 255):
        prev_mask = mask.copy()
        mask = cv2.threshold(img, th, 255, cv2.THRESH_BINARY)[1]
        mask = cv2.floodFill(mask, None, seed_pt, fill_color)[1]
        mask = cv2.bitwise_or(mask, prev_mask)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # here I labelled them
    n_centers, labels = cv2.connectedComponents(mask)
    label_hue = np.uint8(179 * labels / np.max(labels))  # OpenCV hue values range from 0 to 179
    blank_ch = 255 * np.ones_like(label_hue)
    labeled_img = cv2.merge([label_hue, blank_ch, blank_ch])
    labeled_img = cv2.cvtColor(labeled_img, cv2.COLOR_HSV2BGR)
    labeled_img[label_hue == 0] = 0
    print('There are %d bright spots in the image.' % n_centers)
    cv2.imshow("labeled_img", labeled_img)
    return mask, n_centers

image_thresh = thresh(img_greyscaled)
mask, centers = spot_id(img_greyscaled)
There is one very simple way of accomplishing this task. First, sample the value at the center of each dot in mask_image. Next, expand that color to fill the dot in the same image.
Here is some code using PyDIP (I know it better than OpenCV, as I'm an author); I'm sure something similar can be done with OpenCV alone:
import PyDIP as dip
import cv2
import numpy as np
# Load the color image shown in the question
original_image = cv2.imread('/home/cris/tmp/BxW25.png')
# Load the mask image shown in the question
mask_image = cv2.imread('/home/cris/tmp/aqf3Z.png')[:,:,0]
# Get a single colored pixel in the middle of each spot of the mask
colors = dip.EuclideanSkeleton(mask_image > 50, 'loose ends away') * original_image
# Spread that color across the full spot
# (dilation and similar operators like this one don't work with color images,
# so we apply the operation on each channel separately)
for t in range(colors.TensorElements()):
    colors.TensorElement(t).Copy(dip.MorphologicalReconstruction(colors.TensorElement(t), mask_image))
# Save the result
cv2.imwrite('/home/cris/tmp/so.png', np.array(colors))
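The answer above notes that something similar can be done with OpenCV alone. Here is a hedged sketch of one way that might look, labelling the spots with cv2.connectedComponents and painting each spot with the mean colour of original_image over that spot instead of sampling a single centre pixel (the file names are placeholders):
import cv2
import numpy as np
original_image = cv2.imread('original.jpg')
mask_image = cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)
# label each spot in the binarized mask
_, binary = cv2.threshold(mask_image, 50, 255, cv2.THRESH_BINARY)
n_labels, labels = cv2.connectedComponents(binary)
# paint each spot with the mean colour of the original image over that spot
result = np.zeros_like(original_image)
for label in range(1, n_labels):  # label 0 is the background
    spot = labels == label
    result[spot] = original_image[spot].mean(axis=0).astype(np.uint8)
cv2.imwrite('spots_colored.png', result)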
I have downloaded a number of images (1000) from a website, but each has a black-and-white ruler running along one or two edges, and some have catalogue number tickets. I need these elements removed, the ruler at the very least.
Example images of coins:
The images all have the ruler in slightly different places, so I can't just perform the same crop on them.
So I tried to remove the black and replace it with white using this code:
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
im = Image.open('image-0.jpg')
im = im.convert('RGBA')
data = np.array(im) # "data" is a height x width x 4 numpy array
red, green, blue, alpha = data.T # Temporarily unpack the bands for readability
# Replace black with white
black_areas = (red < 150) & (blue < 150) & (green < 150)
data[..., :-1][black_areas.T] = (255, 255, 255) # Transpose back needed
im2 = Image.fromarray(data)
im2.show()
but it pretty much just removed half the coin as well:
I was having a read of some posts on OpenCV, but thought I'd first see if there was a simpler way I'd missed.
So I have taken a look at your problem and found a solution for the two images you provided. I hope it works for your other images as well, but that is always hard to tell, as things can differ on an individual basis. This solution uses OpenCV for preprocessing and contour detection to get the 2nd and 3rd largest elements in your picture (the largest is the bounding box around the edges), which should be your coins. Then I create a box around those two items and add some padding before cropping to size.
So we start off with preprocessing:
import numpy as np
import cv2
img = cv2.imread(r'<PATH TO YOUR IMAGE>')
img = cv2.resize(img, None, fx=3, fy=3)
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(imgray, (5, 5), 0)
ret, thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
Still rather basic: we make the image bigger so it is easier to detect contours, then turn it into grayscale, blur it, and apply thresholding so that all grey values become either white or black. This gives us the following image:
We now do contour detection, get the areas around our contours and sort them by the biggest area. Then we drop the biggest one as it is the box around the whole image and take the 2nd and 3rd biggest. And then get the x,y,w,h values we are interested in.
contours, hierarchy = cv2.findContours(
thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
areas = []
for cnt in contours:
    area = cv2.contourArea(cnt)
    areas.append((area, cnt))
areas.sort(key=lambda x: x[0], reverse=True)
areas.pop(0)
x, y, w, h = cv2.boundingRect(areas[0][1])
x2, y2, w2, h2 = cv2.boundingRect(areas[1][1])
If we draw a rectangle around those contours:
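For reference, a minimal sketch of how that visualization could be produced from the x, y, w, h values above (this drawing code is my addition, not part of the original pipeline):
boxes = img.copy()
cv2.rectangle(boxes, (x, y), (x + w, y + h), (0, 255, 0), 3)
cv2.rectangle(boxes, (x2, y2), (x2 + w2, y2 + h2), (0, 255, 0), 3)
cv2.imshow('bounding boxes', boxes)
cv2.waitKey(0)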
Now we take those coordinates and create a box around both of them. This might need some minor adjusting: I just quickly took the bigger width of the two rather than the corresponding one for the right coin, but since I added extra padding, it should be fine in most cases. And finally crop to size:
pad = 15
img = img[(min(y, y2) - pad) : (max(y, y2) + max(h, h2) + pad),
(min(x, x2) - pad) : (max(x, x2) + max(w, w2) + pad)]
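One small hedged refinement (my addition): if a coin sits close to the image border, the padding can push a start index below zero, and negative NumPy start indices wrap around to the other side of the array. Clamping the start indices avoids an accidentally empty crop:
y0 = max(min(y, y2) - pad, 0)
x0 = max(min(x, x2) - pad, 0)
img = img[y0 : max(y, y2) + max(h, h2) + pad,
          x0 : max(x, x2) + max(w, w2) + pad]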
I hope this helps you understand how you could achieve what you want; I tried it on both your images and it worked well for them. It might need some adjustments, and depending on how your other images look, the simple approach of taking the two biggest objects (apart from the image bounding box) might have to be turned into something more sophisticated that detects the circular shapes (a rough sketch follows below) or something along those lines. Alternatively, you could try to detect the rulers and crop inwards from their position. You will have to decide after trying this on more example images from your dataset.
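If you do go down the route of detecting the circular shapes directly, cv2.HoughCircles is one option. A rough sketch, where every parameter value is a guess that would need tuning on your dataset and the path is a placeholder:
import cv2
import numpy as np
img = cv2.imread('coins.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)
# dp, minDist, param1/2 and the radius bounds all need tuning per dataset
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=50, minRadius=50, maxRadius=300)
if circles is not None:
    for cx, cy, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (int(cx), int(cy)), int(r), (0, 255, 0), 3)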
If you're looking for a robust solution, you should try something like Max Kaha's response, since it'll provide you with greater fine-tuning.
Since the rulers tend to be left with just a little bit of text after your "black to white" filter, a quick solution is to use erosion followed by a dilation to create a mask for your images, and then apply the mask to the original image.
Pillow offers that with the ImageFilter class. Here's your code with a few modifications that'll achieve that:
from PIL import Image, ImageFilter
import numpy as np
import matplotlib.pyplot as plt
WHITE = 255, 255, 255
input_image = Image.open('image.png')
input_image = input_image.convert('RGBA')
input_data = np.array(input_image)  # "input_data" is a height x width x 4 numpy array
red, green, blue, alpha = input_data.T # Temporarily unpack the bands for readability
# Replace black with white
thresh = 30
black_areas = (red < thresh) & (blue < thresh) & (green < thresh)
input_data[..., :-1][black_areas.T] = WHITE # Transpose back needed
erosion_factor = 5
# dilation is bigger to avoid cropping the objects of interest
dilation_factor = 11
erosion_filter = ImageFilter.MaxFilter(erosion_factor)
dilation_filter = ImageFilter.MinFilter(dilation_factor)
eroded = Image.fromarray(input_data).filter(erosion_filter)
dilated = eroded.filter(dilation_filter)
mask_threshold = 220
# the mask is black on regions to be hidden
mask = dilated.convert('L').point(lambda x: 255 if x < mask_threshold else 0)
# create base image
output_image = Image.new('RGBA', input_image.size, WHITE)
# paste only the desired regions
output_image.paste(input_image, mask=mask)
output_image.show()
You should also play around with the black-to-white threshold and the erosion/dilation factors to find the best fit for most of your images; a small tuning sketch follows.
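To make that experimentation quicker, one hedged option (my addition) is to factor the mask construction into a helper and write out one mask per candidate parameter pair; for brevity this sketch runs on the raw input image, so in practice you would pass the black-to-white filtered version instead:
from PIL import Image, ImageFilter
def make_mask(image, erosion_factor, dilation_factor, mask_threshold=220):
    # same erosion/dilation trick as above, factored out for experimentation
    eroded = image.filter(ImageFilter.MaxFilter(erosion_factor))
    dilated = eroded.filter(ImageFilter.MinFilter(dilation_factor))
    return dilated.convert('L').point(lambda x: 255 if x < mask_threshold else 0)
input_image = Image.open('image.png').convert('RGBA')
for e, d in ((3, 9), (5, 11), (7, 13)):  # filter sizes must be odd
    make_mask(input_image, e, d).save('mask_e{}_d{}.png'.format(e, d))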
So this code is able to segment a variety of rooms it identifies into different colors, as seen below. The question is: how do I obtain the area of the colored rooms (like the blue ones)? The rooms are in a 1m:150m ratio.
The first image is the output I need to measure, the second is the image I used to run the code, and the third is the original image for reference. Thanks in advance.
import cv2
import numpy as np

def find_rooms(img, noise_reduction=10, corners_threshold=0.0000001,
               room_close=2, gap_in_wall_threshold=0.000001):
    # :param img: grayscale image of rooms, already eroded and with doors removed etc.
    # :param noise_reduction: minimum contour area kept during noise removal
    # :param corners_threshold: corners to retain; a higher value removes more of the house
    # :param room_close: maximum line length to add to close off open doors
    # :param gap_in_wall_threshold: minimum number of pixels to identify a component
    #                               as a room instead of a hole in the wall
    # :return: rooms: list of numpy arrays containing boolean masks for each detected room
    #          colored_house: image with each room given a color
    assert 0 <= corners_threshold <= 1

    # Remove noise left from door removal
    img[img < 128] = 0
    img[img > 128] = 255
    contours, _ = cv2.findContours(~img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(img)
    for contour in contours:
        area = cv2.contourArea(contour)
        if area > noise_reduction:
            cv2.fillPoly(mask, [contour], 255)
    img = ~mask

    # Detect corners (you can play with the parameters here)
    # Harris corner detection
    dst = cv2.cornerHarris(img, 4, 3, 0.000001)
    dst = cv2.dilate(dst, None)
    corners = dst > corners_threshold * dst.max()

    # Draw lines to close the rooms off by adding a line between corners
    # on the same x or y coordinate. This gets some false positives.
    # Could try disallowing drawing through other existing lines; needs testing.
    for y, row in enumerate(corners):
        x_same_y = np.argwhere(row)
        for x1, x2 in zip(x_same_y[:-1], x_same_y[1:]):
            if x2[0] - x1[0] < room_close:
                color = 0
                cv2.line(img, (int(x1[0]), y), (int(x2[0]), y), color, 1)
    for x, col in enumerate(corners.T):
        y_same_x = np.argwhere(col)
        for y1, y2 in zip(y_same_x[:-1], y_same_x[1:]):
            if y2[0] - y1[0] < room_close:
                color = 0
                cv2.line(img, (x, int(y1[0])), (x, int(y2[0])), color, 1)

    # Mark the outside of the house as black
    contours, _ = cv2.findContours(~img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour_sizes = [(cv2.contourArea(contour), contour) for contour in contours]
    biggest_contour = max(contour_sizes, key=lambda x: x[0])[1]
    mask = np.zeros_like(mask)
    cv2.fillPoly(mask, [biggest_contour], 255)
    img[mask == 0] = 0

    # Find the connected components in the house
    ret, labels = cv2.connectedComponents(img)
    img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
    unique = np.unique(labels)
    rooms = []
    for label in unique:
        component = labels == label
        if img[component].sum() == 0 or np.count_nonzero(component) < gap_in_wall_threshold:
            color = 0
        else:
            rooms.append(component)
            color = np.random.randint(0, 255, size=3)
        img[component] = color
    return rooms, img

# Read gray image
img = cv2.imread('output16.png', 0)
rooms, colored_house = find_rooms(img.copy())
cv2.imshow('result', colored_house)
cv2.waitKey()
cv2.destroyAllWindows()
Ok so let's say that you read the segmented picture using OpenCV:
import cv2
import numpy as np
# reading the segmented picture in coloured mode
image = cv2.imread("path/to/segmented/coloured/picture.jpg", cv2.IMREAD_COLOR)
Now, suppose that you know the size in square meters of the entire picture. If, for instance, the picture reflects a total of 150m x 70m, you have a total size of 150x70 = 10500m². Let's declare this as a variable:
total_size = 10500
You also want to know the total number of pixels in the picture. If, for instance, your picture is 750*350 pixels, you have 262500 pixels. You can get that with:
total_number_of_pixels = image.shape[0]*image.shape[1]
Now, as I said in a comment, you also want to know the number of pixels for each unique colour in the segmented picture, which you can do using:
# count all occurrences of unique colours in your picture
unique, counts = np.unique(image.reshape(-1, image.shape[2]), axis=0, return_counts=True)
coloured_pixel_counts = sorted(zip(unique, counts), key=lambda x: x[1])
Now, all you have left to do is just a cross-multiplication, which can be done with something like this:
rooms = []
for colour, pixel_count in coloured_pixel_counts:
    rooms.append((colour, (pixel_count / total_number_of_pixels) * total_size))
You should now have a list of all colours and the approximate size in square meters of the rooms of each colour.
Now, please note that you would probably have to subset this list to the colours that interest you, since some colours do not really seem to correspond to a room in your segmented pictures; a small sketch of such filtering follows.
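As a hedged illustration of that subsetting step (both the target colour and the size cutoff below are placeholders, not values taken from the question):
import numpy as np
target_colour = np.array([255, 0, 0])  # pure blue in BGR
min_room_size = 5                      # square meters
rooms_of_interest = [(colour, size) for colour, size in rooms
                     if size >= min_room_size]
for colour, size in rooms_of_interest:
    if np.array_equal(colour, target_colour):
        print('room with colour {}: ~{:.1f} m^2'.format(colour, size))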
Again, please ask if anything is unclear!
So the measurement will be based on pixels, and you will need to know the minimum and maximum of the colour range you want to "measure" (the bounds used below are a typical HSV green range, so the image is converted to HSV first). I ran this code on your image to find the percentage of the green colored area relative to the whole area of the house, and I got the following result:
The number of filtered pixels is: 331213 Which counts for %5 of the house
import cv2
import numpy as np
import math

img = cv2.imread('22I7X.png')
# the color range below is given in HSV, so convert the image first
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Defining wanted color range
filteredColorMin = np.array([36, 0, 0], np.uint8)      # low range
filteredColorMax = np.array([70, 255, 255], np.uint8)  # high range

# Find all the pixels in the wanted color range
dst = cv2.inRange(hsv, filteredColorMin, filteredColorMax)

# Count non-zero values from the filtered range
numFilteredColor = cv2.countNonZero(dst)

# Get the total number of pixels in the image to compute the percentage
numTotalPixels = img.shape[0] * img.shape[1]

print('The number of filtered pixels is: ' + str(numFilteredColor)
      + " Which counts for %" + str(math.ceil((numFilteredColor / numTotalPixels) * 100))
      + " of the house")
cv2.imshow("original image",img)
cv2.waitKey(0)
I have a large image with some letters in it and a cutout of one letter ("A"). I need to find each A in the larger image and color it red.
Large Image:
Alphabet A:
To solve the problem, I have used the following code:
import cv2, numpy as np
# read the image and convert into binary
a = cv2.imread('search.png', 0)
ret,binary_image = cv2.threshold(a,230,255,cv2.THRESH_BINARY_INV)
# create the Structuring element
letter_a = cv2.imread('A.png', 0)
ret,se = cv2.threshold(letter_a,230,255,cv2.THRESH_BINARY_INV)
#erosion and dilation for finding A
erosion = cv2.erode(binary_image , se)
new_se = cv2.flip(se,0)
dilation = cv2.dilate(erosion, new_se)
cv2.imwrite('dilation.jpg', dilation )
At this point, I get the following image:
As you can see, I am clearly identifying all the A's. However, I need to color those A's red and, most importantly, draw them on the first large image, which has black letters on a white background. Is there any way to do that? Maybe by writing on the first image via a numpy array?
You can solve this as follows.
First off, to color the letters red in the main image, it is best to load it in color. A grayscale copy is created to perform the thresholding.
Then a black image with the dimensions of the main image is created, and its color is set to red. The image with the A's is used as a mask to get an image of red A's. These red A's are then added to the main image.*
Result:
Code:
import cv2, numpy as np
# load the image in color
a = cv2.imread('search.png')
# create grayscale
a_gray = cv2.cvtColor(a,cv2.COLOR_BGR2GRAY)
ret,binary_image = cv2.threshold(a_gray,230,255,cv2.THRESH_BINARY_INV)
# create the Structuring element
letter_a = cv2.imread('A.png', 0)
ret,se = cv2.threshold(letter_a,230,255,cv2.THRESH_BINARY_INV)
#erosion and dilation for finding A
erosion = cv2.erode(binary_image , se)
new_se = cv2.flip(se,0)
dilation = cv2.dilate(erosion, new_se)
# create a red background image
red = np.zeros((a.shape[:3]),dtype=a.dtype)
red[:] = (0,0,255)
# apply the mask with A's to get red A's
red_a = cv2.bitwise_and(red,red,mask=dilation)
# Add the A's to the main image
result = cv2.add(a,red_a)
cv2.imshow('Result', result )
cv2.waitKey(0)
cv2.destroyAllWindows()
*If the letters are not black, an extra step is needed; read this tutorial. But for your image this is not necessary.
I used the following code to solve the problem:
import cv2, numpy as np
# read the image and convert into binary
color_image = cv2.imread(r'search.png', 1)
gray_image = cv2.imread(r'search.png', 0)
ret,binary_image = cv2.threshold(gray_image,230,255,cv2.THRESH_BINARY_INV)
# create the Structuring element
letter_a = cv2.imread('A.png', 0)
ret,se = cv2.threshold(letter_a,230,255,cv2.THRESH_BINARY_INV)
#erosion and dilation for finding A
erosion = cv2.erode(binary_image, se)
new_se = cv2.flip(se,0)
dilation = cv2.dilate(erosion, new_se)
for i in zip(*np.where(dilation == 255)):
    color_image[i[0], i[1], 0] = 0
    color_image[i[0], i[1], 1] = 0
    color_image[i[0], i[1], 2] = 255
# show and save image
cv2.imwrite('all_a.jpg', color_image)
cv2.imshow('All A',color_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
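As an aside, the per-pixel loop can be replaced by a single boolean-mask assignment, which is far faster on large images; a minimal equivalent using the same variables:
# same effect as the loop above, vectorized
color_image[dilation == 255] = (0, 0, 255)  # red in BGR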
I have an image that I want to process. I'm using OpenCV and skimage. My goal is to find the distribution of the red dots around the barycenter of all the dots. I proceed as follows: first I select the color, then I binarize the image I obtain. Eventually, I would count the red pixels lying in rings of a certain width around the barycenter, in order to get an average distribution as a function of radius, assuming cylindrical symmetry.
My issue is that I have no idea how to find the position of the barycenter.
I would also like to know if there is a short way to count the red pixels in the rings.
Here is my code:
import cv2
import matplotlib.pyplot as plt
from skimage import io, filters, measure, color, external
First I load the image:
sph = cv2.imread('image_sper.jpg')
sph = cv2.cvtColor(sph, cv2.COLOR_BGR2RGB)
plt.imshow(sph)
plt.show()
I want to select the red color. Following https://realpython.com/python-opencv-color-spaces/, I convert the image to HSV and use a mask.
hsv_sph = cv2.cvtColor(sph, cv2.COLOR_RGB2HSV)
light_red = (1, 100, 100)
dark_red = (18, 255, 255)
mask = cv2.inRange(hsv_sph, light_red, dark_red)
result = cv2.bitwise_and(sph, sph, mask=mask)
And here is the result:
plt.imshow(result)
plt.show()
Now I binarize the image, since it will be easier to process afterwards.
red_image = result[:,:,1]
red_th = filters.threshold_otsu(red_image)
red_mask = red_image > red_th
red_mask.dtype
io.imshow(red_mask)
And here we are:
What I would like now is some help finding the barycenter of the white pixels.
Thanks!
Edit: the binarization gives the image boolean False/True pixel values. I don't know how to transform them into 0/1 pixels. If False were 0 and True were 1, code to find the barycenter would be:
np.shape(red_mask)
# (321L, 316L)
bari = 0
barj = 0
N = 0
# NumPy booleans already behave as 0/1 in arithmetic, so this works as-is
for i in range(321):
    for j in range(316):
        bari = bari + red_mask[i, j] * i
        barj = barj + red_mask[i, j] * j
        N = N + red_mask[i, j]
bari = bari / N
barj = barj / N
Another question that should have been asked at http://answers.opencv.org/questions/. But let's go!
The process I implemented mostly uses structural analysis (https://docs.opencv.org/3.3.1/d3/dc0/group__imgproc__shape.html#ga17ed9f5d79ae97bd4c7cf18403e1689a).
First I read your image and applied a threshold:
import cv2
import matplotlib.pyplot as plt
import numpy as np
from skimage import io, filters, measure, color, external
sph = cv2.imread('points.png')
ret,thresh = cv2.threshold(sph,200,255,cv2.THRESH_BINARY)
Then I applied a morphological opening and converted the result for noise reduction:
kernel = np.ones((2,2),np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
opening = cv2.cvtColor(opening, cv2.COLOR_BGR2GRAY)
opening = cv2.convertScaleAbs(opening)
Then used "cv::findContours (InputOutputArray image, OutputArrayOfArrays contours, OutputArray hierarchy, int mode, int method, Point offset=Point())" to find all blobs.
After that, just calculate the center of each region and do a weighted average based on the contour area. This way, I got the centroid of the points (X: 143.4202820443726, Y: 154.56471750651224).
im2, contours, hierarchy = cv2.findContours(opening, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
areas = []
centersX = []
centersY = []
for cnt in contours:
    areas.append(cv2.contourArea(cnt))
    M = cv2.moments(cnt)
    centersX.append(int(M["m10"] / M["m00"]))
    centersY.append(int(M["m01"] / M["m00"]))
full_areas = np.sum(areas)
acc_X = 0
acc_Y = 0
for i in range(len(areas)):
    acc_X += centersX[i] * (areas[i] / full_areas)
    acc_Y += centersY[i] * (areas[i] / full_areas)
print (acc_X, acc_Y)
cv2.circle(sph, (int(acc_X), int(acc_Y)), 5, (255, 0, 0), -1)
plt.imshow(sph)
plt.show()
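The other part of the question, counting the red pixels in rings around the barycenter, was not covered above. Here is a hedged NumPy sketch built on the question's red_mask (the ring width dr is a placeholder); it also shows that the barycenter itself can be computed without loops, since NumPy booleans behave as 0/1:
import numpy as np
coords = np.argwhere(red_mask)  # (N, 2) array of (row, col) positions of True pixels
bari, barj = coords.mean(axis=0)
# count red pixels per ring of width dr around the barycenter
dr = 5
r = np.sqrt((coords[:, 0] - bari) ** 2 + (coords[:, 1] - barj) ** 2)
ring_counts, ring_edges = np.histogram(r, bins=np.arange(0, r.max() + dr, dr))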
I'm new to OpenCV. I've managed to detect the object and place an ROI around it, but I can't manage to detect whether the object is black or white. I've found something that I think might work, but I don't know if it is the right solution. The function should return True or False depending on whether the object is black or white. Does anyone have experience with this?
def filter_color(img):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower_black = np.array([0, 0, 0])
    upper_black = np.array([350, 55, 100])  # note: OpenCV hue values only go up to 179
    black = cv2.inRange(hsv, lower_black, upper_black)
If you are certain that the ROI is going to be basically black or white and are not worried about misidentifying something, then you should be able to just average the pixels in the ROI and check whether that average is above or below some threshold.
In the code below, after you set an ROI using the newer NumPy method, you can pass the ROI/image into the method as if you were passing a full image.
Copy-Paste Sample
import cv2
import numpy as np

def is_b_or_w(image, black_max_bgr=(40, 40, 40)):
    # use this if you want to check the channels are all basically equal
    # I split this up into small steps to find out where your error is coming from
    mean_bgr_float = np.mean(image, axis=(0, 1))
    mean_bgr_rounded = np.round(mean_bgr_float)
    mean_bgr = mean_bgr_rounded.astype(np.uint8)
    # use this if you just want a simple threshold for simple grayscale
    # or if you want to use an HSV (V) measurement as in your example
    mean_intensity = int(round(np.mean(image)))
    return 'black' if np.all(mean_bgr < black_max_bgr) else 'white'

# make a test image for ROIs
shape = (10, 10, 3)  # 10x10 BGR image
im_blackleft_white_right = np.zeros(shape, dtype=np.uint8)
im_blackleft_white_right[:, :5] = 10
im_blackleft_white_right[:, 5:] = 255
roi_darkgray = im_blackleft_white_right[:, :5]
roi_white = im_blackleft_white_right[:, 5:]

# test them with ROI
print('dark gray image identified as: {}'.format(is_b_or_w(roi_darkgray)))
print('white image identified as: {}'.format(is_b_or_w(roi_white)))

# output
# dark gray image identified as: black
# white image identified as: white
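If the simple grayscale threshold mentioned in the comments is all you need, a minimal variant might look like this (my addition; the 128 cutoff is a guess that may need tuning):
def is_b_or_w_simple(image, threshold=128):
    # mean intensity over all channels, compared against a single cutoff
    return 'black' if np.mean(image) < threshold else 'white'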
I don't know if this is the right approach, but it worked for me.
import math

Thres = 50  # distance-from-black threshold, currently hard-coded
h, w = img.shape[:2]  # img is the (small) ROI
black = 0
not_black = 0
for y in range(h):
    for x in range(w):
        pixel = img[y][x]
        # Euclidean distance from pure black (0, 0, 0);
        # cast to int to avoid uint8 overflow when squaring
        d = math.sqrt(int(pixel[0]) ** 2 + int(pixel[1]) ** 2 + int(pixel[2]) ** 2)
        if d < Thres:
            black = black + 1
        else:
            not_black = not_black + 1
This one worked for me, but as I said, I don't know if it is the right approach. It asks a lot of processing power, so I defined a much smaller ROI. The Thres value is currently hard-coded...
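The nested loop can also be vectorized with NumPy, which removes most of that processing cost; a minimal sketch using the same img and Thres:
import numpy as np
# Euclidean distance of every pixel from pure black, then a single count
d = np.sqrt((img.astype(np.float64) ** 2).sum(axis=2))
black = int(np.count_nonzero(d < Thres))
not_black = d.size - black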