I am new to OpenCV and I do not understand how to traverse an image and change every pixel that is exactly black, RGB(0,0,0), to white, RGB(255,255,255).
Is there a function or way to check all the pixels and, if a pixel is RGB(0,0,0), set it to RGB(255,255,255)?
Assuming that your image is represented as a numpy array of shape (height, width, channels) (what cv2.imread returns), you can do:
height, width, _ = img.shape

for i in range(height):
    for j in range(width):
        # img[i, j] is the pixel at position (i, j), in BGR order since cv2.imread loads BGR
        # check if it's [0, 0, 0] and replace it with [255, 255, 255] if so
        if img[i, j].sum() == 0:
            img[i, j] = [255, 255, 255]
A faster, mask-based approach looks like this:
# get (i, j) positions of all pixels that are black (i.e. [0, 0, 0])
black_pixels = np.where(
    (img[:, :, 0] == 0) &
    (img[:, :, 1] == 0) &
    (img[:, :, 2] == 0)
)

# set those pixels to white
img[black_pixels] = [255, 255, 255]
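If you prefer a single boolean mask, an equivalent (and arguably more readable) variant collapses the channel axis with np.all. This is a minimal sketch assuming img is the 3-channel array loaded above:
import numpy as np

# True wherever all three channels are 0, i.e. the pixel is pure black
black_mask = np.all(img == 0, axis=-1)
img[black_mask] = [255, 255, 255]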
Subtract 255 from each pixel and take the absolute value; this inverts the image, so black becomes white.
For grayscale and black-and-white images:
sub_array = 255 * np.ones(28, dtype=int)   # 28 must match the width of your image
img_Invert = np.abs(np.subtract(img, sub_array))
cv.rectangle(img, (0, 0), (img.shape[1], img.shape[0]), (255, 255, 255), thickness=-1)
cv.imshow('img', img)
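For inverting a grayscale image, a simpler alternative worth noting (a sketch assuming img is a single-channel uint8 array) is to let NumPy broadcasting or OpenCV's bitwise_not do the subtraction:
import cv2 as cv

inverted = 255 - img          # broadcasts the subtraction over the whole array
# or, equivalently:
inverted = cv.bitwise_not(img)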
I have a grayscale image showing a black background, a white leaf, and some disease spots of other colours on it.
What I want to do is turn the white portion black, and turn every pixel that is neither black nor white into white.
img2 = cv2.imread('binary.jpeg')
binarr = np.where(img2 < 255 or img2 > 0, keep as it is, else make it 0)
plt.imshow(binarr)
but this does not seem to be supported by Python.
Any other suggestions on how to do this?
I labelled your question for numpy so you may get a better answer. Also, it is not clear what data type cv2.imread() actually returns, so I've made an assumption that it could be a list, in which case:
binarr = [0 if pixel == 255 else 255 for pixel in img]
ie:
>>> img = [0,255,0,255,158]
>>> binarr = [0 if pixel == 255 else 255 for pixel in img]
>>> binarr
[255, 0, 255, 0, 255]
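Since cv2.imread() actually returns a NumPy array rather than a list, a vectorized alternative is np.where. This is a sketch under that assumption, reproducing the same rule (255 becomes 0, everything else becomes 255) on the file named in the question:
import cv2
import numpy as np

img = cv2.imread('binary.jpeg', cv2.IMREAD_GRAYSCALE)
binarr = np.where(img == 255, 0, 255).astype(np.uint8)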
I have two images. In one image all non-alpha-channel pixels are equal to 0, and I'd like its alpha channel values to be 255 wherever the other image (which is the same size) has pixels that are anything but 0. In this attempt, I create a 4-channel np array based on the original image, use np.argwhere to find where the pixel values are non-zero, and then set the alpha channel value in the new np array based on that.
For example, for each pixel in my input image with values [255, 255, 255], I'd like the corresponding pixel in my new image to be [0, 0, 0, 255]. For each pixel in my input image with values [0, 0, 0], I'd like the corresponding pixel in my new image to be [0, 0, 0, 0].
import cv2 as cv
import numpy as np

mask_file = cv.imread(r'PlateMask_0001.png', cv.IMREAD_UNCHANGED)
scale_factor = 0.125
w = int(mask_file.shape[1] * scale_factor)
h = int(mask_file.shape[0] * scale_factor)
scaled = cv.resize(mask_file, (w, h))

coords = np.argwhere(scaled > 0)
new_object = np.zeros((120, 160, 4))
new_object[coords, :] = 255

cv.imshow('Mask', mask_file)
cv.imshow('Scaled', new_object)
cv.waitKey(0)
cv.destroyAllWindows()
This is my first question on Stack so please feel free to suggest improvements on question formatting, etc. Thank you.
Consider img1 to be your original image and img2 to be the image whose alpha channel needs to be modified.
In the following, the alpha channel of img2 is set to 255 at every coordinate where img1 has (255, 255, 255) (the per-channel comparison is collapsed with np.all so the boolean mask matches the 2-D alpha plane):
img2[:, :, 3][np.all(img1 == (255, 255, 255), axis=-1)] = 255
Likewise for value 0:
img2[:, :, 3][np.all(img1 == (0, 0, 0), axis=-1)] = 0
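Putting it together, a minimal end-to-end sketch (assuming, as in the question, that img1 is a 3-channel uint8 mask image and the output should be all-black RGB with alpha driven by img1):
import numpy as np

h, w = img1.shape[:2]
img2 = np.zeros((h, w, 4), dtype=np.uint8)   # RGB channels stay [0, 0, 0]
# alpha = 255 where img1 is pure white, 0 everywhere else
img2[:, :, 3] = np.where(np.all(img1 == 255, axis=-1), 255, 0)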
Given a Numpy matrix of shape (height, width), I am looking for the fastest way to create another Numpy matrix of shape (height, width, 4) where 4 represents RGBA values. I would like to do this value-based; so, for all values of 0 in the first matrix I would like to have a value of [255, 255, 255, 0] in the second matrix at the same location.
I would like to do this with NumPy without needing to slowly iterate like below:
for i in range(0, height):
    for j in range(0, width):
        if image[i][j] == 0:
            new_image[i][j] = [255, 255, 255, 0]
        elif image[i][j] == 1:
            new_image[i][j] = [0, 255, 0, 0.5]
As you can see, I am creating a matrix where the value 0 becomes transparent white, and 1 becomes green with an alpha of 0.5; are there faster NumPy solutions?
I am guessing numpy.where should greatly help speed up the process, but I haven't yet figured out the proper implementation when there are many values to translate.
For a cleaner solution, especially when working with multiple labels, we could make use of np.searchsorted to trace back the values for the mapping, like so -
# Edit to include more labels and values here
label_ar = np.array([0,1]) # sorted label array
val_ar = np.array([[255, 255, 255, 0],[0, 255, 0, 0.5]])
# Get output array
out = val_ar[np.searchsorted(label_ar, image)]
Note that this assumes that all unique labels from image are in label_ar.
So, now let's say we have two more labels 2 and 3 in image, something like this -
for i in range(0, height):
    for j in range(0, width):
        if image[i,j] == 0:
            new_image[i,j] = [255, 255, 255, 0]
        elif image[i,j] == 1:
            new_image[i,j] = [0, 255, 0, 0.5]
        elif image[i,j] == 2:
            new_image[i,j] = [0, 255, 255, 0.5]
        elif image[i,j] == 3:
            new_image[i,j] = [255, 255, 255, 0.5]
We will edit the labels and values accordingly and use the same searchsorted solution -
label_ar = np.array([0,1,2,3]) # sorted label array
val_ar = np.array([
    [255, 255, 255, 0],
    [0, 255, 0, 0.5],
    [0, 255, 255, 0.5],
    [255, 255, 255, 0.5]])
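To make the mapping concrete, here is a small self-contained usage example with a toy 2x3 label image (variable names follow the answer above):
import numpy as np

image = np.array([[0, 1, 2],
                  [3, 2, 0]])

label_ar = np.array([0, 1, 2, 3])   # sorted label array
val_ar = np.array([
    [255, 255, 255, 0],
    [0, 255, 0, 0.5],
    [0, 255, 255, 0.5],
    [255, 255, 255, 0.5]])

out = val_ar[np.searchsorted(label_ar, image)]
print(out.shape)   # (2, 3, 4)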
You are right that np.where is how you solve this problem. np.where is a vectorized function, so it should be much faster than your solution.
np.where doesn't have an elif that I'm aware of, but you can get around that by nesting where calls.
# add a trailing axis so the (height, width, 1) condition broadcasts against the length-4 colour values
new_image = np.where(
    image[..., None] == 0,
    [255, 255, 255, 0],
    np.where(
        image[..., None] == 1,
        [0, 255, 0, 0.5],
        np.nan
    )
)
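As a quick sanity check, here is a small sketch of how the nested where behaves on a toy label array (any label other than 0 or 1 ends up as NaN, which is why this approach suits a small, known set of labels):
import numpy as np

image = np.array([[0, 1],
                  [1, 0]])

new_image = np.where(
    image[..., None] == 0,
    [255, 255, 255, 0],
    np.where(image[..., None] == 1, [0, 255, 0, 0.5], np.nan)
)
print(new_image.shape)   # (2, 2, 4)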
I have the image
I am looking for python solution to break the shape in this image into smaller parts according to the contour in the image.
I have looked into solutions using Canny and findContours in OpenCV, but none of them works for me.
Edit:
Code used:
using Canny method
import cv2
import numpy as np

img = cv2.imread('area_of_blob_maxcontrast_white.jpg')
edges = cv2.Canny(img, 100, 200)
cv2.imwrite('area_of_blob_maxcontrast_white_edges.jpg', edges)
using findContours method
import numpy as np
import argparse
import cv2
image = cv2.imread('area_of_blob_maxcontrast_white.png')
lower = np.array([0, 0, 0])
upper = np.array([15, 15, 15])
shapeMask = cv2.inRange(image, lower, upper)

(_, cnts, _) = cv2.findContours(shapeMask.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
print "I found %d black shapes" % (len(cnts))
cv2.imshow("Mask", shapeMask)

for c in cnts:
    # draw the contour and show it
    cv2.drawContours(image, [c], -1, (0, 255, 0), 2)
    cv2.imshow("Image", image)
    cv2.waitKey(0)
The trick is to make your faint single-pixel boundary slightly bolder. I do it by changing any white pixel that has two adjacent black pixels (above, below, to the left or to the right) to black. (I do it extremely slowly, though. I'm pretty sure there must be a smarter way to do it with OpenCV or NumPy.)
Here is my code:
#!/usr/bin/env python
import numpy as np
import cv2
THRESH = 240
orig = cv2.imread("map.png")
img = cv2.cvtColor(orig, cv2.COLOR_BGR2GRAY)
# Make the faint 1-pixel boundary bolder
rows, cols = img.shape
new_img = np.full_like(img, 255) # pure white image
for y in range(rows):
    if not (y % 10):
        print ('Row = %d (%.2f%%)' % (y, 100.*y/rows))
    for x in range(cols):
        score = 1 if y > 0 and img.item(y-1, x) < THRESH else 0
        score += 1 if x > 0 and img.item(y, x-1) < THRESH else 0
        score += 1 if y < rows-1 and img.item(y+1, x) < THRESH else 0
        score += 1 if x < cols-1 and img.item(y, x+1) < THRESH else 0
        if img.item(y, x) < THRESH or score >= 2:
            new_img[y, x] = 0  # black pixels show boundary
cv2.imwrite('thresh.png', new_img)
# Find all contours on the map
_th, contours, hierarchy = cv2.findContours(new_img,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
print "Number of contours detected = %d" % len(contours)
# Fill second level regions on the map
coln = 0
colors = [
[127, 0, 255],
[255, 0, 127],
[255, 127, 0],
[127, 255, 0],
[0, 127, 255],
[0, 255, 127],
]
hierarchy = hierarchy[0]
for i in range(len(contours)):
    area = cv2.contourArea(contours[i])
    if hierarchy[i][3] == 1:
        print (i, area)
        coln = (coln + 1) % len(colors)
        cv2.drawContours(orig, contours, i, colors[coln], -1)
cv2.imwrite("colored_map.png", orig)
Input image:
Output image:
Here I color only the direct descendants of the outermost contour (hierarchy[i][3] == 1). But you can change it to exclude the lakes.
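As for the slow per-pixel thickening loop above, a much faster alternative is morphological dilation. This is only a sketch, not a drop-in replacement for the exact two-neighbour rule: it thresholds the grayscale image and dilates the dark boundary with a small kernel before running findContours.
import cv2
import numpy as np

# boundary (dark) pixels become white in the inverted threshold
_, dark = cv2.threshold(img, THRESH, 255, cv2.THRESH_BINARY_INV)
dark = cv2.dilate(dark, np.ones((3, 3), np.uint8), iterations=1)  # thicken by one pixel
new_img = cv2.bitwise_not(dark)                                   # back to a black boundary on white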
I need to save a transparent image made from a numpy array. I can save the image with:
img = Image.fromarray(data, 'RGB')
But I need it to be transparent so I tried to save it with :
img = Image.fromarray(data, 'RGBA')
Then I get this error :
File "/home/pi/Documents/Projet/GetPos.py", line 51, in click
img = Image.fromarray(data, 'RGBA')
File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 2217, in
fromarray
return frombuffer(mode, size, obj, "raw", rawmode, 0, 1)
File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 2162, in
frombuffer
core.map_buffer(data, size, decoder_name, None, 0, args)
ValueError: buffer is not large enough
I did some research but everything looks very complicated for the simple thing I'm trying to do...
Can anyone help me with this one?
Here's my complete code (I'm pretty new to Python :) ):
mouse = pymouse.PyMouse()
posX, posY = mouse.position()
print(mouse.position())
w, h = 1920, 1080
data = np.zeros((h, w, 3), dtype=np.uint8)
for x in range(posX-20, posX+20):
    if x > 1679:
        data[posY, w-1] = [255, 0, 0]
    else:
        data[posY, x] = [255, 0, 0]

for y in range(posY-20, posY+20):
    if y > 1049:
        data[h-1, posX] = [255, 0, 0]
    else:
        data[y, posX] = [255, 0, 0]
img = Image.fromarray(data, 'RGBA')
##img = Image.frombuffer('RGBA', [1080, 1920], data, "raw", 'RGBA', 0, 1)
img.save('my.png')
In order to save a transparent image, you need a fourth value per pixel, called the alpha channel, which determines the opacity of the pixel. (RGBA stands for red, green, blue and alpha.) So the only thing that has to be changed in your code is essentially providing that fourth alpha value, using tuples of 4 values instead of 3 per pixel. Setting the fourth value to 255 makes the pixel completely visible; 0 would make it 100% transparent. In the following example I simply make every pixel that you were drawing red completely visible, and the others will be transparent:
mouse = pymouse.PyMouse()
posX, posY = mouse.position()
w, h = 1920, 1080
data = np.zeros((h, w, 4), dtype=np.uint8)
for x in range(posX-20, posX+20):
    if x > 1679:
        data[posY, w-1] = [255, 0, 0, 255]
    else:
        data[posY, x] = [255, 0, 0, 255]

for y in range(posY-20, posY+20):
    if y > 1049:
        data[h-1, posX] = [255, 0, 0, 255]
    else:
        data[y, posX] = [255, 0, 0, 255]
img = Image.fromarray(data, 'RGBA')
img.save('my.png')
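If you would rather not carry the alpha value around while drawing, another option (a sketch under the same array layout) is to draw only the RGB channels as before and then fill the alpha channel in one vectorized step from whatever pixels are non-zero:
import numpy as np
from PIL import Image

data = np.zeros((h, w, 4), dtype=np.uint8)
# ... draw the red cross into data[..., :3] exactly as in the loops above ...
data[..., 3] = np.where(data[..., :3].any(axis=-1), 255, 0)

Image.fromarray(data, 'RGBA').save('my.png')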