I have a big RGB image as a NumPy array, and I want to set every pixel that has R=0, G=0, B=0 to R=255, G=0, B=0.
What is the fastest way?
I tried:
for pix in result:
    if np.all(np.logical_and(pix[0] == pix[1], pix[2] == 0, pix[2] == pix[1])):
        pix[0] = 255
But this way pix is not a single pixel. Is there a similar approach that does not iterate over the indices?
Here is a vectorized solution. Your image is basically a w by h by 3 (colors) array. We can make use of NumPy's broadcasting rules, which are not easy to grasp but are very powerful.
Basically, we compare the whole array to a 3-vector holding the values you are looking for. Due to the broadcasting rules, NumPy will then compare each pixel to that 3-vector and tell you whether it matched (so in this specific case, whether the red, green and blue values all matched). You end up with a boolean array of Trues and Falses of the same shape as the image.
Now we only want the pixels where all three colors matched. For that we use the "all" method, which is True if all values of an array are True. If we apply it along a certain axis (in this case the color axis), we get a w by h array that is True wherever all the colors matched.
Now we can apply this 2D boolean mask back to our original w by h by 3 array and get the pixels that match our color. We can then reassign them, again using broadcasting.
Here is the example code:
import numpy as np

# create a 2x2x3 image filled with ones
img = np.ones((2, 2, 3))

# make the off-diagonal pixels black
img[0, 1] = [0, 0, 0]
img[1, 0] = [0, 0, 0]

# find the all-zero pixels with a mask
# (of course any other color combination would work just as well)
# ... by applying "all" along the color axis
mask = (img == [0., 0., 0.]).all(axis=2)

# apply the mask to overwrite the matching pixels
img[mask] = [255, 0, 0]
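Applied to the original question's array (called result there, assumed to be an h by w by 3 uint8 RGB image), the same pattern would look roughly like this:
# sketch: result is assumed to be a (h, w, 3) uint8 RGB array
black = (result == [0, 0, 0]).all(axis=2)   # True where R = G = B = 0
result[black] = [255, 0, 0]                 # paint those pixels red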
Since all values are non-negative, a simple and efficient way is:
img[img.sum(axis=2) == 0, 0] = 255
img.sum(axis=2) == 0 selects the all-black pixels along the first two dimensions, and the 0 selects the red channel in the third.
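A quick sanity check of that one-liner on a tiny made-up image:
import numpy as np

# hypothetical 2x2 uint8 image with one black pixel
img = np.array([[[0, 0, 0], [10, 20, 30]],
                [[40, 50, 60], [70, 80, 90]]], dtype=np.uint8)

img[img.sum(axis=2) == 0, 0] = 255   # only the black pixel gets its red channel set
print(img[0, 0])                     # [255   0   0]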
My goal with the code is to display an image where I have removed all the saturated pixels (the pixels that are at their maximum value). This is because I am analyzing data from a .fit image of a star.
The instruction I got was:
"What you need to do is figure out what the maximum pixel value is in the 2d array from the data. Then write code that will remove those values, while still keeping the image. Basically, what I want to see is a star with a hole in the middle, where you have removed the saturated pixels."
I have managed to find the maximum pixel value (65535), and now I need to display my image without the pixels that have that particular value.
This is my code so far, but I do not know how to remove the pixels from my image.
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
fits_image_filename = "Acturus_V_2s.fit"
hdul = fits.open(fits_image_filename)
data = hdul[0].data
datacut = data[610:710,755:855]
plt.imshow(datacut, origin="lower")
MaxPixelValue = np.amax(datacut)
print(MaxPixelValue)
And this gives the output 65535, along with my image.
How am I supposed to remove those pixels?
As the comments have pointed out, it's not generally possible to "remove" pixels from an image. But there are a few ways of dealing with some pixels in an image while leaving the others alone. In general, to do this you'll end up creating a boolean mask -- an array with the same shape as your image but containing values that are either True or False.
mask = datacut >= MaxPixelValue
Now the mask array contains True wherever the original image is saturated and False everywhere else.
You can set the values of the saturated pixels to NaN, which Matplotlib will render as blank pixels (i.e., they get no color when using imshow()). Note that NaN only exists for floating-point dtypes, so if datacut is an integer array it has to be cast first:
datacut = datacut.astype(float)
datacut[mask] = np.nan
Alternatively, you can set the saturated pixels in your image to some indicative color (e.g. white or black):
datacut[mask] = 0
Or you can create a numpy maskedarray, which is a sort of combination of your original image and the mask:
masked_image = np.ma.masked_array(datacut, mask)
The numpy documentation has a brief description of how to use masked arrays.
https://numpy.org/doc/stable/reference/maskedarray.generic.html
I also found some documentation from the astrophysics community that might be useful, since you're also using fits and astropy.
https://prancer.physics.louisville.edu/astrowiki/index.php/Image_processing_with_Python_and_SciPy#Masked_Image_Operations
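For example, a minimal sketch of the masked-array route (continuing the question's snippet, so datacut and mask are assumed to exist already):
import numpy as np
import matplotlib.pyplot as plt

# masked entries are rendered as blank by imshow, so the saturated
# core of the star shows up as a hole
masked_image = np.ma.masked_array(datacut, mask)
plt.imshow(masked_image, origin="lower")
plt.show()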
Imagine the image below is a (64, 64) 2D NumPy array. How do I make this NumPy array, which has zeros (0) as a boundary, pick out a smaller region (highlighted in green)? It does not matter which region it picks (as long as it is in the upper-middle-right part of the array). The final result should look like the image on the right, where every value other than the ones highlighted in green is 0.
I don't know if I understand the problem, but you can create a (64, 64) array of zeros and copy the green region from the previous array into it.
To make it more visible, I use a (10, 10) array:
import numpy as np
#SIZE = 64
SIZE = 10
old_arr = np.random.randint(10, size=(SIZE, SIZE))
new_arr = np.zeros((SIZE, SIZE))
new_arr[3:5,4:6] = old_arr[3:5,4:6]
print(old_arr)
print(new_arr)
So, basically I have an array with 16 RGB color values, and I have to calculate the distance between the RGB value of a pixel in the input image and each of these 16. The RGB value with the lowest distance becomes the RGB value in the output image.
The problem is: I'm using nested for loops to do these operations, and it's REALLY slow. Excerpt as follows:
for i in range(row):
    for j in range(columns):
        pixel = img[i, j]
        for color in colorsarray:
            dist.append(np.linalg.norm(pixel - color))
        img[i, j] = colorsarray[dist.index(min(dist))]
        dist.clear()
Is there a numpy function that can help me optimize this?
You can calculate the distances by broadcasting the arrays.
If your image has shape (x,y,3) and your palette has shape (n,3), then you can calculate the distance between each pixel and each color as an array with shape (x,y,n):
# distance[x, y, n] is the distance from pixel (x, y) to color n
distance = np.linalg.norm(
    img[:, :, None] - colors[None, None, :], axis=3)
The index : means "the entire axis", and the index None inserts a new length-1 axis, along which the value is broadcast.
You can then choose the closest color index:
# pal_img[x,y] is the index of the color closest to
# pixel (x,y)
pal_img = np.argmin(distance, axis=2)
Finally, you can convert back to RGB:
# rgb_img[x,y] is the RGB color closest to pixel (x,y)
rgb_img = colors[pal_img]
This shows that you don't really need special functions in NumPy; plain broadcasting and fancy indexing are enough, although the result can be a bit hard to read.
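Putting the pieces together into a small runnable sketch (the image and the 16-color palette here are random stand-ins, purely for illustration):
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 5, 3))      # hypothetical small RGB image
colors = rng.integers(0, 256, size=(16, 3))     # hypothetical 16-color palette

# distance[x, y, n] is the distance from pixel (x, y) to palette color n
distance = np.linalg.norm(img[:, :, None] - colors[None, None, :], axis=3)

# index of the closest palette color for every pixel
pal_img = np.argmin(distance, axis=2)

# map indices back to RGB values
rgb_img = colors[pal_img]
print(rgb_img.shape)   # (4, 5, 3)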
Untested, but you could try to vectorize your function:
# reshape to a 1D array of pixels
dimx = image.shape[0]
image = image.reshape(-1, 3)

def f(pixel):
    # TODO here: logic to return, given the pixel, the closest match in the list
    ...

# vectorize the function and apply it to the image
image = np.vectorize(f)(image)

# set the shape back to the original
image = image.reshape(dimx, -1, 3)
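Note that np.vectorize normally maps a function over scalars; to make it pass whole pixels you would need its signature argument, for example something along these lines (continuing the snippet above, with colors as the hypothetical (16, 3) palette array; this is still a Python-level loop under the hood, so the broadcasting answer above will be much faster):
def f(pixel):
    # return the palette color closest to this pixel
    return colors[np.argmin(np.linalg.norm(colors - pixel, axis=1))]

# signature='(c)->(c)' makes vectorize pass length-3 pixels instead of scalars
image = np.vectorize(f, signature='(c)->(c)')(image)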
I created an HSV mask from the image. The result looks like the following:
I would like this mask to be represented by a set of points. My original idea was to use skimage's skeletonize to create a line and then use a sliding window to compute local means for the point creation.
However, skeletonize takes too long: it needs about 0.4 s per frame, which is not workable for video processing.
Do you want the points of all True elements of the mask, or do you just want a skeleton? If the former:
import numpy as np
from skimage import io

mask = io.imread('./mask.png')[:, :, 0] / 255
mask = mask.astype('bool')

s0, s1 = mask.shape                        # dimensions of the mask
a0, a1 = np.arange(s0), np.arange(s1)      # two 1D coordinate arrays
coords = np.array(np.meshgrid(a0, a1)).T   # cartesian product into a coordinate matrix
coords = coords[mask]                      # keep only the coordinates of True pixels
If the latter, you can get the start and end points (from left to right) of the object in the mask in a fast way with something like
start_mat = np.stack((np.roll(mask, 1, axis=1), mask), -1)
start_mask = np.fromiter(
    map(lambda p: np.all(p == np.array([False, True])), start_mat[mask]),
    dtype=bool)
starts = coords[start_mask]

end_mat = np.stack((np.roll(mask, -1, axis=1), mask), -1)
end_mask = np.fromiter(
    map(lambda p: np.all(p == np.array([False, True])), end_mat[mask]),
    dtype=bool)
ends = coords[end_mask]
This will give you a rough outline of the object. Outline points will be missing anywhere that the slope of the figure is 0. You may have to think of a vertical difference scheme for those areas. The same idea would work with np.roll(...,axis=0). You could just concatenate the unique points from rolling over rows to the points from rolling over columns to get the full outline.
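A possible sketch of that vertical pass (my own formulation using plain boolean operations instead of the np.stack/np.fromiter pattern, reusing mask, coords, starts, and ends from above):
top_mask = mask & ~np.roll(mask, 1, axis=0)      # True pixels whose upper neighbour is False
bottom_mask = mask & ~np.roll(mask, -1, axis=0)  # True pixels whose lower neighbour is False

# top_mask[mask] is aligned with coords (one entry per True pixel of the mask)
tops = coords[top_mask[mask]]
bottoms = coords[bottom_mask[mask]]

# rough full outline: unique union of the horizontal and vertical boundary points
outline = np.unique(np.concatenate((starts, ends, tops, bottoms)), axis=0)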
Averaging the correct pairs to get the skeleton isn't so easy.
Here's a resultant outline. You can definitely make this faster than 0.4 s.
Couldn't a simple for loop work?
Scan each horizontal line of your bitmap looking for...
the X position where black changes to white = a new start point.
Then, in the same scanned line, look for the next X position where white changes back to black = a new end point.
Either put dots at the start/end points for an "outline" effect, or put a dot in the middle for a "center" effect, at dot.x = start_point + (end_point - start_point) / 2.
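A rough sketch of this scan-line idea (my own illustration, assuming mask is a 2-D boolean array with True for white pixels; it loops over rows and uses np.diff to find the transitions):
import numpy as np

def scanline_points(mask):
    starts, ends = [], []
    for y, row in enumerate(mask):
        d = np.diff(row.astype(np.int8))       # +1 at black->white, -1 at white->black
        for x in np.flatnonzero(d == 1):
            starts.append((y, x + 1))          # first white pixel of a run
        for x in np.flatnonzero(d == -1):
            ends.append((y, x))                # last white pixel of a run
    # a "center" point per run would sit halfway between a start and its end
    return starts, ends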
I have a sparse (100k / 20000^2) 2-D boolean numpy mask corresponding to the positions of objects.
I want to update the mask to set to True all pixels within a certain radius of a True pixel in the original mask. In other words, convolve the delta-function response with a circular aperture/kernel (in this case) response at each position.
Since the master array is large (i.e. 20000 x 20000), and there are 100k positions, I need speed and memory efficiency...
For example (see numpy create 2D mask from list of indices [+ then draw from masked array]):
import numpy
from scipy import sparse

xys = [(1, 2), (3, 4), (6, 9), (7, 3)]
master_array = numpy.ones((100, 100))
coords = list(zip(*xys))   # list() so it can be indexed under Python 3
mask = sparse.coo_matrix((numpy.ones(len(coords[0])), coords),
                         shape=master_array.shape, dtype=bool)

# Now mask all pixels within a radius r of every coordinate pair in the list
mask = cookieCutter(mask, r)  # <--- I need an efficient cookieCutter function!

# Now sample the masked array
draws = numpy.random.choice(master_array[~mask.toarray()].flatten(), size=10)
Thanks!
(Follows on from numpy create 2D mask from list of indices [+ then draw from masked array])
Special case of a single position: How to apply a disc shaped mask to a numpy array?
Scikit-Image has a dilation function, which would serve your purpose.
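For example, a minimal sketch using skimage.morphology.binary_dilation with a disk-shaped footprint (the small array and radius here are just stand-ins for your 20000 x 20000 mask and r):
import numpy as np
from skimage.morphology import binary_dilation, disk

# small stand-in for the real sparse boolean mask
mask = np.zeros((100, 100), dtype=bool)
mask[10, 20] = mask[60, 70] = True

r = 5                                    # radius of the circular aperture
grown = binary_dilation(mask, disk(r))   # True within radius r of any True pixel
scipy.ndimage.binary_dilation would work the same way if you prefer to stay within SciPy.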