I am working with a .zarr file stored in an S3 bucket. I want to build an index from water-coverage data that makes it possible to follow the evolution of the aquatic coverage over time. I load the file into a dataset and, after clipping it to the study area, I plot the data variable with matplotlib, as you can see here:
It represents Lake Managua in blue. Dark blue corresponds to ocean, white to pixels with no data, and green to areas with no water. My goal is to count the number of pixels in each of these four categories (ocean, water, no data, and no water) from the plot shown above.
I already tried pixel_count = ((raster_managua >= 69) & (raster_managua <= 250)).sum() to sum the category between 69 and 250, which corresponds to water, but I get this error:
TypeError: '>=' not supported between instances of 'FacetGrid' and 'int'
Can you help me?
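For context, a minimal sketch of the counting I would expect to work, assuming the error comes from raster_managua holding the return value of .plot() (a FacetGrid, which does not support comparisons) rather than the data itself; ds and the "WATER" variable name are hypothetical:

# hypothetical names: ds is the clipped xarray Dataset, "WATER" its data variable
raster_managua = ds["WATER"]   # keep the DataArray; do not reassign it to .plot()'s result
raster_managua.plot()          # plotting returns a FacetGrid, which cannot be compared to ints

# comparisons and reductions work on the DataArray itself
water_count = int(((raster_managua >= 69) & (raster_managua <= 250)).sum())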
I am trying to create a chart that displays sales as bars and profits as colour. The bar height encodes the sales figures (taller bars mean higher sales), while the bar colour encodes the profits: higher profits mean a darker shade of a colour, lower profits a lighter shade. I have no restriction on using different colours, so green, amber and red are also fine (in that case I need to define the ranges, e.g. between 75 and 100: green, between 25 and 74.99: amber, and anything below 25: red).
How can I achieve this in Python, please?
Thank you.
Best wishes,
Manoj.
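A minimal sketch of one way to do this with matplotlib, using the green/amber/red ranges from the question (the data values are made up):

import matplotlib.pyplot as plt

# hypothetical data: bar height encodes sales, bar colour encodes profit
products = ["A", "B", "C", "D"]
sales = [120, 90, 150, 60]
profits = [80, 30, 95, 10]

def profit_colour(p):
    # thresholds from the question: >= 75 green, 25-74.99 amber, < 25 red
    if p >= 75:
        return "green"
    if p >= 25:
        return "orange"
    return "red"

fig, ax = plt.subplots()
ax.bar(products, sales, color=[profit_colour(p) for p in profits])
ax.set_ylabel("Sales")
plt.show()

For the continuous variant (darker shade = higher profit), a matplotlib colormap such as plt.cm.Greens(profit / max_profit) could be used in place of the fixed thresholds.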
I have two sets of satellite images. One is a set of classical (visible-light) satellite images, and the other is a set of infrared satellite images. I am trying to detect the vegetation inside the yellow polygon and outside the red area using the Normalized Difference Vegetation Index (NDVI).
Visible image
Infrared image
According to the image documentation, a shift in the colour spectrum was applied to the infrared images: the infrared band occupies the red band, the red band occupies the green band, and so on.
To calculate the NDVI image, I'm doing the following:
# Images are in BGR colour space, so channel 2 is the red band (NIR in the IR image).
nir = img_irc[:, :, 2].astype(float)   # cast to float to avoid uint8 overflow in the sum
red = img_visible[:, :, 2].astype(float)
ndvi = (nir - red) / (nir + red + 0.001)
Then I use an Otsu threshold to extract a binary mask, roughly as sketched below:
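A sketch of that step (assuming ndvi is the float array from above; cv2's Otsu implementation needs an 8-bit input):

import cv2
import numpy as np

# NDVI values lie in [-1, 1]; rescale to 8-bit for cv2's Otsu thresholding
ndvi_8u = cv2.normalize(ndvi, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(ndvi_8u, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)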
To better visualise the effect, I overlay a semi-transparent version of the detection mask on the satellite photo:
The result is not too bad, but there are occasional wrong detections, like here on the roofs. I suspect that my way of extracting the spectral reflectance measurements acquired in the red (visible) and near-infrared regions by selecting the red channels is not perfect. Are there better methods to do so?
The NIR channel has apparently already been preprocessed by the spectrum translation. You should get closer to the source and take the values directly from the IR spectrum, rather than re-deriving them from the rendered images.
I have trained a face segmentation model on the CelebA-Mask-HQ dataset (https://github.com/switchablenorms/CelebAMask-HQ) that is able to create a colour segmentation map of an image, with different colours for the background, eyes, face, hair, etc. The model produces a numpy array of shape (1024, 1024, 3). The output segmentation maps are a bit noisy, with some random pixels in the face labelled as eyes, for example, or cloth labels popping up where it is actually background; please see the image below:
As you can see in the image, there are green pixels in the top-left corner and in the face around the mustache (above the yellow upper-lip map).
I would like to remove this 'noise' from the segmentation map by automatically changing these wrongly labelled small segments, which are surrounded by larger correctly labelled areas, to the most dominant colour in that area (with an adaptable window size). I could not find built-in OpenCV functionality for this. Do you know an efficient way to do this (I need to 'denoise' a large set of images, so ideally in a vectorized, numpy-only way)?
It is very important that the image after denoising contains only the set of predefined label colours (19 different colours in total), so the noise needs to be recoloured in an absolute manner, without averaging (which would introduce new colours to the colour palette of the image).
Thank you!
I can point you away from OpenCV and towards scikit-image, which I am more familiar with. I would tackle this using an approach borrowed from this tutorial.
Specifically, I would do something like this:
import numpy as np
from skimage.measure import label, regionprops

label_image = label(image)
for region in regionprops(label_image):
    # only recolor areas that are under a certain threshold size
    if region.area <= 100:
        # get creative with which label to recolor with...
        minr, minc, maxr, maxc = region.bbox
        crop = label_image[minr:maxr, minc:maxc]
        # bincount needs a 1-D array, hence ravel(); counts[i] is how often
        # label i occurs inside the bounding box
        counts = np.bincount(crop.ravel())
        counts[region.label] = 0            # ignore the small region itself
        dominant = int(np.argmax(counts))   # most frequent surrounding label
        label_image[minr:maxr, minc:maxc][crop == region.label] = dominant
I haven't tried this code out...but I think something like this may work. Let me know if it is helpful or not.
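One extra step that will probably be needed, since the model output is an RGB colour map rather than an integer label map: convert colours to class ids before calling label(). A rough sketch (palette is a hypothetical (19, 3) array holding the predefined label colours):

import numpy as np

def colours_to_ids(seg_map, palette):
    # seg_map: (H, W, 3) uint8 colour map, palette: (19, 3) uint8 label colours
    # matches has shape (H, W, 19): True where a pixel equals a palette colour
    matches = (seg_map[:, :, None, :] == palette[None, None, :, :]).all(axis=-1)
    return matches.argmax(axis=-1)  # integer class id per pixel

The result can then be fed to label() / regionprops() as above and mapped back to colours with palette[ids] at the end, so no colours outside the 19-colour palette are ever introduced.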
After preprocessing a fabric image, I got an output with white dots, as follows:
Dot Map
The aim is to output two numbers, i.e. the number of white dots 1) horizontally and 2) vertically, using OpenCV and Python.
Challenges:
1. Some points are missing in some rows/columns, but the count should be taken from a row/column with a complete set of points.
2. The sets of dots are not exactly horizontal/vertical.
PS: 1) I have tried connected components (cv2.connectedComponents) to count the total number of dots, but it failed because some points are missing and noise adds extra points as well.
2) I tried counting manually by looping through rows and columns (de-skewing when the dot map is tilted), but it gave a bad result.
How can I count the dots, or keep track of a tilted dot line?
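One possible direction, sketched below under several assumptions (img is the binary dot map, small blobs are noise, and the grid pitch of roughly 20 px is a guess):

import cv2
import numpy as np

img = cv2.imread("dot_map.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename

# centroids of the white blobs; drop the background row and tiny noise blobs
n, _, stats, centroids = cv2.connectedComponentsWithStats(img, connectivity=8)
pts = centroids[1:][stats[1:, cv2.CC_STAT_AREA] > 2].astype(np.float32)

# estimate the grid tilt from the minimum-area rectangle around all centroids
angle = cv2.minAreaRect(pts)[2]
rot = cv2.getRotationMatrix2D((0.0, 0.0), angle, 1.0)
aligned = pts @ rot[:, :2].T  # sign of the angle may need flipping per cv2 version

# bin the de-skewed coordinates into rows/columns by rounding to the grid pitch
pitch = 20.0  # assumed dot spacing in pixels
n_cols = len(np.unique(np.round(aligned[:, 0] / pitch)))
n_rows = len(np.unique(np.round(aligned[:, 1] / pitch)))
print(n_rows, n_cols)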
The goal is to horizontally split an image (a double newspaper page) in Python, based on a vertical centerline that is darker than the surrounding areas.
Example image:
I had some luck using OpenCV (cv2) for the initial crop and rotation of the double page from a black background, using cv2.Canny and then sorting the contours by cv2.contourArea.
But now I'm just interested in finding the center line and then splitting the image into two separate images. Running cv2.Canny again, I can see that it is able to identify that centerline, but I'm not sure how to pick out that long, vertical line and use it to split the image:
End goal would be two images like the following:
Any suggestions would be most welcome.
First, run a horizontal gradient so you only accentuate vertical edges. You can calculate a horizontal gradient with these coefficients:
-1 0 1
-2 0 2
-1 0 1
Then compute the sum of each vertical column; you can use np.sum(array, axis=0), and you will get this:
I have re-shaped it for ease of viewing - it is actually only 1 pixel tall. Hopefully you can see the bright white line in the middle, which you can find with NumPy's argmax(). It will also work better once you use only a horizontal gradient, because at the moment I am using the purple-and-yellow image with both vertical and horizontal edges enhanced.
Note that the inspiration for this approach is that you said you want to identify that long, vertical centerline, and the rationale is that a long line of white pixels adds up to a large column sum. Note also that I have assumed your image is de-skewed (since you said the line is vertical); this method may not work as well on skewed images, where the "vertical" line will be spread across several columns. A sketch of the whole pipeline follows.
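A minimal sketch of the approach (assuming the page is already de-skewed; the filenames are hypothetical):

import cv2
import numpy as np

img = cv2.imread("double_page.png", cv2.IMREAD_GRAYSCALE)

# horizontal Sobel gradient with the 3x3 kernel shown above: only vertical edges respond
grad_x = np.abs(cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3))

# sum each vertical column; the long centerline yields the largest column sum
column_sums = grad_x.sum(axis=0)
split_x = int(np.argmax(column_sums))

left, right = img[:, :split_x], img[:, split_x:]
cv2.imwrite("left_page.png", left)
cv2.imwrite("right_page.png", right)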