Detect vegetation using OpenCV on satellite images - Python

I am trying to estimate the area of vegetation in square meters on satellite photos, from the colors. I don't have a training dataset, so I cannot do machine learning. I know the results will not be very good, but I am trying anyway.
To do this, I apply a color filter using cv2.inRange:
import numpy as np
import cv2

img = cv2.imread('staticmap.png')

# Keep only pixels whose color falls inside the green range
# (note: these bounds look like HSV values; if so, convert first with
# cv2.cvtColor(img, cv2.COLOR_BGR2HSV) before calling inRange)
lowerbound = np.array([40, 40, 40])
upperbound = np.array([70, 255, 255])
mask = cv2.inRange(img, lowerbound, upperbound)

# Paint the matching pixels white on a black background
imask = mask > 0
result = np.zeros_like(img, np.uint8)
result[imask] = 255

cv2.imshow('satellite image', img)
cv2.imshow('vegetation detection', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
This gives the following results
So it seems that the detection is not too bad.
Now, from the density of white pixels, I would like to detect the areas where there is vegetation and the areas where there is not. I imagine an output like this:
Are there any OpenCV functions that can do this?

You could consider using a Gaussian blur followed by Otsu thresholding like this:
import cv2

# Load the mask image as greyscale
im = cv2.imread('veg.jpg', cv2.IMREAD_GRAYSCALE)

# Blur so that isolated white pixels merge into smooth density blobs
blur = cv2.GaussianBlur(im, (19, 19), 0)

# Otsu threshold picks the split between dense and sparse regions automatically
_, thr = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
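From there, counting the white pixels in thr gets you back to the original goal of square meters. A minimal sketch, assuming you know the ground resolution of your tiles; the metres-per-pixel value below is a placeholder, not something from the question:

import cv2

# Hypothetical ground resolution; look this up for your zoom level / sensor
METRES_PER_PIXEL = 0.5

# Rebuild the mask from the snippet above
im = cv2.imread('veg.jpg', cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(im, (19, 19), 0)
_, thr = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Area = number of white pixels times the area of one pixel
veg_pixels = cv2.countNonZero(thr)
area_m2 = veg_pixels * METRES_PER_PIXEL ** 2
print(f"Estimated vegetation area: {area_m2:.0f} m^2")

# Outline the dense regions for visual inspection
contours, _ = cv2.findContours(thr, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
vis = cv2.cvtColor(im, cv2.COLOR_GRAY2BGR)
cv2.drawContours(vis, contours, -1, (0, 0, 255), 2)
cv2.imwrite('veg_outlined.png', vis)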

Related

How can I use thresholding to improve image quality after rotating an image with skimage.transform?

I have the following image:
Initial Image
I am using the following code to rotate the image:
import cv2
from skimage.transform import rotate

image = cv2.imread('122.png')
rotated = rotate(image, 34, cval=1, resize=True)
Once I execute this code, I receive the following image:
Rotated Image
To eliminate the blur, I set a threshold: anything that is not white is turned black (so the gray spots turn black). The code for that is as follows:
import matplotlib.pyplot as plt

ret, thresh_hold = cv2.threshold(rotated, 0, 100, cv2.THRESH_BINARY)
plt.imshow(thresh_hold)
Instead of getting a nice clear picture, I receive the following:
Choppy Image
Does anyone know what I can do to improve the image quality, or adjust the threshold to create a clearer image?
I attempted to adjust the threshold to different values, but this changed the image to all black or all white.
One way to approach this is simply to antialias the image in Python/OpenCV: convert to grayscale, blur, then apply a contrast stretch.
Adjust the blur sigma to change the amount of antialiasing.
Input:
import cv2
import numpy as np
import skimage.exposure

# load image
img = cv2.imread('122.png')

# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# blur the grayscale image
blur = cv2.GaussianBlur(gray, (0, 0), sigmaX=2, sigmaY=2, borderType=cv2.BORDER_DEFAULT)

# stretch so that 255 -> 255 and 127.5 -> 0
result = skimage.exposure.rescale_intensity(blur, in_range=(127.5, 255), out_range=(0, 255)).astype(np.uint8)

# save output
cv2.imwrite('122_antialiased.png', result)

# display the result
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:

Unable to find correct circles with cv2.HoughCircles

When I run the code below, I am not able to find consistent circles in the image. The image I am using as input is:
import numpy as np
import matplotlib.pyplot as plt
import cv2

img = cv2.imread("pipe.jpg")

# convert the image to RGB
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# copy the RGB image
cimg = img.copy()

# convert the RGB image to grayscale
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detect circles using the Hough transform
circles = cv2.HoughCircles(image=img, method=cv2.HOUGH_GRADIENT, dp=3,
                           minDist=60, param1=100, param2=39, maxRadius=200)

# draw each detected circle and its centre
for co, i in enumerate(circles[0, :], start=1):
    i = [round(num) for num in i]
    cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
print("Number of circles detected:", co)

plt.imshow(cimg)
plt.show()
The result I get is:
As a pre-processing step, you usually smooth the image prior to detection; skipping the smoothing tends to produce a lot of false detections. You sometimes don't see pre-smoothing in tutorials because the images involved have nice clean edges with very little noise. Try a median blur or a Gaussian blur before you perform the detection.
Therefore, try something like:
import cv2

img = cv2.imread("pipe.jpg")

# convert the image to RGB
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# copy the RGB image
cimg = img.copy()

# convert the RGB image to grayscale
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

### NEW - smooth the image first
blur = cv2.GaussianBlur(img, (5, 5), 0)
# or try
# blur = cv2.medianBlur(img, 5)

# detect circles using the Hough transform
circles = cv2.HoughCircles(image=blur, method=cv2.HOUGH_GRADIENT, dp=3,
                           minDist=60, param1=100, param2=39, maxRadius=200)
Other than that, getting the detection to find all of the circles in the image comes down to playing with the hyperparameters of the circular Hough transform, through trial and error.
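To make that trial and error a little more systematic, you could sweep one parameter at a time and watch how the detection count changes. A sketch, sweeping the accumulator threshold param2 over an illustrative (untuned) range:

import cv2

img = cv2.imread("pipe.jpg", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(img, (5, 5), 0)

# Lower param2 accepts weaker circle evidence (more detections, but also
# more false positives); the range here is illustrative, not tuned
for param2 in range(30, 60, 5):
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=3, minDist=60,
                               param1=100, param2=param2, maxRadius=200)
    n = 0 if circles is None else circles.shape[1]
    print(f"param2={param2}: {n} circles")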

Processing image for reducing noise with OpenCV in Python

I want to apply some preprocessing to this image so that the text is more readable, so that later I can read the text from the image. I'm new to this, so I do not know what I should do: increase the contrast, reduce the noise, or something else. Basically, I want to remove the gray areas on the image and keep only black letters (as clear as they can be) and a white background.
import cv2
img = cv2.imread('slika1.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow('gray', img)
cv2.waitKey(0)
thresh = 200
img = cv2.threshold(img, thresh, 255, cv2.THRESH_BINARY)[1]
cv2.imshow('filter',img)
cv2.waitKey(0)
I read the image and applied a threshold, but I needed to try 20 different threshold values before finding one that gave results.
Is there any better way to solve problems like this?
The problem is that I can get different pictures with different sizes of gray areas, so sometimes I do not need to apply any threshold, and sometimes I do. Because of that, I think my fixed-threshold solution is not that good.
For this image, my code works well:
But for this one it gives terrible results:
Try division normalization in Python/OpenCV: divide the input by its blurred copy, then sharpen. You may want to crop the receipt better or mask out the background first (a rough sketch of the masking idea follows the result images below).
Input:
import cv2
import numpy as np
import skimage.filters as filters

# read the image
img = cv2.imread('receipt2.jpg')

# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# blur
smooth = cv2.GaussianBlur(gray, (95, 95), 0)

# divide gray by the blurred image
division = cv2.divide(gray, smooth, scale=255)

# sharpen using unsharp masking
# (newer scikit-image versions replace multichannel with channel_axis)
sharp = filters.unsharp_mask(division, radius=1.5, amount=1.5, multichannel=False, preserve_range=False)
sharp = (255 * sharp).clip(0, 255).astype(np.uint8)

# save results
cv2.imwrite('receipt2_division.png', division)
cv2.imwrite('receipt2_division_sharp.png', sharp)

# show results
cv2.imshow('smooth', smooth)
cv2.imshow('division', division)
cv2.imshow('sharp', sharp)
cv2.waitKey(0)
cv2.destroyAllWindows()
Division result:
Sharpened result:
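On the "mask out the background first" point, here is a rough sketch of one way to do it: keep only the largest bright contour, which is assumed to be the receipt. The file name and the Otsu binarization are assumptions:

import cv2
import numpy as np

img = cv2.imread('receipt2.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Binarize, then keep only the largest contour (assumed to be the receipt)
_, th = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
biggest = max(contours, key=cv2.contourArea)

# Fill the receipt contour in a mask and blank everything else
mask = np.zeros_like(gray)
cv2.drawContours(mask, [biggest], -1, 255, cv2.FILLED)
masked = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite('receipt2_masked.png', masked)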

Remove background

I am doing OCR to extract information from an ID card, but the accuracy is quite low.
My assumption is that removing the background will make OCR more accurate.
I use an ID scanner machine (link) to obtain the grey image below. It seems that the machine uses IR rather than image processing.
Does anyone know how to get the same result using OpenCV or other tools (Photoshop, GIMP, etc.)?
Thanks in advance.
Here are two more methods: adaptive thresholding and division normalization.
Input:
import cv2

# read image
img = cv2.imread("green_card.jpg")

# convert img to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# do adaptive threshold on gray image
thresh1 = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 51, 25)

# write results to disk
cv2.imwrite("green_card_thresh1.jpg", thresh1)

# apply morphology (dilation gives a local background estimate)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (11, 11))
morph = cv2.morphologyEx(gray, cv2.MORPH_DILATE, kernel)

# divide gray by morphology image
division = cv2.divide(gray, morph, scale=255)

# threshold
thresh2 = cv2.threshold(division, 0, 255, cv2.THRESH_OTSU)[1]

# write results to disk
cv2.imwrite("green_card_thresh2.jpg", thresh2)

# display results
cv2.imshow("thresh1", thresh1)
cv2.imshow("thresh2", thresh2)
cv2.waitKey(0)
cv2.destroyAllWindows()
Adaptive Thresholding Result:
Division Normalization Result:
EDIT: since there are different lighting conditions, contrast adjustment has been added here.
The simple approach that comes to mind: since the undesired background colours are green and red, and the desired font colour is black, simply suppress the red and green values as follows:
import numpy as np
from skimage.io import imread, imsave
from skimage.color import rgb2gray
from skimage import exposure

def adjustContrast(img):
    # stretch each channel between its 2nd and 98th percentiles
    p2, p98 = np.percentile(img, (2, 98))
    return exposure.rescale_intensity(img, in_range=(p2, p98))

# Read the image
img = imread('ID_OCR.jpg')

# Contrast adjustment for each channel
img[:, :, 0] = adjustContrast(img[:, :, 0])  # R
img[:, :, 1] = adjustContrast(img[:, :, 1])  # G
img[:, :, 2] = adjustContrast(img[:, :, 2])  # B

# Suppress unwanted colours: push reddish/greenish pixels to white
img[img[..., 0] > 100] = 255  # R
img[img[..., 1] > 100] = 255  # G

# Convert the image to grey level
img = rgb2gray(img)

# Rescale into 0-255 (rgb2gray returns floats in [0, 1])
img = (255 * img).astype(np.uint8)

# Save the results
imsave('Result.png', img)
The image will look like:
The results are not optimal, partly because your image resolution isn't high.
In the end there are many possible solutions and improvements; you can also use morphology to make the result look nicer, as sketched below. This is just a simple proposal to solve the problem.
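For instance, a hedged sketch of that morphology clean-up, run on the saved result; the kernel size and the choice of operations are assumptions to tune per image:

import cv2

img = cv2.imread('Result.png', cv2.IMREAD_GRAYSCALE)

# Closing suppresses small dark specks in the background; opening then
# suppresses small bright specks. Tune the kernel so the letters survive.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
cleaned = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)
cv2.imwrite('Result_morph.png', cleaned)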

How to make the color and texture of a face uniform

I'm trying to cartoonify a face using OpenCV. Here's the original image.
Currently I'm doing the following:
1. Downscaling the image, applying a bilateral filter, and upscaling back to the original size
2. Converting the original image to grayscale, followed by a median blur to reduce noise
3. Applying an adaptive threshold to create an edge mask
4. Combining the image obtained in step 1 with the edge mask using a bitwise AND
Here's the output.
Then I applied non-photorealistic rendering using OpenCV. Here's the final output.
I want to generate a face with uniform color (and remove light reflections as well) without affecting the eyes and mouth. How can I achieve that, either by tweaking my current code or with another possible approach in OpenCV (Python)?
Based on https://www.pyimagesearch.com/2014/07/07/color-quantization-opencv-using-k-means-clustering/, here is code that does what you are looking for:
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# number of colour clusters
n = 32

# read the image, downscale it, and convert to LAB
img = cv2.imread('./obama.jpg', cv2.IMREAD_COLOR)
img = cv2.resize(img, (0, 0), fx=.2, fy=.2)
img = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
(h, w) = img.shape[:2]

# flatten to a list of pixels and quantize with k-means
img = np.reshape(img, (img.shape[0] * img.shape[1], 3))
clt = MiniBatchKMeans(n_clusters=n)
labels = clt.fit_predict(img)
quant = clt.cluster_centers_.astype("uint8")[labels]

# restore the image shapes and convert back to BGR
quant = np.reshape(quant, (h, w, 3))
img = np.reshape(img, (h, w, 3))
quant = cv2.cvtColor(quant, cv2.COLOR_LAB2BGR)
img = cv2.cvtColor(img, cv2.COLOR_LAB2BGR)

# show original and quantized side by side; press Esc to quit
double = np.hstack([img, quant])
while True:
    cv2.imshow('img', double)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
You can use this tutorial to apply the color quantization only to boxes containing faces; a hedged sketch of that idea follows.
https://realpython.com/face-recognition-with-python/
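A sketch of that combination, using OpenCV's bundled Haar cascade to find the face boxes and re-using the k-means quantization from above on just those regions; the file name and cluster count are assumptions:

import cv2
from sklearn.cluster import MiniBatchKMeans

def quantize(roi, n=32):
    # same k-means colour quantization as above, applied to one region
    lab = cv2.cvtColor(roi, cv2.COLOR_BGR2LAB)
    h, w = lab.shape[:2]
    clt = MiniBatchKMeans(n_clusters=n)
    labels = clt.fit_predict(lab.reshape((-1, 3)))
    quant = clt.cluster_centers_.astype("uint8")[labels].reshape((h, w, 3))
    return cv2.cvtColor(quant, cv2.COLOR_LAB2BGR)

img = cv2.imread('./obama.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Haar cascade shipped with opencv-python
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# quantize only the detected face boxes, leaving the rest untouched
for (x, y, w, h) in faces:
    img[y:y+h, x:x+w] = quantize(img[y:y+h, x:x+w])
cv2.imwrite('obama_faces_quantized.png', img)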
