Saving images with OpenCV and accumulateWeighted(...) - Python

I'm new to OpenCV, so apologies if this is a trivial question...
I'm writing an application that tracks the path of an object in real time. So far, I have successfully isolated the object and created a "trail" of its path using cv2.accumulateWeighted(). Everything looks great in the preview window, but when I save the merged frame to a file, things aren't so good.
The result varies, but typically the saved frame has much less detail than the displayed frame. I've converted the input to grayscale, and often the written file has very "dim" features.
I believe only the final frame is written (multiplied by the alpha blend), rather than the accumulated image. Any ideas would be greatly appreciated.
Sample program to demonstrate the issue:
import cv2

#---- read the next frame from the capture device
def read_frame(cap):
    ret, frame = cap.read()
    if ret is False or frame is None:
        return None
    gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return gray_frame

#---- setup components
cap = cv2.VideoCapture(index=0)
background_subtractor = cv2.createBackgroundSubtractorMOG2(
    history=30, varThreshold=50, detectShadows=False
)

#---- prime the accumulator
frame = read_frame(cap)
merged_frame = frame.astype(float)

#---- capture some frames
while True:
    frame = read_frame(cap)
    mask = background_subtractor.apply(frame, learningRate=0.01)
    foreground = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.accumulateWeighted(foreground, merged_frame, 0.1)

    cv2.imshow('Acccumulator', merged_frame)

    key = cv2.waitKey(1)
    # press 'q' to quit and save the current frame
    if key == ord('q') or key == ord('Q'):
        cv2.imwrite('merged.png', merged_frame)
        break
The following are images from moving my hand through the scene... You can see the path of my hand in the displayed image, along with some other background elements. In the saved image, only a very dim version of my hand in its final position appears.
This is the displayed image (using screen capture):
This is the image written to disk (using imwrite()):

I guess you want to save merged_frame as it is shown by cv2.imshow.
You may limit the upper value of merged_frame to 1, scale by 255, and convert to uint8 type, before saving:
merged_frame = np.round(np.minimum(merged_frame, 1)*255).astype(np.uint8)
The type of merged_frame is float64.
When using cv2.imshow for an image of float type, all values above 1.0 are shown as white (and values below 0 as black).
A gray level in the range [0, 1] is equivalent to the range [0, 255] for uint8 type (0.5 corresponds to about 128).
When using cv2.imwrite, the image is converted to uint8, but without clamping and scaling (a simple cast to uint8). The result is usually very dark.
If you want to save the image as it is shown, you need to clamp the values to 1, then scale by 255.
You didn't post input samples, so I created synthetic input:
import numpy as np
import cv2

background_subtractor = cv2.createBackgroundSubtractorMOG2(
    history=30, varThreshold=50, detectShadows=False
)

width, height = 640, 480

frame = np.full((height, width), 60, np.uint8)
merged_frame = frame.astype(float)

for n in range(100):
    img = np.full((height, width, 3), 60, np.uint8)
    cv2.putText(img, str(n), (width//2-100*len(str(n)), height//2+100), cv2.FONT_HERSHEY_DUPLEX, 10, (30, 255, 30), 20)  # Green number

    #frame = read_frame(cap)
    frame = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    mask = background_subtractor.apply(frame, learningRate=0.01)
    foreground = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.accumulateWeighted(foreground, merged_frame, 0.1)

    cv2.imshow('Acccumulator', merged_frame)
    cv2.waitKey(10)

#merged_frame = cv2.normalize(merged_frame, merged_frame, 0, 255.0, cv2.NORM_MINMAX).astype(np.uint8)  # Alternative approach - normalize between 0 and 255
merged_frame = np.round(np.minimum(merged_frame, 1)*255).astype(np.uint8)

cv2.imshow('merged_frame as uint8', merged_frame)
cv2.imwrite('merged.png', merged_frame)
cv2.waitKey(0)
cv2.destroyAllWindows()
PNG image using imwrite, without clamping and scaling:
PNG image using imwrite, with clamping and scaling:
A better way to show the image is to normalize the values to the range [0, 1] before display.
Example:
In the loop, after cv2.accumulateWeighted(foreground, merged_frame, 0.1):
norm_acccumulator = merged_frame.copy()
cv2.normalize(norm_acccumulator, norm_acccumulator, 0, 1.0, cv2.NORM_MINMAX)
cv2.imshow('Acccumulator', norm_acccumulator)

Related

overlay image mask equivalent in CV2 [duplicate]

Please look at this GitHub page. I want to generate heat maps in this way using Python PIL, OpenCV, or matplotlib. Can somebody help me figure it out?
I could create a heat map for my network at the same size as the input, but I am not able to superimpose them. The heatmap shape is (800, 800) and the base image shape is (800, 800, 3).
Updated Answer -- 29th April, 2022.
After the repeated comments I have decided to update this post with a better visualization.
Consider the following image:
img = cv2.imread('image_path')
I obtained a binary image after performing a binary threshold on the a-channel of the LAB-converted image:
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
a_component = lab[:,:,1]
th = cv2.threshold(a_component,140,255,cv2.THRESH_BINARY)[1]
Applying Gaussian blur:
blur = cv2.GaussianBlur(th,(13,13), 11)
The resulting heatmap:
heatmap_img = cv2.applyColorMap(blur, cv2.COLORMAP_JET)
Finally, superimposing the heatmap over the original image:
super_imposed_img = cv2.addWeighted(heatmap_img, 0.5, img, 0.5, 0)
Note: You can vary the weight parameters in the function cv2.addWeighted and observe the differences.
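Putting those steps together, here is a minimal end-to-end sketch of the pipeline above (the input path is a placeholder, and the 0.5/0.5 weights are just one choice):

import cv2

# hypothetical input path - replace with your own image
img = cv2.imread('image_path.jpg')

# binary threshold on the a-channel of the LAB-converted image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
a_component = lab[:, :, 1]
th = cv2.threshold(a_component, 140, 255, cv2.THRESH_BINARY)[1]

# soften the mask, colorize it, and blend with the original
blur = cv2.GaussianBlur(th, (13, 13), 11)
heatmap_img = cv2.applyColorMap(blur, cv2.COLORMAP_JET)
super_imposed_img = cv2.addWeighted(heatmap_img, 0.5, img, 0.5, 0)

cv2.imshow('superimposed', super_imposed_img)
cv2.waitKey(0)
cv2.destroyAllWindows()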
My code starts from a heatmap matrix (224, 224) called cam, which is applied to the original image called frame via OpenCV, and it seems to work pretty well:
import numpy as np
from cv2 import cv2
from skimage import exposure
...
capture = cv2.VideoCapture(...)

while True:
    ret, frame = capture.read()
    if ret:
        # resize original frame
        frame = cv2.resize(frame, (224, 224))

        # get color map
        cam = getMap(frame)
        map_img = exposure.rescale_intensity(cam, out_range=(0, 255))
        map_img = np.uint8(map_img)
        heatmap_img = cv2.applyColorMap(map_img, cv2.COLORMAP_JET)

        # merge map and frame
        fin = cv2.addWeighted(heatmap_img, 0.5, frame, 0.5, 0)

        # show result
        cv2.imshow('frame', fin)
The getMap() function computes the heatmap for the given frame.
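Since getMap() is specific to the author's network, here is a hypothetical stand-in you could use just to exercise the display pipeline (the Gaussian blob is purely synthetic):

import numpy as np

def getMap(frame):
    # hypothetical stand-in: a synthetic "activation" blob centred
    # in the frame, just to test the colormap/blend code above
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    cam = np.exp(-((xs - w/2)**2 + (ys - h/2)**2) / (2 * (w/6)**2))
    return cam.astype(np.float32)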
I found some interesting free videos about this topic:
https://www.youtube.com/watch?v=vTY58-51XZA&t=14s
https://www.youtube.com/watch?v=4v9usdvGU50&t=208s
I had some problems with grayscale images at this line
super_imposed_img = cv2.addWeighted(heatmap_img, 0.5, img, 0.5, 0)
but this one worked for me
plt.imshow(binary_classification_result * 0.99 + original_gray_image * 0.01)
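The cv2.addWeighted failure on grayscale inputs is usually a channel mismatch (single-channel vs. 3-channel). A sketch of one way to fix it in cv2, assuming original_gray_image is a single-channel uint8 image and heatmap_img is the 3-channel colormap from above:

import cv2

# replicate the single channel so both inputs are 3-channel BGR
gray_3ch = cv2.cvtColor(original_gray_image, cv2.COLOR_GRAY2BGR)
super_imposed_img = cv2.addWeighted(heatmap_img, 0.5, gray_3ch, 0.5, 0)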

How can I change the skin color of a human body image?

I have an image of a human body showing skin. How can I change the color of the skin, assuming I have another skin color and a mask of the exposed skin in the body image?
Here is one way to do that in Python/OpenCV. I am not sure how robust it is.
Basically, we get the average color of the face. Then we get the difference (in each channel) between that and the desired color. Then we add the difference to the input image. Finally, we use the mask to combine the original and new images.
Input:
Facemask:
import cv2
import numpy as np
import skimage.exposure
# specify desired bgr color for new face and make into array
desired_color = (180, 128, 200)
desired_color = np.asarray(desired_color, dtype=np.float64)
# create swatch
swatch = np.full((200,200,3), desired_color, dtype=np.uint8)
# read image
img = cv2.imread("zelda1.jpg")
# read face mask as grayscale and threshold to binary
facemask = cv2.imread("zelda1_facemask.png", cv2.IMREAD_GRAYSCALE)
facemask = cv2.threshold(facemask, 128, 255, cv2.THRESH_BINARY)[1]
# get average bgr color of face
ave_color = cv2.mean(img, mask=facemask)[:3]
print(ave_color)
# compute difference colors and make into an image the same size as input
diff_color = desired_color - ave_color
diff_color = np.full_like(img, diff_color, dtype=np.uint8)
# shift input image color
# cv2.add clips automatically
new_img = cv2.add(img, diff_color)
# antialias mask, convert to float in range 0 to 1 and make 3-channels
facemask = cv2.GaussianBlur(facemask, (0,0), sigmaX=3, sigmaY=3, borderType = cv2.BORDER_DEFAULT)
facemask = skimage.exposure.rescale_intensity(facemask, in_range=(100,150), out_range=(0,1)).astype(np.float32)
facemask = cv2.merge([facemask,facemask,facemask])
# combine img and new_img using mask
result = (img * (1 - facemask) + new_img * facemask)
result = result.clip(0,255).astype(np.uint8)
# save result
cv2.imwrite('zelda1_swatch.png', swatch)
cv2.imwrite('zelda1_recolor.png', result)
cv2.imshow('swatch', swatch)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Desired color swatch:
Result:
import cv2
import numpy as np
import skimage.exposure
#usage
#put this script and the image face.jpg in the same directory /dir
#run these 2 commands inside bash
#cd /dir
#python change_skin_v1.py
#script_name= change_skin_v1.py
#you can change the 3 parameters: alpha, skincolor_low, skincolor_high
#path file
path_face="./face.jpg"
result_partial="./result_partial.png"
result_final="./result_final.png"
#blending parameter
alpha = 0.7
# Define lower and upper limits of what we call "skin color"
skincolor_low=np.array([0,10,60])
skincolor_high=np.array([180,150,255])
#specify desired bgr color (brown) for the new face.
#this value is approximated
desired_color_brg = (2, 70, 140)
# read face
img_main_face = cv2.imread(path_face)
# face.jpg has by default the BGR format, convert BGR to HSV
hsv=cv2.cvtColor(img_main_face,cv2.COLOR_BGR2HSV)
#create the HSV mask
mask=cv2.inRange(hsv,skincolor_low,skincolor_high)
# Change image to brown where we found pink
img_main_face[mask>0]=desired_color_brg
cv2.imwrite(result_partial,img_main_face)
#blending block start
#alpha range for blending is 0-1
# load images for blending
src1 = cv2.imread(result_partial)
src2 = cv2.imread(path_face)
if src1 is None:
    print("Error loading src1")
    exit(-1)
elif src2 is None:
    print("Error loading src2")
    exit(-1)
# actually blend_images
result_final = cv2.addWeighted(src1, alpha, src2, 1-alpha, 0.0)
cv2.imwrite('./result_final.png', result_final)
#blending block end

Cut and save an object recognized by color

So I would like to make a program which can detect an object by color, position, and sharpness.
So far I have got to the point where I can detect the object by color and draw its contour and bounding box.
My problem is that I really don't know how to cut the object out of the picture and save it as a picture file when the program recognises its contour or bounding box.
Here's a picture of what my camera is seeing:
input
output
I would like to cut out what is inside of the green colored bounding box as many times per second as the video's frame rate, for as long as the object is visible in the video. So if the video is 30 fps and the object is visible for 10 seconds, it needs to take 300 pictures.
Here is the code (I know it looks bad, I'm just trying to figure out what to use to make it work):
import cv2 as cv
import numpy as np
import os
import uuid

cap = cv.VideoCapture(1)
font = cv.FONT_HERSHEY_COMPLEX
path = os.getcwd()
print(path)

def createFolder(directory):
    try:
        if not os.path.exists(directory):
            os.makedirs(directory)
    except OSError:
        print('Error: Creating directory. ' + directory)

createFolder("./data")
# folderName = '%s' % (str(uuid.uuid4()))

while cap.isOpened():
    _, frame = cap.read()
    hsv = cv.cvtColor(frame, cv.COLOR_BGR2HSV)

    # blue is the chosen one for now
    lower_color = np.array([82, 33, 39])
    upper_color = np.array([135, 206, 194])
    mask = cv.inRange(hsv, lower_color, upper_color)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv.erode(mask, kernel)

    contours, hierarchy = cv.findContours(mask, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)

    # find contour
    for contour in contours:
        area = cv.contourArea(contour)
        x, y, h, w = cv.boundingRect(contour)
        if area > 100:
            # bounding box
            # cv.rectangle(frame, (x - 40, y - 30), (x + h * 3, y + w * 3), (0, 255, 0), 1)

            # cutting and saving
            ext_left = tuple(contour[contour[:, :, 0].argmin()][0] - 20)
            ext_right = tuple(contour[contour[:, :, 0].argmax()][0] + 20)
            ext_top = tuple(contour[contour[:, :, 1].argmin()][0] - 20)
            ext_bot = tuple(contour[contour[:, :, 1].argmax()][0] + 20)

            outfile = '%s.jpg' % (str(uuid.uuid4()))
            cropped_image = frame[ext_top[1]:ext_bot[1], ext_left[0]:ext_right[0]]

            # write images to a specified folder
            cv.imwrite(os.path.join(path, "/data/", outfile), cropped_image)

    # outputs
    cv.imshow("Frame", frame)
    cv.imshow("Mask", mask)

    key = cv.waitKey(1)
    if key == 27:
        break

cap.release()
cv.destroyAllWindows()
Focusing on the question and ignoring the code style, I can say you are close to achieving your goal :)
For cropping the object, you can use the Mat copyTo method. Here is the official OpenCV documentation and here is an example from the OpenCV forums.
Now, for creating the mask from the contours, you can use the same drawContours method you already use, but provide a negative value for the thickness parameter (for example, thickness=cv2.FILLED). You can see a code snippet in this stackoverflow post and check details in the official documentation.
For saving the image to disk you can use imwrite.
So, in a nutshell: draw filled contours to a mask and use that mask to copy only the object pixels from the video frame into another Mat that you can save to disk.
Instead of posting code, I will share this very similar question with an accepted answer that may have the code snippet you are looking for.
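That said, for reference, here is a minimal sketch of the mask-and-copy idea (the HSV limits and minimum area are taken from the question, cv2.bitwise_and stands in for copyTo, and the output filename is illustrative):

import cv2 as cv
import numpy as np

cap = cv.VideoCapture(1)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    hsv = cv.cvtColor(frame, cv.COLOR_BGR2HSV)
    mask = cv.inRange(hsv, np.array([82, 33, 39]), np.array([135, 206, 194]))
    contours, _ = cv.findContours(mask, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)

    for contour in contours:
        if cv.contourArea(contour) > 100:
            # draw the contour filled into a clean mask
            obj_mask = np.zeros(frame.shape[:2], np.uint8)
            cv.drawContours(obj_mask, [contour], -1, 255, thickness=cv.FILLED)

            # keep only the object pixels (black elsewhere)
            obj_only = cv.bitwise_and(frame, frame, mask=obj_mask)

            # crop to the bounding box and save
            x, y, w, h = cv.boundingRect(contour)
            cv.imwrite('object.png', obj_only[y:y+h, x:x+w])

    if cv.waitKey(1) == 27:
        break

cap.release()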

Video with certain pixel height, width in opencv

I'm trying to use OpenCV to take a photo with a 1080p camera; however, I only want the photo to be 224x224 pixels. How can I use OpenCV to do this?
I currently have the following code:
import cv2
cap = cv2.VideoCapture(0)
cap.set(3, 224)
cap.set(4, 224)
ret, frame = cap.read()
However, when I look at the shape of frame, it is not (224, 224, 3). Could someone please help me figure out how to make it output the dimensions in pixels that I want?
When you say you want a 224x224 image, it depends what you mean. If we start with this image which is 1920x1080, you might want:
(A) - the top-left corner, highlighted in magenta
(B) - the central 224x224 pixels, highlighted in blue
(C) - the largest square, resized down to 224x224, highlighted in red
(D) - the entire image distorted to fit in 224x224
So, assume in the following that you have read your frame from the camera into a variable called im using something like:
...
...
ret, im = cap.read()
If you want (A), use:
# If you want the top-left corner
good = im[:224, :224]
If you want (B), use:
# If you want the centre
h, w = im.shape[:2]
x = h//2 - 112
y = w//2 - 112
good = im[x:x+224, y:y+224]
If you want (C), use:
# If you want the largest square, scaled down to 224x224
h, w = im.shape[:2]
y = (w-h)//2
good = im[:, y:y+h]
good = cv2.resize(good, (224,224))
If you want (D), use:
# If you want the entire frame distorted to fit 224x224
good = cv2.resize(im,(224,224))
Keywords: Image processing, video, 1920x1080, 1080p, crop, distort, largest square, central portion, top-left, Python, OpenCV, frame, extract.
import cv2

video_capture = cv2.VideoCapture(0)

while video_capture.isOpened():
    video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 224)
    video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 224)
    frame = video_capture.read()[1]
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) == ord("q"):
        break
To get the current size, use:
cap.get(cv2.CAP_PROP_FRAME_WIDTH)
cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
If that didn't work, then OpenCV doesn't have access to the (width, height) features of the camera you are using.
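A quick way to check is to request a size and read back what the driver actually accepted (a sketch; many webcams snap to the nearest supported resolution rather than the exact one you ask for):

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 224)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 224)

# read back the values the driver actually accepted
actual_w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
actual_h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print("requested 224x224, got {}x{}".format(actual_w, actual_h))

cap.release()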
This is quite simple, just use the cv2.resize(frame, size) function. Example:
import cv2

cam = cv2.VideoCapture(0)  # My camera is 640x480
size = (200, 200)  # The "size" parameter must be a tuple.

while True:
    frame = cam.read()[1]
    new_frame = cv2.resize(frame, size)  # Resizing the frame ...
    cv2.imshow('sla', new_frame)
    if cv2.waitKey(1) == ord("q"):
        break

cv2.destroyAllWindows()

How to resize output images in python?

Hi, I run this blur detection code in Python (source: https://www.pyimagesearch.com/2015/09/07/blur-detection-with-opencv/):
# import the necessary packages
from imutils import paths
import argparse
import cv2

def variance_of_laplacian(image):
    # compute the Laplacian of the image and then return the focus
    # measure, which is simply the variance of the Laplacian
    return cv2.Laplacian(image, cv2.CV_64F).var()

# loop over the input images
for imagePath in paths.list_images("images/"):
    # load the image, convert it to grayscale, and compute the
    # focus measure of the image using the Variance of Laplacian
    # method
    image = cv2.imread(imagePath)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    fm = variance_of_laplacian(gray)
    text = "Not Blurry"

    # if the focus measure is less than the supplied threshold,
    # then the image should be considered "blurry"
    if fm < 100:
        text = "Blurry"

    # show the image
    cv2.putText(image, "{}: {:.2f}".format(text, fm), (10, 30),
        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 3)
    cv2.imshow("Image", image)
    print("{}: {:.2f}".format(text, fm))
    key = cv2.waitKey(0)
with this 2173 x 3161 input file
input image
and this is the output show
the output image
The image is zoomed in and not shown in full.
In the source code, they use a 450 x 600 px input image:
input in source code
and this is the output :
output in source code
I think the size of the image influences the output. So how can I get output like the source code's output for all images?
Do I have to resize the input image? How? But if I do, I'm afraid it will affect the blur measurement.
Excerpt from the DOCUMENTATION.
There is a special case where you can already create a window and load image to it later. In that case, you can specify whether window is resizable or not. It is done with the function cv2.namedWindow(). By default, the flag is cv2.WINDOW_AUTOSIZE. But if you specify flag to be cv2.WINDOW_NORMAL, you can resize window. It will be helpful when image is too large in dimension and adding track bar to windows.
I just used the code placed in the question but added the line cv2.namedWindow("Image", cv2.WINDOW_NORMAL) as mentioned in the comments.
# import the necessary packages
from imutils import paths
import argparse
import cv2

def variance_of_laplacian(image):
    # compute the Laplacian of the image and then return the focus
    # measure, which is simply the variance of the Laplacian
    return cv2.Laplacian(image, cv2.CV_64F).var()

# loop over the input images
for imagePath in paths.list_images("images/"):
    # load the image, convert it to grayscale, and compute the
    # focus measure of the image using the Variance of Laplacian
    # method
    image = cv2.imread(imagePath)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    fm = variance_of_laplacian(gray)
    text = "Not Blurry"

    # if the focus measure is less than the supplied threshold,
    # then the image should be considered "blurry"
    if fm < 100:
        text = "Blurry"

    # show the image
    cv2.putText(image, "{}: {:.2f}".format(text, fm), (10, 30),
        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 3)
    cv2.namedWindow("Image", cv2.WINDOW_NORMAL)  #---- Added THIS line
    cv2.imshow("Image", image)
    print("{}: {:.2f}".format(text, fm))
    key = cv2.waitKey(0)
In case you want to use the exact same resolution as the example you've given, you can just use the cv2.resize() method (https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#resize) or, if you want to keep the x/y aspect ratio, use the imutils package from https://www.pyimagesearch.com/2015/02/02/just-open-sourced-personal-imutils-package-series-opencv-convenience-functions/
You still have to decide if you want to do the resizing first. It shouldn't matter in which order you greyscale or resize.
Command you can add:
resized_image = cv2.resize(image, (450, 600))
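If you would rather keep the aspect ratio, the imutils package linked above provides a convenience wrapper (a sketch, assuming imutils is installed; the path is hypothetical):

import cv2
import imutils

image = cv2.imread("images/example.jpg")  # hypothetical path
# resize to width 450; the height follows from the aspect ratio
resized_image = imutils.resize(image, width=450)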
