How to subtract two video frames in Python with OpenCV

I am trying to do something very simple: subtract a background image from a video for object tracking. I understood that images can simply be subtracted from one another as follows: img3 = img2 - img1. However, even when I start simple with one image, add a black line to it, and store it as img2, img3 will not just show the line. When I run the following code
import cv2
img1 = cv2.imread("img1.png")
img2 = cv2.imread("img2.png")
img3 = img2 - img1
cv2.imwrite("img3.png",img3)
with the img1 and img2 below:
I get the image on the left below, instead of the image on the right:
I want to use this method for background extraction in a video, e.g. where I have a bg image file that shows an empty scene and a video that shows the same scene with objects occasionally moving in and out of the screen. I use the following code, but I similarly get a B/W image instead of just the object visible without the scene.
import cv2
import numpy as np
from PIL import Image

bg = cv2.imread("bg.png")  # background image file (name assumed)
capture = cv2.VideoCapture("video.mov")
while True:
    f, frame = capture.read()
    if not f:
        break
    frame = cv2.GaussianBlur(frame, (15, 15), 0)
    frame = frame - bg
    cv2.imshow("window", frame)
    cv2.waitKey(30)  # give the window time to redraw
ps: I know about automatic background subtraction, but I have very good background files and very clear empty scenes with very obvious objects, so I thought this should easily work!
Update: I have just found out about the PIL ImageChops difference function, which gets what I want with two images, but it seems impossible to use with a video opened with OpenCV. Also, would it be possible to do ImageChops.difference(img1, img2) manually with numpy arrays?

The closest to the expected result you can get is with this code:
img3 = 255 - cv2.absdiff(img1,img2)
This code will give you this:
Note that using only cv2.absdiff(img1,img2) will give the opposite of this result, because this operation basically tells you what the difference between the two images is: if at some position there is no difference, the result (at this position) is 0. Also note that plain img2 - img1 on OpenCV's uint8 arrays wraps around on underflow (e.g. 10 - 20 gives 246), which is why the raw subtraction looks so noisy.
To achieve the "perfect result" (exactly what you expect) you need to apply some thresholding (or some other kind of filter that will erase the left part of the image).
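A minimal sketch of that pipeline applied to the video case, assuming a bg.png background file and a difference threshold of 30 (both the file name and the value are assumptions to tune). cv2.absdiff is essentially what PIL's ImageChops.difference computes, so this also answers the numpy question from the update:
import cv2
import numpy as np

bg = cv2.imread("bg.png")  # hypothetical empty-scene image
capture = cv2.VideoCapture("video.mov")
while True:
    ok, frame = capture.read()
    if not ok:
        break
    frame = cv2.GaussianBlur(frame, (15, 15), 0)
    # absdiff avoids uint8 wraparound; numpy equivalent:
    # np.abs(frame.astype(np.int16) - bg.astype(np.int16)).astype(np.uint8)
    diff = cv2.absdiff(frame, bg)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    cv2.imshow("object mask", mask)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()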

Related

How to crop images using OpenCV without knowing the exact coordinates?

I am trying to crop an image of a piece of card/paper or similar so that the card/paper is in focus. I tried the code below, but the problem is that it works only when the object in question is alone in the picture. If the background is blank with nothing else in it, the cropping is flawless; otherwise it does not work as expected.
I am attempting to create a system that crops different kinds of images, puts them through a classifier, and then extracts text from them.
import cv2
import numpy as np
filenames = "img.jpg"
img = cv2.imread(filenames)
blurred = cv2.blur(img, (3,3))
canny = cv2.Canny(blurred, 50, 200)
## find the non-zero min-max coords of canny
pts = np.argwhere(canny>0)
y1,x1 = pts.min(axis=0)
y2,x2 = pts.max(axis=0)
## crop the region
cropped = img[y1:y2, x1:x2]
filename_cropped = filenames.split('.')
filename_cropped[0] = filename_cropped[0] + '_cropped'
filename_cropped = '.'.join(filename_cropped)
cv2.imwrite(filename_cropped, cropped)
A sample image that works is
Something that does not work is
Can anyone help with this?
The first image works because the entire image besides your target is empty. Canny will give other results when there is more in the image.
If you are looking for those specific cards, I suggest you try some colour filtering first; you can filter for the blue/purple hue of the card (a rough sketch follows below).
Increasing the Canny thresholds could also work, but in this image you will still always find the hand as well unless you add some colour filtering.
You can also try Sobel edge detection. It will probably highlight the edges of the card pretty well. But then again, it will also show the hand, so you can't just take all the Sobel/Canny output. You need processing before it that isolates the card, or after it that can find the rectangular shape of the card in the Sobel/Canny result.
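A minimal sketch of the colour-filtering idea, assuming the card's hue falls in a blue/purple HSV band; the exact bounds are guesses you would tune for your images:
import cv2
import numpy as np

img = cv2.imread("img.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Hypothetical blue/purple hue range (OpenCV hue runs 0-179); tune for the card.
lower = np.array([100, 50, 50])
upper = np.array([150, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Crop to the bounding box of the masked region, as in the original code.
pts = np.argwhere(mask > 0)
if pts.size:
    y1, x1 = pts.min(axis=0)
    y2, x2 = pts.max(axis=0)
    cv2.imwrite("img_cropped.jpg", img[y1:y2, x1:x2])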

Image.open() gives a plain white image

I am trying to edit this image:
However, when I run
im = Image.open(filename)
im.show()
it outputs a completely plain white image of the same size. Why is Image.open() not working? How can I fix this? Is there another library I can use to get non-255 pixel values (the correct pixel array)?
Thanks,
Vinny
Image.open actually seems to work fine, as do getpixel, putpixel and save, so you can still load, edit and save the image.
The problem seems to be that the temp file the image is saved to for show is just plain white, so the image viewer shows just a white image. Your original image is 16-bit grayscale, but the temp image is saved as 8-bit grayscale.
My current theory is that there might actually be a bug in show where a 16-bit grayscale image is just "converted" to 8-bit grayscale by capping all pixel values at 255, resulting in an all-white temp image since all the pixel values in the original are above 30,000.
If you set a pixel to a value below 255 before calling show, that pixel shows correctly. Thus, assuming you want to enhance the contrast in the picture, you can open the picture, map the values to the range 0 to 255 (e.g. using numpy), and then use show.
from PIL import Image
import numpy as np

arr = np.array(Image.open("Rt5Ov.png"))
# Linearly rescale the 16-bit values to the range 0..255 before converting to 8 bit.
arr = (arr - arr.min()) * 255 // (arr.max() - arr.min())
img = Image.fromarray(arr.astype("uint8"))
img.show()
But as said before, since save seems to work as it should, you could also keep the 16-bit grayscale depth and just save the edited image instead of using show; a small sketch of that follows.
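A minimal sketch of that alternative, with a hypothetical edit and output name; PIL typically opens a 16-bit grayscale PNG in an integer mode such as "I;16", and save preserves that depth:
from PIL import Image

im = Image.open("Rt5Ov.png")      # 16-bit grayscale, typically mode "I;16"
px = im.getpixel((0, 0))          # pixel access sees the full 16-bit values
im.putpixel((0, 0), px // 2)      # hypothetical edit
im.save("Rt5Ov_edited.png")       # saving keeps the 16-bit depth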
You can also use the OpenCV library to load the image:
import cv2
from matplotlib import pyplot as plt

img = cv2.imread('image file')
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))  # OpenCV loads BGR; matplotlib expects RGB
plt.show()

Why are my images displaying as grey squares?

I'm writing a script in Python for my image processing class. It should read a directory for images and display them; eventually I will add code to perform Otsu thresholding on those images. I can get a reference image to display properly, including Otsu thresholding; however, I run into trouble when I attempt to display the remaining images in the directory. I am not sure my images are being read from the directory correctly, as I am trying to store them in an array. The output window displays grey squares whose dimensions correspond to the actual image resolutions, which suggests the images are being at least partly read correctly.
I've already attempted to isolate the image loading and display code into a separate file and run it. I was concerned that the successful processing of my sample image (which included a black/white binarization) was somehow affecting my image display later. This was not the case, as the separate script produced the same grey-square output.
Update:
I've managed to tweak the script below (not yet updated) to run almost correctly. By writing the full file path directly for each file, I can get the output to display correctly. Best I can tell, there is some issue with loading images into an array; a potential workaround for future testing is importing the file locations as a string array rather than loading the images into an array directly.
import cv2 as cv
import numpy as np
from PIL import Image
import glob
from matplotlib import pyplot as plot
import time

image = cv.imread('Fig ref.jpg')
image2 = cv.cvtColor(image, cv.COLOR_RGB2GRAY)
cv.imshow('Image', image)

# global thresholding
ret1, th1 = cv.threshold(image2, 127, 255, cv.THRESH_BINARY)
# Otsu's thresholding
ret2, th2 = cv.threshold(image2, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)
# Otsu's thresholding after Gaussian filtering
blur = cv.GaussianBlur(image2, (5, 5), 0)
ret3, th3 = cv.threshold(blur, 0, 255, cv.THRESH_BINARY + cv.THRESH_OTSU)

# plot all the images and their histograms
images = [image2, 0, th1,
          image2, 0, th2,
          blur, 0, th3]
titles = ['Original Noisy Image', 'Histogram', 'Global Thresholding (v=127)',
          'Original Noisy Image', 'Histogram', "Otsu's Thresholding",
          'Gaussian filtered Image', 'Histogram', "Otsu's Thresholding"]
for i in range(3):
    plot.subplot(3, 3, i*3+1), plot.imshow(images[i*3], 'gray')
    plot.title(titles[i*3]), plot.xticks([]), plot.yticks([])
    plot.subplot(3, 3, i*3+2), plot.hist(images[i*3].ravel(), 256)
    plot.title(titles[i*3+1]), plot.xticks([]), plot.yticks([])
    plot.subplot(3, 3, i*3+3), plot.imshow(images[i*3+2], 'gray')
    plot.title(titles[i*3+2]), plot.xticks([]), plot.yticks([])
plot.show()

# raw strings so the Windows backslashes are not treated as escapes
imageFolderPath = r'D:\Google Drive\Engineering\Senior Year\Image processing\Image processing group work'
imagePath = glob.glob(imageFolderPath + '/*.JPG')
im_array = np.array([np.array(Image.open(img).convert('RGB')) for img in imagePath])

temp = cv.imread(r"D:\Google Drive\Engineering\Senior Year\Image processing\Image processing group work\Fig ref.jpg")
cv.imshow('image', temp)
time.sleep(15)
for i in range(9):
    cv.imshow('Image', im_array[i])
    time.sleep(2)
In plot.subplot(3,3,i*3+3), plot.imshow(images[i*3+2],'gray'), the second argument to imshow forces the gray colour map. Get rid of it and you will get colour displays.
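A minimal sketch of that fix applied to the images loaded from the folder (it assumes the im_array built in the question; since those arrays came from PIL's convert('RGB'), matplotlib displays them in colour when no colour map is given):
from matplotlib import pyplot as plot

for i in range(min(9, len(im_array))):
    plot.subplot(3, 3, i + 1)
    plot.imshow(im_array[i])   # no 'gray' argument, so the RGB colours are kept
    plot.xticks([]), plot.yticks([])
plot.show()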

Aligning and stitching images based on defined feature using OpenCV

I would like to create a panoramic image by combining 2 images in which the same feature, a plus sign, appears.
I've used cv2.xfeatures2d.SIFT_create() to find keypoints in the image; however, it doesn't find the plus symbol very well. Is there some way I can improve this by making it search specifically for a plus-shaped feature?
import cv2

image1 = cv2.imread('example_image.png')
grey_image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)  # conversion missing in the original snippet

sift = cv2.xfeatures2d.SIFT_create()
kp = sift.detect(grey_image1, None)
kp_image = cv2.drawKeypoints(grey_image1, kp, None)

def showimage(image, name="No name given"):
    cv2.imshow(name, image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    return

showimage(kp_image)
The source image is here, the second image to align is here. Here is the resulting image from the code above. This is an example of the desired output, made using GIMP by manually aligning the two images (the second image will need to be transformed to fit properly).
NB I'm open to using other approaches outside of OpenCV/Python to solve this problem.
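One possibility, offered here only as a hedged sketch and not taken from the original thread: since the plus sign is a known, rigid shape, template matching may localise it more reliably than SIFT keypoints. This assumes you crop a small plus_template.png of the marker from one image; finding its location in both images gives the translation needed to align them:
import cv2

image1 = cv2.imread('example_image.png', cv2.IMREAD_GRAYSCALE)
template = cv2.imread('plus_template.png', cv2.IMREAD_GRAYSCALE)  # hypothetical crop of the plus sign

# Normalised cross-correlation; the best match is the maximum response.
res = cv2.matchTemplate(image1, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)
print("plus sign top-left corner near", max_loc, "score", max_val)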

OpenCV cv2.rectangle output binary image

I have been trying to draw a rectangle on a black image using cv2.rectangle. Here is my code (it is just a sample; in the actual code there is a loop, i.e. the values x2, y2, w2, h2 change in a loop):
heir = np.zeros((np.shape(image1)[0], np.shape(image1)[1]), np.uint8)
cv2.rectangle(heir, (x2, y2), (x2+w2, y2+h2), (255, 255, 0), 5)
cv2.imshow("img", heir)
cv2.waitKey()
It is giving the following output:
Why is the image like that? Why are the boundaries not just a line of width 5?
I have tried, but I am not able to figure it out.
Can't post this in a comment, but it's a negative answer: the same operations work for me on Windows / Python 2.7.8 / OpenCV 3.1:
import numpy as np
import cv2

heir = np.zeros((100, 200), np.uint8)
x2 = 10
y2 = 20
w2 = 30
h2 = 40
cv2.rectangle(heir, (x2, y2), (x2+w2, y2+h2), (255, 255, 0), 5)
cv2.imshow("img", heir)
cv2.waitKey()
Because you are loading the image to be tagged (i.e. to draw rectangles on) in grayscale, the colours of the rectangles/bounding boxes you add are converted to grayscale as well.
To fix the issue, open the image in "color" format. Since you didn't include that part of the code, here is the proposed solution:
tag_img = cv2.imread(MYIMAGE, 1)
Pay attention to the second parameter here, which is 1 and means "load the image as color". Read more about reading images here: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_image_display/py_image_display.html
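A minimal sketch of the difference (array sizes and coordinates are made up): on a single-channel image only the first element of the colour tuple is used, so (255, 255, 0) collapses to the grey value 255, while on a 3-channel image the rectangle keeps its colour:
import cv2
import numpy as np

gray = np.zeros((100, 200), np.uint8)       # 1 channel: rectangle drawn as grey value 255
color = np.zeros((100, 200, 3), np.uint8)   # 3 channels: rectangle drawn in cyan (BGR 255,255,0)

cv2.rectangle(gray, (10, 20), (40, 60), (255, 255, 0), 5)
cv2.rectangle(color, (10, 20), (40, 60), (255, 255, 0), 5)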
