Dominant RGB analysis video - python

I'm trying to find the most dominant RGB color in each frame of a video (made with piCamera) and save it to a CSV (or a matrix, anything I can use outside of the for loop). But every time I exit the for loop, only the last frame's RGB data has been saved.
I've tried with open("filename.csv", 'w') as f: but I got a blank file, and export.to_csv only saved the last RGB value. Does anyone know how to do this? Thanks in advance!
while True:
    _, frame = cap.read()
    # Check whether there are still frames to analyze
    if frame is not None:
        RGB = frame
        # Reshape the frame into a flat list of pixels
        shape = frame.shape
        RGBS = RGB.reshape((shape[0] * shape[1], 3))
        num_clusters = 1
        clusters = KMeans(n_clusters=num_clusters)
        clusters.fit(RGBS)
        # count the dominant colors and put them in "buckets"
        histogram = make_histogram(clusters)
        # then sort them, most-common first
        combined = zip(histogram, clusters.cluster_centers_)
        combined2 = sorted(combined, key=lambda x: x[0], reverse=True)
        # finally, we'll output a graphic showing the colors in order
        bars = []
        hsv_values = []
        rgb_values = []
        for index, rows in enumerate(combined2):
            bar, rgb, hsv = make_bar(100, 100, rows[1])
            #print(rgb)
            #rgb_values.append(rgb)
            #hsv_values.append(hsv)
            #bars.append(bar)
        key = cv2.waitKey(1)
        if key == 27:
            break
    # End the loop if there is no more frame to analyse
    else:
        break
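The usual pattern for this is to append each frame's result to a list that lives outside the loop and write the whole list once, after the loop has finished. A minimal sketch of that accumulate-then-write idea, assuming cap and KMeans as in the code above (the filename is just a placeholder, and with a single cluster its center is taken directly as the frame's dominant color, so make_histogram/make_bar are not needed here):

import csv
import cv2
from sklearn.cluster import KMeans

cap = cv2.VideoCapture("video.h264")   # hypothetical filename
dominant_colors = []                   # lives outside the loop; one row per frame

while True:
    _, frame = cap.read()
    if frame is None:
        break
    pixels = frame.reshape(-1, 3)
    # With a single cluster, its center is the dominant color of this frame
    center = KMeans(n_clusters=1).fit(pixels).cluster_centers_[0]
    dominant_colors.append(list(center))

cap.release()

# Write everything in one go, after the loop has finished
with open("dominant_colors.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["R", "G", "B"])   # channel order as delivered by the capture
    writer.writerows(dominant_colors)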

Related

Smoothly fade color image to grayscale based on input value between 0-1 for RGB Numpy Array

I am writing some code to display camera input on a 32*32 LED array.
My code to get the image and display it looks like this:
def start_cam(x, y):
    # Start the webcam
    webcam = cv2.VideoCapture(0)
    # Set frame rate to 45 frames per second
    frame_rate = 45
    # Loop 45 times per second
    while True:
        # Capture a frame from the webcam
        ret, frame = webcam.read()
        # Resize the frame to x by y pixels
        frame = cv2.resize(frame, (x, y))
        frame = sp_noise(frame, 0.85)
        # Get input orientation
        orientation = 0
        # Rotate the frame by 90 degrees based on user input
        if orientation == 1:
            frame = np.rot90(frame)
        elif orientation == 2:
            frame = np.rot90(frame, 2)
        elif orientation == 3:
            frame = np.rot90(frame, 3)
        # Initialize empty list to store RGB values
        rgb_list = []
        # Loop through each pixel in the frame
        for i in range(x):
            for j in range(y):
                # Get RGB values of each pixel
                r, g, b = frame[i, j]
                # Append RGB values to list
                rgb_list = rgb_list + [b, g, r]
        # Print the list of RGB values
        #print(rgb_list)
        rgb_out = []
        for i in rgb_list:
            rgb_out.append(gamma[i]//2)
        rgb_out = sp_noise(rgb_out, 0.2)
        temp_send(rgb_out, x, y)
I already have a function called sp_noise that adds salt-and-pepper static to the image based on a value between 0 and 1. I would like to make a second image-processing function that has the image go from fully colored at a value of 0 to fully gray at a value of 1.
How could I go about making a smooth grayscale function for my RGB NumPy array?
I wrote a function that simply computes both the gray and the color versions and averages them, weighting them by the input value, but that is incredibly inefficient and reduces my FPS to unusable levels.
To "make an image fully gray" is to desaturate an image; so removing the colors while retaining the hue and brightness of the pixels. You can:
First, convert your RGB image into HSL space. This will convert your (red, green, blue) pixel triplets into (hue, saturation, lightness) triplets, where "how much color a pixel has" is contained within the single value saturation.
For OpenCV you can use something like output = cv2.cvtColor(img, cv2.COLOR_RGB2HSV) or with cv2.COLOR_BGR2HSV depending on your input color
Simple example from GeeksforGeeks
OpenCV example
Then you can write a simple desaturation function. For example, a function to multiply the saturation of each pixel by your value of range [0,1]. This will make the image "fully gray" at 0, and "fully color" at 1.
(Optional) You can then convert the image back to RGB if necessary with the same function, but different flag: output = cv2.cvtColor(img, cv2.COLOR_HSV2RGB)
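A minimal sketch of that idea with OpenCV and NumPy, assuming a BGR frame as returned by cv2.VideoCapture (the function name desaturate is just illustrative, and the amount parameter follows the question's convention: 0 = full color, 1 = fully gray):

import cv2
import numpy as np

def desaturate(frame_bgr, amount):
    """amount = 0 -> original colors, amount = 1 -> fully gray."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] *= (1.0 - amount)               # scale only the saturation channel
    hsv = np.clip(hsv, 0, 255).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

Because this only scales the saturation channel, it avoids computing a separate grayscale copy and blending it with the color frame, which was the slow part of the weighted-average approach.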

Add LSB of image pixel into LSB of pixels of video frames

I am working on code to hide an image inside a video. To do so, I have broken the video down into its frames and extracted the RGB values of the pixels of each frame. I have also written Python code that extracts the RGB values of the image.
Now I want to embed the image inside the video. For this I want to use the LSB approach, in which the LSBs of the video-frame pixels are replaced by the LSBs of the image pixels. The RGB pixel values are in binary form (8 bits) and the last three bits are to be replaced.
I am not getting any insight into how to proceed. Also, I need a method by which I can extract the hidden image from the video afterwards.
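For reference, replacing the three least-significant bits of a single 8-bit value is just a mask-and-or; a tiny illustrative example (the values are made up):

pixel = 0b10110101                       # one channel of a video-frame pixel
secret = 0b011                           # three bits taken from the image
stego = (pixel & 0b11111000) | secret    # -> 0b10110011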
Python code that deals with video
import numpy as np
import cv2 as cv
from numpy import binary_repr
from PIL import Image

vidcap = cv.VideoCapture("video.mp4")
if not vidcap.isOpened():
    print("Cannot open")
    exit()
while True:
    # Capture frame-by-frame
    ret, frame = vidcap.read()
    # -------------------------------------------------------------> step 2 - split
    # if frame is read correctly, ret is True
    if not ret:
        print("Can't receive frame (stream end?). Exiting ...")
        break
    # Our operations on the frame come here
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    # Display the resulting frame
    cv.imshow('frame', gray)
    width, height, d = frame.shape
    print("reshaped...")
    row = int(width * height)
    newframe = frame.reshape(row, 3)  # changed to 2D
    newframe_list = newframe.tolist()
    # print(type(newframe_list))
    all_pixels = []  # empty list
    # print(newframe_list)
    for i in newframe_list:
        all_pixels.extend(i)
    for i in all_pixels:
        x = np.binary_repr(all_pixels[i], width=8)
        vidpix = x[5:]
        print(x)
    print("image pixels")
    # iterating over the pixels of the kiwi image
    def img_pix():
        img = Image.open("kiwi.jpg")
        pixels = img.load()  # this is not a list, nor is it list()'able
        w, h = img.size
        all_img_pixels = []
        for m in range(w):
            for n in range(h):
                cpixel = pixels[m, n]
                all_img_pixels.append(cpixel)
        for m in range(w):
            for n in range(3):
                z = np.binary_repr(all_img_pixels[m][n], width=8)
                imgpix = z[6:]
                print(z)
    img_pix()
    if cv.waitKey(1) == ord('q'):
        break
# When everything done, release the capture
vidcap.release()
cv.destroyAllWindows()
code dealing with image
import numpy as np
import cv2 as cv
from PIL import Image
from numpy import binary_repr

def img_pix():
    img = Image.open("kiwi.jpg")
    pixels = img.load()  # this is not a list, nor is it list()'able
    width, height = img.size
    all_img_pixels = []
    for m in range(width):
        for n in range(height):
            cpixel = pixels[m, n]
            all_img_pixels.append(cpixel)
    for m in range(width):
        for n in range(3):
            z = np.binary_repr(all_img_pixels[m][n], width=8)
            imgpix = z[5:]
            print(imgpix)

img_pix()
Word of caution
If you want to embed a secret in the LSBs of video frames, you need to save those frames in a lossless format, or the pixel values will be slightly modified and your secret destroyed. This is the same issue you'd have if you embedded a secret in an image and then saved it as JPEG.
First things first, do not load the pixels of the secret image; just load the bytestream of the image file itself. A compressed JPEG file may be, for example, 100 kB, i.e. ~100k bytes in total. The same image, at say 1000x1000, has 1 million pixels (x3 for the RGB of each pixel), i.e. ~3 MB of data, which would require roughly 30x the capacity to hide.
You haven't "extracted" the secret image and video pixels; you've merely printed them to the console. But if you do collect them in relevant lists/arrays, you can iterate over as many video-frame pixels as necessary until you've embedded all your secret bits. An example that embeds a small group of bits in every channel of every pixel of each video frame is shown below:
def load_secret(fname):
    with open(fname, 'rb') as f:
        data = f.read()
    return data

secret_bytes = load_secret("kiwi.jpg")
bits = []
# As it's assumed you'll be embedding 3 bits in each byte of the frame,
# we'll split each secret byte into three groups (2 + 3 + 3 bits).
for byte in secret_bytes:
    for k in range(6, -1, -3):
        bits.append((byte >> k) & 0x07)

# Now start reading your video frames and count how many
# bit groups you have embedded so far.
index = 0
while True:
    ret, frame = vidcap.read()
    if not ret:
        break
    if index < len(bits):
        # Assuming you embed in every channel of every pixel,
        # you can embed up to `width x height x 3` groups per frame.
        size = np.prod(frame.shape)
        bit_groups = np.array(bits[index:index+size], dtype=np.uint8)
        # Flatten the frame for quick embedding
        frame_flat = frame.flatten()
        # Embed as many bit groups as necessary
        frame_flat[:len(bit_groups)] = (frame_flat[:len(bit_groups)] & 0b11111000) | bit_groups
        # Reshape it back
        new_frame = np.reshape(frame_flat, frame.shape)
        index += size
    else:
        new_frame = frame
    # You can now write `new_frame` to a new video.
Extracting the secret is then a matter of iterating over the pixels of each video frame, extracting the 3 LSBs of each value, and stitching three groups back into a byte.
bits = []
while True:
    ret, frame = vidcap.read()
    if not ret:
        break
    flat_frame = frame.flatten()
    bits.extend(flat_frame & 0x07)
# You need to decide how many groups are enough to extract
bytestream = b''
for i in range(0, len(bits), 3):
    bytestream += bytes([(bits[i] << 6) | (bits[i+1] << 3) | bits[i+2]])
# `bytestream` should now be equal to `secret_bytes`
with open('extracted_kiwi.jpg', 'wb') as f:
    f.write(bytestream)

How to find value of each pixel in all the frames of a video using python

I have a video with dimensions of, say, 1280x720 and 166 frames. I want to track the value of the pixel at a given position over time. For example, at position (100, 100) I want the value of that pixel in all 166 frames, which gives me 166 values.
Likewise, I want the values of the pixels at every position in the frame, and then to fit a curve to the values of each pixel one by one afterwards.
This is the code I wrote, but it only obtains pixel values at the specified position.
cap = cv2.VideoCapture('video.mp4')
success, image = cap.read()
count = 0
while success:
    count += 1
    v = image[500, 700]
    s = sum(v) / len(v)
    print(" Count {} Pixel at (500, 700) - Value {} ".format(count, v))
    success, image = cap.read()
cap.release()
cv2.destroyAllWindows()
I also tried using the width and height of the frame, but it reads the positions in an arbitrary-looking order, and I want them stored in sequence because I want to plot a graph for each of them:
while success:
    count += 1
    for x in range(0, width):
        for y in range(0, height):
            print(image[x, y, z])
So what changes do I have to make in this code to get all the values?
cap = cv2.VideoCapture('video.mp4')
success, image = cap.read()
count = 0
while success:
    count += 1
    print("{}th image from the movie ".format(count))
    for x in range(0, 1280-1):
        for y in range(0, 720-1):
            v = image[y, x]
            print("Pixel at ({}, {}) - Value {} ".format(x, y, v))
    success, image = cap.read()
cap.release()
cv2.destroyAllWindows()
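A hedged sketch of one way to do this: read every frame once, stack them into a single NumPy array, and index by position afterwards. This assumes the whole video fits in memory (here roughly 166 x 720 x 1280 x 3 bytes, about 0.5 GB):

import cv2
import numpy as np

cap = cv2.VideoCapture('video.mp4')
frames = []
while True:
    success, image = cap.read()
    if not success:
        break
    frames.append(image)
cap.release()

video = np.stack(frames)              # shape: (num_frames, height, width, 3)

# The 166 values of the pixel at row 100, column 100, averaged over B, G, R
trace = video[:, 100, 100].mean(axis=1)
print(trace.shape)                    # (166,)

# The same trace for every position at once: shape (height, width, num_frames)
all_traces = video.mean(axis=3).transpose(1, 2, 0)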

Heatmap on Moving People

I'm writing code to visualize the motion paths and coverage of people walking in a street or shopping center, using a heatmap to show the places people find most interesting.
I grab the video frame and convert it to gray, then extract the height and width of the frame. After that I create a mask with a background-subtraction algorithm and apply it to the frame. Next I create an accumulator frame with np.zeros, add the background-subtraction output to the accumulator, convert the frame from 0-1 values to 0-255 with cv2.convertScaleAbs, then apply thresholding and COLORMAP_JET, and finally blend the thresholded result with the RGB color frame.
mask = cv2.createBackgroundSubtractorMOG2()
while True:
    ret, frame1 = vs.read()
    frame = cv2.cvtColor(frame1, cv2.COLOR_RGB2GRAY)
    (height, width) = frame.shape[:2]
    sub = mask.apply(frame, None, 0.01)
    accumulator = np.zeros((height, width), dtype=np.float)
    sub = sub + accumulator
    ab = cv2.convertScaleAbs(255 - np.array(sub, 'uint8'))
    ret, acc_thresh = cv2.threshold(ab, ab.mean(), 255, cv2.THRESH_TOZERO)
    acc_col = cv2.applyColorMap(acc_thresh, cv2.COLORMAP_JET)
    backg = cv2.addWeighted(np.array(acc_col, 'uint8'), 0.55, frame1, 0.55, 0)
    cv2.imshow("frame11", backg)
But I cannot get a correct result when I run my code. I want the color to change in the areas where people move a lot.
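For comparison, a minimal sketch of the usual motion-heatmap structure: the accumulator is created once before the loop and the foreground mask is added to it on every frame, so motion builds up over time (vs is assumed to be the same video source as in the code above):

import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2()
ret, frame1 = vs.read()
accumulator = np.zeros(frame1.shape[:2], dtype=np.float32)   # created once, outside the loop

while ret:
    gray = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    fg = subtractor.apply(gray, None, 0.01)
    accumulator += (fg > 0)                    # count motion per pixel over time
    # Normalize the accumulated counts to 0-255 for display
    heat = cv2.convertScaleAbs(accumulator, alpha=255.0 / max(accumulator.max(), 1))
    heat_col = cv2.applyColorMap(heat, cv2.COLORMAP_JET)
    overlay = cv2.addWeighted(heat_col, 0.55, frame1, 0.55, 0)
    cv2.imshow("heatmap", overlay)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    ret, frame1 = vs.read()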

Python + OpenCV: Mask image with RGB pixels from another image

I have an RGB video and a single keyframe from that video. In that keyframe, the user will apply a binary mask.
I want to create a mask of the video where pixels have values that exist in the keyframe's masked region.
In other words, I want to create a list of RGB pixel values that exist in the mask of the keyframe, and create a mask of all other frames on the condition that the pixel values exist within the list. Pixel values can be (0,0,0)-(255,255,255)
My current implementation, although technically correct, is extremely inefficient, and I imagine there must be something better.
count = 0
for x in sequence:
    img = cv2.imread(x)
    curr = np.zeros(img.shape[:2], dtype=np.uint16)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            tuple = (img[x][y][0], img[x][y][1], img[x][y][2])
            if tuple not in dict:
                dict[tuple] = count
                curr[x][y] = count
                count += 1
            else:
                curr[x][y] = dict[tuple]
    newsequence.append(curr)

#in another function, generate mask2, the mask of the keyframe
immask = cv2.bitwise_and(newsequence[keyframe], newsequence[keyframe], mask=mask2[index].astype('uint8'))
immask = [x for x in immask.flatten() if x != 0]
#for thresholding purposes (if at least 80% of pixels with that value are selected in the keyframe)
valcount = np.bincount(immask)
truecount = np.bincount(newsequence[keyframe].flatten())
frameset = set(immask)
framemask = list(frameset)
framemask = [x for x in framemask if (float(valcount[x]) / float(truecount[x])) > 0.8]
for frame in range(0, numframes):
    for val in framemask:
        mask[frame] = np.where((newsequence[frame] == val), 255, 0).astype('uint8')
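A hedged sketch of a more vectorized approach: pack each RGB pixel into a single integer, collect the set of packed values inside the keyframe's mask, and test every frame against that set with np.isin. Here keyframe_img, keyframe_mask, and frames are hypothetical names for the arrays built above, and the 80% thresholding step is omitted for brevity:

import cv2
import numpy as np

def pack_rgb(img):
    """Pack each 3-channel pixel into one uint32 so pixels compare as single values."""
    img = img.astype(np.uint32)
    return (img[..., 0] << 16) | (img[..., 1] << 8) | img[..., 2]

# Packed pixel values that occur inside the keyframe's binary mask
key_packed = pack_rgb(keyframe_img)
allowed = np.unique(key_packed[keyframe_mask > 0])

masks = []
for frame in frames:
    packed = pack_rgb(frame)
    # 255 where the pixel value was seen in the keyframe's masked region, else 0
    frame_mask = np.isin(packed, allowed).astype(np.uint8) * 255
    masks.append(frame_mask)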
