How to give video frame rate when appending to mp4 file - python

I'm trying to write a simple time-lapse desktop video recorder that can append to the same file over multiple sessions. The following code works, but it requires re-encoding with ffmpeg to set any fps other than the default 25. I want to avoid using cv2, so I'm going the long way around this problem.
So what would be a way to write the video with some sort of fps setting, without having to re-encode with ffmpeg?
import fpstimer
from typing import BinaryIO
from io import BytesIO
from PIL import Image, ImageGrab

timer = fpstimer.FPSTimer(0.2)

def video(chunk: bytes, file_handler: BinaryIO):
    file_handler.write(chunk)

while True:
    try:
        frame = ImageGrab.grab().convert('RGB')
        buf = BytesIO()
        frame.save(buf, format='PNG')
        byte_im = buf.getvalue()
        with open('output.mp4', "ab") as fh:
            video(byte_im, fh)
        timer.sleep()
    except KeyboardInterrupt:
        print('done')
        raise
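A note on the approach: concatenated PNG bytes do not form a valid MP4, and MP4 frame timing lives in the container, so some muxer is unavoidable for that format. If the output format is negotiable, though, Pillow alone can embed per-frame timing in an animated GIF. This is only a hedged sketch, not a drop-in fix: the frames, file name, and 200 ms delay are illustrative, and appending across sessions would still require re-reading the existing frames first.

```python
from PIL import Image

# Two placeholder frames; in the recorder these would be ImageGrab.grab() captures
frames = [Image.new("RGB", (64, 64), color=c) for c in ("red", "blue")]

# duration is the per-frame delay in milliseconds, i.e. 200 ms -> 5 fps
frames[0].save("timelapse.gif", save_all=True,
               append_images=frames[1:], duration=200, loop=0)

with Image.open("timelapse.gif") as gif:
    print(gif.n_frames)  # 2
```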


How to process data before downloading using st.download_button with on_click callback?

I have an app where my model returns its result as a np.ndarray, and I show it with st.image(result_matrix). I want to give my users the ability to download this image, but the problem is that I have to convert it to a PIL.Image and pass buffer.getvalue() as the input to the button. I can do this, but my users don't download very often, so to save computation power and load I don't want to convert EVERY result to a PIL.Image.
Is there any functionality where the data can be processed on demand, only when the user actually downloads it?
I tried the code below, but it gave me the obvious error that st.download_button doesn't accept a numpy array:
import streamlit as st
from PIL import Image
import numpy as np
from io import BytesIO

st.session_state['result'] = some_numpy_RGB_array

def process_image():
    img = Image.fromarray(st.session_state['result'])
    buffer = BytesIO()
    img.save(buffer, format="jpeg")
    st.session_state['result'] = buffer.getvalue()

_ = st.download_button(label="Download", data=st.session_state['result'],
                       file_name="image.jpeg", mime="image/jpeg",
                       on_click=process_image)
I'm only aware of the workaround given here:
import streamlit as st

def generate_report(repfn):
    with open(repfn, 'w') as f:
        f.write('Report')
    st.write('done report generation')

if st.button('generate report'):
    repfn = 'report.pdf'
    generate_report(repfn)
    with open(repfn, "rb") as f:
        st.download_button(
            label="Download report",
            data=f,
            file_name=repfn)
It's not ideal, because the user has to click two buttons: one to generate the (in your case) image, and a second one to actually download it. But I guess it's better than nothing.

Cover art size from a MP3 with Python

I'm trying to write a script that gives me the dimensions of the cover art in an mp3 file. The furthest I've got is via Mutagen, doing:
    audiofile = mutagen.File(wavefile, easy=False)
    print(audiofile.tags)
but from that raw output, how can I extract the dimensions, like 400x400?
You can use stagger and PIL, i.e.:
import stagger, io, traceback
from PIL import Image

try:
    mp3 = stagger.read_tag('Menuetto.mp3')
    im = Image.open(io.BytesIO(mp3[stagger.id3.APIC][0].data))
    print(im.size)
    # (300, 300)
    # im.save("cover.jpg")  # save cover to file
except Exception:
    print(traceback.format_exc())
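Since the question already uses Mutagen, here is a hedged sketch of the same idea without stagger. Only the Pillow part is exercised below; the Mutagen calls in the comment follow its ID3 API, and the helper name and file name ('Menuetto.mp3', from the question) are illustrative.

```python
import io
from PIL import Image

def cover_size(image_bytes: bytes):
    """Return (width, height) for raw cover-art bytes (an APIC frame's .data)."""
    return Image.open(io.BytesIO(image_bytes)).size

# With Mutagen, the APIC frames can be read from the ID3 tag directly:
#   from mutagen.id3 import ID3
#   for apic in ID3("Menuetto.mp3").getall("APIC"):
#       print(cover_size(apic.data))  # e.g. (400, 400)
```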

How to read Youtube live stream using openCV python?

I want to read a live stream from YouTube to perform some basic CV things. Probably we have to somehow strip the YouTube URL to convert it into a format that might be readable by openCV, like:
    cap = cv2.VideoCapture('https://www.youtube.com/watch?v=_9OBhtLA9Ig')
Has anyone done it?
I am sure you already know the answer by now, but I will answer for others searching the same topic. You can do this by using Pafy
(probably together with youtube_dl).
import pafy
import cv2

url = "https://www.youtube.com/watch?v=_9OBhtLA9Ig"
video = pafy.new(url)
best = video.getbest(preftype="mp4")
capture = cv2.VideoCapture(best.url)

while True:
    grabbed, frame = capture.read()
    # ...
And that should be it.
I've added YouTube URL source support in my VidGear Python Library, which automatically pipelines YouTube videos into OpenCV given only the URL. Here is a complete Python example:
# import libraries
from vidgear.gears import CamGear
import cv2

# YouTube video URL as input
stream = CamGear(source='https://youtu.be/dQw4w9WgXcQ', stream_mode=True, logging=True).start()

# infinite loop
while True:
    # read frames
    frame = stream.read()
    # check if frame is None
    if frame is None:
        # if True, break the infinite loop
        break
    # do something with frame here
    # show output window
    cv2.imshow("Output Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        # if 'q' key-pressed, break out
        break

# close output window
cv2.destroyAllWindows()
# safely close video stream
stream.stop()
Code Source
After 100-120 frames, the answer from @lee hannigan was crashing on me for a live stream on YouTube.
I worked out a method with Pafy to just grab x number of frames and splice them together. This ended up stitching the chunks together poorly, though, and gave choppy results. Pafy may not be designed for live streams; I couldn't find a way to stitch the frames together seamlessly.
What worked in the end is below, slightly modified from guttentag_liu's answer on this post. It takes a few more packages and is lengthy, but works. Because the stream is live, it has to be downloaded in chunks, hence saving to a temporary file. You could probably do your openCV work on each chunk, then save to a file at the end instead of re-opening it.
# pip install m3u8
# pip install streamlink
from datetime import datetime, timedelta, timezone
import time
import urllib.request
import m3u8
import streamlink
import cv2  # openCV

def get_stream(url):
    """
    Get upload chunk url
    input: youtube URL
    output: m3u8 object segment
    """
    # Try this line `tries` number of times; if it doesn't work,
    # show the exception on the last attempt
    # Credit, theherk, https://stackoverflow.com/questions/2083987/how-to-retry-after-exception
    tries = 10
    for i in range(tries):
        try:
            streams = streamlink.streams(url)
        except Exception:
            if i < tries - 1:  # i is zero indexed
                print(f"Attempt {i+1} of {tries}")
                time.sleep(0.1)  # brief pause to avoid overload
                continue
            else:
                raise
        break
    stream_url = streams["best"]  # Alternate: use '360p'
    m3u8_obj = m3u8.load(stream_url.args['url'])
    return m3u8_obj.segments[0]  # Parsed stream

def dl_stream(url, filename, chunks):
    """
    Download each chunk to file
    input: url, filename, and number of chunks (int)
    output: saves file at filename location
    returns None.
    """
    pre_time_stamp = datetime(1, 1, 1, 0, 0, tzinfo=timezone.utc)
    # Repeat for each chunk; needs to be in chunks because
    #  1) it's live
    #  2) it won't let you leave the stream open forever
    i = 1
    while i <= chunks:
        # Open stream
        stream_segment = get_stream(url)
        # Get current time on video
        cur_time_stamp = stream_segment.program_date_time
        # Only get the next time step; wait if it's not new yet
        if cur_time_stamp <= pre_time_stamp:
            # Don't increment the counter until we have a new chunk
            print("NO pre: ", pre_time_stamp, "curr:", cur_time_stamp)
            time.sleep(0.5)  # Wait half a sec
        else:
            print("YES: pre: ", pre_time_stamp, "curr:", cur_time_stamp)
            print(f'#{i} at time {cur_time_stamp}')
            # Open file for appending ('ab+' keeps adding to the file)
            with open(filename, 'ab+') as file:
                # Write stream chunk to file
                with urllib.request.urlopen(stream_segment.uri) as response:
                    file.write(response.read())
            # Update time stamp
            pre_time_stamp = cur_time_stamp
            time.sleep(stream_segment.duration)  # Wait the segment duration
            i += 1  # only increment if we got a new chunk
    return None

def openCVProcessing(saved_video_file):
    '''View saved video with openCV
    Add your other steps here'''
    capture = cv2.VideoCapture(saved_video_file)
    while capture.isOpened():
        grabbed, frame = capture.read()  # read in a single frame
        if grabbed == False:
            break
        # openCV processing goes here
        #
        cv2.imshow('frame', frame)  # Shown in a new window; to exit, push q
        if cv2.waitKey(20) & 0xFF == ord('q'):
            break
    capture.release()
    cv2.destroyAllWindows()  # close the windows automatically

tempFile = "temp.ts"  # files are format .ts; openCV can view them
videoURL = "https://www.youtube.com/watch?v=_9OBhtLA9Ig"
dl_stream(videoURL, tempFile, 3)
openCVProcessing(tempFile)
Probably because YouTube no longer provides the like/dislike counts, the first solution gives an error. As a workaround, comment out lines 53 and 54 of backend_youtube_dl.py in the pafy package; after that, the code in the first solution will work.
Secondly, you cannot get audio with OpenCV; it is a computer vision library, not a multimedia one. You should try other options for that.

Get Binary Representation of PIL Image Without Saving

I am writing an application that uses images intensively. It is composed of two parts. The client part is written in Python. It does some preprocessing on images and sends them over TCP to a Node.js server.
After preprocessing, the Image object looks like this:
window = img.crop((x,y,width+x,height+y))
window = window.resize((48,48),Image.ANTIALIAS)
To send that over socket, I have to have it in binary format. What I am doing now is:
window.save("window.jpg")
infile = open("window.jpg","rb")
encodedWindow = base64.b64encode(infile.read())
#Then send encodedWindow
This is a huge overhead, though, since I am saving the image to the hard disk first, then loading it again to obtain the binary format. This is causing my application to be extremely slow.
I read the documentation of PIL Image, but found nothing useful there.
According to the documentation (at effbot.org):
"You can use a file object instead of a filename. In this case, you must always specify the format. The file object must implement the seek, tell, and write methods, and be opened in binary mode."
This means you can pass a StringIO object. Write to it and get the size without ever hitting the disk.
Like this:
s = StringIO.StringIO()
window.save(s, "JPEG")  # PIL expects the format name "JPEG", not "jpg"
encodedWindow = base64.b64encode(s.getvalue())
Use BytesIO:
from io import BytesIO
from PIL import Image

photo = Image.open('photo.jpg')
s = BytesIO()
photo.save(s, 'jpeg')
data = s.getvalue()
with open('photo2.jpg', mode='wb') as f:
    f.write(data)
It's about the difference between an in-memory file-like object and a BufferedReader object.
Here is my experiment in Jupyter (Python 3.8.10):
from PIL import Image as PILImage, ImageOps as PILImageOps
from IPython.display import display, Image
from io import BytesIO
import base64
import requests

url = "https://learn.microsoft.com/en-us/archive/msdn-magazine/2018/april/images/mt846470.0418_mccaffreytrun_figure2_hires(en-us,msdn.10).png"

print("get computer-readable bytes from the url")
img_bytes = requests.get(url).content
print(type(img_bytes))
display(Image(img_bytes))

print("convert to in-memory file-like object")
in_memory_file_like_object = BytesIO(img_bytes)
print(type(in_memory_file_like_object))

print("convert to a PIL Image object for manipulating")
pil_img = PILImage.open(in_memory_file_like_object)
print("let's rotate it, and it remains a PIL Image object")
pil_img.show()
rotated_img = pil_img.rotate(45)
print(type(rotated_img))

print("let's create an in-memory file-like object and save the PIL Image object into it")
in_memory_file_like_object = BytesIO()
rotated_img.save(in_memory_file_like_object, 'png')
print(type(in_memory_file_like_object))

print("get computer-readable bytes")
img_bytes = in_memory_file_like_object.getvalue()
print(type(img_bytes))
display(Image(img_bytes))

print('convert to base64 to be transmitted over channels that do not preserve all 8-bits of data, such as email')
# https://stackoverflow.com/a/8909233/3552975
base_64 = base64.b64encode(img_bytes)
print(type(base_64))
# https://stackoverflow.com/a/45928164/3552975
assert base64.b64encode(base64.b64decode(base_64)) == base_64
In short, you can save a PIL Image object into an in-memory file-like object with rotated_img.save(in_memory_file_like_object, 'png') as shown above, and then convert the in-memory file-like object to base64.
from io import BytesIO
b = BytesIO()
img.save(b, format="png")
b.seek(0)
data = b.read()
del b

PIL- is possible to give PIL a binary object rather than a file handle?

I can get PIL to work with files: Image.open('example.jpg').
Is there a way of doing the same with a JPEG that's created in code, without writing that JPEG to the HDD, i.e. Image.open(binary_object)?
I've tried giving PIL the binary output of a function, and I have tried (but probably got the implementation wrong for) the parser interface: http://effbot.org/imagingbook/imagefile.htm
from PIL import Image

f = open("image.jpg", "rb")
f_data = f.read()

try:
    parser = Image.parser()
    parser.feed(f_data)
    parser.close()
    print "OK"
except:
    print "fail"
My HDD can't keep up!
Instead of writing to disk, write the image to memory with a StringIO object:

try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO

fake_file = StringIO()
fake_file.write(f_data)
fake_file.seek(0)  # rewind before reading

Now you can pass fake_file to Image.open() as a file handle.
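On Python 3 the same idea uses io.BytesIO. A minimal sketch; the JPEG bytes are generated in memory just to make the example self-contained, standing in for whatever your code produces:

```python
from io import BytesIO
from PIL import Image

# Produce some JPEG bytes in code (stands in for "a jpg that's created in code")
buf = BytesIO()
Image.new("RGB", (48, 48), color="red").save(buf, format="JPEG")
jpg_bytes = buf.getvalue()

# Open the in-memory bytes directly; no file on disk is involved
img = Image.open(BytesIO(jpg_bytes))
print(img.size)  # (48, 48)
```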
