Of late I have been trying different ways of telling my Raspberry Pi to send video to YouTube live stream. One of the things I wanted to be able to do is boot the Pi up, and it automatically starts the live stream on its own. The advantages to this are huge (won't have to tote around keyboard/mouse to start the stream, or have to ssh into the Pi to start the stream).
What I did to accomplish this was write a Python program that pipes the camera feed directly into my encoder (FFmpeg), which sends it to the stream. My goal was to make the program work and then set it to run automatically. But every time I run the file in my terminal this is my result:
Traceback (most recent call last):
File "stream.py", line 22, in <module>
stream.stdin.close()
NameError: name 'stream' is not defined
[h264 @ 0x19ed450] Could not find codec parameters for stream 0 (Video: h264, none): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
pi@raspberrypi:~ $ Input #0, h264, from 'pipe:':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264, none, 25 tbr, 1200k tbn, 50 tbc
Unknown input format: 'alsa'
Now I think I can fix some of those errors, but the biggest thing that worries me is the fact that "alsa" is unknown. I installed "libasound2", which is supposed to make ALSA usable, but that clearly did not help.
I am using Python 3.
This is my syntax for this program:
import subprocess
import picamera
import time

YOUTUBE = "rtmp://a.rtmp.youtube.com/live2/"
KEY = ("MY PERSONAL ENCODER KEY")
stream_cmd = 'ffmpeg -f h264 -r 25 -i - -itsoffset 5.5 -fflags nobuffer -f alsa -ac 1 -i hw:1,0 -vcodec copy -acodec aac -ac 1 -ar 8000 -ab 32k -map 0:0 -map 1:0 -strict experimental -f flv ' + YOUTUBE + KEY
stream_pipe = subprocess.Popen(stream_cmd, shell=True, stdin=subprocess.PIPE)
camera = picamera.PiCamera(resolution=(640, 480), framerate=25)
try:
    now = time.strftime("%Y-%m-%d-%H:%M:%S")
    camera.framerate = 25
    camera.vflip = True
    camera.hflip = True
    camera.start_recording(stream.stdin, format='h264', bitrate=2000000)
    while True:
        camera.wait_recording(1)
except KeyboardInterrupt:
    camera.stop_recording()
finally:
    camera.close()
    stream.stdin.close()
    stream.wait()
    print("Camera safely shut down")
    print("Good bye")
Now maybe I am missing something simple here, but I don't know what. I have tried many ideas (e.g. replacing ALSA with some other input, renaming the "stream" variable), but I have no idea.
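For context, this is the piping pattern I'm trying to use, sketched with `cat` standing in for the FFmpeg command so it runs anywhere (the frame bytes are made up):

```python
import subprocess

# `cat` stands in for the real ffmpeg command line in this sketch.
stream_pipe = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
stream_pipe.stdin.write(b"h264 frames would go here")
stream_pipe.stdin.close()   # note: same variable name that Popen was bound to
out = stream_pipe.stdout.read()
stream_pipe.wait()
print(out)
```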
I'm using a Raspberry Pi to live stream to YouTube. I use Python and this is my script.
import time
import os
key = 'YouTube'
loader = 'ffmpeg -f pulse -1 alsa output.platform-bcm2835 audio.analog-stereo.monitor f xllgrab -framerate 24 -video size 740x480-B etc....' + key
proc = os.system(loader)
process_id = proc.pid
print('Running', process_id)
time.sleep(60)
proc.terminate()
import sys
sys.exit("Error message")
The script works. However, when I try to terminate it, the stream keeps running even though the script finishes.
I also tried terminating the script's process_id, but that doesn't stop the stream either.
I use Python's subprocess module and it has never caused a problem on Linux (well, limited to pytest on GitHub Actions, to be honest):
import sys
import time
import subprocess as sp
import shlex

key = 'YouTube'
# this doesn't look like a valid FFmpeg command. Use the version which
# successfully streamed data
loader = 'ffmpeg -f pulse -1 alsa output.platform-bcm2835 audio.analog-stereo.monitor f xllgrab -framerate 24 -video size 740x480-B etc....' + key
proc = sp.Popen(shlex.split(loader))
print(f'Running {proc.pid}')
time.sleep(60)
proc.terminate()  # or proc.kill()
sys.exit("Error message")
If it is pulse or x11grab device refusing to terminate, you may need to 'kill' it instead.
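The terminate-and-reap pattern in isolation, with `sleep` standing in for the long-running FFmpeg process (a sketch, not the streaming command itself):

```python
import shlex
import subprocess
import time

# `sleep 60` plays the role of the long-running ffmpeg process here.
proc = subprocess.Popen(shlex.split("sleep 60"))
time.sleep(0.2)          # pretend we streamed for a while
proc.terminate()         # sends SIGTERM; proc.kill() sends SIGKILL
proc.wait(timeout=5)     # reap the child so it doesn't linger as a zombie
print(proc.returncode)   # negative value = terminated by that signal number
```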
I have a video downloaded from Telegram and I need to determine its bitrate.
I have moviepy (pip install moviepy, not the developer version).
Also, I have ffmpeg, but I don't know how to use it in python.
Also, any other library would work for me.
import os
import moviepy.editor as mp

video = mp.VideoFileClip('vid.mp4')
mp3 = video.audio
if mp3 is not None:
    mp3.write_audiofile("vid_audio.mp3")
    mp3_size = os.path.getsize("vid_audio.mp3")
    vid_size = os.path.getsize('vid.mp4')
    duration = video.duration
    bitrate = int((((vid_size - mp3_size) / duration) / 1024) * 8)
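As a sanity check on that unit conversion (bytes per second to kbit/s), plugging in made-up numbers:

```python
# Hypothetical figures: 6 MiB video file, 2 MiB extracted audio, 120 s duration.
vid_size = 6 * 1024 ** 2   # bytes
mp3_size = 2 * 1024 ** 2   # bytes
duration = 120             # seconds
bitrate = int((((vid_size - mp3_size) / duration) / 1024) * 8)  # kbit/s
print(bitrate)
```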
http://timivanov.ru/kak-uznat-bitrate-i-fps-video-ispolzuya-python-i-ffmpeg/
try this:
import ffmpeg  # the ffmpeg-python package

def get_bitrate(file):
    try:
        probe = ffmpeg.probe(file)
        video_stream = next(s for s in probe['streams'] if s['codec_type'] == 'video')
        bitrate = int(int(video_stream['bit_rate']) / 1000)
        return bitrate
    except Exception as er:
        return er
Here is a solution using FFprobe:
Execute ffprobe (command line tool) as sub-process and read the content of stdout.
Use the argument -print_format json for getting the output in JSON format.
For getting only the bit_rate entry, add argument -show_entries stream=bit_rate.
Convert the returned string to dictionary using dict = json.loads(data).
Get the bitrate from the dictionary and convert it to int: bit_rate = int(dict['streams'][0]['bit_rate']).
The code sample creates a sample video file for testing (using FFmpeg), and get the bitrate (using FFprobe):
import subprocess as sp
import shlex
import json
input_file_name = 'test.mp4'
# Build synthetic video for testing:
################################################################################
sp.run(shlex.split(f'ffmpeg -y -f lavfi -i testsrc=size=320x240:rate=30 -f lavfi -i sine=frequency=400 -f lavfi -i sine=frequency=1000 -filter_complex amerge -vcodec libx264 -crf 17 -pix_fmt yuv420p -acodec aac -ar 22050 -t 10 {input_file_name}'))
################################################################################
# Execute ffprobe (to get specific stream entries), and get the output in JSON format
data = sp.run(shlex.split(f'ffprobe -v error -select_streams v:0 -show_entries stream=bit_rate -print_format json {input_file_name}'), stdout=sp.PIPE).stdout
dict = json.loads(data) # Convert data from JSON string to dictionary
bit_rate = int(dict['streams'][0]['bit_rate']) # Get the bitrate.
print(f'bit_rate = {bit_rate}')
Notes:
For some video containers, like MKV, there is no bit_rate information, so a different solution is needed.
The code sample assumes that ffmpeg and ffprobe (command line tools) are in the execution path.
Solution for containers that have no bit_rate information (like MKV):
Based on the following post, we can sum the size of all the video packets.
We can also sum all the packets durations.
The average bitrate equals: total_size_in_bits / total_duration_in_seconds.
Here is a code sample for computing average bitrate for MKV video file:
import subprocess as sp
import shlex
import json
input_file_name = 'test.mkv'
# Build synthetic video for testing (MKV video container):
################################################################################
sp.run(shlex.split(f'ffmpeg -y -f lavfi -i testsrc=size=320x240:rate=30 -f lavfi -i sine=frequency=400 -f lavfi -i sine=frequency=1000 -filter_complex amerge -vcodec libx264 -crf 17 -pix_fmt yuv420p -acodec aac -ar 22050 -t 10 {input_file_name}'))
################################################################################
# https://superuser.com/questions/1106343/determine-video-bitrate-using-ffmpeg
# Compute the bitrate by summing the sizes and durations of all video packets.
data = sp.run(shlex.split(f'ffprobe -select_streams v:0 -show_entries packet=size,duration -of compact=p=0:nk=1 -print_format json {input_file_name}'), stdout=sp.PIPE).stdout
dict = json.loads(data) # Convert data from JSON string to dictionary
# Sum total packets size and total packets duration.
sum_packets_size = 0
sum_packets_duration = 0
for p in dict['packets']:
    sum_packets_size += float(p['size'])          # Sum all the packets sizes (in bytes)
    sum_packets_duration += float(p['duration'])  # Sum all the packets durations (in milliseconds)
# bitrate is the total_size / total_duration (multiply by 1000 because duration is in msec units, and by 8 for converting from bytes to bits).
bit_rate = (sum_packets_size / sum_packets_duration) * 8*1000
print(f'bit_rate = {bit_rate}')
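The same arithmetic on a hand-written packet list, to double-check the unit conversion (the numbers are invented, not real ffprobe output):

```python
# Three hypothetical packets: sizes in bytes, durations in milliseconds.
packets = [{"size": "1000", "duration": "40"},
           {"size": "1500", "duration": "40"},
           {"size": "500",  "duration": "40"}]

total_size = sum(float(p["size"]) for p in packets)          # 3000 bytes
total_duration = sum(float(p["duration"]) for p in packets)  # 120 ms
bit_rate = (total_size / total_duration) * 8 * 1000          # bits per second
print(bit_rate)
```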
I am trying to save audio clips (15 seconds per clip) from a live stream using the VLC library. I am unable to find any option that would let me record only 15 seconds from the live stream, so I ended up using a timer in my code, but the recorded clips sometimes contain 10 seconds, sometimes 20 seconds (rarely 15 seconds). Also, sometimes the audio content is repeated in the clips.
Here is the code (I am a newbie so please guide me)
Code.py
import os
import sys
import vlc
import time

clipNumber = sys.argv[1]
filepath = 'http://streamer64.eboundservices.com/geo/geonews_abr/playlist.m3u8'
movie = os.path.expanduser(filepath)
if 'http://' not in filepath:
    if not os.access(movie, os.R_OK):
        print('Error: %s file is not readable' % movie)
        sys.exit(1)
filename_and_command = "--sout=#transcode{vcodec=none,acodec=mp3,ab=320,channels=2,samplerate=44100}:file{dst=clip" + str(clipNumber) + ".mp3}"
# filename_and_command = "--sout=file/ts:clip" + str(clipNumber) + ".mp3"
instance = vlc.Instance(filename_and_command)
try:
    media = instance.media_new(movie)
except NameError:
    print('NameError: %s (%s vs LibVLC %s)' % (sys.exc_info()[1],
          vlc.__version__, vlc.libvlc_get_version()))
    sys.exit(1)
player = instance.media_player_new()
player.set_media(media)
player.play()
time.sleep(15)
exit()
Since I want to record 1 minute of the live stream, I invoke this Python code from a bash script 4 times, and it creates 4 audio clips (clip1.mp3, clip2.mp3, clip3.mp3 and clip4.mp3).
Script.sh
for ((i=1; i<=4; i++))
do
    printf "Recording stream #%d\n" "$i"
    python code.py "$i"
    printf "Finished stream #%d\n" "$i"
done
Is there any way to loop this code in Python instead of invoking it again and again from a bash script? (I tried putting the code in a loop in Python, but then the first clip, clip1, keeps recording and never finishes.) And is there a way to record exactly 15 seconds from the live stream instead of relying on time.sleep(15)?
If you just want to save the file, no need to use vlc. Here is a short procedure I use to do that:
from datetime import datetime, timedelta

def record(filepath, stream, duration):
    fd = open(filepath, 'wb')
    begin = datetime.now()
    duration = timedelta(milliseconds=duration)
    while datetime.now() - begin < duration:
        data = stream.read(10000)
        fd.write(data)
    fd.close()
Example of use, recording for one second:
from urllib.request import urlopen
record('clip.mp3', urlopen('http://streamer64.eboundservices.com/geo/geonews_abr/playlist.m3u8'), 1000)
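To get the four clips the question asks about without the bash wrapper, the same procedure can simply run in a Python loop. A self-contained sketch (a BytesIO stands in for the network stream so it runs offline; filenames are illustrative):

```python
import io
import os
from datetime import datetime, timedelta

def record(filepath, stream, duration):
    # Same idea as above: copy bytes from the stream for a fixed wall-clock duration.
    fd = open(filepath, 'wb')
    begin = datetime.now()
    duration = timedelta(milliseconds=duration)
    while datetime.now() - begin < duration:
        fd.write(stream.read(10000))
    fd.close()

# In real use, `stream` would be urlopen(<playlist URL>); here a BytesIO stands in.
for i in range(1, 5):
    record('clip%d.bin' % i, io.BytesIO(b'\x00' * 100_000), 50)

sizes = [os.path.getsize('clip%d.bin' % i) for i in range(1, 5)]
print(sizes)
```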
All of the work required can easily be done with FFmpeg as:
ffmpeg -i streamURL -vn -ac 2 -acodec aac -t 15 clip.aac
-vn for just recording the audio part (without video)
-t for specifying the duration of stream you want to record (15 sec here)
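If you'd rather drive that command from Python, building the argument list is straightforward (the URL and output name below are placeholders):

```python
import shlex

def build_clip_cmd(stream_url, out_path, seconds=15):
    """Assemble the audio-only FFmpeg recording command from above."""
    return shlex.split(
        f"ffmpeg -y -i {stream_url} -vn -ac 2 -acodec aac -t {seconds} {out_path}"
    )

cmd = build_clip_cmd("http://example.com/playlist.m3u8", "clip1.aac")
print(cmd)
# Run it with: subprocess.run(cmd, check=True)
```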
Hi, I'm attempting to capture a webcam stream with Python using the ffmpeg-python wrapper library (https://github.com/kkroening/ffmpeg-python).
I have a working ffmpeg command which is:
ffmpeg -f v4l2 -video_size 352x288 -i /dev/video0 -vf "drawtext='fontfile=fonts/FreeSerif.ttf: text=%{pts} : \
x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000#1'" -an -y -t 15 videotests/out_localtime8.mp4
This captures 15s of video in resolution 352x288, and writes a timestamp in the bottom centre of the video.
To play with the ffmpeg-python library, I'm simply attempting to only get the drawtext filter working, here is my script:
#!/usr/bin/env python
import ffmpeg
stream = ffmpeg.input('videotests/example.mov')
stream = ffmpeg.filter_(stream,'drawtext',("fontfile=fonts/FreeSerif.ttf:text=%{pts}"))
stream = ffmpeg.output(stream, 'videotests/output4.mp4')
ffmpeg.run(stream)
The error is
[Parsed_drawtext_0 # 0x561f59d494e0] Either text, a valid file or a timecode must be provided
[AVFilterGraph # 0x561f59d39080] Error initializing filter 'drawtext' with args 'fontfile\\\=fonts/FreeSerif.ttf\\\:text\\\=%{pts}'
Error initializing complex filters.
Invalid argument
The above appears to at least reach ffmpeg, but the format of the arguments is incorrect. How do I correct them?
Alternatively, when I attempt to split the argument to just pass one of them, I get a different error, as follows:
stream = ffmpeg.filter_(stream,'drawtext',('text=%{pts}'))
Error is
subprocess.CalledProcessError: Command '['ffmpeg', '-i', 'videotests/example.mov', '-filter_complex', "[0]drawtext=(\\\\\\\\\\\\\\'text\\\\\\\\\\\\=%{pts}\\\\\\\\\\\\\\'\\,)[s0]", '-map', '[s0]', 'videotests/output4.mp4']' returned non-zero exit status 1.
How come there are so many backslashes? Any advice on how to proceed please.
Thank you
I worked out the correct syntax eventually. Here is a working example
#!/usr/bin/env python
import ffmpeg
stream = ffmpeg.input('videotests/example.mov')
stream = ffmpeg.filter_(stream,'drawtext',fontfile="fonts/hack/Hack-Regular.ttf",text="%{pts}",box='1', boxcolor='0x00000000#1', fontcolor='white')
stream = ffmpeg.output(stream, 'videotests/output6.mp4')
ffmpeg.run(stream)
The syntax is
ffmpeg.filter_(<video stream>, '<filter name>', <filter_parameter_name>=value, ...)
Where necessary, quote the filter parameter values.
Hope this helps someone.
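As to why the positional form exploded into backslashes: ffmpeg-python escapes an entire positional string, including its `=` and `:` separators, while keyword arguments are joined into `key=value` pairs by the library itself. A rough illustration of that joining (a simplification of what the library does, not its actual code):

```python
def drawtext_spec(**params):
    # Join keyword arguments into the key=value:key=value form of a filter spec.
    body = ":".join(f"{k}={v}" for k, v in params.items())
    return f"drawtext={body}"

spec = drawtext_spec(fontfile="fonts/FreeSerif.ttf", text="%{pts}", fontcolor="white")
print(spec)
```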
step 1: set the environment variable for ffmpeg.
step 2: the code below captures an image as well as video using FFmpeg in Python, naming the files with the current date and time.
import subprocess
from datetime import datetime
import time

class Webcam:
    def Image(self):
        try:
            user = int(input("How many Images to be captured:"))
        except ValueError:
            print("\nPlease only use integers")
            return
        for i in range(user):
            subprocess.call("ffmpeg -f vfwcap -vstats_file c:/test/log" + datetime.now().strftime("_%Y%m%d_%H%M%S") + ".txt -t 10 -r 25 -i 0 c:/test/sample" + datetime.now().strftime("_%Y%m%d_%H%M%S") + ".jpg")
            time.sleep(3)

    def Video(self):
        try:
            user = int(input("How many videos to be captured:"))
        except ValueError:
            print("\nPlease only use integers")
            return
        for i in range(user):
            subprocess.call("ffmpeg -f vfwcap -vstats_file c:/test/log" + datetime.now().strftime("_%Y%m%d_%H%M%S") + ".txt -t 10 -r 25 -i 0 c:/test/sample" + datetime.now().strftime("_%Y%m%d_%H%M%S") + ".avi")
            time.sleep(5)

Web = Webcam()
print("press 1 to capture image")
print("Press 2 to capture video")
choose = int(input("Enter choice:"))
if choose == 1:
    Web.Image()
elif choose == 2:
    Web.Video()
else:
    print("wrong choice")
import subprocess : is used to call the FFmpeg command.
subprocess is a built-in module provided by Python.
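One small refinement worth noting: the code above calls datetime.now().strftime(...) twice per capture, so the log and media files can get different timestamps if the clock ticks over between calls. Capturing the stamp once avoids that (a sketch; the c:/test paths are just the ones used above):

```python
from datetime import datetime

def capture_names(prefix_log="c:/test/log", prefix_media="c:/test/sample", ext=".jpg"):
    # One strftime call, reused for both files, so the names always match.
    stamp = datetime.now().strftime("_%Y%m%d_%H%M%S")
    return prefix_log + stamp + ".txt", prefix_media + stamp + ext

log_name, media_name = capture_names()
print(log_name, media_name)
```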
I'm working on a music classification methodology with Scikit-learn, and the first step in that process is converting a music file to a numpy array.
After unsuccessfully trying to call ffmpeg from a python script, I decided to simply pipe the file in directly:
FFMPEG_BIN = "ffmpeg"
cwd = os.getcwd()
dcwd = cwd + "/temp"
if not os.path.exists(dcwd): os.makedirs(dcwd)
folder_path = sys.argv[1]
f = open("test.txt", "a")
for f in glob.glob(os.path.join(folder_path, "*.mp3")):
    ff = f.replace("./", "/")
    print("Name: " + ff)
    aa = cwd + ff
    command = [ FFMPEG_BIN,
                '-i', aa,
                '-f', 's16le',
                '-acodec', 'pcm_s16le',
                '-ar', '22000', # output will have 22000 Hz
                '-ac', '1',     # mono (set to '2' for stereo)
                '-']
    pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)
    raw_audio = pipe.proc.stdout.read(88200*4)
    audio_array = numpy.fromstring(raw_audio, dtype="int16")
    print(str(audio_array))
    f.write(audio_array + "\n")
The problem is, when I run the file, it starts ffmpeg and then does nothing:
[mp3 @ 0x1446540] Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from '/home/don/Code/Projects/MC/Music/Spaz.mp3':
Metadata:
title : Spaz
album : Seeing souns
artist : N*E*R*D
genre : Hip-Hop
encoder : Audiograbber 1.83.01, LAME dll 3.96, 320 Kbit/s, Joint Stereo, Normal quality
track : 5/12
date : 2008
Duration: 00:03:50.58, start: 0.000000, bitrate: 320 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p, 320 kb/s
Output #0, s16le, to 'pipe:':
Metadata:
title : Spaz
album : Seeing souns
artist : N*E*R*D
genre : Hip-Hop
date : 2008
track : 5/12
encoder : Lavf56.4.101
Stream #0:0: Audio: pcm_s16le, 22000 Hz, mono, s16, 352 kb/s
Metadata:
encoder : Lavc56.1.100 pcm_s16le
Stream mapping:
Stream #0:0 -> #0:0 (mp3 (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
It just sits there, hanging, for far longer than the song is. What am I doing wrong here?
I recommend pymedia, audioread or decoder.py. There are also pyffmpeg and similar modules for doing just what you want. Take a look at pypi.python.org.
Of course, these will not help you turn the data into a numpy array.
Anyway, this is how it is done crudely by piping to ffmpeg:
from subprocess import Popen, PIPE
import numpy as np

def decode(fname):
    # If you are on Windows, use the full path to ffmpeg.exe
    cmd = ["ffmpeg", "-i", fname, "-f", "wav", "-"]
    # On Windows, add creationflags=0x8000000 to prevent another console window jumping out
    p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
    data = p.communicate()[0]
    # Skip past the 'data' chunk tag and its 4-byte size field to reach the samples
    offset = data.find(b"data") + 8
    return np.frombuffer(data[offset:], np.int16)
This is how it should work for basic use. It works because the output of ffmpeg is 16-bit audio by default.
But if you mess around, you should know that numpy doesn't have int24, so you will be forced to do some bit manipulations and represent 24 bit audio as 32 bit audio. Just, don't use 24 bit, and the world is happy. :D
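The 24-bit workaround mentioned above can be sketched as follows: pack each 3-byte little-endian sample into an int32 and sign-extend from bit 23 (a minimal illustration on two invented samples, not production code):

```python
import numpy as np

# Two hypothetical 24-bit little-endian PCM samples, three bytes each.
raw = bytes([0x01, 0x00, 0x00,    # value 1
             0xFF, 0xFF, 0xFF])   # value -1 (two's complement)

b = np.frombuffer(raw, dtype=np.uint8).reshape(-1, 3).astype(np.int32)
samples = b[:, 0] | (b[:, 1] << 8) | (b[:, 2] << 16)
samples = (samples << 8) >> 8    # sign-extend from bit 23 to full 32 bits
print(samples.tolist())
```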
We may discuss refining the code in comments, if you need something more sophisticated.
Here's what I'm using: It uses pydub (which uses ffmpeg) and scipy.
Full setup (on Mac, may differ on other systems):
pip install scipy
pip install pydub
brew install ffmpeg  # Or probably "sudo apt-get install ffmpeg" on Linux
Then to read the mp3:
import tempfile
import os
import pydub
import scipy
import scipy.io.wavfile
def read_mp3(file_path, as_float=False):
    """
    Read an MP3 File into numpy data.
    :param file_path: String path to a file
    :param as_float: Cast data to float and normalize to [-1, 1]
    :return: Tuple(rate, data), where
        rate is an integer indicating samples/s
        data is an ndarray(n_samples, 2)[int16] if as_float = False
        otherwise ndarray(n_samples, 2)[float] in range [-1, 1]
    """
    path, ext = os.path.splitext(file_path)
    assert ext == '.mp3'
    mp3 = pydub.AudioSegment.from_mp3(file_path)
    _, path = tempfile.mkstemp()
    mp3.export(path, format="wav")
    rate, data = scipy.io.wavfile.read(path)
    os.remove(path)
    if as_float:
        data = data / (2 ** 15)
    return rate, data
Credit to James Thompson's blog