My objective is to split an H.264 stream into multiple parts: while reading the stream from a pipe, I would like to save it into chunks x seconds long (10 in my case).
I am using a libcamera-vid subprocess on my Raspberry Pi that outputs the H.264 stream to stdout.
This might be irrelevant: libcamera-vid outputs a status message for every frame, and I detect those messages as isFrameStopLine in the code below.
To convert the stream, I use an ffmpeg subprocess, as you can see in the code below.
Imagine it like this:
Stream is running...
- Start recording to a file
- Sleep x seconds
- Finish recording to file
- Start recording a new file
- Sleep x seconds
- Finish recording the new file
- and so on...
Here is my current code. Upon running it, the first export succeeds, but after the second or third the ffmpeg subprocess terminates with the error:
pipe:: Invalid data found when processing input
And shortly after, the Python process terminates as well, because of the ffmpeg termination, I believe:
Traceback (most recent call last):
  File "/home/survpi-camera/main.py", line 56, in <module>
    processStreamLine(readData)
  File "/home/survpi-camera/main.py", line 16, in processStreamLine
    streamInfo["process"].stdin.write(data)
BrokenPipeError: [Errno 32] Broken pipe
import subprocess
import time

recentStreamProcesses = []
streamInfo = {
    "lastStreamStart": -1,
    "process": None
}

def processStreamLine(data):
    isInfoLine = ((data.startswith(b"[") and (b"INFO" in data)) or (data == b"Preview window unavailable"))
    isFrameStopLine = (data.startswith(b"#") and (b" fps) exp" in data))
    if ((not isInfoLine) and (not isFrameStopLine)):
        streamInfo["process"].stdin.write(data)
    if (isFrameStopLine):
        if (time.time() - streamInfo["lastStreamStart"] >= 10):
            print("10 seconds passed, exporting...")
            exportStream()
            createNewStream()

def createNewStream():
    streamInfo["lastStreamStart"] = time.time()
    streamInfo["process"] = subprocess.Popen([
        "ffmpeg",
        "-r", "30",
        "-i", "-",
        "-c", "copy", ("/home/survpi-camera/" + str(round(time.time())) + ".mp4")
    ], stdin=subprocess.PIPE, stderr=subprocess.STDOUT)
    print("Created new streamProcess.")

def exportStream():
    print("Exporting...")
    streamInfo["process"].stdin.close()
    recentStreamProcesses.append(streamInfo["process"])

cameraProcess = subprocess.Popen([
    "libcamera-vid",
    "-t", "0",
    "--width", "1920",
    "--height", "1080",
    "--codec", "h264",
    "--inline",
    "--listen",
    "-o", "-"
], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT)

createNewStream()

while True:
    readData = cameraProcess.stdout.readline()
    processStreamLine(readData)
Thank you in advance!
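A possible alternative worth noting (not from the original post): ffmpeg's own segment muxer can cut a stream into fixed-length files from a single long-lived process, which avoids closing and reopening pipes mid-stream entirely. A minimal sketch; the paths and the output filename pattern are illustrative:

import subprocess

# libcamera-vid writes the raw H.264 to stdout; its status messages appear
# on stderr (the post above merges them into stdout), so stderr is left
# alone here rather than mixed into the video stream.
camera = subprocess.Popen(
    ["libcamera-vid", "-t", "0", "--width", "1920", "--height", "1080",
     "--codec", "h264", "--inline", "-o", "-"],
    stdout=subprocess.PIPE,
)

# One long-lived ffmpeg process splits the stream into ~10 s files;
# -f segment starts a new output file at the first keyframe after each
# segment_time boundary.
ffmpeg = subprocess.Popen(
    ["ffmpeg", "-r", "30", "-i", "-", "-c", "copy",
     "-f", "segment", "-segment_time", "10", "-reset_timestamps", "1",
     "/home/survpi-camera/clip%03d.mp4"],
    stdin=camera.stdout,
)
ffmpeg.wait()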
I'm building an icecast2 radio station which will restream existing stations in lower quality. This program will generate multiple FFmpeg processes restreaming 24/7. For troubleshooting purposes, I would like to have the output of every FFmpeg process redirected to a separate file.
import csv
import ffmpeg
from threading import Thread

def run(name, mount, source):
    icecast = "icecast://" + ICECAST2_USER + ":" + ICECAST2_PASS + "@localhost:" + ICECAST2_PORT + "/" + mount
    stream = (
        ffmpeg
        .input(source)
        .output(
            icecast,
            audio_bitrate=BITRATE, sample_rate=SAMPLE_RATE, format=FORMAT, acodec=CODEC,
            reconnect="1", reconnect_streamed="1", reconnect_at_eof="1", reconnect_delay_max="120",
            ice_name=name, ice_genre=source
        )
    )
    return stream

with open('stations.csv', mode='r') as data:
    for station in csv.DictReader(data):
        stream = run(station['name'], station['mount'], station['url'])
        thread = Thread(target=stream.run)
        thread.start()
As I understand it, I can't redirect stdout of each thread separately, and I can't use FFmpeg's built-in reporting, which can only be configured via an environment variable. Do I have any other options?
You need to create a thread function of your own:
import subprocess as sp

def stream_runner(stream, id):
    # open a stream-specific log file to write to
    with open(f'stream_{id}.log', 'wt') as f:
        # block until ffmpeg is done
        sp.run(stream.compile(), stderr=f)
for i, station in enumerate(csv.DictReader(data)):
    stream = run(station['name'], station['mount'], station['url'])
    thread = Thread(target=stream_runner, args=(stream, i))
    thread.start()
Something like this should work.
ffmpeg-python doesn't quite give you the tools to do this: you want to control one of the arguments to subprocess, stderr, but ffmpeg-python doesn't expose an argument for it.
However, what ffmpeg-python does have is the ability to show the command-line arguments it would have used; you can make your own call to subprocess with those.
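For instance, a minimal sketch of that idea (assuming `stream` is the ffmpeg-python stream object built by the run() function above; the log file name is illustrative):

import subprocess

# compile() returns the argv list that ffmpeg-python would have executed,
# so we can launch it ourselves and point stderr at any file we like.
args = stream.compile()
with open('station.log', 'wb') as logfile:
    proc = subprocess.Popen(args, stderr=logfile)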
You also don't need threads to do this: you can set up each ffmpeg subprocess without waiting for it to complete, and check in on it each second. The example below starts two ffmpeg instances in parallel and monitors each one, printing the most recent line of output from each every second and tracking whether they've exited.
I made two changes for testing:
It gets the stations from a dictionary rather than a CSV file.
It transcodes an MP4 file rather than an audio stream, since I don't have an icecast server. If you want to test it, it expects to have a file named 'sample.mp4' in the same directory.
Both should be pretty easy to change back.
import os
import subprocess
import time

import ffmpeg

stations = [
    {'name': 'foo1', 'input': 'sample.mp4', 'output': 'output.mp4'},
    {'name': 'foo2', 'input': 'sample.mp4', 'output': 'output2.mp4'},
]

class Transcoder():
    def __init__(self, arguments):
        self.arguments = arguments

    def run(self):
        stream = (
            ffmpeg
            .input(self.arguments['input'])
            .output(self.arguments['output'])
        )
        args = stream.compile(overwrite_output=True)
        with open(self.log_name(), 'ab') as logfile:
            self.subproc = subprocess.Popen(
                args,
                stdin=None,
                stdout=None,
                stderr=logfile,
            )

    def log_name(self):
        return self.arguments['name'] + "-ffmpeg.log"

    def still_running(self):
        return self.subproc.poll() is None

    def last_log_line(self):
        with open(self.log_name(), 'rb') as f:
            try:  # catch OSError in case of a one-line file
                f.seek(-2, os.SEEK_END)
                while f.read(1) not in [b'\n', b'\r']:
                    f.seek(-2, os.SEEK_CUR)
            except OSError:
                f.seek(0)
            last_line = f.readline().decode()
        last_line = last_line.split('\n')[-1]
        return last_line

    def name(self):
        return self.arguments['name']

transcoders = []
for station in stations:
    t = Transcoder(station)
    t.run()
    transcoders.append(t)

while True:
    for t in list(transcoders):
        if not t.still_running():
            print(f"{t.name()} has exited")
            transcoders.remove(t)
        print(t.name(), repr(t.last_log_line()))
    if len(transcoders) == 0:
        break
    time.sleep(1)
I'm French, sorry if my English isn't perfect!
Before starting: if you want to try my code, you can download a pcap sample file here: https://wiki.wireshark.org/SampleCaptures?action=AttachFile&do=get&target=ipv4frags.pcap
I succeeded in opening a pcap file, reading packets, and writing them to another file with this code:
# Python 3.6
# Scapy 2.4.3
from scapy.utils import PcapReader, PcapWriter
import time

i_pcap_filepath = "inputfile.pcap"  # pcap to read
o_filepath = "outputfile.pcap"      # pcap to write

i_open_file = PcapReader(i_pcap_filepath)          # opened file to read
o_open_file = PcapWriter(o_filepath, append=True)  # opened file to write

while 1:
    # I will have an EOF exception, but anyway
    time.sleep(1)                        # in order to see the packets arrive
    packet = i_open_file.read_packet()   # read a packet from the file
    o_open_file.write(packet)            # write it
So now I want to write to a FIFO and see the result in a live Wireshark window.
To do that, I just create a FIFO:
$ mkfifo /my/project/location/fifo.fifo
and launch the Wireshark application on it: $ wireshark -k -i /my/project/location/fifo.fifo
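(Side note, not from the original post: the FIFO can also be created from Python itself with os.mkfifo, which has the same effect as the mkfifo shell command:)

import os

fifo_path = "/my/project/location/fifo.fifo"
if not os.path.exists(fifo_path):
    os.mkfifo(fifo_path)  # same effect as running `mkfifo` in the shell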
I change the file path in my Python script: o_filepath = "fifo.fifo" # fifo to write
But I get a crash... Here is the traceback:
Traceback (most recent call last):
File "fifo.py", line 25, in <module>
o_open_file = PcapWriter(o_pcap_filepath, append=True)
File "/home/localuser/.local/lib/python3.6/site-packages/scapy/utils.py", line 1264, in __init__
self.f = [open, gzip.open][gz](filename, append and "ab" or "wb", gz and 9 or bufsz) # noqa: E501
OSError: [Errno 29] Illegal seek
Wireshark also gives me an error: "End of file on pipe magic during open".
I don't understand why, or what to do. Is it not possible to write to a FIFO using the scapy.utils library? How should I do it then?
Thank you for your support,
Nicos44k
The night was useful, because I fixed my issue this morning!
I didn't understand the traceback yesterday, but it actually gave me a big hint: we have a seek problem.
Wait... there is no seek in a FIFO file!
So we cannot set the "append" parameter to True.
I changed it to: o_open_file = PcapWriter(o_filepath)
And the error is gone.
However, packets were not showing up live...
To solve this problem, I needed to force a FIFO flush with: o_open_file.flush()
Remember that you can download a pcap sample file here: https://wiki.wireshark.org/SampleCaptures?action=AttachFile&do=get&target=ipv4frags.pcap
So here is the full code:
# Python 3.6
# Scapy 2.4.3
from scapy.utils import PcapReader, PcapWriter
import time

i_pcap_filepath = "inputfile.pcap"  # pcap to read
o_filepath = "fifo.fifo"            # fifo to write

i_open_file = PcapReader(i_pcap_filepath)  # opened file to read
o_open_file = PcapWriter(o_filepath)       # opened fifo to write

while 1:
    # I will have an EOF exception, but anyway
    time.sleep(1)                        # in order to see the packets arrive
    packet = i_open_file.read_packet()   # read a packet from the file
    o_open_file.write(packet)            # write it
    o_open_file.flush()                  # force buffered data to be written to the fifo
Have a good day!
Nicos44k
I am trying to save audio clips (15 seconds per clip) from a live stream using the VLC library. I can't find an option that would let me record only 15 seconds of the live stream, so I ended up using a timer in my code, but the recorded clips sometimes contain 10 seconds, sometimes 20 (rarely 15). Also, the audio content is sometimes repeated within a clip.
Here is the code (I am a newbie, so please guide me):
Code.py
import os
import sys
import time

import vlc

clipNumber = sys.argv[1]
filepath = 'http://streamer64.eboundservices.com/geo/geonews_abr/playlist.m3u8'
movie = os.path.expanduser(filepath)
if 'http://' not in filepath:
    if not os.access(movie, os.R_OK):
        print('Error: %s file is not readable' % movie)
        sys.exit(1)

filename_and_command = "--sout=#transcode{vcodec=none,acodec=mp3,ab=320,channels=2,samplerate=44100}:file{dst=clip" + str(clipNumber) + ".mp3}"
# filename_and_command = "--sout=file/ts:clip" + str(clipNumber) + ".mp3"

instance = vlc.Instance(filename_and_command)
try:
    media = instance.media_new(movie)
except NameError:
    print('NameError: %s (%s vs LibVLC %s)' % (sys.exc_info()[1],
          vlc.__version__, vlc.libvlc_get_version()))
    sys.exit(1)
player = instance.media_player_new()
player.set_media(media)
player.play()

time.sleep(15)
exit()
Since I want to record 1 minute of the live stream, I invoke this Python code from a bash script 4 times, which creates 4 audio clips (clip1.mp3, clip2.mp3, clip3.mp3 and clip4.mp3).
Script.sh
for ((i=1; i<=4; i++))
do
    printf "Recording stream #%d\n" "$i"
    python code.py "$i"
    printf "Finished stream #%d\n" "$i"
done
Is there any way to just loop the code in Python instead of invoking it again and again from the bash script? (I tried to put the code in a loop in Python, but the first clip, clip1, keeps recording and never finishes.) And is there a way to specify that I want to record only 15 seconds of the live stream, instead of using time.sleep(15)?
If you just want to save the file, no need to use vlc. Here is a short procedure I use to do that:
from datetime import datetime, timedelta

def record(filepath, stream, duration):
    fd = open(filepath, 'wb')
    begin = datetime.now()
    duration = timedelta(milliseconds=duration)
    while datetime.now() - begin < duration:
        data = stream.read(10000)
        fd.write(data)
    fd.close()
Example of use, to record for one second:
from urllib.request import urlopen
record('clip.mp3', urlopen('http://streamer64.eboundservices.com/geo/geonews_abr/playlist.m3u8'), 1000)
All of the work required can easily be done with FFmpeg:
ffmpeg -i streamURL -vn -ac 2 -acodec aac -t 15 clip.aac
-vn for just recording the audio part (without video)
-t for specifying the duration of stream you want to record (15 sec here)
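If you would rather drive the recording loop from Python instead of the bash script, here is a minimal sketch (assuming Python 3.5+ for subprocess.run; the stream URL is the one from the question, and the clip names are illustrative):

import subprocess

stream_url = "http://streamer64.eboundservices.com/geo/geonews_abr/playlist.m3u8"

for i in range(1, 5):
    print("Recording stream #%d" % i)
    # -vn drops the video, -t 15 stops writing after 15 seconds of output
    subprocess.run([
        "ffmpeg", "-y", "-i", stream_url,
        "-vn", "-ac", "2", "-acodec", "aac", "-t", "15",
        "clip%d.aac" % i,
    ], check=True)
    print("Finished stream #%d" % i)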
My Raspberry Pi is connected to a microcontroller over the serial pins. I am trying to read the data from the serial port. The script reads data for a few seconds, but then terminates, throwing the following exception:
serial.serialutil.SerialException: device reports readiness to read but returned no data (device disconnected?)
I have used the following Python code:
#!/usr/bin/python
import serial
import time

serialport = serial.Serial("/dev/ttyAMA0", 115200, timeout=.5)
while 1:
    response = serialport.readlines(None)
    print response
    time.sleep(.05)
serialport.close()
Here is the code you should be using if you are seriously trying to just transfer and print a file:
for line in serialport.readlines():
    print line
------------------------------------------------------------
I believe you are having problems because you are using readlines(None) instead of readline(). readline() reads a line at a time, and will wait for each one. Reading a whole file that way is slower than readlines(), but readlines() expects the whole file at once, and it is obviously not waiting for your serial transfer speed.
--------------------------------------------------
My data-logging loop receives a line every two minutes and writes it to a file. It could easily just print each line as you show in the OP.
readline() waits for each line. I have tested it waiting up to 30 minutes between lines with no problems, by altering the program on the Nano.
import datetime
import serial

ser = serial.Serial("/dev/ttyUSB0", 9600)  # /dev/ttyACM0 is fine too
tab = '\t'

while True:
    linein = ser.readline()
    date = str(datetime.datetime.now().date())
    date = date[:10]
    time = str(datetime.datetime.now().time())
    time = time[:8]
    outline = date + tab + time + tab + linein
    f = open("/home/pi/python/today.dat", "a")
    f.write(outline)
    f.close()
Maybe changing to this approach would be better for you.
This is my log:
File "/opt/ibm/db2-governor/helpers/utils.py", line 10, in run_cmd
output = proc.communicate(timeout = timeout)[0]
File "/opt/ibm/dynamite/python/lib/python2.7/site-packages/subprocess32.py", line 927, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "/opt/ibm/dynamite/python/lib/python2.7/site-packages/subprocess32.py", line 1713, in _communicate
orig_timeout)
File "/opt/ibm/dynamite/python/lib/python2.7/site-packages/subprocess32.py", line 1786, in _communicate_with_poll
ready = poller.poll(self._remaining_time(endtime))
OverflowError: Python int too large to convert to C long
The code that triggers this is:
output = proc.communicate(timeout = timeout)[0]
timeout is set to 20. This happens intermittently (almost never, but it happens). I'm using Python 2.7.11 with the subprocess32 library. Is this a Python bug?
OK, I checked subprocess32.py; the lines go like this:
endtime = time.time() + timeout
ready = poller.poll(self._remaining_time(endtime))
So basically the timestamp is too large to convert into a C long. Is there anything I can do to resolve this?
Sounds like a bug all right.
If you're interested, here's a workaround proposal: instead of communicate(), read from the process's stdout in a thread, and consider the process over when there is either nothing more to read or a return code yielded through poll().
Since you control the loop, you can wait 1 second per iteration in the main thread and count down the timeout (not extra accurate, since sleep can drift, but good enough and simple), killing the process when the countdown reaches 0.
import threading
import time

output = ""

def subp(p):
    global output
    while True:
        # read blocks, but since we're in a thread it doesn't matter
        data = p.stdout.read(1024)
        if not data or p.poll() is not None:
            break
        output += data

# here create the process (stdout must be a pipe for the thread to read)
proc = subprocess...

# create a thread, pass the process handle, and start it
t = threading.Thread(target=subp, args=(proc,))
t.start()

while True:
    if proc.poll() is not None:
        # exit: OK
        break
    timeout -= 1
    if timeout < 0:
        # took too long: kill
        proc.terminate()
        break
    time.sleep(1)

t.join()